Nov 29 01:17:07 np0005539564 kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 29 01:17:07 np0005539564 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 29 01:17:07 np0005539564 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 01:17:07 np0005539564 kernel: BIOS-provided physical RAM map:
Nov 29 01:17:07 np0005539564 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 29 01:17:07 np0005539564 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 29 01:17:07 np0005539564 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 29 01:17:07 np0005539564 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 29 01:17:07 np0005539564 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 29 01:17:07 np0005539564 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 29 01:17:07 np0005539564 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 29 01:17:07 np0005539564 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 29 01:17:07 np0005539564 kernel: NX (Execute Disable) protection: active
Nov 29 01:17:07 np0005539564 kernel: APIC: Static calls initialized
Nov 29 01:17:07 np0005539564 kernel: SMBIOS 2.8 present.
Nov 29 01:17:07 np0005539564 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 29 01:17:07 np0005539564 kernel: Hypervisor detected: KVM
Nov 29 01:17:07 np0005539564 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 29 01:17:07 np0005539564 kernel: kvm-clock: using sched offset of 3725241200 cycles
Nov 29 01:17:07 np0005539564 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 29 01:17:07 np0005539564 kernel: tsc: Detected 2799.998 MHz processor
Nov 29 01:17:07 np0005539564 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 29 01:17:07 np0005539564 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 29 01:17:07 np0005539564 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 29 01:17:07 np0005539564 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 29 01:17:07 np0005539564 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 29 01:17:07 np0005539564 kernel: Using GB pages for direct mapping
Nov 29 01:17:07 np0005539564 kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 29 01:17:07 np0005539564 kernel: ACPI: Early table checksum verification disabled
Nov 29 01:17:07 np0005539564 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 29 01:17:07 np0005539564 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 01:17:07 np0005539564 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 01:17:07 np0005539564 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 01:17:07 np0005539564 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 29 01:17:07 np0005539564 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 01:17:07 np0005539564 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 01:17:07 np0005539564 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 29 01:17:07 np0005539564 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 29 01:17:07 np0005539564 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 29 01:17:07 np0005539564 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 29 01:17:07 np0005539564 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 29 01:17:07 np0005539564 kernel: No NUMA configuration found
Nov 29 01:17:07 np0005539564 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 29 01:17:07 np0005539564 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Nov 29 01:17:07 np0005539564 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 29 01:17:07 np0005539564 kernel: Zone ranges:
Nov 29 01:17:07 np0005539564 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 29 01:17:07 np0005539564 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 29 01:17:07 np0005539564 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 29 01:17:07 np0005539564 kernel:  Device   empty
Nov 29 01:17:07 np0005539564 kernel: Movable zone start for each node
Nov 29 01:17:07 np0005539564 kernel: Early memory node ranges
Nov 29 01:17:07 np0005539564 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 29 01:17:07 np0005539564 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 29 01:17:07 np0005539564 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 29 01:17:07 np0005539564 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 29 01:17:07 np0005539564 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 29 01:17:07 np0005539564 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 29 01:17:07 np0005539564 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 29 01:17:07 np0005539564 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 29 01:17:07 np0005539564 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 29 01:17:07 np0005539564 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 29 01:17:07 np0005539564 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 29 01:17:07 np0005539564 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 29 01:17:07 np0005539564 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 29 01:17:07 np0005539564 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 29 01:17:07 np0005539564 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 29 01:17:07 np0005539564 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 29 01:17:07 np0005539564 kernel: TSC deadline timer available
Nov 29 01:17:07 np0005539564 kernel: CPU topo: Max. logical packages:   8
Nov 29 01:17:07 np0005539564 kernel: CPU topo: Max. logical dies:       8
Nov 29 01:17:07 np0005539564 kernel: CPU topo: Max. dies per package:   1
Nov 29 01:17:07 np0005539564 kernel: CPU topo: Max. threads per core:   1
Nov 29 01:17:07 np0005539564 kernel: CPU topo: Num. cores per package:     1
Nov 29 01:17:07 np0005539564 kernel: CPU topo: Num. threads per package:   1
Nov 29 01:17:07 np0005539564 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 29 01:17:07 np0005539564 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 29 01:17:07 np0005539564 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 29 01:17:07 np0005539564 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 29 01:17:07 np0005539564 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 29 01:17:07 np0005539564 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 29 01:17:07 np0005539564 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 29 01:17:07 np0005539564 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 29 01:17:07 np0005539564 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 29 01:17:07 np0005539564 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 29 01:17:07 np0005539564 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 29 01:17:07 np0005539564 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 29 01:17:07 np0005539564 kernel: Booting paravirtualized kernel on KVM
Nov 29 01:17:07 np0005539564 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 29 01:17:07 np0005539564 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 29 01:17:07 np0005539564 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 29 01:17:07 np0005539564 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 29 01:17:07 np0005539564 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 01:17:07 np0005539564 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 29 01:17:07 np0005539564 kernel: random: crng init done
Nov 29 01:17:07 np0005539564 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 29 01:17:07 np0005539564 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 29 01:17:07 np0005539564 kernel: Fallback order for Node 0: 0 
Nov 29 01:17:07 np0005539564 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 29 01:17:07 np0005539564 kernel: Policy zone: Normal
Nov 29 01:17:07 np0005539564 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 29 01:17:07 np0005539564 kernel: software IO TLB: area num 8.
Nov 29 01:17:07 np0005539564 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 29 01:17:07 np0005539564 kernel: ftrace: allocating 49313 entries in 193 pages
Nov 29 01:17:07 np0005539564 kernel: ftrace: allocated 193 pages with 3 groups
Nov 29 01:17:07 np0005539564 kernel: Dynamic Preempt: voluntary
Nov 29 01:17:07 np0005539564 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 29 01:17:07 np0005539564 kernel: rcu: 	RCU event tracing is enabled.
Nov 29 01:17:07 np0005539564 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 29 01:17:07 np0005539564 kernel: 	Trampoline variant of Tasks RCU enabled.
Nov 29 01:17:07 np0005539564 kernel: 	Rude variant of Tasks RCU enabled.
Nov 29 01:17:07 np0005539564 kernel: 	Tracing variant of Tasks RCU enabled.
Nov 29 01:17:07 np0005539564 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 29 01:17:07 np0005539564 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 29 01:17:07 np0005539564 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 01:17:07 np0005539564 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 01:17:07 np0005539564 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 01:17:07 np0005539564 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 29 01:17:07 np0005539564 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 29 01:17:07 np0005539564 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 29 01:17:07 np0005539564 kernel: Console: colour VGA+ 80x25
Nov 29 01:17:07 np0005539564 kernel: printk: console [ttyS0] enabled
Nov 29 01:17:07 np0005539564 kernel: ACPI: Core revision 20230331
Nov 29 01:17:07 np0005539564 kernel: APIC: Switch to symmetric I/O mode setup
Nov 29 01:17:07 np0005539564 kernel: x2apic enabled
Nov 29 01:17:07 np0005539564 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 29 01:17:07 np0005539564 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 29 01:17:07 np0005539564 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 29 01:17:07 np0005539564 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 29 01:17:07 np0005539564 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 29 01:17:07 np0005539564 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 29 01:17:07 np0005539564 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 29 01:17:07 np0005539564 kernel: Spectre V2 : Mitigation: Retpolines
Nov 29 01:17:07 np0005539564 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 29 01:17:07 np0005539564 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 29 01:17:07 np0005539564 kernel: RETBleed: Mitigation: untrained return thunk
Nov 29 01:17:07 np0005539564 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 29 01:17:07 np0005539564 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 29 01:17:07 np0005539564 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 29 01:17:07 np0005539564 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 29 01:17:07 np0005539564 kernel: x86/bugs: return thunk changed
Nov 29 01:17:07 np0005539564 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 29 01:17:07 np0005539564 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 29 01:17:07 np0005539564 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 29 01:17:07 np0005539564 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 29 01:17:07 np0005539564 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 29 01:17:07 np0005539564 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 29 01:17:07 np0005539564 kernel: Freeing SMP alternatives memory: 40K
Nov 29 01:17:07 np0005539564 kernel: pid_max: default: 32768 minimum: 301
Nov 29 01:17:07 np0005539564 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 29 01:17:07 np0005539564 kernel: landlock: Up and running.
Nov 29 01:17:07 np0005539564 kernel: Yama: becoming mindful.
Nov 29 01:17:07 np0005539564 kernel: SELinux:  Initializing.
Nov 29 01:17:07 np0005539564 kernel: LSM support for eBPF active
Nov 29 01:17:07 np0005539564 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 29 01:17:07 np0005539564 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 29 01:17:07 np0005539564 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 29 01:17:07 np0005539564 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 29 01:17:07 np0005539564 kernel: ... version:                0
Nov 29 01:17:07 np0005539564 kernel: ... bit width:              48
Nov 29 01:17:07 np0005539564 kernel: ... generic registers:      6
Nov 29 01:17:07 np0005539564 kernel: ... value mask:             0000ffffffffffff
Nov 29 01:17:07 np0005539564 kernel: ... max period:             00007fffffffffff
Nov 29 01:17:07 np0005539564 kernel: ... fixed-purpose events:   0
Nov 29 01:17:07 np0005539564 kernel: ... event mask:             000000000000003f
Nov 29 01:17:07 np0005539564 kernel: signal: max sigframe size: 1776
Nov 29 01:17:07 np0005539564 kernel: rcu: Hierarchical SRCU implementation.
Nov 29 01:17:07 np0005539564 kernel: rcu: 	Max phase no-delay instances is 400.
Nov 29 01:17:07 np0005539564 kernel: smp: Bringing up secondary CPUs ...
Nov 29 01:17:07 np0005539564 kernel: smpboot: x86: Booting SMP configuration:
Nov 29 01:17:07 np0005539564 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 29 01:17:07 np0005539564 kernel: smp: Brought up 1 node, 8 CPUs
Nov 29 01:17:07 np0005539564 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 29 01:17:07 np0005539564 kernel: node 0 deferred pages initialised in 15ms
Nov 29 01:17:07 np0005539564 kernel: Memory: 7765924K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 616272K reserved, 0K cma-reserved)
Nov 29 01:17:07 np0005539564 kernel: devtmpfs: initialized
Nov 29 01:17:07 np0005539564 kernel: x86/mm: Memory block size: 128MB
Nov 29 01:17:07 np0005539564 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 29 01:17:07 np0005539564 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 29 01:17:07 np0005539564 kernel: pinctrl core: initialized pinctrl subsystem
Nov 29 01:17:07 np0005539564 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 29 01:17:07 np0005539564 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 29 01:17:07 np0005539564 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 29 01:17:07 np0005539564 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 29 01:17:07 np0005539564 kernel: audit: initializing netlink subsys (disabled)
Nov 29 01:17:07 np0005539564 kernel: audit: type=2000 audit(1764397025.205:1): state=initialized audit_enabled=0 res=1
Nov 29 01:17:07 np0005539564 kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 29 01:17:07 np0005539564 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 29 01:17:07 np0005539564 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 29 01:17:07 np0005539564 kernel: cpuidle: using governor menu
Nov 29 01:17:07 np0005539564 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 29 01:17:07 np0005539564 kernel: PCI: Using configuration type 1 for base access
Nov 29 01:17:07 np0005539564 kernel: PCI: Using configuration type 1 for extended access
Nov 29 01:17:07 np0005539564 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 29 01:17:07 np0005539564 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 29 01:17:07 np0005539564 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 29 01:17:07 np0005539564 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 29 01:17:07 np0005539564 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 29 01:17:07 np0005539564 kernel: Demotion targets for Node 0: null
Nov 29 01:17:07 np0005539564 kernel: cryptd: max_cpu_qlen set to 1000
Nov 29 01:17:07 np0005539564 kernel: ACPI: Added _OSI(Module Device)
Nov 29 01:17:07 np0005539564 kernel: ACPI: Added _OSI(Processor Device)
Nov 29 01:17:07 np0005539564 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 29 01:17:07 np0005539564 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 29 01:17:07 np0005539564 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 29 01:17:07 np0005539564 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 29 01:17:07 np0005539564 kernel: ACPI: Interpreter enabled
Nov 29 01:17:07 np0005539564 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 29 01:17:07 np0005539564 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 29 01:17:07 np0005539564 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 29 01:17:07 np0005539564 kernel: PCI: Using E820 reservations for host bridge windows
Nov 29 01:17:07 np0005539564 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 29 01:17:07 np0005539564 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 29 01:17:07 np0005539564 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [3] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [4] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [5] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [6] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [7] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [8] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [9] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [10] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [11] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [12] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [13] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [14] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [15] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [16] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [17] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [18] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [19] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [20] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [21] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [22] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [23] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [24] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [25] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [26] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [27] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [28] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [29] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [30] registered
Nov 29 01:17:07 np0005539564 kernel: acpiphp: Slot [31] registered
Nov 29 01:17:07 np0005539564 kernel: PCI host bridge to bus 0000:00
Nov 29 01:17:07 np0005539564 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 29 01:17:07 np0005539564 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 29 01:17:07 np0005539564 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 29 01:17:07 np0005539564 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 29 01:17:07 np0005539564 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 29 01:17:07 np0005539564 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 29 01:17:07 np0005539564 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 29 01:17:07 np0005539564 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 29 01:17:07 np0005539564 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 29 01:17:07 np0005539564 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 29 01:17:07 np0005539564 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 29 01:17:07 np0005539564 kernel: iommu: Default domain type: Translated
Nov 29 01:17:07 np0005539564 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 29 01:17:07 np0005539564 kernel: SCSI subsystem initialized
Nov 29 01:17:07 np0005539564 kernel: ACPI: bus type USB registered
Nov 29 01:17:07 np0005539564 kernel: usbcore: registered new interface driver usbfs
Nov 29 01:17:07 np0005539564 kernel: usbcore: registered new interface driver hub
Nov 29 01:17:07 np0005539564 kernel: usbcore: registered new device driver usb
Nov 29 01:17:07 np0005539564 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 29 01:17:07 np0005539564 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 29 01:17:07 np0005539564 kernel: PTP clock support registered
Nov 29 01:17:07 np0005539564 kernel: EDAC MC: Ver: 3.0.0
Nov 29 01:17:07 np0005539564 kernel: NetLabel: Initializing
Nov 29 01:17:07 np0005539564 kernel: NetLabel:  domain hash size = 128
Nov 29 01:17:07 np0005539564 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 29 01:17:07 np0005539564 kernel: NetLabel:  unlabeled traffic allowed by default
Nov 29 01:17:07 np0005539564 kernel: PCI: Using ACPI for IRQ routing
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 29 01:17:07 np0005539564 kernel: vgaarb: loaded
Nov 29 01:17:07 np0005539564 kernel: clocksource: Switched to clocksource kvm-clock
Nov 29 01:17:07 np0005539564 kernel: VFS: Disk quotas dquot_6.6.0
Nov 29 01:17:07 np0005539564 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 29 01:17:07 np0005539564 kernel: pnp: PnP ACPI init
Nov 29 01:17:07 np0005539564 kernel: pnp: PnP ACPI: found 5 devices
Nov 29 01:17:07 np0005539564 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 29 01:17:07 np0005539564 kernel: NET: Registered PF_INET protocol family
Nov 29 01:17:07 np0005539564 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 29 01:17:07 np0005539564 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 29 01:17:07 np0005539564 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 29 01:17:07 np0005539564 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 29 01:17:07 np0005539564 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 29 01:17:07 np0005539564 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 29 01:17:07 np0005539564 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 29 01:17:07 np0005539564 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 29 01:17:07 np0005539564 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 29 01:17:07 np0005539564 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 29 01:17:07 np0005539564 kernel: NET: Registered PF_XDP protocol family
Nov 29 01:17:07 np0005539564 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 29 01:17:07 np0005539564 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 29 01:17:07 np0005539564 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 29 01:17:07 np0005539564 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 29 01:17:07 np0005539564 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 29 01:17:07 np0005539564 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 29 01:17:07 np0005539564 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 71867 usecs
Nov 29 01:17:07 np0005539564 kernel: PCI: CLS 0 bytes, default 64
Nov 29 01:17:07 np0005539564 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 29 01:17:07 np0005539564 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 29 01:17:07 np0005539564 kernel: ACPI: bus type thunderbolt registered
Nov 29 01:17:07 np0005539564 kernel: Trying to unpack rootfs image as initramfs...
Nov 29 01:17:07 np0005539564 kernel: Initialise system trusted keyrings
Nov 29 01:17:07 np0005539564 kernel: Key type blacklist registered
Nov 29 01:17:07 np0005539564 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 29 01:17:07 np0005539564 kernel: zbud: loaded
Nov 29 01:17:07 np0005539564 kernel: integrity: Platform Keyring initialized
Nov 29 01:17:07 np0005539564 kernel: integrity: Machine keyring initialized
Nov 29 01:17:07 np0005539564 kernel: Freeing initrd memory: 85868K
Nov 29 01:17:07 np0005539564 kernel: NET: Registered PF_ALG protocol family
Nov 29 01:17:07 np0005539564 kernel: xor: automatically using best checksumming function   avx       
Nov 29 01:17:07 np0005539564 kernel: Key type asymmetric registered
Nov 29 01:17:07 np0005539564 kernel: Asymmetric key parser 'x509' registered
Nov 29 01:17:07 np0005539564 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 29 01:17:07 np0005539564 kernel: io scheduler mq-deadline registered
Nov 29 01:17:07 np0005539564 kernel: io scheduler kyber registered
Nov 29 01:17:07 np0005539564 kernel: io scheduler bfq registered
Nov 29 01:17:07 np0005539564 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 29 01:17:07 np0005539564 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 29 01:17:07 np0005539564 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 29 01:17:07 np0005539564 kernel: ACPI: button: Power Button [PWRF]
Nov 29 01:17:07 np0005539564 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 29 01:17:07 np0005539564 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 29 01:17:07 np0005539564 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 29 01:17:07 np0005539564 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 29 01:17:07 np0005539564 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 29 01:17:07 np0005539564 kernel: Non-volatile memory driver v1.3
Nov 29 01:17:07 np0005539564 kernel: rdac: device handler registered
Nov 29 01:17:07 np0005539564 kernel: hp_sw: device handler registered
Nov 29 01:17:07 np0005539564 kernel: emc: device handler registered
Nov 29 01:17:07 np0005539564 kernel: alua: device handler registered
Nov 29 01:17:07 np0005539564 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 29 01:17:07 np0005539564 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 29 01:17:07 np0005539564 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 29 01:17:07 np0005539564 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 29 01:17:07 np0005539564 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 29 01:17:07 np0005539564 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 29 01:17:07 np0005539564 kernel: usb usb1: Product: UHCI Host Controller
Nov 29 01:17:07 np0005539564 kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 29 01:17:07 np0005539564 kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 29 01:17:07 np0005539564 kernel: hub 1-0:1.0: USB hub found
Nov 29 01:17:07 np0005539564 kernel: hub 1-0:1.0: 2 ports detected
Nov 29 01:17:07 np0005539564 kernel: usbcore: registered new interface driver usbserial_generic
Nov 29 01:17:07 np0005539564 kernel: usbserial: USB Serial support registered for generic
Nov 29 01:17:07 np0005539564 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 29 01:17:07 np0005539564 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 29 01:17:07 np0005539564 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 29 01:17:07 np0005539564 kernel: mousedev: PS/2 mouse device common for all mice
Nov 29 01:17:07 np0005539564 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 29 01:17:07 np0005539564 kernel: rtc_cmos 00:04: registered as rtc0
Nov 29 01:17:07 np0005539564 kernel: rtc_cmos 00:04: setting system clock to 2025-11-29T06:17:06 UTC (1764397026)
Nov 29 01:17:07 np0005539564 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 29 01:17:07 np0005539564 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 29 01:17:07 np0005539564 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 29 01:17:07 np0005539564 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 29 01:17:07 np0005539564 kernel: usbcore: registered new interface driver usbhid
Nov 29 01:17:07 np0005539564 kernel: usbhid: USB HID core driver
Nov 29 01:17:07 np0005539564 kernel: drop_monitor: Initializing network drop monitor service
Nov 29 01:17:07 np0005539564 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 29 01:17:07 np0005539564 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 29 01:17:07 np0005539564 kernel: Initializing XFRM netlink socket
Nov 29 01:17:07 np0005539564 kernel: NET: Registered PF_INET6 protocol family
Nov 29 01:17:07 np0005539564 kernel: Segment Routing with IPv6
Nov 29 01:17:07 np0005539564 kernel: NET: Registered PF_PACKET protocol family
Nov 29 01:17:07 np0005539564 kernel: mpls_gso: MPLS GSO support
Nov 29 01:17:07 np0005539564 kernel: IPI shorthand broadcast: enabled
Nov 29 01:17:07 np0005539564 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 29 01:17:07 np0005539564 kernel: AES CTR mode by8 optimization enabled
Nov 29 01:17:07 np0005539564 kernel: sched_clock: Marking stable (1942038794, 151513724)->(2241602170, -148049652)
Nov 29 01:17:07 np0005539564 kernel: registered taskstats version 1
Nov 29 01:17:07 np0005539564 kernel: Loading compiled-in X.509 certificates
Nov 29 01:17:07 np0005539564 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 29 01:17:07 np0005539564 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 29 01:17:07 np0005539564 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 29 01:17:07 np0005539564 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 29 01:17:07 np0005539564 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 29 01:17:07 np0005539564 kernel: Demotion targets for Node 0: null
Nov 29 01:17:07 np0005539564 kernel: page_owner is disabled
Nov 29 01:17:07 np0005539564 kernel: Key type .fscrypt registered
Nov 29 01:17:07 np0005539564 kernel: Key type fscrypt-provisioning registered
Nov 29 01:17:07 np0005539564 kernel: Key type big_key registered
Nov 29 01:17:07 np0005539564 kernel: Key type encrypted registered
Nov 29 01:17:07 np0005539564 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 29 01:17:07 np0005539564 kernel: Loading compiled-in module X.509 certificates
Nov 29 01:17:07 np0005539564 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 29 01:17:07 np0005539564 kernel: ima: Allocated hash algorithm: sha256
Nov 29 01:17:07 np0005539564 kernel: ima: No architecture policies found
Nov 29 01:17:07 np0005539564 kernel: evm: Initialising EVM extended attributes:
Nov 29 01:17:07 np0005539564 kernel: evm: security.selinux
Nov 29 01:17:07 np0005539564 kernel: evm: security.SMACK64 (disabled)
Nov 29 01:17:07 np0005539564 kernel: evm: security.SMACK64EXEC (disabled)
Nov 29 01:17:07 np0005539564 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 29 01:17:07 np0005539564 kernel: evm: security.SMACK64MMAP (disabled)
Nov 29 01:17:07 np0005539564 kernel: evm: security.apparmor (disabled)
Nov 29 01:17:07 np0005539564 kernel: evm: security.ima
Nov 29 01:17:07 np0005539564 kernel: evm: security.capability
Nov 29 01:17:07 np0005539564 kernel: evm: HMAC attrs: 0x1
Nov 29 01:17:07 np0005539564 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 29 01:17:07 np0005539564 kernel: Running certificate verification RSA selftest
Nov 29 01:17:07 np0005539564 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 29 01:17:07 np0005539564 kernel: Running certificate verification ECDSA selftest
Nov 29 01:17:07 np0005539564 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 29 01:17:07 np0005539564 kernel: clk: Disabling unused clocks
Nov 29 01:17:07 np0005539564 kernel: Freeing unused decrypted memory: 2028K
Nov 29 01:17:07 np0005539564 kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 29 01:17:07 np0005539564 kernel: Write protecting the kernel read-only data: 30720k
Nov 29 01:17:07 np0005539564 kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 29 01:17:07 np0005539564 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 29 01:17:07 np0005539564 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 29 01:17:07 np0005539564 kernel: usb 1-1: Product: QEMU USB Tablet
Nov 29 01:17:07 np0005539564 kernel: usb 1-1: Manufacturer: QEMU
Nov 29 01:17:07 np0005539564 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 29 01:17:07 np0005539564 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 29 01:17:07 np0005539564 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 29 01:17:07 np0005539564 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 29 01:17:07 np0005539564 kernel: Run /init as init process
Nov 29 01:17:07 np0005539564 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 29 01:17:07 np0005539564 systemd: Detected virtualization kvm.
Nov 29 01:17:07 np0005539564 systemd: Detected architecture x86-64.
Nov 29 01:17:07 np0005539564 systemd: Running in initrd.
Nov 29 01:17:07 np0005539564 systemd: No hostname configured, using default hostname.
Nov 29 01:17:07 np0005539564 systemd: Hostname set to <localhost>.
Nov 29 01:17:07 np0005539564 systemd: Initializing machine ID from VM UUID.
Nov 29 01:17:07 np0005539564 systemd: Queued start job for default target Initrd Default Target.
Nov 29 01:17:07 np0005539564 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 29 01:17:07 np0005539564 systemd: Reached target Local Encrypted Volumes.
Nov 29 01:17:07 np0005539564 systemd: Reached target Initrd /usr File System.
Nov 29 01:17:07 np0005539564 systemd: Reached target Local File Systems.
Nov 29 01:17:07 np0005539564 systemd: Reached target Path Units.
Nov 29 01:17:07 np0005539564 systemd: Reached target Slice Units.
Nov 29 01:17:07 np0005539564 systemd: Reached target Swaps.
Nov 29 01:17:07 np0005539564 systemd: Reached target Timer Units.
Nov 29 01:17:07 np0005539564 systemd: Listening on D-Bus System Message Bus Socket.
Nov 29 01:17:07 np0005539564 systemd: Listening on Journal Socket (/dev/log).
Nov 29 01:17:07 np0005539564 systemd: Listening on Journal Socket.
Nov 29 01:17:07 np0005539564 systemd: Listening on udev Control Socket.
Nov 29 01:17:07 np0005539564 systemd: Listening on udev Kernel Socket.
Nov 29 01:17:07 np0005539564 systemd: Reached target Socket Units.
Nov 29 01:17:07 np0005539564 systemd: Starting Create List of Static Device Nodes...
Nov 29 01:17:07 np0005539564 systemd: Starting Journal Service...
Nov 29 01:17:07 np0005539564 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 29 01:17:07 np0005539564 systemd: Starting Apply Kernel Variables...
Nov 29 01:17:07 np0005539564 systemd: Starting Create System Users...
Nov 29 01:17:07 np0005539564 systemd: Starting Setup Virtual Console...
Nov 29 01:17:07 np0005539564 systemd: Finished Create List of Static Device Nodes.
Nov 29 01:17:07 np0005539564 systemd: Finished Apply Kernel Variables.
Nov 29 01:17:07 np0005539564 systemd: Finished Create System Users.
Nov 29 01:17:07 np0005539564 systemd-journald[304]: Journal started
Nov 29 01:17:07 np0005539564 systemd-journald[304]: Runtime Journal (/run/log/journal/2e85876132924a17b38fa169c3064289) is 8.0M, max 153.6M, 145.6M free.
Nov 29 01:17:07 np0005539564 systemd-sysusers[308]: Creating group 'users' with GID 100.
Nov 29 01:17:07 np0005539564 systemd-sysusers[308]: Creating group 'dbus' with GID 81.
Nov 29 01:17:07 np0005539564 systemd-sysusers[308]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 29 01:17:07 np0005539564 systemd: Started Journal Service.
Nov 29 01:17:07 np0005539564 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 29 01:17:07 np0005539564 systemd[1]: Starting Create Volatile Files and Directories...
Nov 29 01:17:07 np0005539564 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 29 01:17:07 np0005539564 systemd[1]: Finished Create Volatile Files and Directories.
Nov 29 01:17:07 np0005539564 systemd[1]: Finished Setup Virtual Console.
Nov 29 01:17:07 np0005539564 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 29 01:17:07 np0005539564 systemd[1]: Starting dracut cmdline hook...
Nov 29 01:17:07 np0005539564 dracut-cmdline[323]: dracut-9 dracut-057-102.git20250818.el9
Nov 29 01:17:07 np0005539564 dracut-cmdline[323]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 01:17:07 np0005539564 systemd[1]: Finished dracut cmdline hook.
Nov 29 01:17:07 np0005539564 systemd[1]: Starting dracut pre-udev hook...
Nov 29 01:17:07 np0005539564 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 29 01:17:07 np0005539564 kernel: device-mapper: uevent: version 1.0.3
Nov 29 01:17:07 np0005539564 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 29 01:17:07 np0005539564 kernel: RPC: Registered named UNIX socket transport module.
Nov 29 01:17:07 np0005539564 kernel: RPC: Registered udp transport module.
Nov 29 01:17:07 np0005539564 kernel: RPC: Registered tcp transport module.
Nov 29 01:17:07 np0005539564 kernel: RPC: Registered tcp-with-tls transport module.
Nov 29 01:17:07 np0005539564 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 29 01:17:07 np0005539564 rpc.statd[440]: Version 2.5.4 starting
Nov 29 01:17:07 np0005539564 rpc.statd[440]: Initializing NSM state
Nov 29 01:17:07 np0005539564 rpc.idmapd[445]: Setting log level to 0
Nov 29 01:17:07 np0005539564 systemd[1]: Finished dracut pre-udev hook.
Nov 29 01:17:07 np0005539564 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 29 01:17:07 np0005539564 systemd-udevd[458]: Using default interface naming scheme 'rhel-9.0'.
Nov 29 01:17:07 np0005539564 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 29 01:17:07 np0005539564 systemd[1]: Starting dracut pre-trigger hook...
Nov 29 01:17:07 np0005539564 systemd[1]: Finished dracut pre-trigger hook.
Nov 29 01:17:08 np0005539564 systemd[1]: Starting Coldplug All udev Devices...
Nov 29 01:17:08 np0005539564 systemd[1]: Created slice Slice /system/modprobe.
Nov 29 01:17:08 np0005539564 systemd[1]: Starting Load Kernel Module configfs...
Nov 29 01:17:08 np0005539564 systemd[1]: Finished Coldplug All udev Devices.
Nov 29 01:17:08 np0005539564 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 01:17:08 np0005539564 systemd[1]: Finished Load Kernel Module configfs.
Nov 29 01:17:08 np0005539564 systemd[1]: Mounting Kernel Configuration File System...
Nov 29 01:17:08 np0005539564 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 29 01:17:08 np0005539564 systemd[1]: Reached target Network.
Nov 29 01:17:08 np0005539564 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 29 01:17:08 np0005539564 systemd[1]: Starting dracut initqueue hook...
Nov 29 01:17:08 np0005539564 systemd[1]: Mounted Kernel Configuration File System.
Nov 29 01:17:08 np0005539564 systemd[1]: Reached target System Initialization.
Nov 29 01:17:08 np0005539564 systemd[1]: Reached target Basic System.
Nov 29 01:17:08 np0005539564 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 29 01:17:08 np0005539564 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 29 01:17:08 np0005539564 kernel: vda: vda1
Nov 29 01:17:08 np0005539564 kernel: scsi host0: ata_piix
Nov 29 01:17:08 np0005539564 kernel: scsi host1: ata_piix
Nov 29 01:17:08 np0005539564 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 29 01:17:08 np0005539564 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 29 01:17:08 np0005539564 systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 29 01:17:08 np0005539564 systemd[1]: Reached target Initrd Root Device.
Nov 29 01:17:08 np0005539564 kernel: ata1: found unknown device (class 0)
Nov 29 01:17:08 np0005539564 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 29 01:17:08 np0005539564 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 29 01:17:08 np0005539564 systemd-udevd[474]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 01:17:08 np0005539564 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 29 01:17:08 np0005539564 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 29 01:17:08 np0005539564 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 29 01:17:08 np0005539564 systemd[1]: Finished dracut initqueue hook.
Nov 29 01:17:08 np0005539564 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 29 01:17:08 np0005539564 systemd[1]: Reached target Remote Encrypted Volumes.
Nov 29 01:17:08 np0005539564 systemd[1]: Reached target Remote File Systems.
Nov 29 01:17:08 np0005539564 systemd[1]: Starting dracut pre-mount hook...
Nov 29 01:17:08 np0005539564 systemd[1]: Finished dracut pre-mount hook.
Nov 29 01:17:08 np0005539564 systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Nov 29 01:17:08 np0005539564 systemd-fsck[553]: /usr/sbin/fsck.xfs: XFS file system.
Nov 29 01:17:08 np0005539564 systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 29 01:17:08 np0005539564 systemd[1]: Mounting /sysroot...
Nov 29 01:17:09 np0005539564 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 29 01:17:09 np0005539564 kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Nov 29 01:17:09 np0005539564 kernel: XFS (vda1): Ending clean mount
Nov 29 01:17:09 np0005539564 systemd[1]: Mounted /sysroot.
Nov 29 01:17:09 np0005539564 systemd[1]: Reached target Initrd Root File System.
Nov 29 01:17:09 np0005539564 systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 29 01:17:09 np0005539564 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 29 01:17:09 np0005539564 systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 29 01:17:09 np0005539564 systemd[1]: Reached target Initrd File Systems.
Nov 29 01:17:09 np0005539564 systemd[1]: Reached target Initrd Default Target.
Nov 29 01:17:09 np0005539564 systemd[1]: Starting dracut mount hook...
Nov 29 01:17:09 np0005539564 systemd[1]: Finished dracut mount hook.
Nov 29 01:17:09 np0005539564 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 29 01:17:09 np0005539564 rpc.idmapd[445]: exiting on signal 15
Nov 29 01:17:09 np0005539564 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 29 01:17:09 np0005539564 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 29 01:17:09 np0005539564 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped target Network.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped target Timer Units.
Nov 29 01:17:09 np0005539564 systemd[1]: dbus.socket: Deactivated successfully.
Nov 29 01:17:09 np0005539564 systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 29 01:17:09 np0005539564 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped target Initrd Default Target.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped target Basic System.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped target Initrd Root Device.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped target Initrd /usr File System.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped target Path Units.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped target Remote File Systems.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped target Slice Units.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped target Socket Units.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped target System Initialization.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped target Local File Systems.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped target Swaps.
Nov 29 01:17:09 np0005539564 systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped dracut mount hook.
Nov 29 01:17:09 np0005539564 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped dracut pre-mount hook.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped target Local Encrypted Volumes.
Nov 29 01:17:09 np0005539564 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 29 01:17:09 np0005539564 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped dracut initqueue hook.
Nov 29 01:17:09 np0005539564 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped Apply Kernel Variables.
Nov 29 01:17:09 np0005539564 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped Create Volatile Files and Directories.
Nov 29 01:17:09 np0005539564 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped Coldplug All udev Devices.
Nov 29 01:17:09 np0005539564 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped dracut pre-trigger hook.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 29 01:17:09 np0005539564 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped Setup Virtual Console.
Nov 29 01:17:09 np0005539564 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 29 01:17:09 np0005539564 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 29 01:17:09 np0005539564 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 29 01:17:09 np0005539564 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 29 01:17:09 np0005539564 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 29 01:17:09 np0005539564 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 29 01:17:09 np0005539564 systemd[1]: Closed udev Control Socket.
Nov 29 01:17:09 np0005539564 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 29 01:17:09 np0005539564 systemd[1]: Closed udev Kernel Socket.
Nov 29 01:17:09 np0005539564 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped dracut pre-udev hook.
Nov 29 01:17:09 np0005539564 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped dracut cmdline hook.
Nov 29 01:17:09 np0005539564 systemd[1]: Starting Cleanup udev Database...
Nov 29 01:17:09 np0005539564 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 29 01:17:09 np0005539564 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped Create List of Static Device Nodes.
Nov 29 01:17:09 np0005539564 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 29 01:17:09 np0005539564 systemd[1]: Stopped Create System Users.
Nov 29 01:17:09 np0005539564 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 29 01:17:09 np0005539564 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 29 01:17:09 np0005539564 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 29 01:17:09 np0005539564 systemd[1]: Finished Cleanup udev Database.
Nov 29 01:17:09 np0005539564 systemd[1]: Reached target Switch Root.
Nov 29 01:17:09 np0005539564 systemd[1]: Starting Switch Root...
Nov 29 01:17:09 np0005539564 systemd[1]: Switching root.
Nov 29 01:17:09 np0005539564 systemd-journald[304]: Journal stopped
Nov 29 01:17:10 np0005539564 systemd-journald: Received SIGTERM from PID 1 (systemd).
Nov 29 01:17:10 np0005539564 kernel: audit: type=1404 audit(1764397029.736:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 29 01:17:10 np0005539564 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:17:10 np0005539564 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:17:10 np0005539564 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:17:10 np0005539564 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:17:10 np0005539564 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:17:10 np0005539564 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:17:10 np0005539564 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:17:10 np0005539564 kernel: audit: type=1403 audit(1764397029.859:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 29 01:17:10 np0005539564 systemd: Successfully loaded SELinux policy in 126.747ms.
Nov 29 01:17:10 np0005539564 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.303ms.
Nov 29 01:17:10 np0005539564 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 29 01:17:10 np0005539564 systemd: Detected virtualization kvm.
Nov 29 01:17:10 np0005539564 systemd: Detected architecture x86-64.
Nov 29 01:17:10 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:17:10 np0005539564 systemd: initrd-switch-root.service: Deactivated successfully.
Nov 29 01:17:10 np0005539564 systemd: Stopped Switch Root.
Nov 29 01:17:10 np0005539564 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 29 01:17:10 np0005539564 systemd: Created slice Slice /system/getty.
Nov 29 01:17:10 np0005539564 systemd: Created slice Slice /system/serial-getty.
Nov 29 01:17:10 np0005539564 systemd: Created slice Slice /system/sshd-keygen.
Nov 29 01:17:10 np0005539564 systemd: Created slice User and Session Slice.
Nov 29 01:17:10 np0005539564 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 29 01:17:10 np0005539564 systemd: Started Forward Password Requests to Wall Directory Watch.
Nov 29 01:17:10 np0005539564 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 29 01:17:10 np0005539564 systemd: Reached target Local Encrypted Volumes.
Nov 29 01:17:10 np0005539564 systemd: Stopped target Switch Root.
Nov 29 01:17:10 np0005539564 systemd: Stopped target Initrd File Systems.
Nov 29 01:17:10 np0005539564 systemd: Stopped target Initrd Root File System.
Nov 29 01:17:10 np0005539564 systemd: Reached target Local Integrity Protected Volumes.
Nov 29 01:17:10 np0005539564 systemd: Reached target Path Units.
Nov 29 01:17:10 np0005539564 systemd: Reached target rpc_pipefs.target.
Nov 29 01:17:10 np0005539564 systemd: Reached target Slice Units.
Nov 29 01:17:10 np0005539564 systemd: Reached target Swaps.
Nov 29 01:17:10 np0005539564 systemd: Reached target Local Verity Protected Volumes.
Nov 29 01:17:10 np0005539564 systemd: Listening on RPCbind Server Activation Socket.
Nov 29 01:17:10 np0005539564 systemd: Reached target RPC Port Mapper.
Nov 29 01:17:10 np0005539564 systemd: Listening on Process Core Dump Socket.
Nov 29 01:17:10 np0005539564 systemd: Listening on initctl Compatibility Named Pipe.
Nov 29 01:17:10 np0005539564 systemd: Listening on udev Control Socket.
Nov 29 01:17:10 np0005539564 systemd: Listening on udev Kernel Socket.
Nov 29 01:17:10 np0005539564 systemd: Mounting Huge Pages File System...
Nov 29 01:17:10 np0005539564 systemd: Mounting POSIX Message Queue File System...
Nov 29 01:17:10 np0005539564 systemd: Mounting Kernel Debug File System...
Nov 29 01:17:10 np0005539564 systemd: Mounting Kernel Trace File System...
Nov 29 01:17:10 np0005539564 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 29 01:17:10 np0005539564 systemd: Starting Create List of Static Device Nodes...
Nov 29 01:17:10 np0005539564 systemd: Starting Load Kernel Module configfs...
Nov 29 01:17:10 np0005539564 systemd: Starting Load Kernel Module drm...
Nov 29 01:17:10 np0005539564 systemd: Starting Load Kernel Module efi_pstore...
Nov 29 01:17:10 np0005539564 systemd: Starting Load Kernel Module fuse...
Nov 29 01:17:10 np0005539564 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 29 01:17:10 np0005539564 systemd: systemd-fsck-root.service: Deactivated successfully.
Nov 29 01:17:10 np0005539564 systemd: Stopped File System Check on Root Device.
Nov 29 01:17:10 np0005539564 systemd: Stopped Journal Service.
Nov 29 01:17:10 np0005539564 systemd: Starting Journal Service...
Nov 29 01:17:10 np0005539564 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 29 01:17:10 np0005539564 systemd: Starting Generate network units from Kernel command line...
Nov 29 01:17:10 np0005539564 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 01:17:10 np0005539564 systemd: Starting Remount Root and Kernel File Systems...
Nov 29 01:17:10 np0005539564 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 29 01:17:10 np0005539564 systemd: Starting Apply Kernel Variables...
Nov 29 01:17:10 np0005539564 systemd: Starting Coldplug All udev Devices...
Nov 29 01:17:10 np0005539564 kernel: fuse: init (API version 7.37)
Nov 29 01:17:10 np0005539564 systemd: Mounted Huge Pages File System.
Nov 29 01:17:10 np0005539564 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:10 np0005539564 systemd: Mounted POSIX Message Queue File System.
Nov 29 01:17:10 np0005539564 systemd: Mounted Kernel Debug File System.
Nov 29 01:17:10 np0005539564 systemd: Mounted Kernel Trace File System.
Nov 29 01:17:10 np0005539564 systemd: Finished Create List of Static Device Nodes.
Nov 29 01:17:10 np0005539564 systemd-journald[677]: Journal started
Nov 29 01:17:10 np0005539564 systemd-journald[677]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 29 01:17:10 np0005539564 systemd[1]: Queued start job for default target Multi-User System.
Nov 29 01:17:10 np0005539564 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 29 01:17:10 np0005539564 systemd: Started Journal Service.
Nov 29 01:17:10 np0005539564 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 01:17:10 np0005539564 systemd[1]: Finished Load Kernel Module configfs.
Nov 29 01:17:10 np0005539564 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 29 01:17:10 np0005539564 systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 29 01:17:10 np0005539564 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 29 01:17:10 np0005539564 systemd[1]: Finished Load Kernel Module fuse.
Nov 29 01:17:10 np0005539564 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 29 01:17:10 np0005539564 kernel: ACPI: bus type drm_connector registered
Nov 29 01:17:10 np0005539564 systemd[1]: Finished Generate network units from Kernel command line.
Nov 29 01:17:10 np0005539564 systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 29 01:17:10 np0005539564 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 29 01:17:10 np0005539564 systemd[1]: Finished Load Kernel Module drm.
Nov 29 01:17:10 np0005539564 systemd[1]: Finished Apply Kernel Variables.
Nov 29 01:17:10 np0005539564 systemd[1]: Mounting FUSE Control File System...
Nov 29 01:17:10 np0005539564 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 29 01:17:10 np0005539564 systemd[1]: Starting Rebuild Hardware Database...
Nov 29 01:17:10 np0005539564 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 29 01:17:10 np0005539564 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 29 01:17:10 np0005539564 systemd[1]: Starting Load/Save OS Random Seed...
Nov 29 01:17:10 np0005539564 systemd[1]: Starting Create System Users...
Nov 29 01:17:10 np0005539564 systemd[1]: Mounted FUSE Control File System.
Nov 29 01:17:10 np0005539564 systemd-journald[677]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 29 01:17:10 np0005539564 systemd-journald[677]: Received client request to flush runtime journal.
Nov 29 01:17:10 np0005539564 systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 29 01:17:10 np0005539564 systemd[1]: Finished Load/Save OS Random Seed.
Nov 29 01:17:10 np0005539564 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 29 01:17:10 np0005539564 systemd[1]: Finished Create System Users.
Nov 29 01:17:10 np0005539564 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 29 01:17:10 np0005539564 systemd[1]: Finished Coldplug All udev Devices.
Nov 29 01:17:10 np0005539564 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 29 01:17:10 np0005539564 systemd[1]: Reached target Preparation for Local File Systems.
Nov 29 01:17:10 np0005539564 systemd[1]: Reached target Local File Systems.
Nov 29 01:17:10 np0005539564 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 29 01:17:10 np0005539564 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 29 01:17:10 np0005539564 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 29 01:17:10 np0005539564 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 29 01:17:10 np0005539564 systemd[1]: Starting Automatic Boot Loader Update...
Nov 29 01:17:10 np0005539564 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 29 01:17:10 np0005539564 systemd[1]: Starting Create Volatile Files and Directories...
Nov 29 01:17:10 np0005539564 bootctl[696]: Couldn't find EFI system partition, skipping.
Nov 29 01:17:10 np0005539564 systemd[1]: Finished Automatic Boot Loader Update.
Nov 29 01:17:10 np0005539564 systemd[1]: Finished Create Volatile Files and Directories.
Nov 29 01:17:10 np0005539564 systemd[1]: Starting Security Auditing Service...
Nov 29 01:17:10 np0005539564 systemd[1]: Starting RPC Bind...
Nov 29 01:17:10 np0005539564 systemd[1]: Starting Rebuild Journal Catalog...
Nov 29 01:17:10 np0005539564 auditd[702]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 29 01:17:10 np0005539564 auditd[702]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 29 01:17:10 np0005539564 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 29 01:17:10 np0005539564 systemd[1]: Finished Rebuild Journal Catalog.
Nov 29 01:17:10 np0005539564 augenrules[707]: /sbin/augenrules: No change
Nov 29 01:17:10 np0005539564 augenrules[722]: No rules
Nov 29 01:17:10 np0005539564 augenrules[722]: enabled 1
Nov 29 01:17:10 np0005539564 augenrules[722]: failure 1
Nov 29 01:17:10 np0005539564 augenrules[722]: pid 702
Nov 29 01:17:10 np0005539564 augenrules[722]: rate_limit 0
Nov 29 01:17:10 np0005539564 augenrules[722]: backlog_limit 8192
Nov 29 01:17:10 np0005539564 augenrules[722]: lost 0
Nov 29 01:17:10 np0005539564 augenrules[722]: backlog 3
Nov 29 01:17:10 np0005539564 augenrules[722]: backlog_wait_time 60000
Nov 29 01:17:10 np0005539564 augenrules[722]: backlog_wait_time_actual 0
Nov 29 01:17:10 np0005539564 augenrules[722]: enabled 1
Nov 29 01:17:10 np0005539564 augenrules[722]: failure 1
Nov 29 01:17:10 np0005539564 augenrules[722]: pid 702
Nov 29 01:17:10 np0005539564 augenrules[722]: rate_limit 0
Nov 29 01:17:10 np0005539564 augenrules[722]: backlog_limit 8192
Nov 29 01:17:10 np0005539564 augenrules[722]: lost 0
Nov 29 01:17:10 np0005539564 augenrules[722]: backlog 0
Nov 29 01:17:10 np0005539564 augenrules[722]: backlog_wait_time 60000
Nov 29 01:17:10 np0005539564 augenrules[722]: backlog_wait_time_actual 0
Nov 29 01:17:10 np0005539564 augenrules[722]: enabled 1
Nov 29 01:17:10 np0005539564 augenrules[722]: failure 1
Nov 29 01:17:10 np0005539564 augenrules[722]: pid 702
Nov 29 01:17:10 np0005539564 augenrules[722]: rate_limit 0
Nov 29 01:17:10 np0005539564 augenrules[722]: backlog_limit 8192
Nov 29 01:17:10 np0005539564 augenrules[722]: lost 0
Nov 29 01:17:10 np0005539564 augenrules[722]: backlog 1
Nov 29 01:17:10 np0005539564 augenrules[722]: backlog_wait_time 60000
Nov 29 01:17:10 np0005539564 augenrules[722]: backlog_wait_time_actual 0
Nov 29 01:17:10 np0005539564 systemd[1]: Started Security Auditing Service.
Nov 29 01:17:10 np0005539564 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 29 01:17:10 np0005539564 systemd[1]: Started RPC Bind.
Nov 29 01:17:10 np0005539564 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 29 01:17:10 np0005539564 systemd[1]: Finished Rebuild Hardware Database.
Nov 29 01:17:10 np0005539564 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 29 01:17:10 np0005539564 systemd[1]: Starting Update is Completed...
Nov 29 01:17:11 np0005539564 systemd-udevd[730]: Using default interface naming scheme 'rhel-9.0'.
Nov 29 01:17:11 np0005539564 systemd[1]: Finished Update is Completed.
Nov 29 01:17:11 np0005539564 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 29 01:17:11 np0005539564 systemd[1]: Reached target System Initialization.
Nov 29 01:17:11 np0005539564 systemd[1]: Started dnf makecache --timer.
Nov 29 01:17:11 np0005539564 systemd[1]: Started Daily rotation of log files.
Nov 29 01:17:11 np0005539564 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 29 01:17:11 np0005539564 systemd[1]: Reached target Timer Units.
Nov 29 01:17:11 np0005539564 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 29 01:17:11 np0005539564 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 29 01:17:11 np0005539564 systemd[1]: Reached target Socket Units.
Nov 29 01:17:11 np0005539564 systemd[1]: Starting D-Bus System Message Bus...
Nov 29 01:17:11 np0005539564 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 01:17:11 np0005539564 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 29 01:17:11 np0005539564 systemd[1]: Starting Load Kernel Module configfs...
Nov 29 01:17:11 np0005539564 systemd-udevd[743]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 01:17:11 np0005539564 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 01:17:11 np0005539564 systemd[1]: Finished Load Kernel Module configfs.
Nov 29 01:17:11 np0005539564 systemd[1]: Started D-Bus System Message Bus.
Nov 29 01:17:11 np0005539564 systemd[1]: Reached target Basic System.
Nov 29 01:17:11 np0005539564 dbus-broker-lau[757]: Ready
Nov 29 01:17:11 np0005539564 systemd[1]: Starting NTP client/server...
Nov 29 01:17:11 np0005539564 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 29 01:17:11 np0005539564 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 29 01:17:11 np0005539564 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 29 01:17:11 np0005539564 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 29 01:17:11 np0005539564 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 29 01:17:11 np0005539564 systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 29 01:17:11 np0005539564 systemd[1]: Starting IPv4 firewall with iptables...
Nov 29 01:17:11 np0005539564 systemd[1]: Started irqbalance daemon.
Nov 29 01:17:11 np0005539564 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 29 01:17:11 np0005539564 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 01:17:11 np0005539564 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 01:17:11 np0005539564 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 01:17:11 np0005539564 systemd[1]: Reached target sshd-keygen.target.
Nov 29 01:17:11 np0005539564 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 29 01:17:11 np0005539564 systemd[1]: Reached target User and Group Name Lookups.
Nov 29 01:17:11 np0005539564 systemd[1]: Starting User Login Management...
Nov 29 01:17:11 np0005539564 systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 29 01:17:11 np0005539564 chronyd[794]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 29 01:17:11 np0005539564 chronyd[794]: Loaded 0 symmetric keys
Nov 29 01:17:11 np0005539564 chronyd[794]: Using right/UTC timezone to obtain leap second data
Nov 29 01:17:11 np0005539564 chronyd[794]: Loaded seccomp filter (level 2)
Nov 29 01:17:11 np0005539564 systemd[1]: Started NTP client/server.
Nov 29 01:17:11 np0005539564 systemd-logind[785]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 29 01:17:11 np0005539564 systemd-logind[785]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 29 01:17:11 np0005539564 systemd-logind[785]: New seat seat0.
Nov 29 01:17:11 np0005539564 systemd[1]: Started User Login Management.
Nov 29 01:17:11 np0005539564 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 29 01:17:11 np0005539564 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 29 01:17:11 np0005539564 kernel: kvm_amd: TSC scaling supported
Nov 29 01:17:11 np0005539564 kernel: kvm_amd: Nested Virtualization enabled
Nov 29 01:17:11 np0005539564 kernel: kvm_amd: Nested Paging enabled
Nov 29 01:17:11 np0005539564 kernel: kvm_amd: LBR virtualization supported
Nov 29 01:17:11 np0005539564 kernel: Console: switching to colour dummy device 80x25
Nov 29 01:17:11 np0005539564 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 29 01:17:11 np0005539564 kernel: [drm] features: -context_init
Nov 29 01:17:11 np0005539564 kernel: [drm] number of scanouts: 1
Nov 29 01:17:11 np0005539564 kernel: [drm] number of cap sets: 0
Nov 29 01:17:11 np0005539564 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 29 01:17:11 np0005539564 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 29 01:17:11 np0005539564 kernel: Console: switching to colour frame buffer device 128x48
Nov 29 01:17:11 np0005539564 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 29 01:17:11 np0005539564 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 29 01:17:11 np0005539564 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 29 01:17:11 np0005539564 iptables.init[779]: iptables: Applying firewall rules: [  OK  ]
Nov 29 01:17:11 np0005539564 systemd[1]: Finished IPv4 firewall with iptables.
Nov 29 01:17:11 np0005539564 cloud-init[839]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 29 Nov 2025 06:17:11 +0000. Up 6.99 seconds.
Nov 29 01:17:11 np0005539564 systemd[1]: run-cloud\x2dinit-tmp-tmphtxy0mcp.mount: Deactivated successfully.
Nov 29 01:17:11 np0005539564 systemd[1]: Starting Hostname Service...
Nov 29 01:17:11 np0005539564 systemd[1]: Started Hostname Service.
Nov 29 01:17:11 np0005539564 systemd-hostnamed[853]: Hostname set to <np0005539564.novalocal> (static)
Nov 29 01:17:12 np0005539564 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 29 01:17:12 np0005539564 systemd[1]: Reached target Preparation for Network.
Nov 29 01:17:12 np0005539564 systemd[1]: Starting Network Manager...
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.1585] NetworkManager (version 1.54.1-1.el9) is starting... (boot:e076cd5a-06db-46a2-b985-67d3134914f7)
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.1593] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.1680] manager[0x55a98591b080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.1720] hostname: hostname: using hostnamed
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.1721] hostname: static hostname changed from (none) to "np0005539564.novalocal"
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.1726] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.1845] manager[0x55a98591b080]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.1846] manager[0x55a98591b080]: rfkill: WWAN hardware radio set enabled
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.1888] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.1889] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.1889] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.1890] manager: Networking is enabled by state file
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.1891] settings: Loaded settings plugin: keyfile (internal)
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.1902] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.1939] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 01:17:12 np0005539564 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.1962] dhcp: init: Using DHCP client 'internal'
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.1966] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.1988] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2001] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2011] device (lo): Activation: starting connection 'lo' (033a2694-84e1-4d41-84bd-0e9f1d5dbe29)
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2024] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2028] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2070] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2077] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2081] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2085] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2090] device (eth0): carrier: link connected
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2095] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2108] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2115] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2124] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2125] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2127] manager: NetworkManager state is now CONNECTING
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2129] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2139] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2144] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:17:12 np0005539564 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2197] dhcp4 (eth0): state changed new lease, address=38.102.83.162
Nov 29 01:17:12 np0005539564 systemd[1]: Started Network Manager.
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2209] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2230] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 01:17:12 np0005539564 systemd[1]: Reached target Network.
Nov 29 01:17:12 np0005539564 systemd[1]: Starting Network Manager Wait Online...
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2335] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2337] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2338] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2343] device (lo): Activation: successful, device activated.
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2348] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2352] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2356] device (eth0): Activation: successful, device activated.
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2361] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 01:17:12 np0005539564 NetworkManager[858]: <info>  [1764397032.2363] manager: startup complete
Nov 29 01:17:12 np0005539564 systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 29 01:17:12 np0005539564 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 01:17:12 np0005539564 systemd[1]: Started GSSAPI Proxy Daemon.
Nov 29 01:17:12 np0005539564 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 29 01:17:12 np0005539564 systemd[1]: Reached target NFS client services.
Nov 29 01:17:12 np0005539564 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 29 01:17:12 np0005539564 systemd[1]: Reached target Remote File Systems.
Nov 29 01:17:12 np0005539564 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 01:17:12 np0005539564 systemd[1]: Finished Network Manager Wait Online.
Nov 29 01:17:12 np0005539564 systemd[1]: Starting Cloud-init: Network Stage...
Nov 29 01:17:12 np0005539564 cloud-init[920]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 29 Nov 2025 06:17:12 +0000. Up 8.00 seconds.
Nov 29 01:17:12 np0005539564 cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 29 01:17:12 np0005539564 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 01:17:12 np0005539564 cloud-init[920]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 29 01:17:12 np0005539564 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 01:17:12 np0005539564 cloud-init[920]: ci-info: |  eth0  | True |        38.102.83.162         | 255.255.255.0 | global | fa:16:3e:cc:e0:8e |
Nov 29 01:17:12 np0005539564 cloud-init[920]: ci-info: |  eth0  | True | fe80::f816:3eff:fecc:e08e/64 |       .       |  link  | fa:16:3e:cc:e0:8e |
Nov 29 01:17:12 np0005539564 cloud-init[920]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 29 01:17:12 np0005539564 cloud-init[920]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 29 01:17:12 np0005539564 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 01:17:12 np0005539564 cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 29 01:17:12 np0005539564 cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 01:17:12 np0005539564 cloud-init[920]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 29 01:17:12 np0005539564 cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 01:17:12 np0005539564 cloud-init[920]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 29 01:17:12 np0005539564 cloud-init[920]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 29 01:17:12 np0005539564 cloud-init[920]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 29 01:17:12 np0005539564 cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 01:17:12 np0005539564 cloud-init[920]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 29 01:17:12 np0005539564 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 01:17:12 np0005539564 cloud-init[920]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 29 01:17:12 np0005539564 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 01:17:12 np0005539564 cloud-init[920]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 29 01:17:12 np0005539564 cloud-init[920]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 29 01:17:12 np0005539564 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 01:17:14 np0005539564 cloud-init[920]: Generating public/private rsa key pair.
Nov 29 01:17:14 np0005539564 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 29 01:17:14 np0005539564 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 29 01:17:14 np0005539564 cloud-init[920]: The key fingerprint is:
Nov 29 01:17:14 np0005539564 cloud-init[920]: SHA256:1gOI08MBA2xpn2m6IF+O1PmW8X3qVMSCxUTqr2/AY3w root@np0005539564.novalocal
Nov 29 01:17:14 np0005539564 cloud-init[920]: The key's randomart image is:
Nov 29 01:17:14 np0005539564 cloud-init[920]: +---[RSA 3072]----+
Nov 29 01:17:14 np0005539564 cloud-init[920]: | ..oo..  ++      |
Nov 29 01:17:14 np0005539564 cloud-init[920]: |  =  = o +..     |
Nov 29 01:17:14 np0005539564 cloud-init[920]: | o .oo= + . o    |
Nov 29 01:17:14 np0005539564 cloud-init[920]: |    =. o o o     |
Nov 29 01:17:14 np0005539564 cloud-init[920]: |   + . oS o .    |
Nov 29 01:17:14 np0005539564 cloud-init[920]: |o o + ..*.Eo     |
Nov 29 01:17:14 np0005539564 cloud-init[920]: |.+ = . = =o      |
Nov 29 01:17:14 np0005539564 cloud-init[920]: |  + . + .oo .    |
Nov 29 01:17:14 np0005539564 cloud-init[920]: |     .  .++o     |
Nov 29 01:17:14 np0005539564 cloud-init[920]: +----[SHA256]-----+
Nov 29 01:17:14 np0005539564 cloud-init[920]: Generating public/private ecdsa key pair.
Nov 29 01:17:14 np0005539564 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 29 01:17:14 np0005539564 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 29 01:17:14 np0005539564 cloud-init[920]: The key fingerprint is:
Nov 29 01:17:14 np0005539564 cloud-init[920]: SHA256:b4Ta1myCnH/BUuYrXLTcXPIabxi+OqJ5l4Jljvch+Ug root@np0005539564.novalocal
Nov 29 01:17:14 np0005539564 cloud-init[920]: The key's randomart image is:
Nov 29 01:17:14 np0005539564 cloud-init[920]: +---[ECDSA 256]---+
Nov 29 01:17:14 np0005539564 cloud-init[920]: |                 |
Nov 29 01:17:14 np0005539564 cloud-init[920]: |                 |
Nov 29 01:17:14 np0005539564 cloud-init[920]: |                 |
Nov 29 01:17:14 np0005539564 cloud-init[920]: |         .+ . .  |
Nov 29 01:17:14 np0005539564 cloud-init[920]: |        SB.+ +   |
Nov 29 01:17:14 np0005539564 cloud-init[920]: |     . =+*B = .  |
Nov 29 01:17:14 np0005539564 cloud-init[920]: |      =BEoB= *   |
Nov 29 01:17:14 np0005539564 cloud-init[920]: |      o=OB*.+ o  |
Nov 29 01:17:14 np0005539564 cloud-init[920]: |      o+oB+o.o   |
Nov 29 01:17:14 np0005539564 cloud-init[920]: +----[SHA256]-----+
Nov 29 01:17:14 np0005539564 cloud-init[920]: Generating public/private ed25519 key pair.
Nov 29 01:17:14 np0005539564 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 29 01:17:14 np0005539564 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 29 01:17:14 np0005539564 cloud-init[920]: The key fingerprint is:
Nov 29 01:17:14 np0005539564 cloud-init[920]: SHA256:NSbeZgbMZBZMXLDa2KkMTgnDRVFUIYz/G+3hekVvgMQ root@np0005539564.novalocal
Nov 29 01:17:14 np0005539564 cloud-init[920]: The key's randomart image is:
Nov 29 01:17:14 np0005539564 cloud-init[920]: +--[ED25519 256]--+
Nov 29 01:17:14 np0005539564 cloud-init[920]: |   .+*+=X*.      |
Nov 29 01:17:14 np0005539564 cloud-init[920]: | . .. .Bo.E      |
Nov 29 01:17:14 np0005539564 cloud-init[920]: |  +  .  *.+.     |
Nov 29 01:17:14 np0005539564 cloud-init[920]: |   o ..* B..o    |
Nov 29 01:17:14 np0005539564 cloud-init[920]: |    + o.S.=. o   |
Nov 29 01:17:14 np0005539564 cloud-init[920]: |   o o .o+o . o  |
Nov 29 01:17:14 np0005539564 cloud-init[920]: |    . o  = o .   |
Nov 29 01:17:14 np0005539564 cloud-init[920]: |        . +      |
Nov 29 01:17:14 np0005539564 cloud-init[920]: |        .o       |
Nov 29 01:17:14 np0005539564 cloud-init[920]: +----[SHA256]-----+
Nov 29 01:17:14 np0005539564 systemd[1]: Finished Cloud-init: Network Stage.
Nov 29 01:17:14 np0005539564 systemd[1]: Reached target Cloud-config availability.
Nov 29 01:17:14 np0005539564 systemd[1]: Reached target Network is Online.
Nov 29 01:17:14 np0005539564 systemd[1]: Starting Cloud-init: Config Stage...
Nov 29 01:17:14 np0005539564 systemd[1]: Starting Crash recovery kernel arming...
Nov 29 01:17:14 np0005539564 systemd[1]: Starting Notify NFS peers of a restart...
Nov 29 01:17:14 np0005539564 systemd[1]: Starting System Logging Service...
Nov 29 01:17:14 np0005539564 systemd[1]: Starting OpenSSH server daemon...
Nov 29 01:17:14 np0005539564 systemd[1]: Starting Permit User Sessions...
Nov 29 01:17:14 np0005539564 sm-notify[1002]: Version 2.5.4 starting
Nov 29 01:17:14 np0005539564 systemd[1]: Started Notify NFS peers of a restart.
Nov 29 01:17:14 np0005539564 systemd[1]: Finished Permit User Sessions.
Nov 29 01:17:14 np0005539564 systemd[1]: Started OpenSSH server daemon.
Nov 29 01:17:14 np0005539564 rsyslogd[1003]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1003" x-info="https://www.rsyslog.com"] start
Nov 29 01:17:14 np0005539564 rsyslogd[1003]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 29 01:17:14 np0005539564 systemd[1]: Started Command Scheduler.
Nov 29 01:17:14 np0005539564 systemd[1]: Started Getty on tty1.
Nov 29 01:17:14 np0005539564 systemd[1]: Started Serial Getty on ttyS0.
Nov 29 01:17:14 np0005539564 systemd[1]: Reached target Login Prompts.
Nov 29 01:17:14 np0005539564 systemd[1]: Started System Logging Service.
Nov 29 01:17:14 np0005539564 systemd[1]: Reached target Multi-User System.
Nov 29 01:17:14 np0005539564 systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 29 01:17:14 np0005539564 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 29 01:17:14 np0005539564 systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 29 01:17:14 np0005539564 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 01:17:14 np0005539564 kdumpctl[1016]: kdump: No kdump initial ramdisk found.
Nov 29 01:17:14 np0005539564 kdumpctl[1016]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 29 01:17:14 np0005539564 cloud-init[1131]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 29 Nov 2025 06:17:14 +0000. Up 9.79 seconds.
Nov 29 01:17:14 np0005539564 systemd[1]: Finished Cloud-init: Config Stage.
Nov 29 01:17:14 np0005539564 systemd[1]: Starting Cloud-init: Final Stage...
Nov 29 01:17:14 np0005539564 dracut[1264]: dracut-057-102.git20250818.el9
Nov 29 01:17:14 np0005539564 dracut[1266]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Nov 29 01:17:14 np0005539564 cloud-init[1294]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 29 Nov 2025 06:17:14 +0000. Up 10.27 seconds.
Nov 29 01:17:15 np0005539564 cloud-init[1336]: #############################################################
Nov 29 01:17:15 np0005539564 cloud-init[1340]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 29 01:17:15 np0005539564 cloud-init[1348]: 256 SHA256:b4Ta1myCnH/BUuYrXLTcXPIabxi+OqJ5l4Jljvch+Ug root@np0005539564.novalocal (ECDSA)
Nov 29 01:17:15 np0005539564 cloud-init[1351]: 256 SHA256:NSbeZgbMZBZMXLDa2KkMTgnDRVFUIYz/G+3hekVvgMQ root@np0005539564.novalocal (ED25519)
Nov 29 01:17:15 np0005539564 cloud-init[1355]: 3072 SHA256:1gOI08MBA2xpn2m6IF+O1PmW8X3qVMSCxUTqr2/AY3w root@np0005539564.novalocal (RSA)
Nov 29 01:17:15 np0005539564 cloud-init[1356]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 29 01:17:15 np0005539564 cloud-init[1357]: #############################################################
Nov 29 01:17:15 np0005539564 cloud-init[1294]: Cloud-init v. 24.4-7.el9 finished at Sat, 29 Nov 2025 06:17:15 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.46 seconds
Nov 29 01:17:15 np0005539564 systemd[1]: Finished Cloud-init: Final Stage.
Nov 29 01:17:15 np0005539564 systemd[1]: Reached target Cloud-init target.
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 29 01:17:15 np0005539564 dracut[1266]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 29 01:17:15 np0005539564 dracut[1266]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 29 01:17:16 np0005539564 dracut[1266]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 29 01:17:16 np0005539564 dracut[1266]: memstrack is not available
Nov 29 01:17:16 np0005539564 dracut[1266]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 29 01:17:16 np0005539564 dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 29 01:17:16 np0005539564 dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 29 01:17:16 np0005539564 dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 29 01:17:16 np0005539564 dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 29 01:17:16 np0005539564 dracut[1266]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 29 01:17:16 np0005539564 dracut[1266]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 29 01:17:16 np0005539564 dracut[1266]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 29 01:17:16 np0005539564 dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 29 01:17:16 np0005539564 dracut[1266]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 29 01:17:16 np0005539564 dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 29 01:17:16 np0005539564 dracut[1266]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 29 01:17:16 np0005539564 dracut[1266]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 29 01:17:16 np0005539564 dracut[1266]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 29 01:17:16 np0005539564 dracut[1266]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 29 01:17:16 np0005539564 dracut[1266]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 29 01:17:16 np0005539564 dracut[1266]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 29 01:17:16 np0005539564 dracut[1266]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 29 01:17:16 np0005539564 dracut[1266]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 29 01:17:16 np0005539564 dracut[1266]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 29 01:17:16 np0005539564 dracut[1266]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 29 01:17:16 np0005539564 dracut[1266]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 29 01:17:16 np0005539564 dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 29 01:17:16 np0005539564 dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 29 01:17:16 np0005539564 dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 29 01:17:16 np0005539564 dracut[1266]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 29 01:17:16 np0005539564 dracut[1266]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 29 01:17:16 np0005539564 dracut[1266]: memstrack is not available
Nov 29 01:17:16 np0005539564 dracut[1266]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 29 01:17:16 np0005539564 dracut[1266]: *** Including module: systemd ***
Nov 29 01:17:16 np0005539564 dracut[1266]: *** Including module: fips ***
Nov 29 01:17:17 np0005539564 dracut[1266]: *** Including module: systemd-initrd ***
Nov 29 01:17:17 np0005539564 dracut[1266]: *** Including module: i18n ***
Nov 29 01:17:17 np0005539564 dracut[1266]: *** Including module: drm ***
Nov 29 01:17:17 np0005539564 chronyd[794]: Selected source 167.160.187.12 (2.centos.pool.ntp.org)
Nov 29 01:17:17 np0005539564 chronyd[794]: System clock TAI offset set to 37 seconds
Nov 29 01:17:17 np0005539564 dracut[1266]: *** Including module: prefixdevname ***
Nov 29 01:17:17 np0005539564 dracut[1266]: *** Including module: kernel-modules ***
Nov 29 01:17:17 np0005539564 kernel: block vda: the capability attribute has been deprecated.
Nov 29 01:17:18 np0005539564 dracut[1266]: *** Including module: kernel-modules-extra ***
Nov 29 01:17:18 np0005539564 dracut[1266]: *** Including module: qemu ***
Nov 29 01:17:18 np0005539564 dracut[1266]: *** Including module: fstab-sys ***
Nov 29 01:17:18 np0005539564 dracut[1266]: *** Including module: rootfs-block ***
Nov 29 01:17:18 np0005539564 dracut[1266]: *** Including module: terminfo ***
Nov 29 01:17:18 np0005539564 dracut[1266]: *** Including module: udev-rules ***
Nov 29 01:17:18 np0005539564 dracut[1266]: Skipping udev rule: 91-permissions.rules
Nov 29 01:17:18 np0005539564 dracut[1266]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 29 01:17:18 np0005539564 dracut[1266]: *** Including module: virtiofs ***
Nov 29 01:17:19 np0005539564 dracut[1266]: *** Including module: dracut-systemd ***
Nov 29 01:17:19 np0005539564 dracut[1266]: *** Including module: usrmount ***
Nov 29 01:17:19 np0005539564 dracut[1266]: *** Including module: base ***
Nov 29 01:17:19 np0005539564 dracut[1266]: *** Including module: fs-lib ***
Nov 29 01:17:19 np0005539564 dracut[1266]: *** Including module: kdumpbase ***
Nov 29 01:17:19 np0005539564 dracut[1266]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 29 01:17:19 np0005539564 dracut[1266]:  microcode_ctl module: mangling fw_dir
Nov 29 01:17:19 np0005539564 dracut[1266]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 29 01:17:19 np0005539564 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 29 01:17:19 np0005539564 dracut[1266]:    microcode_ctl: configuration "intel" is ignored
Nov 29 01:17:19 np0005539564 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 29 01:17:19 np0005539564 dracut[1266]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 29 01:17:19 np0005539564 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 29 01:17:19 np0005539564 dracut[1266]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 29 01:17:19 np0005539564 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 29 01:17:20 np0005539564 dracut[1266]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 29 01:17:20 np0005539564 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 29 01:17:20 np0005539564 dracut[1266]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 29 01:17:20 np0005539564 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 29 01:17:20 np0005539564 dracut[1266]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 29 01:17:20 np0005539564 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 29 01:17:20 np0005539564 dracut[1266]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 29 01:17:20 np0005539564 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 29 01:17:20 np0005539564 dracut[1266]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 29 01:17:20 np0005539564 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 29 01:17:20 np0005539564 dracut[1266]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 29 01:17:20 np0005539564 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 29 01:17:20 np0005539564 dracut[1266]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 29 01:17:20 np0005539564 dracut[1266]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 29 01:17:20 np0005539564 dracut[1266]: *** Including module: openssl ***
Nov 29 01:17:20 np0005539564 dracut[1266]: *** Including module: shutdown ***
Nov 29 01:17:20 np0005539564 dracut[1266]: *** Including module: squash ***
Nov 29 01:17:20 np0005539564 dracut[1266]: *** Including modules done ***
Nov 29 01:17:20 np0005539564 dracut[1266]: *** Installing kernel module dependencies ***
Nov 29 01:17:21 np0005539564 dracut[1266]: *** Installing kernel module dependencies done ***
Nov 29 01:17:21 np0005539564 dracut[1266]: *** Resolving executable dependencies ***
Nov 29 01:17:21 np0005539564 irqbalance[783]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 29 01:17:21 np0005539564 irqbalance[783]: IRQ 25 affinity is now unmanaged
Nov 29 01:17:21 np0005539564 irqbalance[783]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 29 01:17:21 np0005539564 irqbalance[783]: IRQ 31 affinity is now unmanaged
Nov 29 01:17:21 np0005539564 irqbalance[783]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 29 01:17:21 np0005539564 irqbalance[783]: IRQ 28 affinity is now unmanaged
Nov 29 01:17:21 np0005539564 irqbalance[783]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 29 01:17:21 np0005539564 irqbalance[783]: IRQ 32 affinity is now unmanaged
Nov 29 01:17:21 np0005539564 irqbalance[783]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 29 01:17:21 np0005539564 irqbalance[783]: IRQ 30 affinity is now unmanaged
Nov 29 01:17:21 np0005539564 irqbalance[783]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 29 01:17:21 np0005539564 irqbalance[783]: IRQ 29 affinity is now unmanaged
Nov 29 01:17:22 np0005539564 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 01:17:22 np0005539564 dracut[1266]: *** Resolving executable dependencies done ***
Nov 29 01:17:22 np0005539564 dracut[1266]: *** Generating early-microcode cpio image ***
Nov 29 01:17:22 np0005539564 dracut[1266]: *** Store current command line parameters ***
Nov 29 01:17:22 np0005539564 dracut[1266]: Stored kernel commandline:
Nov 29 01:17:22 np0005539564 dracut[1266]: No dracut internal kernel commandline stored in the initramfs
Nov 29 01:17:22 np0005539564 dracut[1266]: *** Install squash loader ***
Nov 29 01:17:23 np0005539564 dracut[1266]: *** Squashing the files inside the initramfs ***
Nov 29 01:17:25 np0005539564 dracut[1266]: *** Squashing the files inside the initramfs done ***
Nov 29 01:17:25 np0005539564 dracut[1266]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 29 01:17:25 np0005539564 dracut[1266]: *** Hardlinking files ***
Nov 29 01:17:25 np0005539564 dracut[1266]: *** Hardlinking files done ***
Nov 29 01:17:25 np0005539564 dracut[1266]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Nov 29 01:17:25 np0005539564 kdumpctl[1016]: kdump: kexec: loaded kdump kernel
Nov 29 01:17:25 np0005539564 kdumpctl[1016]: kdump: Starting kdump: [OK]
Nov 29 01:17:25 np0005539564 systemd[1]: Finished Crash recovery kernel arming.
Nov 29 01:17:25 np0005539564 systemd[1]: Startup finished in 2.308s (kernel) + 2.814s (initrd) + 16.187s (userspace) = 21.310s.
Nov 29 01:17:42 np0005539564 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 01:17:44 np0005539564 systemd[1]: Created slice User Slice of UID 1000.
Nov 29 01:17:44 np0005539564 systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 29 01:17:44 np0005539564 systemd-logind[785]: New session 1 of user zuul.
Nov 29 01:17:44 np0005539564 systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 29 01:17:44 np0005539564 systemd[1]: Starting User Manager for UID 1000...
Nov 29 01:17:45 np0005539564 systemd[4300]: Queued start job for default target Main User Target.
Nov 29 01:17:45 np0005539564 systemd[4300]: Created slice User Application Slice.
Nov 29 01:17:45 np0005539564 systemd[4300]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 01:17:45 np0005539564 systemd[4300]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 01:17:45 np0005539564 systemd[4300]: Reached target Paths.
Nov 29 01:17:45 np0005539564 systemd[4300]: Reached target Timers.
Nov 29 01:17:45 np0005539564 systemd[4300]: Starting D-Bus User Message Bus Socket...
Nov 29 01:17:45 np0005539564 systemd[4300]: Starting Create User's Volatile Files and Directories...
Nov 29 01:17:45 np0005539564 systemd[4300]: Finished Create User's Volatile Files and Directories.
Nov 29 01:17:45 np0005539564 systemd[4300]: Listening on D-Bus User Message Bus Socket.
Nov 29 01:17:45 np0005539564 systemd[4300]: Reached target Sockets.
Nov 29 01:17:45 np0005539564 systemd[4300]: Reached target Basic System.
Nov 29 01:17:45 np0005539564 systemd[4300]: Reached target Main User Target.
Nov 29 01:17:45 np0005539564 systemd[4300]: Startup finished in 126ms.
Nov 29 01:17:45 np0005539564 systemd[1]: Started User Manager for UID 1000.
Nov 29 01:17:45 np0005539564 systemd[1]: Started Session 1 of User zuul.
Nov 29 01:17:45 np0005539564 python3[4383]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:17:50 np0005539564 python3[4411]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:17:57 np0005539564 python3[4469]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:17:59 np0005539564 python3[4509]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 29 01:18:01 np0005539564 python3[4535]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDO009q8XvhgDCH/CYntn/Nj7apUjGycgerKyxcYKwqlrsQqtgZ+4b1AwoiDJ6ACRb/89P698Zu8SgdnR/v9pn0LFMXEa2g1lWeFaQovDGpqBz4mYtyZIbvWOJAPw3VQm6HJnXakvw8LrVDql95W2i6anqAeBFXq/hs4EAkNzhNR4pua8lJHwAgkexNQ+7fdWwTNsd+E5A23VTA0NzgPyGjZyo5PcuqueNFdk/JaekH4GB/BVWyh0KIH6JnPu98++RaPl1C8BRj9wWE/zvooiZsXPQCOfW1oql3StPekqBwJti2jRygs685e4eHPE+tO1VzwfPTyXZfQAe9dOPlZsWdnKtIw5H/2tajn7DELzA77VUbsuA1U+jNJ9sE0PwaWj6JsBqDB9tBbb31S7B12ZvrS250Qc0Q/c4Qv/WdSE87jti5CrwLfsjPX2DOo37gqMfu2EB90zV1L+h9vMlmkg3g8rOzpQK5jspXBfUIO2Pq0Nyyj9IORN7HLSKyZmK+teE= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:01 np0005539564 python3[4559]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:02 np0005539564 python3[4658]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:18:02 np0005539564 python3[4729]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397081.8927717-252-94435419943172/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=6e6c58d2ce3447e2bcc44a9308b07ccb_id_rsa follow=False checksum=d281ebb5f24e5d8783693a36170923a3c25cbd23 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:03 np0005539564 python3[4852]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:18:03 np0005539564 python3[4923]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397082.8714988-307-154374643241875/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=6e6c58d2ce3447e2bcc44a9308b07ccb_id_rsa.pub follow=False checksum=96e05a798ef30e23c5626e997638db7097fc90b9 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:04 np0005539564 python3[4971]: ansible-ping Invoked with data=pong
Nov 29 01:18:05 np0005539564 python3[4995]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:18:08 np0005539564 python3[5053]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 29 01:18:09 np0005539564 python3[5085]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:09 np0005539564 python3[5109]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:09 np0005539564 python3[5133]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:10 np0005539564 python3[5157]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:10 np0005539564 python3[5181]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:10 np0005539564 python3[5205]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:12 np0005539564 python3[5231]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:13 np0005539564 python3[5309]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:18:13 np0005539564 python3[5382]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397092.7519863-32-262882622974659/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:14 np0005539564 python3[5430]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:14 np0005539564 python3[5454]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:15 np0005539564 python3[5478]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:15 np0005539564 python3[5502]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:15 np0005539564 python3[5526]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:15 np0005539564 python3[5550]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:16 np0005539564 python3[5574]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:16 np0005539564 python3[5598]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:16 np0005539564 python3[5622]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:17 np0005539564 python3[5646]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:17 np0005539564 python3[5670]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:17 np0005539564 python3[5694]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:17 np0005539564 python3[5718]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:18 np0005539564 python3[5742]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:18 np0005539564 python3[5766]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:18 np0005539564 python3[5790]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:19 np0005539564 python3[5814]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:19 np0005539564 python3[5838]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:19 np0005539564 python3[5862]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:19 np0005539564 python3[5886]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:20 np0005539564 python3[5910]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:20 np0005539564 python3[5934]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:20 np0005539564 python3[5958]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:21 np0005539564 python3[5982]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:21 np0005539564 python3[6006]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:21 np0005539564 python3[6030]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:23 np0005539564 chronyd[794]: Selected source 174.138.193.90 (2.centos.pool.ntp.org)
Nov 29 01:18:24 np0005539564 python3[6056]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 01:18:24 np0005539564 systemd[1]: Starting Time & Date Service...
Nov 29 01:18:24 np0005539564 systemd[1]: Started Time & Date Service.
Nov 29 01:18:24 np0005539564 systemd-timedated[6058]: Changed time zone to 'UTC' (UTC).
Nov 29 01:18:26 np0005539564 python3[6087]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:26 np0005539564 python3[6163]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:18:27 np0005539564 python3[6234]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764397106.3970773-252-187765963353137/source _original_basename=tmpl4fednwd follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:27 np0005539564 python3[6334]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:18:27 np0005539564 python3[6405]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764397107.3086145-303-42933274404191/source _original_basename=tmpas1_esod follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:28 np0005539564 python3[6507]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:18:29 np0005539564 python3[6580]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764397108.476171-382-179943863995383/source _original_basename=tmp37oksrvx follow=False checksum=855cc62dd01fe7364cfd0c3455e65e7a945e3fa8 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:29 np0005539564 python3[6628]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:18:30 np0005539564 python3[6654]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:18:30 np0005539564 python3[6734]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:18:30 np0005539564 python3[6807]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397110.274581-452-166128366030075/source _original_basename=tmpajtiu99i follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:31 np0005539564 python3[6858]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-beaa-304e-00000000001f-1-compute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:18:32 np0005539564 python3[6885]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-beaa-304e-000000000020-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 29 01:18:33 np0005539564 python3[6914]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:51 np0005539564 python3[6940]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:54 np0005539564 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 01:19:51 np0005539564 systemd-logind[785]: Session 1 logged out. Waiting for processes to exit.
Nov 29 01:20:00 np0005539564 systemd[4300]: Starting Mark boot as successful...
Nov 29 01:20:00 np0005539564 systemd[4300]: Finished Mark boot as successful.
Nov 29 01:20:02 np0005539564 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 29 01:20:02 np0005539564 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 29 01:20:02 np0005539564 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 29 01:20:02 np0005539564 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 29 01:20:02 np0005539564 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 29 01:20:02 np0005539564 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 29 01:20:02 np0005539564 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 29 01:20:02 np0005539564 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 29 01:20:02 np0005539564 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 29 01:20:02 np0005539564 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 29 01:20:02 np0005539564 NetworkManager[858]: <info>  [1764397202.8105] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 01:20:02 np0005539564 systemd-udevd[6944]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 01:20:02 np0005539564 NetworkManager[858]: <info>  [1764397202.8270] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 01:20:02 np0005539564 NetworkManager[858]: <info>  [1764397202.8298] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 29 01:20:02 np0005539564 NetworkManager[858]: <info>  [1764397202.8301] device (eth1): carrier: link connected
Nov 29 01:20:02 np0005539564 NetworkManager[858]: <info>  [1764397202.8303] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 29 01:20:02 np0005539564 NetworkManager[858]: <info>  [1764397202.8308] policy: auto-activating connection 'Wired connection 1' (a06317bf-79e9-3ad3-bec1-72ebec07c777)
Nov 29 01:20:02 np0005539564 NetworkManager[858]: <info>  [1764397202.8312] device (eth1): Activation: starting connection 'Wired connection 1' (a06317bf-79e9-3ad3-bec1-72ebec07c777)
Nov 29 01:20:02 np0005539564 NetworkManager[858]: <info>  [1764397202.8312] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:20:02 np0005539564 NetworkManager[858]: <info>  [1764397202.8315] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:20:02 np0005539564 NetworkManager[858]: <info>  [1764397202.8318] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:20:02 np0005539564 NetworkManager[858]: <info>  [1764397202.8322] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:20:03 np0005539564 systemd-logind[785]: New session 3 of user zuul.
Nov 29 01:20:03 np0005539564 systemd[1]: Started Session 3 of User zuul.
Nov 29 01:20:04 np0005539564 python3[6975]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-957e-49b2-00000000018f-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:20:14 np0005539564 python3[7056]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:20:14 np0005539564 python3[7129]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397213.78764-155-23459291548018/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=d7469f3f9e73ff1f19a8fe47dd10654e63db7ac4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:20:15 np0005539564 python3[7179]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 01:20:15 np0005539564 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 29 01:20:15 np0005539564 systemd[1]: Stopped Network Manager Wait Online.
Nov 29 01:20:15 np0005539564 systemd[1]: Stopping Network Manager Wait Online...
Nov 29 01:20:15 np0005539564 systemd[1]: Stopping Network Manager...
Nov 29 01:20:15 np0005539564 NetworkManager[858]: <info>  [1764397215.1698] caught SIGTERM, shutting down normally.
Nov 29 01:20:15 np0005539564 NetworkManager[858]: <info>  [1764397215.1711] dhcp4 (eth0): canceled DHCP transaction
Nov 29 01:20:15 np0005539564 NetworkManager[858]: <info>  [1764397215.1712] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:20:15 np0005539564 NetworkManager[858]: <info>  [1764397215.1712] dhcp4 (eth0): state changed no lease
Nov 29 01:20:15 np0005539564 NetworkManager[858]: <info>  [1764397215.1715] manager: NetworkManager state is now CONNECTING
Nov 29 01:20:15 np0005539564 NetworkManager[858]: <info>  [1764397215.1869] dhcp4 (eth1): canceled DHCP transaction
Nov 29 01:20:15 np0005539564 NetworkManager[858]: <info>  [1764397215.1870] dhcp4 (eth1): state changed no lease
Nov 29 01:20:15 np0005539564 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 01:20:15 np0005539564 NetworkManager[858]: <info>  [1764397215.1963] exiting (success)
Nov 29 01:20:15 np0005539564 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 01:20:15 np0005539564 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 29 01:20:15 np0005539564 systemd[1]: Stopped Network Manager.
Nov 29 01:20:15 np0005539564 systemd[1]: NetworkManager.service: Consumed 1.438s CPU time, 10.0M memory peak.
Nov 29 01:20:15 np0005539564 systemd[1]: Starting Network Manager...
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.2580] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:e076cd5a-06db-46a2-b985-67d3134914f7)
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.2583] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.2653] manager[0x557734b29070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 01:20:15 np0005539564 systemd[1]: Starting Hostname Service...
Nov 29 01:20:15 np0005539564 systemd[1]: Started Hostname Service.
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.3912] hostname: hostname: using hostnamed
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.3912] hostname: static hostname changed from (none) to "np0005539564.novalocal"
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.3918] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.3924] manager[0x557734b29070]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.3925] manager[0x557734b29070]: rfkill: WWAN hardware radio set enabled
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.3951] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.3951] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.3952] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.3953] manager: Networking is enabled by state file
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.3956] settings: Loaded settings plugin: keyfile (internal)
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.3959] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.3986] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4002] dhcp: init: Using DHCP client 'internal'
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4007] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4020] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4033] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4047] device (lo): Activation: starting connection 'lo' (033a2694-84e1-4d41-84bd-0e9f1d5dbe29)
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4063] device (eth0): carrier: link connected
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4070] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4081] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4082] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4097] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4111] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4122] device (eth1): carrier: link connected
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4129] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4140] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (a06317bf-79e9-3ad3-bec1-72ebec07c777) (indicated)
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4140] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4152] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4165] device (eth1): Activation: starting connection 'Wired connection 1' (a06317bf-79e9-3ad3-bec1-72ebec07c777)
Nov 29 01:20:15 np0005539564 systemd[1]: Started Network Manager.
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4175] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4183] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4190] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4195] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4202] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4215] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4218] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4225] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4235] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4265] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4272] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4290] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4298] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4326] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4333] dhcp4 (eth0): state changed new lease, address=38.102.83.162
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4342] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 01:20:15 np0005539564 systemd[1]: Starting Network Manager Wait Online...
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4365] device (lo): Activation: successful, device activated.
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4391] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4510] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4556] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4559] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4564] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4569] device (eth0): Activation: successful, device activated.
Nov 29 01:20:15 np0005539564 NetworkManager[7189]: <info>  [1764397215.4575] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 01:20:15 np0005539564 python3[7263]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-957e-49b2-0000000000c8-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:20:25 np0005539564 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 01:20:45 np0005539564 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 01:21:00 np0005539564 NetworkManager[7189]: <info>  [1764397260.6133] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 01:21:00 np0005539564 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 01:21:00 np0005539564 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 01:21:00 np0005539564 NetworkManager[7189]: <info>  [1764397260.6383] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 01:21:00 np0005539564 NetworkManager[7189]: <info>  [1764397260.6386] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 01:21:00 np0005539564 NetworkManager[7189]: <info>  [1764397260.6396] device (eth1): Activation: successful, device activated.
Nov 29 01:21:00 np0005539564 NetworkManager[7189]: <info>  [1764397260.6405] manager: startup complete
Nov 29 01:21:00 np0005539564 NetworkManager[7189]: <info>  [1764397260.6407] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 29 01:21:00 np0005539564 NetworkManager[7189]: <warn>  [1764397260.6419] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 29 01:21:00 np0005539564 NetworkManager[7189]: <info>  [1764397260.6432] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 29 01:21:00 np0005539564 systemd[1]: Finished Network Manager Wait Online.
Nov 29 01:21:00 np0005539564 NetworkManager[7189]: <info>  [1764397260.6579] dhcp4 (eth1): canceled DHCP transaction
Nov 29 01:21:00 np0005539564 NetworkManager[7189]: <info>  [1764397260.6579] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:21:00 np0005539564 NetworkManager[7189]: <info>  [1764397260.6580] dhcp4 (eth1): state changed no lease
Nov 29 01:21:00 np0005539564 NetworkManager[7189]: <info>  [1764397260.6598] policy: auto-activating connection 'ci-private-network' (f5db7348-8863-5268-8258-6f6fd4d307ec)
Nov 29 01:21:00 np0005539564 NetworkManager[7189]: <info>  [1764397260.6603] device (eth1): Activation: starting connection 'ci-private-network' (f5db7348-8863-5268-8258-6f6fd4d307ec)
Nov 29 01:21:00 np0005539564 NetworkManager[7189]: <info>  [1764397260.6605] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:21:00 np0005539564 NetworkManager[7189]: <info>  [1764397260.6609] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:21:00 np0005539564 NetworkManager[7189]: <info>  [1764397260.6616] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:21:00 np0005539564 NetworkManager[7189]: <info>  [1764397260.6625] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 01:21:00 np0005539564 NetworkManager[7189]: <info>  [1764397260.6671] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 01:21:00 np0005539564 NetworkManager[7189]: <info>  [1764397260.6673] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 01:21:00 np0005539564 NetworkManager[7189]: <info>  [1764397260.6682] device (eth1): Activation: successful, device activated.
Nov 29 01:21:10 np0005539564 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 01:21:15 np0005539564 systemd[1]: session-3.scope: Deactivated successfully.
Nov 29 01:21:15 np0005539564 systemd[1]: session-3.scope: Consumed 1.767s CPU time.
Nov 29 01:21:15 np0005539564 systemd-logind[785]: Session 3 logged out. Waiting for processes to exit.
Nov 29 01:21:15 np0005539564 systemd-logind[785]: Removed session 3.
Nov 29 01:21:56 np0005539564 systemd-logind[785]: New session 4 of user zuul.
Nov 29 01:21:56 np0005539564 systemd[1]: Started Session 4 of User zuul.
Nov 29 01:21:56 np0005539564 python3[7374]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:21:57 np0005539564 python3[7447]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397316.641476-373-151434179207540/source _original_basename=tmplocx_ddl follow=False checksum=c97a8eb9e2d79dc37ef10c85c5553d4edb376b92 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:22:00 np0005539564 systemd[1]: session-4.scope: Deactivated successfully.
Nov 29 01:22:00 np0005539564 systemd-logind[785]: Session 4 logged out. Waiting for processes to exit.
Nov 29 01:22:00 np0005539564 systemd-logind[785]: Removed session 4.
Nov 29 01:23:00 np0005539564 systemd[4300]: Created slice User Background Tasks Slice.
Nov 29 01:23:00 np0005539564 systemd[4300]: Starting Cleanup of User's Temporary Files and Directories...
Nov 29 01:23:00 np0005539564 systemd[4300]: Finished Cleanup of User's Temporary Files and Directories.
Nov 29 01:29:00 np0005539564 systemd-logind[785]: New session 5 of user zuul.
Nov 29 01:29:00 np0005539564 systemd[1]: Started Session 5 of User zuul.
Nov 29 01:29:00 np0005539564 python3[7508]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-534d-d776-000000000ca8-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:29:01 np0005539564 python3[7537]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:29:01 np0005539564 python3[7564]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:29:01 np0005539564 python3[7590]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:29:02 np0005539564 python3[7616]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:29:02 np0005539564 python3[7642]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:29:03 np0005539564 python3[7720]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:29:04 np0005539564 python3[7793]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397743.412558-369-153155473994802/source _original_basename=tmppavhlwwx follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:29:05 np0005539564 python3[7843]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 01:29:05 np0005539564 systemd[1]: Reloading.
Nov 29 01:29:05 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:29:44 np0005539564 python3[7898]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 29 01:29:46 np0005539564 python3[7924]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:29:47 np0005539564 python3[7952]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:29:47 np0005539564 python3[7980]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:29:47 np0005539564 python3[8008]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:29:48 np0005539564 python3[8035]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-534d-d776-000000000caf-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:29:48 np0005539564 python3[8065]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 01:29:51 np0005539564 systemd[1]: session-5.scope: Deactivated successfully.
Nov 29 01:29:51 np0005539564 systemd[1]: session-5.scope: Consumed 4.283s CPU time.
Nov 29 01:29:51 np0005539564 systemd-logind[785]: Session 5 logged out. Waiting for processes to exit.
Nov 29 01:29:51 np0005539564 systemd-logind[785]: Removed session 5.
Nov 29 01:29:53 np0005539564 systemd-logind[785]: New session 6 of user zuul.
Nov 29 01:29:53 np0005539564 systemd[1]: Started Session 6 of User zuul.
Nov 29 01:29:53 np0005539564 python3[8099]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 01:30:19 np0005539564 kernel: SELinux:  Converting 385 SID table entries...
Nov 29 01:30:19 np0005539564 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:30:19 np0005539564 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:30:19 np0005539564 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:30:19 np0005539564 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:30:19 np0005539564 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:30:19 np0005539564 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:30:19 np0005539564 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:30:28 np0005539564 kernel: SELinux:  Converting 385 SID table entries...
Nov 29 01:30:28 np0005539564 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:30:28 np0005539564 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:30:28 np0005539564 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:30:28 np0005539564 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:30:28 np0005539564 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:30:28 np0005539564 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:30:28 np0005539564 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:30:37 np0005539564 kernel: SELinux:  Converting 385 SID table entries...
Nov 29 01:30:37 np0005539564 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:30:37 np0005539564 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:30:37 np0005539564 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:30:37 np0005539564 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:30:37 np0005539564 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:30:37 np0005539564 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:30:37 np0005539564 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:30:39 np0005539564 setsebool[8166]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 29 01:30:39 np0005539564 setsebool[8166]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 29 01:30:51 np0005539564 kernel: SELinux:  Converting 388 SID table entries...
Nov 29 01:30:51 np0005539564 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:30:51 np0005539564 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:30:51 np0005539564 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:30:51 np0005539564 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:30:51 np0005539564 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:30:51 np0005539564 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:30:51 np0005539564 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:31:10 np0005539564 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 29 01:31:10 np0005539564 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 01:31:10 np0005539564 systemd[1]: Starting man-db-cache-update.service...
Nov 29 01:31:10 np0005539564 systemd[1]: Reloading.
Nov 29 01:31:10 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:31:11 np0005539564 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 01:31:14 np0005539564 python3[11458]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-59ff-55d2-00000000000c-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:31:15 np0005539564 kernel: evm: overlay not supported
Nov 29 01:31:15 np0005539564 systemd[4300]: Starting D-Bus User Message Bus...
Nov 29 01:31:15 np0005539564 dbus-broker-launch[12190]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 29 01:31:15 np0005539564 dbus-broker-launch[12190]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 29 01:31:15 np0005539564 systemd[4300]: Started D-Bus User Message Bus.
Nov 29 01:31:15 np0005539564 dbus-broker-lau[12190]: Ready
Nov 29 01:31:15 np0005539564 systemd[4300]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 29 01:31:15 np0005539564 systemd[4300]: Created slice Slice /user.
Nov 29 01:31:15 np0005539564 systemd[4300]: podman-12056.scope: unit configures an IP firewall, but not running as root.
Nov 29 01:31:15 np0005539564 systemd[4300]: (This warning is only shown for the first unit using IP firewalling.)
Nov 29 01:31:15 np0005539564 systemd[4300]: Started podman-12056.scope.
Nov 29 01:31:15 np0005539564 systemd[4300]: Started podman-pause-2f4dc874.scope.
Nov 29 01:31:16 np0005539564 python3[12863]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.39:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.39:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:31:16 np0005539564 python3[12863]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 29 01:31:16 np0005539564 systemd[1]: session-6.scope: Deactivated successfully.
Nov 29 01:31:16 np0005539564 systemd[1]: session-6.scope: Consumed 1min 3.229s CPU time.
Nov 29 01:31:16 np0005539564 systemd-logind[785]: Session 6 logged out. Waiting for processes to exit.
Nov 29 01:31:16 np0005539564 systemd-logind[785]: Removed session 6.
Nov 29 01:31:21 np0005539564 irqbalance[783]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 29 01:31:21 np0005539564 irqbalance[783]: IRQ 27 affinity is now unmanaged
Nov 29 01:31:40 np0005539564 systemd-logind[785]: New session 7 of user zuul.
Nov 29 01:31:40 np0005539564 systemd[1]: Started Session 7 of User zuul.
Nov 29 01:31:41 np0005539564 python3[22440]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBERfnCyU4nTYUWmFTAniOJOUOEv7Xw4lXfUipogpfAF7ccSxhjTd6NQ6pvVg1ljPzhdmgBHnd+DTf/btXVVCAJo= zuul@np0005539562.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:31:41 np0005539564 python3[22623]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBERfnCyU4nTYUWmFTAniOJOUOEv7Xw4lXfUipogpfAF7ccSxhjTd6NQ6pvVg1ljPzhdmgBHnd+DTf/btXVVCAJo= zuul@np0005539562.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:31:42 np0005539564 python3[22974]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005539564.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 29 01:31:42 np0005539564 python3[23142]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBERfnCyU4nTYUWmFTAniOJOUOEv7Xw4lXfUipogpfAF7ccSxhjTd6NQ6pvVg1ljPzhdmgBHnd+DTf/btXVVCAJo= zuul@np0005539562.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:31:43 np0005539564 python3[23416]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:31:43 np0005539564 python3[23670]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397903.0192404-168-58182223455525/source _original_basename=tmpb3yqxi6g follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:31:44 np0005539564 python3[24076]: ansible-ansible.builtin.hostname Invoked with name=compute-1 use=systemd
Nov 29 01:31:44 np0005539564 systemd[1]: Starting Hostname Service...
Nov 29 01:31:44 np0005539564 systemd[1]: Started Hostname Service.
Nov 29 01:31:44 np0005539564 systemd-hostnamed[24112]: Changed pretty hostname to 'compute-1'
Nov 29 01:31:44 np0005539564 systemd-hostnamed[24112]: Hostname set to <compute-1> (static)
Nov 29 01:31:44 np0005539564 NetworkManager[7189]: <info>  [1764397904.8407] hostname: static hostname changed from "np0005539564.novalocal" to "compute-1"
Nov 29 01:31:44 np0005539564 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 01:31:44 np0005539564 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 01:31:45 np0005539564 systemd[1]: session-7.scope: Deactivated successfully.
Nov 29 01:31:45 np0005539564 systemd[1]: session-7.scope: Consumed 2.305s CPU time.
Nov 29 01:31:45 np0005539564 systemd-logind[785]: Session 7 logged out. Waiting for processes to exit.
Nov 29 01:31:45 np0005539564 systemd-logind[785]: Removed session 7.
Nov 29 01:31:54 np0005539564 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 01:32:04 np0005539564 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 29 01:32:04 np0005539564 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 01:32:04 np0005539564 systemd[1]: Finished man-db-cache-update.service.
Nov 29 01:32:04 np0005539564 systemd[1]: man-db-cache-update.service: Consumed 58.911s CPU time.
Nov 29 01:32:04 np0005539564 systemd[1]: run-r32f803b216b245dca3f558b5e364a059.service: Deactivated successfully.
Nov 29 01:32:04 np0005539564 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 29 01:32:04 np0005539564 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 29 01:32:04 np0005539564 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 29 01:32:14 np0005539564 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 01:35:00 np0005539564 systemd[1]: Starting dnf makecache...
Nov 29 01:35:00 np0005539564 dnf[29922]: Failed determining last makecache time.
Nov 29 01:35:00 np0005539564 dnf[29922]: CentOS Stream 9 - BaseOS                         32 kB/s | 7.3 kB     00:00
Nov 29 01:35:01 np0005539564 dnf[29922]: CentOS Stream 9 - AppStream                      79 kB/s | 7.4 kB     00:00
Nov 29 01:35:01 np0005539564 dnf[29922]: CentOS Stream 9 - CRB                            75 kB/s | 7.2 kB     00:00
Nov 29 01:35:01 np0005539564 dnf[29922]: CentOS Stream 9 - Extras packages                47 kB/s | 8.3 kB     00:00
Nov 29 01:35:01 np0005539564 dnf[29922]: Metadata cache created.
Nov 29 01:35:01 np0005539564 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 29 01:35:01 np0005539564 systemd[1]: Finished dnf makecache.
Nov 29 01:35:48 np0005539564 systemd-logind[785]: New session 8 of user zuul.
Nov 29 01:35:48 np0005539564 systemd[1]: Started Session 8 of User zuul.
Nov 29 01:35:49 np0005539564 python3[30003]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:35:51 np0005539564 python3[30119]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:35:52 np0005539564 python3[30192]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398151.1769714-34061-235104521465819/source mode=0755 _original_basename=delorean.repo follow=False checksum=a16f090252000d02a7f7d540bb10f7c1c9cd4ac5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:35:52 np0005539564 python3[30218]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:35:52 np0005539564 python3[30291]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398151.1769714-34061-235104521465819/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:35:53 np0005539564 python3[30317]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:35:53 np0005539564 python3[30390]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398151.1769714-34061-235104521465819/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:35:53 np0005539564 python3[30416]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:35:54 np0005539564 python3[30489]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398151.1769714-34061-235104521465819/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:35:54 np0005539564 python3[30515]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:35:54 np0005539564 python3[30588]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398151.1769714-34061-235104521465819/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:35:55 np0005539564 python3[30614]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:35:55 np0005539564 python3[30687]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398151.1769714-34061-235104521465819/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:35:55 np0005539564 python3[30713]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:35:56 np0005539564 python3[30786]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398151.1769714-34061-235104521465819/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=25e801a9a05537c191e2aa500f19076ac31d3e5b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:36:09 np0005539564 python3[30834]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:41:09 np0005539564 systemd[1]: session-8.scope: Deactivated successfully.
Nov 29 01:41:09 np0005539564 systemd[1]: session-8.scope: Consumed 5.307s CPU time.
Nov 29 01:41:09 np0005539564 systemd-logind[785]: Session 8 logged out. Waiting for processes to exit.
Nov 29 01:41:09 np0005539564 systemd-logind[785]: Removed session 8.
Nov 29 01:57:49 np0005539564 systemd-logind[785]: New session 9 of user zuul.
Nov 29 01:57:49 np0005539564 systemd[1]: Started Session 9 of User zuul.
Nov 29 01:57:50 np0005539564 python3.9[31045]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:57:51 np0005539564 python3.9[31226]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:58:18 np0005539564 systemd[1]: session-9.scope: Deactivated successfully.
Nov 29 01:58:18 np0005539564 systemd[1]: session-9.scope: Consumed 9.184s CPU time.
Nov 29 01:58:18 np0005539564 systemd-logind[785]: Session 9 logged out. Waiting for processes to exit.
Nov 29 01:58:18 np0005539564 systemd-logind[785]: Removed session 9.
Nov 29 01:58:43 np0005539564 systemd-logind[785]: New session 10 of user zuul.
Nov 29 01:58:43 np0005539564 systemd[1]: Started Session 10 of User zuul.
Nov 29 01:58:44 np0005539564 python3.9[31443]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 29 01:58:45 np0005539564 python3.9[31617]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:58:47 np0005539564 python3.9[31769]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:58:49 np0005539564 python3.9[31922]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:58:50 np0005539564 python3.9[32074]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:58:51 np0005539564 python3.9[32226]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:58:51 np0005539564 python3.9[32349]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399530.7363505-183-220166259872297/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:58:52 np0005539564 python3.9[32501]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:58:53 np0005539564 python3.9[32657]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:58:54 np0005539564 python3.9[32809]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:58:55 np0005539564 python3.9[32959]: ansible-ansible.builtin.service_facts Invoked
Nov 29 01:58:59 np0005539564 python3.9[33212]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:59:00 np0005539564 python3.9[33362]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:59:02 np0005539564 python3.9[33516]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:59:03 np0005539564 python3.9[33674]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 01:59:04 np0005539564 python3.9[33758]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 01:59:49 np0005539564 systemd[1]: Reloading.
Nov 29 01:59:49 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:59:49 np0005539564 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 29 01:59:49 np0005539564 systemd[1]: Reloading.
Nov 29 01:59:49 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:59:49 np0005539564 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 29 01:59:49 np0005539564 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 29 01:59:49 np0005539564 systemd[1]: Reloading.
Nov 29 01:59:49 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:59:50 np0005539564 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 29 01:59:50 np0005539564 dbus-broker-launch[757]: Noticed file-system modification, trigger reload.
Nov 29 01:59:50 np0005539564 dbus-broker-launch[757]: Noticed file-system modification, trigger reload.
Nov 29 01:59:50 np0005539564 dbus-broker-launch[757]: Noticed file-system modification, trigger reload.
Nov 29 02:01:06 np0005539564 kernel: SELinux:  Converting 2718 SID table entries...
Nov 29 02:01:06 np0005539564 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:01:06 np0005539564 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:01:06 np0005539564 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:01:06 np0005539564 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:01:06 np0005539564 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:01:06 np0005539564 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:01:06 np0005539564 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:01:06 np0005539564 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 29 02:01:07 np0005539564 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:01:07 np0005539564 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:01:07 np0005539564 systemd[1]: Reloading.
Nov 29 02:01:07 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:01:07 np0005539564 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:01:08 np0005539564 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:01:08 np0005539564 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:01:08 np0005539564 systemd[1]: man-db-cache-update.service: Consumed 1.269s CPU time.
Nov 29 02:01:08 np0005539564 systemd[1]: run-r773e1ccd3c2548a987dec9e40edd4123.service: Deactivated successfully.
Nov 29 02:01:19 np0005539564 python3.9[35300]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:01:21 np0005539564 python3.9[35581]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 29 02:01:22 np0005539564 python3.9[35733]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 29 02:01:26 np0005539564 python3.9[35887]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:01:37 np0005539564 python3.9[36039]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 29 02:01:38 np0005539564 python3.9[36191]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:01:40 np0005539564 python3.9[36343]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:01:40 np0005539564 python3.9[36466]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399699.5460913-672-261305710227196/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e373fa93a0e53fbb089cc79ce53406904f5c7b5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:01:47 np0005539564 python3.9[36620]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:01:48 np0005539564 python3.9[36772]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:01:49 np0005539564 python3.9[36925]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:01:50 np0005539564 python3.9[37077]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 29 02:01:50 np0005539564 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:01:50 np0005539564 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:01:51 np0005539564 python3.9[37231]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:01:52 np0005539564 python3.9[37389]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 02:01:53 np0005539564 python3.9[37549]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 29 02:01:54 np0005539564 python3.9[37702]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:01:55 np0005539564 python3.9[37860]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 29 02:01:56 np0005539564 python3.9[38012]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:02:00 np0005539564 python3.9[38165]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:02:01 np0005539564 python3.9[38317]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:02:02 np0005539564 python3.9[38440]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764399720.9243557-1029-195925137051711/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:02:03 np0005539564 python3.9[38592]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:02:04 np0005539564 systemd[1]: Starting Load Kernel Modules...
Nov 29 02:02:04 np0005539564 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 29 02:02:04 np0005539564 kernel: Bridge firewalling registered
Nov 29 02:02:04 np0005539564 systemd-modules-load[38596]: Inserted module 'br_netfilter'
Nov 29 02:02:04 np0005539564 systemd[1]: Finished Load Kernel Modules.
Nov 29 02:02:05 np0005539564 python3.9[38752]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:02:06 np0005539564 python3.9[38875]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764399724.7596974-1098-177794705144711/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:02:07 np0005539564 python3.9[39027]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:02:11 np0005539564 dbus-broker-launch[757]: Noticed file-system modification, trigger reload.
Nov 29 02:02:11 np0005539564 dbus-broker-launch[757]: Noticed file-system modification, trigger reload.
Nov 29 02:02:11 np0005539564 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:02:11 np0005539564 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:02:11 np0005539564 systemd[1]: Reloading.
Nov 29 02:02:11 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:02:11 np0005539564 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:02:13 np0005539564 python3.9[40694]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:02:14 np0005539564 python3.9[41742]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 29 02:02:15 np0005539564 python3.9[42831]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:02:15 np0005539564 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:02:15 np0005539564 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:02:15 np0005539564 systemd[1]: man-db-cache-update.service: Consumed 5.232s CPU time.
Nov 29 02:02:15 np0005539564 systemd[1]: run-rb23a26714a754131b7409b670ebfff63.service: Deactivated successfully.
Nov 29 02:02:16 np0005539564 python3.9[43259]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:02:16 np0005539564 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 02:02:17 np0005539564 systemd[1]: Starting Authorization Manager...
Nov 29 02:02:17 np0005539564 polkitd[43476]: Started polkitd version 0.117
Nov 29 02:02:17 np0005539564 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 02:02:17 np0005539564 systemd[1]: Started Authorization Manager.
Nov 29 02:02:18 np0005539564 python3.9[43646]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:02:18 np0005539564 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 29 02:02:18 np0005539564 systemd[1]: tuned.service: Deactivated successfully.
Nov 29 02:02:18 np0005539564 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 29 02:02:18 np0005539564 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 02:02:18 np0005539564 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 02:02:19 np0005539564 python3.9[43808]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 29 02:02:23 np0005539564 python3.9[43962]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:02:23 np0005539564 systemd[1]: Reloading.
Nov 29 02:02:23 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:02:24 np0005539564 python3.9[44150]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:02:25 np0005539564 systemd[1]: Reloading.
Nov 29 02:02:25 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:02:26 np0005539564 python3.9[44339]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:02:26 np0005539564 python3.9[44492]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:02:26 np0005539564 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 29 02:02:27 np0005539564 python3.9[44645]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:02:29 np0005539564 python3.9[44809]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:02:30 np0005539564 python3.9[44962]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:02:30 np0005539564 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 29 02:02:30 np0005539564 systemd[1]: Stopped Apply Kernel Variables.
Nov 29 02:02:30 np0005539564 systemd[1]: Stopping Apply Kernel Variables...
Nov 29 02:02:30 np0005539564 systemd[1]: Starting Apply Kernel Variables...
Nov 29 02:02:30 np0005539564 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 29 02:02:30 np0005539564 systemd[1]: Finished Apply Kernel Variables.
Nov 29 02:02:31 np0005539564 systemd[1]: session-10.scope: Deactivated successfully.
Nov 29 02:02:31 np0005539564 systemd[1]: session-10.scope: Consumed 2min 18.464s CPU time.
Nov 29 02:02:31 np0005539564 systemd-logind[785]: Session 10 logged out. Waiting for processes to exit.
Nov 29 02:02:31 np0005539564 systemd-logind[785]: Removed session 10.
Nov 29 02:02:36 np0005539564 systemd-logind[785]: New session 11 of user zuul.
Nov 29 02:02:36 np0005539564 systemd[1]: Started Session 11 of User zuul.
Nov 29 02:02:37 np0005539564 python3.9[45145]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:02:39 np0005539564 python3.9[45301]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 29 02:02:40 np0005539564 python3.9[45454]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:02:42 np0005539564 python3.9[45612]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 02:02:43 np0005539564 python3.9[45772]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:02:44 np0005539564 python3.9[45856]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 02:02:48 np0005539564 python3.9[46020]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:03:00 np0005539564 kernel: SELinux:  Converting 2730 SID table entries...
Nov 29 02:03:00 np0005539564 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:03:00 np0005539564 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:03:00 np0005539564 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:03:00 np0005539564 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:03:00 np0005539564 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:03:00 np0005539564 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:03:00 np0005539564 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:03:00 np0005539564 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 29 02:03:00 np0005539564 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 29 02:03:01 np0005539564 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:03:01 np0005539564 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:03:01 np0005539564 systemd[1]: Reloading.
Nov 29 02:03:01 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:03:01 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:03:02 np0005539564 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:03:03 np0005539564 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:03:03 np0005539564 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:03:03 np0005539564 systemd[1]: run-radc246b9c7b44804aa45f7edc0d4ae5e.service: Deactivated successfully.
Nov 29 02:03:13 np0005539564 python3.9[47119]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:03:13 np0005539564 systemd[1]: Reloading.
Nov 29 02:03:13 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:03:13 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:03:13 np0005539564 systemd[1]: Starting Open vSwitch Database Unit...
Nov 29 02:03:13 np0005539564 chown[47161]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 29 02:03:14 np0005539564 ovs-ctl[47166]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 29 02:03:14 np0005539564 ovs-ctl[47166]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 29 02:03:14 np0005539564 ovs-ctl[47166]: Starting ovsdb-server [  OK  ]
Nov 29 02:03:14 np0005539564 ovs-vsctl[47215]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 29 02:03:14 np0005539564 ovs-vsctl[47231]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"011fdddc-8681-4ece-b276-7e821dffaec6\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 29 02:03:14 np0005539564 ovs-ctl[47166]: Configuring Open vSwitch system IDs [  OK  ]
Nov 29 02:03:14 np0005539564 ovs-ctl[47166]: Enabling remote OVSDB managers [  OK  ]
Nov 29 02:03:14 np0005539564 systemd[1]: Started Open vSwitch Database Unit.
Nov 29 02:03:14 np0005539564 ovs-vsctl[47241]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Nov 29 02:03:14 np0005539564 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 29 02:03:14 np0005539564 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 29 02:03:14 np0005539564 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 29 02:03:14 np0005539564 kernel: openvswitch: Open vSwitch switching datapath
Nov 29 02:03:14 np0005539564 ovs-ctl[47285]: Inserting openvswitch module [  OK  ]
Nov 29 02:03:14 np0005539564 ovs-ctl[47254]: Starting ovs-vswitchd [  OK  ]
Nov 29 02:03:14 np0005539564 ovs-ctl[47254]: Enabling remote OVSDB managers [  OK  ]
Nov 29 02:03:14 np0005539564 ovs-vsctl[47302]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Nov 29 02:03:14 np0005539564 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 29 02:03:14 np0005539564 systemd[1]: Starting Open vSwitch...
Nov 29 02:03:14 np0005539564 systemd[1]: Finished Open vSwitch.
Nov 29 02:03:15 np0005539564 python3.9[47454]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:03:16 np0005539564 python3.9[47606]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 29 02:03:18 np0005539564 kernel: SELinux:  Converting 2744 SID table entries...
Nov 29 02:03:18 np0005539564 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:03:18 np0005539564 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:03:18 np0005539564 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:03:18 np0005539564 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:03:18 np0005539564 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:03:18 np0005539564 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:03:18 np0005539564 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:03:20 np0005539564 python3.9[47761]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:03:21 np0005539564 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 29 02:03:21 np0005539564 python3.9[47919]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:03:23 np0005539564 python3.9[48072]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:03:25 np0005539564 python3.9[48359]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 02:03:26 np0005539564 python3.9[48509]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:03:26 np0005539564 python3.9[48663]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:03:29 np0005539564 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:03:29 np0005539564 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:03:29 np0005539564 systemd[1]: Reloading.
Nov 29 02:03:29 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:03:29 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:03:29 np0005539564 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:03:32 np0005539564 python3.9[48979]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:03:32 np0005539564 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 29 02:03:32 np0005539564 systemd[1]: Stopped Network Manager Wait Online.
Nov 29 02:03:32 np0005539564 systemd[1]: Stopping Network Manager Wait Online...
Nov 29 02:03:32 np0005539564 systemd[1]: Stopping Network Manager...
Nov 29 02:03:32 np0005539564 NetworkManager[7189]: <info>  [1764399812.5351] caught SIGTERM, shutting down normally.
Nov 29 02:03:32 np0005539564 NetworkManager[7189]: <info>  [1764399812.5367] dhcp4 (eth0): canceled DHCP transaction
Nov 29 02:03:32 np0005539564 NetworkManager[7189]: <info>  [1764399812.5367] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 02:03:32 np0005539564 NetworkManager[7189]: <info>  [1764399812.5367] dhcp4 (eth0): state changed no lease
Nov 29 02:03:32 np0005539564 NetworkManager[7189]: <info>  [1764399812.5370] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 02:03:32 np0005539564 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 02:03:32 np0005539564 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 02:03:32 np0005539564 NetworkManager[7189]: <info>  [1764399812.8316] exiting (success)
Nov 29 02:03:32 np0005539564 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 29 02:03:32 np0005539564 systemd[1]: Stopped Network Manager.
Nov 29 02:03:32 np0005539564 systemd[1]: NetworkManager.service: Consumed 18.950s CPU time, 4.1M memory peak, read 0B from disk, written 9.5K to disk.
Nov 29 02:03:32 np0005539564 systemd[1]: Starting Network Manager...
Nov 29 02:03:32 np0005539564 NetworkManager[48997]: <info>  [1764399812.8979] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:e076cd5a-06db-46a2-b985-67d3134914f7)
Nov 29 02:03:32 np0005539564 NetworkManager[48997]: <info>  [1764399812.8980] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 02:03:32 np0005539564 NetworkManager[48997]: <info>  [1764399812.9040] manager[0x55e9ad662090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 02:03:32 np0005539564 systemd[1]: Starting Hostname Service...
Nov 29 02:03:32 np0005539564 systemd[1]: Started Hostname Service.
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0016] hostname: hostname: using hostnamed
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0020] hostname: static hostname changed from (none) to "compute-1"
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0025] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0029] manager[0x55e9ad662090]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0029] manager[0x55e9ad662090]: rfkill: WWAN hardware radio set enabled
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0049] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0058] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0058] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0059] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0059] manager: Networking is enabled by state file
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0061] settings: Loaded settings plugin: keyfile (internal)
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0064] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0092] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0101] dhcp: init: Using DHCP client 'internal'
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0109] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0113] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0118] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0125] device (lo): Activation: starting connection 'lo' (033a2694-84e1-4d41-84bd-0e9f1d5dbe29)
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0131] device (eth0): carrier: link connected
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0135] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0138] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0138] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0148] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0154] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0161] device (eth1): carrier: link connected
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0164] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0174] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (f5db7348-8863-5268-8258-6f6fd4d307ec) (indicated)
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0175] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0180] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0188] device (eth1): Activation: starting connection 'ci-private-network' (f5db7348-8863-5268-8258-6f6fd4d307ec)
Nov 29 02:03:33 np0005539564 systemd[1]: Started Network Manager.
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0200] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0208] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0211] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0213] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0216] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0220] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0223] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0226] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0231] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0237] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0238] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0246] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0259] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0268] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0270] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0275] device (lo): Activation: successful, device activated.
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0282] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0285] dhcp4 (eth0): state changed new lease, address=38.102.83.162
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0288] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0291] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0294] device (eth1): Activation: successful, device activated.
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.0303] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 02:03:33 np0005539564 systemd[1]: Starting Network Manager Wait Online...
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.2723] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.2814] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.2815] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.2818] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.2821] device (eth0): Activation: successful, device activated.
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.2825] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 02:03:33 np0005539564 NetworkManager[48997]: <info>  [1764399813.3700] manager: startup complete
Nov 29 02:03:33 np0005539564 systemd[1]: Finished Network Manager Wait Online.
Nov 29 02:03:33 np0005539564 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:03:33 np0005539564 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:03:33 np0005539564 systemd[1]: run-r0987a4240dad4cab9296143abfe1135b.service: Deactivated successfully.
Nov 29 02:03:33 np0005539564 python3.9[49206]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:03:43 np0005539564 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 02:03:49 np0005539564 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:03:49 np0005539564 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:03:49 np0005539564 systemd[1]: Reloading.
Nov 29 02:03:49 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:03:49 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:03:50 np0005539564 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:03:54 np0005539564 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:03:54 np0005539564 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:03:54 np0005539564 systemd[1]: run-r05355071480a4bfd9bacc5319ac3a25e.service: Deactivated successfully.
Nov 29 02:03:54 np0005539564 python3.9[49669]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:03:55 np0005539564 python3.9[49821]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:03:56 np0005539564 python3.9[49975]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:03:57 np0005539564 python3.9[50127]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:03:58 np0005539564 python3.9[50279]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:03:58 np0005539564 python3.9[50431]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:03:59 np0005539564 python3.9[50583]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:04:00 np0005539564 python3.9[50706]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399839.1611795-653-82980349485388/.source _original_basename=.mc0k3l11 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:04:01 np0005539564 python3.9[50858]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:04:01 np0005539564 python3.9[51010]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 29 02:04:02 np0005539564 python3.9[51162]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:04:03 np0005539564 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 02:04:05 np0005539564 python3.9[51592]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 29 02:04:06 np0005539564 ansible-async_wrapper.py[51767]: Invoked with j884682759072 300 /home/zuul/.ansible/tmp/ansible-tmp-1764399845.508238-851-202396660149144/AnsiballZ_edpm_os_net_config.py _
Nov 29 02:04:06 np0005539564 ansible-async_wrapper.py[51770]: Starting module and watcher
Nov 29 02:04:06 np0005539564 ansible-async_wrapper.py[51770]: Start watching 51771 (300)
Nov 29 02:04:06 np0005539564 ansible-async_wrapper.py[51771]: Start module (51771)
Nov 29 02:04:06 np0005539564 ansible-async_wrapper.py[51767]: Return async_wrapper task started.
Nov 29 02:04:06 np0005539564 python3.9[51772]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 29 02:04:07 np0005539564 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 29 02:04:07 np0005539564 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 29 02:04:07 np0005539564 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 29 02:04:07 np0005539564 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 29 02:04:07 np0005539564 kernel: cfg80211: failed to load regulatory.db
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.5307] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51773 uid=0 result="success"
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.5323] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51773 uid=0 result="success"
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.5829] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.5830] audit: op="connection-add" uuid="f84575ab-e593-430d-8585-c9616b6c2306" name="br-ex-br" pid=51773 uid=0 result="success"
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.5845] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.5846] audit: op="connection-add" uuid="2ba05c92-6a79-4708-b162-35f903cee328" name="br-ex-port" pid=51773 uid=0 result="success"
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.5857] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.5858] audit: op="connection-add" uuid="a6b6e1d0-3730-46a6-80d3-2486accad922" name="eth1-port" pid=51773 uid=0 result="success"
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.5869] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.5870] audit: op="connection-add" uuid="e9b9c084-989b-465f-bbc2-7f9df8ff0f0c" name="vlan20-port" pid=51773 uid=0 result="success"
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.5880] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.5882] audit: op="connection-add" uuid="12d9ce9b-6915-45c1-94bf-a55d04a9dedc" name="vlan21-port" pid=51773 uid=0 result="success"
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.5895] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.5897] audit: op="connection-add" uuid="c453d247-8d44-4344-a11b-9d3cf9b36165" name="vlan22-port" pid=51773 uid=0 result="success"
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.5908] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.5910] audit: op="connection-add" uuid="7cb72cda-f733-44c7-9b62-74670cdcad7c" name="vlan23-port" pid=51773 uid=0 result="success"
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.5930] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=51773 uid=0 result="success"
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.5946] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.5947] audit: op="connection-add" uuid="e1ca8166-6142-4526-a3b9-44b78f977629" name="br-ex-if" pid=51773 uid=0 result="success"
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6008] audit: op="connection-update" uuid="f5db7348-8863-5268-8258-6f6fd4d307ec" name="ci-private-network" args="ipv6.routing-rules,ipv6.addr-gen-mode,ipv6.dns,ipv6.addresses,ipv6.routes,ipv6.method,ovs-external-ids.data,connection.controller,connection.master,connection.timestamp,connection.port-type,connection.slave-type,ipv4.routing-rules,ipv4.dns,ipv4.addresses,ipv4.routes,ipv4.method,ipv4.never-default,ovs-interface.type" pid=51773 uid=0 result="success"
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6030] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6032] audit: op="connection-add" uuid="3b7f1e14-e41c-4810-8a40-5836757818f3" name="vlan20-if" pid=51773 uid=0 result="success"
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6052] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6054] audit: op="connection-add" uuid="e7af0864-bd0d-43be-bb16-556c340680d7" name="vlan21-if" pid=51773 uid=0 result="success"
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6074] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6076] audit: op="connection-add" uuid="5196521e-1ed8-4c0f-b353-1d9a3bb3f957" name="vlan22-if" pid=51773 uid=0 result="success"
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6095] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6097] audit: op="connection-add" uuid="0ead7b89-4d35-4405-8707-1dd5df44297b" name="vlan23-if" pid=51773 uid=0 result="success"
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6112] audit: op="connection-delete" uuid="a06317bf-79e9-3ad3-bec1-72ebec07c777" name="Wired connection 1" pid=51773 uid=0 result="success"
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6125] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6137] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6140] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (f84575ab-e593-430d-8585-c9616b6c2306)
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6141] audit: op="connection-activate" uuid="f84575ab-e593-430d-8585-c9616b6c2306" name="br-ex-br" pid=51773 uid=0 result="success"
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6142] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6147] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6149] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (2ba05c92-6a79-4708-b162-35f903cee328)
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6150] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6154] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6157] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (a6b6e1d0-3730-46a6-80d3-2486accad922)
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6158] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6162] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6165] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (e9b9c084-989b-465f-bbc2-7f9df8ff0f0c)
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6166] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6171] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6175] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (12d9ce9b-6915-45c1-94bf-a55d04a9dedc)
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6176] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6183] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6186] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (c453d247-8d44-4344-a11b-9d3cf9b36165)
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6187] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6192] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6195] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (7cb72cda-f733-44c7-9b62-74670cdcad7c)
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6196] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6198] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6199] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6204] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6207] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6210] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (e1ca8166-6142-4526-a3b9-44b78f977629)
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6211] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6213] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6215] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6216] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6217] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6225] device (eth1): disconnecting for new activation request.
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6225] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6228] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6230] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6231] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6233] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6236] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6239] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (3b7f1e14-e41c-4810-8a40-5836757818f3)
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6239] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6241] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6242] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6243] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6245] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6248] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6250] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (e7af0864-bd0d-43be-bb16-556c340680d7)
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6251] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6253] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6254] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6255] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6256] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6259] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6262] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (5196521e-1ed8-4c0f-b353-1d9a3bb3f957)
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6262] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6264] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6265] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6266] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6268] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6271] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6274] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (0ead7b89-4d35-4405-8707-1dd5df44297b)
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6275] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6276] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6277] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6278] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6279] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6289] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=51773 uid=0 result="success"
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6291] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6293] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6294] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6299] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6301] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6304] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6306] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6307] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6311] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6313] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6316] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6318] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 kernel: ovs-system: entered promiscuous mode
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6322] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6325] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6327] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6329] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6334] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6337] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6341] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6342] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6345] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 kernel: Timeout policy base is empty
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6349] dhcp4 (eth0): canceled DHCP transaction
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6349] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6349] dhcp4 (eth0): state changed no lease
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6351] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6360] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6363] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51773 uid=0 result="fail" reason="Device is not activated"
Nov 29 02:04:08 np0005539564 systemd-udevd[51777]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:04:08 np0005539564 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6417] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6419] dhcp4 (eth0): state changed new lease, address=38.102.83.162
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6478] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6485] device (eth1): disconnecting for new activation request.
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6485] audit: op="connection-activate" uuid="f5db7348-8863-5268-8258-6f6fd4d307ec" name="ci-private-network" pid=51773 uid=0 result="success"
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6492] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6498] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6538] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51773 uid=0 result="success"
Nov 29 02:04:08 np0005539564 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6564] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6686] device (eth1): Activation: starting connection 'ci-private-network' (f5db7348-8863-5268-8258-6f6fd4d307ec)
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6705] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6708] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6716] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6717] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6718] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6719] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6720] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6721] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6722] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6734] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6739] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6744] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6748] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6752] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6755] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6758] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6762] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6765] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6767] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6771] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6774] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 kernel: br-ex: entered promiscuous mode
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6791] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6795] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6799] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6804] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6810] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6860] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6861] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6866] device (eth1): Activation: successful, device activated.
Nov 29 02:04:08 np0005539564 kernel: vlan22: entered promiscuous mode
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6922] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 29 02:04:08 np0005539564 systemd-udevd[51779]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6939] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6962] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6963] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.6967] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 02:04:08 np0005539564 kernel: vlan23: entered promiscuous mode
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.7037] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.7047] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 kernel: vlan21: entered promiscuous mode
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.7092] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.7094] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.7100] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.7139] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 29 02:04:08 np0005539564 kernel: vlan20: entered promiscuous mode
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.7164] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.7188] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.7197] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.7207] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.7208] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.7213] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.7262] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.7263] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.7267] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.7276] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.7286] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.7334] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.7335] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539564 NetworkManager[48997]: <info>  [1764399848.7340] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 02:04:09 np0005539564 NetworkManager[48997]: <info>  [1764399849.8535] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51773 uid=0 result="success"
Nov 29 02:04:10 np0005539564 NetworkManager[48997]: <info>  [1764399850.0273] checkpoint[0x55e9ad638950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 29 02:04:10 np0005539564 NetworkManager[48997]: <info>  [1764399850.0276] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51773 uid=0 result="success"
Nov 29 02:04:10 np0005539564 python3.9[52130]: ansible-ansible.legacy.async_status Invoked with jid=j884682759072.51767 mode=status _async_dir=/root/.ansible_async
Nov 29 02:04:10 np0005539564 NetworkManager[48997]: <info>  [1764399850.3728] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51773 uid=0 result="success"
Nov 29 02:04:10 np0005539564 NetworkManager[48997]: <info>  [1764399850.3738] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51773 uid=0 result="success"
Nov 29 02:04:10 np0005539564 NetworkManager[48997]: <info>  [1764399850.5862] audit: op="networking-control" arg="global-dns-configuration" pid=51773 uid=0 result="success"
Nov 29 02:04:10 np0005539564 NetworkManager[48997]: <info>  [1764399850.5886] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 29 02:04:10 np0005539564 NetworkManager[48997]: <info>  [1764399850.5917] audit: op="networking-control" arg="global-dns-configuration" pid=51773 uid=0 result="success"
Nov 29 02:04:10 np0005539564 NetworkManager[48997]: <info>  [1764399850.5946] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51773 uid=0 result="success"
Nov 29 02:04:10 np0005539564 NetworkManager[48997]: <info>  [1764399850.7420] checkpoint[0x55e9ad638a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 29 02:04:10 np0005539564 NetworkManager[48997]: <info>  [1764399850.7425] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51773 uid=0 result="success"
Nov 29 02:04:10 np0005539564 ansible-async_wrapper.py[51771]: Module complete (51771)
Nov 29 02:04:11 np0005539564 ansible-async_wrapper.py[51770]: Done in kid B.
Nov 29 02:04:11 np0005539564 irqbalance[783]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 29 02:04:11 np0005539564 irqbalance[783]: IRQ 26 affinity is now unmanaged
Nov 29 02:04:13 np0005539564 python3.9[52236]: ansible-ansible.legacy.async_status Invoked with jid=j884682759072.51767 mode=status _async_dir=/root/.ansible_async
Nov 29 02:04:14 np0005539564 python3.9[52336]: ansible-ansible.legacy.async_status Invoked with jid=j884682759072.51767 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 02:04:14 np0005539564 python3.9[52488]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:04:15 np0005539564 python3.9[52611]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399854.4756389-932-276174168293067/.source.returncode _original_basename=.h25xv5ed follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:04:16 np0005539564 python3.9[52763]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:04:17 np0005539564 python3.9[52886]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399856.0454164-980-24661855518882/.source.cfg _original_basename=.nems56xu follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:04:18 np0005539564 python3.9[53039]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:04:18 np0005539564 systemd[1]: Reloading Network Manager...
Nov 29 02:04:18 np0005539564 NetworkManager[48997]: <info>  [1764399858.0986] audit: op="reload" arg="0" pid=53043 uid=0 result="success"
Nov 29 02:04:18 np0005539564 NetworkManager[48997]: <info>  [1764399858.0995] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 29 02:04:18 np0005539564 systemd[1]: Reloaded Network Manager.
Nov 29 02:04:18 np0005539564 systemd[1]: session-11.scope: Deactivated successfully.
Nov 29 02:04:18 np0005539564 systemd-logind[785]: Session 11 logged out. Waiting for processes to exit.
Nov 29 02:04:18 np0005539564 systemd[1]: session-11.scope: Consumed 52.147s CPU time.
Nov 29 02:04:18 np0005539564 systemd-logind[785]: Removed session 11.
Nov 29 02:04:23 np0005539564 systemd-logind[785]: New session 12 of user zuul.
Nov 29 02:04:23 np0005539564 systemd[1]: Started Session 12 of User zuul.
Nov 29 02:04:24 np0005539564 python3.9[53229]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:04:25 np0005539564 python3.9[53383]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:04:27 np0005539564 python3.9[53577]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:04:27 np0005539564 systemd[1]: session-12.scope: Deactivated successfully.
Nov 29 02:04:27 np0005539564 systemd[1]: session-12.scope: Consumed 2.576s CPU time.
Nov 29 02:04:27 np0005539564 systemd-logind[785]: Session 12 logged out. Waiting for processes to exit.
Nov 29 02:04:27 np0005539564 systemd-logind[785]: Removed session 12.
Nov 29 02:04:28 np0005539564 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 02:04:33 np0005539564 systemd-logind[785]: New session 13 of user zuul.
Nov 29 02:04:33 np0005539564 systemd[1]: Started Session 13 of User zuul.
Nov 29 02:04:35 np0005539564 python3.9[53759]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:04:36 np0005539564 python3.9[53913]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:04:37 np0005539564 python3.9[54070]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:04:37 np0005539564 python3.9[54154]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:04:40 np0005539564 python3.9[54310]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:04:41 np0005539564 python3.9[54505]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:04:42 np0005539564 python3.9[54657]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:04:42 np0005539564 systemd[1]: var-lib-containers-storage-overlay-compat4150012993-merged.mount: Deactivated successfully.
Nov 29 02:04:43 np0005539564 podman[54658]: 2025-11-29 07:04:43.22490685 +0000 UTC m=+0.551552755 system refresh
Nov 29 02:04:43 np0005539564 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:04:44 np0005539564 python3.9[54821]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:04:45 np0005539564 python3.9[54944]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399884.098241-203-38419521573795/.source.json follow=False _original_basename=podman_network_config.j2 checksum=fdb7ca5c34d3b887bd22ed7200a70612d3d065db backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:04:46 np0005539564 python3.9[55096]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:04:46 np0005539564 python3.9[55219]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764399885.7886236-248-200695302737344/.source.conf follow=False _original_basename=registries.conf.j2 checksum=f27f86218e398aa50b444b0bf8b9e443f3d2c120 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:04:47 np0005539564 python3.9[55371]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:04:48 np0005539564 python3.9[55523]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:04:49 np0005539564 python3.9[55675]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:04:50 np0005539564 python3.9[55827]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:04:51 np0005539564 python3.9[55979]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:04:54 np0005539564 python3.9[56132]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:04:55 np0005539564 python3.9[56286]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:04:56 np0005539564 python3.9[56438]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:04:57 np0005539564 python3.9[56590]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:04:58 np0005539564 python3.9[56743]: ansible-service_facts Invoked
Nov 29 02:04:58 np0005539564 network[56760]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:04:58 np0005539564 network[56761]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:04:58 np0005539564 network[56762]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:05:03 np0005539564 python3.9[57216]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:05:06 np0005539564 python3.9[57371]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 29 02:05:08 np0005539564 python3.9[57523]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:05:08 np0005539564 python3.9[57648]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399907.6374483-680-58923180062724/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:05:10 np0005539564 python3.9[57802]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:05:10 np0005539564 python3.9[57927]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399909.3005757-725-5012270497070/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:05:12 np0005539564 python3.9[58081]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:05:14 np0005539564 python3.9[58235]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:05:15 np0005539564 python3.9[58319]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:05:17 np0005539564 python3.9[58473]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:05:18 np0005539564 python3.9[58557]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:05:18 np0005539564 chronyd[794]: chronyd exiting
Nov 29 02:05:18 np0005539564 systemd[1]: Stopping NTP client/server...
Nov 29 02:05:18 np0005539564 systemd[1]: chronyd.service: Deactivated successfully.
Nov 29 02:05:18 np0005539564 systemd[1]: Stopped NTP client/server.
Nov 29 02:05:18 np0005539564 systemd[1]: Starting NTP client/server...
Nov 29 02:05:18 np0005539564 chronyd[58565]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 29 02:05:18 np0005539564 chronyd[58565]: Frequency -26.659 +/- 0.128 ppm read from /var/lib/chrony/drift
Nov 29 02:05:18 np0005539564 chronyd[58565]: Loaded seccomp filter (level 2)
Nov 29 02:05:18 np0005539564 systemd[1]: Started NTP client/server.
Nov 29 02:05:19 np0005539564 systemd[1]: session-13.scope: Deactivated successfully.
Nov 29 02:05:19 np0005539564 systemd[1]: session-13.scope: Consumed 26.592s CPU time.
Nov 29 02:05:19 np0005539564 systemd-logind[785]: Session 13 logged out. Waiting for processes to exit.
Nov 29 02:05:19 np0005539564 systemd-logind[785]: Removed session 13.
Nov 29 02:05:24 np0005539564 systemd-logind[785]: New session 14 of user zuul.
Nov 29 02:05:24 np0005539564 systemd[1]: Started Session 14 of User zuul.
Nov 29 02:05:25 np0005539564 python3.9[58746]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:05:26 np0005539564 python3.9[58898]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:05:26 np0005539564 python3.9[59021]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399925.4593937-68-16670072874754/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:05:27 np0005539564 systemd[1]: session-14.scope: Deactivated successfully.
Nov 29 02:05:27 np0005539564 systemd[1]: session-14.scope: Consumed 1.793s CPU time.
Nov 29 02:05:27 np0005539564 systemd-logind[785]: Session 14 logged out. Waiting for processes to exit.
Nov 29 02:05:27 np0005539564 systemd-logind[785]: Removed session 14.
Nov 29 02:05:32 np0005539564 systemd-logind[785]: New session 15 of user zuul.
Nov 29 02:05:32 np0005539564 systemd[1]: Started Session 15 of User zuul.
Nov 29 02:05:33 np0005539564 python3.9[59199]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:05:34 np0005539564 python3.9[59355]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:05:35 np0005539564 python3.9[59530]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:05:36 np0005539564 python3.9[59653]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764399934.8405209-89-270736755345558/.source.json _original_basename=.nemloj__ follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:05:37 np0005539564 python3.9[59805]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:05:38 np0005539564 python3.9[59928]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399937.3124318-158-110955863404129/.source _original_basename=.o23ndh43 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:05:39 np0005539564 python3.9[60080]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:05:40 np0005539564 python3.9[60232]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:05:40 np0005539564 python3.9[60355]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764399939.5459344-230-122128450558466/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:05:41 np0005539564 python3.9[60507]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:05:41 np0005539564 python3.9[60630]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764399940.8612132-230-241997038692556/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:05:42 np0005539564 python3.9[60782]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:05:43 np0005539564 python3.9[60934]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:05:43 np0005539564 python3.9[61057]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399942.8499475-341-235100546632064/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:05:44 np0005539564 python3.9[61209]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:05:45 np0005539564 python3.9[61332]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399944.1765587-386-184709953472223/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:05:47 np0005539564 python3.9[61484]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:05:47 np0005539564 systemd[1]: Reloading.
Nov 29 02:05:47 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:05:47 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:05:47 np0005539564 systemd[1]: Reloading.
Nov 29 02:05:47 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:05:47 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:05:47 np0005539564 systemd[1]: Starting EDPM Container Shutdown...
Nov 29 02:05:47 np0005539564 systemd[1]: Finished EDPM Container Shutdown.
Nov 29 02:05:48 np0005539564 python3.9[61713]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:05:49 np0005539564 python3.9[61836]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399947.8429499-455-302233934869/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:05:49 np0005539564 python3.9[61988]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:05:50 np0005539564 python3.9[62111]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399949.2778502-500-128541936146220/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:05:51 np0005539564 python3.9[62263]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:05:51 np0005539564 systemd[1]: Reloading.
Nov 29 02:05:51 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:05:51 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:05:51 np0005539564 systemd[1]: Reloading.
Nov 29 02:05:51 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:05:51 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:05:51 np0005539564 systemd[1]: Starting Create netns directory...
Nov 29 02:05:51 np0005539564 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 02:05:51 np0005539564 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 02:05:51 np0005539564 systemd[1]: Finished Create netns directory.
Nov 29 02:05:52 np0005539564 python3.9[62491]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:05:52 np0005539564 network[62508]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:05:52 np0005539564 network[62509]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:05:52 np0005539564 network[62510]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:05:57 np0005539564 python3.9[62772]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:05:57 np0005539564 systemd[1]: Reloading.
Nov 29 02:05:57 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:05:57 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:05:58 np0005539564 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 29 02:05:58 np0005539564 iptables.init[62813]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 29 02:05:58 np0005539564 iptables.init[62813]: iptables: Flushing firewall rules: [  OK  ]
Nov 29 02:05:58 np0005539564 systemd[1]: iptables.service: Deactivated successfully.
Nov 29 02:05:58 np0005539564 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 29 02:05:59 np0005539564 python3.9[63009]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:06:00 np0005539564 python3.9[63163]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:06:00 np0005539564 systemd[1]: Reloading.
Nov 29 02:06:00 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:06:00 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:06:00 np0005539564 systemd[1]: Starting Netfilter Tables...
Nov 29 02:06:00 np0005539564 systemd[1]: Finished Netfilter Tables.
Nov 29 02:06:01 np0005539564 python3.9[63355]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:06:02 np0005539564 python3.9[63508]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:06:03 np0005539564 python3.9[63633]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399962.2735288-707-151865486629016/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:04 np0005539564 python3.9[63786]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:06:04 np0005539564 systemd[1]: Reloading OpenSSH server daemon...
Nov 29 02:06:04 np0005539564 systemd[1]: Reloaded OpenSSH server daemon.
Nov 29 02:06:05 np0005539564 python3.9[63942]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:05 np0005539564 python3.9[64094]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:06:06 np0005539564 python3.9[64217]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399965.3820963-800-227941320622681/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:07 np0005539564 python3.9[64369]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 02:06:07 np0005539564 systemd[1]: Starting Time & Date Service...
Nov 29 02:06:07 np0005539564 systemd[1]: Started Time & Date Service.
Nov 29 02:06:08 np0005539564 python3.9[64525]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:09 np0005539564 python3.9[64677]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:06:10 np0005539564 python3.9[64800]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399968.8738298-906-238150743311949/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:10 np0005539564 python3.9[64952]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:06:11 np0005539564 python3.9[65077]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399970.3181672-951-234146499709894/.source.yaml _original_basename=.c7hiqhf7 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:12 np0005539564 python3.9[65229]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:06:12 np0005539564 python3.9[65352]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399971.7914686-995-236756521327233/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:13 np0005539564 python3.9[65504]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:06:14 np0005539564 python3.9[65657]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:06:15 np0005539564 python3[65810]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 02:06:16 np0005539564 python3.9[65962]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:06:16 np0005539564 python3.9[66085]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399975.5212464-1112-187918851571331/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:17 np0005539564 python3.9[66239]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:06:18 np0005539564 python3.9[66362]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399976.9113686-1157-268402358560715/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:19 np0005539564 python3.9[66514]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:06:19 np0005539564 python3.9[66637]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399978.7170832-1202-140321123081138/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:20 np0005539564 python3.9[66789]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:06:21 np0005539564 python3.9[66912]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399979.9970896-1247-184168090202868/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:21 np0005539564 python3.9[67064]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:06:22 np0005539564 python3.9[67187]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399981.3143587-1292-189081766591421/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:23 np0005539564 python3.9[67339]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:23 np0005539564 python3.9[67491]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:06:24 np0005539564 python3.9[67650]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:25 np0005539564 python3.9[67803]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:26 np0005539564 python3.9[67955]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:27 np0005539564 python3.9[68107]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 02:06:28 np0005539564 python3.9[68260]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 02:06:28 np0005539564 systemd[1]: session-15.scope: Deactivated successfully.
Nov 29 02:06:28 np0005539564 systemd[1]: session-15.scope: Consumed 36.877s CPU time.
Nov 29 02:06:28 np0005539564 systemd-logind[785]: Session 15 logged out. Waiting for processes to exit.
Nov 29 02:06:28 np0005539564 systemd-logind[785]: Removed session 15.
Nov 29 02:06:33 np0005539564 systemd-logind[785]: New session 16 of user zuul.
Nov 29 02:06:34 np0005539564 systemd[1]: Started Session 16 of User zuul.
Nov 29 02:06:34 np0005539564 python3.9[68443]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 29 02:06:35 np0005539564 python3.9[68595]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:06:36 np0005539564 python3.9[68747]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:06:37 np0005539564 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 02:06:38 np0005539564 python3.9[68899]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQsLXbFhjUoBaTkhKZlhlr4wo49zgbzeJBequh3eUPlExtzdjrm/R47hkAJGagw+KhipRZ6XygyvP7g0rFG4kdUV8ZbW7HpIhvM2LCuDhFHJGta5IbLQDOAA3QuuNA4DyzfWhW146Q2aOja0AoRZOxjBRKO37fhEgGVJO/UZQHoJZFXHQPBPhZ27Wtt4Jfhz0G/t7WgxqsHTg9pnZL3PKV8yC/Ety9V+G9Hjrbwv8GblAazAMvnYcN6Hhh0mKKJ41E1++cy2nN9Lr6iU9KXS4BN73PkapyN75SJK4/2HEELgi7XCGQtXkdc+cnS1nYdtqW5aUS8fONsji8bdoy4AvRQrTsNWbXNcQXBesHoKNiBaUZjzaW0LhwQ2HTD36wG2FW/thgjrlU0AY8aqut/tcB7sjUacgNn8XfqibZb07x75HvbixT1G+V9ax63HLyfAiLCZquwpnl7CuyQvBAe+UNPLU4Kegtn+KKw2+3BoNkkAKkAoDdKd5fQKWFavTllfU=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEesPYkFXAKa2jD/XHieFXe2/NLZG5BPNBvLebxF7i4V#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK3fAbGbewc62wcP/ANYyTDYdWflUi4LqSZ2pYXEDgbyEIKVn6IU7ulNV9i7b7SvxrtzT5K34kYv1WsU3bRd5RM=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDc04fosxiJMz9URZzfwgW2kqQvT/wRjkGRSpo8InnYlU+RAljr+QL8e1C8DPu41m+HGkgDmV4uDikwXF3b0w/6D0/P6iPUsexRy4OkOFgOqlzl7+pNzQ1p5SMgMoaKslyPA1DEUc0bxHjIpTHyjq/X8YamvXJO4KLpZ42Ii0c6RyWcejiRw4wZQWh2s6egN8in6cEVODGcWVseYKhFaPjdUDBtuQy4LaGwosJIkR1OCy9coVbEdcv2vOxdpLby9ssC7nEDAKg2X+0rmcdpImSt43KnAXiuMegm5A7FvAas99jVOYawKyostqRzEOId/1TnbBGDEabjKYlPEOLSFiMsBWLwTkN5loBfqwpLWlheJWPYP90mvfiENFN4W+ut6nx4zBVHQYvGts86HDkcSVipUVxaYaWf37c/GMXcee85lI//k2lNWe0yYOJGU7P1jyU+ug0Cn1MeQghj1V8Gcnax0b58J+Ttp4a7UnYek2q2w2h6nbIbZT5m+yw/KYeNtE8=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEgIAlZsupHHlO1a9ydDFIdgMGgwYqu0xx1PBhB1cRGz#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHZLPbvNXmCCAW6hZosm19hA5j7Lbr0PZCizVLJXvz0y88L5bXrAQVln7SscOXMnvFy6P8Fn/54/gijC9Rd2rDs=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9IXwkB2kbuJv6AXS7YRKSa74/LXNdMPGOs9WAzsnePFq78YtNX+JkgkhS6H4PtKZr7d8zGldcUVTXsG54r7DHIiEhjiunXArwm7nxPCcvRVmU6kntuiJbAOObaZlgrdlGcNsB0gEt5E4YWVNxiiRnsA60PvQbLyfN0/+99rmyMLcT4z9DL+dZj8kNH54PFTeXByeUArORk1qkPj734Ru+RP82qH26PyeJz2HlCsq7qPKepCgiVDKLbjXnLqt58qEzzVFKx3gfIhpvZ8PiUoFSS6UJlk/70XVp+og+tU/Dv952UWQMOHkfsIfqvdJgcy2hYuLbI03ZOF/NRU1FEUEPIhfU7kM2KzkqoDLyu+ntXGTBE6vWBuqrH+KUMqrAGGXZPnoTS8zb3H1izaYqN48vVE10jDHjkhWEEIuwN5AVGsCBjpRkQ+rZ+gDb/z4loN29WMX/KmqYAy+qsu7X8gFojfnlrv4DYVd1lxYZPnqS8bCkeBF8txjMVUD5EpNVGVU=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOpx0/R+UH9iWt0hByjYOi11MmeoOEV/RM05Qq0CkR6T#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLcAFq3gx5S+bCbh1b0B1Plh9X3nnDc+14hmd4HK59tBD1jd/VrvEVcg/jrioqZJxPOiBK8QMTq5htAcmQbIjnM=#012 create=True mode=0644 path=/tmp/ansible.l4scgdyi state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:39 np0005539564 python3.9[69054]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.l4scgdyi' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:06:40 np0005539564 python3.9[69208]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.l4scgdyi state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:40 np0005539564 systemd[1]: session-16.scope: Deactivated successfully.
Nov 29 02:06:40 np0005539564 systemd[1]: session-16.scope: Consumed 3.798s CPU time.
Nov 29 02:06:40 np0005539564 systemd-logind[785]: Session 16 logged out. Waiting for processes to exit.
Nov 29 02:06:40 np0005539564 systemd-logind[785]: Removed session 16.
Nov 29 02:06:46 np0005539564 systemd-logind[785]: New session 17 of user zuul.
Nov 29 02:06:46 np0005539564 systemd[1]: Started Session 17 of User zuul.
Nov 29 02:06:47 np0005539564 python3.9[69388]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:06:48 np0005539564 python3.9[69544]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 02:06:49 np0005539564 python3.9[69698]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:06:50 np0005539564 python3.9[69851]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:06:51 np0005539564 python3.9[70004]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:06:52 np0005539564 python3.9[70158]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:06:53 np0005539564 python3.9[70313]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:53 np0005539564 systemd[1]: session-17.scope: Deactivated successfully.
Nov 29 02:06:53 np0005539564 systemd[1]: session-17.scope: Consumed 4.620s CPU time.
Nov 29 02:06:53 np0005539564 systemd-logind[785]: Session 17 logged out. Waiting for processes to exit.
Nov 29 02:06:53 np0005539564 systemd-logind[785]: Removed session 17.
Nov 29 02:06:59 np0005539564 systemd-logind[785]: New session 18 of user zuul.
Nov 29 02:06:59 np0005539564 systemd[1]: Started Session 18 of User zuul.
Nov 29 02:07:00 np0005539564 python3.9[70492]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:07:01 np0005539564 python3.9[70648]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:07:02 np0005539564 python3.9[70732]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 02:07:04 np0005539564 python3.9[70883]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:07:05 np0005539564 python3.9[71034]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:07:06 np0005539564 python3.9[71184]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:07:06 np0005539564 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:07:06 np0005539564 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:07:07 np0005539564 python3.9[71335]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:07:07 np0005539564 systemd-logind[785]: Session 18 logged out. Waiting for processes to exit.
Nov 29 02:07:07 np0005539564 systemd[1]: session-18.scope: Deactivated successfully.
Nov 29 02:07:07 np0005539564 systemd[1]: session-18.scope: Consumed 5.887s CPU time.
Nov 29 02:07:07 np0005539564 systemd-logind[785]: Removed session 18.
Nov 29 02:07:16 np0005539564 systemd-logind[785]: New session 19 of user zuul.
Nov 29 02:07:16 np0005539564 systemd[1]: Started Session 19 of User zuul.
Nov 29 02:07:23 np0005539564 python3[72101]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:07:25 np0005539564 python3[72198]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 02:07:27 np0005539564 python3[72226]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 02:07:27 np0005539564 python3[72254]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:07:27 np0005539564 kernel: loop: module loaded
Nov 29 02:07:27 np0005539564 kernel: loop3: detected capacity change from 0 to 14680064
Nov 29 02:07:28 np0005539564 python3[72289]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:07:28 np0005539564 lvm[72292]: PV /dev/loop3 not used.
Nov 29 02:07:28 np0005539564 lvm[72294]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 02:07:28 np0005539564 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 29 02:07:28 np0005539564 lvm[72304]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 02:07:28 np0005539564 lvm[72304]: VG ceph_vg0 finished
Nov 29 02:07:28 np0005539564 lvm[72302]:  1 logical volume(s) in volume group "ceph_vg0" now active
Nov 29 02:07:28 np0005539564 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 29 02:07:28 np0005539564 chronyd[58565]: Selected source 23.159.16.194 (pool.ntp.org)
Nov 29 02:07:29 np0005539564 python3[72382]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 02:07:29 np0005539564 python3[72455]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400048.7265468-37018-52313882946787/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:30 np0005539564 python3[72505]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:07:30 np0005539564 systemd[1]: Reloading.
Nov 29 02:07:30 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:07:30 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:07:30 np0005539564 systemd[1]: Starting Ceph OSD losetup...
Nov 29 02:07:30 np0005539564 bash[72547]: /dev/loop3: [64513]:4327940 (/var/lib/ceph-osd-0.img)
Nov 29 02:07:30 np0005539564 systemd[1]: Finished Ceph OSD losetup.
Nov 29 02:07:30 np0005539564 lvm[72549]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 02:07:30 np0005539564 lvm[72549]: VG ceph_vg0 finished
Nov 29 02:07:32 np0005539564 python3[72575]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:09:53 np0005539564 systemd-logind[785]: New session 20 of user ceph-admin.
Nov 29 02:09:53 np0005539564 systemd[1]: Created slice User Slice of UID 42477.
Nov 29 02:09:53 np0005539564 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 29 02:09:53 np0005539564 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 29 02:09:53 np0005539564 systemd[1]: Starting User Manager for UID 42477...
Nov 29 02:09:53 np0005539564 systemd[72696]: Queued start job for default target Main User Target.
Nov 29 02:09:53 np0005539564 systemd-logind[785]: New session 22 of user ceph-admin.
Nov 29 02:09:53 np0005539564 systemd[72696]: Created slice User Application Slice.
Nov 29 02:09:53 np0005539564 systemd[72696]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 02:09:53 np0005539564 systemd[72696]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 02:09:53 np0005539564 systemd[72696]: Reached target Paths.
Nov 29 02:09:53 np0005539564 systemd[72696]: Reached target Timers.
Nov 29 02:09:53 np0005539564 systemd[72696]: Starting D-Bus User Message Bus Socket...
Nov 29 02:09:53 np0005539564 systemd[72696]: Starting Create User's Volatile Files and Directories...
Nov 29 02:09:53 np0005539564 systemd[72696]: Listening on D-Bus User Message Bus Socket.
Nov 29 02:09:53 np0005539564 systemd[72696]: Finished Create User's Volatile Files and Directories.
Nov 29 02:09:53 np0005539564 systemd[72696]: Reached target Sockets.
Nov 29 02:09:53 np0005539564 systemd[72696]: Reached target Basic System.
Nov 29 02:09:53 np0005539564 systemd[72696]: Reached target Main User Target.
Nov 29 02:09:53 np0005539564 systemd[72696]: Startup finished in 132ms.
Nov 29 02:09:53 np0005539564 systemd[1]: Started User Manager for UID 42477.
Nov 29 02:09:53 np0005539564 systemd[1]: Started Session 20 of User ceph-admin.
Nov 29 02:09:53 np0005539564 systemd[1]: Started Session 22 of User ceph-admin.
Nov 29 02:09:53 np0005539564 systemd-logind[785]: New session 23 of user ceph-admin.
Nov 29 02:09:53 np0005539564 systemd[1]: Started Session 23 of User ceph-admin.
Nov 29 02:09:54 np0005539564 systemd-logind[785]: New session 24 of user ceph-admin.
Nov 29 02:09:54 np0005539564 systemd[1]: Started Session 24 of User ceph-admin.
Nov 29 02:09:54 np0005539564 systemd-logind[785]: New session 25 of user ceph-admin.
Nov 29 02:09:54 np0005539564 systemd[1]: Started Session 25 of User ceph-admin.
Nov 29 02:09:54 np0005539564 systemd-logind[785]: New session 26 of user ceph-admin.
Nov 29 02:09:54 np0005539564 systemd[1]: Started Session 26 of User ceph-admin.
Nov 29 02:09:55 np0005539564 systemd-logind[785]: New session 27 of user ceph-admin.
Nov 29 02:09:55 np0005539564 systemd[1]: Started Session 27 of User ceph-admin.
Nov 29 02:09:55 np0005539564 systemd-logind[785]: New session 28 of user ceph-admin.
Nov 29 02:09:55 np0005539564 systemd[1]: Started Session 28 of User ceph-admin.
Nov 29 02:09:56 np0005539564 systemd-logind[785]: New session 29 of user ceph-admin.
Nov 29 02:09:56 np0005539564 systemd[1]: Started Session 29 of User ceph-admin.
Nov 29 02:09:56 np0005539564 systemd-logind[785]: New session 30 of user ceph-admin.
Nov 29 02:09:56 np0005539564 systemd[1]: Started Session 30 of User ceph-admin.
Nov 29 02:09:57 np0005539564 systemd-logind[785]: New session 31 of user ceph-admin.
Nov 29 02:09:57 np0005539564 systemd[1]: Started Session 31 of User ceph-admin.
Nov 29 02:09:57 np0005539564 systemd-logind[785]: New session 32 of user ceph-admin.
Nov 29 02:09:57 np0005539564 systemd[1]: Started Session 32 of User ceph-admin.
Nov 29 02:09:57 np0005539564 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:09:58 np0005539564 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:09:59 np0005539564 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:09:59 np0005539564 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:09:59 np0005539564 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 73664 (sysctl)
Nov 29 02:10:00 np0005539564 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:10:00 np0005539564 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 29 02:10:00 np0005539564 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 29 02:10:00 np0005539564 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:10:01 np0005539564 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:10:01 np0005539564 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:10:03 np0005539564 systemd[1]: var-lib-containers-storage-overlay-compat4279279623-merged.mount: Deactivated successfully.
Nov 29 02:10:03 np0005539564 systemd[1]: var-lib-containers-storage-overlay-compat4279279623-lower\x2dmapped.mount: Deactivated successfully.
Nov 29 02:10:19 np0005539564 podman[73944]: 2025-11-29 07:10:19.408475871 +0000 UTC m=+17.902120462 container create c006157a7032e1b0d63f72217f37449362f1ad995e37432a304493d2ae07cc1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 02:10:19 np0005539564 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck192520297-merged.mount: Deactivated successfully.
Nov 29 02:10:19 np0005539564 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 29 02:10:19 np0005539564 systemd[1]: Started libpod-conmon-c006157a7032e1b0d63f72217f37449362f1ad995e37432a304493d2ae07cc1d.scope.
Nov 29 02:10:19 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:10:19 np0005539564 podman[73944]: 2025-11-29 07:10:19.393335008 +0000 UTC m=+17.886979619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:19 np0005539564 podman[73944]: 2025-11-29 07:10:19.499035523 +0000 UTC m=+17.992680134 container init c006157a7032e1b0d63f72217f37449362f1ad995e37432a304493d2ae07cc1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_joliot, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 02:10:19 np0005539564 podman[73944]: 2025-11-29 07:10:19.506088418 +0000 UTC m=+17.999733009 container start c006157a7032e1b0d63f72217f37449362f1ad995e37432a304493d2ae07cc1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 02:10:19 np0005539564 podman[73944]: 2025-11-29 07:10:19.509698016 +0000 UTC m=+18.003342617 container attach c006157a7032e1b0d63f72217f37449362f1ad995e37432a304493d2ae07cc1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:10:19 np0005539564 naughty_joliot[74004]: 167 167
Nov 29 02:10:19 np0005539564 systemd[1]: libpod-c006157a7032e1b0d63f72217f37449362f1ad995e37432a304493d2ae07cc1d.scope: Deactivated successfully.
Nov 29 02:10:19 np0005539564 conmon[74004]: conmon c006157a7032e1b0d63f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c006157a7032e1b0d63f72217f37449362f1ad995e37432a304493d2ae07cc1d.scope/container/memory.events
Nov 29 02:10:19 np0005539564 podman[73944]: 2025-11-29 07:10:19.516401221 +0000 UTC m=+18.010045812 container died c006157a7032e1b0d63f72217f37449362f1ad995e37432a304493d2ae07cc1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 02:10:19 np0005539564 systemd[1]: var-lib-containers-storage-overlay-7036d1da8d516f44ffc13c435321dfc54654e76757a852ec1cccd4cb43336e15-merged.mount: Deactivated successfully.
Nov 29 02:10:19 np0005539564 podman[73944]: 2025-11-29 07:10:19.558539621 +0000 UTC m=+18.052184212 container remove c006157a7032e1b0d63f72217f37449362f1ad995e37432a304493d2ae07cc1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:10:19 np0005539564 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:10:19 np0005539564 systemd[1]: libpod-conmon-c006157a7032e1b0d63f72217f37449362f1ad995e37432a304493d2ae07cc1d.scope: Deactivated successfully.
Nov 29 02:10:19 np0005539564 podman[74027]: 2025-11-29 07:10:19.735076393 +0000 UTC m=+0.052458675 container create 3bd69bd24b370b06f4f596512c05d7d4b6521d7b2bc15e466e547c4d335f28e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_leakey, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:10:19 np0005539564 systemd[1]: Started libpod-conmon-3bd69bd24b370b06f4f596512c05d7d4b6521d7b2bc15e466e547c4d335f28e1.scope.
Nov 29 02:10:19 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:10:19 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9629c79d35145b12d3dceef0b0abfb042ec4c60fd4952760ed9f455e37490f15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:19 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9629c79d35145b12d3dceef0b0abfb042ec4c60fd4952760ed9f455e37490f15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:19 np0005539564 podman[74027]: 2025-11-29 07:10:19.807737833 +0000 UTC m=+0.125120105 container init 3bd69bd24b370b06f4f596512c05d7d4b6521d7b2bc15e466e547c4d335f28e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_leakey, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:10:19 np0005539564 podman[74027]: 2025-11-29 07:10:19.717323665 +0000 UTC m=+0.034705967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:19 np0005539564 podman[74027]: 2025-11-29 07:10:19.816571352 +0000 UTC m=+0.133953664 container start 3bd69bd24b370b06f4f596512c05d7d4b6521d7b2bc15e466e547c4d335f28e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_leakey, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:10:19 np0005539564 podman[74027]: 2025-11-29 07:10:19.821033801 +0000 UTC m=+0.138416173 container attach 3bd69bd24b370b06f4f596512c05d7d4b6521d7b2bc15e466e547c4d335f28e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]: [
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:    {
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:        "available": false,
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:        "ceph_device": false,
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:        "lsm_data": {},
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:        "lvs": [],
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:        "path": "/dev/sr0",
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:        "rejected_reasons": [
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:            "Has a FileSystem",
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:            "Insufficient space (<5GB)"
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:        ],
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:        "sys_api": {
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:            "actuators": null,
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:            "device_nodes": "sr0",
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:            "devname": "sr0",
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:            "human_readable_size": "482.00 KB",
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:            "id_bus": "ata",
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:            "model": "QEMU DVD-ROM",
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:            "nr_requests": "2",
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:            "parent": "/dev/sr0",
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:            "partitions": {},
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:            "path": "/dev/sr0",
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:            "removable": "1",
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:            "rev": "2.5+",
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:            "ro": "0",
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:            "rotational": "1",
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:            "sas_address": "",
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:            "sas_device_handle": "",
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:            "scheduler_mode": "mq-deadline",
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:            "sectors": 0,
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:            "sectorsize": "2048",
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:            "size": 493568.0,
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:            "support_discard": "2048",
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:            "type": "disk",
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:            "vendor": "QEMU"
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:        }
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]:    }
Nov 29 02:10:20 np0005539564 hardcore_leakey[74043]: ]
Nov 29 02:10:20 np0005539564 systemd[1]: libpod-3bd69bd24b370b06f4f596512c05d7d4b6521d7b2bc15e466e547c4d335f28e1.scope: Deactivated successfully.
Nov 29 02:10:20 np0005539564 systemd[1]: libpod-3bd69bd24b370b06f4f596512c05d7d4b6521d7b2bc15e466e547c4d335f28e1.scope: Consumed 1.121s CPU time.
Nov 29 02:10:20 np0005539564 podman[74027]: 2025-11-29 07:10:20.924519494 +0000 UTC m=+1.241901766 container died 3bd69bd24b370b06f4f596512c05d7d4b6521d7b2bc15e466e547c4d335f28e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_leakey, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:10:20 np0005539564 systemd[1]: var-lib-containers-storage-overlay-9629c79d35145b12d3dceef0b0abfb042ec4c60fd4952760ed9f455e37490f15-merged.mount: Deactivated successfully.
Nov 29 02:10:20 np0005539564 podman[74027]: 2025-11-29 07:10:20.982113025 +0000 UTC m=+1.299495307 container remove 3bd69bd24b370b06f4f596512c05d7d4b6521d7b2bc15e466e547c4d335f28e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:10:20 np0005539564 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:10:20 np0005539564 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:10:20 np0005539564 systemd[1]: libpod-conmon-3bd69bd24b370b06f4f596512c05d7d4b6521d7b2bc15e466e547c4d335f28e1.scope: Deactivated successfully.
Nov 29 02:10:25 np0005539564 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:10:25 np0005539564 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:10:26 np0005539564 podman[76957]: 2025-11-29 07:10:26.052878978 +0000 UTC m=+0.051393228 container create 6ce8ed0d5fc3ebd0585f0e32c64dd9de8c5950939b58e490a661f8111176fb26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bartik, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:10:26 np0005539564 systemd[1]: Started libpod-conmon-6ce8ed0d5fc3ebd0585f0e32c64dd9de8c5950939b58e490a661f8111176fb26.scope.
Nov 29 02:10:26 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:10:26 np0005539564 podman[76957]: 2025-11-29 07:10:26.028058746 +0000 UTC m=+0.026573066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:26 np0005539564 podman[76957]: 2025-11-29 07:10:26.127413425 +0000 UTC m=+0.125927645 container init 6ce8ed0d5fc3ebd0585f0e32c64dd9de8c5950939b58e490a661f8111176fb26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bartik, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Nov 29 02:10:26 np0005539564 podman[76957]: 2025-11-29 07:10:26.135826953 +0000 UTC m=+0.134341173 container start 6ce8ed0d5fc3ebd0585f0e32c64dd9de8c5950939b58e490a661f8111176fb26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:10:26 np0005539564 podman[76957]: 2025-11-29 07:10:26.139276248 +0000 UTC m=+0.137790488 container attach 6ce8ed0d5fc3ebd0585f0e32c64dd9de8c5950939b58e490a661f8111176fb26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bartik, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:10:26 np0005539564 beautiful_bartik[76973]: 167 167
Nov 29 02:10:26 np0005539564 systemd[1]: libpod-6ce8ed0d5fc3ebd0585f0e32c64dd9de8c5950939b58e490a661f8111176fb26.scope: Deactivated successfully.
Nov 29 02:10:26 np0005539564 podman[76957]: 2025-11-29 07:10:26.142392745 +0000 UTC m=+0.140907055 container died 6ce8ed0d5fc3ebd0585f0e32c64dd9de8c5950939b58e490a661f8111176fb26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bartik, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:10:26 np0005539564 podman[76957]: 2025-11-29 07:10:26.201433571 +0000 UTC m=+0.199947791 container remove 6ce8ed0d5fc3ebd0585f0e32c64dd9de8c5950939b58e490a661f8111176fb26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bartik, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:10:26 np0005539564 systemd[1]: libpod-conmon-6ce8ed0d5fc3ebd0585f0e32c64dd9de8c5950939b58e490a661f8111176fb26.scope: Deactivated successfully.
Nov 29 02:10:26 np0005539564 systemd[1]: Reloading.
Nov 29 02:10:26 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:10:26 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:10:26 np0005539564 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:10:26 np0005539564 systemd[1]: Reloading.
Nov 29 02:10:26 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:10:26 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:10:26 np0005539564 systemd[1]: Reached target All Ceph clusters and services.
Nov 29 02:10:26 np0005539564 systemd[1]: Reloading.
Nov 29 02:10:26 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:10:26 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:10:26 np0005539564 systemd[1]: Reached target Ceph cluster 38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 02:10:26 np0005539564 systemd[1]: Reloading.
Nov 29 02:10:27 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:10:27 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:10:27 np0005539564 systemd[1]: Reloading.
Nov 29 02:10:27 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:10:27 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:10:27 np0005539564 systemd[1]: Created slice Slice /system/ceph-38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 02:10:27 np0005539564 systemd[1]: Reached target System Time Set.
Nov 29 02:10:27 np0005539564 systemd[1]: Reached target System Time Synchronized.
Nov 29 02:10:27 np0005539564 systemd[1]: Starting Ceph crash.compute-1 for 38a37ed2-442a-5e0d-a69a-881fdd186450...
Nov 29 02:10:27 np0005539564 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:10:27 np0005539564 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:10:27 np0005539564 podman[77230]: 2025-11-29 07:10:27.825582509 +0000 UTC m=+0.058317799 container create 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:10:27 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18cd1f8032add91643d3c7671a91defa81c411c91bf718ba902a090af0b1f801/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:27 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18cd1f8032add91643d3c7671a91defa81c411c91bf718ba902a090af0b1f801/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:27 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18cd1f8032add91643d3c7671a91defa81c411c91bf718ba902a090af0b1f801/merged/etc/ceph/ceph.client.crash.compute-1.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:27 np0005539564 podman[77230]: 2025-11-29 07:10:27.801577667 +0000 UTC m=+0.034312967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:27 np0005539564 podman[77230]: 2025-11-29 07:10:27.910102712 +0000 UTC m=+0.142838032 container init 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Nov 29 02:10:27 np0005539564 podman[77230]: 2025-11-29 07:10:27.919076814 +0000 UTC m=+0.151812104 container start 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:10:27 np0005539564 bash[77230]: 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303
Nov 29 02:10:27 np0005539564 systemd[1]: Started Ceph crash.compute-1 for 38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 02:10:28 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1[77245]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 29 02:10:28 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1[77245]: 2025-11-29T07:10:28.376+0000 7fe1a5db7640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 29 02:10:28 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1[77245]: 2025-11-29T07:10:28.376+0000 7fe1a5db7640 -1 AuthRegistry(0x7fe1a0066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 29 02:10:28 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1[77245]: 2025-11-29T07:10:28.377+0000 7fe1a5db7640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 29 02:10:28 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1[77245]: 2025-11-29T07:10:28.377+0000 7fe1a5db7640 -1 AuthRegistry(0x7fe1a5db6000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 29 02:10:28 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1[77245]: 2025-11-29T07:10:28.379+0000 7fe19f7fe640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 29 02:10:28 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1[77245]: 2025-11-29T07:10:28.379+0000 7fe1a5db7640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 29 02:10:28 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1[77245]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 29 02:10:28 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1[77245]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 29 02:10:28 np0005539564 podman[77402]: 2025-11-29 07:10:28.606655993 +0000 UTC m=+0.041064252 container create 5e67a0d66d6b4b17a5e440d48f54216da92217225a985529825cd4876745b7fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:10:28 np0005539564 systemd[1]: Started libpod-conmon-5e67a0d66d6b4b17a5e440d48f54216da92217225a985529825cd4876745b7fd.scope.
Nov 29 02:10:28 np0005539564 podman[77402]: 2025-11-29 07:10:28.586564218 +0000 UTC m=+0.020972497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:28 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:10:28 np0005539564 podman[77402]: 2025-11-29 07:10:28.706147446 +0000 UTC m=+0.140555735 container init 5e67a0d66d6b4b17a5e440d48f54216da92217225a985529825cd4876745b7fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shtern, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:10:28 np0005539564 podman[77402]: 2025-11-29 07:10:28.714494932 +0000 UTC m=+0.148903161 container start 5e67a0d66d6b4b17a5e440d48f54216da92217225a985529825cd4876745b7fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 02:10:28 np0005539564 podman[77402]: 2025-11-29 07:10:28.718836559 +0000 UTC m=+0.153244818 container attach 5e67a0d66d6b4b17a5e440d48f54216da92217225a985529825cd4876745b7fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:10:28 np0005539564 relaxed_shtern[77418]: 167 167
Nov 29 02:10:28 np0005539564 systemd[1]: libpod-5e67a0d66d6b4b17a5e440d48f54216da92217225a985529825cd4876745b7fd.scope: Deactivated successfully.
Nov 29 02:10:28 np0005539564 podman[77402]: 2025-11-29 07:10:28.722883239 +0000 UTC m=+0.157291498 container died 5e67a0d66d6b4b17a5e440d48f54216da92217225a985529825cd4876745b7fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shtern, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:10:28 np0005539564 systemd[1]: var-lib-containers-storage-overlay-c03fb3a91a69d203166c1f14a5b955da55f473d0fe8c08aba01f0a7afbaa1a8e-merged.mount: Deactivated successfully.
Nov 29 02:10:28 np0005539564 podman[77402]: 2025-11-29 07:10:28.764819993 +0000 UTC m=+0.199228242 container remove 5e67a0d66d6b4b17a5e440d48f54216da92217225a985529825cd4876745b7fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shtern, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:10:28 np0005539564 systemd[1]: libpod-conmon-5e67a0d66d6b4b17a5e440d48f54216da92217225a985529825cd4876745b7fd.scope: Deactivated successfully.
Nov 29 02:10:28 np0005539564 podman[77443]: 2025-11-29 07:10:28.927307418 +0000 UTC m=+0.036896140 container create 70980da759cf95d0b827da613910b3f9438748abded362c22657f01614e3e073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_feynman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 02:10:28 np0005539564 systemd[1]: Started libpod-conmon-70980da759cf95d0b827da613910b3f9438748abded362c22657f01614e3e073.scope.
Nov 29 02:10:28 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:10:29 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4c64fa98d2c8a3615ae6b9ab03c7d128c9e476b091edf74098e711eae423cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:29 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4c64fa98d2c8a3615ae6b9ab03c7d128c9e476b091edf74098e711eae423cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:29 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4c64fa98d2c8a3615ae6b9ab03c7d128c9e476b091edf74098e711eae423cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:29 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4c64fa98d2c8a3615ae6b9ab03c7d128c9e476b091edf74098e711eae423cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:29 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4c64fa98d2c8a3615ae6b9ab03c7d128c9e476b091edf74098e711eae423cd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:29 np0005539564 podman[77443]: 2025-11-29 07:10:28.910548475 +0000 UTC m=+0.020137247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:29 np0005539564 podman[77443]: 2025-11-29 07:10:29.019509481 +0000 UTC m=+0.129098243 container init 70980da759cf95d0b827da613910b3f9438748abded362c22657f01614e3e073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_feynman, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:10:29 np0005539564 podman[77443]: 2025-11-29 07:10:29.025381366 +0000 UTC m=+0.134970098 container start 70980da759cf95d0b827da613910b3f9438748abded362c22657f01614e3e073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:10:29 np0005539564 podman[77443]: 2025-11-29 07:10:29.029529778 +0000 UTC m=+0.139118510 container attach 70980da759cf95d0b827da613910b3f9438748abded362c22657f01614e3e073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:10:29 np0005539564 stupefied_feynman[77460]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:10:29 np0005539564 stupefied_feynman[77460]: --> relative data size: 1.0
Nov 29 02:10:29 np0005539564 stupefied_feynman[77460]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 02:10:29 np0005539564 stupefied_feynman[77460]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 0bd7e18e-b0cb-49d8-9a2b-77b4b562d860
Nov 29 02:10:30 np0005539564 stupefied_feynman[77460]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 02:10:30 np0005539564 lvm[77508]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 02:10:30 np0005539564 lvm[77508]: VG ceph_vg0 finished
Nov 29 02:10:30 np0005539564 stupefied_feynman[77460]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 29 02:10:30 np0005539564 stupefied_feynman[77460]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 29 02:10:30 np0005539564 stupefied_feynman[77460]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 02:10:30 np0005539564 stupefied_feynman[77460]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 29 02:10:30 np0005539564 stupefied_feynman[77460]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 29 02:10:31 np0005539564 stupefied_feynman[77460]: stderr: got monmap epoch 1
Nov 29 02:10:31 np0005539564 stupefied_feynman[77460]: --> Creating keyring file for osd.1
Nov 29 02:10:31 np0005539564 stupefied_feynman[77460]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 29 02:10:31 np0005539564 stupefied_feynman[77460]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 29 02:10:31 np0005539564 stupefied_feynman[77460]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 0bd7e18e-b0cb-49d8-9a2b-77b4b562d860 --setuser ceph --setgroup ceph
Nov 29 02:10:33 np0005539564 stupefied_feynman[77460]: stderr: 2025-11-29T07:10:31.113+0000 7f084735d740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 02:10:33 np0005539564 stupefied_feynman[77460]: stderr: 2025-11-29T07:10:31.113+0000 7f084735d740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 02:10:33 np0005539564 stupefied_feynman[77460]: stderr: 2025-11-29T07:10:31.113+0000 7f084735d740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 02:10:33 np0005539564 stupefied_feynman[77460]: stderr: 2025-11-29T07:10:31.113+0000 7f084735d740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 29 02:10:33 np0005539564 stupefied_feynman[77460]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 29 02:10:33 np0005539564 stupefied_feynman[77460]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 02:10:33 np0005539564 stupefied_feynman[77460]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 29 02:10:33 np0005539564 stupefied_feynman[77460]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 29 02:10:33 np0005539564 stupefied_feynman[77460]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 29 02:10:33 np0005539564 stupefied_feynman[77460]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 02:10:33 np0005539564 stupefied_feynman[77460]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 02:10:33 np0005539564 stupefied_feynman[77460]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 29 02:10:33 np0005539564 stupefied_feynman[77460]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 29 02:10:33 np0005539564 systemd[1]: libpod-70980da759cf95d0b827da613910b3f9438748abded362c22657f01614e3e073.scope: Deactivated successfully.
Nov 29 02:10:33 np0005539564 systemd[1]: libpod-70980da759cf95d0b827da613910b3f9438748abded362c22657f01614e3e073.scope: Consumed 2.570s CPU time.
Nov 29 02:10:33 np0005539564 podman[78416]: 2025-11-29 07:10:33.536579886 +0000 UTC m=+0.025125081 container died 70980da759cf95d0b827da613910b3f9438748abded362c22657f01614e3e073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 02:10:33 np0005539564 systemd[1]: var-lib-containers-storage-overlay-ee4c64fa98d2c8a3615ae6b9ab03c7d128c9e476b091edf74098e711eae423cd-merged.mount: Deactivated successfully.
Nov 29 02:10:33 np0005539564 podman[78416]: 2025-11-29 07:10:33.586135517 +0000 UTC m=+0.074680692 container remove 70980da759cf95d0b827da613910b3f9438748abded362c22657f01614e3e073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_feynman, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:10:33 np0005539564 systemd[1]: libpod-conmon-70980da759cf95d0b827da613910b3f9438748abded362c22657f01614e3e073.scope: Deactivated successfully.
Nov 29 02:10:34 np0005539564 podman[78572]: 2025-11-29 07:10:34.165830218 +0000 UTC m=+0.018716193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:35 np0005539564 podman[78572]: 2025-11-29 07:10:35.933025153 +0000 UTC m=+1.785911128 container create 50a68c1d8c9b0d119b02070679bcbb9b98fa92f3a038ac0af1937cf27abb17f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 02:10:35 np0005539564 systemd[1]: Started libpod-conmon-50a68c1d8c9b0d119b02070679bcbb9b98fa92f3a038ac0af1937cf27abb17f7.scope.
Nov 29 02:10:36 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:10:36 np0005539564 podman[78572]: 2025-11-29 07:10:36.022400396 +0000 UTC m=+1.875286391 container init 50a68c1d8c9b0d119b02070679bcbb9b98fa92f3a038ac0af1937cf27abb17f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_goldstine, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 02:10:36 np0005539564 podman[78572]: 2025-11-29 07:10:36.034224358 +0000 UTC m=+1.887110323 container start 50a68c1d8c9b0d119b02070679bcbb9b98fa92f3a038ac0af1937cf27abb17f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 02:10:36 np0005539564 podman[78572]: 2025-11-29 07:10:36.037392005 +0000 UTC m=+1.890277970 container attach 50a68c1d8c9b0d119b02070679bcbb9b98fa92f3a038ac0af1937cf27abb17f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_goldstine, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:10:36 np0005539564 practical_goldstine[78592]: 167 167
Nov 29 02:10:36 np0005539564 systemd[1]: libpod-50a68c1d8c9b0d119b02070679bcbb9b98fa92f3a038ac0af1937cf27abb17f7.scope: Deactivated successfully.
Nov 29 02:10:36 np0005539564 podman[78572]: 2025-11-29 07:10:36.042046391 +0000 UTC m=+1.894932366 container died 50a68c1d8c9b0d119b02070679bcbb9b98fa92f3a038ac0af1937cf27abb17f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:10:36 np0005539564 systemd[1]: var-lib-containers-storage-overlay-533b3fa34126cb8914ea1e0aa1bd1e40c8cdc7f28ddf3905e300833a669038d9-merged.mount: Deactivated successfully.
Nov 29 02:10:36 np0005539564 podman[78572]: 2025-11-29 07:10:36.080527799 +0000 UTC m=+1.933413784 container remove 50a68c1d8c9b0d119b02070679bcbb9b98fa92f3a038ac0af1937cf27abb17f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:10:36 np0005539564 systemd[1]: libpod-conmon-50a68c1d8c9b0d119b02070679bcbb9b98fa92f3a038ac0af1937cf27abb17f7.scope: Deactivated successfully.
Nov 29 02:10:36 np0005539564 podman[78615]: 2025-11-29 07:10:36.231219533 +0000 UTC m=+0.036022009 container create 4905d4e6e949cb06443d3a02ad8352b98323e41361124a24c2c0658471f853c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goldwasser, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 02:10:36 np0005539564 systemd[1]: Started libpod-conmon-4905d4e6e949cb06443d3a02ad8352b98323e41361124a24c2c0658471f853c9.scope.
Nov 29 02:10:36 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:10:36 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a9282125fc42880f99493cfe8c13fb88b7de3b2484ab7b01ac1974f0838cb8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:36 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a9282125fc42880f99493cfe8c13fb88b7de3b2484ab7b01ac1974f0838cb8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:36 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a9282125fc42880f99493cfe8c13fb88b7de3b2484ab7b01ac1974f0838cb8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:36 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a9282125fc42880f99493cfe8c13fb88b7de3b2484ab7b01ac1974f0838cb8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:36 np0005539564 podman[78615]: 2025-11-29 07:10:36.215673561 +0000 UTC m=+0.020476067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:36 np0005539564 podman[78615]: 2025-11-29 07:10:36.464436013 +0000 UTC m=+0.269238529 container init 4905d4e6e949cb06443d3a02ad8352b98323e41361124a24c2c0658471f853c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goldwasser, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:10:36 np0005539564 podman[78615]: 2025-11-29 07:10:36.47244313 +0000 UTC m=+0.277245656 container start 4905d4e6e949cb06443d3a02ad8352b98323e41361124a24c2c0658471f853c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goldwasser, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 02:10:36 np0005539564 podman[78615]: 2025-11-29 07:10:36.690313001 +0000 UTC m=+0.495115477 container attach 4905d4e6e949cb06443d3a02ad8352b98323e41361124a24c2c0658471f853c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]: {
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:    "1": [
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:        {
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:            "devices": [
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:                "/dev/loop3"
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:            ],
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:            "lv_name": "ceph_lv0",
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:            "lv_size": "7511998464",
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TcwZZo-ZQKq-KSVc-EABB-OaWg-mGPO-rpGp9Z,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0bd7e18e-b0cb-49d8-9a2b-77b4b562d860,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:            "lv_uuid": "TcwZZo-ZQKq-KSVc-EABB-OaWg-mGPO-rpGp9Z",
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:            "name": "ceph_lv0",
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:            "tags": {
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:                "ceph.block_uuid": "TcwZZo-ZQKq-KSVc-EABB-OaWg-mGPO-rpGp9Z",
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:                "ceph.cluster_name": "ceph",
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:                "ceph.crush_device_class": "",
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:                "ceph.encrypted": "0",
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:                "ceph.osd_fsid": "0bd7e18e-b0cb-49d8-9a2b-77b4b562d860",
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:                "ceph.osd_id": "1",
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:                "ceph.type": "block",
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:                "ceph.vdo": "0"
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:            },
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:            "type": "block",
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:            "vg_name": "ceph_vg0"
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:        }
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]:    ]
Nov 29 02:10:37 np0005539564 agitated_goldwasser[78632]: }
Nov 29 02:10:37 np0005539564 systemd[1]: libpod-4905d4e6e949cb06443d3a02ad8352b98323e41361124a24c2c0658471f853c9.scope: Deactivated successfully.
Nov 29 02:10:37 np0005539564 podman[78615]: 2025-11-29 07:10:37.241819737 +0000 UTC m=+1.046622253 container died 4905d4e6e949cb06443d3a02ad8352b98323e41361124a24c2c0658471f853c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:10:37 np0005539564 systemd[1]: var-lib-containers-storage-overlay-e7a9282125fc42880f99493cfe8c13fb88b7de3b2484ab7b01ac1974f0838cb8-merged.mount: Deactivated successfully.
Nov 29 02:10:37 np0005539564 podman[78615]: 2025-11-29 07:10:37.305608 +0000 UTC m=+1.110410476 container remove 4905d4e6e949cb06443d3a02ad8352b98323e41361124a24c2c0658471f853c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 02:10:37 np0005539564 systemd[1]: libpod-conmon-4905d4e6e949cb06443d3a02ad8352b98323e41361124a24c2c0658471f853c9.scope: Deactivated successfully.
Nov 29 02:10:38 np0005539564 podman[78792]: 2025-11-29 07:10:37.927774587 +0000 UTC m=+0.035098457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:38 np0005539564 podman[78792]: 2025-11-29 07:10:38.154103047 +0000 UTC m=+0.261426877 container create cd06283d3c452168a53a27b4ad884e56bef440c9cc3b6662b88e28f1ea70d117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_borg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:10:38 np0005539564 systemd[1]: Started libpod-conmon-cd06283d3c452168a53a27b4ad884e56bef440c9cc3b6662b88e28f1ea70d117.scope.
Nov 29 02:10:38 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:10:38 np0005539564 podman[78792]: 2025-11-29 07:10:38.377730239 +0000 UTC m=+0.485054089 container init cd06283d3c452168a53a27b4ad884e56bef440c9cc3b6662b88e28f1ea70d117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_borg, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:10:38 np0005539564 podman[78792]: 2025-11-29 07:10:38.390260098 +0000 UTC m=+0.497583938 container start cd06283d3c452168a53a27b4ad884e56bef440c9cc3b6662b88e28f1ea70d117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_borg, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:10:38 np0005539564 beautiful_borg[78808]: 167 167
Nov 29 02:10:38 np0005539564 systemd[1]: libpod-cd06283d3c452168a53a27b4ad884e56bef440c9cc3b6662b88e28f1ea70d117.scope: Deactivated successfully.
Nov 29 02:10:38 np0005539564 podman[78792]: 2025-11-29 07:10:38.396229625 +0000 UTC m=+0.503553445 container attach cd06283d3c452168a53a27b4ad884e56bef440c9cc3b6662b88e28f1ea70d117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_borg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:10:38 np0005539564 podman[78792]: 2025-11-29 07:10:38.397092026 +0000 UTC m=+0.504415866 container died cd06283d3c452168a53a27b4ad884e56bef440c9cc3b6662b88e28f1ea70d117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 02:10:38 np0005539564 systemd[1]: var-lib-containers-storage-overlay-a4e99a04e217dc45d4b2cab80a1d887f9ae30ae9fb644da80e9d814bac601e16-merged.mount: Deactivated successfully.
Nov 29 02:10:38 np0005539564 podman[78792]: 2025-11-29 07:10:38.442389273 +0000 UTC m=+0.549713113 container remove cd06283d3c452168a53a27b4ad884e56bef440c9cc3b6662b88e28f1ea70d117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:10:38 np0005539564 systemd[1]: libpod-conmon-cd06283d3c452168a53a27b4ad884e56bef440c9cc3b6662b88e28f1ea70d117.scope: Deactivated successfully.
Nov 29 02:10:38 np0005539564 podman[78842]: 2025-11-29 07:10:38.686732157 +0000 UTC m=+0.043566286 container create b9cdcd57893993d80893a1345095fcb5a9bbc4582f43d294bfeee0670bd2898c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1-activate-test, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:10:38 np0005539564 systemd[1]: Started libpod-conmon-b9cdcd57893993d80893a1345095fcb5a9bbc4582f43d294bfeee0670bd2898c.scope.
Nov 29 02:10:38 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:10:38 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/769b3a8860e6a7282e48b46bc7a6ee97c8453048a4d6b33a438b5e59a30ac3c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:38 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/769b3a8860e6a7282e48b46bc7a6ee97c8453048a4d6b33a438b5e59a30ac3c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:38 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/769b3a8860e6a7282e48b46bc7a6ee97c8453048a4d6b33a438b5e59a30ac3c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:38 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/769b3a8860e6a7282e48b46bc7a6ee97c8453048a4d6b33a438b5e59a30ac3c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:38 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/769b3a8860e6a7282e48b46bc7a6ee97c8453048a4d6b33a438b5e59a30ac3c6/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:38 np0005539564 podman[78842]: 2025-11-29 07:10:38.668202 +0000 UTC m=+0.025036119 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:38 np0005539564 podman[78842]: 2025-11-29 07:10:38.773853925 +0000 UTC m=+0.130688074 container init b9cdcd57893993d80893a1345095fcb5a9bbc4582f43d294bfeee0670bd2898c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1-activate-test, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:10:38 np0005539564 podman[78842]: 2025-11-29 07:10:38.791487899 +0000 UTC m=+0.148322028 container start b9cdcd57893993d80893a1345095fcb5a9bbc4582f43d294bfeee0670bd2898c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1-activate-test, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 29 02:10:38 np0005539564 podman[78842]: 2025-11-29 07:10:38.796109453 +0000 UTC m=+0.152943542 container attach b9cdcd57893993d80893a1345095fcb5a9bbc4582f43d294bfeee0670bd2898c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1-activate-test, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 02:10:39 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1-activate-test[78859]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 29 02:10:39 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1-activate-test[78859]:                            [--no-systemd] [--no-tmpfs]
Nov 29 02:10:39 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1-activate-test[78859]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 29 02:10:39 np0005539564 systemd[1]: libpod-b9cdcd57893993d80893a1345095fcb5a9bbc4582f43d294bfeee0670bd2898c.scope: Deactivated successfully.
Nov 29 02:10:39 np0005539564 podman[78842]: 2025-11-29 07:10:39.474101207 +0000 UTC m=+0.830935296 container died b9cdcd57893993d80893a1345095fcb5a9bbc4582f43d294bfeee0670bd2898c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:10:42 np0005539564 systemd[1]: var-lib-containers-storage-overlay-769b3a8860e6a7282e48b46bc7a6ee97c8453048a4d6b33a438b5e59a30ac3c6-merged.mount: Deactivated successfully.
Nov 29 02:10:42 np0005539564 podman[78842]: 2025-11-29 07:10:42.558147246 +0000 UTC m=+3.914981345 container remove b9cdcd57893993d80893a1345095fcb5a9bbc4582f43d294bfeee0670bd2898c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:10:42 np0005539564 systemd[1]: libpod-conmon-b9cdcd57893993d80893a1345095fcb5a9bbc4582f43d294bfeee0670bd2898c.scope: Deactivated successfully.
Nov 29 02:10:42 np0005539564 systemd[1]: Reloading.
Nov 29 02:10:42 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:10:42 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:10:43 np0005539564 systemd[1]: Reloading.
Nov 29 02:10:43 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:10:43 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:10:43 np0005539564 systemd[1]: Starting Ceph osd.1 for 38a37ed2-442a-5e0d-a69a-881fdd186450...
Nov 29 02:10:43 np0005539564 podman[79022]: 2025-11-29 07:10:43.580023282 +0000 UTC m=+0.042661451 container create 51dfe37f4556891377a823a61a70d02bf2a858aaad71af62294f3e90abb85390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1-activate, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:10:43 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:10:43 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf14abbcf0772734195b8eceb905b82ff111eebf38b098fae83825dd5ad43395/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:43 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf14abbcf0772734195b8eceb905b82ff111eebf38b098fae83825dd5ad43395/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:43 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf14abbcf0772734195b8eceb905b82ff111eebf38b098fae83825dd5ad43395/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:43 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf14abbcf0772734195b8eceb905b82ff111eebf38b098fae83825dd5ad43395/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:43 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf14abbcf0772734195b8eceb905b82ff111eebf38b098fae83825dd5ad43395/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:43 np0005539564 podman[79022]: 2025-11-29 07:10:43.649871446 +0000 UTC m=+0.112509595 container init 51dfe37f4556891377a823a61a70d02bf2a858aaad71af62294f3e90abb85390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1-activate, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:10:43 np0005539564 podman[79022]: 2025-11-29 07:10:43.562590752 +0000 UTC m=+0.025228921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:43 np0005539564 podman[79022]: 2025-11-29 07:10:43.66172862 +0000 UTC m=+0.124366769 container start 51dfe37f4556891377a823a61a70d02bf2a858aaad71af62294f3e90abb85390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1-activate, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 02:10:43 np0005539564 podman[79022]: 2025-11-29 07:10:43.665140746 +0000 UTC m=+0.127778925 container attach 51dfe37f4556891377a823a61a70d02bf2a858aaad71af62294f3e90abb85390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:10:44 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1-activate[79037]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 02:10:44 np0005539564 bash[79022]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 02:10:44 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1-activate[79037]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 02:10:44 np0005539564 bash[79022]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 02:10:44 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1-activate[79037]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 02:10:44 np0005539564 bash[79022]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 02:10:44 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1-activate[79037]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 02:10:44 np0005539564 bash[79022]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 02:10:44 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1-activate[79037]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 29 02:10:44 np0005539564 bash[79022]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 29 02:10:44 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1-activate[79037]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 02:10:44 np0005539564 bash[79022]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 02:10:44 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1-activate[79037]: --> ceph-volume raw activate successful for osd ID: 1
Nov 29 02:10:44 np0005539564 bash[79022]: --> ceph-volume raw activate successful for osd ID: 1
Nov 29 02:10:44 np0005539564 systemd[1]: libpod-51dfe37f4556891377a823a61a70d02bf2a858aaad71af62294f3e90abb85390.scope: Deactivated successfully.
Nov 29 02:10:44 np0005539564 podman[79022]: 2025-11-29 07:10:44.540509652 +0000 UTC m=+1.003147811 container died 51dfe37f4556891377a823a61a70d02bf2a858aaad71af62294f3e90abb85390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 02:10:44 np0005539564 systemd[1]: var-lib-containers-storage-overlay-bf14abbcf0772734195b8eceb905b82ff111eebf38b098fae83825dd5ad43395-merged.mount: Deactivated successfully.
Nov 29 02:10:44 np0005539564 podman[79022]: 2025-11-29 07:10:44.593413729 +0000 UTC m=+1.056051878 container remove 51dfe37f4556891377a823a61a70d02bf2a858aaad71af62294f3e90abb85390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:10:44 np0005539564 podman[79193]: 2025-11-29 07:10:44.776744896 +0000 UTC m=+0.041185870 container create 96efbbbc4edbe47dba665ac7f10db0c6b320d523d5f9a5835edf0af2483b4bed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:10:44 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b86ad5da0b80b4b9f61ca1daf305f2a6ae10bcddd3298e8de38483922679e38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:44 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b86ad5da0b80b4b9f61ca1daf305f2a6ae10bcddd3298e8de38483922679e38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:44 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b86ad5da0b80b4b9f61ca1daf305f2a6ae10bcddd3298e8de38483922679e38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:44 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b86ad5da0b80b4b9f61ca1daf305f2a6ae10bcddd3298e8de38483922679e38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:44 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b86ad5da0b80b4b9f61ca1daf305f2a6ae10bcddd3298e8de38483922679e38/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:44 np0005539564 podman[79193]: 2025-11-29 07:10:44.829360004 +0000 UTC m=+0.093800988 container init 96efbbbc4edbe47dba665ac7f10db0c6b320d523d5f9a5835edf0af2483b4bed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 29 02:10:44 np0005539564 podman[79193]: 2025-11-29 07:10:44.838725708 +0000 UTC m=+0.103166672 container start 96efbbbc4edbe47dba665ac7f10db0c6b320d523d5f9a5835edf0af2483b4bed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 02:10:44 np0005539564 bash[79193]: 96efbbbc4edbe47dba665ac7f10db0c6b320d523d5f9a5835edf0af2483b4bed
Nov 29 02:10:44 np0005539564 podman[79193]: 2025-11-29 07:10:44.756995 +0000 UTC m=+0.021436004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:44 np0005539564 systemd[1]: Started Ceph osd.1 for 38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 02:10:44 np0005539564 ceph-osd[79212]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:10:44 np0005539564 ceph-osd[79212]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 29 02:10:44 np0005539564 ceph-osd[79212]: pidfile_write: ignore empty --pid-file
Nov 29 02:10:44 np0005539564 ceph-osd[79212]: bdev(0x55ba4d00b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 02:10:44 np0005539564 ceph-osd[79212]: bdev(0x55ba4d00b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 02:10:44 np0005539564 ceph-osd[79212]: bdev(0x55ba4d00b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:10:44 np0005539564 ceph-osd[79212]: bdev(0x55ba4d00b800 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:10:44 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 02:10:44 np0005539564 ceph-osd[79212]: bdev(0x55ba4de43800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 02:10:44 np0005539564 ceph-osd[79212]: bdev(0x55ba4de43800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 02:10:44 np0005539564 ceph-osd[79212]: bdev(0x55ba4de43800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:10:44 np0005539564 ceph-osd[79212]: bdev(0x55ba4de43800 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:10:44 np0005539564 ceph-osd[79212]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Nov 29 02:10:44 np0005539564 ceph-osd[79212]: bdev(0x55ba4de43800 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: bdev(0x55ba4d00b800 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: load: jerasure load: lrc 
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: bdev(0x55ba4dec4c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: bdev(0x55ba4dec4c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: bdev(0x55ba4dec4c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: bdev(0x55ba4dec4c00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: bdev(0x55ba4dec4c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 02:10:45 np0005539564 podman[79368]: 2025-11-29 07:10:45.454500005 +0000 UTC m=+0.043049522 container create 777e4a62906d2b2b5521b0fe9bc8badd36795a7c1e2aa28419df7142c59483b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 02:10:45 np0005539564 systemd[1]: Started libpod-conmon-777e4a62906d2b2b5521b0fe9bc8badd36795a7c1e2aa28419df7142c59483b1.scope.
Nov 29 02:10:45 np0005539564 podman[79368]: 2025-11-29 07:10:45.431711733 +0000 UTC m=+0.020261210 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:45 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:10:45 np0005539564 podman[79368]: 2025-11-29 07:10:45.55923309 +0000 UTC m=+0.147782597 container init 777e4a62906d2b2b5521b0fe9bc8badd36795a7c1e2aa28419df7142c59483b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Nov 29 02:10:45 np0005539564 podman[79368]: 2025-11-29 07:10:45.566178384 +0000 UTC m=+0.154727851 container start 777e4a62906d2b2b5521b0fe9bc8badd36795a7c1e2aa28419df7142c59483b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cray, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 02:10:45 np0005539564 podman[79368]: 2025-11-29 07:10:45.570465646 +0000 UTC m=+0.159015133 container attach 777e4a62906d2b2b5521b0fe9bc8badd36795a7c1e2aa28419df7142c59483b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cray, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:10:45 np0005539564 vibrant_cray[79389]: 167 167
Nov 29 02:10:45 np0005539564 systemd[1]: libpod-777e4a62906d2b2b5521b0fe9bc8badd36795a7c1e2aa28419df7142c59483b1.scope: Deactivated successfully.
Nov 29 02:10:45 np0005539564 podman[79368]: 2025-11-29 07:10:45.572777601 +0000 UTC m=+0.161327078 container died 777e4a62906d2b2b5521b0fe9bc8badd36795a7c1e2aa28419df7142c59483b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cray, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:10:45 np0005539564 systemd[1]: var-lib-containers-storage-overlay-9cbbed340d09f059e9084e17421a051451b6ea1f253ff2823c37b28577b60bd6-merged.mount: Deactivated successfully.
Nov 29 02:10:45 np0005539564 podman[79368]: 2025-11-29 07:10:45.622164949 +0000 UTC m=+0.210714456 container remove 777e4a62906d2b2b5521b0fe9bc8badd36795a7c1e2aa28419df7142c59483b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Nov 29 02:10:45 np0005539564 systemd[1]: libpod-conmon-777e4a62906d2b2b5521b0fe9bc8badd36795a7c1e2aa28419df7142c59483b1.scope: Deactivated successfully.
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: bdev(0x55ba4dec4c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: bdev(0x55ba4dec4c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: bdev(0x55ba4dec4c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: bdev(0x55ba4dec4c00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: bdev(0x55ba4dec4c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 02:10:45 np0005539564 podman[79418]: 2025-11-29 07:10:45.864416832 +0000 UTC m=+0.072158921 container create a3984ea33e867c8d524e007176cddc161af593e84a9549c108617c415baee42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 02:10:45 np0005539564 systemd[1]: Started libpod-conmon-a3984ea33e867c8d524e007176cddc161af593e84a9549c108617c415baee42f.scope.
Nov 29 02:10:45 np0005539564 podman[79418]: 2025-11-29 07:10:45.822695498 +0000 UTC m=+0.030437627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:45 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:10:45 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea136949a4c04317d577265eeefd0c88efb4d1896c5ea8dfaef1fad35c6bda1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:45 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea136949a4c04317d577265eeefd0c88efb4d1896c5ea8dfaef1fad35c6bda1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:45 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea136949a4c04317d577265eeefd0c88efb4d1896c5ea8dfaef1fad35c6bda1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:45 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea136949a4c04317d577265eeefd0c88efb4d1896c5ea8dfaef1fad35c6bda1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:45 np0005539564 podman[79418]: 2025-11-29 07:10:45.953707353 +0000 UTC m=+0.161449462 container init a3984ea33e867c8d524e007176cddc161af593e84a9549c108617c415baee42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ardinghelli, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:10:45 np0005539564 podman[79418]: 2025-11-29 07:10:45.960144704 +0000 UTC m=+0.167886783 container start a3984ea33e867c8d524e007176cddc161af593e84a9549c108617c415baee42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ardinghelli, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 02:10:45 np0005539564 podman[79418]: 2025-11-29 07:10:45.967450559 +0000 UTC m=+0.175192648 container attach a3984ea33e867c8d524e007176cddc161af593e84a9549c108617c415baee42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: bdev(0x55ba4dec4c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: bdev(0x55ba4dec4c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: bdev(0x55ba4dec4c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: bdev(0x55ba4dec4c00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: bdev(0x55ba4dec5400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: bdev(0x55ba4dec5400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: bdev(0x55ba4dec5400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: bdev(0x55ba4dec5400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: bluefs mount
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: bluefs mount shared_bdev_used = 0
Nov 29 02:10:45 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: RocksDB version: 7.9.2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Git sha 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: DB SUMMARY
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: DB Session ID:  ZKJZ50E92U1MNZZA71CP
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: CURRENT file:  CURRENT
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                         Options.error_if_exists: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.create_if_missing: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                                     Options.env: 0x55ba4de95c70
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                                Options.info_log: 0x55ba4d088ba0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                              Options.statistics: (nil)
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                               Options.use_fsync: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                              Options.db_log_dir: 
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.write_buffer_manager: 0x55ba4df9e460
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.unordered_write: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                               Options.row_cache: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                              Options.wal_filter: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.two_write_queues: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.wal_compression: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.atomic_flush: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.max_background_jobs: 4
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.max_background_compactions: -1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.max_subcompactions: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.max_open_files: -1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Compression algorithms supported:
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: 	kZSTD supported: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: 	kXpressCompression supported: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: 	kBZip2Compression supported: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: 	kLZ4Compression supported: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: 	kZlibCompression supported: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: 	kSnappyCompression supported: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba4d088600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ba4d07edd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba4d088600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ba4d07edd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba4d088600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ba4d07edd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba4d088600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ba4d07edd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba4d088600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ba4d07edd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba4d088600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ba4d07edd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba4d088600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ba4d07edd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba4d0885c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ba4d07e430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba4d0885c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ba4d07e430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba4d0885c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ba4d07e430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1b001dc6-fb3a-4bfc-8701-d2156800a3a7
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400246021326, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400246021565, "job": 1, "event": "recovery_finished"}
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: freelist init
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: freelist _read_cfg
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: bluefs umount
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: bdev(0x55ba4dec5400 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: bdev(0x55ba4dec5400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: bdev(0x55ba4dec5400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: bdev(0x55ba4dec5400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: bdev(0x55ba4dec5400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: bluefs mount
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: bluefs mount shared_bdev_used = 4718592
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: RocksDB version: 7.9.2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Git sha 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: DB SUMMARY
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: DB Session ID:  ZKJZ50E92U1MNZZA71CO
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: CURRENT file:  CURRENT
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                         Options.error_if_exists: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.create_if_missing: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                                     Options.env: 0x55ba4d0ca690
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                                Options.info_log: 0x55ba4d0898a0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                              Options.statistics: (nil)
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                               Options.use_fsync: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                              Options.db_log_dir: 
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.write_buffer_manager: 0x55ba4df9e460
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.unordered_write: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                               Options.row_cache: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                              Options.wal_filter: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.two_write_queues: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.wal_compression: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.atomic_flush: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.max_background_jobs: 4
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.max_background_compactions: -1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.max_subcompactions: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.max_open_files: -1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Compression algorithms supported:
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: 	kZSTD supported: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: 	kXpressCompression supported: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: 	kBZip2Compression supported: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: 	kLZ4Compression supported: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: 	kZlibCompression supported: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: 	kSnappyCompression supported: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba4d065b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ba4d07f610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba4d065b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ba4d07f610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba4d065b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ba4d07f610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba4d065b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ba4d07f610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba4d065b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ba4d07f610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba4d065b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55ba4d07f610#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba4d065b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55ba4d07f610#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba4d089e40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55ba4d07f770#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba4d089e40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55ba4d07f770#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba4d089e40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55ba4d07f770#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1b001dc6-fb3a-4bfc-8701-d2156800a3a7
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400246293447, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400246305552, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400246, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b001dc6-fb3a-4bfc-8701-d2156800a3a7", "db_session_id": "ZKJZ50E92U1MNZZA71CO", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400246311086, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400246, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b001dc6-fb3a-4bfc-8701-d2156800a3a7", "db_session_id": "ZKJZ50E92U1MNZZA71CO", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400246314843, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400246, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b001dc6-fb3a-4bfc-8701-d2156800a3a7", "db_session_id": "ZKJZ50E92U1MNZZA71CO", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400246323553, "job": 1, "event": "recovery_finished"}
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55ba4d13dc00
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: DB pointer 0x55ba4df87a00
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ba4d07f610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ba4d07f610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 
collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 
0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ba4d07f610#2 capacity: 460.80 MB usag
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: _get_class not permitted to load lua
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: _get_class not permitted to load sdk
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: _get_class not permitted to load test_remote_reads
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: osd.1 0 load_pgs
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: osd.1 0 load_pgs opened 0 pgs
Nov 29 02:10:46 np0005539564 ceph-osd[79212]: osd.1 0 log_to_monitors true
Nov 29 02:10:46 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1[79208]: 2025-11-29T07:10:46.413+0000 7efc3c0ba740 -1 osd.1 0 log_to_monitors true
Nov 29 02:10:46 np0005539564 interesting_ardinghelli[79434]: {
Nov 29 02:10:46 np0005539564 interesting_ardinghelli[79434]:    "0bd7e18e-b0cb-49d8-9a2b-77b4b562d860": {
Nov 29 02:10:46 np0005539564 interesting_ardinghelli[79434]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:10:46 np0005539564 interesting_ardinghelli[79434]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:10:46 np0005539564 interesting_ardinghelli[79434]:        "osd_id": 1,
Nov 29 02:10:46 np0005539564 interesting_ardinghelli[79434]:        "osd_uuid": "0bd7e18e-b0cb-49d8-9a2b-77b4b562d860",
Nov 29 02:10:46 np0005539564 interesting_ardinghelli[79434]:        "type": "bluestore"
Nov 29 02:10:46 np0005539564 interesting_ardinghelli[79434]:    }
Nov 29 02:10:46 np0005539564 interesting_ardinghelli[79434]: }
Nov 29 02:10:46 np0005539564 systemd[1]: libpod-a3984ea33e867c8d524e007176cddc161af593e84a9549c108617c415baee42f.scope: Deactivated successfully.
Nov 29 02:10:46 np0005539564 podman[79418]: 2025-11-29 07:10:46.730469126 +0000 UTC m=+0.938211255 container died a3984ea33e867c8d524e007176cddc161af593e84a9549c108617c415baee42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:10:46 np0005539564 systemd[1]: var-lib-containers-storage-overlay-9ea136949a4c04317d577265eeefd0c88efb4d1896c5ea8dfaef1fad35c6bda1-merged.mount: Deactivated successfully.
Nov 29 02:10:46 np0005539564 podman[79418]: 2025-11-29 07:10:46.79637837 +0000 UTC m=+1.004120459 container remove a3984ea33e867c8d524e007176cddc161af593e84a9549c108617c415baee42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 02:10:46 np0005539564 systemd[1]: libpod-conmon-a3984ea33e867c8d524e007176cddc161af593e84a9549c108617c415baee42f.scope: Deactivated successfully.
Nov 29 02:10:47 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 29 02:10:47 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 29 02:10:48 np0005539564 podman[80100]: 2025-11-29 07:10:48.10288121 +0000 UTC m=+0.062668453 container exec 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 02:10:48 np0005539564 ceph-osd[79212]: osd.1 0 done with init, starting boot process
Nov 29 02:10:48 np0005539564 ceph-osd[79212]: osd.1 0 start_boot
Nov 29 02:10:48 np0005539564 ceph-osd[79212]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 29 02:10:48 np0005539564 ceph-osd[79212]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 29 02:10:48 np0005539564 ceph-osd[79212]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 29 02:10:48 np0005539564 ceph-osd[79212]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 29 02:10:48 np0005539564 ceph-osd[79212]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 29 02:10:48 np0005539564 podman[80100]: 2025-11-29 07:10:48.306428184 +0000 UTC m=+0.266215417 container exec_died 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 29 02:10:49 np0005539564 podman[80292]: 2025-11-29 07:10:49.288314316 +0000 UTC m=+0.083866460 container create d4bedc97e983fee3f58fa917108dc0d6c590d7564cc63c17a1a969d5564ee60d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 02:10:49 np0005539564 podman[80292]: 2025-11-29 07:10:49.241698984 +0000 UTC m=+0.037251148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:49 np0005539564 systemd[1]: Started libpod-conmon-d4bedc97e983fee3f58fa917108dc0d6c590d7564cc63c17a1a969d5564ee60d.scope.
Nov 29 02:10:49 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:10:49 np0005539564 podman[80292]: 2025-11-29 07:10:49.464658805 +0000 UTC m=+0.260210929 container init d4bedc97e983fee3f58fa917108dc0d6c590d7564cc63c17a1a969d5564ee60d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:10:49 np0005539564 podman[80292]: 2025-11-29 07:10:49.472973558 +0000 UTC m=+0.268525662 container start d4bedc97e983fee3f58fa917108dc0d6c590d7564cc63c17a1a969d5564ee60d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_murdock, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:10:49 np0005539564 frosty_murdock[80308]: 167 167
Nov 29 02:10:49 np0005539564 systemd[1]: libpod-d4bedc97e983fee3f58fa917108dc0d6c590d7564cc63c17a1a969d5564ee60d.scope: Deactivated successfully.
Nov 29 02:10:49 np0005539564 conmon[80308]: conmon d4bedc97e983fee3f58f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d4bedc97e983fee3f58fa917108dc0d6c590d7564cc63c17a1a969d5564ee60d.scope/container/memory.events
Nov 29 02:10:49 np0005539564 podman[80292]: 2025-11-29 07:10:49.522138182 +0000 UTC m=+0.317690396 container attach d4bedc97e983fee3f58fa917108dc0d6c590d7564cc63c17a1a969d5564ee60d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_murdock, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 29 02:10:49 np0005539564 podman[80292]: 2025-11-29 07:10:49.523547621 +0000 UTC m=+0.319099725 container died d4bedc97e983fee3f58fa917108dc0d6c590d7564cc63c17a1a969d5564ee60d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_murdock, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 02:10:49 np0005539564 systemd[1]: var-lib-containers-storage-overlay-ec03116fdcab5c53d18388c84cae524be1784a5fcca611c81a4d0a060d27077a-merged.mount: Deactivated successfully.
Nov 29 02:10:49 np0005539564 podman[80292]: 2025-11-29 07:10:49.719725268 +0000 UTC m=+0.515277372 container remove d4bedc97e983fee3f58fa917108dc0d6c590d7564cc63c17a1a969d5564ee60d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_murdock, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 02:10:49 np0005539564 systemd[1]: libpod-conmon-d4bedc97e983fee3f58fa917108dc0d6c590d7564cc63c17a1a969d5564ee60d.scope: Deactivated successfully.
Nov 29 02:10:49 np0005539564 podman[80333]: 2025-11-29 07:10:49.941122853 +0000 UTC m=+0.071911814 container create ceeefafb539cf9f7c2e9a04fc51163e11903a45edf789a4d4a393d9ed743520d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_taussig, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 02:10:49 np0005539564 podman[80333]: 2025-11-29 07:10:49.894375599 +0000 UTC m=+0.025164589 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:50 np0005539564 systemd[1]: Started libpod-conmon-ceeefafb539cf9f7c2e9a04fc51163e11903a45edf789a4d4a393d9ed743520d.scope.
Nov 29 02:10:50 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:10:50 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64921df6c9442530ed0c5ff7d41184ff8e7ae689046c35924a474c4559390a94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:50 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64921df6c9442530ed0c5ff7d41184ff8e7ae689046c35924a474c4559390a94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:50 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64921df6c9442530ed0c5ff7d41184ff8e7ae689046c35924a474c4559390a94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:50 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64921df6c9442530ed0c5ff7d41184ff8e7ae689046c35924a474c4559390a94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:50 np0005539564 podman[80333]: 2025-11-29 07:10:50.067567499 +0000 UTC m=+0.198356469 container init ceeefafb539cf9f7c2e9a04fc51163e11903a45edf789a4d4a393d9ed743520d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:10:50 np0005539564 podman[80333]: 2025-11-29 07:10:50.083190289 +0000 UTC m=+0.213979259 container start ceeefafb539cf9f7c2e9a04fc51163e11903a45edf789a4d4a393d9ed743520d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_taussig, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 02:10:50 np0005539564 podman[80333]: 2025-11-29 07:10:50.108668465 +0000 UTC m=+0.239457465 container attach ceeefafb539cf9f7c2e9a04fc51163e11903a45edf789a4d4a393d9ed743520d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_taussig, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]: [
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:    {
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:        "available": false,
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:        "ceph_device": false,
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:        "lsm_data": {},
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:        "lvs": [],
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:        "path": "/dev/sr0",
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:        "rejected_reasons": [
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:            "Insufficient space (<5GB)",
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:            "Has a FileSystem"
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:        ],
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:        "sys_api": {
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:            "actuators": null,
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:            "device_nodes": "sr0",
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:            "devname": "sr0",
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:            "human_readable_size": "482.00 KB",
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:            "id_bus": "ata",
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:            "model": "QEMU DVD-ROM",
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:            "nr_requests": "2",
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:            "parent": "/dev/sr0",
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:            "partitions": {},
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:            "path": "/dev/sr0",
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:            "removable": "1",
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:            "rev": "2.5+",
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:            "ro": "0",
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:            "rotational": "1",
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:            "sas_address": "",
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:            "sas_device_handle": "",
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:            "scheduler_mode": "mq-deadline",
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:            "sectors": 0,
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:            "sectorsize": "2048",
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:            "size": 493568.0,
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:            "support_discard": "2048",
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:            "type": "disk",
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:            "vendor": "QEMU"
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:        }
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]:    }
Nov 29 02:10:51 np0005539564 wizardly_taussig[80349]: ]
Nov 29 02:10:51 np0005539564 systemd[1]: libpod-ceeefafb539cf9f7c2e9a04fc51163e11903a45edf789a4d4a393d9ed743520d.scope: Deactivated successfully.
Nov 29 02:10:51 np0005539564 systemd[1]: libpod-ceeefafb539cf9f7c2e9a04fc51163e11903a45edf789a4d4a393d9ed743520d.scope: Consumed 1.122s CPU time.
Nov 29 02:10:51 np0005539564 conmon[80349]: conmon ceeefafb539cf9f7c2e9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ceeefafb539cf9f7c2e9a04fc51163e11903a45edf789a4d4a393d9ed743520d.scope/container/memory.events
Nov 29 02:10:51 np0005539564 podman[81368]: 2025-11-29 07:10:51.242064357 +0000 UTC m=+0.025945530 container died ceeefafb539cf9f7c2e9a04fc51163e11903a45edf789a4d4a393d9ed743520d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 02:10:51 np0005539564 systemd[1]: var-lib-containers-storage-overlay-64921df6c9442530ed0c5ff7d41184ff8e7ae689046c35924a474c4559390a94-merged.mount: Deactivated successfully.
Nov 29 02:10:51 np0005539564 podman[81368]: 2025-11-29 07:10:51.369698247 +0000 UTC m=+0.153579410 container remove ceeefafb539cf9f7c2e9a04fc51163e11903a45edf789a4d4a393d9ed743520d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:10:51 np0005539564 systemd[1]: libpod-conmon-ceeefafb539cf9f7c2e9a04fc51163e11903a45edf789a4d4a393d9ed743520d.scope: Deactivated successfully.
Nov 29 02:10:52 np0005539564 ceph-osd[79212]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 25.901 iops: 6630.781 elapsed_sec: 0.452
Nov 29 02:10:52 np0005539564 ceph-osd[79212]: log_channel(cluster) log [WRN] : OSD bench result of 6630.780675 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 02:10:52 np0005539564 ceph-osd[79212]: osd.1 0 waiting for initial osdmap
Nov 29 02:10:52 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1[79208]: 2025-11-29T07:10:52.983+0000 7efc3803a640 -1 osd.1 0 waiting for initial osdmap
Nov 29 02:10:52 np0005539564 ceph-osd[79212]: osd.1 10 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 29 02:10:52 np0005539564 ceph-osd[79212]: osd.1 10 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 29 02:10:52 np0005539564 ceph-osd[79212]: osd.1 10 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 29 02:10:52 np0005539564 ceph-osd[79212]: osd.1 10 check_osdmap_features require_osd_release unknown -> reef
Nov 29 02:10:53 np0005539564 ceph-osd[79212]: osd.1 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 02:10:53 np0005539564 ceph-osd[79212]: osd.1 10 set_numa_affinity not setting numa affinity
Nov 29 02:10:53 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1[79208]: 2025-11-29T07:10:53.006+0000 7efc33662640 -1 osd.1 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 02:10:53 np0005539564 ceph-osd[79212]: osd.1 10 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 29 02:10:53 np0005539564 ceph-osd[79212]: osd.1 11 state: booting -> active
Nov 29 02:10:53 np0005539564 ceph-osd[79212]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 29 02:10:53 np0005539564 ceph-osd[79212]: osd.1 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 29 02:10:53 np0005539564 ceph-osd[79212]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 29 02:10:53 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 11 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=11) [1] r=0 lpr=11 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:10:54 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=11/12 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=11) [1] r=0 lpr=11 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:12 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 14 pg[2.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:13 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 15 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 17 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=17 pruub=12.960441589s) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active pruub 43.430706024s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=17 pruub=12.960441589s) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown pruub 43.430706024s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.3( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.7( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.5( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.6( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.4( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.1( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.a( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.b( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.2( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.8( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.9( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.12( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.13( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.10( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.11( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.14( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.15( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.e( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.f( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.c( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.d( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.16( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.1a( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.1b( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.18( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.17( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.19( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.1e( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.1f( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.1c( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 18 pg[2.1d( empty local-lis/les=14/15 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.1e( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.1f( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.1c( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.b( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.a( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.1d( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.6( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.8( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.1( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.5( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.4( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.7( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.2( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.3( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.d( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.c( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.10( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.e( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.f( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.12( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.16( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.14( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.13( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.11( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.18( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.17( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.15( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.19( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.9( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.1b( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.1a( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 19 pg[2.0( empty local-lis/les=17/19 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=17) [1] r=0 lpr=17 pi=[14,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:22 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Nov 29 02:11:22 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Nov 29 02:11:22 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 24 pg[7.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:23 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 29 02:11:23 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 29 02:11:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 25 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 27 pg[2.9( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.473842621s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 active pruub 55.495552063s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 27 pg[2.1e( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.467379570s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 active pruub 55.489097595s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 27 pg[2.a( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.467487335s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 active pruub 55.489215851s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 27 pg[2.6( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.472659111s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 active pruub 55.494426727s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 29 pg[2.a( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.466651917s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.489215851s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 29 pg[2.6( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.471759796s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.494426727s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 27 pg[2.1f( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.466395378s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 active pruub 55.489173889s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 29 pg[2.9( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.472818375s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.495552063s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 29 pg[2.1e( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.466229439s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.489097595s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 29 pg[2.1f( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.466159821s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.489173889s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 27 pg[2.4( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.471228600s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 active pruub 55.494441986s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 27 pg[2.1( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.471487999s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 active pruub 55.494758606s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 29 pg[2.4( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.471140862s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.494441986s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 29 pg[2.1( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.471413612s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.494758606s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 27 pg[2.d( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.471796989s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 active pruub 55.495166779s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 27 pg[2.e( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.471685410s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 active pruub 55.495204926s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 27 pg[2.10( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.471629143s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 active pruub 55.495197296s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 27 pg[2.c( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.471632957s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 active pruub 55.495182037s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 29 pg[2.e( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.471630096s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.495204926s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 29 pg[2.d( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.471594810s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.495166779s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 27 pg[2.13( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.471669197s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 active pruub 55.495368958s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 27 pg[2.15( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.471726418s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 active pruub 55.495559692s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 27 pg[2.19( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.471523285s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 active pruub 55.495540619s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 29 pg[2.c( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.471167564s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.495182037s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 29 pg[2.10( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.471014023s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.495197296s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 29 pg[2.13( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.471199989s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.495368958s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 29 pg[2.15( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.471269608s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.495559692s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 29 pg[2.19( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.471459389s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.495540619s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 27 pg[2.1b( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.471261978s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 active pruub 55.495655060s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:11:28 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 29 pg[2.1b( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27 pruub=13.471223831s) [0] r=-1 lpr=27 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.495655060s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:11:32 np0005539564 podman[81530]: 2025-11-29 07:11:32.94495797 +0000 UTC m=+0.047618440 container create 60a415ee55593eea5265613040f44a8aff3714058aa1daae122270b1fe0efaa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:11:32 np0005539564 systemd[1]: Started libpod-conmon-60a415ee55593eea5265613040f44a8aff3714058aa1daae122270b1fe0efaa3.scope.
Nov 29 02:11:33 np0005539564 podman[81530]: 2025-11-29 07:11:32.923642851 +0000 UTC m=+0.026303361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:11:33 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:11:33 np0005539564 podman[81530]: 2025-11-29 07:11:33.043856411 +0000 UTC m=+0.146516961 container init 60a415ee55593eea5265613040f44a8aff3714058aa1daae122270b1fe0efaa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:11:33 np0005539564 podman[81530]: 2025-11-29 07:11:33.054666505 +0000 UTC m=+0.157327005 container start 60a415ee55593eea5265613040f44a8aff3714058aa1daae122270b1fe0efaa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_grothendieck, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:11:33 np0005539564 magical_grothendieck[81546]: 167 167
Nov 29 02:11:33 np0005539564 systemd[1]: libpod-60a415ee55593eea5265613040f44a8aff3714058aa1daae122270b1fe0efaa3.scope: Deactivated successfully.
Nov 29 02:11:33 np0005539564 podman[81530]: 2025-11-29 07:11:33.06017455 +0000 UTC m=+0.162835050 container attach 60a415ee55593eea5265613040f44a8aff3714058aa1daae122270b1fe0efaa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 02:11:33 np0005539564 podman[81530]: 2025-11-29 07:11:33.061482938 +0000 UTC m=+0.164143448 container died 60a415ee55593eea5265613040f44a8aff3714058aa1daae122270b1fe0efaa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 02:11:33 np0005539564 systemd[1]: var-lib-containers-storage-overlay-d4b670eaf67d4dca58c7687b0479838de8ea50f083c75ed75a3de80c0fdbf153-merged.mount: Deactivated successfully.
Nov 29 02:11:33 np0005539564 podman[81530]: 2025-11-29 07:11:33.109008994 +0000 UTC m=+0.211669464 container remove 60a415ee55593eea5265613040f44a8aff3714058aa1daae122270b1fe0efaa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_grothendieck, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:33 np0005539564 systemd[1]: libpod-conmon-60a415ee55593eea5265613040f44a8aff3714058aa1daae122270b1fe0efaa3.scope: Deactivated successfully.
Nov 29 02:11:33 np0005539564 podman[81569]: 2025-11-29 07:11:33.182620764 +0000 UTC m=+0.044460492 container create e2694fc97ef6397056f3eff430a02b66f5aae356b240941f1f873d5937bc636c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_brown, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 02:11:33 np0005539564 systemd[1]: Started libpod-conmon-e2694fc97ef6397056f3eff430a02b66f5aae356b240941f1f873d5937bc636c.scope.
Nov 29 02:11:33 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:11:33 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7fc3c796bbeb450d901321bd006ca59c89d38eb920658671daa020d57d1d2a/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:33 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7fc3c796bbeb450d901321bd006ca59c89d38eb920658671daa020d57d1d2a/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:33 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7fc3c796bbeb450d901321bd006ca59c89d38eb920658671daa020d57d1d2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:33 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7fc3c796bbeb450d901321bd006ca59c89d38eb920658671daa020d57d1d2a/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:33 np0005539564 podman[81569]: 2025-11-29 07:11:33.162670923 +0000 UTC m=+0.024510691 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:11:33 np0005539564 podman[81569]: 2025-11-29 07:11:33.530392353 +0000 UTC m=+0.392232161 container init e2694fc97ef6397056f3eff430a02b66f5aae356b240941f1f873d5937bc636c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_brown, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:33 np0005539564 podman[81569]: 2025-11-29 07:11:33.542299258 +0000 UTC m=+0.404138996 container start e2694fc97ef6397056f3eff430a02b66f5aae356b240941f1f873d5937bc636c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_brown, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 29 02:11:33 np0005539564 podman[81569]: 2025-11-29 07:11:33.545724124 +0000 UTC m=+0.407563942 container attach e2694fc97ef6397056f3eff430a02b66f5aae356b240941f1f873d5937bc636c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_brown, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:11:33 np0005539564 systemd[1]: libpod-e2694fc97ef6397056f3eff430a02b66f5aae356b240941f1f873d5937bc636c.scope: Deactivated successfully.
Nov 29 02:11:33 np0005539564 podman[81569]: 2025-11-29 07:11:33.658825105 +0000 UTC m=+0.520664873 container died e2694fc97ef6397056f3eff430a02b66f5aae356b240941f1f873d5937bc636c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_brown, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:11:33 np0005539564 systemd[1]: var-lib-containers-storage-overlay-5d7fc3c796bbeb450d901321bd006ca59c89d38eb920658671daa020d57d1d2a-merged.mount: Deactivated successfully.
Nov 29 02:11:33 np0005539564 podman[81569]: 2025-11-29 07:11:33.706320411 +0000 UTC m=+0.568160149 container remove e2694fc97ef6397056f3eff430a02b66f5aae356b240941f1f873d5937bc636c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_brown, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:11:33 np0005539564 systemd[1]: libpod-conmon-e2694fc97ef6397056f3eff430a02b66f5aae356b240941f1f873d5937bc636c.scope: Deactivated successfully.
Nov 29 02:11:33 np0005539564 systemd[1]: Reloading.
Nov 29 02:11:33 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:11:33 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:11:34 np0005539564 systemd[1]: Reloading.
Nov 29 02:11:34 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:11:34 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:11:34 np0005539564 systemd[1]: Starting Ceph mon.compute-1 for 38a37ed2-442a-5e0d-a69a-881fdd186450...
Nov 29 02:11:34 np0005539564 podman[81750]: 2025-11-29 07:11:34.558584747 +0000 UTC m=+0.038526394 container create e2ad297c4bdf6a6082cf8683eaad6a5e6c59b7d6117a028b532a9ba5d4c24799 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 02:11:34 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62e524094a351d0dfa189336985e80140249938c9b13a0acd76112ae884c3a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:34 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62e524094a351d0dfa189336985e80140249938c9b13a0acd76112ae884c3a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:34 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62e524094a351d0dfa189336985e80140249938c9b13a0acd76112ae884c3a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:34 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62e524094a351d0dfa189336985e80140249938c9b13a0acd76112ae884c3a5/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:34 np0005539564 podman[81750]: 2025-11-29 07:11:34.62910468 +0000 UTC m=+0.109046357 container init e2ad297c4bdf6a6082cf8683eaad6a5e6c59b7d6117a028b532a9ba5d4c24799 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-1, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:11:34 np0005539564 podman[81750]: 2025-11-29 07:11:34.540929621 +0000 UTC m=+0.020871288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:11:34 np0005539564 podman[81750]: 2025-11-29 07:11:34.644231616 +0000 UTC m=+0.124173273 container start e2ad297c4bdf6a6082cf8683eaad6a5e6c59b7d6117a028b532a9ba5d4c24799 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-1, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 02:11:34 np0005539564 bash[81750]: e2ad297c4bdf6a6082cf8683eaad6a5e6c59b7d6117a028b532a9ba5d4c24799
Nov 29 02:11:34 np0005539564 systemd[1]: Started Ceph mon.compute-1 for 38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: pidfile_write: ignore empty --pid-file
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: load: jerasure load: lrc 
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: RocksDB version: 7.9.2
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: Git sha 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: DB SUMMARY
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: DB Session ID:  LBPX4GW5MUJF8UJGE88L
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: CURRENT file:  CURRENT
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-1/store.db dir, Total Num: 0, files: 
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-1/store.db: 000004.log size: 511 ; 
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                         Options.error_if_exists: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                       Options.create_if_missing: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                                     Options.env: 0x558dc66c0c40
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                                Options.info_log: 0x558dc7320fc0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                              Options.statistics: (nil)
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                               Options.use_fsync: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                              Options.db_log_dir: 
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                                 Options.wal_dir: 
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                    Options.write_buffer_manager: 0x558dc7330b40
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                  Options.unordered_write: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                               Options.row_cache: None
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                              Options.wal_filter: None
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:             Options.two_write_queues: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:             Options.wal_compression: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:             Options.atomic_flush: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:             Options.max_background_jobs: 2
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:             Options.max_background_compactions: -1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:             Options.max_subcompactions: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:             Options.max_total_wal_size: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                          Options.max_open_files: -1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:       Options.compaction_readahead_size: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: Compression algorithms supported:
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: #011kZSTD supported: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: #011kXpressCompression supported: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: #011kBZip2Compression supported: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: #011kLZ4Compression supported: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: #011kZlibCompression supported: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: #011kSnappyCompression supported: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:           Options.merge_operator: 
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:        Options.compaction_filter: None
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558dc7320c00)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558dc73191f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:        Options.write_buffer_size: 33554432
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:  Options.max_write_buffer_number: 2
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:          Options.compression: NoCompression
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:             Options.num_levels: 7
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 9b003236-2c9b-47ac-982a-c4196705f81c
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400294703954, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400294706034, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400294706159, "job": 1, "event": "recovery_finished"}
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558dc7342e00
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: DB pointer 0x558dc744a000
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1 does not exist in monmap, will attempt to join an existing cluster
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558dc73191f0#2 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.64 KB,0.00012219%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: using public_addr v2:192.168.122.101:0/0 -> [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: starting mon.compute-1 rank -1 at public addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] at bind addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-1 fsid 38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(???) e0 preinit fsid 38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).mds e1 new map
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e8 e8: 2 total, 0 up, 2 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e9 e9: 2 total, 0 up, 2 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e10 e10: 2 total, 1 up, 2 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e11 e11: 2 total, 2 up, 2 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e12 e12: 2 total, 2 up, 2 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e13 e13: 2 total, 2 up, 2 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e14 e14: 2 total, 2 up, 2 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e15 e15: 2 total, 2 up, 2 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e16 e16: 2 total, 2 up, 2 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e17 e17: 2 total, 2 up, 2 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e18 e18: 2 total, 2 up, 2 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e19 e19: 2 total, 2 up, 2 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e20 e20: 2 total, 2 up, 2 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e21 e21: 2 total, 2 up, 2 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e22 e22: 2 total, 2 up, 2 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e23 e23: 2 total, 2 up, 2 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e24 e24: 2 total, 2 up, 2 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e25 e25: 2 total, 2 up, 2 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e26 e26: 2 total, 2 up, 2 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e27 e27: 2 total, 2 up, 2 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e28 e28: 2 total, 2 up, 2 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e29 e29: 2 total, 2 up, 2 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e30 e30: 2 total, 2 up, 2 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e31 e31: 2 total, 2 up, 2 in
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e31 crush map has features 3314933000852226048, adjusting msgr requires
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e31 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e31 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).osd e31 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/3001830821' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/2863789023' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/2863789023' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/2884731350' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/2884731350' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: Updating compute-2:/etc/ceph/ceph.conf
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/4139808608' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: Updating compute-2:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/4139808608' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/653577093' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/653577093' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: Updating compute-2:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.client.admin.keyring
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/1575948781' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/1575948781' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: Deploying daemon mon.compute-2 on compute-2
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/2619282733' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/2619282733' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/2522884032' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/2522884032' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 29 02:11:34 np0005539564 ceph-mon[81769]: mon.compute-1@-1(synchronizing).paxosservice(auth 1..7) refresh upgraded, format 0 -> 3
Nov 29 02:11:36 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.3 deep-scrub starts
Nov 29 02:11:36 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.3 deep-scrub ok
Nov 29 02:11:37 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.5 deep-scrub starts
Nov 29 02:11:37 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.5 deep-scrub ok
Nov 29 02:11:38 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Nov 29 02:11:38 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Nov 29 02:11:39 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.8 deep-scrub starts
Nov 29 02:11:39 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.8 deep-scrub ok
Nov 29 02:11:40 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 29 02:11:40 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 29 02:11:40 np0005539564 ceph-mon[81769]: mon.compute-1@-1(probing) e3  my rank is now 2 (was -1)
Nov 29 02:11:40 np0005539564 ceph-mon[81769]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Nov 29 02:11:40 np0005539564 ceph-mon[81769]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 29 02:11:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e32 e32: 2 total, 2 up, 2 in
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: Deploying daemon mon.compute-1 on compute-1
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/2395674344' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: mon.compute-0 calling monitor election
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: mon.compute-2 calling monitor election
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled
Nov 29 02:11:44 np0005539564 ceph-mon[81769]:    application not enabled on pool 'cephfs.cephfs.meta'
Nov 29 02:11:44 np0005539564 ceph-mon[81769]:    application not enabled on pool 'cephfs.cephfs.data'
Nov 29 02:11:44 np0005539564 ceph-mon[81769]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.vyxqrz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/2395674344' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.vyxqrz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: Deploying daemon mgr.compute-2.vyxqrz on compute-2
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_auth_request failed to assign global_id
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: mgrc update_daemon_metadata mon.compute-1 metadata {addrs=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,created_at=2025-11-29T07:11:33.600531Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-1,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_auth_request failed to assign global_id
Nov 29 02:11:44 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.f scrub starts
Nov 29 02:11:44 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.f scrub ok
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: mon.compute-0 calling monitor election
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: mon.compute-2 calling monitor election
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: mon.compute-1 calling monitor election
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: [WRN] POOL_APP_NOT_ENABLED: 1 pool(s) do not have an application enabled
Nov 29 02:11:44 np0005539564 ceph-mon[81769]:    application not enabled on pool 'cephfs.cephfs.data'
Nov 29 02:11:44 np0005539564 ceph-mon[81769]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e32 _set_new_cache_sizes cache_size:1019935674 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:11:45 np0005539564 podman[81948]: 2025-11-29 07:11:45.169175999 +0000 UTC m=+0.030431087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:11:45 np0005539564 podman[81948]: 2025-11-29 07:11:45.407485931 +0000 UTC m=+0.268741059 container create 7edaef59e06ac8e98bf6f1903d4029eb656650d8a9592f80cb6538db2a7370e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_driscoll, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 02:11:45 np0005539564 systemd[1]: Started libpod-conmon-7edaef59e06ac8e98bf6f1903d4029eb656650d8a9592f80cb6538db2a7370e9.scope.
Nov 29 02:11:45 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:45 np0005539564 ceph-mon[81769]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 02:11:45 np0005539564 ceph-mon[81769]: Cluster is now healthy
Nov 29 02:11:45 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:45 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.jjnjed", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:11:45 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.jjnjed", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 02:11:45 np0005539564 ceph-mon[81769]: Deploying daemon mgr.compute-1.jjnjed on compute-1
Nov 29 02:11:45 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/2616467829' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 02:11:45 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/2616467829' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 02:11:45 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:11:45 np0005539564 podman[81948]: 2025-11-29 07:11:45.50632132 +0000 UTC m=+0.367576428 container init 7edaef59e06ac8e98bf6f1903d4029eb656650d8a9592f80cb6538db2a7370e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_driscoll, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:11:45 np0005539564 podman[81948]: 2025-11-29 07:11:45.512407611 +0000 UTC m=+0.373662699 container start 7edaef59e06ac8e98bf6f1903d4029eb656650d8a9592f80cb6538db2a7370e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_driscoll, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:45 np0005539564 podman[81948]: 2025-11-29 07:11:45.515979651 +0000 UTC m=+0.377234739 container attach 7edaef59e06ac8e98bf6f1903d4029eb656650d8a9592f80cb6538db2a7370e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_driscoll, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 02:11:45 np0005539564 vibrant_driscoll[81962]: 167 167
Nov 29 02:11:45 np0005539564 systemd[1]: libpod-7edaef59e06ac8e98bf6f1903d4029eb656650d8a9592f80cb6538db2a7370e9.scope: Deactivated successfully.
Nov 29 02:11:45 np0005539564 podman[81948]: 2025-11-29 07:11:45.51841074 +0000 UTC m=+0.379665828 container died 7edaef59e06ac8e98bf6f1903d4029eb656650d8a9592f80cb6538db2a7370e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_driscoll, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:11:45 np0005539564 systemd[1]: var-lib-containers-storage-overlay-e71ba94bec016cc3118afa04d7a2099a76c472f5de989b8671e8122671f60c0a-merged.mount: Deactivated successfully.
Nov 29 02:11:45 np0005539564 podman[81948]: 2025-11-29 07:11:45.55505211 +0000 UTC m=+0.416307198 container remove 7edaef59e06ac8e98bf6f1903d4029eb656650d8a9592f80cb6538db2a7370e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_driscoll, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:45 np0005539564 systemd[1]: libpod-conmon-7edaef59e06ac8e98bf6f1903d4029eb656650d8a9592f80cb6538db2a7370e9.scope: Deactivated successfully.
Nov 29 02:11:45 np0005539564 systemd[1]: Reloading.
Nov 29 02:11:45 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:11:45 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:11:45 np0005539564 systemd[1]: Reloading.
Nov 29 02:11:46 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:11:46 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:11:46 np0005539564 systemd[1]: Starting Ceph mgr.compute-1.jjnjed for 38a37ed2-442a-5e0d-a69a-881fdd186450...
Nov 29 02:11:46 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 29 02:11:46 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 29 02:11:46 np0005539564 podman[82105]: 2025-11-29 07:11:46.554024603 +0000 UTC m=+0.050531362 container create 4bf4c7b6084ea62505d6c75a74296c4b78ab75c7ef5f3d47a2e79112df632705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 02:11:46 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19b0916dae1e69f6608ae45b4d4c2829d0f6c92e4a4a9e469a21527c0eb4bac6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:46 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19b0916dae1e69f6608ae45b4d4c2829d0f6c92e4a4a9e469a21527c0eb4bac6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:46 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19b0916dae1e69f6608ae45b4d4c2829d0f6c92e4a4a9e469a21527c0eb4bac6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:46 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19b0916dae1e69f6608ae45b4d4c2829d0f6c92e4a4a9e469a21527c0eb4bac6/merged/var/lib/ceph/mgr/ceph-compute-1.jjnjed supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:46 np0005539564 podman[82105]: 2025-11-29 07:11:46.617363844 +0000 UTC m=+0.113870603 container init 4bf4c7b6084ea62505d6c75a74296c4b78ab75c7ef5f3d47a2e79112df632705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 02:11:46 np0005539564 podman[82105]: 2025-11-29 07:11:46.529830232 +0000 UTC m=+0.026337061 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:11:46 np0005539564 podman[82105]: 2025-11-29 07:11:46.631376228 +0000 UTC m=+0.127882967 container start 4bf4c7b6084ea62505d6c75a74296c4b78ab75c7ef5f3d47a2e79112df632705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 02:11:46 np0005539564 bash[82105]: 4bf4c7b6084ea62505d6c75a74296c4b78ab75c7ef5f3d47a2e79112df632705
Nov 29 02:11:46 np0005539564 systemd[1]: Started Ceph mgr.compute-1.jjnjed for 38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 02:11:46 np0005539564 ceph-mgr[82125]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:11:46 np0005539564 ceph-mgr[82125]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 29 02:11:46 np0005539564 ceph-mgr[82125]: pidfile_write: ignore empty --pid-file
Nov 29 02:11:46 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'alerts'
Nov 29 02:11:47 np0005539564 ceph-mgr[82125]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 02:11:47 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'balancer'
Nov 29 02:11:47 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]: 2025-11-29T07:11:47.080+0000 7f1d66040140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 02:11:47 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/4133094199' entity='client.admin' 
Nov 29 02:11:47 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:47 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:47 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:47 np0005539564 ceph-mgr[82125]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 02:11:47 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]: 2025-11-29T07:11:47.329+0000 7f1d66040140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 02:11:47 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'cephadm'
Nov 29 02:11:49 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'crash'
Nov 29 02:11:49 np0005539564 ceph-mgr[82125]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 02:11:49 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]: 2025-11-29T07:11:49.574+0000 7f1d66040140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 02:11:49 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'dashboard'
Nov 29 02:11:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e32 _set_new_cache_sizes cache_size:1020053279 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:11:50 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:50 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:11:50 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 02:11:50 np0005539564 ceph-mon[81769]: Deploying daemon crash.compute-2 on compute-2
Nov 29 02:11:50 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'devicehealth'
Nov 29 02:11:51 np0005539564 ceph-mgr[82125]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 02:11:51 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'diskprediction_local'
Nov 29 02:11:51 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]: 2025-11-29T07:11:51.227+0000 7f1d66040140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 02:11:51 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 29 02:11:51 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 29 02:11:51 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]:  from numpy import show_config as show_numpy_config
Nov 29 02:11:51 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]: 2025-11-29T07:11:51.748+0000 7f1d66040140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 02:11:51 np0005539564 ceph-mgr[82125]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 02:11:51 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'influx'
Nov 29 02:11:51 np0005539564 ceph-mgr[82125]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 02:11:51 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'insights'
Nov 29 02:11:51 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]: 2025-11-29T07:11:51.987+0000 7f1d66040140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 02:11:52 np0005539564 ceph-mon[81769]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 02:11:52 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:52 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:52 np0005539564 ceph-mon[81769]: Saving service ingress.rgw.default spec with placement count:2
Nov 29 02:11:52 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:52 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'iostat'
Nov 29 02:11:52 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Nov 29 02:11:52 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Nov 29 02:11:52 np0005539564 ceph-mgr[82125]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 02:11:52 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]: 2025-11-29T07:11:52.519+0000 7f1d66040140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 02:11:52 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'k8sevents'
Nov 29 02:11:53 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:53 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:53 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:53 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:53 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:11:53 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:11:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).mds e2 new map
Nov 29 02:11:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).mds e2 print_map
    e2
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
    legacy client fscid: 1
    
    Filesystem 'cephfs' (1)
    fs_name	cephfs
    epoch	2
    flags	12 joinable allow_snaps allow_multimds_snaps
    created	2025-11-29T07:11:53.720139+0000
    modified	2025-11-29T07:11:53.720209+0000
    tableserver	0
    root	0
    session_timeout	60
    session_autoclose	300
    max_file_size	1099511627776
    max_xattr_size	65536
    required_client_features	{}
    last_failure	0
    last_failure_osd_epoch	0
    compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
    max_mds	1
    in	
    up	{}
    failed	
    damaged	
    stopped	
    data_pools	[7]
    metadata_pool	6
    inline_data	disabled
    balancer	
    bal_rank_mask	-1
    standby_count_wanted	0
Nov 29 02:11:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e33 e33: 2 total, 2 up, 2 in
Nov 29 02:11:54 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'localpool'
Nov 29 02:11:54 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 29 02:11:54 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 29 02:11:54 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 29 02:11:54 np0005539564 ceph-mon[81769]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 02:11:54 np0005539564 ceph-mon[81769]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 29 02:11:54 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 29 02:11:54 np0005539564 ceph-mon[81769]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 02:11:54 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:54 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Nov 29 02:11:54 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Nov 29 02:11:54 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'mds_autoscaler'
Nov 29 02:11:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e33 _set_new_cache_sizes cache_size:1020054713 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:11:55 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'mirroring'
Nov 29 02:11:55 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'nfs'
Nov 29 02:11:56 np0005539564 ceph-mgr[82125]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 02:11:56 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'orchestrator'
Nov 29 02:11:56 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]: 2025-11-29T07:11:56.149+0000 7f1d66040140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 02:11:56 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e34 e34: 3 total, 2 up, 3 in
Nov 29 02:11:56 np0005539564 ceph-mon[81769]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 02:11:56 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:56 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.102:0/4274206403' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ebea8b7f-6a60-41f3-b580-d449bc0d4887"}]: dispatch
Nov 29 02:11:56 np0005539564 ceph-mon[81769]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ebea8b7f-6a60-41f3-b580-d449bc0d4887"}]: dispatch
Nov 29 02:11:56 np0005539564 ceph-mgr[82125]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 02:11:56 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'osd_perf_query'
Nov 29 02:11:56 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]: 2025-11-29T07:11:56.799+0000 7f1d66040140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 02:11:57 np0005539564 ceph-mgr[82125]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 02:11:57 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'osd_support'
Nov 29 02:11:57 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]: 2025-11-29T07:11:57.069+0000 7f1d66040140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 02:11:57 np0005539564 ceph-mgr[82125]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 02:11:57 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'pg_autoscaler'
Nov 29 02:11:57 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]: 2025-11-29T07:11:57.315+0000 7f1d66040140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 02:11:57 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.16 deep-scrub starts
Nov 29 02:11:57 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.16 deep-scrub ok
Nov 29 02:11:57 np0005539564 ceph-mgr[82125]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 02:11:57 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'progress'
Nov 29 02:11:57 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]: 2025-11-29T07:11:57.572+0000 7f1d66040140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 02:11:57 np0005539564 ceph-mgr[82125]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 02:11:57 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'prometheus'
Nov 29 02:11:57 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]: 2025-11-29T07:11:57.801+0000 7f1d66040140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 02:11:57 np0005539564 ceph-mon[81769]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ebea8b7f-6a60-41f3-b580-d449bc0d4887"}]': finished
Nov 29 02:11:57 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:58 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Nov 29 02:11:58 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Nov 29 02:11:58 np0005539564 ceph-mgr[82125]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 02:11:58 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]: 2025-11-29T07:11:58.760+0000 7f1d66040140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 02:11:58 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'rbd_support'
Nov 29 02:11:59 np0005539564 ceph-mgr[82125]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 02:11:59 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'restful'
Nov 29 02:11:59 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]: 2025-11-29T07:11:59.064+0000 7f1d66040140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 02:11:59 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/3119064189' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 29 02:11:59 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/3119064189' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 29 02:11:59 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:59 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:11:59 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'rgw'
Nov 29 02:12:00 np0005539564 systemd[72696]: Starting Mark boot as successful...
Nov 29 02:12:00 np0005539564 systemd[72696]: Finished Mark boot as successful.
Nov 29 02:12:00 np0005539564 ceph-mgr[82125]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 02:12:00 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'rook'
Nov 29 02:12:00 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]: 2025-11-29T07:12:00.471+0000 7f1d66040140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 02:12:01 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Nov 29 02:12:01 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Nov 29 02:12:02 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Nov 29 02:12:02 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Nov 29 02:12:02 np0005539564 ceph-mgr[82125]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 02:12:02 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'selftest'
Nov 29 02:12:02 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]: 2025-11-29T07:12:02.471+0000 7f1d66040140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 02:12:02 np0005539564 ceph-mgr[82125]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 02:12:02 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'snap_schedule'
Nov 29 02:12:02 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]: 2025-11-29T07:12:02.706+0000 7f1d66040140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 02:12:02 np0005539564 ceph-mgr[82125]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 02:12:02 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]: 2025-11-29T07:12:02.940+0000 7f1d66040140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 02:12:02 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'stats'
Nov 29 02:12:03 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/4263620903' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 29 02:12:03 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'status'
Nov 29 02:12:03 np0005539564 ceph-mgr[82125]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 02:12:03 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]: 2025-11-29T07:12:03.408+0000 7f1d66040140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 02:12:03 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'telegraf'
Nov 29 02:12:03 np0005539564 ceph-mgr[82125]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 02:12:03 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]: 2025-11-29T07:12:03.626+0000 7f1d66040140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 02:12:03 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'telemetry'
Nov 29 02:12:04 np0005539564 ceph-mgr[82125]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 02:12:04 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]: 2025-11-29T07:12:04.178+0000 7f1d66040140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 02:12:04 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'test_orchestrator'
Nov 29 02:12:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:12:04 np0005539564 ceph-mgr[82125]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 02:12:04 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'volumes'
Nov 29 02:12:04 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]: 2025-11-29T07:12:04.807+0000 7f1d66040140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 02:12:05 np0005539564 ceph-mgr[82125]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 02:12:05 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]: 2025-11-29T07:12:05.518+0000 7f1d66040140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 02:12:05 np0005539564 ceph-mgr[82125]: mgr[py] Loading python module 'zabbix'
Nov 29 02:12:05 np0005539564 ceph-mgr[82125]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 02:12:05 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-1-jjnjed[82121]: 2025-11-29T07:12:05.804+0000 7f1d66040140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 02:12:05 np0005539564 ceph-mgr[82125]: ms_deliver_dispatch: unhandled message 0x5596bdcb91e0 mon_map magic: 0 v1 from mon.2 v2:192.168.122.101:3300/0
Nov 29 02:12:05 np0005539564 ceph-mgr[82125]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2945860420
Nov 29 02:12:06 np0005539564 ceph-mgr[82125]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2945860420
Nov 29 02:12:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:12:10 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 29 02:12:10 np0005539564 ceph-mon[81769]: Deploying daemon osd.2 on compute-2
Nov 29 02:12:10 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Nov 29 02:12:10 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Nov 29 02:12:14 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:12:14 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 29 02:12:14 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 29 02:12:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e35 e35: 3 total, 2 up, 3 in
Nov 29 02:12:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:12:15 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:12:15 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:12:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e36 e36: 3 total, 2 up, 3 in
Nov 29 02:12:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e37 e37: 3 total, 2 up, 3 in
Nov 29 02:12:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:12:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:12:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:12:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:12:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e38 e38: 3 total, 2 up, 3 in
Nov 29 02:12:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:12:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:12:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:12:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 29 02:12:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e39 e39: 3 total, 2 up, 3 in
Nov 29 02:12:18 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 29 02:12:18 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:12:18 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 02:12:18 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:12:18 np0005539564 ceph-mon[81769]: from='osd.2 [v2:192.168.122.102:6800/1730612232,v1:192.168.122.102:6801/1730612232]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 02:12:18 np0005539564 ceph-mon[81769]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 02:12:18 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:12:18 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 02:12:18 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:12:18 np0005539564 ceph-mon[81769]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 29 02:12:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:12:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e40 e40: 3 total, 2 up, 3 in
Nov 29 02:12:19 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 40 pg[2.b( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=40 pruub=9.914623260s) [] r=-1 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active pruub 103.491180420s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:19 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 40 pg[2.1d( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=40 pruub=9.918806076s) [] r=-1 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active pruub 103.495407104s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:19 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 40 pg[2.1c( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=40 pruub=9.914435387s) [] r=-1 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active pruub 103.491088867s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:19 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 40 pg[2.1d( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=40 pruub=9.918806076s) [] r=-1 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.495407104s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:19 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 40 pg[2.5( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=40 pruub=9.918864250s) [] r=-1 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active pruub 103.495567322s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:19 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 40 pg[2.b( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=40 pruub=9.914623260s) [] r=-1 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.491180420s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:19 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 40 pg[2.1c( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=40 pruub=9.914435387s) [] r=-1 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.491088867s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:19 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 40 pg[2.5( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=40 pruub=9.918864250s) [] r=-1 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.495567322s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:19 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 40 pg[2.12( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=40 pruub=9.919062614s) [] r=-1 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active pruub 103.496047974s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:19 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 40 pg[2.f( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=40 pruub=9.919203758s) [] r=-1 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active pruub 103.496192932s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:19 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 40 pg[2.12( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=40 pruub=9.919062614s) [] r=-1 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.496047974s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:19 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 40 pg[2.f( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=40 pruub=9.919203758s) [] r=-1 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.496192932s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:19 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 40 pg[2.18( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=40 pruub=9.918683052s) [] r=-1 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active pruub 103.496253967s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:19 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 40 pg[2.18( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=40 pruub=9.918683052s) [] r=-1 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.496253967s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:19 np0005539564 ceph-mon[81769]: from='osd.2 [v2:192.168.122.102:6800/1730612232,v1:192.168.122.102:6801/1730612232]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 29 02:12:19 np0005539564 ceph-mon[81769]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 29 02:12:19 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:19 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:19 np0005539564 ceph-mon[81769]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Nov 29 02:12:21 np0005539564 podman[82391]: 2025-11-29 07:12:21.012125658 +0000 UTC m=+0.072494711 container exec 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 02:12:21 np0005539564 podman[82391]: 2025-11-29 07:12:21.137260342 +0000 UTC m=+0.197629365 container exec_died 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:12:21 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:12:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e41 e41: 3 total, 2 up, 3 in
Nov 29 02:12:22 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:12:22 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:22 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:22 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 41 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=12.648928642s) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active pruub 109.582252502s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 41 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=12.648928642s) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown pruub 109.582252502s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e42 e42: 3 total, 2 up, 3 in
Nov 29 02:12:24 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:24 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.1e( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.10( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.b( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.19( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.d( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.7( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.14( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.17( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.12( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.1d( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.16( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.1e( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.10( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.b( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.d( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.0( empty local-lis/les=41/42 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.7( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.14( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.17( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.12( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.1d( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.19( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 42 pg[7.16( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:12:25 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 29 02:12:25 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 29 02:12:27 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 29 02:12:27 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 29 02:12:28 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:28 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:28 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:12:28 np0005539564 ceph-mon[81769]: Adjusting osd_memory_target on compute-2 to 128.0M
Nov 29 02:12:28 np0005539564 ceph-mon[81769]: Unable to set osd_memory_target on compute-2 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Nov 29 02:12:28 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:12:28 np0005539564 ceph-mon[81769]: Updating compute-0:/etc/ceph/ceph.conf
Nov 29 02:12:28 np0005539564 ceph-mon[81769]: Updating compute-1:/etc/ceph/ceph.conf
Nov 29 02:12:28 np0005539564 ceph-mon[81769]: Updating compute-2:/etc/ceph/ceph.conf
Nov 29 02:12:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:12:30 np0005539564 ceph-mon[81769]: Updating compute-0:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 02:12:31 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 29 02:12:31 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 29 02:12:31 np0005539564 ceph-mon[81769]: Updating compute-1:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 02:12:31 np0005539564 ceph-mon[81769]: Updating compute-2:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 02:12:31 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:31 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:31 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:32 np0005539564 systemd[1]: session-19.scope: Deactivated successfully.
Nov 29 02:12:32 np0005539564 systemd[1]: session-19.scope: Consumed 8.585s CPU time.
Nov 29 02:12:32 np0005539564 systemd-logind[785]: Session 19 logged out. Waiting for processes to exit.
Nov 29 02:12:32 np0005539564 systemd-logind[785]: Removed session 19.
Nov 29 02:12:32 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:32 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:32 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:32 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:32 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:12:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:12:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e43 e43: 3 total, 3 up, 3 in
Nov 29 02:12:35 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 43 pg[2.b( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=-1 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:35 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 43 pg[2.1c( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=-1 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:35 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 43 pg[2.1c( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=-1 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:35 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 43 pg[2.1d( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=-1 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:35 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=-1 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:35 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 43 pg[2.1d( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=-1 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:35 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=-1 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:35 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 43 pg[2.b( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=-1 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:35 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 43 pg[2.f( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=-1 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:35 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 43 pg[2.f( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=-1 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:35 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 43 pg[2.12( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=-1 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:35 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 43 pg[2.12( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=-1 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:35 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 43 pg[2.18( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=-1 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:35 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 43 pg[2.18( empty local-lis/les=17/19 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=43) [2] r=-1 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e44 e44: 3 total, 3 up, 3 in
Nov 29 02:12:36 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 29 02:12:36 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 29 02:12:36 np0005539564 ceph-mon[81769]: OSD bench result of 4342.898351 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 02:12:36 np0005539564 ceph-mon[81769]: osd.2 [v2:192.168.122.102:6800/1730612232,v1:192.168.122.102:6801/1730612232] boot
Nov 29 02:12:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.tfmigt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 02:12:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.tfmigt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 02:12:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:38 np0005539564 ceph-mon[81769]: Deploying daemon rgw.rgw.compute-2.tfmigt on compute-2
Nov 29 02:12:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 29 02:12:40 np0005539564 podman[83598]: 2025-11-29 07:12:40.36158596 +0000 UTC m=+0.052355017 container create c41020fe42191b2952476ecf2e4a42bc95f263726492119145ab8e81cbeb8209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gauss, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:12:40 np0005539564 systemd[1]: Started libpod-conmon-c41020fe42191b2952476ecf2e4a42bc95f263726492119145ab8e81cbeb8209.scope.
Nov 29 02:12:40 np0005539564 podman[83598]: 2025-11-29 07:12:40.33830311 +0000 UTC m=+0.029072157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:12:40 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:12:40 np0005539564 podman[83598]: 2025-11-29 07:12:40.452644493 +0000 UTC m=+0.143413550 container init c41020fe42191b2952476ecf2e4a42bc95f263726492119145ab8e81cbeb8209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 02:12:40 np0005539564 podman[83598]: 2025-11-29 07:12:40.463142527 +0000 UTC m=+0.153911544 container start c41020fe42191b2952476ecf2e4a42bc95f263726492119145ab8e81cbeb8209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 02:12:40 np0005539564 podman[83598]: 2025-11-29 07:12:40.467055282 +0000 UTC m=+0.157824329 container attach c41020fe42191b2952476ecf2e4a42bc95f263726492119145ab8e81cbeb8209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gauss, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:12:40 np0005539564 reverent_gauss[83614]: 167 167
Nov 29 02:12:40 np0005539564 systemd[1]: libpod-c41020fe42191b2952476ecf2e4a42bc95f263726492119145ab8e81cbeb8209.scope: Deactivated successfully.
Nov 29 02:12:40 np0005539564 conmon[83614]: conmon c41020fe42191b295247 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c41020fe42191b2952476ecf2e4a42bc95f263726492119145ab8e81cbeb8209.scope/container/memory.events
Nov 29 02:12:40 np0005539564 podman[83598]: 2025-11-29 07:12:40.470652479 +0000 UTC m=+0.161421506 container died c41020fe42191b2952476ecf2e4a42bc95f263726492119145ab8e81cbeb8209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 02:12:40 np0005539564 systemd[1]: var-lib-containers-storage-overlay-964552c6ec08f686495773810e5781a58e58bd3599111852a45b38b28c511c2c-merged.mount: Deactivated successfully.
Nov 29 02:12:40 np0005539564 podman[83598]: 2025-11-29 07:12:40.513763146 +0000 UTC m=+0.204532163 container remove c41020fe42191b2952476ecf2e4a42bc95f263726492119145ab8e81cbeb8209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 02:12:40 np0005539564 systemd[1]: libpod-conmon-c41020fe42191b2952476ecf2e4a42bc95f263726492119145ab8e81cbeb8209.scope: Deactivated successfully.
Nov 29 02:12:40 np0005539564 systemd[1]: Reloading.
Nov 29 02:12:40 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:12:40 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:12:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e45 e45: 3 total, 3 up, 3 in
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.10( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684131622s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 129.946624756s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684084892s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 129.946594238s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684069633s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 129.946624756s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.10( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684048653s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.946624756s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684000015s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.946594238s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.683951378s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.946624756s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.1e( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.679421425s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 129.942276001s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.b( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684055328s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 129.946914673s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.b( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684026718s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.946914673s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.683874130s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 129.946792603s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.1e( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.679380417s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.942276001s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.683849335s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.946792603s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684128761s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 129.947174072s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.683897972s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 129.946960449s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.683952332s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 129.947036743s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684104919s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.947174072s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.683875084s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.946960449s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.683922768s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.947036743s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684157372s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 129.947372437s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684126854s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 129.947372437s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684128761s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.947372437s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684105873s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.947372437s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684051514s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 129.947372437s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684026718s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.947372437s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684287071s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 129.947723389s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684278488s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 129.947723389s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684255600s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.947723389s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.14( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684206009s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 129.947723389s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684194565s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 129.947723389s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684151649s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.947723389s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.14( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684170723s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.947723389s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684112549s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 129.947738647s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684095383s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.947738647s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684118271s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 129.947830200s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684100151s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.947830200s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.683984756s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.947723389s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684326172s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 129.948196411s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684303284s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.948196411s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.16( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684227943s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 129.948318481s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.16( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.684206009s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.948318481s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.1d( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.683568001s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 129.947921753s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[7.1d( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.683541298s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.947921753s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[5.1b( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[3.1c( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[5.1f( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[3.d( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[3.14( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[5.1c( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[3.10( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[5.10( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[3.13( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.mdhebv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 02:12:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.mdhebv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 02:12:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:40 np0005539564 ceph-mon[81769]: Deploying daemon rgw.rgw.compute-1.mdhebv on compute-1
Nov 29 02:12:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:12:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:12:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 02:12:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:12:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:12:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:12:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:12:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 02:12:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:12:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:12:40 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.102:0/4058279052' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 02:12:40 np0005539564 ceph-mon[81769]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [1] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [1] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [1] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[4.c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [1] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [1] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [1] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [1] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [1] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [1] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539564 systemd[1]: Reloading.
Nov 29 02:12:40 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:12:40 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:12:41 np0005539564 systemd[1]: Starting Ceph rgw.rgw.compute-1.mdhebv for 38a37ed2-442a-5e0d-a69a-881fdd186450...
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.7 deep-scrub starts
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.7 deep-scrub ok
Nov 29 02:12:41 np0005539564 podman[83758]: 2025-11-29 07:12:41.424612327 +0000 UTC m=+0.039150570 container create 9ef31c0c0b81430c4bd8b6b6d9f8750e2b7ce7576d64d5e664fe0780b0ef3829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-rgw-rgw-compute-1-mdhebv, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:12:41 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2bea0924daa552b4dcd2b82b63bbb239c39959af58c2754ef4c309246eaae51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:41 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2bea0924daa552b4dcd2b82b63bbb239c39959af58c2754ef4c309246eaae51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:41 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2bea0924daa552b4dcd2b82b63bbb239c39959af58c2754ef4c309246eaae51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:41 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2bea0924daa552b4dcd2b82b63bbb239c39959af58c2754ef4c309246eaae51/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-1.mdhebv supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:41 np0005539564 podman[83758]: 2025-11-29 07:12:41.488013552 +0000 UTC m=+0.102551795 container init 9ef31c0c0b81430c4bd8b6b6d9f8750e2b7ce7576d64d5e664fe0780b0ef3829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-rgw-rgw-compute-1-mdhebv, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:12:41 np0005539564 podman[83758]: 2025-11-29 07:12:41.494848286 +0000 UTC m=+0.109386529 container start 9ef31c0c0b81430c4bd8b6b6d9f8750e2b7ce7576d64d5e664fe0780b0ef3829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-rgw-rgw-compute-1-mdhebv, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 02:12:41 np0005539564 bash[83758]: 9ef31c0c0b81430c4bd8b6b6d9f8750e2b7ce7576d64d5e664fe0780b0ef3829
Nov 29 02:12:41 np0005539564 podman[83758]: 2025-11-29 07:12:41.406019104 +0000 UTC m=+0.020557387 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:12:41 np0005539564 systemd[1]: Started Ceph rgw.rgw.compute-1.mdhebv for 38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 02:12:41 np0005539564 radosgw[83777]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:12:41 np0005539564 radosgw[83777]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Nov 29 02:12:41 np0005539564 radosgw[83777]: framework: beast
Nov 29 02:12:41 np0005539564 radosgw[83777]: framework conf key: endpoint, val: 192.168.122.101:8082
Nov 29 02:12:41 np0005539564 radosgw[83777]: init_numa not setting numa affinity
Nov 29 02:12:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e46 e46: 3 total, 3 up, 3 in
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[3.16( empty local-lis/les=45/46 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[5.15( empty local-lis/les=45/46 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[3.13( empty local-lis/les=45/46 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[5.1f( empty local-lis/les=45/46 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[5.16( empty local-lis/les=45/46 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[6.7( empty local-lis/les=45/46 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[4.5( empty local-lis/les=45/46 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [1] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[5.10( empty local-lis/les=45/46 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[5.11( empty local-lis/les=45/46 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[3.10( empty local-lis/les=45/46 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[5.7( empty local-lis/les=45/46 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[3.3( empty local-lis/les=45/46 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[5.2( empty local-lis/les=45/46 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[4.e( empty local-lis/les=45/46 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [1] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[6.3( empty local-lis/les=45/46 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[4.1a( empty local-lis/les=45/46 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [1] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[4.d( empty local-lis/les=45/46 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [1] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[4.1b( empty local-lis/les=45/46 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [1] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[5.1b( empty local-lis/les=45/46 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[5.f( empty local-lis/les=45/46 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[3.5( empty local-lis/les=45/46 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[5.1( empty local-lis/les=45/46 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[3.c( empty local-lis/les=45/46 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[3.a( empty local-lis/les=45/46 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[6.8( empty local-lis/les=45/46 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[4.a( empty local-lis/les=45/46 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [1] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=45/46 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[3.f( empty local-lis/les=45/46 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[6.d( empty local-lis/les=45/46 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=45/46 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[5.9( empty local-lis/les=45/46 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[4.c( empty local-lis/les=45/46 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [1] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[4.13( empty local-lis/les=45/46 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [1] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[3.14( empty local-lis/les=45/46 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[4.18( empty local-lis/les=45/46 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45) [1] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=45/46 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[5.18( empty local-lis/les=45/46 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[3.d( empty local-lis/les=45/46 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[5.1c( empty local-lis/les=45/46 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[6.5( empty local-lis/les=45/46 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 46 pg[3.1c( empty local-lis/les=45/46 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:41 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:41 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:41 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.fvilij", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 02:12:41 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.fvilij", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 02:12:41 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:41 np0005539564 ceph-mon[81769]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 29 02:12:42 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 29 02:12:42 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 29 02:12:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e47 e47: 3 total, 3 up, 3 in
Nov 29 02:12:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Nov 29 02:12:42 np0005539564 ceph-mon[81769]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2435465121' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 02:12:42 np0005539564 ceph-mon[81769]: Deploying daemon rgw.rgw.compute-0.fvilij on compute-0
Nov 29 02:12:42 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.102:0/4058279052' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 02:12:42 np0005539564 ceph-mon[81769]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 02:12:42 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.101:0/2435465121' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 02:12:42 np0005539564 ceph-mon[81769]: from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 02:12:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e48 e48: 3 total, 3 up, 3 in
Nov 29 02:12:43 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:43 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:43 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:43 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:43 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:43 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.fwjrvc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 02:12:43 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.fwjrvc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 02:12:43 np0005539564 ceph-mon[81769]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 29 02:12:43 np0005539564 ceph-mon[81769]: from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 29 02:12:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:12:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e49 e49: 3 total, 3 up, 3 in
Nov 29 02:12:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 29 02:12:44 np0005539564 ceph-mon[81769]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2435465121' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:12:44 np0005539564 ceph-mon[81769]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 02:12:44 np0005539564 ceph-mon[81769]: Deploying daemon mds.cephfs.compute-2.fwjrvc on compute-2
Nov 29 02:12:44 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/3487140368' entity='client.rgw.rgw.compute-0.fvilij' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:12:44 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.102:0/4058279052' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:12:44 np0005539564 ceph-mon[81769]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:12:44 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.101:0/2435465121' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:12:44 np0005539564 ceph-mon[81769]: from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:12:45 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 49 pg[10.0( empty local-lis/les=0/0 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e50 e50: 3 total, 3 up, 3 in
Nov 29 02:12:45 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 50 pg[10.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:45 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:45 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:45 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:45 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.msknqt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 02:12:45 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.msknqt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 02:12:45 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/3487140368' entity='client.rgw.rgw.compute-0.fvilij' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 02:12:45 np0005539564 ceph-mon[81769]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 02:12:45 np0005539564 ceph-mon[81769]: from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 02:12:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).mds e3 new map
Nov 29 02:12:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T07:11:53.720139+0000#012modified#0112025-11-29T07:11:53.720209+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.fwjrvc{-1:24133} state up:standby seq 1 addr [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:12:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).mds e4 new map
Nov 29 02:12:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T07:11:53.720139+0000#012modified#0112025-11-29T07:12:46.514987+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24133}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.fwjrvc{0:24133} state up:creating seq 1 addr [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Nov 29 02:12:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e51 e51: 3 total, 3 up, 3 in
Nov 29 02:12:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 29 02:12:46 np0005539564 ceph-mon[81769]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1219183491' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:12:46 np0005539564 ceph-mon[81769]: Deploying daemon mds.cephfs.compute-0.msknqt on compute-0
Nov 29 02:12:46 np0005539564 ceph-mon[81769]: daemon mds.cephfs.compute-2.fwjrvc assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 29 02:12:46 np0005539564 ceph-mon[81769]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 29 02:12:46 np0005539564 ceph-mon[81769]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 29 02:12:46 np0005539564 ceph-mon[81769]: Cluster is now healthy
Nov 29 02:12:46 np0005539564 ceph-mon[81769]: daemon mds.cephfs.compute-2.fwjrvc is now active in filesystem cephfs as rank 0
Nov 29 02:12:46 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/368267298' entity='client.rgw.rgw.compute-0.fvilij' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:12:46 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.102:0/2934709007' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:12:46 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.101:0/1219183491' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:12:46 np0005539564 ceph-mon[81769]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:12:46 np0005539564 ceph-mon[81769]: from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:12:47 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.d scrub starts
Nov 29 02:12:47 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.d scrub ok
Nov 29 02:12:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).mds e5 new map
Nov 29 02:12:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).mds e5 print_map#012e5#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T07:11:53.720139+0000#012modified#0112025-11-29T07:12:47.524985+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24133}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.fwjrvc{0:24133} state up:active seq 2 addr [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.msknqt{-1:14382} state up:standby seq 1 addr [v2:192.168.122.100:6806/956920877,v1:192.168.122.100:6807/956920877] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:12:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).mds e6 new map
Nov 29 02:12:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).mds e6 print_map#012e6#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T07:11:53.720139+0000#012modified#0112025-11-29T07:12:47.524985+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24133}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.fwjrvc{0:24133} state up:active seq 2 addr [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.msknqt{-1:14382} state up:standby seq 1 addr [v2:192.168.122.100:6806/956920877,v1:192.168.122.100:6807/956920877] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:12:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e52 e52: 3 total, 3 up, 3 in
Nov 29 02:12:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 29 02:12:47 np0005539564 ceph-mon[81769]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1219183491' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:12:47 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:47 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:47 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:47 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:47 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.oeerwd", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 02:12:47 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.oeerwd", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 02:12:47 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/368267298' entity='client.rgw.rgw.compute-0.fvilij' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 02:12:47 np0005539564 ceph-mon[81769]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 02:12:47 np0005539564 ceph-mon[81769]: from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 02:12:47 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/368267298' entity='client.rgw.rgw.compute-0.fvilij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:12:47 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.101:0/1219183491' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:12:47 np0005539564 ceph-mon[81769]: from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:12:47 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.102:0/2934709007' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:12:47 np0005539564 ceph-mon[81769]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:12:48 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Nov 29 02:12:48 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Nov 29 02:12:48 np0005539564 podman[83989]: 2025-11-29 07:12:48.50432073 +0000 UTC m=+0.066490989 container create e47d2232ade76af5a330f240f409046e34c4bbc2f122d9dd999a3290c111d963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_benz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:12:48 np0005539564 systemd[1]: Started libpod-conmon-e47d2232ade76af5a330f240f409046e34c4bbc2f122d9dd999a3290c111d963.scope.
Nov 29 02:12:48 np0005539564 podman[83989]: 2025-11-29 07:12:48.472264494 +0000 UTC m=+0.034434813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:12:48 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:12:48 np0005539564 podman[83989]: 2025-11-29 07:12:48.608300612 +0000 UTC m=+0.170470891 container init e47d2232ade76af5a330f240f409046e34c4bbc2f122d9dd999a3290c111d963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_benz, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 02:12:48 np0005539564 podman[83989]: 2025-11-29 07:12:48.618976151 +0000 UTC m=+0.181146380 container start e47d2232ade76af5a330f240f409046e34c4bbc2f122d9dd999a3290c111d963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_benz, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:12:48 np0005539564 podman[83989]: 2025-11-29 07:12:48.623049822 +0000 UTC m=+0.185220121 container attach e47d2232ade76af5a330f240f409046e34c4bbc2f122d9dd999a3290c111d963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_benz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:12:48 np0005539564 trusting_benz[84005]: 167 167
Nov 29 02:12:48 np0005539564 systemd[1]: libpod-e47d2232ade76af5a330f240f409046e34c4bbc2f122d9dd999a3290c111d963.scope: Deactivated successfully.
Nov 29 02:12:48 np0005539564 podman[83989]: 2025-11-29 07:12:48.627679246 +0000 UTC m=+0.189849555 container died e47d2232ade76af5a330f240f409046e34c4bbc2f122d9dd999a3290c111d963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:12:48 np0005539564 systemd[1]: var-lib-containers-storage-overlay-e1a65947e66ea2b5e1ad909019b842ba4bacf43389da39ade4d97148b398624a-merged.mount: Deactivated successfully.
Nov 29 02:12:48 np0005539564 podman[83989]: 2025-11-29 07:12:48.679413976 +0000 UTC m=+0.241584235 container remove e47d2232ade76af5a330f240f409046e34c4bbc2f122d9dd999a3290c111d963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 02:12:48 np0005539564 systemd[1]: libpod-conmon-e47d2232ade76af5a330f240f409046e34c4bbc2f122d9dd999a3290c111d963.scope: Deactivated successfully.
Nov 29 02:12:48 np0005539564 systemd[1]: Reloading.
Nov 29 02:12:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e53 e53: 3 total, 3 up, 3 in
Nov 29 02:12:48 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:12:48 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:12:48 np0005539564 ceph-mon[81769]: Deploying daemon mds.cephfs.compute-1.oeerwd on compute-1
Nov 29 02:12:48 np0005539564 ceph-mon[81769]: from='client.? 192.168.122.100:0/368267298' entity='client.rgw.rgw.compute-0.fvilij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 02:12:48 np0005539564 ceph-mon[81769]: from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 02:12:48 np0005539564 ceph-mon[81769]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 02:12:49 np0005539564 radosgw[83777]: LDAP not started since no server URIs were provided in the configuration.
Nov 29 02:12:49 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-rgw-rgw-compute-1-mdhebv[83773]: 2025-11-29T07:12:49.032+0000 7f487e982940 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 29 02:12:49 np0005539564 radosgw[83777]: framework: beast
Nov 29 02:12:49 np0005539564 radosgw[83777]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 29 02:12:49 np0005539564 radosgw[83777]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 29 02:12:49 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Nov 29 02:12:49 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 29 02:12:49 np0005539564 radosgw[83777]: starting handler: beast
Nov 29 02:12:49 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Nov 29 02:12:49 np0005539564 radosgw[83777]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:12:49 np0005539564 systemd[1]: Reloading.
Nov 29 02:12:49 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 29 02:12:49 np0005539564 radosgw[83777]: mgrc service_daemon_register rgw.24128 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.101:8082,frontend_type#0=beast,hostname=compute-1,id=rgw.compute-1.mdhebv,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=566f71d6-80f0-4888-8471-3c4b61b17fae,zone_name=default,zonegroup_id=52a2d801-fd4c-4d81-9622-166900f04f3d,zonegroup_name=default}
Nov 29 02:12:49 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Nov 29 02:12:49 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:12:49 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:12:49 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Nov 29 02:12:49 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 29 02:12:49 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Nov 29 02:12:49 np0005539564 systemd[1]: Starting Ceph mds.cephfs.compute-1.oeerwd for 38a37ed2-442a-5e0d-a69a-881fdd186450...
Nov 29 02:12:49 np0005539564 podman[84696]: 2025-11-29 07:12:49.64029621 +0000 UTC m=+0.055106081 container create 22df02dc5450eeb849b7a20338f114659640d9fc0f4f6549e601a25f98f0dbb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mds-cephfs-compute-1-oeerwd, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:12:49 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0b02d89917e5e573adb08b65fbdc52a8ab250a4de0e5c7eff7988cc81ee7f67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:49 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0b02d89917e5e573adb08b65fbdc52a8ab250a4de0e5c7eff7988cc81ee7f67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:49 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0b02d89917e5e573adb08b65fbdc52a8ab250a4de0e5c7eff7988cc81ee7f67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:49 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0b02d89917e5e573adb08b65fbdc52a8ab250a4de0e5c7eff7988cc81ee7f67/merged/var/lib/ceph/mds/ceph-cephfs.compute-1.oeerwd supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:49 np0005539564 podman[84696]: 2025-11-29 07:12:49.699750078 +0000 UTC m=+0.114560009 container init 22df02dc5450eeb849b7a20338f114659640d9fc0f4f6549e601a25f98f0dbb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mds-cephfs-compute-1-oeerwd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:12:49 np0005539564 podman[84696]: 2025-11-29 07:12:49.613409003 +0000 UTC m=+0.028218954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:12:49 np0005539564 podman[84696]: 2025-11-29 07:12:49.711471975 +0000 UTC m=+0.126281866 container start 22df02dc5450eeb849b7a20338f114659640d9fc0f4f6549e601a25f98f0dbb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mds-cephfs-compute-1-oeerwd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 02:12:49 np0005539564 bash[84696]: 22df02dc5450eeb849b7a20338f114659640d9fc0f4f6549e601a25f98f0dbb3
Nov 29 02:12:49 np0005539564 systemd[1]: Started Ceph mds.cephfs.compute-1.oeerwd for 38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 02:12:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:12:49 np0005539564 ceph-mds[84716]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:12:49 np0005539564 ceph-mds[84716]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 29 02:12:49 np0005539564 ceph-mds[84716]: main not setting numa affinity
Nov 29 02:12:49 np0005539564 ceph-mds[84716]: pidfile_write: ignore empty --pid-file
Nov 29 02:12:49 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mds-cephfs-compute-1-oeerwd[84712]: starting mds.cephfs.compute-1.oeerwd at 
Nov 29 02:12:49 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Updating MDS map to version 6 from mon.2
Nov 29 02:12:49 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:49 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:49 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:49 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:49 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:49 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).mds e7 new map
Nov 29 02:12:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).mds e7 print_map#012e7#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0117#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T07:11:53.720139+0000#012modified#0112025-11-29T07:12:50.793764+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24133}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.fwjrvc{0:24133} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.msknqt{-1:14382} state up:standby seq 1 addr [v2:192.168.122.100:6806/956920877,v1:192.168.122.100:6807/956920877] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.oeerwd{-1:24137} state up:standby seq 1 addr [v2:192.168.122.101:6804/1767230500,v1:192.168.122.101:6805/1767230500] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:12:50 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Updating MDS map to version 7 from mon.2
Nov 29 02:12:50 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Monitors have assigned me to become a standby.
Nov 29 02:12:50 np0005539564 ceph-mon[81769]: Deploying daemon haproxy.rgw.default.compute-0.aoijdn on compute-0
Nov 29 02:12:51 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).mds e8 new map
Nov 29 02:12:51 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).mds e8 print_map#012e8#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0117#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T07:11:53.720139+0000#012modified#0112025-11-29T07:12:50.793764+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24133}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.fwjrvc{0:24133} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.msknqt{-1:14382} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/956920877,v1:192.168.122.100:6807/956920877] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.oeerwd{-1:24137} state up:standby seq 1 addr [v2:192.168.122.101:6804/1767230500,v1:192.168.122.101:6805/1767230500] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:12:52 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Nov 29 02:12:52 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Nov 29 02:12:53 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:54 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.17 deep-scrub starts
Nov 29 02:12:54 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.17 deep-scrub ok
Nov 29 02:12:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:12:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).mds e9 new map
Nov 29 02:12:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).mds e9 print_map#012e9#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0117#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T07:11:53.720139+0000#012modified#0112025-11-29T07:12:50.793764+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24133}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.fwjrvc{0:24133} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.msknqt{-1:14382} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/956920877,v1:192.168.122.100:6807/956920877] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.oeerwd{-1:24137} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/1767230500,v1:192.168.122.101:6805/1767230500] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:12:54 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Updating MDS map to version 9 from mon.2
Nov 29 02:12:55 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:55 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:55 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:55 np0005539564 ceph-mon[81769]: Deploying daemon haproxy.rgw.default.compute-2.goeiuk on compute-2
Nov 29 02:12:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:12:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.003000085s ======
Nov 29 02:12:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:12:55.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000085s
Nov 29 02:12:57 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:12:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:12:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:12:57.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:12:59 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 29 02:12:59 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 29 02:12:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:12:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:12:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:12:59.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:12:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:01.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:02.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:03 np0005539564 ceph-mon[81769]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 02:13:03 np0005539564 ceph-mon[81769]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 02:13:03 np0005539564 ceph-mon[81769]: Deploying daemon keepalived.rgw.default.compute-2.gecapa on compute-2
Nov 29 02:13:03 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.1a deep-scrub starts
Nov 29 02:13:03 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.1a deep-scrub ok
Nov 29 02:13:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:03.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:04 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Nov 29 02:13:04 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Nov 29 02:13:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:04.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:05.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:13:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:06.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:13:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:07.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:08 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Nov 29 02:13:08 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Nov 29 02:13:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:08.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:09.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:09 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:09 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:09 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:09 np0005539564 ceph-mon[81769]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 02:13:09 np0005539564 ceph-mon[81769]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 02:13:09 np0005539564 ceph-mon[81769]: Deploying daemon keepalived.rgw.default.compute-0.uxbosd on compute-0
Nov 29 02:13:10 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Nov 29 02:13:10 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Nov 29 02:13:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:10.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:11.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:12 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Nov 29 02:13:12 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Nov 29 02:13:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:13:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:12.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:13:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:13.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:14 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:14 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:14 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:14 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:13:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:14.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:13:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:15 np0005539564 podman[84960]: 2025-11-29 07:13:15.253859555 +0000 UTC m=+0.094475601 container exec 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 02:13:15 np0005539564 podman[84960]: 2025-11-29 07:13:15.365439656 +0000 UTC m=+0.206055682 container exec_died 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:13:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:15.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:13:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 02:13:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:16.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 02:13:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:13:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:13:17 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 29 02:13:17 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 29 02:13:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:13:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:17.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:13:18 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:18.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e54 e54: 3 total, 3 up, 3 in
Nov 29 02:13:19 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:13:19 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 29 02:13:19 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 29 02:13:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:13:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:19.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:13:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e55 e55: 3 total, 3 up, 3 in
Nov 29 02:13:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:13:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:13:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:20.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e56 e56: 3 total, 3 up, 3 in
Nov 29 02:13:21 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:13:21 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:13:21 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:13:21 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:13:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:21.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:22 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Nov 29 02:13:22 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Nov 29 02:13:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e57 e57: 3 total, 3 up, 3 in
Nov 29 02:13:22 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 57 pg[10.0( v 53'96 (0'0,53'96] local-lis/les=49/50 n=8 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=11.255620956s) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 53'95 mlcod 53'95 active pruub 167.379455566s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:22 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 57 pg[10.0( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=11.255620956s) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 53'95 mlcod 0'0 unknown pruub 167.379455566s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:13:22 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:13:22 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:13:22 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:13:22 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:13:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:22.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:13:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:23.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:13:23 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:13:23 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:13:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e58 e58: 3 total, 3 up, 3 in
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.1b( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.7( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.10( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.12( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.1f( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.1e( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.1c( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.1a( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.19( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.b( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.8( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.a( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.d( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.f( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.3( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.14( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.15( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.c( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.9( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.e( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.4( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.5( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.1( v 53'96 (0'0,53'96] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.1d( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.18( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.6( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.2( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.16( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.17( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.11( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.13( v 53'96 lc 0'0 (0'0,53'96] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.1f( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.12( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.7( v 53'96 (0'0,53'96] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.1c( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.1b( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.1a( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.1e( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.8( v 53'96 (0'0,53'96] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.b( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.f( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.10( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.19( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.3( v 53'96 (0'0,53'96] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.15( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.d( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.9( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.0( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 53'95 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.4( v 53'96 (0'0,53'96] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.c( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.a( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.14( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.5( v 53'96 (0'0,53'96] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.1d( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.e( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.18( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.16( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.6( v 53'96 (0'0,53'96] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.1( v 53'96 (0'0,53'96] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.11( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.17( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.13( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:23 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 58 pg[10.2( v 53'96 (0'0,53'96] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:24 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:24 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:24 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:13:24 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:13:24 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:13:24 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 02:13:24 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:13:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:24.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e59 e59: 3 total, 3 up, 3 in
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[8.14( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[8.4( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[8.17( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[8.8( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[8.1b( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[8.18( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[8.12( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[8.10( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[8.19( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.12( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.511241913s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active pruub 173.263931274s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.12( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.511207581s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.263931274s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.10( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.511854172s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active pruub 173.264602661s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.10( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.511802673s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.264602661s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.1b( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.511497498s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active pruub 173.264465332s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.1e( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.511476517s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active pruub 173.264541626s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.1e( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.511442184s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.264541626s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.19( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.511398315s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active pruub 173.264755249s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.19( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.511360168s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.264755249s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.8( v 53'96 (0'0,53'96] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.511138916s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active pruub 173.264587402s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.8( v 53'96 (0'0,53'96] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.511116028s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.264587402s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.1b( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.511200905s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.264465332s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.f( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.510919571s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active pruub 173.264663696s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.f( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.510891914s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.264663696s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.3( v 58'99 (0'0,58'99] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.510945320s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=58'97 lcod 58'98 mlcod 58'98 active pruub 173.264907837s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.14( v 58'99 (0'0,58'99] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.512280464s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=58'97 lcod 58'98 mlcod 58'98 active pruub 173.266387939s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.14( v 58'99 (0'0,58'99] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.512257576s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=58'97 lcod 58'98 mlcod 0'0 unknown NOTIFY pruub 173.266387939s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.15( v 58'99 (0'0,58'99] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.510780334s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=58'97 lcod 58'98 mlcod 58'98 active pruub 173.265045166s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.3( v 58'99 (0'0,58'99] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.510899544s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=58'97 lcod 58'98 mlcod 0'0 unknown NOTIFY pruub 173.264907837s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.15( v 58'99 (0'0,58'99] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.510734558s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=58'97 lcod 58'98 mlcod 0'0 unknown NOTIFY pruub 173.265045166s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.5( v 53'96 (0'0,53'96] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.511909485s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active pruub 173.266448975s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.5( v 53'96 (0'0,53'96] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.511878014s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.266448975s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.4( v 53'96 (0'0,53'96] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.510965347s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active pruub 173.265716553s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.4( v 53'96 (0'0,53'96] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.510932922s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.265716553s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.1( v 53'96 (0'0,53'96] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.511856079s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active pruub 173.266708374s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.1( v 53'96 (0'0,53'96] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.511830330s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.266708374s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.2( v 53'96 (0'0,53'96] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.512106895s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active pruub 173.267074585s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.2( v 53'96 (0'0,53'96] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.512072563s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.267074585s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.18( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.511505127s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active pruub 173.266586304s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.18( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.511483192s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.266586304s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.11( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.511463165s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active pruub 173.266754150s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.13( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.511601448s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active pruub 173.266891479s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.11( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.511435509s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.266754150s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 59 pg[10.13( v 53'96 (0'0,53'96] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.511573792s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.266891479s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:13:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:25.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:13:25 np0005539564 ceph-mon[81769]: Reconfiguring mon.compute-0 (monmap changed)...
Nov 29 02:13:25 np0005539564 ceph-mon[81769]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 02:13:25 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:13:25 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:13:25 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:13:25 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 02:13:25 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:25 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:25 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rotard", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:13:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e60 e60: 3 total, 3 up, 3 in
Nov 29 02:13:26 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 60 pg[8.10( v 46'8 (0'0,46'8] local-lis/les=59/60 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 60 pg[8.18( v 46'8 (0'0,46'8] local-lis/les=59/60 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 60 pg[8.1b( v 46'8 (0'0,46'8] local-lis/les=59/60 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 60 pg[8.12( v 46'8 (0'0,46'8] local-lis/les=59/60 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 60 pg[8.19( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=59/60 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=46'8 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 60 pg[8.8( v 46'8 (0'0,46'8] local-lis/les=59/60 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 60 pg[8.4( v 46'8 (0'0,46'8] local-lis/les=59/60 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 60 pg[8.17( v 46'8 (0'0,46'8] local-lis/les=59/60 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 60 pg[8.14( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=59/60 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=46'8 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:13:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:26.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:13:27 np0005539564 ceph-mon[81769]: Reconfiguring mgr.compute-0.rotard (monmap changed)...
Nov 29 02:13:27 np0005539564 ceph-mon[81769]: Reconfiguring daemon mgr.compute-0.rotard on compute-0
Nov 29 02:13:27 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:27 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:27 np0005539564 ceph-mon[81769]: Reconfiguring crash.compute-0 (monmap changed)...
Nov 29 02:13:27 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:13:27 np0005539564 ceph-mon[81769]: Reconfiguring daemon crash.compute-0 on compute-0
Nov 29 02:13:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:27.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:28 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 29 02:13:28 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 29 02:13:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 02:13:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:28.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 02:13:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:29.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:29 np0005539564 ceph-mon[81769]: Reconfiguring osd.0 (monmap changed)...
Nov 29 02:13:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 02:13:29 np0005539564 ceph-mon[81769]: Reconfiguring daemon osd.0 on compute-0
Nov 29 02:13:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e61 e61: 3 total, 3 up, 3 in
Nov 29 02:13:30 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 61 pg[11.12( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:30 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 61 pg[11.1c( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:30 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 61 pg[11.7( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:30 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 61 pg[11.5( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:30 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 61 pg[11.4( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:30 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 61 pg[11.f( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:30 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 61 pg[11.14( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:30 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 61 pg[11.1( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:30 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 61 pg[11.1b( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:30 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 61 pg[11.1d( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:30 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 61 pg[11.1e( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:30 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 61 pg[11.1a( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:30 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 29 02:13:30 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 29 02:13:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:30.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:31 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 02:13:31 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:13:31 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 02:13:31 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:13:31 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 02:13:31 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:31 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:31 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:13:31 np0005539564 podman[85253]: 2025-11-29 07:13:31.028908205 +0000 UTC m=+0.060293580 container create 439f4dbe20f7fb3458d1c1a79b6b10720098d60dae314258bdcc6d02d4c05024 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:31 np0005539564 podman[85253]: 2025-11-29 07:13:30.997300769 +0000 UTC m=+0.028686244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:13:31 np0005539564 systemd[1]: Started libpod-conmon-439f4dbe20f7fb3458d1c1a79b6b10720098d60dae314258bdcc6d02d4c05024.scope.
Nov 29 02:13:31 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:13:31 np0005539564 podman[85253]: 2025-11-29 07:13:31.164232206 +0000 UTC m=+0.195617601 container init 439f4dbe20f7fb3458d1c1a79b6b10720098d60dae314258bdcc6d02d4c05024 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jemison, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:31 np0005539564 podman[85253]: 2025-11-29 07:13:31.174617995 +0000 UTC m=+0.206003370 container start 439f4dbe20f7fb3458d1c1a79b6b10720098d60dae314258bdcc6d02d4c05024 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:31 np0005539564 quizzical_jemison[85269]: 167 167
Nov 29 02:13:31 np0005539564 systemd[1]: libpod-439f4dbe20f7fb3458d1c1a79b6b10720098d60dae314258bdcc6d02d4c05024.scope: Deactivated successfully.
Nov 29 02:13:31 np0005539564 podman[85253]: 2025-11-29 07:13:31.50810901 +0000 UTC m=+0.539494425 container attach 439f4dbe20f7fb3458d1c1a79b6b10720098d60dae314258bdcc6d02d4c05024 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:13:31 np0005539564 podman[85253]: 2025-11-29 07:13:31.508933233 +0000 UTC m=+0.540318608 container died 439f4dbe20f7fb3458d1c1a79b6b10720098d60dae314258bdcc6d02d4c05024 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 02:13:31 np0005539564 systemd[1]: var-lib-containers-storage-overlay-aaea571c501d22e6f81406af278a3dfe21136134a6c121371c072244878c4709-merged.mount: Deactivated successfully.
Nov 29 02:13:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:31.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:31 np0005539564 podman[85253]: 2025-11-29 07:13:31.580827306 +0000 UTC m=+0.612212701 container remove 439f4dbe20f7fb3458d1c1a79b6b10720098d60dae314258bdcc6d02d4c05024 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 02:13:31 np0005539564 systemd[1]: libpod-conmon-439f4dbe20f7fb3458d1c1a79b6b10720098d60dae314258bdcc6d02d4c05024.scope: Deactivated successfully.
Nov 29 02:13:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e62 e62: 3 total, 3 up, 3 in
Nov 29 02:13:31 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 62 pg[11.1a( v 53'2 (0'0,53'2] local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:31 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 62 pg[11.1e( v 53'2 (0'0,53'2] local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:31 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 62 pg[11.1b( v 53'2 (0'0,53'2] local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:31 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 62 pg[11.1( v 53'2 (0'0,53'2] local-lis/les=61/62 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:31 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 62 pg[11.1d( v 53'2 (0'0,53'2] local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:31 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 62 pg[11.14( v 53'2 (0'0,53'2] local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:31 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 62 pg[11.f( v 53'2 (0'0,53'2] local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:31 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 62 pg[11.5( v 53'2 (0'0,53'2] local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:31 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 62 pg[11.4( v 53'2 (0'0,53'2] local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:31 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 62 pg[11.7( v 53'2 (0'0,53'2] local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:31 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 62 pg[11.1c( v 53'2 (0'0,53'2] local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:31 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 62 pg[11.12( v 53'2 (0'0,53'2] local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:32 np0005539564 ceph-mon[81769]: Reconfiguring crash.compute-1 (monmap changed)...
Nov 29 02:13:32 np0005539564 ceph-mon[81769]: Reconfiguring daemon crash.compute-1 on compute-1
Nov 29 02:13:32 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 02:13:32 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:32 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:32 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 02:13:32 np0005539564 systemd-logind[785]: New session 33 of user zuul.
Nov 29 02:13:32 np0005539564 systemd[1]: Started Session 33 of User zuul.
Nov 29 02:13:32 np0005539564 podman[85408]: 2025-11-29 07:13:32.324557058 +0000 UTC m=+0.055006389 container create 25229e8ba37773d88a771135d4deb719d8910d975b9d8f84ea0285c311d6d5bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_turing, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:13:32 np0005539564 systemd[1]: Started libpod-conmon-25229e8ba37773d88a771135d4deb719d8910d975b9d8f84ea0285c311d6d5bc.scope.
Nov 29 02:13:32 np0005539564 podman[85408]: 2025-11-29 07:13:32.306553612 +0000 UTC m=+0.037002963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:13:32 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:13:32 np0005539564 podman[85408]: 2025-11-29 07:13:32.544655301 +0000 UTC m=+0.275104652 container init 25229e8ba37773d88a771135d4deb719d8910d975b9d8f84ea0285c311d6d5bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_turing, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:32 np0005539564 podman[85408]: 2025-11-29 07:13:32.55333789 +0000 UTC m=+0.283787221 container start 25229e8ba37773d88a771135d4deb719d8910d975b9d8f84ea0285c311d6d5bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_turing, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:13:32 np0005539564 podman[85408]: 2025-11-29 07:13:32.559519997 +0000 UTC m=+0.289969338 container attach 25229e8ba37773d88a771135d4deb719d8910d975b9d8f84ea0285c311d6d5bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_turing, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 02:13:32 np0005539564 affectionate_turing[85448]: 167 167
Nov 29 02:13:32 np0005539564 systemd[1]: libpod-25229e8ba37773d88a771135d4deb719d8910d975b9d8f84ea0285c311d6d5bc.scope: Deactivated successfully.
Nov 29 02:13:32 np0005539564 podman[85408]: 2025-11-29 07:13:32.561039661 +0000 UTC m=+0.291489022 container died 25229e8ba37773d88a771135d4deb719d8910d975b9d8f84ea0285c311d6d5bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_turing, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:32 np0005539564 systemd[1]: var-lib-containers-storage-overlay-26cdb0c8091461f4a600d8501a3f910ee4e1a6468ab0d8f88b258021ddd31146-merged.mount: Deactivated successfully.
Nov 29 02:13:32 np0005539564 podman[85408]: 2025-11-29 07:13:32.626846958 +0000 UTC m=+0.357296319 container remove 25229e8ba37773d88a771135d4deb719d8910d975b9d8f84ea0285c311d6d5bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:13:32 np0005539564 systemd[1]: libpod-conmon-25229e8ba37773d88a771135d4deb719d8910d975b9d8f84ea0285c311d6d5bc.scope: Deactivated successfully.
Nov 29 02:13:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:13:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:32.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:13:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e63 e63: 3 total, 3 up, 3 in
Nov 29 02:13:33 np0005539564 ceph-mon[81769]: Reconfiguring osd.1 (monmap changed)...
Nov 29 02:13:33 np0005539564 ceph-mon[81769]: Reconfiguring daemon osd.1 on compute-1
Nov 29 02:13:33 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 29 02:13:33 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:33 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:33 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:13:33 np0005539564 python3.9[85674]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:13:33 np0005539564 podman[85719]: 2025-11-29 07:13:33.382512233 +0000 UTC m=+0.046697951 container create 621d06acf6dbc3e58543584a2ecc4ee52281eef716bae4fb22cbdfd402fa2506 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 02:13:33 np0005539564 systemd[1]: Started libpod-conmon-621d06acf6dbc3e58543584a2ecc4ee52281eef716bae4fb22cbdfd402fa2506.scope.
Nov 29 02:13:33 np0005539564 podman[85719]: 2025-11-29 07:13:33.359529614 +0000 UTC m=+0.023715362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:13:33 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:13:33 np0005539564 podman[85719]: 2025-11-29 07:13:33.492872478 +0000 UTC m=+0.157058226 container init 621d06acf6dbc3e58543584a2ecc4ee52281eef716bae4fb22cbdfd402fa2506 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jang, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:33 np0005539564 podman[85719]: 2025-11-29 07:13:33.500064665 +0000 UTC m=+0.164250373 container start 621d06acf6dbc3e58543584a2ecc4ee52281eef716bae4fb22cbdfd402fa2506 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:33 np0005539564 podman[85719]: 2025-11-29 07:13:33.504630386 +0000 UTC m=+0.168816194 container attach 621d06acf6dbc3e58543584a2ecc4ee52281eef716bae4fb22cbdfd402fa2506 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:33 np0005539564 awesome_jang[85744]: 167 167
Nov 29 02:13:33 np0005539564 systemd[1]: libpod-621d06acf6dbc3e58543584a2ecc4ee52281eef716bae4fb22cbdfd402fa2506.scope: Deactivated successfully.
Nov 29 02:13:33 np0005539564 podman[85719]: 2025-11-29 07:13:33.508398334 +0000 UTC m=+0.172584082 container died 621d06acf6dbc3e58543584a2ecc4ee52281eef716bae4fb22cbdfd402fa2506 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jang, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:33 np0005539564 systemd[1]: var-lib-containers-storage-overlay-67b632947878d74c4001d8353c15a918cbf793c0597e4f9d5a054aad52d27be8-merged.mount: Deactivated successfully.
Nov 29 02:13:33 np0005539564 podman[85719]: 2025-11-29 07:13:33.571990138 +0000 UTC m=+0.236175856 container remove 621d06acf6dbc3e58543584a2ecc4ee52281eef716bae4fb22cbdfd402fa2506 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jang, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:33.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:33 np0005539564 systemd[1]: libpod-conmon-621d06acf6dbc3e58543584a2ecc4ee52281eef716bae4fb22cbdfd402fa2506.scope: Deactivated successfully.
Nov 29 02:13:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e64 e64: 3 total, 3 up, 3 in
Nov 29 02:13:34 np0005539564 ceph-mon[81769]: Reconfiguring mon.compute-1 (monmap changed)...
Nov 29 02:13:34 np0005539564 ceph-mon[81769]: Reconfiguring daemon mon.compute-1 on compute-1
Nov 29 02:13:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 29 02:13:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:34 np0005539564 ceph-mon[81769]: Reconfiguring mon.compute-2 (monmap changed)...
Nov 29 02:13:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:13:34 np0005539564 ceph-mon[81769]: Reconfiguring daemon mon.compute-2 on compute-2
Nov 29 02:13:34 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 3.d scrub starts
Nov 29 02:13:34 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 3.d scrub ok
Nov 29 02:13:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:34.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e65 e65: 3 total, 3 up, 3 in
Nov 29 02:13:35 np0005539564 python3.9[85970]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:13:35 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 02:13:35 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:35 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:35 np0005539564 ceph-mon[81769]: Reconfiguring mgr.compute-2.vyxqrz (monmap changed)...
Nov 29 02:13:35 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.vyxqrz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:13:35 np0005539564 ceph-mon[81769]: Reconfiguring daemon mgr.compute-2.vyxqrz on compute-2
Nov 29 02:13:35 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 02:13:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 02:13:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:35.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 02:13:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e66 e66: 3 total, 3 up, 3 in
Nov 29 02:13:36 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 29 02:13:36 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Nov 29 02:13:36 np0005539564 podman[86152]: 2025-11-29 07:13:36.568868617 +0000 UTC m=+0.077038201 container exec 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:13:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:36.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:36 np0005539564 podman[86152]: 2025-11-29 07:13:36.70740877 +0000 UTC m=+0.215578334 container exec_died 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 02:13:36 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:36 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e67 e67: 3 total, 3 up, 3 in
Nov 29 02:13:37 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Nov 29 02:13:37 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Nov 29 02:13:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:37.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:38.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:38 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:38 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:38 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:38 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:38 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e68 e68: 3 total, 3 up, 3 in
Nov 29 02:13:39 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Nov 29 02:13:39 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Nov 29 02:13:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:39.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e69 e69: 3 total, 3 up, 3 in
Nov 29 02:13:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:40.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:41 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:41 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:13:41 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:41 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:13:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:13:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:41.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:13:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:42.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:43 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 29 02:13:43 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 29 02:13:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:43.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:44.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:45 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 29 02:13:45 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 29 02:13:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:13:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:45.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:13:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 02:13:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:46.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 02:13:47 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 29 02:13:47 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 29 02:13:47 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 02:13:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e70 e70: 3 total, 3 up, 3 in
Nov 29 02:13:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:47.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:48 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 3.c deep-scrub starts
Nov 29 02:13:48 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 3.c deep-scrub ok
Nov 29 02:13:48 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 02:13:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:13:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:48.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:13:49 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 70 pg[9.6( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=70) [1] r=0 lpr=70 pi=[56,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:49 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 70 pg[9.1e( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=70) [1] r=0 lpr=70 pi=[56,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:49 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 70 pg[9.e( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=70) [1] r=0 lpr=70 pi=[56,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:49 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 70 pg[9.16( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=70) [1] r=0 lpr=70 pi=[56,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:49 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 3.f scrub starts
Nov 29 02:13:49 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 3.f scrub ok
Nov 29 02:13:49 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 29 02:13:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e71 e71: 3 total, 3 up, 3 in
Nov 29 02:13:49 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 71 pg[9.e( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[56,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:49 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 71 pg[9.e( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[56,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:49 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 71 pg[9.16( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[56,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:49 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 71 pg[9.16( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[56,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:49 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 71 pg[9.6( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[56,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:49 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 71 pg[9.1e( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[56,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:49 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 71 pg[9.1e( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[56,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:49 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 71 pg[9.6( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[56,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:49 np0005539564 systemd[1]: session-33.scope: Deactivated successfully.
Nov 29 02:13:49 np0005539564 systemd[1]: session-33.scope: Consumed 9.007s CPU time.
Nov 29 02:13:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:49.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:49 np0005539564 systemd-logind[785]: Session 33 logged out. Waiting for processes to exit.
Nov 29 02:13:49 np0005539564 systemd-logind[785]: Removed session 33.
Nov 29 02:13:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:50 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 29 02:13:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e72 e72: 3 total, 3 up, 3 in
Nov 29 02:13:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 02:13:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:50.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 02:13:51 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e73 e73: 3 total, 3 up, 3 in
Nov 29 02:13:51 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 73 pg[9.e( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=71/56 les/c/f=72/57/0 sis=73) [1] r=0 lpr=73 pi=[56,73)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:51 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 73 pg[9.e( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=71/56 les/c/f=72/57/0 sis=73) [1] r=0 lpr=73 pi=[56,73)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:51 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 73 pg[9.1e( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=71/56 les/c/f=72/57/0 sis=73) [1] r=0 lpr=73 pi=[56,73)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:51 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 73 pg[9.1e( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=71/56 les/c/f=72/57/0 sis=73) [1] r=0 lpr=73 pi=[56,73)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:51.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:52.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:53.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:54.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:55 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Nov 29 02:13:55 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Nov 29 02:13:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:13:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:55.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:13:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:13:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:56.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:13:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:57.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e74 e74: 3 total, 3 up, 3 in
Nov 29 02:13:58 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 74 pg[9.16( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=71/56 les/c/f=72/57/0 sis=74) [1] r=0 lpr=74 pi=[56,74)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:58 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 74 pg[9.6( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=71/56 les/c/f=72/57/0 sis=74) [1] r=0 lpr=74 pi=[56,74)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:58 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 74 pg[9.6( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=71/56 les/c/f=72/57/0 sis=74) [1] r=0 lpr=74 pi=[56,74)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:58 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 74 pg[9.16( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=71/56 les/c/f=72/57/0 sis=74) [1] r=0 lpr=74 pi=[56,74)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:58 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 74 pg[9.1e( v 53'1137 (0'0,53'1137] local-lis/les=73/74 n=5 ec=56/47 lis/c=71/56 les/c/f=72/57/0 sis=73) [1] r=0 lpr=73 pi=[56,73)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:58 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 74 pg[9.e( v 53'1137 (0'0,53'1137] local-lis/les=73/74 n=6 ec=56/47 lis/c=71/56 les/c/f=72/57/0 sis=73) [1] r=0 lpr=73 pi=[56,73)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:58.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:13:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:59.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:00 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Nov 29 02:14:00 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Nov 29 02:14:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:14:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:00.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:14:01 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Nov 29 02:14:01 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Nov 29 02:14:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e75 e75: 3 total, 3 up, 3 in
Nov 29 02:14:01 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:14:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:01.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:02 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 75 pg[9.16( v 53'1137 (0'0,53'1137] local-lis/les=74/75 n=5 ec=56/47 lis/c=71/56 les/c/f=72/57/0 sis=74) [1] r=0 lpr=74 pi=[56,74)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:14:02 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 75 pg[9.6( v 53'1137 (0'0,53'1137] local-lis/les=74/75 n=6 ec=56/47 lis/c=71/56 les/c/f=72/57/0 sis=74) [1] r=0 lpr=74 pi=[56,74)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:14:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:02.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:03.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:03 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:14:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:04.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:05 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 29 02:14:05 np0005539564 systemd-logind[785]: New session 34 of user zuul.
Nov 29 02:14:05 np0005539564 systemd[1]: Started Session 34 of User zuul.
Nov 29 02:14:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:14:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:05.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:14:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e76 e76: 3 total, 3 up, 3 in
Nov 29 02:14:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e77 e77: 3 total, 3 up, 3 in
Nov 29 02:14:06 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 29 02:14:06 np0005539564 python3.9[86657]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 29 02:14:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:06.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e78 e78: 3 total, 3 up, 3 in
Nov 29 02:14:07 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 02:14:07 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 02:14:07 np0005539564 python3.9[86831]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:14:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:07.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e79 e79: 3 total, 3 up, 3 in
Nov 29 02:14:08 np0005539564 python3.9[86987]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:14:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:14:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:08.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:14:09 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 02:14:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e80 e80: 3 total, 3 up, 3 in
Nov 29 02:14:09 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 80 pg[9.a( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=80) [1] r=0 lpr=80 pi=[56,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:09 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 80 pg[9.1a( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=80) [1] r=0 lpr=80 pi=[56,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:09.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:10 np0005539564 python3.9[87140]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:14:10 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 02:14:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e81 e81: 3 total, 3 up, 3 in
Nov 29 02:14:10 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 81 pg[9.1a( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=81) [1]/[0] r=-1 lpr=81 pi=[56,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:10 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 81 pg[9.1a( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=81) [1]/[0] r=-1 lpr=81 pi=[56,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:10 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 81 pg[9.a( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=81) [1]/[0] r=-1 lpr=81 pi=[56,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:10 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 81 pg[9.a( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=81) [1]/[0] r=-1 lpr=81 pi=[56,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:10.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e82 e82: 3 total, 3 up, 3 in
Nov 29 02:14:11 np0005539564 python3.9[87294]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:14:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:11.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:12 np0005539564 python3.9[87446]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:14:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e83 e83: 3 total, 3 up, 3 in
Nov 29 02:14:12 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 83 pg[9.1a( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=81/56 les/c/f=82/57/0 sis=83) [1] r=0 lpr=83 pi=[56,83)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:12 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 83 pg[9.1a( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=81/56 les/c/f=82/57/0 sis=83) [1] r=0 lpr=83 pi=[56,83)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:12 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 83 pg[9.a( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=81/56 les/c/f=82/57/0 sis=83) [1] r=0 lpr=83 pi=[56,83)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:12 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 83 pg[9.a( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=81/56 les/c/f=82/57/0 sis=83) [1] r=0 lpr=83 pi=[56,83)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:14:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:12.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:14:13 np0005539564 python3.9[87596]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:14:13 np0005539564 network[87613]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:14:13 np0005539564 network[87614]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:14:13 np0005539564 network[87615]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:14:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e84 e84: 3 total, 3 up, 3 in
Nov 29 02:14:13 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 84 pg[9.a( v 53'1137 (0'0,53'1137] local-lis/les=83/84 n=6 ec=56/47 lis/c=81/56 les/c/f=82/57/0 sis=83) [1] r=0 lpr=83 pi=[56,83)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:14:13 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Nov 29 02:14:13 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 84 pg[9.1a( v 53'1137 (0'0,53'1137] local-lis/les=83/84 n=5 ec=56/47 lis/c=81/56 les/c/f=82/57/0 sis=83) [1] r=0 lpr=83 pi=[56,83)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:14:13 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Nov 29 02:14:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:13.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e85 e85: 3 total, 3 up, 3 in
Nov 29 02:14:14 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 02:14:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:14:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:14.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:14:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:15 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 29 02:14:15 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 29 02:14:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:14:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:15.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:14:15 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 02:14:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:14:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:16.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:14:17 np0005539564 python3.9[87875]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:14:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e86 e86: 3 total, 3 up, 3 in
Nov 29 02:14:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 29 02:14:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:17.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:18 np0005539564 python3.9[88025]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:14:18 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 29 02:14:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:18.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e87 e87: 3 total, 3 up, 3 in
Nov 29 02:14:19 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 87 pg[9.d( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=87) [1] r=0 lpr=87 pi=[68,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:19 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 87 pg[9.1d( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=87) [1] r=0 lpr=87 pi=[68,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:19 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 29 02:14:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:14:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:19.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:14:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:19 np0005539564 python3.9[88179]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:14:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 29 02:14:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e88 e88: 3 total, 3 up, 3 in
Nov 29 02:14:20 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 88 pg[9.1d( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=88) [1]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:20 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 88 pg[9.1d( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=88) [1]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:20 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 88 pg[9.d( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=88) [1]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:20 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 88 pg[9.d( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=88) [1]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:14:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:20.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:14:21 np0005539564 python3.9[88337]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:14:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:14:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:21.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:14:22 np0005539564 python3.9[88421]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:14:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:22.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:23 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 29 02:14:23 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 29 02:14:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:14:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:23.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:14:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:24.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e89 e89: 3 total, 3 up, 3 in
Nov 29 02:14:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:14:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:25.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:14:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:26.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:27 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 29 02:14:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:27.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:28 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 29 02:14:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:14:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:28.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:14:29 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 29 02:14:29 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 29 02:14:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:14:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:29.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:14:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e90 e90: 3 total, 3 up, 3 in
Nov 29 02:14:29 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 90 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=88/68 les/c/f=89/69/0 sis=90) [1] r=0 lpr=90 pi=[68,90)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:29 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 90 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=88/68 les/c/f=89/69/0 sis=90) [1] r=0 lpr=90 pi=[68,90)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:29 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 90 pg[9.d( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=88/68 les/c/f=89/69/0 sis=90) [1] r=0 lpr=90 pi=[68,90)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:29 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 90 pg[9.d( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=88/68 les/c/f=89/69/0 sis=90) [1] r=0 lpr=90 pi=[68,90)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:30 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 29 02:14:30 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 29 02:14:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:14:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:30.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:14:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e91 e91: 3 total, 3 up, 3 in
Nov 29 02:14:31 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 91 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=90/91 n=5 ec=56/47 lis/c=88/68 les/c/f=89/69/0 sis=90) [1] r=0 lpr=90 pi=[68,90)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:14:31 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 91 pg[9.d( v 53'1137 (0'0,53'1137] local-lis/les=90/91 n=6 ec=56/47 lis/c=88/68 les/c/f=89/69/0 sis=90) [1] r=0 lpr=90 pi=[68,90)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:14:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:31.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:32 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 29 02:14:32 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 29 02:14:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:32.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:33 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 29 02:14:33 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 29 02:14:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:33.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:34 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 29 02:14:34 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 29 02:14:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:14:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:34.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:14:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e92 e92: 3 total, 3 up, 3 in
Nov 29 02:14:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:35 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 02:14:35 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 4.c scrub starts
Nov 29 02:14:35 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 4.c scrub ok
Nov 29 02:14:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:14:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:35.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:14:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:36.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 02:14:37 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Nov 29 02:14:37 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Nov 29 02:14:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e93 e93: 3 total, 3 up, 3 in
Nov 29 02:14:37 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 93 pg[9.f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=65/65 les/c/f=66/66/0 sis=93) [1] r=0 lpr=93 pi=[65,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:37 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 93 pg[9.1f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=65/65 les/c/f=66/66/0 sis=93) [1] r=0 lpr=93 pi=[65,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:37.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:38 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 02:14:38 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 02:14:38 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 29 02:14:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e94 e94: 3 total, 3 up, 3 in
Nov 29 02:14:38 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 29 02:14:38 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 94 pg[9.10( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=94) [1] r=0 lpr=94 pi=[56,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:38 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 94 pg[9.1f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=65/65 les/c/f=66/66/0 sis=94) [1]/[2] r=-1 lpr=94 pi=[65,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:38 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 94 pg[9.1f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=65/65 les/c/f=66/66/0 sis=94) [1]/[2] r=-1 lpr=94 pi=[65,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:38 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 94 pg[9.f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=65/65 les/c/f=66/66/0 sis=94) [1]/[2] r=-1 lpr=94 pi=[65,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:38 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 94 pg[9.f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=65/65 les/c/f=66/66/0 sis=94) [1]/[2] r=-1 lpr=94 pi=[65,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:14:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:38.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:14:39 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 02:14:39 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 02:14:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e95 e95: 3 total, 3 up, 3 in
Nov 29 02:14:39 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 95 pg[9.10( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=95) [1]/[0] r=-1 lpr=95 pi=[56,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:39 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 95 pg[9.10( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=95) [1]/[0] r=-1 lpr=95 pi=[56,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:14:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:39.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:14:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:40 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 4.a deep-scrub starts
Nov 29 02:14:40 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 4.a deep-scrub ok
Nov 29 02:14:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e96 e96: 3 total, 3 up, 3 in
Nov 29 02:14:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 96 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=94/65 les/c/f=95/66/0 sis=96) [1] r=0 lpr=96 pi=[65,96)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 96 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=94/65 les/c/f=95/66/0 sis=96) [1] r=0 lpr=96 pi=[65,96)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 96 pg[9.f( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=94/65 les/c/f=95/66/0 sis=96) [1] r=0 lpr=96 pi=[65,96)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:40 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 96 pg[9.f( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=94/65 les/c/f=95/66/0 sis=96) [1] r=0 lpr=96 pi=[65,96)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:14:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:40.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:14:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e97 e97: 3 total, 3 up, 3 in
Nov 29 02:14:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 97 pg[9.10( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=95/56 les/c/f=96/57/0 sis=97) [1] r=0 lpr=97 pi=[56,97)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 97 pg[9.10( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=95/56 les/c/f=96/57/0 sis=97) [1] r=0 lpr=97 pi=[56,97)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 97 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=96/97 n=5 ec=56/47 lis/c=94/65 les/c/f=95/66/0 sis=96) [1] r=0 lpr=96 pi=[65,96)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:14:41 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 97 pg[9.f( v 53'1137 (0'0,53'1137] local-lis/les=96/97 n=6 ec=56/47 lis/c=94/65 les/c/f=95/66/0 sis=96) [1] r=0 lpr=96 pi=[65,96)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:14:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:41.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e98 e98: 3 total, 3 up, 3 in
Nov 29 02:14:42 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 98 pg[9.10( v 53'1137 (0'0,53'1137] local-lis/les=97/98 n=6 ec=56/47 lis/c=95/56 les/c/f=96/57/0 sis=97) [1] r=0 lpr=97 pi=[56,97)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:14:42 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 6.8 deep-scrub starts
Nov 29 02:14:42 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 6.8 deep-scrub ok
Nov 29 02:14:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:42.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:43.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:44.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e99 e99: 3 total, 3 up, 3 in
Nov 29 02:14:45 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 29 02:14:45 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 29 02:14:45 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 29 02:14:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:45.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:45 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 99 pg[9.11( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=99) [1] r=0 lpr=99 pi=[56,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e100 e100: 3 total, 3 up, 3 in
Nov 29 02:14:46 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 100 pg[9.11( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[56,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:46 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 100 pg[9.11( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[56,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:46 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 29 02:14:46 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 29 02:14:46 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 29 02:14:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:46.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e101 e101: 3 total, 3 up, 3 in
Nov 29 02:14:47 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 101 pg[9.12( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:47 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 02:14:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:14:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:47.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:14:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e102 e102: 3 total, 3 up, 3 in
Nov 29 02:14:48 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 102 pg[9.12( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=102) [1]/[0] r=-1 lpr=102 pi=[56,102)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:48 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 102 pg[9.12( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=102) [1]/[0] r=-1 lpr=102 pi=[56,102)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:48 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 102 pg[9.11( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=100/56 les/c/f=101/57/0 sis=102) [1] r=0 lpr=102 pi=[56,102)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:48 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 102 pg[9.11( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=6 ec=56/47 lis/c=100/56 les/c/f=101/57/0 sis=102) [1] r=0 lpr=102 pi=[56,102)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:48.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:49 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 02:14:49 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 02:14:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:14:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:49.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:14:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:50 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 29 02:14:50 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 29 02:14:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e103 e103: 3 total, 3 up, 3 in
Nov 29 02:14:50 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 103 pg[9.11( v 53'1137 (0'0,53'1137] local-lis/les=102/103 n=6 ec=56/47 lis/c=100/56 les/c/f=101/57/0 sis=102) [1] r=0 lpr=102 pi=[56,102)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:14:50 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 02:14:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:14:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:50.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:14:51 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e104 e104: 3 total, 3 up, 3 in
Nov 29 02:14:51 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 104 pg[9.12( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=102/56 les/c/f=103/57/0 sis=104) [1] r=0 lpr=104 pi=[56,104)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:51 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 104 pg[9.12( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=102/56 les/c/f=103/57/0 sis=104) [1] r=0 lpr=104 pi=[56,104)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:14:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:51.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:14:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e105 e105: 3 total, 3 up, 3 in
Nov 29 02:14:52 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 105 pg[9.12( v 53'1137 (0'0,53'1137] local-lis/les=104/105 n=5 ec=56/47 lis/c=102/56 les/c/f=103/57/0 sis=104) [1] r=0 lpr=104 pi=[56,104)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:14:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:52.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:53 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 29 02:14:53 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 29 02:14:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:53.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:14:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:54.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:14:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e106 e106: 3 total, 3 up, 3 in
Nov 29 02:14:54 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 02:14:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:55.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:56 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 02:14:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:14:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:56.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:14:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e107 e107: 3 total, 3 up, 3 in
Nov 29 02:14:57 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 02:14:57 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 107 pg[9.15( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=107) [1] r=0 lpr=107 pi=[68,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:57.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:58 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 02:14:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e108 e108: 3 total, 3 up, 3 in
Nov 29 02:14:58 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 108 pg[9.15( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=-1 lpr=108 pi=[68,108)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:58 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 108 pg[9.15( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=-1 lpr=108 pi=[68,108)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:58.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:59 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 29 02:14:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e109 e109: 3 total, 3 up, 3 in
Nov 29 02:14:59 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 109 pg[9.16( v 53'1137 (0'0,53'1137] local-lis/les=74/75 n=5 ec=56/47 lis/c=74/74 les/c/f=75/75/0 sis=109 pruub=14.920358658s) [2] r=-1 lpr=109 pi=[74,109)/1 crt=53'1137 mlcod 0'0 active pruub 267.883087158s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:59 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 109 pg[9.16( v 53'1137 (0'0,53'1137] local-lis/les=74/75 n=5 ec=56/47 lis/c=74/74 les/c/f=75/75/0 sis=109 pruub=14.920310020s) [2] r=-1 lpr=109 pi=[74,109)/1 crt=53'1137 mlcod 0'0 unknown NOTIFY pruub 267.883087158s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:14:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:59.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:00 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 29 02:15:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e110 e110: 3 total, 3 up, 3 in
Nov 29 02:15:00 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 110 pg[9.16( v 53'1137 (0'0,53'1137] local-lis/les=74/75 n=5 ec=56/47 lis/c=74/74 les/c/f=75/75/0 sis=110) [2]/[1] r=0 lpr=110 pi=[74,110)/1 crt=53'1137 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:00 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 110 pg[9.16( v 53'1137 (0'0,53'1137] local-lis/les=74/75 n=5 ec=56/47 lis/c=74/74 les/c/f=75/75/0 sis=110) [2]/[1] r=0 lpr=110 pi=[74,110)/1 crt=53'1137 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:00 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 110 pg[9.15( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=0 lpr=110 pi=[68,110)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:00 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 110 pg[9.15( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=0 lpr=110 pi=[68,110)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:00 np0005539564 systemd[72696]: Created slice User Background Tasks Slice.
Nov 29 02:15:00 np0005539564 systemd[72696]: Starting Cleanup of User's Temporary Files and Directories...
Nov 29 02:15:00 np0005539564 systemd[72696]: Finished Cleanup of User's Temporary Files and Directories.
Nov 29 02:15:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 02:15:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:00.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 02:15:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e111 e111: 3 total, 3 up, 3 in
Nov 29 02:15:01 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 111 pg[9.16( v 53'1137 (0'0,53'1137] local-lis/les=110/111 n=5 ec=56/47 lis/c=74/74 les/c/f=75/75/0 sis=110) [2]/[1] async=[2] r=0 lpr=110 pi=[74,110)/1 crt=53'1137 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 111 pg[9.15( v 53'1137 (0'0,53'1137] local-lis/les=110/111 n=5 ec=56/47 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=0 lpr=110 pi=[68,110)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Nov 29 02:15:01 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Nov 29 02:15:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:01.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:02 np0005539564 podman[88738]: 2025-11-29 07:15:02.126429914 +0000 UTC m=+0.706223561 container exec 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 02:15:02 np0005539564 podman[88738]: 2025-11-29 07:15:02.227275058 +0000 UTC m=+0.807068705 container exec_died 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 02:15:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e112 e112: 3 total, 3 up, 3 in
Nov 29 02:15:02 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 112 pg[9.16( v 53'1137 (0'0,53'1137] local-lis/les=110/111 n=5 ec=56/47 lis/c=110/74 les/c/f=111/75/0 sis=112 pruub=14.800685883s) [2] async=[2] r=-1 lpr=112 pi=[74,112)/1 crt=53'1137 mlcod 53'1137 active pruub 271.021087646s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:02 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 112 pg[9.16( v 53'1137 (0'0,53'1137] local-lis/les=110/111 n=5 ec=56/47 lis/c=110/74 les/c/f=111/75/0 sis=112 pruub=14.800599098s) [2] r=-1 lpr=112 pi=[74,112)/1 crt=53'1137 mlcod 0'0 unknown NOTIFY pruub 271.021087646s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:02.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:15:03.165153) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400503165317, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7038, "num_deletes": 255, "total_data_size": 12987253, "memory_usage": 13168368, "flush_reason": "Manual Compaction"}
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400503279212, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 7806427, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 258, "largest_seqno": 7043, "table_properties": {"data_size": 7778369, "index_size": 18343, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8645, "raw_key_size": 81486, "raw_average_key_size": 23, "raw_value_size": 7711292, "raw_average_value_size": 2234, "num_data_blocks": 813, "num_entries": 3451, "num_filter_entries": 3451, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 1764400294, "file_creation_time": 1764400503, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 114132 microseconds, and 30503 cpu microseconds.
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:15:03.279283) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 7806427 bytes OK
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:15:03.279323) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:15:03.287997) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:15:03.288043) EVENT_LOG_v1 {"time_micros": 1764400503288035, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:15:03.288063) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 12949348, prev total WAL file size 13009136, number of live WAL files 2.
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:15:03.292315) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(7623KB) 8(1648B)]
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400503292430, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 7808075, "oldest_snapshot_seqno": -1}
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 3200 keys, 7802935 bytes, temperature: kUnknown
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400503428661, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 7802935, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7775541, "index_size": 18324, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8005, "raw_key_size": 77308, "raw_average_key_size": 24, "raw_value_size": 7711572, "raw_average_value_size": 2409, "num_data_blocks": 813, "num_entries": 3200, "num_filter_entries": 3200, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764400503, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:15:03.428861) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 7802935 bytes
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:15:03.430956) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 57.3 rd, 57.3 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(7.4, 0.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3456, records dropped: 256 output_compression: NoCompression
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:15:03.430975) EVENT_LOG_v1 {"time_micros": 1764400503430965, "job": 4, "event": "compaction_finished", "compaction_time_micros": 136291, "compaction_time_cpu_micros": 15127, "output_level": 6, "num_output_files": 1, "total_output_size": 7802935, "num_input_records": 3456, "num_output_records": 3200, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400503432171, "job": 4, "event": "table_file_deletion", "file_number": 14}
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400503432218, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:15:03.292182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:15:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e113 e113: 3 total, 3 up, 3 in
Nov 29 02:15:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:03.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:04 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:15:04 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:15:04 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:15:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:04.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:05.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e114 e114: 3 total, 3 up, 3 in
Nov 29 02:15:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:06.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:07 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 29 02:15:07 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 29 02:15:07 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 29 02:15:07 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 29 02:15:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:07.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:08 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Nov 29 02:15:08 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Nov 29 02:15:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e115 e115: 3 total, 3 up, 3 in
Nov 29 02:15:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:08.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:09 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Nov 29 02:15:09 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Nov 29 02:15:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:09.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:09 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 29 02:15:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 02:15:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:10.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 02:15:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e116 e116: 3 total, 3 up, 3 in
Nov 29 02:15:11 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 29 02:15:11 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 29 02:15:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 02:15:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:11.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 02:15:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e117 e117: 3 total, 3 up, 3 in
Nov 29 02:15:12 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 29 02:15:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 02:15:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:12.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 02:15:13 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:15:13 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:15:13 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 02:15:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e118 e118: 3 total, 3 up, 3 in
Nov 29 02:15:13 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 118 pg[9.1a( v 53'1137 (0'0,53'1137] local-lis/les=83/84 n=5 ec=56/47 lis/c=83/83 les/c/f=84/84/0 sis=118 pruub=12.113296509s) [0] r=-1 lpr=118 pi=[83,118)/1 crt=53'1137 mlcod 0'0 active pruub 279.069183350s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:13 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 118 pg[9.1a( v 53'1137 (0'0,53'1137] local-lis/les=83/84 n=5 ec=56/47 lis/c=83/83 les/c/f=84/84/0 sis=118 pruub=12.113245010s) [0] r=-1 lpr=118 pi=[83,118)/1 crt=53'1137 mlcod 0'0 unknown NOTIFY pruub 279.069183350s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:13.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:14 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.a scrub starts
Nov 29 02:15:14 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.a scrub ok
Nov 29 02:15:14 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 02:15:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e119 e119: 3 total, 3 up, 3 in
Nov 29 02:15:14 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 119 pg[9.1a( v 53'1137 (0'0,53'1137] local-lis/les=83/84 n=5 ec=56/47 lis/c=83/83 les/c/f=84/84/0 sis=119) [0]/[1] r=0 lpr=119 pi=[83,119)/1 crt=53'1137 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:14 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 119 pg[9.1a( v 53'1137 (0'0,53'1137] local-lis/les=83/84 n=5 ec=56/47 lis/c=83/83 les/c/f=84/84/0 sis=119) [0]/[1] r=0 lpr=119 pi=[83,119)/1 crt=53'1137 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:14.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:15 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.b deep-scrub starts
Nov 29 02:15:15 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.b deep-scrub ok
Nov 29 02:15:15 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:15:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e120 e120: 3 total, 3 up, 3 in
Nov 29 02:15:15 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 120 pg[9.1a( v 53'1137 (0'0,53'1137] local-lis/les=119/120 n=5 ec=56/47 lis/c=83/83 les/c/f=84/84/0 sis=119) [0]/[1] async=[0] r=0 lpr=119 pi=[83,119)/1 crt=53'1137 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:15.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e121 e121: 3 total, 3 up, 3 in
Nov 29 02:15:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 121 pg[9.1a( v 53'1137 (0'0,53'1137] local-lis/les=119/120 n=5 ec=56/47 lis/c=119/83 les/c/f=120/84/0 sis=121 pruub=15.362966537s) [0] async=[0] r=-1 lpr=121 pi=[83,121)/1 crt=53'1137 mlcod 53'1137 active pruub 285.130432129s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:16 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 121 pg[9.1a( v 53'1137 (0'0,53'1137] local-lis/les=119/120 n=5 ec=56/47 lis/c=119/83 les/c/f=120/84/0 sis=121 pruub=15.362649918s) [0] r=-1 lpr=121 pi=[83,121)/1 crt=53'1137 mlcod 0'0 unknown NOTIFY pruub 285.130432129s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:15:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 02:15:16 np0005539564 python3.9[89192]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:15:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 02:15:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:16.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 02:15:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e122 e122: 3 total, 3 up, 3 in
Nov 29 02:15:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:17.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:18 np0005539564 python3.9[89479]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 29 02:15:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:18.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:19 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.c scrub starts
Nov 29 02:15:19 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.c scrub ok
Nov 29 02:15:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:19.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:20 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.d deep-scrub starts
Nov 29 02:15:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:20.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:21 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:15:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 02:15:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:21.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 02:15:22 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.e deep-scrub starts
Nov 29 02:15:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:22.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 02:15:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:23.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 02:15:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:24.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:25 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:15:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:25.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 02:15:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:26.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 02:15:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).paxos(paxos updating c 252..780) lease_timeout -- calling new election
Nov 29 02:15:27 np0005539564 ceph-mon[81769]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Nov 29 02:15:27 np0005539564 ceph-mon[81769]: paxos.2).electionLogic(14) init, last seen epoch 14
Nov 29 02:15:27 np0005539564 python3.9[89631]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 29 02:15:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:15:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:27.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:15:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:15:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:15:27 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 8.485752106s
Nov 29 02:15:27 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 8.485752106s
Nov 29 02:15:27 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.486108780s, txc = 0x55ba503eb500
Nov 29 02:15:27 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.483182907s, txc = 0x55ba50ca0f00
Nov 29 02:15:27 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.528957844s, txc = 0x55ba50ca1200
Nov 29 02:15:27 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.563386917s, txc = 0x55ba4e525200
Nov 29 02:15:28 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.d deep-scrub ok
Nov 29 02:15:28 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.e deep-scrub ok
Nov 29 02:15:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:15:28 np0005539564 python3.9[89783]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:15:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:28.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:29 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Nov 29 02:15:29 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Nov 29 02:15:29 np0005539564 python3.9[89935]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 29 02:15:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e123 e123: 3 total, 3 up, 3 in
Nov 29 02:15:29 np0005539564 ceph-mon[81769]: mon.compute-2 calling monitor election
Nov 29 02:15:29 np0005539564 ceph-mon[81769]: mon.compute-1 calling monitor election
Nov 29 02:15:29 np0005539564 ceph-mon[81769]: mon.compute-0 calling monitor election
Nov 29 02:15:29 np0005539564 ceph-mon[81769]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:15:29 np0005539564 ceph-mon[81769]: overall HEALTH_OK
Nov 29 02:15:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 02:15:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:29.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 02:15:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e124 e124: 3 total, 3 up, 3 in
Nov 29 02:15:30 np0005539564 python3.9[90087]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:15:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:30.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:31 np0005539564 python3.9[90239]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:15:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 02:15:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:31.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 02:15:31 np0005539564 python3.9[90317]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:15:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:32.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:33 np0005539564 python3.9[90469]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:15:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:33.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:33 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Nov 29 02:15:33 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Nov 29 02:15:34 np0005539564 python3.9[90623]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 29 02:15:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 02:15:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e125 e125: 3 total, 3 up, 3 in
Nov 29 02:15:34 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 125 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=90/91 n=5 ec=56/47 lis/c=90/90 les/c/f=91/91/0 sis=125 pruub=9.320991516s) [2] r=-1 lpr=125 pi=[90,125)/1 crt=53'1137 mlcod 0'0 active pruub 297.582366943s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:34 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 125 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=90/91 n=5 ec=56/47 lis/c=90/90 les/c/f=91/91/0 sis=125 pruub=9.320464134s) [2] r=-1 lpr=125 pi=[90,125)/1 crt=53'1137 mlcod 0'0 unknown NOTIFY pruub 297.582366943s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:34.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:35 np0005539564 python3.9[90777]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 29 02:15:35 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 02:15:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 02:15:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:35.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 02:15:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e126 e126: 3 total, 3 up, 3 in
Nov 29 02:15:35 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 126 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=90/91 n=5 ec=56/47 lis/c=90/90 les/c/f=91/91/0 sis=126) [2]/[1] r=0 lpr=126 pi=[90,126)/1 crt=53'1137 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:35 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 126 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=90/91 n=5 ec=56/47 lis/c=90/90 les/c/f=91/91/0 sis=126) [2]/[1] r=0 lpr=126 pi=[90,126)/1 crt=53'1137 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:36 np0005539564 python3.9[90930]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:15:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:15:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:36.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:15:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e127 e127: 3 total, 3 up, 3 in
Nov 29 02:15:36 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 127 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=126/127 n=5 ec=56/47 lis/c=90/90 les/c/f=91/91/0 sis=126) [2]/[1] async=[2] r=0 lpr=126 pi=[90,126)/1 crt=53'1137 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:37 np0005539564 python3.9[91082]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 29 02:15:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e128 e128: 3 total, 3 up, 3 in
Nov 29 02:15:37 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 128 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=126/127 n=5 ec=56/47 lis/c=126/90 les/c/f=127/91/0 sis=128 pruub=15.239768982s) [2] async=[2] r=-1 lpr=128 pi=[90,128)/1 crt=53'1137 mlcod 53'1137 active pruub 306.551605225s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:37 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 128 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=126/127 n=5 ec=56/47 lis/c=126/90 les/c/f=127/91/0 sis=128 pruub=15.239694595s) [2] r=-1 lpr=128 pi=[90,128)/1 crt=53'1137 mlcod 0'0 unknown NOTIFY pruub 306.551605225s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:15:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:37.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:15:38 np0005539564 python3.9[91234]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:15:38 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Nov 29 02:15:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:38.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:38 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Nov 29 02:15:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e129 e129: 3 total, 3 up, 3 in
Nov 29 02:15:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 02:15:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:39.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 02:15:40 np0005539564 python3.9[91387]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:15:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 02:15:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:40.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 02:15:41 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 29 02:15:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e130 e130: 3 total, 3 up, 3 in
Nov 29 02:15:41 np0005539564 python3.9[91539]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:15:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 02:15:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:41.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 02:15:42 np0005539564 python3.9[91617]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:15:42 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 29 02:15:42 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 130 pg[9.1e( v 53'1137 (0'0,53'1137] local-lis/les=73/74 n=5 ec=56/47 lis/c=73/73 les/c/f=74/74/0 sis=130 pruub=9.825573921s) [0] r=-1 lpr=130 pi=[73,130)/1 crt=53'1137 mlcod 0'0 active pruub 305.582672119s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:42 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 130 pg[9.1e( v 53'1137 (0'0,53'1137] local-lis/les=73/74 n=5 ec=56/47 lis/c=73/73 les/c/f=74/74/0 sis=130 pruub=9.825428009s) [0] r=-1 lpr=130 pi=[73,130)/1 crt=53'1137 mlcod 0'0 unknown NOTIFY pruub 305.582672119s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e131 e131: 3 total, 3 up, 3 in
Nov 29 02:15:42 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 131 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=96/97 n=5 ec=56/47 lis/c=96/96 les/c/f=97/97/0 sis=131 pruub=10.480443954s) [0] r=-1 lpr=131 pi=[96,131)/1 crt=53'1137 mlcod 0'0 active pruub 306.733001709s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:42 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 131 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=96/97 n=5 ec=56/47 lis/c=96/96 les/c/f=97/97/0 sis=131 pruub=10.480386734s) [0] r=-1 lpr=131 pi=[96,131)/1 crt=53'1137 mlcod 0'0 unknown NOTIFY pruub 306.733001709s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:42 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 131 pg[9.1e( v 53'1137 (0'0,53'1137] local-lis/les=73/74 n=5 ec=56/47 lis/c=73/73 les/c/f=74/74/0 sis=131) [0]/[1] r=0 lpr=131 pi=[73,131)/1 crt=53'1137 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:42 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 131 pg[9.1e( v 53'1137 (0'0,53'1137] local-lis/les=73/74 n=5 ec=56/47 lis/c=73/73 les/c/f=74/74/0 sis=131) [0]/[1] r=0 lpr=131 pi=[73,131)/1 crt=53'1137 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:42 np0005539564 python3.9[91769]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:15:42 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 29 02:15:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:42.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:42 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 29 02:15:43 np0005539564 python3.9[91847]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:15:43 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:15:43 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:15:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e132 e132: 3 total, 3 up, 3 in
Nov 29 02:15:43 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 132 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=96/97 n=5 ec=56/47 lis/c=96/96 les/c/f=97/97/0 sis=132) [0]/[1] r=0 lpr=132 pi=[96,132)/1 crt=53'1137 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:43 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 132 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=96/97 n=5 ec=56/47 lis/c=96/96 les/c/f=97/97/0 sis=132) [0]/[1] r=0 lpr=132 pi=[96,132)/1 crt=53'1137 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:43 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 132 pg[9.1e( v 53'1137 (0'0,53'1137] local-lis/les=131/132 n=5 ec=56/47 lis/c=73/73 les/c/f=74/74/0 sis=131) [0]/[1] async=[0] r=0 lpr=131 pi=[73,131)/1 crt=53'1137 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:43.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:44 np0005539564 python3.9[91999]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:15:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e133 e133: 3 total, 3 up, 3 in
Nov 29 02:15:44 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 133 pg[9.1e( v 53'1137 (0'0,53'1137] local-lis/les=131/132 n=5 ec=56/47 lis/c=131/73 les/c/f=132/74/0 sis=133 pruub=14.973197937s) [0] async=[0] r=-1 lpr=133 pi=[73,133)/1 crt=53'1137 mlcod 53'1137 active pruub 313.295532227s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:44 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 133 pg[9.1e( v 53'1137 (0'0,53'1137] local-lis/les=131/132 n=5 ec=56/47 lis/c=131/73 les/c/f=132/74/0 sis=133 pruub=14.973144531s) [0] r=-1 lpr=133 pi=[73,133)/1 crt=53'1137 mlcod 0'0 unknown NOTIFY pruub 313.295532227s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:44 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 133 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=132/133 n=5 ec=56/47 lis/c=96/96 les/c/f=97/97/0 sis=132) [0]/[1] async=[0] r=0 lpr=132 pi=[96,132)/1 crt=53'1137 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:44.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e134 e134: 3 total, 3 up, 3 in
Nov 29 02:15:45 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 134 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=132/133 n=5 ec=56/47 lis/c=132/96 les/c/f=133/97/0 sis=134 pruub=14.983908653s) [0] async=[0] r=-1 lpr=134 pi=[96,134)/1 crt=53'1137 mlcod 53'1137 active pruub 314.336822510s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:45 np0005539564 ceph-osd[79212]: osd.1 pg_epoch: 134 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=132/133 n=5 ec=56/47 lis/c=132/96 les/c/f=133/97/0 sis=134 pruub=14.983828545s) [0] r=-1 lpr=134 pi=[96,134)/1 crt=53'1137 mlcod 0'0 unknown NOTIFY pruub 314.336822510s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:45.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:46.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:46 np0005539564 python3.9[92150]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:15:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:47.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 e135: 3 total, 3 up, 3 in
Nov 29 02:15:48 np0005539564 python3.9[92302]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 29 02:15:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:48.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:48 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Nov 29 02:15:48 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Nov 29 02:15:49 np0005539564 python3.9[92452]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:15:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:49.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 02:15:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:50.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 02:15:51 np0005539564 python3.9[92604]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:15:51 np0005539564 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 29 02:15:51 np0005539564 systemd[1]: tuned.service: Deactivated successfully.
Nov 29 02:15:51 np0005539564 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 29 02:15:51 np0005539564 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 02:15:51 np0005539564 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 02:15:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:51.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:51 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.1f deep-scrub starts
Nov 29 02:15:51 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 10.1f deep-scrub ok
Nov 29 02:15:52 np0005539564 python3.9[92766]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 29 02:15:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:52.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:53.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:54 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 29 02:15:54 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 29 02:15:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:54.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 02:15:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:55.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 02:15:56 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 8.14 deep-scrub starts
Nov 29 02:15:56 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 8.14 deep-scrub ok
Nov 29 02:15:56 np0005539564 python3.9[92918]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:15:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:56.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:57 np0005539564 python3.9[93072]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:15:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:57 np0005539564 systemd[1]: session-34.scope: Deactivated successfully.
Nov 29 02:15:57 np0005539564 systemd[1]: session-34.scope: Consumed 1min 4.768s CPU time.
Nov 29 02:15:57 np0005539564 systemd-logind[785]: Session 34 logged out. Waiting for processes to exit.
Nov 29 02:15:57 np0005539564 systemd-logind[785]: Removed session 34.
Nov 29 02:15:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 02:15:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:57.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 02:15:58 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 8.17 deep-scrub starts
Nov 29 02:15:58 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 8.17 deep-scrub ok
Nov 29 02:15:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 02:15:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:58.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 02:15:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:15:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:59.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:59 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 29 02:15:59 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 29 02:16:00 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Nov 29 02:16:00 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Nov 29 02:16:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 02:16:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:00.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 02:16:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:01.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:02.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:03.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:04 np0005539564 systemd-logind[785]: New session 35 of user zuul.
Nov 29 02:16:04 np0005539564 systemd[1]: Started Session 35 of User zuul.
Nov 29 02:16:04 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Nov 29 02:16:04 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Nov 29 02:16:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:04.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:05 np0005539564 python3.9[93252]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:16:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:05.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:06 np0005539564 python3.9[93408]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 29 02:16:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:06.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:07 np0005539564 python3.9[93561]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:16:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:07.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:07 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 8.19 deep-scrub starts
Nov 29 02:16:07 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 8.19 deep-scrub ok
Nov 29 02:16:08 np0005539564 python3.9[93645]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 02:16:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:16:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:08.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:16:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:09.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:09 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Nov 29 02:16:10 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Nov 29 02:16:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:10.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:11 np0005539564 python3.9[93802]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:16:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:16:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:11.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:16:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:16:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:12.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:16:13 np0005539564 podman[94021]: 2025-11-29 07:16:13.617427412 +0000 UTC m=+0.103844754 container exec 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 02:16:13 np0005539564 podman[94021]: 2025-11-29 07:16:13.714922122 +0000 UTC m=+0.201339494 container exec_died 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 02:16:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:13.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:14 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Nov 29 02:16:14 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Nov 29 02:16:14 np0005539564 python3.9[94245]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:16:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:14.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:15 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 29 02:16:15 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 29 02:16:15 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:16:15 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:16:15 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:16:15 np0005539564 python3.9[94499]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:16:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:16:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:15.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:16:16 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Nov 29 02:16:16 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Nov 29 02:16:16 np0005539564 python3.9[94681]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 29 02:16:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:16:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:16:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:16:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:16:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:16:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:16.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:16:16 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Nov 29 02:16:17 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Nov 29 02:16:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:17.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:17 np0005539564 python3.9[94831]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:16:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:16:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:16:18 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Nov 29 02:16:18 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Nov 29 02:16:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:16:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:18.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:16:19 np0005539564 python3.9[94989]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:16:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:16:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:19.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:16:20 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Nov 29 02:16:20 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Nov 29 02:16:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:20.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:21 np0005539564 python3.9[95142]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:16:21 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 11.f scrub starts
Nov 29 02:16:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:21.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:21 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 11.f scrub ok
Nov 29 02:16:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:22.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:23 np0005539564 python3.9[95429]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 02:16:23 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Nov 29 02:16:23 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Nov 29 02:16:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:23.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:24 np0005539564 python3.9[95579]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:16:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:24.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:25 np0005539564 python3.9[95735]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:16:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:16:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:25.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:16:26 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Nov 29 02:16:26 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Nov 29 02:16:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:16:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:26.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:16:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:27 np0005539564 python3.9[95890]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:16:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:27.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:28 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 29 02:16:28 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 29 02:16:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:28.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:16:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:16:29 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 29 02:16:29 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 29 02:16:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:29.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:30 np0005539564 python3.9[96093]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:16:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:30.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:31 np0005539564 python3.9[96247]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 29 02:16:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:31.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:33.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:33 np0005539564 systemd-logind[785]: Session 35 logged out. Waiting for processes to exit.
Nov 29 02:16:33 np0005539564 systemd[1]: session-35.scope: Deactivated successfully.
Nov 29 02:16:33 np0005539564 systemd[1]: session-35.scope: Consumed 19.923s CPU time.
Nov 29 02:16:33 np0005539564 systemd-logind[785]: Removed session 35.
Nov 29 02:16:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:33.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:35.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:35.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:37.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:37.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:38 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 29 02:16:38 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 29 02:16:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:39.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:16:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:39.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:16:40 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 11.1e deep-scrub starts
Nov 29 02:16:40 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 11.1e deep-scrub ok
Nov 29 02:16:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:41.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:41 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 9.e scrub starts
Nov 29 02:16:41 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 9.e scrub ok
Nov 29 02:16:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:16:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:41.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:16:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:43.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:43 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Nov 29 02:16:43 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Nov 29 02:16:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:43.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:44 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 9.a scrub starts
Nov 29 02:16:44 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 9.a scrub ok
Nov 29 02:16:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:16:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:45.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:16:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:16:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:45.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:16:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:47.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:47.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:49.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:49.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:50 np0005539564 systemd-logind[785]: New session 36 of user zuul.
Nov 29 02:16:50 np0005539564 systemd[1]: Started Session 36 of User zuul.
Nov 29 02:16:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:16:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:51.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:16:51 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 9.d scrub starts
Nov 29 02:16:51 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 9.d scrub ok
Nov 29 02:16:51 np0005539564 python3.9[96425]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:16:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:51.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:52 np0005539564 python3.9[96579]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:16:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:16:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:53.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:16:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:53.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:54 np0005539564 python3.9[96772]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:16:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:55.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:55 np0005539564 systemd[1]: session-36.scope: Deactivated successfully.
Nov 29 02:16:55 np0005539564 systemd[1]: session-36.scope: Consumed 2.502s CPU time.
Nov 29 02:16:55 np0005539564 systemd-logind[785]: Session 36 logged out. Waiting for processes to exit.
Nov 29 02:16:55 np0005539564 systemd-logind[785]: Removed session 36.
Nov 29 02:16:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:55.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:57.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:57.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:16:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:59.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:16:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:16:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:59.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:01.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:01.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:02 np0005539564 systemd-logind[785]: New session 37 of user zuul.
Nov 29 02:17:03 np0005539564 systemd[1]: Started Session 37 of User zuul.
Nov 29 02:17:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:03.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:03 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 29 02:17:03 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 29 02:17:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:03.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:04 np0005539564 python3.9[96951]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:17:04 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Nov 29 02:17:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:05.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:05 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Nov 29 02:17:05 np0005539564 python3.9[97105]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:17:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:05.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:06 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Nov 29 02:17:06 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Nov 29 02:17:07 np0005539564 python3.9[97261]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:17:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:07.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:07 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Nov 29 02:17:07 np0005539564 python3.9[97345]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:17:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:07.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:08 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Nov 29 02:17:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:09.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:09.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:10 np0005539564 python3.9[97498]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:17:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:11.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:11 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 9.15 deep-scrub starts
Nov 29 02:17:11 np0005539564 ceph-osd[79212]: log_channel(cluster) log [DBG] : 9.15 deep-scrub ok
Nov 29 02:17:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:11.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:12 np0005539564 python3.9[97693]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:17:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:13.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:13.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:14 np0005539564 python3.9[97845]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:17:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:15.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:15 np0005539564 python3.9[98010]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:17:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:15.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:16 np0005539564 python3.9[98088]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:17:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:17.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:17 np0005539564 python3.9[98240]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:17:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:17.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:18 np0005539564 python3.9[98318]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:17:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:19.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:19 np0005539564 python3.9[98470]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:17:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:19.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:20 np0005539564 python3.9[98622]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:17:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:21.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:21 np0005539564 python3.9[98774]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:17:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:21.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:22 np0005539564 python3.9[98926]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:17:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:23.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:23 np0005539564 python3.9[99078]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:17:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:24.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:25.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:26.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:27.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:28.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:29.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:29 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:17:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:30.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:31.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:32.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:33.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:33 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:17:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:34.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:35.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:36.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:37.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:37 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:17:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:38.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:39.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:40.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:17:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:41.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:41 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:17:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:42.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:43 np0005539564 python3.9[99366]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:17:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:43.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:17:43 np0005539564 python3.9[99520]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:17:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:44.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:17:44 np0005539564 python3.9[99672]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:17:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:45.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:45 np0005539564 ceph-mon[81769]: mon.compute-2 calling monitor election
Nov 29 02:17:45 np0005539564 ceph-mon[81769]: mon.compute-0 calling monitor election
Nov 29 02:17:45 np0005539564 ceph-mon[81769]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:17:45 np0005539564 ceph-mon[81769]: overall HEALTH_OK
Nov 29 02:17:45 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:17:45 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:17:45 np0005539564 python3.9[99824]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:17:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:46.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:46 np0005539564 python3.9[99977]: ansible-service_facts Invoked
Nov 29 02:17:46 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:17:46 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:17:46 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:17:46 np0005539564 network[99994]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:17:46 np0005539564 network[99995]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:17:46 np0005539564 network[99996]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:17:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:47.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:48.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:49.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:50.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:51.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:52.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:52 np0005539564 python3.9[100448]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:17:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:53.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:54.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:55.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:55 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:17:55 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:17:55 np0005539564 python3.9[100653]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 29 02:17:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:56.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:57.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:57 np0005539564 python3.9[100805]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:17:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:58.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:58 np0005539564 python3.9[100883]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:17:59 np0005539564 python3.9[101035]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:17:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:17:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:59.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:59 np0005539564 python3.9[101113]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:00.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:01.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:01 np0005539564 python3.9[101265]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:02.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:03.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:03 np0005539564 python3.9[101417]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:18:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:04.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:04 np0005539564 python3.9[101501]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:18:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:05.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:05 np0005539564 systemd[1]: session-37.scope: Deactivated successfully.
Nov 29 02:18:05 np0005539564 systemd[1]: session-37.scope: Consumed 26.679s CPU time.
Nov 29 02:18:05 np0005539564 systemd-logind[785]: Session 37 logged out. Waiting for processes to exit.
Nov 29 02:18:05 np0005539564 systemd-logind[785]: Removed session 37.
Nov 29 02:18:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:06.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:07.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:08.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:09.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:10.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:11 np0005539564 systemd-logind[785]: New session 38 of user zuul.
Nov 29 02:18:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:11 np0005539564 systemd[1]: Started Session 38 of User zuul.
Nov 29 02:18:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:11.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:11 np0005539564 python3.9[101685]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:12.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:12 np0005539564 python3.9[101837]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:13.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:13 np0005539564 python3.9[101915]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:13 np0005539564 systemd[1]: session-38.scope: Deactivated successfully.
Nov 29 02:18:13 np0005539564 systemd[1]: session-38.scope: Consumed 1.787s CPU time.
Nov 29 02:18:13 np0005539564 systemd-logind[785]: Session 38 logged out. Waiting for processes to exit.
Nov 29 02:18:13 np0005539564 systemd-logind[785]: Removed session 38.
Nov 29 02:18:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:14.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:15.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:18:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:16.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:18:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:18:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:17.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:18:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:18.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:19.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:20.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:20 np0005539564 systemd-logind[785]: New session 39 of user zuul.
Nov 29 02:18:20 np0005539564 systemd[1]: Started Session 39 of User zuul.
Nov 29 02:18:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:21.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:21 np0005539564 python3.9[102093]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:18:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:22.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:22 np0005539564 python3.9[102250]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:23.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:23 np0005539564 python3.9[102425]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:23 np0005539564 python3.9[102503]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.4zwokl5x recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:24.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:24 np0005539564 python3.9[102655]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:25.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:25 np0005539564 python3.9[102733]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.l91i0fts recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:26.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:26 np0005539564 python3.9[102885]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:18:26 np0005539564 python3.9[103037]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:27.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:27 np0005539564 python3.9[103115]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:18:28 np0005539564 python3.9[103267]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:28.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:28 np0005539564 python3.9[103345]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:18:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:29.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:29 np0005539564 python3.9[103497]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:30 np0005539564 python3.9[103649]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:30.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:30 np0005539564 python3.9[103727]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:31.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:31 np0005539564 python3.9[103879]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:31 np0005539564 python3.9[103957]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:32.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:33.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:33 np0005539564 python3.9[104109]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:18:33 np0005539564 systemd[1]: Reloading.
Nov 29 02:18:33 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:18:33 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:18:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:18:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:34.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:18:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:35.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:36.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:18:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:37.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:18:37 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:18:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:38.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:38 np0005539564 python3.9[104298]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:38 np0005539564 python3.9[104376]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:39.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:39 np0005539564 python3.9[104528]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:40.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:40 np0005539564 python3.9[104606]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:41 np0005539564 python3.9[104758]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:18:41 np0005539564 systemd[1]: Reloading.
Nov 29 02:18:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:41.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:41 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:18:41 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:18:41 np0005539564 systemd[1]: Starting Create netns directory...
Nov 29 02:18:41 np0005539564 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 02:18:41 np0005539564 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 02:18:41 np0005539564 systemd[1]: Finished Create netns directory.
Nov 29 02:18:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:42.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:42 np0005539564 python3.9[104952]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:18:42 np0005539564 network[104969]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:18:42 np0005539564 network[104970]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:18:42 np0005539564 network[104971]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:18:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:43.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:44.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:18:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:45.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:18:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:46.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:47.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:48.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:48 np0005539564 python3.9[105233]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:48 np0005539564 python3.9[105311]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:49.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:49 np0005539564 python3.9[105463]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:50.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:50 np0005539564 python3.9[105615]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:51.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:51 np0005539564 python3.9[105693]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:52.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:52 np0005539564 python3.9[105845]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 02:18:52 np0005539564 systemd[1]: Starting Time & Date Service...
Nov 29 02:18:52 np0005539564 systemd[1]: Started Time & Date Service.
Nov 29 02:18:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:53.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:53 np0005539564 python3.9[106001]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:54.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:54 np0005539564 python3.9[106155]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:55.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:55 np0005539564 python3.9[106233]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:56.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:56 np0005539564 python3.9[106518]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:57 np0005539564 python3.9[106596]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.sow5q1cb recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:57.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:57 np0005539564 python3.9[106748]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:58.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:58 np0005539564 python3.9[106826]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:18:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:59.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:59 np0005539564 python3.9[106978]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:19:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:00.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:00 np0005539564 python3[107131]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 02:19:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:19:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:01.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:19:01 np0005539564 python3.9[107283]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:19:01 np0005539564 python3.9[107361]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:19:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:02.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:02 np0005539564 python3.9[107513]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:19:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:03.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:03 np0005539564 python3.9[107591]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:19:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:19:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:04.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:19:04 np0005539564 python3.9[107743]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:19:04 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:19:04 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:19:04 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:19:04 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:19:04 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:19:04 np0005539564 python3.9[107821]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:19:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:05.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:05 np0005539564 python3.9[107973]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:19:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:06.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:06 np0005539564 python3.9[108051]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:19:07 np0005539564 python3.9[108203]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:19:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:07.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:07.312454) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400747312539, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2726, "num_deletes": 251, "total_data_size": 6142853, "memory_usage": 6230912, "flush_reason": "Manual Compaction"}
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400747381112, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 3969675, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7048, "largest_seqno": 9769, "table_properties": {"data_size": 3958555, "index_size": 6910, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 26743, "raw_average_key_size": 21, "raw_value_size": 3934543, "raw_average_value_size": 3167, "num_data_blocks": 306, "num_entries": 1242, "num_filter_entries": 1242, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400503, "oldest_key_time": 1764400503, "file_creation_time": 1764400747, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 68707 microseconds, and 11766 cpu microseconds.
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:07.381172) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 3969675 bytes OK
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:07.381201) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:07.386899) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:07.386983) EVENT_LOG_v1 {"time_micros": 1764400747386965, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:07.387019) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 6130360, prev total WAL file size 6130360, number of live WAL files 2.
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:07.388871) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(3876KB)], [15(7620KB)]
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400747388954, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 11772610, "oldest_snapshot_seqno": -1}
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:19:07 np0005539564 python3.9[108281]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 3916 keys, 10209032 bytes, temperature: kUnknown
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400747786363, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 10209032, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10176146, "index_size": 22024, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9797, "raw_key_size": 94677, "raw_average_key_size": 24, "raw_value_size": 10098759, "raw_average_value_size": 2578, "num_data_blocks": 963, "num_entries": 3916, "num_filter_entries": 3916, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764400747, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:07.855686) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 10209032 bytes
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:07.951038) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 29.6 rd, 25.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 7.4 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(5.5) write-amplify(2.6) OK, records in: 4442, records dropped: 526 output_compression: NoCompression
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:07.951089) EVENT_LOG_v1 {"time_micros": 1764400747951070, "job": 6, "event": "compaction_finished", "compaction_time_micros": 397508, "compaction_time_cpu_micros": 35524, "output_level": 6, "num_output_files": 1, "total_output_size": 10209032, "num_input_records": 4442, "num_output_records": 3916, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400747951879, "job": 6, "event": "table_file_deletion", "file_number": 17}
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400747953399, "job": 6, "event": "table_file_deletion", "file_number": 15}
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:07.388778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:07.953441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:07.953446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:07.953448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:07.953449) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:19:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:07.953451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:19:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:08.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:08 np0005539564 python3.9[108433]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:19:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:09.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:10.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:11.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:11 np0005539564 python3.9[108588]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:19:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:12.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:12 np0005539564 python3.9[108740]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:19:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:13.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:13 np0005539564 python3.9[108892]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:19:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:14.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:15 np0005539564 python3.9[109044]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 02:19:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:15.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:15 np0005539564 python3.9[109196]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 02:19:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:16.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:16.309356) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400756309416, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 314, "num_deletes": 250, "total_data_size": 213496, "memory_usage": 220600, "flush_reason": "Manual Compaction"}
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400756313064, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 140562, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9774, "largest_seqno": 10083, "table_properties": {"data_size": 138546, "index_size": 244, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5246, "raw_average_key_size": 18, "raw_value_size": 134612, "raw_average_value_size": 485, "num_data_blocks": 11, "num_entries": 277, "num_filter_entries": 277, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400751, "oldest_key_time": 1764400751, "file_creation_time": 1764400756, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 3743 microseconds, and 1741 cpu microseconds.
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:16.313102) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 140562 bytes OK
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:16.313122) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:16.314386) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:16.314401) EVENT_LOG_v1 {"time_micros": 1764400756314396, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:16.314420) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 211252, prev total WAL file size 211252, number of live WAL files 2.
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:16.314736) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(137KB)], [18(9969KB)]
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400756314794, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 10349594, "oldest_snapshot_seqno": -1}
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 3685 keys, 7524107 bytes, temperature: kUnknown
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400756398000, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 7524107, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7496389, "index_size": 17432, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9221, "raw_key_size": 90419, "raw_average_key_size": 24, "raw_value_size": 7426559, "raw_average_value_size": 2015, "num_data_blocks": 761, "num_entries": 3685, "num_filter_entries": 3685, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764400756, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:16.398316) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 7524107 bytes
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:16.400356) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 124.2 rd, 90.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 9.7 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(127.2) write-amplify(53.5) OK, records in: 4193, records dropped: 508 output_compression: NoCompression
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:16.400383) EVENT_LOG_v1 {"time_micros": 1764400756400370, "job": 8, "event": "compaction_finished", "compaction_time_micros": 83323, "compaction_time_cpu_micros": 20751, "output_level": 6, "num_output_files": 1, "total_output_size": 7524107, "num_input_records": 4193, "num_output_records": 3685, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400756400561, "job": 8, "event": "table_file_deletion", "file_number": 20}
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400756402794, "job": 8, "event": "table_file_deletion", "file_number": 18}
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:16.314663) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:16.402857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:16.402863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:16.402865) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:16.402867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:19:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:19:16.402869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:19:16 np0005539564 systemd[1]: session-39.scope: Deactivated successfully.
Nov 29 02:19:16 np0005539564 systemd[1]: session-39.scope: Consumed 35.347s CPU time.
Nov 29 02:19:16 np0005539564 systemd-logind[785]: Session 39 logged out. Waiting for processes to exit.
Nov 29 02:19:16 np0005539564 systemd-logind[785]: Removed session 39.
Nov 29 02:19:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:17.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:18.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:19.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:19 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:19:19 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:19:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:20.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:21.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:22.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:23 np0005539564 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 02:19:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:23.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:24.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:25.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:26.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:26 np0005539564 systemd-logind[785]: New session 40 of user zuul.
Nov 29 02:19:26 np0005539564 systemd[1]: Started Session 40 of User zuul.
Nov 29 02:19:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:27.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:27 np0005539564 python3.9[109431]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 29 02:19:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:28.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:28 np0005539564 python3.9[109583]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:19:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:29.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:29 np0005539564 python3.9[109737]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 29 02:19:30 np0005539564 python3.9[109889]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.lweaynph follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:19:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:30.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:30 np0005539564 python3.9[110014]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.lweaynph mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400769.7309282-108-204335165196943/.source.lweaynph _original_basename=.0_pe2cbw follow=False checksum=425f0dec1542497d25012cf56c23eb3f3f1d2c45 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:19:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:31.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:32 np0005539564 python3.9[110166]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:19:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:32.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:33 np0005539564 python3.9[110318]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQsLXbFhjUoBaTkhKZlhlr4wo49zgbzeJBequh3eUPlExtzdjrm/R47hkAJGagw+KhipRZ6XygyvP7g0rFG4kdUV8ZbW7HpIhvM2LCuDhFHJGta5IbLQDOAA3QuuNA4DyzfWhW146Q2aOja0AoRZOxjBRKO37fhEgGVJO/UZQHoJZFXHQPBPhZ27Wtt4Jfhz0G/t7WgxqsHTg9pnZL3PKV8yC/Ety9V+G9Hjrbwv8GblAazAMvnYcN6Hhh0mKKJ41E1++cy2nN9Lr6iU9KXS4BN73PkapyN75SJK4/2HEELgi7XCGQtXkdc+cnS1nYdtqW5aUS8fONsji8bdoy4AvRQrTsNWbXNcQXBesHoKNiBaUZjzaW0LhwQ2HTD36wG2FW/thgjrlU0AY8aqut/tcB7sjUacgNn8XfqibZb07x75HvbixT1G+V9ax63HLyfAiLCZquwpnl7CuyQvBAe+UNPLU4Kegtn+KKw2+3BoNkkAKkAoDdKd5fQKWFavTllfU=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEesPYkFXAKa2jD/XHieFXe2/NLZG5BPNBvLebxF7i4V#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK3fAbGbewc62wcP/ANYyTDYdWflUi4LqSZ2pYXEDgbyEIKVn6IU7ulNV9i7b7SvxrtzT5K34kYv1WsU3bRd5RM=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDc04fosxiJMz9URZzfwgW2kqQvT/wRjkGRSpo8InnYlU+RAljr+QL8e1C8DPu41m+HGkgDmV4uDikwXF3b0w/6D0/P6iPUsexRy4OkOFgOqlzl7+pNzQ1p5SMgMoaKslyPA1DEUc0bxHjIpTHyjq/X8YamvXJO4KLpZ42Ii0c6RyWcejiRw4wZQWh2s6egN8in6cEVODGcWVseYKhFaPjdUDBtuQy4LaGwosJIkR1OCy9coVbEdcv2vOxdpLby9ssC7nEDAKg2X+0rmcdpImSt43KnAXiuMegm5A7FvAas99jVOYawKyostqRzEOId/1TnbBGDEabjKYlPEOLSFiMsBWLwTkN5loBfqwpLWlheJWPYP90mvfiENFN4W+ut6nx4zBVHQYvGts86HDkcSVipUVxaYaWf37c/GMXcee85lI//k2lNWe0yYOJGU7P1jyU+ug0Cn1MeQghj1V8Gcnax0b58J+Ttp4a7UnYek2q2w2h6nbIbZT5m+yw/KYeNtE8=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEgIAlZsupHHlO1a9ydDFIdgMGgwYqu0xx1PBhB1cRGz#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHZLPbvNXmCCAW6hZosm19hA5j7Lbr0PZCizVLJXvz0y88L5bXrAQVln7SscOXMnvFy6P8Fn/54/gijC9Rd2rDs=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9IXwkB2kbuJv6AXS7YRKSa74/LXNdMPGOs9WAzsnePFq78YtNX+JkgkhS6H4PtKZr7d8zGldcUVTXsG54r7DHIiEhjiunXArwm7nxPCcvRVmU6kntuiJbAOObaZlgrdlGcNsB0gEt5E4YWVNxiiRnsA60PvQbLyfN0/+99rmyMLcT4z9DL+dZj8kNH54PFTeXByeUArORk1qkPj734Ru+RP82qH26PyeJz2HlCsq7qPKepCgiVDKLbjXnLqt58qEzzVFKx3gfIhpvZ8PiUoFSS6UJlk/70XVp+og+tU/Dv952UWQMOHkfsIfqvdJgcy2hYuLbI03ZOF/NRU1FEUEPIhfU7kM2KzkqoDLyu+ntXGTBE6vWBuqrH+KUMqrAGGXZPnoTS8zb3H1izaYqN48vVE10jDHjkhWEEIuwN5AVGsCBjpRkQ+rZ+gDb/z4loN29WMX/KmqYAy+qsu7X8gFojfnlrv4DYVd1lxYZPnqS8bCkeBF8txjMVUD5EpNVGVU=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOpx0/R+UH9iWt0hByjYOi11MmeoOEV/RM05Qq0CkR6T#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLcAFq3gx5S+bCbh1b0B1Plh9X3nnDc+14hmd4HK59tBD1jd/VrvEVcg/jrioqZJxPOiBK8QMTq5htAcmQbIjnM=#012 create=True mode=0644 path=/tmp/ansible.lweaynph state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:19:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:33.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:34 np0005539564 python3.9[110470]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.lweaynph' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:19:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:34.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:35 np0005539564 python3.9[110624]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.lweaynph state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:19:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:35.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:35 np0005539564 systemd[1]: session-40.scope: Deactivated successfully.
Nov 29 02:19:35 np0005539564 systemd[1]: session-40.scope: Consumed 5.722s CPU time.
Nov 29 02:19:35 np0005539564 systemd-logind[785]: Session 40 logged out. Waiting for processes to exit.
Nov 29 02:19:35 np0005539564 systemd-logind[785]: Removed session 40.
Nov 29 02:19:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:19:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:36.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:19:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:19:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:37.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:19:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:38.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:39.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:40.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:40 np0005539564 systemd-logind[785]: New session 41 of user zuul.
Nov 29 02:19:40 np0005539564 systemd[1]: Started Session 41 of User zuul.
Nov 29 02:19:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:41.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:42 np0005539564 python3.9[110803]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:19:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:42.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:43.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:43 np0005539564 python3.9[110959]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 02:19:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:44.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:44 np0005539564 python3.9[111113]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:19:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:45.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:45 np0005539564 python3.9[111266]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:19:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:46.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:46 np0005539564 python3.9[111419]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:19:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:47.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:47 np0005539564 python3.9[111571]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:19:47 np0005539564 systemd-logind[785]: Session 41 logged out. Waiting for processes to exit.
Nov 29 02:19:47 np0005539564 systemd[1]: session-41.scope: Deactivated successfully.
Nov 29 02:19:47 np0005539564 systemd[1]: session-41.scope: Consumed 4.504s CPU time.
Nov 29 02:19:47 np0005539564 systemd-logind[785]: Removed session 41.
Nov 29 02:19:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:48.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:19:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:49.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:19:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:50.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:51.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:52.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:53 np0005539564 systemd-logind[785]: New session 42 of user zuul.
Nov 29 02:19:53 np0005539564 systemd[1]: Started Session 42 of User zuul.
Nov 29 02:19:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:53.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:54.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:54 np0005539564 python3.9[111749]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:19:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:55.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:55 np0005539564 python3.9[111905]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:19:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:56.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:57.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:57 np0005539564 python3.9[111989]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 02:19:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:58.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:19:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:59.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:59 np0005539564 python3.9[112140]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:20:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:00.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:01.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:01 np0005539564 python3.9[112291]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:20:02 np0005539564 ceph-mon[81769]: overall HEALTH_OK
Nov 29 02:20:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:02.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:02 np0005539564 python3.9[112441]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:20:03 np0005539564 python3.9[112591]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:20:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:03.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:03 np0005539564 systemd[1]: session-42.scope: Deactivated successfully.
Nov 29 02:20:03 np0005539564 systemd[1]: session-42.scope: Consumed 6.746s CPU time.
Nov 29 02:20:03 np0005539564 systemd-logind[785]: Session 42 logged out. Waiting for processes to exit.
Nov 29 02:20:03 np0005539564 systemd-logind[785]: Removed session 42.
Nov 29 02:20:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:04.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:05.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:06.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:07.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:08.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:09.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:09 np0005539564 systemd-logind[785]: New session 43 of user zuul.
Nov 29 02:20:09 np0005539564 systemd[1]: Started Session 43 of User zuul.
Nov 29 02:20:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:10.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:10 np0005539564 python3.9[112771]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:20:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:20:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:11.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:20:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:12.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:12 np0005539564 python3.9[112927]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:13 np0005539564 python3.9[113079]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:13.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:14 np0005539564 python3.9[113231]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:14.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:15 np0005539564 python3.9[113354]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400813.6431384-166-219295495889578/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=0d99c9abcec4a172081938761329ce65439bf03e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:15.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:15 np0005539564 python3.9[113506]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:16.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:16 np0005539564 python3.9[113629]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400815.2936068-166-150123738313037/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=cd8ab8ed4fdf501d1b4ce95ba4f398e005279fa9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:17 np0005539564 python3.9[113781]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:17.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:17 np0005539564 python3.9[113904]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400816.8045678-166-65370010995727/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=ee13808667244d67f276273372d8805f301825b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:18.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:18 np0005539564 python3.9[114056]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:20:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:19.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:20:19 np0005539564 python3.9[114359]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:20:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:20:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:20:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:20:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:20:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:20.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:20:20 np0005539564 python3.9[114613]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:20 np0005539564 python3.9[114736]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400819.813546-350-240015011422067/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=3c4d8f6e9c17aae5651ce3d88d9a927cc7d724a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:21.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:21 np0005539564 python3.9[114888]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:22 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:20:22 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:20:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:22.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:22 np0005539564 python3.9[115011]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400821.1430054-350-39519425094652/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=901ecafc59da21fac83aa5044424fabd09a6fef2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:23 np0005539564 python3.9[115163]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:23.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:23 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:20:23 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:20:23 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:20:23 np0005539564 python3.9[115286]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400822.6642134-350-211067775467912/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=f38e7044b4430879d872881a5277bf0b18f4d4a0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:24.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:24 np0005539564 python3.9[115440]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:25 np0005539564 python3.9[115592]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:25.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:25 np0005539564 python3.9[115744]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:26.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:26 np0005539564 python3.9[115867]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400825.433409-536-44786079121632/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=fe8dbda43339f246f13226e50c62f7aab7137cef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:27 np0005539564 python3.9[116019]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:27.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:27 np0005539564 python3.9[116142]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400826.6720395-536-129162465499869/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=901ecafc59da21fac83aa5044424fabd09a6fef2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:28.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:28 np0005539564 python3.9[116294]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:29 np0005539564 python3.9[116440]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400827.9976761-536-262072419411276/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=c20087c5fddf24306433860443c3ecb6c6a2fe59 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:29.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:20:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:20:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:30.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:30 np0005539564 python3.9[116619]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:31.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:31 np0005539564 python3.9[116771]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:32 np0005539564 python3.9[116894]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400830.8441598-743-222931488886951/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e373fa93a0e53fbb089cc79ce53406904f5c7b5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:32.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:32 np0005539564 python3.9[117046]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:20:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:33.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:20:33 np0005539564 python3.9[117198]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:34 np0005539564 python3.9[117321]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400833.014514-819-39319373056802/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e373fa93a0e53fbb089cc79ce53406904f5c7b5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:34.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:35 np0005539564 python3.9[117473]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:35.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:35 np0005539564 python3.9[117625]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:36.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:36 np0005539564 python3.9[117748]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400835.4153757-893-77064997694524/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e373fa93a0e53fbb089cc79ce53406904f5c7b5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:37 np0005539564 python3.9[117900]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:37.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:37 np0005539564 python3.9[118052]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:38.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:38 np0005539564 python3.9[118175]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400837.4130142-970-53624096549183/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e373fa93a0e53fbb089cc79ce53406904f5c7b5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:39 np0005539564 python3.9[118327]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:39.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:39 np0005539564 python3.9[118479]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:40.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:40 np0005539564 python3.9[118602]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400839.4824107-1044-50144485659025/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e373fa93a0e53fbb089cc79ce53406904f5c7b5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:41.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:41 np0005539564 python3.9[118754]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:42 np0005539564 python3.9[118906]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:42.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:42 np0005539564 python3.9[119029]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400841.674822-1100-100357587456237/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e373fa93a0e53fbb089cc79ce53406904f5c7b5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:43 np0005539564 systemd-logind[785]: Session 43 logged out. Waiting for processes to exit.
Nov 29 02:20:43 np0005539564 systemd[1]: session-43.scope: Deactivated successfully.
Nov 29 02:20:43 np0005539564 systemd[1]: session-43.scope: Consumed 25.717s CPU time.
Nov 29 02:20:43 np0005539564 systemd-logind[785]: Removed session 43.
Nov 29 02:20:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:43.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:44.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:46.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:46.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:20:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5834 writes, 25K keys, 5834 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5834 writes, 927 syncs, 6.29 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5834 writes, 25K keys, 5834 commit groups, 1.0 writes per commit group, ingest: 19.04 MB, 0.03 MB/s#012Interval WAL: 5834 writes, 927 syncs, 6.29 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ba4d07f610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ba4d07f610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 29 02:20:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:48.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:48.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:48 np0005539564 systemd-logind[785]: New session 44 of user zuul.
Nov 29 02:20:49 np0005539564 systemd[1]: Started Session 44 of User zuul.
Nov 29 02:20:49 np0005539564 python3.9[119209]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:50.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:50.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:50 np0005539564 python3.9[119361]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:51 np0005539564 python3.9[119484]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400850.0907755-68-242437870265784/.source.conf _original_basename=ceph.conf follow=False checksum=c098df1eed8765439af66fe3d0de96ae0e466ab0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:52 np0005539564 python3.9[119636]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:52.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:52.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:52 np0005539564 python3.9[119759]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400851.6270733-68-22472210534621/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=b1c127dd74be8d747654d0d3f00b29a32faa6866 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:53 np0005539564 systemd[1]: session-44.scope: Deactivated successfully.
Nov 29 02:20:53 np0005539564 systemd[1]: session-44.scope: Consumed 2.996s CPU time.
Nov 29 02:20:53 np0005539564 systemd-logind[785]: Session 44 logged out. Waiting for processes to exit.
Nov 29 02:20:53 np0005539564 systemd-logind[785]: Removed session 44.
Nov 29 02:20:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:54.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:54.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:56.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:56.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:58.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:20:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:58.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:59 np0005539564 systemd-logind[785]: New session 45 of user zuul.
Nov 29 02:20:59 np0005539564 systemd[1]: Started Session 45 of User zuul.
Nov 29 02:21:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:00.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:00.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:01 np0005539564 python3.9[119939]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:21:02 np0005539564 python3.9[120095]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:21:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:02.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:02.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:02 np0005539564 python3.9[120247]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:21:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:21:03 np0005539564 python3.9[120397]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:21:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:04.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:04.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:04 np0005539564 python3.9[120549]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 29 02:21:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:06.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:06.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:21:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:08.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:08.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:09 np0005539564 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 29 02:21:09 np0005539564 python3.9[120705]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:21:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:10.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:10.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:10 np0005539564 python3.9[120789]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:21:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:12.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:12.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:21:13 np0005539564 python3.9[120942]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:21:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:14.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:14.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:14 np0005539564 python3[121097]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
  rule:
    proto: udp
    dport: 4789
- rule_name: 119 neutron geneve networks
  rule:
    proto: udp
    dport: 6081
    state: ["UNTRACKED"]
- rule_name: 120 neutron geneve networks no conntrack
  rule:
    proto: udp
    dport: 6081
    table: raw
    chain: OUTPUT
    jump: NOTRACK
    action: append
    state: []
- rule_name: 121 neutron geneve networks no conntrack
  rule:
    proto: udp
    dport: 6081
    table: raw
    chain: PREROUTING
    jump: NOTRACK
    action: append
    state: []
 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 29 02:21:16 np0005539564 python3.9[121250]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:21:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:16.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:16.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:16 np0005539564 python3.9[121402]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:21:17 np0005539564 python3.9[121480]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:21:18 np0005539564 python3.9[121632]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:21:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:18.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:18.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:21:18 np0005539564 python3.9[121710]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.hmswb0os recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:21:19 np0005539564 python3.9[121862]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:21:19 np0005539564 python3.9[121940]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:21:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:20.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:20.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:20 np0005539564 python3.9[122093]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:21:21 np0005539564 python3[122246]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 02:21:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:22.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:22.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:22 np0005539564 python3.9[122398]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:21:23 np0005539564 python3.9[122525]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400882.254974-437-275966302330503/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:21:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:21:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:24.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:24.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:24 np0005539564 python3.9[122677]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:21:25 np0005539564 python3.9[122802]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400883.806396-482-267852946142744/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:21:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:26.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:26.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:26 np0005539564 python3.9[122954]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:21:27 np0005539564 python3.9[123079]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400886.0069842-527-105193333566906/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:21:28 np0005539564 python3.9[123231]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:21:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:28.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:28.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:21:29 np0005539564 python3.9[123356]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400887.6355019-572-131705429105272/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:21:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:30.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:30.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:32.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:32.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:33 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:21:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:34.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:34.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:21:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 1826 writes, 11K keys, 1826 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s#012Cumulative WAL: 1825 writes, 1825 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1826 writes, 11K keys, 1826 commit groups, 1.0 writes per commit group, ingest: 21.43 MB, 0.04 MB/s#012Interval WAL: 1825 writes, 1825 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     60.3      0.19              0.04         4    0.047       0      0       0.0       0.0#012  L6      1/0    7.18 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.1     46.3     39.5      0.62              0.07         3    0.206     12K   1290       0.0       0.0#012 Sum      1/0    7.18 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     35.4     44.3      0.81              0.12         7    0.115     12K   1290       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     35.5     44.4      0.80              0.12         6    0.134     12K   1290       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     46.3     39.5      0.62              0.07         3    0.206     12K   1290       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     60.9      0.19              0.04         3    0.062       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.011, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.8 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558dc73191f0#2 capacity: 308.00 MB usage: 1.14 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(48,1.01 MB,0.32795%) FilterBlock(7,41.30 KB,0.0130938%) IndexBlock(7,92.83 KB,0.0294326%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 02:21:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:36.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:36.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:37 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:21:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:38.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:38.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:40.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:40.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:41 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:21:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:42.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:42.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:44.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:44.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:45 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:21:46 np0005539564 python3.9[123621]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:21:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:46.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:46.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:46 np0005539564 python3.9[123764]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400889.2834027-617-188926667633715/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:21:47 np0005539564 python3.9[123917]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:21:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).paxos(paxos updating c 503..1138) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 12.637134552s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 02:21:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:21:47 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-1[81765]: 2025-11-29T07:21:47.725+0000 7f973f79c640 -1 mon.compute-1@2(peon).paxos(paxos updating c 503..1138) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 12.637134552s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 02:21:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:21:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:48.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:48.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:48 np0005539564 python3.9[124069]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:21:49 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:21:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:21:49 np0005539564 python3.9[124226]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:21:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:21:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:50.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:50 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:50.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:51 np0005539564 python3.9[124378]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:21:51 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:21:51 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:21:51 np0005539564 ceph-mon[81769]: mon.compute-2 calling monitor election
Nov 29 02:21:51 np0005539564 ceph-mon[81769]: mon.compute-0 calling monitor election
Nov 29 02:21:51 np0005539564 ceph-mon[81769]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:21:51 np0005539564 ceph-mon[81769]: overall HEALTH_OK
Nov 29 02:21:51 np0005539564 python3.9[124531]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:21:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:21:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:52.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:52 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:52.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:21:52 np0005539564 python3.9[124685]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:21:53 np0005539564 python3.9[124840]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:21:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:21:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:54 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:54.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:54.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:55 np0005539564 python3.9[124990]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:21:55 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:21:55 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Nov 29 02:21:55 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:21:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:21:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:56 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:56.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:56.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:56 np0005539564 python3.9[125143]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:21:56 np0005539564 ovs-vsctl[125144]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 29 02:21:57 np0005539564 python3.9[125296]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:21:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:21:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:21:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:21:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:58.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:58 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:58.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:59 np0005539564 python3.9[125451]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:21:59 np0005539564 ovs-vsctl[125452]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 29 02:21:59 np0005539564 python3.9[125602]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:22:00 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:22:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:00.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:22:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:00 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:00.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:00 np0005539564 python3.9[125756]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:01 np0005539564 python3.9[125908]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:02 np0005539564 python3.9[125986]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:02.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:22:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:02 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:02.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:22:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:22:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:22:03 np0005539564 python3.9[126138]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:03 np0005539564 python3.9[126216]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:04.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:04.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:04 np0005539564 python3.9[126368]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:05 np0005539564 python3.9[126520]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:05 np0005539564 python3.9[126598]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:06.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:06.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:06 np0005539564 python3.9[126750]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:07 np0005539564 python3.9[126828]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:22:08 np0005539564 python3.9[126980]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:22:08 np0005539564 systemd[1]: Reloading.
Nov 29 02:22:08 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:22:08 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:22:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:08.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:08.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:09 np0005539564 python3.9[127171]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:09 np0005539564 python3.9[127249]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:10 np0005539564 python3.9[127401]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:10.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:10.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:10 np0005539564 python3.9[127479]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:12 np0005539564 python3.9[127631]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:22:12 np0005539564 systemd[1]: Reloading.
Nov 29 02:22:12 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:22:12 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:22:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:12.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:12.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:12 np0005539564 systemd[1]: Starting Create netns directory...
Nov 29 02:22:12 np0005539564 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 02:22:12 np0005539564 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 02:22:12 np0005539564 systemd[1]: Finished Create netns directory.
Nov 29 02:22:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:22:13 np0005539564 python3.9[127826]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:14 np0005539564 python3.9[127978]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:14.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:14.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:14 np0005539564 python3.9[128101]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400933.7124038-1370-96850026760159/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:16 np0005539564 python3.9[128303]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:22:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:22:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:16.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:16.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:16 np0005539564 python3.9[128455]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:17 np0005539564 python3.9[128578]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400936.2959273-1445-14929623563706/.source.json _original_basename=.y44xuv_x follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:22:17 np0005539564 python3.9[128730]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:18.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:18.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:22:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:20.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:22:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:20.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:20 np0005539564 python3.9[129157]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 29 02:22:21 np0005539564 python3.9[129309]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 02:22:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:22.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:22.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:22 np0005539564 python3.9[129461]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 02:22:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:22:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:24.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:24.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:24 np0005539564 python3[129639]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 02:22:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:26.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:26.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:22:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:28.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:28.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:30.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:30.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:32 np0005539564 podman[129652]: 2025-11-29 07:22:32.129888372 +0000 UTC m=+7.321114026 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 02:22:32 np0005539564 podman[129775]: 2025-11-29 07:22:32.244628309 +0000 UTC m=+0.025844529 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 02:22:32 np0005539564 podman[129775]: 2025-11-29 07:22:32.360017515 +0000 UTC m=+0.141233725 container create 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 29 02:22:32 np0005539564 python3[129639]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 02:22:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:32.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:32.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:22:33 np0005539564 python3.9[129965]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:22:34 np0005539564 python3.9[130119]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:34 np0005539564 python3.9[130195]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:22:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:34.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:34.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:35 np0005539564 python3.9[130346]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400954.5382454-1709-99660792997237/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:35 np0005539564 python3.9[130422]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:22:35 np0005539564 systemd[1]: Reloading.
Nov 29 02:22:35 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:22:35 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:22:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:36.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:36.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:36 np0005539564 python3.9[130534]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:22:36 np0005539564 systemd[1]: Reloading.
Nov 29 02:22:37 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:22:37 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:22:37 np0005539564 systemd[1]: Starting ovn_controller container...
Nov 29 02:22:37 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:22:37 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c50b97c2a0df6374d1cfd51c9dbec04cf1697d808b285b846ca02fb6e71dcc3e/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:37 np0005539564 systemd[1]: Started /usr/bin/podman healthcheck run 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1.
Nov 29 02:22:37 np0005539564 podman[130576]: 2025-11-29 07:22:37.351630127 +0000 UTC m=+0.112116798 container init 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: + sudo -E kolla_set_configs
Nov 29 02:22:37 np0005539564 podman[130576]: 2025-11-29 07:22:37.375169452 +0000 UTC m=+0.135656123 container start 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 02:22:37 np0005539564 edpm-start-podman-container[130576]: ovn_controller
Nov 29 02:22:37 np0005539564 systemd[1]: Created slice User Slice of UID 0.
Nov 29 02:22:37 np0005539564 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 29 02:22:37 np0005539564 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 29 02:22:37 np0005539564 systemd[1]: Starting User Manager for UID 0...
Nov 29 02:22:37 np0005539564 edpm-start-podman-container[130575]: Creating additional drop-in dependency for "ovn_controller" (192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1)
Nov 29 02:22:37 np0005539564 podman[130598]: 2025-11-29 07:22:37.448783789 +0000 UTC m=+0.062297342 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 29 02:22:37 np0005539564 systemd[1]: 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1-63e62818cc38cc0c.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 02:22:37 np0005539564 systemd[1]: 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1-63e62818cc38cc0c.service: Failed with result 'exit-code'.
Nov 29 02:22:37 np0005539564 systemd[1]: Reloading.
Nov 29 02:22:37 np0005539564 systemd[130623]: Queued start job for default target Main User Target.
Nov 29 02:22:37 np0005539564 systemd[130623]: Created slice User Application Slice.
Nov 29 02:22:37 np0005539564 systemd[130623]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 29 02:22:37 np0005539564 systemd[130623]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 02:22:37 np0005539564 systemd[130623]: Reached target Paths.
Nov 29 02:22:37 np0005539564 systemd[130623]: Reached target Timers.
Nov 29 02:22:37 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:22:37 np0005539564 systemd[130623]: Starting D-Bus User Message Bus Socket...
Nov 29 02:22:37 np0005539564 systemd[130623]: Starting Create User's Volatile Files and Directories...
Nov 29 02:22:37 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:22:37 np0005539564 systemd[130623]: Finished Create User's Volatile Files and Directories.
Nov 29 02:22:37 np0005539564 systemd[130623]: Listening on D-Bus User Message Bus Socket.
Nov 29 02:22:37 np0005539564 systemd[130623]: Reached target Sockets.
Nov 29 02:22:37 np0005539564 systemd[130623]: Reached target Basic System.
Nov 29 02:22:37 np0005539564 systemd[130623]: Reached target Main User Target.
Nov 29 02:22:37 np0005539564 systemd[130623]: Startup finished in 123ms.
Nov 29 02:22:37 np0005539564 systemd[1]: Started User Manager for UID 0.
Nov 29 02:22:37 np0005539564 systemd[1]: Started ovn_controller container.
Nov 29 02:22:37 np0005539564 systemd[1]: Started Session c1 of User root.
Nov 29 02:22:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: INFO:__main__:Validating config file
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: INFO:__main__:Writing out command to execute
Nov 29 02:22:37 np0005539564 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: ++ cat /run_command
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: + ARGS=
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: + sudo kolla_copy_cacerts
Nov 29 02:22:37 np0005539564 systemd[1]: Started Session c2 of User root.
Nov 29 02:22:37 np0005539564 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: + [[ ! -n '' ]]
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: + . kolla_extend_start
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: + umask 0022
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 29 02:22:37 np0005539564 NetworkManager[48997]: <info>  [1764400957.9183] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 29 02:22:37 np0005539564 NetworkManager[48997]: <info>  [1764400957.9193] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:22:37 np0005539564 NetworkManager[48997]: <info>  [1764400957.9213] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 29 02:22:37 np0005539564 NetworkManager[48997]: <info>  [1764400957.9223] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 29 02:22:37 np0005539564 NetworkManager[48997]: <info>  [1764400957.9230] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 02:22:37 np0005539564 kernel: br-int: entered promiscuous mode
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 02:22:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:37Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 02:22:37 np0005539564 NetworkManager[48997]: <info>  [1764400957.9504] manager: (ovn-45d4c7-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 29 02:22:37 np0005539564 kernel: genev_sys_6081: entered promiscuous mode
Nov 29 02:22:37 np0005539564 systemd-udevd[130725]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:22:37 np0005539564 NetworkManager[48997]: <info>  [1764400957.9833] device (genev_sys_6081): carrier: link connected
Nov 29 02:22:37 np0005539564 systemd-udevd[130724]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:22:37 np0005539564 NetworkManager[48997]: <info>  [1764400957.9835] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Nov 29 02:22:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:38.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:38.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:39 np0005539564 NetworkManager[48997]: <info>  [1764400959.0176] manager: (ovn-cb98fb-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Nov 29 02:22:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:40.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:40.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:41 np0005539564 NetworkManager[48997]: <info>  [1764400961.7818] manager: (ovn-c8abfd-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Nov 29 02:22:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:42.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:42.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:22:42 np0005539564 python3.9[130861]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:22:42 np0005539564 ovs-vsctl[130862]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 29 02:22:43 np0005539564 python3.9[131014]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:22:43 np0005539564 ovs-vsctl[131016]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 29 02:22:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:44.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:44.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:45 np0005539564 python3.9[131169]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:22:45 np0005539564 ovs-vsctl[131170]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 29 02:22:45 np0005539564 systemd-logind[785]: Session 45 logged out. Waiting for processes to exit.
Nov 29 02:22:45 np0005539564 systemd[1]: session-45.scope: Deactivated successfully.
Nov 29 02:22:45 np0005539564 systemd[1]: session-45.scope: Consumed 1min 3.496s CPU time.
Nov 29 02:22:45 np0005539564 systemd-logind[785]: Removed session 45.
Nov 29 02:22:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:46.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:46.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:22:47 np0005539564 systemd[1]: Stopping User Manager for UID 0...
Nov 29 02:22:47 np0005539564 systemd[130623]: Activating special unit Exit the Session...
Nov 29 02:22:47 np0005539564 systemd[130623]: Stopped target Main User Target.
Nov 29 02:22:47 np0005539564 systemd[130623]: Stopped target Basic System.
Nov 29 02:22:47 np0005539564 systemd[130623]: Stopped target Paths.
Nov 29 02:22:47 np0005539564 systemd[130623]: Stopped target Sockets.
Nov 29 02:22:47 np0005539564 systemd[130623]: Stopped target Timers.
Nov 29 02:22:47 np0005539564 systemd[130623]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 02:22:47 np0005539564 systemd[130623]: Closed D-Bus User Message Bus Socket.
Nov 29 02:22:47 np0005539564 systemd[130623]: Stopped Create User's Volatile Files and Directories.
Nov 29 02:22:47 np0005539564 systemd[130623]: Removed slice User Application Slice.
Nov 29 02:22:47 np0005539564 systemd[130623]: Reached target Shutdown.
Nov 29 02:22:47 np0005539564 systemd[130623]: Finished Exit the Session.
Nov 29 02:22:47 np0005539564 systemd[130623]: Reached target Exit the Session.
Nov 29 02:22:47 np0005539564 systemd[1]: user@0.service: Deactivated successfully.
Nov 29 02:22:47 np0005539564 systemd[1]: Stopped User Manager for UID 0.
Nov 29 02:22:47 np0005539564 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 29 02:22:47 np0005539564 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 29 02:22:47 np0005539564 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 29 02:22:47 np0005539564 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 29 02:22:47 np0005539564 systemd[1]: Removed slice User Slice of UID 0.
Nov 29 02:22:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:48.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:48.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:49 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Nov 29 02:22:49 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 29 02:22:49 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Nov 29 02:22:49 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 29 02:22:49 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Nov 29 02:22:49 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 29 02:22:49 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Nov 29 02:22:49 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 29 02:22:49 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Nov 29 02:22:49 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:49Z|00025|memory|INFO|16256 kB peak resident set size after 11.9 seconds
Nov 29 02:22:49 np0005539564 ovn_controller[130591]: 2025-11-29T07:22:49Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Nov 29 02:22:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:50.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:50.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:51 np0005539564 systemd-logind[785]: New session 47 of user zuul.
Nov 29 02:22:51 np0005539564 systemd[1]: Started Session 47 of User zuul.
Nov 29 02:22:52 np0005539564 python3.9[131349]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:22:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:52.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:52.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:22:53 np0005539564 python3.9[131505]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:54 np0005539564 python3.9[131657]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:22:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:54.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:22:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:54.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:54 np0005539564 python3.9[131809]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:55 np0005539564 python3.9[131961]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:56 np0005539564 python3.9[132114]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:56.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:56.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:57 np0005539564 python3.9[132264]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:22:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:22:58 np0005539564 python3.9[132416]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 29 02:22:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:58.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:22:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:58.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:00 np0005539564 python3.9[132567]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:00.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:00.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:00 np0005539564 python3.9[132688]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400979.3469856-224-59245943261947/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:01 np0005539564 python3.9[132838]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:02 np0005539564 python3.9[132959]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400981.0987732-269-228930637230382/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:23:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:02.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:23:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:02.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:23:03 np0005539564 python3.9[133111]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:23:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:04.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:04.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:04 np0005539564 python3.9[133195]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:23:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:06.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:06.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:23:08 np0005539564 podman[133320]: 2025-11-29 07:23:08.507865419 +0000 UTC m=+0.173168145 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 02:23:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:08.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:08.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:08 np0005539564 python3.9[133367]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:23:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:10.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:10.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:10 np0005539564 python3.9[133529]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:11 np0005539564 python3.9[133650]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400990.0181067-380-162915649891175/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:11 np0005539564 python3.9[133802]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:12 np0005539564 python3.9[133923]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400991.3314667-380-84500803479401/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:12.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:12.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:23:14 np0005539564 python3.9[134073]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:14.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:14.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:14 np0005539564 python3.9[134194]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400993.536785-512-83492506736611/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:15 np0005539564 python3.9[134344]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:15 np0005539564 python3.9[134465]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400994.8055332-512-8614723646975/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:16.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:16.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:16 np0005539564 python3.9[134729]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:23:17 np0005539564 python3.9[134900]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:23:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:23:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:23:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:23:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:23:18 np0005539564 python3.9[135052]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:18.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:18.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:18 np0005539564 python3.9[135130]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:19 np0005539564 python3.9[135282]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:19 np0005539564 python3.9[135360]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:23:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:20.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:23:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:20.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:20 np0005539564 python3.9[135512]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:21 np0005539564 python3.9[135664]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:22 np0005539564 python3.9[135742]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:22.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:22.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:23:22 np0005539564 python3.9[135894]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:23 np0005539564 python3.9[136022]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:23 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:23:23 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:23:24 np0005539564 python3.9[136174]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:23:24 np0005539564 systemd[1]: Reloading.
Nov 29 02:23:24 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:23:24 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:23:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:24.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:24.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:25 np0005539564 python3.9[136363]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:25 np0005539564 python3.9[136441]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:26.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:23:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:26.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:23:27 np0005539564 python3.9[136593]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:27 np0005539564 python3.9[136671]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:23:28 np0005539564 python3.9[136823]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:23:28 np0005539564 systemd[1]: Reloading.
Nov 29 02:23:28 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:23:28 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:23:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:28.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:23:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:28.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:23:28 np0005539564 systemd[1]: Starting Create netns directory...
Nov 29 02:23:28 np0005539564 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 02:23:28 np0005539564 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 02:23:28 np0005539564 systemd[1]: Finished Create netns directory.
Nov 29 02:23:29 np0005539564 python3.9[137016]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:30.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:30.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:30 np0005539564 python3.9[137168]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:31 np0005539564 python3.9[137291]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401010.216133-965-15346108757428/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:32 np0005539564 python3.9[137443]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:23:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:32.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:23:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:32.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:23:33 np0005539564 python3.9[137595]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:33 np0005539564 python3.9[137718]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401012.7126234-1040-171619138170240/.source.json _original_basename=.ufq5oexx follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:34.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:34.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:34 np0005539564 python3.9[137870]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:36.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:36.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:37 np0005539564 python3.9[138297]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 29 02:23:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:23:38 np0005539564 python3.9[138449]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 02:23:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:38.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:38.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:39 np0005539564 podman[138573]: 2025-11-29 07:23:39.11413908 +0000 UTC m=+0.107572585 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:23:39 np0005539564 python3.9[138621]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 02:23:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:40.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:40.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:41 np0005539564 python3[138806]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 02:23:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:42.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:42.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:23:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:44.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:44.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:46 np0005539564 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:23:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:46.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:48.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:23:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:50.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:50.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:23:52.503649) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401032503681, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2333, "num_deletes": 252, "total_data_size": 6103865, "memory_usage": 6178032, "flush_reason": "Manual Compaction"}
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401032551526, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 3966002, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10088, "largest_seqno": 12416, "table_properties": {"data_size": 3956367, "index_size": 6129, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19066, "raw_average_key_size": 20, "raw_value_size": 3937177, "raw_average_value_size": 4153, "num_data_blocks": 274, "num_entries": 948, "num_filter_entries": 948, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400757, "oldest_key_time": 1764400757, "file_creation_time": 1764401032, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 47926 microseconds, and 11249 cpu microseconds.
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:23:52.551574) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 3966002 bytes OK
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:23:52.551596) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:23:52.553849) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:23:52.553867) EVENT_LOG_v1 {"time_micros": 1764401032553861, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:23:52.553885) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 6093678, prev total WAL file size 6139821, number of live WAL files 2.
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:23:52.555300) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(3873KB)], [21(7347KB)]
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401032555371, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 11490109, "oldest_snapshot_seqno": -1}
Nov 29 02:23:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:52.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 4108 keys, 9260677 bytes, temperature: kUnknown
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401032659988, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 9260677, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9229336, "index_size": 19967, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10309, "raw_key_size": 100058, "raw_average_key_size": 24, "raw_value_size": 9151208, "raw_average_value_size": 2227, "num_data_blocks": 863, "num_entries": 4108, "num_filter_entries": 4108, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764401032, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:23:52.660196) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 9260677 bytes
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:23:52.661848) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 109.8 rd, 88.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 7.2 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(5.2) write-amplify(2.3) OK, records in: 4633, records dropped: 525 output_compression: NoCompression
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:23:52.661867) EVENT_LOG_v1 {"time_micros": 1764401032661858, "job": 10, "event": "compaction_finished", "compaction_time_micros": 104666, "compaction_time_cpu_micros": 27558, "output_level": 6, "num_output_files": 1, "total_output_size": 9260677, "num_input_records": 4633, "num_output_records": 4108, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401032662564, "job": 10, "event": "table_file_deletion", "file_number": 23}
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401032663685, "job": 10, "event": "table_file_deletion", "file_number": 21}
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:23:52.555234) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:23:52.663761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:23:52.663768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:23:52.663770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:23:52.663772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:23:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:23:52.663774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:23:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:52.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:23:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:23:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:54.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:23:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:54.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:55 np0005539564 podman[138818]: 2025-11-29 07:23:55.319914717 +0000 UTC m=+14.125360019 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:23:55 np0005539564 podman[138955]: 2025-11-29 07:23:55.468055703 +0000 UTC m=+0.047827511 container create aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:23:55 np0005539564 podman[138955]: 2025-11-29 07:23:55.441757468 +0000 UTC m=+0.021529316 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:23:55 np0005539564 python3[138806]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:23:56 np0005539564 python3.9[139146]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:23:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:56.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:56.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:57 np0005539564 python3.9[139300]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:57 np0005539564 python3.9[139376]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:23:58 np0005539564 python3.9[139528]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401037.7543008-1304-90917654368415/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:58.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:23:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:58.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:23:59 np0005539564 python3.9[139605]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:23:59 np0005539564 systemd[1]: Reloading.
Nov 29 02:23:59 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:23:59 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:23:59 np0005539564 python3.9[139717]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:23:59 np0005539564 systemd[1]: Reloading.
Nov 29 02:24:00 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:24:00 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:24:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:00.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:00.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:01 np0005539564 systemd[1]: Starting ovn_metadata_agent container...
Nov 29 02:24:01 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:24:01 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63d05fd2ac91537f8fd9cee6ad6656a10cda2df42a9234db3c48e2f47a6298fc/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 29 02:24:01 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63d05fd2ac91537f8fd9cee6ad6656a10cda2df42a9234db3c48e2f47a6298fc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:24:01 np0005539564 systemd[1]: Started /usr/bin/podman healthcheck run aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938.
Nov 29 02:24:01 np0005539564 podman[139759]: 2025-11-29 07:24:01.585019914 +0000 UTC m=+0.505189480 container init aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:24:01 np0005539564 ovn_metadata_agent[139775]: + sudo -E kolla_set_configs
Nov 29 02:24:01 np0005539564 podman[139759]: 2025-11-29 07:24:01.620071418 +0000 UTC m=+0.540240874 container start aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:24:01 np0005539564 ovn_metadata_agent[139775]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:24:01 np0005539564 ovn_metadata_agent[139775]: INFO:__main__:Validating config file
Nov 29 02:24:01 np0005539564 ovn_metadata_agent[139775]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:24:01 np0005539564 ovn_metadata_agent[139775]: INFO:__main__:Copying service configuration files
Nov 29 02:24:01 np0005539564 ovn_metadata_agent[139775]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 29 02:24:01 np0005539564 ovn_metadata_agent[139775]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 29 02:24:01 np0005539564 ovn_metadata_agent[139775]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 29 02:24:01 np0005539564 ovn_metadata_agent[139775]: INFO:__main__:Writing out command to execute
Nov 29 02:24:01 np0005539564 ovn_metadata_agent[139775]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 29 02:24:01 np0005539564 ovn_metadata_agent[139775]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 29 02:24:01 np0005539564 ovn_metadata_agent[139775]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 29 02:24:01 np0005539564 ovn_metadata_agent[139775]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 29 02:24:01 np0005539564 ovn_metadata_agent[139775]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 29 02:24:01 np0005539564 ovn_metadata_agent[139775]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 29 02:24:01 np0005539564 ovn_metadata_agent[139775]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 29 02:24:01 np0005539564 ovn_metadata_agent[139775]: ++ cat /run_command
Nov 29 02:24:01 np0005539564 ovn_metadata_agent[139775]: + CMD=neutron-ovn-metadata-agent
Nov 29 02:24:01 np0005539564 ovn_metadata_agent[139775]: + ARGS=
Nov 29 02:24:01 np0005539564 ovn_metadata_agent[139775]: + sudo kolla_copy_cacerts
Nov 29 02:24:01 np0005539564 ovn_metadata_agent[139775]: + [[ ! -n '' ]]
Nov 29 02:24:01 np0005539564 ovn_metadata_agent[139775]: + . kolla_extend_start
Nov 29 02:24:01 np0005539564 ovn_metadata_agent[139775]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 29 02:24:01 np0005539564 ovn_metadata_agent[139775]: Running command: 'neutron-ovn-metadata-agent'
Nov 29 02:24:01 np0005539564 ovn_metadata_agent[139775]: + umask 0022
Nov 29 02:24:01 np0005539564 ovn_metadata_agent[139775]: + exec neutron-ovn-metadata-agent
Nov 29 02:24:02 np0005539564 edpm-start-podman-container[139759]: ovn_metadata_agent
Nov 29 02:24:02 np0005539564 podman[139782]: 2025-11-29 07:24:02.227234358 +0000 UTC m=+0.597961042 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 02:24:02 np0005539564 edpm-start-podman-container[139758]: Creating additional drop-in dependency for "ovn_metadata_agent" (aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938)
Nov 29 02:24:02 np0005539564 systemd[1]: Reloading.
Nov 29 02:24:02 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:24:02 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:24:02 np0005539564 systemd[1]: Started ovn_metadata_agent container.
Nov 29 02:24:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:02.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:02.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.624 139780 INFO neutron.common.config [-] Logging enabled!#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.625 139780 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.625 139780 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.625 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.625 139780 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.625 139780 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.625 139780 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.626 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.626 139780 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.626 139780 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.626 139780 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.626 139780 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.626 139780 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.626 139780 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.626 139780 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.626 139780 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.626 139780 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.627 139780 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.627 139780 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.627 139780 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.627 139780 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.627 139780 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.627 139780 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.627 139780 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.627 139780 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.627 139780 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.628 139780 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.628 139780 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.628 139780 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.628 139780 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.628 139780 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.628 139780 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.628 139780 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.628 139780 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.628 139780 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.628 139780 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.629 139780 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.629 139780 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.629 139780 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.629 139780 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.629 139780 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.629 139780 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.629 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.629 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.629 139780 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.630 139780 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.630 139780 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.630 139780 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.630 139780 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.630 139780 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.630 139780 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.630 139780 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.630 139780 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.630 139780 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.631 139780 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.631 139780 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.631 139780 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.631 139780 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.631 139780 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.631 139780 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.631 139780 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.631 139780 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.631 139780 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.632 139780 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.632 139780 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.632 139780 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.632 139780 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.632 139780 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.632 139780 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.632 139780 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.632 139780 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.632 139780 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.633 139780 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.633 139780 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.633 139780 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.633 139780 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.633 139780 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.633 139780 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.633 139780 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.633 139780 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.633 139780 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.634 139780 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.634 139780 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.634 139780 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.634 139780 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.634 139780 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.634 139780 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.634 139780 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.634 139780 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.634 139780 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.635 139780 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.635 139780 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.635 139780 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.635 139780 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.635 139780 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.635 139780 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.635 139780 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.635 139780 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.636 139780 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.636 139780 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.636 139780 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.636 139780 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.636 139780 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.636 139780 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.636 139780 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.636 139780 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.636 139780 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.637 139780 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.637 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.637 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.637 139780 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.637 139780 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.637 139780 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.637 139780 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.637 139780 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.638 139780 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.638 139780 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.638 139780 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.638 139780 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.638 139780 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.638 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.639 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.639 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.639 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.639 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.639 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.639 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.639 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.639 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.639 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.640 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.640 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.640 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.640 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.640 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.640 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.640 139780 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.640 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.640 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.641 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.641 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.641 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.641 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.641 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.641 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.641 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.641 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.641 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.642 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.642 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.642 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.642 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.642 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.642 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.642 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.642 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.642 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.643 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.643 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.643 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.643 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.643 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.643 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.643 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.643 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.643 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.644 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.644 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.644 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.644 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.644 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.644 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.644 139780 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.644 139780 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.645 139780 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.645 139780 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.645 139780 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.645 139780 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.645 139780 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.645 139780 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.645 139780 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.645 139780 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.646 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.646 139780 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.646 139780 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.646 139780 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.646 139780 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.646 139780 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.646 139780 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.646 139780 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.647 139780 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.647 139780 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.647 139780 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.647 139780 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.647 139780 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.647 139780 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.647 139780 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.648 139780 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.648 139780 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.648 139780 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.648 139780 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.648 139780 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.648 139780 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.648 139780 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.648 139780 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.649 139780 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.649 139780 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.649 139780 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.649 139780 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.649 139780 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.649 139780 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.649 139780 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.649 139780 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.650 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.650 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.650 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.650 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.650 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.650 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.650 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.650 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.650 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.651 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.651 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.651 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.651 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.651 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.651 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.651 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.652 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.652 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.652 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.652 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.652 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.652 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.652 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.652 139780 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.652 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.653 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.653 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.653 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.653 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.653 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.653 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.653 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.653 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.653 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.654 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.654 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.654 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.654 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.654 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.654 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.654 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.654 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.654 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.655 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.655 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.655 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.655 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.655 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.655 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.655 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.655 139780 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.655 139780 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.656 139780 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.656 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.656 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.656 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.656 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.656 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.656 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.656 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.656 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.657 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.657 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.657 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.657 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.657 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.657 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.657 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.657 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.657 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.658 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.658 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.658 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.658 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.658 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.658 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.658 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.658 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.658 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.659 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.659 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.659 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.659 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.659 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.659 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.659 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.659 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.659 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.659 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.660 139780 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.660 139780 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.668 139780 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.668 139780 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.669 139780 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.669 139780 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.669 139780 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.682 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 011fdddc-8681-4ece-b276-7e821dffaec6 (UUID: 011fdddc-8681-4ece-b276-7e821dffaec6) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.709 139780 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.709 139780 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.709 139780 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.709 139780 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.712 139780 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.718 139780 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.722 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '011fdddc-8681-4ece-b276-7e821dffaec6'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], external_ids={}, name=011fdddc-8681-4ece-b276-7e821dffaec6, nb_cfg_timestamp=1764400965949, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.723 139780 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f3bafe74f70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.724 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.724 139780 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.724 139780 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.724 139780 INFO oslo_service.service [-] Starting 1 workers#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.728 139780 DEBUG oslo_service.service [-] Started child 139890 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.732 139780 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpo3_y6aln/privsep.sock']#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.734 139890 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-33457631'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Nov 29 02:24:03 np0005539564 systemd[1]: session-47.scope: Deactivated successfully.
Nov 29 02:24:03 np0005539564 systemd[1]: session-47.scope: Consumed 58.963s CPU time.
Nov 29 02:24:03 np0005539564 systemd-logind[785]: Session 47 logged out. Waiting for processes to exit.
Nov 29 02:24:03 np0005539564 systemd-logind[785]: Removed session 47.
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.776 139890 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.777 139890 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.777 139890 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.783 139890 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.790 139890 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Nov 29 02:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:03.797 139890 INFO eventlet.wsgi.server [-] (139890) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Nov 29 02:24:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:24:04 np0005539564 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 29 02:24:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:04.447 139780 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 29 02:24:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:04.447 139780 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpo3_y6aln/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 29 02:24:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:04.307 139895 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 29 02:24:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:04.311 139895 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 29 02:24:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:04.316 139895 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Nov 29 02:24:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:04.316 139895 INFO oslo.privsep.daemon [-] privsep daemon running as pid 139895#033[00m
Nov 29 02:24:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:04.450 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[f58107ec-37bc-42d2-bd5a-bfd4d6f671e5]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:24:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:04.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:04.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:04.961 139895 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:24:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:04.961 139895 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:24:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:04.961 139895 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.508 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[c028c461-1a05-4dbf-9fe7-1a926bb36eb2]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.510 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, column=external_ids, values=({'neutron:ovn-metadata-id': '91db6c27-12c2-51b3-bad9-f3d2956bfbee'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.518 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.525 139780 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.525 139780 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.525 139780 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.525 139780 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.525 139780 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.525 139780 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.525 139780 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.526 139780 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.526 139780 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.526 139780 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.526 139780 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.526 139780 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.526 139780 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.527 139780 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.527 139780 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.527 139780 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.527 139780 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.527 139780 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.527 139780 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.527 139780 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.527 139780 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.528 139780 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.528 139780 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.528 139780 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.528 139780 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.528 139780 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.528 139780 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.528 139780 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.528 139780 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.529 139780 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.529 139780 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.529 139780 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.529 139780 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.529 139780 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.529 139780 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.529 139780 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.529 139780 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.530 139780 DEBUG oslo_service.service [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.530 139780 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.530 139780 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.530 139780 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.530 139780 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.530 139780 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.530 139780 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.530 139780 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.531 139780 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.531 139780 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.531 139780 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.531 139780 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.531 139780 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.531 139780 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.531 139780 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.531 139780 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.531 139780 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.532 139780 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.532 139780 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.532 139780 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.532 139780 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.532 139780 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.532 139780 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.532 139780 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.532 139780 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.532 139780 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.532 139780 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.533 139780 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.533 139780 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.533 139780 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.533 139780 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.533 139780 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.533 139780 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.533 139780 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.533 139780 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.533 139780 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.534 139780 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.534 139780 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.534 139780 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.534 139780 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.534 139780 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.534 139780 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.534 139780 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.534 139780 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.534 139780 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.535 139780 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.535 139780 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.535 139780 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.535 139780 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.535 139780 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.535 139780 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.535 139780 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.535 139780 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.535 139780 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.535 139780 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.536 139780 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.536 139780 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.536 139780 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.536 139780 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.536 139780 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.536 139780 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.536 139780 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.536 139780 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.536 139780 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.536 139780 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.537 139780 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.537 139780 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.537 139780 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.537 139780 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.537 139780 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.537 139780 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.537 139780 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.537 139780 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.538 139780 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.538 139780 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.538 139780 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.538 139780 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.538 139780 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.538 139780 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.539 139780 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.539 139780 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.539 139780 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.539 139780 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.539 139780 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.539 139780 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.539 139780 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.539 139780 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.540 139780 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.540 139780 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.540 139780 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.540 139780 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.540 139780 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.540 139780 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.540 139780 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.540 139780 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.540 139780 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.541 139780 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.541 139780 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.541 139780 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.541 139780 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.541 139780 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.541 139780 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.541 139780 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.541 139780 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.541 139780 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.541 139780 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.542 139780 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.542 139780 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.542 139780 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.542 139780 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.542 139780 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.542 139780 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.542 139780 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.542 139780 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.542 139780 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.543 139780 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.543 139780 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.543 139780 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.543 139780 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.543 139780 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.543 139780 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.543 139780 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.543 139780 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.543 139780 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.543 139780 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.543 139780 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.544 139780 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.544 139780 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.544 139780 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.544 139780 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.544 139780 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.544 139780 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.544 139780 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.544 139780 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.545 139780 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.545 139780 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.545 139780 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.545 139780 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.545 139780 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.545 139780 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.545 139780 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.546 139780 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.546 139780 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.546 139780 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.546 139780 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.546 139780 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.546 139780 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.546 139780 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.547 139780 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.547 139780 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.547 139780 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.547 139780 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.547 139780 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.547 139780 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.547 139780 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.547 139780 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.548 139780 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.548 139780 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.548 139780 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.548 139780 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.548 139780 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.548 139780 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.549 139780 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.549 139780 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.549 139780 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.549 139780 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.549 139780 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.549 139780 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.549 139780 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.549 139780 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.549 139780 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.549 139780 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.550 139780 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.550 139780 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.550 139780 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.550 139780 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.550 139780 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.550 139780 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.550 139780 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.550 139780 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.550 139780 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.551 139780 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.551 139780 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.551 139780 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.551 139780 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.551 139780 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.551 139780 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.551 139780 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.551 139780 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.551 139780 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.552 139780 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.552 139780 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.552 139780 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.552 139780 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.552 139780 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.552 139780 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.552 139780 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.552 139780 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.553 139780 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.553 139780 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.553 139780 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.553 139780 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.553 139780 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.553 139780 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.553 139780 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.554 139780 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.554 139780 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.554 139780 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.554 139780 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.554 139780 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.554 139780 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.554 139780 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.554 139780 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.555 139780 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.555 139780 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.555 139780 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.555 139780 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.555 139780 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.555 139780 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.555 139780 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.555 139780 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.556 139780 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.556 139780 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.556 139780 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.556 139780 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.556 139780 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.556 139780 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.556 139780 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.556 139780 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.557 139780 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.557 139780 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.557 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.557 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.557 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.557 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.557 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.557 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.558 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.558 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.558 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.558 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.558 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.558 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.558 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.558 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.559 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.559 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.559 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.559 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.559 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.559 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.559 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.559 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.560 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.560 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.560 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.560 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.560 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.560 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.560 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.561 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.561 139780 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.561 139780 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.561 139780 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.561 139780 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.561 139780 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:24:05.561 139780 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
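The `log_opt_values` dump above ends at the asterisk separator; every option line follows the same `group.option = value` shape between the `[-]` context marker and the `log_opt_values` caller tag. A minimal sketch (the function name and regex are my own, not part of oslo.config) for pulling those pairs out of a journal excerpt like this one:

```python
import re

# Matches journald lines such as:
#   ... DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values ...
# Captures the "group.option" name and the raw value that sits between
# "[-]" and the "log_opt_values" caller tag. Empty values (e.g. ovn.ovn_nb_ca_cert)
# come back as "".
OPT_RE = re.compile(r"\[-\]\s+(\S+)\s+=\s+(.*?)\s+log_opt_values\b")

def parse_opt_line(line):
    """Return (option_name, raw_value), or None if the line is not an option dump."""
    m = OPT_RE.search(line)
    return (m.group(1), m.group(2)) if m else None
```

Fed the journal text line by line, this yields a flat name/value listing that is easier to diff between agent restarts than the raw DEBUG stream.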
Nov 29 02:24:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:06.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:06.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:08.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:08.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
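Each radosgw request above closes with a `beast:` access line in a fixed layout (client IP, user, timestamp, request line, status, byte count, latency). A small sketch (field names are my own choice) for extracting the fields that matter when eyeballing health-check traffic like the HEAD probes here:

```python
import re

# Matches radosgw beast access lines such as:
#   beast: 0x7f47...: 192.168.122.102 - anonymous [29/Nov/2025:07:24:08.674 +0000]
#   "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
BEAST_RE = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
    r'.*latency=(?P<latency>[\d.]+)s'
)

def parse_beast(line):
    """Return a dict of access-log fields, or None for non-beast lines."""
    m = BEAST_RE.search(line)
    if not m:
        return None
    rec = m.groupdict()
    rec["status"] = int(rec["status"])
    rec["bytes"] = int(rec["bytes"])
    rec["latency"] = float(rec["latency"])
    return rec
```

Filtering on `rec["ip"]` and `rec["latency"]` quickly shows that the probes from 192.168.122.100 and .102 repeat every two seconds with sub-millisecond latency.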
Nov 29 02:24:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:24:09 np0005539564 podman[139900]: 2025-11-29 07:24:09.571138835 +0000 UTC m=+0.117189216 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 02:24:09 np0005539564 systemd-logind[785]: New session 48 of user zuul.
Nov 29 02:24:09 np0005539564 systemd[1]: Started Session 48 of User zuul.
Nov 29 02:24:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:10.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:10 np0005539564 python3.9[140080]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:24:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:10.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:12 np0005539564 python3.9[140236]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:12.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:12.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:13 np0005539564 python3.9[140401]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:24:13 np0005539564 systemd[1]: Reloading.
Nov 29 02:24:13 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:24:13 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:24:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:24:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:14.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:14.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:14 np0005539564 python3.9[140586]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:24:15 np0005539564 network[140603]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:24:15 np0005539564 network[140604]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:24:15 np0005539564 network[140605]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:24:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:16.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:16.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:18.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:18.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:24:20 np0005539564 python3.9[140867]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:24:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:20.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:20.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:21 np0005539564 python3.9[141020]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:24:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:22.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:22.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:23 np0005539564 python3.9[141173]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:24:23 np0005539564 python3.9[141440]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:24:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:24:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:24.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:24 np0005539564 python3.9[141610]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:24:24 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:24:24 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:24:24 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:24:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:24:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:24.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:24:25 np0005539564 python3.9[141763]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:24:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:26.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:26.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:28.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:28.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:24:30 np0005539564 python3.9[141916]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:24:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:24:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:30.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:24:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:24:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:30.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:24:31 np0005539564 python3.9[142072]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:32 np0005539564 python3.9[142224]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:32 np0005539564 podman[142325]: 2025-11-29 07:24:32.52517292 +0000 UTC m=+0.075514684 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:24:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:32.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:32 np0005539564 python3.9[142395]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:32.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:33 np0005539564 python3.9[142547]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:34 np0005539564 python3.9[142699]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:34.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:24:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:24:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:34.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:24:34 np0005539564 python3.9[142901]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:35 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:24:35 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:24:35 np0005539564 python3.9[143053]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:36 np0005539564 python3.9[143207]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:36.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:36.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:37 np0005539564 python3.9[143359]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:37 np0005539564 python3.9[143511]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:38 np0005539564 python3.9[143663]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:38.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:38.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:39 np0005539564 python3.9[143815]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:39 np0005539564 podman[143967]: 2025-11-29 07:24:39.777183598 +0000 UTC m=+0.120809364 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 02:24:39 np0005539564 python3.9[143968]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:24:40 np0005539564 python3.9[144147]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:40.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:40.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:42.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:42.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:44.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:44.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:44 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:24:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:46.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:46.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:24:47 np0005539564 python3.9[144299]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:48 np0005539564 python3.9[144451]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:24:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:48.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:48.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:49 np0005539564 python3.9[144603]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:24:49 np0005539564 systemd[1]: Reloading.
Nov 29 02:24:49 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:24:49 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:24:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:50.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:50 np0005539564 python3.9[144791]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:50.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:51 np0005539564 python3.9[144944]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:52 np0005539564 python3.9[145097]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:24:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:52.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:52.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:52 np0005539564 python3.9[145250]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:53 np0005539564 python3.9[145403]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:54 np0005539564 python3.9[145556]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:54.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:54.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:55 np0005539564 python3.9[145709]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:56 np0005539564 python3.9[145862]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 29 02:24:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:24:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:56.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:24:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:56.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
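The recurring `beast:` lines are radosgw's per-request access log (here, load-balancer `HEAD /` health probes from 192.168.122.100/.102 every ~2 s). A minimal parsing sketch for this line shape; the exact beast log format can vary between Ceph releases, so treat the regex as an assumption, not the canonical format:

```python
import re

# Fields: request pointer, client IP, user, timestamp, request line,
# HTTP status, body bytes, and the trailing latency in seconds.
BEAST = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<when>[^\]]+)\] "(?P<request>[^"]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)

# Copied verbatim from the log above.
line = ('beast: 0x7f475169f6f0: 192.168.122.100 - anonymous '
        '[29/Nov/2025:07:24:56.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000027s')

fields = BEAST.search(line).groupdict()
print(fields["ip"], fields["status"], fields["latency"])
```

Note the same `req=0x7f475169f6f0` pointer recurs across requests: beast reuses the coroutine stack address, so it identifies a handler slot, not a unique request.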
Nov 29 02:24:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:24:57 np0005539564 python3.9[146015]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:24:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:58.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:58 np0005539564 python3.9[146173]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 02:24:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:24:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:58.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:00 np0005539564 python3.9[146333]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:25:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:00.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:00.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:01 np0005539564 python3.9[146417]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
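One detail worth noticing in the dnf invocation above: the first four package names are logged with trailing spaces (`'libvirt '`, `'libvirt-admin '`, …). dnf tolerates this, but it usually points at stray whitespace in the playbook's package list. A quick sketch that flags such entries (the list is copied exactly as recorded):

```python
# Package list exactly as recorded in the dnf log entry; the trailing
# spaces on the first four names are present in the log itself.
pkgs = ['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ',
        'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm',
        'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram']

# Entries that differ from their stripped form carry stray whitespace.
suspect = [p for p in pkgs if p != p.strip()]
print(suspect)
```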
Nov 29 02:25:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:25:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:02.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:02.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:03 np0005539564 podman[146431]: 2025-11-29 07:25:03.514545591 +0000 UTC m=+0.065122855 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 29 02:25:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:25:03.670 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:25:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:25:03.671 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:25:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:25:03.671 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:25:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:25:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:04.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:25:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:04.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:06.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:06.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:25:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:08.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:08.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:10 np0005539564 podman[146519]: 2025-11-29 07:25:10.57577466 +0000 UTC m=+0.130306691 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller)
Nov 29 02:25:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:10.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:10.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:25:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:12.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:12.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:14.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:14.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:25:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:16.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:25:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:16.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:25:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:25:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:18.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:25:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:18.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:20.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:20.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:25:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:22.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:25:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:22.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:25:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:24.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:24.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:26.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:26.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:25:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:28.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:28.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:30.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:30.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:25:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:32.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:32.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:34 np0005539564 podman[146662]: 2025-11-29 07:25:34.508784541 +0000 UTC m=+0.063009129 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:25:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:34.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:34.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:36.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:36.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:37 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:25:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:25:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:38.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:38.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:40.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:40.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:41 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:25:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).paxos(paxos updating c 754..1352) lease_timeout -- calling new election
Nov 29 02:25:41 np0005539564 ceph-mon[81769]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Nov 29 02:25:41 np0005539564 ceph-mon[81769]: paxos.2).electionLogic(24) init, last seen epoch 24
Nov 29 02:25:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:25:41 np0005539564 podman[146813]: 2025-11-29 07:25:41.56886919 +0000 UTC m=+0.115415357 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 02:25:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:25:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:25:42 np0005539564 kernel: SELinux:  Converting 2769 SID table entries...
Nov 29 02:25:42 np0005539564 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:25:42 np0005539564 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:25:42 np0005539564 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:25:42 np0005539564 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:25:42 np0005539564 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:25:42 np0005539564 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:25:42 np0005539564 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:25:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:42.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:43.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:25:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:44.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:25:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:45.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:25:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:25:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:46.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:25:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:25:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:47.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:25:47 np0005539564 ceph-mon[81769]: mon.compute-1 calling monitor election
Nov 29 02:25:47 np0005539564 ceph-mon[81769]: mon.compute-0 calling monitor election
Nov 29 02:25:47 np0005539564 ceph-mon[81769]: mon.compute-2 calling monitor election
Nov 29 02:25:47 np0005539564 ceph-mon[81769]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:25:47 np0005539564 ceph-mon[81769]: overall HEALTH_OK
Nov 29 02:25:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:25:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:48.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:49.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:25:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:50.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:25:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:25:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:51.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:25:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:52.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:53 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:25:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:53.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:25:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:54.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:55.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:55 np0005539564 kernel: SELinux:  Converting 2769 SID table entries...
Nov 29 02:25:55 np0005539564 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:25:55 np0005539564 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:25:55 np0005539564 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:25:55 np0005539564 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:25:55 np0005539564 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:25:55 np0005539564 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:25:55 np0005539564 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:25:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:56.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:57 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:25:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:57.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:57 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:25:57 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:25:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:25:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:58.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:25:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:59.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:00.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:01.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:02.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:03.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:26:03.672 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:26:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:26:03.674 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:26:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:26:03.674 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:26:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:26:04 np0005539564 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 29 02:26:04 np0005539564 podman[146876]: 2025-11-29 07:26:04.875013731 +0000 UTC m=+0.083027791 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 02:26:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:04.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:05.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:05 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:26:05 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:26:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:06.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:07.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:26:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:08.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:26:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:09.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:26:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:10.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:26:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:11.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:26:12 np0005539564 podman[147870]: 2025-11-29 07:26:12.519414786 +0000 UTC m=+0.079934005 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 29 02:26:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:12.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:13.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:14.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:15.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:26:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:16.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:17.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:18.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:19.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:26:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:26:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:20.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:26:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:21.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:22.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:23.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:24.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:25.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:26:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:26.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:27.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:28.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:29.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:26:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:30.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:26:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:31.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:26:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:32.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:33.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:34.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:35.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:26:35 np0005539564 podman[161232]: 2025-11-29 07:26:35.48297803 +0000 UTC m=+0.045148565 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 29 02:26:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:36.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:37.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:38.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:39.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:26:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:26:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:40.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:26:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:41.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:42 np0005539564 podman[163755]: 2025-11-29 07:26:42.859360246 +0000 UTC m=+0.123728023 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 29 02:26:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:42.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:43.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:44.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:45.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:26:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:46.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:47.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:48.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:49.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:26:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:50.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:51.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:53.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:53.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:54 np0005539564 kernel: SELinux:  Converting 2770 SID table entries...
Nov 29 02:26:54 np0005539564 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:26:54 np0005539564 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:26:54 np0005539564 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:26:54 np0005539564 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:26:54 np0005539564 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:26:54 np0005539564 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:26:54 np0005539564 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:26:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:55.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:55.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:26:56 np0005539564 dbus-broker-launch[757]: Noticed file-system modification, trigger reload.
Nov 29 02:26:56 np0005539564 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 29 02:26:56 np0005539564 dbus-broker-launch[757]: Noticed file-system modification, trigger reload.
Nov 29 02:26:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:57.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:57.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:59.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:26:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:59.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:27:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:01.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:01.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:03.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:03.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:27:03.674 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:27:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:27:03.676 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:27:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:27:03.676 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:27:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:05.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:05.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:27:05 np0005539564 podman[163993]: 2025-11-29 07:27:05.901292067 +0000 UTC m=+0.110571256 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 02:27:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:07.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:07.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:07 np0005539564 podman[164042]: 2025-11-29 07:27:07.268157145 +0000 UTC m=+1.302562119 container exec 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 02:27:07 np0005539564 podman[164042]: 2025-11-29 07:27:07.377464536 +0000 UTC m=+1.411869520 container exec_died 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 02:27:07 np0005539564 ceph-mgr[82125]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2945860420
Nov 29 02:27:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:09.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:09.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:27:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:11.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:11.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:12 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:27:12 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:27:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:13.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:13.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:13 np0005539564 podman[164378]: 2025-11-29 07:27:13.25825222 +0000 UTC m=+0.086030145 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller)
Nov 29 02:27:13 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:27:13 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:27:14 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:27:14 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:27:14 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:27:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:15.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:15.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:27:15.520026) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401235520061, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1747, "num_deletes": 252, "total_data_size": 4259212, "memory_usage": 4324008, "flush_reason": "Manual Compaction"}
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401235612611, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 2788969, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12421, "largest_seqno": 14163, "table_properties": {"data_size": 2781720, "index_size": 4256, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 13972, "raw_average_key_size": 18, "raw_value_size": 2767247, "raw_average_value_size": 3679, "num_data_blocks": 192, "num_entries": 752, "num_filter_entries": 752, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401032, "oldest_key_time": 1764401032, "file_creation_time": 1764401235, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 93947 microseconds, and 6557 cpu microseconds.
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:27:15.612747) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 2788969 bytes OK
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:27:15.614047) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:27:15.615453) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:27:15.615466) EVENT_LOG_v1 {"time_micros": 1764401235615462, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:27:15.615480) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 4251399, prev total WAL file size 4251399, number of live WAL files 2.
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:27:15.616598) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323533' seq:0, type:0; will stop at (end)
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(2723KB)], [24(9043KB)]
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401235616679, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 12049646, "oldest_snapshot_seqno": -1}
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 4338 keys, 11506922 bytes, temperature: kUnknown
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401235789595, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 11506922, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11471952, "index_size": 23020, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 106569, "raw_average_key_size": 24, "raw_value_size": 11387619, "raw_average_value_size": 2625, "num_data_blocks": 979, "num_entries": 4338, "num_filter_entries": 4338, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764401235, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:27:15.789899) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 11506922 bytes
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:27:15.834031) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 69.6 rd, 66.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 8.8 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(8.4) write-amplify(4.1) OK, records in: 4860, records dropped: 522 output_compression: NoCompression
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:27:15.834084) EVENT_LOG_v1 {"time_micros": 1764401235834062, "job": 12, "event": "compaction_finished", "compaction_time_micros": 173045, "compaction_time_cpu_micros": 33787, "output_level": 6, "num_output_files": 1, "total_output_size": 11506922, "num_input_records": 4860, "num_output_records": 4338, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401235834844, "job": 12, "event": "table_file_deletion", "file_number": 26}
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401235836965, "job": 12, "event": "table_file_deletion", "file_number": 24}
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:27:15.616448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:27:15.837046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:27:15.837053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:27:15.837056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:27:15.837058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:27:15 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:27:15.837061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:27:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:17.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:17.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:18 np0005539564 systemd[1]: Stopping OpenSSH server daemon...
Nov 29 02:27:18 np0005539564 systemd[1]: sshd.service: Deactivated successfully.
Nov 29 02:27:18 np0005539564 systemd[1]: Stopped OpenSSH server daemon.
Nov 29 02:27:18 np0005539564 systemd[1]: sshd.service: Consumed 6.598s CPU time, read 32.0K from disk, written 176.0K to disk.
Nov 29 02:27:18 np0005539564 systemd[1]: Stopped target sshd-keygen.target.
Nov 29 02:27:18 np0005539564 systemd[1]: Stopping sshd-keygen.target...
Nov 29 02:27:18 np0005539564 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 02:27:18 np0005539564 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 02:27:18 np0005539564 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 02:27:18 np0005539564 systemd[1]: Reached target sshd-keygen.target.
Nov 29 02:27:18 np0005539564 systemd[1]: Starting OpenSSH server daemon...
Nov 29 02:27:18 np0005539564 systemd[1]: Started OpenSSH server daemon.
Nov 29 02:27:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:19.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.007000185s ======
Nov 29 02:27:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:19.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.007000185s
Nov 29 02:27:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:21.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:21.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:27:21 np0005539564 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:27:21 np0005539564 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:27:21 np0005539564 systemd[1]: Reloading.
Nov 29 02:27:22 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:22 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:22 np0005539564 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:27:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:23.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:23.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:25.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:25.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:26 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:27:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:27:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:27.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:27 np0005539564 python3.9[170378]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:27:27 np0005539564 systemd[1]: Reloading.
Nov 29 02:27:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:27.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:27 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:27 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:27 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:27:28 np0005539564 python3.9[171595]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:27:28 np0005539564 systemd[1]: Reloading.
Nov 29 02:27:28 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:28 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:29.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:29.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:29 np0005539564 python3.9[172986]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:27:29 np0005539564 systemd[1]: Reloading.
Nov 29 02:27:29 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:29 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:30 np0005539564 python3.9[174357]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:27:31 np0005539564 systemd[1]: Reloading.
Nov 29 02:27:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:31.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:31 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:31 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:31.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:27:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:33.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:33.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:33 np0005539564 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:27:33 np0005539564 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:27:33 np0005539564 systemd[1]: man-db-cache-update.service: Consumed 11.464s CPU time.
Nov 29 02:27:33 np0005539564 systemd[1]: run-r4b14743ef1b54056b68cac7f6f7c5e9f.service: Deactivated successfully.
Nov 29 02:27:33 np0005539564 python3.9[174722]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:33 np0005539564 systemd[1]: Reloading.
Nov 29 02:27:33 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:33 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:34 np0005539564 python3.9[174913]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:34 np0005539564 systemd[1]: Reloading.
Nov 29 02:27:34 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:34 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:35.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:35.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:35 np0005539564 python3.9[175103]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:36 np0005539564 systemd[1]: Reloading.
Nov 29 02:27:36 np0005539564 podman[175105]: 2025-11-29 07:27:36.144420041 +0000 UTC m=+0.099246291 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:27:36 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:36 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:27:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:37.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:37.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:37 np0005539564 python3.9[175313]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:39 np0005539564 python3.9[175468]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:39.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:39 np0005539564 systemd[1]: Reloading.
Nov 29 02:27:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:39.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:39 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:39 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:41.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:41.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:41 np0005539564 python3.9[175659]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:27:41 np0005539564 systemd[1]: Reloading.
Nov 29 02:27:41 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:41 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:41 np0005539564 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 29 02:27:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:27:41 np0005539564 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 29 02:27:42 np0005539564 python3.9[175852]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:43.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:43.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:43 np0005539564 podman[175955]: 2025-11-29 07:27:43.588325914 +0000 UTC m=+0.140453299 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 02:27:43 np0005539564 python3.9[176032]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:45.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:45.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:27:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:47.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:47.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:47 np0005539564 python3.9[176187]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:48 np0005539564 python3.9[176342]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:49.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:49.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:49 np0005539564 python3.9[176497]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:50 np0005539564 python3.9[176652]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:51.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:51.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:51 np0005539564 python3.9[176807]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:51 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:27:52 np0005539564 python3.9[176962]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:53.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:53.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:55.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:55.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:55 np0005539564 python3.9[177117]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:56 np0005539564 python3.9[177272]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:56 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:27:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.004000106s ======
Nov 29 02:27:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:57.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000106s
Nov 29 02:27:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:57.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:57 np0005539564 python3.9[177427]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:59.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:27:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:59.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:59 np0005539564 python3.9[177582]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:28:00 np0005539564 python3.9[177737]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:28:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:01.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:01.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:28:02 np0005539564 python3.9[177892]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:28:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:03.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:03.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:03 np0005539564 python3.9[178047]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:28:03.676 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:28:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:28:03.677 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:28:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:28:03.677 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:28:04 np0005539564 python3.9[178199]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:05 np0005539564 python3.9[178351]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:05.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:05.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:05 np0005539564 python3.9[178503]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:06 np0005539564 python3.9[178655]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:28:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:07.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:07 np0005539564 podman[178779]: 2025-11-29 07:28:07.178832604 +0000 UTC m=+0.084809172 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 02:28:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:07.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:07 np0005539564 python3.9[178825]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:08 np0005539564 python3.9[178977]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:09 np0005539564 python3.9[179102]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401287.6173637-1628-246740544483984/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:09.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:09.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:09 np0005539564 python3.9[179254]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:10 np0005539564 python3.9[179379]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401289.206585-1628-56583500209152/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:11.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:11.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:11 np0005539564 python3.9[179531]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:28:11 np0005539564 python3.9[179656]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401290.7180617-1628-15741031897169/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:12 np0005539564 python3.9[179808]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:13.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:13.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:13 np0005539564 python3.9[179933]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401292.17653-1628-172591129434552/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:13 np0005539564 podman[180057]: 2025-11-29 07:28:13.96639166 +0000 UTC m=+0.109461526 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 02:28:14 np0005539564 python3.9[180109]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:14 np0005539564 python3.9[180238]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401293.5614152-1628-254916286004021/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:15.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:15.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:15 np0005539564 python3.9[180390]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:16 np0005539564 python3.9[180515]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401294.9882722-1628-237247326499302/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:28:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:17.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:17 np0005539564 python3.9[180667]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:17.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:17 np0005539564 python3.9[180790]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401296.571165-1628-59125333464042/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:18 np0005539564 python3.9[180942]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:19.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:19.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:19 np0005539564 python3.9[181067]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401298.0541296-1628-175253958225487/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:20 np0005539564 python3.9[181219]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 29 02:28:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:21.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:21.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:28:22 np0005539564 python3.9[181374]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:22 np0005539564 python3.9[181526]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:23.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:23.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:23 np0005539564 python3.9[181678]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:24 np0005539564 python3.9[181830]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:24 np0005539564 python3.9[181982]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:25.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:25.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:25 np0005539564 python3.9[182134]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:26 np0005539564 python3.9[182286]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:28:27 np0005539564 python3.9[182488]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:27.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:27.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:27 np0005539564 python3.9[182723]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:28 np0005539564 python3.9[182875]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:29.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:28:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:28:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:28:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:29.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:29 np0005539564 python3.9[183027]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:30 np0005539564 python3.9[183179]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:30 np0005539564 python3.9[183331]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:31.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:31.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:31 np0005539564 python3.9[183483]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:33.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:33.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:33 np0005539564 python3.9[183635]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:28:34 np0005539564 python3.9[183758]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401312.9605248-2291-247981974037912/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:35 np0005539564 python3.9[183910]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:35.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:35.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:35 np0005539564 python3.9[184033]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401314.4781556-2291-194798321086874/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:36 np0005539564 python3.9[184185]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:37.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:37.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:37 np0005539564 podman[184308]: 2025-11-29 07:28:37.341104163 +0000 UTC m=+0.076448998 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 02:28:37 np0005539564 python3.9[184309]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401315.9041424-2291-15925844600962/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:38 np0005539564 python3.9[184478]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:38 np0005539564 python3.9[184601]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401317.6857007-2291-72752523148417/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:28:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:39.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:39.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:39 np0005539564 python3.9[184753]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:40 np0005539564 python3.9[184876]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401319.0816584-2291-148775682958664/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:41.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:41.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:41 np0005539564 python3.9[185028]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:42 np0005539564 python3.9[185151]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401320.6605592-2291-76444991867272/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:42 np0005539564 python3.9[185306]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:43.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:43.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:43 np0005539564 python3.9[185429]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401322.2881117-2291-165495083095614/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:28:44 np0005539564 podman[185553]: 2025-11-29 07:28:44.407561762 +0000 UTC m=+0.270851756 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:28:44 np0005539564 python3.9[185598]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:45 np0005539564 python3.9[185728]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401323.8319612-2291-81104929944069/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:45.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:45.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:45 np0005539564 python3.9[185880]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:46 np0005539564 python3.9[186003]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401325.328515-2291-82775357017145/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:47.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:47 np0005539564 python3.9[186155]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:47.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:47 np0005539564 python3.9[186278]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401326.755376-2291-80462768295264/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:48 np0005539564 python3.9[186430]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:28:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:49.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:49 np0005539564 python3.9[186553]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401328.1068504-2291-8082658053105/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:49.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:50 np0005539564 python3.9[186705]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:50 np0005539564 python3.9[186828]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401329.4971526-2291-49797979257359/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:51.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:51.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:51 np0005539564 python3.9[186980]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:52 np0005539564 python3.9[187103]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401331.0163825-2291-159314979200416/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:53.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:53.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:53 np0005539564 python3.9[187255]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:28:54 np0005539564 python3.9[187378]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401332.449247-2291-104835871020194/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:55.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:55.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:56 np0005539564 python3.9[187530]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:28:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:57.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:57.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:57 np0005539564 python3.9[187685]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 29 02:28:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:59.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:28:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:59.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:29:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:01.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:01.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:01 np0005539564 auditd[702]: Audit daemon rotating log files
Nov 29 02:29:02 np0005539564 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 29 02:29:02 np0005539564 python3.9[187841]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:03 np0005539564 python3.9[187993]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:03.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:03.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:29:03.677 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:29:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:29:03.679 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:29:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:29:03.679 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:29:03 np0005539564 python3.9[188195]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:03 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:29:03 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:29:04 np0005539564 python3.9[188347]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:29:05 np0005539564 python3.9[188499]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:05.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:29:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:05.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:29:05 np0005539564 python3.9[188651]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:06 np0005539564 python3.9[188803]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:07.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:07 np0005539564 python3.9[188955]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:29:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:07.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:29:07 np0005539564 podman[188980]: 2025-11-29 07:29:07.535864466 +0000 UTC m=+0.076742492 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:29:07 np0005539564 python3.9[189126]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:08 np0005539564 python3.9[189278]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:09.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:29:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:09.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:29:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:29:09 np0005539564 python3.9[189430]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:29:09 np0005539564 systemd[1]: Reloading.
Nov 29 02:29:09 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:29:09 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:29:10 np0005539564 systemd[1]: Starting libvirt logging daemon socket...
Nov 29 02:29:10 np0005539564 systemd[1]: Listening on libvirt logging daemon socket.
Nov 29 02:29:10 np0005539564 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 29 02:29:10 np0005539564 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 29 02:29:10 np0005539564 systemd[1]: Starting libvirt logging daemon...
Nov 29 02:29:10 np0005539564 systemd[1]: Started libvirt logging daemon.
Nov 29 02:29:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:11.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:11.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:11 np0005539564 python3.9[189623]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:29:11 np0005539564 systemd[1]: Reloading.
Nov 29 02:29:11 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:29:11 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:29:11 np0005539564 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 29 02:29:11 np0005539564 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 29 02:29:11 np0005539564 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 29 02:29:11 np0005539564 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 29 02:29:11 np0005539564 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 29 02:29:11 np0005539564 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 29 02:29:11 np0005539564 systemd[1]: Starting libvirt nodedev daemon...
Nov 29 02:29:11 np0005539564 systemd[1]: Started libvirt nodedev daemon.
Nov 29 02:29:12 np0005539564 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 29 02:29:12 np0005539564 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 29 02:29:12 np0005539564 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 29 02:29:12 np0005539564 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 29 02:29:12 np0005539564 python3.9[189847]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:29:12 np0005539564 systemd[1]: Reloading.
Nov 29 02:29:13 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:29:13 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:29:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:13.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:29:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:13.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:29:13 np0005539564 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 29 02:29:13 np0005539564 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 29 02:29:13 np0005539564 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 29 02:29:13 np0005539564 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 29 02:29:13 np0005539564 systemd[1]: Starting libvirt proxy daemon...
Nov 29 02:29:13 np0005539564 setroubleshoot[189686]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 320262b2-6fdd-42a3-b077-b44940203d3c
Nov 29 02:29:13 np0005539564 systemd[1]: Started libvirt proxy daemon.
Nov 29 02:29:13 np0005539564 setroubleshoot[189686]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                    
                                                    *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                    
                                                    If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                    Then turn on full auditing to get path information about the offending file and generate the error again.
                                                    Do
                                                    
                                                    Turn on full auditing
                                                    # auditctl -w /etc/shadow -p w
                                                    Try to recreate AVC. Then execute
                                                    # ausearch -m avc -ts recent
                                                    If you see PATH record check ownership/permissions on file, and fix it,
                                                    otherwise report as a bugzilla.
                                                    
                                                    *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                    
                                                    If you believe that virtlogd should have the dac_read_search capability by default.
                                                    Then you should report this as a bug.
                                                    You can generate a local policy module to allow this access.
                                                    Do
                                                    allow this access for now by executing:
                                                    # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                    # semodule -X 300 -i my-virtlogd.pp
Nov 29 02:29:14 np0005539564 python3.9[190060]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:29:14 np0005539564 systemd[1]: Reloading.
Nov 29 02:29:14 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:29:14 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:29:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:29:14 np0005539564 podman[190062]: 2025-11-29 07:29:14.868019134 +0000 UTC m=+0.144672567 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 02:29:15 np0005539564 systemd[1]: Listening on libvirt locking daemon socket.
Nov 29 02:29:15 np0005539564 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 29 02:29:15 np0005539564 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 29 02:29:15 np0005539564 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 29 02:29:15 np0005539564 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 29 02:29:15 np0005539564 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 29 02:29:15 np0005539564 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 29 02:29:15 np0005539564 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 29 02:29:15 np0005539564 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 29 02:29:15 np0005539564 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 29 02:29:15 np0005539564 systemd[1]: Starting libvirt QEMU daemon...
Nov 29 02:29:15 np0005539564 systemd[1]: Started libvirt QEMU daemon.
Nov 29 02:29:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:15.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:29:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:15.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:29:16 np0005539564 python3.9[190303]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:29:16 np0005539564 systemd[1]: Reloading.
Nov 29 02:29:16 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:29:16 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:29:16 np0005539564 systemd[1]: Starting libvirt secret daemon socket...
Nov 29 02:29:16 np0005539564 systemd[1]: Listening on libvirt secret daemon socket.
Nov 29 02:29:16 np0005539564 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 29 02:29:16 np0005539564 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 29 02:29:16 np0005539564 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 29 02:29:16 np0005539564 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 29 02:29:16 np0005539564 systemd[1]: Starting libvirt secret daemon...
Nov 29 02:29:16 np0005539564 systemd[1]: Started libvirt secret daemon.
Nov 29 02:29:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:29:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:17.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:29:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:29:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:17.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:29:17 np0005539564 python3.9[190515]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:18 np0005539564 python3.9[190667]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:29:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:29:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:19.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:29:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:19.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:19 np0005539564 python3.9[190819]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:29:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:29:20 np0005539564 python3.9[190973]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:29:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:29:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:21.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:29:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:21.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:21 np0005539564 python3.9[191123]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:29:22 np0005539564 python3.9[191244]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401361.381073-3365-114930183610974/.source.xml follow=False _original_basename=secret.xml.j2 checksum=3de32f8e861874afb18756e58a543ac33a4e4294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:23.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:23.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:23 np0005539564 python3.9[191396]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 38a37ed2-442a-5e0d-a69a-881fdd186450#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:29:23 np0005539564 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 29 02:29:23 np0005539564 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 29 02:29:25 np0005539564 python3.9[191558]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:29:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:25.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:25.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:27.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:27.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:29.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:29 np0005539564 python3.9[192021]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:29.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:29:30 np0005539564 python3.9[192173]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:29:31 np0005539564 python3.9[192296]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401369.7278519-3530-116877028704332/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:31.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:31.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:32 np0005539564 python3.9[192448]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:33.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:33.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:33 np0005539564 python3.9[192600]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:29:33 np0005539564 python3.9[192678]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:35 np0005539564 python3.9[192830]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:29:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:29:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:35.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:35 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Nov 29 02:29:35 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:29:35.367380) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:29:35 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Nov 29 02:29:35 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401375367429, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1526, "num_deletes": 503, "total_data_size": 3034503, "memory_usage": 3071440, "flush_reason": "Manual Compaction"}
Nov 29 02:29:35 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Nov 29 02:29:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:35.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:35 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401375564518, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 1183243, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14169, "largest_seqno": 15689, "table_properties": {"data_size": 1178410, "index_size": 1781, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15117, "raw_average_key_size": 19, "raw_value_size": 1166021, "raw_average_value_size": 1479, "num_data_blocks": 81, "num_entries": 788, "num_filter_entries": 788, "num_deletions": 503, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401235, "oldest_key_time": 1764401235, "file_creation_time": 1764401375, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:29:35 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 197240 microseconds, and 6897 cpu microseconds.
Nov 29 02:29:35 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:29:35 np0005539564 python3.9[192908]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.mnue07n0 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:37 np0005539564 python3.9[193060]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:29:37 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:29:35.564616) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 1183243 bytes OK
Nov 29 02:29:37 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:29:35.564642) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Nov 29 02:29:37 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:29:35.628495) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Nov 29 02:29:37 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:29:35.628559) EVENT_LOG_v1 {"time_micros": 1764401375628545, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:29:37 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:29:35.628593) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:29:37 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 3026546, prev total WAL file size 3042519, number of live WAL files 2.
Nov 29 02:29:37 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:29:37 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:29:37.258173) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Nov 29 02:29:37 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:29:37 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(1155KB)], [27(10MB)]
Nov 29 02:29:37 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401377258292, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 12690165, "oldest_snapshot_seqno": -1}
Nov 29 02:29:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:37.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:37.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:38 np0005539564 podman[193139]: 2025-11-29 07:29:38.53138568 +0000 UTC m=+0.084529154 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:29:38 np0005539564 python3.9[193138]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:39.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:39.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:39 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 4158 keys, 7563826 bytes, temperature: kUnknown
Nov 29 02:29:39 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401379491980, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 7563826, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7534978, "index_size": 17346, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10437, "raw_key_size": 104097, "raw_average_key_size": 25, "raw_value_size": 7458546, "raw_average_value_size": 1793, "num_data_blocks": 724, "num_entries": 4158, "num_filter_entries": 4158, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764401377, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:29:39 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:29:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:29:39.492636) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7563826 bytes
Nov 29 02:29:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:29:39.581333) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 5.7 rd, 3.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 11.0 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(17.1) write-amplify(6.4) OK, records in: 5126, records dropped: 968 output_compression: NoCompression
Nov 29 02:29:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:29:39.581401) EVENT_LOG_v1 {"time_micros": 1764401379581377, "job": 14, "event": "compaction_finished", "compaction_time_micros": 2233834, "compaction_time_cpu_micros": 24398, "output_level": 6, "num_output_files": 1, "total_output_size": 7563826, "num_input_records": 5126, "num_output_records": 4158, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:29:39 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:29:39 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401379582148, "job": 14, "event": "table_file_deletion", "file_number": 29}
Nov 29 02:29:39 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:29:39 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401379585898, "job": 14, "event": "table_file_deletion", "file_number": 27}
Nov 29 02:29:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:29:37.258018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:29:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:29:39.586042) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:29:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:29:39.586052) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:29:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:29:39.586056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:29:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:29:39.586060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:29:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:29:39.586064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:29:39 np0005539564 python3.9[193309]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:29:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:29:40 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:29:41 np0005539564 python3[193462]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 02:29:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:41.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:41.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:42 np0005539564 python3.9[193614]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:29:42 np0005539564 python3.9[193692]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:43.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:43.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:29:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:45.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:45.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:45 np0005539564 podman[193771]: 2025-11-29 07:29:45.601582525 +0000 UTC m=+0.170207825 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 02:29:47 np0005539564 python3.9[193872]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:29:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:47.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:47.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:47 np0005539564 python3.9[193950]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:48 np0005539564 python3.9[194102]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:29:49 np0005539564 python3.9[194180]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:49.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:49.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:29:50 np0005539564 python3.9[194332]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:29:51 np0005539564 python3.9[194410]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:51.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:51.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:51 np0005539564 python3.9[194562]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:29:52 np0005539564 python3.9[194687]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401391.2879138-3905-225010133086787/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:53.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:53.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:53 np0005539564 python3.9[194839]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:54 np0005539564 python3.9[194991]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:29:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:29:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:55.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:55.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:56 np0005539564 python3.9[195146]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:57.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:57.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:57 np0005539564 python3.9[195298]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:29:58 np0005539564 python3.9[195451]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:29:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:59.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:29:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:59.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:30:00 np0005539564 python3.9[195605]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:01.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:01 np0005539564 ceph-mon[81769]: overall HEALTH_OK
Nov 29 02:30:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:01.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:01 np0005539564 python3.9[195760]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:02 np0005539564 python3.9[195912]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:30:03 np0005539564 python3.9[196035]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401402.0371835-4121-254212556915766/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:03.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:03.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:30:03.679 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:30:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:30:03.679 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:30:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:30:03.679 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:30:04 np0005539564 python3.9[196304]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:30:04 np0005539564 python3.9[196441]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401403.6092882-4166-116665101451236/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:30:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:30:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:05.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:30:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:30:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:05.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:30:05 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:30:05 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:30:05 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:30:05 np0005539564 python3.9[196593]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:30:06 np0005539564 python3.9[196716]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401405.1442301-4211-174484641509169/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:07 np0005539564 python3.9[196868]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:30:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:07.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:07 np0005539564 systemd[1]: Reloading.
Nov 29 02:30:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:07.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:07 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:30:07 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:30:07 np0005539564 systemd[1]: Reached target edpm_libvirt.target.
Nov 29 02:30:08 np0005539564 python3.9[197060]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 02:30:08 np0005539564 systemd[1]: Reloading.
Nov 29 02:30:08 np0005539564 podman[197062]: 2025-11-29 07:30:08.943535592 +0000 UTC m=+0.098469469 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 02:30:08 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:30:08 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:30:09 np0005539564 systemd[1]: Reloading.
Nov 29 02:30:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:09.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:09 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:30:09 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:30:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:09.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:30:11 np0005539564 systemd[1]: session-48.scope: Deactivated successfully.
Nov 29 02:30:11 np0005539564 systemd[1]: session-48.scope: Consumed 3min 48.817s CPU time.
Nov 29 02:30:11 np0005539564 systemd-logind[785]: Session 48 logged out. Waiting for processes to exit.
Nov 29 02:30:11 np0005539564 systemd-logind[785]: Removed session 48.
Nov 29 02:30:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:11.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:11.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:13.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:13.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:15.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:30:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:15.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:30:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:30:15 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:30:15 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:30:16 np0005539564 systemd-logind[785]: New session 49 of user zuul.
Nov 29 02:30:16 np0005539564 systemd[1]: Started Session 49 of User zuul.
Nov 29 02:30:16 np0005539564 podman[197230]: 2025-11-29 07:30:16.566381175 +0000 UTC m=+0.133746332 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:30:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:17.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:30:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:17.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:30:17 np0005539564 python3.9[197409]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:30:19 np0005539564 python3.9[197563]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:30:19 np0005539564 network[197580]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:30:19 np0005539564 network[197581]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:30:19 np0005539564 network[197582]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:30:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:19.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:19.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:30:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:30:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:21.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:30:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:21.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:23.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:23.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:25.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:25.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:30:26 np0005539564 python3.9[197854]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:30:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:27.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:27.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:27 np0005539564 python3.9[197938]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:30:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:29.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:29.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:30:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:31.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:31.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:33.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:33.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:34 np0005539564 python3.9[198091]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:30:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:35.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:35.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:30:36 np0005539564 python3.9[198243]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:37 np0005539564 python3.9[198396]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:30:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:37.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:30:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:37.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:30:37 np0005539564 python3.9[198548]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:38 np0005539564 python3.9[198701]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:30:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:39.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:39.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:39 np0005539564 podman[198749]: 2025-11-29 07:30:39.544285793 +0000 UTC m=+0.087760970 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 02:30:39 np0005539564 python3.9[198844]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401438.351516-251-271112774723762/.source.iscsi _original_basename=.siru3oj6 follow=False checksum=ead08d1b1c15fca491481728181e507f61afc000 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:30:40 np0005539564 python3.9[198996]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:41.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:41.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:41 np0005539564 python3.9[199148]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:43.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:43.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:43 np0005539564 python3.9[199300]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:30:43 np0005539564 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 29 02:30:44 np0005539564 python3.9[199456]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:30:44 np0005539564 systemd[1]: Reloading.
Nov 29 02:30:45 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:30:45 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:30:45 np0005539564 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 29 02:30:45 np0005539564 systemd[1]: Starting Open-iSCSI...
Nov 29 02:30:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:45.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:45 np0005539564 kernel: Loading iSCSI transport class v2.0-870.
Nov 29 02:30:45 np0005539564 systemd[1]: Started Open-iSCSI.
Nov 29 02:30:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:45.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:45 np0005539564 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 29 02:30:45 np0005539564 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 29 02:30:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:30:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:30:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 6327 writes, 25K keys, 6327 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 6327 writes, 1168 syncs, 5.42 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 493 writes, 750 keys, 493 commit groups, 1.0 writes per commit group, ingest: 0.25 MB, 0.00 MB/s#012Interval WAL: 493 writes, 241 syncs, 2.05 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ba4d07f610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ba4d07f610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Nov 29 02:30:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:47.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:47.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:47 np0005539564 podman[199605]: 2025-11-29 07:30:47.581498343 +0000 UTC m=+0.131235334 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 02:30:47 np0005539564 python3.9[199683]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:30:47 np0005539564 network[199700]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:30:47 np0005539564 network[199701]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:30:47 np0005539564 network[199702]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:30:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:49.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:49.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:30:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:51.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:51.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:53.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:53.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:54 np0005539564 python3.9[199976]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 02:30:55 np0005539564 python3.9[200128]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 29 02:30:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:55.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:55.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:30:55 np0005539564 python3.9[200284]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:30:56 np0005539564 python3.9[200407]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401455.311742-482-110739437530454/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:57.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:57.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:58 np0005539564 python3.9[200559]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:59.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:30:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:59.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:59 np0005539564 python3.9[200711]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:30:59 np0005539564 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 29 02:30:59 np0005539564 systemd[1]: Stopped Load Kernel Modules.
Nov 29 02:30:59 np0005539564 systemd[1]: Stopping Load Kernel Modules...
Nov 29 02:30:59 np0005539564 systemd[1]: Starting Load Kernel Modules...
Nov 29 02:30:59 np0005539564 systemd[1]: Finished Load Kernel Modules.
Nov 29 02:31:00 np0005539564 python3.9[200867]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:31:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:31:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:01.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:01.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:01 np0005539564 python3.9[201019]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:31:02 np0005539564 python3.9[201171]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:31:03 np0005539564 python3.9[201323]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:31:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:03.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:03.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:31:03.680 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:31:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:31:03.681 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:31:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:31:03.682 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:31:04 np0005539564 python3.9[201446]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401462.8464162-656-81020919172226/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:05 np0005539564 python3.9[201598]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:31:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:05.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:05.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:31:06 np0005539564 python3.9[201751]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:07 np0005539564 python3.9[201903]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:07.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:07.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:08 np0005539564 python3.9[202055]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:09 np0005539564 python3.9[202207]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:09.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:09.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:09 np0005539564 podman[202359]: 2025-11-29 07:31:09.728238955 +0000 UTC m=+0.077518796 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:31:09 np0005539564 python3.9[202360]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:10 np0005539564 python3.9[202530]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:31:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:11.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:11.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:12 np0005539564 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 29 02:31:12 np0005539564 python3.9[202682]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:13.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:13.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:13 np0005539564 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 29 02:31:13 np0005539564 python3.9[202835]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:31:14 np0005539564 python3.9[202990]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:15.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:15.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:31:16 np0005539564 python3.9[203262]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:31:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:17.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:17.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:17 np0005539564 podman[203386]: 2025-11-29 07:31:17.831700897 +0000 UTC m=+0.150550919 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 02:31:17 np0005539564 python3.9[203434]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:31:18 np0005539564 python3.9[203518]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:31:19 np0005539564 python3.9[203670]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:31:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:19.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:19.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:19 np0005539564 python3.9[203748]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:31:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:31:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:21.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:31:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:21.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:22 np0005539564 python3.9[203900]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:31:23 np0005539564 python3.9[204052]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:31:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:23.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:23.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:23 np0005539564 python3.9[204241]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:24 np0005539564 python3.9[204412]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:31:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:25.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:31:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:25.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:31:25 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:31:25 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:31:25 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:31:25 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:31:26 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:31:26 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:31:27 np0005539564 python3.9[204490]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:27.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:27.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:31:28 np0005539564 python3.9[204642]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:31:28 np0005539564 systemd[1]: Reloading.
Nov 29 02:31:28 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:31:28 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:31:28 np0005539564 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 29 02:31:28 np0005539564 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 29 02:31:28 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:31:28 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:31:28 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:31:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:29.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:29.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:29 np0005539564 python3.9[204833]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:31:30 np0005539564 python3.9[204911]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:31 np0005539564 python3.9[205063]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:31:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:31:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:31.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:31:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:31.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:31 np0005539564 python3.9[205141]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:32 np0005539564 python3.9[205293]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:31:32 np0005539564 systemd[1]: Reloading.
Nov 29 02:31:32 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:31:32 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:31:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:31:33 np0005539564 systemd[1]: Starting Create netns directory...
Nov 29 02:31:33 np0005539564 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 02:31:33 np0005539564 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 02:31:33 np0005539564 systemd[1]: Finished Create netns directory.
Nov 29 02:31:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:33.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:33.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:31:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.0 total, 600.0 interval
Cumulative writes: 2983 writes, 16K keys, 2983 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.03 MB/s
Cumulative WAL: 2983 writes, 2983 syncs, 1.00 writes per sync, written: 0.03 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1157 writes, 5494 keys, 1157 commit groups, 1.0 writes per commit group, ingest: 12.30 MB, 0.02 MB/s
Interval WAL: 1158 writes, 1158 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     35.9      0.53              0.07         7    0.075       0      0       0.0       0.0
  L6      1/0    7.21 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   2.7     20.2     16.4      3.13              0.16         6    0.521     26K   3305       0.0       0.0
 Sum      1/0    7.21 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.7     17.3     19.2      3.66              0.23        13    0.281     26K   3305       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6     12.1     12.1      2.85              0.11         6    0.475     14K   2015       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   0.0     20.2     16.4      3.13              0.16         6    0.521     26K   3305       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     36.0      0.53              0.07         6    0.088       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.018, interval 0.007
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.07 GB write, 0.06 MB/s write, 0.06 GB read, 0.05 MB/s read, 3.7 seconds
Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 2.9 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x558dc73191f0#2 capacity: 308.00 MB usage: 2.39 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000115 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(105,2.14 MB,0.695578%) FilterBlock(13,83.98 KB,0.0266286%) IndexBlock(13,169.55 KB,0.0537575%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 29 02:31:35 np0005539564 python3.9[205485]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:31:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:35.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:35.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:36 np0005539564 python3.9[205637]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:31:37 np0005539564 python3.9[205760]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401495.802951-1277-124324424933321/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:31:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:37.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:37.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:38 np0005539564 python3.9[205912]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:31:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:31:39 np0005539564 python3.9[206066]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:31:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:39.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:39.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:39 np0005539564 python3.9[206189]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401498.7602525-1352-106653717713224/.source.json _original_basename=.fezzy_jp follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:40 np0005539564 podman[206214]: 2025-11-29 07:31:40.530660032 +0000 UTC m=+0.081114292 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 02:31:41 np0005539564 python3.9[206360]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:31:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:41.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:31:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:41.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:31:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:31:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:43.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:31:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:43.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:43 np0005539564 python3.9[206787]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 29 02:31:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:45.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:45.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:45 np0005539564 python3.9[206939]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 02:31:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:47.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:47.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:31:48 np0005539564 podman[206964]: 2025-11-29 07:31:48.585488851 +0000 UTC m=+0.133916489 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 02:31:48 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:31:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:49.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:49.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:51.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:51.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:52 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:31:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:31:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:53.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:53.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:55.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:55.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:56 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:31:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:57.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:57.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:31:58 np0005539564 python3.9[207118]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 02:31:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:31:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:59.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:31:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:59.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:59 np0005539564 ceph-mon[81769]: mon.compute-2 calling monitor election
Nov 29 02:31:59 np0005539564 ceph-mon[81769]: mon.compute-0 calling monitor election
Nov 29 02:31:59 np0005539564 ceph-mon[81769]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:31:59 np0005539564 ceph-mon[81769]: overall HEALTH_OK
Nov 29 02:31:59 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:32:00 np0005539564 python3[207346]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 02:32:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:01.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:01.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:01 np0005539564 podman[207359]: 2025-11-29 07:32:01.915699671 +0000 UTC m=+1.386425780 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 02:32:02 np0005539564 podman[207420]: 2025-11-29 07:32:02.051325975 +0000 UTC m=+0.022921570 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 02:32:02 np0005539564 podman[207420]: 2025-11-29 07:32:02.237159937 +0000 UTC m=+0.208755522 container create 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 02:32:02 np0005539564 python3[207346]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 02:32:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:32:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:03.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:03.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:32:03.681 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:32:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:32:03.681 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:32:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:32:03.681 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:32:03 np0005539564 python3.9[207609]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:32:04 np0005539564 python3.9[207763]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:32:05 np0005539564 python3.9[207839]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:32:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:05.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:05.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:05 np0005539564 python3.9[207990]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401525.2459338-1616-243382636174391/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:32:06 np0005539564 python3.9[208066]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:32:06 np0005539564 systemd[1]: Reloading.
Nov 29 02:32:06 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:32:06 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:32:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:07.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:07.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:08 np0005539564 python3.9[208176]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:32:08 np0005539564 systemd[1]: Reloading.
Nov 29 02:32:08 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:32:08 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:32:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:32:08 np0005539564 systemd[1]: Starting multipathd container...
Nov 29 02:32:08 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:32:08 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28b1487faeb20251f54d8e283d7344ba2e715c63f2de776213457c3f43b21944/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 02:32:08 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28b1487faeb20251f54d8e283d7344ba2e715c63f2de776213457c3f43b21944/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 02:32:08 np0005539564 systemd[1]: Started /usr/bin/podman healthcheck run 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57.
Nov 29 02:32:08 np0005539564 podman[208216]: 2025-11-29 07:32:08.928068304 +0000 UTC m=+0.438905999 container init 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:32:08 np0005539564 multipathd[208232]: + sudo -E kolla_set_configs
Nov 29 02:32:08 np0005539564 podman[208216]: 2025-11-29 07:32:08.96121793 +0000 UTC m=+0.472055605 container start 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:32:08 np0005539564 multipathd[208232]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:32:08 np0005539564 multipathd[208232]: INFO:__main__:Validating config file
Nov 29 02:32:08 np0005539564 multipathd[208232]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:32:08 np0005539564 multipathd[208232]: INFO:__main__:Writing out command to execute
Nov 29 02:32:09 np0005539564 multipathd[208232]: ++ cat /run_command
Nov 29 02:32:09 np0005539564 multipathd[208232]: + CMD='/usr/sbin/multipathd -d'
Nov 29 02:32:09 np0005539564 multipathd[208232]: + ARGS=
Nov 29 02:32:09 np0005539564 multipathd[208232]: + sudo kolla_copy_cacerts
Nov 29 02:32:09 np0005539564 multipathd[208232]: + [[ ! -n '' ]]
Nov 29 02:32:09 np0005539564 multipathd[208232]: + . kolla_extend_start
Nov 29 02:32:09 np0005539564 multipathd[208232]: Running command: '/usr/sbin/multipathd -d'
Nov 29 02:32:09 np0005539564 multipathd[208232]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 29 02:32:09 np0005539564 multipathd[208232]: + umask 0022
Nov 29 02:32:09 np0005539564 multipathd[208232]: + exec /usr/sbin/multipathd -d
Nov 29 02:32:09 np0005539564 multipathd[208232]: 4504.428614 | --------start up--------
Nov 29 02:32:09 np0005539564 multipathd[208232]: 4504.428633 | read /etc/multipath.conf
Nov 29 02:32:09 np0005539564 multipathd[208232]: 4504.434108 | path checkers start up
Nov 29 02:32:09 np0005539564 podman[208216]: multipathd
Nov 29 02:32:09 np0005539564 systemd[1]: Started multipathd container.
Nov 29 02:32:09 np0005539564 podman[208239]: 2025-11-29 07:32:09.118924501 +0000 UTC m=+0.144602278 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:32:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:32:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:09.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:32:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:09.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:09 np0005539564 python3.9[208420]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:32:11 np0005539564 podman[208575]: 2025-11-29 07:32:11.534789603 +0000 UTC m=+0.084107354 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:32:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:11.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:11.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:32:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:13.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:13.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:14 np0005539564 python3.9[208574]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:32:15 np0005539564 python3.9[208760]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:32:15 np0005539564 systemd[1]: Stopping multipathd container...
Nov 29 02:32:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:15.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:15.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:15 np0005539564 multipathd[208232]: 4511.319113 | exit (signal)
Nov 29 02:32:15 np0005539564 multipathd[208232]: 4511.319693 | --------shut down-------
Nov 29 02:32:15 np0005539564 systemd[1]: libpod-9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57.scope: Deactivated successfully.
Nov 29 02:32:15 np0005539564 podman[208765]: 2025-11-29 07:32:15.974345822 +0000 UTC m=+0.588041528 container died 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:32:16 np0005539564 systemd[1]: 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57-405476d25a329044.timer: Deactivated successfully.
Nov 29 02:32:16 np0005539564 systemd[1]: Stopped /usr/bin/podman healthcheck run 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57.
Nov 29 02:32:16 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57-userdata-shm.mount: Deactivated successfully.
Nov 29 02:32:16 np0005539564 systemd[1]: var-lib-containers-storage-overlay-28b1487faeb20251f54d8e283d7344ba2e715c63f2de776213457c3f43b21944-merged.mount: Deactivated successfully.
Nov 29 02:32:17 np0005539564 podman[208765]: 2025-11-29 07:32:17.488457421 +0000 UTC m=+2.102153097 container cleanup 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 02:32:17 np0005539564 podman[208765]: multipathd
Nov 29 02:32:17 np0005539564 podman[208795]: multipathd
Nov 29 02:32:17 np0005539564 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 29 02:32:17 np0005539564 systemd[1]: Stopped multipathd container.
Nov 29 02:32:17 np0005539564 systemd[1]: Starting multipathd container...
Nov 29 02:32:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:17.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:17.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:17 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:32:17 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28b1487faeb20251f54d8e283d7344ba2e715c63f2de776213457c3f43b21944/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 02:32:17 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28b1487faeb20251f54d8e283d7344ba2e715c63f2de776213457c3f43b21944/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 02:32:17 np0005539564 systemd[1]: Started /usr/bin/podman healthcheck run 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57.
Nov 29 02:32:17 np0005539564 podman[208809]: 2025-11-29 07:32:17.754819418 +0000 UTC m=+0.179117831 container init 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 02:32:17 np0005539564 multipathd[208824]: + sudo -E kolla_set_configs
Nov 29 02:32:17 np0005539564 podman[208809]: 2025-11-29 07:32:17.777067189 +0000 UTC m=+0.201365592 container start 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:32:17 np0005539564 podman[208809]: multipathd
Nov 29 02:32:17 np0005539564 systemd[1]: Started multipathd container.
Nov 29 02:32:17 np0005539564 multipathd[208824]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:32:17 np0005539564 multipathd[208824]: INFO:__main__:Validating config file
Nov 29 02:32:17 np0005539564 multipathd[208824]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:32:17 np0005539564 multipathd[208824]: INFO:__main__:Writing out command to execute
Nov 29 02:32:17 np0005539564 multipathd[208824]: ++ cat /run_command
Nov 29 02:32:17 np0005539564 multipathd[208824]: + CMD='/usr/sbin/multipathd -d'
Nov 29 02:32:17 np0005539564 multipathd[208824]: + ARGS=
Nov 29 02:32:17 np0005539564 multipathd[208824]: + sudo kolla_copy_cacerts
Nov 29 02:32:17 np0005539564 multipathd[208824]: Running command: '/usr/sbin/multipathd -d'
Nov 29 02:32:17 np0005539564 multipathd[208824]: + [[ ! -n '' ]]
Nov 29 02:32:17 np0005539564 multipathd[208824]: + . kolla_extend_start
Nov 29 02:32:17 np0005539564 multipathd[208824]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 29 02:32:17 np0005539564 multipathd[208824]: + umask 0022
Nov 29 02:32:17 np0005539564 multipathd[208824]: + exec /usr/sbin/multipathd -d
Nov 29 02:32:17 np0005539564 podman[208831]: 2025-11-29 07:32:17.861793958 +0000 UTC m=+0.075938243 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:32:17 np0005539564 multipathd[208824]: 4513.254580 | --------start up--------
Nov 29 02:32:17 np0005539564 multipathd[208824]: 4513.254593 | read /etc/multipath.conf
Nov 29 02:32:17 np0005539564 systemd[1]: 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57-430954843db78b5b.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 02:32:17 np0005539564 systemd[1]: 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57-430954843db78b5b.service: Failed with result 'exit-code'.
Nov 29 02:32:17 np0005539564 multipathd[208824]: 4513.262069 | path checkers start up
Nov 29 02:32:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:32:18 np0005539564 python3.9[209016]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:32:19 np0005539564 podman[209064]: 2025-11-29 07:32:19.53296391 +0000 UTC m=+0.088617496 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 02:32:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:19.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:19.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:19 np0005539564 python3.9[209195]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 02:32:20 np0005539564 python3.9[209347]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 29 02:32:20 np0005539564 kernel: Key type psk registered
Nov 29 02:32:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:21.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:21.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:21 np0005539564 python3.9[209510]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:32:22 np0005539564 python3.9[209633]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401541.3368125-1857-13371768421661/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:32:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:32:23 np0005539564 python3.9[209785]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:32:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:32:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:32:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:23.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:32:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:32:23 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:23.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:32:24 np0005539564 python3.9[209937]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:32:24 np0005539564 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 29 02:32:24 np0005539564 systemd[1]: Stopped Load Kernel Modules.
Nov 29 02:32:24 np0005539564 systemd[1]: Stopping Load Kernel Modules...
Nov 29 02:32:24 np0005539564 systemd[1]: Starting Load Kernel Modules...
Nov 29 02:32:24 np0005539564 systemd[1]: Finished Load Kernel Modules.
Nov 29 02:32:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:25.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:32:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:25 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:25.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:25 np0005539564 python3.9[210093]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:32:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:32:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:27.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:27 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:27.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:32:29 np0005539564 systemd[1]: Reloading.
Nov 29 02:32:29 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:32:29 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:32:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:32:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:29.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:29 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:29.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:29 np0005539564 systemd[1]: Reloading.
Nov 29 02:32:29 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:32:29 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:32:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:31.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:32:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:31.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:32:32 np0005539564 systemd-logind[785]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 29 02:32:32 np0005539564 systemd-logind[785]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 29 02:32:32 np0005539564 lvm[210208]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 02:32:32 np0005539564 lvm[210208]: VG ceph_vg0 finished
Nov 29 02:32:33 np0005539564 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:32:33 np0005539564 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:32:33 np0005539564 systemd[1]: Reloading.
Nov 29 02:32:33 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:32:33 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:32:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:33.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:33.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:32:34 np0005539564 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:32:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:35.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:35.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:37.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:37.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:39.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:39.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:32:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:41.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:41.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:41 np0005539564 podman[211550]: 2025-11-29 07:32:41.762008013 +0000 UTC m=+0.074358710 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:32:42 np0005539564 python3.9[211551]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:32:42 np0005539564 systemd[1]: Stopping Open-iSCSI...
Nov 29 02:32:42 np0005539564 iscsid[199496]: iscsid shutting down.
Nov 29 02:32:42 np0005539564 systemd[1]: iscsid.service: Deactivated successfully.
Nov 29 02:32:42 np0005539564 systemd[1]: Stopped Open-iSCSI.
Nov 29 02:32:42 np0005539564 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 29 02:32:42 np0005539564 systemd[1]: Starting Open-iSCSI...
Nov 29 02:32:42 np0005539564 systemd[1]: Started Open-iSCSI.
Nov 29 02:32:43 np0005539564 python3.9[211723]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:32:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:43.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:43.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:43 np0005539564 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:32:43 np0005539564 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:32:43 np0005539564 systemd[1]: man-db-cache-update.service: Consumed 2.191s CPU time.
Nov 29 02:32:43 np0005539564 systemd[1]: run-r6afe2465e01e4f7cbf2b379d8e4748b4.service: Deactivated successfully.
Nov 29 02:32:44 np0005539564 python3.9[211880]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:32:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:32:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:45.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:45.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:46 np0005539564 python3.9[212032]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:32:46 np0005539564 systemd[1]: Reloading.
Nov 29 02:32:46 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:32:46 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:32:47 np0005539564 python3.9[212216]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:32:47 np0005539564 network[212233]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:32:47 np0005539564 network[212234]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:32:47 np0005539564 network[212235]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:32:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:32:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:47.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:32:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:47.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:48 np0005539564 podman[212243]: 2025-11-29 07:32:48.370575225 +0000 UTC m=+0.059463847 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:32:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:49.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:49.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:49 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Nov 29 02:32:49 np0005539564 podman[212314]: 2025-11-29 07:32:49.842645708 +0000 UTC m=+0.246210483 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 02:32:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:32:50 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 29 02:32:50 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 29 02:32:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:51.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:51.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:51 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 29 02:32:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 29 02:32:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Nov 29 02:32:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 29 02:32:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 29 02:32:52 np0005539564 python3.9[212556]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:32:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:32:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:53.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:32:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:53.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:53 np0005539564 python3.9[212709]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:32:53 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 29 02:32:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:32:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:55.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:32:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:55.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:32:55 np0005539564 python3.9[212862]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:32:56 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Nov 29 02:32:57 np0005539564 python3.9[213015]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:32:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:32:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:57.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:32:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:57.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:58 np0005539564 python3.9[213168]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:32:59 np0005539564 python3.9[213321]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:32:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:59.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:32:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:59.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:00 np0005539564 python3.9[213602]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:33:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:33:01 np0005539564 python3.9[213759]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:33:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:01.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:01.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:01 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:33:01 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:33:01 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:33:02 np0005539564 python3.9[213912]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:03 np0005539564 python3.9[214066]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:03.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:03.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:33:03.682 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:33:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:33:03.683 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:33:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:33:03.684 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:33:04 np0005539564 python3.9[214218]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:04 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:33:04 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:33:04 np0005539564 python3.9[214370]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:05 np0005539564 python3.9[214522]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:05.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:05.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:33:06 np0005539564 python3.9[214674]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:07 np0005539564 python3.9[214826]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:07 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:33:07 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:33:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:07.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:33:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:07.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:33:08 np0005539564 python3.9[214978]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:08 np0005539564 python3.9[215130]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:09.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:09.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:09 np0005539564 python3.9[215282]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:10 np0005539564 python3.9[215434]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:33:11 np0005539564 python3.9[215586]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:11.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:11.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:11 np0005539564 podman[215710]: 2025-11-29 07:33:11.961254575 +0000 UTC m=+0.099791128 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:33:12 np0005539564 python3.9[215755]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:12 np0005539564 python3.9[215907]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:13 np0005539564 python3.9[216059]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:13.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:33:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:13.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:33:14 np0005539564 python3.9[216211]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:15 np0005539564 python3.9[216363]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:33:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:15.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:15.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:33:16 np0005539564 python3.9[216515]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:33:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:17.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:17.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:18 np0005539564 podman[216668]: 2025-11-29 07:33:18.522095564 +0000 UTC m=+0.071115424 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd)
Nov 29 02:33:19 np0005539564 python3.9[216667]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:33:19 np0005539564 systemd[1]: Reloading.
Nov 29 02:33:19 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:33:19 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:33:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:19.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:19.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:19 np0005539564 podman[216722]: 2025-11-29 07:33:19.978635933 +0000 UTC m=+0.083890842 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 29 02:33:20 np0005539564 python3.9[216898]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:33:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:33:21 np0005539564 python3.9[217051]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:33:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:21.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:21.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:22 np0005539564 python3.9[217204]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:33:23 np0005539564 python3.9[217357]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:33:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:23.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:23.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:23 np0005539564 python3.9[217510]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:33:24 np0005539564 python3.9[217663]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:33:24 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:33:25 np0005539564 python3.9[217816]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:33:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:25.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:25.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:33:26 np0005539564 python3.9[217969]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:33:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:27.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:27.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:29.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:29.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:33:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:31.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:31.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:32 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:33:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:33.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:33.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:35.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:35.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:36 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:33:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:37.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:37.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:39.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:39.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:40 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:33:41 np0005539564 python3.9[218122]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:33:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:41.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:41.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:42 np0005539564 python3.9[218274]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:33:42 np0005539564 podman[218275]: 2025-11-29 07:33:42.195302072 +0000 UTC m=+0.087571591 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 02:33:43 np0005539564 python3.9[218445]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:33:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).paxos(paxos updating c 1005..1735) lease_timeout -- calling new election
Nov 29 02:33:43 np0005539564 ceph-mon[81769]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Nov 29 02:33:43 np0005539564 ceph-mon[81769]: paxos.2).electionLogic(32) init, last seen epoch 32
Nov 29 02:33:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:43.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:43.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:43 np0005539564 python3.9[218597]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:33:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:44 np0005539564 python3.9[218750]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:33:44 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:33:45 np0005539564 python3.9[218902]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:33:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:45.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:45.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:46 np0005539564 python3.9[219054]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:33:46 np0005539564 python3.9[219206]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:33:47 np0005539564 python3.9[219358]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:33:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:47.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:47.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:48 np0005539564 podman[219510]: 2025-11-29 07:33:48.689388507 +0000 UTC m=+0.094996373 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 02:33:48 np0005539564 python3.9[219511]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:33:48 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:33:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:49.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:49.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:50 np0005539564 podman[219554]: 2025-11-29 07:33:50.545451207 +0000 UTC m=+0.102961830 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:33:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:51.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:51.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:53 np0005539564 ceph-mon[81769]: mon.compute-1 calling monitor election
Nov 29 02:33:53 np0005539564 ceph-mon[81769]: mon.compute-2 calling monitor election
Nov 29 02:33:53 np0005539564 ceph-mon[81769]: mon.compute-0 calling monitor election
Nov 29 02:33:53 np0005539564 ceph-mon[81769]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:33:53 np0005539564 ceph-mon[81769]: overall HEALTH_OK
Nov 29 02:33:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:33:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:53.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:53.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:55.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:55.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:56 np0005539564 python3.9[219707]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 29 02:33:57 np0005539564 python3.9[219860]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:33:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:57.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:57.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:33:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:59.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:33:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:59.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:00 np0005539564 python3.9[220018]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 02:34:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:01.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:01.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:34:03.684 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:34:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:34:03.686 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:34:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:34:03.687 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:34:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:34:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:03.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:03.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:05.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:05.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:07.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:07.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:08 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:34:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:34:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:34:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:09.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:34:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:09.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:11.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:11.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:12 np0005539564 podman[220051]: 2025-11-29 07:34:12.531464068 +0000 UTC m=+0.083218902 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 02:34:12 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:34:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:13.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:13.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:15.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:34:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:15.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:34:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:34:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:17.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:17.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:19 np0005539564 podman[220070]: 2025-11-29 07:34:19.506783396 +0000 UTC m=+0.068008690 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:34:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:19.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:19.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:21 np0005539564 podman[220091]: 2025-11-29 07:34:21.623339018 +0000 UTC m=+0.171610036 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 02:34:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:21.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:34:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:21.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:34:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:34:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:23.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:34:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:23.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:34:24 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:34:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:34:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:25.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:34:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:25.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:26 np0005539564 systemd-logind[785]: New session 50 of user zuul.
Nov 29 02:34:26 np0005539564 systemd[1]: Started Session 50 of User zuul.
Nov 29 02:34:26 np0005539564 systemd[1]: session-50.scope: Deactivated successfully.
Nov 29 02:34:26 np0005539564 systemd-logind[785]: Session 50 logged out. Waiting for processes to exit.
Nov 29 02:34:26 np0005539564 systemd-logind[785]: Removed session 50.
Nov 29 02:34:27 np0005539564 python3.9[220271]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:34:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 02:34:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:27.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 02:34:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:27.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:28 np0005539564 python3.9[220392]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401666.9389706-3440-100293889479839/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:34:28 np0005539564 python3.9[220542]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:34:28 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:34:29 np0005539564 python3.9[220618]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:34:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:29.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:29.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:30 np0005539564 python3.9[220768]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:34:30 np0005539564 python3.9[220889]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401669.5467792-3440-200062741643460/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:34:31 np0005539564 python3.9[221039]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:34:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:31.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:31.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:32 np0005539564 python3.9[221160]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401670.975019-3440-261843210071540/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=bc7f3bb7d4094c596a18178a888511b54e157ba4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:34:32 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:34:32 np0005539564 python3.9[221310]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:34:33 np0005539564 python3.9[221431]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401672.410871-3440-9669135958212/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:34:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:33.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:33.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:34 np0005539564 python3.9[221581]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:34:34 np0005539564 python3.9[221702]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401673.7475328-3440-163089730311117/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:34:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:35.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:35.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:35 np0005539564 python3.9[221854]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:34:36 np0005539564 python3.9[222006]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:34:36 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:34:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:37.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:37.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:34:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:39.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:34:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:39.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:40 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:34:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:41.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:34:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:41.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:34:43 np0005539564 podman[222031]: 2025-11-29 07:34:43.542312345 +0000 UTC m=+0.090237623 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 02:34:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:43.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:43.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:44 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:34:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:45.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:45.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:47.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:47.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:48 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:34:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:49.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:49.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:50 np0005539564 podman[222050]: 2025-11-29 07:34:50.549372395 +0000 UTC m=+0.101646315 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:34:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:51.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:34:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:51.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:34:52 np0005539564 podman[222071]: 2025-11-29 07:34:52.593401685 +0000 UTC m=+0.141639101 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 02:34:52 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:34:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:53.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:53.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:55.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:55.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:56 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:34:57 np0005539564 ceph-osd[79212]: osd.1 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.24128.0:994 9.12 9:4b3bdbb4:::obj_delete_at_hint.0000000074:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:34:57 np0005539564 ceph-osd[79212]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:34:57 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1[79208]: 2025-11-29T07:34:57.472+0000 7efc34e65640 -1 osd.1 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.24128.0:994 9.12 9:4b3bdbb4:::obj_delete_at_hint.0000000074:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:34:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 02:34:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:57.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 02:34:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:57.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:58 np0005539564 ceph-osd[79212]: osd.1 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.24128.0:994 9.12 9:4b3bdbb4:::obj_delete_at_hint.0000000074:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:34:58 np0005539564 ceph-osd[79212]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:34:58 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1[79208]: 2025-11-29T07:34:58.521+0000 7efc34e65640 -1 osd.1 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.24128.0:994 9.12 9:4b3bdbb4:::obj_delete_at_hint.0000000074:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:34:59 np0005539564 ceph-osd[79212]: osd.1 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.24128.0:994 9.12 9:4b3bdbb4:::obj_delete_at_hint.0000000074:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:34:59 np0005539564 ceph-osd[79212]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:34:59 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1[79208]: 2025-11-29T07:34:59.472+0000 7efc34e65640 -1 osd.1 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.24128.0:994 9.12 9:4b3bdbb4:::obj_delete_at_hint.0000000074:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:34:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:59.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:34:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:34:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:59.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:35:00 np0005539564 ceph-osd[79212]: osd.1 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.24128.0:994 9.12 9:4b3bdbb4:::obj_delete_at_hint.0000000074:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:00 np0005539564 ceph-osd[79212]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:35:00 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1[79208]: 2025-11-29T07:35:00.503+0000 7efc34e65640 -1 osd.1 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.24128.0:994 9.12 9:4b3bdbb4:::obj_delete_at_hint.0000000074:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:00 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:35:01 np0005539564 ceph-osd[79212]: osd.1 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.24128.0:994 9.12 9:4b3bdbb4:::obj_delete_at_hint.0000000074:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:01 np0005539564 ceph-osd[79212]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:35:01 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1[79208]: 2025-11-29T07:35:01.511+0000 7efc34e65640 -1 osd.1 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.24128.0:994 9.12 9:4b3bdbb4:::obj_delete_at_hint.0000000074:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:01.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:01.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:02 np0005539564 ceph-osd[79212]: osd.1 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.24128.0:994 9.12 9:4b3bdbb4:::obj_delete_at_hint.0000000074:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:02 np0005539564 ceph-osd[79212]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:35:02 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1[79208]: 2025-11-29T07:35:02.555+0000 7efc34e65640 -1 osd.1 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.24128.0:994 9.12 9:4b3bdbb4:::obj_delete_at_hint.0000000074:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:03 np0005539564 ceph-osd[79212]: osd.1 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.24128.0:994 9.12 9:4b3bdbb4:::obj_delete_at_hint.0000000074:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:03 np0005539564 ceph-osd[79212]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:35:03 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1[79208]: 2025-11-29T07:35:03.557+0000 7efc34e65640 -1 osd.1 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.24128.0:994 9.12 9:4b3bdbb4:::obj_delete_at_hint.0000000074:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:35:03.685 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:35:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:35:03.685 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:35:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:35:03.686 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:35:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:03.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:03.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:04 np0005539564 ceph-osd[79212]: osd.1 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.24128.0:994 9.12 9:4b3bdbb4:::obj_delete_at_hint.0000000074:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:04 np0005539564 ceph-osd[79212]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:35:04 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1[79208]: 2025-11-29T07:35:04.554+0000 7efc34e65640 -1 osd.1 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.24128.0:994 9.12 9:4b3bdbb4:::obj_delete_at_hint.0000000074:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:04 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:35:05 np0005539564 ceph-osd[79212]: osd.1 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.24128.0:994 9.12 9:4b3bdbb4:::obj_delete_at_hint.0000000074:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:05 np0005539564 ceph-osd[79212]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:35:05 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1[79208]: 2025-11-29T07:35:05.538+0000 7efc34e65640 -1 osd.1 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.24128.0:994 9.12 9:4b3bdbb4:::obj_delete_at_hint.0000000074:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:05.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:05.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:35:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 get_health_metrics reporting 8 slow ops, oldest is mgrbeacon mgr.compute-1.jjnjed(38a37ed2-442a-5e0d-a69a-881fdd186450,24104, , 0)
Nov 29 02:35:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).paxos(paxos updating c 1005..1747) lease_timeout -- calling new election
Nov 29 02:35:06 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-1[81765]: 2025-11-29T07:35:06.278+0000 7f9741fa1640 -1 mon.compute-1@2(peon) e3 get_health_metrics reporting 8 slow ops, oldest is mgrbeacon mgr.compute-1.jjnjed(38a37ed2-442a-5e0d-a69a-881fdd186450,24104, , 0)
Nov 29 02:35:06 np0005539564 ceph-mon[81769]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Nov 29 02:35:06 np0005539564 ceph-mon[81769]: paxos.2).electionLogic(36) init, last seen epoch 36
Nov 29 02:35:06 np0005539564 ceph-osd[79212]: osd.1 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.24128.0:994 9.12 9:4b3bdbb4:::obj_delete_at_hint.0000000074:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:06 np0005539564 ceph-osd[79212]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:35:06 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-1[79208]: 2025-11-29T07:35:06.520+0000 7efc34e65640 -1 osd.1 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.24128.0:994 9.12 9:4b3bdbb4:::obj_delete_at_hint.0000000074:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 39.382682800s, txc = 0x55ba504cef00
Nov 29 02:35:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 39.382675171s
Nov 29 02:35:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 39.382675171s
Nov 29 02:35:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:35:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:35:07 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 39.776115417s, txc = 0x55ba503ea300
Nov 29 02:35:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:35:07 np0005539564 python3.9[222225]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:35:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:07.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:07.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:08 np0005539564 python3.9[222377]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:35:09 np0005539564 python3.9[222500]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764401707.8995059-3761-166399684622827/.source _original_basename=.is7zmxzt follow=False checksum=8434a4dd1f6c458e4b7cf759342b55dfcd51069b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 29 02:35:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:35:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:09.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:35:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:09.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:10 np0005539564 python3.9[222652]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:35:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 get_health_metrics reporting 6 slow ops, oldest is mdsbeacon(24137/cephfs.compute-1.oeerwd up:standby fs=cephfs seq=325 v9)
Nov 29 02:35:11 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-1[81765]: 2025-11-29T07:35:11.278+0000 7f9741fa1640 -1 mon.compute-1@2(peon) e3 get_health_metrics reporting 6 slow ops, oldest is mdsbeacon(24137/cephfs.compute-1.oeerwd up:standby fs=cephfs seq=325 v9)
Nov 29 02:35:11 np0005539564 python3.9[222804]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:35:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:11.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:11 np0005539564 python3.9[222925]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401710.7078664-3839-42323283478295/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:35:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:11.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:12 np0005539564 python3.9[223075]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:35:12 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:35:13 np0005539564 python3.9[223196]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401712.1712892-3883-214573847310389/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:35:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:13.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:13.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:14 np0005539564 podman[223320]: 2025-11-29 07:35:14.533160307 +0000 UTC m=+0.111701428 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:35:14 np0005539564 python3.9[223363]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 29 02:35:15 np0005539564 python3.9[223518]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 02:35:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:15.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:15.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 get_health_metrics reporting 7 slow ops, oldest is mdsbeacon(24137/cephfs.compute-1.oeerwd up:standby fs=cephfs seq=325 v9)
Nov 29 02:35:16 np0005539564 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-1[81765]: 2025-11-29T07:35:16.278+0000 7f9741fa1640 -1 mon.compute-1@2(peon) e3 get_health_metrics reporting 7 slow ops, oldest is mdsbeacon(24137/cephfs.compute-1.oeerwd up:standby fs=cephfs seq=325 v9)
Nov 29 02:35:16 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:35:16 np0005539564 python3[223670]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 02:35:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:17.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:17.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:35:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:19.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:19.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:35:20 np0005539564 ceph-mon[81769]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:20 np0005539564 ceph-mon[81769]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:35:20 np0005539564 ceph-mon[81769]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:20 np0005539564 ceph-mon[81769]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:35:20 np0005539564 ceph-mon[81769]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:20 np0005539564 ceph-mon[81769]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:35:20 np0005539564 ceph-mon[81769]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:20 np0005539564 ceph-mon[81769]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:20 np0005539564 ceph-mon[81769]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:35:20 np0005539564 ceph-mon[81769]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:20 np0005539564 ceph-mon[81769]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:35:20 np0005539564 ceph-mon[81769]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:20 np0005539564 ceph-mon[81769]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:35:20 np0005539564 ceph-mon[81769]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:20 np0005539564 ceph-mon[81769]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:35:20 np0005539564 ceph-mon[81769]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:20 np0005539564 ceph-mon[81769]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:35:20 np0005539564 ceph-mon[81769]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:20 np0005539564 ceph-mon[81769]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:35:20 np0005539564 ceph-mon[81769]: mon.compute-1 calling monitor election
Nov 29 02:35:20 np0005539564 ceph-mon[81769]: mon.compute-2 calling monitor election
Nov 29 02:35:20 np0005539564 ceph-mon[81769]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:20 np0005539564 ceph-mon[81769]: mon.compute-0 calling monitor election
Nov 29 02:35:20 np0005539564 ceph-mon[81769]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:35:20 np0005539564 ceph-mon[81769]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:35:20 np0005539564 ceph-mon[81769]: overall HEALTH_OK
Nov 29 02:35:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:35:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:21.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:21.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:23.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:23.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:25.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:25.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:27.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:27.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:28 np0005539564 ceph-mon[81769]: Health check failed: 19 slow ops, oldest one blocked for 53 sec, daemons [mon.compute-0,mon.compute-1,mon.compute-2] have slow ops. (SLOW_OPS)
Nov 29 02:35:28 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:35:28 np0005539564 podman[223772]: 2025-11-29 07:35:28.551546502 +0000 UTC m=+7.713641969 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 02:35:28 np0005539564 podman[223786]: 2025-11-29 07:35:28.58781274 +0000 UTC m=+5.154247130 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 29 02:35:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:35:28 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:35:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:29.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:29.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:31.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:35:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:31.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:35:32 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:35:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:33.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:33.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:35.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:35.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:36 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:35:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:37.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:37.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:39.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:35:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).paxos(paxos updating c 1005..1751) lease_timeout -- calling new election
Nov 29 02:35:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:39.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:39 np0005539564 ceph-mon[81769]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Nov 29 02:35:39 np0005539564 ceph-mon[81769]: paxos.2).electionLogic(38) init, last seen epoch 38
Nov 29 02:35:40 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:35:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:35:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:35:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:41.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:35:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:41.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:35:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:43.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:43.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:44 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:35:45 np0005539564 podman[223844]: 2025-11-29 07:35:45.731279945 +0000 UTC m=+0.138499993 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:35:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:45.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:35:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:45.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:35:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:47.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:47.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:48 np0005539564 podman[223684]: 2025-11-29 07:35:48.549697814 +0000 UTC m=+31.534394227 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 02:35:48 np0005539564 podman[223888]: 2025-11-29 07:35:48.671180812 +0000 UTC m=+0.019965425 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 02:35:48 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:35:49 np0005539564 ceph-mon[81769]: paxos.2).electionLogic(39) init, last seen epoch 39, mid-election, bumping
Nov 29 02:35:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:49.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:49.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:50 np0005539564 podman[223888]: 2025-11-29 07:35:50.281156663 +0000 UTC m=+1.629941246 container create b771ddedd846d1db1e165cb7f6ba017896fc29bb4e81ce157a500d503051a941 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:35:50 np0005539564 python3[223670]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 29 02:35:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:35:50 np0005539564 ceph-mgr[82125]: client.0 ms_handle_reset on v2:192.168.122.101:3300/0
Nov 29 02:35:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:35:50 np0005539564 ceph-mgr[82125]: client.0 ms_handle_reset on v2:192.168.122.101:3300/0
Nov 29 02:35:50 np0005539564 ceph-mgr[82125]: ms_deliver_dispatch: unhandled message 0x5596bdcb91e0 mon_map magic: 0 v1 from mon.2 v2:192.168.122.101:3300/0
Nov 29 02:35:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 handle_timecheck drop unexpected msg
Nov 29 02:35:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:51.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:51 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:35:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:51.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:35:52 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:35:53 np0005539564 ceph-mon[81769]: mon.compute-2 calling monitor election
Nov 29 02:35:53 np0005539564 ceph-mon[81769]: mon.compute-1 calling monitor election
Nov 29 02:35:53 np0005539564 ceph-mon[81769]: mon.compute-0 calling monitor election
Nov 29 02:35:53 np0005539564 ceph-mon[81769]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:35:53 np0005539564 ceph-mon[81769]: Health detail: HEALTH_WARN 19 slow ops, oldest one blocked for 53 sec, daemons [mon.compute-0,mon.compute-1,mon.compute-2] have slow ops.
Nov 29 02:35:53 np0005539564 ceph-mon[81769]: [WRN] SLOW_OPS: 19 slow ops, oldest one blocked for 53 sec, daemons [mon.compute-0,mon.compute-1,mon.compute-2] have slow ops.
Nov 29 02:35:53 np0005539564 ceph-mon[81769]: mon.compute-0 calling monitor election
Nov 29 02:35:53 np0005539564 ceph-mon[81769]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:35:53 np0005539564 ceph-mon[81769]: Health detail: HEALTH_WARN 23 slow ops, oldest one blocked for 58 sec, daemons [mon.compute-0,mon.compute-1,mon.compute-2] have slow ops.
Nov 29 02:35:53 np0005539564 ceph-mon[81769]: [WRN] SLOW_OPS: 23 slow ops, oldest one blocked for 58 sec, daemons [mon.compute-0,mon.compute-1,mon.compute-2] have slow ops.
Nov 29 02:35:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:53.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:53.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:54 np0005539564 python3.9[224079]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:35:55 np0005539564 ceph-mon[81769]: Health check cleared: SLOW_OPS (was: 23 slow ops, oldest one blocked for 58 sec, daemons [mon.compute-0,mon.compute-1,mon.compute-2] have slow ops.)
Nov 29 02:35:55 np0005539564 ceph-mon[81769]: Cluster is now healthy
Nov 29 02:35:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:35:55 np0005539564 python3.9[224233]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 29 02:35:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:55.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:56.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:56 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Nov 29 02:35:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:35:56.256202) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:35:56 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Nov 29 02:35:56 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401756256241, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 2192, "num_deletes": 251, "total_data_size": 5360931, "memory_usage": 5443056, "flush_reason": "Manual Compaction"}
Nov 29 02:35:56 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Nov 29 02:35:56 np0005539564 python3.9[224385]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 02:35:56 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401756822003, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 3461324, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15694, "largest_seqno": 17881, "table_properties": {"data_size": 3452285, "index_size": 5469, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20879, "raw_average_key_size": 21, "raw_value_size": 3433360, "raw_average_value_size": 3510, "num_data_blocks": 241, "num_entries": 978, "num_filter_entries": 978, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401377, "oldest_key_time": 1764401377, "file_creation_time": 1764401756, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:35:56 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 565863 microseconds, and 9627 cpu microseconds.
Nov 29 02:35:56 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:35:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:35:56.822062) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 3461324 bytes OK
Nov 29 02:35:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:35:56.822088) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Nov 29 02:35:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:35:56.842684) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Nov 29 02:35:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:35:56.842740) EVENT_LOG_v1 {"time_micros": 1764401756842730, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:35:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:35:56.842764) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:35:56 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 5351048, prev total WAL file size 5367213, number of live WAL files 2.
Nov 29 02:35:56 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:35:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:35:56.844415) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 29 02:35:56 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:35:56 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(3380KB)], [30(7386KB)]
Nov 29 02:35:56 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401756844526, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 11025150, "oldest_snapshot_seqno": -1}
Nov 29 02:35:57 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 4614 keys, 8884634 bytes, temperature: kUnknown
Nov 29 02:35:57 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401757010062, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 8884634, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8852152, "index_size": 19828, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11589, "raw_key_size": 115208, "raw_average_key_size": 24, "raw_value_size": 8767027, "raw_average_value_size": 1900, "num_data_blocks": 828, "num_entries": 4614, "num_filter_entries": 4614, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764401756, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:35:57 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:35:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:35:57.010365) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 8884634 bytes
Nov 29 02:35:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:35:57.016216) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 66.6 rd, 53.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.2 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(5.8) write-amplify(2.6) OK, records in: 5136, records dropped: 522 output_compression: NoCompression
Nov 29 02:35:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:35:57.016275) EVENT_LOG_v1 {"time_micros": 1764401757016233, "job": 16, "event": "compaction_finished", "compaction_time_micros": 165618, "compaction_time_cpu_micros": 34304, "output_level": 6, "num_output_files": 1, "total_output_size": 8884634, "num_input_records": 5136, "num_output_records": 4614, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:35:57 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:35:57 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401757017246, "job": 16, "event": "table_file_deletion", "file_number": 32}
Nov 29 02:35:57 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:35:57 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401757019208, "job": 16, "event": "table_file_deletion", "file_number": 30}
Nov 29 02:35:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:35:56.844209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:35:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:35:57.019272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:35:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:35:57.019279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:35:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:35:57.019281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:35:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:35:57.019283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:35:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:35:57.019285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:35:57 np0005539564 python3[224537]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 02:35:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:57.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:58.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:58 np0005539564 podman[224576]: 2025-11-29 07:35:57.94795969 +0000 UTC m=+0.028704633 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 02:35:58 np0005539564 podman[224576]: 2025-11-29 07:35:58.544235776 +0000 UTC m=+0.624980749 container create 966a1301fe3e4d853c7e974259c796212c0bcce82933ac15f69e5fd3388d07ef (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:35:58 np0005539564 python3[224537]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Nov 29 02:35:59 np0005539564 python3.9[224766]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:35:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:35:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:59.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:00.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:36:00 np0005539564 python3.9[224920]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:36:01 np0005539564 python3.9[225071]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401760.4985435-4159-174925470617236/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:36:01 np0005539564 python3.9[225147]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:36:01 np0005539564 systemd[1]: Reloading.
Nov 29 02:36:01 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:01 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:01.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:02.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:02 np0005539564 python3.9[225259]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:36:02 np0005539564 systemd[1]: Reloading.
Nov 29 02:36:02 np0005539564 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:02 np0005539564 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:03 np0005539564 systemd[1]: Starting nova_compute container...
Nov 29 02:36:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:36:03.685 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:36:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:36:03.686 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:36:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:36:03.686 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:36:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:03.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:04.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:04 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:36:04 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9577a6a4520786ef8d00b55e617f34ed0a1ef0610844a0448149654318d355d/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:04 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9577a6a4520786ef8d00b55e617f34ed0a1ef0610844a0448149654318d355d/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:04 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9577a6a4520786ef8d00b55e617f34ed0a1ef0610844a0448149654318d355d/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:04 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9577a6a4520786ef8d00b55e617f34ed0a1ef0610844a0448149654318d355d/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:04 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9577a6a4520786ef8d00b55e617f34ed0a1ef0610844a0448149654318d355d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:04 np0005539564 podman[225300]: 2025-11-29 07:36:04.371047766 +0000 UTC m=+0.987820979 container init 966a1301fe3e4d853c7e974259c796212c0bcce82933ac15f69e5fd3388d07ef (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:36:04 np0005539564 podman[225300]: 2025-11-29 07:36:04.382362214 +0000 UTC m=+0.999135397 container start 966a1301fe3e4d853c7e974259c796212c0bcce82933ac15f69e5fd3388d07ef (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Nov 29 02:36:04 np0005539564 nova_compute[225316]: + sudo -E kolla_set_configs
Nov 29 02:36:04 np0005539564 podman[225300]: nova_compute
Nov 29 02:36:04 np0005539564 systemd[1]: Started nova_compute container.
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Validating config file
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Copying service configuration files
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Deleting /etc/ceph
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Creating directory /etc/ceph
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Setting permission for /etc/ceph
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Writing out command to execute
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:36:04 np0005539564 nova_compute[225316]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 02:36:04 np0005539564 nova_compute[225316]: ++ cat /run_command
Nov 29 02:36:04 np0005539564 nova_compute[225316]: + CMD=nova-compute
Nov 29 02:36:04 np0005539564 nova_compute[225316]: + ARGS=
Nov 29 02:36:04 np0005539564 nova_compute[225316]: + sudo kolla_copy_cacerts
Nov 29 02:36:04 np0005539564 nova_compute[225316]: + [[ ! -n '' ]]
Nov 29 02:36:04 np0005539564 nova_compute[225316]: + . kolla_extend_start
Nov 29 02:36:04 np0005539564 nova_compute[225316]: Running command: 'nova-compute'
Nov 29 02:36:04 np0005539564 nova_compute[225316]: + echo 'Running command: '\''nova-compute'\'''
Nov 29 02:36:04 np0005539564 nova_compute[225316]: + umask 0022
Nov 29 02:36:04 np0005539564 nova_compute[225316]: + exec nova-compute
Nov 29 02:36:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:36:05 np0005539564 python3.9[225477]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:36:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:05.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:06.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:06 np0005539564 python3.9[225628]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:36:06 np0005539564 nova_compute[225316]: 2025-11-29 07:36:06.970 225320 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 02:36:06 np0005539564 nova_compute[225316]: 2025-11-29 07:36:06.971 225320 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 02:36:06 np0005539564 nova_compute[225316]: 2025-11-29 07:36:06.971 225320 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 02:36:06 np0005539564 nova_compute[225316]: 2025-11-29 07:36:06.972 225320 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:36:07.092631) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401767092709, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 354, "num_deletes": 255, "total_data_size": 309981, "memory_usage": 318448, "flush_reason": "Manual Compaction"}
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401767127712, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 204831, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17886, "largest_seqno": 18235, "table_properties": {"data_size": 202670, "index_size": 325, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 4808, "raw_average_key_size": 16, "raw_value_size": 198473, "raw_average_value_size": 677, "num_data_blocks": 15, "num_entries": 293, "num_filter_entries": 293, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401756, "oldest_key_time": 1764401756, "file_creation_time": 1764401767, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 35113 microseconds, and 1574 cpu microseconds.
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:36:07 np0005539564 nova_compute[225316]: 2025-11-29 07:36:07.130 225320 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:36:07.127753) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 204831 bytes OK
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:36:07.127774) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:36:07.135942) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:36:07.135989) EVENT_LOG_v1 {"time_micros": 1764401767135978, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:36:07.136014) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 307560, prev total WAL file size 307560, number of live WAL files 2.
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:36:07.136626) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(200KB)], [33(8676KB)]
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401767136689, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 9089465, "oldest_snapshot_seqno": -1}
Nov 29 02:36:07 np0005539564 nova_compute[225316]: 2025-11-29 07:36:07.148 225320 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:36:07 np0005539564 nova_compute[225316]: 2025-11-29 07:36:07.149 225320 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 4389 keys, 8745333 bytes, temperature: kUnknown
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401767343400, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 8745333, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8714437, "index_size": 18818, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11013, "raw_key_size": 111817, "raw_average_key_size": 25, "raw_value_size": 8633210, "raw_average_value_size": 1967, "num_data_blocks": 772, "num_entries": 4389, "num_filter_entries": 4389, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764401767, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:36:07.343705) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 8745333 bytes
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:36:07.350504) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 43.9 rd, 42.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 8.5 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(87.1) write-amplify(42.7) OK, records in: 4907, records dropped: 518 output_compression: NoCompression
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:36:07.350545) EVENT_LOG_v1 {"time_micros": 1764401767350530, "job": 18, "event": "compaction_finished", "compaction_time_micros": 206822, "compaction_time_cpu_micros": 21472, "output_level": 6, "num_output_files": 1, "total_output_size": 8745333, "num_input_records": 4907, "num_output_records": 4389, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401767350769, "job": 18, "event": "table_file_deletion", "file_number": 35}
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401767352022, "job": 18, "event": "table_file_deletion", "file_number": 33}
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:36:07.136477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:36:07.352107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:36:07.352112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:36:07.352114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:36:07.352115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:36:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:36:07.352117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:36:07 np0005539564 python3.9[225782]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:36:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:36:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:07.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:36:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:08.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.197 225320 INFO nova.virt.driver [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.309 225320 INFO nova.compute.provider_config [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.392 225320 DEBUG oslo_concurrency.lockutils [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.392 225320 DEBUG oslo_concurrency.lockutils [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.393 225320 DEBUG oslo_concurrency.lockutils [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.393 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.394 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.394 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.394 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.394 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.395 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.395 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.395 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.395 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.396 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.396 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.396 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.396 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.397 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.397 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.397 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.397 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.397 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.398 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.398 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.398 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.398 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.398 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.399 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.399 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.399 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.400 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.400 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.400 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.400 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.401 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.401 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.401 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.401 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.402 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.402 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.402 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.402 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.402 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.403 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.403 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.403 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.403 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.403 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.403 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.404 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.404 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.404 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.404 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.404 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.404 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.405 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.405 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.405 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.405 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.405 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.406 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.406 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.406 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.406 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.406 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.407 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.407 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.407 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.407 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.407 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.408 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.408 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.408 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.408 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.408 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.409 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.409 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.409 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.409 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.410 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.410 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.410 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.410 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.411 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.411 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.411 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.411 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.411 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.412 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.412 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.412 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.412 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.412 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.413 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.413 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.413 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.413 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.414 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.414 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.414 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.414 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.414 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.415 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.415 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.415 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.415 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.415 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.416 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.416 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.416 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.416 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.416 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.417 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.417 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.417 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.417 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.417 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.418 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.418 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.418 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.418 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.419 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.419 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.419 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.419 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.419 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.420 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.420 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.420 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.420 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.420 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.420 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.421 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.421 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.421 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.421 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.421 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.421 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.422 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.422 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.422 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.422 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.422 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.423 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.423 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.423 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.423 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.423 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.423 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.424 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.424 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.424 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.426 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.426 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.426 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.426 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.427 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.427 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.427 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.427 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.427 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.427 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.428 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.428 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.428 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.428 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.428 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.428 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.429 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.429 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.429 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.429 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.429 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.429 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.429 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.430 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.430 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.430 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.430 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.430 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.430 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.430 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.431 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.431 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.431 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.431 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.431 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.431 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.432 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.432 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.432 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.432 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.432 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.432 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.432 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.433 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.433 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.433 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.433 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.433 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.433 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.433 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.434 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.434 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.434 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.434 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.434 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.434 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.434 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.435 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.435 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.435 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.435 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.435 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.435 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.436 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.436 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.436 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.436 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.436 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.436 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.436 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.437 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.437 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.437 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.437 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.437 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.437 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.437 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.438 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.438 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.438 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.438 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.438 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.438 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.438 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.439 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.439 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.439 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.439 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.439 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.439 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.440 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.440 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.440 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.440 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.440 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.440 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.440 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.441 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.441 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.441 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.441 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.441 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.441 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.441 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.441 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.442 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.442 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.442 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.442 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.442 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.442 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.443 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.443 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.443 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.443 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.443 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.443 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.444 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.444 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.444 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.444 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.444 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.444 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.444 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.444 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.445 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.445 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.445 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.445 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.445 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.445 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.445 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.446 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.446 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.446 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.446 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.446 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.446 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.446 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.447 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.447 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.447 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.447 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.447 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.447 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.447 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.448 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.448 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.448 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.448 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.448 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.448 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.448 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.448 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.449 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.449 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.449 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.449 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.449 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.449 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.449 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.450 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.450 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.450 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.450 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.450 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.450 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.450 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.451 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.451 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.451 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.451 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.451 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.451 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.451 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.452 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.452 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.452 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.452 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.452 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.452 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.452 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.453 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.453 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.453 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.453 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.453 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.453 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.453 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.454 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.454 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.454 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.454 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.454 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.454 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.455 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.455 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.455 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.455 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.455 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.455 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.455 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.456 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.456 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.456 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.456 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.456 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.456 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.457 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.457 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.457 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.457 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.457 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.457 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.457 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.458 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.458 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.458 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.458 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.458 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.458 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.458 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.459 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.459 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.459 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.459 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.459 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.459 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.459 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.460 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.460 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.460 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.460 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.460 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.460 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.461 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.461 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.461 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.461 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.461 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.462 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.462 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.462 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.462 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.462 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.462 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.462 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.463 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.463 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.463 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.463 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.463 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.463 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.463 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.464 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.464 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.464 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.464 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.464 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.464 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.464 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.465 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.465 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.465 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.465 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.465 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.465 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.465 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.465 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.466 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.466 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.466 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.466 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.466 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.466 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.466 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.467 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.467 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.467 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.467 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.467 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.467 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.467 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.468 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.468 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.468 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.468 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.468 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.468 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.468 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.469 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.469 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.469 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.469 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.469 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.469 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.469 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.470 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.470 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.470 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.470 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.470 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.470 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.471 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.471 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.471 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.471 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.471 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.471 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.471 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.472 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.472 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.472 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.472 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.472 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.472 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.472 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.472 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.473 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.473 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.473 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.473 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.473 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.474 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.474 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.474 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.474 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.474 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.474 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.474 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.475 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.475 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.475 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.475 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.475 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.475 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.475 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.476 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.476 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.476 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.476 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.476 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.476 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.477 225320 WARNING oslo_config.cfg [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 29 02:36:08 np0005539564 nova_compute[225316]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 29 02:36:08 np0005539564 nova_compute[225316]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 29 02:36:08 np0005539564 nova_compute[225316]: and ``live_migration_inbound_addr`` respectively.
Nov 29 02:36:08 np0005539564 nova_compute[225316]: ).  Its value may be silently ignored in the future.#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.477 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.477 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.477 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.477 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.477 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.478 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.478 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.478 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.478 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.478 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.478 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.479 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.479 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.479 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.479 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.479 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.479 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.479 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.480 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.rbd_secret_uuid        = 38a37ed2-442a-5e0d-a69a-881fdd186450 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.480 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.480 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.480 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.480 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.480 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.480 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.481 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.481 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.481 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.481 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.481 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.481 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.481 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.482 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.482 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.482 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.482 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.482 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.482 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.483 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.483 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.483 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.483 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.483 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.483 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.484 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.484 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.484 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.484 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.484 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.484 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.484 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.485 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.485 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.485 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.485 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.485 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.485 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.486 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.486 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.486 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.486 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.486 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.486 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.486 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.487 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.487 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.487 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.487 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.487 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.487 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.488 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.488 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.488 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.488 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.488 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.488 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.488 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.489 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.489 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.489 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.489 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.489 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.489 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.489 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.490 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.490 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.490 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.490 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.490 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.491 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.491 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.491 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.491 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.491 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.491 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.491 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.492 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.492 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.492 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.492 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.492 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.492 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.492 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.493 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.493 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.493 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.493 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.493 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.493 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.494 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.494 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.494 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.494 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.494 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.494 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.495 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.495 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.495 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.495 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.495 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.495 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.495 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.496 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.496 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.496 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.496 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.496 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.496 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.496 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.497 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.497 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.497 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.497 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.497 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.497 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.497 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.498 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.498 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.498 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.498 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.498 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.498 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.499 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.499 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.499 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.499 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.499 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.499 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.499 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.500 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.500 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.500 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.500 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.500 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.500 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.500 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.501 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.501 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.501 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.501 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.501 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.501 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.502 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.502 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.502 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.502 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.502 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.502 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.502 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.502 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.503 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.503 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.503 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.503 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.503 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.503 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.503 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.504 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.504 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.504 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.504 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.504 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.504 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.505 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.505 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.505 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.505 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.505 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.506 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.506 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.506 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.506 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.506 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.506 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.507 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.507 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.507 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.507 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.508 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.508 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.508 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.508 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.508 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.508 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.509 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.509 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.509 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.509 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.509 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.509 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.510 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.510 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.510 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.510 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.510 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.511 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.511 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.511 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.511 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.511 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.511 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.512 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.512 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.512 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.512 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.512 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.513 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.513 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.513 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.513 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.513 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.513 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.514 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.514 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.514 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.514 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.514 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.514 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.515 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.515 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.515 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.515 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.515 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.516 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.516 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.516 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.516 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.516 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.517 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.517 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.517 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.517 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.518 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.518 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.518 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.518 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.518 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.519 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.519 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.519 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.519 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.519 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.520 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.520 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.520 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.520 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.520 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.520 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.521 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.521 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.521 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.521 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.521 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.522 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.522 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.522 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.522 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.522 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.523 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.523 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.523 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.523 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.523 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.523 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.524 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.524 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.524 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.524 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.524 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.525 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.525 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.525 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.525 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.525 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.526 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.526 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.526 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.526 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.526 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.526 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.527 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.527 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.527 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.527 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.527 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.528 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.528 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.528 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.528 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.528 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.529 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.529 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.529 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.529 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.529 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.529 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.530 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.530 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.530 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.530 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.531 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.531 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.531 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.531 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.531 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.531 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.532 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.532 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.532 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.532 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.533 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.533 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.533 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.533 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.533 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.533 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.534 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.534 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.534 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.534 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.534 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.535 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.535 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.535 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.535 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.535 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.536 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.536 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.536 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.536 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.536 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.536 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.537 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.537 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.537 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.537 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.537 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.538 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.538 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.538 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.538 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.538 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.538 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.539 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.539 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.539 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.539 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.539 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.540 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.540 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.540 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.540 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.540 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.540 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.541 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.541 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.541 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.541 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.541 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.542 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.542 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.542 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.542 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.542 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.542 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.543 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.543 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.543 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.543 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.543 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.544 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.544 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.544 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.544 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.544 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.544 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.545 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.545 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.545 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.545 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.545 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.545 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.545 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.546 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.546 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.546 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.546 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.546 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.546 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.546 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.547 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.547 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.547 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.547 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.547 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.547 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.547 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.548 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.548 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.548 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.548 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.548 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.548 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.548 225320 DEBUG oslo_service.service [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.550 225320 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.565 225320 DEBUG nova.virt.libvirt.host [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.566 225320 DEBUG nova.virt.libvirt.host [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.566 225320 DEBUG nova.virt.libvirt.host [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.567 225320 DEBUG nova.virt.libvirt.host [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 29 02:36:08 np0005539564 systemd[1]: Starting libvirt QEMU daemon...
Nov 29 02:36:08 np0005539564 systemd[1]: Started libvirt QEMU daemon.
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.653 225320 DEBUG nova.virt.libvirt.host [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fda186616a0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.655 225320 DEBUG nova.virt.libvirt.host [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fda186616a0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.656 225320 INFO nova.virt.libvirt.driver [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.678 225320 WARNING nova.virt.libvirt.driver [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] Cannot update service status on host "compute-1.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.#033[00m
Nov 29 02:36:08 np0005539564 nova_compute[225316]: 2025-11-29 07:36:08.679 225320 DEBUG nova.virt.libvirt.volume.mount [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 29 02:36:09 np0005539564 python3.9[225986]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 29 02:36:09 np0005539564 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:36:09 np0005539564 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:36:09 np0005539564 nova_compute[225316]: 2025-11-29 07:36:09.493 225320 INFO nova.virt.libvirt.host [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] Libvirt host capabilities <capabilities>
Nov 29 02:36:09 np0005539564 nova_compute[225316]: 
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <host>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <uuid>2e858761-3292-4a17-b38f-a169c3064289</uuid>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <cpu>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <arch>x86_64</arch>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model>EPYC-Rome-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <vendor>AMD</vendor>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <microcode version='16777317'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <signature family='23' model='49' stepping='0'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature name='x2apic'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature name='tsc-deadline'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature name='osxsave'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature name='hypervisor'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature name='tsc_adjust'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature name='spec-ctrl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature name='stibp'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature name='arch-capabilities'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature name='ssbd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature name='cmp_legacy'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature name='topoext'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature name='virt-ssbd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature name='lbrv'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature name='tsc-scale'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature name='vmcb-clean'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature name='pause-filter'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature name='pfthreshold'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature name='svme-addr-chk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature name='rdctl-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature name='skip-l1dfl-vmentry'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature name='mds-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature name='pschange-mc-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <pages unit='KiB' size='4'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <pages unit='KiB' size='2048'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <pages unit='KiB' size='1048576'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </cpu>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <power_management>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <suspend_mem/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </power_management>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <iommu support='no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <migration_features>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <live/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <uri_transports>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <uri_transport>tcp</uri_transport>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <uri_transport>rdma</uri_transport>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </uri_transports>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </migration_features>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <topology>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <cells num='1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <cell id='0'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:          <memory unit='KiB'>7864320</memory>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:          <pages unit='KiB' size='4'>1966080</pages>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:          <pages unit='KiB' size='2048'>0</pages>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:          <distances>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:            <sibling id='0' value='10'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:          </distances>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:          <cpus num='8'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:          </cpus>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        </cell>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </cells>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </topology>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <cache>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </cache>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <secmodel>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model>selinux</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <doi>0</doi>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </secmodel>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <secmodel>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model>dac</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <doi>0</doi>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </secmodel>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  </host>
Nov 29 02:36:09 np0005539564 nova_compute[225316]: 
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <guest>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <os_type>hvm</os_type>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <arch name='i686'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <wordsize>32</wordsize>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <domain type='qemu'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <domain type='kvm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </arch>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <features>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <pae/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <nonpae/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <acpi default='on' toggle='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <apic default='on' toggle='no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <cpuselection/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <deviceboot/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <disksnapshot default='on' toggle='no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <externalSnapshot/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </features>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  </guest>
Nov 29 02:36:09 np0005539564 nova_compute[225316]: 
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <guest>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <os_type>hvm</os_type>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <arch name='x86_64'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <wordsize>64</wordsize>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <domain type='qemu'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <domain type='kvm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </arch>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <features>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <acpi default='on' toggle='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <apic default='on' toggle='no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <cpuselection/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <deviceboot/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <disksnapshot default='on' toggle='no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <externalSnapshot/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </features>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  </guest>
Nov 29 02:36:09 np0005539564 nova_compute[225316]: 
Nov 29 02:36:09 np0005539564 nova_compute[225316]: </capabilities>
Nov 29 02:36:09 np0005539564 nova_compute[225316]: #033[00m
Nov 29 02:36:09 np0005539564 nova_compute[225316]: 2025-11-29 07:36:09.501 225320 DEBUG nova.virt.libvirt.host [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 29 02:36:09 np0005539564 nova_compute[225316]: 2025-11-29 07:36:09.521 225320 DEBUG nova.virt.libvirt.host [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 29 02:36:09 np0005539564 nova_compute[225316]: <domainCapabilities>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <domain>kvm</domain>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <arch>i686</arch>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <vcpu max='4096'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <iothreads supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <os supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <enum name='firmware'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <loader supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='type'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>rom</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>pflash</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='readonly'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>yes</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>no</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='secure'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>no</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </loader>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  </os>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <cpu>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>on</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>off</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </mode>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <mode name='maximum' supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='maximumMigratable'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>on</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>off</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </mode>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <mode name='host-model' supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <vendor>AMD</vendor>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='x2apic'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='stibp'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='ssbd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='succor'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='ibrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='lbrv'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </mode>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <mode name='custom' supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cooperlake'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cooperlake-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cooperlake-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Denverton'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Denverton-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Denverton-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Denverton-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Dhyana-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Genoa'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amd-psfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='auto-ibrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='stibp-always-on'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amd-psfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='auto-ibrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='stibp-always-on'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Milan'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amd-psfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='stibp-always-on'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Rome'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='GraniteRapids'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='prefetchiti'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='prefetchiti'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx10'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx10-128'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx10-256'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx10-512'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='prefetchiti'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-noTSX'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='IvyBridge'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='IvyBridge-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='IvyBridge-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='KnightsMill'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512er'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512pf'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='KnightsMill-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512er'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512pf'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Opteron_G4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Opteron_G5'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tbm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tbm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='SapphireRapids'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='SierraForest'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cmpccxadd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='SierraForest-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cmpccxadd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Snowridge'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Snowridge-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Snowridge-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Snowridge-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Snowridge-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='athlon'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='athlon-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='core2duo'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='core2duo-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='coreduo'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='coreduo-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='n270'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='n270-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='phenom'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='phenom-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </mode>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  </cpu>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <memoryBacking supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <enum name='sourceType'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <value>file</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <value>anonymous</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <value>memfd</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  </memoryBacking>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <devices>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <disk supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='diskDevice'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>disk</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>cdrom</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>floppy</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>lun</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='bus'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>fdc</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>scsi</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>usb</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>sata</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='model'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio-transitional</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio-non-transitional</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </disk>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <graphics supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='type'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>vnc</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>egl-headless</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>dbus</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </graphics>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <video supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='modelType'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>vga</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>cirrus</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>none</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>bochs</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>ramfb</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </video>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <hostdev supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='mode'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>subsystem</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='startupPolicy'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>default</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>mandatory</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>requisite</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>optional</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='subsysType'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>usb</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>pci</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>scsi</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='capsType'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='pciBackend'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </hostdev>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <rng supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='model'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio-transitional</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio-non-transitional</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='backendModel'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>random</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>egd</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>builtin</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </rng>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <filesystem supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='driverType'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>path</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>handle</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtiofs</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </filesystem>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <tpm supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='model'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>tpm-tis</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>tpm-crb</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='backendModel'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>emulator</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>external</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='backendVersion'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>2.0</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </tpm>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <redirdev supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='bus'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>usb</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </redirdev>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <channel supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='type'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>pty</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>unix</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </channel>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <crypto supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='model'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='type'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>qemu</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='backendModel'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>builtin</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </crypto>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <interface supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='backendType'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>default</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>passt</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </interface>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <panic supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='model'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>isa</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>hyperv</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </panic>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <console supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='type'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>null</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>vc</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>pty</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>dev</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>file</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>pipe</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>stdio</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>udp</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>tcp</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>unix</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>qemu-vdagent</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>dbus</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </console>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  </devices>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <features>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <gic supported='no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <vmcoreinfo supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <genid supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <backingStoreInput supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <backup supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <async-teardown supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <ps2 supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <sev supported='no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <sgx supported='no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <hyperv supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='features'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>relaxed</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>vapic</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>spinlocks</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>vpindex</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>runtime</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>synic</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>stimer</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>reset</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>vendor_id</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>frequencies</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>reenlightenment</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>tlbflush</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>ipi</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>avic</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>emsr_bitmap</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>xmm_input</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <defaults>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <spinlocks>4095</spinlocks>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <stimer_direct>on</stimer_direct>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </defaults>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </hyperv>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <launchSecurity supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='sectype'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>tdx</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </launchSecurity>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  </features>
Nov 29 02:36:09 np0005539564 nova_compute[225316]: </domainCapabilities>
Nov 29 02:36:09 np0005539564 nova_compute[225316]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 02:36:09 np0005539564 nova_compute[225316]: 2025-11-29 07:36:09.527 225320 DEBUG nova.virt.libvirt.host [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 29 02:36:09 np0005539564 nova_compute[225316]: <domainCapabilities>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <domain>kvm</domain>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <arch>i686</arch>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <vcpu max='240'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <iothreads supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <os supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <enum name='firmware'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <loader supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='type'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>rom</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>pflash</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='readonly'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>yes</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>no</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='secure'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>no</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </loader>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  </os>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <cpu>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>on</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>off</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </mode>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <mode name='maximum' supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='maximumMigratable'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>on</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>off</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </mode>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <mode name='host-model' supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <vendor>AMD</vendor>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='x2apic'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='stibp'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='ssbd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='succor'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='ibrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='lbrv'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </mode>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <mode name='custom' supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cooperlake'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cooperlake-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cooperlake-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Denverton'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Denverton-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Denverton-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Denverton-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Dhyana-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Genoa'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amd-psfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='auto-ibrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='stibp-always-on'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amd-psfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='auto-ibrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='stibp-always-on'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Milan'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amd-psfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='stibp-always-on'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Rome'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='GraniteRapids'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='prefetchiti'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='prefetchiti'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx10'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx10-128'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx10-256'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx10-512'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='prefetchiti'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-noTSX'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='IvyBridge'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='IvyBridge-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='IvyBridge-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='KnightsMill'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512er'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512pf'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='KnightsMill-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512er'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512pf'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Opteron_G4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Opteron_G5'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tbm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tbm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='SapphireRapids'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='SierraForest'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cmpccxadd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='SierraForest-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cmpccxadd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Snowridge'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Snowridge-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Snowridge-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Snowridge-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Snowridge-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='athlon'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='athlon-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='core2duo'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='core2duo-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='coreduo'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='coreduo-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='n270'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='n270-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='phenom'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='phenom-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </mode>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  </cpu>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <memoryBacking supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <enum name='sourceType'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <value>file</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <value>anonymous</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <value>memfd</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  </memoryBacking>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <devices>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <disk supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='diskDevice'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>disk</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>cdrom</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>floppy</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>lun</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='bus'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>ide</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>fdc</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>scsi</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>usb</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>sata</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='model'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio-transitional</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio-non-transitional</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </disk>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <graphics supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='type'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>vnc</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>egl-headless</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>dbus</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </graphics>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <video supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='modelType'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>vga</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>cirrus</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>none</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>bochs</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>ramfb</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </video>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <hostdev supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='mode'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>subsystem</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='startupPolicy'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>default</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>mandatory</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>requisite</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>optional</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='subsysType'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>usb</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>pci</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>scsi</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='capsType'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='pciBackend'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </hostdev>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <rng supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='model'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio-transitional</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio-non-transitional</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='backendModel'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>random</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>egd</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>builtin</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </rng>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <filesystem supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='driverType'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>path</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>handle</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtiofs</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </filesystem>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <tpm supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='model'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>tpm-tis</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>tpm-crb</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='backendModel'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>emulator</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>external</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='backendVersion'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>2.0</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </tpm>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <redirdev supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='bus'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>usb</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </redirdev>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <channel supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='type'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>pty</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>unix</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </channel>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <crypto supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='model'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='type'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>qemu</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='backendModel'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>builtin</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </crypto>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <interface supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='backendType'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>default</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>passt</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </interface>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <panic supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='model'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>isa</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>hyperv</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </panic>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <console supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='type'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>null</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>vc</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>pty</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>dev</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>file</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>pipe</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>stdio</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>udp</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>tcp</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>unix</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>qemu-vdagent</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>dbus</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </console>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  </devices>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <features>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <gic supported='no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <vmcoreinfo supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <genid supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <backingStoreInput supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <backup supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <async-teardown supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <ps2 supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <sev supported='no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <sgx supported='no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <hyperv supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='features'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>relaxed</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>vapic</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>spinlocks</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>vpindex</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>runtime</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>synic</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>stimer</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>reset</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>vendor_id</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>frequencies</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>reenlightenment</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>tlbflush</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>ipi</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>avic</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>emsr_bitmap</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>xmm_input</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <defaults>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <spinlocks>4095</spinlocks>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <stimer_direct>on</stimer_direct>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </defaults>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </hyperv>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <launchSecurity supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='sectype'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>tdx</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </launchSecurity>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  </features>
Nov 29 02:36:09 np0005539564 nova_compute[225316]: </domainCapabilities>
Nov 29 02:36:09 np0005539564 nova_compute[225316]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 29 02:36:09 np0005539564 nova_compute[225316]: 2025-11-29 07:36:09.561 225320 DEBUG nova.virt.libvirt.host [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 29 02:36:09 np0005539564 nova_compute[225316]: 2025-11-29 07:36:09.565 225320 DEBUG nova.virt.libvirt.host [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 29 02:36:09 np0005539564 nova_compute[225316]: <domainCapabilities>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <domain>kvm</domain>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <arch>x86_64</arch>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <vcpu max='4096'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <iothreads supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <os supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <enum name='firmware'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <value>efi</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <loader supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='type'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>rom</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>pflash</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='readonly'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>yes</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>no</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='secure'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>yes</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>no</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </loader>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  </os>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <cpu>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>on</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>off</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </mode>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <mode name='maximum' supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='maximumMigratable'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>on</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>off</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </mode>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <mode name='host-model' supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <vendor>AMD</vendor>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='x2apic'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='stibp'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='ssbd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='succor'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='ibrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='lbrv'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </mode>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <mode name='custom' supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cooperlake'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cooperlake-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cooperlake-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Denverton'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Denverton-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Denverton-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Denverton-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Dhyana-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Genoa'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amd-psfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='auto-ibrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='stibp-always-on'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amd-psfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='auto-ibrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='stibp-always-on'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Milan'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amd-psfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='stibp-always-on'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Rome'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='GraniteRapids'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='prefetchiti'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='prefetchiti'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx10'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx10-128'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx10-256'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx10-512'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='prefetchiti'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-noTSX'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='IvyBridge'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='IvyBridge-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='IvyBridge-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='KnightsMill'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512er'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512pf'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='KnightsMill-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512er'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512pf'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Opteron_G4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Opteron_G5'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tbm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tbm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='SapphireRapids'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='SierraForest'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cmpccxadd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='SierraForest-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cmpccxadd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Snowridge'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Snowridge-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Snowridge-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Snowridge-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Snowridge-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='athlon'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='athlon-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='core2duo'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='core2duo-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='coreduo'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='coreduo-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='n270'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='n270-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='phenom'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='phenom-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </mode>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  </cpu>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <memoryBacking supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <enum name='sourceType'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <value>file</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <value>anonymous</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <value>memfd</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  </memoryBacking>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <devices>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <disk supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='diskDevice'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>disk</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>cdrom</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>floppy</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>lun</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='bus'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>fdc</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>scsi</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>usb</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>sata</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='model'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio-transitional</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio-non-transitional</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </disk>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <graphics supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='type'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>vnc</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>egl-headless</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>dbus</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </graphics>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <video supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='modelType'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>vga</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>cirrus</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>none</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>bochs</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>ramfb</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </video>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <hostdev supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='mode'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>subsystem</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='startupPolicy'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>default</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>mandatory</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>requisite</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>optional</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='subsysType'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>usb</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>pci</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>scsi</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='capsType'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='pciBackend'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </hostdev>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <rng supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='model'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio-transitional</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio-non-transitional</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='backendModel'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>random</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>egd</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>builtin</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </rng>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <filesystem supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='driverType'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>path</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>handle</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtiofs</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </filesystem>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <tpm supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='model'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>tpm-tis</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>tpm-crb</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='backendModel'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>emulator</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>external</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='backendVersion'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>2.0</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </tpm>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <redirdev supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='bus'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>usb</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </redirdev>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <channel supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='type'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>pty</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>unix</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </channel>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <crypto supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='model'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='type'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>qemu</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='backendModel'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>builtin</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </crypto>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <interface supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='backendType'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>default</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>passt</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </interface>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <panic supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='model'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>isa</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>hyperv</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </panic>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <console supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='type'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>null</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>vc</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>pty</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>dev</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>file</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>pipe</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>stdio</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>udp</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>tcp</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>unix</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>qemu-vdagent</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>dbus</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </console>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  </devices>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <features>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <gic supported='no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <vmcoreinfo supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <genid supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <backingStoreInput supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <backup supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <async-teardown supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <ps2 supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <sev supported='no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <sgx supported='no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <hyperv supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='features'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>relaxed</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>vapic</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>spinlocks</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>vpindex</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>runtime</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>synic</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>stimer</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>reset</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>vendor_id</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>frequencies</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>reenlightenment</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>tlbflush</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>ipi</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>avic</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>emsr_bitmap</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>xmm_input</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <defaults>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <spinlocks>4095</spinlocks>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <stimer_direct>on</stimer_direct>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </defaults>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </hyperv>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <launchSecurity supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='sectype'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>tdx</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </launchSecurity>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  </features>
Nov 29 02:36:09 np0005539564 nova_compute[225316]: </domainCapabilities>
Nov 29 02:36:09 np0005539564 nova_compute[225316]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 29 02:36:09 np0005539564 nova_compute[225316]: 2025-11-29 07:36:09.628 225320 DEBUG nova.virt.libvirt.host [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 29 02:36:09 np0005539564 nova_compute[225316]: <domainCapabilities>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <domain>kvm</domain>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <arch>x86_64</arch>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <vcpu max='240'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <iothreads supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <os supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <enum name='firmware'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <loader supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='type'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>rom</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>pflash</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='readonly'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>yes</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>no</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='secure'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>no</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </loader>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  </os>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <cpu>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>on</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>off</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </mode>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <mode name='maximum' supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='maximumMigratable'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>on</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>off</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </mode>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <mode name='host-model' supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <vendor>AMD</vendor>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='x2apic'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='stibp'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='ssbd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='succor'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='ibrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='lbrv'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </mode>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <mode name='custom' supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Broadwell-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cooperlake'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cooperlake-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Cooperlake-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Denverton'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Denverton-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Denverton-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Denverton-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Dhyana-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Genoa'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amd-psfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='auto-ibrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='stibp-always-on'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amd-psfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='auto-ibrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='stibp-always-on'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Milan'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amd-psfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='stibp-always-on'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Rome'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='EPYC-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='GraniteRapids'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='prefetchiti'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='prefetchiti'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx10'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx10-128'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx10-256'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx10-512'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='prefetchiti'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-noTSX'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Haswell-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='IvyBridge'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='IvyBridge-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='IvyBridge-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='KnightsMill'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512er'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512pf'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='KnightsMill-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512er'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512pf'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Opteron_G4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Opteron_G5'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tbm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tbm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='SapphireRapids'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='SierraForest'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cmpccxadd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='SierraForest-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-ifma'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cmpccxadd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Snowridge'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Snowridge-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Snowridge-v2'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Snowridge-v3'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='Snowridge-v4'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='athlon'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='athlon-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='core2duo'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='core2duo-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='coreduo'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='coreduo-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='n270'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='n270-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='phenom'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <blockers model='phenom-v1'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </blockers>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </mode>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  </cpu>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <memoryBacking supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <enum name='sourceType'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <value>file</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <value>anonymous</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <value>memfd</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  </memoryBacking>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <devices>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <disk supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='diskDevice'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>disk</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>cdrom</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>floppy</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>lun</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='bus'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>ide</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>fdc</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>scsi</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>usb</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>sata</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='model'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio-transitional</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio-non-transitional</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </disk>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <graphics supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='type'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>vnc</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>egl-headless</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>dbus</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </graphics>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <video supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='modelType'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>vga</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>cirrus</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>none</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>bochs</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>ramfb</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </video>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <hostdev supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='mode'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>subsystem</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='startupPolicy'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>default</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>mandatory</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>requisite</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>optional</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='subsysType'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>usb</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>pci</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>scsi</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='capsType'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='pciBackend'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </hostdev>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <rng supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='model'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio-transitional</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtio-non-transitional</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='backendModel'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>random</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>egd</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>builtin</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </rng>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <filesystem supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='driverType'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>path</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>handle</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>virtiofs</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </filesystem>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <tpm supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='model'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>tpm-tis</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>tpm-crb</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='backendModel'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>emulator</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>external</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='backendVersion'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>2.0</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </tpm>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <redirdev supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='bus'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>usb</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </redirdev>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <channel supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='type'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>pty</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>unix</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </channel>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <crypto supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='model'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='type'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>qemu</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='backendModel'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>builtin</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </crypto>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <interface supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='backendType'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>default</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>passt</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </interface>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <panic supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='model'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>isa</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>hyperv</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </panic>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <console supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='type'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>null</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>vc</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>pty</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>dev</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>file</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>pipe</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>stdio</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>udp</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>tcp</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>unix</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>qemu-vdagent</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>dbus</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </console>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  </devices>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <features>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <gic supported='no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <vmcoreinfo supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <genid supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <backingStoreInput supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <backup supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <async-teardown supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <ps2 supported='yes'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <sev supported='no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <sgx supported='no'/>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <hyperv supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='features'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>relaxed</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>vapic</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>spinlocks</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>vpindex</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>runtime</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>synic</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>stimer</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>reset</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>vendor_id</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>frequencies</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>reenlightenment</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>tlbflush</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>ipi</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>avic</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>emsr_bitmap</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>xmm_input</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <defaults>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <spinlocks>4095</spinlocks>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <stimer_direct>on</stimer_direct>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </defaults>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </hyperv>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    <launchSecurity supported='yes'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      <enum name='sectype'>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:        <value>tdx</value>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:      </enum>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:    </launchSecurity>
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  </features>
Nov 29 02:36:09 np0005539564 nova_compute[225316]: </domainCapabilities>
Nov 29 02:36:09 np0005539564 nova_compute[225316]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 29 02:36:09 np0005539564 nova_compute[225316]: 2025-11-29 07:36:09.701 225320 DEBUG nova.virt.libvirt.host [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 29 02:36:09 np0005539564 nova_compute[225316]: 2025-11-29 07:36:09.701 225320 INFO nova.virt.libvirt.host [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] Secure Boot support detected#033[00m
Nov 29 02:36:09 np0005539564 nova_compute[225316]: 2025-11-29 07:36:09.704 225320 INFO nova.virt.libvirt.driver [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 29 02:36:09 np0005539564 nova_compute[225316]: 2025-11-29 07:36:09.716 225320 DEBUG nova.virt.libvirt.driver [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] cpu compare xml: <cpu match="exact">
Nov 29 02:36:09 np0005539564 nova_compute[225316]:  <model>Nehalem</model>
Nov 29 02:36:09 np0005539564 nova_compute[225316]: </cpu>
Nov 29 02:36:09 np0005539564 nova_compute[225316]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019#033[00m
Nov 29 02:36:09 np0005539564 nova_compute[225316]: 2025-11-29 07:36:09.720 225320 DEBUG nova.virt.libvirt.driver [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 29 02:36:09 np0005539564 nova_compute[225316]: 2025-11-29 07:36:09.753 225320 INFO nova.virt.node [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] Determined node identity ea190a43-1246-44b8-8f8b-a61b155a1d3b from /var/lib/nova/compute_id#033[00m
Nov 29 02:36:09 np0005539564 nova_compute[225316]: 2025-11-29 07:36:09.783 225320 WARNING nova.compute.manager [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] Compute nodes ['ea190a43-1246-44b8-8f8b-a61b155a1d3b'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Nov 29 02:36:09 np0005539564 nova_compute[225316]: 2025-11-29 07:36:09.814 225320 INFO nova.compute.manager [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 29 02:36:09 np0005539564 nova_compute[225316]: 2025-11-29 07:36:09.855 225320 WARNING nova.compute.manager [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.#033[00m
Nov 29 02:36:09 np0005539564 nova_compute[225316]: 2025-11-29 07:36:09.856 225320 DEBUG oslo_concurrency.lockutils [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:36:09 np0005539564 nova_compute[225316]: 2025-11-29 07:36:09.856 225320 DEBUG oslo_concurrency.lockutils [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:36:09 np0005539564 nova_compute[225316]: 2025-11-29 07:36:09.856 225320 DEBUG oslo_concurrency.lockutils [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:36:09 np0005539564 nova_compute[225316]: 2025-11-29 07:36:09.856 225320 DEBUG nova.compute.resource_tracker [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:36:09 np0005539564 nova_compute[225316]: 2025-11-29 07:36:09.856 225320 DEBUG oslo_concurrency.processutils [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:36:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:09.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:10.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:36:10 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2138765783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:36:10 np0005539564 nova_compute[225316]: 2025-11-29 07:36:10.333 225320 DEBUG oslo_concurrency.processutils [None req-62f5f89a-6b20-4860-b662-22c2bcd6a293 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:36:10 np0005539564 systemd[1]: Starting libvirt nodedev daemon...
Nov 29 02:36:10 np0005539564 systemd[1]: Started libvirt nodedev daemon.
Nov 29 02:36:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:36:10 np0005539564 python3.9[226194]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:36:10 np0005539564 systemd[1]: Stopping nova_compute container...
Nov 29 02:36:10 np0005539564 nova_compute[225316]: 2025-11-29 07:36:10.625 225320 DEBUG oslo_concurrency.lockutils [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:36:10 np0005539564 nova_compute[225316]: 2025-11-29 07:36:10.625 225320 DEBUG oslo_concurrency.lockutils [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:36:10 np0005539564 nova_compute[225316]: 2025-11-29 07:36:10.625 225320 DEBUG oslo_concurrency.lockutils [None req-f5396648-3b41-4fcc-8934-b2bbe29d5a74 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:36:11 np0005539564 virtqemud[225880]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 29 02:36:11 np0005539564 virtqemud[225880]: hostname: compute-1
Nov 29 02:36:11 np0005539564 virtqemud[225880]: End of file while reading data: Input/output error
Nov 29 02:36:11 np0005539564 systemd[1]: libpod-966a1301fe3e4d853c7e974259c796212c0bcce82933ac15f69e5fd3388d07ef.scope: Deactivated successfully.
Nov 29 02:36:11 np0005539564 systemd[1]: libpod-966a1301fe3e4d853c7e974259c796212c0bcce82933ac15f69e5fd3388d07ef.scope: Consumed 3.800s CPU time.
Nov 29 02:36:11 np0005539564 podman[226221]: 2025-11-29 07:36:11.023127391 +0000 UTC m=+0.536310715 container died 966a1301fe3e4d853c7e974259c796212c0bcce82933ac15f69e5fd3388d07ef (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:36:11 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-966a1301fe3e4d853c7e974259c796212c0bcce82933ac15f69e5fd3388d07ef-userdata-shm.mount: Deactivated successfully.
Nov 29 02:36:11 np0005539564 systemd[1]: var-lib-containers-storage-overlay-d9577a6a4520786ef8d00b55e617f34ed0a1ef0610844a0448149654318d355d-merged.mount: Deactivated successfully.
Nov 29 02:36:11 np0005539564 podman[226221]: 2025-11-29 07:36:11.639709531 +0000 UTC m=+1.152892855 container cleanup 966a1301fe3e4d853c7e974259c796212c0bcce82933ac15f69e5fd3388d07ef (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:36:11 np0005539564 podman[226221]: nova_compute
Nov 29 02:36:11 np0005539564 podman[226253]: nova_compute
Nov 29 02:36:11 np0005539564 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 29 02:36:11 np0005539564 systemd[1]: Stopped nova_compute container.
Nov 29 02:36:11 np0005539564 systemd[1]: Starting nova_compute container...
Nov 29 02:36:11 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:36:11 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9577a6a4520786ef8d00b55e617f34ed0a1ef0610844a0448149654318d355d/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:11 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9577a6a4520786ef8d00b55e617f34ed0a1ef0610844a0448149654318d355d/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:11 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9577a6a4520786ef8d00b55e617f34ed0a1ef0610844a0448149654318d355d/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:11 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9577a6a4520786ef8d00b55e617f34ed0a1ef0610844a0448149654318d355d/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:11 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9577a6a4520786ef8d00b55e617f34ed0a1ef0610844a0448149654318d355d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:11 np0005539564 podman[226266]: 2025-11-29 07:36:11.893659776 +0000 UTC m=+0.165434165 container init 966a1301fe3e4d853c7e974259c796212c0bcce82933ac15f69e5fd3388d07ef (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:36:11 np0005539564 podman[226266]: 2025-11-29 07:36:11.901644734 +0000 UTC m=+0.173419093 container start 966a1301fe3e4d853c7e974259c796212c0bcce82933ac15f69e5fd3388d07ef (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Nov 29 02:36:11 np0005539564 nova_compute[226295]: + sudo -E kolla_set_configs
Nov 29 02:36:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:11.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Validating config file
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Copying service configuration files
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Deleting /etc/ceph
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Creating directory /etc/ceph
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Setting permission for /etc/ceph
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Writing out command to execute
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:36:11 np0005539564 nova_compute[226295]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 02:36:11 np0005539564 nova_compute[226295]: ++ cat /run_command
Nov 29 02:36:11 np0005539564 nova_compute[226295]: + CMD=nova-compute
Nov 29 02:36:11 np0005539564 nova_compute[226295]: + ARGS=
Nov 29 02:36:11 np0005539564 nova_compute[226295]: + sudo kolla_copy_cacerts
Nov 29 02:36:12 np0005539564 nova_compute[226295]: + [[ ! -n '' ]]
Nov 29 02:36:12 np0005539564 nova_compute[226295]: + . kolla_extend_start
Nov 29 02:36:12 np0005539564 nova_compute[226295]: Running command: 'nova-compute'
Nov 29 02:36:12 np0005539564 nova_compute[226295]: + echo 'Running command: '\''nova-compute'\'''
Nov 29 02:36:12 np0005539564 nova_compute[226295]: + umask 0022
Nov 29 02:36:12 np0005539564 nova_compute[226295]: + exec nova-compute
Nov 29 02:36:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:12.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:12 np0005539564 podman[226266]: nova_compute
Nov 29 02:36:12 np0005539564 systemd[1]: Started nova_compute container.
Nov 29 02:36:12 np0005539564 podman[226267]: 2025-11-29 07:36:12.242157046 +0000 UTC m=+0.508613830 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:36:12 np0005539564 podman[226288]: 2025-11-29 07:36:12.309644864 +0000 UTC m=+0.522680924 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 02:36:13 np0005539564 python3.9[226489]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 29 02:36:13 np0005539564 systemd[1]: Started libpod-conmon-b771ddedd846d1db1e165cb7f6ba017896fc29bb4e81ce157a500d503051a941.scope.
Nov 29 02:36:13 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:36:13 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76ebe0033c605ffbeccbc23049a655859a21e4d363c1187a39d1712db94cfe6f/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:13 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76ebe0033c605ffbeccbc23049a655859a21e4d363c1187a39d1712db94cfe6f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:13 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76ebe0033c605ffbeccbc23049a655859a21e4d363c1187a39d1712db94cfe6f/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:13.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:14 np0005539564 podman[226513]: 2025-11-29 07:36:14.008624079 +0000 UTC m=+0.617151267 container init b771ddedd846d1db1e165cb7f6ba017896fc29bb4e81ce157a500d503051a941 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 02:36:14 np0005539564 podman[226513]: 2025-11-29 07:36:14.016617457 +0000 UTC m=+0.625144615 container start b771ddedd846d1db1e165cb7f6ba017896fc29bb4e81ce157a500d503051a941 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:36:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:14.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:14 np0005539564 nova_compute_init[226535]: INFO:nova_statedir:Applying nova statedir ownership
Nov 29 02:36:14 np0005539564 nova_compute_init[226535]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 29 02:36:14 np0005539564 nova_compute_init[226535]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 29 02:36:14 np0005539564 nova_compute_init[226535]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 29 02:36:14 np0005539564 nova_compute_init[226535]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 29 02:36:14 np0005539564 nova_compute_init[226535]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 29 02:36:14 np0005539564 nova_compute_init[226535]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 29 02:36:14 np0005539564 nova_compute_init[226535]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 29 02:36:14 np0005539564 nova_compute_init[226535]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 29 02:36:14 np0005539564 nova_compute_init[226535]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 29 02:36:14 np0005539564 nova_compute_init[226535]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 29 02:36:14 np0005539564 nova_compute_init[226535]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:36:14 np0005539564 nova_compute_init[226535]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 29 02:36:14 np0005539564 nova_compute_init[226535]: INFO:nova_statedir:Nova statedir ownership complete
Nov 29 02:36:14 np0005539564 systemd[1]: libpod-b771ddedd846d1db1e165cb7f6ba017896fc29bb4e81ce157a500d503051a941.scope: Deactivated successfully.
Nov 29 02:36:14 np0005539564 python3.9[226489]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 29 02:36:14 np0005539564 podman[226537]: 2025-11-29 07:36:14.111673805 +0000 UTC m=+0.030983175 container died b771ddedd846d1db1e165cb7f6ba017896fc29bb4e81ce157a500d503051a941 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.144 226310 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.144 226310 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.144 226310 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.145 226310 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 29 02:36:14 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b771ddedd846d1db1e165cb7f6ba017896fc29bb4e81ce157a500d503051a941-userdata-shm.mount: Deactivated successfully.
Nov 29 02:36:14 np0005539564 systemd[1]: var-lib-containers-storage-overlay-76ebe0033c605ffbeccbc23049a655859a21e4d363c1187a39d1712db94cfe6f-merged.mount: Deactivated successfully.
Nov 29 02:36:14 np0005539564 podman[226537]: 2025-11-29 07:36:14.26683222 +0000 UTC m=+0.186141570 container cleanup b771ddedd846d1db1e165cb7f6ba017896fc29bb4e81ce157a500d503051a941 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:36:14 np0005539564 systemd[1]: libpod-conmon-b771ddedd846d1db1e165cb7f6ba017896fc29bb4e81ce157a500d503051a941.scope: Deactivated successfully.
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.315 226310 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.329 226310 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.330 226310 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.834 226310 INFO nova.virt.driver [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.942 226310 INFO nova.compute.provider_config [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.953 226310 DEBUG oslo_concurrency.lockutils [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.953 226310 DEBUG oslo_concurrency.lockutils [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.954 226310 DEBUG oslo_concurrency.lockutils [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.954 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.954 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.954 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.954 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.955 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.955 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.955 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 systemd-logind[785]: Session 49 logged out. Waiting for processes to exit.
Nov 29 02:36:14 np0005539564 systemd[1]: session-49.scope: Deactivated successfully.
Nov 29 02:36:14 np0005539564 systemd[1]: session-49.scope: Consumed 2min 37.427s CPU time.
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.955 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.955 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.955 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.955 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.956 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.956 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.956 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.956 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.956 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.956 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.956 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.957 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.957 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.957 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 systemd-logind[785]: Removed session 49.
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.957 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.957 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.957 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.957 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.958 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.958 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.958 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.958 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.958 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.959 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.959 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.959 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.959 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.959 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.959 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.960 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.960 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.960 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.960 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.960 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.960 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.961 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.961 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.961 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.961 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.961 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.962 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.962 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.962 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.962 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.962 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.962 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.963 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.963 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.963 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.963 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.963 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.963 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.963 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.964 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.964 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.964 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.964 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.964 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.964 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.965 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.965 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.965 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.965 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.965 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.965 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.966 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.966 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.966 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.966 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.966 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.966 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.966 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.967 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.967 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.967 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.967 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.967 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.967 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.967 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.968 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.968 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.968 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.968 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.968 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.968 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.969 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.969 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.969 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.969 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.969 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.969 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.969 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.969 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.970 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.970 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.970 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.970 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.970 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.970 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.970 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.971 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.971 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.971 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.971 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.971 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.971 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.972 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.972 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.972 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.972 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.972 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.972 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.973 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.973 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.973 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.973 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.973 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.973 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.973 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.974 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.974 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.974 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.974 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.974 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.974 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.974 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.975 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.975 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.975 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.975 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.975 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.975 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.975 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.976 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.976 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.976 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.976 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.976 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.976 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.976 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.977 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.977 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.977 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.977 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.977 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.977 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.978 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.978 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.978 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.978 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.978 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.978 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.979 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.979 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.979 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.979 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.979 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.979 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.980 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.980 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.980 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.980 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.980 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.980 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.980 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.981 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.981 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.981 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.981 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.981 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.981 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.982 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.982 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.982 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.982 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.982 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.982 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.983 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.983 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.983 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.983 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.983 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.984 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.984 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.984 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.984 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.984 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.985 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.985 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.985 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.985 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.985 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.985 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.985 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.986 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.986 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.986 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.986 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.986 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.986 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.986 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.987 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.987 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.987 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.987 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.987 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.987 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.987 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.988 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.988 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.988 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.988 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.988 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.988 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.988 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.988 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.989 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.989 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.989 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.989 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.989 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.989 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.989 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.990 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.990 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.990 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.990 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.990 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.990 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.990 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.991 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.991 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.991 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.991 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.991 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.991 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.991 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.992 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.992 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.992 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.992 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.992 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.992 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.992 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.993 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.993 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.993 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.993 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.993 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.993 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.994 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.994 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.994 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.994 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.994 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.994 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.995 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.995 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.995 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.995 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.995 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.996 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.996 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.996 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.996 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.997 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.997 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.997 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.997 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.997 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.997 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.998 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.998 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.998 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.998 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.998 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.999 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.999 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.999 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:14 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.999 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.999 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:14.999 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.000 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.000 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.000 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.000 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.000 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.000 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.001 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.001 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.001 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.001 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.001 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.001 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.002 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.002 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.002 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.002 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.002 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.002 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.002 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.003 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.003 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.003 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.003 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.003 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.003 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.003 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.004 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.004 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.004 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.004 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.004 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.004 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.004 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.005 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.005 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.005 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.005 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.005 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.005 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.005 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.006 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.006 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.006 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.006 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.006 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.006 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.006 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.007 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.007 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.007 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.007 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.007 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.007 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.007 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.008 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.008 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.008 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.008 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.008 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.008 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.009 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.009 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.009 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.009 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.009 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.009 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.010 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.010 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.010 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.010 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.010 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.010 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.010 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.011 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.011 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.011 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.011 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.011 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.011 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.011 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.012 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.012 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.012 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.012 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.012 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.012 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.013 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.013 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.013 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.013 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.013 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.013 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.013 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.013 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.014 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.014 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.014 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.014 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.014 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.014 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.014 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.015 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.015 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.015 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.015 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.015 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.015 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.015 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.016 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.016 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.016 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.016 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.016 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.016 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.016 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.017 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.017 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.017 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.017 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.017 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.017 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.017 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.018 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.018 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.018 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.018 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.018 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.018 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.018 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.019 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.019 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.019 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.019 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.019 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.019 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.019 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.020 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.020 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.020 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.020 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.020 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.020 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.020 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.021 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.021 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.021 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.021 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.021 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.021 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.021 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.022 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.022 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.022 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.022 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.022 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.022 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.022 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.023 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.023 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.023 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.023 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.023 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.023 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.023 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.024 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.024 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.024 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.024 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.024 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.024 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.024 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.025 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.025 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.025 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.025 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.025 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.025 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.025 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.026 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.026 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.026 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.026 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.026 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.026 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.027 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.027 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.027 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.027 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.027 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.027 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.027 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.028 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.028 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.028 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.028 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.028 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.029 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.029 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.029 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.029 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.029 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.029 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.030 226310 WARNING oslo_config.cfg [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 29 02:36:15 np0005539564 nova_compute[226295]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 29 02:36:15 np0005539564 nova_compute[226295]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 29 02:36:15 np0005539564 nova_compute[226295]: and ``live_migration_inbound_addr`` respectively.
Nov 29 02:36:15 np0005539564 nova_compute[226295]: ).  Its value may be silently ignored in the future.#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.030 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.030 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.030 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.030 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.030 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.031 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.031 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.031 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.031 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.031 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.031 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.032 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.032 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.032 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.032 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.033 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.033 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.033 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.033 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.rbd_secret_uuid        = 38a37ed2-442a-5e0d-a69a-881fdd186450 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.033 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.034 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.034 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.034 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.034 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.034 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.035 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.035 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.035 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.035 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.035 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.036 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.036 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.036 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.036 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.037 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.037 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.037 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.037 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.037 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.038 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.038 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.038 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.038 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.038 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.038 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.039 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.039 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.039 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.039 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.039 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.039 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.040 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.040 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.040 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.040 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.040 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.040 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.041 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.041 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.041 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.041 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.041 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.041 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.041 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.042 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.042 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.042 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.042 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.042 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.042 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.042 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.043 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.043 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.043 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.043 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.043 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.043 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.043 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.044 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.044 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.044 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.044 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.044 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.045 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.045 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.045 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.045 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.045 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.045 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.046 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.046 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.046 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.046 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.046 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.046 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.046 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.047 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.047 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.047 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.047 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.047 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.047 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.047 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.047 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.048 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.048 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.048 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.048 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.048 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.048 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.049 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.049 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.049 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.049 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.049 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.049 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.049 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.050 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.050 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.050 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.050 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.050 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.050 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.050 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.051 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.051 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.051 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.051 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.051 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.051 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.051 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.052 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.052 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.052 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.052 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.052 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.052 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.052 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.053 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.054 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.054 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.054 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.054 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.054 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.054 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.055 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.055 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.055 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.055 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.055 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.055 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.056 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.056 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.056 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.056 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.057 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.057 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.057 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.057 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.057 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.057 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.058 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.058 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.058 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.058 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.058 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.058 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.058 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.059 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.059 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.059 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.059 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.059 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.059 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.059 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.060 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.060 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.060 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.060 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.060 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.060 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.060 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.061 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.061 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.061 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.061 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.061 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.061 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.061 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.062 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.062 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.062 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.062 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.062 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.062 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.063 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.063 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.063 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.063 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.063 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.063 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.064 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.064 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.064 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.064 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.064 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.064 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.064 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.065 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.065 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.065 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.065 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.065 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.065 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.065 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.066 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.066 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.066 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.066 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.066 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.066 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.066 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.067 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.067 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.067 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.067 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.067 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.067 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.067 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.068 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.068 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.068 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.068 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.068 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.068 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.068 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.069 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.069 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.069 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.069 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.069 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.069 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.069 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.070 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.070 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.070 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.070 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.070 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.070 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.071 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.071 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.071 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.071 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.071 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.071 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.071 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.072 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.072 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.072 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.072 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.072 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.072 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.072 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.073 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.073 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.073 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.073 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.073 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.073 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.073 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.074 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.074 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.074 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.074 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.074 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.074 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.074 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.075 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.075 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.075 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.075 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.075 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.075 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.075 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.076 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.076 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.076 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.076 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.076 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.076 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.077 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.077 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.077 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.077 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.077 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.078 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.078 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.078 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.078 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.078 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.078 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.079 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.079 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.079 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.079 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.079 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.079 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.079 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.080 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.080 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.080 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.080 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.080 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.080 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.080 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.081 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.081 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.081 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.081 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.081 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.081 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.082 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.082 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.082 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.082 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.082 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.082 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.083 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.083 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.083 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.083 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.083 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.083 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.083 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.084 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.084 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.084 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.084 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.084 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.085 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.085 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.085 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.085 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.085 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.085 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.086 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.086 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.086 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.086 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.086 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.086 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.086 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.087 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.087 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.087 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.087 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.087 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.087 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.087 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.088 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.088 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.088 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.088 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.088 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.088 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.089 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.089 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.089 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.089 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.089 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.090 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.090 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.090 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.090 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.090 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.090 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.090 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.091 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.091 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.091 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.091 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.091 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.091 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.091 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.092 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.092 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.092 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.092 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.092 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.093 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.093 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.093 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.093 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.093 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.093 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.093 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.094 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.094 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.094 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.094 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.094 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.094 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.095 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.095 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.095 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.095 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.095 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.096 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.096 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.096 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.096 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.096 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.096 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.096 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.097 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.097 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.097 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.097 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.097 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.097 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.098 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.098 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.098 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.098 226310 DEBUG oslo_service.service [None req-06bd92f2-04f9-426f-80ab-873e40c8fc5e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.099 226310 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.158 226310 INFO nova.virt.node [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Determined node identity ea190a43-1246-44b8-8f8b-a61b155a1d3b from /var/lib/nova/compute_id#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.159 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.159 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.159 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.160 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.171 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f1fab73c280> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.173 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f1fab73c280> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.174 226310 INFO nova.virt.libvirt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.180 226310 INFO nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Libvirt host capabilities <capabilities>
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <host>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <uuid>2e858761-3292-4a17-b38f-a169c3064289</uuid>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <cpu>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <arch>x86_64</arch>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model>EPYC-Rome-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <vendor>AMD</vendor>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <microcode version='16777317'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <signature family='23' model='49' stepping='0'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature name='x2apic'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature name='tsc-deadline'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature name='osxsave'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature name='hypervisor'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature name='tsc_adjust'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature name='spec-ctrl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature name='stibp'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature name='arch-capabilities'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature name='ssbd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature name='cmp_legacy'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature name='topoext'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature name='virt-ssbd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature name='lbrv'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature name='tsc-scale'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature name='vmcb-clean'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature name='pause-filter'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature name='pfthreshold'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature name='svme-addr-chk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature name='rdctl-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature name='skip-l1dfl-vmentry'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature name='mds-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature name='pschange-mc-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <pages unit='KiB' size='4'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <pages unit='KiB' size='2048'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <pages unit='KiB' size='1048576'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </cpu>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <power_management>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <suspend_mem/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </power_management>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <iommu support='no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <migration_features>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <live/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <uri_transports>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <uri_transport>tcp</uri_transport>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <uri_transport>rdma</uri_transport>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </uri_transports>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </migration_features>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <topology>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <cells num='1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <cell id='0'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:          <memory unit='KiB'>7864320</memory>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:          <pages unit='KiB' size='4'>1966080</pages>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:          <pages unit='KiB' size='2048'>0</pages>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:          <distances>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:            <sibling id='0' value='10'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:          </distances>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:          <cpus num='8'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:          </cpus>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        </cell>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </cells>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </topology>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <cache>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </cache>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <secmodel>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model>selinux</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <doi>0</doi>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </secmodel>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <secmodel>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model>dac</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <doi>0</doi>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </secmodel>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  </host>
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <guest>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <os_type>hvm</os_type>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <arch name='i686'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <wordsize>32</wordsize>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <domain type='qemu'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <domain type='kvm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </arch>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <features>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <pae/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <nonpae/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <acpi default='on' toggle='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <apic default='on' toggle='no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <cpuselection/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <deviceboot/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <disksnapshot default='on' toggle='no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <externalSnapshot/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </features>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  </guest>
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <guest>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <os_type>hvm</os_type>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <arch name='x86_64'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <wordsize>64</wordsize>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <domain type='qemu'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <domain type='kvm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </arch>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <features>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <acpi default='on' toggle='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <apic default='on' toggle='no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <cpuselection/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <deviceboot/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <disksnapshot default='on' toggle='no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <externalSnapshot/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </features>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  </guest>
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 
Nov 29 02:36:15 np0005539564 nova_compute[226295]: </capabilities>
Nov 29 02:36:15 np0005539564 nova_compute[226295]: #033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.185 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.190 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 29 02:36:15 np0005539564 nova_compute[226295]: <domainCapabilities>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <domain>kvm</domain>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <arch>i686</arch>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <vcpu max='4096'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <iothreads supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <os supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <enum name='firmware'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <loader supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='type'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>rom</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>pflash</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='readonly'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>yes</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>no</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='secure'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>no</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </loader>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <cpu>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>on</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>off</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </mode>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <mode name='maximum' supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='maximumMigratable'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>on</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>off</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </mode>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <mode name='host-model' supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <vendor>AMD</vendor>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='x2apic'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='stibp'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='ssbd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='succor'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='ibrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='lbrv'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </mode>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <mode name='custom' supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cooperlake'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cooperlake-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cooperlake-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Denverton'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Denverton-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Denverton-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Denverton-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Dhyana-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Genoa'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amd-psfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='auto-ibrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='stibp-always-on'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amd-psfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='auto-ibrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='stibp-always-on'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Milan'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amd-psfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='stibp-always-on'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Rome'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='GraniteRapids'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='prefetchiti'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='prefetchiti'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx10'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx10-128'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx10-256'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx10-512'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='prefetchiti'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-noTSX'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='IvyBridge'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='IvyBridge-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='IvyBridge-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='KnightsMill'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512er'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512pf'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='KnightsMill-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512er'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512pf'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Opteron_G4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Opteron_G5'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tbm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tbm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='SapphireRapids'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='SierraForest'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cmpccxadd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='SierraForest-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cmpccxadd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Snowridge'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Snowridge-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Snowridge-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Snowridge-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Snowridge-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='athlon'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='athlon-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='core2duo'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='core2duo-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='coreduo'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='coreduo-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='n270'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='n270-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='phenom'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='phenom-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </mode>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <memoryBacking supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <enum name='sourceType'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <value>file</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <value>anonymous</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <value>memfd</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  </memoryBacking>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <disk supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='diskDevice'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>disk</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>cdrom</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>floppy</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>lun</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='bus'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>fdc</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>scsi</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>usb</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>sata</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='model'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio-transitional</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio-non-transitional</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <graphics supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='type'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>vnc</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>egl-headless</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>dbus</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </graphics>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <video supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='modelType'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>vga</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>cirrus</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>none</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>bochs</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>ramfb</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <hostdev supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='mode'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>subsystem</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='startupPolicy'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>default</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>mandatory</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>requisite</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>optional</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='subsysType'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>usb</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>pci</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>scsi</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='capsType'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='pciBackend'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </hostdev>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <rng supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='model'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio-transitional</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio-non-transitional</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='backendModel'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>random</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>egd</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>builtin</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <filesystem supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='driverType'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>path</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>handle</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtiofs</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </filesystem>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <tpm supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='model'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>tpm-tis</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>tpm-crb</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='backendModel'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>emulator</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>external</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='backendVersion'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>2.0</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </tpm>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <redirdev supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='bus'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>usb</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </redirdev>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <channel supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='type'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>pty</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>unix</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </channel>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <crypto supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='model'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='type'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>qemu</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='backendModel'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>builtin</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </crypto>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <interface supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='backendType'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>default</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>passt</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </interface>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <panic supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='model'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>isa</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>hyperv</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </panic>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <console supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='type'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>null</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>vc</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>pty</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>dev</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>file</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>pipe</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>stdio</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>udp</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>tcp</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>unix</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>qemu-vdagent</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>dbus</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </console>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <gic supported='no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <vmcoreinfo supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <genid supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <backingStoreInput supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <backup supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <async-teardown supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <ps2 supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <sev supported='no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <sgx supported='no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <hyperv supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='features'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>relaxed</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>vapic</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>spinlocks</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>vpindex</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>runtime</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>synic</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>stimer</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>reset</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>vendor_id</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>frequencies</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>reenlightenment</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>tlbflush</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>ipi</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>avic</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>emsr_bitmap</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>xmm_input</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <defaults>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <spinlocks>4095</spinlocks>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <stimer_direct>on</stimer_direct>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </defaults>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </hyperv>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <launchSecurity supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='sectype'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>tdx</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </launchSecurity>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:36:15 np0005539564 nova_compute[226295]: </domainCapabilities>
Nov 29 02:36:15 np0005539564 nova_compute[226295]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.196 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 29 02:36:15 np0005539564 nova_compute[226295]: <domainCapabilities>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <domain>kvm</domain>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <arch>i686</arch>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <vcpu max='240'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <iothreads supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <os supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <enum name='firmware'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <loader supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='type'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>rom</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>pflash</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='readonly'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>yes</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>no</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='secure'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>no</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </loader>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <cpu>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>on</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>off</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </mode>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <mode name='maximum' supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='maximumMigratable'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>on</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>off</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </mode>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <mode name='host-model' supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <vendor>AMD</vendor>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='x2apic'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='stibp'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='ssbd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='succor'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='ibrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='lbrv'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </mode>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <mode name='custom' supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cooperlake'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cooperlake-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cooperlake-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Denverton'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Denverton-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Denverton-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Denverton-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Dhyana-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Genoa'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amd-psfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='auto-ibrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='stibp-always-on'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amd-psfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='auto-ibrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='stibp-always-on'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Milan'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amd-psfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='stibp-always-on'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Rome'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='GraniteRapids'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='prefetchiti'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='prefetchiti'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx10'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx10-128'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx10-256'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx10-512'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='prefetchiti'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-noTSX'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='IvyBridge'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='IvyBridge-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='IvyBridge-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='KnightsMill'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512er'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512pf'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='KnightsMill-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512er'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512pf'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Opteron_G4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Opteron_G5'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tbm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tbm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='SapphireRapids'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='SierraForest'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cmpccxadd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='SierraForest-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cmpccxadd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Snowridge'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Snowridge-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Snowridge-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Snowridge-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Snowridge-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='athlon'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='athlon-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='core2duo'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='core2duo-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='coreduo'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='coreduo-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='n270'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='n270-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='phenom'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='phenom-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </mode>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <memoryBacking supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <enum name='sourceType'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <value>file</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <value>anonymous</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <value>memfd</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  </memoryBacking>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <disk supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='diskDevice'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>disk</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>cdrom</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>floppy</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>lun</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='bus'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>ide</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>fdc</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>scsi</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>usb</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>sata</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='model'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio-transitional</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio-non-transitional</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <graphics supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='type'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>vnc</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>egl-headless</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>dbus</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </graphics>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <video supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='modelType'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>vga</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>cirrus</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>none</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>bochs</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>ramfb</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <hostdev supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='mode'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>subsystem</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='startupPolicy'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>default</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>mandatory</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>requisite</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>optional</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='subsysType'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>usb</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>pci</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>scsi</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='capsType'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='pciBackend'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </hostdev>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <rng supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='model'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio-transitional</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio-non-transitional</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='backendModel'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>random</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>egd</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>builtin</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <filesystem supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='driverType'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>path</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>handle</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtiofs</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </filesystem>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <tpm supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='model'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>tpm-tis</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>tpm-crb</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='backendModel'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>emulator</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>external</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='backendVersion'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>2.0</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </tpm>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <redirdev supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='bus'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>usb</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </redirdev>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <channel supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='type'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>pty</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>unix</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </channel>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <crypto supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='model'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='type'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>qemu</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='backendModel'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>builtin</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </crypto>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <interface supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='backendType'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>default</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>passt</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </interface>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <panic supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='model'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>isa</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>hyperv</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </panic>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <console supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='type'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>null</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>vc</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>pty</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>dev</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>file</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>pipe</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>stdio</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>udp</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>tcp</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>unix</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>qemu-vdagent</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>dbus</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </console>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <gic supported='no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <vmcoreinfo supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <genid supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <backingStoreInput supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <backup supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <async-teardown supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <ps2 supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <sev supported='no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <sgx supported='no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <hyperv supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='features'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>relaxed</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>vapic</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>spinlocks</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>vpindex</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>runtime</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>synic</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>stimer</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>reset</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>vendor_id</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>frequencies</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>reenlightenment</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>tlbflush</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>ipi</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>avic</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>emsr_bitmap</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>xmm_input</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <defaults>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <spinlocks>4095</spinlocks>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <stimer_direct>on</stimer_direct>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </defaults>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </hyperv>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <launchSecurity supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='sectype'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>tdx</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </launchSecurity>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:36:15 np0005539564 nova_compute[226295]: </domainCapabilities>
Nov 29 02:36:15 np0005539564 nova_compute[226295]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.222 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.224 226310 DEBUG nova.virt.libvirt.volume.mount [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.228 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 29 02:36:15 np0005539564 nova_compute[226295]: <domainCapabilities>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <domain>kvm</domain>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <arch>x86_64</arch>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <vcpu max='4096'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <iothreads supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <os supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <enum name='firmware'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <value>efi</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <loader supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='type'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>rom</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>pflash</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='readonly'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>yes</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>no</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='secure'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>yes</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>no</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </loader>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <cpu>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>on</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>off</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </mode>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <mode name='maximum' supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='maximumMigratable'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>on</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>off</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </mode>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <mode name='host-model' supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <vendor>AMD</vendor>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='x2apic'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='stibp'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='ssbd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='succor'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='ibrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='lbrv'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </mode>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <mode name='custom' supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cooperlake'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cooperlake-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cooperlake-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Denverton'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Denverton-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Denverton-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Denverton-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Dhyana-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Genoa'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amd-psfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='auto-ibrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='stibp-always-on'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amd-psfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='auto-ibrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='stibp-always-on'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Milan'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amd-psfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='stibp-always-on'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Rome'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='GraniteRapids'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='prefetchiti'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='prefetchiti'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx10'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx10-128'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx10-256'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx10-512'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='prefetchiti'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-noTSX'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='IvyBridge'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='IvyBridge-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='IvyBridge-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='KnightsMill'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512er'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512pf'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='KnightsMill-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512er'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512pf'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Opteron_G4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Opteron_G5'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tbm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tbm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='SapphireRapids'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='SierraForest'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cmpccxadd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='SierraForest-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cmpccxadd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Snowridge'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Snowridge-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Snowridge-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Snowridge-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Snowridge-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='athlon'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='athlon-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='core2duo'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='core2duo-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='coreduo'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='coreduo-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='n270'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='n270-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='phenom'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='phenom-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </mode>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <memoryBacking supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <enum name='sourceType'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <value>file</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <value>anonymous</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <value>memfd</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  </memoryBacking>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <disk supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='diskDevice'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>disk</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>cdrom</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>floppy</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>lun</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='bus'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>fdc</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>scsi</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>usb</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>sata</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='model'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio-transitional</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio-non-transitional</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <graphics supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='type'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>vnc</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>egl-headless</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>dbus</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </graphics>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <video supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='modelType'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>vga</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>cirrus</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>none</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>bochs</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>ramfb</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <hostdev supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='mode'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>subsystem</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='startupPolicy'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>default</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>mandatory</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>requisite</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>optional</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='subsysType'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>usb</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>pci</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>scsi</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='capsType'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='pciBackend'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </hostdev>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <rng supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='model'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio-transitional</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio-non-transitional</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='backendModel'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>random</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>egd</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>builtin</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <filesystem supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='driverType'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>path</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>handle</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtiofs</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </filesystem>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <tpm supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='model'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>tpm-tis</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>tpm-crb</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='backendModel'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>emulator</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>external</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='backendVersion'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>2.0</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </tpm>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <redirdev supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='bus'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>usb</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </redirdev>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <channel supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='type'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>pty</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>unix</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </channel>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <crypto supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='model'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='type'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>qemu</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='backendModel'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>builtin</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </crypto>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <interface supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='backendType'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>default</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>passt</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </interface>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <panic supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='model'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>isa</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>hyperv</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </panic>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <console supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='type'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>null</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>vc</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>pty</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>dev</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>file</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>pipe</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>stdio</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>udp</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>tcp</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>unix</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>qemu-vdagent</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>dbus</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </console>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <gic supported='no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <vmcoreinfo supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <genid supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <backingStoreInput supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <backup supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <async-teardown supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <ps2 supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <sev supported='no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <sgx supported='no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <hyperv supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='features'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>relaxed</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>vapic</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>spinlocks</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>vpindex</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>runtime</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>synic</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>stimer</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>reset</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>vendor_id</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>frequencies</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>reenlightenment</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>tlbflush</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>ipi</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>avic</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>emsr_bitmap</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>xmm_input</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <defaults>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <spinlocks>4095</spinlocks>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <stimer_direct>on</stimer_direct>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </defaults>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </hyperv>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <launchSecurity supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='sectype'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>tdx</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </launchSecurity>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:36:15 np0005539564 nova_compute[226295]: </domainCapabilities>
Nov 29 02:36:15 np0005539564 nova_compute[226295]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.285 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 29 02:36:15 np0005539564 nova_compute[226295]: <domainCapabilities>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <domain>kvm</domain>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <arch>x86_64</arch>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <vcpu max='240'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <iothreads supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <os supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <enum name='firmware'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <loader supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='type'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>rom</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>pflash</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='readonly'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>yes</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>no</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='secure'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>no</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </loader>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <cpu>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>on</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>off</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </mode>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <mode name='maximum' supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='maximumMigratable'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>on</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>off</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </mode>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <mode name='host-model' supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <vendor>AMD</vendor>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='x2apic'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='stibp'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='ssbd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='succor'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='ibrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='lbrv'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </mode>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <mode name='custom' supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Broadwell-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cooperlake'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cooperlake-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Cooperlake-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Denverton'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Denverton-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Denverton-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Denverton-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Dhyana-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Genoa'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amd-psfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='auto-ibrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='stibp-always-on'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amd-psfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='auto-ibrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='stibp-always-on'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Milan'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amd-psfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='stibp-always-on'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Rome'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='EPYC-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='GraniteRapids'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='prefetchiti'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='prefetchiti'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx10'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx10-128'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx10-256'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx10-512'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='prefetchiti'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-noTSX'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Haswell-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='IvyBridge'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='IvyBridge-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='IvyBridge-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='KnightsMill'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512er'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512pf'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='KnightsMill-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512er'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512pf'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Opteron_G4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Opteron_G5'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tbm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tbm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='SapphireRapids'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='SierraForest'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cmpccxadd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='SierraForest-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-ifma'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cmpccxadd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Snowridge'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Snowridge-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Snowridge-v2'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Snowridge-v3'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='Snowridge-v4'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='athlon'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='athlon-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='core2duo'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='core2duo-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='coreduo'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='coreduo-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='n270'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='n270-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='phenom'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <blockers model='phenom-v1'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </blockers>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </mode>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <memoryBacking supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <enum name='sourceType'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <value>file</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <value>anonymous</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <value>memfd</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  </memoryBacking>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <disk supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='diskDevice'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>disk</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>cdrom</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>floppy</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>lun</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='bus'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>ide</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>fdc</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>scsi</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>usb</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>sata</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='model'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio-transitional</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio-non-transitional</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <graphics supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='type'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>vnc</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>egl-headless</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>dbus</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </graphics>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <video supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='modelType'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>vga</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>cirrus</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>none</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>bochs</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>ramfb</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <hostdev supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='mode'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>subsystem</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='startupPolicy'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>default</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>mandatory</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>requisite</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>optional</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='subsysType'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>usb</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>pci</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>scsi</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='capsType'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='pciBackend'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </hostdev>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <rng supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='model'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio-transitional</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtio-non-transitional</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='backendModel'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>random</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>egd</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>builtin</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <filesystem supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='driverType'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>path</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>handle</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>virtiofs</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </filesystem>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <tpm supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='model'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>tpm-tis</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>tpm-crb</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='backendModel'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>emulator</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>external</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='backendVersion'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>2.0</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </tpm>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <redirdev supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='bus'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>usb</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </redirdev>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <channel supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='type'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>pty</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>unix</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </channel>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <crypto supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='model'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='type'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>qemu</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='backendModel'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>builtin</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </crypto>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <interface supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='backendType'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>default</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>passt</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </interface>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <panic supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='model'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>isa</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>hyperv</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </panic>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <console supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='type'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>null</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>vc</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>pty</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>dev</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>file</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>pipe</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>stdio</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>udp</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>tcp</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>unix</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>qemu-vdagent</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>dbus</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </console>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <gic supported='no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <vmcoreinfo supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <genid supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <backingStoreInput supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <backup supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <async-teardown supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <ps2 supported='yes'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <sev supported='no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <sgx supported='no'/>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <hyperv supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='features'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>relaxed</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>vapic</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>spinlocks</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>vpindex</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>runtime</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>synic</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>stimer</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>reset</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>vendor_id</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>frequencies</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>reenlightenment</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>tlbflush</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>ipi</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>avic</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>emsr_bitmap</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>xmm_input</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <defaults>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <spinlocks>4095</spinlocks>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <stimer_direct>on</stimer_direct>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </defaults>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </hyperv>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    <launchSecurity supported='yes'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      <enum name='sectype'>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:        <value>tdx</value>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:      </enum>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:    </launchSecurity>
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:36:15 np0005539564 nova_compute[226295]: </domainCapabilities>
Nov 29 02:36:15 np0005539564 nova_compute[226295]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.344 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.345 226310 INFO nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Secure Boot support detected#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.347 226310 INFO nova.virt.libvirt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.357 226310 DEBUG nova.virt.libvirt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] cpu compare xml: <cpu match="exact">
Nov 29 02:36:15 np0005539564 nova_compute[226295]:  <model>Nehalem</model>
Nov 29 02:36:15 np0005539564 nova_compute[226295]: </cpu>
Nov 29 02:36:15 np0005539564 nova_compute[226295]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.360 226310 DEBUG nova.virt.libvirt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.399 226310 INFO nova.virt.node [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Determined node identity ea190a43-1246-44b8-8f8b-a61b155a1d3b from /var/lib/nova/compute_id#033[00m
Nov 29 02:36:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.433 226310 WARNING nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Compute nodes ['ea190a43-1246-44b8-8f8b-a61b155a1d3b'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.508 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.545 226310 WARNING nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.545 226310 DEBUG oslo_concurrency.lockutils [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.545 226310 DEBUG oslo_concurrency.lockutils [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.546 226310 DEBUG oslo_concurrency.lockutils [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.546 226310 DEBUG nova.compute.resource_tracker [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:36:15 np0005539564 nova_compute[226295]: 2025-11-29 07:36:15.546 226310 DEBUG oslo_concurrency.processutils [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:36:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:15.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:16.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:16 np0005539564 podman[226645]: 2025-11-29 07:36:16.545810268 +0000 UTC m=+0.098779951 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 29 02:36:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:36:16 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1294229932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:36:16 np0005539564 nova_compute[226295]: 2025-11-29 07:36:16.744 226310 DEBUG oslo_concurrency.processutils [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:36:16 np0005539564 nova_compute[226295]: 2025-11-29 07:36:16.907 226310 WARNING nova.virt.libvirt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:36:16 np0005539564 nova_compute[226295]: 2025-11-29 07:36:16.908 226310 DEBUG nova.compute.resource_tracker [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5270MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:36:16 np0005539564 nova_compute[226295]: 2025-11-29 07:36:16.909 226310 DEBUG oslo_concurrency.lockutils [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:36:16 np0005539564 nova_compute[226295]: 2025-11-29 07:36:16.909 226310 DEBUG oslo_concurrency.lockutils [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:36:17 np0005539564 nova_compute[226295]: 2025-11-29 07:36:17.043 226310 WARNING nova.compute.resource_tracker [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] No compute node record for compute-1.ctlplane.example.com:ea190a43-1246-44b8-8f8b-a61b155a1d3b: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host ea190a43-1246-44b8-8f8b-a61b155a1d3b could not be found.#033[00m
Nov 29 02:36:17 np0005539564 nova_compute[226295]: 2025-11-29 07:36:17.084 226310 INFO nova.compute.resource_tracker [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Compute node record created for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com with uuid: ea190a43-1246-44b8-8f8b-a61b155a1d3b#033[00m
Nov 29 02:36:17 np0005539564 nova_compute[226295]: 2025-11-29 07:36:17.195 226310 DEBUG nova.compute.resource_tracker [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:36:17 np0005539564 nova_compute[226295]: 2025-11-29 07:36:17.196 226310 DEBUG nova.compute.resource_tracker [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:36:17 np0005539564 nova_compute[226295]: 2025-11-29 07:36:17.349 226310 INFO nova.scheduler.client.report [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [req-880ae9a0-546c-41c8-bbba-819842a3089c] Created resource provider record via placement API for resource provider with UUID ea190a43-1246-44b8-8f8b-a61b155a1d3b and name compute-1.ctlplane.example.com.#033[00m
Nov 29 02:36:17 np0005539564 nova_compute[226295]: 2025-11-29 07:36:17.424 226310 DEBUG oslo_concurrency.processutils [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:36:17 np0005539564 nova_compute[226295]: 2025-11-29 07:36:17.911 226310 DEBUG oslo_concurrency.processutils [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:36:17 np0005539564 nova_compute[226295]: 2025-11-29 07:36:17.917 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 29 02:36:17 np0005539564 nova_compute[226295]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Nov 29 02:36:17 np0005539564 nova_compute[226295]: 2025-11-29 07:36:17.917 226310 INFO nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] kernel doesn't support AMD SEV#033[00m
Nov 29 02:36:17 np0005539564 nova_compute[226295]: 2025-11-29 07:36:17.918 226310 DEBUG nova.compute.provider_tree [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Updating inventory in ProviderTree for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:36:17 np0005539564 nova_compute[226295]: 2025-11-29 07:36:17.919 226310 DEBUG nova.virt.libvirt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:36:17 np0005539564 nova_compute[226295]: 2025-11-29 07:36:17.921 226310 DEBUG nova.virt.libvirt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Libvirt baseline CPU <cpu>
Nov 29 02:36:17 np0005539564 nova_compute[226295]:  <arch>x86_64</arch>
Nov 29 02:36:17 np0005539564 nova_compute[226295]:  <model>Nehalem</model>
Nov 29 02:36:17 np0005539564 nova_compute[226295]:  <vendor>AMD</vendor>
Nov 29 02:36:17 np0005539564 nova_compute[226295]:  <topology sockets="8" cores="1" threads="1"/>
Nov 29 02:36:17 np0005539564 nova_compute[226295]: </cpu>
Nov 29 02:36:17 np0005539564 nova_compute[226295]: _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537#033[00m
Nov 29 02:36:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:17.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:18 np0005539564 nova_compute[226295]: 2025-11-29 07:36:18.010 226310 DEBUG nova.scheduler.client.report [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Updated inventory for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 29 02:36:18 np0005539564 nova_compute[226295]: 2025-11-29 07:36:18.011 226310 DEBUG nova.compute.provider_tree [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Updating resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 29 02:36:18 np0005539564 nova_compute[226295]: 2025-11-29 07:36:18.011 226310 DEBUG nova.compute.provider_tree [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Updating inventory in ProviderTree for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:36:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:18.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:18 np0005539564 nova_compute[226295]: 2025-11-29 07:36:18.308 226310 DEBUG nova.compute.provider_tree [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Updating resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 29 02:36:18 np0005539564 nova_compute[226295]: 2025-11-29 07:36:18.358 226310 DEBUG nova.compute.resource_tracker [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:36:18 np0005539564 nova_compute[226295]: 2025-11-29 07:36:18.359 226310 DEBUG oslo_concurrency.lockutils [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.450s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:36:18 np0005539564 nova_compute[226295]: 2025-11-29 07:36:18.359 226310 DEBUG nova.service [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Nov 29 02:36:18 np0005539564 nova_compute[226295]: 2025-11-29 07:36:18.502 226310 DEBUG nova.service [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Nov 29 02:36:18 np0005539564 nova_compute[226295]: 2025-11-29 07:36:18.503 226310 DEBUG nova.servicegroup.drivers.db [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] DB_Driver: join new ServiceGroup member compute-1.ctlplane.example.com to the compute group, service = <Service: host=compute-1.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Nov 29 02:36:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:19.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:20.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:36:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:21.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:22.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:23 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:36:23 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:36:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:23.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:24.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:24 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:36:24 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:36:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 02:36:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:25.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 02:36:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:36:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:26.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:27.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:28.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:29.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:30.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:36:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:31.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:32.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:33.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:34.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:35.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:36:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:36.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:37.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:38.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:39.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:40.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:36:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:41.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:42.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:42 np0005539564 nova_compute[226295]: 2025-11-29 07:36:42.505 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:36:42 np0005539564 podman[226869]: 2025-11-29 07:36:42.532787875 +0000 UTC m=+0.081749157 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 02:36:42 np0005539564 podman[226868]: 2025-11-29 07:36:42.537249975 +0000 UTC m=+0.092002725 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Nov 29 02:36:42 np0005539564 nova_compute[226295]: 2025-11-29 07:36:42.681 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:36:43 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:36:43 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:36:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:43.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:44.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:45.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:46.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:36:47 np0005539564 podman[226912]: 2025-11-29 07:36:47.503980905 +0000 UTC m=+0.064792545 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 02:36:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:36:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:47.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:36:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:48.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:49.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:50.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:51.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:52.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:36:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 02:36:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:53.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 02:36:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:54.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:55.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:36:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:56.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:36:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:36:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:57.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:58.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:36:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:36:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:59.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:37:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:00.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:01.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:02.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:37:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:37:03.686 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:37:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:37:03.687 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:37:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:37:03.687 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:37:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:03.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:04.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:05.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:06.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:37:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:07.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:08.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:10.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:10.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:37:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:12.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:37:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:37:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:12.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:13 np0005539564 podman[226932]: 2025-11-29 07:37:13.493395451 +0000 UTC m=+0.051189286 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:37:13 np0005539564 podman[226931]: 2025-11-29 07:37:13.569904064 +0000 UTC m=+0.131935304 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:37:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:14.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:14.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:14 np0005539564 nova_compute[226295]: 2025-11-29 07:37:14.345 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:37:14 np0005539564 nova_compute[226295]: 2025-11-29 07:37:14.346 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:37:14 np0005539564 nova_compute[226295]: 2025-11-29 07:37:14.346 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:37:14 np0005539564 nova_compute[226295]: 2025-11-29 07:37:14.346 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:37:14 np0005539564 nova_compute[226295]: 2025-11-29 07:37:14.450 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:37:14 np0005539564 nova_compute[226295]: 2025-11-29 07:37:14.450 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:37:14 np0005539564 nova_compute[226295]: 2025-11-29 07:37:14.451 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:37:14 np0005539564 nova_compute[226295]: 2025-11-29 07:37:14.451 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:37:14 np0005539564 nova_compute[226295]: 2025-11-29 07:37:14.451 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:37:14 np0005539564 nova_compute[226295]: 2025-11-29 07:37:14.452 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:37:14 np0005539564 nova_compute[226295]: 2025-11-29 07:37:14.452 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:37:14 np0005539564 nova_compute[226295]: 2025-11-29 07:37:14.452 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:37:14 np0005539564 nova_compute[226295]: 2025-11-29 07:37:14.452 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:37:14 np0005539564 nova_compute[226295]: 2025-11-29 07:37:14.503 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:37:14 np0005539564 nova_compute[226295]: 2025-11-29 07:37:14.503 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:37:14 np0005539564 nova_compute[226295]: 2025-11-29 07:37:14.503 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:37:14 np0005539564 nova_compute[226295]: 2025-11-29 07:37:14.503 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:37:14 np0005539564 nova_compute[226295]: 2025-11-29 07:37:14.504 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:37:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:37:14 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2266064152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:37:14 np0005539564 nova_compute[226295]: 2025-11-29 07:37:14.988 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:37:15 np0005539564 nova_compute[226295]: 2025-11-29 07:37:15.259 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:37:15 np0005539564 nova_compute[226295]: 2025-11-29 07:37:15.261 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5337MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:37:15 np0005539564 nova_compute[226295]: 2025-11-29 07:37:15.261 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:37:15 np0005539564 nova_compute[226295]: 2025-11-29 07:37:15.261 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:37:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:16.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:16.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:16 np0005539564 nova_compute[226295]: 2025-11-29 07:37:16.229 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:37:16 np0005539564 nova_compute[226295]: 2025-11-29 07:37:16.229 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:37:16 np0005539564 nova_compute[226295]: 2025-11-29 07:37:16.272 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:37:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:37:16 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/115100883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:37:16 np0005539564 nova_compute[226295]: 2025-11-29 07:37:16.675 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:37:16 np0005539564 nova_compute[226295]: 2025-11-29 07:37:16.683 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:37:16 np0005539564 nova_compute[226295]: 2025-11-29 07:37:16.931 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:37:16 np0005539564 nova_compute[226295]: 2025-11-29 07:37:16.933 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:37:16 np0005539564 nova_compute[226295]: 2025-11-29 07:37:16.933 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:37:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:37:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:18.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:18.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:18 np0005539564 podman[227019]: 2025-11-29 07:37:18.529111279 +0000 UTC m=+0.083072163 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 02:37:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:20.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:37:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:20.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:37:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:22.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:37:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:22.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:24.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:24.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:26.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:26.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:37:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:28.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:28.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:30.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:30.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:32.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:37:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:32.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:34.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:34.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:36.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:36.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:37:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 02:37:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:38.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 02:37:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:38.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:40.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:40.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:42.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:42.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:37:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:44.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:44.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:46.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:46.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:48.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:48.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:48 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:37:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:50.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:50.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:52.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:52.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:52 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:37:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:54.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:54.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:56.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:56.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:56 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:37:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:58.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:37:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:58.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:00.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:00.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:38:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:38:00 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:38:00 np0005539564 podman[227213]: 2025-11-29 07:38:00.996270511 +0000 UTC m=+17.894073978 container exec 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 02:38:01 np0005539564 podman[227227]: 2025-11-29 07:38:01.174791172 +0000 UTC m=+16.727232674 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Nov 29 02:38:01 np0005539564 podman[227228]: 2025-11-29 07:38:01.195721252 +0000 UTC m=+16.741901883 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:38:01 np0005539564 podman[227248]: 2025-11-29 07:38:01.206874476 +0000 UTC m=+11.753903695 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:38:01 np0005539564 podman[227296]: 2025-11-29 07:38:01.212221751 +0000 UTC m=+0.065211107 container exec_died 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:38:01 np0005539564 podman[227213]: 2025-11-29 07:38:01.388699187 +0000 UTC m=+18.286502644 container exec_died 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 02:38:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:02.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:02.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:38:03.686 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:38:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:38:03.687 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:38:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:38:03.687 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:38:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:04.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:04.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:04 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:38:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:06.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:06.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:06 np0005539564 ceph-mon[81769]: paxos.2).electionLogic(47) init, last seen epoch 47, mid-election, bumping
Nov 29 02:38:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:38:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:38:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:38:07 np0005539564 ceph-mon[81769]: mon.compute-2 calling monitor election
Nov 29 02:38:07 np0005539564 ceph-mon[81769]: mon.compute-0 calling monitor election
Nov 29 02:38:07 np0005539564 ceph-mon[81769]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:38:07 np0005539564 ceph-mon[81769]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Nov 29 02:38:07 np0005539564 ceph-mon[81769]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:38:07 np0005539564 ceph-mon[81769]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:38:07 np0005539564 ceph-mon[81769]:    mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Nov 29 02:38:07 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:38:07 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:38:07 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:38:07 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:38:07 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:38:07 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:38:07 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:38:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:38:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:08.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:08.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:09 np0005539564 ceph-mon[81769]: mon.compute-2 calling monitor election
Nov 29 02:38:09 np0005539564 ceph-mon[81769]: mon.compute-0 calling monitor election
Nov 29 02:38:09 np0005539564 ceph-mon[81769]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:38:09 np0005539564 ceph-mon[81769]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Nov 29 02:38:09 np0005539564 ceph-mon[81769]: Cluster is now healthy
Nov 29 02:38:09 np0005539564 ceph-mon[81769]: overall HEALTH_OK
Nov 29 02:38:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:10.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:10.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:38:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:12.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:12.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:14.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:14.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:38:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:16.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:16.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:38:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:38:16 np0005539564 nova_compute[226295]: 2025-11-29 07:38:16.923 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:38:16 np0005539564 nova_compute[226295]: 2025-11-29 07:38:16.923 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:38:16 np0005539564 nova_compute[226295]: 2025-11-29 07:38:16.953 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:38:16 np0005539564 nova_compute[226295]: 2025-11-29 07:38:16.954 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:38:16 np0005539564 nova_compute[226295]: 2025-11-29 07:38:16.954 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:38:16 np0005539564 nova_compute[226295]: 2025-11-29 07:38:16.968 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:38:16 np0005539564 nova_compute[226295]: 2025-11-29 07:38:16.968 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:38:16 np0005539564 nova_compute[226295]: 2025-11-29 07:38:16.969 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:38:16 np0005539564 nova_compute[226295]: 2025-11-29 07:38:16.969 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:38:16 np0005539564 nova_compute[226295]: 2025-11-29 07:38:16.969 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:38:16 np0005539564 nova_compute[226295]: 2025-11-29 07:38:16.969 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:38:16 np0005539564 nova_compute[226295]: 2025-11-29 07:38:16.970 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:38:16 np0005539564 nova_compute[226295]: 2025-11-29 07:38:16.970 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:38:16 np0005539564 nova_compute[226295]: 2025-11-29 07:38:16.970 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:38:16 np0005539564 nova_compute[226295]: 2025-11-29 07:38:16.998 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:38:16 np0005539564 nova_compute[226295]: 2025-11-29 07:38:16.998 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:38:16 np0005539564 nova_compute[226295]: 2025-11-29 07:38:16.998 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:38:16 np0005539564 nova_compute[226295]: 2025-11-29 07:38:16.999 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:38:17 np0005539564 nova_compute[226295]: 2025-11-29 07:38:16.999 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:38:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:38:17 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3537593331' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:38:17 np0005539564 nova_compute[226295]: 2025-11-29 07:38:17.455 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:38:17 np0005539564 nova_compute[226295]: 2025-11-29 07:38:17.620 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:38:17 np0005539564 nova_compute[226295]: 2025-11-29 07:38:17.621 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5327MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:38:17 np0005539564 nova_compute[226295]: 2025-11-29 07:38:17.622 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:38:17 np0005539564 nova_compute[226295]: 2025-11-29 07:38:17.622 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:38:17 np0005539564 nova_compute[226295]: 2025-11-29 07:38:17.704 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:38:17 np0005539564 nova_compute[226295]: 2025-11-29 07:38:17.704 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:38:17 np0005539564 nova_compute[226295]: 2025-11-29 07:38:17.731 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:38:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:18.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:38:18 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1337845297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:38:18 np0005539564 nova_compute[226295]: 2025-11-29 07:38:18.238 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:38:18 np0005539564 nova_compute[226295]: 2025-11-29 07:38:18.244 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:38:18 np0005539564 nova_compute[226295]: 2025-11-29 07:38:18.276 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:38:18 np0005539564 nova_compute[226295]: 2025-11-29 07:38:18.278 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:38:18 np0005539564 nova_compute[226295]: 2025-11-29 07:38:18.279 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:38:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:18.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:20.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:20.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:38:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:38:21.757 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:38:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:38:21.758 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:38:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:38:21.759 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:38:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:22.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:22.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:24.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:24.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:38:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:26.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:26.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:38:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1069074270' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:38:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:38:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1069074270' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:38:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:28.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:28.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:30.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:30.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:38:31 np0005539564 podman[227628]: 2025-11-29 07:38:31.524015299 +0000 UTC m=+0.070657775 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:38:31 np0005539564 podman[227627]: 2025-11-29 07:38:31.540088607 +0000 UTC m=+0.086418154 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 02:38:31 np0005539564 podman[227626]: 2025-11-29 07:38:31.591213019 +0000 UTC m=+0.136961461 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:38:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:32.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:32.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:34.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:34.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:38:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:36.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:36.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:38.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:38.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:40.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:40.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:38:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:42.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:42.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:44.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:44.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:38:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:46.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:46.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:48.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:48.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:50.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:50.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:38:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:52.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:52.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:54.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:54.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:38:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:56.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:56.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:58.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:38:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:58.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:00.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 02:39:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:00.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 02:39:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:39:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:02.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:02 np0005539564 podman[227691]: 2025-11-29 07:39:02.508729668 +0000 UTC m=+0.063948862 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 02:39:02 np0005539564 podman[227692]: 2025-11-29 07:39:02.529901974 +0000 UTC m=+0.084454440 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 02:39:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:02.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:02 np0005539564 podman[227690]: 2025-11-29 07:39:02.556633643 +0000 UTC m=+0.111621121 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 02:39:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:39:03.687 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:39:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:39:03.687 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:39:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:39:03.687 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:39:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:04.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:04.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:39:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:06.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:06.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:08.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:08.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:39:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:10.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:39:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:10.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:39:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:39:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:12.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:39:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:12.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:14.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:14.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:39:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:16.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:16.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:39:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:39:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:39:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:18.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:18 np0005539564 nova_compute[226295]: 2025-11-29 07:39:18.281 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:39:18 np0005539564 nova_compute[226295]: 2025-11-29 07:39:18.282 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:39:18 np0005539564 nova_compute[226295]: 2025-11-29 07:39:18.282 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:39:18 np0005539564 nova_compute[226295]: 2025-11-29 07:39:18.282 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:39:18 np0005539564 nova_compute[226295]: 2025-11-29 07:39:18.313 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:39:18 np0005539564 nova_compute[226295]: 2025-11-29 07:39:18.313 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:39:18 np0005539564 nova_compute[226295]: 2025-11-29 07:39:18.313 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:39:18 np0005539564 nova_compute[226295]: 2025-11-29 07:39:18.313 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:39:18 np0005539564 nova_compute[226295]: 2025-11-29 07:39:18.314 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:39:18 np0005539564 nova_compute[226295]: 2025-11-29 07:39:18.314 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:39:18 np0005539564 nova_compute[226295]: 2025-11-29 07:39:18.314 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:39:18 np0005539564 nova_compute[226295]: 2025-11-29 07:39:18.314 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:39:18 np0005539564 nova_compute[226295]: 2025-11-29 07:39:18.314 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:39:18 np0005539564 nova_compute[226295]: 2025-11-29 07:39:18.372 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:39:18 np0005539564 nova_compute[226295]: 2025-11-29 07:39:18.372 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:39:18 np0005539564 nova_compute[226295]: 2025-11-29 07:39:18.373 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:39:18 np0005539564 nova_compute[226295]: 2025-11-29 07:39:18.373 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:39:18 np0005539564 nova_compute[226295]: 2025-11-29 07:39:18.373 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:39:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:39:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:18.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:39:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:39:18 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4283126515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:39:18 np0005539564 nova_compute[226295]: 2025-11-29 07:39:18.807 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:39:18 np0005539564 nova_compute[226295]: 2025-11-29 07:39:18.979 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:39:18 np0005539564 nova_compute[226295]: 2025-11-29 07:39:18.981 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5327MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:39:18 np0005539564 nova_compute[226295]: 2025-11-29 07:39:18.981 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:39:18 np0005539564 nova_compute[226295]: 2025-11-29 07:39:18.981 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:39:19 np0005539564 nova_compute[226295]: 2025-11-29 07:39:19.065 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:39:19 np0005539564 nova_compute[226295]: 2025-11-29 07:39:19.065 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:39:19 np0005539564 nova_compute[226295]: 2025-11-29 07:39:19.081 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:39:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:39:19 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3261707078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:39:19 np0005539564 nova_compute[226295]: 2025-11-29 07:39:19.562 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:39:19 np0005539564 nova_compute[226295]: 2025-11-29 07:39:19.568 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:39:19 np0005539564 nova_compute[226295]: 2025-11-29 07:39:19.592 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:39:19 np0005539564 nova_compute[226295]: 2025-11-29 07:39:19.593 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:39:19 np0005539564 nova_compute[226295]: 2025-11-29 07:39:19.594 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:39:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:20.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:20.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:39:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:22.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:22.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:24.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:24.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:39:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:26.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:26.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:28.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:28.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:39:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:39:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:30.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:30.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:39:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:32.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:32.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:33 np0005539564 podman[227982]: 2025-11-29 07:39:33.513837831 +0000 UTC m=+0.057180808 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 29 02:39:33 np0005539564 podman[227983]: 2025-11-29 07:39:33.51380547 +0000 UTC m=+0.061315451 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:39:33 np0005539564 podman[227981]: 2025-11-29 07:39:33.535382198 +0000 UTC m=+0.089572801 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:39:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:34.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:34.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:39:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:36.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:36.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:38.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:38.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:40.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:40.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:39:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:42.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:42.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:44.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:44.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:39:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:39:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:46.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:39:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:46.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:48.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:48.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:50.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:50.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:51 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:39:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - - [29/Nov/2025:07:39:52.103 +0000] "GET /swift/info HTTP/1.1" 200 509 - "python-urllib3/1.26.5" - latency=0.000000000s
Nov 29 02:39:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:52.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:52.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:54.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:54.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:56 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:39:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:56.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:39:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:56.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:39:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:58.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:39:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:58.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e136 e136: 3 total, 3 up, 3 in
Nov 29 02:40:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:00.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:00.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:02.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:02.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e137 e137: 3 total, 3 up, 3 in
Nov 29 02:40:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:40:03.688 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:40:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:40:03.689 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:40:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:40:03.689 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:40:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:04.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:04 np0005539564 podman[228042]: 2025-11-29 07:40:04.537304116 +0000 UTC m=+0.070461438 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2)
Nov 29 02:40:04 np0005539564 podman[228043]: 2025-11-29 07:40:04.538128238 +0000 UTC m=+0.078345611 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 02:40:04 np0005539564 podman[228041]: 2025-11-29 07:40:04.571580452 +0000 UTC m=+0.111036584 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 02:40:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:04.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:06.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:06 np0005539564 ceph-mon[81769]: overall HEALTH_OK
Nov 29 02:40:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:06.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:08.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:08.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:10.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:10.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:12.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:12.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:14.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:14.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:16.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:16.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:17 np0005539564 nova_compute[226295]: 2025-11-29 07:40:17.648 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:40:17 np0005539564 nova_compute[226295]: 2025-11-29 07:40:17.676 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:40:17 np0005539564 nova_compute[226295]: 2025-11-29 07:40:17.677 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:40:17 np0005539564 nova_compute[226295]: 2025-11-29 07:40:17.677 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:40:17 np0005539564 nova_compute[226295]: 2025-11-29 07:40:17.688 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:40:17 np0005539564 nova_compute[226295]: 2025-11-29 07:40:17.689 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:40:17 np0005539564 nova_compute[226295]: 2025-11-29 07:40:17.690 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:40:17 np0005539564 nova_compute[226295]: 2025-11-29 07:40:17.690 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:40:17 np0005539564 nova_compute[226295]: 2025-11-29 07:40:17.690 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:40:17 np0005539564 nova_compute[226295]: 2025-11-29 07:40:17.690 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:40:17 np0005539564 nova_compute[226295]: 2025-11-29 07:40:17.691 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:40:17 np0005539564 nova_compute[226295]: 2025-11-29 07:40:17.691 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:40:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:18.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:18.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:19 np0005539564 nova_compute[226295]: 2025-11-29 07:40:19.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:40:19 np0005539564 nova_compute[226295]: 2025-11-29 07:40:19.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:40:19 np0005539564 nova_compute[226295]: 2025-11-29 07:40:19.387 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:40:19 np0005539564 nova_compute[226295]: 2025-11-29 07:40:19.388 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:40:19 np0005539564 nova_compute[226295]: 2025-11-29 07:40:19.388 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:40:19 np0005539564 nova_compute[226295]: 2025-11-29 07:40:19.389 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:40:19 np0005539564 nova_compute[226295]: 2025-11-29 07:40:19.389 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:40:19 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Nov 29 02:40:19 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:40:19.392179) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:40:19 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Nov 29 02:40:19 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402019392231, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2306, "num_deletes": 251, "total_data_size": 5739154, "memory_usage": 5818928, "flush_reason": "Manual Compaction"}
Nov 29 02:40:19 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Nov 29 02:40:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e138 e138: 3 total, 3 up, 3 in
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402020193633, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 3821160, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18240, "largest_seqno": 20541, "table_properties": {"data_size": 3811574, "index_size": 6016, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20105, "raw_average_key_size": 20, "raw_value_size": 3792071, "raw_average_value_size": 3869, "num_data_blocks": 268, "num_entries": 980, "num_filter_entries": 980, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401768, "oldest_key_time": 1764401768, "file_creation_time": 1764402019, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 801525 microseconds, and 16193 cpu microseconds.
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:40:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:20.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:40:20.193701) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 3821160 bytes OK
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:40:20.193729) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:40:20.284180) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:40:20.284223) EVENT_LOG_v1 {"time_micros": 1764402020284212, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:40:20.284249) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 5728833, prev total WAL file size 5730614, number of live WAL files 2.
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:40:20.286983) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(3731KB)], [36(8540KB)]
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402020287041, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 12566493, "oldest_snapshot_seqno": -1}
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 4840 keys, 10483712 bytes, temperature: kUnknown
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402020600244, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 10483712, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10448457, "index_size": 22025, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12165, "raw_key_size": 122042, "raw_average_key_size": 25, "raw_value_size": 10357831, "raw_average_value_size": 2140, "num_data_blocks": 910, "num_entries": 4840, "num_filter_entries": 4840, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764402020, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:40:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:20.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:40:20.600581) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 10483712 bytes
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:40:20.746766) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 40.1 rd, 33.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 8.3 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 5369, records dropped: 529 output_compression: NoCompression
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:40:20.746800) EVENT_LOG_v1 {"time_micros": 1764402020746787, "job": 20, "event": "compaction_finished", "compaction_time_micros": 313282, "compaction_time_cpu_micros": 43578, "output_level": 6, "num_output_files": 1, "total_output_size": 10483712, "num_input_records": 5369, "num_output_records": 4840, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402020747487, "job": 20, "event": "table_file_deletion", "file_number": 38}
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402020748794, "job": 20, "event": "table_file_deletion", "file_number": 36}
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:40:20.286825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:40:20.748851) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:40:20.748854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:40:20.748855) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:40:20.748857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:40:20.748858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:40:20 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3744940819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:40:20 np0005539564 nova_compute[226295]: 2025-11-29 07:40:20.925 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:40:21 np0005539564 nova_compute[226295]: 2025-11-29 07:40:21.119 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:40:21 np0005539564 nova_compute[226295]: 2025-11-29 07:40:21.121 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5334MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:40:21 np0005539564 nova_compute[226295]: 2025-11-29 07:40:21.122 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:40:21 np0005539564 nova_compute[226295]: 2025-11-29 07:40:21.122 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:40:21 np0005539564 nova_compute[226295]: 2025-11-29 07:40:21.203 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:40:21 np0005539564 nova_compute[226295]: 2025-11-29 07:40:21.203 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:40:21 np0005539564 nova_compute[226295]: 2025-11-29 07:40:21.225 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:40:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:40:21 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/157036288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:40:21 np0005539564 nova_compute[226295]: 2025-11-29 07:40:21.692 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:40:21 np0005539564 nova_compute[226295]: 2025-11-29 07:40:21.697 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:40:21 np0005539564 nova_compute[226295]: 2025-11-29 07:40:21.719 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:40:21 np0005539564 nova_compute[226295]: 2025-11-29 07:40:21.720 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:40:21 np0005539564 nova_compute[226295]: 2025-11-29 07:40:21.720 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:40:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:22.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:22.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:24.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:24.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:24 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:40:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:26.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:26.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:28.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:28.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e139 e139: 3 total, 3 up, 3 in
Nov 29 02:40:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:30.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:30.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:32.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:32.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:34.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:34.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:35 np0005539564 podman[228403]: 2025-11-29 07:40:35.548988789 +0000 UTC m=+0.083570702 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:40:35 np0005539564 podman[228402]: 2025-11-29 07:40:35.585986699 +0000 UTC m=+0.121596389 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 02:40:35 np0005539564 podman[228401]: 2025-11-29 07:40:35.59672248 +0000 UTC m=+0.135256229 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 29 02:40:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:36.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:36.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:38.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:38.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:40.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:40.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:41 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:40:41 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:40:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:42.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:42.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:42 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:40:42 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:40:42 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:40:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:44.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:44.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:46.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:40:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 6875 writes, 26K keys, 6875 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6875 writes, 1421 syncs, 4.84 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 548 writes, 1028 keys, 548 commit groups, 1.0 writes per commit group, ingest: 0.44 MB, 0.00 MB/s#012Interval WAL: 548 writes, 253 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 02:40:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:46.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:48.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:48.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:48 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:40:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:50.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:50 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:40:50 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:40:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:50.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:52.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:52.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:52 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:40:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:54.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:54.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e140 e140: 3 total, 3 up, 3 in
Nov 29 02:40:56 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:56.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:56.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:58.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:40:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:58.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:00.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:00.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:02.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:41:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:02.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:41:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:41:03.690 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:41:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:41:03.690 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:41:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:41:03.690 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:41:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:04.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:04.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:06.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:06 np0005539564 podman[228463]: 2025-11-29 07:41:06.509248293 +0000 UTC m=+0.054984399 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:41:06 np0005539564 podman[228464]: 2025-11-29 07:41:06.523849598 +0000 UTC m=+0.064899207 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 02:41:06 np0005539564 podman[228462]: 2025-11-29 07:41:06.537672722 +0000 UTC m=+0.090407858 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:41:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:06.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:08.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:08.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:10.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:10.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:12.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:12.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:14 np0005539564 nova_compute[226295]: 2025-11-29 07:41:14.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:41:14 np0005539564 nova_compute[226295]: 2025-11-29 07:41:14.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 02:41:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:14.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:14 np0005539564 nova_compute[226295]: 2025-11-29 07:41:14.473 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 02:41:14 np0005539564 nova_compute[226295]: 2025-11-29 07:41:14.475 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:41:14 np0005539564 nova_compute[226295]: 2025-11-29 07:41:14.475 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 02:41:14 np0005539564 nova_compute[226295]: 2025-11-29 07:41:14.488 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:41:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:14.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:15 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:41:15 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:41:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:16.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:16 np0005539564 nova_compute[226295]: 2025-11-29 07:41:16.535 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:41:16 np0005539564 nova_compute[226295]: 2025-11-29 07:41:16.536 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:41:16 np0005539564 nova_compute[226295]: 2025-11-29 07:41:16.536 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:41:16 np0005539564 nova_compute[226295]: 2025-11-29 07:41:16.636 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:41:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:16.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:17 np0005539564 nova_compute[226295]: 2025-11-29 07:41:17.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:41:17 np0005539564 nova_compute[226295]: 2025-11-29 07:41:17.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:41:17 np0005539564 nova_compute[226295]: 2025-11-29 07:41:17.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:41:18 np0005539564 nova_compute[226295]: 2025-11-29 07:41:18.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:41:18 np0005539564 nova_compute[226295]: 2025-11-29 07:41:18.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:41:18 np0005539564 nova_compute[226295]: 2025-11-29 07:41:18.345 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:41:18 np0005539564 nova_compute[226295]: 2025-11-29 07:41:18.345 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:41:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:18.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:18.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:20 np0005539564 nova_compute[226295]: 2025-11-29 07:41:20.338 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:41:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:20.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:20.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:21 np0005539564 nova_compute[226295]: 2025-11-29 07:41:21.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:41:21 np0005539564 nova_compute[226295]: 2025-11-29 07:41:21.404 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:41:21 np0005539564 nova_compute[226295]: 2025-11-29 07:41:21.405 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:41:21 np0005539564 nova_compute[226295]: 2025-11-29 07:41:21.405 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:41:21 np0005539564 nova_compute[226295]: 2025-11-29 07:41:21.405 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:41:21 np0005539564 nova_compute[226295]: 2025-11-29 07:41:21.405 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:41:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:22.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:41:22 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2425489226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:41:22 np0005539564 nova_compute[226295]: 2025-11-29 07:41:22.785 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.380s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:41:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:22.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:22 np0005539564 nova_compute[226295]: 2025-11-29 07:41:22.987 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:41:22 np0005539564 nova_compute[226295]: 2025-11-29 07:41:22.988 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5343MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:41:22 np0005539564 nova_compute[226295]: 2025-11-29 07:41:22.989 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:41:22 np0005539564 nova_compute[226295]: 2025-11-29 07:41:22.989 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:41:23 np0005539564 nova_compute[226295]: 2025-11-29 07:41:23.375 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:41:23 np0005539564 nova_compute[226295]: 2025-11-29 07:41:23.376 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:41:23 np0005539564 nova_compute[226295]: 2025-11-29 07:41:23.482 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing inventories for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 02:41:23 np0005539564 nova_compute[226295]: 2025-11-29 07:41:23.847 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating ProviderTree inventory for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 02:41:23 np0005539564 nova_compute[226295]: 2025-11-29 07:41:23.848 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating inventory in ProviderTree for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:41:23 np0005539564 nova_compute[226295]: 2025-11-29 07:41:23.896 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing aggregate associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 02:41:23 np0005539564 nova_compute[226295]: 2025-11-29 07:41:23.930 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing trait associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 02:41:23 np0005539564 nova_compute[226295]: 2025-11-29 07:41:23.965 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:41:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:41:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:24.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:41:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:41:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:24.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:41:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:41:25.221 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:41:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:41:25.224 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:41:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:26.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:41:26 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3993350651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:41:26 np0005539564 nova_compute[226295]: 2025-11-29 07:41:26.733 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.769s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:41:26 np0005539564 nova_compute[226295]: 2025-11-29 07:41:26.743 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:41:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:26.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:26 np0005539564 nova_compute[226295]: 2025-11-29 07:41:26.817 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:41:26 np0005539564 nova_compute[226295]: 2025-11-29 07:41:26.820 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:41:26 np0005539564 nova_compute[226295]: 2025-11-29 07:41:26.821 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:41:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:28.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:41:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:28.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:41:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:41:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:30.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:41:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:41:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:30.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:41:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:41:32.228 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:41:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:32.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:41:32 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1930520548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:41:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:32.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:34.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:41:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 3875 writes, 21K keys, 3875 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 3875 writes, 3875 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 892 writes, 4425 keys, 892 commit groups, 1.0 writes per commit group, ingest: 9.73 MB, 0.02 MB/s#012Interval WAL: 892 writes, 892 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.5      1.93              0.10        10    0.193       0      0       0.0       0.0#012  L6      1/0   10.00 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.0     24.7     20.5      3.81              0.26         9    0.424     42K   4874       0.0       0.0#012 Sum      1/0   10.00 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.0     16.4     18.1      5.74              0.35        19    0.302     42K   4874       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8     14.9     16.3      2.09              0.13         6    0.348     15K   1569       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     24.7     20.5      3.81              0.26         9    0.424     42K   4874       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.5      1.93              0.10         9    0.214       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.025, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.09 GB read, 0.05 MB/s read, 5.7 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 2.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558dc73191f0#2 capacity: 304.00 MB usage: 7.77 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 8.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(408,7.40 MB,2.43546%) FilterBlock(19,126.05 KB,0.040491%) IndexBlock(19,246.36 KB,0.0791399%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 02:41:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:34.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:36.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:36.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:37 np0005539564 podman[228623]: 2025-11-29 07:41:37.495674472 +0000 UTC m=+0.055912944 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 02:41:37 np0005539564 podman[228622]: 2025-11-29 07:41:37.501299974 +0000 UTC m=+0.063975262 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:41:37 np0005539564 podman[228621]: 2025-11-29 07:41:37.532727424 +0000 UTC m=+0.095373051 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 02:41:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:38.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:38.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:41:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:40.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:41:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e141 e141: 3 total, 3 up, 3 in
Nov 29 02:41:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:40.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e142 e142: 3 total, 3 up, 3 in
Nov 29 02:41:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:42.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:42.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:44.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:44.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:46.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:46.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:48.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:48.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:50.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:50.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:52.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:52.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:54.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:54.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e143 e143: 3 total, 3 up, 3 in
Nov 29 02:41:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:56.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:56.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:41:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:58.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:41:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:41:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:58.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:00.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:00.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:02.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:02.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:42:03.691 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:42:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:42:03.692 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:42:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:42:03.692 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:42:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:04.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:04.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:06.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:06.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:08.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:08 np0005539564 podman[228687]: 2025-11-29 07:42:08.537896642 +0000 UTC m=+0.077113167 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:42:08 np0005539564 podman[228693]: 2025-11-29 07:42:08.538460977 +0000 UTC m=+0.067351042 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:42:08 np0005539564 podman[228686]: 2025-11-29 07:42:08.564704507 +0000 UTC m=+0.118291631 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 02:42:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:42:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:08.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:42:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:10.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:10.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:10 np0005539564 nova_compute[226295]: 2025-11-29 07:42:10.939 226310 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "5068c873-ee82-4faa-a05b-3df3ed25d792" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:42:10 np0005539564 nova_compute[226295]: 2025-11-29 07:42:10.939 226310 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "5068c873-ee82-4faa-a05b-3df3ed25d792" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:42:10 np0005539564 nova_compute[226295]: 2025-11-29 07:42:10.982 226310 DEBUG nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:42:11 np0005539564 nova_compute[226295]: 2025-11-29 07:42:11.123 226310 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:42:11 np0005539564 nova_compute[226295]: 2025-11-29 07:42:11.124 226310 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:42:11 np0005539564 nova_compute[226295]: 2025-11-29 07:42:11.132 226310 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:42:11 np0005539564 nova_compute[226295]: 2025-11-29 07:42:11.133 226310 INFO nova.compute.claims [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 02:42:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:11 np0005539564 nova_compute[226295]: 2025-11-29 07:42:11.327 226310 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:42:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:42:12 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1677040645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:42:12 np0005539564 nova_compute[226295]: 2025-11-29 07:42:12.080 226310 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.753s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:42:12 np0005539564 nova_compute[226295]: 2025-11-29 07:42:12.088 226310 DEBUG nova.compute.provider_tree [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:42:12 np0005539564 nova_compute[226295]: 2025-11-29 07:42:12.127 226310 DEBUG nova.scheduler.client.report [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:42:12 np0005539564 nova_compute[226295]: 2025-11-29 07:42:12.170 226310 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:42:12 np0005539564 nova_compute[226295]: 2025-11-29 07:42:12.172 226310 DEBUG nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:42:12 np0005539564 nova_compute[226295]: 2025-11-29 07:42:12.239 226310 DEBUG nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:42:12 np0005539564 nova_compute[226295]: 2025-11-29 07:42:12.240 226310 DEBUG nova.network.neutron [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:42:12 np0005539564 nova_compute[226295]: 2025-11-29 07:42:12.277 226310 INFO nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:42:12 np0005539564 nova_compute[226295]: 2025-11-29 07:42:12.322 226310 DEBUG nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:42:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:12.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:12 np0005539564 nova_compute[226295]: 2025-11-29 07:42:12.467 226310 DEBUG nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:42:12 np0005539564 nova_compute[226295]: 2025-11-29 07:42:12.470 226310 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:42:12 np0005539564 nova_compute[226295]: 2025-11-29 07:42:12.471 226310 INFO nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Creating image(s)#033[00m
Nov 29 02:42:12 np0005539564 nova_compute[226295]: 2025-11-29 07:42:12.764 226310 DEBUG nova.storage.rbd_utils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 5068c873-ee82-4faa-a05b-3df3ed25d792_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:42:12 np0005539564 nova_compute[226295]: 2025-11-29 07:42:12.799 226310 DEBUG nova.storage.rbd_utils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 5068c873-ee82-4faa-a05b-3df3ed25d792_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:42:12 np0005539564 nova_compute[226295]: 2025-11-29 07:42:12.825 226310 DEBUG nova.storage.rbd_utils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 5068c873-ee82-4faa-a05b-3df3ed25d792_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:42:12 np0005539564 nova_compute[226295]: 2025-11-29 07:42:12.828 226310 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:42:12 np0005539564 nova_compute[226295]: 2025-11-29 07:42:12.829 226310 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:42:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:12.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e144 e144: 3 total, 3 up, 3 in
Nov 29 02:42:14 np0005539564 nova_compute[226295]: 2025-11-29 07:42:14.047 226310 DEBUG nova.virt.libvirt.imagebackend [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Image locations are: [{'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/1be11678-cfa4-4dee-b54c-6c7e547e5a6a/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/1be11678-cfa4-4dee-b54c-6c7e547e5a6a/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 02:42:14 np0005539564 nova_compute[226295]: 2025-11-29 07:42:14.213 226310 DEBUG nova.network.neutron [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Automatically allocating a network for project 0d3a6ccbb2794f6e85d683953ac4b5fd. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460#033[00m
Nov 29 02:42:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:14.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:14.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:16.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:16.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:18.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:18.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:20.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e145 e145: 3 total, 3 up, 3 in
Nov 29 02:42:20 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.096296310s, txc = 0x55ba5043a900
Nov 29 02:42:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:20.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:22.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:22 np0005539564 nova_compute[226295]: 2025-11-29 07:42:22.822 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:42:22 np0005539564 nova_compute[226295]: 2025-11-29 07:42:22.822 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:42:22 np0005539564 nova_compute[226295]: 2025-11-29 07:42:22.842 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:42:22 np0005539564 nova_compute[226295]: 2025-11-29 07:42:22.842 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:42:22 np0005539564 nova_compute[226295]: 2025-11-29 07:42:22.842 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:42:22 np0005539564 nova_compute[226295]: 2025-11-29 07:42:22.856 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 02:42:22 np0005539564 nova_compute[226295]: 2025-11-29 07:42:22.857 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:42:22 np0005539564 nova_compute[226295]: 2025-11-29 07:42:22.857 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:42:22 np0005539564 nova_compute[226295]: 2025-11-29 07:42:22.857 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:42:22 np0005539564 nova_compute[226295]: 2025-11-29 07:42:22.857 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:42:22 np0005539564 nova_compute[226295]: 2025-11-29 07:42:22.858 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:42:22 np0005539564 nova_compute[226295]: 2025-11-29 07:42:22.858 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:42:22 np0005539564 nova_compute[226295]: 2025-11-29 07:42:22.858 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:42:22 np0005539564 nova_compute[226295]: 2025-11-29 07:42:22.858 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:42:22 np0005539564 nova_compute[226295]: 2025-11-29 07:42:22.859 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:42:22 np0005539564 nova_compute[226295]: 2025-11-29 07:42:22.886 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:42:22 np0005539564 nova_compute[226295]: 2025-11-29 07:42:22.887 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:42:22 np0005539564 nova_compute[226295]: 2025-11-29 07:42:22.887 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:42:22 np0005539564 nova_compute[226295]: 2025-11-29 07:42:22.887 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:42:22 np0005539564 nova_compute[226295]: 2025-11-29 07:42:22.888 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:42:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:42:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:22.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:42:22 np0005539564 nova_compute[226295]: 2025-11-29 07:42:22.940 226310 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:42:22 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:22 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:23 np0005539564 nova_compute[226295]: 2025-11-29 07:42:23.020 226310 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf.part --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:42:23 np0005539564 nova_compute[226295]: 2025-11-29 07:42:23.021 226310 DEBUG nova.virt.images [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] 1be11678-cfa4-4dee-b54c-6c7e547e5a6a was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Nov 29 02:42:23 np0005539564 nova_compute[226295]: 2025-11-29 07:42:23.022 226310 DEBUG nova.privsep.utils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 29 02:42:23 np0005539564 nova_compute[226295]: 2025-11-29 07:42:23.023 226310 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf.part /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:42:23 np0005539564 nova_compute[226295]: 2025-11-29 07:42:23.685 226310 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf.part /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf.converted" returned: 0 in 0.662s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:42:23 np0005539564 nova_compute[226295]: 2025-11-29 07:42:23.691 226310 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:42:23 np0005539564 nova_compute[226295]: 2025-11-29 07:42:23.927 226310 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf.converted --force-share --output=json" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:42:23 np0005539564 nova_compute[226295]: 2025-11-29 07:42:23.928 226310 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 11.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:42:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:42:23 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1872631784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:42:24 np0005539564 nova_compute[226295]: 2025-11-29 07:42:24.152 226310 DEBUG nova.storage.rbd_utils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 5068c873-ee82-4faa-a05b-3df3ed25d792_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:42:24 np0005539564 nova_compute[226295]: 2025-11-29 07:42:24.155 226310 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 5068c873-ee82-4faa-a05b-3df3ed25d792_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:42:24 np0005539564 nova_compute[226295]: 2025-11-29 07:42:24.171 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:42:24 np0005539564 nova_compute[226295]: 2025-11-29 07:42:24.352 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:42:24 np0005539564 nova_compute[226295]: 2025-11-29 07:42:24.354 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5278MB free_disk=20.933074951171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:42:24 np0005539564 nova_compute[226295]: 2025-11-29 07:42:24.354 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:42:24 np0005539564 nova_compute[226295]: 2025-11-29 07:42:24.354 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:42:24 np0005539564 nova_compute[226295]: 2025-11-29 07:42:24.464 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 5068c873-ee82-4faa-a05b-3df3ed25d792 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:42:24 np0005539564 nova_compute[226295]: 2025-11-29 07:42:24.464 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:42:24 np0005539564 nova_compute[226295]: 2025-11-29 07:42:24.465 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:42:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:24.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:24 np0005539564 nova_compute[226295]: 2025-11-29 07:42:24.506 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:42:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:24.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:25 np0005539564 podman[229282]: 2025-11-29 07:42:25.169899213 +0000 UTC m=+0.030819575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:42:25 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:25 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:25 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:25 np0005539564 podman[229282]: 2025-11-29 07:42:25.700739041 +0000 UTC m=+0.561659403 container create 65418f025b020824a17f54cfb41f8ffd4c8e6b8ea902e4843acbc984bfcd42c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 02:42:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:42:25.791 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:42:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:42:25.792 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:42:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:26.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:26.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:27 np0005539564 systemd[1]: Started libpod-conmon-65418f025b020824a17f54cfb41f8ffd4c8e6b8ea902e4843acbc984bfcd42c9.scope.
Nov 29 02:42:27 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:42:27 np0005539564 nova_compute[226295]: 2025-11-29 07:42:27.543 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:42:27 np0005539564 nova_compute[226295]: 2025-11-29 07:42:27.550 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating inventory in ProviderTree for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:42:27 np0005539564 nova_compute[226295]: 2025-11-29 07:42:27.603 226310 ERROR nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [req-8c0bcc77-5c43-471d-98bf-2db66234a0b4] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID ea190a43-1246-44b8-8f8b-a61b155a1d3b.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-8c0bcc77-5c43-471d-98bf-2db66234a0b4"}]}#033[00m
Nov 29 02:42:27 np0005539564 nova_compute[226295]: 2025-11-29 07:42:27.634 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing inventories for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 02:42:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:27 np0005539564 nova_compute[226295]: 2025-11-29 07:42:27.657 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating ProviderTree inventory for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 02:42:27 np0005539564 nova_compute[226295]: 2025-11-29 07:42:27.658 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating inventory in ProviderTree for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:42:27 np0005539564 nova_compute[226295]: 2025-11-29 07:42:27.680 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing aggregate associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 02:42:27 np0005539564 nova_compute[226295]: 2025-11-29 07:42:27.712 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing trait associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 02:42:27 np0005539564 nova_compute[226295]: 2025-11-29 07:42:27.757 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:42:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:28.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:28.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:29 np0005539564 podman[229282]: 2025-11-29 07:42:29.336584234 +0000 UTC m=+4.197504616 container init 65418f025b020824a17f54cfb41f8ffd4c8e6b8ea902e4843acbc984bfcd42c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 02:42:29 np0005539564 podman[229282]: 2025-11-29 07:42:29.34713421 +0000 UTC m=+4.208054572 container start 65418f025b020824a17f54cfb41f8ffd4c8e6b8ea902e4843acbc984bfcd42c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shockley, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:42:29 np0005539564 optimistic_shockley[229315]: 167 167
Nov 29 02:42:29 np0005539564 systemd[1]: libpod-65418f025b020824a17f54cfb41f8ffd4c8e6b8ea902e4843acbc984bfcd42c9.scope: Deactivated successfully.
Nov 29 02:42:29 np0005539564 podman[229282]: 2025-11-29 07:42:29.818773716 +0000 UTC m=+4.679694088 container attach 65418f025b020824a17f54cfb41f8ffd4c8e6b8ea902e4843acbc984bfcd42c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 02:42:29 np0005539564 podman[229282]: 2025-11-29 07:42:29.82114591 +0000 UTC m=+4.682066292 container died 65418f025b020824a17f54cfb41f8ffd4c8e6b8ea902e4843acbc984bfcd42c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:42:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:42:30 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1452225657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:42:30 np0005539564 nova_compute[226295]: 2025-11-29 07:42:30.411 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.654s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:42:30 np0005539564 nova_compute[226295]: 2025-11-29 07:42:30.416 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating inventory in ProviderTree for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:42:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:30.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:30 np0005539564 nova_compute[226295]: 2025-11-29 07:42:30.519 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updated inventory for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 29 02:42:30 np0005539564 nova_compute[226295]: 2025-11-29 07:42:30.520 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 29 02:42:30 np0005539564 nova_compute[226295]: 2025-11-29 07:42:30.521 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating inventory in ProviderTree for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:42:30 np0005539564 nova_compute[226295]: 2025-11-29 07:42:30.545 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:42:30 np0005539564 nova_compute[226295]: 2025-11-29 07:42:30.546 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 6.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:42:30 np0005539564 systemd[1]: var-lib-containers-storage-overlay-344b311f260abc5fcc6f3883925c56e75399c36e8dea24e19af75785f9388837-merged.mount: Deactivated successfully.
Nov 29 02:42:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:42:30.793 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:42:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:30.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:31 np0005539564 podman[229282]: 2025-11-29 07:42:31.925043076 +0000 UTC m=+6.785963418 container remove 65418f025b020824a17f54cfb41f8ffd4c8e6b8ea902e4843acbc984bfcd42c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:42:31 np0005539564 systemd[1]: libpod-conmon-65418f025b020824a17f54cfb41f8ffd4c8e6b8ea902e4843acbc984bfcd42c9.scope: Deactivated successfully.
Nov 29 02:42:31 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 02:42:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 e146: 3 total, 3 up, 3 in
Nov 29 02:42:31 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:32 np0005539564 podman[229367]: 2025-11-29 07:42:32.128020947 +0000 UTC m=+0.083408187 container create 9fe9d34decd01644a736f6ec9dec71ad90bf50995b7c0c0dcf050c8014d707af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 02:42:32 np0005539564 podman[229367]: 2025-11-29 07:42:32.067961382 +0000 UTC m=+0.023348632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:42:32 np0005539564 systemd[1]: Started libpod-conmon-9fe9d34decd01644a736f6ec9dec71ad90bf50995b7c0c0dcf050c8014d707af.scope.
Nov 29 02:42:32 np0005539564 nova_compute[226295]: 2025-11-29 07:42:32.184 226310 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 5068c873-ee82-4faa-a05b-3df3ed25d792_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 8.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:42:32 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:42:32 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a8f019cace7dbd35fe8598bbce497d6a92866e9492239818569fdea0c5d0b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:42:32 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a8f019cace7dbd35fe8598bbce497d6a92866e9492239818569fdea0c5d0b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:42:32 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a8f019cace7dbd35fe8598bbce497d6a92866e9492239818569fdea0c5d0b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:42:32 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a8f019cace7dbd35fe8598bbce497d6a92866e9492239818569fdea0c5d0b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:42:32 np0005539564 nova_compute[226295]: 2025-11-29 07:42:32.281 226310 DEBUG nova.storage.rbd_utils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] resizing rbd image 5068c873-ee82-4faa-a05b-3df3ed25d792_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:42:32 np0005539564 podman[229367]: 2025-11-29 07:42:32.359810085 +0000 UTC m=+0.315197405 container init 9fe9d34decd01644a736f6ec9dec71ad90bf50995b7c0c0dcf050c8014d707af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 29 02:42:32 np0005539564 podman[229367]: 2025-11-29 07:42:32.36698381 +0000 UTC m=+0.322371040 container start 9fe9d34decd01644a736f6ec9dec71ad90bf50995b7c0c0dcf050c8014d707af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_williamson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 02:42:32 np0005539564 podman[229367]: 2025-11-29 07:42:32.404545026 +0000 UTC m=+0.359932296 container attach 9fe9d34decd01644a736f6ec9dec71ad90bf50995b7c0c0dcf050c8014d707af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_williamson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:42:32 np0005539564 nova_compute[226295]: 2025-11-29 07:42:32.468 226310 DEBUG nova.objects.instance [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lazy-loading 'migration_context' on Instance uuid 5068c873-ee82-4faa-a05b-3df3ed25d792 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:42:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:32.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:32 np0005539564 nova_compute[226295]: 2025-11-29 07:42:32.490 226310 DEBUG oslo_concurrency.lockutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Acquiring lock "980ff2fe-9165-4009-82a9-3c3b2055f29a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:42:32 np0005539564 nova_compute[226295]: 2025-11-29 07:42:32.491 226310 DEBUG oslo_concurrency.lockutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Lock "980ff2fe-9165-4009-82a9-3c3b2055f29a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:42:32 np0005539564 nova_compute[226295]: 2025-11-29 07:42:32.495 226310 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:42:32 np0005539564 nova_compute[226295]: 2025-11-29 07:42:32.495 226310 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Ensure instance console log exists: /var/lib/nova/instances/5068c873-ee82-4faa-a05b-3df3ed25d792/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:42:32 np0005539564 nova_compute[226295]: 2025-11-29 07:42:32.496 226310 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:42:32 np0005539564 nova_compute[226295]: 2025-11-29 07:42:32.496 226310 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:42:32 np0005539564 nova_compute[226295]: 2025-11-29 07:42:32.496 226310 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:42:32 np0005539564 nova_compute[226295]: 2025-11-29 07:42:32.512 226310 DEBUG nova.compute.manager [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:42:32 np0005539564 nova_compute[226295]: 2025-11-29 07:42:32.575 226310 DEBUG oslo_concurrency.lockutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:42:32 np0005539564 nova_compute[226295]: 2025-11-29 07:42:32.575 226310 DEBUG oslo_concurrency.lockutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:42:32 np0005539564 nova_compute[226295]: 2025-11-29 07:42:32.582 226310 DEBUG nova.virt.hardware [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:42:32 np0005539564 nova_compute[226295]: 2025-11-29 07:42:32.583 226310 INFO nova.compute.claims [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 02:42:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:32 np0005539564 nova_compute[226295]: 2025-11-29 07:42:32.738 226310 DEBUG oslo_concurrency.processutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:42:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:32.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:32 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:42:33 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3316083430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:42:33 np0005539564 nova_compute[226295]: 2025-11-29 07:42:33.221 226310 DEBUG oslo_concurrency.processutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:42:33 np0005539564 nova_compute[226295]: 2025-11-29 07:42:33.228 226310 DEBUG nova.compute.provider_tree [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:42:33 np0005539564 nova_compute[226295]: 2025-11-29 07:42:33.242 226310 DEBUG nova.scheduler.client.report [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:42:33 np0005539564 nova_compute[226295]: 2025-11-29 07:42:33.268 226310 DEBUG oslo_concurrency.lockutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:42:33 np0005539564 nova_compute[226295]: 2025-11-29 07:42:33.269 226310 DEBUG nova.compute.manager [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:42:33 np0005539564 nova_compute[226295]: 2025-11-29 07:42:33.350 226310 DEBUG nova.compute.manager [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:42:33 np0005539564 nova_compute[226295]: 2025-11-29 07:42:33.351 226310 DEBUG nova.network.neutron [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:42:33 np0005539564 nova_compute[226295]: 2025-11-29 07:42:33.377 226310 INFO nova.virt.libvirt.driver [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:42:33 np0005539564 nova_compute[226295]: 2025-11-29 07:42:33.401 226310 DEBUG nova.compute.manager [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:42:33 np0005539564 nova_compute[226295]: 2025-11-29 07:42:33.489 226310 DEBUG nova.compute.manager [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:42:33 np0005539564 nova_compute[226295]: 2025-11-29 07:42:33.490 226310 DEBUG nova.virt.libvirt.driver [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:42:33 np0005539564 nova_compute[226295]: 2025-11-29 07:42:33.490 226310 INFO nova.virt.libvirt.driver [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Creating image(s)#033[00m
Nov 29 02:42:33 np0005539564 sad_williamson[229383]: [
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:    {
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:        "available": false,
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:        "ceph_device": false,
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:        "lsm_data": {},
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:        "lvs": [],
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:        "path": "/dev/sr0",
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:        "rejected_reasons": [
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:            "Has a FileSystem",
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:            "Insufficient space (<5GB)"
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:        ],
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:        "sys_api": {
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:            "actuators": null,
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:            "device_nodes": "sr0",
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:            "devname": "sr0",
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:            "human_readable_size": "482.00 KB",
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:            "id_bus": "ata",
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:            "model": "QEMU DVD-ROM",
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:            "nr_requests": "2",
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:            "parent": "/dev/sr0",
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:            "partitions": {},
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:            "path": "/dev/sr0",
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:            "removable": "1",
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:            "rev": "2.5+",
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:            "ro": "0",
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:            "rotational": "1",
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:            "sas_address": "",
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:            "sas_device_handle": "",
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:            "scheduler_mode": "mq-deadline",
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:            "sectors": 0,
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:            "sectorsize": "2048",
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:            "size": 493568.0,
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:            "support_discard": "2048",
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:            "type": "disk",
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:            "vendor": "QEMU"
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:        }
Nov 29 02:42:33 np0005539564 sad_williamson[229383]:    }
Nov 29 02:42:33 np0005539564 sad_williamson[229383]: ]
Nov 29 02:42:33 np0005539564 nova_compute[226295]: 2025-11-29 07:42:33.519 226310 DEBUG nova.storage.rbd_utils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] rbd image 980ff2fe-9165-4009-82a9-3c3b2055f29a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:42:33 np0005539564 systemd[1]: libpod-9fe9d34decd01644a736f6ec9dec71ad90bf50995b7c0c0dcf050c8014d707af.scope: Deactivated successfully.
Nov 29 02:42:33 np0005539564 systemd[1]: libpod-9fe9d34decd01644a736f6ec9dec71ad90bf50995b7c0c0dcf050c8014d707af.scope: Consumed 1.155s CPU time.
Nov 29 02:42:33 np0005539564 podman[229367]: 2025-11-29 07:42:33.527693695 +0000 UTC m=+1.483080925 container died 9fe9d34decd01644a736f6ec9dec71ad90bf50995b7c0c0dcf050c8014d707af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:42:33 np0005539564 nova_compute[226295]: 2025-11-29 07:42:33.551 226310 DEBUG nova.storage.rbd_utils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] rbd image 980ff2fe-9165-4009-82a9-3c3b2055f29a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:42:33 np0005539564 nova_compute[226295]: 2025-11-29 07:42:33.581 226310 DEBUG nova.storage.rbd_utils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] rbd image 980ff2fe-9165-4009-82a9-3c3b2055f29a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:42:33 np0005539564 nova_compute[226295]: 2025-11-29 07:42:33.585 226310 DEBUG oslo_concurrency.processutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:42:33 np0005539564 nova_compute[226295]: 2025-11-29 07:42:33.647 226310 DEBUG oslo_concurrency.processutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:42:33 np0005539564 nova_compute[226295]: 2025-11-29 07:42:33.648 226310 DEBUG oslo_concurrency.lockutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:42:33 np0005539564 nova_compute[226295]: 2025-11-29 07:42:33.648 226310 DEBUG oslo_concurrency.lockutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:42:33 np0005539564 nova_compute[226295]: 2025-11-29 07:42:33.649 226310 DEBUG oslo_concurrency.lockutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:42:33 np0005539564 nova_compute[226295]: 2025-11-29 07:42:33.680 226310 DEBUG nova.storage.rbd_utils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] rbd image 980ff2fe-9165-4009-82a9-3c3b2055f29a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:42:33 np0005539564 nova_compute[226295]: 2025-11-29 07:42:33.685 226310 DEBUG oslo_concurrency.processutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 980ff2fe-9165-4009-82a9-3c3b2055f29a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:42:33 np0005539564 nova_compute[226295]: 2025-11-29 07:42:33.777 226310 WARNING oslo_policy.policy [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Nov 29 02:42:33 np0005539564 nova_compute[226295]: 2025-11-29 07:42:33.777 226310 WARNING oslo_policy.policy [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Nov 29 02:42:33 np0005539564 nova_compute[226295]: 2025-11-29 07:42:33.780 226310 DEBUG nova.policy [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '197276e850054352a94a54ad4b0274be', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '422656fa169e49dcb91b5d4a8819f5ff', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:42:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:34.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:34.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:35 np0005539564 systemd[1]: var-lib-containers-storage-overlay-18a8f019cace7dbd35fe8598bbce497d6a92866e9492239818569fdea0c5d0b2-merged.mount: Deactivated successfully.
Nov 29 02:42:35 np0005539564 nova_compute[226295]: 2025-11-29 07:42:35.888 226310 DEBUG nova.network.neutron [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Successfully created port: d00ce7a8-c85b-4013-844e-d84467a05bff _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:42:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:36.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:36 np0005539564 nova_compute[226295]: 2025-11-29 07:42:36.829 226310 DEBUG nova.network.neutron [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Automatically allocated network: {'id': '6c117dd1-5064-4e69-b07c-c93c3d729d3c', 'name': 'auto_allocated_network', 'tenant_id': '0d3a6ccbb2794f6e85d683953ac4b5fd', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['54f77d9b-4fc0-4513-9e8e-0b66d5a5d1b2', 'd3409058-7381-4024-9d79-5f6d3aec308c'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2025-11-29T07:42:14Z', 'updated_at': '2025-11-29T07:42:25Z', 'revision_number': 4, 'project_id': '0d3a6ccbb2794f6e85d683953ac4b5fd'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478#033[00m
Nov 29 02:42:36 np0005539564 nova_compute[226295]: 2025-11-29 07:42:36.831 226310 DEBUG nova.policy [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cf2495f54add463c8ce9d2dd8623347c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d3a6ccbb2794f6e85d683953ac4b5fd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:42:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:42:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:36.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:42:37 np0005539564 podman[229367]: 2025-11-29 07:42:37.064077837 +0000 UTC m=+5.019465067 container remove 9fe9d34decd01644a736f6ec9dec71ad90bf50995b7c0c0dcf050c8014d707af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 02:42:37 np0005539564 systemd[1]: libpod-conmon-9fe9d34decd01644a736f6ec9dec71ad90bf50995b7c0c0dcf050c8014d707af.scope: Deactivated successfully.
Nov 29 02:42:37 np0005539564 nova_compute[226295]: 2025-11-29 07:42:37.095 226310 DEBUG oslo_concurrency.processutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 980ff2fe-9165-4009-82a9-3c3b2055f29a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:42:37 np0005539564 nova_compute[226295]: 2025-11-29 07:42:37.400 226310 DEBUG nova.network.neutron [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Successfully updated port: d00ce7a8-c85b-4013-844e-d84467a05bff _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:42:37 np0005539564 nova_compute[226295]: 2025-11-29 07:42:37.405 226310 DEBUG nova.compute.manager [req-2e2c4fe4-23f0-407f-8d1a-ba8649ea1bf4 req-d2e75415-6e71-4aaf-a3b8-b49c885dfeb1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Received event network-changed-d00ce7a8-c85b-4013-844e-d84467a05bff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:42:37 np0005539564 nova_compute[226295]: 2025-11-29 07:42:37.405 226310 DEBUG nova.compute.manager [req-2e2c4fe4-23f0-407f-8d1a-ba8649ea1bf4 req-d2e75415-6e71-4aaf-a3b8-b49c885dfeb1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Refreshing instance network info cache due to event network-changed-d00ce7a8-c85b-4013-844e-d84467a05bff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:42:37 np0005539564 nova_compute[226295]: 2025-11-29 07:42:37.405 226310 DEBUG oslo_concurrency.lockutils [req-2e2c4fe4-23f0-407f-8d1a-ba8649ea1bf4 req-d2e75415-6e71-4aaf-a3b8-b49c885dfeb1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-980ff2fe-9165-4009-82a9-3c3b2055f29a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:42:37 np0005539564 nova_compute[226295]: 2025-11-29 07:42:37.405 226310 DEBUG oslo_concurrency.lockutils [req-2e2c4fe4-23f0-407f-8d1a-ba8649ea1bf4 req-d2e75415-6e71-4aaf-a3b8-b49c885dfeb1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-980ff2fe-9165-4009-82a9-3c3b2055f29a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:42:37 np0005539564 nova_compute[226295]: 2025-11-29 07:42:37.406 226310 DEBUG nova.network.neutron [req-2e2c4fe4-23f0-407f-8d1a-ba8649ea1bf4 req-d2e75415-6e71-4aaf-a3b8-b49c885dfeb1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Refreshing network info cache for port d00ce7a8-c85b-4013-844e-d84467a05bff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:42:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:38.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:42:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:38.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:42:39 np0005539564 nova_compute[226295]: 2025-11-29 07:42:39.276 226310 DEBUG nova.network.neutron [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Successfully created port: bb62692a-9e57-4f29-a570-df3272a8f05c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:42:39 np0005539564 nova_compute[226295]: 2025-11-29 07:42:39.282 226310 DEBUG oslo_concurrency.lockutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Acquiring lock "refresh_cache-980ff2fe-9165-4009-82a9-3c3b2055f29a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:42:39 np0005539564 nova_compute[226295]: 2025-11-29 07:42:39.310 226310 DEBUG nova.storage.rbd_utils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] resizing rbd image 980ff2fe-9165-4009-82a9-3c3b2055f29a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:42:39 np0005539564 podman[230718]: 2025-11-29 07:42:39.511756712 +0000 UTC m=+0.067878807 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible)
Nov 29 02:42:39 np0005539564 podman[230717]: 2025-11-29 07:42:39.524842266 +0000 UTC m=+0.080182870 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 02:42:39 np0005539564 podman[230716]: 2025-11-29 07:42:39.55310233 +0000 UTC m=+0.105984908 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 02:42:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:40.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:40 np0005539564 nova_compute[226295]: 2025-11-29 07:42:40.856 226310 DEBUG nova.network.neutron [req-2e2c4fe4-23f0-407f-8d1a-ba8649ea1bf4 req-d2e75415-6e71-4aaf-a3b8-b49c885dfeb1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:42:40 np0005539564 nova_compute[226295]: 2025-11-29 07:42:40.865 226310 DEBUG nova.objects.instance [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Lazy-loading 'migration_context' on Instance uuid 980ff2fe-9165-4009-82a9-3c3b2055f29a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:42:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:40 np0005539564 nova_compute[226295]: 2025-11-29 07:42:40.883 226310 DEBUG nova.virt.libvirt.driver [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:42:40 np0005539564 nova_compute[226295]: 2025-11-29 07:42:40.884 226310 DEBUG nova.virt.libvirt.driver [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Ensure instance console log exists: /var/lib/nova/instances/980ff2fe-9165-4009-82a9-3c3b2055f29a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:42:40 np0005539564 nova_compute[226295]: 2025-11-29 07:42:40.884 226310 DEBUG oslo_concurrency.lockutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:42:40 np0005539564 nova_compute[226295]: 2025-11-29 07:42:40.884 226310 DEBUG oslo_concurrency.lockutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:42:40 np0005539564 nova_compute[226295]: 2025-11-29 07:42:40.884 226310 DEBUG oslo_concurrency.lockutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:42:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:40.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:40 np0005539564 nova_compute[226295]: 2025-11-29 07:42:40.969 226310 DEBUG nova.network.neutron [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Successfully updated port: bb62692a-9e57-4f29-a570-df3272a8f05c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:42:41 np0005539564 nova_compute[226295]: 2025-11-29 07:42:41.025 226310 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "refresh_cache-5068c873-ee82-4faa-a05b-3df3ed25d792" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:42:41 np0005539564 nova_compute[226295]: 2025-11-29 07:42:41.025 226310 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquired lock "refresh_cache-5068c873-ee82-4faa-a05b-3df3ed25d792" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:42:41 np0005539564 nova_compute[226295]: 2025-11-29 07:42:41.025 226310 DEBUG nova.network.neutron [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:42:41 np0005539564 nova_compute[226295]: 2025-11-29 07:42:41.438 226310 DEBUG nova.compute.manager [req-db43464f-65a5-4d8a-ba58-0dd60d8e1ef6 req-7556db2c-481d-450c-972a-02c06eed0f2b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Received event network-changed-bb62692a-9e57-4f29-a570-df3272a8f05c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:42:41 np0005539564 nova_compute[226295]: 2025-11-29 07:42:41.438 226310 DEBUG nova.compute.manager [req-db43464f-65a5-4d8a-ba58-0dd60d8e1ef6 req-7556db2c-481d-450c-972a-02c06eed0f2b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Refreshing instance network info cache due to event network-changed-bb62692a-9e57-4f29-a570-df3272a8f05c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:42:41 np0005539564 nova_compute[226295]: 2025-11-29 07:42:41.439 226310 DEBUG oslo_concurrency.lockutils [req-db43464f-65a5-4d8a-ba58-0dd60d8e1ef6 req-7556db2c-481d-450c-972a-02c06eed0f2b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-5068c873-ee82-4faa-a05b-3df3ed25d792" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:42:41 np0005539564 nova_compute[226295]: 2025-11-29 07:42:41.912 226310 DEBUG nova.network.neutron [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:42:42 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:42 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Nov 29 02:42:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:42:42.358099) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:42:42 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Nov 29 02:42:42 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402162358163, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1365, "num_deletes": 257, "total_data_size": 3421995, "memory_usage": 3472528, "flush_reason": "Manual Compaction"}
Nov 29 02:42:42 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Nov 29 02:42:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:42.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:42 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402162657706, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 1527264, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20548, "largest_seqno": 21906, "table_properties": {"data_size": 1521991, "index_size": 2541, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13628, "raw_average_key_size": 21, "raw_value_size": 1510587, "raw_average_value_size": 2352, "num_data_blocks": 111, "num_entries": 642, "num_filter_entries": 642, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402020, "oldest_key_time": 1764402020, "file_creation_time": 1764402162, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:42:42 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 299649 microseconds, and 4484 cpu microseconds.
Nov 29 02:42:42 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:42:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:42:42.657754) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 1527264 bytes OK
Nov 29 02:42:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:42:42.657773) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Nov 29 02:42:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:42:42.662644) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Nov 29 02:42:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:42:42.662687) EVENT_LOG_v1 {"time_micros": 1764402162662676, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:42:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:42:42.662711) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:42:42 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 3415207, prev total WAL file size 3415497, number of live WAL files 2.
Nov 29 02:42:42 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:42:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:42:42.664228) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373630' seq:0, type:0; will stop at (end)
Nov 29 02:42:42 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:42:42 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(1491KB)], [39(10238KB)]
Nov 29 02:42:42 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402162664261, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 12010976, "oldest_snapshot_seqno": -1}
Nov 29 02:42:42 np0005539564 nova_compute[226295]: 2025-11-29 07:42:42.850 226310 DEBUG nova.network.neutron [req-2e2c4fe4-23f0-407f-8d1a-ba8649ea1bf4 req-d2e75415-6e71-4aaf-a3b8-b49c885dfeb1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:42:42 np0005539564 nova_compute[226295]: 2025-11-29 07:42:42.898 226310 DEBUG oslo_concurrency.lockutils [req-2e2c4fe4-23f0-407f-8d1a-ba8649ea1bf4 req-d2e75415-6e71-4aaf-a3b8-b49c885dfeb1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-980ff2fe-9165-4009-82a9-3c3b2055f29a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:42:42 np0005539564 nova_compute[226295]: 2025-11-29 07:42:42.898 226310 DEBUG oslo_concurrency.lockutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Acquired lock "refresh_cache-980ff2fe-9165-4009-82a9-3c3b2055f29a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:42:42 np0005539564 nova_compute[226295]: 2025-11-29 07:42:42.898 226310 DEBUG nova.network.neutron [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:42:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:42.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:43 np0005539564 nova_compute[226295]: 2025-11-29 07:42:43.481 226310 DEBUG nova.network.neutron [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:42:43 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 4990 keys, 8705032 bytes, temperature: kUnknown
Nov 29 02:42:43 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402163521539, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 8705032, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8671927, "index_size": 19555, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 125969, "raw_average_key_size": 25, "raw_value_size": 8581657, "raw_average_value_size": 1719, "num_data_blocks": 803, "num_entries": 4990, "num_filter_entries": 4990, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764402162, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:42:43 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:42:43 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:42:43.530759) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8705032 bytes
Nov 29 02:42:43 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:42:43.533514) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 14.0 rd, 10.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 10.0 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(13.6) write-amplify(5.7) OK, records in: 5482, records dropped: 492 output_compression: NoCompression
Nov 29 02:42:43 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:42:43.533545) EVENT_LOG_v1 {"time_micros": 1764402163533533, "job": 22, "event": "compaction_finished", "compaction_time_micros": 857372, "compaction_time_cpu_micros": 22220, "output_level": 6, "num_output_files": 1, "total_output_size": 8705032, "num_input_records": 5482, "num_output_records": 4990, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:42:43 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:42:43 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402163533985, "job": 22, "event": "table_file_deletion", "file_number": 41}
Nov 29 02:42:43 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:42:43 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402163536705, "job": 22, "event": "table_file_deletion", "file_number": 39}
Nov 29 02:42:43 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:42:42.663711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:42:43 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:42:43.536811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:42:43 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:42:43.536817) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:42:43 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:42:43.536820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:42:43 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:42:43.536823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:42:43 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:42:43.536825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:42:43 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:43 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:42:43 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:43 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:42:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:44.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:44.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.020 226310 DEBUG nova.network.neutron [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Updating instance_info_cache with network_info: [{"id": "d00ce7a8-c85b-4013-844e-d84467a05bff", "address": "fa:16:3e:4c:df:aa", "network": {"id": "1bd43cdb-189f-493f-a1a6-454e88ff3fb6", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1897220559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "422656fa169e49dcb91b5d4a8819f5ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd00ce7a8-c8", "ovs_interfaceid": "d00ce7a8-c85b-4013-844e-d84467a05bff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.098 226310 DEBUG oslo_concurrency.lockutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Releasing lock "refresh_cache-980ff2fe-9165-4009-82a9-3c3b2055f29a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.099 226310 DEBUG nova.compute.manager [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Instance network_info: |[{"id": "d00ce7a8-c85b-4013-844e-d84467a05bff", "address": "fa:16:3e:4c:df:aa", "network": {"id": "1bd43cdb-189f-493f-a1a6-454e88ff3fb6", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1897220559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "422656fa169e49dcb91b5d4a8819f5ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd00ce7a8-c8", "ovs_interfaceid": "d00ce7a8-c85b-4013-844e-d84467a05bff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.101 226310 DEBUG nova.virt.libvirt.driver [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Start _get_guest_xml network_info=[{"id": "d00ce7a8-c85b-4013-844e-d84467a05bff", "address": "fa:16:3e:4c:df:aa", "network": {"id": "1bd43cdb-189f-493f-a1a6-454e88ff3fb6", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1897220559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "422656fa169e49dcb91b5d4a8819f5ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd00ce7a8-c8", "ovs_interfaceid": "d00ce7a8-c85b-4013-844e-d84467a05bff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.106 226310 WARNING nova.virt.libvirt.driver [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.111 226310 DEBUG nova.virt.libvirt.host [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.112 226310 DEBUG nova.virt.libvirt.host [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.116 226310 DEBUG nova.virt.libvirt.host [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.116 226310 DEBUG nova.virt.libvirt.host [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.119 226310 DEBUG nova.virt.libvirt.driver [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.120 226310 DEBUG nova.virt.hardware [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.120 226310 DEBUG nova.virt.hardware [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.120 226310 DEBUG nova.virt.hardware [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.120 226310 DEBUG nova.virt.hardware [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.121 226310 DEBUG nova.virt.hardware [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.121 226310 DEBUG nova.virt.hardware [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.121 226310 DEBUG nova.virt.hardware [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.121 226310 DEBUG nova.virt.hardware [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.121 226310 DEBUG nova.virt.hardware [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.121 226310 DEBUG nova.virt.hardware [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.122 226310 DEBUG nova.virt.hardware [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.125 226310 DEBUG nova.privsep.utils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.125 226310 DEBUG oslo_concurrency.processutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.273 226310 DEBUG nova.network.neutron [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Updating instance_info_cache with network_info: [{"id": "bb62692a-9e57-4f29-a570-df3272a8f05c", "address": "fa:16:3e:c6:02:71", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::229", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb62692a-9e", "ovs_interfaceid": "bb62692a-9e57-4f29-a570-df3272a8f05c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.312 226310 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Releasing lock "refresh_cache-5068c873-ee82-4faa-a05b-3df3ed25d792" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.312 226310 DEBUG nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Instance network_info: |[{"id": "bb62692a-9e57-4f29-a570-df3272a8f05c", "address": "fa:16:3e:c6:02:71", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::229", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb62692a-9e", "ovs_interfaceid": "bb62692a-9e57-4f29-a570-df3272a8f05c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.313 226310 DEBUG oslo_concurrency.lockutils [req-db43464f-65a5-4d8a-ba58-0dd60d8e1ef6 req-7556db2c-481d-450c-972a-02c06eed0f2b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-5068c873-ee82-4faa-a05b-3df3ed25d792" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.313 226310 DEBUG nova.network.neutron [req-db43464f-65a5-4d8a-ba58-0dd60d8e1ef6 req-7556db2c-481d-450c-972a-02c06eed0f2b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Refreshing network info cache for port bb62692a-9e57-4f29-a570-df3272a8f05c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.317 226310 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Start _get_guest_xml network_info=[{"id": "bb62692a-9e57-4f29-a570-df3272a8f05c", "address": "fa:16:3e:c6:02:71", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::229", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb62692a-9e", "ovs_interfaceid": "bb62692a-9e57-4f29-a570-df3272a8f05c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.320 226310 WARNING nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.325 226310 DEBUG nova.virt.libvirt.host [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.326 226310 DEBUG nova.virt.libvirt.host [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.332 226310 DEBUG nova.virt.libvirt.host [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.332 226310 DEBUG nova.virt.libvirt.host [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.334 226310 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.334 226310 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.334 226310 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.335 226310 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.335 226310 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.335 226310 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.336 226310 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.336 226310 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.336 226310 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.336 226310 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.337 226310 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.337 226310 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.340 226310 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:42:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:42:45 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3513594248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.573 226310 DEBUG oslo_concurrency.processutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.605 226310 DEBUG nova.storage.rbd_utils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] rbd image 980ff2fe-9165-4009-82a9-3c3b2055f29a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.611 226310 DEBUG oslo_concurrency.processutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:42:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:42:45 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/78130158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.820 226310 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.851 226310 DEBUG nova.storage.rbd_utils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 5068c873-ee82-4faa-a05b-3df3ed25d792_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:42:45 np0005539564 nova_compute[226295]: 2025-11-29 07:42:45.856 226310 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:42:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:42:46 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1819114952' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.093 226310 DEBUG oslo_concurrency.processutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.095 226310 DEBUG nova.virt.libvirt.vif [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:42:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAssistedSnapshotsTest-server-162883616',display_name='tempest-VolumesAssistedSnapshotsTest-server-162883616',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-volumesassistedsnapshotstest-server-162883616',id=6,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL/EBCCQsqBwozvSx/5HwAAjIs444VIg4wRd07RCNsB+iYI7M7vypwpfTHRW7mBe8j4u3HfnXjFueKGbu88UbhEtiS8vm73ejQqPJwakBU5jmh9B+oxfN06CQAsz70oQNw==',key_name='tempest-keypair-2144013158',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='422656fa169e49dcb91b5d4a8819f5ff',ramdisk_id='',reservation_id='r-pkt050z4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAssistedSnapshotsTest-1468079842',owner_user_name='tempest-VolumesAssistedSnapshotsTest-1468079842-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:42:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='197276e850054352a94a54ad4b0274be',uuid=980ff2fe-9165-4009-82a9-3c3b2055f29a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d00ce7a8-c85b-4013-844e-d84467a05bff", "address": "fa:16:3e:4c:df:aa", "network": {"id": "1bd43cdb-189f-493f-a1a6-454e88ff3fb6", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1897220559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "422656fa169e49dcb91b5d4a8819f5ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd00ce7a8-c8", "ovs_interfaceid": "d00ce7a8-c85b-4013-844e-d84467a05bff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.095 226310 DEBUG nova.network.os_vif_util [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Converting VIF {"id": "d00ce7a8-c85b-4013-844e-d84467a05bff", "address": "fa:16:3e:4c:df:aa", "network": {"id": "1bd43cdb-189f-493f-a1a6-454e88ff3fb6", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1897220559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "422656fa169e49dcb91b5d4a8819f5ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd00ce7a8-c8", "ovs_interfaceid": "d00ce7a8-c85b-4013-844e-d84467a05bff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.096 226310 DEBUG nova.network.os_vif_util [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:df:aa,bridge_name='br-int',has_traffic_filtering=True,id=d00ce7a8-c85b-4013-844e-d84467a05bff,network=Network(1bd43cdb-189f-493f-a1a6-454e88ff3fb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd00ce7a8-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.098 226310 DEBUG nova.objects.instance [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Lazy-loading 'pci_devices' on Instance uuid 980ff2fe-9165-4009-82a9-3c3b2055f29a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.115 226310 DEBUG nova.virt.libvirt.driver [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  <uuid>980ff2fe-9165-4009-82a9-3c3b2055f29a</uuid>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  <name>instance-00000006</name>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <nova:name>tempest-VolumesAssistedSnapshotsTest-server-162883616</nova:name>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 07:42:45</nova:creationTime>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <nova:user uuid="197276e850054352a94a54ad4b0274be">tempest-VolumesAssistedSnapshotsTest-1468079842-project-member</nova:user>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <nova:project uuid="422656fa169e49dcb91b5d4a8819f5ff">tempest-VolumesAssistedSnapshotsTest-1468079842</nova:project>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <nova:port uuid="d00ce7a8-c85b-4013-844e-d84467a05bff">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <system>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <entry name="serial">980ff2fe-9165-4009-82a9-3c3b2055f29a</entry>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <entry name="uuid">980ff2fe-9165-4009-82a9-3c3b2055f29a</entry>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    </system>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  <os>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  </clock>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/980ff2fe-9165-4009-82a9-3c3b2055f29a_disk">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/980ff2fe-9165-4009-82a9-3c3b2055f29a_disk.config">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:4c:df:aa"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <target dev="tapd00ce7a8-c8"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    </interface>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/980ff2fe-9165-4009-82a9-3c3b2055f29a/console.log" append="off"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    </serial>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <video>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:42:46 np0005539564 nova_compute[226295]: </domain>
Nov 29 02:42:46 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.116 226310 DEBUG nova.compute.manager [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Preparing to wait for external event network-vif-plugged-d00ce7a8-c85b-4013-844e-d84467a05bff prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.116 226310 DEBUG oslo_concurrency.lockutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Acquiring lock "980ff2fe-9165-4009-82a9-3c3b2055f29a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.117 226310 DEBUG oslo_concurrency.lockutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Lock "980ff2fe-9165-4009-82a9-3c3b2055f29a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.117 226310 DEBUG oslo_concurrency.lockutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Lock "980ff2fe-9165-4009-82a9-3c3b2055f29a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.117 226310 DEBUG nova.virt.libvirt.vif [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:42:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAssistedSnapshotsTest-server-162883616',display_name='tempest-VolumesAssistedSnapshotsTest-server-162883616',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-volumesassistedsnapshotstest-server-162883616',id=6,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL/EBCCQsqBwozvSx/5HwAAjIs444VIg4wRd07RCNsB+iYI7M7vypwpfTHRW7mBe8j4u3HfnXjFueKGbu88UbhEtiS8vm73ejQqPJwakBU5jmh9B+oxfN06CQAsz70oQNw==',key_name='tempest-keypair-2144013158',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='422656fa169e49dcb91b5d4a8819f5ff',ramdisk_id='',reservation_id='r-pkt050z4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAssistedSnapshotsTest-1468079842',owner_user_name='tempest-VolumesAssistedSnapshotsTest-1468079842-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:42:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='197276e850054352a94a54ad4b0274be',uuid=980ff2fe-9165-4009-82a9-3c3b2055f29a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d00ce7a8-c85b-4013-844e-d84467a05bff", "address": "fa:16:3e:4c:df:aa", "network": {"id": "1bd43cdb-189f-493f-a1a6-454e88ff3fb6", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1897220559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "422656fa169e49dcb91b5d4a8819f5ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd00ce7a8-c8", "ovs_interfaceid": "d00ce7a8-c85b-4013-844e-d84467a05bff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.118 226310 DEBUG nova.network.os_vif_util [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Converting VIF {"id": "d00ce7a8-c85b-4013-844e-d84467a05bff", "address": "fa:16:3e:4c:df:aa", "network": {"id": "1bd43cdb-189f-493f-a1a6-454e88ff3fb6", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1897220559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "422656fa169e49dcb91b5d4a8819f5ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd00ce7a8-c8", "ovs_interfaceid": "d00ce7a8-c85b-4013-844e-d84467a05bff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.118 226310 DEBUG nova.network.os_vif_util [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:df:aa,bridge_name='br-int',has_traffic_filtering=True,id=d00ce7a8-c85b-4013-844e-d84467a05bff,network=Network(1bd43cdb-189f-493f-a1a6-454e88ff3fb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd00ce7a8-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.119 226310 DEBUG os_vif [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:df:aa,bridge_name='br-int',has_traffic_filtering=True,id=d00ce7a8-c85b-4013-844e-d84467a05bff,network=Network(1bd43cdb-189f-493f-a1a6-454e88ff3fb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd00ce7a8-c8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.151 226310 DEBUG ovsdbapp.backend.ovs_idl [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.151 226310 DEBUG ovsdbapp.backend.ovs_idl [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.152 226310 DEBUG ovsdbapp.backend.ovs_idl [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.152 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.153 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.153 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.153 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.155 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.157 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.165 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.166 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.166 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.167 226310 INFO oslo.privsep.daemon [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp57u9t4w2/privsep.sock']#033[00m
Nov 29 02:42:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:42:46 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4000369868' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.290 226310 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.292 226310 DEBUG nova.virt.libvirt.vif [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:42:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-1667508244-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1667508244-2',id=3,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d3a6ccbb2794f6e85d683953ac4b5fd',ramdisk_id='',reservation_id='r-2hb64b0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-752491155',owner_user_name='tempest-AutoAllocateNetworkTest-752491155-project-member'},tag
s=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:42:12Z,user_data=None,user_id='cf2495f54add463c8ce9d2dd8623347c',uuid=5068c873-ee82-4faa-a05b-3df3ed25d792,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bb62692a-9e57-4f29-a570-df3272a8f05c", "address": "fa:16:3e:c6:02:71", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::229", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb62692a-9e", "ovs_interfaceid": "bb62692a-9e57-4f29-a570-df3272a8f05c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.292 226310 DEBUG nova.network.os_vif_util [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Converting VIF {"id": "bb62692a-9e57-4f29-a570-df3272a8f05c", "address": "fa:16:3e:c6:02:71", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::229", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb62692a-9e", "ovs_interfaceid": "bb62692a-9e57-4f29-a570-df3272a8f05c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.293 226310 DEBUG nova.network.os_vif_util [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:02:71,bridge_name='br-int',has_traffic_filtering=True,id=bb62692a-9e57-4f29-a570-df3272a8f05c,network=Network(6c117dd1-5064-4e69-b07c-c93c3d729d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb62692a-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.295 226310 DEBUG nova.objects.instance [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lazy-loading 'pci_devices' on Instance uuid 5068c873-ee82-4faa-a05b-3df3ed25d792 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.316 226310 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  <uuid>5068c873-ee82-4faa-a05b-3df3ed25d792</uuid>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  <name>instance-00000003</name>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <nova:name>tempest-tempest.common.compute-instance-1667508244-2</nova:name>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 07:42:45</nova:creationTime>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <nova:user uuid="cf2495f54add463c8ce9d2dd8623347c">tempest-AutoAllocateNetworkTest-752491155-project-member</nova:user>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <nova:project uuid="0d3a6ccbb2794f6e85d683953ac4b5fd">tempest-AutoAllocateNetworkTest-752491155</nova:project>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <nova:port uuid="bb62692a-9e57-4f29-a570-df3272a8f05c">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="fdfe:381f:8400::229" ipVersion="6"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.1.0.28" ipVersion="4"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <system>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <entry name="serial">5068c873-ee82-4faa-a05b-3df3ed25d792</entry>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <entry name="uuid">5068c873-ee82-4faa-a05b-3df3ed25d792</entry>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    </system>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  <os>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  </clock>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/5068c873-ee82-4faa-a05b-3df3ed25d792_disk">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/5068c873-ee82-4faa-a05b-3df3ed25d792_disk.config">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:c6:02:71"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <target dev="tapbb62692a-9e"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    </interface>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/5068c873-ee82-4faa-a05b-3df3ed25d792/console.log" append="off"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    </serial>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <video>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 02:42:46 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 02:42:46 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:42:46 np0005539564 nova_compute[226295]: </domain>
Nov 29 02:42:46 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.317 226310 DEBUG nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Preparing to wait for external event network-vif-plugged-bb62692a-9e57-4f29-a570-df3272a8f05c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.318 226310 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "5068c873-ee82-4faa-a05b-3df3ed25d792-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.318 226310 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "5068c873-ee82-4faa-a05b-3df3ed25d792-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.318 226310 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "5068c873-ee82-4faa-a05b-3df3ed25d792-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.319 226310 DEBUG nova.virt.libvirt.vif [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:42:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-1667508244-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1667508244-2',id=3,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d3a6ccbb2794f6e85d683953ac4b5fd',ramdisk_id='',reservation_id='r-2hb64b0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-752491155',owner_user_name='tempest-AutoAllocateNetworkTest-752491155-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:42:12Z,user_data=None,user_id='cf2495f54add463c8ce9d2dd8623347c',uuid=5068c873-ee82-4faa-a05b-3df3ed25d792,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bb62692a-9e57-4f29-a570-df3272a8f05c", "address": "fa:16:3e:c6:02:71", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::229", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb62692a-9e", "ovs_interfaceid": "bb62692a-9e57-4f29-a570-df3272a8f05c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.319 226310 DEBUG nova.network.os_vif_util [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Converting VIF {"id": "bb62692a-9e57-4f29-a570-df3272a8f05c", "address": "fa:16:3e:c6:02:71", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::229", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb62692a-9e", "ovs_interfaceid": "bb62692a-9e57-4f29-a570-df3272a8f05c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.320 226310 DEBUG nova.network.os_vif_util [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:02:71,bridge_name='br-int',has_traffic_filtering=True,id=bb62692a-9e57-4f29-a570-df3272a8f05c,network=Network(6c117dd1-5064-4e69-b07c-c93c3d729d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb62692a-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.321 226310 DEBUG os_vif [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:02:71,bridge_name='br-int',has_traffic_filtering=True,id=bb62692a-9e57-4f29-a570-df3272a8f05c,network=Network(6c117dd1-5064-4e69-b07c-c93c3d729d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb62692a-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.321 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.322 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:42:46 np0005539564 nova_compute[226295]: 2025-11-29 07:42:46.322 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:42:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:46.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:46.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:47 np0005539564 nova_compute[226295]: 2025-11-29 07:42:47.258 226310 INFO oslo.privsep.daemon [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 29 02:42:47 np0005539564 nova_compute[226295]: 2025-11-29 07:42:47.129 230925 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 29 02:42:47 np0005539564 nova_compute[226295]: 2025-11-29 07:42:47.234 230925 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 29 02:42:47 np0005539564 nova_compute[226295]: 2025-11-29 07:42:47.237 230925 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Nov 29 02:42:47 np0005539564 nova_compute[226295]: 2025-11-29 07:42:47.238 230925 INFO oslo.privsep.daemon [-] privsep daemon running as pid 230925#033[00m
Nov 29 02:42:47 np0005539564 nova_compute[226295]: 2025-11-29 07:42:47.262 226310 WARNING oslo_privsep.priv_context [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] privsep daemon already running#033[00m
Nov 29 02:42:47 np0005539564 nova_compute[226295]: 2025-11-29 07:42:47.644 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:47 np0005539564 nova_compute[226295]: 2025-11-29 07:42:47.645 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd00ce7a8-c8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:42:47 np0005539564 nova_compute[226295]: 2025-11-29 07:42:47.646 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd00ce7a8-c8, col_values=(('external_ids', {'iface-id': 'd00ce7a8-c85b-4013-844e-d84467a05bff', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4c:df:aa', 'vm-uuid': '980ff2fe-9165-4009-82a9-3c3b2055f29a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:42:47 np0005539564 nova_compute[226295]: 2025-11-29 07:42:47.647 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:47 np0005539564 NetworkManager[48997]: <info>  [1764402167.6495] manager: (tapd00ce7a8-c8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Nov 29 02:42:47 np0005539564 nova_compute[226295]: 2025-11-29 07:42:47.649 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:42:47 np0005539564 nova_compute[226295]: 2025-11-29 07:42:47.654 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:47 np0005539564 nova_compute[226295]: 2025-11-29 07:42:47.654 226310 INFO os_vif [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:df:aa,bridge_name='br-int',has_traffic_filtering=True,id=d00ce7a8-c85b-4013-844e-d84467a05bff,network=Network(1bd43cdb-189f-493f-a1a6-454e88ff3fb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd00ce7a8-c8')#033[00m
Nov 29 02:42:47 np0005539564 nova_compute[226295]: 2025-11-29 07:42:47.655 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:47 np0005539564 nova_compute[226295]: 2025-11-29 07:42:47.656 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbb62692a-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:42:47 np0005539564 nova_compute[226295]: 2025-11-29 07:42:47.656 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbb62692a-9e, col_values=(('external_ids', {'iface-id': 'bb62692a-9e57-4f29-a570-df3272a8f05c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c6:02:71', 'vm-uuid': '5068c873-ee82-4faa-a05b-3df3ed25d792'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:42:47 np0005539564 nova_compute[226295]: 2025-11-29 07:42:47.657 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:47 np0005539564 NetworkManager[48997]: <info>  [1764402167.6585] manager: (tapbb62692a-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/24)
Nov 29 02:42:47 np0005539564 nova_compute[226295]: 2025-11-29 07:42:47.660 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:42:47 np0005539564 nova_compute[226295]: 2025-11-29 07:42:47.664 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:47 np0005539564 nova_compute[226295]: 2025-11-29 07:42:47.665 226310 INFO os_vif [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:02:71,bridge_name='br-int',has_traffic_filtering=True,id=bb62692a-9e57-4f29-a570-df3272a8f05c,network=Network(6c117dd1-5064-4e69-b07c-c93c3d729d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb62692a-9e')#033[00m
Nov 29 02:42:47 np0005539564 nova_compute[226295]: 2025-11-29 07:42:47.815 226310 DEBUG nova.network.neutron [req-db43464f-65a5-4d8a-ba58-0dd60d8e1ef6 req-7556db2c-481d-450c-972a-02c06eed0f2b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Updated VIF entry in instance network info cache for port bb62692a-9e57-4f29-a570-df3272a8f05c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:42:47 np0005539564 nova_compute[226295]: 2025-11-29 07:42:47.816 226310 DEBUG nova.network.neutron [req-db43464f-65a5-4d8a-ba58-0dd60d8e1ef6 req-7556db2c-481d-450c-972a-02c06eed0f2b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Updating instance_info_cache with network_info: [{"id": "bb62692a-9e57-4f29-a570-df3272a8f05c", "address": "fa:16:3e:c6:02:71", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::229", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb62692a-9e", "ovs_interfaceid": "bb62692a-9e57-4f29-a570-df3272a8f05c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:42:47 np0005539564 nova_compute[226295]: 2025-11-29 07:42:47.842 226310 DEBUG oslo_concurrency.lockutils [req-db43464f-65a5-4d8a-ba58-0dd60d8e1ef6 req-7556db2c-481d-450c-972a-02c06eed0f2b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-5068c873-ee82-4faa-a05b-3df3ed25d792" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:42:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:48.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:48.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:49 np0005539564 nova_compute[226295]: 2025-11-29 07:42:49.212 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:49 np0005539564 nova_compute[226295]: 2025-11-29 07:42:49.441 226310 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:42:49 np0005539564 nova_compute[226295]: 2025-11-29 07:42:49.442 226310 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:42:49 np0005539564 nova_compute[226295]: 2025-11-29 07:42:49.442 226310 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] No VIF found with MAC fa:16:3e:c6:02:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:42:49 np0005539564 nova_compute[226295]: 2025-11-29 07:42:49.443 226310 INFO nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Using config drive#033[00m
Nov 29 02:42:49 np0005539564 nova_compute[226295]: 2025-11-29 07:42:49.473 226310 DEBUG nova.storage.rbd_utils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 5068c873-ee82-4faa-a05b-3df3ed25d792_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:42:49 np0005539564 nova_compute[226295]: 2025-11-29 07:42:49.533 226310 DEBUG nova.virt.libvirt.driver [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:42:49 np0005539564 nova_compute[226295]: 2025-11-29 07:42:49.534 226310 DEBUG nova.virt.libvirt.driver [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:42:49 np0005539564 nova_compute[226295]: 2025-11-29 07:42:49.534 226310 DEBUG nova.virt.libvirt.driver [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] No VIF found with MAC fa:16:3e:4c:df:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:42:49 np0005539564 nova_compute[226295]: 2025-11-29 07:42:49.535 226310 INFO nova.virt.libvirt.driver [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Using config drive#033[00m
Nov 29 02:42:49 np0005539564 nova_compute[226295]: 2025-11-29 07:42:49.617 226310 DEBUG nova.storage.rbd_utils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] rbd image 980ff2fe-9165-4009-82a9-3c3b2055f29a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:42:49 np0005539564 nova_compute[226295]: 2025-11-29 07:42:49.835 226310 INFO nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Creating config drive at /var/lib/nova/instances/5068c873-ee82-4faa-a05b-3df3ed25d792/disk.config#033[00m
Nov 29 02:42:49 np0005539564 nova_compute[226295]: 2025-11-29 07:42:49.839 226310 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5068c873-ee82-4faa-a05b-3df3ed25d792/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp8r8riuy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:42:49 np0005539564 nova_compute[226295]: 2025-11-29 07:42:49.970 226310 INFO nova.virt.libvirt.driver [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Creating config drive at /var/lib/nova/instances/980ff2fe-9165-4009-82a9-3c3b2055f29a/disk.config#033[00m
Nov 29 02:42:49 np0005539564 nova_compute[226295]: 2025-11-29 07:42:49.981 226310 DEBUG oslo_concurrency.processutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/980ff2fe-9165-4009-82a9-3c3b2055f29a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp59o8sr66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:42:50 np0005539564 nova_compute[226295]: 2025-11-29 07:42:50.006 226310 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5068c873-ee82-4faa-a05b-3df3ed25d792/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp8r8riuy" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:42:50 np0005539564 nova_compute[226295]: 2025-11-29 07:42:50.049 226310 DEBUG nova.storage.rbd_utils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 5068c873-ee82-4faa-a05b-3df3ed25d792_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:42:50 np0005539564 nova_compute[226295]: 2025-11-29 07:42:50.054 226310 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5068c873-ee82-4faa-a05b-3df3ed25d792/disk.config 5068c873-ee82-4faa-a05b-3df3ed25d792_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:42:50 np0005539564 nova_compute[226295]: 2025-11-29 07:42:50.122 226310 DEBUG oslo_concurrency.processutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/980ff2fe-9165-4009-82a9-3c3b2055f29a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp59o8sr66" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:42:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:50.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:50.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:52.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:52 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:42:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:52.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:53 np0005539564 nova_compute[226295]: 2025-11-29 07:42:53.993 226310 DEBUG nova.storage.rbd_utils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] rbd image 980ff2fe-9165-4009-82a9-3c3b2055f29a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:42:54 np0005539564 nova_compute[226295]: 2025-11-29 07:42:54.001 226310 DEBUG oslo_concurrency.processutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/980ff2fe-9165-4009-82a9-3c3b2055f29a/disk.config 980ff2fe-9165-4009-82a9-3c3b2055f29a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:42:54 np0005539564 nova_compute[226295]: 2025-11-29 07:42:54.021 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:54 np0005539564 nova_compute[226295]: 2025-11-29 07:42:54.257 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:54.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:54.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:56.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:56.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:58 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Nov 29 02:42:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.004000107s ======
Nov 29 02:42:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:58.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000107s
Nov 29 02:42:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:42:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:58.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:59 np0005539564 nova_compute[226295]: 2025-11-29 07:42:59.024 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:59 np0005539564 nova_compute[226295]: 2025-11-29 07:42:59.260 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:59 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 29 02:43:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:00.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:00.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:02.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:02.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:03 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Nov 29 02:43:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:03.692 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:43:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:03.693 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:43:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:03.693 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:43:04 np0005539564 nova_compute[226295]: 2025-11-29 07:43:04.027 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:04 np0005539564 nova_compute[226295]: 2025-11-29 07:43:04.294 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:04.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:04 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:43:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:04.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:06.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.910012245s, txc = 0x55ba5069d500
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.908780575s, txc = 0x55ba50456300
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.902741432s, txc = 0x55ba4f225800
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.900752544s, txc = 0x55ba503bef00
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.898799896s, txc = 0x55ba5067af00
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.897940159s, txc = 0x55ba50460900
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.897082806s, txc = 0x55ba4e52a600
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.895745754s, txc = 0x55ba503fd800
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.894979000s, txc = 0x55ba5069d200
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.894613266s, txc = 0x55ba503fd500
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.894383907s, txc = 0x55ba5069cf00
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.894380569s, txc = 0x55ba504cf800
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.894214153s, txc = 0x55ba5043ac00
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.893882751s, txc = 0x55ba5046a600
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.893622875s, txc = 0x55ba4e53bb00
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.893086910s, txc = 0x55ba503bf800
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.891914845s, txc = 0x55ba5043bb00
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.890188217s, txc = 0x55ba503e2600
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.889698505s, txc = 0x55ba5041f800
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.888932705s, txc = 0x55ba50432300
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.888336658s, txc = 0x55ba503fd200
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.887164116s, txc = 0x55ba50444600
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.886375427s, txc = 0x55ba5043b500
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.885861397s, txc = 0x55ba503e7b00
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.885518551s, txc = 0x55ba5043a900
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.885279179s, txc = 0x55ba4e525500
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.884983063s, txc = 0x55ba50c5a300
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.884499073s, txc = 0x55ba50dcf500
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.884285927s, txc = 0x55ba5067a600
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.883970737s, txc = 0x55ba503bec00
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.883446217s, txc = 0x55ba50456c00
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.883211136s, txc = 0x55ba5046af00
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.882729530s, txc = 0x55ba5069cc00
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.876610279s, txc = 0x55ba503af200
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.875058174s, txc = 0x55ba503be900
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.873157024s, txc = 0x55ba506ec000
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.871903896s, txc = 0x55ba506cd500
Nov 29 02:43:06 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.871429920s, txc = 0x55ba503fc300
Nov 29 02:43:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:06.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:07 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.183043003s, txc = 0x55ba503e3b00
Nov 29 02:43:07 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.981661320s, txc = 0x55ba503aef00
Nov 29 02:43:07 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 29 02:43:07 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Nov 29 02:43:07 np0005539564 nova_compute[226295]: 2025-11-29 07:43:07.206 226310 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5068c873-ee82-4faa-a05b-3df3ed25d792/disk.config 5068c873-ee82-4faa-a05b-3df3ed25d792_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 17.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:43:07 np0005539564 nova_compute[226295]: 2025-11-29 07:43:07.207 226310 INFO nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Deleting local config drive /var/lib/nova/instances/5068c873-ee82-4faa-a05b-3df3ed25d792/disk.config because it was imported into RBD.#033[00m
Nov 29 02:43:07 np0005539564 systemd[1]: Starting libvirt secret daemon...
Nov 29 02:43:07 np0005539564 systemd[1]: Started libvirt secret daemon.
Nov 29 02:43:07 np0005539564 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 29 02:43:07 np0005539564 NetworkManager[48997]: <info>  [1764402187.5375] manager: (tapbb62692a-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/25)
Nov 29 02:43:07 np0005539564 kernel: tapbb62692a-9e: entered promiscuous mode
Nov 29 02:43:07 np0005539564 nova_compute[226295]: 2025-11-29 07:43:07.543 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:07 np0005539564 ovn_controller[130591]: 2025-11-29T07:43:07Z|00027|binding|INFO|Claiming lport bb62692a-9e57-4f29-a570-df3272a8f05c for this chassis.
Nov 29 02:43:07 np0005539564 ovn_controller[130591]: 2025-11-29T07:43:07Z|00028|binding|INFO|bb62692a-9e57-4f29-a570-df3272a8f05c: Claiming fa:16:3e:c6:02:71 10.1.0.28 fdfe:381f:8400::229
Nov 29 02:43:07 np0005539564 nova_compute[226295]: 2025-11-29 07:43:07.553 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:07 np0005539564 systemd-udevd[231086]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:43:07 np0005539564 NetworkManager[48997]: <info>  [1764402187.5968] device (tapbb62692a-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:43:07 np0005539564 NetworkManager[48997]: <info>  [1764402187.5982] device (tapbb62692a-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:43:07 np0005539564 systemd-machined[190128]: New machine qemu-1-instance-00000003.
Nov 29 02:43:07 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:07.632 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:02:71 10.1.0.28 fdfe:381f:8400::229'], port_security=['fa:16:3e:c6:02:71 10.1.0.28 fdfe:381f:8400::229'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.28/26 fdfe:381f:8400::229/64', 'neutron:device_id': '5068c873-ee82-4faa-a05b-3df3ed25d792', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c117dd1-5064-4e69-b07c-c93c3d729d3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d3a6ccbb2794f6e85d683953ac4b5fd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '441b5877-d47a-4ccc-b96a-381864fe0f87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab4638fe-12b3-4f0f-a7fc-23f58f536508, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=bb62692a-9e57-4f29-a570-df3272a8f05c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:43:07 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:07.633 139780 INFO neutron.agent.ovn.metadata.agent [-] Port bb62692a-9e57-4f29-a570-df3272a8f05c in datapath 6c117dd1-5064-4e69-b07c-c93c3d729d3c bound to our chassis#033[00m
Nov 29 02:43:07 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:07.635 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c117dd1-5064-4e69-b07c-c93c3d729d3c#033[00m
Nov 29 02:43:07 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:07.637 139780 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpdxqep7s7/privsep.sock']#033[00m
Nov 29 02:43:07 np0005539564 nova_compute[226295]: 2025-11-29 07:43:07.655 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:07 np0005539564 systemd[1]: Started Virtual Machine qemu-1-instance-00000003.
Nov 29 02:43:07 np0005539564 ovn_controller[130591]: 2025-11-29T07:43:07Z|00029|binding|INFO|Setting lport bb62692a-9e57-4f29-a570-df3272a8f05c ovn-installed in OVS
Nov 29 02:43:07 np0005539564 ovn_controller[130591]: 2025-11-29T07:43:07Z|00030|binding|INFO|Setting lport bb62692a-9e57-4f29-a570-df3272a8f05c up in Southbound
Nov 29 02:43:07 np0005539564 nova_compute[226295]: 2025-11-29 07:43:07.668 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:07 np0005539564 nova_compute[226295]: 2025-11-29 07:43:07.789 226310 DEBUG oslo_concurrency.processutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/980ff2fe-9165-4009-82a9-3c3b2055f29a/disk.config 980ff2fe-9165-4009-82a9-3c3b2055f29a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 13.788s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:43:07 np0005539564 nova_compute[226295]: 2025-11-29 07:43:07.790 226310 INFO nova.virt.libvirt.driver [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Deleting local config drive /var/lib/nova/instances/980ff2fe-9165-4009-82a9-3c3b2055f29a/disk.config because it was imported into RBD.#033[00m
Nov 29 02:43:07 np0005539564 NetworkManager[48997]: <info>  [1764402187.8506] manager: (tapd00ce7a8-c8): new Tun device (/org/freedesktop/NetworkManager/Devices/26)
Nov 29 02:43:07 np0005539564 kernel: tapd00ce7a8-c8: entered promiscuous mode
Nov 29 02:43:07 np0005539564 systemd-udevd[231090]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:43:07 np0005539564 nova_compute[226295]: 2025-11-29 07:43:07.890 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:07 np0005539564 ovn_controller[130591]: 2025-11-29T07:43:07Z|00031|binding|INFO|Claiming lport d00ce7a8-c85b-4013-844e-d84467a05bff for this chassis.
Nov 29 02:43:07 np0005539564 ovn_controller[130591]: 2025-11-29T07:43:07Z|00032|binding|INFO|d00ce7a8-c85b-4013-844e-d84467a05bff: Claiming fa:16:3e:4c:df:aa 10.100.0.13
Nov 29 02:43:07 np0005539564 NetworkManager[48997]: <info>  [1764402187.8973] device (tapd00ce7a8-c8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:43:07 np0005539564 NetworkManager[48997]: <info>  [1764402187.8988] device (tapd00ce7a8-c8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:43:07 np0005539564 systemd-machined[190128]: New machine qemu-2-instance-00000006.
Nov 29 02:43:07 np0005539564 systemd[1]: Started Virtual Machine qemu-2-instance-00000006.
Nov 29 02:43:07 np0005539564 ovn_controller[130591]: 2025-11-29T07:43:07Z|00033|binding|INFO|Setting lport d00ce7a8-c85b-4013-844e-d84467a05bff ovn-installed in OVS
Nov 29 02:43:07 np0005539564 nova_compute[226295]: 2025-11-29 07:43:07.979 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:07 np0005539564 nova_compute[226295]: 2025-11-29 07:43:07.982 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:08.030 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:df:aa 10.100.0.13'], port_security=['fa:16:3e:4c:df:aa 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '980ff2fe-9165-4009-82a9-3c3b2055f29a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1bd43cdb-189f-493f-a1a6-454e88ff3fb6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '422656fa169e49dcb91b5d4a8819f5ff', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0766cb7d-04a9-440f-bd3d-d25f33d3e578', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67a8fc4d-6367-4b5b-8259-468538126128, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=d00ce7a8-c85b-4013-844e-d84467a05bff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:43:08 np0005539564 ovn_controller[130591]: 2025-11-29T07:43:08Z|00034|binding|INFO|Setting lport d00ce7a8-c85b-4013-844e-d84467a05bff up in Southbound
Nov 29 02:43:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:08.388 139780 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 29 02:43:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:08.388 139780 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpdxqep7s7/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 29 02:43:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:08.248 231140 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 29 02:43:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:08.254 231140 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 29 02:43:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:08.257 231140 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Nov 29 02:43:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:08.257 231140 INFO oslo.privsep.daemon [-] privsep daemon running as pid 231140#033[00m
Nov 29 02:43:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:08.391 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5e9cbb28-a826-4f32-8128-fa914bcaa267]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:08.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:08 np0005539564 nova_compute[226295]: 2025-11-29 07:43:08.562 226310 DEBUG nova.compute.manager [req-7124b22b-7191-437a-ac4b-4a7ca0135fdd req-f1f7a01e-a3ab-49b7-98f2-68086229c2ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Received event network-vif-plugged-bb62692a-9e57-4f29-a570-df3272a8f05c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:43:08 np0005539564 nova_compute[226295]: 2025-11-29 07:43:08.564 226310 DEBUG oslo_concurrency.lockutils [req-7124b22b-7191-437a-ac4b-4a7ca0135fdd req-f1f7a01e-a3ab-49b7-98f2-68086229c2ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5068c873-ee82-4faa-a05b-3df3ed25d792-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:43:08 np0005539564 nova_compute[226295]: 2025-11-29 07:43:08.565 226310 DEBUG oslo_concurrency.lockutils [req-7124b22b-7191-437a-ac4b-4a7ca0135fdd req-f1f7a01e-a3ab-49b7-98f2-68086229c2ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5068c873-ee82-4faa-a05b-3df3ed25d792-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:43:08 np0005539564 nova_compute[226295]: 2025-11-29 07:43:08.565 226310 DEBUG oslo_concurrency.lockutils [req-7124b22b-7191-437a-ac4b-4a7ca0135fdd req-f1f7a01e-a3ab-49b7-98f2-68086229c2ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5068c873-ee82-4faa-a05b-3df3ed25d792-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:43:08 np0005539564 nova_compute[226295]: 2025-11-29 07:43:08.566 226310 DEBUG nova.compute.manager [req-7124b22b-7191-437a-ac4b-4a7ca0135fdd req-f1f7a01e-a3ab-49b7-98f2-68086229c2ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Processing event network-vif-plugged-bb62692a-9e57-4f29-a570-df3272a8f05c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:43:08 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 29 02:43:08 np0005539564 nova_compute[226295]: 2025-11-29 07:43:08.880 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402188.8796391, 980ff2fe-9165-4009-82a9-3c3b2055f29a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:43:08 np0005539564 nova_compute[226295]: 2025-11-29 07:43:08.881 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] VM Started (Lifecycle Event)#033[00m
Nov 29 02:43:08 np0005539564 nova_compute[226295]: 2025-11-29 07:43:08.903 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:43:08 np0005539564 nova_compute[226295]: 2025-11-29 07:43:08.907 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402188.87993, 980ff2fe-9165-4009-82a9-3c3b2055f29a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:43:08 np0005539564 nova_compute[226295]: 2025-11-29 07:43:08.908 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:43:08 np0005539564 nova_compute[226295]: 2025-11-29 07:43:08.930 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:43:08 np0005539564 nova_compute[226295]: 2025-11-29 07:43:08.936 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:43:08 np0005539564 nova_compute[226295]: 2025-11-29 07:43:08.961 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:43:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:08.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:08.998 231140 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:43:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:08.999 231140 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:43:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:08.999 231140 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:43:09 np0005539564 nova_compute[226295]: 2025-11-29 07:43:09.028 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:09 np0005539564 nova_compute[226295]: 2025-11-29 07:43:09.296 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:09.744 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a684d040-6064-4edb-9abe-44e3fea9830a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:09.745 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c117dd1-51 in ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:43:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:09.747 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c117dd1-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:43:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:09.747 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8a7b0b48-9a2c-4343-a312-95558b9a0f11]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:09.752 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2fbaead1-3302-4eb1-bf2d-45b1e68a321d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:09.785 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[90420062-cc63-4e01-8449-2f6c0eb9bf31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:09.820 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c949a0-5acc-4a10-a852-cc7a3c94b222]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:09.823 139780 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp2wps_t3v/privsep.sock']#033[00m
Nov 29 02:43:09 np0005539564 podman[231216]: 2025-11-29 07:43:09.934885238 +0000 UTC m=+0.105151925 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent)
Nov 29 02:43:09 np0005539564 podman[231213]: 2025-11-29 07:43:09.960966994 +0000 UTC m=+0.135286931 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Nov 29 02:43:09 np0005539564 podman[231215]: 2025-11-29 07:43:09.962990218 +0000 UTC m=+0.137046948 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:43:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:10.534 139780 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 29 02:43:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:10.536 139780 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp2wps_t3v/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 29 02:43:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:10.395 231279 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 29 02:43:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:10.403 231279 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 29 02:43:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:10.407 231279 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Nov 29 02:43:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:10.408 231279 INFO oslo.privsep.daemon [-] privsep daemon running as pid 231279#033[00m
Nov 29 02:43:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:10.540 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[afa036b3-acb9-46c5-a183-3520ff012594]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:10.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:10 np0005539564 nova_compute[226295]: 2025-11-29 07:43:10.690 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402190.6902456, 5068c873-ee82-4faa-a05b-3df3ed25d792 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:43:10 np0005539564 nova_compute[226295]: 2025-11-29 07:43:10.691 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] VM Started (Lifecycle Event)#033[00m
Nov 29 02:43:10 np0005539564 nova_compute[226295]: 2025-11-29 07:43:10.694 226310 DEBUG nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:43:10 np0005539564 nova_compute[226295]: 2025-11-29 07:43:10.698 226310 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:43:10 np0005539564 nova_compute[226295]: 2025-11-29 07:43:10.701 226310 INFO nova.virt.libvirt.driver [-] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Instance spawned successfully.#033[00m
Nov 29 02:43:10 np0005539564 nova_compute[226295]: 2025-11-29 07:43:10.701 226310 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:43:10 np0005539564 nova_compute[226295]: 2025-11-29 07:43:10.720 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:43:10 np0005539564 nova_compute[226295]: 2025-11-29 07:43:10.725 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:43:10 np0005539564 nova_compute[226295]: 2025-11-29 07:43:10.729 226310 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:43:10 np0005539564 nova_compute[226295]: 2025-11-29 07:43:10.729 226310 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:43:10 np0005539564 nova_compute[226295]: 2025-11-29 07:43:10.729 226310 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:43:10 np0005539564 nova_compute[226295]: 2025-11-29 07:43:10.730 226310 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:43:10 np0005539564 nova_compute[226295]: 2025-11-29 07:43:10.730 226310 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:43:10 np0005539564 nova_compute[226295]: 2025-11-29 07:43:10.731 226310 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:43:10 np0005539564 nova_compute[226295]: 2025-11-29 07:43:10.760 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:43:10 np0005539564 nova_compute[226295]: 2025-11-29 07:43:10.761 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402190.6903534, 5068c873-ee82-4faa-a05b-3df3ed25d792 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:43:10 np0005539564 nova_compute[226295]: 2025-11-29 07:43:10.761 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:43:10 np0005539564 nova_compute[226295]: 2025-11-29 07:43:10.780 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:43:10 np0005539564 nova_compute[226295]: 2025-11-29 07:43:10.785 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402190.6969824, 5068c873-ee82-4faa-a05b-3df3ed25d792 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:43:10 np0005539564 nova_compute[226295]: 2025-11-29 07:43:10.786 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:43:10 np0005539564 nova_compute[226295]: 2025-11-29 07:43:10.792 226310 INFO nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Took 58.32 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:43:10 np0005539564 nova_compute[226295]: 2025-11-29 07:43:10.792 226310 DEBUG nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:43:10 np0005539564 nova_compute[226295]: 2025-11-29 07:43:10.805 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:43:10 np0005539564 nova_compute[226295]: 2025-11-29 07:43:10.809 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:43:10 np0005539564 nova_compute[226295]: 2025-11-29 07:43:10.839 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:43:10 np0005539564 nova_compute[226295]: 2025-11-29 07:43:10.886 226310 INFO nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Took 59.81 seconds to build instance.#033[00m
Nov 29 02:43:10 np0005539564 nova_compute[226295]: 2025-11-29 07:43:10.906 226310 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "5068c873-ee82-4faa-a05b-3df3ed25d792" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 59.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:43:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:10.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:11.020 231279 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:11.020 231279 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:11.020 231279 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.581 226310 DEBUG nova.compute.manager [req-28599a37-82ae-4147-98e4-a6808ab22281 req-10644826-d167-4aa7-905a-b1a8b44534f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Received event network-vif-plugged-bb62692a-9e57-4f29-a570-df3272a8f05c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.581 226310 DEBUG oslo_concurrency.lockutils [req-28599a37-82ae-4147-98e4-a6808ab22281 req-10644826-d167-4aa7-905a-b1a8b44534f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5068c873-ee82-4faa-a05b-3df3ed25d792-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.581 226310 DEBUG oslo_concurrency.lockutils [req-28599a37-82ae-4147-98e4-a6808ab22281 req-10644826-d167-4aa7-905a-b1a8b44534f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5068c873-ee82-4faa-a05b-3df3ed25d792-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:11.581 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[ef4e8214-16fc-4c0a-abee-1c7c4e45efa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.582 226310 DEBUG oslo_concurrency.lockutils [req-28599a37-82ae-4147-98e4-a6808ab22281 req-10644826-d167-4aa7-905a-b1a8b44534f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5068c873-ee82-4faa-a05b-3df3ed25d792-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.582 226310 DEBUG nova.compute.manager [req-28599a37-82ae-4147-98e4-a6808ab22281 req-10644826-d167-4aa7-905a-b1a8b44534f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] No waiting events found dispatching network-vif-plugged-bb62692a-9e57-4f29-a570-df3272a8f05c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.582 226310 WARNING nova.compute.manager [req-28599a37-82ae-4147-98e4-a6808ab22281 req-10644826-d167-4aa7-905a-b1a8b44534f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Received unexpected event network-vif-plugged-bb62692a-9e57-4f29-a570-df3272a8f05c for instance with vm_state active and task_state None.#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.582 226310 DEBUG nova.compute.manager [req-28599a37-82ae-4147-98e4-a6808ab22281 req-10644826-d167-4aa7-905a-b1a8b44534f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Received event network-vif-plugged-d00ce7a8-c85b-4013-844e-d84467a05bff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.583 226310 DEBUG oslo_concurrency.lockutils [req-28599a37-82ae-4147-98e4-a6808ab22281 req-10644826-d167-4aa7-905a-b1a8b44534f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "980ff2fe-9165-4009-82a9-3c3b2055f29a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.583 226310 DEBUG oslo_concurrency.lockutils [req-28599a37-82ae-4147-98e4-a6808ab22281 req-10644826-d167-4aa7-905a-b1a8b44534f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "980ff2fe-9165-4009-82a9-3c3b2055f29a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.583 226310 DEBUG oslo_concurrency.lockutils [req-28599a37-82ae-4147-98e4-a6808ab22281 req-10644826-d167-4aa7-905a-b1a8b44534f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "980ff2fe-9165-4009-82a9-3c3b2055f29a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.583 226310 DEBUG nova.compute.manager [req-28599a37-82ae-4147-98e4-a6808ab22281 req-10644826-d167-4aa7-905a-b1a8b44534f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Processing event network-vif-plugged-d00ce7a8-c85b-4013-844e-d84467a05bff _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.584 226310 DEBUG nova.compute.manager [req-28599a37-82ae-4147-98e4-a6808ab22281 req-10644826-d167-4aa7-905a-b1a8b44534f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Received event network-vif-plugged-d00ce7a8-c85b-4013-844e-d84467a05bff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.584 226310 DEBUG oslo_concurrency.lockutils [req-28599a37-82ae-4147-98e4-a6808ab22281 req-10644826-d167-4aa7-905a-b1a8b44534f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "980ff2fe-9165-4009-82a9-3c3b2055f29a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.584 226310 DEBUG oslo_concurrency.lockutils [req-28599a37-82ae-4147-98e4-a6808ab22281 req-10644826-d167-4aa7-905a-b1a8b44534f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "980ff2fe-9165-4009-82a9-3c3b2055f29a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.584 226310 DEBUG oslo_concurrency.lockutils [req-28599a37-82ae-4147-98e4-a6808ab22281 req-10644826-d167-4aa7-905a-b1a8b44534f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "980ff2fe-9165-4009-82a9-3c3b2055f29a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.585 226310 DEBUG nova.compute.manager [req-28599a37-82ae-4147-98e4-a6808ab22281 req-10644826-d167-4aa7-905a-b1a8b44534f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] No waiting events found dispatching network-vif-plugged-d00ce7a8-c85b-4013-844e-d84467a05bff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.585 226310 WARNING nova.compute.manager [req-28599a37-82ae-4147-98e4-a6808ab22281 req-10644826-d167-4aa7-905a-b1a8b44534f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Received unexpected event network-vif-plugged-d00ce7a8-c85b-4013-844e-d84467a05bff for instance with vm_state building and task_state spawning.#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.586 226310 DEBUG nova.compute.manager [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.594 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402191.5942454, 980ff2fe-9165-4009-82a9-3c3b2055f29a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.595 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.605 226310 DEBUG nova.virt.libvirt.driver [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:11.608 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b8e2e42b-013a-453b-aeaf-c6a7f5c43166]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:11 np0005539564 NetworkManager[48997]: <info>  [1764402191.6104] manager: (tap6c117dd1-50): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.611 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:43:11 np0005539564 systemd-udevd[231295]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.618 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.622 226310 INFO nova.virt.libvirt.driver [-] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Instance spawned successfully.#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.623 226310 DEBUG nova.virt.libvirt.driver [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.652 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.659 226310 DEBUG nova.virt.libvirt.driver [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.660 226310 DEBUG nova.virt.libvirt.driver [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.660 226310 DEBUG nova.virt.libvirt.driver [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.661 226310 DEBUG nova.virt.libvirt.driver [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.661 226310 DEBUG nova.virt.libvirt.driver [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.662 226310 DEBUG nova.virt.libvirt.driver [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:11.655 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[6d671397-b978-4320-86d5-e61f1ab1eeb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:11.676 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[f1e2728d-2629-43e2-b488-ac87d6a859c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:11 np0005539564 NetworkManager[48997]: <info>  [1764402191.7080] device (tap6c117dd1-50): carrier: link connected
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:11.709 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[a3b9798c-a885-49b8-9534-437310f03023]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:11.727 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0ab88ffb-0cef-4e1a-9031-fb6a4059c712]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c117dd1-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:e4:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516638, 'reachable_time': 24272, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 231315, 'error': None, 'target': 'ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.738 226310 INFO nova.compute.manager [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Took 38.25 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.739 226310 DEBUG nova.compute.manager [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:11.744 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d2c40b66-9c47-417c-8636-fbe4f7beda3b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1b:e465'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516638, 'tstamp': 516638}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 231316, 'error': None, 'target': 'ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:11.760 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f93cbd2f-e317-4157-ab04-7386c0e12874]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c117dd1-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:e4:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516638, 'reachable_time': 24272, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 231317, 'error': None, 'target': 'ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:11.796 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1250ad3a-3201-485b-b6fe-2d33ad45a9e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:11.890 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0548d258-a507-4f51-b916-081ec1e27689]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:11.893 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c117dd1-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:11.893 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:11.894 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c117dd1-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:43:11 np0005539564 NetworkManager[48997]: <info>  [1764402191.8983] manager: (tap6c117dd1-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.897 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:11 np0005539564 kernel: tap6c117dd1-50: entered promiscuous mode
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.904 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:11.906 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c117dd1-50, col_values=(('external_ids', {'iface-id': '32f6a270-f2be-48b5-9316-7ff23d26e5c2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:43:11 np0005539564 ovn_controller[130591]: 2025-11-29T07:43:11Z|00035|binding|INFO|Releasing lport 32f6a270-f2be-48b5-9316-7ff23d26e5c2 from this chassis (sb_readonly=0)
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.907 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:11 np0005539564 nova_compute[226295]: 2025-11-29 07:43:11.932 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:11.933 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c117dd1-5064-4e69-b07c-c93c3d729d3c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c117dd1-5064-4e69-b07c-c93c3d729d3c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:11.935 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[dcfa1ced-cdef-412d-b37b-d9f69a671db0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:11.937 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-6c117dd1-5064-4e69-b07c-c93c3d729d3c
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/6c117dd1-5064-4e69-b07c-c93c3d729d3c.pid.haproxy
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 6c117dd1-5064-4e69-b07c-c93c3d729d3c
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:43:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:11.938 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c', 'env', 'PROCESS_TAG=haproxy-6c117dd1-5064-4e69-b07c-c93c3d729d3c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c117dd1-5064-4e69-b07c-c93c3d729d3c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:43:12 np0005539564 nova_compute[226295]: 2025-11-29 07:43:12.026 226310 INFO nova.compute.manager [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Took 39.47 seconds to build instance.#033[00m
Nov 29 02:43:12 np0005539564 nova_compute[226295]: 2025-11-29 07:43:12.077 226310 DEBUG oslo_concurrency.lockutils [None req-81563cc5-5565-4dfa-a0d7-8c2d3177dd88 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Lock "980ff2fe-9165-4009-82a9-3c3b2055f29a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 39.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:43:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:12 np0005539564 podman[231350]: 2025-11-29 07:43:12.356774505 +0000 UTC m=+0.037427484 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:43:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:12.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:12 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Nov 29 02:43:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:12.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:13 np0005539564 podman[231350]: 2025-11-29 07:43:13.466946533 +0000 UTC m=+1.147599412 container create b74b66a7dd233a7e095c2f884dc8f9127ce89f0565a9327b40887033794fda66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 02:43:14 np0005539564 nova_compute[226295]: 2025-11-29 07:43:14.030 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:14 np0005539564 nova_compute[226295]: 2025-11-29 07:43:14.100 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:14 np0005539564 NetworkManager[48997]: <info>  [1764402194.1065] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/29)
Nov 29 02:43:14 np0005539564 NetworkManager[48997]: <info>  [1764402194.1073] device (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:43:14 np0005539564 NetworkManager[48997]: <info>  [1764402194.1097] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/30)
Nov 29 02:43:14 np0005539564 NetworkManager[48997]: <info>  [1764402194.1106] device (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:43:14 np0005539564 NetworkManager[48997]: <info>  [1764402194.1125] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Nov 29 02:43:14 np0005539564 NetworkManager[48997]: <info>  [1764402194.1138] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Nov 29 02:43:14 np0005539564 NetworkManager[48997]: <info>  [1764402194.1152] device (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 02:43:14 np0005539564 NetworkManager[48997]: <info>  [1764402194.1162] device (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 02:43:14 np0005539564 systemd[1]: Started libpod-conmon-b74b66a7dd233a7e095c2f884dc8f9127ce89f0565a9327b40887033794fda66.scope.
Nov 29 02:43:14 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:43:14 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/904105b814d91c5b2f3d0d2381c13c27638e00c0d59f8d9b077358fc092a8aca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:14 np0005539564 nova_compute[226295]: 2025-11-29 07:43:14.359 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:14 np0005539564 ovn_controller[130591]: 2025-11-29T07:43:14Z|00036|binding|INFO|Releasing lport 32f6a270-f2be-48b5-9316-7ff23d26e5c2 from this chassis (sb_readonly=0)
Nov 29 02:43:14 np0005539564 nova_compute[226295]: 2025-11-29 07:43:14.371 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:14 np0005539564 ovn_controller[130591]: 2025-11-29T07:43:14Z|00037|binding|INFO|Releasing lport 32f6a270-f2be-48b5-9316-7ff23d26e5c2 from this chassis (sb_readonly=0)
Nov 29 02:43:14 np0005539564 nova_compute[226295]: 2025-11-29 07:43:14.422 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:14 np0005539564 podman[231350]: 2025-11-29 07:43:14.438083751 +0000 UTC m=+2.118736630 container init b74b66a7dd233a7e095c2f884dc8f9127ce89f0565a9327b40887033794fda66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:43:14 np0005539564 podman[231350]: 2025-11-29 07:43:14.446283422 +0000 UTC m=+2.126936301 container start b74b66a7dd233a7e095c2f884dc8f9127ce89f0565a9327b40887033794fda66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 02:43:14 np0005539564 neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c[231365]: [NOTICE]   (231370) : New worker (231372) forked
Nov 29 02:43:14 np0005539564 neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c[231365]: [NOTICE]   (231370) : Loading success.
Nov 29 02:43:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:14.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:14.604 139780 INFO neutron.agent.ovn.metadata.agent [-] Port d00ce7a8-c85b-4013-844e-d84467a05bff in datapath 1bd43cdb-189f-493f-a1a6-454e88ff3fb6 unbound from our chassis#033[00m
Nov 29 02:43:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:14.607 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1bd43cdb-189f-493f-a1a6-454e88ff3fb6#033[00m
Nov 29 02:43:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:14.621 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[06059f13-2e7b-461c-a67a-d4e8c51cba60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:14.622 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1bd43cdb-11 in ovnmeta-1bd43cdb-189f-493f-a1a6-454e88ff3fb6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:43:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:14.624 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1bd43cdb-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:43:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:14.624 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c17c1115-54da-4b39-b294-0efec65c5cb8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:14.625 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[71a844e9-5389-4c52-8ecf-eab064685058]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:14 np0005539564 nova_compute[226295]: 2025-11-29 07:43:14.657 226310 DEBUG nova.compute.manager [req-7193ddd5-76d4-4a2d-9f9b-5fbef0cbcfb3 req-f4a91e85-4ed3-48ca-b301-3fef61717581 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Received event network-changed-d00ce7a8-c85b-4013-844e-d84467a05bff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:43:14 np0005539564 nova_compute[226295]: 2025-11-29 07:43:14.658 226310 DEBUG nova.compute.manager [req-7193ddd5-76d4-4a2d-9f9b-5fbef0cbcfb3 req-f4a91e85-4ed3-48ca-b301-3fef61717581 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Refreshing instance network info cache due to event network-changed-d00ce7a8-c85b-4013-844e-d84467a05bff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:43:14 np0005539564 nova_compute[226295]: 2025-11-29 07:43:14.658 226310 DEBUG oslo_concurrency.lockutils [req-7193ddd5-76d4-4a2d-9f9b-5fbef0cbcfb3 req-f4a91e85-4ed3-48ca-b301-3fef61717581 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-980ff2fe-9165-4009-82a9-3c3b2055f29a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:43:14 np0005539564 nova_compute[226295]: 2025-11-29 07:43:14.659 226310 DEBUG oslo_concurrency.lockutils [req-7193ddd5-76d4-4a2d-9f9b-5fbef0cbcfb3 req-f4a91e85-4ed3-48ca-b301-3fef61717581 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-980ff2fe-9165-4009-82a9-3c3b2055f29a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:43:14 np0005539564 nova_compute[226295]: 2025-11-29 07:43:14.659 226310 DEBUG nova.network.neutron [req-7193ddd5-76d4-4a2d-9f9b-5fbef0cbcfb3 req-f4a91e85-4ed3-48ca-b301-3fef61717581 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Refreshing network info cache for port d00ce7a8-c85b-4013-844e-d84467a05bff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:43:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:14.662 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[b192bb9f-800e-4723-9022-4276d3d4fd1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:14.683 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d3099ffd-5df6-458f-9296-d665c80c7ef0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:14.721 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[6f166dab-635a-43d6-a882-46f743f9d3df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:14.732 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e73f5c29-b2ca-42dd-8a0c-7c523bc2d0cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:14 np0005539564 NetworkManager[48997]: <info>  [1764402194.7349] manager: (tap1bd43cdb-10): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Nov 29 02:43:14 np0005539564 systemd-udevd[231389]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:43:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:14.775 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2311f7-dfaa-4fea-80d4-a44edf76120e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:14.779 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb1ea4a-959c-4380-bb12-5a751c8aaf8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:14 np0005539564 NetworkManager[48997]: <info>  [1764402194.8118] device (tap1bd43cdb-10): carrier: link connected
Nov 29 02:43:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:14.817 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[73dedced-4aca-45e9-bf20-cbcd09de3b2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:14.836 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3988593b-55a8-46d6-a0da-a6238f7fbc7a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1bd43cdb-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:01:83'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516949, 'reachable_time': 27907, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 231408, 'error': None, 'target': 'ovnmeta-1bd43cdb-189f-493f-a1a6-454e88ff3fb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:14.852 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4e9010a9-1286-41e6-b89a-ed16bc43ceac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea7:183'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516949, 'tstamp': 516949}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 231409, 'error': None, 'target': 'ovnmeta-1bd43cdb-189f-493f-a1a6-454e88ff3fb6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:14.866 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[557ac3de-acd2-4d35-bfc5-d236134a60ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1bd43cdb-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:01:83'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516949, 'reachable_time': 27907, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 231410, 'error': None, 'target': 'ovnmeta-1bd43cdb-189f-493f-a1a6-454e88ff3fb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:14.890 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e45175a7-6139-4482-ab20-a914604f5c15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:14.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:14.990 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c2a6a6e7-0cb8-42ba-b6b0-5bc118be4373]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:14.993 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1bd43cdb-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:43:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:14.993 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:43:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:14.994 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1bd43cdb-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:43:14 np0005539564 nova_compute[226295]: 2025-11-29 07:43:14.996 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:15 np0005539564 NetworkManager[48997]: <info>  [1764402195.0001] manager: (tap1bd43cdb-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Nov 29 02:43:15 np0005539564 kernel: tap1bd43cdb-10: entered promiscuous mode
Nov 29 02:43:15 np0005539564 nova_compute[226295]: 2025-11-29 07:43:15.004 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:15.006 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1bd43cdb-10, col_values=(('external_ids', {'iface-id': 'fcdda78e-9175-4355-88b9-3a498b29f18f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:43:15 np0005539564 nova_compute[226295]: 2025-11-29 07:43:15.007 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:15 np0005539564 ovn_controller[130591]: 2025-11-29T07:43:15Z|00038|binding|INFO|Releasing lport fcdda78e-9175-4355-88b9-3a498b29f18f from this chassis (sb_readonly=0)
Nov 29 02:43:15 np0005539564 nova_compute[226295]: 2025-11-29 07:43:15.033 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:15.035 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1bd43cdb-189f-493f-a1a6-454e88ff3fb6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1bd43cdb-189f-493f-a1a6-454e88ff3fb6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:15.036 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f23a3a6a-28e0-4cb9-98e8-6bd1eb15f943]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:15.037 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-1bd43cdb-189f-493f-a1a6-454e88ff3fb6
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/1bd43cdb-189f-493f-a1a6-454e88ff3fb6.pid.haproxy
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 1bd43cdb-189f-493f-a1a6-454e88ff3fb6
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:43:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:15.039 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1bd43cdb-189f-493f-a1a6-454e88ff3fb6', 'env', 'PROCESS_TAG=haproxy-1bd43cdb-189f-493f-a1a6-454e88ff3fb6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1bd43cdb-189f-493f-a1a6-454e88ff3fb6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:43:15 np0005539564 podman[231443]: 2025-11-29 07:43:15.431885661 +0000 UTC m=+0.025622745 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:43:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:16.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:16.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:18.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:18.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:19 np0005539564 nova_compute[226295]: 2025-11-29 07:43:19.037 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:19 np0005539564 nova_compute[226295]: 2025-11-29 07:43:19.364 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:19 np0005539564 nova_compute[226295]: 2025-11-29 07:43:19.931 226310 DEBUG nova.network.neutron [req-7193ddd5-76d4-4a2d-9f9b-5fbef0cbcfb3 req-f4a91e85-4ed3-48ca-b301-3fef61717581 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Updated VIF entry in instance network info cache for port d00ce7a8-c85b-4013-844e-d84467a05bff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:43:19 np0005539564 nova_compute[226295]: 2025-11-29 07:43:19.931 226310 DEBUG nova.network.neutron [req-7193ddd5-76d4-4a2d-9f9b-5fbef0cbcfb3 req-f4a91e85-4ed3-48ca-b301-3fef61717581 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Updating instance_info_cache with network_info: [{"id": "d00ce7a8-c85b-4013-844e-d84467a05bff", "address": "fa:16:3e:4c:df:aa", "network": {"id": "1bd43cdb-189f-493f-a1a6-454e88ff3fb6", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1897220559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "422656fa169e49dcb91b5d4a8819f5ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd00ce7a8-c8", "ovs_interfaceid": "d00ce7a8-c85b-4013-844e-d84467a05bff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:43:19 np0005539564 nova_compute[226295]: 2025-11-29 07:43:19.952 226310 DEBUG oslo_concurrency.lockutils [req-7193ddd5-76d4-4a2d-9f9b-5fbef0cbcfb3 req-f4a91e85-4ed3-48ca-b301-3fef61717581 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-980ff2fe-9165-4009-82a9-3c3b2055f29a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:43:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:43:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:20.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:43:20 np0005539564 podman[231443]: 2025-11-29 07:43:20.727672671 +0000 UTC m=+5.321409705 container create 5c9ee03667982f1313b18439784671316c67c086b6d9721a1a98ff99f487578f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1bd43cdb-189f-493f-a1a6-454e88ff3fb6, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:43:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:20.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:22.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:22 np0005539564 systemd[1]: Started libpod-conmon-5c9ee03667982f1313b18439784671316c67c086b6d9721a1a98ff99f487578f.scope.
Nov 29 02:43:22 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:43:22 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ba302962eb74dbe965e7008433b085ae567bdfe532bd6a410ac5bfd8cfeaef2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:22.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:23 np0005539564 podman[231443]: 2025-11-29 07:43:23.734171431 +0000 UTC m=+8.327908495 container init 5c9ee03667982f1313b18439784671316c67c086b6d9721a1a98ff99f487578f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1bd43cdb-189f-493f-a1a6-454e88ff3fb6, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 02:43:23 np0005539564 podman[231443]: 2025-11-29 07:43:23.74190565 +0000 UTC m=+8.335642714 container start 5c9ee03667982f1313b18439784671316c67c086b6d9721a1a98ff99f487578f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1bd43cdb-189f-493f-a1a6-454e88ff3fb6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 02:43:23 np0005539564 neutron-haproxy-ovnmeta-1bd43cdb-189f-493f-a1a6-454e88ff3fb6[231460]: [NOTICE]   (231464) : New worker (231466) forked
Nov 29 02:43:23 np0005539564 neutron-haproxy-ovnmeta-1bd43cdb-189f-493f-a1a6-454e88ff3fb6[231460]: [NOTICE]   (231464) : Loading success.
Nov 29 02:43:24 np0005539564 nova_compute[226295]: 2025-11-29 07:43:24.039 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:24 np0005539564 nova_compute[226295]: 2025-11-29 07:43:24.367 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:24.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:25.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:26.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:27.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:28.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:29.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:29 np0005539564 nova_compute[226295]: 2025-11-29 07:43:29.041 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:29 np0005539564 nova_compute[226295]: 2025-11-29 07:43:29.369 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:30 np0005539564 nova_compute[226295]: 2025-11-29 07:43:30.548 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:43:30 np0005539564 nova_compute[226295]: 2025-11-29 07:43:30.549 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:43:30 np0005539564 nova_compute[226295]: 2025-11-29 07:43:30.550 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:43:30 np0005539564 nova_compute[226295]: 2025-11-29 07:43:30.550 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:43:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:43:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:30.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:43:30 np0005539564 nova_compute[226295]: 2025-11-29 07:43:30.888 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-5068c873-ee82-4faa-a05b-3df3ed25d792" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:43:30 np0005539564 nova_compute[226295]: 2025-11-29 07:43:30.889 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-5068c873-ee82-4faa-a05b-3df3ed25d792" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:43:30 np0005539564 nova_compute[226295]: 2025-11-29 07:43:30.889 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:43:30 np0005539564 nova_compute[226295]: 2025-11-29 07:43:30.890 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5068c873-ee82-4faa-a05b-3df3ed25d792 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:43:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:43:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:31.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:43:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:32.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:32 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:43:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:33.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:34 np0005539564 nova_compute[226295]: 2025-11-29 07:43:34.045 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:34 np0005539564 nova_compute[226295]: 2025-11-29 07:43:34.429 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.132117271s, txc = 0x55ba5046a600
Nov 29 02:43:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.131082058s, txc = 0x55ba5043ac00
Nov 29 02:43:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.130276680s, txc = 0x55ba504cf800
Nov 29 02:43:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.965155125s, txc = 0x55ba503e2900
Nov 29 02:43:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.957889080s, txc = 0x55ba5069cc00
Nov 29 02:43:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.956810951s, txc = 0x55ba506ed800
Nov 29 02:43:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.947404385s, txc = 0x55ba503af200
Nov 29 02:43:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.908286572s, txc = 0x55ba50c5a300
Nov 29 02:43:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.896120548s, txc = 0x55ba50697200
Nov 29 02:43:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.892949104s, txc = 0x55ba503fd500
Nov 29 02:43:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.862259388s, txc = 0x55ba506cc000
Nov 29 02:43:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.806488037s, txc = 0x55ba503e3200
Nov 29 02:43:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.805934906s, txc = 0x55ba5042e900
Nov 29 02:43:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.803563595s, txc = 0x55ba503ae900
Nov 29 02:43:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.743099689s, txc = 0x55ba50445200
Nov 29 02:43:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.741923809s, txc = 0x55ba50687b00
Nov 29 02:43:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.739813805s, txc = 0x55ba503fdb00
Nov 29 02:43:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.736976147s, txc = 0x55ba5041e000
Nov 29 02:43:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.736602783s, txc = 0x55ba503ae000
Nov 29 02:43:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:34.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:34 np0005539564 nova_compute[226295]: 2025-11-29 07:43:34.749 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Updating instance_info_cache with network_info: [{"id": "bb62692a-9e57-4f29-a570-df3272a8f05c", "address": "fa:16:3e:c6:02:71", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::229", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb62692a-9e", "ovs_interfaceid": "bb62692a-9e57-4f29-a570-df3272a8f05c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:43:34 np0005539564 nova_compute[226295]: 2025-11-29 07:43:34.814 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-5068c873-ee82-4faa-a05b-3df3ed25d792" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:43:34 np0005539564 nova_compute[226295]: 2025-11-29 07:43:34.814 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:43:34 np0005539564 nova_compute[226295]: 2025-11-29 07:43:34.814 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:43:34 np0005539564 nova_compute[226295]: 2025-11-29 07:43:34.815 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:43:34 np0005539564 nova_compute[226295]: 2025-11-29 07:43:34.815 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:43:34 np0005539564 nova_compute[226295]: 2025-11-29 07:43:34.815 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:43:34 np0005539564 nova_compute[226295]: 2025-11-29 07:43:34.816 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:43:34 np0005539564 nova_compute[226295]: 2025-11-29 07:43:34.816 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:43:34 np0005539564 nova_compute[226295]: 2025-11-29 07:43:34.816 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:43:34 np0005539564 nova_compute[226295]: 2025-11-29 07:43:34.816 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:43:34 np0005539564 nova_compute[226295]: 2025-11-29 07:43:34.843 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:43:34 np0005539564 nova_compute[226295]: 2025-11-29 07:43:34.843 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:43:34 np0005539564 nova_compute[226295]: 2025-11-29 07:43:34.844 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:43:34 np0005539564 nova_compute[226295]: 2025-11-29 07:43:34.844 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:43:34 np0005539564 nova_compute[226295]: 2025-11-29 07:43:34.845 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:43:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:43:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:35.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:43:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:36.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:36 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:43:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:37.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:37 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.549806595s, txc = 0x55ba506cd800
Nov 29 02:43:37 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.584959030s, txc = 0x55ba4e52a600
Nov 29 02:43:37 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.821691513s, txc = 0x55ba50696f00
Nov 29 02:43:37 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.821281433s, txc = 0x55ba50687500
Nov 29 02:43:37 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.820740700s, txc = 0x55ba5041e300
Nov 29 02:43:37 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.820413113s, txc = 0x55ba50697b00
Nov 29 02:43:37 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.819881916s, txc = 0x55ba50640c00
Nov 29 02:43:37 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.819657803s, txc = 0x55ba50432300
Nov 29 02:43:37 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.818646908s, txc = 0x55ba503afb00
Nov 29 02:43:37 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.819386482s, txc = 0x55ba50dcf500
Nov 29 02:43:37 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.818498135s, txc = 0x55ba503e2f00
Nov 29 02:43:37 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.818586826s, txc = 0x55ba503bfb00
Nov 29 02:43:37 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.818340302s, txc = 0x55ba506f7500
Nov 29 02:43:37 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.818068504s, txc = 0x55ba503fd200
Nov 29 02:43:37 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.818206310s, txc = 0x55ba5041e600
Nov 29 02:43:37 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.818197727s, txc = 0x55ba503af800
Nov 29 02:43:37 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.817104816s, txc = 0x55ba5069f200
Nov 29 02:43:37 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.814854622s, txc = 0x55ba50686900
Nov 29 02:43:37 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.816723824s, txc = 0x55ba50686c00
Nov 29 02:43:37 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.816723347s, txc = 0x55ba503e6c00
Nov 29 02:43:37 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.814560890s, txc = 0x55ba50640300
Nov 29 02:43:37 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.814603806s, txc = 0x55ba50687800
Nov 29 02:43:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:43:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:38.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:43:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:39.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:39 np0005539564 nova_compute[226295]: 2025-11-29 07:43:39.048 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:39 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.725197792s, txc = 0x55ba50640000
Nov 29 02:43:39 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.728837490s, txc = 0x55ba50434c00
Nov 29 02:43:39 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.725378036s, txc = 0x55ba503af500
Nov 29 02:43:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:39 np0005539564 nova_compute[226295]: 2025-11-29 07:43:39.431 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:43:39 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1544090733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:43:39 np0005539564 nova_compute[226295]: 2025-11-29 07:43:39.816 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.971s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:43:39 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.384031773s, txc = 0x55ba5069ef00
Nov 29 02:43:39 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.385251999s, txc = 0x55ba5041c900
Nov 29 02:43:39 np0005539564 nova_compute[226295]: 2025-11-29 07:43:39.917 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:43:39 np0005539564 nova_compute[226295]: 2025-11-29 07:43:39.918 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:43:39 np0005539564 nova_compute[226295]: 2025-11-29 07:43:39.922 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:43:39 np0005539564 nova_compute[226295]: 2025-11-29 07:43:39.922 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:43:40 np0005539564 nova_compute[226295]: 2025-11-29 07:43:40.149 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:43:40 np0005539564 nova_compute[226295]: 2025-11-29 07:43:40.150 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4619MB free_disk=20.781497955322266GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:43:40 np0005539564 nova_compute[226295]: 2025-11-29 07:43:40.150 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:43:40 np0005539564 nova_compute[226295]: 2025-11-29 07:43:40.150 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:43:40 np0005539564 nova_compute[226295]: 2025-11-29 07:43:40.224 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 5068c873-ee82-4faa-a05b-3df3ed25d792 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:43:40 np0005539564 nova_compute[226295]: 2025-11-29 07:43:40.224 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 980ff2fe-9165-4009-82a9-3c3b2055f29a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:43:40 np0005539564 nova_compute[226295]: 2025-11-29 07:43:40.224 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:43:40 np0005539564 nova_compute[226295]: 2025-11-29 07:43:40.224 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:43:40 np0005539564 nova_compute[226295]: 2025-11-29 07:43:40.281 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:43:40 np0005539564 podman[231523]: 2025-11-29 07:43:40.517221597 +0000 UTC m=+0.067233879 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 02:43:40 np0005539564 podman[231521]: 2025-11-29 07:43:40.531645707 +0000 UTC m=+0.085797491 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:43:40 np0005539564 podman[231524]: 2025-11-29 07:43:40.539525491 +0000 UTC m=+0.083991943 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:43:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:40.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:40 np0005539564 nova_compute[226295]: 2025-11-29 07:43:40.714 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:43:40 np0005539564 nova_compute[226295]: 2025-11-29 07:43:40.722 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:43:40 np0005539564 nova_compute[226295]: 2025-11-29 07:43:40.744 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:43:40 np0005539564 nova_compute[226295]: 2025-11-29 07:43:40.780 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:43:40 np0005539564 nova_compute[226295]: 2025-11-29 07:43:40.781 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:43:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:41.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:42.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:43.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:44 np0005539564 nova_compute[226295]: 2025-11-29 07:43:44.051 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:44 np0005539564 nova_compute[226295]: 2025-11-29 07:43:44.434 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:44.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:45 np0005539564 ovn_controller[130591]: 2025-11-29T07:43:45Z|00039|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Nov 29 02:43:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:45.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:46.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:47.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:47 np0005539564 ovn_controller[130591]: 2025-11-29T07:43:47Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4c:df:aa 10.100.0.13
Nov 29 02:43:47 np0005539564 ovn_controller[130591]: 2025-11-29T07:43:47Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4c:df:aa 10.100.0.13
Nov 29 02:43:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:48.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:49.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:49 np0005539564 nova_compute[226295]: 2025-11-29 07:43:49.053 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:49 np0005539564 nova_compute[226295]: 2025-11-29 07:43:49.437 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:50.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:51.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:52 np0005539564 ovn_controller[130591]: 2025-11-29T07:43:52Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c6:02:71 10.1.0.28
Nov 29 02:43:52 np0005539564 ovn_controller[130591]: 2025-11-29T07:43:52Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c6:02:71 10.1.0.28
Nov 29 02:43:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:52.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:53.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:54 np0005539564 nova_compute[226295]: 2025-11-29 07:43:54.056 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:54 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:43:54 np0005539564 nova_compute[226295]: 2025-11-29 07:43:54.491 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:54.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:54.717 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:43:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:43:54.718 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:43:54 np0005539564 nova_compute[226295]: 2025-11-29 07:43:54.719 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:55.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:56.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:57.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:57 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:43:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:43:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:58.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:43:59 np0005539564 nova_compute[226295]: 2025-11-29 07:43:59.059 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:43:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:59.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:59 np0005539564 nova_compute[226295]: 2025-11-29 07:43:59.494 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:00.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:01.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:02.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:03.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:03.693 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:03.694 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:03.694 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:44:04 np0005539564 nova_compute[226295]: 2025-11-29 07:44:04.061 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:04 np0005539564 nova_compute[226295]: 2025-11-29 07:44:04.537 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:04.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:04.720 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:44:04 np0005539564 nova_compute[226295]: 2025-11-29 07:44:04.838 226310 DEBUG oslo_concurrency.lockutils [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "5068c873-ee82-4faa-a05b-3df3ed25d792" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:04 np0005539564 nova_compute[226295]: 2025-11-29 07:44:04.839 226310 DEBUG oslo_concurrency.lockutils [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "5068c873-ee82-4faa-a05b-3df3ed25d792" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:04 np0005539564 nova_compute[226295]: 2025-11-29 07:44:04.839 226310 DEBUG oslo_concurrency.lockutils [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "5068c873-ee82-4faa-a05b-3df3ed25d792-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:04 np0005539564 nova_compute[226295]: 2025-11-29 07:44:04.840 226310 DEBUG oslo_concurrency.lockutils [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "5068c873-ee82-4faa-a05b-3df3ed25d792-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:04 np0005539564 nova_compute[226295]: 2025-11-29 07:44:04.840 226310 DEBUG oslo_concurrency.lockutils [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "5068c873-ee82-4faa-a05b-3df3ed25d792-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:44:04 np0005539564 nova_compute[226295]: 2025-11-29 07:44:04.842 226310 INFO nova.compute.manager [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Terminating instance#033[00m
Nov 29 02:44:04 np0005539564 nova_compute[226295]: 2025-11-29 07:44:04.845 226310 DEBUG nova.compute.manager [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:44:04 np0005539564 kernel: tapbb62692a-9e (unregistering): left promiscuous mode
Nov 29 02:44:04 np0005539564 NetworkManager[48997]: <info>  [1764402244.9677] device (tapbb62692a-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:44:04 np0005539564 ovn_controller[130591]: 2025-11-29T07:44:04Z|00040|binding|INFO|Releasing lport bb62692a-9e57-4f29-a570-df3272a8f05c from this chassis (sb_readonly=0)
Nov 29 02:44:04 np0005539564 ovn_controller[130591]: 2025-11-29T07:44:04Z|00041|binding|INFO|Setting lport bb62692a-9e57-4f29-a570-df3272a8f05c down in Southbound
Nov 29 02:44:04 np0005539564 nova_compute[226295]: 2025-11-29 07:44:04.975 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:04 np0005539564 ovn_controller[130591]: 2025-11-29T07:44:04Z|00042|binding|INFO|Removing iface tapbb62692a-9e ovn-installed in OVS
Nov 29 02:44:04 np0005539564 nova_compute[226295]: 2025-11-29 07:44:04.977 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:04.985 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:02:71 10.1.0.28 fdfe:381f:8400::229'], port_security=['fa:16:3e:c6:02:71 10.1.0.28 fdfe:381f:8400::229'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.28/26 fdfe:381f:8400::229/64', 'neutron:device_id': '5068c873-ee82-4faa-a05b-3df3ed25d792', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c117dd1-5064-4e69-b07c-c93c3d729d3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d3a6ccbb2794f6e85d683953ac4b5fd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '441b5877-d47a-4ccc-b96a-381864fe0f87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab4638fe-12b3-4f0f-a7fc-23f58f536508, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=bb62692a-9e57-4f29-a570-df3272a8f05c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:44:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:04.986 139780 INFO neutron.agent.ovn.metadata.agent [-] Port bb62692a-9e57-4f29-a570-df3272a8f05c in datapath 6c117dd1-5064-4e69-b07c-c93c3d729d3c unbound from our chassis#033[00m
Nov 29 02:44:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:04.987 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c117dd1-5064-4e69-b07c-c93c3d729d3c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:44:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:04.988 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f79a5884-5318-4512-8224-e6a4d4709bbb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:44:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:04.989 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c namespace which is not needed anymore#033[00m
Nov 29 02:44:04 np0005539564 nova_compute[226295]: 2025-11-29 07:44:04.991 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:05 np0005539564 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000003.scope: Deactivated successfully.
Nov 29 02:44:05 np0005539564 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000003.scope: Consumed 17.143s CPU time.
Nov 29 02:44:05 np0005539564 systemd-machined[190128]: Machine qemu-1-instance-00000003 terminated.
Nov 29 02:44:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:05.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:05 np0005539564 nova_compute[226295]: 2025-11-29 07:44:05.085 226310 INFO nova.virt.libvirt.driver [-] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Instance destroyed successfully.#033[00m
Nov 29 02:44:05 np0005539564 nova_compute[226295]: 2025-11-29 07:44:05.085 226310 DEBUG nova.objects.instance [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lazy-loading 'resources' on Instance uuid 5068c873-ee82-4faa-a05b-3df3ed25d792 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:44:05 np0005539564 nova_compute[226295]: 2025-11-29 07:44:05.114 226310 DEBUG nova.virt.libvirt.vif [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:42:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-1667508244-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1667508244-2',id=3,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-11-29T07:43:10Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d3a6ccbb2794f6e85d683953ac4b5fd',ramdisk_id='',reservation_id='r-2hb64b0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AutoAllocateNetworkTest-752491155',owner_user_name='tempest-AutoAllocateNetworkTest-752491155-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:43:10Z,user_data=None,user_id='cf2495f54add463c8ce9d2dd8623347c',uuid=5068c873-ee82-4faa-a05b-3df3ed25d792,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bb62692a-9e57-4f29-a570-df3272a8f05c", "address": "fa:16:3e:c6:02:71", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::229", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb62692a-9e", "ovs_interfaceid": "bb62692a-9e57-4f29-a570-df3272a8f05c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:44:05 np0005539564 nova_compute[226295]: 2025-11-29 07:44:05.114 226310 DEBUG nova.network.os_vif_util [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Converting VIF {"id": "bb62692a-9e57-4f29-a570-df3272a8f05c", "address": "fa:16:3e:c6:02:71", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::229", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb62692a-9e", "ovs_interfaceid": "bb62692a-9e57-4f29-a570-df3272a8f05c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:44:05 np0005539564 nova_compute[226295]: 2025-11-29 07:44:05.115 226310 DEBUG nova.network.os_vif_util [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c6:02:71,bridge_name='br-int',has_traffic_filtering=True,id=bb62692a-9e57-4f29-a570-df3272a8f05c,network=Network(6c117dd1-5064-4e69-b07c-c93c3d729d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb62692a-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:44:05 np0005539564 nova_compute[226295]: 2025-11-29 07:44:05.115 226310 DEBUG os_vif [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:02:71,bridge_name='br-int',has_traffic_filtering=True,id=bb62692a-9e57-4f29-a570-df3272a8f05c,network=Network(6c117dd1-5064-4e69-b07c-c93c3d729d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb62692a-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:44:05 np0005539564 nova_compute[226295]: 2025-11-29 07:44:05.117 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:05 np0005539564 nova_compute[226295]: 2025-11-29 07:44:05.118 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb62692a-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:44:05 np0005539564 nova_compute[226295]: 2025-11-29 07:44:05.119 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:05 np0005539564 nova_compute[226295]: 2025-11-29 07:44:05.120 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:05 np0005539564 nova_compute[226295]: 2025-11-29 07:44:05.122 226310 INFO os_vif [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:02:71,bridge_name='br-int',has_traffic_filtering=True,id=bb62692a-9e57-4f29-a570-df3272a8f05c,network=Network(6c117dd1-5064-4e69-b07c-c93c3d729d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb62692a-9e')#033[00m
Nov 29 02:44:05 np0005539564 neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c[231365]: [NOTICE]   (231370) : haproxy version is 2.8.14-c23fe91
Nov 29 02:44:05 np0005539564 neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c[231365]: [NOTICE]   (231370) : path to executable is /usr/sbin/haproxy
Nov 29 02:44:05 np0005539564 neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c[231365]: [WARNING]  (231370) : Exiting Master process...
Nov 29 02:44:05 np0005539564 neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c[231365]: [WARNING]  (231370) : Exiting Master process...
Nov 29 02:44:05 np0005539564 neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c[231365]: [ALERT]    (231370) : Current worker (231372) exited with code 143 (Terminated)
Nov 29 02:44:05 np0005539564 neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c[231365]: [WARNING]  (231370) : All workers exited. Exiting... (0)
Nov 29 02:44:05 np0005539564 systemd[1]: libpod-b74b66a7dd233a7e095c2f884dc8f9127ce89f0565a9327b40887033794fda66.scope: Deactivated successfully.
Nov 29 02:44:05 np0005539564 podman[231661]: 2025-11-29 07:44:05.173544693 +0000 UTC m=+0.099659403 container died b74b66a7dd233a7e095c2f884dc8f9127ce89f0565a9327b40887033794fda66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:44:05 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b74b66a7dd233a7e095c2f884dc8f9127ce89f0565a9327b40887033794fda66-userdata-shm.mount: Deactivated successfully.
Nov 29 02:44:05 np0005539564 systemd[1]: var-lib-containers-storage-overlay-904105b814d91c5b2f3d0d2381c13c27638e00c0d59f8d9b077358fc092a8aca-merged.mount: Deactivated successfully.
Nov 29 02:44:05 np0005539564 nova_compute[226295]: 2025-11-29 07:44:05.372 226310 DEBUG nova.compute.manager [req-e8cdb7c6-ed7f-4efa-9c8e-b6c0d1f9ad60 req-e4bdf10c-04ff-4999-a43b-e51b16d1262a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Received event network-vif-unplugged-bb62692a-9e57-4f29-a570-df3272a8f05c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:44:05 np0005539564 nova_compute[226295]: 2025-11-29 07:44:05.372 226310 DEBUG oslo_concurrency.lockutils [req-e8cdb7c6-ed7f-4efa-9c8e-b6c0d1f9ad60 req-e4bdf10c-04ff-4999-a43b-e51b16d1262a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5068c873-ee82-4faa-a05b-3df3ed25d792-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:05 np0005539564 nova_compute[226295]: 2025-11-29 07:44:05.373 226310 DEBUG oslo_concurrency.lockutils [req-e8cdb7c6-ed7f-4efa-9c8e-b6c0d1f9ad60 req-e4bdf10c-04ff-4999-a43b-e51b16d1262a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5068c873-ee82-4faa-a05b-3df3ed25d792-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:05 np0005539564 nova_compute[226295]: 2025-11-29 07:44:05.373 226310 DEBUG oslo_concurrency.lockutils [req-e8cdb7c6-ed7f-4efa-9c8e-b6c0d1f9ad60 req-e4bdf10c-04ff-4999-a43b-e51b16d1262a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5068c873-ee82-4faa-a05b-3df3ed25d792-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:44:05 np0005539564 nova_compute[226295]: 2025-11-29 07:44:05.373 226310 DEBUG nova.compute.manager [req-e8cdb7c6-ed7f-4efa-9c8e-b6c0d1f9ad60 req-e4bdf10c-04ff-4999-a43b-e51b16d1262a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] No waiting events found dispatching network-vif-unplugged-bb62692a-9e57-4f29-a570-df3272a8f05c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:44:05 np0005539564 nova_compute[226295]: 2025-11-29 07:44:05.373 226310 DEBUG nova.compute.manager [req-e8cdb7c6-ed7f-4efa-9c8e-b6c0d1f9ad60 req-e4bdf10c-04ff-4999-a43b-e51b16d1262a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Received event network-vif-unplugged-bb62692a-9e57-4f29-a570-df3272a8f05c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:44:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:06.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:06 np0005539564 podman[231661]: 2025-11-29 07:44:06.694853362 +0000 UTC m=+1.620968112 container cleanup b74b66a7dd233a7e095c2f884dc8f9127ce89f0565a9327b40887033794fda66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:44:06 np0005539564 systemd[1]: libpod-conmon-b74b66a7dd233a7e095c2f884dc8f9127ce89f0565a9327b40887033794fda66.scope: Deactivated successfully.
Nov 29 02:44:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:07.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:07 np0005539564 nova_compute[226295]: 2025-11-29 07:44:07.647 226310 DEBUG nova.compute.manager [req-4b9016a1-1b11-4c03-9362-c3a3e77dc650 req-8a54786f-4b28-467f-b1ef-08ba4bc69f77 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Received event network-vif-plugged-bb62692a-9e57-4f29-a570-df3272a8f05c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:44:07 np0005539564 nova_compute[226295]: 2025-11-29 07:44:07.648 226310 DEBUG oslo_concurrency.lockutils [req-4b9016a1-1b11-4c03-9362-c3a3e77dc650 req-8a54786f-4b28-467f-b1ef-08ba4bc69f77 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5068c873-ee82-4faa-a05b-3df3ed25d792-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:07 np0005539564 nova_compute[226295]: 2025-11-29 07:44:07.648 226310 DEBUG oslo_concurrency.lockutils [req-4b9016a1-1b11-4c03-9362-c3a3e77dc650 req-8a54786f-4b28-467f-b1ef-08ba4bc69f77 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5068c873-ee82-4faa-a05b-3df3ed25d792-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:07 np0005539564 nova_compute[226295]: 2025-11-29 07:44:07.649 226310 DEBUG oslo_concurrency.lockutils [req-4b9016a1-1b11-4c03-9362-c3a3e77dc650 req-8a54786f-4b28-467f-b1ef-08ba4bc69f77 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5068c873-ee82-4faa-a05b-3df3ed25d792-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:44:07 np0005539564 nova_compute[226295]: 2025-11-29 07:44:07.649 226310 DEBUG nova.compute.manager [req-4b9016a1-1b11-4c03-9362-c3a3e77dc650 req-8a54786f-4b28-467f-b1ef-08ba4bc69f77 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] No waiting events found dispatching network-vif-plugged-bb62692a-9e57-4f29-a570-df3272a8f05c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:44:07 np0005539564 nova_compute[226295]: 2025-11-29 07:44:07.650 226310 WARNING nova.compute.manager [req-4b9016a1-1b11-4c03-9362-c3a3e77dc650 req-8a54786f-4b28-467f-b1ef-08ba4bc69f77 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Received unexpected event network-vif-plugged-bb62692a-9e57-4f29-a570-df3272a8f05c for instance with vm_state active and task_state deleting.#033[00m
Nov 29 02:44:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:08.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:09.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:09 np0005539564 podman[231719]: 2025-11-29 07:44:09.37716357 +0000 UTC m=+2.649757227 container remove b74b66a7dd233a7e095c2f884dc8f9127ce89f0565a9327b40887033794fda66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:44:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:09.384 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4bf6b783-2d43-4f2e-9ac8-4de4aa952528]: (4, ('Sat Nov 29 07:44:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c (b74b66a7dd233a7e095c2f884dc8f9127ce89f0565a9327b40887033794fda66)\nb74b66a7dd233a7e095c2f884dc8f9127ce89f0565a9327b40887033794fda66\nSat Nov 29 07:44:06 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c (b74b66a7dd233a7e095c2f884dc8f9127ce89f0565a9327b40887033794fda66)\nb74b66a7dd233a7e095c2f884dc8f9127ce89f0565a9327b40887033794fda66\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:44:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:09.385 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2f82d4-061e-46b0-964d-c4a317f5df91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:44:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:09.386 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c117dd1-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:44:09 np0005539564 nova_compute[226295]: 2025-11-29 07:44:09.388 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:09 np0005539564 kernel: tap6c117dd1-50: left promiscuous mode
Nov 29 02:44:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:09.393 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bb775f03-3851-4169-926d-4b88192b6fc3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:44:09 np0005539564 nova_compute[226295]: 2025-11-29 07:44:09.424 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:09.426 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a008fb6e-18a2-4d2c-89f6-51b6bba2408b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:44:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:09.427 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6d6411f8-17e6-4959-b4e6-b1bd7396d614]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:44:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:09.446 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e6ed13da-83c4-4348-b88e-03f22d03eff7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516625, 'reachable_time': 28823, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 231733, 'error': None, 'target': 'ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:44:09 np0005539564 systemd[1]: run-netns-ovnmeta\x2d6c117dd1\x2d5064\x2d4e69\x2db07c\x2dc93c3d729d3c.mount: Deactivated successfully.
Nov 29 02:44:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:09.456 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:44:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:09.457 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[e1a9f2a3-b139-45bc-a412-b47bb6250622]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:44:09 np0005539564 nova_compute[226295]: 2025-11-29 07:44:09.539 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:10 np0005539564 nova_compute[226295]: 2025-11-29 07:44:10.120 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:10.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:11.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:11 np0005539564 podman[231740]: 2025-11-29 07:44:11.546086798 +0000 UTC m=+0.095582292 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 02:44:11 np0005539564 podman[231741]: 2025-11-29 07:44:11.563681776 +0000 UTC m=+0.101876344 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:44:11 np0005539564 podman[231739]: 2025-11-29 07:44:11.581277423 +0000 UTC m=+0.132645147 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller)
Nov 29 02:44:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:12.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:13.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:14 np0005539564 nova_compute[226295]: 2025-11-29 07:44:14.564 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:14.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:15.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:15 np0005539564 nova_compute[226295]: 2025-11-29 07:44:15.122 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:16.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:17.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:18 np0005539564 nova_compute[226295]: 2025-11-29 07:44:18.026 226310 DEBUG oslo_concurrency.lockutils [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] Acquiring lock "980ff2fe-9165-4009-82a9-3c3b2055f29a" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:18 np0005539564 nova_compute[226295]: 2025-11-29 07:44:18.026 226310 DEBUG oslo_concurrency.lockutils [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] Lock "980ff2fe-9165-4009-82a9-3c3b2055f29a" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:18 np0005539564 nova_compute[226295]: 2025-11-29 07:44:18.048 226310 DEBUG nova.objects.instance [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] Lazy-loading 'flavor' on Instance uuid 980ff2fe-9165-4009-82a9-3c3b2055f29a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:44:18 np0005539564 nova_compute[226295]: 2025-11-29 07:44:18.164 226310 DEBUG oslo_concurrency.lockutils [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] Lock "980ff2fe-9165-4009-82a9-3c3b2055f29a" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:44:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:18.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:18 np0005539564 nova_compute[226295]: 2025-11-29 07:44:18.698 226310 DEBUG oslo_concurrency.lockutils [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] Acquiring lock "980ff2fe-9165-4009-82a9-3c3b2055f29a" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:18 np0005539564 nova_compute[226295]: 2025-11-29 07:44:18.699 226310 DEBUG oslo_concurrency.lockutils [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] Lock "980ff2fe-9165-4009-82a9-3c3b2055f29a" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:18 np0005539564 nova_compute[226295]: 2025-11-29 07:44:18.700 226310 INFO nova.compute.manager [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Attaching volume d78e902f-7aa0-49b0-b066-e2f250c5b234 to /dev/vdb#033[00m
Nov 29 02:44:19 np0005539564 nova_compute[226295]: 2025-11-29 07:44:19.066 226310 DEBUG os_brick.utils [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 02:44:19 np0005539564 nova_compute[226295]: 2025-11-29 07:44:19.067 226310 INFO oslo.privsep.daemon [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmp4mg8zxne/privsep.sock']#033[00m
Nov 29 02:44:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:19.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:19 np0005539564 nova_compute[226295]: 2025-11-29 07:44:19.567 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:19 np0005539564 nova_compute[226295]: 2025-11-29 07:44:19.792 226310 INFO oslo.privsep.daemon [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 29 02:44:19 np0005539564 nova_compute[226295]: 2025-11-29 07:44:19.656 231810 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 29 02:44:19 np0005539564 nova_compute[226295]: 2025-11-29 07:44:19.660 231810 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 29 02:44:19 np0005539564 nova_compute[226295]: 2025-11-29 07:44:19.663 231810 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Nov 29 02:44:19 np0005539564 nova_compute[226295]: 2025-11-29 07:44:19.663 231810 INFO oslo.privsep.daemon [-] privsep daemon running as pid 231810#033[00m
Nov 29 02:44:19 np0005539564 nova_compute[226295]: 2025-11-29 07:44:19.800 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[0ac1d960-a1b1-4415-8635-8e7c91213b3c]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:44:19 np0005539564 nova_compute[226295]: 2025-11-29 07:44:19.901 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:44:19 np0005539564 nova_compute[226295]: 2025-11-29 07:44:19.921 231810 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:44:19 np0005539564 nova_compute[226295]: 2025-11-29 07:44:19.922 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[e39ed86e-3fb2-4152-920b-54de54349915]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:44:19 np0005539564 nova_compute[226295]: 2025-11-29 07:44:19.924 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:44:19 np0005539564 nova_compute[226295]: 2025-11-29 07:44:19.939 231810 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:44:19 np0005539564 nova_compute[226295]: 2025-11-29 07:44:19.942 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[dbbc7096-f135-4128-a2ae-f8646e08c91f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d3b4384eec44', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:44:19 np0005539564 nova_compute[226295]: 2025-11-29 07:44:19.945 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:44:19 np0005539564 nova_compute[226295]: 2025-11-29 07:44:19.960 231810 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:44:19 np0005539564 nova_compute[226295]: 2025-11-29 07:44:19.961 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[c8e8b878-8ea4-41b8-ad23-df83af2ea73a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:44:19 np0005539564 nova_compute[226295]: 2025-11-29 07:44:19.964 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[8d46bcd2-acce-4a5d-9ff9-49e0910e305f]: (4, '2e858761-3292-4a17-b38f-a169c3064289') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:44:19 np0005539564 nova_compute[226295]: 2025-11-29 07:44:19.965 226310 DEBUG oslo_concurrency.processutils [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:44:19 np0005539564 nova_compute[226295]: 2025-11-29 07:44:19.994 226310 DEBUG oslo_concurrency.processutils [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:44:20 np0005539564 nova_compute[226295]: 2025-11-29 07:44:20.001 226310 DEBUG os_brick.initiator.connectors.lightos [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 02:44:20 np0005539564 nova_compute[226295]: 2025-11-29 07:44:20.003 226310 DEBUG os_brick.initiator.connectors.lightos [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 02:44:20 np0005539564 nova_compute[226295]: 2025-11-29 07:44:20.003 226310 DEBUG os_brick.initiator.connectors.lightos [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 02:44:20 np0005539564 nova_compute[226295]: 2025-11-29 07:44:20.004 226310 DEBUG os_brick.utils [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] <== get_connector_properties: return (937ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d3b4384eec44', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2e858761-3292-4a17-b38f-a169c3064289', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 02:44:20 np0005539564 nova_compute[226295]: 2025-11-29 07:44:20.005 226310 DEBUG nova.virt.block_device [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Updating existing volume attachment record: e54f4576-f0c3-43f6-97dd-4d95fdad8851 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 02:44:20 np0005539564 nova_compute[226295]: 2025-11-29 07:44:20.084 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402245.082729, 5068c873-ee82-4faa-a05b-3df3ed25d792 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:44:20 np0005539564 nova_compute[226295]: 2025-11-29 07:44:20.085 226310 INFO nova.compute.manager [-] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:44:20 np0005539564 nova_compute[226295]: 2025-11-29 07:44:20.124 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:20 np0005539564 nova_compute[226295]: 2025-11-29 07:44:20.135 226310 DEBUG nova.compute.manager [None req-e0544339-cf29-4a95-8e94-2d1c4e83b7c9 - - - - - -] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:44:20 np0005539564 nova_compute[226295]: 2025-11-29 07:44:20.141 226310 DEBUG nova.compute.manager [None req-e0544339-cf29-4a95-8e94-2d1c4e83b7c9 - - - - - -] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:44:20 np0005539564 nova_compute[226295]: 2025-11-29 07:44:20.200 226310 INFO nova.compute.manager [None req-e0544339-cf29-4a95-8e94-2d1c4e83b7c9 - - - - - -] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Nov 29 02:44:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:20.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:21.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:21 np0005539564 nova_compute[226295]: 2025-11-29 07:44:21.373 226310 DEBUG oslo_concurrency.lockutils [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:21 np0005539564 nova_compute[226295]: 2025-11-29 07:44:21.374 226310 DEBUG oslo_concurrency.lockutils [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:21 np0005539564 nova_compute[226295]: 2025-11-29 07:44:21.375 226310 DEBUG oslo_concurrency.lockutils [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:44:21 np0005539564 nova_compute[226295]: 2025-11-29 07:44:21.388 226310 DEBUG nova.objects.instance [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] Lazy-loading 'flavor' on Instance uuid 980ff2fe-9165-4009-82a9-3c3b2055f29a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:44:21 np0005539564 nova_compute[226295]: 2025-11-29 07:44:21.416 226310 DEBUG nova.virt.libvirt.driver [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Attempting to attach volume d78e902f-7aa0-49b0-b066-e2f250c5b234 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 02:44:21 np0005539564 nova_compute[226295]: 2025-11-29 07:44:21.421 226310 DEBUG nova.virt.libvirt.guest [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 02:44:21 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 02:44:21 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-d78e902f-7aa0-49b0-b066-e2f250c5b234">
Nov 29 02:44:21 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 02:44:21 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 02:44:21 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 02:44:21 np0005539564 nova_compute[226295]:  </source>
Nov 29 02:44:21 np0005539564 nova_compute[226295]:  <auth username="openstack">
Nov 29 02:44:21 np0005539564 nova_compute[226295]:    <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:44:21 np0005539564 nova_compute[226295]:  </auth>
Nov 29 02:44:21 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 02:44:21 np0005539564 nova_compute[226295]:  <serial>d78e902f-7aa0-49b0-b066-e2f250c5b234</serial>
Nov 29 02:44:21 np0005539564 nova_compute[226295]: </disk>
Nov 29 02:44:21 np0005539564 nova_compute[226295]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 02:44:21 np0005539564 nova_compute[226295]: 2025-11-29 07:44:21.657 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:22 np0005539564 nova_compute[226295]: 2025-11-29 07:44:22.585 226310 DEBUG nova.virt.libvirt.driver [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:44:22 np0005539564 nova_compute[226295]: 2025-11-29 07:44:22.587 226310 DEBUG nova.virt.libvirt.driver [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:44:22 np0005539564 nova_compute[226295]: 2025-11-29 07:44:22.587 226310 DEBUG nova.virt.libvirt.driver [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:44:22 np0005539564 nova_compute[226295]: 2025-11-29 07:44:22.587 226310 DEBUG nova.virt.libvirt.driver [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] No VIF found with MAC fa:16:3e:4c:df:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:44:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:22.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:23 np0005539564 nova_compute[226295]: 2025-11-29 07:44:23.062 226310 DEBUG oslo_concurrency.lockutils [None req-7da2cbff-62d6-4d7c-a54e-41bca7efa781 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] Lock "980ff2fe-9165-4009-82a9-3c3b2055f29a" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:44:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:23.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.560 226310 DEBUG nova.virt.libvirt.driver [None req-b1a54b2a-9e07-4a84-b2b0-3df81d070888 7255b112d5ae414eab3cab9e721acee3 275da93235744a9ba0911bf4f1bb1ec7 - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] volume_snapshot_create: create_info: {'snapshot_id': '5080e7e5-abb8-46e0-a7d8-d32224958c97', 'type': 'qcow2', 'new_file': 'new_file'} volume_snapshot_create /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:3572#033[00m
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.567 226310 ERROR nova.virt.libvirt.driver [None req-b1a54b2a-9e07-4a84-b2b0-3df81d070888 7255b112d5ae414eab3cab9e721acee3 275da93235744a9ba0911bf4f1bb1ec7 - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Error occurred during volume_snapshot_create, sending error status to Cinder.: nova.exception.InternalError: Found no disk to snapshot.
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.567 226310 ERROR nova.virt.libvirt.driver [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Traceback (most recent call last):
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.567 226310 ERROR nova.virt.libvirt.driver [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3590, in volume_snapshot_create
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.567 226310 ERROR nova.virt.libvirt.driver [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a]     self._volume_snapshot_create(context, instance, guest,
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.567 226310 ERROR nova.virt.libvirt.driver [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3477, in _volume_snapshot_create
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.567 226310 ERROR nova.virt.libvirt.driver [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a]     raise exception.InternalError(msg)
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.567 226310 ERROR nova.virt.libvirt.driver [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] nova.exception.InternalError: Found no disk to snapshot.
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.567 226310 ERROR nova.virt.libvirt.driver [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] #033[00m
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.576 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:24.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver [None req-b1a54b2a-9e07-4a84-b2b0-3df81d070888 7255b112d5ae414eab3cab9e721acee3 275da93235744a9ba0911bf4f1bb1ec7 - - default default] Failed to send updated snapshot status to volume service.: nova.exception.SnapshotNotFound: Snapshot 5080e7e5-abb8-46e0-a7d8-d32224958c97 could not be found.
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3590, in volume_snapshot_create
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver     self._volume_snapshot_create(context, instance, guest,
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3477, in _volume_snapshot_create
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver     raise exception.InternalError(msg)
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver nova.exception.InternalError: Found no disk to snapshot.
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver 
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver 
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver cinderclient.exceptions.NotFound: Snapshot 5080e7e5-abb8-46e0-a7d8-d32224958c97 could not be found. (HTTP 404) (Request-ID: req-4b707df9-912d-41a5-97cf-ffd8dded6910)
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver 
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver 
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3412, in _volume_snapshot_update_status
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver     self._volume_api.update_snapshot_status(context,
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 397, in wrapper
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver     res = method(self, ctx, *args, **kwargs)
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 468, in wrapper
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver     _reraise(exception.SnapshotNotFound(snapshot_id=snapshot_id))
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 488, in _reraise
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver     raise desired_exc.with_traceback(sys.exc_info()[2])
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver nova.exception.SnapshotNotFound: Snapshot 5080e7e5-abb8-46e0-a7d8-d32224958c97 could not be found.
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.768 226310 ERROR nova.virt.libvirt.driver #033[00m
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server [None req-b1a54b2a-9e07-4a84-b2b0-3df81d070888 7255b112d5ae414eab3cab9e721acee3 275da93235744a9ba0911bf4f1bb1ec7 - - default default] Exception during message handling: nova.exception.InternalError: Found no disk to snapshot.
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server     return func(*args, **kwargs)
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server     self.force_reraise()
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server     raise self.value
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 4410, in volume_snapshot_create
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server     self.driver.volume_snapshot_create(context, instance, volume_id,
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3597, in volume_snapshot_create
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server     self._volume_snapshot_update_status(
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server     self.force_reraise()
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server     raise self.value
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3590, in volume_snapshot_create
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server     self._volume_snapshot_create(context, instance, guest,
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3477, in _volume_snapshot_create
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server     raise exception.InternalError(msg)
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server nova.exception.InternalError: Found no disk to snapshot.
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.774 226310 ERROR oslo_messaging.rpc.server #033[00m
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.783 226310 DEBUG nova.virt.libvirt.driver [None req-d48c4a3d-6184-4107-b015-d21ad572576e 7255b112d5ae414eab3cab9e721acee3 275da93235744a9ba0911bf4f1bb1ec7 - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] volume_snapshot_delete: delete_info: {'volume_id': 'd78e902f-7aa0-49b0-b066-e2f250c5b234'} _volume_snapshot_delete /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:3673#033[00m
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.784 226310 ERROR nova.virt.libvirt.driver [None req-d48c4a3d-6184-4107-b015-d21ad572576e 7255b112d5ae414eab3cab9e721acee3 275da93235744a9ba0911bf4f1bb1ec7 - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Error occurred during volume_snapshot_delete, sending error status to Cinder.: KeyError: 'type'
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.784 226310 ERROR nova.virt.libvirt.driver [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Traceback (most recent call last):
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.784 226310 ERROR nova.virt.libvirt.driver [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3846, in volume_snapshot_delete
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.784 226310 ERROR nova.virt.libvirt.driver [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a]     self._volume_snapshot_delete(context, instance, volume_id,
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.784 226310 ERROR nova.virt.libvirt.driver [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3676, in _volume_snapshot_delete
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.784 226310 ERROR nova.virt.libvirt.driver [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a]     if delete_info['type'] != 'qcow2':
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.784 226310 ERROR nova.virt.libvirt.driver [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] KeyError: 'type'
Nov 29 02:44:24 np0005539564 nova_compute[226295]: 2025-11-29 07:44:24.784 226310 ERROR nova.virt.libvirt.driver [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] 
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver [None req-d48c4a3d-6184-4107-b015-d21ad572576e 7255b112d5ae414eab3cab9e721acee3 275da93235744a9ba0911bf4f1bb1ec7 - - default default] Failed to send updated snapshot status to volume service.: nova.exception.SnapshotNotFound: Snapshot None could not be found.
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3846, in volume_snapshot_delete
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver     self._volume_snapshot_delete(context, instance, volume_id,
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3676, in _volume_snapshot_delete
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver     if delete_info['type'] != 'qcow2':
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver KeyError: 'type'
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver 
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver 
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver cinderclient.exceptions.NotFound: Snapshot None could not be found. (HTTP 404) (Request-ID: req-bfc3800c-4496-4ca2-a6c2-7c5fcf8d6c2a)
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver 
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver 
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3412, in _volume_snapshot_update_status
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver     self._volume_api.update_snapshot_status(context,
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 397, in wrapper
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver     res = method(self, ctx, *args, **kwargs)
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 468, in wrapper
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver     _reraise(exception.SnapshotNotFound(snapshot_id=snapshot_id))
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 488, in _reraise
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver     raise desired_exc.with_traceback(sys.exc_info()[2])
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver nova.exception.SnapshotNotFound: Snapshot None could not be found.
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.035 226310 ERROR nova.virt.libvirt.driver 
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server [None req-d48c4a3d-6184-4107-b015-d21ad572576e 7255b112d5ae414eab3cab9e721acee3 275da93235744a9ba0911bf4f1bb1ec7 - - default default] Exception during message handling: KeyError: 'type'
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server     return func(*args, **kwargs)
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server     self.force_reraise()
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server     raise self.value
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 4422, in volume_snapshot_delete
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server     self.driver.volume_snapshot_delete(context, instance, volume_id,
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3853, in volume_snapshot_delete
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server     self._volume_snapshot_update_status(
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server     self.force_reraise()
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server     raise self.value
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3846, in volume_snapshot_delete
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server     self._volume_snapshot_delete(context, instance, volume_id,
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3676, in _volume_snapshot_delete
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server     if delete_info['type'] != 'qcow2':
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server KeyError: 'type'
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.038 226310 ERROR oslo_messaging.rpc.server 
Nov 29 02:44:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:25.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.166 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.602 226310 DEBUG oslo_concurrency.lockutils [None req-ef612bbe-e435-4317-8775-520a366dc554 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] Acquiring lock "980ff2fe-9165-4009-82a9-3c3b2055f29a" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.602 226310 DEBUG oslo_concurrency.lockutils [None req-ef612bbe-e435-4317-8775-520a366dc554 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] Lock "980ff2fe-9165-4009-82a9-3c3b2055f29a" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.634 226310 INFO nova.compute.manager [None req-ef612bbe-e435-4317-8775-520a366dc554 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Detaching volume d78e902f-7aa0-49b0-b066-e2f250c5b234#033[00m
Nov 29 02:44:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.872 226310 INFO nova.virt.block_device [None req-ef612bbe-e435-4317-8775-520a366dc554 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Attempting to driver detach volume d78e902f-7aa0-49b0-b066-e2f250c5b234 from mountpoint /dev/vdb#033[00m
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.884 226310 DEBUG nova.virt.libvirt.driver [None req-ef612bbe-e435-4317-8775-520a366dc554 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] Attempting to detach device vdb from instance 980ff2fe-9165-4009-82a9-3c3b2055f29a from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 02:44:25 np0005539564 nova_compute[226295]: 2025-11-29 07:44:25.885 226310 DEBUG nova.virt.libvirt.guest [None req-ef612bbe-e435-4317-8775-520a366dc554 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 02:44:25 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 02:44:25 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-d78e902f-7aa0-49b0-b066-e2f250c5b234">
Nov 29 02:44:25 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 02:44:25 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 02:44:25 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 02:44:25 np0005539564 nova_compute[226295]:  </source>
Nov 29 02:44:25 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 02:44:25 np0005539564 nova_compute[226295]:  <serial>d78e902f-7aa0-49b0-b066-e2f250c5b234</serial>
Nov 29 02:44:25 np0005539564 nova_compute[226295]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 02:44:25 np0005539564 nova_compute[226295]: </disk>
Nov 29 02:44:25 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 02:44:26 np0005539564 nova_compute[226295]: 2025-11-29 07:44:26.156 226310 INFO nova.virt.libvirt.driver [None req-ef612bbe-e435-4317-8775-520a366dc554 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] Successfully detached device vdb from instance 980ff2fe-9165-4009-82a9-3c3b2055f29a from the persistent domain config.#033[00m
Nov 29 02:44:26 np0005539564 nova_compute[226295]: 2025-11-29 07:44:26.158 226310 DEBUG nova.virt.libvirt.driver [None req-ef612bbe-e435-4317-8775-520a366dc554 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 980ff2fe-9165-4009-82a9-3c3b2055f29a from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 02:44:26 np0005539564 nova_compute[226295]: 2025-11-29 07:44:26.159 226310 DEBUG nova.virt.libvirt.guest [None req-ef612bbe-e435-4317-8775-520a366dc554 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 02:44:26 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 02:44:26 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-d78e902f-7aa0-49b0-b066-e2f250c5b234">
Nov 29 02:44:26 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 02:44:26 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 02:44:26 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 02:44:26 np0005539564 nova_compute[226295]:  </source>
Nov 29 02:44:26 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 02:44:26 np0005539564 nova_compute[226295]:  <serial>d78e902f-7aa0-49b0-b066-e2f250c5b234</serial>
Nov 29 02:44:26 np0005539564 nova_compute[226295]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 02:44:26 np0005539564 nova_compute[226295]: </disk>
Nov 29 02:44:26 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 02:44:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:26.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:27.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:27 np0005539564 nova_compute[226295]: 2025-11-29 07:44:27.372 226310 DEBUG nova.virt.libvirt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Received event <DeviceRemovedEvent: 1764402267.371809, 980ff2fe-9165-4009-82a9-3c3b2055f29a => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 02:44:27 np0005539564 nova_compute[226295]: 2025-11-29 07:44:27.375 226310 DEBUG nova.virt.libvirt.driver [None req-ef612bbe-e435-4317-8775-520a366dc554 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 980ff2fe-9165-4009-82a9-3c3b2055f29a _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 02:44:27 np0005539564 nova_compute[226295]: 2025-11-29 07:44:27.379 226310 INFO nova.virt.libvirt.driver [None req-ef612bbe-e435-4317-8775-520a366dc554 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] Successfully detached device vdb from instance 980ff2fe-9165-4009-82a9-3c3b2055f29a from the live domain config.#033[00m
Nov 29 02:44:27 np0005539564 nova_compute[226295]: 2025-11-29 07:44:27.723 226310 DEBUG nova.objects.instance [None req-ef612bbe-e435-4317-8775-520a366dc554 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] Lazy-loading 'flavor' on Instance uuid 980ff2fe-9165-4009-82a9-3c3b2055f29a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:44:27 np0005539564 nova_compute[226295]: 2025-11-29 07:44:27.779 226310 DEBUG oslo_concurrency.lockutils [None req-ef612bbe-e435-4317-8775-520a366dc554 35709e8ae21145ea8f20a71bfac799e7 e5fa0a96b3534909a4900e2989849968 - - default default] Lock "980ff2fe-9165-4009-82a9-3c3b2055f29a" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 2.177s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:44:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:44:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3747782361' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:44:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:44:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3747782361' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:44:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:28.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:28 np0005539564 nova_compute[226295]: 2025-11-29 07:44:28.692 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:29.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:29 np0005539564 nova_compute[226295]: 2025-11-29 07:44:29.571 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:30 np0005539564 nova_compute[226295]: 2025-11-29 07:44:30.170 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:30.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:31.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:32 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:44:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:32.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:32 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for submit_transact, latency = 5.010615349s
Nov 29 02:44:32 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for throttle_transact, latency = 5.010087013s
Nov 29 02:44:32 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for submit_transact, latency = 5.066989422s
Nov 29 02:44:32 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for throttle_transact, latency = 5.066422939s
Nov 29 02:44:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:33.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:33 np0005539564 nova_compute[226295]: 2025-11-29 07:44:33.569 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:44:33 np0005539564 nova_compute[226295]: 2025-11-29 07:44:33.570 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:44:33 np0005539564 nova_compute[226295]: 2025-11-29 07:44:33.599 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:44:33 np0005539564 nova_compute[226295]: 2025-11-29 07:44:33.599 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:44:33 np0005539564 nova_compute[226295]: 2025-11-29 07:44:33.910 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-980ff2fe-9165-4009-82a9-3c3b2055f29a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:44:33 np0005539564 nova_compute[226295]: 2025-11-29 07:44:33.910 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-980ff2fe-9165-4009-82a9-3c3b2055f29a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:44:33 np0005539564 nova_compute[226295]: 2025-11-29 07:44:33.911 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:44:34 np0005539564 nova_compute[226295]: 2025-11-29 07:44:34.606 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:34.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:35.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:35 np0005539564 nova_compute[226295]: 2025-11-29 07:44:35.172 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.730553150s, txc = 0x55ba4e5d1200
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.725695133s, txc = 0x55ba4f725b00
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.707062244s, txc = 0x55ba4f728f00
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.698678493s, txc = 0x55ba50687800
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.696373940s, txc = 0x55ba4f729200
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.693175793s, txc = 0x55ba50686900
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.692754269s, txc = 0x55ba4e5a1800
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.691752434s, txc = 0x55ba5041c000
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.690634727s, txc = 0x55ba50653b00
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.687542915s, txc = 0x55ba50432000
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.686331749s, txc = 0x55ba50644300
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.685957432s, txc = 0x55ba5063e300
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.684879303s, txc = 0x55ba5063e900
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.684543133s, txc = 0x55ba4f72a900
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.684399128s, txc = 0x55ba4e531800
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.683609009s, txc = 0x55ba5041d500
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.682993889s, txc = 0x55ba503e6c00
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.681825638s, txc = 0x55ba5063f800
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.681496143s, txc = 0x55ba5065b200
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.670297146s, txc = 0x55ba4f725800
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.667950630s, txc = 0x55ba4f725500
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.665323734s, txc = 0x55ba4f7b8f00
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.664201260s, txc = 0x55ba4e60cc00
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.663696766s, txc = 0x55ba50635500
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.663367271s, txc = 0x55ba4e5a8300
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.657834530s, txc = 0x55ba5063f500
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.656543255s, txc = 0x55ba4f729500
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.656169415s, txc = 0x55ba4e5a1500
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.655892849s, txc = 0x55ba5041d800
Nov 29 02:44:35 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.655537605s, txc = 0x55ba4e60c900
Nov 29 02:44:36 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.286907196s, txc = 0x55ba5069e000
Nov 29 02:44:36 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.343419075s, txc = 0x55ba4e60c600
Nov 29 02:44:36 np0005539564 ovn_controller[130591]: 2025-11-29T07:44:36Z|00043|memory|INFO|peak resident set size grew 52% in last 1306.9 seconds, from 16256 kB to 24704 kB
Nov 29 02:44:36 np0005539564 ovn_controller[130591]: 2025-11-29T07:44:36Z|00044|memory|INFO|idl-cells-OVN_Southbound:11246 idl-cells-Open_vSwitch:870 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:404 lflow-cache-entries-cache-matches:303 lflow-cache-size-KB:1719 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:664 ofctrl_installed_flow_usage-KB:486 ofctrl_sb_flow_ref_usage-KB:250
Nov 29 02:44:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:36.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:36 np0005539564 nova_compute[226295]: 2025-11-29 07:44:36.736 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Updating instance_info_cache with network_info: [{"id": "d00ce7a8-c85b-4013-844e-d84467a05bff", "address": "fa:16:3e:4c:df:aa", "network": {"id": "1bd43cdb-189f-493f-a1a6-454e88ff3fb6", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1897220559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "422656fa169e49dcb91b5d4a8819f5ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd00ce7a8-c8", "ovs_interfaceid": "d00ce7a8-c85b-4013-844e-d84467a05bff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:44:36 np0005539564 nova_compute[226295]: 2025-11-29 07:44:36.757 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-980ff2fe-9165-4009-82a9-3c3b2055f29a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:44:36 np0005539564 nova_compute[226295]: 2025-11-29 07:44:36.758 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:44:36 np0005539564 nova_compute[226295]: 2025-11-29 07:44:36.759 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:44:36 np0005539564 nova_compute[226295]: 2025-11-29 07:44:36.759 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:44:36 np0005539564 nova_compute[226295]: 2025-11-29 07:44:36.760 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:44:36 np0005539564 nova_compute[226295]: 2025-11-29 07:44:36.760 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:44:36 np0005539564 nova_compute[226295]: 2025-11-29 07:44:36.761 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:44:36 np0005539564 nova_compute[226295]: 2025-11-29 07:44:36.761 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:44:36 np0005539564 nova_compute[226295]: 2025-11-29 07:44:36.762 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:44:36 np0005539564 nova_compute[226295]: 2025-11-29 07:44:36.762 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:44:36 np0005539564 nova_compute[226295]: 2025-11-29 07:44:36.797 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:36 np0005539564 nova_compute[226295]: 2025-11-29 07:44:36.798 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:36 np0005539564 nova_compute[226295]: 2025-11-29 07:44:36.798 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:44:36 np0005539564 nova_compute[226295]: 2025-11-29 07:44:36.798 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:44:36 np0005539564 nova_compute[226295]: 2025-11-29 07:44:36.799 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:44:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:37.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:44:37 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/484426313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:44:37 np0005539564 nova_compute[226295]: 2025-11-29 07:44:37.321 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:44:37 np0005539564 nova_compute[226295]: 2025-11-29 07:44:37.457 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:44:37 np0005539564 nova_compute[226295]: 2025-11-29 07:44:37.458 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:44:37 np0005539564 nova_compute[226295]: 2025-11-29 07:44:37.463 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:44:37 np0005539564 nova_compute[226295]: 2025-11-29 07:44:37.464 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:44:37 np0005539564 nova_compute[226295]: 2025-11-29 07:44:37.636 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:44:37 np0005539564 nova_compute[226295]: 2025-11-29 07:44:37.637 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4751MB free_disk=20.830665588378906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:44:37 np0005539564 nova_compute[226295]: 2025-11-29 07:44:37.638 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:37 np0005539564 nova_compute[226295]: 2025-11-29 07:44:37.638 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:37 np0005539564 nova_compute[226295]: 2025-11-29 07:44:37.829 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 5068c873-ee82-4faa-a05b-3df3ed25d792 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:44:37 np0005539564 nova_compute[226295]: 2025-11-29 07:44:37.830 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 980ff2fe-9165-4009-82a9-3c3b2055f29a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:44:37 np0005539564 nova_compute[226295]: 2025-11-29 07:44:37.830 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:44:37 np0005539564 nova_compute[226295]: 2025-11-29 07:44:37.831 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:44:37 np0005539564 nova_compute[226295]: 2025-11-29 07:44:37.963 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:44:38 np0005539564 nova_compute[226295]: 2025-11-29 07:44:38.377 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:44:38 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1938226935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:44:38 np0005539564 nova_compute[226295]: 2025-11-29 07:44:38.567 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:44:38 np0005539564 nova_compute[226295]: 2025-11-29 07:44:38.575 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:44:38 np0005539564 nova_compute[226295]: 2025-11-29 07:44:38.594 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:44:38 np0005539564 nova_compute[226295]: 2025-11-29 07:44:38.658 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:44:38 np0005539564 nova_compute[226295]: 2025-11-29 07:44:38.658 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:44:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:38.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:39.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:39 np0005539564 nova_compute[226295]: 2025-11-29 07:44:39.608 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:40 np0005539564 nova_compute[226295]: 2025-11-29 07:44:40.175 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:40 np0005539564 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:44:40 np0005539564 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:44:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:40.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:40 np0005539564 nova_compute[226295]: 2025-11-29 07:44:40.702 226310 INFO nova.virt.libvirt.driver [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Deleting instance files /var/lib/nova/instances/5068c873-ee82-4faa-a05b-3df3ed25d792_del#033[00m
Nov 29 02:44:40 np0005539564 nova_compute[226295]: 2025-11-29 07:44:40.703 226310 INFO nova.virt.libvirt.driver [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Deletion of /var/lib/nova/instances/5068c873-ee82-4faa-a05b-3df3ed25d792_del complete#033[00m
Nov 29 02:44:40 np0005539564 nova_compute[226295]: 2025-11-29 07:44:40.807 226310 DEBUG nova.virt.libvirt.host [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Nov 29 02:44:40 np0005539564 nova_compute[226295]: 2025-11-29 07:44:40.807 226310 INFO nova.virt.libvirt.host [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] UEFI support detected#033[00m
Nov 29 02:44:40 np0005539564 nova_compute[226295]: 2025-11-29 07:44:40.809 226310 INFO nova.compute.manager [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Took 35.96 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:44:40 np0005539564 nova_compute[226295]: 2025-11-29 07:44:40.809 226310 DEBUG oslo.service.loopingcall [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:44:40 np0005539564 nova_compute[226295]: 2025-11-29 07:44:40.810 226310 DEBUG nova.compute.manager [-] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:44:40 np0005539564 nova_compute[226295]: 2025-11-29 07:44:40.810 226310 DEBUG nova.network.neutron [-] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:44:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:41.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:42 np0005539564 podman[231888]: 2025-11-29 07:44:42.519498322 +0000 UTC m=+0.065663832 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 02:44:42 np0005539564 podman[231890]: 2025-11-29 07:44:42.520385206 +0000 UTC m=+0.063077361 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:44:42 np0005539564 podman[231887]: 2025-11-29 07:44:42.536721119 +0000 UTC m=+0.091497372 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 02:44:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:42.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:43.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:43 np0005539564 nova_compute[226295]: 2025-11-29 07:44:43.176 226310 DEBUG nova.network.neutron [-] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:44:43 np0005539564 nova_compute[226295]: 2025-11-29 07:44:43.256 226310 INFO nova.compute.manager [-] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Took 2.45 seconds to deallocate network for instance.#033[00m
Nov 29 02:44:43 np0005539564 nova_compute[226295]: 2025-11-29 07:44:43.414 226310 DEBUG oslo_concurrency.lockutils [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:43 np0005539564 nova_compute[226295]: 2025-11-29 07:44:43.415 226310 DEBUG oslo_concurrency.lockutils [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:43 np0005539564 nova_compute[226295]: 2025-11-29 07:44:43.592 226310 DEBUG nova.compute.manager [req-8260f9ef-55a5-4199-a736-8cfcafcd765e req-ca759377-77a9-4b64-9c74-0e19bb4604cd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5068c873-ee82-4faa-a05b-3df3ed25d792] Received event network-vif-deleted-bb62692a-9e57-4f29-a570-df3272a8f05c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:44:43 np0005539564 nova_compute[226295]: 2025-11-29 07:44:43.676 226310 DEBUG oslo_concurrency.processutils [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:44:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:44:44 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2077925806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:44:44 np0005539564 nova_compute[226295]: 2025-11-29 07:44:44.129 226310 DEBUG oslo_concurrency.processutils [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:44:44 np0005539564 nova_compute[226295]: 2025-11-29 07:44:44.134 226310 DEBUG nova.compute.provider_tree [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:44:44 np0005539564 nova_compute[226295]: 2025-11-29 07:44:44.160 226310 DEBUG nova.scheduler.client.report [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:44:44 np0005539564 nova_compute[226295]: 2025-11-29 07:44:44.190 226310 DEBUG oslo_concurrency.lockutils [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:44:44 np0005539564 nova_compute[226295]: 2025-11-29 07:44:44.260 226310 INFO nova.scheduler.client.report [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Deleted allocations for instance 5068c873-ee82-4faa-a05b-3df3ed25d792#033[00m
Nov 29 02:44:44 np0005539564 nova_compute[226295]: 2025-11-29 07:44:44.511 226310 DEBUG oslo_concurrency.lockutils [None req-a3bdd114-d9d7-4f19-a832-5569e5a7ebf3 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "5068c873-ee82-4faa-a05b-3df3ed25d792" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 39.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:44:44 np0005539564 nova_compute[226295]: 2025-11-29 07:44:44.610 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:44.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:45.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:45 np0005539564 nova_compute[226295]: 2025-11-29 07:44:45.177 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:46.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:47.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:47 np0005539564 nova_compute[226295]: 2025-11-29 07:44:47.244 226310 DEBUG oslo_concurrency.lockutils [None req-231877b3-4a6b-4228-b249-afaf4ed3e851 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Acquiring lock "980ff2fe-9165-4009-82a9-3c3b2055f29a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:47 np0005539564 nova_compute[226295]: 2025-11-29 07:44:47.245 226310 DEBUG oslo_concurrency.lockutils [None req-231877b3-4a6b-4228-b249-afaf4ed3e851 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Lock "980ff2fe-9165-4009-82a9-3c3b2055f29a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:47 np0005539564 nova_compute[226295]: 2025-11-29 07:44:47.245 226310 DEBUG oslo_concurrency.lockutils [None req-231877b3-4a6b-4228-b249-afaf4ed3e851 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Acquiring lock "980ff2fe-9165-4009-82a9-3c3b2055f29a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:47 np0005539564 nova_compute[226295]: 2025-11-29 07:44:47.245 226310 DEBUG oslo_concurrency.lockutils [None req-231877b3-4a6b-4228-b249-afaf4ed3e851 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Lock "980ff2fe-9165-4009-82a9-3c3b2055f29a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:47 np0005539564 nova_compute[226295]: 2025-11-29 07:44:47.246 226310 DEBUG oslo_concurrency.lockutils [None req-231877b3-4a6b-4228-b249-afaf4ed3e851 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Lock "980ff2fe-9165-4009-82a9-3c3b2055f29a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:44:47 np0005539564 nova_compute[226295]: 2025-11-29 07:44:47.247 226310 INFO nova.compute.manager [None req-231877b3-4a6b-4228-b249-afaf4ed3e851 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Terminating instance#033[00m
Nov 29 02:44:47 np0005539564 nova_compute[226295]: 2025-11-29 07:44:47.248 226310 DEBUG nova.compute.manager [None req-231877b3-4a6b-4228-b249-afaf4ed3e851 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:44:47 np0005539564 kernel: tapd00ce7a8-c8 (unregistering): left promiscuous mode
Nov 29 02:44:47 np0005539564 NetworkManager[48997]: <info>  [1764402287.7368] device (tapd00ce7a8-c8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:44:47 np0005539564 ovn_controller[130591]: 2025-11-29T07:44:47Z|00045|binding|INFO|Releasing lport d00ce7a8-c85b-4013-844e-d84467a05bff from this chassis (sb_readonly=0)
Nov 29 02:44:47 np0005539564 ovn_controller[130591]: 2025-11-29T07:44:47Z|00046|binding|INFO|Setting lport d00ce7a8-c85b-4013-844e-d84467a05bff down in Southbound
Nov 29 02:44:47 np0005539564 ovn_controller[130591]: 2025-11-29T07:44:47Z|00047|binding|INFO|Removing iface tapd00ce7a8-c8 ovn-installed in OVS
Nov 29 02:44:47 np0005539564 nova_compute[226295]: 2025-11-29 07:44:47.744 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:47.750 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:df:aa 10.100.0.13'], port_security=['fa:16:3e:4c:df:aa 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '980ff2fe-9165-4009-82a9-3c3b2055f29a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1bd43cdb-189f-493f-a1a6-454e88ff3fb6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '422656fa169e49dcb91b5d4a8819f5ff', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0766cb7d-04a9-440f-bd3d-d25f33d3e578', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.238'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67a8fc4d-6367-4b5b-8259-468538126128, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=d00ce7a8-c85b-4013-844e-d84467a05bff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:44:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:47.751 139780 INFO neutron.agent.ovn.metadata.agent [-] Port d00ce7a8-c85b-4013-844e-d84467a05bff in datapath 1bd43cdb-189f-493f-a1a6-454e88ff3fb6 unbound from our chassis#033[00m
Nov 29 02:44:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:47.752 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1bd43cdb-189f-493f-a1a6-454e88ff3fb6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:44:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:47.754 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c4b97bef-6290-41fd-90ce-7bcaf213f904]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:44:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:47.755 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1bd43cdb-189f-493f-a1a6-454e88ff3fb6 namespace which is not needed anymore#033[00m
Nov 29 02:44:47 np0005539564 nova_compute[226295]: 2025-11-29 07:44:47.765 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:47 np0005539564 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000006.scope: Deactivated successfully.
Nov 29 02:44:47 np0005539564 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000006.scope: Consumed 18.932s CPU time.
Nov 29 02:44:47 np0005539564 systemd-machined[190128]: Machine qemu-2-instance-00000006 terminated.
Nov 29 02:44:47 np0005539564 nova_compute[226295]: 2025-11-29 07:44:47.886 226310 INFO nova.virt.libvirt.driver [-] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Instance destroyed successfully.#033[00m
Nov 29 02:44:47 np0005539564 nova_compute[226295]: 2025-11-29 07:44:47.886 226310 DEBUG nova.objects.instance [None req-231877b3-4a6b-4228-b249-afaf4ed3e851 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Lazy-loading 'resources' on Instance uuid 980ff2fe-9165-4009-82a9-3c3b2055f29a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:44:47 np0005539564 nova_compute[226295]: 2025-11-29 07:44:47.914 226310 DEBUG nova.virt.libvirt.vif [None req-231877b3-4a6b-4228-b249-afaf4ed3e851 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:42:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesAssistedSnapshotsTest-server-162883616',display_name='tempest-VolumesAssistedSnapshotsTest-server-162883616',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-volumesassistedsnapshotstest-server-162883616',id=6,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL/EBCCQsqBwozvSx/5HwAAjIs444VIg4wRd07RCNsB+iYI7M7vypwpfTHRW7mBe8j4u3HfnXjFueKGbu88UbhEtiS8vm73ejQqPJwakBU5jmh9B+oxfN06CQAsz70oQNw==',key_name='tempest-keypair-2144013158',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:43:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='422656fa169e49dcb91b5d4a8819f5ff',ramdisk_id='',reservation_id='r-pkt050z4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesAssistedSnapshotsTest-1468079842',owner_user_name='tempest-VolumesAssistedSnapshotsTest-1468079842-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:43:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='197276e850054352a94a54ad4b0274be',uuid=980ff2fe-9165-4009-82a9-3c3b2055f29a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d00ce7a8-c85b-4013-844e-d84467a05bff", "address": "fa:16:3e:4c:df:aa", "network": {"id": "1bd43cdb-189f-493f-a1a6-454e88ff3fb6", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1897220559-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "422656fa169e49dcb91b5d4a8819f5ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd00ce7a8-c8", "ovs_interfaceid": "d00ce7a8-c85b-4013-844e-d84467a05bff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:44:47 np0005539564 nova_compute[226295]: 2025-11-29 07:44:47.915 226310 DEBUG nova.network.os_vif_util [None req-231877b3-4a6b-4228-b249-afaf4ed3e851 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Converting VIF {"id": "d00ce7a8-c85b-4013-844e-d84467a05bff", "address": "fa:16:3e:4c:df:aa", "network": {"id": "1bd43cdb-189f-493f-a1a6-454e88ff3fb6", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1897220559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "422656fa169e49dcb91b5d4a8819f5ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd00ce7a8-c8", "ovs_interfaceid": "d00ce7a8-c85b-4013-844e-d84467a05bff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:44:47 np0005539564 nova_compute[226295]: 2025-11-29 07:44:47.916 226310 DEBUG nova.network.os_vif_util [None req-231877b3-4a6b-4228-b249-afaf4ed3e851 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4c:df:aa,bridge_name='br-int',has_traffic_filtering=True,id=d00ce7a8-c85b-4013-844e-d84467a05bff,network=Network(1bd43cdb-189f-493f-a1a6-454e88ff3fb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd00ce7a8-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:44:47 np0005539564 nova_compute[226295]: 2025-11-29 07:44:47.916 226310 DEBUG os_vif [None req-231877b3-4a6b-4228-b249-afaf4ed3e851 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4c:df:aa,bridge_name='br-int',has_traffic_filtering=True,id=d00ce7a8-c85b-4013-844e-d84467a05bff,network=Network(1bd43cdb-189f-493f-a1a6-454e88ff3fb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd00ce7a8-c8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:44:47 np0005539564 nova_compute[226295]: 2025-11-29 07:44:47.918 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:47 np0005539564 nova_compute[226295]: 2025-11-29 07:44:47.918 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd00ce7a8-c8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:44:47 np0005539564 nova_compute[226295]: 2025-11-29 07:44:47.920 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:47 np0005539564 nova_compute[226295]: 2025-11-29 07:44:47.923 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:44:47 np0005539564 nova_compute[226295]: 2025-11-29 07:44:47.925 226310 INFO os_vif [None req-231877b3-4a6b-4228-b249-afaf4ed3e851 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4c:df:aa,bridge_name='br-int',has_traffic_filtering=True,id=d00ce7a8-c85b-4013-844e-d84467a05bff,network=Network(1bd43cdb-189f-493f-a1a6-454e88ff3fb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd00ce7a8-c8')#033[00m
Nov 29 02:44:48 np0005539564 neutron-haproxy-ovnmeta-1bd43cdb-189f-493f-a1a6-454e88ff3fb6[231460]: [NOTICE]   (231464) : haproxy version is 2.8.14-c23fe91
Nov 29 02:44:48 np0005539564 neutron-haproxy-ovnmeta-1bd43cdb-189f-493f-a1a6-454e88ff3fb6[231460]: [NOTICE]   (231464) : path to executable is /usr/sbin/haproxy
Nov 29 02:44:48 np0005539564 neutron-haproxy-ovnmeta-1bd43cdb-189f-493f-a1a6-454e88ff3fb6[231460]: [WARNING]  (231464) : Exiting Master process...
Nov 29 02:44:48 np0005539564 neutron-haproxy-ovnmeta-1bd43cdb-189f-493f-a1a6-454e88ff3fb6[231460]: [ALERT]    (231464) : Current worker (231466) exited with code 143 (Terminated)
Nov 29 02:44:48 np0005539564 neutron-haproxy-ovnmeta-1bd43cdb-189f-493f-a1a6-454e88ff3fb6[231460]: [WARNING]  (231464) : All workers exited. Exiting... (0)
Nov 29 02:44:48 np0005539564 systemd[1]: libpod-5c9ee03667982f1313b18439784671316c67c086b6d9721a1a98ff99f487578f.scope: Deactivated successfully.
Nov 29 02:44:48 np0005539564 conmon[231460]: conmon 5c9ee03667982f1313b1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c9ee03667982f1313b18439784671316c67c086b6d9721a1a98ff99f487578f.scope/container/memory.events
Nov 29 02:44:48 np0005539564 podman[231994]: 2025-11-29 07:44:48.119646324 +0000 UTC m=+0.276965831 container died 5c9ee03667982f1313b18439784671316c67c086b6d9721a1a98ff99f487578f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1bd43cdb-189f-493f-a1a6-454e88ff3fb6, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 02:44:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:48.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:49.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:49 np0005539564 nova_compute[226295]: 2025-11-29 07:44:49.306 226310 DEBUG nova.compute.manager [req-0e6f9f73-f94c-495b-8b67-2513dab5ee86 req-896c0eb2-81f6-4625-bbad-8fe8672cea58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Received event network-vif-unplugged-d00ce7a8-c85b-4013-844e-d84467a05bff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:44:49 np0005539564 nova_compute[226295]: 2025-11-29 07:44:49.307 226310 DEBUG oslo_concurrency.lockutils [req-0e6f9f73-f94c-495b-8b67-2513dab5ee86 req-896c0eb2-81f6-4625-bbad-8fe8672cea58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "980ff2fe-9165-4009-82a9-3c3b2055f29a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:49 np0005539564 nova_compute[226295]: 2025-11-29 07:44:49.308 226310 DEBUG oslo_concurrency.lockutils [req-0e6f9f73-f94c-495b-8b67-2513dab5ee86 req-896c0eb2-81f6-4625-bbad-8fe8672cea58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "980ff2fe-9165-4009-82a9-3c3b2055f29a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:49 np0005539564 nova_compute[226295]: 2025-11-29 07:44:49.308 226310 DEBUG oslo_concurrency.lockutils [req-0e6f9f73-f94c-495b-8b67-2513dab5ee86 req-896c0eb2-81f6-4625-bbad-8fe8672cea58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "980ff2fe-9165-4009-82a9-3c3b2055f29a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:44:49 np0005539564 nova_compute[226295]: 2025-11-29 07:44:49.309 226310 DEBUG nova.compute.manager [req-0e6f9f73-f94c-495b-8b67-2513dab5ee86 req-896c0eb2-81f6-4625-bbad-8fe8672cea58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] No waiting events found dispatching network-vif-unplugged-d00ce7a8-c85b-4013-844e-d84467a05bff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:44:49 np0005539564 nova_compute[226295]: 2025-11-29 07:44:49.309 226310 DEBUG nova.compute.manager [req-0e6f9f73-f94c-495b-8b67-2513dab5ee86 req-896c0eb2-81f6-4625-bbad-8fe8672cea58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Received event network-vif-unplugged-d00ce7a8-c85b-4013-844e-d84467a05bff for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:44:49 np0005539564 nova_compute[226295]: 2025-11-29 07:44:49.671 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:51 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5c9ee03667982f1313b18439784671316c67c086b6d9721a1a98ff99f487578f-userdata-shm.mount: Deactivated successfully.
Nov 29 02:44:51 np0005539564 systemd[1]: var-lib-containers-storage-overlay-0ba302962eb74dbe965e7008433b085ae567bdfe532bd6a410ac5bfd8cfeaef2-merged.mount: Deactivated successfully.
Nov 29 02:44:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:51.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:51.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:51 np0005539564 podman[231994]: 2025-11-29 07:44:51.359799039 +0000 UTC m=+3.517118546 container cleanup 5c9ee03667982f1313b18439784671316c67c086b6d9721a1a98ff99f487578f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1bd43cdb-189f-493f-a1a6-454e88ff3fb6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 02:44:51 np0005539564 systemd[1]: libpod-conmon-5c9ee03667982f1313b18439784671316c67c086b6d9721a1a98ff99f487578f.scope: Deactivated successfully.
Nov 29 02:44:51 np0005539564 nova_compute[226295]: 2025-11-29 07:44:51.626 226310 DEBUG nova.compute.manager [req-b8e7ba86-57e2-4329-b0f4-144ccbf700c9 req-5f3489c5-192d-4f23-a6d2-fa14e04bd4b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Received event network-vif-plugged-d00ce7a8-c85b-4013-844e-d84467a05bff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:44:51 np0005539564 nova_compute[226295]: 2025-11-29 07:44:51.626 226310 DEBUG oslo_concurrency.lockutils [req-b8e7ba86-57e2-4329-b0f4-144ccbf700c9 req-5f3489c5-192d-4f23-a6d2-fa14e04bd4b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "980ff2fe-9165-4009-82a9-3c3b2055f29a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:51 np0005539564 nova_compute[226295]: 2025-11-29 07:44:51.627 226310 DEBUG oslo_concurrency.lockutils [req-b8e7ba86-57e2-4329-b0f4-144ccbf700c9 req-5f3489c5-192d-4f23-a6d2-fa14e04bd4b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "980ff2fe-9165-4009-82a9-3c3b2055f29a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:51 np0005539564 nova_compute[226295]: 2025-11-29 07:44:51.627 226310 DEBUG oslo_concurrency.lockutils [req-b8e7ba86-57e2-4329-b0f4-144ccbf700c9 req-5f3489c5-192d-4f23-a6d2-fa14e04bd4b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "980ff2fe-9165-4009-82a9-3c3b2055f29a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:44:51 np0005539564 nova_compute[226295]: 2025-11-29 07:44:51.627 226310 DEBUG nova.compute.manager [req-b8e7ba86-57e2-4329-b0f4-144ccbf700c9 req-5f3489c5-192d-4f23-a6d2-fa14e04bd4b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] No waiting events found dispatching network-vif-plugged-d00ce7a8-c85b-4013-844e-d84467a05bff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:44:51 np0005539564 nova_compute[226295]: 2025-11-29 07:44:51.627 226310 WARNING nova.compute.manager [req-b8e7ba86-57e2-4329-b0f4-144ccbf700c9 req-5f3489c5-192d-4f23-a6d2-fa14e04bd4b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Received unexpected event network-vif-plugged-d00ce7a8-c85b-4013-844e-d84467a05bff for instance with vm_state active and task_state deleting.#033[00m
Nov 29 02:44:51 np0005539564 nova_compute[226295]: 2025-11-29 07:44:51.872 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:52 np0005539564 podman[232054]: 2025-11-29 07:44:52.011031275 +0000 UTC m=+0.629591832 container remove 5c9ee03667982f1313b18439784671316c67c086b6d9721a1a98ff99f487578f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1bd43cdb-189f-493f-a1a6-454e88ff3fb6, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:44:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:52.020 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b189009d-b239-4317-b839-95ed46db0191]: (4, ('Sat Nov 29 07:44:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1bd43cdb-189f-493f-a1a6-454e88ff3fb6 (5c9ee03667982f1313b18439784671316c67c086b6d9721a1a98ff99f487578f)\n5c9ee03667982f1313b18439784671316c67c086b6d9721a1a98ff99f487578f\nSat Nov 29 07:44:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1bd43cdb-189f-493f-a1a6-454e88ff3fb6 (5c9ee03667982f1313b18439784671316c67c086b6d9721a1a98ff99f487578f)\n5c9ee03667982f1313b18439784671316c67c086b6d9721a1a98ff99f487578f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:44:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:52.022 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[379cfd58-9fb9-4012-9766-44949f2b8192]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:44:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:52.023 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1bd43cdb-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:44:52 np0005539564 nova_compute[226295]: 2025-11-29 07:44:52.025 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:52 np0005539564 kernel: tap1bd43cdb-10: left promiscuous mode
Nov 29 02:44:52 np0005539564 nova_compute[226295]: 2025-11-29 07:44:52.040 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:52.043 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a16f48fd-c9cb-40b0-82a2-870a99ae4ddd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:44:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:52.058 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1b527af7-6f91-47ff-989e-5a0db0950905]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:44:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:52.059 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d077b7db-abdb-4964-8165-528aafea62fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:44:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:52.082 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0cac746a-88a8-41f8-95e1-16c779f45413]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516939, 'reachable_time': 17075, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 232069, 'error': None, 'target': 'ovnmeta-1bd43cdb-189f-493f-a1a6-454e88ff3fb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:44:52 np0005539564 systemd[1]: run-netns-ovnmeta\x2d1bd43cdb\x2d189f\x2d493f\x2da1a6\x2d454e88ff3fb6.mount: Deactivated successfully.
Nov 29 02:44:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:52.088 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1bd43cdb-189f-493f-a1a6-454e88ff3fb6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:44:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:44:52.088 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[75b7a04d-f312-417b-a609-b1ea8127b939]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:44:52 np0005539564 nova_compute[226295]: 2025-11-29 07:44:52.974 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:53.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:53.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:54 np0005539564 nova_compute[226295]: 2025-11-29 07:44:54.674 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:55.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:55.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:57.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:57.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:57 np0005539564 nova_compute[226295]: 2025-11-29 07:44:57.243 226310 INFO nova.virt.libvirt.driver [None req-231877b3-4a6b-4228-b249-afaf4ed3e851 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Deleting instance files /var/lib/nova/instances/980ff2fe-9165-4009-82a9-3c3b2055f29a_del#033[00m
Nov 29 02:44:57 np0005539564 nova_compute[226295]: 2025-11-29 07:44:57.244 226310 INFO nova.virt.libvirt.driver [None req-231877b3-4a6b-4228-b249-afaf4ed3e851 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Deletion of /var/lib/nova/instances/980ff2fe-9165-4009-82a9-3c3b2055f29a_del complete#033[00m
Nov 29 02:44:57 np0005539564 nova_compute[226295]: 2025-11-29 07:44:57.364 226310 INFO nova.compute.manager [None req-231877b3-4a6b-4228-b249-afaf4ed3e851 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Took 10.12 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:44:57 np0005539564 nova_compute[226295]: 2025-11-29 07:44:57.364 226310 DEBUG oslo.service.loopingcall [None req-231877b3-4a6b-4228-b249-afaf4ed3e851 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:44:57 np0005539564 nova_compute[226295]: 2025-11-29 07:44:57.365 226310 DEBUG nova.compute.manager [-] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:44:57 np0005539564 nova_compute[226295]: 2025-11-29 07:44:57.365 226310 DEBUG nova.network.neutron [-] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:44:57 np0005539564 nova_compute[226295]: 2025-11-29 07:44:57.686 226310 DEBUG oslo_concurrency.lockutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "22e7eaf7-aac4-4748-90df-04afd0ea7376" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:57 np0005539564 nova_compute[226295]: 2025-11-29 07:44:57.686 226310 DEBUG oslo_concurrency.lockutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:57 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:44:57 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:44:57 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:44:57 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:44:57 np0005539564 nova_compute[226295]: 2025-11-29 07:44:57.713 226310 DEBUG nova.compute.manager [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:44:57 np0005539564 nova_compute[226295]: 2025-11-29 07:44:57.833 226310 DEBUG oslo_concurrency.lockutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:57 np0005539564 nova_compute[226295]: 2025-11-29 07:44:57.833 226310 DEBUG oslo_concurrency.lockutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:57 np0005539564 nova_compute[226295]: 2025-11-29 07:44:57.841 226310 DEBUG nova.virt.hardware [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:44:57 np0005539564 nova_compute[226295]: 2025-11-29 07:44:57.841 226310 INFO nova.compute.claims [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 02:44:57 np0005539564 nova_compute[226295]: 2025-11-29 07:44:57.976 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:58 np0005539564 nova_compute[226295]: 2025-11-29 07:44:58.108 226310 DEBUG oslo_concurrency.processutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:44:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:44:58 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1116700311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:44:58 np0005539564 nova_compute[226295]: 2025-11-29 07:44:58.537 226310 DEBUG oslo_concurrency.processutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:44:58 np0005539564 nova_compute[226295]: 2025-11-29 07:44:58.543 226310 DEBUG nova.compute.provider_tree [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:44:58 np0005539564 nova_compute[226295]: 2025-11-29 07:44:58.582 226310 DEBUG nova.scheduler.client.report [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:44:58 np0005539564 nova_compute[226295]: 2025-11-29 07:44:58.624 226310 DEBUG oslo_concurrency.lockutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:44:58 np0005539564 nova_compute[226295]: 2025-11-29 07:44:58.626 226310 DEBUG nova.compute.manager [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 02:44:58 np0005539564 nova_compute[226295]: 2025-11-29 07:44:58.713 226310 DEBUG nova.compute.manager [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 02:44:58 np0005539564 nova_compute[226295]: 2025-11-29 07:44:58.713 226310 DEBUG nova.network.neutron [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 02:44:58 np0005539564 nova_compute[226295]: 2025-11-29 07:44:58.767 226310 INFO nova.virt.libvirt.driver [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 02:44:58 np0005539564 nova_compute[226295]: 2025-11-29 07:44:58.800 226310 DEBUG nova.compute.manager [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 02:44:58 np0005539564 nova_compute[226295]: 2025-11-29 07:44:58.965 226310 DEBUG nova.compute.manager [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 02:44:58 np0005539564 nova_compute[226295]: 2025-11-29 07:44:58.967 226310 DEBUG nova.virt.libvirt.driver [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 02:44:58 np0005539564 nova_compute[226295]: 2025-11-29 07:44:58.968 226310 INFO nova.virt.libvirt.driver [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Creating image(s)
Nov 29 02:44:58 np0005539564 nova_compute[226295]: 2025-11-29 07:44:58.998 226310 DEBUG nova.storage.rbd_utils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:44:59 np0005539564 nova_compute[226295]: 2025-11-29 07:44:59.042 226310 DEBUG nova.storage.rbd_utils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:44:59 np0005539564 nova_compute[226295]: 2025-11-29 07:44:59.081 226310 DEBUG nova.storage.rbd_utils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:44:59 np0005539564 nova_compute[226295]: 2025-11-29 07:44:59.084 226310 DEBUG oslo_concurrency.processutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:44:59 np0005539564 nova_compute[226295]: 2025-11-29 07:44:59.155 226310 DEBUG oslo_concurrency.processutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:44:59 np0005539564 nova_compute[226295]: 2025-11-29 07:44:59.157 226310 DEBUG oslo_concurrency.lockutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:44:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:59 np0005539564 nova_compute[226295]: 2025-11-29 07:44:59.158 226310 DEBUG oslo_concurrency.lockutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:44:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:59 np0005539564 nova_compute[226295]: 2025-11-29 07:44:59.159 226310 DEBUG oslo_concurrency.lockutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:44:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:59.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:59 np0005539564 nova_compute[226295]: 2025-11-29 07:44:59.190 226310 DEBUG nova.storage.rbd_utils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:44:59 np0005539564 nova_compute[226295]: 2025-11-29 07:44:59.194 226310 DEBUG oslo_concurrency.processutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:44:59 np0005539564 nova_compute[226295]: 2025-11-29 07:44:59.215 226310 DEBUG nova.policy [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd94c707cca604d72a8e1d49b636095e1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '96ea84545e71401fb69d21be6e2472f7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 02:44:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:44:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:59.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:59 np0005539564 nova_compute[226295]: 2025-11-29 07:44:59.253 226310 DEBUG nova.network.neutron [-] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:44:59 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:44:59 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:44:59 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:44:59 np0005539564 nova_compute[226295]: 2025-11-29 07:44:59.308 226310 INFO nova.compute.manager [-] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Took 1.94 seconds to deallocate network for instance.
Nov 29 02:44:59 np0005539564 nova_compute[226295]: 2025-11-29 07:44:59.380 226310 DEBUG oslo_concurrency.lockutils [None req-231877b3-4a6b-4228-b249-afaf4ed3e851 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:44:59 np0005539564 nova_compute[226295]: 2025-11-29 07:44:59.381 226310 DEBUG oslo_concurrency.lockutils [None req-231877b3-4a6b-4228-b249-afaf4ed3e851 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:44:59 np0005539564 nova_compute[226295]: 2025-11-29 07:44:59.497 226310 DEBUG oslo_concurrency.processutils [None req-231877b3-4a6b-4228-b249-afaf4ed3e851 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:44:59 np0005539564 nova_compute[226295]: 2025-11-29 07:44:59.676 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:45:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:45:00 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2367818218' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:45:00 np0005539564 nova_compute[226295]: 2025-11-29 07:45:00.253 226310 DEBUG oslo_concurrency.processutils [None req-231877b3-4a6b-4228-b249-afaf4ed3e851 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.756s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:45:00 np0005539564 nova_compute[226295]: 2025-11-29 07:45:00.260 226310 DEBUG nova.compute.provider_tree [None req-231877b3-4a6b-4228-b249-afaf4ed3e851 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:45:00 np0005539564 nova_compute[226295]: 2025-11-29 07:45:00.279 226310 DEBUG nova.scheduler.client.report [None req-231877b3-4a6b-4228-b249-afaf4ed3e851 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:45:00 np0005539564 nova_compute[226295]: 2025-11-29 07:45:00.327 226310 DEBUG oslo_concurrency.lockutils [None req-231877b3-4a6b-4228-b249-afaf4ed3e851 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:45:00 np0005539564 nova_compute[226295]: 2025-11-29 07:45:00.437 226310 INFO nova.scheduler.client.report [None req-231877b3-4a6b-4228-b249-afaf4ed3e851 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Deleted allocations for instance 980ff2fe-9165-4009-82a9-3c3b2055f29a
Nov 29 02:45:00 np0005539564 nova_compute[226295]: 2025-11-29 07:45:00.453 226310 DEBUG nova.compute.manager [req-6c0cd79e-fd52-405c-80f4-92d40b03e617 req-9df91f7a-a5da-405b-adc7-b99186af3da7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Received event network-vif-deleted-d00ce7a8-c85b-4013-844e-d84467a05bff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:45:00 np0005539564 nova_compute[226295]: 2025-11-29 07:45:00.711 226310 DEBUG nova.network.neutron [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Successfully created port: 6a7b5e5f-28d1-488d-8877-073bc5493a93 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 02:45:00 np0005539564 nova_compute[226295]: 2025-11-29 07:45:00.721 226310 DEBUG oslo_concurrency.lockutils [None req-231877b3-4a6b-4228-b249-afaf4ed3e851 197276e850054352a94a54ad4b0274be 422656fa169e49dcb91b5d4a8819f5ff - - default default] Lock "980ff2fe-9165-4009-82a9-3c3b2055f29a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 13.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:45:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:01.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:01.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:02 np0005539564 nova_compute[226295]: 2025-11-29 07:45:02.709 226310 DEBUG oslo_concurrency.processutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:45:02 np0005539564 nova_compute[226295]: 2025-11-29 07:45:02.810 226310 DEBUG nova.storage.rbd_utils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] resizing rbd image 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 02:45:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:03.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:03.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:03 np0005539564 nova_compute[226295]: 2025-11-29 07:45:03.427 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:45:03 np0005539564 nova_compute[226295]: 2025-11-29 07:45:03.430 226310 DEBUG nova.network.neutron [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Successfully updated port: 6a7b5e5f-28d1-488d-8877-073bc5493a93 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 02:45:03 np0005539564 nova_compute[226295]: 2025-11-29 07:45:03.431 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402287.8833597, 980ff2fe-9165-4009-82a9-3c3b2055f29a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:45:03 np0005539564 nova_compute[226295]: 2025-11-29 07:45:03.431 226310 INFO nova.compute.manager [-] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] VM Stopped (Lifecycle Event)
Nov 29 02:45:03 np0005539564 nova_compute[226295]: 2025-11-29 07:45:03.433 226310 DEBUG nova.compute.manager [req-a31dc87e-8cda-4831-bd80-46324672f36c req-e42ed9b0-a1f8-4c48-afea-f6860427af46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Received event network-changed-6a7b5e5f-28d1-488d-8877-073bc5493a93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:45:03 np0005539564 nova_compute[226295]: 2025-11-29 07:45:03.433 226310 DEBUG nova.compute.manager [req-a31dc87e-8cda-4831-bd80-46324672f36c req-e42ed9b0-a1f8-4c48-afea-f6860427af46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Refreshing instance network info cache due to event network-changed-6a7b5e5f-28d1-488d-8877-073bc5493a93. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 02:45:03 np0005539564 nova_compute[226295]: 2025-11-29 07:45:03.434 226310 DEBUG oslo_concurrency.lockutils [req-a31dc87e-8cda-4831-bd80-46324672f36c req-e42ed9b0-a1f8-4c48-afea-f6860427af46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-22e7eaf7-aac4-4748-90df-04afd0ea7376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:45:03 np0005539564 nova_compute[226295]: 2025-11-29 07:45:03.434 226310 DEBUG oslo_concurrency.lockutils [req-a31dc87e-8cda-4831-bd80-46324672f36c req-e42ed9b0-a1f8-4c48-afea-f6860427af46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-22e7eaf7-aac4-4748-90df-04afd0ea7376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:45:03 np0005539564 nova_compute[226295]: 2025-11-29 07:45:03.434 226310 DEBUG nova.network.neutron [req-a31dc87e-8cda-4831-bd80-46324672f36c req-e42ed9b0-a1f8-4c48-afea-f6860427af46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Refreshing network info cache for port 6a7b5e5f-28d1-488d-8877-073bc5493a93 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 02:45:03 np0005539564 nova_compute[226295]: 2025-11-29 07:45:03.542 226310 DEBUG oslo_concurrency.lockutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "refresh_cache-22e7eaf7-aac4-4748-90df-04afd0ea7376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:45:03 np0005539564 nova_compute[226295]: 2025-11-29 07:45:03.543 226310 DEBUG nova.compute.manager [None req-11c4ab96-477a-4f49-877c-3d49523dba66 - - - - - -] [instance: 980ff2fe-9165-4009-82a9-3c3b2055f29a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:45:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:03.548 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 02:45:03 np0005539564 nova_compute[226295]: 2025-11-29 07:45:03.551 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:45:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:03.552 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 02:45:03 np0005539564 nova_compute[226295]: 2025-11-29 07:45:03.559 226310 DEBUG nova.objects.instance [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'migration_context' on Instance uuid 22e7eaf7-aac4-4748-90df-04afd0ea7376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:45:03 np0005539564 nova_compute[226295]: 2025-11-29 07:45:03.574 226310 DEBUG nova.virt.libvirt.driver [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 02:45:03 np0005539564 nova_compute[226295]: 2025-11-29 07:45:03.574 226310 DEBUG nova.virt.libvirt.driver [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Ensure instance console log exists: /var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 02:45:03 np0005539564 nova_compute[226295]: 2025-11-29 07:45:03.574 226310 DEBUG oslo_concurrency.lockutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:45:03 np0005539564 nova_compute[226295]: 2025-11-29 07:45:03.575 226310 DEBUG oslo_concurrency.lockutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:45:03 np0005539564 nova_compute[226295]: 2025-11-29 07:45:03.575 226310 DEBUG oslo_concurrency.lockutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:45:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:03.694 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:45:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:03.694 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:45:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:03.694 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:45:03 np0005539564 nova_compute[226295]: 2025-11-29 07:45:03.912 226310 DEBUG nova.network.neutron [req-a31dc87e-8cda-4831-bd80-46324672f36c req-e42ed9b0-a1f8-4c48-afea-f6860427af46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:45:04 np0005539564 nova_compute[226295]: 2025-11-29 07:45:04.678 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:45:05 np0005539564 nova_compute[226295]: 2025-11-29 07:45:05.165 226310 DEBUG nova.network.neutron [req-a31dc87e-8cda-4831-bd80-46324672f36c req-e42ed9b0-a1f8-4c48-afea-f6860427af46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:45:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:05.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:05 np0005539564 nova_compute[226295]: 2025-11-29 07:45:05.182 226310 DEBUG oslo_concurrency.lockutils [req-a31dc87e-8cda-4831-bd80-46324672f36c req-e42ed9b0-a1f8-4c48-afea-f6860427af46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-22e7eaf7-aac4-4748-90df-04afd0ea7376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:45:05 np0005539564 nova_compute[226295]: 2025-11-29 07:45:05.183 226310 DEBUG oslo_concurrency.lockutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquired lock "refresh_cache-22e7eaf7-aac4-4748-90df-04afd0ea7376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:45:05 np0005539564 nova_compute[226295]: 2025-11-29 07:45:05.183 226310 DEBUG nova.network.neutron [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 02:45:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:05.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:06 np0005539564 nova_compute[226295]: 2025-11-29 07:45:06.016 226310 DEBUG nova.network.neutron [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:45:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:07.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:07.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:07 np0005539564 nova_compute[226295]: 2025-11-29 07:45:07.867 226310 DEBUG nova.network.neutron [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Updating instance_info_cache with network_info: [{"id": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "address": "fa:16:3e:66:59:71", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a7b5e5f-28", "ovs_interfaceid": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:45:07 np0005539564 nova_compute[226295]: 2025-11-29 07:45:07.955 226310 DEBUG oslo_concurrency.lockutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Releasing lock "refresh_cache-22e7eaf7-aac4-4748-90df-04afd0ea7376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:45:07 np0005539564 nova_compute[226295]: 2025-11-29 07:45:07.956 226310 DEBUG nova.compute.manager [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Instance network_info: |[{"id": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "address": "fa:16:3e:66:59:71", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a7b5e5f-28", "ovs_interfaceid": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:45:07 np0005539564 nova_compute[226295]: 2025-11-29 07:45:07.960 226310 DEBUG nova.virt.libvirt.driver [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Start _get_guest_xml network_info=[{"id": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "address": "fa:16:3e:66:59:71", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a7b5e5f-28", "ovs_interfaceid": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:45:07 np0005539564 nova_compute[226295]: 2025-11-29 07:45:07.967 226310 WARNING nova.virt.libvirt.driver [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:45:07 np0005539564 nova_compute[226295]: 2025-11-29 07:45:07.978 226310 DEBUG nova.virt.libvirt.host [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:45:07 np0005539564 nova_compute[226295]: 2025-11-29 07:45:07.979 226310 DEBUG nova.virt.libvirt.host [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:45:07 np0005539564 nova_compute[226295]: 2025-11-29 07:45:07.983 226310 DEBUG nova.virt.libvirt.host [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:45:07 np0005539564 nova_compute[226295]: 2025-11-29 07:45:07.983 226310 DEBUG nova.virt.libvirt.host [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:45:07 np0005539564 nova_compute[226295]: 2025-11-29 07:45:07.986 226310 DEBUG nova.virt.libvirt.driver [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:45:07 np0005539564 nova_compute[226295]: 2025-11-29 07:45:07.987 226310 DEBUG nova.virt.hardware [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:45:07 np0005539564 nova_compute[226295]: 2025-11-29 07:45:07.987 226310 DEBUG nova.virt.hardware [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:45:07 np0005539564 nova_compute[226295]: 2025-11-29 07:45:07.988 226310 DEBUG nova.virt.hardware [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:45:07 np0005539564 nova_compute[226295]: 2025-11-29 07:45:07.988 226310 DEBUG nova.virt.hardware [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:45:07 np0005539564 nova_compute[226295]: 2025-11-29 07:45:07.988 226310 DEBUG nova.virt.hardware [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:45:07 np0005539564 nova_compute[226295]: 2025-11-29 07:45:07.988 226310 DEBUG nova.virt.hardware [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:45:07 np0005539564 nova_compute[226295]: 2025-11-29 07:45:07.989 226310 DEBUG nova.virt.hardware [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:45:07 np0005539564 nova_compute[226295]: 2025-11-29 07:45:07.989 226310 DEBUG nova.virt.hardware [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:45:07 np0005539564 nova_compute[226295]: 2025-11-29 07:45:07.989 226310 DEBUG nova.virt.hardware [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:45:07 np0005539564 nova_compute[226295]: 2025-11-29 07:45:07.990 226310 DEBUG nova.virt.hardware [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:45:07 np0005539564 nova_compute[226295]: 2025-11-29 07:45:07.990 226310 DEBUG nova.virt.hardware [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:45:07 np0005539564 nova_compute[226295]: 2025-11-29 07:45:07.993 226310 DEBUG oslo_concurrency.processutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:45:08 np0005539564 nova_compute[226295]: 2025-11-29 07:45:08.480 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:45:08 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3817860258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:45:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:09.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:09.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:09 np0005539564 nova_compute[226295]: 2025-11-29 07:45:09.680 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:10 np0005539564 nova_compute[226295]: 2025-11-29 07:45:10.387 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:10 np0005539564 nova_compute[226295]: 2025-11-29 07:45:10.765 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:11.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:11.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:11 np0005539564 nova_compute[226295]: 2025-11-29 07:45:11.637 226310 DEBUG oslo_concurrency.processutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.644s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:45:11 np0005539564 nova_compute[226295]: 2025-11-29 07:45:11.707 226310 DEBUG nova.storage.rbd_utils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:45:11 np0005539564 nova_compute[226295]: 2025-11-29 07:45:11.711 226310 DEBUG oslo_concurrency.processutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:45:12 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:45:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:45:12 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3497740403' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:45:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:12.554 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:45:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:13.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:13.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:13 np0005539564 nova_compute[226295]: 2025-11-29 07:45:13.483 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:13 np0005539564 podman[232479]: 2025-11-29 07:45:13.506591928 +0000 UTC m=+0.056946515 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 02:45:13 np0005539564 podman[232478]: 2025-11-29 07:45:13.546578832 +0000 UTC m=+0.095322266 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:45:13 np0005539564 podman[232477]: 2025-11-29 07:45:13.546594543 +0000 UTC m=+0.101169705 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 02:45:13 np0005539564 nova_compute[226295]: 2025-11-29 07:45:13.669 226310 DEBUG oslo_concurrency.processutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.958s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:45:13 np0005539564 nova_compute[226295]: 2025-11-29 07:45:13.671 226310 DEBUG nova.virt.libvirt.vif [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:44:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1240733035',display_name='tempest-ServersAdminTestJSON-server-1240733035',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1240733035',id=9,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96ea84545e71401fb69d21be6e2472f7',ramdisk_id='',reservation_id='r-22jb6edg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1807764482',owner_user_name='tempest-ServersAdminTestJSON-180
7764482-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:44:58Z,user_data=None,user_id='d94c707cca604d72a8e1d49b636095e1',uuid=22e7eaf7-aac4-4748-90df-04afd0ea7376,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "address": "fa:16:3e:66:59:71", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a7b5e5f-28", "ovs_interfaceid": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:45:13 np0005539564 nova_compute[226295]: 2025-11-29 07:45:13.671 226310 DEBUG nova.network.os_vif_util [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converting VIF {"id": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "address": "fa:16:3e:66:59:71", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a7b5e5f-28", "ovs_interfaceid": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:45:13 np0005539564 nova_compute[226295]: 2025-11-29 07:45:13.672 226310 DEBUG nova.network.os_vif_util [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:59:71,bridge_name='br-int',has_traffic_filtering=True,id=6a7b5e5f-28d1-488d-8877-073bc5493a93,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a7b5e5f-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:45:13 np0005539564 nova_compute[226295]: 2025-11-29 07:45:13.673 226310 DEBUG nova.objects.instance [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 22e7eaf7-aac4-4748-90df-04afd0ea7376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:45:13 np0005539564 nova_compute[226295]: 2025-11-29 07:45:13.752 226310 DEBUG nova.virt.libvirt.driver [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:45:13 np0005539564 nova_compute[226295]:  <uuid>22e7eaf7-aac4-4748-90df-04afd0ea7376</uuid>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:  <name>instance-00000009</name>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServersAdminTestJSON-server-1240733035</nova:name>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 07:45:07</nova:creationTime>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 02:45:13 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:        <nova:user uuid="d94c707cca604d72a8e1d49b636095e1">tempest-ServersAdminTestJSON-1807764482-project-member</nova:user>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:        <nova:project uuid="96ea84545e71401fb69d21be6e2472f7">tempest-ServersAdminTestJSON-1807764482</nova:project>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:        <nova:port uuid="6a7b5e5f-28d1-488d-8877-073bc5493a93">
Nov 29 02:45:13 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <system>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <entry name="serial">22e7eaf7-aac4-4748-90df-04afd0ea7376</entry>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <entry name="uuid">22e7eaf7-aac4-4748-90df-04afd0ea7376</entry>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    </system>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:  <os>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:  </clock>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/22e7eaf7-aac4-4748-90df-04afd0ea7376_disk">
Nov 29 02:45:13 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:45:13 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/22e7eaf7-aac4-4748-90df-04afd0ea7376_disk.config">
Nov 29 02:45:13 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:45:13 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:66:59:71"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <target dev="tap6a7b5e5f-28"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    </interface>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376/console.log" append="off"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    </serial>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <video>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 02:45:13 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 02:45:13 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:45:13 np0005539564 nova_compute[226295]: </domain>
Nov 29 02:45:13 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:45:13 np0005539564 nova_compute[226295]: 2025-11-29 07:45:13.753 226310 DEBUG nova.compute.manager [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Preparing to wait for external event network-vif-plugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:45:13 np0005539564 nova_compute[226295]: 2025-11-29 07:45:13.758 226310 DEBUG oslo_concurrency.lockutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:45:13 np0005539564 nova_compute[226295]: 2025-11-29 07:45:13.759 226310 DEBUG oslo_concurrency.lockutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:45:13 np0005539564 nova_compute[226295]: 2025-11-29 07:45:13.759 226310 DEBUG oslo_concurrency.lockutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:45:13 np0005539564 nova_compute[226295]: 2025-11-29 07:45:13.760 226310 DEBUG nova.virt.libvirt.vif [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:44:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1240733035',display_name='tempest-ServersAdminTestJSON-server-1240733035',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1240733035',id=9,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96ea84545e71401fb69d21be6e2472f7',ramdisk_id='',reservation_id='r-22jb6edg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1807764482',owner_user_name='tempest-ServersAdminTe
stJSON-1807764482-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:44:58Z,user_data=None,user_id='d94c707cca604d72a8e1d49b636095e1',uuid=22e7eaf7-aac4-4748-90df-04afd0ea7376,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "address": "fa:16:3e:66:59:71", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a7b5e5f-28", "ovs_interfaceid": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:45:13 np0005539564 nova_compute[226295]: 2025-11-29 07:45:13.761 226310 DEBUG nova.network.os_vif_util [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converting VIF {"id": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "address": "fa:16:3e:66:59:71", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a7b5e5f-28", "ovs_interfaceid": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:45:13 np0005539564 nova_compute[226295]: 2025-11-29 07:45:13.761 226310 DEBUG nova.network.os_vif_util [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:59:71,bridge_name='br-int',has_traffic_filtering=True,id=6a7b5e5f-28d1-488d-8877-073bc5493a93,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a7b5e5f-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:45:13 np0005539564 nova_compute[226295]: 2025-11-29 07:45:13.762 226310 DEBUG os_vif [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:59:71,bridge_name='br-int',has_traffic_filtering=True,id=6a7b5e5f-28d1-488d-8877-073bc5493a93,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a7b5e5f-28') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:45:13 np0005539564 nova_compute[226295]: 2025-11-29 07:45:13.763 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:13 np0005539564 nova_compute[226295]: 2025-11-29 07:45:13.763 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:45:13 np0005539564 nova_compute[226295]: 2025-11-29 07:45:13.764 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:45:13 np0005539564 nova_compute[226295]: 2025-11-29 07:45:13.767 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:13 np0005539564 nova_compute[226295]: 2025-11-29 07:45:13.768 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a7b5e5f-28, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:45:13 np0005539564 nova_compute[226295]: 2025-11-29 07:45:13.768 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6a7b5e5f-28, col_values=(('external_ids', {'iface-id': '6a7b5e5f-28d1-488d-8877-073bc5493a93', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:59:71', 'vm-uuid': '22e7eaf7-aac4-4748-90df-04afd0ea7376'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:45:13 np0005539564 NetworkManager[48997]: <info>  [1764402313.7708] manager: (tap6a7b5e5f-28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Nov 29 02:45:13 np0005539564 nova_compute[226295]: 2025-11-29 07:45:13.770 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:13 np0005539564 nova_compute[226295]: 2025-11-29 07:45:13.772 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:45:13 np0005539564 nova_compute[226295]: 2025-11-29 07:45:13.776 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:13 np0005539564 nova_compute[226295]: 2025-11-29 07:45:13.776 226310 INFO os_vif [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:59:71,bridge_name='br-int',has_traffic_filtering=True,id=6a7b5e5f-28d1-488d-8877-073bc5493a93,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a7b5e5f-28')#033[00m
Nov 29 02:45:14 np0005539564 nova_compute[226295]: 2025-11-29 07:45:14.223 226310 DEBUG nova.virt.libvirt.driver [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:45:14 np0005539564 nova_compute[226295]: 2025-11-29 07:45:14.223 226310 DEBUG nova.virt.libvirt.driver [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:45:14 np0005539564 nova_compute[226295]: 2025-11-29 07:45:14.223 226310 DEBUG nova.virt.libvirt.driver [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] No VIF found with MAC fa:16:3e:66:59:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:45:14 np0005539564 nova_compute[226295]: 2025-11-29 07:45:14.224 226310 INFO nova.virt.libvirt.driver [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Using config drive#033[00m
Nov 29 02:45:14 np0005539564 nova_compute[226295]: 2025-11-29 07:45:14.821 226310 DEBUG nova.storage.rbd_utils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:45:14 np0005539564 nova_compute[226295]: 2025-11-29 07:45:14.829 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:15.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:15.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:15 np0005539564 nova_compute[226295]: 2025-11-29 07:45:15.668 226310 INFO nova.virt.libvirt.driver [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Creating config drive at /var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376/disk.config#033[00m
Nov 29 02:45:15 np0005539564 nova_compute[226295]: 2025-11-29 07:45:15.677 226310 DEBUG oslo_concurrency.processutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_061j1px execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:45:15 np0005539564 nova_compute[226295]: 2025-11-29 07:45:15.822 226310 DEBUG oslo_concurrency.processutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_061j1px" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:45:15 np0005539564 nova_compute[226295]: 2025-11-29 07:45:15.902 226310 DEBUG nova.storage.rbd_utils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:45:15 np0005539564 nova_compute[226295]: 2025-11-29 07:45:15.908 226310 DEBUG oslo_concurrency.processutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376/disk.config 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:45:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:17.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:17.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:18 np0005539564 nova_compute[226295]: 2025-11-29 07:45:18.771 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:19.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:19.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:19 np0005539564 nova_compute[226295]: 2025-11-29 07:45:19.687 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:21.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:21.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:23.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:23.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:23 np0005539564 nova_compute[226295]: 2025-11-29 07:45:23.773 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:24 np0005539564 nova_compute[226295]: 2025-11-29 07:45:24.690 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:25.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:25.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:26 np0005539564 nova_compute[226295]: 2025-11-29 07:45:26.434 226310 DEBUG oslo_concurrency.processutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376/disk.config 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 10.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:45:26 np0005539564 nova_compute[226295]: 2025-11-29 07:45:26.435 226310 INFO nova.virt.libvirt.driver [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Deleting local config drive /var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376/disk.config because it was imported into RBD.#033[00m
Nov 29 02:45:26 np0005539564 kernel: tap6a7b5e5f-28: entered promiscuous mode
Nov 29 02:45:26 np0005539564 ovn_controller[130591]: 2025-11-29T07:45:26Z|00048|binding|INFO|Claiming lport 6a7b5e5f-28d1-488d-8877-073bc5493a93 for this chassis.
Nov 29 02:45:26 np0005539564 ovn_controller[130591]: 2025-11-29T07:45:26Z|00049|binding|INFO|6a7b5e5f-28d1-488d-8877-073bc5493a93: Claiming fa:16:3e:66:59:71 10.100.0.5
Nov 29 02:45:26 np0005539564 NetworkManager[48997]: <info>  [1764402326.5058] manager: (tap6a7b5e5f-28): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Nov 29 02:45:26 np0005539564 nova_compute[226295]: 2025-11-29 07:45:26.506 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.531 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:59:71 10.100.0.5'], port_security=['fa:16:3e:66:59:71 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '22e7eaf7-aac4-4748-90df-04afd0ea7376', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96ea84545e71401fb69d21be6e2472f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b094dd4e-cb76-48e4-81b4-a11d19d5f956', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=baaadbdd-7935-4514-9332-391647ab6336, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=6a7b5e5f-28d1-488d-8877-073bc5493a93) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.532 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 6a7b5e5f-28d1-488d-8877-073bc5493a93 in datapath 788595a6-8f3f-45f7-807d-f88c9bf0e050 bound to our chassis#033[00m
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.534 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 788595a6-8f3f-45f7-807d-f88c9bf0e050#033[00m
Nov 29 02:45:26 np0005539564 systemd-udevd[232614]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:45:26 np0005539564 systemd-machined[190128]: New machine qemu-3-instance-00000009.
Nov 29 02:45:26 np0005539564 NetworkManager[48997]: <info>  [1764402326.5525] device (tap6a7b5e5f-28): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:45:26 np0005539564 NetworkManager[48997]: <info>  [1764402326.5534] device (tap6a7b5e5f-28): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.551 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3ac1a889-0b31-4d35-b653-752ce9036824]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.552 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap788595a6-81 in ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.554 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap788595a6-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.555 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f6682650-ed2d-44dc-bf49-5f616aa82be2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.556 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b1c26636-1c46-4840-a725-894c78e1860f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.573 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[118f08c2-5a98-40b7-9b23-cd4b01cba238]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539564 systemd[1]: Started Virtual Machine qemu-3-instance-00000009.
Nov 29 02:45:26 np0005539564 nova_compute[226295]: 2025-11-29 07:45:26.583 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.587 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ebbe7edf-439b-4048-9e64-1af368a71b85]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539564 ovn_controller[130591]: 2025-11-29T07:45:26Z|00050|binding|INFO|Setting lport 6a7b5e5f-28d1-488d-8877-073bc5493a93 ovn-installed in OVS
Nov 29 02:45:26 np0005539564 ovn_controller[130591]: 2025-11-29T07:45:26Z|00051|binding|INFO|Setting lport 6a7b5e5f-28d1-488d-8877-073bc5493a93 up in Southbound
Nov 29 02:45:26 np0005539564 nova_compute[226295]: 2025-11-29 07:45:26.588 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.616 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[cbd712ca-d430-432f-91d9-fe18b01ec12f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.622 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[016628fe-38cc-4bec-9da6-29d8f3a9d954]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539564 NetworkManager[48997]: <info>  [1764402326.6236] manager: (tap788595a6-80): new Veth device (/org/freedesktop/NetworkManager/Devices/37)
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.653 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[bcec9c65-dc67-4fc3-8b05-2cc748fab7d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.655 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[08b42116-37e6-4678-9f67-0de877486372]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539564 NetworkManager[48997]: <info>  [1764402326.6835] device (tap788595a6-80): carrier: link connected
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.688 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[4a4522b1-6594-4636-9608-162bf481da33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.703 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[dcc955c3-4d83-4646-b343-b9a02b33f1d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap788595a6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:52:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530136, 'reachable_time': 17929, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 232648, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.719 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1c572053-df0c-40ce-a138-4f538621473b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe73:529d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530136, 'tstamp': 530136}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 232649, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.745 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c9a43d53-0d60-4fbc-b2db-55f1d0da5d59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap788595a6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:52:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530136, 'reachable_time': 17929, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 232650, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.775 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2f1a8e38-2537-4119-ba3a-aa9901eb2cd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.852 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7661f16a-9a03-4e01-a15d-9862f5153003]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.854 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap788595a6-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.855 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.856 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap788595a6-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:45:26 np0005539564 nova_compute[226295]: 2025-11-29 07:45:26.858 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:26 np0005539564 NetworkManager[48997]: <info>  [1764402326.8593] manager: (tap788595a6-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Nov 29 02:45:26 np0005539564 kernel: tap788595a6-80: entered promiscuous mode
Nov 29 02:45:26 np0005539564 nova_compute[226295]: 2025-11-29 07:45:26.861 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.863 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap788595a6-80, col_values=(('external_ids', {'iface-id': '4a1365a2-9549-4214-ba8d-c7bb361501a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:45:26 np0005539564 nova_compute[226295]: 2025-11-29 07:45:26.865 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:26 np0005539564 ovn_controller[130591]: 2025-11-29T07:45:26Z|00052|binding|INFO|Releasing lport 4a1365a2-9549-4214-ba8d-c7bb361501a6 from this chassis (sb_readonly=0)
Nov 29 02:45:26 np0005539564 nova_compute[226295]: 2025-11-29 07:45:26.866 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.867 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/788595a6-8f3f-45f7-807d-f88c9bf0e050.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/788595a6-8f3f-45f7-807d-f88c9bf0e050.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.868 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[da5789bc-fc67-4244-823d-1821b4769355]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.869 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-788595a6-8f3f-45f7-807d-f88c9bf0e050
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/788595a6-8f3f-45f7-807d-f88c9bf0e050.pid.haproxy
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 788595a6-8f3f-45f7-807d-f88c9bf0e050
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:26.870 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'env', 'PROCESS_TAG=haproxy-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/788595a6-8f3f-45f7-807d-f88c9bf0e050.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:45:26 np0005539564 nova_compute[226295]: 2025-11-29 07:45:26.877 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:27.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:27.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:27 np0005539564 podman[232718]: 2025-11-29 07:45:27.23617024 +0000 UTC m=+0.030320453 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:45:28 np0005539564 nova_compute[226295]: 2025-11-29 07:45:28.501 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402328.501416, 22e7eaf7-aac4-4748-90df-04afd0ea7376 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:45:28 np0005539564 nova_compute[226295]: 2025-11-29 07:45:28.502 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] VM Started (Lifecycle Event)
Nov 29 02:45:28 np0005539564 nova_compute[226295]: 2025-11-29 07:45:28.521 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:45:28 np0005539564 nova_compute[226295]: 2025-11-29 07:45:28.525 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402328.5015886, 22e7eaf7-aac4-4748-90df-04afd0ea7376 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:45:28 np0005539564 nova_compute[226295]: 2025-11-29 07:45:28.525 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] VM Paused (Lifecycle Event)
Nov 29 02:45:28 np0005539564 nova_compute[226295]: 2025-11-29 07:45:28.542 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:45:28 np0005539564 nova_compute[226295]: 2025-11-29 07:45:28.545 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:45:28 np0005539564 nova_compute[226295]: 2025-11-29 07:45:28.572 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:45:28 np0005539564 nova_compute[226295]: 2025-11-29 07:45:28.775 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:45:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:29.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:29.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:29 np0005539564 podman[232718]: 2025-11-29 07:45:29.378708023 +0000 UTC m=+2.172858146 container create 889a1b37b22bca5fbf21bad0edf5e7698cd952d7ad5158dbb698ea93904ae8a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.411 226310 DEBUG nova.compute.manager [req-63fcd31c-39be-45bd-a0fb-6b2bfb4107df req-23d6da64-fbe3-44d1-adfc-3973b121d887 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Received event network-vif-plugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.411 226310 DEBUG oslo_concurrency.lockutils [req-63fcd31c-39be-45bd-a0fb-6b2bfb4107df req-23d6da64-fbe3-44d1-adfc-3973b121d887 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.412 226310 DEBUG oslo_concurrency.lockutils [req-63fcd31c-39be-45bd-a0fb-6b2bfb4107df req-23d6da64-fbe3-44d1-adfc-3973b121d887 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.412 226310 DEBUG oslo_concurrency.lockutils [req-63fcd31c-39be-45bd-a0fb-6b2bfb4107df req-23d6da64-fbe3-44d1-adfc-3973b121d887 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.412 226310 DEBUG nova.compute.manager [req-63fcd31c-39be-45bd-a0fb-6b2bfb4107df req-23d6da64-fbe3-44d1-adfc-3973b121d887 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Processing event network-vif-plugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.413 226310 DEBUG nova.compute.manager [req-63fcd31c-39be-45bd-a0fb-6b2bfb4107df req-23d6da64-fbe3-44d1-adfc-3973b121d887 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Received event network-vif-plugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.413 226310 DEBUG oslo_concurrency.lockutils [req-63fcd31c-39be-45bd-a0fb-6b2bfb4107df req-23d6da64-fbe3-44d1-adfc-3973b121d887 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.413 226310 DEBUG oslo_concurrency.lockutils [req-63fcd31c-39be-45bd-a0fb-6b2bfb4107df req-23d6da64-fbe3-44d1-adfc-3973b121d887 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.414 226310 DEBUG oslo_concurrency.lockutils [req-63fcd31c-39be-45bd-a0fb-6b2bfb4107df req-23d6da64-fbe3-44d1-adfc-3973b121d887 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.414 226310 DEBUG nova.compute.manager [req-63fcd31c-39be-45bd-a0fb-6b2bfb4107df req-23d6da64-fbe3-44d1-adfc-3973b121d887 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] No waiting events found dispatching network-vif-plugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.415 226310 WARNING nova.compute.manager [req-63fcd31c-39be-45bd-a0fb-6b2bfb4107df req-23d6da64-fbe3-44d1-adfc-3973b121d887 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Received unexpected event network-vif-plugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 for instance with vm_state building and task_state spawning.
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.416 226310 DEBUG nova.compute.manager [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.420 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402329.4203339, 22e7eaf7-aac4-4748-90df-04afd0ea7376 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.421 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] VM Resumed (Lifecycle Event)
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.424 226310 DEBUG nova.virt.libvirt.driver [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.428 226310 INFO nova.virt.libvirt.driver [-] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Instance spawned successfully.
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.429 226310 DEBUG nova.virt.libvirt.driver [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.448 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.458 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.465 226310 DEBUG nova.virt.libvirt.driver [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.466 226310 DEBUG nova.virt.libvirt.driver [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.466 226310 DEBUG nova.virt.libvirt.driver [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.467 226310 DEBUG nova.virt.libvirt.driver [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.468 226310 DEBUG nova.virt.libvirt.driver [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.469 226310 DEBUG nova.virt.libvirt.driver [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.501 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.544 226310 INFO nova.compute.manager [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Took 30.58 seconds to spawn the instance on the hypervisor.
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.545 226310 DEBUG nova.compute.manager [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.606 226310 INFO nova.compute.manager [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Took 31.81 seconds to build instance.
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.630 226310 DEBUG oslo_concurrency.lockutils [None req-8e30cf32-5012-44bc-a4b2-952cd2c16f1a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 31.944s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:45:29 np0005539564 nova_compute[226295]: 2025-11-29 07:45:29.692 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:45:29 np0005539564 systemd[1]: Started libpod-conmon-889a1b37b22bca5fbf21bad0edf5e7698cd952d7ad5158dbb698ea93904ae8a5.scope.
Nov 29 02:45:29 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:45:29 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/922ca8cd455975622217533ffbcab3cf4b1abd0399d09c00eab8dc44d01ee6cf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:30 np0005539564 podman[232718]: 2025-11-29 07:45:30.106833026 +0000 UTC m=+2.900983249 container init 889a1b37b22bca5fbf21bad0edf5e7698cd952d7ad5158dbb698ea93904ae8a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 02:45:30 np0005539564 podman[232718]: 2025-11-29 07:45:30.114886934 +0000 UTC m=+2.909037097 container start 889a1b37b22bca5fbf21bad0edf5e7698cd952d7ad5158dbb698ea93904ae8a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 02:45:30 np0005539564 neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050[232739]: [NOTICE]   (232743) : New worker (232745) forked
Nov 29 02:45:30 np0005539564 neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050[232739]: [NOTICE]   (232743) : Loading success.
Nov 29 02:45:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:31.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:31.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:33.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:33.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:33 np0005539564 nova_compute[226295]: 2025-11-29 07:45:33.777 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:45:34 np0005539564 nova_compute[226295]: 2025-11-29 07:45:34.695 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:45:34 np0005539564 nova_compute[226295]: 2025-11-29 07:45:34.995 226310 DEBUG oslo_concurrency.lockutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "15412333-68f0-43a8-b114-af3dde30be2a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:45:34 np0005539564 nova_compute[226295]: 2025-11-29 07:45:34.995 226310 DEBUG oslo_concurrency.lockutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "15412333-68f0-43a8-b114-af3dde30be2a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:45:35 np0005539564 nova_compute[226295]: 2025-11-29 07:45:35.011 226310 DEBUG nova.compute.manager [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 02:45:35 np0005539564 nova_compute[226295]: 2025-11-29 07:45:35.105 226310 DEBUG oslo_concurrency.lockutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:45:35 np0005539564 nova_compute[226295]: 2025-11-29 07:45:35.105 226310 DEBUG oslo_concurrency.lockutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:45:35 np0005539564 nova_compute[226295]: 2025-11-29 07:45:35.113 226310 DEBUG nova.virt.hardware [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 02:45:35 np0005539564 nova_compute[226295]: 2025-11-29 07:45:35.114 226310 INFO nova.compute.claims [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Claim successful on node compute-1.ctlplane.example.com
Nov 29 02:45:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:35.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:35.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:35 np0005539564 nova_compute[226295]: 2025-11-29 07:45:35.273 226310 DEBUG oslo_concurrency.processutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:45:35 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:45:35 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:45:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:45:35 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/305277659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:45:35 np0005539564 nova_compute[226295]: 2025-11-29 07:45:35.696 226310 DEBUG oslo_concurrency.processutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:45:35 np0005539564 nova_compute[226295]: 2025-11-29 07:45:35.701 226310 DEBUG nova.compute.provider_tree [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:45:35 np0005539564 nova_compute[226295]: 2025-11-29 07:45:35.719 226310 DEBUG nova.scheduler.client.report [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:45:35 np0005539564 nova_compute[226295]: 2025-11-29 07:45:35.742 226310 DEBUG oslo_concurrency.lockutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:45:35 np0005539564 nova_compute[226295]: 2025-11-29 07:45:35.743 226310 DEBUG nova.compute.manager [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 02:45:35 np0005539564 nova_compute[226295]: 2025-11-29 07:45:35.799 226310 DEBUG nova.compute.manager [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 02:45:35 np0005539564 nova_compute[226295]: 2025-11-29 07:45:35.799 226310 DEBUG nova.network.neutron [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 02:45:35 np0005539564 nova_compute[226295]: 2025-11-29 07:45:35.821 226310 INFO nova.virt.libvirt.driver [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 02:45:35 np0005539564 nova_compute[226295]: 2025-11-29 07:45:35.838 226310 DEBUG nova.compute.manager [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 02:45:35 np0005539564 nova_compute[226295]: 2025-11-29 07:45:35.951 226310 DEBUG nova.compute.manager [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 02:45:35 np0005539564 nova_compute[226295]: 2025-11-29 07:45:35.953 226310 DEBUG nova.virt.libvirt.driver [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 02:45:35 np0005539564 nova_compute[226295]: 2025-11-29 07:45:35.953 226310 INFO nova.virt.libvirt.driver [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Creating image(s)
Nov 29 02:45:35 np0005539564 nova_compute[226295]: 2025-11-29 07:45:35.994 226310 DEBUG nova.storage.rbd_utils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 15412333-68f0-43a8-b114-af3dde30be2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:45:36 np0005539564 nova_compute[226295]: 2025-11-29 07:45:36.165 226310 DEBUG nova.storage.rbd_utils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 15412333-68f0-43a8-b114-af3dde30be2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:45:36 np0005539564 nova_compute[226295]: 2025-11-29 07:45:36.202 226310 DEBUG nova.storage.rbd_utils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 15412333-68f0-43a8-b114-af3dde30be2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:45:36 np0005539564 nova_compute[226295]: 2025-11-29 07:45:36.210 226310 DEBUG oslo_concurrency.processutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:45:36 np0005539564 nova_compute[226295]: 2025-11-29 07:45:36.247 226310 DEBUG nova.policy [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd94c707cca604d72a8e1d49b636095e1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '96ea84545e71401fb69d21be6e2472f7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 02:45:36 np0005539564 nova_compute[226295]: 2025-11-29 07:45:36.302 226310 DEBUG oslo_concurrency.processutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:45:36 np0005539564 nova_compute[226295]: 2025-11-29 07:45:36.303 226310 DEBUG oslo_concurrency.lockutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:45:36 np0005539564 nova_compute[226295]: 2025-11-29 07:45:36.305 226310 DEBUG oslo_concurrency.lockutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:45:36 np0005539564 nova_compute[226295]: 2025-11-29 07:45:36.305 226310 DEBUG oslo_concurrency.lockutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:45:36 np0005539564 nova_compute[226295]: 2025-11-29 07:45:36.333 226310 DEBUG nova.storage.rbd_utils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 15412333-68f0-43a8-b114-af3dde30be2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:45:36 np0005539564 nova_compute[226295]: 2025-11-29 07:45:36.337 226310 DEBUG oslo_concurrency.processutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 15412333-68f0-43a8-b114-af3dde30be2a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:45:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:37.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:37.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:37 np0005539564 nova_compute[226295]: 2025-11-29 07:45:37.373 226310 DEBUG nova.network.neutron [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Successfully created port: 04aab360-dfb8-4a22-be37-eeec9d964e92 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:45:38 np0005539564 nova_compute[226295]: 2025-11-29 07:45:38.492 226310 DEBUG oslo_concurrency.processutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 15412333-68f0-43a8-b114-af3dde30be2a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:45:38 np0005539564 nova_compute[226295]: 2025-11-29 07:45:38.560 226310 DEBUG nova.storage.rbd_utils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] resizing rbd image 15412333-68f0-43a8-b114-af3dde30be2a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:45:38 np0005539564 nova_compute[226295]: 2025-11-29 07:45:38.837 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:38 np0005539564 nova_compute[226295]: 2025-11-29 07:45:38.839 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:45:38 np0005539564 nova_compute[226295]: 2025-11-29 07:45:38.840 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:45:38 np0005539564 nova_compute[226295]: 2025-11-29 07:45:38.840 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:45:38 np0005539564 nova_compute[226295]: 2025-11-29 07:45:38.840 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:45:38 np0005539564 nova_compute[226295]: 2025-11-29 07:45:38.868 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 02:45:38 np0005539564 nova_compute[226295]: 2025-11-29 07:45:38.918 226310 DEBUG nova.network.neutron [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Successfully updated port: 04aab360-dfb8-4a22-be37-eeec9d964e92 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:45:38 np0005539564 nova_compute[226295]: 2025-11-29 07:45:38.938 226310 DEBUG oslo_concurrency.lockutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "refresh_cache-15412333-68f0-43a8-b114-af3dde30be2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:45:38 np0005539564 nova_compute[226295]: 2025-11-29 07:45:38.938 226310 DEBUG oslo_concurrency.lockutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquired lock "refresh_cache-15412333-68f0-43a8-b114-af3dde30be2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:45:38 np0005539564 nova_compute[226295]: 2025-11-29 07:45:38.938 226310 DEBUG nova.network.neutron [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:45:39 np0005539564 nova_compute[226295]: 2025-11-29 07:45:39.125 226310 DEBUG nova.compute.manager [req-1126a09a-e956-452c-a2c9-b6f42ec796fa req-4e3fdf30-11fe-4e21-a327-bc365ce7cb45 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Received event network-changed-04aab360-dfb8-4a22-be37-eeec9d964e92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:45:39 np0005539564 nova_compute[226295]: 2025-11-29 07:45:39.126 226310 DEBUG nova.compute.manager [req-1126a09a-e956-452c-a2c9-b6f42ec796fa req-4e3fdf30-11fe-4e21-a327-bc365ce7cb45 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Refreshing instance network info cache due to event network-changed-04aab360-dfb8-4a22-be37-eeec9d964e92. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:45:39 np0005539564 nova_compute[226295]: 2025-11-29 07:45:39.127 226310 DEBUG oslo_concurrency.lockutils [req-1126a09a-e956-452c-a2c9-b6f42ec796fa req-4e3fdf30-11fe-4e21-a327-bc365ce7cb45 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-15412333-68f0-43a8-b114-af3dde30be2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:45:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:39.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:39.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:39 np0005539564 nova_compute[226295]: 2025-11-29 07:45:39.327 226310 DEBUG nova.objects.instance [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'migration_context' on Instance uuid 15412333-68f0-43a8-b114-af3dde30be2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:45:39 np0005539564 nova_compute[226295]: 2025-11-29 07:45:39.362 226310 DEBUG nova.virt.libvirt.driver [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:45:39 np0005539564 nova_compute[226295]: 2025-11-29 07:45:39.363 226310 DEBUG nova.virt.libvirt.driver [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Ensure instance console log exists: /var/lib/nova/instances/15412333-68f0-43a8-b114-af3dde30be2a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:45:39 np0005539564 nova_compute[226295]: 2025-11-29 07:45:39.364 226310 DEBUG oslo_concurrency.lockutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:45:39 np0005539564 nova_compute[226295]: 2025-11-29 07:45:39.365 226310 DEBUG oslo_concurrency.lockutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:45:39 np0005539564 nova_compute[226295]: 2025-11-29 07:45:39.365 226310 DEBUG oslo_concurrency.lockutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:45:39 np0005539564 nova_compute[226295]: 2025-11-29 07:45:39.487 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-22e7eaf7-aac4-4748-90df-04afd0ea7376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:45:39 np0005539564 nova_compute[226295]: 2025-11-29 07:45:39.488 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-22e7eaf7-aac4-4748-90df-04afd0ea7376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:45:39 np0005539564 nova_compute[226295]: 2025-11-29 07:45:39.488 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:45:39 np0005539564 nova_compute[226295]: 2025-11-29 07:45:39.489 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 22e7eaf7-aac4-4748-90df-04afd0ea7376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:45:39 np0005539564 nova_compute[226295]: 2025-11-29 07:45:39.509 226310 DEBUG nova.network.neutron [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:45:39 np0005539564 nova_compute[226295]: 2025-11-29 07:45:39.700 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:41.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:41.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:41 np0005539564 nova_compute[226295]: 2025-11-29 07:45:41.475 226310 DEBUG nova.network.neutron [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Updating instance_info_cache with network_info: [{"id": "04aab360-dfb8-4a22-be37-eeec9d964e92", "address": "fa:16:3e:66:93:b2", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap04aab360-df", "ovs_interfaceid": "04aab360-dfb8-4a22-be37-eeec9d964e92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:45:41 np0005539564 nova_compute[226295]: 2025-11-29 07:45:41.493 226310 DEBUG oslo_concurrency.lockutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Releasing lock "refresh_cache-15412333-68f0-43a8-b114-af3dde30be2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:45:41 np0005539564 nova_compute[226295]: 2025-11-29 07:45:41.494 226310 DEBUG nova.compute.manager [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Instance network_info: |[{"id": "04aab360-dfb8-4a22-be37-eeec9d964e92", "address": "fa:16:3e:66:93:b2", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap04aab360-df", "ovs_interfaceid": "04aab360-dfb8-4a22-be37-eeec9d964e92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:45:41 np0005539564 nova_compute[226295]: 2025-11-29 07:45:41.494 226310 DEBUG oslo_concurrency.lockutils [req-1126a09a-e956-452c-a2c9-b6f42ec796fa req-4e3fdf30-11fe-4e21-a327-bc365ce7cb45 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-15412333-68f0-43a8-b114-af3dde30be2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:45:41 np0005539564 nova_compute[226295]: 2025-11-29 07:45:41.495 226310 DEBUG nova.network.neutron [req-1126a09a-e956-452c-a2c9-b6f42ec796fa req-4e3fdf30-11fe-4e21-a327-bc365ce7cb45 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Refreshing network info cache for port 04aab360-dfb8-4a22-be37-eeec9d964e92 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:45:41 np0005539564 nova_compute[226295]: 2025-11-29 07:45:41.498 226310 DEBUG nova.virt.libvirt.driver [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Start _get_guest_xml network_info=[{"id": "04aab360-dfb8-4a22-be37-eeec9d964e92", "address": "fa:16:3e:66:93:b2", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap04aab360-df", "ovs_interfaceid": "04aab360-dfb8-4a22-be37-eeec9d964e92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:45:41 np0005539564 nova_compute[226295]: 2025-11-29 07:45:41.506 226310 WARNING nova.virt.libvirt.driver [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:45:41 np0005539564 nova_compute[226295]: 2025-11-29 07:45:41.519 226310 DEBUG nova.virt.libvirt.host [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:45:41 np0005539564 nova_compute[226295]: 2025-11-29 07:45:41.521 226310 DEBUG nova.virt.libvirt.host [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:45:41 np0005539564 nova_compute[226295]: 2025-11-29 07:45:41.526 226310 DEBUG nova.virt.libvirt.host [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:45:41 np0005539564 nova_compute[226295]: 2025-11-29 07:45:41.527 226310 DEBUG nova.virt.libvirt.host [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:45:41 np0005539564 nova_compute[226295]: 2025-11-29 07:45:41.529 226310 DEBUG nova.virt.libvirt.driver [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:45:41 np0005539564 nova_compute[226295]: 2025-11-29 07:45:41.530 226310 DEBUG nova.virt.hardware [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:45:41 np0005539564 nova_compute[226295]: 2025-11-29 07:45:41.531 226310 DEBUG nova.virt.hardware [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:45:41 np0005539564 nova_compute[226295]: 2025-11-29 07:45:41.532 226310 DEBUG nova.virt.hardware [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:45:41 np0005539564 nova_compute[226295]: 2025-11-29 07:45:41.532 226310 DEBUG nova.virt.hardware [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:45:41 np0005539564 nova_compute[226295]: 2025-11-29 07:45:41.533 226310 DEBUG nova.virt.hardware [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:45:41 np0005539564 nova_compute[226295]: 2025-11-29 07:45:41.533 226310 DEBUG nova.virt.hardware [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:45:41 np0005539564 nova_compute[226295]: 2025-11-29 07:45:41.534 226310 DEBUG nova.virt.hardware [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:45:41 np0005539564 nova_compute[226295]: 2025-11-29 07:45:41.534 226310 DEBUG nova.virt.hardware [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:45:41 np0005539564 nova_compute[226295]: 2025-11-29 07:45:41.535 226310 DEBUG nova.virt.hardware [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:45:41 np0005539564 nova_compute[226295]: 2025-11-29 07:45:41.535 226310 DEBUG nova.virt.hardware [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:45:41 np0005539564 nova_compute[226295]: 2025-11-29 07:45:41.536 226310 DEBUG nova.virt.hardware [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:45:41 np0005539564 nova_compute[226295]: 2025-11-29 07:45:41.542 226310 DEBUG oslo_concurrency.processutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:45:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:45:41 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2175026816' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:45:42 np0005539564 nova_compute[226295]: 2025-11-29 07:45:42.110 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Updating instance_info_cache with network_info: [{"id": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "address": "fa:16:3e:66:59:71", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a7b5e5f-28", "ovs_interfaceid": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:45:42 np0005539564 nova_compute[226295]: 2025-11-29 07:45:42.140 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-22e7eaf7-aac4-4748-90df-04afd0ea7376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:45:42 np0005539564 nova_compute[226295]: 2025-11-29 07:45:42.141 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:45:42 np0005539564 nova_compute[226295]: 2025-11-29 07:45:42.141 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:45:42 np0005539564 nova_compute[226295]: 2025-11-29 07:45:42.142 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:45:42 np0005539564 nova_compute[226295]: 2025-11-29 07:45:42.142 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:45:42 np0005539564 nova_compute[226295]: 2025-11-29 07:45:42.142 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:45:42 np0005539564 nova_compute[226295]: 2025-11-29 07:45:42.143 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:45:42 np0005539564 nova_compute[226295]: 2025-11-29 07:45:42.143 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:45:42 np0005539564 nova_compute[226295]: 2025-11-29 07:45:42.144 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:45:42 np0005539564 nova_compute[226295]: 2025-11-29 07:45:42.144 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:45:42 np0005539564 nova_compute[226295]: 2025-11-29 07:45:42.166 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:45:42 np0005539564 nova_compute[226295]: 2025-11-29 07:45:42.167 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:45:42 np0005539564 nova_compute[226295]: 2025-11-29 07:45:42.167 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:45:42 np0005539564 nova_compute[226295]: 2025-11-29 07:45:42.168 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:45:42 np0005539564 nova_compute[226295]: 2025-11-29 07:45:42.168 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:45:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:43.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:43.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:43 np0005539564 nova_compute[226295]: 2025-11-29 07:45:43.840 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:43 np0005539564 nova_compute[226295]: 2025-11-29 07:45:43.923 226310 DEBUG nova.network.neutron [req-1126a09a-e956-452c-a2c9-b6f42ec796fa req-4e3fdf30-11fe-4e21-a327-bc365ce7cb45 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Updated VIF entry in instance network info cache for port 04aab360-dfb8-4a22-be37-eeec9d964e92. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:45:43 np0005539564 nova_compute[226295]: 2025-11-29 07:45:43.924 226310 DEBUG nova.network.neutron [req-1126a09a-e956-452c-a2c9-b6f42ec796fa req-4e3fdf30-11fe-4e21-a327-bc365ce7cb45 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Updating instance_info_cache with network_info: [{"id": "04aab360-dfb8-4a22-be37-eeec9d964e92", "address": "fa:16:3e:66:93:b2", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap04aab360-df", "ovs_interfaceid": "04aab360-dfb8-4a22-be37-eeec9d964e92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:45:43 np0005539564 nova_compute[226295]: 2025-11-29 07:45:43.947 226310 DEBUG oslo_concurrency.lockutils [req-1126a09a-e956-452c-a2c9-b6f42ec796fa req-4e3fdf30-11fe-4e21-a327-bc365ce7cb45 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-15412333-68f0-43a8-b114-af3dde30be2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:45:44 np0005539564 nova_compute[226295]: 2025-11-29 07:45:44.217 226310 DEBUG oslo_concurrency.processutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.675s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:45:44 np0005539564 nova_compute[226295]: 2025-11-29 07:45:44.271 226310 DEBUG nova.storage.rbd_utils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 15412333-68f0-43a8-b114-af3dde30be2a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:45:44 np0005539564 nova_compute[226295]: 2025-11-29 07:45:44.274 226310 DEBUG oslo_concurrency.processutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:45:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:45:44 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4264991800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:45:44 np0005539564 nova_compute[226295]: 2025-11-29 07:45:44.536 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.368s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:45:44 np0005539564 nova_compute[226295]: 2025-11-29 07:45:44.615 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:45:44 np0005539564 nova_compute[226295]: 2025-11-29 07:45:44.615 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:45:44 np0005539564 nova_compute[226295]: 2025-11-29 07:45:44.752 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:44 np0005539564 podman[233076]: 2025-11-29 07:45:44.78060198 +0000 UTC m=+0.323419560 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 02:45:44 np0005539564 podman[233075]: 2025-11-29 07:45:44.783292613 +0000 UTC m=+0.330927834 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:45:44 np0005539564 podman[233074]: 2025-11-29 07:45:44.808676751 +0000 UTC m=+0.356392654 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 02:45:44 np0005539564 nova_compute[226295]: 2025-11-29 07:45:44.866 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:45:44 np0005539564 nova_compute[226295]: 2025-11-29 07:45:44.867 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4648MB free_disk=20.916088104248047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:45:44 np0005539564 nova_compute[226295]: 2025-11-29 07:45:44.867 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:45:44 np0005539564 nova_compute[226295]: 2025-11-29 07:45:44.867 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:45:44 np0005539564 nova_compute[226295]: 2025-11-29 07:45:44.957 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 22e7eaf7-aac4-4748-90df-04afd0ea7376 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:45:44 np0005539564 nova_compute[226295]: 2025-11-29 07:45:44.957 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 15412333-68f0-43a8-b114-af3dde30be2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:45:44 np0005539564 nova_compute[226295]: 2025-11-29 07:45:44.957 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:45:44 np0005539564 nova_compute[226295]: 2025-11-29 07:45:44.958 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.011 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:45:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:45.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:45.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:45:45 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1000503346' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.367 226310 DEBUG oslo_concurrency.processutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.370 226310 DEBUG nova.virt.libvirt.vif [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:45:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1517476307',display_name='tempest-ServersAdminTestJSON-server-1517476307',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1517476307',id=11,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96ea84545e71401fb69d21be6e2472f7',ramdisk_id='',reservation_id='r-8uugksx5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1807764482',owner_user_name='tempest-ServersAdminTestJSON-1807764482-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:45:35Z,user_data=None,user_id='d94c707cca604d72a8e1d49b636095e1',uuid=15412333-68f0-43a8-b114-af3dde30be2a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "04aab360-dfb8-4a22-be37-eeec9d964e92", "address": "fa:16:3e:66:93:b2", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap04aab360-df", "ovs_interfaceid": "04aab360-dfb8-4a22-be37-eeec9d964e92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.370 226310 DEBUG nova.network.os_vif_util [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converting VIF {"id": "04aab360-dfb8-4a22-be37-eeec9d964e92", "address": "fa:16:3e:66:93:b2", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap04aab360-df", "ovs_interfaceid": "04aab360-dfb8-4a22-be37-eeec9d964e92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.371 226310 DEBUG nova.network.os_vif_util [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:93:b2,bridge_name='br-int',has_traffic_filtering=True,id=04aab360-dfb8-4a22-be37-eeec9d964e92,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap04aab360-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.372 226310 DEBUG nova.objects.instance [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 15412333-68f0-43a8-b114-af3dde30be2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.400 226310 DEBUG nova.virt.libvirt.driver [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:45:45 np0005539564 nova_compute[226295]:  <uuid>15412333-68f0-43a8-b114-af3dde30be2a</uuid>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:  <name>instance-0000000b</name>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServersAdminTestJSON-server-1517476307</nova:name>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 07:45:41</nova:creationTime>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 02:45:45 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:        <nova:user uuid="d94c707cca604d72a8e1d49b636095e1">tempest-ServersAdminTestJSON-1807764482-project-member</nova:user>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:        <nova:project uuid="96ea84545e71401fb69d21be6e2472f7">tempest-ServersAdminTestJSON-1807764482</nova:project>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:        <nova:port uuid="04aab360-dfb8-4a22-be37-eeec9d964e92">
Nov 29 02:45:45 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <system>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <entry name="serial">15412333-68f0-43a8-b114-af3dde30be2a</entry>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <entry name="uuid">15412333-68f0-43a8-b114-af3dde30be2a</entry>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    </system>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:  <os>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:  </clock>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/15412333-68f0-43a8-b114-af3dde30be2a_disk">
Nov 29 02:45:45 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:45:45 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/15412333-68f0-43a8-b114-af3dde30be2a_disk.config">
Nov 29 02:45:45 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:45:45 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:66:93:b2"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <target dev="tap04aab360-df"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    </interface>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/15412333-68f0-43a8-b114-af3dde30be2a/console.log" append="off"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    </serial>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <video>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 02:45:45 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 02:45:45 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:45:45 np0005539564 nova_compute[226295]: </domain>
Nov 29 02:45:45 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.403 226310 DEBUG nova.compute.manager [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Preparing to wait for external event network-vif-plugged-04aab360-dfb8-4a22-be37-eeec9d964e92 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.403 226310 DEBUG oslo_concurrency.lockutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "15412333-68f0-43a8-b114-af3dde30be2a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.404 226310 DEBUG oslo_concurrency.lockutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "15412333-68f0-43a8-b114-af3dde30be2a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.404 226310 DEBUG oslo_concurrency.lockutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "15412333-68f0-43a8-b114-af3dde30be2a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.406 226310 DEBUG nova.virt.libvirt.vif [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:45:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1517476307',display_name='tempest-ServersAdminTestJSON-server-1517476307',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1517476307',id=11,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96ea84545e71401fb69d21be6e2472f7',ramdisk_id='',reservation_id='r-8uugksx5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1807764482',owner_user_name='tempest-ServersAdminTestJSON-1807764482-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:45:35Z,user_data=None,user_id='d94c707cca604d72a8e1d49b636095e1',uuid=15412333-68f0-43a8-b114-af3dde30be2a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "04aab360-dfb8-4a22-be37-eeec9d964e92", "address": "fa:16:3e:66:93:b2", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap04aab360-df", "ovs_interfaceid": "04aab360-dfb8-4a22-be37-eeec9d964e92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.406 226310 DEBUG nova.network.os_vif_util [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converting VIF {"id": "04aab360-dfb8-4a22-be37-eeec9d964e92", "address": "fa:16:3e:66:93:b2", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap04aab360-df", "ovs_interfaceid": "04aab360-dfb8-4a22-be37-eeec9d964e92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.407 226310 DEBUG nova.network.os_vif_util [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:93:b2,bridge_name='br-int',has_traffic_filtering=True,id=04aab360-dfb8-4a22-be37-eeec9d964e92,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap04aab360-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.408 226310 DEBUG os_vif [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:93:b2,bridge_name='br-int',has_traffic_filtering=True,id=04aab360-dfb8-4a22-be37-eeec9d964e92,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap04aab360-df') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.409 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.410 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.411 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.415 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.416 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap04aab360-df, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.417 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap04aab360-df, col_values=(('external_ids', {'iface-id': '04aab360-dfb8-4a22-be37-eeec9d964e92', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:93:b2', 'vm-uuid': '15412333-68f0-43a8-b114-af3dde30be2a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.421 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:45 np0005539564 NetworkManager[48997]: <info>  [1764402345.4217] manager: (tap04aab360-df): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.424 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.428 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.431 226310 INFO os_vif [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:93:b2,bridge_name='br-int',has_traffic_filtering=True,id=04aab360-dfb8-4a22-be37-eeec9d964e92,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap04aab360-df')#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.599 226310 DEBUG nova.virt.libvirt.driver [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.599 226310 DEBUG nova.virt.libvirt.driver [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.600 226310 DEBUG nova.virt.libvirt.driver [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] No VIF found with MAC fa:16:3e:66:93:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.601 226310 INFO nova.virt.libvirt.driver [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Using config drive#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.639 226310 DEBUG nova.storage.rbd_utils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 15412333-68f0-43a8-b114-af3dde30be2a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:45:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:45:45 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/899598920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.696 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.685s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.701 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.720 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.747 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:45:45 np0005539564 nova_compute[226295]: 2025-11-29 07:45:45.748 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:45:46 np0005539564 nova_compute[226295]: 2025-11-29 07:45:46.394 226310 INFO nova.virt.libvirt.driver [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Creating config drive at /var/lib/nova/instances/15412333-68f0-43a8-b114-af3dde30be2a/disk.config#033[00m
Nov 29 02:45:46 np0005539564 nova_compute[226295]: 2025-11-29 07:45:46.404 226310 DEBUG oslo_concurrency.processutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/15412333-68f0-43a8-b114-af3dde30be2a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdecnvt3c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:45:46 np0005539564 nova_compute[226295]: 2025-11-29 07:45:46.547 226310 DEBUG oslo_concurrency.processutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/15412333-68f0-43a8-b114-af3dde30be2a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdecnvt3c" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:45:46 np0005539564 nova_compute[226295]: 2025-11-29 07:45:46.597 226310 DEBUG nova.storage.rbd_utils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 15412333-68f0-43a8-b114-af3dde30be2a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:45:46 np0005539564 nova_compute[226295]: 2025-11-29 07:45:46.603 226310 DEBUG oslo_concurrency.processutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/15412333-68f0-43a8-b114-af3dde30be2a/disk.config 15412333-68f0-43a8-b114-af3dde30be2a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:45:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:47.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:47.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:49.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:49.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:49 np0005539564 nova_compute[226295]: 2025-11-29 07:45:49.646 226310 DEBUG oslo_concurrency.processutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/15412333-68f0-43a8-b114-af3dde30be2a/disk.config 15412333-68f0-43a8-b114-af3dde30be2a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:45:49 np0005539564 nova_compute[226295]: 2025-11-29 07:45:49.648 226310 INFO nova.virt.libvirt.driver [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Deleting local config drive /var/lib/nova/instances/15412333-68f0-43a8-b114-af3dde30be2a/disk.config because it was imported into RBD.#033[00m
Nov 29 02:45:49 np0005539564 kernel: tap04aab360-df: entered promiscuous mode
Nov 29 02:45:49 np0005539564 NetworkManager[48997]: <info>  [1764402349.7044] manager: (tap04aab360-df): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Nov 29 02:45:49 np0005539564 ovn_controller[130591]: 2025-11-29T07:45:49Z|00053|binding|INFO|Claiming lport 04aab360-dfb8-4a22-be37-eeec9d964e92 for this chassis.
Nov 29 02:45:49 np0005539564 ovn_controller[130591]: 2025-11-29T07:45:49Z|00054|binding|INFO|04aab360-dfb8-4a22-be37-eeec9d964e92: Claiming fa:16:3e:66:93:b2 10.100.0.13
Nov 29 02:45:49 np0005539564 nova_compute[226295]: 2025-11-29 07:45:49.707 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:49.717 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:93:b2 10.100.0.13'], port_security=['fa:16:3e:66:93:b2 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '15412333-68f0-43a8-b114-af3dde30be2a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96ea84545e71401fb69d21be6e2472f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b094dd4e-cb76-48e4-81b4-a11d19d5f956', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=baaadbdd-7935-4514-9332-391647ab6336, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=04aab360-dfb8-4a22-be37-eeec9d964e92) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:45:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:49.718 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 04aab360-dfb8-4a22-be37-eeec9d964e92 in datapath 788595a6-8f3f-45f7-807d-f88c9bf0e050 bound to our chassis#033[00m
Nov 29 02:45:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:49.719 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 788595a6-8f3f-45f7-807d-f88c9bf0e050#033[00m
Nov 29 02:45:49 np0005539564 nova_compute[226295]: 2025-11-29 07:45:49.727 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:49 np0005539564 ovn_controller[130591]: 2025-11-29T07:45:49Z|00055|binding|INFO|Setting lport 04aab360-dfb8-4a22-be37-eeec9d964e92 ovn-installed in OVS
Nov 29 02:45:49 np0005539564 ovn_controller[130591]: 2025-11-29T07:45:49Z|00056|binding|INFO|Setting lport 04aab360-dfb8-4a22-be37-eeec9d964e92 up in Southbound
Nov 29 02:45:49 np0005539564 nova_compute[226295]: 2025-11-29 07:45:49.729 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:49 np0005539564 systemd-udevd[233234]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:45:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:49.736 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7f9ac731-1fbf-4306-aafd-0d78297caef8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:49 np0005539564 systemd-machined[190128]: New machine qemu-4-instance-0000000b.
Nov 29 02:45:49 np0005539564 NetworkManager[48997]: <info>  [1764402349.7486] device (tap04aab360-df): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:45:49 np0005539564 NetworkManager[48997]: <info>  [1764402349.7508] device (tap04aab360-df): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:45:49 np0005539564 nova_compute[226295]: 2025-11-29 07:45:49.753 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:49 np0005539564 systemd[1]: Started Virtual Machine qemu-4-instance-0000000b.
Nov 29 02:45:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:49.769 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[ec8e71d4-da38-4d6e-8b5f-6ea8f2d303e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:49.774 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[02369204-a04c-40f1-b164-f5ba8a2fd86e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:49.799 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[82295d96-e3be-4b15-ab4d-b379f993f9fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:49.816 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8d283416-1213-4e03-aeb5-0702e17b1cf0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap788595a6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:52:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530136, 'reachable_time': 17929, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 233247, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:49.831 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5f979fcf-e6a1-460b-b054-0fd0687f8b50]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap788595a6-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530148, 'tstamp': 530148}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 233248, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap788595a6-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530152, 'tstamp': 530152}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 233248, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:49.833 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap788595a6-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:45:49 np0005539564 nova_compute[226295]: 2025-11-29 07:45:49.857 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:49 np0005539564 nova_compute[226295]: 2025-11-29 07:45:49.858 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:49.858 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap788595a6-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:45:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:49.859 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:45:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:49.859 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap788595a6-80, col_values=(('external_ids', {'iface-id': '4a1365a2-9549-4214-ba8d-c7bb361501a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:45:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:45:49.860 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.120 226310 DEBUG nova.compute.manager [req-cfe4d1e5-7160-4932-a315-eb0803d135bc req-2f30c224-8f12-4bd8-8fa0-4dec8253930a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Received event network-vif-plugged-04aab360-dfb8-4a22-be37-eeec9d964e92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.120 226310 DEBUG oslo_concurrency.lockutils [req-cfe4d1e5-7160-4932-a315-eb0803d135bc req-2f30c224-8f12-4bd8-8fa0-4dec8253930a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "15412333-68f0-43a8-b114-af3dde30be2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.121 226310 DEBUG oslo_concurrency.lockutils [req-cfe4d1e5-7160-4932-a315-eb0803d135bc req-2f30c224-8f12-4bd8-8fa0-4dec8253930a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "15412333-68f0-43a8-b114-af3dde30be2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.121 226310 DEBUG oslo_concurrency.lockutils [req-cfe4d1e5-7160-4932-a315-eb0803d135bc req-2f30c224-8f12-4bd8-8fa0-4dec8253930a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "15412333-68f0-43a8-b114-af3dde30be2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.121 226310 DEBUG nova.compute.manager [req-cfe4d1e5-7160-4932-a315-eb0803d135bc req-2f30c224-8f12-4bd8-8fa0-4dec8253930a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Processing event network-vif-plugged-04aab360-dfb8-4a22-be37-eeec9d964e92 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.420 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.689 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402350.689167, 15412333-68f0-43a8-b114-af3dde30be2a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.690 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] VM Started (Lifecycle Event)#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.692 226310 DEBUG nova.compute.manager [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.697 226310 DEBUG nova.virt.libvirt.driver [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.700 226310 INFO nova.virt.libvirt.driver [-] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Instance spawned successfully.#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.701 226310 DEBUG nova.virt.libvirt.driver [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.731 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.740 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.745 226310 DEBUG nova.virt.libvirt.driver [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.746 226310 DEBUG nova.virt.libvirt.driver [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.747 226310 DEBUG nova.virt.libvirt.driver [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.748 226310 DEBUG nova.virt.libvirt.driver [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.749 226310 DEBUG nova.virt.libvirt.driver [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.750 226310 DEBUG nova.virt.libvirt.driver [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.779 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.780 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402350.6895065, 15412333-68f0-43a8-b114-af3dde30be2a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.780 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.808 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.812 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402350.6953666, 15412333-68f0-43a8-b114-af3dde30be2a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.812 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.833 226310 INFO nova.compute.manager [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Took 14.88 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.834 226310 DEBUG nova.compute.manager [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.841 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.844 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:45:50 np0005539564 nova_compute[226295]: 2025-11-29 07:45:50.870 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:45:51 np0005539564 nova_compute[226295]: 2025-11-29 07:45:51.092 226310 INFO nova.compute.manager [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Took 16.02 seconds to build instance.#033[00m
Nov 29 02:45:51 np0005539564 nova_compute[226295]: 2025-11-29 07:45:51.113 226310 DEBUG oslo_concurrency.lockutils [None req-1841b395-4552-4d61-be0d-c5103da440a2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "15412333-68f0-43a8-b114-af3dde30be2a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:45:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:51.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:51.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:51 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:51 np0005539564 ovn_controller[130591]: 2025-11-29T07:45:51Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:66:59:71 10.100.0.5
Nov 29 02:45:51 np0005539564 ovn_controller[130591]: 2025-11-29T07:45:51Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:66:59:71 10.100.0.5
Nov 29 02:45:52 np0005539564 nova_compute[226295]: 2025-11-29 07:45:52.400 226310 DEBUG nova.compute.manager [req-0dbe50ec-494c-425a-858a-c6dd05737b8e req-e2e4b99e-f81c-4995-ba83-e37a4055e075 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Received event network-vif-plugged-04aab360-dfb8-4a22-be37-eeec9d964e92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:45:52 np0005539564 nova_compute[226295]: 2025-11-29 07:45:52.401 226310 DEBUG oslo_concurrency.lockutils [req-0dbe50ec-494c-425a-858a-c6dd05737b8e req-e2e4b99e-f81c-4995-ba83-e37a4055e075 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "15412333-68f0-43a8-b114-af3dde30be2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:45:52 np0005539564 nova_compute[226295]: 2025-11-29 07:45:52.401 226310 DEBUG oslo_concurrency.lockutils [req-0dbe50ec-494c-425a-858a-c6dd05737b8e req-e2e4b99e-f81c-4995-ba83-e37a4055e075 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "15412333-68f0-43a8-b114-af3dde30be2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:45:52 np0005539564 nova_compute[226295]: 2025-11-29 07:45:52.401 226310 DEBUG oslo_concurrency.lockutils [req-0dbe50ec-494c-425a-858a-c6dd05737b8e req-e2e4b99e-f81c-4995-ba83-e37a4055e075 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "15412333-68f0-43a8-b114-af3dde30be2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:45:52 np0005539564 nova_compute[226295]: 2025-11-29 07:45:52.402 226310 DEBUG nova.compute.manager [req-0dbe50ec-494c-425a-858a-c6dd05737b8e req-e2e4b99e-f81c-4995-ba83-e37a4055e075 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] No waiting events found dispatching network-vif-plugged-04aab360-dfb8-4a22-be37-eeec9d964e92 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:45:52 np0005539564 nova_compute[226295]: 2025-11-29 07:45:52.402 226310 WARNING nova.compute.manager [req-0dbe50ec-494c-425a-858a-c6dd05737b8e req-e2e4b99e-f81c-4995-ba83-e37a4055e075 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Received unexpected event network-vif-plugged-04aab360-dfb8-4a22-be37-eeec9d964e92 for instance with vm_state active and task_state None.#033[00m
Nov 29 02:45:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:53.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:53.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:54 np0005539564 nova_compute[226295]: 2025-11-29 07:45:54.756 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:55.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:55.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:55 np0005539564 nova_compute[226295]: 2025-11-29 07:45:55.422 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:56 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:57.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:57.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:59.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:45:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:59.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:45:59.757055) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402359757098, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1519, "num_deletes": 251, "total_data_size": 3589918, "memory_usage": 3627408, "flush_reason": "Manual Compaction"}
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Nov 29 02:45:59 np0005539564 nova_compute[226295]: 2025-11-29 07:45:59.759 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402359788968, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 2358792, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21911, "largest_seqno": 23425, "table_properties": {"data_size": 2352056, "index_size": 3807, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15172, "raw_average_key_size": 20, "raw_value_size": 2338385, "raw_average_value_size": 3194, "num_data_blocks": 169, "num_entries": 732, "num_filter_entries": 732, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402162, "oldest_key_time": 1764402162, "file_creation_time": 1764402359, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 31959 microseconds, and 6139 cpu microseconds.
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:45:59.789012) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 2358792 bytes OK
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:45:59.789032) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:45:59.790484) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:45:59.790499) EVENT_LOG_v1 {"time_micros": 1764402359790494, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:45:59.790517) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 3582682, prev total WAL file size 3582682, number of live WAL files 2.
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:45:59.791515) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(2303KB)], [42(8501KB)]
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402359791583, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 11063824, "oldest_snapshot_seqno": -1}
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 5205 keys, 8878750 bytes, temperature: kUnknown
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402359898072, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 8878750, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8844180, "index_size": 20444, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13061, "raw_key_size": 131571, "raw_average_key_size": 25, "raw_value_size": 8750333, "raw_average_value_size": 1681, "num_data_blocks": 838, "num_entries": 5205, "num_filter_entries": 5205, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764402359, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:45:59.898730) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 8878750 bytes
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:45:59.900190) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.8 rd, 83.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 8.3 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(8.5) write-amplify(3.8) OK, records in: 5722, records dropped: 517 output_compression: NoCompression
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:45:59.900219) EVENT_LOG_v1 {"time_micros": 1764402359900206, "job": 24, "event": "compaction_finished", "compaction_time_micros": 106566, "compaction_time_cpu_micros": 17838, "output_level": 6, "num_output_files": 1, "total_output_size": 8878750, "num_input_records": 5722, "num_output_records": 5205, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402359901368, "job": 24, "event": "table_file_deletion", "file_number": 44}
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402359904265, "job": 24, "event": "table_file_deletion", "file_number": 42}
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:45:59.791386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:45:59.904367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:45:59.904374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:45:59.904375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:45:59.904377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:45:59 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:45:59.904379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:46:00 np0005539564 nova_compute[226295]: 2025-11-29 07:46:00.425 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 02:46:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:01.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 02:46:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:01.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:03.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:03.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:03.696 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:03.696 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:03.697 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:04 np0005539564 nova_compute[226295]: 2025-11-29 07:46:04.761 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:05.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:05.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:05 np0005539564 nova_compute[226295]: 2025-11-29 07:46:05.428 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:05 np0005539564 ovn_controller[130591]: 2025-11-29T07:46:05Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:66:93:b2 10.100.0.13
Nov 29 02:46:05 np0005539564 ovn_controller[130591]: 2025-11-29T07:46:05Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:66:93:b2 10.100.0.13
Nov 29 02:46:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:06.101 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:46:06 np0005539564 nova_compute[226295]: 2025-11-29 07:46:06.101 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:06.103 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:46:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:07.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:07.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:09.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:09.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:09 np0005539564 nova_compute[226295]: 2025-11-29 07:46:09.763 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:10.105 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:46:10 np0005539564 nova_compute[226295]: 2025-11-29 07:46:10.450 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:11.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:11.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:13.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:13.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:14 np0005539564 nova_compute[226295]: 2025-11-29 07:46:14.804 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:15 np0005539564 nova_compute[226295]: 2025-11-29 07:46:15.257 226310 INFO nova.compute.manager [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Rebuilding instance#033[00m
Nov 29 02:46:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:15.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:15.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:15 np0005539564 nova_compute[226295]: 2025-11-29 07:46:15.452 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:15 np0005539564 podman[233294]: 2025-11-29 07:46:15.520828572 +0000 UTC m=+0.058849747 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:46:15 np0005539564 podman[233293]: 2025-11-29 07:46:15.530608937 +0000 UTC m=+0.076385273 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Nov 29 02:46:15 np0005539564 podman[233292]: 2025-11-29 07:46:15.540659499 +0000 UTC m=+0.096781545 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 02:46:15 np0005539564 nova_compute[226295]: 2025-11-29 07:46:15.924 226310 DEBUG nova.objects.instance [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 22e7eaf7-aac4-4748-90df-04afd0ea7376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:46:16 np0005539564 nova_compute[226295]: 2025-11-29 07:46:16.007 226310 DEBUG nova.compute.manager [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:46:16 np0005539564 nova_compute[226295]: 2025-11-29 07:46:16.082 226310 DEBUG nova.objects.instance [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'pci_requests' on Instance uuid 22e7eaf7-aac4-4748-90df-04afd0ea7376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:46:16 np0005539564 nova_compute[226295]: 2025-11-29 07:46:16.100 226310 DEBUG nova.objects.instance [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 22e7eaf7-aac4-4748-90df-04afd0ea7376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:46:16 np0005539564 nova_compute[226295]: 2025-11-29 07:46:16.116 226310 DEBUG nova.objects.instance [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'resources' on Instance uuid 22e7eaf7-aac4-4748-90df-04afd0ea7376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:46:16 np0005539564 nova_compute[226295]: 2025-11-29 07:46:16.132 226310 DEBUG nova.objects.instance [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'migration_context' on Instance uuid 22e7eaf7-aac4-4748-90df-04afd0ea7376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:46:16 np0005539564 nova_compute[226295]: 2025-11-29 07:46:16.154 226310 DEBUG nova.objects.instance [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 02:46:16 np0005539564 nova_compute[226295]: 2025-11-29 07:46:16.161 226310 DEBUG nova.virt.libvirt.driver [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 02:46:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:17.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:17.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:46:18.163047) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402378163080, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 433, "num_deletes": 260, "total_data_size": 466237, "memory_usage": 476112, "flush_reason": "Manual Compaction"}
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402378168077, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 307523, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23430, "largest_seqno": 23858, "table_properties": {"data_size": 305138, "index_size": 485, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5558, "raw_average_key_size": 17, "raw_value_size": 300316, "raw_average_value_size": 926, "num_data_blocks": 22, "num_entries": 324, "num_filter_entries": 324, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402361, "oldest_key_time": 1764402361, "file_creation_time": 1764402378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 5053 microseconds, and 1606 cpu microseconds.
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:46:18.168106) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 307523 bytes OK
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:46:18.168117) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:46:18.169812) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:46:18.169830) EVENT_LOG_v1 {"time_micros": 1764402378169824, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:46:18.169849) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 463486, prev total WAL file size 463486, number of live WAL files 2.
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:46:18.170506) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353037' seq:0, type:0; will stop at (end)
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(300KB)], [45(8670KB)]
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402378170590, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 9186273, "oldest_snapshot_seqno": -1}
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 5001 keys, 9033914 bytes, temperature: kUnknown
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402378310302, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 9033914, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8999998, "index_size": 20302, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 128542, "raw_average_key_size": 25, "raw_value_size": 8909026, "raw_average_value_size": 1781, "num_data_blocks": 827, "num_entries": 5001, "num_filter_entries": 5001, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764402378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:46:18.310723) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9033914 bytes
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:46:18.312731) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 65.7 rd, 64.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 8.5 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(59.2) write-amplify(29.4) OK, records in: 5529, records dropped: 528 output_compression: NoCompression
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:46:18.312764) EVENT_LOG_v1 {"time_micros": 1764402378312748, "job": 26, "event": "compaction_finished", "compaction_time_micros": 139845, "compaction_time_cpu_micros": 25613, "output_level": 6, "num_output_files": 1, "total_output_size": 9033914, "num_input_records": 5529, "num_output_records": 5001, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402378313091, "job": 26, "event": "table_file_deletion", "file_number": 47}
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402378315803, "job": 26, "event": "table_file_deletion", "file_number": 45}
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:46:18.170283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:46:18.315863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:46:18.315870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:46:18.315873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:46:18.315876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:46:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:46:18.315879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:46:18 np0005539564 kernel: tap6a7b5e5f-28 (unregistering): left promiscuous mode
Nov 29 02:46:18 np0005539564 NetworkManager[48997]: <info>  [1764402378.4959] device (tap6a7b5e5f-28): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:46:18 np0005539564 nova_compute[226295]: 2025-11-29 07:46:18.502 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:18 np0005539564 ovn_controller[130591]: 2025-11-29T07:46:18Z|00057|binding|INFO|Releasing lport 6a7b5e5f-28d1-488d-8877-073bc5493a93 from this chassis (sb_readonly=0)
Nov 29 02:46:18 np0005539564 ovn_controller[130591]: 2025-11-29T07:46:18Z|00058|binding|INFO|Setting lport 6a7b5e5f-28d1-488d-8877-073bc5493a93 down in Southbound
Nov 29 02:46:18 np0005539564 ovn_controller[130591]: 2025-11-29T07:46:18Z|00059|binding|INFO|Removing iface tap6a7b5e5f-28 ovn-installed in OVS
Nov 29 02:46:18 np0005539564 nova_compute[226295]: 2025-11-29 07:46:18.507 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:18.514 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:59:71 10.100.0.5'], port_security=['fa:16:3e:66:59:71 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '22e7eaf7-aac4-4748-90df-04afd0ea7376', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96ea84545e71401fb69d21be6e2472f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b094dd4e-cb76-48e4-81b4-a11d19d5f956', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=baaadbdd-7935-4514-9332-391647ab6336, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=6a7b5e5f-28d1-488d-8877-073bc5493a93) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:46:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:18.516 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 6a7b5e5f-28d1-488d-8877-073bc5493a93 in datapath 788595a6-8f3f-45f7-807d-f88c9bf0e050 unbound from our chassis#033[00m
Nov 29 02:46:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:18.518 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 788595a6-8f3f-45f7-807d-f88c9bf0e050#033[00m
Nov 29 02:46:18 np0005539564 nova_compute[226295]: 2025-11-29 07:46:18.538 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:18.541 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1a6e0b7c-189a-4851-b155-3ba336b9c22e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:46:18 np0005539564 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 29 02:46:18 np0005539564 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000009.scope: Consumed 16.438s CPU time.
Nov 29 02:46:18 np0005539564 systemd-machined[190128]: Machine qemu-3-instance-00000009 terminated.
Nov 29 02:46:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:18.586 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[540657c0-2f7c-46bf-bebb-89e53cab66de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:46:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:18.590 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[16ac505c-1445-4854-87f3-0593dfd119d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:46:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:18.622 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[d09e6e70-e9b2-4021-accd-18f796e76cca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:46:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:18.640 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ed22022c-dd21-4ed7-9ea8-a456befe1f97]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap788595a6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:52:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530136, 'reachable_time': 17929, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 233369, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:46:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:18.667 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[018e352e-f747-4471-be5f-5141049b8730]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap788595a6-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530148, 'tstamp': 530148}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 233370, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap788595a6-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530152, 'tstamp': 530152}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 233370, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:46:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:18.670 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap788595a6-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:46:18 np0005539564 nova_compute[226295]: 2025-11-29 07:46:18.672 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:18 np0005539564 nova_compute[226295]: 2025-11-29 07:46:18.678 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:18.680 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap788595a6-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:46:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:18.681 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:46:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:18.681 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap788595a6-80, col_values=(('external_ids', {'iface-id': '4a1365a2-9549-4214-ba8d-c7bb361501a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:46:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:18.682 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:46:18 np0005539564 nova_compute[226295]: 2025-11-29 07:46:18.809 226310 DEBUG nova.compute.manager [req-91496977-c5e6-4861-b71b-d25d097f6c00 req-3adc34d2-a052-4d29-b0a3-46601c802b9c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Received event network-vif-unplugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:46:18 np0005539564 nova_compute[226295]: 2025-11-29 07:46:18.810 226310 DEBUG oslo_concurrency.lockutils [req-91496977-c5e6-4861-b71b-d25d097f6c00 req-3adc34d2-a052-4d29-b0a3-46601c802b9c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:18 np0005539564 nova_compute[226295]: 2025-11-29 07:46:18.811 226310 DEBUG oslo_concurrency.lockutils [req-91496977-c5e6-4861-b71b-d25d097f6c00 req-3adc34d2-a052-4d29-b0a3-46601c802b9c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:18 np0005539564 nova_compute[226295]: 2025-11-29 07:46:18.811 226310 DEBUG oslo_concurrency.lockutils [req-91496977-c5e6-4861-b71b-d25d097f6c00 req-3adc34d2-a052-4d29-b0a3-46601c802b9c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:18 np0005539564 nova_compute[226295]: 2025-11-29 07:46:18.812 226310 DEBUG nova.compute.manager [req-91496977-c5e6-4861-b71b-d25d097f6c00 req-3adc34d2-a052-4d29-b0a3-46601c802b9c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] No waiting events found dispatching network-vif-unplugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:46:18 np0005539564 nova_compute[226295]: 2025-11-29 07:46:18.812 226310 WARNING nova.compute.manager [req-91496977-c5e6-4861-b71b-d25d097f6c00 req-3adc34d2-a052-4d29-b0a3-46601c802b9c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Received unexpected event network-vif-unplugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 for instance with vm_state error and task_state rebuilding.#033[00m
Nov 29 02:46:19 np0005539564 nova_compute[226295]: 2025-11-29 07:46:19.185 226310 INFO nova.virt.libvirt.driver [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 02:46:19 np0005539564 nova_compute[226295]: 2025-11-29 07:46:19.193 226310 INFO nova.virt.libvirt.driver [-] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Instance destroyed successfully.#033[00m
Nov 29 02:46:19 np0005539564 nova_compute[226295]: 2025-11-29 07:46:19.202 226310 INFO nova.virt.libvirt.driver [-] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Instance destroyed successfully.#033[00m
Nov 29 02:46:19 np0005539564 nova_compute[226295]: 2025-11-29 07:46:19.203 226310 DEBUG nova.virt.libvirt.vif [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:44:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1240733035',display_name='tempest-ServersAdminTestJSON-server-1240733035',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1240733035',id=9,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:45:29Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='96ea84545e71401fb69d21be6e2472f7',ramdisk_id='',reservation_id='r-22jb6edg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1807764482',owner_user_name='tempest-ServersAdminTestJSON-1807764482-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:46:13Z,user_data=None,user_id='d94c707cca604d72a8e1d49b636095e1',uuid=22e7eaf7-aac4-4748-90df-04afd0ea7376,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "address": "fa:16:3e:66:59:71", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a7b5e5f-28", "ovs_interfaceid": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:46:19 np0005539564 nova_compute[226295]: 2025-11-29 07:46:19.204 226310 DEBUG nova.network.os_vif_util [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converting VIF {"id": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "address": "fa:16:3e:66:59:71", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a7b5e5f-28", "ovs_interfaceid": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:46:19 np0005539564 nova_compute[226295]: 2025-11-29 07:46:19.205 226310 DEBUG nova.network.os_vif_util [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:66:59:71,bridge_name='br-int',has_traffic_filtering=True,id=6a7b5e5f-28d1-488d-8877-073bc5493a93,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a7b5e5f-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:46:19 np0005539564 nova_compute[226295]: 2025-11-29 07:46:19.206 226310 DEBUG os_vif [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:59:71,bridge_name='br-int',has_traffic_filtering=True,id=6a7b5e5f-28d1-488d-8877-073bc5493a93,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a7b5e5f-28') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:46:19 np0005539564 nova_compute[226295]: 2025-11-29 07:46:19.209 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:19 np0005539564 nova_compute[226295]: 2025-11-29 07:46:19.210 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a7b5e5f-28, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:46:19 np0005539564 nova_compute[226295]: 2025-11-29 07:46:19.211 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:19 np0005539564 nova_compute[226295]: 2025-11-29 07:46:19.214 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:19 np0005539564 nova_compute[226295]: 2025-11-29 07:46:19.217 226310 INFO os_vif [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:59:71,bridge_name='br-int',has_traffic_filtering=True,id=6a7b5e5f-28d1-488d-8877-073bc5493a93,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a7b5e5f-28')#033[00m
Nov 29 02:46:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:19.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:19.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:19 np0005539564 nova_compute[226295]: 2025-11-29 07:46:19.807 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:20 np0005539564 nova_compute[226295]: 2025-11-29 07:46:20.950 226310 DEBUG nova.compute.manager [req-145490f4-e30c-4da3-b7e8-9ad2ebadb94d req-1940e00c-8a90-43df-ac56-0d090e32b931 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Received event network-vif-plugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:46:20 np0005539564 nova_compute[226295]: 2025-11-29 07:46:20.951 226310 DEBUG oslo_concurrency.lockutils [req-145490f4-e30c-4da3-b7e8-9ad2ebadb94d req-1940e00c-8a90-43df-ac56-0d090e32b931 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:20 np0005539564 nova_compute[226295]: 2025-11-29 07:46:20.951 226310 DEBUG oslo_concurrency.lockutils [req-145490f4-e30c-4da3-b7e8-9ad2ebadb94d req-1940e00c-8a90-43df-ac56-0d090e32b931 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:20 np0005539564 nova_compute[226295]: 2025-11-29 07:46:20.952 226310 DEBUG oslo_concurrency.lockutils [req-145490f4-e30c-4da3-b7e8-9ad2ebadb94d req-1940e00c-8a90-43df-ac56-0d090e32b931 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:20 np0005539564 nova_compute[226295]: 2025-11-29 07:46:20.952 226310 DEBUG nova.compute.manager [req-145490f4-e30c-4da3-b7e8-9ad2ebadb94d req-1940e00c-8a90-43df-ac56-0d090e32b931 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] No waiting events found dispatching network-vif-plugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:46:20 np0005539564 nova_compute[226295]: 2025-11-29 07:46:20.953 226310 WARNING nova.compute.manager [req-145490f4-e30c-4da3-b7e8-9ad2ebadb94d req-1940e00c-8a90-43df-ac56-0d090e32b931 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Received unexpected event network-vif-plugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 for instance with vm_state error and task_state rebuilding.#033[00m
Nov 29 02:46:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:21.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:21.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:21 np0005539564 nova_compute[226295]: 2025-11-29 07:46:21.569 226310 INFO nova.virt.libvirt.driver [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Deleting instance files /var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376_del#033[00m
Nov 29 02:46:21 np0005539564 nova_compute[226295]: 2025-11-29 07:46:21.571 226310 INFO nova.virt.libvirt.driver [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Deletion of /var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376_del complete#033[00m
Nov 29 02:46:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:22 np0005539564 nova_compute[226295]: 2025-11-29 07:46:22.000 226310 DEBUG nova.virt.libvirt.driver [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:46:22 np0005539564 nova_compute[226295]: 2025-11-29 07:46:22.001 226310 INFO nova.virt.libvirt.driver [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Creating image(s)#033[00m
Nov 29 02:46:22 np0005539564 nova_compute[226295]: 2025-11-29 07:46:22.040 226310 DEBUG nova.storage.rbd_utils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:46:22 np0005539564 nova_compute[226295]: 2025-11-29 07:46:22.080 226310 DEBUG nova.storage.rbd_utils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:46:22 np0005539564 nova_compute[226295]: 2025-11-29 07:46:22.115 226310 DEBUG nova.storage.rbd_utils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:46:22 np0005539564 nova_compute[226295]: 2025-11-29 07:46:22.118 226310 DEBUG oslo_concurrency.lockutils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:22 np0005539564 nova_compute[226295]: 2025-11-29 07:46:22.119 226310 DEBUG oslo_concurrency.lockutils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:22 np0005539564 nova_compute[226295]: 2025-11-29 07:46:22.531 226310 DEBUG nova.virt.libvirt.imagebackend [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Image locations are: [{'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/ed489666-5fa2-4ea4-8005-7a7505ac1b78/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/ed489666-5fa2-4ea4-8005-7a7505ac1b78/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 02:46:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:23.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:23.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:23 np0005539564 nova_compute[226295]: 2025-11-29 07:46:23.574 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:23 np0005539564 nova_compute[226295]: 2025-11-29 07:46:23.575 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:23 np0005539564 nova_compute[226295]: 2025-11-29 07:46:23.575 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:46:23 np0005539564 nova_compute[226295]: 2025-11-29 07:46:23.575 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:46:23 np0005539564 nova_compute[226295]: 2025-11-29 07:46:23.835 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-15412333-68f0-43a8-b114-af3dde30be2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:46:23 np0005539564 nova_compute[226295]: 2025-11-29 07:46:23.835 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-15412333-68f0-43a8-b114-af3dde30be2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:46:23 np0005539564 nova_compute[226295]: 2025-11-29 07:46:23.835 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:46:23 np0005539564 nova_compute[226295]: 2025-11-29 07:46:23.835 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 15412333-68f0-43a8-b114-af3dde30be2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:46:23 np0005539564 nova_compute[226295]: 2025-11-29 07:46:23.966 226310 DEBUG oslo_concurrency.processutils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:46:24 np0005539564 nova_compute[226295]: 2025-11-29 07:46:24.056 226310 DEBUG oslo_concurrency.processutils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242.part --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:46:24 np0005539564 nova_compute[226295]: 2025-11-29 07:46:24.057 226310 DEBUG nova.virt.images [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] ed489666-5fa2-4ea4-8005-7a7505ac1b78 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Nov 29 02:46:24 np0005539564 nova_compute[226295]: 2025-11-29 07:46:24.058 226310 DEBUG nova.privsep.utils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 29 02:46:24 np0005539564 nova_compute[226295]: 2025-11-29 07:46:24.058 226310 DEBUG oslo_concurrency.processutils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242.part /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:46:24 np0005539564 nova_compute[226295]: 2025-11-29 07:46:24.212 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:24 np0005539564 nova_compute[226295]: 2025-11-29 07:46:24.614 226310 DEBUG oslo_concurrency.processutils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242.part /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242.converted" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:46:24 np0005539564 nova_compute[226295]: 2025-11-29 07:46:24.624 226310 DEBUG oslo_concurrency.processutils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:46:24 np0005539564 nova_compute[226295]: 2025-11-29 07:46:24.682 226310 DEBUG oslo_concurrency.processutils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242.converted --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:46:24 np0005539564 nova_compute[226295]: 2025-11-29 07:46:24.683 226310 DEBUG oslo_concurrency.lockutils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:24 np0005539564 nova_compute[226295]: 2025-11-29 07:46:24.711 226310 DEBUG nova.storage.rbd_utils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:46:24 np0005539564 nova_compute[226295]: 2025-11-29 07:46:24.715 226310 DEBUG oslo_concurrency.processutils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:46:24 np0005539564 nova_compute[226295]: 2025-11-29 07:46:24.811 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:25.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:25.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.653 226310 DEBUG oslo_concurrency.processutils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.937s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.690 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Updating instance_info_cache with network_info: [{"id": "04aab360-dfb8-4a22-be37-eeec9d964e92", "address": "fa:16:3e:66:93:b2", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap04aab360-df", "ovs_interfaceid": "04aab360-dfb8-4a22-be37-eeec9d964e92", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.735 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-15412333-68f0-43a8-b114-af3dde30be2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.735 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.736 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.736 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.736 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.737 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.737 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.737 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.737 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.738 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.738 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.743 226310 DEBUG nova.storage.rbd_utils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] resizing rbd image 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.895 226310 DEBUG nova.virt.libvirt.driver [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.896 226310 DEBUG nova.virt.libvirt.driver [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Ensure instance console log exists: /var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.897 226310 DEBUG oslo_concurrency.lockutils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.897 226310 DEBUG oslo_concurrency.lockutils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.898 226310 DEBUG oslo_concurrency.lockutils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.901 226310 DEBUG nova.virt.libvirt.driver [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Start _get_guest_xml network_info=[{"id": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "address": "fa:16:3e:66:59:71", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a7b5e5f-28", "ovs_interfaceid": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:40:38Z,direct_url=<?>,disk_format='qcow2',id=ed489666-5fa2-4ea4-8005-7a7505ac1b78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.906 226310 WARNING nova.virt.libvirt.driver [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.912 226310 DEBUG nova.virt.libvirt.host [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.913 226310 DEBUG nova.virt.libvirt.host [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.919 226310 DEBUG nova.virt.libvirt.host [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.920 226310 DEBUG nova.virt.libvirt.host [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.922 226310 DEBUG nova.virt.libvirt.driver [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.923 226310 DEBUG nova.virt.hardware [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:40:38Z,direct_url=<?>,disk_format='qcow2',id=ed489666-5fa2-4ea4-8005-7a7505ac1b78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.924 226310 DEBUG nova.virt.hardware [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.924 226310 DEBUG nova.virt.hardware [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.924 226310 DEBUG nova.virt.hardware [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.924 226310 DEBUG nova.virt.hardware [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.924 226310 DEBUG nova.virt.hardware [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.925 226310 DEBUG nova.virt.hardware [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.925 226310 DEBUG nova.virt.hardware [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.925 226310 DEBUG nova.virt.hardware [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.925 226310 DEBUG nova.virt.hardware [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.925 226310 DEBUG nova.virt.hardware [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.926 226310 DEBUG nova.objects.instance [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 22e7eaf7-aac4-4748-90df-04afd0ea7376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:46:25 np0005539564 nova_compute[226295]: 2025-11-29 07:46:25.948 226310 DEBUG oslo_concurrency.processutils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.382 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:46:26 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1884400528' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.431 226310 DEBUG oslo_concurrency.processutils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.465 226310 DEBUG nova.storage.rbd_utils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.469 226310 DEBUG oslo_concurrency.processutils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.490 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.490 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.491 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.491 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.491 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:46:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:46:26 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2635811855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:46:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:46:26 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/970400620' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.921 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.943 226310 DEBUG oslo_concurrency.processutils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.944 226310 DEBUG nova.virt.libvirt.vif [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:44:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1240733035',display_name='tempest-ServersAdminTestJSON-server-1240733035',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1240733035',id=9,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:45:29Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='96ea84545e71401fb69d21be6e2472f7',ramdisk_id='',reservation_id='r-22jb6edg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1807764482',owner_user_name='tempest-ServersAdminT
estJSON-1807764482-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:46:21Z,user_data=None,user_id='d94c707cca604d72a8e1d49b636095e1',uuid=22e7eaf7-aac4-4748-90df-04afd0ea7376,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "address": "fa:16:3e:66:59:71", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a7b5e5f-28", "ovs_interfaceid": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.945 226310 DEBUG nova.network.os_vif_util [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converting VIF {"id": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "address": "fa:16:3e:66:59:71", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a7b5e5f-28", "ovs_interfaceid": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.945 226310 DEBUG nova.network.os_vif_util [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:66:59:71,bridge_name='br-int',has_traffic_filtering=True,id=6a7b5e5f-28d1-488d-8877-073bc5493a93,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a7b5e5f-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.948 226310 DEBUG nova.virt.libvirt.driver [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:46:26 np0005539564 nova_compute[226295]:  <uuid>22e7eaf7-aac4-4748-90df-04afd0ea7376</uuid>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:  <name>instance-00000009</name>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServersAdminTestJSON-server-1240733035</nova:name>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 07:46:25</nova:creationTime>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 02:46:26 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:        <nova:user uuid="d94c707cca604d72a8e1d49b636095e1">tempest-ServersAdminTestJSON-1807764482-project-member</nova:user>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:        <nova:project uuid="96ea84545e71401fb69d21be6e2472f7">tempest-ServersAdminTestJSON-1807764482</nova:project>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="ed489666-5fa2-4ea4-8005-7a7505ac1b78"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:        <nova:port uuid="6a7b5e5f-28d1-488d-8877-073bc5493a93">
Nov 29 02:46:26 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <system>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <entry name="serial">22e7eaf7-aac4-4748-90df-04afd0ea7376</entry>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <entry name="uuid">22e7eaf7-aac4-4748-90df-04afd0ea7376</entry>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    </system>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:  <os>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:  </clock>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/22e7eaf7-aac4-4748-90df-04afd0ea7376_disk">
Nov 29 02:46:26 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:46:26 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/22e7eaf7-aac4-4748-90df-04afd0ea7376_disk.config">
Nov 29 02:46:26 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:46:26 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:66:59:71"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <target dev="tap6a7b5e5f-28"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    </interface>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376/console.log" append="off"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    </serial>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <video>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 02:46:26 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 02:46:26 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:46:26 np0005539564 nova_compute[226295]: </domain>
Nov 29 02:46:26 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.949 226310 DEBUG nova.virt.libvirt.vif [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:44:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1240733035',display_name='tempest-ServersAdminTestJSON-server-1240733035',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1240733035',id=9,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:45:29Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='96ea84545e71401fb69d21be6e2472f7',ramdisk_id='',reservation_id='r-22jb6edg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1807764482',owner_user_name='tempest-ServersAdminTestJSON-1807764482-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:46:21Z,user_data=None,user_id='d94c707cca604d72a8e1d49b636095e1',uuid=22e7eaf7-aac4-4748-90df-04afd0ea7376,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "address": "fa:16:3e:66:59:71", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a7b5e5f-28", "ovs_interfaceid": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.949 226310 DEBUG nova.network.os_vif_util [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converting VIF {"id": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "address": "fa:16:3e:66:59:71", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a7b5e5f-28", "ovs_interfaceid": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.950 226310 DEBUG nova.network.os_vif_util [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:66:59:71,bridge_name='br-int',has_traffic_filtering=True,id=6a7b5e5f-28d1-488d-8877-073bc5493a93,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a7b5e5f-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.950 226310 DEBUG os_vif [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:59:71,bridge_name='br-int',has_traffic_filtering=True,id=6a7b5e5f-28d1-488d-8877-073bc5493a93,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a7b5e5f-28') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.950 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.951 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.951 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.956 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.956 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a7b5e5f-28, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.956 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6a7b5e5f-28, col_values=(('external_ids', {'iface-id': '6a7b5e5f-28d1-488d-8877-073bc5493a93', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:59:71', 'vm-uuid': '22e7eaf7-aac4-4748-90df-04afd0ea7376'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.957 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:26 np0005539564 NetworkManager[48997]: <info>  [1764402386.9586] manager: (tap6a7b5e5f-28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.960 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.961 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:26 np0005539564 nova_compute[226295]: 2025-11-29 07:46:26.962 226310 INFO os_vif [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:59:71,bridge_name='br-int',has_traffic_filtering=True,id=6a7b5e5f-28d1-488d-8877-073bc5493a93,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a7b5e5f-28')#033[00m
Nov 29 02:46:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:27.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:27.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:27 np0005539564 nova_compute[226295]: 2025-11-29 07:46:27.467 226310 DEBUG nova.virt.libvirt.driver [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:46:27 np0005539564 nova_compute[226295]: 2025-11-29 07:46:27.468 226310 DEBUG nova.virt.libvirt.driver [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:46:27 np0005539564 nova_compute[226295]: 2025-11-29 07:46:27.468 226310 DEBUG nova.virt.libvirt.driver [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] No VIF found with MAC fa:16:3e:66:59:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:46:27 np0005539564 nova_compute[226295]: 2025-11-29 07:46:27.469 226310 INFO nova.virt.libvirt.driver [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Using config drive#033[00m
Nov 29 02:46:27 np0005539564 nova_compute[226295]: 2025-11-29 07:46:27.500 226310 DEBUG nova.storage.rbd_utils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:46:27 np0005539564 nova_compute[226295]: 2025-11-29 07:46:27.508 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:46:27 np0005539564 nova_compute[226295]: 2025-11-29 07:46:27.508 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:46:27 np0005539564 nova_compute[226295]: 2025-11-29 07:46:27.676 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:46:27 np0005539564 nova_compute[226295]: 2025-11-29 07:46:27.677 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4682MB free_disk=20.85177993774414GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:46:27 np0005539564 nova_compute[226295]: 2025-11-29 07:46:27.678 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:27 np0005539564 nova_compute[226295]: 2025-11-29 07:46:27.678 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:27 np0005539564 nova_compute[226295]: 2025-11-29 07:46:27.702 226310 DEBUG nova.objects.instance [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 22e7eaf7-aac4-4748-90df-04afd0ea7376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:46:27 np0005539564 nova_compute[226295]: 2025-11-29 07:46:27.919 226310 DEBUG nova.objects.instance [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'keypairs' on Instance uuid 22e7eaf7-aac4-4748-90df-04afd0ea7376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:46:28 np0005539564 nova_compute[226295]: 2025-11-29 07:46:28.139 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 22e7eaf7-aac4-4748-90df-04afd0ea7376 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:46:28 np0005539564 nova_compute[226295]: 2025-11-29 07:46:28.140 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 15412333-68f0-43a8-b114-af3dde30be2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:46:28 np0005539564 nova_compute[226295]: 2025-11-29 07:46:28.140 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:46:28 np0005539564 nova_compute[226295]: 2025-11-29 07:46:28.142 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:46:28 np0005539564 nova_compute[226295]: 2025-11-29 07:46:28.446 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:46:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:46:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/182931077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:46:28 np0005539564 nova_compute[226295]: 2025-11-29 07:46:28.920 226310 INFO nova.virt.libvirt.driver [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Creating config drive at /var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376/disk.config#033[00m
Nov 29 02:46:28 np0005539564 nova_compute[226295]: 2025-11-29 07:46:28.925 226310 DEBUG oslo_concurrency.processutils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0sajqa5s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:46:28 np0005539564 nova_compute[226295]: 2025-11-29 07:46:28.943 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:46:28 np0005539564 nova_compute[226295]: 2025-11-29 07:46:28.967 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:46:29 np0005539564 nova_compute[226295]: 2025-11-29 07:46:29.053 226310 DEBUG oslo_concurrency.processutils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0sajqa5s" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:46:29 np0005539564 nova_compute[226295]: 2025-11-29 07:46:29.114 226310 DEBUG nova.storage.rbd_utils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:46:29 np0005539564 nova_compute[226295]: 2025-11-29 07:46:29.120 226310 DEBUG oslo_concurrency.processutils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376/disk.config 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:46:29 np0005539564 nova_compute[226295]: 2025-11-29 07:46:29.153 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:46:29 np0005539564 nova_compute[226295]: 2025-11-29 07:46:29.189 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:46:29 np0005539564 nova_compute[226295]: 2025-11-29 07:46:29.190 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.512s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:29 np0005539564 nova_compute[226295]: 2025-11-29 07:46:29.191 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:29 np0005539564 nova_compute[226295]: 2025-11-29 07:46:29.191 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 02:46:29 np0005539564 nova_compute[226295]: 2025-11-29 07:46:29.212 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 02:46:29 np0005539564 nova_compute[226295]: 2025-11-29 07:46:29.212 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:29.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:29.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:29 np0005539564 nova_compute[226295]: 2025-11-29 07:46:29.857 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:30 np0005539564 nova_compute[226295]: 2025-11-29 07:46:30.356 226310 DEBUG oslo_concurrency.processutils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376/disk.config 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.236s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:46:30 np0005539564 nova_compute[226295]: 2025-11-29 07:46:30.356 226310 INFO nova.virt.libvirt.driver [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Deleting local config drive /var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376/disk.config because it was imported into RBD.#033[00m
Nov 29 02:46:30 np0005539564 kernel: tap6a7b5e5f-28: entered promiscuous mode
Nov 29 02:46:30 np0005539564 NetworkManager[48997]: <info>  [1764402390.4096] manager: (tap6a7b5e5f-28): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Nov 29 02:46:30 np0005539564 ovn_controller[130591]: 2025-11-29T07:46:30Z|00060|binding|INFO|Claiming lport 6a7b5e5f-28d1-488d-8877-073bc5493a93 for this chassis.
Nov 29 02:46:30 np0005539564 ovn_controller[130591]: 2025-11-29T07:46:30Z|00061|binding|INFO|6a7b5e5f-28d1-488d-8877-073bc5493a93: Claiming fa:16:3e:66:59:71 10.100.0.5
Nov 29 02:46:30 np0005539564 nova_compute[226295]: 2025-11-29 07:46:30.409 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:30 np0005539564 ovn_controller[130591]: 2025-11-29T07:46:30Z|00062|binding|INFO|Setting lport 6a7b5e5f-28d1-488d-8877-073bc5493a93 ovn-installed in OVS
Nov 29 02:46:30 np0005539564 nova_compute[226295]: 2025-11-29 07:46:30.428 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:30 np0005539564 nova_compute[226295]: 2025-11-29 07:46:30.430 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:30 np0005539564 systemd-machined[190128]: New machine qemu-5-instance-00000009.
Nov 29 02:46:30 np0005539564 systemd-udevd[233758]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:46:30 np0005539564 systemd[1]: Started Virtual Machine qemu-5-instance-00000009.
Nov 29 02:46:30 np0005539564 NetworkManager[48997]: <info>  [1764402390.4472] device (tap6a7b5e5f-28): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:46:30 np0005539564 NetworkManager[48997]: <info>  [1764402390.4483] device (tap6a7b5e5f-28): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:46:30 np0005539564 ovn_controller[130591]: 2025-11-29T07:46:30Z|00063|binding|INFO|Setting lport 6a7b5e5f-28d1-488d-8877-073bc5493a93 up in Southbound
Nov 29 02:46:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:30.502 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:59:71 10.100.0.5'], port_security=['fa:16:3e:66:59:71 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '22e7eaf7-aac4-4748-90df-04afd0ea7376', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96ea84545e71401fb69d21be6e2472f7', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'b094dd4e-cb76-48e4-81b4-a11d19d5f956', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=baaadbdd-7935-4514-9332-391647ab6336, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=6a7b5e5f-28d1-488d-8877-073bc5493a93) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:46:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:30.503 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 6a7b5e5f-28d1-488d-8877-073bc5493a93 in datapath 788595a6-8f3f-45f7-807d-f88c9bf0e050 bound to our chassis#033[00m
Nov 29 02:46:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:30.504 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 788595a6-8f3f-45f7-807d-f88c9bf0e050#033[00m
Nov 29 02:46:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:30.519 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b34bbc1e-49d2-47b2-88f7-1088a97def16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:46:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:30.558 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[63e2fe48-ba19-4895-8dcd-ad699cab9720]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:46:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:30.561 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[c74844fa-54dd-45ff-a98b-a10c6f81cac0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:46:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:30.605 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[c45fc49a-a861-4e9d-b748-afcfe49085c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:46:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:30.630 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[70d7a1e5-b6e9-4e50-a467-26159df4f097]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap788595a6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:52:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530136, 'reachable_time': 17929, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 233772, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:46:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:30.654 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[916b95ae-fea7-4419-91c7-ab8859c5eeed]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap788595a6-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530148, 'tstamp': 530148}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 233773, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap788595a6-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530152, 'tstamp': 530152}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 233773, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:46:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:30.655 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap788595a6-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:46:30 np0005539564 nova_compute[226295]: 2025-11-29 07:46:30.657 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:30 np0005539564 nova_compute[226295]: 2025-11-29 07:46:30.658 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:30.659 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap788595a6-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:46:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:30.659 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:46:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:30.660 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap788595a6-80, col_values=(('external_ids', {'iface-id': '4a1365a2-9549-4214-ba8d-c7bb361501a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:46:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:46:30.660 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:46:31 np0005539564 nova_compute[226295]: 2025-11-29 07:46:31.015 226310 DEBUG nova.compute.manager [req-4a17354f-ce95-40ce-8014-f6fb4f9b65bf req-f68a371b-8a9e-47d6-992e-5255cb56b2eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Received event network-vif-plugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:46:31 np0005539564 nova_compute[226295]: 2025-11-29 07:46:31.016 226310 DEBUG oslo_concurrency.lockutils [req-4a17354f-ce95-40ce-8014-f6fb4f9b65bf req-f68a371b-8a9e-47d6-992e-5255cb56b2eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:31 np0005539564 nova_compute[226295]: 2025-11-29 07:46:31.016 226310 DEBUG oslo_concurrency.lockutils [req-4a17354f-ce95-40ce-8014-f6fb4f9b65bf req-f68a371b-8a9e-47d6-992e-5255cb56b2eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:31 np0005539564 nova_compute[226295]: 2025-11-29 07:46:31.016 226310 DEBUG oslo_concurrency.lockutils [req-4a17354f-ce95-40ce-8014-f6fb4f9b65bf req-f68a371b-8a9e-47d6-992e-5255cb56b2eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:31 np0005539564 nova_compute[226295]: 2025-11-29 07:46:31.017 226310 DEBUG nova.compute.manager [req-4a17354f-ce95-40ce-8014-f6fb4f9b65bf req-f68a371b-8a9e-47d6-992e-5255cb56b2eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] No waiting events found dispatching network-vif-plugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:46:31 np0005539564 nova_compute[226295]: 2025-11-29 07:46:31.017 226310 WARNING nova.compute.manager [req-4a17354f-ce95-40ce-8014-f6fb4f9b65bf req-f68a371b-8a9e-47d6-992e-5255cb56b2eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Received unexpected event network-vif-plugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 for instance with vm_state error and task_state rebuild_spawning.#033[00m
Nov 29 02:46:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:31.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:31.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:31 np0005539564 nova_compute[226295]: 2025-11-29 07:46:31.516 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Removed pending event for 22e7eaf7-aac4-4748-90df-04afd0ea7376 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 02:46:31 np0005539564 nova_compute[226295]: 2025-11-29 07:46:31.517 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402391.5158496, 22e7eaf7-aac4-4748-90df-04afd0ea7376 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:46:31 np0005539564 nova_compute[226295]: 2025-11-29 07:46:31.517 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:46:31 np0005539564 nova_compute[226295]: 2025-11-29 07:46:31.520 226310 DEBUG nova.compute.manager [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:46:31 np0005539564 nova_compute[226295]: 2025-11-29 07:46:31.521 226310 DEBUG nova.virt.libvirt.driver [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:46:31 np0005539564 nova_compute[226295]: 2025-11-29 07:46:31.525 226310 INFO nova.virt.libvirt.driver [-] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Instance spawned successfully.#033[00m
Nov 29 02:46:31 np0005539564 nova_compute[226295]: 2025-11-29 07:46:31.527 226310 DEBUG nova.virt.libvirt.driver [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:46:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:31 np0005539564 nova_compute[226295]: 2025-11-29 07:46:31.658 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:46:31 np0005539564 nova_compute[226295]: 2025-11-29 07:46:31.661 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:46:31 np0005539564 nova_compute[226295]: 2025-11-29 07:46:31.913 226310 DEBUG nova.virt.libvirt.driver [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:46:31 np0005539564 nova_compute[226295]: 2025-11-29 07:46:31.913 226310 DEBUG nova.virt.libvirt.driver [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:46:31 np0005539564 nova_compute[226295]: 2025-11-29 07:46:31.914 226310 DEBUG nova.virt.libvirt.driver [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:46:31 np0005539564 nova_compute[226295]: 2025-11-29 07:46:31.915 226310 DEBUG nova.virt.libvirt.driver [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:46:31 np0005539564 nova_compute[226295]: 2025-11-29 07:46:31.916 226310 DEBUG nova.virt.libvirt.driver [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:46:31 np0005539564 nova_compute[226295]: 2025-11-29 07:46:31.917 226310 DEBUG nova.virt.libvirt.driver [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:46:31 np0005539564 nova_compute[226295]: 2025-11-29 07:46:31.940 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 02:46:31 np0005539564 nova_compute[226295]: 2025-11-29 07:46:31.940 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402391.5174232, 22e7eaf7-aac4-4748-90df-04afd0ea7376 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:46:31 np0005539564 nova_compute[226295]: 2025-11-29 07:46:31.941 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] VM Started (Lifecycle Event)#033[00m
Nov 29 02:46:31 np0005539564 nova_compute[226295]: 2025-11-29 07:46:31.958 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:32 np0005539564 nova_compute[226295]: 2025-11-29 07:46:32.082 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:46:32 np0005539564 nova_compute[226295]: 2025-11-29 07:46:32.088 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Synchronizing instance power state after lifecycle event "Started"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:46:32 np0005539564 nova_compute[226295]: 2025-11-29 07:46:32.130 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 02:46:32 np0005539564 nova_compute[226295]: 2025-11-29 07:46:32.171 226310 DEBUG nova.compute.manager [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:46:32 np0005539564 nova_compute[226295]: 2025-11-29 07:46:32.279 226310 DEBUG oslo_concurrency.lockutils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:32 np0005539564 nova_compute[226295]: 2025-11-29 07:46:32.280 226310 DEBUG oslo_concurrency.lockutils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:32 np0005539564 nova_compute[226295]: 2025-11-29 07:46:32.280 226310 DEBUG nova.objects.instance [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 02:46:32 np0005539564 nova_compute[226295]: 2025-11-29 07:46:32.380 226310 DEBUG oslo_concurrency.lockutils [None req-20085107-c738-4292-b684-9f4e74c449f3 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:33 np0005539564 nova_compute[226295]: 2025-11-29 07:46:33.134 226310 DEBUG nova.compute.manager [req-af74c412-7e32-40ec-9756-68f7f3adce01 req-5f310f70-720a-4542-bf55-fc8ab40583cf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Received event network-vif-plugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:46:33 np0005539564 nova_compute[226295]: 2025-11-29 07:46:33.135 226310 DEBUG oslo_concurrency.lockutils [req-af74c412-7e32-40ec-9756-68f7f3adce01 req-5f310f70-720a-4542-bf55-fc8ab40583cf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:33 np0005539564 nova_compute[226295]: 2025-11-29 07:46:33.135 226310 DEBUG oslo_concurrency.lockutils [req-af74c412-7e32-40ec-9756-68f7f3adce01 req-5f310f70-720a-4542-bf55-fc8ab40583cf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:33 np0005539564 nova_compute[226295]: 2025-11-29 07:46:33.136 226310 DEBUG oslo_concurrency.lockutils [req-af74c412-7e32-40ec-9756-68f7f3adce01 req-5f310f70-720a-4542-bf55-fc8ab40583cf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:33 np0005539564 nova_compute[226295]: 2025-11-29 07:46:33.136 226310 DEBUG nova.compute.manager [req-af74c412-7e32-40ec-9756-68f7f3adce01 req-5f310f70-720a-4542-bf55-fc8ab40583cf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] No waiting events found dispatching network-vif-plugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:46:33 np0005539564 nova_compute[226295]: 2025-11-29 07:46:33.136 226310 WARNING nova.compute.manager [req-af74c412-7e32-40ec-9756-68f7f3adce01 req-5f310f70-720a-4542-bf55-fc8ab40583cf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Received unexpected event network-vif-plugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 for instance with vm_state active and task_state None.#033[00m
Nov 29 02:46:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:33.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:33.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:34 np0005539564 nova_compute[226295]: 2025-11-29 07:46:34.860 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:35.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:35.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:37 np0005539564 nova_compute[226295]: 2025-11-29 07:46:37.008 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:37 np0005539564 nova_compute[226295]: 2025-11-29 07:46:37.181 226310 INFO nova.compute.manager [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Rebuilding instance#033[00m
Nov 29 02:46:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:37.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:37.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:46:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:46:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:46:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:46:37 np0005539564 nova_compute[226295]: 2025-11-29 07:46:37.682 226310 DEBUG nova.objects.instance [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 22e7eaf7-aac4-4748-90df-04afd0ea7376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:46:37 np0005539564 nova_compute[226295]: 2025-11-29 07:46:37.700 226310 DEBUG nova.compute.manager [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:46:37 np0005539564 nova_compute[226295]: 2025-11-29 07:46:37.751 226310 DEBUG nova.objects.instance [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'pci_requests' on Instance uuid 22e7eaf7-aac4-4748-90df-04afd0ea7376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:46:37 np0005539564 nova_compute[226295]: 2025-11-29 07:46:37.764 226310 DEBUG nova.objects.instance [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 22e7eaf7-aac4-4748-90df-04afd0ea7376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:46:37 np0005539564 nova_compute[226295]: 2025-11-29 07:46:37.778 226310 DEBUG nova.objects.instance [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'resources' on Instance uuid 22e7eaf7-aac4-4748-90df-04afd0ea7376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:46:37 np0005539564 nova_compute[226295]: 2025-11-29 07:46:37.792 226310 DEBUG nova.objects.instance [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'migration_context' on Instance uuid 22e7eaf7-aac4-4748-90df-04afd0ea7376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:46:37 np0005539564 nova_compute[226295]: 2025-11-29 07:46:37.804 226310 DEBUG nova.objects.instance [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 02:46:37 np0005539564 nova_compute[226295]: 2025-11-29 07:46:37.807 226310 DEBUG nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 02:46:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:39.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:39.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:39 np0005539564 nova_compute[226295]: 2025-11-29 07:46:39.862 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:41.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:41.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:42 np0005539564 nova_compute[226295]: 2025-11-29 07:46:42.012 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:43.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:43.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:44 np0005539564 nova_compute[226295]: 2025-11-29 07:46:44.899 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:45.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:45.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:45 np0005539564 nova_compute[226295]: 2025-11-29 07:46:45.351 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:45 np0005539564 nova_compute[226295]: 2025-11-29 07:46:45.380 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Triggering sync for uuid 15412333-68f0-43a8-b114-af3dde30be2a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 02:46:45 np0005539564 nova_compute[226295]: 2025-11-29 07:46:45.381 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Triggering sync for uuid 22e7eaf7-aac4-4748-90df-04afd0ea7376 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 02:46:45 np0005539564 nova_compute[226295]: 2025-11-29 07:46:45.381 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "15412333-68f0-43a8-b114-af3dde30be2a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:45 np0005539564 nova_compute[226295]: 2025-11-29 07:46:45.382 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "15412333-68f0-43a8-b114-af3dde30be2a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:45 np0005539564 nova_compute[226295]: 2025-11-29 07:46:45.382 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "22e7eaf7-aac4-4748-90df-04afd0ea7376" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:45 np0005539564 nova_compute[226295]: 2025-11-29 07:46:45.382 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:45 np0005539564 nova_compute[226295]: 2025-11-29 07:46:45.383 226310 INFO nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] During sync_power_state the instance has a pending task (rebuilding). Skip.#033[00m
Nov 29 02:46:45 np0005539564 nova_compute[226295]: 2025-11-29 07:46:45.383 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:45 np0005539564 nova_compute[226295]: 2025-11-29 07:46:45.429 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "15412333-68f0-43a8-b114-af3dde30be2a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:46 np0005539564 podman[233950]: 2025-11-29 07:46:46.551164706 +0000 UTC m=+0.090543285 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 02:46:46 np0005539564 podman[233951]: 2025-11-29 07:46:46.557013335 +0000 UTC m=+0.089258071 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 02:46:46 np0005539564 podman[233949]: 2025-11-29 07:46:46.601787339 +0000 UTC m=+0.148712433 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:46:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:47 np0005539564 nova_compute[226295]: 2025-11-29 07:46:47.019 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:47.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:47.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:47 np0005539564 nova_compute[226295]: 2025-11-29 07:46:47.861 226310 DEBUG nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 02:46:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:49.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:49.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:49 np0005539564 nova_compute[226295]: 2025-11-29 07:46:49.902 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:51.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:51.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:51 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:52 np0005539564 nova_compute[226295]: 2025-11-29 07:46:52.021 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:52 np0005539564 ovn_controller[130591]: 2025-11-29T07:46:52Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:66:59:71 10.100.0.5
Nov 29 02:46:52 np0005539564 ovn_controller[130591]: 2025-11-29T07:46:52Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:66:59:71 10.100.0.5
Nov 29 02:46:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:53.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:53.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:54 np0005539564 nova_compute[226295]: 2025-11-29 07:46:54.904 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:55.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:55.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:56 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:57 np0005539564 nova_compute[226295]: 2025-11-29 07:46:57.023 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:57.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:57.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:58 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:46:58 np0005539564 nova_compute[226295]: 2025-11-29 07:46:58.916 226310 DEBUG nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 02:46:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:59.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:46:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:59.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:59 np0005539564 nova_compute[226295]: 2025-11-29 07:46:59.914 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:00 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:47:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:01.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:01.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:02 np0005539564 nova_compute[226295]: 2025-11-29 07:47:02.060 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:47:02 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3570569359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:47:02 np0005539564 nova_compute[226295]: 2025-11-29 07:47:02.935 226310 INFO nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Instance shutdown successfully after 25 seconds.#033[00m
Nov 29 02:47:03 np0005539564 kernel: tap6a7b5e5f-28 (unregistering): left promiscuous mode
Nov 29 02:47:03 np0005539564 NetworkManager[48997]: <info>  [1764402423.2464] device (tap6a7b5e5f-28): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:47:03 np0005539564 ovn_controller[130591]: 2025-11-29T07:47:03Z|00064|binding|INFO|Releasing lport 6a7b5e5f-28d1-488d-8877-073bc5493a93 from this chassis (sb_readonly=0)
Nov 29 02:47:03 np0005539564 nova_compute[226295]: 2025-11-29 07:47:03.253 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:03 np0005539564 ovn_controller[130591]: 2025-11-29T07:47:03Z|00065|binding|INFO|Setting lport 6a7b5e5f-28d1-488d-8877-073bc5493a93 down in Southbound
Nov 29 02:47:03 np0005539564 ovn_controller[130591]: 2025-11-29T07:47:03Z|00066|binding|INFO|Removing iface tap6a7b5e5f-28 ovn-installed in OVS
Nov 29 02:47:03 np0005539564 nova_compute[226295]: 2025-11-29 07:47:03.255 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:03.258 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:59:71 10.100.0.5'], port_security=['fa:16:3e:66:59:71 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '22e7eaf7-aac4-4748-90df-04afd0ea7376', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96ea84545e71401fb69d21be6e2472f7', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'b094dd4e-cb76-48e4-81b4-a11d19d5f956', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=baaadbdd-7935-4514-9332-391647ab6336, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=6a7b5e5f-28d1-488d-8877-073bc5493a93) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:47:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:03.260 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 6a7b5e5f-28d1-488d-8877-073bc5493a93 in datapath 788595a6-8f3f-45f7-807d-f88c9bf0e050 unbound from our chassis#033[00m
Nov 29 02:47:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:03.261 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 788595a6-8f3f-45f7-807d-f88c9bf0e050#033[00m
Nov 29 02:47:03 np0005539564 nova_compute[226295]: 2025-11-29 07:47:03.271 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:03.280 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b59a3d38-eb3b-4373-a748-e97c8606f253]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:03 np0005539564 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 29 02:47:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:03.309 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[ada3782b-c7ea-4c56-9ee1-7c8816e90ef2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:03 np0005539564 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000009.scope: Consumed 16.661s CPU time.
Nov 29 02:47:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:03.312 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[476de2a1-a166-402a-b2c6-ae3fcdea8223]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:03 np0005539564 systemd-machined[190128]: Machine qemu-5-instance-00000009 terminated.
Nov 29 02:47:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:03.339 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[100dc767-4947-4400-bf00-150ff61eccbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:03.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:03.359 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ae11de76-5840-452e-adf1-f07d27186cca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap788595a6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:52:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530136, 'reachable_time': 17929, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 234076, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:03.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:03 np0005539564 nova_compute[226295]: 2025-11-29 07:47:03.373 226310 INFO nova.virt.libvirt.driver [-] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Instance destroyed successfully.#033[00m
Nov 29 02:47:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:03.377 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[84210d4e-a27d-426d-9560-34bda7b7e57f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap788595a6-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530148, 'tstamp': 530148}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 234085, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap788595a6-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530152, 'tstamp': 530152}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 234085, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:03.378 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap788595a6-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:03 np0005539564 nova_compute[226295]: 2025-11-29 07:47:03.379 226310 INFO nova.virt.libvirt.driver [-] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Instance destroyed successfully.#033[00m
Nov 29 02:47:03 np0005539564 nova_compute[226295]: 2025-11-29 07:47:03.379 226310 DEBUG nova.virt.libvirt.vif [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:44:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1240733035',display_name='tempest-ServersAdminTestJSON-server-1240733035',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1240733035',id=9,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:46:32Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='96ea84545e71401fb69d21be6e2472f7',ramdisk_id='',reservation_id='r-22jb6edg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1807764482',owner_user_name='tempest-ServersAdminTestJSON-1807764482-project-m
ember'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:46:36Z,user_data=None,user_id='d94c707cca604d72a8e1d49b636095e1',uuid=22e7eaf7-aac4-4748-90df-04afd0ea7376,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "address": "fa:16:3e:66:59:71", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a7b5e5f-28", "ovs_interfaceid": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:47:03 np0005539564 nova_compute[226295]: 2025-11-29 07:47:03.380 226310 DEBUG nova.network.os_vif_util [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converting VIF {"id": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "address": "fa:16:3e:66:59:71", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a7b5e5f-28", "ovs_interfaceid": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:47:03 np0005539564 nova_compute[226295]: 2025-11-29 07:47:03.380 226310 DEBUG nova.network.os_vif_util [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:66:59:71,bridge_name='br-int',has_traffic_filtering=True,id=6a7b5e5f-28d1-488d-8877-073bc5493a93,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a7b5e5f-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:47:03 np0005539564 nova_compute[226295]: 2025-11-29 07:47:03.381 226310 DEBUG os_vif [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:59:71,bridge_name='br-int',has_traffic_filtering=True,id=6a7b5e5f-28d1-488d-8877-073bc5493a93,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a7b5e5f-28') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:47:03 np0005539564 nova_compute[226295]: 2025-11-29 07:47:03.382 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:03 np0005539564 nova_compute[226295]: 2025-11-29 07:47:03.383 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a7b5e5f-28, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:03 np0005539564 nova_compute[226295]: 2025-11-29 07:47:03.384 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:03.384 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap788595a6-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:03 np0005539564 nova_compute[226295]: 2025-11-29 07:47:03.385 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:03.385 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:47:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:03.385 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap788595a6-80, col_values=(('external_ids', {'iface-id': '4a1365a2-9549-4214-ba8d-c7bb361501a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:03.385 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:47:03 np0005539564 nova_compute[226295]: 2025-11-29 07:47:03.388 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:47:03 np0005539564 nova_compute[226295]: 2025-11-29 07:47:03.390 226310 INFO os_vif [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:59:71,bridge_name='br-int',has_traffic_filtering=True,id=6a7b5e5f-28d1-488d-8877-073bc5493a93,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a7b5e5f-28')#033[00m
Nov 29 02:47:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:03.696 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:03.696 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:03.697 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:03 np0005539564 nova_compute[226295]: 2025-11-29 07:47:03.698 226310 DEBUG nova.compute.manager [req-b8a4c5aa-432c-49d6-8945-f5c8b29b93fb req-b4e05044-09cf-4ff1-a2b3-5c4fc9b2e041 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Received event network-vif-unplugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:47:03 np0005539564 nova_compute[226295]: 2025-11-29 07:47:03.698 226310 DEBUG oslo_concurrency.lockutils [req-b8a4c5aa-432c-49d6-8945-f5c8b29b93fb req-b4e05044-09cf-4ff1-a2b3-5c4fc9b2e041 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:03 np0005539564 nova_compute[226295]: 2025-11-29 07:47:03.698 226310 DEBUG oslo_concurrency.lockutils [req-b8a4c5aa-432c-49d6-8945-f5c8b29b93fb req-b4e05044-09cf-4ff1-a2b3-5c4fc9b2e041 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:03 np0005539564 nova_compute[226295]: 2025-11-29 07:47:03.698 226310 DEBUG oslo_concurrency.lockutils [req-b8a4c5aa-432c-49d6-8945-f5c8b29b93fb req-b4e05044-09cf-4ff1-a2b3-5c4fc9b2e041 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:03 np0005539564 nova_compute[226295]: 2025-11-29 07:47:03.699 226310 DEBUG nova.compute.manager [req-b8a4c5aa-432c-49d6-8945-f5c8b29b93fb req-b4e05044-09cf-4ff1-a2b3-5c4fc9b2e041 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] No waiting events found dispatching network-vif-unplugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:47:03 np0005539564 nova_compute[226295]: 2025-11-29 07:47:03.699 226310 WARNING nova.compute.manager [req-b8a4c5aa-432c-49d6-8945-f5c8b29b93fb req-b4e05044-09cf-4ff1-a2b3-5c4fc9b2e041 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Received unexpected event network-vif-unplugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 for instance with vm_state active and task_state rebuilding.#033[00m
Nov 29 02:47:03 np0005539564 nova_compute[226295]: 2025-11-29 07:47:03.832 226310 INFO nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Deleting instance files /var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376_del#033[00m
Nov 29 02:47:03 np0005539564 nova_compute[226295]: 2025-11-29 07:47:03.834 226310 INFO nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Deletion of /var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376_del complete#033[00m
Nov 29 02:47:03 np0005539564 nova_compute[226295]: 2025-11-29 07:47:03.979 226310 DEBUG nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:47:03 np0005539564 nova_compute[226295]: 2025-11-29 07:47:03.980 226310 INFO nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Creating image(s)#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.010 226310 DEBUG nova.storage.rbd_utils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.051 226310 DEBUG nova.storage.rbd_utils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.095 226310 DEBUG nova.storage.rbd_utils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.101 226310 DEBUG oslo_concurrency.processutils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.198 226310 DEBUG oslo_concurrency.processutils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.199 226310 DEBUG oslo_concurrency.lockutils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.199 226310 DEBUG oslo_concurrency.lockutils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.200 226310 DEBUG oslo_concurrency.lockutils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.249 226310 DEBUG nova.storage.rbd_utils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.253 226310 DEBUG oslo_concurrency.processutils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.571 226310 DEBUG oslo_concurrency.processutils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.642 226310 DEBUG nova.storage.rbd_utils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] resizing rbd image 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.734 226310 DEBUG nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.735 226310 DEBUG nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Ensure instance console log exists: /var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.735 226310 DEBUG oslo_concurrency.lockutils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.736 226310 DEBUG oslo_concurrency.lockutils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.736 226310 DEBUG oslo_concurrency.lockutils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.738 226310 DEBUG nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Start _get_guest_xml network_info=[{"id": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "address": "fa:16:3e:66:59:71", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a7b5e5f-28", "ovs_interfaceid": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.744 226310 WARNING nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.752 226310 DEBUG nova.virt.libvirt.host [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.752 226310 DEBUG nova.virt.libvirt.host [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.761 226310 DEBUG nova.virt.libvirt.host [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.763 226310 DEBUG nova.virt.libvirt.host [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.764 226310 DEBUG nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.765 226310 DEBUG nova.virt.hardware [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.766 226310 DEBUG nova.virt.hardware [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.766 226310 DEBUG nova.virt.hardware [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.767 226310 DEBUG nova.virt.hardware [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.767 226310 DEBUG nova.virt.hardware [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.768 226310 DEBUG nova.virt.hardware [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.768 226310 DEBUG nova.virt.hardware [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.769 226310 DEBUG nova.virt.hardware [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.770 226310 DEBUG nova.virt.hardware [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.770 226310 DEBUG nova.virt.hardware [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.771 226310 DEBUG nova.virt.hardware [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.771 226310 DEBUG nova.objects.instance [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 22e7eaf7-aac4-4748-90df-04afd0ea7376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.794 226310 DEBUG oslo_concurrency.processutils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:04 np0005539564 nova_compute[226295]: 2025-11-29 07:47:04.916 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:47:05 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2973788246' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:47:05 np0005539564 nova_compute[226295]: 2025-11-29 07:47:05.227 226310 DEBUG oslo_concurrency.processutils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:05 np0005539564 nova_compute[226295]: 2025-11-29 07:47:05.253 226310 DEBUG nova.storage.rbd_utils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:47:05 np0005539564 nova_compute[226295]: 2025-11-29 07:47:05.256 226310 DEBUG oslo_concurrency.processutils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:47:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:05.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:47:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:05.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:47:05 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4087089054' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.112 226310 DEBUG oslo_concurrency.processutils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.856s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.116 226310 DEBUG nova.virt.libvirt.vif [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:44:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1240733035',display_name='tempest-ServersAdminTestJSON-server-1240733035',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1240733035',id=9,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:46:32Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='96ea84545e71401fb69d21be6e2472f7',ramdisk_id='',reservation_id='r-22jb6edg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1807764482',owner_user_name='tempest-ServersAdminT
estJSON-1807764482-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:47:03Z,user_data=None,user_id='d94c707cca604d72a8e1d49b636095e1',uuid=22e7eaf7-aac4-4748-90df-04afd0ea7376,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "address": "fa:16:3e:66:59:71", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a7b5e5f-28", "ovs_interfaceid": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.118 226310 DEBUG nova.network.os_vif_util [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converting VIF {"id": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "address": "fa:16:3e:66:59:71", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a7b5e5f-28", "ovs_interfaceid": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.120 226310 DEBUG nova.network.os_vif_util [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:66:59:71,bridge_name='br-int',has_traffic_filtering=True,id=6a7b5e5f-28d1-488d-8877-073bc5493a93,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a7b5e5f-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.125 226310 DEBUG nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:47:06 np0005539564 nova_compute[226295]:  <uuid>22e7eaf7-aac4-4748-90df-04afd0ea7376</uuid>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:  <name>instance-00000009</name>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServersAdminTestJSON-server-1240733035</nova:name>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 07:47:04</nova:creationTime>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 02:47:06 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:        <nova:user uuid="d94c707cca604d72a8e1d49b636095e1">tempest-ServersAdminTestJSON-1807764482-project-member</nova:user>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:        <nova:project uuid="96ea84545e71401fb69d21be6e2472f7">tempest-ServersAdminTestJSON-1807764482</nova:project>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:        <nova:port uuid="6a7b5e5f-28d1-488d-8877-073bc5493a93">
Nov 29 02:47:06 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <system>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <entry name="serial">22e7eaf7-aac4-4748-90df-04afd0ea7376</entry>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <entry name="uuid">22e7eaf7-aac4-4748-90df-04afd0ea7376</entry>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    </system>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:  <os>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:  </clock>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/22e7eaf7-aac4-4748-90df-04afd0ea7376_disk">
Nov 29 02:47:06 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:47:06 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/22e7eaf7-aac4-4748-90df-04afd0ea7376_disk.config">
Nov 29 02:47:06 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:47:06 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:66:59:71"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <target dev="tap6a7b5e5f-28"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    </interface>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376/console.log" append="off"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    </serial>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <video>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 02:47:06 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 02:47:06 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:47:06 np0005539564 nova_compute[226295]: </domain>
Nov 29 02:47:06 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.128 226310 DEBUG nova.virt.libvirt.vif [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:44:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1240733035',display_name='tempest-ServersAdminTestJSON-server-1240733035',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1240733035',id=9,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:46:32Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='96ea84545e71401fb69d21be6e2472f7',ramdisk_id='',reservation_id='r-22jb6edg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1807764482',owner_user_name='tempest-ServersAdminTestJSON-1807764482-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:47:03Z,user_data=None,user_id='d94c707cca604d72a8e1d49b636095e1',uuid=22e7eaf7-aac4-4748-90df-04afd0ea7376,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "address": "fa:16:3e:66:59:71", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a7b5e5f-28", "ovs_interfaceid": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.129 226310 DEBUG nova.network.os_vif_util [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converting VIF {"id": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "address": "fa:16:3e:66:59:71", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a7b5e5f-28", "ovs_interfaceid": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.130 226310 DEBUG nova.network.os_vif_util [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:66:59:71,bridge_name='br-int',has_traffic_filtering=True,id=6a7b5e5f-28d1-488d-8877-073bc5493a93,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a7b5e5f-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.130 226310 DEBUG os_vif [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:59:71,bridge_name='br-int',has_traffic_filtering=True,id=6a7b5e5f-28d1-488d-8877-073bc5493a93,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a7b5e5f-28') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.132 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.132 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.133 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.137 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.137 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a7b5e5f-28, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.138 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6a7b5e5f-28, col_values=(('external_ids', {'iface-id': '6a7b5e5f-28d1-488d-8877-073bc5493a93', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:59:71', 'vm-uuid': '22e7eaf7-aac4-4748-90df-04afd0ea7376'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.140 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:06 np0005539564 NetworkManager[48997]: <info>  [1764402426.1413] manager: (tap6a7b5e5f-28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.145 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.149 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.150 226310 INFO os_vif [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:59:71,bridge_name='br-int',has_traffic_filtering=True,id=6a7b5e5f-28d1-488d-8877-073bc5493a93,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a7b5e5f-28')#033[00m
Nov 29 02:47:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.810 226310 DEBUG nova.compute.manager [req-5739bb5e-eefe-4caa-ad54-98a61ff3ef13 req-135bd338-a8b4-4f2f-a53f-9ee9c6b2b65e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Received event network-vif-plugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.811 226310 DEBUG oslo_concurrency.lockutils [req-5739bb5e-eefe-4caa-ad54-98a61ff3ef13 req-135bd338-a8b4-4f2f-a53f-9ee9c6b2b65e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.811 226310 DEBUG oslo_concurrency.lockutils [req-5739bb5e-eefe-4caa-ad54-98a61ff3ef13 req-135bd338-a8b4-4f2f-a53f-9ee9c6b2b65e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.811 226310 DEBUG oslo_concurrency.lockutils [req-5739bb5e-eefe-4caa-ad54-98a61ff3ef13 req-135bd338-a8b4-4f2f-a53f-9ee9c6b2b65e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.812 226310 DEBUG nova.compute.manager [req-5739bb5e-eefe-4caa-ad54-98a61ff3ef13 req-135bd338-a8b4-4f2f-a53f-9ee9c6b2b65e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] No waiting events found dispatching network-vif-plugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.812 226310 WARNING nova.compute.manager [req-5739bb5e-eefe-4caa-ad54-98a61ff3ef13 req-135bd338-a8b4-4f2f-a53f-9ee9c6b2b65e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Received unexpected event network-vif-plugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.832 226310 DEBUG nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.833 226310 DEBUG nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.833 226310 DEBUG nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] No VIF found with MAC fa:16:3e:66:59:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.833 226310 INFO nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Using config drive#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.861 226310 DEBUG nova.storage.rbd_utils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.883 226310 DEBUG nova.objects.instance [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 22e7eaf7-aac4-4748-90df-04afd0ea7376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:47:06 np0005539564 nova_compute[226295]: 2025-11-29 07:47:06.920 226310 DEBUG nova.objects.instance [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'keypairs' on Instance uuid 22e7eaf7-aac4-4748-90df-04afd0ea7376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:47:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:07.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:07.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:08 np0005539564 nova_compute[226295]: 2025-11-29 07:47:08.397 226310 INFO nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Creating config drive at /var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376/disk.config#033[00m
Nov 29 02:47:08 np0005539564 nova_compute[226295]: 2025-11-29 07:47:08.407 226310 DEBUG oslo_concurrency.processutils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpch2gtb9_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:08 np0005539564 nova_compute[226295]: 2025-11-29 07:47:08.540 226310 DEBUG oslo_concurrency.processutils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpch2gtb9_" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:08 np0005539564 nova_compute[226295]: 2025-11-29 07:47:08.577 226310 DEBUG nova.storage.rbd_utils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:47:08 np0005539564 nova_compute[226295]: 2025-11-29 07:47:08.583 226310 DEBUG oslo_concurrency.processutils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376/disk.config 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:09.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:09.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:09 np0005539564 nova_compute[226295]: 2025-11-29 07:47:09.955 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:10 np0005539564 nova_compute[226295]: 2025-11-29 07:47:10.180 226310 DEBUG oslo_concurrency.processutils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376/disk.config 22e7eaf7-aac4-4748-90df-04afd0ea7376_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:10 np0005539564 nova_compute[226295]: 2025-11-29 07:47:10.181 226310 INFO nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Deleting local config drive /var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376/disk.config because it was imported into RBD.#033[00m
Nov 29 02:47:10 np0005539564 kernel: tap6a7b5e5f-28: entered promiscuous mode
Nov 29 02:47:10 np0005539564 NetworkManager[48997]: <info>  [1764402430.2289] manager: (tap6a7b5e5f-28): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Nov 29 02:47:10 np0005539564 ovn_controller[130591]: 2025-11-29T07:47:10Z|00067|binding|INFO|Claiming lport 6a7b5e5f-28d1-488d-8877-073bc5493a93 for this chassis.
Nov 29 02:47:10 np0005539564 ovn_controller[130591]: 2025-11-29T07:47:10Z|00068|binding|INFO|6a7b5e5f-28d1-488d-8877-073bc5493a93: Claiming fa:16:3e:66:59:71 10.100.0.5
Nov 29 02:47:10 np0005539564 nova_compute[226295]: 2025-11-29 07:47:10.233 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:10 np0005539564 ovn_controller[130591]: 2025-11-29T07:47:10Z|00069|binding|INFO|Setting lport 6a7b5e5f-28d1-488d-8877-073bc5493a93 ovn-installed in OVS
Nov 29 02:47:10 np0005539564 nova_compute[226295]: 2025-11-29 07:47:10.250 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:10 np0005539564 nova_compute[226295]: 2025-11-29 07:47:10.251 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:10 np0005539564 systemd-machined[190128]: New machine qemu-6-instance-00000009.
Nov 29 02:47:10 np0005539564 systemd-udevd[234410]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:47:10 np0005539564 NetworkManager[48997]: <info>  [1764402430.2771] device (tap6a7b5e5f-28): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:47:10 np0005539564 NetworkManager[48997]: <info>  [1764402430.2780] device (tap6a7b5e5f-28): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:47:10 np0005539564 systemd[1]: Started Virtual Machine qemu-6-instance-00000009.
Nov 29 02:47:10 np0005539564 ovn_controller[130591]: 2025-11-29T07:47:10Z|00070|binding|INFO|Setting lport 6a7b5e5f-28d1-488d-8877-073bc5493a93 up in Southbound
Nov 29 02:47:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:10.616 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:59:71 10.100.0.5'], port_security=['fa:16:3e:66:59:71 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '22e7eaf7-aac4-4748-90df-04afd0ea7376', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96ea84545e71401fb69d21be6e2472f7', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'b094dd4e-cb76-48e4-81b4-a11d19d5f956', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=baaadbdd-7935-4514-9332-391647ab6336, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=6a7b5e5f-28d1-488d-8877-073bc5493a93) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:47:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:10.617 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 6a7b5e5f-28d1-488d-8877-073bc5493a93 in datapath 788595a6-8f3f-45f7-807d-f88c9bf0e050 bound to our chassis#033[00m
Nov 29 02:47:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:10.619 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 788595a6-8f3f-45f7-807d-f88c9bf0e050#033[00m
Nov 29 02:47:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:10.632 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[81ad1810-9841-4175-a1b3-63ac849af9f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:10.658 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc78f28-5a5b-40b0-97fe-a0f39454bc87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:10.661 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[d69a24ce-d2be-4a53-a1ff-9113f32517b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:10 np0005539564 nova_compute[226295]: 2025-11-29 07:47:10.666 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Removed pending event for 22e7eaf7-aac4-4748-90df-04afd0ea7376 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 02:47:10 np0005539564 nova_compute[226295]: 2025-11-29 07:47:10.666 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402430.6660085, 22e7eaf7-aac4-4748-90df-04afd0ea7376 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:47:10 np0005539564 nova_compute[226295]: 2025-11-29 07:47:10.667 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:47:10 np0005539564 nova_compute[226295]: 2025-11-29 07:47:10.670 226310 DEBUG nova.compute.manager [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:47:10 np0005539564 nova_compute[226295]: 2025-11-29 07:47:10.670 226310 DEBUG nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:47:10 np0005539564 nova_compute[226295]: 2025-11-29 07:47:10.674 226310 INFO nova.virt.libvirt.driver [-] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Instance spawned successfully.#033[00m
Nov 29 02:47:10 np0005539564 nova_compute[226295]: 2025-11-29 07:47:10.675 226310 DEBUG nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:47:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:10.697 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[c444ac1e-32e1-47ce-b09a-c4340aabb5c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:10.714 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e97bc3e3-bbe2-4bcb-84d5-d6a153374364]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap788595a6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:52:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530136, 'reachable_time': 17929, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 234466, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:10.730 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[50a56451-c7f0-4c2c-80ed-8498e542fcbf]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap788595a6-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530148, 'tstamp': 530148}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 234467, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap788595a6-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530152, 'tstamp': 530152}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 234467, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:10.731 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap788595a6-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:10 np0005539564 nova_compute[226295]: 2025-11-29 07:47:10.733 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:10 np0005539564 nova_compute[226295]: 2025-11-29 07:47:10.734 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:10.735 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap788595a6-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:10.736 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:47:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:10.736 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap788595a6-80, col_values=(('external_ids', {'iface-id': '4a1365a2-9549-4214-ba8d-c7bb361501a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:10.737 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:47:10 np0005539564 nova_compute[226295]: 2025-11-29 07:47:10.904 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:47:10 np0005539564 nova_compute[226295]: 2025-11-29 07:47:10.910 226310 DEBUG nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:47:10 np0005539564 nova_compute[226295]: 2025-11-29 07:47:10.910 226310 DEBUG nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:47:10 np0005539564 nova_compute[226295]: 2025-11-29 07:47:10.911 226310 DEBUG nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:47:10 np0005539564 nova_compute[226295]: 2025-11-29 07:47:10.912 226310 DEBUG nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:47:10 np0005539564 nova_compute[226295]: 2025-11-29 07:47:10.912 226310 DEBUG nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:47:10 np0005539564 nova_compute[226295]: 2025-11-29 07:47:10.912 226310 DEBUG nova.virt.libvirt.driver [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:47:10 np0005539564 nova_compute[226295]: 2025-11-29 07:47:10.917 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:47:11 np0005539564 nova_compute[226295]: 2025-11-29 07:47:11.185 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:11 np0005539564 nova_compute[226295]: 2025-11-29 07:47:11.261 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 02:47:11 np0005539564 nova_compute[226295]: 2025-11-29 07:47:11.261 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402430.6672983, 22e7eaf7-aac4-4748-90df-04afd0ea7376 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:47:11 np0005539564 nova_compute[226295]: 2025-11-29 07:47:11.262 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] VM Started (Lifecycle Event)#033[00m
Nov 29 02:47:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:11.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:11.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:11 np0005539564 nova_compute[226295]: 2025-11-29 07:47:11.447 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:47:11 np0005539564 nova_compute[226295]: 2025-11-29 07:47:11.450 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:47:11 np0005539564 nova_compute[226295]: 2025-11-29 07:47:11.458 226310 DEBUG nova.compute.manager [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:47:11 np0005539564 nova_compute[226295]: 2025-11-29 07:47:11.470 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 02:47:11 np0005539564 nova_compute[226295]: 2025-11-29 07:47:11.525 226310 DEBUG oslo_concurrency.lockutils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:11 np0005539564 nova_compute[226295]: 2025-11-29 07:47:11.526 226310 DEBUG oslo_concurrency.lockutils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:11 np0005539564 nova_compute[226295]: 2025-11-29 07:47:11.526 226310 DEBUG nova.objects.instance [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 02:47:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:11 np0005539564 nova_compute[226295]: 2025-11-29 07:47:11.765 226310 DEBUG nova.compute.manager [req-b4d467d7-b8ae-4f6a-b38e-edb030f384f3 req-b41d3628-abb9-4cea-ae2e-a0e8897a3fd9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Received event network-vif-plugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:47:11 np0005539564 nova_compute[226295]: 2025-11-29 07:47:11.766 226310 DEBUG oslo_concurrency.lockutils [req-b4d467d7-b8ae-4f6a-b38e-edb030f384f3 req-b41d3628-abb9-4cea-ae2e-a0e8897a3fd9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:11 np0005539564 nova_compute[226295]: 2025-11-29 07:47:11.767 226310 DEBUG oslo_concurrency.lockutils [req-b4d467d7-b8ae-4f6a-b38e-edb030f384f3 req-b41d3628-abb9-4cea-ae2e-a0e8897a3fd9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:11 np0005539564 nova_compute[226295]: 2025-11-29 07:47:11.767 226310 DEBUG oslo_concurrency.lockutils [req-b4d467d7-b8ae-4f6a-b38e-edb030f384f3 req-b41d3628-abb9-4cea-ae2e-a0e8897a3fd9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:11 np0005539564 nova_compute[226295]: 2025-11-29 07:47:11.768 226310 DEBUG nova.compute.manager [req-b4d467d7-b8ae-4f6a-b38e-edb030f384f3 req-b41d3628-abb9-4cea-ae2e-a0e8897a3fd9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] No waiting events found dispatching network-vif-plugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:47:11 np0005539564 nova_compute[226295]: 2025-11-29 07:47:11.768 226310 WARNING nova.compute.manager [req-b4d467d7-b8ae-4f6a-b38e-edb030f384f3 req-b41d3628-abb9-4cea-ae2e-a0e8897a3fd9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Received unexpected event network-vif-plugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 for instance with vm_state active and task_state None.#033[00m
Nov 29 02:47:11 np0005539564 nova_compute[226295]: 2025-11-29 07:47:11.775 226310 DEBUG oslo_concurrency.lockutils [None req-f6a9a81d-ec1e-4f4a-8639-e2ae90410fec d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:12 np0005539564 nova_compute[226295]: 2025-11-29 07:47:12.526 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:12.527 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:47:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:12.528 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:47:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:13.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:13.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:14 np0005539564 ovn_controller[130591]: 2025-11-29T07:47:14Z|00071|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Nov 29 02:47:14 np0005539564 nova_compute[226295]: 2025-11-29 07:47:14.789 226310 DEBUG nova.compute.manager [req-f6c49584-8689-4d7d-867d-efd4656ae7b2 req-b3ea6b5d-da8e-454e-8cc7-a4bbfe1aede9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Received event network-vif-plugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:47:14 np0005539564 nova_compute[226295]: 2025-11-29 07:47:14.790 226310 DEBUG oslo_concurrency.lockutils [req-f6c49584-8689-4d7d-867d-efd4656ae7b2 req-b3ea6b5d-da8e-454e-8cc7-a4bbfe1aede9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:14 np0005539564 nova_compute[226295]: 2025-11-29 07:47:14.791 226310 DEBUG oslo_concurrency.lockutils [req-f6c49584-8689-4d7d-867d-efd4656ae7b2 req-b3ea6b5d-da8e-454e-8cc7-a4bbfe1aede9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:14 np0005539564 nova_compute[226295]: 2025-11-29 07:47:14.791 226310 DEBUG oslo_concurrency.lockutils [req-f6c49584-8689-4d7d-867d-efd4656ae7b2 req-b3ea6b5d-da8e-454e-8cc7-a4bbfe1aede9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:14 np0005539564 nova_compute[226295]: 2025-11-29 07:47:14.792 226310 DEBUG nova.compute.manager [req-f6c49584-8689-4d7d-867d-efd4656ae7b2 req-b3ea6b5d-da8e-454e-8cc7-a4bbfe1aede9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] No waiting events found dispatching network-vif-plugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:47:14 np0005539564 nova_compute[226295]: 2025-11-29 07:47:14.792 226310 WARNING nova.compute.manager [req-f6c49584-8689-4d7d-867d-efd4656ae7b2 req-b3ea6b5d-da8e-454e-8cc7-a4bbfe1aede9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Received unexpected event network-vif-plugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 for instance with vm_state error and task_state None.#033[00m
Nov 29 02:47:14 np0005539564 nova_compute[226295]: 2025-11-29 07:47:14.957 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:15.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:15.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:16 np0005539564 nova_compute[226295]: 2025-11-29 07:47:16.188 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:16.530 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:17.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:17.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:17 np0005539564 podman[234471]: 2025-11-29 07:47:17.504887718 +0000 UTC m=+0.053052119 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 29 02:47:17 np0005539564 podman[234470]: 2025-11-29 07:47:17.511706213 +0000 UTC m=+0.058421505 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Nov 29 02:47:17 np0005539564 podman[234469]: 2025-11-29 07:47:17.531172241 +0000 UTC m=+0.080844353 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:47:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:19.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:19.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:19 np0005539564 nova_compute[226295]: 2025-11-29 07:47:19.959 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:21 np0005539564 nova_compute[226295]: 2025-11-29 07:47:21.190 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:21 np0005539564 nova_compute[226295]: 2025-11-29 07:47:21.375 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:21 np0005539564 nova_compute[226295]: 2025-11-29 07:47:21.376 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:21.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:21.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:22 np0005539564 nova_compute[226295]: 2025-11-29 07:47:22.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:22 np0005539564 nova_compute[226295]: 2025-11-29 07:47:22.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:47:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:23 np0005539564 nova_compute[226295]: 2025-11-29 07:47:23.336 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:23 np0005539564 nova_compute[226295]: 2025-11-29 07:47:23.341 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:23 np0005539564 nova_compute[226295]: 2025-11-29 07:47:23.342 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:47:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:23.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:23.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:24 np0005539564 nova_compute[226295]: 2025-11-29 07:47:24.329 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-22e7eaf7-aac4-4748-90df-04afd0ea7376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:47:24 np0005539564 nova_compute[226295]: 2025-11-29 07:47:24.330 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-22e7eaf7-aac4-4748-90df-04afd0ea7376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:47:24 np0005539564 nova_compute[226295]: 2025-11-29 07:47:24.330 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:47:24 np0005539564 nova_compute[226295]: 2025-11-29 07:47:24.869 226310 DEBUG oslo_concurrency.lockutils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Acquiring lock "7af8a682-77ba-4f0e-a2c8-0d6890734636" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:24 np0005539564 nova_compute[226295]: 2025-11-29 07:47:24.870 226310 DEBUG oslo_concurrency.lockutils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Lock "7af8a682-77ba-4f0e-a2c8-0d6890734636" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:24 np0005539564 nova_compute[226295]: 2025-11-29 07:47:24.899 226310 DEBUG nova.compute.manager [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:47:24 np0005539564 nova_compute[226295]: 2025-11-29 07:47:24.962 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:24 np0005539564 nova_compute[226295]: 2025-11-29 07:47:24.973 226310 DEBUG oslo_concurrency.lockutils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:24 np0005539564 nova_compute[226295]: 2025-11-29 07:47:24.974 226310 DEBUG oslo_concurrency.lockutils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:24 np0005539564 nova_compute[226295]: 2025-11-29 07:47:24.980 226310 DEBUG nova.virt.hardware [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:47:24 np0005539564 nova_compute[226295]: 2025-11-29 07:47:24.980 226310 INFO nova.compute.claims [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 02:47:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:25.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:47:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:25.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:47:25 np0005539564 nova_compute[226295]: 2025-11-29 07:47:25.490 226310 DEBUG oslo_concurrency.processutils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:25 np0005539564 nova_compute[226295]: 2025-11-29 07:47:25.894 226310 DEBUG nova.compute.manager [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Nov 29 02:47:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:47:25 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/154495875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:47:25 np0005539564 ovn_controller[130591]: 2025-11-29T07:47:25Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:66:59:71 10.100.0.5
Nov 29 02:47:25 np0005539564 ovn_controller[130591]: 2025-11-29T07:47:25Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:66:59:71 10.100.0.5
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.006 226310 DEBUG oslo_concurrency.processutils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.014 226310 DEBUG nova.compute.provider_tree [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.024 226310 DEBUG oslo_concurrency.lockutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.044 226310 DEBUG nova.scheduler.client.report [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.079 226310 DEBUG oslo_concurrency.lockutils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.080 226310 DEBUG nova.compute.manager [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.083 226310 DEBUG oslo_concurrency.lockutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.128 226310 DEBUG nova.objects.instance [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Lazy-loading 'pci_requests' on Instance uuid dc42f6b3-eda5-409e-aac8-68275e50922e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.147 226310 DEBUG nova.virt.hardware [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.148 226310 INFO nova.compute.claims [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.148 226310 DEBUG nova.objects.instance [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Lazy-loading 'resources' on Instance uuid dc42f6b3-eda5-409e-aac8-68275e50922e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.169 226310 DEBUG nova.objects.instance [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Lazy-loading 'numa_topology' on Instance uuid dc42f6b3-eda5-409e-aac8-68275e50922e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.187 226310 DEBUG nova.compute.manager [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.190 226310 DEBUG nova.objects.instance [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Lazy-loading 'pci_devices' on Instance uuid dc42f6b3-eda5-409e-aac8-68275e50922e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.230 226310 INFO nova.virt.libvirt.driver [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.233 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.269 226310 INFO nova.compute.resource_tracker [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Updating resource usage from migration e3caf3fb-c703-4009-b623-b79ea4aabd7b#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.270 226310 DEBUG nova.compute.resource_tracker [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Starting to track incoming migration e3caf3fb-c703-4009-b623-b79ea4aabd7b with flavor b3f6a6d1-4abb-4332-8391-2e39c8fa168a _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.273 226310 DEBUG nova.compute.manager [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.419 226310 DEBUG nova.compute.manager [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.421 226310 DEBUG nova.virt.libvirt.driver [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.422 226310 INFO nova.virt.libvirt.driver [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Creating image(s)#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.458 226310 DEBUG nova.storage.rbd_utils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] rbd image 7af8a682-77ba-4f0e-a2c8-0d6890734636_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.501 226310 DEBUG nova.storage.rbd_utils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] rbd image 7af8a682-77ba-4f0e-a2c8-0d6890734636_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.534 226310 DEBUG nova.storage.rbd_utils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] rbd image 7af8a682-77ba-4f0e-a2c8-0d6890734636_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.538 226310 DEBUG oslo_concurrency.processutils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.555 226310 DEBUG oslo_concurrency.processutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.609 226310 DEBUG oslo_concurrency.processutils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.611 226310 DEBUG oslo_concurrency.lockutils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.612 226310 DEBUG oslo_concurrency.lockutils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.612 226310 DEBUG oslo_concurrency.lockutils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.643 226310 DEBUG nova.storage.rbd_utils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] rbd image 7af8a682-77ba-4f0e-a2c8-0d6890734636_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.650 226310 DEBUG oslo_concurrency.processutils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 7af8a682-77ba-4f0e-a2c8-0d6890734636_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.759 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Updating instance_info_cache with network_info: [{"id": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "address": "fa:16:3e:66:59:71", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a7b5e5f-28", "ovs_interfaceid": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.779 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-22e7eaf7-aac4-4748-90df-04afd0ea7376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.780 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.781 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.781 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.782 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.783 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:26 np0005539564 nova_compute[226295]: 2025-11-29 07:47:26.811 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:47:26 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4181092242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.008 226310 DEBUG oslo_concurrency.processutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.015 226310 DEBUG nova.compute.provider_tree [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.037 226310 DEBUG nova.scheduler.client.report [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.072 226310 DEBUG oslo_concurrency.lockutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.073 226310 INFO nova.compute.manager [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Migrating#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.073 226310 DEBUG oslo_concurrency.lockutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.074 226310 DEBUG oslo_concurrency.lockutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.075 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.264s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.075 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.075 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.075 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.119 226310 INFO nova.compute.rpcapi [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.121 226310 DEBUG oslo_concurrency.lockutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.201 226310 DEBUG oslo_concurrency.processutils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 7af8a682-77ba-4f0e-a2c8-0d6890734636_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.274 226310 DEBUG nova.storage.rbd_utils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] resizing rbd image 7af8a682-77ba-4f0e-a2c8-0d6890734636_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:47:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:27.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.397 226310 DEBUG nova.objects.instance [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Lazy-loading 'migration_context' on Instance uuid 7af8a682-77ba-4f0e-a2c8-0d6890734636 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:47:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:27.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.414 226310 DEBUG nova.virt.libvirt.driver [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.415 226310 DEBUG nova.virt.libvirt.driver [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Ensure instance console log exists: /var/lib/nova/instances/7af8a682-77ba-4f0e-a2c8-0d6890734636/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.415 226310 DEBUG oslo_concurrency.lockutils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.416 226310 DEBUG oslo_concurrency.lockutils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.416 226310 DEBUG oslo_concurrency.lockutils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.417 226310 DEBUG nova.virt.libvirt.driver [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.421 226310 WARNING nova.virt.libvirt.driver [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.425 226310 DEBUG nova.virt.libvirt.host [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.425 226310 DEBUG nova.virt.libvirt.host [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.431 226310 DEBUG nova.virt.libvirt.host [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.431 226310 DEBUG nova.virt.libvirt.host [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.432 226310 DEBUG nova.virt.libvirt.driver [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.433 226310 DEBUG nova.virt.hardware [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.433 226310 DEBUG nova.virt.hardware [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.433 226310 DEBUG nova.virt.hardware [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.434 226310 DEBUG nova.virt.hardware [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.434 226310 DEBUG nova.virt.hardware [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.434 226310 DEBUG nova.virt.hardware [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.434 226310 DEBUG nova.virt.hardware [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.434 226310 DEBUG nova.virt.hardware [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.435 226310 DEBUG nova.virt.hardware [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.435 226310 DEBUG nova.virt.hardware [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.435 226310 DEBUG nova.virt.hardware [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.442 226310 DEBUG oslo_concurrency.processutils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:47:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1055571765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:47:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.538 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.610 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.610 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.613 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.614 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.803 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.805 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4444MB free_disk=20.788719177246094GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.805 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.805 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:47:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2570882985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.904 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Migration for instance dc42f6b3-eda5-409e-aac8-68275e50922e refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.906 226310 DEBUG oslo_concurrency.lockutils [None req-4a32b99c-908e-4c00-b717-e11c374b847a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "15412333-68f0-43a8-b114-af3dde30be2a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.907 226310 DEBUG oslo_concurrency.lockutils [None req-4a32b99c-908e-4c00-b717-e11c374b847a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "15412333-68f0-43a8-b114-af3dde30be2a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.907 226310 DEBUG oslo_concurrency.lockutils [None req-4a32b99c-908e-4c00-b717-e11c374b847a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "15412333-68f0-43a8-b114-af3dde30be2a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.907 226310 DEBUG oslo_concurrency.lockutils [None req-4a32b99c-908e-4c00-b717-e11c374b847a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "15412333-68f0-43a8-b114-af3dde30be2a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.907 226310 DEBUG oslo_concurrency.lockutils [None req-4a32b99c-908e-4c00-b717-e11c374b847a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "15412333-68f0-43a8-b114-af3dde30be2a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.909 226310 INFO nova.compute.manager [None req-4a32b99c-908e-4c00-b717-e11c374b847a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Terminating instance#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.910 226310 DEBUG nova.compute.manager [None req-4a32b99c-908e-4c00-b717-e11c374b847a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.920 226310 DEBUG oslo_concurrency.processutils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.947 226310 DEBUG nova.storage.rbd_utils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] rbd image 7af8a682-77ba-4f0e-a2c8-0d6890734636_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.951 226310 DEBUG oslo_concurrency.processutils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:27 np0005539564 kernel: tap04aab360-df (unregistering): left promiscuous mode
Nov 29 02:47:27 np0005539564 NetworkManager[48997]: <info>  [1764402447.9622] device (tap04aab360-df): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:47:27 np0005539564 ovn_controller[130591]: 2025-11-29T07:47:27Z|00072|binding|INFO|Releasing lport 04aab360-dfb8-4a22-be37-eeec9d964e92 from this chassis (sb_readonly=0)
Nov 29 02:47:27 np0005539564 ovn_controller[130591]: 2025-11-29T07:47:27Z|00073|binding|INFO|Setting lport 04aab360-dfb8-4a22-be37-eeec9d964e92 down in Southbound
Nov 29 02:47:27 np0005539564 ovn_controller[130591]: 2025-11-29T07:47:27Z|00074|binding|INFO|Removing iface tap04aab360-df ovn-installed in OVS
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.974 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.983 226310 INFO nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Updating resource usage from migration e3caf3fb-c703-4009-b623-b79ea4aabd7b#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.984 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Starting to track incoming migration e3caf3fb-c703-4009-b623-b79ea4aabd7b with flavor b3f6a6d1-4abb-4332-8391-2e39c8fa168a _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 02:47:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:27.985 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:93:b2 10.100.0.13'], port_security=['fa:16:3e:66:93:b2 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '15412333-68f0-43a8-b114-af3dde30be2a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96ea84545e71401fb69d21be6e2472f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b094dd4e-cb76-48e4-81b4-a11d19d5f956', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=baaadbdd-7935-4514-9332-391647ab6336, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=04aab360-dfb8-4a22-be37-eeec9d964e92) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:47:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:27.987 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 04aab360-dfb8-4a22-be37-eeec9d964e92 in datapath 788595a6-8f3f-45f7-807d-f88c9bf0e050 unbound from our chassis#033[00m
Nov 29 02:47:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:27.988 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 788595a6-8f3f-45f7-807d-f88c9bf0e050#033[00m
Nov 29 02:47:27 np0005539564 nova_compute[226295]: 2025-11-29 07:47:27.991 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:28.013 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bfd78537-fa4f-4419-8440-f970d9197e8b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.014 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 22e7eaf7-aac4-4748-90df-04afd0ea7376 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.015 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 15412333-68f0-43a8-b114-af3dde30be2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.015 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 7af8a682-77ba-4f0e-a2c8-0d6890734636 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.041 226310 WARNING nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance dc42f6b3-eda5-409e-aac8-68275e50922e has been moved to another host compute-2.ctlplane.example.com(compute-2.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}.#033[00m
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.042 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.042 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=20GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:47:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:28.050 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[561cc8fc-031d-4910-9dd8-0f36a5cd1a84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:28 np0005539564 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Nov 29 02:47:28 np0005539564 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000b.scope: Consumed 18.447s CPU time.
Nov 29 02:47:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:28.053 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[3404876f-0c6e-4f87-b21e-6c264dfb73b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:28 np0005539564 systemd-machined[190128]: Machine qemu-4-instance-0000000b terminated.
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.068 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing inventories for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 02:47:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:28.092 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[37d7e3f6-2663-48ca-b23d-c5d4bf59adf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.093 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating ProviderTree inventory for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.094 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating inventory in ProviderTree for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:47:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:28.108 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5bae0928-ba7a-4b76-afa4-d198ab24b986]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap788595a6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:52:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530136, 'reachable_time': 17929, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 234835, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.119 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing aggregate associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 02:47:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:28.128 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[dfffcf2d-499e-40f4-9fcd-32b2d21ca3e4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap788595a6-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530148, 'tstamp': 530148}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 234836, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap788595a6-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530152, 'tstamp': 530152}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 234836, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:28.130 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap788595a6-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.131 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.136 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:28.142 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap788595a6-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:28.143 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:47:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:28.143 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap788595a6-80, col_values=(('external_ids', {'iface-id': '4a1365a2-9549-4214-ba8d-c7bb361501a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:28.144 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.145 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.152 226310 INFO nova.virt.libvirt.driver [-] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Instance destroyed successfully.#033[00m
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.154 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing trait associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.157 226310 DEBUG nova.objects.instance [None req-4a32b99c-908e-4c00-b717-e11c374b847a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'resources' on Instance uuid 15412333-68f0-43a8-b114-af3dde30be2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.174 226310 DEBUG nova.virt.libvirt.vif [None req-4a32b99c-908e-4c00-b717-e11c374b847a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:45:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1517476307',display_name='tempest-ServersAdminTestJSON-server-1517476307',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1517476307',id=11,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:45:50Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='96ea84545e71401fb69d21be6e2472f7',ramdisk_id='',reservation_id='r-8uugksx5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1807764482',owner_user_name='tempest-ServersAdminTestJSON-1807764482-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:45:50Z,user_data=None,user_id='d94c707cca604d72a8e1d49b636095e1',uuid=15412333-68f0-43a8-b114-af3dde30be2a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "04aab360-dfb8-4a22-be37-eeec9d964e92", "address": "fa:16:3e:66:93:b2", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap04aab360-df", "ovs_interfaceid": "04aab360-dfb8-4a22-be37-eeec9d964e92", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.174 226310 DEBUG nova.network.os_vif_util [None req-4a32b99c-908e-4c00-b717-e11c374b847a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converting VIF {"id": "04aab360-dfb8-4a22-be37-eeec9d964e92", "address": "fa:16:3e:66:93:b2", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap04aab360-df", "ovs_interfaceid": "04aab360-dfb8-4a22-be37-eeec9d964e92", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.175 226310 DEBUG nova.network.os_vif_util [None req-4a32b99c-908e-4c00-b717-e11c374b847a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:66:93:b2,bridge_name='br-int',has_traffic_filtering=True,id=04aab360-dfb8-4a22-be37-eeec9d964e92,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap04aab360-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.176 226310 DEBUG os_vif [None req-4a32b99c-908e-4c00-b717-e11c374b847a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:93:b2,bridge_name='br-int',has_traffic_filtering=True,id=04aab360-dfb8-4a22-be37-eeec9d964e92,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap04aab360-df') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.178 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.179 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap04aab360-df, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.184 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.186 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.188 226310 INFO os_vif [None req-4a32b99c-908e-4c00-b717-e11c374b847a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:93:b2,bridge_name='br-int',has_traffic_filtering=True,id=04aab360-dfb8-4a22-be37-eeec9d964e92,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap04aab360-df')#033[00m
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.221 226310 DEBUG nova.compute.manager [req-0bf083ec-f0bd-42b8-9b3c-bd1d66fcecf9 req-5a7effc2-bad0-48fc-9701-f56dadb5e019 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Received event network-vif-unplugged-04aab360-dfb8-4a22-be37-eeec9d964e92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.222 226310 DEBUG oslo_concurrency.lockutils [req-0bf083ec-f0bd-42b8-9b3c-bd1d66fcecf9 req-5a7effc2-bad0-48fc-9701-f56dadb5e019 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "15412333-68f0-43a8-b114-af3dde30be2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.223 226310 DEBUG oslo_concurrency.lockutils [req-0bf083ec-f0bd-42b8-9b3c-bd1d66fcecf9 req-5a7effc2-bad0-48fc-9701-f56dadb5e019 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "15412333-68f0-43a8-b114-af3dde30be2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.223 226310 DEBUG oslo_concurrency.lockutils [req-0bf083ec-f0bd-42b8-9b3c-bd1d66fcecf9 req-5a7effc2-bad0-48fc-9701-f56dadb5e019 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "15412333-68f0-43a8-b114-af3dde30be2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.223 226310 DEBUG nova.compute.manager [req-0bf083ec-f0bd-42b8-9b3c-bd1d66fcecf9 req-5a7effc2-bad0-48fc-9701-f56dadb5e019 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] No waiting events found dispatching network-vif-unplugged-04aab360-dfb8-4a22-be37-eeec9d964e92 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.224 226310 DEBUG nova.compute.manager [req-0bf083ec-f0bd-42b8-9b3c-bd1d66fcecf9 req-5a7effc2-bad0-48fc-9701-f56dadb5e019 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Received event network-vif-unplugged-04aab360-dfb8-4a22-be37-eeec9d964e92 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.311 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:47:28 np0005539564 systemd[1]: Created slice User Slice of UID 42436.
Nov 29 02:47:28 np0005539564 systemd[1]: Starting User Runtime Directory /run/user/42436...
Nov 29 02:47:28 np0005539564 systemd-logind[785]: New session 51 of user nova.
Nov 29 02:47:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:47:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2776053813' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:47:28 np0005539564 systemd[1]: Finished User Runtime Directory /run/user/42436.
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.452 226310 DEBUG oslo_concurrency.processutils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.456 226310 DEBUG nova.objects.instance [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Lazy-loading 'pci_devices' on Instance uuid 7af8a682-77ba-4f0e-a2c8-0d6890734636 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:47:28 np0005539564 systemd[1]: Starting User Manager for UID 42436...
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.471 226310 DEBUG nova.virt.libvirt.driver [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:47:28 np0005539564 nova_compute[226295]:  <uuid>7af8a682-77ba-4f0e-a2c8-0d6890734636</uuid>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:  <name>instance-00000011</name>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerDiagnosticsV248Test-server-1517927961</nova:name>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 07:47:27</nova:creationTime>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 02:47:28 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:        <nova:user uuid="a40c3321c50c4854a402a01fc8d364fb">tempest-ServerDiagnosticsV248Test-709551292-project-member</nova:user>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:        <nova:project uuid="5a9b7f6ed83e472387636ac0e55e5c3e">tempest-ServerDiagnosticsV248Test-709551292</nova:project>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      <nova:ports/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <system>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      <entry name="serial">7af8a682-77ba-4f0e-a2c8-0d6890734636</entry>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      <entry name="uuid">7af8a682-77ba-4f0e-a2c8-0d6890734636</entry>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    </system>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:  <os>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:  </clock>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/7af8a682-77ba-4f0e-a2c8-0d6890734636_disk">
Nov 29 02:47:28 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:47:28 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/7af8a682-77ba-4f0e-a2c8-0d6890734636_disk.config">
Nov 29 02:47:28 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:47:28 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/7af8a682-77ba-4f0e-a2c8-0d6890734636/console.log" append="off"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    </serial>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <video>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 02:47:28 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 02:47:28 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:47:28 np0005539564 nova_compute[226295]: </domain>
Nov 29 02:47:28 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.527 226310 DEBUG nova.virt.libvirt.driver [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.528 226310 DEBUG nova.virt.libvirt.driver [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.528 226310 INFO nova.virt.libvirt.driver [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Using config drive
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.554 226310 DEBUG nova.storage.rbd_utils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] rbd image 7af8a682-77ba-4f0e-a2c8-0d6890734636_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:47:28 np0005539564 systemd[234874]: Queued start job for default target Main User Target.
Nov 29 02:47:28 np0005539564 systemd[234874]: Created slice User Application Slice.
Nov 29 02:47:28 np0005539564 systemd[234874]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 02:47:28 np0005539564 systemd[234874]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 02:47:28 np0005539564 systemd[234874]: Reached target Paths.
Nov 29 02:47:28 np0005539564 systemd[234874]: Reached target Timers.
Nov 29 02:47:28 np0005539564 systemd[234874]: Starting D-Bus User Message Bus Socket...
Nov 29 02:47:28 np0005539564 systemd[234874]: Starting Create User's Volatile Files and Directories...
Nov 29 02:47:28 np0005539564 systemd[234874]: Listening on D-Bus User Message Bus Socket.
Nov 29 02:47:28 np0005539564 systemd[234874]: Reached target Sockets.
Nov 29 02:47:28 np0005539564 systemd[234874]: Finished Create User's Volatile Files and Directories.
Nov 29 02:47:28 np0005539564 systemd[234874]: Reached target Basic System.
Nov 29 02:47:28 np0005539564 systemd[234874]: Reached target Main User Target.
Nov 29 02:47:28 np0005539564 systemd[234874]: Startup finished in 155ms.
Nov 29 02:47:28 np0005539564 systemd[1]: Started User Manager for UID 42436.
Nov 29 02:47:28 np0005539564 systemd[1]: Started Session 51 of User nova.
Nov 29 02:47:28 np0005539564 systemd[1]: session-51.scope: Deactivated successfully.
Nov 29 02:47:28 np0005539564 systemd-logind[785]: Session 51 logged out. Waiting for processes to exit.
Nov 29 02:47:28 np0005539564 systemd-logind[785]: Removed session 51.
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.812 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.820 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.835 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.860 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 02:47:28 np0005539564 nova_compute[226295]: 2025-11-29 07:47:28.861 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:47:28 np0005539564 systemd-logind[785]: New session 53 of user nova.
Nov 29 02:47:28 np0005539564 systemd[1]: Started Session 53 of User nova.
Nov 29 02:47:28 np0005539564 systemd[1]: session-53.scope: Deactivated successfully.
Nov 29 02:47:28 np0005539564 systemd-logind[785]: Session 53 logged out. Waiting for processes to exit.
Nov 29 02:47:28 np0005539564 systemd-logind[785]: Removed session 53.
Nov 29 02:47:29 np0005539564 nova_compute[226295]: 2025-11-29 07:47:29.259 226310 INFO nova.virt.libvirt.driver [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Creating config drive at /var/lib/nova/instances/7af8a682-77ba-4f0e-a2c8-0d6890734636/disk.config
Nov 29 02:47:29 np0005539564 nova_compute[226295]: 2025-11-29 07:47:29.264 226310 DEBUG oslo_concurrency.processutils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7af8a682-77ba-4f0e-a2c8-0d6890734636/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpda9ffwas execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:47:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:29.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:29 np0005539564 nova_compute[226295]: 2025-11-29 07:47:29.405 226310 DEBUG oslo_concurrency.processutils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7af8a682-77ba-4f0e-a2c8-0d6890734636/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpda9ffwas" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:47:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:29.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:29 np0005539564 nova_compute[226295]: 2025-11-29 07:47:29.834 226310 DEBUG nova.storage.rbd_utils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] rbd image 7af8a682-77ba-4f0e-a2c8-0d6890734636_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:47:29 np0005539564 nova_compute[226295]: 2025-11-29 07:47:29.839 226310 DEBUG oslo_concurrency.processutils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7af8a682-77ba-4f0e-a2c8-0d6890734636/disk.config 7af8a682-77ba-4f0e-a2c8-0d6890734636_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:47:29 np0005539564 nova_compute[226295]: 2025-11-29 07:47:29.884 226310 INFO nova.virt.libvirt.driver [None req-4a32b99c-908e-4c00-b717-e11c374b847a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Deleting instance files /var/lib/nova/instances/15412333-68f0-43a8-b114-af3dde30be2a_del
Nov 29 02:47:29 np0005539564 nova_compute[226295]: 2025-11-29 07:47:29.886 226310 INFO nova.virt.libvirt.driver [None req-4a32b99c-908e-4c00-b717-e11c374b847a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Deletion of /var/lib/nova/instances/15412333-68f0-43a8-b114-af3dde30be2a_del complete
Nov 29 02:47:29 np0005539564 nova_compute[226295]: 2025-11-29 07:47:29.964 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:47:29 np0005539564 nova_compute[226295]: 2025-11-29 07:47:29.983 226310 INFO nova.compute.manager [None req-4a32b99c-908e-4c00-b717-e11c374b847a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Took 2.07 seconds to destroy the instance on the hypervisor.
Nov 29 02:47:29 np0005539564 nova_compute[226295]: 2025-11-29 07:47:29.984 226310 DEBUG oslo.service.loopingcall [None req-4a32b99c-908e-4c00-b717-e11c374b847a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 02:47:29 np0005539564 nova_compute[226295]: 2025-11-29 07:47:29.985 226310 DEBUG nova.compute.manager [-] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 02:47:29 np0005539564 nova_compute[226295]: 2025-11-29 07:47:29.985 226310 DEBUG nova.network.neutron [-] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 02:47:30 np0005539564 nova_compute[226295]: 2025-11-29 07:47:30.355 226310 DEBUG nova.compute.manager [req-02468f94-6e14-4355-82e6-2a02cf310c8b req-da336baa-4a0d-44be-ac54-22b5d78ebf29 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Received event network-vif-plugged-04aab360-dfb8-4a22-be37-eeec9d964e92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:47:30 np0005539564 nova_compute[226295]: 2025-11-29 07:47:30.356 226310 DEBUG oslo_concurrency.lockutils [req-02468f94-6e14-4355-82e6-2a02cf310c8b req-da336baa-4a0d-44be-ac54-22b5d78ebf29 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "15412333-68f0-43a8-b114-af3dde30be2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:47:30 np0005539564 nova_compute[226295]: 2025-11-29 07:47:30.357 226310 DEBUG oslo_concurrency.lockutils [req-02468f94-6e14-4355-82e6-2a02cf310c8b req-da336baa-4a0d-44be-ac54-22b5d78ebf29 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "15412333-68f0-43a8-b114-af3dde30be2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:47:30 np0005539564 nova_compute[226295]: 2025-11-29 07:47:30.358 226310 DEBUG oslo_concurrency.lockutils [req-02468f94-6e14-4355-82e6-2a02cf310c8b req-da336baa-4a0d-44be-ac54-22b5d78ebf29 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "15412333-68f0-43a8-b114-af3dde30be2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:47:30 np0005539564 nova_compute[226295]: 2025-11-29 07:47:30.358 226310 DEBUG nova.compute.manager [req-02468f94-6e14-4355-82e6-2a02cf310c8b req-da336baa-4a0d-44be-ac54-22b5d78ebf29 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] No waiting events found dispatching network-vif-plugged-04aab360-dfb8-4a22-be37-eeec9d964e92 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:47:30 np0005539564 nova_compute[226295]: 2025-11-29 07:47:30.359 226310 WARNING nova.compute.manager [req-02468f94-6e14-4355-82e6-2a02cf310c8b req-da336baa-4a0d-44be-ac54-22b5d78ebf29 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Received unexpected event network-vif-plugged-04aab360-dfb8-4a22-be37-eeec9d964e92 for instance with vm_state active and task_state deleting.
Nov 29 02:47:31 np0005539564 nova_compute[226295]: 2025-11-29 07:47:31.149 226310 DEBUG oslo_concurrency.processutils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7af8a682-77ba-4f0e-a2c8-0d6890734636/disk.config 7af8a682-77ba-4f0e-a2c8-0d6890734636_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:47:31 np0005539564 nova_compute[226295]: 2025-11-29 07:47:31.150 226310 INFO nova.virt.libvirt.driver [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Deleting local config drive /var/lib/nova/instances/7af8a682-77ba-4f0e-a2c8-0d6890734636/disk.config because it was imported into RBD.
Nov 29 02:47:31 np0005539564 systemd-machined[190128]: New machine qemu-7-instance-00000011.
Nov 29 02:47:31 np0005539564 systemd[1]: Started Virtual Machine qemu-7-instance-00000011.
Nov 29 02:47:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:31.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:31.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:31 np0005539564 nova_compute[226295]: 2025-11-29 07:47:31.867 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402451.8669858, 7af8a682-77ba-4f0e-a2c8-0d6890734636 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:47:31 np0005539564 nova_compute[226295]: 2025-11-29 07:47:31.868 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] VM Resumed (Lifecycle Event)
Nov 29 02:47:31 np0005539564 nova_compute[226295]: 2025-11-29 07:47:31.872 226310 DEBUG nova.compute.manager [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 02:47:31 np0005539564 nova_compute[226295]: 2025-11-29 07:47:31.872 226310 DEBUG nova.virt.libvirt.driver [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 02:47:31 np0005539564 nova_compute[226295]: 2025-11-29 07:47:31.877 226310 INFO nova.virt.libvirt.driver [-] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Instance spawned successfully.
Nov 29 02:47:31 np0005539564 nova_compute[226295]: 2025-11-29 07:47:31.877 226310 DEBUG nova.virt.libvirt.driver [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 02:47:32 np0005539564 nova_compute[226295]: 2025-11-29 07:47:32.048 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:47:32 np0005539564 nova_compute[226295]: 2025-11-29 07:47:32.054 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:47:32 np0005539564 nova_compute[226295]: 2025-11-29 07:47:32.058 226310 DEBUG nova.virt.libvirt.driver [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:47:32 np0005539564 nova_compute[226295]: 2025-11-29 07:47:32.058 226310 DEBUG nova.virt.libvirt.driver [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:47:32 np0005539564 nova_compute[226295]: 2025-11-29 07:47:32.058 226310 DEBUG nova.virt.libvirt.driver [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:47:32 np0005539564 nova_compute[226295]: 2025-11-29 07:47:32.059 226310 DEBUG nova.virt.libvirt.driver [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:47:32 np0005539564 nova_compute[226295]: 2025-11-29 07:47:32.059 226310 DEBUG nova.virt.libvirt.driver [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:47:32 np0005539564 nova_compute[226295]: 2025-11-29 07:47:32.060 226310 DEBUG nova.virt.libvirt.driver [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:47:32 np0005539564 nova_compute[226295]: 2025-11-29 07:47:32.091 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:47:32 np0005539564 nova_compute[226295]: 2025-11-29 07:47:32.092 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402451.8681955, 7af8a682-77ba-4f0e-a2c8-0d6890734636 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:47:32 np0005539564 nova_compute[226295]: 2025-11-29 07:47:32.092 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] VM Started (Lifecycle Event)#033[00m
Nov 29 02:47:32 np0005539564 nova_compute[226295]: 2025-11-29 07:47:32.118 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:47:32 np0005539564 nova_compute[226295]: 2025-11-29 07:47:32.127 226310 INFO nova.compute.manager [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Took 5.71 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:47:32 np0005539564 nova_compute[226295]: 2025-11-29 07:47:32.128 226310 DEBUG nova.compute.manager [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:47:32 np0005539564 nova_compute[226295]: 2025-11-29 07:47:32.129 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:47:32 np0005539564 nova_compute[226295]: 2025-11-29 07:47:32.161 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:47:32 np0005539564 nova_compute[226295]: 2025-11-29 07:47:32.191 226310 INFO nova.compute.manager [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Took 7.25 seconds to build instance.#033[00m
Nov 29 02:47:32 np0005539564 nova_compute[226295]: 2025-11-29 07:47:32.210 226310 DEBUG oslo_concurrency.lockutils [None req-4d466942-1447-44c3-83f6-5eccd2a8745b a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Lock "7af8a682-77ba-4f0e-a2c8-0d6890734636" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.341s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:32 np0005539564 nova_compute[226295]: 2025-11-29 07:47:32.539 226310 DEBUG nova.network.neutron [-] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:47:32 np0005539564 nova_compute[226295]: 2025-11-29 07:47:32.559 226310 INFO nova.compute.manager [-] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Took 2.57 seconds to deallocate network for instance.#033[00m
Nov 29 02:47:32 np0005539564 nova_compute[226295]: 2025-11-29 07:47:32.604 226310 DEBUG oslo_concurrency.lockutils [None req-4a32b99c-908e-4c00-b717-e11c374b847a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:32 np0005539564 nova_compute[226295]: 2025-11-29 07:47:32.605 226310 DEBUG oslo_concurrency.lockutils [None req-4a32b99c-908e-4c00-b717-e11c374b847a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:32 np0005539564 nova_compute[226295]: 2025-11-29 07:47:32.655 226310 DEBUG nova.compute.manager [req-3312017c-551b-453c-b611-5a33de4111f1 req-60ed5445-5ee4-4d82-a877-09c3a75544dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Received event network-vif-deleted-04aab360-dfb8-4a22-be37-eeec9d964e92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:47:32 np0005539564 nova_compute[226295]: 2025-11-29 07:47:32.706 226310 DEBUG oslo_concurrency.processutils [None req-4a32b99c-908e-4c00-b717-e11c374b847a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:47:33 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4147426160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:47:33 np0005539564 nova_compute[226295]: 2025-11-29 07:47:33.165 226310 DEBUG oslo_concurrency.processutils [None req-4a32b99c-908e-4c00-b717-e11c374b847a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:33 np0005539564 nova_compute[226295]: 2025-11-29 07:47:33.172 226310 DEBUG nova.compute.provider_tree [None req-4a32b99c-908e-4c00-b717-e11c374b847a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:47:33 np0005539564 nova_compute[226295]: 2025-11-29 07:47:33.181 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:33 np0005539564 nova_compute[226295]: 2025-11-29 07:47:33.196 226310 DEBUG nova.scheduler.client.report [None req-4a32b99c-908e-4c00-b717-e11c374b847a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:47:33 np0005539564 nova_compute[226295]: 2025-11-29 07:47:33.219 226310 DEBUG oslo_concurrency.lockutils [None req-4a32b99c-908e-4c00-b717-e11c374b847a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:33 np0005539564 nova_compute[226295]: 2025-11-29 07:47:33.246 226310 INFO nova.scheduler.client.report [None req-4a32b99c-908e-4c00-b717-e11c374b847a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Deleted allocations for instance 15412333-68f0-43a8-b114-af3dde30be2a#033[00m
Nov 29 02:47:33 np0005539564 nova_compute[226295]: 2025-11-29 07:47:33.339 226310 DEBUG oslo_concurrency.lockutils [None req-4a32b99c-908e-4c00-b717-e11c374b847a d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "15412333-68f0-43a8-b114-af3dde30be2a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.433s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:33.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:33.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:33 np0005539564 nova_compute[226295]: 2025-11-29 07:47:33.692 226310 DEBUG nova.compute.manager [None req-021fa0f5-8b7e-487d-af52-99633f1540f1 07f15449c8bd4f0283216ce726f1ef1b 96c6081f38b447948bab3c4e2e2c08b8 - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:47:33 np0005539564 nova_compute[226295]: 2025-11-29 07:47:33.696 226310 INFO nova.compute.manager [None req-021fa0f5-8b7e-487d-af52-99633f1540f1 07f15449c8bd4f0283216ce726f1ef1b 96c6081f38b447948bab3c4e2e2c08b8 - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Retrieving diagnostics#033[00m
Nov 29 02:47:34 np0005539564 nova_compute[226295]: 2025-11-29 07:47:34.967 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:35.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:35.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:37.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:37.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:38 np0005539564 nova_compute[226295]: 2025-11-29 07:47:38.183 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:39 np0005539564 systemd[1]: Stopping User Manager for UID 42436...
Nov 29 02:47:39 np0005539564 systemd[234874]: Activating special unit Exit the Session...
Nov 29 02:47:39 np0005539564 systemd[234874]: Stopped target Main User Target.
Nov 29 02:47:39 np0005539564 systemd[234874]: Stopped target Basic System.
Nov 29 02:47:39 np0005539564 systemd[234874]: Stopped target Paths.
Nov 29 02:47:39 np0005539564 systemd[234874]: Stopped target Sockets.
Nov 29 02:47:39 np0005539564 systemd[234874]: Stopped target Timers.
Nov 29 02:47:39 np0005539564 systemd[234874]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 02:47:39 np0005539564 systemd[234874]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 02:47:39 np0005539564 systemd[234874]: Closed D-Bus User Message Bus Socket.
Nov 29 02:47:39 np0005539564 systemd[234874]: Stopped Create User's Volatile Files and Directories.
Nov 29 02:47:39 np0005539564 systemd[234874]: Removed slice User Application Slice.
Nov 29 02:47:39 np0005539564 systemd[234874]: Reached target Shutdown.
Nov 29 02:47:39 np0005539564 systemd[234874]: Finished Exit the Session.
Nov 29 02:47:39 np0005539564 systemd[234874]: Reached target Exit the Session.
Nov 29 02:47:39 np0005539564 systemd[1]: user@42436.service: Deactivated successfully.
Nov 29 02:47:39 np0005539564 systemd[1]: Stopped User Manager for UID 42436.
Nov 29 02:47:39 np0005539564 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 29 02:47:39 np0005539564 systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 29 02:47:39 np0005539564 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 29 02:47:39 np0005539564 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 29 02:47:39 np0005539564 systemd[1]: Removed slice User Slice of UID 42436.
Nov 29 02:47:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:39.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:39.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:40 np0005539564 nova_compute[226295]: 2025-11-29 07:47:40.011 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:41.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:41.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:43 np0005539564 nova_compute[226295]: 2025-11-29 07:47:43.152 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402448.1505153, 15412333-68f0-43a8-b114-af3dde30be2a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:47:43 np0005539564 nova_compute[226295]: 2025-11-29 07:47:43.153 226310 INFO nova.compute.manager [-] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:47:43 np0005539564 nova_compute[226295]: 2025-11-29 07:47:43.176 226310 DEBUG nova.compute.manager [None req-21a4266b-8e58-4447-83e6-405d65e4a540 - - - - - -] [instance: 15412333-68f0-43a8-b114-af3dde30be2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:47:43 np0005539564 nova_compute[226295]: 2025-11-29 07:47:43.185 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:43.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:43.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:43 np0005539564 nova_compute[226295]: 2025-11-29 07:47:43.834 226310 DEBUG nova.compute.manager [None req-050cdab3-f3dd-4a1f-a25e-16c17a57389b 07f15449c8bd4f0283216ce726f1ef1b 96c6081f38b447948bab3c4e2e2c08b8 - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:47:43 np0005539564 nova_compute[226295]: 2025-11-29 07:47:43.839 226310 INFO nova.compute.manager [None req-050cdab3-f3dd-4a1f-a25e-16c17a57389b 07f15449c8bd4f0283216ce726f1ef1b 96c6081f38b447948bab3c4e2e2c08b8 - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Retrieving diagnostics#033[00m
Nov 29 02:47:44 np0005539564 nova_compute[226295]: 2025-11-29 07:47:44.038 226310 DEBUG oslo_concurrency.lockutils [None req-e8134346-177d-4eb6-a6dd-4f89de08672f a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Acquiring lock "7af8a682-77ba-4f0e-a2c8-0d6890734636" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:44 np0005539564 nova_compute[226295]: 2025-11-29 07:47:44.039 226310 DEBUG oslo_concurrency.lockutils [None req-e8134346-177d-4eb6-a6dd-4f89de08672f a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Lock "7af8a682-77ba-4f0e-a2c8-0d6890734636" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:44 np0005539564 nova_compute[226295]: 2025-11-29 07:47:44.039 226310 DEBUG oslo_concurrency.lockutils [None req-e8134346-177d-4eb6-a6dd-4f89de08672f a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Acquiring lock "7af8a682-77ba-4f0e-a2c8-0d6890734636-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:44 np0005539564 nova_compute[226295]: 2025-11-29 07:47:44.039 226310 DEBUG oslo_concurrency.lockutils [None req-e8134346-177d-4eb6-a6dd-4f89de08672f a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Lock "7af8a682-77ba-4f0e-a2c8-0d6890734636-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:44 np0005539564 nova_compute[226295]: 2025-11-29 07:47:44.040 226310 DEBUG oslo_concurrency.lockutils [None req-e8134346-177d-4eb6-a6dd-4f89de08672f a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Lock "7af8a682-77ba-4f0e-a2c8-0d6890734636-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:44 np0005539564 nova_compute[226295]: 2025-11-29 07:47:44.041 226310 INFO nova.compute.manager [None req-e8134346-177d-4eb6-a6dd-4f89de08672f a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Terminating instance#033[00m
Nov 29 02:47:44 np0005539564 nova_compute[226295]: 2025-11-29 07:47:44.042 226310 DEBUG oslo_concurrency.lockutils [None req-e8134346-177d-4eb6-a6dd-4f89de08672f a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Acquiring lock "refresh_cache-7af8a682-77ba-4f0e-a2c8-0d6890734636" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:47:44 np0005539564 nova_compute[226295]: 2025-11-29 07:47:44.043 226310 DEBUG oslo_concurrency.lockutils [None req-e8134346-177d-4eb6-a6dd-4f89de08672f a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Acquired lock "refresh_cache-7af8a682-77ba-4f0e-a2c8-0d6890734636" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:47:44 np0005539564 nova_compute[226295]: 2025-11-29 07:47:44.043 226310 DEBUG nova.network.neutron [None req-e8134346-177d-4eb6-a6dd-4f89de08672f a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:47:44 np0005539564 nova_compute[226295]: 2025-11-29 07:47:44.329 226310 DEBUG nova.network.neutron [None req-e8134346-177d-4eb6-a6dd-4f89de08672f a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:47:44 np0005539564 nova_compute[226295]: 2025-11-29 07:47:44.786 226310 DEBUG oslo_concurrency.lockutils [None req-3c6f37c3-31f2-45b4-af70-c46e800c6652 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "22e7eaf7-aac4-4748-90df-04afd0ea7376" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:44 np0005539564 nova_compute[226295]: 2025-11-29 07:47:44.787 226310 DEBUG oslo_concurrency.lockutils [None req-3c6f37c3-31f2-45b4-af70-c46e800c6652 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:44 np0005539564 nova_compute[226295]: 2025-11-29 07:47:44.787 226310 DEBUG oslo_concurrency.lockutils [None req-3c6f37c3-31f2-45b4-af70-c46e800c6652 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:44 np0005539564 nova_compute[226295]: 2025-11-29 07:47:44.787 226310 DEBUG oslo_concurrency.lockutils [None req-3c6f37c3-31f2-45b4-af70-c46e800c6652 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:44 np0005539564 nova_compute[226295]: 2025-11-29 07:47:44.788 226310 DEBUG oslo_concurrency.lockutils [None req-3c6f37c3-31f2-45b4-af70-c46e800c6652 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:44 np0005539564 nova_compute[226295]: 2025-11-29 07:47:44.789 226310 INFO nova.compute.manager [None req-3c6f37c3-31f2-45b4-af70-c46e800c6652 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Terminating instance#033[00m
Nov 29 02:47:44 np0005539564 nova_compute[226295]: 2025-11-29 07:47:44.790 226310 DEBUG nova.compute.manager [None req-3c6f37c3-31f2-45b4-af70-c46e800c6652 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:47:44 np0005539564 kernel: tap6a7b5e5f-28 (unregistering): left promiscuous mode
Nov 29 02:47:44 np0005539564 NetworkManager[48997]: <info>  [1764402464.8851] device (tap6a7b5e5f-28): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:47:44 np0005539564 nova_compute[226295]: 2025-11-29 07:47:44.897 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:44 np0005539564 ovn_controller[130591]: 2025-11-29T07:47:44Z|00075|binding|INFO|Releasing lport 6a7b5e5f-28d1-488d-8877-073bc5493a93 from this chassis (sb_readonly=0)
Nov 29 02:47:44 np0005539564 ovn_controller[130591]: 2025-11-29T07:47:44Z|00076|binding|INFO|Setting lport 6a7b5e5f-28d1-488d-8877-073bc5493a93 down in Southbound
Nov 29 02:47:44 np0005539564 ovn_controller[130591]: 2025-11-29T07:47:44Z|00077|binding|INFO|Removing iface tap6a7b5e5f-28 ovn-installed in OVS
Nov 29 02:47:44 np0005539564 nova_compute[226295]: 2025-11-29 07:47:44.903 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:44 np0005539564 nova_compute[226295]: 2025-11-29 07:47:44.919 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:44.954 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:59:71 10.100.0.5'], port_security=['fa:16:3e:66:59:71 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '22e7eaf7-aac4-4748-90df-04afd0ea7376', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96ea84545e71401fb69d21be6e2472f7', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'b094dd4e-cb76-48e4-81b4-a11d19d5f956', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=baaadbdd-7935-4514-9332-391647ab6336, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=6a7b5e5f-28d1-488d-8877-073bc5493a93) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:47:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:44.956 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 6a7b5e5f-28d1-488d-8877-073bc5493a93 in datapath 788595a6-8f3f-45f7-807d-f88c9bf0e050 unbound from our chassis#033[00m
Nov 29 02:47:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:44.957 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 788595a6-8f3f-45f7-807d-f88c9bf0e050, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:47:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:44.959 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[060dd3c4-b6b6-4b1e-9b55-4d3bd6c8bfe3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:44.959 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050 namespace which is not needed anymore#033[00m
Nov 29 02:47:44 np0005539564 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 29 02:47:44 np0005539564 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000009.scope: Consumed 14.548s CPU time.
Nov 29 02:47:44 np0005539564 systemd-machined[190128]: Machine qemu-6-instance-00000009 terminated.
Nov 29 02:47:45 np0005539564 nova_compute[226295]: 2025-11-29 07:47:45.017 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:45 np0005539564 nova_compute[226295]: 2025-11-29 07:47:45.026 226310 INFO nova.virt.libvirt.driver [-] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Instance destroyed successfully.#033[00m
Nov 29 02:47:45 np0005539564 nova_compute[226295]: 2025-11-29 07:47:45.026 226310 DEBUG nova.objects.instance [None req-3c6f37c3-31f2-45b4-af70-c46e800c6652 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'resources' on Instance uuid 22e7eaf7-aac4-4748-90df-04afd0ea7376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:47:45 np0005539564 nova_compute[226295]: 2025-11-29 07:47:45.177 226310 DEBUG nova.virt.libvirt.vif [None req-3c6f37c3-31f2-45b4-af70-c46e800c6652 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:44:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1240733035',display_name='tempest-ServersAdminTestJSON-server-1240733035',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1240733035',id=9,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:47:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='96ea84545e71401fb69d21be6e2472f7',ramdisk_id='',reservation_id='r-22jb6edg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='
virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1807764482',owner_user_name='tempest-ServersAdminTestJSON-1807764482-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:47:14Z,user_data=None,user_id='d94c707cca604d72a8e1d49b636095e1',uuid=22e7eaf7-aac4-4748-90df-04afd0ea7376,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "address": "fa:16:3e:66:59:71", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a7b5e5f-28", "ovs_interfaceid": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:47:45 np0005539564 nova_compute[226295]: 2025-11-29 07:47:45.178 226310 DEBUG nova.network.os_vif_util [None req-3c6f37c3-31f2-45b4-af70-c46e800c6652 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converting VIF {"id": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "address": "fa:16:3e:66:59:71", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a7b5e5f-28", "ovs_interfaceid": "6a7b5e5f-28d1-488d-8877-073bc5493a93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:47:45 np0005539564 nova_compute[226295]: 2025-11-29 07:47:45.179 226310 DEBUG nova.network.os_vif_util [None req-3c6f37c3-31f2-45b4-af70-c46e800c6652 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:66:59:71,bridge_name='br-int',has_traffic_filtering=True,id=6a7b5e5f-28d1-488d-8877-073bc5493a93,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a7b5e5f-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:47:45 np0005539564 nova_compute[226295]: 2025-11-29 07:47:45.180 226310 DEBUG os_vif [None req-3c6f37c3-31f2-45b4-af70-c46e800c6652 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:59:71,bridge_name='br-int',has_traffic_filtering=True,id=6a7b5e5f-28d1-488d-8877-073bc5493a93,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a7b5e5f-28') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:47:45 np0005539564 nova_compute[226295]: 2025-11-29 07:47:45.181 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:45 np0005539564 nova_compute[226295]: 2025-11-29 07:47:45.182 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a7b5e5f-28, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:45 np0005539564 nova_compute[226295]: 2025-11-29 07:47:45.183 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:45 np0005539564 nova_compute[226295]: 2025-11-29 07:47:45.184 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:45 np0005539564 nova_compute[226295]: 2025-11-29 07:47:45.187 226310 INFO os_vif [None req-3c6f37c3-31f2-45b4-af70-c46e800c6652 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:59:71,bridge_name='br-int',has_traffic_filtering=True,id=6a7b5e5f-28d1-488d-8877-073bc5493a93,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a7b5e5f-28')#033[00m
Nov 29 02:47:45 np0005539564 nova_compute[226295]: 2025-11-29 07:47:45.313 226310 DEBUG nova.network.neutron [None req-e8134346-177d-4eb6-a6dd-4f89de08672f a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:47:45 np0005539564 nova_compute[226295]: 2025-11-29 07:47:45.329 226310 DEBUG oslo_concurrency.lockutils [None req-e8134346-177d-4eb6-a6dd-4f89de08672f a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Releasing lock "refresh_cache-7af8a682-77ba-4f0e-a2c8-0d6890734636" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:47:45 np0005539564 nova_compute[226295]: 2025-11-29 07:47:45.330 226310 DEBUG nova.compute.manager [None req-e8134346-177d-4eb6-a6dd-4f89de08672f a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:47:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:45.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:45.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:45 np0005539564 neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050[232739]: [NOTICE]   (232743) : haproxy version is 2.8.14-c23fe91
Nov 29 02:47:45 np0005539564 neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050[232739]: [NOTICE]   (232743) : path to executable is /usr/sbin/haproxy
Nov 29 02:47:45 np0005539564 neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050[232739]: [WARNING]  (232743) : Exiting Master process...
Nov 29 02:47:45 np0005539564 neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050[232739]: [ALERT]    (232743) : Current worker (232745) exited with code 143 (Terminated)
Nov 29 02:47:45 np0005539564 neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050[232739]: [WARNING]  (232743) : All workers exited. Exiting... (0)
Nov 29 02:47:45 np0005539564 systemd[1]: libpod-889a1b37b22bca5fbf21bad0edf5e7698cd952d7ad5158dbb698ea93904ae8a5.scope: Deactivated successfully.
Nov 29 02:47:45 np0005539564 podman[235095]: 2025-11-29 07:47:45.625860612 +0000 UTC m=+0.553267483 container died 889a1b37b22bca5fbf21bad0edf5e7698cd952d7ad5158dbb698ea93904ae8a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 02:47:45 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-889a1b37b22bca5fbf21bad0edf5e7698cd952d7ad5158dbb698ea93904ae8a5-userdata-shm.mount: Deactivated successfully.
Nov 29 02:47:45 np0005539564 systemd[1]: var-lib-containers-storage-overlay-922ca8cd455975622217533ffbcab3cf4b1abd0399d09c00eab8dc44d01ee6cf-merged.mount: Deactivated successfully.
Nov 29 02:47:45 np0005539564 podman[235095]: 2025-11-29 07:47:45.713391025 +0000 UTC m=+0.640797866 container cleanup 889a1b37b22bca5fbf21bad0edf5e7698cd952d7ad5158dbb698ea93904ae8a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:47:45 np0005539564 systemd[1]: libpod-conmon-889a1b37b22bca5fbf21bad0edf5e7698cd952d7ad5158dbb698ea93904ae8a5.scope: Deactivated successfully.
Nov 29 02:47:45 np0005539564 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000011.scope: Deactivated successfully.
Nov 29 02:47:45 np0005539564 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000011.scope: Consumed 12.950s CPU time.
Nov 29 02:47:45 np0005539564 systemd-machined[190128]: Machine qemu-7-instance-00000011 terminated.
Nov 29 02:47:45 np0005539564 nova_compute[226295]: 2025-11-29 07:47:45.769 226310 DEBUG nova.compute.manager [req-e992e579-0338-4771-8b50-975d7a533037 req-86479bd6-63c8-4497-a6fa-b0cab27a3cad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Received event network-vif-unplugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:47:45 np0005539564 nova_compute[226295]: 2025-11-29 07:47:45.770 226310 DEBUG oslo_concurrency.lockutils [req-e992e579-0338-4771-8b50-975d7a533037 req-86479bd6-63c8-4497-a6fa-b0cab27a3cad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:45 np0005539564 nova_compute[226295]: 2025-11-29 07:47:45.770 226310 DEBUG oslo_concurrency.lockutils [req-e992e579-0338-4771-8b50-975d7a533037 req-86479bd6-63c8-4497-a6fa-b0cab27a3cad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:45 np0005539564 nova_compute[226295]: 2025-11-29 07:47:45.771 226310 DEBUG oslo_concurrency.lockutils [req-e992e579-0338-4771-8b50-975d7a533037 req-86479bd6-63c8-4497-a6fa-b0cab27a3cad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:45 np0005539564 nova_compute[226295]: 2025-11-29 07:47:45.771 226310 DEBUG nova.compute.manager [req-e992e579-0338-4771-8b50-975d7a533037 req-86479bd6-63c8-4497-a6fa-b0cab27a3cad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] No waiting events found dispatching network-vif-unplugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:47:45 np0005539564 nova_compute[226295]: 2025-11-29 07:47:45.772 226310 DEBUG nova.compute.manager [req-e992e579-0338-4771-8b50-975d7a533037 req-86479bd6-63c8-4497-a6fa-b0cab27a3cad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Received event network-vif-unplugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:47:45 np0005539564 podman[235142]: 2025-11-29 07:47:45.791334718 +0000 UTC m=+0.051861957 container remove 889a1b37b22bca5fbf21bad0edf5e7698cd952d7ad5158dbb698ea93904ae8a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 02:47:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:45.796 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c3edbadf-0a26-4dfd-bde7-801a31d0e82b]: (4, ('Sat Nov 29 07:47:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050 (889a1b37b22bca5fbf21bad0edf5e7698cd952d7ad5158dbb698ea93904ae8a5)\n889a1b37b22bca5fbf21bad0edf5e7698cd952d7ad5158dbb698ea93904ae8a5\nSat Nov 29 07:47:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050 (889a1b37b22bca5fbf21bad0edf5e7698cd952d7ad5158dbb698ea93904ae8a5)\n889a1b37b22bca5fbf21bad0edf5e7698cd952d7ad5158dbb698ea93904ae8a5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:45.798 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7d75c803-0c2b-4a71-8a5d-6f941f1ced0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:45.798 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap788595a6-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:45 np0005539564 nova_compute[226295]: 2025-11-29 07:47:45.800 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:45 np0005539564 kernel: tap788595a6-80: left promiscuous mode
Nov 29 02:47:45 np0005539564 nova_compute[226295]: 2025-11-29 07:47:45.813 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:45 np0005539564 nova_compute[226295]: 2025-11-29 07:47:45.814 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:45.817 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d328f066-5d80-4a6f-bee0-c277d22fc65c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:45.829 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3550f30e-e2eb-4cfd-86ca-d6f5976809dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:45.830 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e685a875-7dce-4a07-93a1-76b4487716f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:45.845 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e589748b-f4cf-454b-9cc9-c2f1d8f5e232]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530129, 'reachable_time': 17587, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235157, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:45.847 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:47:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:45.847 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[607defa7-42bb-49cf-b4e8-f1e4198cf8ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:45 np0005539564 systemd[1]: run-netns-ovnmeta\x2d788595a6\x2d8f3f\x2d45f7\x2d807d\x2df88c9bf0e050.mount: Deactivated successfully.
Nov 29 02:47:45 np0005539564 nova_compute[226295]: 2025-11-29 07:47:45.952 226310 INFO nova.virt.libvirt.driver [-] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Instance destroyed successfully.#033[00m
Nov 29 02:47:45 np0005539564 nova_compute[226295]: 2025-11-29 07:47:45.952 226310 DEBUG nova.objects.instance [None req-e8134346-177d-4eb6-a6dd-4f89de08672f a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Lazy-loading 'resources' on Instance uuid 7af8a682-77ba-4f0e-a2c8-0d6890734636 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:47:46 np0005539564 nova_compute[226295]: 2025-11-29 07:47:46.106 226310 INFO nova.virt.libvirt.driver [None req-3c6f37c3-31f2-45b4-af70-c46e800c6652 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Deleting instance files /var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376_del
Nov 29 02:47:46 np0005539564 nova_compute[226295]: 2025-11-29 07:47:46.107 226310 INFO nova.virt.libvirt.driver [None req-3c6f37c3-31f2-45b4-af70-c46e800c6652 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Deletion of /var/lib/nova/instances/22e7eaf7-aac4-4748-90df-04afd0ea7376_del complete
Nov 29 02:47:46 np0005539564 nova_compute[226295]: 2025-11-29 07:47:46.174 226310 INFO nova.compute.manager [None req-3c6f37c3-31f2-45b4-af70-c46e800c6652 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Took 1.38 seconds to destroy the instance on the hypervisor.
Nov 29 02:47:46 np0005539564 nova_compute[226295]: 2025-11-29 07:47:46.175 226310 DEBUG oslo.service.loopingcall [None req-3c6f37c3-31f2-45b4-af70-c46e800c6652 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 02:47:46 np0005539564 nova_compute[226295]: 2025-11-29 07:47:46.175 226310 DEBUG nova.compute.manager [-] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 02:47:46 np0005539564 nova_compute[226295]: 2025-11-29 07:47:46.176 226310 DEBUG nova.network.neutron [-] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 02:47:46 np0005539564 nova_compute[226295]: 2025-11-29 07:47:46.512 226310 INFO nova.virt.libvirt.driver [None req-e8134346-177d-4eb6-a6dd-4f89de08672f a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Deleting instance files /var/lib/nova/instances/7af8a682-77ba-4f0e-a2c8-0d6890734636_del
Nov 29 02:47:46 np0005539564 nova_compute[226295]: 2025-11-29 07:47:46.514 226310 INFO nova.virt.libvirt.driver [None req-e8134346-177d-4eb6-a6dd-4f89de08672f a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Deletion of /var/lib/nova/instances/7af8a682-77ba-4f0e-a2c8-0d6890734636_del complete
Nov 29 02:47:46 np0005539564 nova_compute[226295]: 2025-11-29 07:47:46.578 226310 INFO nova.compute.manager [None req-e8134346-177d-4eb6-a6dd-4f89de08672f a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Took 1.25 seconds to destroy the instance on the hypervisor.
Nov 29 02:47:46 np0005539564 nova_compute[226295]: 2025-11-29 07:47:46.580 226310 DEBUG oslo.service.loopingcall [None req-e8134346-177d-4eb6-a6dd-4f89de08672f a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 02:47:46 np0005539564 nova_compute[226295]: 2025-11-29 07:47:46.580 226310 DEBUG nova.compute.manager [-] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 02:47:46 np0005539564 nova_compute[226295]: 2025-11-29 07:47:46.580 226310 DEBUG nova.network.neutron [-] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 02:47:46 np0005539564 nova_compute[226295]: 2025-11-29 07:47:46.771 226310 DEBUG nova.network.neutron [-] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:47:46 np0005539564 nova_compute[226295]: 2025-11-29 07:47:46.794 226310 DEBUG nova.network.neutron [-] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:47:46 np0005539564 nova_compute[226295]: 2025-11-29 07:47:46.803 226310 DEBUG nova.network.neutron [-] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:47:46 np0005539564 nova_compute[226295]: 2025-11-29 07:47:46.819 226310 INFO nova.compute.manager [-] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Took 0.24 seconds to deallocate network for instance.
Nov 29 02:47:46 np0005539564 nova_compute[226295]: 2025-11-29 07:47:46.824 226310 INFO nova.compute.manager [-] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Took 0.65 seconds to deallocate network for instance.
Nov 29 02:47:46 np0005539564 nova_compute[226295]: 2025-11-29 07:47:46.881 226310 DEBUG oslo_concurrency.lockutils [None req-e8134346-177d-4eb6-a6dd-4f89de08672f a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:47:46 np0005539564 nova_compute[226295]: 2025-11-29 07:47:46.882 226310 DEBUG oslo_concurrency.lockutils [None req-e8134346-177d-4eb6-a6dd-4f89de08672f a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:47:46 np0005539564 nova_compute[226295]: 2025-11-29 07:47:46.902 226310 DEBUG oslo_concurrency.lockutils [None req-3c6f37c3-31f2-45b4-af70-c46e800c6652 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:47:46 np0005539564 nova_compute[226295]: 2025-11-29 07:47:46.989 226310 DEBUG oslo_concurrency.processutils [None req-e8134346-177d-4eb6-a6dd-4f89de08672f a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:47:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:47.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:47:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:47 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/346466577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:47:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:47.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:47 np0005539564 nova_compute[226295]: 2025-11-29 07:47:47.469 226310 DEBUG oslo_concurrency.processutils [None req-e8134346-177d-4eb6-a6dd-4f89de08672f a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:47:47 np0005539564 nova_compute[226295]: 2025-11-29 07:47:47.476 226310 DEBUG nova.compute.provider_tree [None req-e8134346-177d-4eb6-a6dd-4f89de08672f a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:47:47 np0005539564 nova_compute[226295]: 2025-11-29 07:47:47.515 226310 DEBUG nova.scheduler.client.report [None req-e8134346-177d-4eb6-a6dd-4f89de08672f a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:47:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:48 np0005539564 nova_compute[226295]: 2025-11-29 07:47:48.070 226310 DEBUG oslo_concurrency.lockutils [None req-e8134346-177d-4eb6-a6dd-4f89de08672f a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:47:48 np0005539564 nova_compute[226295]: 2025-11-29 07:47:48.074 226310 DEBUG oslo_concurrency.lockutils [None req-3c6f37c3-31f2-45b4-af70-c46e800c6652 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:47:48 np0005539564 nova_compute[226295]: 2025-11-29 07:47:48.206 226310 DEBUG nova.compute.manager [req-ea8f3c27-ac7b-4518-83c6-913047811e21 req-06961c7a-13be-4bac-975b-2710e6b1890d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Received event network-vif-plugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:47:48 np0005539564 nova_compute[226295]: 2025-11-29 07:47:48.207 226310 DEBUG oslo_concurrency.lockutils [req-ea8f3c27-ac7b-4518-83c6-913047811e21 req-06961c7a-13be-4bac-975b-2710e6b1890d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:47:48 np0005539564 nova_compute[226295]: 2025-11-29 07:47:48.208 226310 DEBUG oslo_concurrency.lockutils [req-ea8f3c27-ac7b-4518-83c6-913047811e21 req-06961c7a-13be-4bac-975b-2710e6b1890d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:47:48 np0005539564 nova_compute[226295]: 2025-11-29 07:47:48.208 226310 DEBUG oslo_concurrency.lockutils [req-ea8f3c27-ac7b-4518-83c6-913047811e21 req-06961c7a-13be-4bac-975b-2710e6b1890d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:47:48 np0005539564 nova_compute[226295]: 2025-11-29 07:47:48.209 226310 DEBUG nova.compute.manager [req-ea8f3c27-ac7b-4518-83c6-913047811e21 req-06961c7a-13be-4bac-975b-2710e6b1890d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] No waiting events found dispatching network-vif-plugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:47:48 np0005539564 nova_compute[226295]: 2025-11-29 07:47:48.210 226310 WARNING nova.compute.manager [req-ea8f3c27-ac7b-4518-83c6-913047811e21 req-06961c7a-13be-4bac-975b-2710e6b1890d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Received unexpected event network-vif-plugged-6a7b5e5f-28d1-488d-8877-073bc5493a93 for instance with vm_state deleted and task_state None.
Nov 29 02:47:48 np0005539564 nova_compute[226295]: 2025-11-29 07:47:48.210 226310 DEBUG nova.compute.manager [req-ea8f3c27-ac7b-4518-83c6-913047811e21 req-06961c7a-13be-4bac-975b-2710e6b1890d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Received event network-vif-deleted-6a7b5e5f-28d1-488d-8877-073bc5493a93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:47:48 np0005539564 nova_compute[226295]: 2025-11-29 07:47:48.221 226310 DEBUG oslo_concurrency.processutils [None req-3c6f37c3-31f2-45b4-af70-c46e800c6652 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:47:48 np0005539564 podman[235226]: 2025-11-29 07:47:48.552384831 +0000 UTC m=+0.096037375 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:47:48 np0005539564 podman[235225]: 2025-11-29 07:47:48.55529652 +0000 UTC m=+0.106446007 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 02:47:48 np0005539564 podman[235224]: 2025-11-29 07:47:48.600869406 +0000 UTC m=+0.150268186 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 02:47:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:47:48 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3164445492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:47:48 np0005539564 nova_compute[226295]: 2025-11-29 07:47:48.708 226310 DEBUG oslo_concurrency.processutils [None req-3c6f37c3-31f2-45b4-af70-c46e800c6652 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:47:48 np0005539564 nova_compute[226295]: 2025-11-29 07:47:48.716 226310 DEBUG nova.compute.provider_tree [None req-3c6f37c3-31f2-45b4-af70-c46e800c6652 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:47:48 np0005539564 nova_compute[226295]: 2025-11-29 07:47:48.849 226310 INFO nova.scheduler.client.report [None req-e8134346-177d-4eb6-a6dd-4f89de08672f a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Deleted allocations for instance 7af8a682-77ba-4f0e-a2c8-0d6890734636
Nov 29 02:47:48 np0005539564 nova_compute[226295]: 2025-11-29 07:47:48.876 226310 DEBUG nova.scheduler.client.report [None req-3c6f37c3-31f2-45b4-af70-c46e800c6652 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:47:49 np0005539564 nova_compute[226295]: 2025-11-29 07:47:49.030 226310 DEBUG oslo_concurrency.lockutils [None req-3c6f37c3-31f2-45b4-af70-c46e800c6652 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:47:49 np0005539564 nova_compute[226295]: 2025-11-29 07:47:49.113 226310 DEBUG oslo_concurrency.lockutils [None req-e8134346-177d-4eb6-a6dd-4f89de08672f a40c3321c50c4854a402a01fc8d364fb 5a9b7f6ed83e472387636ac0e55e5c3e - - default default] Lock "7af8a682-77ba-4f0e-a2c8-0d6890734636" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:47:49 np0005539564 nova_compute[226295]: 2025-11-29 07:47:49.116 226310 INFO nova.scheduler.client.report [None req-3c6f37c3-31f2-45b4-af70-c46e800c6652 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Deleted allocations for instance 22e7eaf7-aac4-4748-90df-04afd0ea7376
Nov 29 02:47:49 np0005539564 nova_compute[226295]: 2025-11-29 07:47:49.206 226310 DEBUG oslo_concurrency.lockutils [None req-3c6f37c3-31f2-45b4-af70-c46e800c6652 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "22e7eaf7-aac4-4748-90df-04afd0ea7376" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.419s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:47:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:49.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:49.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:50 np0005539564 nova_compute[226295]: 2025-11-29 07:47:50.045 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:47:50 np0005539564 nova_compute[226295]: 2025-11-29 07:47:50.184 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:47:50 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 29 02:47:51 np0005539564 nova_compute[226295]: 2025-11-29 07:47:51.342 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:47:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:51.343 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 02:47:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:51.344 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 02:47:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:51.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:51.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:53.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:47:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:53.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:47:53 np0005539564 nova_compute[226295]: 2025-11-29 07:47:53.657 226310 DEBUG oslo_concurrency.lockutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Acquiring lock "refresh_cache-dc42f6b3-eda5-409e-aac8-68275e50922e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:47:53 np0005539564 nova_compute[226295]: 2025-11-29 07:47:53.657 226310 DEBUG oslo_concurrency.lockutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Acquired lock "refresh_cache-dc42f6b3-eda5-409e-aac8-68275e50922e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:47:53 np0005539564 nova_compute[226295]: 2025-11-29 07:47:53.657 226310 DEBUG nova.network.neutron [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 02:47:54 np0005539564 nova_compute[226295]: 2025-11-29 07:47:54.234 226310 DEBUG nova.network.neutron [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:47:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:47:54.346 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:47:54 np0005539564 nova_compute[226295]: 2025-11-29 07:47:54.744 226310 DEBUG nova.network.neutron [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:47:54 np0005539564 nova_compute[226295]: 2025-11-29 07:47:54.759 226310 DEBUG oslo_concurrency.lockutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Releasing lock "refresh_cache-dc42f6b3-eda5-409e-aac8-68275e50922e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:47:54 np0005539564 nova_compute[226295]: 2025-11-29 07:47:54.907 226310 DEBUG nova.virt.libvirt.driver [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Nov 29 02:47:54 np0005539564 nova_compute[226295]: 2025-11-29 07:47:54.909 226310 DEBUG nova.virt.libvirt.driver [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Nov 29 02:47:54 np0005539564 nova_compute[226295]: 2025-11-29 07:47:54.909 226310 INFO nova.virt.libvirt.driver [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Creating image(s)
Nov 29 02:47:54 np0005539564 nova_compute[226295]: 2025-11-29 07:47:54.953 226310 DEBUG nova.storage.rbd_utils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] creating snapshot(nova-resize) on rbd image(dc42f6b3-eda5-409e-aac8-68275e50922e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.047 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.147 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.185 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:47:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:55.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:55.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e147 e147: 3 total, 3 up, 3 in
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.728 226310 DEBUG nova.objects.instance [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Lazy-loading 'trusted_certs' on Instance uuid dc42f6b3-eda5-409e-aac8-68275e50922e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.865 226310 DEBUG nova.virt.libvirt.driver [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.865 226310 DEBUG nova.virt.libvirt.driver [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Ensure instance console log exists: /var/lib/nova/instances/dc42f6b3-eda5-409e-aac8-68275e50922e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.866 226310 DEBUG oslo_concurrency.lockutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.866 226310 DEBUG oslo_concurrency.lockutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.866 226310 DEBUG oslo_concurrency.lockutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.868 226310 DEBUG nova.virt.libvirt.driver [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.873 226310 WARNING nova.virt.libvirt.driver [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.879 226310 DEBUG nova.virt.libvirt.host [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.880 226310 DEBUG nova.virt.libvirt.host [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.885 226310 DEBUG nova.virt.libvirt.host [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.886 226310 DEBUG nova.virt.libvirt.host [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.887 226310 DEBUG nova.virt.libvirt.driver [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.887 226310 DEBUG nova.virt.hardware [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.888 226310 DEBUG nova.virt.hardware [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.888 226310 DEBUG nova.virt.hardware [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.888 226310 DEBUG nova.virt.hardware [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.888 226310 DEBUG nova.virt.hardware [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.889 226310 DEBUG nova.virt.hardware [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.889 226310 DEBUG nova.virt.hardware [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.889 226310 DEBUG nova.virt.hardware [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.890 226310 DEBUG nova.virt.hardware [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.890 226310 DEBUG nova.virt.hardware [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.890 226310 DEBUG nova.virt.hardware [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.890 226310 DEBUG nova.objects.instance [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Lazy-loading 'vcpu_model' on Instance uuid dc42f6b3-eda5-409e-aac8-68275e50922e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:47:55 np0005539564 nova_compute[226295]: 2025-11-29 07:47:55.907 226310 DEBUG oslo_concurrency.processutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:47:56 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:47:56 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3147788635' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:47:56 np0005539564 nova_compute[226295]: 2025-11-29 07:47:56.377 226310 DEBUG oslo_concurrency.processutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:47:56 np0005539564 nova_compute[226295]: 2025-11-29 07:47:56.428 226310 DEBUG oslo_concurrency.processutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:47:56 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:47:56 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2723236095' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:47:56 np0005539564 nova_compute[226295]: 2025-11-29 07:47:56.877 226310 DEBUG oslo_concurrency.processutils [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:47:56 np0005539564 nova_compute[226295]: 2025-11-29 07:47:56.881 226310 DEBUG nova.virt.libvirt.driver [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:47:56 np0005539564 nova_compute[226295]:  <uuid>dc42f6b3-eda5-409e-aac8-68275e50922e</uuid>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:  <name>instance-00000010</name>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      <nova:name>tempest-MigrationsAdminTest-server-734359268</nova:name>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 07:47:55</nova:creationTime>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 02:47:56 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:        <nova:user uuid="e1c26cd8138e4114b4801d377b39933a">tempest-MigrationsAdminTest-845185139-project-member</nova:user>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:        <nova:project uuid="f7e8ae9fdefb4049959228954fb4250e">tempest-MigrationsAdminTest-845185139</nova:project>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      <nova:ports/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <system>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      <entry name="serial">dc42f6b3-eda5-409e-aac8-68275e50922e</entry>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      <entry name="uuid">dc42f6b3-eda5-409e-aac8-68275e50922e</entry>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    </system>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:  <os>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:  </clock>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/dc42f6b3-eda5-409e-aac8-68275e50922e_disk">
Nov 29 02:47:56 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:47:56 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/dc42f6b3-eda5-409e-aac8-68275e50922e_disk.config">
Nov 29 02:47:56 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:47:56 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/dc42f6b3-eda5-409e-aac8-68275e50922e/console.log" append="off"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    </serial>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <video>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 02:47:56 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 02:47:56 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:47:56 np0005539564 nova_compute[226295]: </domain>
Nov 29 02:47:56 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 02:47:57 np0005539564 nova_compute[226295]: 2025-11-29 07:47:57.308 226310 DEBUG nova.virt.libvirt.driver [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:47:57 np0005539564 nova_compute[226295]: 2025-11-29 07:47:57.310 226310 DEBUG nova.virt.libvirt.driver [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:47:57 np0005539564 nova_compute[226295]: 2025-11-29 07:47:57.311 226310 INFO nova.virt.libvirt.driver [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Using config drive
Nov 29 02:47:57 np0005539564 systemd-machined[190128]: New machine qemu-8-instance-00000010.
Nov 29 02:47:57 np0005539564 systemd[1]: Started Virtual Machine qemu-8-instance-00000010.
Nov 29 02:47:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:47:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:57.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:57 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:57.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:57 np0005539564 nova_compute[226295]: 2025-11-29 07:47:57.858 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402477.8579998, dc42f6b3-eda5-409e-aac8-68275e50922e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:47:57 np0005539564 nova_compute[226295]: 2025-11-29 07:47:57.859 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] VM Resumed (Lifecycle Event)
Nov 29 02:47:57 np0005539564 nova_compute[226295]: 2025-11-29 07:47:57.863 226310 DEBUG nova.compute.manager [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 02:47:57 np0005539564 nova_compute[226295]: 2025-11-29 07:47:57.868 226310 INFO nova.virt.libvirt.driver [-] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Instance running successfully.
Nov 29 02:47:57 np0005539564 virtqemud[225880]: argument unsupported: QEMU guest agent is not configured
Nov 29 02:47:57 np0005539564 nova_compute[226295]: 2025-11-29 07:47:57.871 226310 DEBUG nova.virt.libvirt.guest [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 29 02:47:57 np0005539564 nova_compute[226295]: 2025-11-29 07:47:57.871 226310 DEBUG nova.virt.libvirt.driver [None req-d4fa4f27-d787-42ab-89fe-2dbf9fb577e8 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Nov 29 02:47:57 np0005539564 nova_compute[226295]: 2025-11-29 07:47:57.926 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:47:57 np0005539564 nova_compute[226295]: 2025-11-29 07:47:57.934 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:47:58 np0005539564 nova_compute[226295]: 2025-11-29 07:47:58.003 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] During sync_power_state the instance has a pending task (resize_finish). Skip.
Nov 29 02:47:58 np0005539564 nova_compute[226295]: 2025-11-29 07:47:58.003 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402477.8581114, dc42f6b3-eda5-409e-aac8-68275e50922e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:47:58 np0005539564 nova_compute[226295]: 2025-11-29 07:47:58.004 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] VM Started (Lifecycle Event)
Nov 29 02:47:58 np0005539564 nova_compute[226295]: 2025-11-29 07:47:58.021 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:47:58 np0005539564 nova_compute[226295]: 2025-11-29 07:47:58.024 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:47:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:47:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:59.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:47:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:47:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:59.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:00 np0005539564 nova_compute[226295]: 2025-11-29 07:48:00.025 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402465.0224898, 22e7eaf7-aac4-4748-90df-04afd0ea7376 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:48:00 np0005539564 nova_compute[226295]: 2025-11-29 07:48:00.026 226310 INFO nova.compute.manager [-] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:48:00 np0005539564 nova_compute[226295]: 2025-11-29 07:48:00.051 226310 DEBUG nova.compute.manager [None req-fee592fa-6a74-4c43-8fc6-6365aa00982e - - - - - -] [instance: 22e7eaf7-aac4-4748-90df-04afd0ea7376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:48:00 np0005539564 nova_compute[226295]: 2025-11-29 07:48:00.052 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:00 np0005539564 nova_compute[226295]: 2025-11-29 07:48:00.188 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:00 np0005539564 nova_compute[226295]: 2025-11-29 07:48:00.952 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402465.9505818, 7af8a682-77ba-4f0e-a2c8-0d6890734636 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:48:00 np0005539564 nova_compute[226295]: 2025-11-29 07:48:00.952 226310 INFO nova.compute.manager [-] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:48:00 np0005539564 nova_compute[226295]: 2025-11-29 07:48:00.979 226310 DEBUG nova.compute.manager [None req-befdeac2-6178-480d-9ce3-11e4a7af915b - - - - - -] [instance: 7af8a682-77ba-4f0e-a2c8-0d6890734636] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:48:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:48:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:01.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:48:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:01.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:01 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:48:01 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:48:01 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:48:01 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:48:01 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:48:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e148 e148: 3 total, 3 up, 3 in
Nov 29 02:48:01 np0005539564 nova_compute[226295]: 2025-11-29 07:48:01.919 226310 DEBUG oslo_concurrency.lockutils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "f1710cd2-4467-4d84-9adc-5a022d404b26" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:01 np0005539564 nova_compute[226295]: 2025-11-29 07:48:01.919 226310 DEBUG oslo_concurrency.lockutils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "f1710cd2-4467-4d84-9adc-5a022d404b26" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:01 np0005539564 nova_compute[226295]: 2025-11-29 07:48:01.939 226310 DEBUG nova.compute.manager [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:48:02 np0005539564 nova_compute[226295]: 2025-11-29 07:48:02.041 226310 DEBUG oslo_concurrency.lockutils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:02 np0005539564 nova_compute[226295]: 2025-11-29 07:48:02.042 226310 DEBUG oslo_concurrency.lockutils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:02 np0005539564 nova_compute[226295]: 2025-11-29 07:48:02.053 226310 DEBUG nova.virt.hardware [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:48:02 np0005539564 nova_compute[226295]: 2025-11-29 07:48:02.054 226310 INFO nova.compute.claims [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 02:48:02 np0005539564 nova_compute[226295]: 2025-11-29 07:48:02.184 226310 DEBUG oslo_concurrency.processutils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:48:02 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3648557177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:48:02 np0005539564 nova_compute[226295]: 2025-11-29 07:48:02.633 226310 DEBUG oslo_concurrency.processutils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:02 np0005539564 nova_compute[226295]: 2025-11-29 07:48:02.639 226310 DEBUG nova.compute.provider_tree [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:48:02 np0005539564 nova_compute[226295]: 2025-11-29 07:48:02.655 226310 DEBUG nova.scheduler.client.report [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:48:02 np0005539564 nova_compute[226295]: 2025-11-29 07:48:02.828 226310 DEBUG oslo_concurrency.lockutils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:02 np0005539564 nova_compute[226295]: 2025-11-29 07:48:02.829 226310 DEBUG nova.compute.manager [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:48:02 np0005539564 nova_compute[226295]: 2025-11-29 07:48:02.880 226310 DEBUG nova.compute.manager [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:48:02 np0005539564 nova_compute[226295]: 2025-11-29 07:48:02.881 226310 DEBUG nova.network.neutron [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:48:02 np0005539564 nova_compute[226295]: 2025-11-29 07:48:02.902 226310 INFO nova.virt.libvirt.driver [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:48:02 np0005539564 nova_compute[226295]: 2025-11-29 07:48:02.926 226310 DEBUG nova.compute.manager [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:48:03 np0005539564 nova_compute[226295]: 2025-11-29 07:48:03.022 226310 DEBUG nova.compute.manager [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:48:03 np0005539564 nova_compute[226295]: 2025-11-29 07:48:03.023 226310 DEBUG nova.virt.libvirt.driver [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:48:03 np0005539564 nova_compute[226295]: 2025-11-29 07:48:03.024 226310 INFO nova.virt.libvirt.driver [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Creating image(s)#033[00m
Nov 29 02:48:03 np0005539564 nova_compute[226295]: 2025-11-29 07:48:03.059 226310 DEBUG nova.storage.rbd_utils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image f1710cd2-4467-4d84-9adc-5a022d404b26_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:03 np0005539564 nova_compute[226295]: 2025-11-29 07:48:03.103 226310 DEBUG nova.storage.rbd_utils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image f1710cd2-4467-4d84-9adc-5a022d404b26_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:03 np0005539564 nova_compute[226295]: 2025-11-29 07:48:03.138 226310 DEBUG nova.storage.rbd_utils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image f1710cd2-4467-4d84-9adc-5a022d404b26_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:03 np0005539564 nova_compute[226295]: 2025-11-29 07:48:03.143 226310 DEBUG oslo_concurrency.processutils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:03 np0005539564 nova_compute[226295]: 2025-11-29 07:48:03.218 226310 DEBUG oslo_concurrency.processutils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:03 np0005539564 nova_compute[226295]: 2025-11-29 07:48:03.219 226310 DEBUG oslo_concurrency.lockutils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:03 np0005539564 nova_compute[226295]: 2025-11-29 07:48:03.219 226310 DEBUG oslo_concurrency.lockutils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:03 np0005539564 nova_compute[226295]: 2025-11-29 07:48:03.220 226310 DEBUG oslo_concurrency.lockutils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:03 np0005539564 nova_compute[226295]: 2025-11-29 07:48:03.268 226310 DEBUG nova.storage.rbd_utils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image f1710cd2-4467-4d84-9adc-5a022d404b26_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:03 np0005539564 nova_compute[226295]: 2025-11-29 07:48:03.273 226310 DEBUG oslo_concurrency.processutils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf f1710cd2-4467-4d84-9adc-5a022d404b26_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:03.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:03.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:03 np0005539564 nova_compute[226295]: 2025-11-29 07:48:03.543 226310 DEBUG nova.network.neutron [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 29 02:48:03 np0005539564 nova_compute[226295]: 2025-11-29 07:48:03.544 226310 DEBUG nova.compute.manager [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:48:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:48:03.697 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:48:03.697 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:48:03.698 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:05 np0005539564 nova_compute[226295]: 2025-11-29 07:48:05.053 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:05 np0005539564 nova_compute[226295]: 2025-11-29 07:48:05.190 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:05 np0005539564 nova_compute[226295]: 2025-11-29 07:48:05.217 226310 DEBUG oslo_concurrency.processutils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf f1710cd2-4467-4d84-9adc-5a022d404b26_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.944s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:05 np0005539564 nova_compute[226295]: 2025-11-29 07:48:05.303 226310 DEBUG nova.storage.rbd_utils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] resizing rbd image f1710cd2-4467-4d84-9adc-5a022d404b26_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:48:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:05.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:05.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.071 226310 DEBUG nova.objects.instance [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lazy-loading 'migration_context' on Instance uuid f1710cd2-4467-4d84-9adc-5a022d404b26 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.091 226310 DEBUG nova.virt.libvirt.driver [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.092 226310 DEBUG nova.virt.libvirt.driver [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Ensure instance console log exists: /var/lib/nova/instances/f1710cd2-4467-4d84-9adc-5a022d404b26/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.092 226310 DEBUG oslo_concurrency.lockutils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.093 226310 DEBUG oslo_concurrency.lockutils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.093 226310 DEBUG oslo_concurrency.lockutils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.095 226310 DEBUG nova.virt.libvirt.driver [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.100 226310 WARNING nova.virt.libvirt.driver [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.106 226310 DEBUG nova.virt.libvirt.host [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.107 226310 DEBUG nova.virt.libvirt.host [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.110 226310 DEBUG nova.virt.libvirt.host [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.110 226310 DEBUG nova.virt.libvirt.host [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.111 226310 DEBUG nova.virt.libvirt.driver [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.112 226310 DEBUG nova.virt.hardware [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.113 226310 DEBUG nova.virt.hardware [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.113 226310 DEBUG nova.virt.hardware [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.113 226310 DEBUG nova.virt.hardware [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.114 226310 DEBUG nova.virt.hardware [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.114 226310 DEBUG nova.virt.hardware [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.114 226310 DEBUG nova.virt.hardware [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.115 226310 DEBUG nova.virt.hardware [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.115 226310 DEBUG nova.virt.hardware [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.116 226310 DEBUG nova.virt.hardware [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.116 226310 DEBUG nova.virt.hardware [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
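[editor's note] The topology search logged above (preferred 0:0:0, limits of 65536 each, one vCPU yielding the single 1:1:1 candidate) amounts to enumerating every sockets/cores/threads split whose product equals the vCPU count. A minimal sketch of that idea, not nova's actual `_get_possible_cpu_topologies` implementation (the function name and parameters below are illustrative):

```python
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    """Enumerate (sockets, cores, threads) triples whose product is `vcpus`.

    Mirrors the "Build topologies ... Got N possible topologies" lines:
    with 1 vCPU the only candidate is (1, 1, 1).
    """
    topos = []
    for s in range(1, min(max_sockets, vcpus) + 1):
        for c in range(1, min(max_cores, vcpus) + 1):
            for t in range(1, min(max_threads, vcpus) + 1):
                if s * c * t == vcpus:
                    topos.append((s, c, t))
    return topos
```

With `vcpus=1` this returns `[(1, 1, 1)]`, matching the single sorted topology in the log; larger vCPU counts produce multiple candidates that nova then orders by flavor/image preference.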
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.120 226310 DEBUG oslo_concurrency.processutils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:48:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:48:06 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/205804548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.606 226310 DEBUG oslo_concurrency.processutils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
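[editor's note] The JSON returned by `ceph mon dump` is what ultimately supplies the three monitor `<host>` entries in the generated disk XML further down. A simplified sketch of extracting monitor endpoints from that output; real nova (`nova.storage.rbd_utils`) also handles msgr2 addrvecs, which this hypothetical helper ignores, and the sample input in the usage test is fabricated for illustration:

```python
import json

def rbd_hosts_from_mon_dump(mon_dump_json):
    """Return (ip, port) pairs from `ceph mon dump --format=json` output.

    Assumes each mon entry carries a legacy "addr" of the form
    IP:PORT/NONCE; the /nonce suffix is dropped before splitting.
    """
    dump = json.loads(mon_dump_json)
    hosts = []
    for mon in dump.get("mons", []):
        addr = mon["addr"].split("/")[0]      # strip the /nonce suffix
        ip, _, port = addr.rpartition(":")    # split on the last colon
        hosts.append((ip, port))
    return hosts
```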
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.632 226310 DEBUG nova.storage.rbd_utils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image f1710cd2-4467-4d84-9adc-5a022d404b26_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:48:06 np0005539564 nova_compute[226295]: 2025-11-29 07:48:06.637 226310 DEBUG oslo_concurrency.processutils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:48:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:48:07 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1184317448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:48:07 np0005539564 nova_compute[226295]: 2025-11-29 07:48:07.087 226310 DEBUG oslo_concurrency.processutils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:48:07 np0005539564 nova_compute[226295]: 2025-11-29 07:48:07.091 226310 DEBUG nova.objects.instance [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lazy-loading 'pci_devices' on Instance uuid f1710cd2-4467-4d84-9adc-5a022d404b26 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:48:07 np0005539564 nova_compute[226295]: 2025-11-29 07:48:07.129 226310 DEBUG nova.virt.libvirt.driver [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:48:07 np0005539564 nova_compute[226295]:  <uuid>f1710cd2-4467-4d84-9adc-5a022d404b26</uuid>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:  <name>instance-00000013</name>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServersOnMultiNodesTest-server-280463277</nova:name>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 07:48:06</nova:creationTime>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 02:48:07 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:        <nova:user uuid="386584ea971049e3b0c06b8237710848">tempest-ServersOnMultiNodesTest-893669333-project-member</nova:user>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:        <nova:project uuid="c80f8d4661784e8faaf78d28df3fb677">tempest-ServersOnMultiNodesTest-893669333</nova:project>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      <nova:ports/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <system>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      <entry name="serial">f1710cd2-4467-4d84-9adc-5a022d404b26</entry>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      <entry name="uuid">f1710cd2-4467-4d84-9adc-5a022d404b26</entry>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    </system>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:  <os>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:  </clock>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/f1710cd2-4467-4d84-9adc-5a022d404b26_disk">
Nov 29 02:48:07 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:48:07 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/f1710cd2-4467-4d84-9adc-5a022d404b26_disk.config">
Nov 29 02:48:07 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:48:07 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/f1710cd2-4467-4d84-9adc-5a022d404b26/console.log" append="off"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    </serial>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <video>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 02:48:07 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 02:48:07 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:48:07 np0005539564 nova_compute[226295]: </domain>
Nov 29 02:48:07 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 02:48:07 np0005539564 nova_compute[226295]: 2025-11-29 07:48:07.247 226310 DEBUG nova.virt.libvirt.driver [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:48:07 np0005539564 nova_compute[226295]: 2025-11-29 07:48:07.248 226310 DEBUG nova.virt.libvirt.driver [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:48:07 np0005539564 nova_compute[226295]: 2025-11-29 07:48:07.249 226310 INFO nova.virt.libvirt.driver [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Using config drive
Nov 29 02:48:07 np0005539564 nova_compute[226295]: 2025-11-29 07:48:07.287 226310 DEBUG nova.storage.rbd_utils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image f1710cd2-4467-4d84-9adc-5a022d404b26_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:48:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:07.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:07.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:07 np0005539564 nova_compute[226295]: 2025-11-29 07:48:07.502 226310 INFO nova.virt.libvirt.driver [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Creating config drive at /var/lib/nova/instances/f1710cd2-4467-4d84-9adc-5a022d404b26/disk.config
Nov 29 02:48:07 np0005539564 nova_compute[226295]: 2025-11-29 07:48:07.508 226310 DEBUG oslo_concurrency.processutils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f1710cd2-4467-4d84-9adc-5a022d404b26/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo3q_9c2m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:48:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:07 np0005539564 nova_compute[226295]: 2025-11-29 07:48:07.641 226310 DEBUG oslo_concurrency.processutils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f1710cd2-4467-4d84-9adc-5a022d404b26/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo3q_9c2m" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:48:07 np0005539564 nova_compute[226295]: 2025-11-29 07:48:07.685 226310 DEBUG nova.storage.rbd_utils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image f1710cd2-4467-4d84-9adc-5a022d404b26_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:48:07 np0005539564 nova_compute[226295]: 2025-11-29 07:48:07.689 226310 DEBUG oslo_concurrency.processutils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f1710cd2-4467-4d84-9adc-5a022d404b26/disk.config f1710cd2-4467-4d84-9adc-5a022d404b26_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:48:09 np0005539564 nova_compute[226295]: 2025-11-29 07:48:09.177 226310 DEBUG oslo_concurrency.processutils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f1710cd2-4467-4d84-9adc-5a022d404b26/disk.config f1710cd2-4467-4d84-9adc-5a022d404b26_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:48:09 np0005539564 nova_compute[226295]: 2025-11-29 07:48:09.179 226310 INFO nova.virt.libvirt.driver [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Deleting local config drive /var/lib/nova/instances/f1710cd2-4467-4d84-9adc-5a022d404b26/disk.config because it was imported into RBD.
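[editor's note] The config-drive sequence in the lines above is: build an ISO9660 image locally with mkisofs (volume label `config-2`), push it into the Ceph `vms` pool with `rbd import`, then delete the local copy. A sketch that merely composes those two command lines as argv lists; the helper name and the metadata directory argument are hypothetical, and the `-publisher` flag from the real invocation is omitted for brevity:

```python
def config_drive_commands(instance_uuid, metadata_dir, pool="vms",
                          client="openstack", conf="/etc/ceph/ceph.conf"):
    """Compose the mkisofs and `rbd import` argv lists seen in the log."""
    iso = f"/var/lib/nova/instances/{instance_uuid}/disk.config"
    mkisofs = ["/usr/bin/mkisofs", "-o", iso,
               "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
               "-quiet", "-J", "-r", "-V", "config-2", metadata_dir]
    rbd_import = ["rbd", "import", "--pool", pool, iso,
                  f"{instance_uuid}_disk.config", "--image-format=2",
                  "--id", client, "--conf", conf]
    return mkisofs, rbd_import
```

Running each list through `subprocess.run` would reproduce the flow, after which the local `disk.config` can be removed, as the log's final line does.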
Nov 29 02:48:09 np0005539564 systemd-machined[190128]: New machine qemu-9-instance-00000013.
Nov 29 02:48:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e149 e149: 3 total, 3 up, 3 in
Nov 29 02:48:09 np0005539564 systemd[1]: Started Virtual Machine qemu-9-instance-00000013.
Nov 29 02:48:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:09.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:09.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:10 np0005539564 nova_compute[226295]: 2025-11-29 07:48:10.056 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:48:10 np0005539564 nova_compute[226295]: 2025-11-29 07:48:10.192 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:48:10 np0005539564 nova_compute[226295]: 2025-11-29 07:48:10.927 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402490.9273217, f1710cd2-4467-4d84-9adc-5a022d404b26 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:48:10 np0005539564 nova_compute[226295]: 2025-11-29 07:48:10.928 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] VM Resumed (Lifecycle Event)
Nov 29 02:48:10 np0005539564 nova_compute[226295]: 2025-11-29 07:48:10.932 226310 DEBUG nova.compute.manager [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 02:48:10 np0005539564 nova_compute[226295]: 2025-11-29 07:48:10.932 226310 DEBUG nova.virt.libvirt.driver [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 02:48:10 np0005539564 nova_compute[226295]: 2025-11-29 07:48:10.940 226310 INFO nova.virt.libvirt.driver [-] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Instance spawned successfully.
Nov 29 02:48:10 np0005539564 nova_compute[226295]: 2025-11-29 07:48:10.941 226310 DEBUG nova.virt.libvirt.driver [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 02:48:11 np0005539564 nova_compute[226295]: 2025-11-29 07:48:11.064 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:48:11 np0005539564 nova_compute[226295]: 2025-11-29 07:48:11.068 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:48:11 np0005539564 nova_compute[226295]: 2025-11-29 07:48:11.246 226310 DEBUG nova.virt.libvirt.driver [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:48:11 np0005539564 nova_compute[226295]: 2025-11-29 07:48:11.248 226310 DEBUG nova.virt.libvirt.driver [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:48:11 np0005539564 nova_compute[226295]: 2025-11-29 07:48:11.249 226310 DEBUG nova.virt.libvirt.driver [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:48:11 np0005539564 nova_compute[226295]: 2025-11-29 07:48:11.250 226310 DEBUG nova.virt.libvirt.driver [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:48:11 np0005539564 nova_compute[226295]: 2025-11-29 07:48:11.251 226310 DEBUG nova.virt.libvirt.driver [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:48:11 np0005539564 nova_compute[226295]: 2025-11-29 07:48:11.252 226310 DEBUG nova.virt.libvirt.driver [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:48:11 np0005539564 nova_compute[226295]: 2025-11-29 07:48:11.380 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:48:11 np0005539564 nova_compute[226295]: 2025-11-29 07:48:11.381 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402490.928549, f1710cd2-4467-4d84-9adc-5a022d404b26 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:48:11 np0005539564 nova_compute[226295]: 2025-11-29 07:48:11.381 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] VM Started (Lifecycle Event)
Nov 29 02:48:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:11.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:11.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:11 np0005539564 nova_compute[226295]: 2025-11-29 07:48:11.487 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:48:11 np0005539564 nova_compute[226295]: 2025-11-29 07:48:11.492 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:48:11 np0005539564 nova_compute[226295]: 2025-11-29 07:48:11.524 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:48:11 np0005539564 nova_compute[226295]: 2025-11-29 07:48:11.546 226310 INFO nova.compute.manager [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Took 8.52 seconds to spawn the instance on the hypervisor.
Nov 29 02:48:11 np0005539564 nova_compute[226295]: 2025-11-29 07:48:11.547 226310 DEBUG nova.compute.manager [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:48:11 np0005539564 nova_compute[226295]: 2025-11-29 07:48:11.656 226310 INFO nova.compute.manager [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Took 9.66 seconds to build instance.
Nov 29 02:48:11 np0005539564 nova_compute[226295]: 2025-11-29 07:48:11.674 226310 DEBUG oslo_concurrency.lockutils [None req-60cc88da-f4bf-4f62-a713-2ba20d883ad3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "f1710cd2-4467-4d84-9adc-5a022d404b26" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:48:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:13 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:48:13 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:48:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:13.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:13.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:15 np0005539564 nova_compute[226295]: 2025-11-29 07:48:15.058 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:48:15 np0005539564 nova_compute[226295]: 2025-11-29 07:48:15.195 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:48:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:15.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:15.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:17.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:17.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:19.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:19.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:19 np0005539564 podman[236049]: 2025-11-29 07:48:19.531670324 +0000 UTC m=+0.072886058 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 02:48:19 np0005539564 podman[236048]: 2025-11-29 07:48:19.531948111 +0000 UTC m=+0.074404359 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 29 02:48:19 np0005539564 podman[236047]: 2025-11-29 07:48:19.600024137 +0000 UTC m=+0.142358380 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:48:20 np0005539564 nova_compute[226295]: 2025-11-29 07:48:20.060 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:48:20 np0005539564 nova_compute[226295]: 2025-11-29 07:48:20.197 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:48:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:48:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:21.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:48:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:21.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:23 np0005539564 nova_compute[226295]: 2025-11-29 07:48:23.422 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:48:23 np0005539564 nova_compute[226295]: 2025-11-29 07:48:23.423 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:48:23 np0005539564 nova_compute[226295]: 2025-11-29 07:48:23.423 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:48:23 np0005539564 nova_compute[226295]: 2025-11-29 07:48:23.423 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:48:23 np0005539564 nova_compute[226295]: 2025-11-29 07:48:23.424 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 02:48:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:23.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:23.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:24 np0005539564 nova_compute[226295]: 2025-11-29 07:48:24.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:48:24 np0005539564 nova_compute[226295]: 2025-11-29 07:48:24.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 02:48:24 np0005539564 nova_compute[226295]: 2025-11-29 07:48:24.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 02:48:24 np0005539564 nova_compute[226295]: 2025-11-29 07:48:24.609 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-dc42f6b3-eda5-409e-aac8-68275e50922e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:48:24 np0005539564 nova_compute[226295]: 2025-11-29 07:48:24.610 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-dc42f6b3-eda5-409e-aac8-68275e50922e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:48:24 np0005539564 nova_compute[226295]: 2025-11-29 07:48:24.610 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 02:48:24 np0005539564 nova_compute[226295]: 2025-11-29 07:48:24.610 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid dc42f6b3-eda5-409e-aac8-68275e50922e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:48:25 np0005539564 nova_compute[226295]: 2025-11-29 07:48:25.099 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:48:25 np0005539564 nova_compute[226295]: 2025-11-29 07:48:25.199 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:48:25 np0005539564 nova_compute[226295]: 2025-11-29 07:48:25.396 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:48:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:25.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:25.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:26 np0005539564 nova_compute[226295]: 2025-11-29 07:48:26.546 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:48:26 np0005539564 nova_compute[226295]: 2025-11-29 07:48:26.940 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-dc42f6b3-eda5-409e-aac8-68275e50922e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:48:26 np0005539564 nova_compute[226295]: 2025-11-29 07:48:26.941 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 02:48:26 np0005539564 nova_compute[226295]: 2025-11-29 07:48:26.941 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:48:26 np0005539564 nova_compute[226295]: 2025-11-29 07:48:26.942 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:48:27 np0005539564 nova_compute[226295]: 2025-11-29 07:48:27.176 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:48:27 np0005539564 nova_compute[226295]: 2025-11-29 07:48:27.176 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:48:27 np0005539564 nova_compute[226295]: 2025-11-29 07:48:27.177 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:48:27 np0005539564 nova_compute[226295]: 2025-11-29 07:48:27.177 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 02:48:27 np0005539564 nova_compute[226295]: 2025-11-29 07:48:27.177 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:48:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:27.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:48:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:27.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:48:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:48:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/608384582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:48:28 np0005539564 nova_compute[226295]: 2025-11-29 07:48:28.394 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:48:28 np0005539564 nova_compute[226295]: 2025-11-29 07:48:28.594 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:48:28 np0005539564 nova_compute[226295]: 2025-11-29 07:48:28.594 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:48:28 np0005539564 nova_compute[226295]: 2025-11-29 07:48:28.601 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:48:28 np0005539564 nova_compute[226295]: 2025-11-29 07:48:28.601 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:48:28 np0005539564 nova_compute[226295]: 2025-11-29 07:48:28.802 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:48:28 np0005539564 nova_compute[226295]: 2025-11-29 07:48:28.803 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4509MB free_disk=20.844467163085938GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 02:48:28 np0005539564 nova_compute[226295]: 2025-11-29 07:48:28.803 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:48:28 np0005539564 nova_compute[226295]: 2025-11-29 07:48:28.804 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:48:28 np0005539564 nova_compute[226295]: 2025-11-29 07:48:28.885 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance dc42f6b3-eda5-409e-aac8-68275e50922e actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 02:48:28 np0005539564 nova_compute[226295]: 2025-11-29 07:48:28.885 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance f1710cd2-4467-4d84-9adc-5a022d404b26 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 02:48:28 np0005539564 nova_compute[226295]: 2025-11-29 07:48:28.885 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:48:28 np0005539564 nova_compute[226295]: 2025-11-29 07:48:28.885 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:48:28 np0005539564 nova_compute[226295]: 2025-11-29 07:48:28.931 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:48:29 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3938264456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:48:29 np0005539564 nova_compute[226295]: 2025-11-29 07:48:29.393 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:29 np0005539564 nova_compute[226295]: 2025-11-29 07:48:29.403 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:48:29 np0005539564 nova_compute[226295]: 2025-11-29 07:48:29.464 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:48:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:29.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:29.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:29 np0005539564 nova_compute[226295]: 2025-11-29 07:48:29.601 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:48:29 np0005539564 nova_compute[226295]: 2025-11-29 07:48:29.601 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:30 np0005539564 nova_compute[226295]: 2025-11-29 07:48:30.003 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:48:30 np0005539564 nova_compute[226295]: 2025-11-29 07:48:30.004 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:48:30 np0005539564 nova_compute[226295]: 2025-11-29 07:48:30.100 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:30 np0005539564 nova_compute[226295]: 2025-11-29 07:48:30.201 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:30 np0005539564 nova_compute[226295]: 2025-11-29 07:48:30.336 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:48:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:31.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:31.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:32 np0005539564 ovn_controller[130591]: 2025-11-29T07:48:32Z|00078|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 02:48:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:33.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:33.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:35 np0005539564 nova_compute[226295]: 2025-11-29 07:48:35.102 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:35 np0005539564 nova_compute[226295]: 2025-11-29 07:48:35.216 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:35 np0005539564 nova_compute[226295]: 2025-11-29 07:48:35.493 226310 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "33ad043e-54e2-42cc-83ac-b9b98d8fbc46" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:35 np0005539564 nova_compute[226295]: 2025-11-29 07:48:35.493 226310 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "33ad043e-54e2-42cc-83ac-b9b98d8fbc46" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:35.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:35 np0005539564 nova_compute[226295]: 2025-11-29 07:48:35.511 226310 DEBUG nova.compute.manager [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:48:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:35.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:35 np0005539564 nova_compute[226295]: 2025-11-29 07:48:35.623 226310 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:35 np0005539564 nova_compute[226295]: 2025-11-29 07:48:35.623 226310 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:35 np0005539564 nova_compute[226295]: 2025-11-29 07:48:35.632 226310 DEBUG nova.virt.hardware [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:48:35 np0005539564 nova_compute[226295]: 2025-11-29 07:48:35.632 226310 INFO nova.compute.claims [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 02:48:35 np0005539564 nova_compute[226295]: 2025-11-29 07:48:35.815 226310 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:48:36 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3710328202' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:48:36 np0005539564 nova_compute[226295]: 2025-11-29 07:48:36.306 226310 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:36 np0005539564 nova_compute[226295]: 2025-11-29 07:48:36.312 226310 DEBUG nova.compute.provider_tree [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:48:36 np0005539564 nova_compute[226295]: 2025-11-29 07:48:36.336 226310 DEBUG nova.scheduler.client.report [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:48:36 np0005539564 nova_compute[226295]: 2025-11-29 07:48:36.366 226310 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:36 np0005539564 nova_compute[226295]: 2025-11-29 07:48:36.388 226310 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "ab04b720-a78e-44fa-97ac-e9d8085fea7b" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:36 np0005539564 nova_compute[226295]: 2025-11-29 07:48:36.389 226310 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "ab04b720-a78e-44fa-97ac-e9d8085fea7b" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:36 np0005539564 nova_compute[226295]: 2025-11-29 07:48:36.414 226310 DEBUG nova.compute.manager [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] No node specified, defaulting to compute-1.ctlplane.example.com _get_nodename /usr/lib/python3.9/site-packages/nova/compute/manager.py:10505#033[00m
Nov 29 02:48:36 np0005539564 nova_compute[226295]: 2025-11-29 07:48:36.515 226310 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "ab04b720-a78e-44fa-97ac-e9d8085fea7b" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:36 np0005539564 nova_compute[226295]: 2025-11-29 07:48:36.517 226310 DEBUG nova.compute.manager [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:48:36 np0005539564 nova_compute[226295]: 2025-11-29 07:48:36.629 226310 DEBUG nova.compute.manager [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:48:36 np0005539564 nova_compute[226295]: 2025-11-29 07:48:36.630 226310 DEBUG nova.network.neutron [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:48:36 np0005539564 nova_compute[226295]: 2025-11-29 07:48:36.675 226310 INFO nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:48:36 np0005539564 nova_compute[226295]: 2025-11-29 07:48:36.699 226310 DEBUG nova.compute.manager [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:48:36 np0005539564 nova_compute[226295]: 2025-11-29 07:48:36.829 226310 DEBUG nova.compute.manager [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:48:36 np0005539564 nova_compute[226295]: 2025-11-29 07:48:36.831 226310 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:48:36 np0005539564 nova_compute[226295]: 2025-11-29 07:48:36.831 226310 INFO nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Creating image(s)#033[00m
Nov 29 02:48:36 np0005539564 nova_compute[226295]: 2025-11-29 07:48:36.861 226310 DEBUG nova.storage.rbd_utils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 33ad043e-54e2-42cc-83ac-b9b98d8fbc46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:36 np0005539564 nova_compute[226295]: 2025-11-29 07:48:36.891 226310 DEBUG nova.storage.rbd_utils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 33ad043e-54e2-42cc-83ac-b9b98d8fbc46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:36 np0005539564 nova_compute[226295]: 2025-11-29 07:48:36.919 226310 DEBUG nova.storage.rbd_utils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 33ad043e-54e2-42cc-83ac-b9b98d8fbc46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:36 np0005539564 nova_compute[226295]: 2025-11-29 07:48:36.923 226310 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:36 np0005539564 nova_compute[226295]: 2025-11-29 07:48:36.996 226310 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:36 np0005539564 nova_compute[226295]: 2025-11-29 07:48:36.997 226310 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:36 np0005539564 nova_compute[226295]: 2025-11-29 07:48:36.997 226310 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:36 np0005539564 nova_compute[226295]: 2025-11-29 07:48:36.997 226310 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:37 np0005539564 nova_compute[226295]: 2025-11-29 07:48:37.028 226310 DEBUG nova.storage.rbd_utils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 33ad043e-54e2-42cc-83ac-b9b98d8fbc46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:37 np0005539564 nova_compute[226295]: 2025-11-29 07:48:37.032 226310 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 33ad043e-54e2-42cc-83ac-b9b98d8fbc46_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:37 np0005539564 nova_compute[226295]: 2025-11-29 07:48:37.215 226310 DEBUG nova.network.neutron [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 29 02:48:37 np0005539564 nova_compute[226295]: 2025-11-29 07:48:37.215 226310 DEBUG nova.compute.manager [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:48:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:37.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:37.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:48:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:39.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:48:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:39.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:40 np0005539564 nova_compute[226295]: 2025-11-29 07:48:40.104 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:40 np0005539564 nova_compute[226295]: 2025-11-29 07:48:40.218 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:41 np0005539564 nova_compute[226295]: 2025-11-29 07:48:41.482 226310 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 33ad043e-54e2-42cc-83ac-b9b98d8fbc46_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:41.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:41.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:41 np0005539564 nova_compute[226295]: 2025-11-29 07:48:41.605 226310 DEBUG nova.storage.rbd_utils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] resizing rbd image 33ad043e-54e2-42cc-83ac-b9b98d8fbc46_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:48:41 np0005539564 nova_compute[226295]: 2025-11-29 07:48:41.755 226310 DEBUG nova.objects.instance [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lazy-loading 'migration_context' on Instance uuid 33ad043e-54e2-42cc-83ac-b9b98d8fbc46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:48:41 np0005539564 nova_compute[226295]: 2025-11-29 07:48:41.772 226310 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:48:41 np0005539564 nova_compute[226295]: 2025-11-29 07:48:41.774 226310 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Ensure instance console log exists: /var/lib/nova/instances/33ad043e-54e2-42cc-83ac-b9b98d8fbc46/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:48:41 np0005539564 nova_compute[226295]: 2025-11-29 07:48:41.775 226310 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:41 np0005539564 nova_compute[226295]: 2025-11-29 07:48:41.776 226310 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:41 np0005539564 nova_compute[226295]: 2025-11-29 07:48:41.777 226310 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:41 np0005539564 nova_compute[226295]: 2025-11-29 07:48:41.780 226310 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:48:41 np0005539564 nova_compute[226295]: 2025-11-29 07:48:41.788 226310 WARNING nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:48:41 np0005539564 nova_compute[226295]: 2025-11-29 07:48:41.795 226310 DEBUG nova.virt.libvirt.host [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:48:41 np0005539564 nova_compute[226295]: 2025-11-29 07:48:41.796 226310 DEBUG nova.virt.libvirt.host [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:48:41 np0005539564 nova_compute[226295]: 2025-11-29 07:48:41.801 226310 DEBUG nova.virt.libvirt.host [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:48:41 np0005539564 nova_compute[226295]: 2025-11-29 07:48:41.802 226310 DEBUG nova.virt.libvirt.host [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:48:41 np0005539564 nova_compute[226295]: 2025-11-29 07:48:41.805 226310 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:48:41 np0005539564 nova_compute[226295]: 2025-11-29 07:48:41.806 226310 DEBUG nova.virt.hardware [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:48:41 np0005539564 nova_compute[226295]: 2025-11-29 07:48:41.807 226310 DEBUG nova.virt.hardware [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:48:41 np0005539564 nova_compute[226295]: 2025-11-29 07:48:41.808 226310 DEBUG nova.virt.hardware [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:48:41 np0005539564 nova_compute[226295]: 2025-11-29 07:48:41.809 226310 DEBUG nova.virt.hardware [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:48:41 np0005539564 nova_compute[226295]: 2025-11-29 07:48:41.809 226310 DEBUG nova.virt.hardware [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:48:41 np0005539564 nova_compute[226295]: 2025-11-29 07:48:41.810 226310 DEBUG nova.virt.hardware [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:48:41 np0005539564 nova_compute[226295]: 2025-11-29 07:48:41.810 226310 DEBUG nova.virt.hardware [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:48:41 np0005539564 nova_compute[226295]: 2025-11-29 07:48:41.811 226310 DEBUG nova.virt.hardware [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:48:41 np0005539564 nova_compute[226295]: 2025-11-29 07:48:41.811 226310 DEBUG nova.virt.hardware [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:48:41 np0005539564 nova_compute[226295]: 2025-11-29 07:48:41.812 226310 DEBUG nova.virt.hardware [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:48:41 np0005539564 nova_compute[226295]: 2025-11-29 07:48:41.812 226310 DEBUG nova.virt.hardware [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:48:41 np0005539564 nova_compute[226295]: 2025-11-29 07:48:41.818 226310 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:43.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:43.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:48:43 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/379740943' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:48:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:43 np0005539564 nova_compute[226295]: 2025-11-29 07:48:43.924 226310 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:44 np0005539564 nova_compute[226295]: 2025-11-29 07:48:44.175 226310 DEBUG nova.storage.rbd_utils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 33ad043e-54e2-42cc-83ac-b9b98d8fbc46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:44 np0005539564 nova_compute[226295]: 2025-11-29 07:48:44.185 226310 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:45 np0005539564 nova_compute[226295]: 2025-11-29 07:48:45.107 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:45 np0005539564 nova_compute[226295]: 2025-11-29 07:48:45.220 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:45.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:45.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:48:46 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2523292697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:48:46 np0005539564 nova_compute[226295]: 2025-11-29 07:48:46.272 226310 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:46 np0005539564 nova_compute[226295]: 2025-11-29 07:48:46.274 226310 DEBUG nova.objects.instance [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lazy-loading 'pci_devices' on Instance uuid 33ad043e-54e2-42cc-83ac-b9b98d8fbc46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:48:46 np0005539564 nova_compute[226295]: 2025-11-29 07:48:46.309 226310 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:48:46 np0005539564 nova_compute[226295]:  <uuid>33ad043e-54e2-42cc-83ac-b9b98d8fbc46</uuid>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:  <name>instance-00000017</name>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServersOnMultiNodesTest-server-966081188-1</nova:name>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 07:48:41</nova:creationTime>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 02:48:46 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:        <nova:user uuid="386584ea971049e3b0c06b8237710848">tempest-ServersOnMultiNodesTest-893669333-project-member</nova:user>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:        <nova:project uuid="c80f8d4661784e8faaf78d28df3fb677">tempest-ServersOnMultiNodesTest-893669333</nova:project>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      <nova:ports/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <system>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      <entry name="serial">33ad043e-54e2-42cc-83ac-b9b98d8fbc46</entry>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      <entry name="uuid">33ad043e-54e2-42cc-83ac-b9b98d8fbc46</entry>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    </system>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:  <os>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:  </clock>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/33ad043e-54e2-42cc-83ac-b9b98d8fbc46_disk">
Nov 29 02:48:46 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:48:46 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/33ad043e-54e2-42cc-83ac-b9b98d8fbc46_disk.config">
Nov 29 02:48:46 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:48:46 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/33ad043e-54e2-42cc-83ac-b9b98d8fbc46/console.log" append="off"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    </serial>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <video>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 02:48:46 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 02:48:46 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:48:46 np0005539564 nova_compute[226295]: </domain>
Nov 29 02:48:46 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:48:46 np0005539564 nova_compute[226295]: 2025-11-29 07:48:46.463 226310 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:48:46 np0005539564 nova_compute[226295]: 2025-11-29 07:48:46.464 226310 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:48:46 np0005539564 nova_compute[226295]: 2025-11-29 07:48:46.464 226310 INFO nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Using config drive#033[00m
Nov 29 02:48:46 np0005539564 nova_compute[226295]: 2025-11-29 07:48:46.488 226310 DEBUG nova.storage.rbd_utils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 33ad043e-54e2-42cc-83ac-b9b98d8fbc46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:46 np0005539564 nova_compute[226295]: 2025-11-29 07:48:46.786 226310 INFO nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Creating config drive at /var/lib/nova/instances/33ad043e-54e2-42cc-83ac-b9b98d8fbc46/disk.config#033[00m
Nov 29 02:48:46 np0005539564 nova_compute[226295]: 2025-11-29 07:48:46.791 226310 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/33ad043e-54e2-42cc-83ac-b9b98d8fbc46/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp3lvsdc8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:46 np0005539564 nova_compute[226295]: 2025-11-29 07:48:46.933 226310 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/33ad043e-54e2-42cc-83ac-b9b98d8fbc46/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp3lvsdc8" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:47 np0005539564 nova_compute[226295]: 2025-11-29 07:48:47.278 226310 DEBUG nova.storage.rbd_utils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 33ad043e-54e2-42cc-83ac-b9b98d8fbc46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:47 np0005539564 nova_compute[226295]: 2025-11-29 07:48:47.282 226310 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/33ad043e-54e2-42cc-83ac-b9b98d8fbc46/disk.config 33ad043e-54e2-42cc-83ac-b9b98d8fbc46_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:48:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:47.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:48:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:47.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:48 np0005539564 nova_compute[226295]: 2025-11-29 07:48:48.843 226310 DEBUG oslo_concurrency.processutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/33ad043e-54e2-42cc-83ac-b9b98d8fbc46/disk.config 33ad043e-54e2-42cc-83ac-b9b98d8fbc46_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:48 np0005539564 nova_compute[226295]: 2025-11-29 07:48:48.844 226310 INFO nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Deleting local config drive /var/lib/nova/instances/33ad043e-54e2-42cc-83ac-b9b98d8fbc46/disk.config because it was imported into RBD.#033[00m
Nov 29 02:48:48 np0005539564 systemd-machined[190128]: New machine qemu-10-instance-00000017.
Nov 29 02:48:48 np0005539564 systemd[1]: Started Virtual Machine qemu-10-instance-00000017.
Nov 29 02:48:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:49.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:49.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:49 np0005539564 podman[236521]: 2025-11-29 07:48:49.656996389 +0000 UTC m=+0.072606646 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 02:48:49 np0005539564 nova_compute[226295]: 2025-11-29 07:48:49.672 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402529.6721275, 33ad043e-54e2-42cc-83ac-b9b98d8fbc46 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:48:49 np0005539564 nova_compute[226295]: 2025-11-29 07:48:49.673 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:48:49 np0005539564 podman[236522]: 2025-11-29 07:48:49.675446068 +0000 UTC m=+0.090643224 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 02:48:49 np0005539564 nova_compute[226295]: 2025-11-29 07:48:49.676 226310 DEBUG nova.compute.manager [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:48:49 np0005539564 nova_compute[226295]: 2025-11-29 07:48:49.676 226310 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:48:49 np0005539564 nova_compute[226295]: 2025-11-29 07:48:49.683 226310 INFO nova.virt.libvirt.driver [-] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Instance spawned successfully.#033[00m
Nov 29 02:48:49 np0005539564 nova_compute[226295]: 2025-11-29 07:48:49.684 226310 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:48:49 np0005539564 nova_compute[226295]: 2025-11-29 07:48:49.708 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:48:49 np0005539564 nova_compute[226295]: 2025-11-29 07:48:49.714 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:48:49 np0005539564 nova_compute[226295]: 2025-11-29 07:48:49.718 226310 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:48:49 np0005539564 nova_compute[226295]: 2025-11-29 07:48:49.718 226310 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:48:49 np0005539564 nova_compute[226295]: 2025-11-29 07:48:49.719 226310 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:48:49 np0005539564 nova_compute[226295]: 2025-11-29 07:48:49.719 226310 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:48:49 np0005539564 nova_compute[226295]: 2025-11-29 07:48:49.719 226310 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:48:49 np0005539564 nova_compute[226295]: 2025-11-29 07:48:49.720 226310 DEBUG nova.virt.libvirt.driver [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:48:49 np0005539564 nova_compute[226295]: 2025-11-29 07:48:49.756 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:48:49 np0005539564 nova_compute[226295]: 2025-11-29 07:48:49.757 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402529.6766188, 33ad043e-54e2-42cc-83ac-b9b98d8fbc46 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:48:49 np0005539564 nova_compute[226295]: 2025-11-29 07:48:49.757 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] VM Started (Lifecycle Event)#033[00m
Nov 29 02:48:49 np0005539564 podman[236557]: 2025-11-29 07:48:49.790954835 +0000 UTC m=+0.113838012 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:48:49 np0005539564 nova_compute[226295]: 2025-11-29 07:48:49.804 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:48:49 np0005539564 nova_compute[226295]: 2025-11-29 07:48:49.807 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:48:49 np0005539564 nova_compute[226295]: 2025-11-29 07:48:49.846 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:48:49 np0005539564 nova_compute[226295]: 2025-11-29 07:48:49.867 226310 INFO nova.compute.manager [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Took 13.04 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:48:49 np0005539564 nova_compute[226295]: 2025-11-29 07:48:49.867 226310 DEBUG nova.compute.manager [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:48:50 np0005539564 nova_compute[226295]: 2025-11-29 07:48:50.053 226310 INFO nova.compute.manager [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Took 14.46 seconds to build instance.#033[00m
Nov 29 02:48:50 np0005539564 nova_compute[226295]: 2025-11-29 07:48:50.111 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:50 np0005539564 nova_compute[226295]: 2025-11-29 07:48:50.140 226310 DEBUG oslo_concurrency.lockutils [None req-2688ee52-d345-41fd-bfdf-af7ba4ba99ea 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "33ad043e-54e2-42cc-83ac-b9b98d8fbc46" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:50 np0005539564 nova_compute[226295]: 2025-11-29 07:48:50.221 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:51.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:51.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e150 e150: 3 total, 3 up, 3 in
Nov 29 02:48:52 np0005539564 ceph-osd[79212]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Nov 29 02:48:53 np0005539564 nova_compute[226295]: 2025-11-29 07:48:53.403 226310 DEBUG oslo_concurrency.lockutils [None req-f9809492-e821-4d7a-a38c-ef1160b55115 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "33ad043e-54e2-42cc-83ac-b9b98d8fbc46" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:53 np0005539564 nova_compute[226295]: 2025-11-29 07:48:53.405 226310 DEBUG oslo_concurrency.lockutils [None req-f9809492-e821-4d7a-a38c-ef1160b55115 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "33ad043e-54e2-42cc-83ac-b9b98d8fbc46" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:53 np0005539564 nova_compute[226295]: 2025-11-29 07:48:53.406 226310 DEBUG oslo_concurrency.lockutils [None req-f9809492-e821-4d7a-a38c-ef1160b55115 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "33ad043e-54e2-42cc-83ac-b9b98d8fbc46-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:53 np0005539564 nova_compute[226295]: 2025-11-29 07:48:53.406 226310 DEBUG oslo_concurrency.lockutils [None req-f9809492-e821-4d7a-a38c-ef1160b55115 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "33ad043e-54e2-42cc-83ac-b9b98d8fbc46-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:53 np0005539564 nova_compute[226295]: 2025-11-29 07:48:53.406 226310 DEBUG oslo_concurrency.lockutils [None req-f9809492-e821-4d7a-a38c-ef1160b55115 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "33ad043e-54e2-42cc-83ac-b9b98d8fbc46-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:53 np0005539564 nova_compute[226295]: 2025-11-29 07:48:53.408 226310 INFO nova.compute.manager [None req-f9809492-e821-4d7a-a38c-ef1160b55115 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Terminating instance#033[00m
Nov 29 02:48:53 np0005539564 nova_compute[226295]: 2025-11-29 07:48:53.409 226310 DEBUG oslo_concurrency.lockutils [None req-f9809492-e821-4d7a-a38c-ef1160b55115 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "refresh_cache-33ad043e-54e2-42cc-83ac-b9b98d8fbc46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:48:53 np0005539564 nova_compute[226295]: 2025-11-29 07:48:53.410 226310 DEBUG oslo_concurrency.lockutils [None req-f9809492-e821-4d7a-a38c-ef1160b55115 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquired lock "refresh_cache-33ad043e-54e2-42cc-83ac-b9b98d8fbc46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:48:53 np0005539564 nova_compute[226295]: 2025-11-29 07:48:53.410 226310 DEBUG nova.network.neutron [None req-f9809492-e821-4d7a-a38c-ef1160b55115 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:48:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:53.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:53.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:53 np0005539564 nova_compute[226295]: 2025-11-29 07:48:53.582 226310 DEBUG nova.network.neutron [None req-f9809492-e821-4d7a-a38c-ef1160b55115 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:48:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:54 np0005539564 nova_compute[226295]: 2025-11-29 07:48:54.726 226310 DEBUG nova.network.neutron [None req-f9809492-e821-4d7a-a38c-ef1160b55115 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:48:54 np0005539564 nova_compute[226295]: 2025-11-29 07:48:54.838 226310 DEBUG oslo_concurrency.lockutils [None req-f9809492-e821-4d7a-a38c-ef1160b55115 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Releasing lock "refresh_cache-33ad043e-54e2-42cc-83ac-b9b98d8fbc46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:48:54 np0005539564 nova_compute[226295]: 2025-11-29 07:48:54.839 226310 DEBUG nova.compute.manager [None req-f9809492-e821-4d7a-a38c-ef1160b55115 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:48:55 np0005539564 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000017.scope: Deactivated successfully.
Nov 29 02:48:55 np0005539564 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000017.scope: Consumed 5.993s CPU time.
Nov 29 02:48:55 np0005539564 nova_compute[226295]: 2025-11-29 07:48:55.112 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:55 np0005539564 systemd-machined[190128]: Machine qemu-10-instance-00000017 terminated.
Nov 29 02:48:55 np0005539564 nova_compute[226295]: 2025-11-29 07:48:55.223 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:55 np0005539564 nova_compute[226295]: 2025-11-29 07:48:55.264 226310 INFO nova.virt.libvirt.driver [-] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Instance destroyed successfully.#033[00m
Nov 29 02:48:55 np0005539564 nova_compute[226295]: 2025-11-29 07:48:55.265 226310 DEBUG nova.objects.instance [None req-f9809492-e821-4d7a-a38c-ef1160b55115 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lazy-loading 'resources' on Instance uuid 33ad043e-54e2-42cc-83ac-b9b98d8fbc46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:48:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:55.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:55.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:55 np0005539564 nova_compute[226295]: 2025-11-29 07:48:55.746 226310 INFO nova.virt.libvirt.driver [None req-f9809492-e821-4d7a-a38c-ef1160b55115 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Deleting instance files /var/lib/nova/instances/33ad043e-54e2-42cc-83ac-b9b98d8fbc46_del#033[00m
Nov 29 02:48:55 np0005539564 nova_compute[226295]: 2025-11-29 07:48:55.746 226310 INFO nova.virt.libvirt.driver [None req-f9809492-e821-4d7a-a38c-ef1160b55115 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Deletion of /var/lib/nova/instances/33ad043e-54e2-42cc-83ac-b9b98d8fbc46_del complete#033[00m
Nov 29 02:48:55 np0005539564 nova_compute[226295]: 2025-11-29 07:48:55.811 226310 INFO nova.compute.manager [None req-f9809492-e821-4d7a-a38c-ef1160b55115 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Took 0.97 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:48:55 np0005539564 nova_compute[226295]: 2025-11-29 07:48:55.812 226310 DEBUG oslo.service.loopingcall [None req-f9809492-e821-4d7a-a38c-ef1160b55115 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:48:55 np0005539564 nova_compute[226295]: 2025-11-29 07:48:55.813 226310 DEBUG nova.compute.manager [-] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:48:55 np0005539564 nova_compute[226295]: 2025-11-29 07:48:55.813 226310 DEBUG nova.network.neutron [-] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:48:56 np0005539564 nova_compute[226295]: 2025-11-29 07:48:56.555 226310 DEBUG nova.network.neutron [-] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:48:56 np0005539564 nova_compute[226295]: 2025-11-29 07:48:56.568 226310 DEBUG nova.network.neutron [-] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:48:56 np0005539564 nova_compute[226295]: 2025-11-29 07:48:56.592 226310 INFO nova.compute.manager [-] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Took 0.78 seconds to deallocate network for instance.#033[00m
Nov 29 02:48:56 np0005539564 nova_compute[226295]: 2025-11-29 07:48:56.649 226310 DEBUG oslo_concurrency.lockutils [None req-f9809492-e821-4d7a-a38c-ef1160b55115 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:56 np0005539564 nova_compute[226295]: 2025-11-29 07:48:56.650 226310 DEBUG oslo_concurrency.lockutils [None req-f9809492-e821-4d7a-a38c-ef1160b55115 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:56 np0005539564 nova_compute[226295]: 2025-11-29 07:48:56.742 226310 DEBUG oslo_concurrency.processutils [None req-f9809492-e821-4d7a-a38c-ef1160b55115 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:48:57 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3564154323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:48:57 np0005539564 nova_compute[226295]: 2025-11-29 07:48:57.202 226310 DEBUG oslo_concurrency.processutils [None req-f9809492-e821-4d7a-a38c-ef1160b55115 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:57 np0005539564 nova_compute[226295]: 2025-11-29 07:48:57.208 226310 DEBUG nova.compute.provider_tree [None req-f9809492-e821-4d7a-a38c-ef1160b55115 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:48:57 np0005539564 nova_compute[226295]: 2025-11-29 07:48:57.224 226310 DEBUG nova.scheduler.client.report [None req-f9809492-e821-4d7a-a38c-ef1160b55115 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:48:57 np0005539564 nova_compute[226295]: 2025-11-29 07:48:57.244 226310 DEBUG oslo_concurrency.lockutils [None req-f9809492-e821-4d7a-a38c-ef1160b55115 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:57 np0005539564 nova_compute[226295]: 2025-11-29 07:48:57.270 226310 INFO nova.scheduler.client.report [None req-f9809492-e821-4d7a-a38c-ef1160b55115 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Deleted allocations for instance 33ad043e-54e2-42cc-83ac-b9b98d8fbc46#033[00m
Nov 29 02:48:57 np0005539564 nova_compute[226295]: 2025-11-29 07:48:57.352 226310 DEBUG oslo_concurrency.lockutils [None req-f9809492-e821-4d7a-a38c-ef1160b55115 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "33ad043e-54e2-42cc-83ac-b9b98d8fbc46" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:57.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:57.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e151 e151: 3 total, 3 up, 3 in
Nov 29 02:48:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:59.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:48:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:59.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:00 np0005539564 nova_compute[226295]: 2025-11-29 07:49:00.114 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:00 np0005539564 nova_compute[226295]: 2025-11-29 07:49:00.225 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:01 np0005539564 nova_compute[226295]: 2025-11-29 07:49:01.254 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:01.255 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:49:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:01.258 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:49:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:01.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:01.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:01 np0005539564 nova_compute[226295]: 2025-11-29 07:49:01.602 226310 DEBUG oslo_concurrency.lockutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Acquiring lock "55a96092-65f2-4612-a809-0f145c804f96" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:01 np0005539564 nova_compute[226295]: 2025-11-29 07:49:01.602 226310 DEBUG oslo_concurrency.lockutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "55a96092-65f2-4612-a809-0f145c804f96" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:01 np0005539564 nova_compute[226295]: 2025-11-29 07:49:01.633 226310 DEBUG nova.compute.manager [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:49:01 np0005539564 nova_compute[226295]: 2025-11-29 07:49:01.753 226310 DEBUG oslo_concurrency.lockutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:01 np0005539564 nova_compute[226295]: 2025-11-29 07:49:01.753 226310 DEBUG oslo_concurrency.lockutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:01 np0005539564 nova_compute[226295]: 2025-11-29 07:49:01.761 226310 DEBUG nova.virt.hardware [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:49:01 np0005539564 nova_compute[226295]: 2025-11-29 07:49:01.761 226310 INFO nova.compute.claims [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 02:49:01 np0005539564 nova_compute[226295]: 2025-11-29 07:49:01.909 226310 DEBUG oslo_concurrency.processutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:49:02 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1874588273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:49:02 np0005539564 nova_compute[226295]: 2025-11-29 07:49:02.448 226310 DEBUG oslo_concurrency.processutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:02 np0005539564 nova_compute[226295]: 2025-11-29 07:49:02.456 226310 DEBUG nova.compute.provider_tree [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:49:02 np0005539564 nova_compute[226295]: 2025-11-29 07:49:02.473 226310 DEBUG nova.scheduler.client.report [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:49:02 np0005539564 nova_compute[226295]: 2025-11-29 07:49:02.503 226310 DEBUG oslo_concurrency.lockutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:02 np0005539564 nova_compute[226295]: 2025-11-29 07:49:02.504 226310 DEBUG nova.compute.manager [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:49:02 np0005539564 nova_compute[226295]: 2025-11-29 07:49:02.559 226310 DEBUG nova.compute.manager [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:49:02 np0005539564 nova_compute[226295]: 2025-11-29 07:49:02.560 226310 DEBUG nova.network.neutron [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:49:02 np0005539564 nova_compute[226295]: 2025-11-29 07:49:02.589 226310 INFO nova.virt.libvirt.driver [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:49:02 np0005539564 nova_compute[226295]: 2025-11-29 07:49:02.605 226310 DEBUG nova.compute.manager [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:49:02 np0005539564 nova_compute[226295]: 2025-11-29 07:49:02.705 226310 DEBUG nova.compute.manager [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:49:02 np0005539564 nova_compute[226295]: 2025-11-29 07:49:02.707 226310 DEBUG nova.virt.libvirt.driver [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:49:02 np0005539564 nova_compute[226295]: 2025-11-29 07:49:02.708 226310 INFO nova.virt.libvirt.driver [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Creating image(s)#033[00m
Nov 29 02:49:02 np0005539564 nova_compute[226295]: 2025-11-29 07:49:02.757 226310 DEBUG nova.storage.rbd_utils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] rbd image 55a96092-65f2-4612-a809-0f145c804f96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:02 np0005539564 nova_compute[226295]: 2025-11-29 07:49:02.799 226310 DEBUG nova.storage.rbd_utils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] rbd image 55a96092-65f2-4612-a809-0f145c804f96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:02 np0005539564 nova_compute[226295]: 2025-11-29 07:49:02.831 226310 DEBUG nova.storage.rbd_utils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] rbd image 55a96092-65f2-4612-a809-0f145c804f96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:02 np0005539564 nova_compute[226295]: 2025-11-29 07:49:02.834 226310 DEBUG oslo_concurrency.processutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:02 np0005539564 nova_compute[226295]: 2025-11-29 07:49:02.860 226310 DEBUG nova.policy [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '93506ec26b16451c91dc820b139e8707', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b2c58ae2e706424fa3147694fc571db0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:49:02 np0005539564 nova_compute[226295]: 2025-11-29 07:49:02.907 226310 DEBUG oslo_concurrency.processutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:02 np0005539564 nova_compute[226295]: 2025-11-29 07:49:02.908 226310 DEBUG oslo_concurrency.lockutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:02 np0005539564 nova_compute[226295]: 2025-11-29 07:49:02.908 226310 DEBUG oslo_concurrency.lockutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:02 np0005539564 nova_compute[226295]: 2025-11-29 07:49:02.908 226310 DEBUG oslo_concurrency.lockutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:02 np0005539564 nova_compute[226295]: 2025-11-29 07:49:02.961 226310 DEBUG nova.storage.rbd_utils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] rbd image 55a96092-65f2-4612-a809-0f145c804f96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:02 np0005539564 nova_compute[226295]: 2025-11-29 07:49:02.965 226310 DEBUG oslo_concurrency.processutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 55a96092-65f2-4612-a809-0f145c804f96_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:03.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:49:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:03.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:49:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:03.697 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:03.698 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:03.698 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:04 np0005539564 nova_compute[226295]: 2025-11-29 07:49:04.348 226310 DEBUG oslo_concurrency.processutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 55a96092-65f2-4612-a809-0f145c804f96_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.383s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:04 np0005539564 nova_compute[226295]: 2025-11-29 07:49:04.477 226310 DEBUG nova.storage.rbd_utils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] resizing rbd image 55a96092-65f2-4612-a809-0f145c804f96_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:49:04 np0005539564 nova_compute[226295]: 2025-11-29 07:49:04.531 226310 DEBUG nova.network.neutron [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Successfully created port: 0b323b38-c9ec-4cca-a4db-a839bbb3d14b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:49:05 np0005539564 nova_compute[226295]: 2025-11-29 07:49:05.162 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:05 np0005539564 nova_compute[226295]: 2025-11-29 07:49:05.226 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:49:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:05.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:49:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:05.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:05 np0005539564 nova_compute[226295]: 2025-11-29 07:49:05.659 226310 DEBUG nova.network.neutron [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Successfully updated port: 0b323b38-c9ec-4cca-a4db-a839bbb3d14b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:49:05 np0005539564 nova_compute[226295]: 2025-11-29 07:49:05.703 226310 DEBUG oslo_concurrency.lockutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Acquiring lock "refresh_cache-55a96092-65f2-4612-a809-0f145c804f96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:49:05 np0005539564 nova_compute[226295]: 2025-11-29 07:49:05.703 226310 DEBUG oslo_concurrency.lockutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Acquired lock "refresh_cache-55a96092-65f2-4612-a809-0f145c804f96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:49:05 np0005539564 nova_compute[226295]: 2025-11-29 07:49:05.704 226310 DEBUG nova.network.neutron [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:49:05 np0005539564 nova_compute[226295]: 2025-11-29 07:49:05.890 226310 DEBUG nova.network.neutron [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:49:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e152 e152: 3 total, 3 up, 3 in
Nov 29 02:49:06 np0005539564 nova_compute[226295]: 2025-11-29 07:49:06.501 226310 DEBUG oslo_concurrency.lockutils [None req-09480306-73a3-444a-a98e-9995717f3b08 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "f1710cd2-4467-4d84-9adc-5a022d404b26" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:06 np0005539564 nova_compute[226295]: 2025-11-29 07:49:06.502 226310 DEBUG oslo_concurrency.lockutils [None req-09480306-73a3-444a-a98e-9995717f3b08 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "f1710cd2-4467-4d84-9adc-5a022d404b26" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:06 np0005539564 nova_compute[226295]: 2025-11-29 07:49:06.502 226310 DEBUG oslo_concurrency.lockutils [None req-09480306-73a3-444a-a98e-9995717f3b08 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "f1710cd2-4467-4d84-9adc-5a022d404b26-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:06 np0005539564 nova_compute[226295]: 2025-11-29 07:49:06.503 226310 DEBUG oslo_concurrency.lockutils [None req-09480306-73a3-444a-a98e-9995717f3b08 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "f1710cd2-4467-4d84-9adc-5a022d404b26-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:06 np0005539564 nova_compute[226295]: 2025-11-29 07:49:06.503 226310 DEBUG oslo_concurrency.lockutils [None req-09480306-73a3-444a-a98e-9995717f3b08 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "f1710cd2-4467-4d84-9adc-5a022d404b26-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:06 np0005539564 nova_compute[226295]: 2025-11-29 07:49:06.505 226310 INFO nova.compute.manager [None req-09480306-73a3-444a-a98e-9995717f3b08 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Terminating instance#033[00m
Nov 29 02:49:06 np0005539564 nova_compute[226295]: 2025-11-29 07:49:06.507 226310 DEBUG oslo_concurrency.lockutils [None req-09480306-73a3-444a-a98e-9995717f3b08 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "refresh_cache-f1710cd2-4467-4d84-9adc-5a022d404b26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:49:06 np0005539564 nova_compute[226295]: 2025-11-29 07:49:06.508 226310 DEBUG oslo_concurrency.lockutils [None req-09480306-73a3-444a-a98e-9995717f3b08 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquired lock "refresh_cache-f1710cd2-4467-4d84-9adc-5a022d404b26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:49:06 np0005539564 nova_compute[226295]: 2025-11-29 07:49:06.508 226310 DEBUG nova.network.neutron [None req-09480306-73a3-444a-a98e-9995717f3b08 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:49:06 np0005539564 nova_compute[226295]: 2025-11-29 07:49:06.579 226310 DEBUG nova.compute.manager [req-63cd3ee8-a8d2-44d0-955f-719cc15a75cc req-a864e8e8-6874-4358-9208-17340e388c5b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Received event network-changed-0b323b38-c9ec-4cca-a4db-a839bbb3d14b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:49:06 np0005539564 nova_compute[226295]: 2025-11-29 07:49:06.579 226310 DEBUG nova.compute.manager [req-63cd3ee8-a8d2-44d0-955f-719cc15a75cc req-a864e8e8-6874-4358-9208-17340e388c5b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Refreshing instance network info cache due to event network-changed-0b323b38-c9ec-4cca-a4db-a839bbb3d14b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:49:06 np0005539564 nova_compute[226295]: 2025-11-29 07:49:06.579 226310 DEBUG oslo_concurrency.lockutils [req-63cd3ee8-a8d2-44d0-955f-719cc15a75cc req-a864e8e8-6874-4358-9208-17340e388c5b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-55a96092-65f2-4612-a809-0f145c804f96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:49:06 np0005539564 nova_compute[226295]: 2025-11-29 07:49:06.790 226310 DEBUG nova.network.neutron [None req-09480306-73a3-444a-a98e-9995717f3b08 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:49:07 np0005539564 nova_compute[226295]: 2025-11-29 07:49:07.058 226310 DEBUG nova.network.neutron [None req-09480306-73a3-444a-a98e-9995717f3b08 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:49:07 np0005539564 nova_compute[226295]: 2025-11-29 07:49:07.074 226310 DEBUG oslo_concurrency.lockutils [None req-09480306-73a3-444a-a98e-9995717f3b08 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Releasing lock "refresh_cache-f1710cd2-4467-4d84-9adc-5a022d404b26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:49:07 np0005539564 nova_compute[226295]: 2025-11-29 07:49:07.074 226310 DEBUG nova.compute.manager [None req-09480306-73a3-444a-a98e-9995717f3b08 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:49:07 np0005539564 nova_compute[226295]: 2025-11-29 07:49:07.318 226310 DEBUG nova.network.neutron [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Updating instance_info_cache with network_info: [{"id": "0b323b38-c9ec-4cca-a4db-a839bbb3d14b", "address": "fa:16:3e:81:67:81", "network": {"id": "5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1728480926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2c58ae2e706424fa3147694fc571db0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b323b38-c9", "ovs_interfaceid": "0b323b38-c9ec-4cca-a4db-a839bbb3d14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:49:07 np0005539564 nova_compute[226295]: 2025-11-29 07:49:07.412 226310 DEBUG oslo_concurrency.lockutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Releasing lock "refresh_cache-55a96092-65f2-4612-a809-0f145c804f96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:49:07 np0005539564 nova_compute[226295]: 2025-11-29 07:49:07.412 226310 DEBUG nova.compute.manager [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Instance network_info: |[{"id": "0b323b38-c9ec-4cca-a4db-a839bbb3d14b", "address": "fa:16:3e:81:67:81", "network": {"id": "5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1728480926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2c58ae2e706424fa3147694fc571db0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b323b38-c9", "ovs_interfaceid": "0b323b38-c9ec-4cca-a4db-a839bbb3d14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:49:07 np0005539564 nova_compute[226295]: 2025-11-29 07:49:07.413 226310 DEBUG oslo_concurrency.lockutils [req-63cd3ee8-a8d2-44d0-955f-719cc15a75cc req-a864e8e8-6874-4358-9208-17340e388c5b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-55a96092-65f2-4612-a809-0f145c804f96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:49:07 np0005539564 nova_compute[226295]: 2025-11-29 07:49:07.413 226310 DEBUG nova.network.neutron [req-63cd3ee8-a8d2-44d0-955f-719cc15a75cc req-a864e8e8-6874-4358-9208-17340e388c5b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Refreshing network info cache for port 0b323b38-c9ec-4cca-a4db-a839bbb3d14b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:49:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:07.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:07.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:09.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:09.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:10.261 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.704 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.707 226310 DEBUG nova.network.neutron [req-63cd3ee8-a8d2-44d0-955f-719cc15a75cc req-a864e8e8-6874-4358-9208-17340e388c5b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Updated VIF entry in instance network info cache for port 0b323b38-c9ec-4cca-a4db-a839bbb3d14b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.708 226310 DEBUG nova.network.neutron [req-63cd3ee8-a8d2-44d0-955f-719cc15a75cc req-a864e8e8-6874-4358-9208-17340e388c5b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Updating instance_info_cache with network_info: [{"id": "0b323b38-c9ec-4cca-a4db-a839bbb3d14b", "address": "fa:16:3e:81:67:81", "network": {"id": "5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1728480926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2c58ae2e706424fa3147694fc571db0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b323b38-c9", "ovs_interfaceid": "0b323b38-c9ec-4cca-a4db-a839bbb3d14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.711 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402535.2614083, 33ad043e-54e2-42cc-83ac-b9b98d8fbc46 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.711 226310 INFO nova.compute.manager [-] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.722 226310 DEBUG nova.objects.instance [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lazy-loading 'migration_context' on Instance uuid 55a96092-65f2-4612-a809-0f145c804f96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.749 226310 DEBUG nova.compute.manager [None req-4316f1da-c60e-44d5-a4ce-63d776977579 - - - - - -] [instance: 33ad043e-54e2-42cc-83ac-b9b98d8fbc46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.750 226310 DEBUG oslo_concurrency.lockutils [req-63cd3ee8-a8d2-44d0-955f-719cc15a75cc req-a864e8e8-6874-4358-9208-17340e388c5b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-55a96092-65f2-4612-a809-0f145c804f96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.758 226310 DEBUG nova.virt.libvirt.driver [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.759 226310 DEBUG nova.virt.libvirt.driver [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Ensure instance console log exists: /var/lib/nova/instances/55a96092-65f2-4612-a809-0f145c804f96/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.759 226310 DEBUG oslo_concurrency.lockutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.760 226310 DEBUG oslo_concurrency.lockutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.761 226310 DEBUG oslo_concurrency.lockutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.765 226310 DEBUG nova.virt.libvirt.driver [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Start _get_guest_xml network_info=[{"id": "0b323b38-c9ec-4cca-a4db-a839bbb3d14b", "address": "fa:16:3e:81:67:81", "network": {"id": "5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1728480926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2c58ae2e706424fa3147694fc571db0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b323b38-c9", "ovs_interfaceid": "0b323b38-c9ec-4cca-a4db-a839bbb3d14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.773 226310 WARNING nova.virt.libvirt.driver [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.782 226310 DEBUG nova.virt.libvirt.host [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.783 226310 DEBUG nova.virt.libvirt.host [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.788 226310 DEBUG nova.virt.libvirt.host [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.788 226310 DEBUG nova.virt.libvirt.host [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.790 226310 DEBUG nova.virt.libvirt.driver [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.791 226310 DEBUG nova.virt.hardware [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.791 226310 DEBUG nova.virt.hardware [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.792 226310 DEBUG nova.virt.hardware [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.792 226310 DEBUG nova.virt.hardware [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.792 226310 DEBUG nova.virt.hardware [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.793 226310 DEBUG nova.virt.hardware [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.793 226310 DEBUG nova.virt.hardware [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.793 226310 DEBUG nova.virt.hardware [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.794 226310 DEBUG nova.virt.hardware [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.794 226310 DEBUG nova.virt.hardware [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.794 226310 DEBUG nova.virt.hardware [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:49:10 np0005539564 nova_compute[226295]: 2025-11-29 07:49:10.798 226310 DEBUG oslo_concurrency.processutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:10 np0005539564 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000013.scope: Deactivated successfully.
Nov 29 02:49:10 np0005539564 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000013.scope: Consumed 16.625s CPU time.
Nov 29 02:49:10 np0005539564 systemd-machined[190128]: Machine qemu-9-instance-00000013 terminated.
Nov 29 02:49:11 np0005539564 nova_compute[226295]: 2025-11-29 07:49:11.106 226310 INFO nova.virt.libvirt.driver [-] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Instance destroyed successfully.#033[00m
Nov 29 02:49:11 np0005539564 nova_compute[226295]: 2025-11-29 07:49:11.107 226310 DEBUG nova.objects.instance [None req-09480306-73a3-444a-a98e-9995717f3b08 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lazy-loading 'resources' on Instance uuid f1710cd2-4467-4d84-9adc-5a022d404b26 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:49:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:11.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:11.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:49:12 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/552190905' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:49:12 np0005539564 nova_compute[226295]: 2025-11-29 07:49:12.886 226310 DEBUG oslo_concurrency.processutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:12 np0005539564 nova_compute[226295]: 2025-11-29 07:49:12.954 226310 DEBUG nova.storage.rbd_utils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] rbd image 55a96092-65f2-4612-a809-0f145c804f96_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:12 np0005539564 nova_compute[226295]: 2025-11-29 07:49:12.959 226310 DEBUG oslo_concurrency.processutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:49:13 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1753198554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.397 226310 DEBUG oslo_concurrency.processutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.400 226310 DEBUG nova.virt.libvirt.vif [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:49:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-2079509307',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-2079509307',id=25,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEYgVhV5oCWL/zQAqB0DQOOXmiTf0DMuz+TQcrYDPPKNZbRx/P2PRwEEgf3Xvpb7WhJ4XE5LOnipChRiobaw1mrfCL6W7daqE2XxiRFHktfVRQSPzC2uzKZew970NImApw==',key_name='tempest-keypair-1714751908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b2c58ae2e706424fa3147694fc571db0',ramdisk_id='',reservation_id='r-h9zj9wcd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-1774120772',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-1774120772-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:49:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93506ec26b16451c91dc820b139e8707',uuid=55a96092-65f2-4612-a809-0f145c804f96,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b323b38-c9ec-4cca-a4db-a839bbb3d14b", "address": "fa:16:3e:81:67:81", "network": {"id": "5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1728480926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2c58ae2e706424fa3147694fc571db0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b323b38-c9", "ovs_interfaceid": "0b323b38-c9ec-4cca-a4db-a839bbb3d14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.401 226310 DEBUG nova.network.os_vif_util [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Converting VIF {"id": "0b323b38-c9ec-4cca-a4db-a839bbb3d14b", "address": "fa:16:3e:81:67:81", "network": {"id": "5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1728480926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2c58ae2e706424fa3147694fc571db0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b323b38-c9", "ovs_interfaceid": "0b323b38-c9ec-4cca-a4db-a839bbb3d14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.403 226310 DEBUG nova.network.os_vif_util [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:67:81,bridge_name='br-int',has_traffic_filtering=True,id=0b323b38-c9ec-4cca-a4db-a839bbb3d14b,network=Network(5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b323b38-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.405 226310 DEBUG nova.objects.instance [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 55a96092-65f2-4612-a809-0f145c804f96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.427 226310 DEBUG nova.virt.libvirt.driver [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:49:13 np0005539564 nova_compute[226295]:  <uuid>55a96092-65f2-4612-a809-0f145c804f96</uuid>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:  <name>instance-00000019</name>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <nova:name>tempest-UpdateMultiattachVolumeNegativeTest-server-2079509307</nova:name>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 07:49:10</nova:creationTime>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 02:49:13 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:        <nova:user uuid="93506ec26b16451c91dc820b139e8707">tempest-UpdateMultiattachVolumeNegativeTest-1774120772-project-member</nova:user>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:        <nova:project uuid="b2c58ae2e706424fa3147694fc571db0">tempest-UpdateMultiattachVolumeNegativeTest-1774120772</nova:project>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:        <nova:port uuid="0b323b38-c9ec-4cca-a4db-a839bbb3d14b">
Nov 29 02:49:13 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <system>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <entry name="serial">55a96092-65f2-4612-a809-0f145c804f96</entry>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <entry name="uuid">55a96092-65f2-4612-a809-0f145c804f96</entry>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    </system>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:  <os>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:  </clock>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/55a96092-65f2-4612-a809-0f145c804f96_disk">
Nov 29 02:49:13 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:49:13 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/55a96092-65f2-4612-a809-0f145c804f96_disk.config">
Nov 29 02:49:13 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:49:13 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:81:67:81"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <target dev="tap0b323b38-c9"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    </interface>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/55a96092-65f2-4612-a809-0f145c804f96/console.log" append="off"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    </serial>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <video>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 02:49:13 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 02:49:13 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:49:13 np0005539564 nova_compute[226295]: </domain>
Nov 29 02:49:13 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.428 226310 DEBUG nova.compute.manager [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Preparing to wait for external event network-vif-plugged-0b323b38-c9ec-4cca-a4db-a839bbb3d14b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.429 226310 DEBUG oslo_concurrency.lockutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Acquiring lock "55a96092-65f2-4612-a809-0f145c804f96-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.429 226310 DEBUG oslo_concurrency.lockutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "55a96092-65f2-4612-a809-0f145c804f96-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.429 226310 DEBUG oslo_concurrency.lockutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "55a96092-65f2-4612-a809-0f145c804f96-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.430 226310 DEBUG nova.virt.libvirt.vif [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:49:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-2079509307',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-2079509307',id=25,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEYgVhV5oCWL/zQAqB0DQOOXmiTf0DMuz+TQcrYDPPKNZbRx/P2PRwEEgf3Xvpb7WhJ4XE5LOnipChRiobaw1mrfCL6W7daqE2XxiRFHktfVRQSPzC2uzKZew970NImApw==',key_name='tempest-keypair-1714751908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b2c58ae2e706424fa3147694fc571db0',ramdisk_id='',reservation_id='r-h9zj9wcd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-1774120772',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-1774120772-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:49:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93506ec26b16451c91dc820b139e8707',uuid=55a96092-65f2-4612-a809-0f145c804f96,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b323b38-c9ec-4cca-a4db-a839bbb3d14b", "address": "fa:16:3e:81:67:81", "network": {"id": "5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1728480926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2c58ae2e706424fa3147694fc571db0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b323b38-c9", "ovs_interfaceid": "0b323b38-c9ec-4cca-a4db-a839bbb3d14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.431 226310 DEBUG nova.network.os_vif_util [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Converting VIF {"id": "0b323b38-c9ec-4cca-a4db-a839bbb3d14b", "address": "fa:16:3e:81:67:81", "network": {"id": "5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1728480926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2c58ae2e706424fa3147694fc571db0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b323b38-c9", "ovs_interfaceid": "0b323b38-c9ec-4cca-a4db-a839bbb3d14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.432 226310 DEBUG nova.network.os_vif_util [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:67:81,bridge_name='br-int',has_traffic_filtering=True,id=0b323b38-c9ec-4cca-a4db-a839bbb3d14b,network=Network(5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b323b38-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.433 226310 DEBUG os_vif [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:67:81,bridge_name='br-int',has_traffic_filtering=True,id=0b323b38-c9ec-4cca-a4db-a839bbb3d14b,network=Network(5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b323b38-c9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.434 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.434 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.435 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.440 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.440 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b323b38-c9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.441 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0b323b38-c9, col_values=(('external_ids', {'iface-id': '0b323b38-c9ec-4cca-a4db-a839bbb3d14b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:81:67:81', 'vm-uuid': '55a96092-65f2-4612-a809-0f145c804f96'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.443 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:13 np0005539564 NetworkManager[48997]: <info>  [1764402553.4442] manager: (tap0b323b38-c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.445 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.451 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.452 226310 INFO os_vif [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:67:81,bridge_name='br-int',has_traffic_filtering=True,id=0b323b38-c9ec-4cca-a4db-a839bbb3d14b,network=Network(5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b323b38-c9')#033[00m
Nov 29 02:49:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:13.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:13.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.787 226310 DEBUG nova.virt.libvirt.driver [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.787 226310 DEBUG nova.virt.libvirt.driver [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.788 226310 DEBUG nova.virt.libvirt.driver [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] No VIF found with MAC fa:16:3e:81:67:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.789 226310 INFO nova.virt.libvirt.driver [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Using config drive#033[00m
Nov 29 02:49:13 np0005539564 nova_compute[226295]: 2025-11-29 07:49:13.827 226310 DEBUG nova.storage.rbd_utils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] rbd image 55a96092-65f2-4612-a809-0f145c804f96_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:13 np0005539564 podman[237068]: 2025-11-29 07:49:13.949451175 +0000 UTC m=+0.474319272 container exec 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:49:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:14 np0005539564 podman[237108]: 2025-11-29 07:49:14.188066955 +0000 UTC m=+0.101628562 container exec_died 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:49:14 np0005539564 podman[237068]: 2025-11-29 07:49:14.479068013 +0000 UTC m=+1.003936080 container exec_died 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:49:15 np0005539564 nova_compute[226295]: 2025-11-29 07:49:15.166 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:15.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:15 np0005539564 nova_compute[226295]: 2025-11-29 07:49:15.574 226310 INFO nova.virt.libvirt.driver [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Creating config drive at /var/lib/nova/instances/55a96092-65f2-4612-a809-0f145c804f96/disk.config#033[00m
Nov 29 02:49:15 np0005539564 nova_compute[226295]: 2025-11-29 07:49:15.586 226310 DEBUG oslo_concurrency.processutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/55a96092-65f2-4612-a809-0f145c804f96/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphif94mko execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:15.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:15 np0005539564 nova_compute[226295]: 2025-11-29 07:49:15.738 226310 DEBUG oslo_concurrency.processutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/55a96092-65f2-4612-a809-0f145c804f96/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphif94mko" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:15 np0005539564 nova_compute[226295]: 2025-11-29 07:49:15.849 226310 DEBUG nova.storage.rbd_utils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] rbd image 55a96092-65f2-4612-a809-0f145c804f96_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:15 np0005539564 nova_compute[226295]: 2025-11-29 07:49:15.855 226310 DEBUG oslo_concurrency.processutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/55a96092-65f2-4612-a809-0f145c804f96/disk.config 55a96092-65f2-4612-a809-0f145c804f96_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:49:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:49:17 np0005539564 nova_compute[226295]: 2025-11-29 07:49:17.412 226310 DEBUG oslo_concurrency.processutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/55a96092-65f2-4612-a809-0f145c804f96/disk.config 55a96092-65f2-4612-a809-0f145c804f96_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:17 np0005539564 nova_compute[226295]: 2025-11-29 07:49:17.414 226310 INFO nova.virt.libvirt.driver [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Deleting local config drive /var/lib/nova/instances/55a96092-65f2-4612-a809-0f145c804f96/disk.config because it was imported into RBD.#033[00m
Nov 29 02:49:17 np0005539564 kernel: tap0b323b38-c9: entered promiscuous mode
Nov 29 02:49:17 np0005539564 NetworkManager[48997]: <info>  [1764402557.4898] manager: (tap0b323b38-c9): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Nov 29 02:49:17 np0005539564 ovn_controller[130591]: 2025-11-29T07:49:17Z|00079|binding|INFO|Claiming lport 0b323b38-c9ec-4cca-a4db-a839bbb3d14b for this chassis.
Nov 29 02:49:17 np0005539564 ovn_controller[130591]: 2025-11-29T07:49:17Z|00080|binding|INFO|0b323b38-c9ec-4cca-a4db-a839bbb3d14b: Claiming fa:16:3e:81:67:81 10.100.0.9
Nov 29 02:49:17 np0005539564 nova_compute[226295]: 2025-11-29 07:49:17.492 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:17 np0005539564 nova_compute[226295]: 2025-11-29 07:49:17.500 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:17 np0005539564 nova_compute[226295]: 2025-11-29 07:49:17.505 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.523 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:67:81 10.100.0.9'], port_security=['fa:16:3e:81:67:81 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '55a96092-65f2-4612-a809-0f145c804f96', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b2c58ae2e706424fa3147694fc571db0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ad0084cd-f9d9-4dc4-8cfd-f48e086021ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61f2e0e3-be06-454a-8b4e-1d0721b87b15, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=0b323b38-c9ec-4cca-a4db-a839bbb3d14b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.524 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 0b323b38-c9ec-4cca-a4db-a839bbb3d14b in datapath 5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d bound to our chassis#033[00m
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.526 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d#033[00m
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.541 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0e3575bb-5721-4693-baf9-cd1224d8fba3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.542 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5ccff1f0-61 in ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:49:17 np0005539564 systemd-udevd[237396]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:49:17 np0005539564 systemd-machined[190128]: New machine qemu-11-instance-00000019.
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.545 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5ccff1f0-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.545 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a72ff838-a9aa-4074-893e-61885b9c10b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.547 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[daeee935-878d-4cd4-a6d0-1b7f5cf75b3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:49:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:17.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:17 np0005539564 systemd[1]: Started Virtual Machine qemu-11-instance-00000019.
Nov 29 02:49:17 np0005539564 NetworkManager[48997]: <info>  [1764402557.5687] device (tap0b323b38-c9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:49:17 np0005539564 NetworkManager[48997]: <info>  [1764402557.5713] device (tap0b323b38-c9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.571 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[be77a2c5-456d-4424-a1ef-750dfd25ffb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:49:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:17.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.600 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[dfc24563-9a38-4090-9358-f6395b2c1494]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:49:17 np0005539564 nova_compute[226295]: 2025-11-29 07:49:17.613 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:17 np0005539564 ovn_controller[130591]: 2025-11-29T07:49:17Z|00081|binding|INFO|Setting lport 0b323b38-c9ec-4cca-a4db-a839bbb3d14b ovn-installed in OVS
Nov 29 02:49:17 np0005539564 ovn_controller[130591]: 2025-11-29T07:49:17Z|00082|binding|INFO|Setting lport 0b323b38-c9ec-4cca-a4db-a839bbb3d14b up in Southbound
Nov 29 02:49:17 np0005539564 nova_compute[226295]: 2025-11-29 07:49:17.619 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.635 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[4332c0f8-368e-4002-ad9b-ae4b77d0d556]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:49:17 np0005539564 NetworkManager[48997]: <info>  [1764402557.6440] manager: (tap5ccff1f0-60): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.642 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa9e7d4-a7fb-4571-9cfc-5fb2a5264833]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.685 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[6fb72c66-7833-45b5-a9c3-d70c45338df1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.688 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[a906b309-7691-42e5-9148-9fda8ba121a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:49:17 np0005539564 NetworkManager[48997]: <info>  [1764402557.7091] device (tap5ccff1f0-60): carrier: link connected
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.713 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[06d00e92-b2bc-48b2-b58a-f6d9f806a316]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.734 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8beb37c2-5a57-4450-b0b0-b2e8dee35f8d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5ccff1f0-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:ab:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 553239, 'reachable_time': 21169, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237430, 'error': None, 'target': 'ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.752 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[284bb6f4-d7ea-48a2-8bb7-6b588d1e9a1c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe09:ab1b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 553239, 'tstamp': 553239}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237431, 'error': None, 'target': 'ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.772 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[71c75b11-9f09-4bfa-adb2-5d4cd547f554]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5ccff1f0-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:ab:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 553239, 'reachable_time': 21169, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 237432, 'error': None, 'target': 'ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.806 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8acc41e1-8b7f-4d4d-bbce-05c2761f0d7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.883 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[404270e0-296d-4c62-ba45-a68ed36f1277]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.884 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ccff1f0-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.885 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.885 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5ccff1f0-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:49:17 np0005539564 kernel: tap5ccff1f0-60: entered promiscuous mode
Nov 29 02:49:17 np0005539564 NetworkManager[48997]: <info>  [1764402557.8891] manager: (tap5ccff1f0-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.890 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5ccff1f0-60, col_values=(('external_ids', {'iface-id': '09a417c6-99cf-4665-bfe6-2a3bd0914a3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:49:17 np0005539564 ovn_controller[130591]: 2025-11-29T07:49:17Z|00083|binding|INFO|Releasing lport 09a417c6-99cf-4665-bfe6-2a3bd0914a3c from this chassis (sb_readonly=0)
Nov 29 02:49:17 np0005539564 nova_compute[226295]: 2025-11-29 07:49:17.886 226310 DEBUG nova.compute.manager [req-f8c9b53c-a8a8-49e7-a4fe-c63a65d2c45c req-44f0b9a6-5938-427b-a5df-6213ac4ad912 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Received event network-vif-plugged-0b323b38-c9ec-4cca-a4db-a839bbb3d14b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:49:17 np0005539564 nova_compute[226295]: 2025-11-29 07:49:17.908 226310 DEBUG oslo_concurrency.lockutils [req-f8c9b53c-a8a8-49e7-a4fe-c63a65d2c45c req-44f0b9a6-5938-427b-a5df-6213ac4ad912 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "55a96092-65f2-4612-a809-0f145c804f96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:17 np0005539564 nova_compute[226295]: 2025-11-29 07:49:17.909 226310 DEBUG oslo_concurrency.lockutils [req-f8c9b53c-a8a8-49e7-a4fe-c63a65d2c45c req-44f0b9a6-5938-427b-a5df-6213ac4ad912 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "55a96092-65f2-4612-a809-0f145c804f96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:17 np0005539564 nova_compute[226295]: 2025-11-29 07:49:17.909 226310 DEBUG oslo_concurrency.lockutils [req-f8c9b53c-a8a8-49e7-a4fe-c63a65d2c45c req-44f0b9a6-5938-427b-a5df-6213ac4ad912 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "55a96092-65f2-4612-a809-0f145c804f96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:17 np0005539564 nova_compute[226295]: 2025-11-29 07:49:17.909 226310 DEBUG nova.compute.manager [req-f8c9b53c-a8a8-49e7-a4fe-c63a65d2c45c req-44f0b9a6-5938-427b-a5df-6213ac4ad912 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Processing event network-vif-plugged-0b323b38-c9ec-4cca-a4db-a839bbb3d14b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:49:17 np0005539564 nova_compute[226295]: 2025-11-29 07:49:17.910 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:17 np0005539564 nova_compute[226295]: 2025-11-29 07:49:17.914 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.915 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.916 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2dc2a75c-9113-423a-b852-eafc0ca2949f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.916 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d.pid.haproxy
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:49:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:49:17.918 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d', 'env', 'PROCESS_TAG=haproxy-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:49:18 np0005539564 podman[237482]: 2025-11-29 07:49:18.305363422 +0000 UTC m=+0.035014228 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:49:18 np0005539564 nova_compute[226295]: 2025-11-29 07:49:18.443 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.240 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402559.2392914, 55a96092-65f2-4612-a809-0f145c804f96 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.241 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 55a96092-65f2-4612-a809-0f145c804f96] VM Started (Lifecycle Event)#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.244 226310 DEBUG nova.compute.manager [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.249 226310 DEBUG nova.virt.libvirt.driver [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.255 226310 INFO nova.virt.libvirt.driver [-] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Instance spawned successfully.#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.255 226310 DEBUG nova.virt.libvirt.driver [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.267 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.277 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.283 226310 DEBUG nova.virt.libvirt.driver [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.284 226310 DEBUG nova.virt.libvirt.driver [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.285 226310 DEBUG nova.virt.libvirt.driver [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.285 226310 DEBUG nova.virt.libvirt.driver [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.286 226310 DEBUG nova.virt.libvirt.driver [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.287 226310 DEBUG nova.virt.libvirt.driver [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.305 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 55a96092-65f2-4612-a809-0f145c804f96] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.305 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402559.2396326, 55a96092-65f2-4612-a809-0f145c804f96 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.305 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 55a96092-65f2-4612-a809-0f145c804f96] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.327 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.331 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402559.2487442, 55a96092-65f2-4612-a809-0f145c804f96 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.331 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 55a96092-65f2-4612-a809-0f145c804f96] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.361 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.364 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.374 226310 INFO nova.compute.manager [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Took 16.67 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.375 226310 DEBUG nova.compute.manager [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.405 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 55a96092-65f2-4612-a809-0f145c804f96] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.467 226310 INFO nova.compute.manager [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Took 17.74 seconds to build instance.#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.483 226310 DEBUG oslo_concurrency.lockutils [None req-ac5e4032-41b8-4eb1-892a-9c01764d8d55 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "55a96092-65f2-4612-a809-0f145c804f96" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:19.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:19.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.997 226310 DEBUG nova.compute.manager [req-5ca5b6ba-867f-4742-bcb9-3a20327fc72e req-3fb188bb-8831-4b18-80b3-64558a96aa22 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Received event network-vif-plugged-0b323b38-c9ec-4cca-a4db-a839bbb3d14b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.998 226310 DEBUG oslo_concurrency.lockutils [req-5ca5b6ba-867f-4742-bcb9-3a20327fc72e req-3fb188bb-8831-4b18-80b3-64558a96aa22 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "55a96092-65f2-4612-a809-0f145c804f96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.998 226310 DEBUG oslo_concurrency.lockutils [req-5ca5b6ba-867f-4742-bcb9-3a20327fc72e req-3fb188bb-8831-4b18-80b3-64558a96aa22 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "55a96092-65f2-4612-a809-0f145c804f96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.998 226310 DEBUG oslo_concurrency.lockutils [req-5ca5b6ba-867f-4742-bcb9-3a20327fc72e req-3fb188bb-8831-4b18-80b3-64558a96aa22 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "55a96092-65f2-4612-a809-0f145c804f96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:19 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.999 226310 DEBUG nova.compute.manager [req-5ca5b6ba-867f-4742-bcb9-3a20327fc72e req-3fb188bb-8831-4b18-80b3-64558a96aa22 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] No waiting events found dispatching network-vif-plugged-0b323b38-c9ec-4cca-a4db-a839bbb3d14b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:49:20 np0005539564 nova_compute[226295]: 2025-11-29 07:49:19.999 226310 WARNING nova.compute.manager [req-5ca5b6ba-867f-4742-bcb9-3a20327fc72e req-3fb188bb-8831-4b18-80b3-64558a96aa22 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Received unexpected event network-vif-plugged-0b323b38-c9ec-4cca-a4db-a839bbb3d14b for instance with vm_state active and task_state None.#033[00m
Nov 29 02:49:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:49:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:49:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:49:20 np0005539564 nova_compute[226295]: 2025-11-29 07:49:20.169 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:20 np0005539564 podman[237520]: 2025-11-29 07:49:20.568353817 +0000 UTC m=+0.112135037 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 02:49:20 np0005539564 podman[237521]: 2025-11-29 07:49:20.576343134 +0000 UTC m=+0.112009554 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 02:49:20 np0005539564 podman[237519]: 2025-11-29 07:49:20.662190057 +0000 UTC m=+0.208937787 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:49:21 np0005539564 nova_compute[226295]: 2025-11-29 07:49:21.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:21.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:21.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:21 np0005539564 podman[237482]: 2025-11-29 07:49:21.622176417 +0000 UTC m=+3.351827243 container create 2ea30c7c3031237a14946be22252a24cc761243a2d15a55cd09622af9bfcd90f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 02:49:22 np0005539564 nova_compute[226295]: 2025-11-29 07:49:22.002 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:22 np0005539564 NetworkManager[48997]: <info>  [1764402562.0110] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Nov 29 02:49:22 np0005539564 NetworkManager[48997]: <info>  [1764402562.0120] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Nov 29 02:49:22 np0005539564 nova_compute[226295]: 2025-11-29 07:49:22.212 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:22 np0005539564 ovn_controller[130591]: 2025-11-29T07:49:22Z|00084|binding|INFO|Releasing lport 09a417c6-99cf-4665-bfe6-2a3bd0914a3c from this chassis (sb_readonly=0)
Nov 29 02:49:22 np0005539564 nova_compute[226295]: 2025-11-29 07:49:22.236 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:22 np0005539564 nova_compute[226295]: 2025-11-29 07:49:22.469 226310 DEBUG nova.compute.manager [req-5fe94589-d924-400f-b496-59915b75723b req-0b20e807-f7c1-4e12-a36b-6b072d16054f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Received event network-changed-0b323b38-c9ec-4cca-a4db-a839bbb3d14b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:49:22 np0005539564 nova_compute[226295]: 2025-11-29 07:49:22.470 226310 DEBUG nova.compute.manager [req-5fe94589-d924-400f-b496-59915b75723b req-0b20e807-f7c1-4e12-a36b-6b072d16054f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Refreshing instance network info cache due to event network-changed-0b323b38-c9ec-4cca-a4db-a839bbb3d14b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:49:22 np0005539564 nova_compute[226295]: 2025-11-29 07:49:22.470 226310 DEBUG oslo_concurrency.lockutils [req-5fe94589-d924-400f-b496-59915b75723b req-0b20e807-f7c1-4e12-a36b-6b072d16054f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-55a96092-65f2-4612-a809-0f145c804f96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:49:22 np0005539564 nova_compute[226295]: 2025-11-29 07:49:22.470 226310 DEBUG oslo_concurrency.lockutils [req-5fe94589-d924-400f-b496-59915b75723b req-0b20e807-f7c1-4e12-a36b-6b072d16054f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-55a96092-65f2-4612-a809-0f145c804f96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:49:22 np0005539564 nova_compute[226295]: 2025-11-29 07:49:22.471 226310 DEBUG nova.network.neutron [req-5fe94589-d924-400f-b496-59915b75723b req-0b20e807-f7c1-4e12-a36b-6b072d16054f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Refreshing network info cache for port 0b323b38-c9ec-4cca-a4db-a839bbb3d14b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:49:23 np0005539564 systemd[1]: Started libpod-conmon-2ea30c7c3031237a14946be22252a24cc761243a2d15a55cd09622af9bfcd90f.scope.
Nov 29 02:49:23 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:49:23 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d587100524d928cc6cd5b1e7086851fac0b8aefc827abe58966def7bbb3c098/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:49:23 np0005539564 nova_compute[226295]: 2025-11-29 07:49:23.445 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:23.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:23 np0005539564 nova_compute[226295]: 2025-11-29 07:49:23.583 226310 DEBUG nova.network.neutron [req-5fe94589-d924-400f-b496-59915b75723b req-0b20e807-f7c1-4e12-a36b-6b072d16054f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Updated VIF entry in instance network info cache for port 0b323b38-c9ec-4cca-a4db-a839bbb3d14b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:49:23 np0005539564 nova_compute[226295]: 2025-11-29 07:49:23.583 226310 DEBUG nova.network.neutron [req-5fe94589-d924-400f-b496-59915b75723b req-0b20e807-f7c1-4e12-a36b-6b072d16054f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Updating instance_info_cache with network_info: [{"id": "0b323b38-c9ec-4cca-a4db-a839bbb3d14b", "address": "fa:16:3e:81:67:81", "network": {"id": "5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1728480926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2c58ae2e706424fa3147694fc571db0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b323b38-c9", "ovs_interfaceid": "0b323b38-c9ec-4cca-a4db-a839bbb3d14b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:49:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:23.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:23 np0005539564 nova_compute[226295]: 2025-11-29 07:49:23.610 226310 DEBUG oslo_concurrency.lockutils [req-5fe94589-d924-400f-b496-59915b75723b req-0b20e807-f7c1-4e12-a36b-6b072d16054f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-55a96092-65f2-4612-a809-0f145c804f96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:49:23 np0005539564 podman[237482]: 2025-11-29 07:49:23.62540455 +0000 UTC m=+5.355055336 container init 2ea30c7c3031237a14946be22252a24cc761243a2d15a55cd09622af9bfcd90f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:49:23 np0005539564 podman[237482]: 2025-11-29 07:49:23.631753502 +0000 UTC m=+5.361404288 container start 2ea30c7c3031237a14946be22252a24cc761243a2d15a55cd09622af9bfcd90f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:49:23 np0005539564 neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d[237583]: [NOTICE]   (237587) : New worker (237589) forked
Nov 29 02:49:23 np0005539564 neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d[237583]: [NOTICE]   (237587) : Loading success.
Nov 29 02:49:24 np0005539564 nova_compute[226295]: 2025-11-29 07:49:24.335 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:24 np0005539564 nova_compute[226295]: 2025-11-29 07:49:24.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:24 np0005539564 nova_compute[226295]: 2025-11-29 07:49:24.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:49:24 np0005539564 nova_compute[226295]: 2025-11-29 07:49:24.390 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9907#033[00m
Nov 29 02:49:24 np0005539564 nova_compute[226295]: 2025-11-29 07:49:24.390 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:49:24 np0005539564 nova_compute[226295]: 2025-11-29 07:49:24.391 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:24 np0005539564 nova_compute[226295]: 2025-11-29 07:49:24.392 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:24 np0005539564 nova_compute[226295]: 2025-11-29 07:49:24.393 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:49:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:25 np0005539564 nova_compute[226295]: 2025-11-29 07:49:25.237 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:25.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:25.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:26 np0005539564 nova_compute[226295]: 2025-11-29 07:49:26.105 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402551.1028488, f1710cd2-4467-4d84-9adc-5a022d404b26 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:49:26 np0005539564 nova_compute[226295]: 2025-11-29 07:49:26.107 226310 INFO nova.compute.manager [-] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:49:26 np0005539564 nova_compute[226295]: 2025-11-29 07:49:26.297 226310 DEBUG nova.compute.manager [None req-260ac19e-f53d-46db-98fe-a130cd8b6e12 - - - - - -] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:49:26 np0005539564 nova_compute[226295]: 2025-11-29 07:49:26.303 226310 DEBUG nova.compute.manager [None req-260ac19e-f53d-46db-98fe-a130cd8b6e12 - - - - - -] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:49:26 np0005539564 nova_compute[226295]: 2025-11-29 07:49:26.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:26 np0005539564 nova_compute[226295]: 2025-11-29 07:49:26.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:26 np0005539564 nova_compute[226295]: 2025-11-29 07:49:26.383 226310 INFO nova.compute.manager [None req-260ac19e-f53d-46db-98fe-a130cd8b6e12 - - - - - -] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Nov 29 02:49:26 np0005539564 nova_compute[226295]: 2025-11-29 07:49:26.396 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:26 np0005539564 nova_compute[226295]: 2025-11-29 07:49:26.397 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:26 np0005539564 nova_compute[226295]: 2025-11-29 07:49:26.398 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:26 np0005539564 nova_compute[226295]: 2025-11-29 07:49:26.398 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:49:26 np0005539564 nova_compute[226295]: 2025-11-29 07:49:26.399 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:49:26 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3239411733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:49:26 np0005539564 nova_compute[226295]: 2025-11-29 07:49:26.907 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:26 np0005539564 nova_compute[226295]: 2025-11-29 07:49:26.986 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:49:26 np0005539564 nova_compute[226295]: 2025-11-29 07:49:26.987 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:49:26 np0005539564 nova_compute[226295]: 2025-11-29 07:49:26.991 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:49:26 np0005539564 nova_compute[226295]: 2025-11-29 07:49:26.991 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:49:27 np0005539564 nova_compute[226295]: 2025-11-29 07:49:27.003 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:49:27 np0005539564 nova_compute[226295]: 2025-11-29 07:49:27.003 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:49:27 np0005539564 nova_compute[226295]: 2025-11-29 07:49:27.190 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:49:27 np0005539564 nova_compute[226295]: 2025-11-29 07:49:27.192 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4522MB free_disk=20.85523223876953GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:49:27 np0005539564 nova_compute[226295]: 2025-11-29 07:49:27.192 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:27 np0005539564 nova_compute[226295]: 2025-11-29 07:49:27.193 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:27 np0005539564 nova_compute[226295]: 2025-11-29 07:49:27.264 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance dc42f6b3-eda5-409e-aac8-68275e50922e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:49:27 np0005539564 nova_compute[226295]: 2025-11-29 07:49:27.264 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance f1710cd2-4467-4d84-9adc-5a022d404b26 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:49:27 np0005539564 nova_compute[226295]: 2025-11-29 07:49:27.265 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 55a96092-65f2-4612-a809-0f145c804f96 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:49:27 np0005539564 nova_compute[226295]: 2025-11-29 07:49:27.265 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:49:27 np0005539564 nova_compute[226295]: 2025-11-29 07:49:27.266 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:49:27 np0005539564 nova_compute[226295]: 2025-11-29 07:49:27.354 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:27.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:27.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:49:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2466301257' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:49:27 np0005539564 nova_compute[226295]: 2025-11-29 07:49:27.806 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:27 np0005539564 nova_compute[226295]: 2025-11-29 07:49:27.812 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:49:27 np0005539564 nova_compute[226295]: 2025-11-29 07:49:27.828 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:49:27 np0005539564 nova_compute[226295]: 2025-11-29 07:49:27.858 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:49:27 np0005539564 nova_compute[226295]: 2025-11-29 07:49:27.859 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:28 np0005539564 nova_compute[226295]: 2025-11-29 07:49:28.447 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:29.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:29.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:29 np0005539564 nova_compute[226295]: 2025-11-29 07:49:29.860 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:30 np0005539564 nova_compute[226295]: 2025-11-29 07:49:30.000 226310 INFO nova.virt.libvirt.driver [None req-09480306-73a3-444a-a98e-9995717f3b08 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Deleting instance files /var/lib/nova/instances/f1710cd2-4467-4d84-9adc-5a022d404b26_del#033[00m
Nov 29 02:49:30 np0005539564 nova_compute[226295]: 2025-11-29 07:49:30.001 226310 INFO nova.virt.libvirt.driver [None req-09480306-73a3-444a-a98e-9995717f3b08 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Deletion of /var/lib/nova/instances/f1710cd2-4467-4d84-9adc-5a022d404b26_del complete#033[00m
Nov 29 02:49:30 np0005539564 nova_compute[226295]: 2025-11-29 07:49:30.063 226310 INFO nova.compute.manager [None req-09480306-73a3-444a-a98e-9995717f3b08 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Took 22.99 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:49:30 np0005539564 nova_compute[226295]: 2025-11-29 07:49:30.065 226310 DEBUG oslo.service.loopingcall [None req-09480306-73a3-444a-a98e-9995717f3b08 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:49:30 np0005539564 nova_compute[226295]: 2025-11-29 07:49:30.066 226310 DEBUG nova.compute.manager [-] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:49:30 np0005539564 nova_compute[226295]: 2025-11-29 07:49:30.066 226310 DEBUG nova.network.neutron [-] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:49:30 np0005539564 nova_compute[226295]: 2025-11-29 07:49:30.235 226310 DEBUG nova.network.neutron [-] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:49:30 np0005539564 nova_compute[226295]: 2025-11-29 07:49:30.240 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:30 np0005539564 nova_compute[226295]: 2025-11-29 07:49:30.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:30 np0005539564 nova_compute[226295]: 2025-11-29 07:49:30.421 226310 DEBUG nova.network.neutron [-] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:49:30 np0005539564 nova_compute[226295]: 2025-11-29 07:49:30.860 226310 INFO nova.compute.manager [-] [instance: f1710cd2-4467-4d84-9adc-5a022d404b26] Took 0.79 seconds to deallocate network for instance.#033[00m
Nov 29 02:49:30 np0005539564 nova_compute[226295]: 2025-11-29 07:49:30.908 226310 DEBUG oslo_concurrency.lockutils [None req-09480306-73a3-444a-a98e-9995717f3b08 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:30 np0005539564 nova_compute[226295]: 2025-11-29 07:49:30.909 226310 DEBUG oslo_concurrency.lockutils [None req-09480306-73a3-444a-a98e-9995717f3b08 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:30 np0005539564 nova_compute[226295]: 2025-11-29 07:49:30.997 226310 DEBUG oslo_concurrency.processutils [None req-09480306-73a3-444a-a98e-9995717f3b08 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:49:31 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1578617882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:49:31 np0005539564 nova_compute[226295]: 2025-11-29 07:49:31.468 226310 DEBUG oslo_concurrency.processutils [None req-09480306-73a3-444a-a98e-9995717f3b08 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:31 np0005539564 nova_compute[226295]: 2025-11-29 07:49:31.474 226310 DEBUG nova.compute.provider_tree [None req-09480306-73a3-444a-a98e-9995717f3b08 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:49:31 np0005539564 nova_compute[226295]: 2025-11-29 07:49:31.494 226310 DEBUG nova.scheduler.client.report [None req-09480306-73a3-444a-a98e-9995717f3b08 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:49:31 np0005539564 nova_compute[226295]: 2025-11-29 07:49:31.530 226310 DEBUG oslo_concurrency.lockutils [None req-09480306-73a3-444a-a98e-9995717f3b08 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:31.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:31 np0005539564 nova_compute[226295]: 2025-11-29 07:49:31.580 226310 INFO nova.scheduler.client.report [None req-09480306-73a3-444a-a98e-9995717f3b08 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Deleted allocations for instance f1710cd2-4467-4d84-9adc-5a022d404b26#033[00m
Nov 29 02:49:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:31.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:31 np0005539564 nova_compute[226295]: 2025-11-29 07:49:31.668 226310 DEBUG oslo_concurrency.lockutils [None req-09480306-73a3-444a-a98e-9995717f3b08 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "f1710cd2-4467-4d84-9adc-5a022d404b26" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 25.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:33 np0005539564 nova_compute[226295]: 2025-11-29 07:49:33.452 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:33.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:33.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:35 np0005539564 nova_compute[226295]: 2025-11-29 07:49:35.242 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:35.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:35.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:36 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:49:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:37.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:49:37Z|00085|binding|INFO|Releasing lport 09a417c6-99cf-4665-bfe6-2a3bd0914a3c from this chassis (sb_readonly=0)
Nov 29 02:49:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:37.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:37 np0005539564 nova_compute[226295]: 2025-11-29 07:49:37.715 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:38 np0005539564 nova_compute[226295]: 2025-11-29 07:49:38.456 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for submit_transact, latency = 5.005629539s
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for throttle_transact, latency = 5.004908085s
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_flush, latency = 5.102675438s
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 5.197101116s
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.198131084s, txc = 0x55ba4eb00f00
Nov 29 02:49:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.344013214s, txc = 0x55ba4ebd6600
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.343040943s, txc = 0x55ba4f7f3500
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.342199802s, txc = 0x55ba4eb2d800
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.341415405s, txc = 0x55ba4f7f3200
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.341235161s, txc = 0x55ba4e5cd800
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.340667248s, txc = 0x55ba4e95e000
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.339525700s, txc = 0x55ba4e876300
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.339229584s, txc = 0x55ba4f7f2f00
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.337281227s, txc = 0x55ba4eb01500
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.335989952s, txc = 0x55ba4eb54300
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.335733414s, txc = 0x55ba4e8af800
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.335485935s, txc = 0x55ba4e52d500
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.335351467s, txc = 0x55ba4e185200
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.335622311s, txc = 0x55ba4e184f00
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.335349083s, txc = 0x55ba4f7f2c00
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.335145950s, txc = 0x55ba4e876000
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.334960938s, txc = 0x55ba4e876f00
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.288867950s, txc = 0x55ba4f374000
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.262836933s, txc = 0x55ba4ea91800
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.262431145s, txc = 0x55ba4e815b00
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.261271000s, txc = 0x55ba4f7f2900
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.260678291s, txc = 0x55ba4f766900
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.251500607s, txc = 0x55ba4e184900
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.249127865s, txc = 0x55ba4e877200
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.248780727s, txc = 0x55ba4e815800
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.203041077s, txc = 0x55ba4e815500
Nov 29 02:49:38 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.247498989s, txc = 0x55ba4e52db00
Nov 29 02:49:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:39.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:39.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:40 np0005539564 nova_compute[226295]: 2025-11-29 07:49:40.243 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:41.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:41.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:41 np0005539564 ovn_controller[130591]: 2025-11-29T07:49:41Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:81:67:81 10.100.0.9
Nov 29 02:49:41 np0005539564 ovn_controller[130591]: 2025-11-29T07:49:41Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:81:67:81 10.100.0.9
Nov 29 02:49:43 np0005539564 nova_compute[226295]: 2025-11-29 07:49:43.458 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:43.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:43.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:43 np0005539564 ovn_controller[130591]: 2025-11-29T07:49:43Z|00086|binding|INFO|Releasing lport 09a417c6-99cf-4665-bfe6-2a3bd0914a3c from this chassis (sb_readonly=0)
Nov 29 02:49:43 np0005539564 nova_compute[226295]: 2025-11-29 07:49:43.972 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:44 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:49:44 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:49:45 np0005539564 nova_compute[226295]: 2025-11-29 07:49:45.245 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:45.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:45.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:49:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:47.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:49:47 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:47.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:49:48 np0005539564 nova_compute[226295]: 2025-11-29 07:49:48.460 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e153 e153: 3 total, 3 up, 3 in
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:49:49.101097) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402589101153, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2293, "num_deletes": 253, "total_data_size": 5373782, "memory_usage": 5450784, "flush_reason": "Manual Compaction"}
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402589166039, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 3511905, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23863, "largest_seqno": 26151, "table_properties": {"data_size": 3502473, "index_size": 5862, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20700, "raw_average_key_size": 20, "raw_value_size": 3483220, "raw_average_value_size": 3507, "num_data_blocks": 259, "num_entries": 993, "num_filter_entries": 993, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402379, "oldest_key_time": 1764402379, "file_creation_time": 1764402589, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 65053 microseconds, and 9658 cpu microseconds.
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:49:49.166156) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 3511905 bytes OK
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:49:49.166201) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:49:49.167862) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:49:49.167875) EVENT_LOG_v1 {"time_micros": 1764402589167871, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:49:49.167894) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 5363442, prev total WAL file size 5363442, number of live WAL files 2.
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:49:49.169731) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(3429KB)], [48(8822KB)]
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402589169805, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 12545819, "oldest_snapshot_seqno": -1}
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 5470 keys, 10467542 bytes, temperature: kUnknown
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402589333718, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 10467542, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10429460, "index_size": 23316, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 139423, "raw_average_key_size": 25, "raw_value_size": 10329255, "raw_average_value_size": 1888, "num_data_blocks": 953, "num_entries": 5470, "num_filter_entries": 5470, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764402589, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:49:49.333993) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 10467542 bytes
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:49:49.335844) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 76.5 rd, 63.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 8.6 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(6.6) write-amplify(3.0) OK, records in: 5994, records dropped: 524 output_compression: NoCompression
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:49:49.335863) EVENT_LOG_v1 {"time_micros": 1764402589335852, "job": 28, "event": "compaction_finished", "compaction_time_micros": 163975, "compaction_time_cpu_micros": 42637, "output_level": 6, "num_output_files": 1, "total_output_size": 10467542, "num_input_records": 5994, "num_output_records": 5470, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402589336603, "job": 28, "event": "table_file_deletion", "file_number": 50}
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402589338375, "job": 28, "event": "table_file_deletion", "file_number": 48}
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:49:49.169636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:49:49.338467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:49:49.338475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:49:49.338479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:49:49.338483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:49:49 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:49:49.338487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:49:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:49.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:49:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:49.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:49:50 np0005539564 nova_compute[226295]: 2025-11-29 07:49:50.252 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:51 np0005539564 podman[237721]: 2025-11-29 07:49:51.537603044 +0000 UTC m=+0.086999717 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 02:49:51 np0005539564 podman[237722]: 2025-11-29 07:49:51.537658095 +0000 UTC m=+0.078322241 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 02:49:51 np0005539564 podman[237720]: 2025-11-29 07:49:51.594940615 +0000 UTC m=+0.143452674 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:49:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:49:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:51 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:51.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:49:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:51.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:49:53 np0005539564 nova_compute[226295]: 2025-11-29 07:49:53.462 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:49:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:53 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:53.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:53.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:55 np0005539564 nova_compute[226295]: 2025-11-29 07:49:55.255 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:55.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:55.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:57 np0005539564 nova_compute[226295]: 2025-11-29 07:49:57.763 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:57.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:49:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:57 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:57.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:58 np0005539564 nova_compute[226295]: 2025-11-29 07:49:58.464 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:59.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:49:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:59.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:00 np0005539564 ceph-mon[81769]: overall HEALTH_OK
Nov 29 02:50:00 np0005539564 nova_compute[226295]: 2025-11-29 07:50:00.257 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e154 e154: 3 total, 3 up, 3 in
Nov 29 02:50:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:01.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:01.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:02 np0005539564 nova_compute[226295]: 2025-11-29 07:50:02.690 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:03 np0005539564 nova_compute[226295]: 2025-11-29 07:50:03.466 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:50:03.699 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:50:03.700 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:50:03.700 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:03.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:03.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:05 np0005539564 nova_compute[226295]: 2025-11-29 07:50:05.260 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:05.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:50:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:05 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:05.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:07 np0005539564 nova_compute[226295]: 2025-11-29 07:50:07.043 226310 DEBUG oslo_concurrency.lockutils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "68def5bd-3a13-48c4-abe2-a7d5282f493b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:07 np0005539564 nova_compute[226295]: 2025-11-29 07:50:07.043 226310 DEBUG oslo_concurrency.lockutils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "68def5bd-3a13-48c4-abe2-a7d5282f493b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:07 np0005539564 nova_compute[226295]: 2025-11-29 07:50:07.061 226310 DEBUG nova.compute.manager [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:50:07 np0005539564 nova_compute[226295]: 2025-11-29 07:50:07.151 226310 DEBUG oslo_concurrency.lockutils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:07 np0005539564 nova_compute[226295]: 2025-11-29 07:50:07.151 226310 DEBUG oslo_concurrency.lockutils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:07 np0005539564 nova_compute[226295]: 2025-11-29 07:50:07.161 226310 DEBUG nova.virt.hardware [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:50:07 np0005539564 nova_compute[226295]: 2025-11-29 07:50:07.162 226310 INFO nova.compute.claims [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 02:50:07 np0005539564 nova_compute[226295]: 2025-11-29 07:50:07.343 226310 DEBUG oslo_concurrency.processutils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:50:07 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4081036947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:50:07 np0005539564 nova_compute[226295]: 2025-11-29 07:50:07.818 226310 DEBUG oslo_concurrency.processutils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:07 np0005539564 nova_compute[226295]: 2025-11-29 07:50:07.826 226310 DEBUG nova.compute.provider_tree [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:50:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:07.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:07.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:07 np0005539564 nova_compute[226295]: 2025-11-29 07:50:07.879 226310 DEBUG nova.scheduler.client.report [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:50:07 np0005539564 nova_compute[226295]: 2025-11-29 07:50:07.960 226310 DEBUG oslo_concurrency.lockutils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:07 np0005539564 nova_compute[226295]: 2025-11-29 07:50:07.961 226310 DEBUG nova.compute.manager [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:50:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:50:08.007 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:50:08 np0005539564 nova_compute[226295]: 2025-11-29 07:50:08.008 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:50:08.010 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:50:08 np0005539564 nova_compute[226295]: 2025-11-29 07:50:08.042 226310 DEBUG nova.compute.manager [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:50:08 np0005539564 nova_compute[226295]: 2025-11-29 07:50:08.043 226310 DEBUG nova.network.neutron [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:50:08 np0005539564 nova_compute[226295]: 2025-11-29 07:50:08.070 226310 INFO nova.virt.libvirt.driver [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:50:08 np0005539564 nova_compute[226295]: 2025-11-29 07:50:08.100 226310 DEBUG nova.compute.manager [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:50:08 np0005539564 nova_compute[226295]: 2025-11-29 07:50:08.216 226310 DEBUG nova.compute.manager [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:50:08 np0005539564 nova_compute[226295]: 2025-11-29 07:50:08.218 226310 DEBUG nova.virt.libvirt.driver [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:50:08 np0005539564 nova_compute[226295]: 2025-11-29 07:50:08.219 226310 INFO nova.virt.libvirt.driver [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Creating image(s)#033[00m
Nov 29 02:50:08 np0005539564 nova_compute[226295]: 2025-11-29 07:50:08.258 226310 DEBUG nova.storage.rbd_utils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image 68def5bd-3a13-48c4-abe2-a7d5282f493b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:08 np0005539564 nova_compute[226295]: 2025-11-29 07:50:08.300 226310 DEBUG nova.storage.rbd_utils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image 68def5bd-3a13-48c4-abe2-a7d5282f493b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:08 np0005539564 nova_compute[226295]: 2025-11-29 07:50:08.335 226310 DEBUG nova.storage.rbd_utils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image 68def5bd-3a13-48c4-abe2-a7d5282f493b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:08 np0005539564 nova_compute[226295]: 2025-11-29 07:50:08.340 226310 DEBUG oslo_concurrency.processutils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:08 np0005539564 nova_compute[226295]: 2025-11-29 07:50:08.404 226310 DEBUG oslo_concurrency.processutils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:08 np0005539564 nova_compute[226295]: 2025-11-29 07:50:08.405 226310 DEBUG oslo_concurrency.lockutils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:08 np0005539564 nova_compute[226295]: 2025-11-29 07:50:08.406 226310 DEBUG oslo_concurrency.lockutils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:08 np0005539564 nova_compute[226295]: 2025-11-29 07:50:08.406 226310 DEBUG oslo_concurrency.lockutils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:08 np0005539564 nova_compute[226295]: 2025-11-29 07:50:08.436 226310 DEBUG nova.storage.rbd_utils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image 68def5bd-3a13-48c4-abe2-a7d5282f493b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:08 np0005539564 nova_compute[226295]: 2025-11-29 07:50:08.440 226310 DEBUG oslo_concurrency.processutils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 68def5bd-3a13-48c4-abe2-a7d5282f493b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:08 np0005539564 nova_compute[226295]: 2025-11-29 07:50:08.468 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:08 np0005539564 nova_compute[226295]: 2025-11-29 07:50:08.782 226310 DEBUG nova.network.neutron [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 29 02:50:08 np0005539564 nova_compute[226295]: 2025-11-29 07:50:08.783 226310 DEBUG nova.compute.manager [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:50:08 np0005539564 nova_compute[226295]: 2025-11-29 07:50:08.854 226310 DEBUG oslo_concurrency.processutils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 68def5bd-3a13-48c4-abe2-a7d5282f493b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:08 np0005539564 nova_compute[226295]: 2025-11-29 07:50:08.943 226310 DEBUG nova.storage.rbd_utils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] resizing rbd image 68def5bd-3a13-48c4-abe2-a7d5282f493b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.114 226310 DEBUG nova.objects.instance [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'migration_context' on Instance uuid 68def5bd-3a13-48c4-abe2-a7d5282f493b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.149 226310 DEBUG nova.virt.libvirt.driver [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.149 226310 DEBUG nova.virt.libvirt.driver [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Ensure instance console log exists: /var/lib/nova/instances/68def5bd-3a13-48c4-abe2-a7d5282f493b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.150 226310 DEBUG oslo_concurrency.lockutils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.150 226310 DEBUG oslo_concurrency.lockutils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.150 226310 DEBUG oslo_concurrency.lockutils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.152 226310 DEBUG nova.virt.libvirt.driver [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.159 226310 WARNING nova.virt.libvirt.driver [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.165 226310 DEBUG nova.virt.libvirt.host [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.166 226310 DEBUG nova.virt.libvirt.host [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.169 226310 DEBUG nova.virt.libvirt.host [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.170 226310 DEBUG nova.virt.libvirt.host [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.171 226310 DEBUG nova.virt.libvirt.driver [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.172 226310 DEBUG nova.virt.hardware [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.172 226310 DEBUG nova.virt.hardware [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.173 226310 DEBUG nova.virt.hardware [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.173 226310 DEBUG nova.virt.hardware [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.173 226310 DEBUG nova.virt.hardware [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.173 226310 DEBUG nova.virt.hardware [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.174 226310 DEBUG nova.virt.hardware [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.174 226310 DEBUG nova.virt.hardware [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.174 226310 DEBUG nova.virt.hardware [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.174 226310 DEBUG nova.virt.hardware [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.175 226310 DEBUG nova.virt.hardware [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.179 226310 DEBUG oslo_concurrency.processutils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.262 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:50:09 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3704285363' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.647 226310 DEBUG oslo_concurrency.processutils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.689 226310 DEBUG nova.storage.rbd_utils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image 68def5bd-3a13-48c4-abe2-a7d5282f493b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:09 np0005539564 nova_compute[226295]: 2025-11-29 07:50:09.696 226310 DEBUG oslo_concurrency.processutils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:09.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:50:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:09 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:09.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:50:10 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/291087062' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:50:10 np0005539564 nova_compute[226295]: 2025-11-29 07:50:10.189 226310 DEBUG oslo_concurrency.processutils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:10 np0005539564 nova_compute[226295]: 2025-11-29 07:50:10.190 226310 DEBUG nova.objects.instance [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'pci_devices' on Instance uuid 68def5bd-3a13-48c4-abe2-a7d5282f493b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:10 np0005539564 nova_compute[226295]: 2025-11-29 07:50:10.205 226310 DEBUG nova.virt.libvirt.driver [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:50:10 np0005539564 nova_compute[226295]:  <uuid>68def5bd-3a13-48c4-abe2-a7d5282f493b</uuid>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:  <name>instance-0000001d</name>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      <nova:name>tempest-MigrationsAdminTest-server-1556132117</nova:name>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 07:50:09</nova:creationTime>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 02:50:10 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:        <nova:user uuid="e1c26cd8138e4114b4801d377b39933a">tempest-MigrationsAdminTest-845185139-project-member</nova:user>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:        <nova:project uuid="f7e8ae9fdefb4049959228954fb4250e">tempest-MigrationsAdminTest-845185139</nova:project>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      <nova:ports/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <system>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      <entry name="serial">68def5bd-3a13-48c4-abe2-a7d5282f493b</entry>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      <entry name="uuid">68def5bd-3a13-48c4-abe2-a7d5282f493b</entry>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    </system>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:  <os>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:  </clock>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/68def5bd-3a13-48c4-abe2-a7d5282f493b_disk">
Nov 29 02:50:10 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:50:10 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/68def5bd-3a13-48c4-abe2-a7d5282f493b_disk.config">
Nov 29 02:50:10 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:50:10 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/68def5bd-3a13-48c4-abe2-a7d5282f493b/console.log" append="off"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    </serial>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <video>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 02:50:10 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 02:50:10 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:50:10 np0005539564 nova_compute[226295]: </domain>
Nov 29 02:50:10 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:50:10 np0005539564 nova_compute[226295]: 2025-11-29 07:50:10.262 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:10 np0005539564 nova_compute[226295]: 2025-11-29 07:50:10.407 226310 DEBUG nova.virt.libvirt.driver [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:50:10 np0005539564 nova_compute[226295]: 2025-11-29 07:50:10.407 226310 DEBUG nova.virt.libvirt.driver [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:50:10 np0005539564 nova_compute[226295]: 2025-11-29 07:50:10.408 226310 INFO nova.virt.libvirt.driver [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Using config drive#033[00m
Nov 29 02:50:10 np0005539564 nova_compute[226295]: 2025-11-29 07:50:10.436 226310 DEBUG nova.storage.rbd_utils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image 68def5bd-3a13-48c4-abe2-a7d5282f493b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:10 np0005539564 nova_compute[226295]: 2025-11-29 07:50:10.604 226310 INFO nova.virt.libvirt.driver [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Creating config drive at /var/lib/nova/instances/68def5bd-3a13-48c4-abe2-a7d5282f493b/disk.config#033[00m
Nov 29 02:50:10 np0005539564 nova_compute[226295]: 2025-11-29 07:50:10.611 226310 DEBUG oslo_concurrency.processutils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/68def5bd-3a13-48c4-abe2-a7d5282f493b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzt7087j1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:10 np0005539564 nova_compute[226295]: 2025-11-29 07:50:10.741 226310 DEBUG oslo_concurrency.processutils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/68def5bd-3a13-48c4-abe2-a7d5282f493b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzt7087j1" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:10 np0005539564 nova_compute[226295]: 2025-11-29 07:50:10.773 226310 DEBUG nova.storage.rbd_utils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rbd image 68def5bd-3a13-48c4-abe2-a7d5282f493b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:10 np0005539564 nova_compute[226295]: 2025-11-29 07:50:10.778 226310 DEBUG oslo_concurrency.processutils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/68def5bd-3a13-48c4-abe2-a7d5282f493b/disk.config 68def5bd-3a13-48c4-abe2-a7d5282f493b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:10 np0005539564 nova_compute[226295]: 2025-11-29 07:50:10.954 226310 DEBUG oslo_concurrency.processutils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/68def5bd-3a13-48c4-abe2-a7d5282f493b/disk.config 68def5bd-3a13-48c4-abe2-a7d5282f493b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:10 np0005539564 nova_compute[226295]: 2025-11-29 07:50:10.956 226310 INFO nova.virt.libvirt.driver [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Deleting local config drive /var/lib/nova/instances/68def5bd-3a13-48c4-abe2-a7d5282f493b/disk.config because it was imported into RBD.#033[00m
Nov 29 02:50:11 np0005539564 systemd-machined[190128]: New machine qemu-12-instance-0000001d.
Nov 29 02:50:11 np0005539564 systemd[1]: Started Virtual Machine qemu-12-instance-0000001d.
Nov 29 02:50:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e155 e155: 3 total, 3 up, 3 in
Nov 29 02:50:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:50:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:11.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:11 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:11.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:11 np0005539564 nova_compute[226295]: 2025-11-29 07:50:11.909 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402611.9084685, 68def5bd-3a13-48c4-abe2-a7d5282f493b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:50:11 np0005539564 nova_compute[226295]: 2025-11-29 07:50:11.911 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:50:11 np0005539564 nova_compute[226295]: 2025-11-29 07:50:11.914 226310 DEBUG nova.compute.manager [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:50:11 np0005539564 nova_compute[226295]: 2025-11-29 07:50:11.915 226310 DEBUG nova.virt.libvirt.driver [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:50:11 np0005539564 nova_compute[226295]: 2025-11-29 07:50:11.918 226310 INFO nova.virt.libvirt.driver [-] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Instance spawned successfully.#033[00m
Nov 29 02:50:11 np0005539564 nova_compute[226295]: 2025-11-29 07:50:11.919 226310 DEBUG nova.virt.libvirt.driver [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:50:11 np0005539564 nova_compute[226295]: 2025-11-29 07:50:11.943 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:11 np0005539564 nova_compute[226295]: 2025-11-29 07:50:11.949 226310 DEBUG nova.virt.libvirt.driver [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:11 np0005539564 nova_compute[226295]: 2025-11-29 07:50:11.949 226310 DEBUG nova.virt.libvirt.driver [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:11 np0005539564 nova_compute[226295]: 2025-11-29 07:50:11.950 226310 DEBUG nova.virt.libvirt.driver [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:11 np0005539564 nova_compute[226295]: 2025-11-29 07:50:11.951 226310 DEBUG nova.virt.libvirt.driver [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:11 np0005539564 nova_compute[226295]: 2025-11-29 07:50:11.951 226310 DEBUG nova.virt.libvirt.driver [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:11 np0005539564 nova_compute[226295]: 2025-11-29 07:50:11.952 226310 DEBUG nova.virt.libvirt.driver [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:11 np0005539564 nova_compute[226295]: 2025-11-29 07:50:11.957 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:50:11 np0005539564 nova_compute[226295]: 2025-11-29 07:50:11.990 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:50:11 np0005539564 nova_compute[226295]: 2025-11-29 07:50:11.990 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402611.9086983, 68def5bd-3a13-48c4-abe2-a7d5282f493b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:50:11 np0005539564 nova_compute[226295]: 2025-11-29 07:50:11.990 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] VM Started (Lifecycle Event)#033[00m
Nov 29 02:50:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:50:12.013 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:12 np0005539564 nova_compute[226295]: 2025-11-29 07:50:12.014 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:12 np0005539564 nova_compute[226295]: 2025-11-29 07:50:12.018 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:50:12 np0005539564 nova_compute[226295]: 2025-11-29 07:50:12.048 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:50:12 np0005539564 nova_compute[226295]: 2025-11-29 07:50:12.050 226310 INFO nova.compute.manager [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Took 3.83 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:50:12 np0005539564 nova_compute[226295]: 2025-11-29 07:50:12.050 226310 DEBUG nova.compute.manager [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:12 np0005539564 nova_compute[226295]: 2025-11-29 07:50:12.127 226310 INFO nova.compute.manager [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Took 5.00 seconds to build instance.#033[00m
Nov 29 02:50:12 np0005539564 nova_compute[226295]: 2025-11-29 07:50:12.144 226310 DEBUG oslo_concurrency.lockutils [None req-5ac124e8-2b63-4f19-8ad2-70dd39ac16ba e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "68def5bd-3a13-48c4-abe2-a7d5282f493b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:12 np0005539564 nova_compute[226295]: 2025-11-29 07:50:12.333 226310 DEBUG nova.compute.manager [req-cd6feaa9-d710-4310-bdf3-205396a14309 req-97647f0e-7cad-4769-84f6-e9965daa60b1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Received event network-changed-0b323b38-c9ec-4cca-a4db-a839bbb3d14b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:12 np0005539564 nova_compute[226295]: 2025-11-29 07:50:12.334 226310 DEBUG nova.compute.manager [req-cd6feaa9-d710-4310-bdf3-205396a14309 req-97647f0e-7cad-4769-84f6-e9965daa60b1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Refreshing instance network info cache due to event network-changed-0b323b38-c9ec-4cca-a4db-a839bbb3d14b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:50:12 np0005539564 nova_compute[226295]: 2025-11-29 07:50:12.335 226310 DEBUG oslo_concurrency.lockutils [req-cd6feaa9-d710-4310-bdf3-205396a14309 req-97647f0e-7cad-4769-84f6-e9965daa60b1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-55a96092-65f2-4612-a809-0f145c804f96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:50:12 np0005539564 nova_compute[226295]: 2025-11-29 07:50:12.335 226310 DEBUG oslo_concurrency.lockutils [req-cd6feaa9-d710-4310-bdf3-205396a14309 req-97647f0e-7cad-4769-84f6-e9965daa60b1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-55a96092-65f2-4612-a809-0f145c804f96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:50:12 np0005539564 nova_compute[226295]: 2025-11-29 07:50:12.336 226310 DEBUG nova.network.neutron [req-cd6feaa9-d710-4310-bdf3-205396a14309 req-97647f0e-7cad-4769-84f6-e9965daa60b1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Refreshing network info cache for port 0b323b38-c9ec-4cca-a4db-a839bbb3d14b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:50:13 np0005539564 nova_compute[226295]: 2025-11-29 07:50:13.471 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:13.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:50:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:13 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:13.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:14 np0005539564 nova_compute[226295]: 2025-11-29 07:50:14.143 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:14 np0005539564 nova_compute[226295]: 2025-11-29 07:50:14.187 226310 DEBUG nova.network.neutron [req-cd6feaa9-d710-4310-bdf3-205396a14309 req-97647f0e-7cad-4769-84f6-e9965daa60b1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Updated VIF entry in instance network info cache for port 0b323b38-c9ec-4cca-a4db-a839bbb3d14b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:50:14 np0005539564 nova_compute[226295]: 2025-11-29 07:50:14.187 226310 DEBUG nova.network.neutron [req-cd6feaa9-d710-4310-bdf3-205396a14309 req-97647f0e-7cad-4769-84f6-e9965daa60b1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Updating instance_info_cache with network_info: [{"id": "0b323b38-c9ec-4cca-a4db-a839bbb3d14b", "address": "fa:16:3e:81:67:81", "network": {"id": "5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1728480926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2c58ae2e706424fa3147694fc571db0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b323b38-c9", "ovs_interfaceid": "0b323b38-c9ec-4cca-a4db-a839bbb3d14b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:50:14 np0005539564 nova_compute[226295]: 2025-11-29 07:50:14.203 226310 DEBUG oslo_concurrency.lockutils [req-cd6feaa9-d710-4310-bdf3-205396a14309 req-97647f0e-7cad-4769-84f6-e9965daa60b1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-55a96092-65f2-4612-a809-0f145c804f96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:50:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e156 e156: 3 total, 3 up, 3 in
Nov 29 02:50:15 np0005539564 nova_compute[226295]: 2025-11-29 07:50:15.265 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:15.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:15.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:16 np0005539564 nova_compute[226295]: 2025-11-29 07:50:16.165 226310 DEBUG oslo_concurrency.lockutils [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Acquiring lock "refresh_cache-68def5bd-3a13-48c4-abe2-a7d5282f493b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:50:16 np0005539564 nova_compute[226295]: 2025-11-29 07:50:16.165 226310 DEBUG oslo_concurrency.lockutils [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Acquired lock "refresh_cache-68def5bd-3a13-48c4-abe2-a7d5282f493b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:50:16 np0005539564 nova_compute[226295]: 2025-11-29 07:50:16.166 226310 DEBUG nova.network.neutron [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:50:16 np0005539564 nova_compute[226295]: 2025-11-29 07:50:16.284 226310 DEBUG nova.network.neutron [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:50:16 np0005539564 nova_compute[226295]: 2025-11-29 07:50:16.618 226310 DEBUG nova.network.neutron [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:50:16 np0005539564 nova_compute[226295]: 2025-11-29 07:50:16.649 226310 DEBUG oslo_concurrency.lockutils [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Releasing lock "refresh_cache-68def5bd-3a13-48c4-abe2-a7d5282f493b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:50:16 np0005539564 nova_compute[226295]: 2025-11-29 07:50:16.755 226310 DEBUG nova.virt.libvirt.driver [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Nov 29 02:50:16 np0005539564 nova_compute[226295]: 2025-11-29 07:50:16.756 226310 DEBUG nova.virt.libvirt.volume.remotefs [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Creating file /var/lib/nova/instances/68def5bd-3a13-48c4-abe2-a7d5282f493b/c24349373c7b403f8b33818a2eaa1525.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Nov 29 02:50:16 np0005539564 nova_compute[226295]: 2025-11-29 07:50:16.756 226310 DEBUG oslo_concurrency.processutils [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/68def5bd-3a13-48c4-abe2-a7d5282f493b/c24349373c7b403f8b33818a2eaa1525.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:17 np0005539564 nova_compute[226295]: 2025-11-29 07:50:17.321 226310 DEBUG oslo_concurrency.processutils [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/68def5bd-3a13-48c4-abe2-a7d5282f493b/c24349373c7b403f8b33818a2eaa1525.tmp" returned: 1 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:17 np0005539564 nova_compute[226295]: 2025-11-29 07:50:17.323 226310 DEBUG oslo_concurrency.processutils [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/68def5bd-3a13-48c4-abe2-a7d5282f493b/c24349373c7b403f8b33818a2eaa1525.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 02:50:17 np0005539564 nova_compute[226295]: 2025-11-29 07:50:17.323 226310 DEBUG nova.virt.libvirt.volume.remotefs [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Creating directory /var/lib/nova/instances/68def5bd-3a13-48c4-abe2-a7d5282f493b on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Nov 29 02:50:17 np0005539564 nova_compute[226295]: 2025-11-29 07:50:17.323 226310 DEBUG oslo_concurrency.processutils [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/68def5bd-3a13-48c4-abe2-a7d5282f493b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:17 np0005539564 nova_compute[226295]: 2025-11-29 07:50:17.603 226310 DEBUG oslo_concurrency.processutils [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/68def5bd-3a13-48c4-abe2-a7d5282f493b" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:17 np0005539564 nova_compute[226295]: 2025-11-29 07:50:17.608 226310 DEBUG nova.virt.libvirt.driver [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 02:50:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:17.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:17.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:18 np0005539564 nova_compute[226295]: 2025-11-29 07:50:18.473 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:50:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:19.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:19 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:19.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:20 np0005539564 nova_compute[226295]: 2025-11-29 07:50:20.266 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e157 e157: 3 total, 3 up, 3 in
Nov 29 02:50:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:21.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:50:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:21 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:21.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e158 e158: 3 total, 3 up, 3 in
Nov 29 02:50:22 np0005539564 nova_compute[226295]: 2025-11-29 07:50:22.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:50:22 np0005539564 podman[238160]: 2025-11-29 07:50:22.511401864 +0000 UTC m=+0.061956899 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 02:50:22 np0005539564 podman[238159]: 2025-11-29 07:50:22.536282357 +0000 UTC m=+0.076910473 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:50:22 np0005539564 podman[238158]: 2025-11-29 07:50:22.548804157 +0000 UTC m=+0.104229333 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:50:23 np0005539564 nova_compute[226295]: 2025-11-29 07:50:23.475 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:50:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:23 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:23.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 02:50:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:23.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 02:50:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:24 np0005539564 nova_compute[226295]: 2025-11-29 07:50:24.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:50:24 np0005539564 nova_compute[226295]: 2025-11-29 07:50:24.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:50:24 np0005539564 nova_compute[226295]: 2025-11-29 07:50:24.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:50:24 np0005539564 nova_compute[226295]: 2025-11-29 07:50:24.600 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-dc42f6b3-eda5-409e-aac8-68275e50922e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:50:24 np0005539564 nova_compute[226295]: 2025-11-29 07:50:24.600 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-dc42f6b3-eda5-409e-aac8-68275e50922e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:50:24 np0005539564 nova_compute[226295]: 2025-11-29 07:50:24.601 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:50:24 np0005539564 nova_compute[226295]: 2025-11-29 07:50:24.601 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid dc42f6b3-eda5-409e-aac8-68275e50922e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:24 np0005539564 nova_compute[226295]: 2025-11-29 07:50:24.829 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:50:25 np0005539564 nova_compute[226295]: 2025-11-29 07:50:25.268 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:25 np0005539564 nova_compute[226295]: 2025-11-29 07:50:25.293 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:50:25 np0005539564 nova_compute[226295]: 2025-11-29 07:50:25.314 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-dc42f6b3-eda5-409e-aac8-68275e50922e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:50:25 np0005539564 nova_compute[226295]: 2025-11-29 07:50:25.315 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:50:25 np0005539564 nova_compute[226295]: 2025-11-29 07:50:25.315 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:50:25 np0005539564 nova_compute[226295]: 2025-11-29 07:50:25.315 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:50:25 np0005539564 nova_compute[226295]: 2025-11-29 07:50:25.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:50:25 np0005539564 nova_compute[226295]: 2025-11-29 07:50:25.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:50:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:50:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:25.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:25 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:25.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:26 np0005539564 nova_compute[226295]: 2025-11-29 07:50:26.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:50:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e159 e159: 3 total, 3 up, 3 in
Nov 29 02:50:27 np0005539564 nova_compute[226295]: 2025-11-29 07:50:27.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:50:27 np0005539564 nova_compute[226295]: 2025-11-29 07:50:27.395 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:27 np0005539564 nova_compute[226295]: 2025-11-29 07:50:27.396 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:27 np0005539564 nova_compute[226295]: 2025-11-29 07:50:27.396 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:27 np0005539564 nova_compute[226295]: 2025-11-29 07:50:27.397 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:50:27 np0005539564 nova_compute[226295]: 2025-11-29 07:50:27.397 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:27 np0005539564 nova_compute[226295]: 2025-11-29 07:50:27.655 226310 DEBUG nova.virt.libvirt.driver [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 02:50:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:50:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:27.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:27 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:27.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:50:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/184874410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:50:27 np0005539564 nova_compute[226295]: 2025-11-29 07:50:27.884 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:50:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2852710428' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:50:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:50:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2852710428' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.161 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.161 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.165 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.165 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.169 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.169 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.268 226310 DEBUG oslo_concurrency.lockutils [None req-ac459ce6-5d12-4689-9238-0ce9b1cd1c18 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Acquiring lock "55a96092-65f2-4612-a809-0f145c804f96" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.268 226310 DEBUG oslo_concurrency.lockutils [None req-ac459ce6-5d12-4689-9238-0ce9b1cd1c18 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "55a96092-65f2-4612-a809-0f145c804f96" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.285 226310 DEBUG nova.objects.instance [None req-ac459ce6-5d12-4689-9238-0ce9b1cd1c18 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lazy-loading 'flavor' on Instance uuid 55a96092-65f2-4612-a809-0f145c804f96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.345 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.346 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4239MB free_disk=20.704662322998047GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.346 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.347 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.412 226310 DEBUG oslo_concurrency.lockutils [None req-ac459ce6-5d12-4689-9238-0ce9b1cd1c18 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "55a96092-65f2-4612-a809-0f145c804f96" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.434 226310 INFO nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Updating resource usage from migration 00749f9e-ca06-4ea5-9e1f-4c19e99b8a91#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.464 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance dc42f6b3-eda5-409e-aac8-68275e50922e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.464 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 55a96092-65f2-4612-a809-0f145c804f96 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.464 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Migration 00749f9e-ca06-4ea5-9e1f-4c19e99b8a91 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.465 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.465 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.477 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.531 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.659 226310 DEBUG oslo_concurrency.lockutils [None req-ac459ce6-5d12-4689-9238-0ce9b1cd1c18 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Acquiring lock "55a96092-65f2-4612-a809-0f145c804f96" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.659 226310 DEBUG oslo_concurrency.lockutils [None req-ac459ce6-5d12-4689-9238-0ce9b1cd1c18 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "55a96092-65f2-4612-a809-0f145c804f96" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.660 226310 INFO nova.compute.manager [None req-ac459ce6-5d12-4689-9238-0ce9b1cd1c18 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Attaching volume 3466633d-ae13-4d07-b35e-af08eaa91384 to /dev/vdb#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.829 226310 DEBUG os_brick.utils [None req-ac459ce6-5d12-4689-9238-0ce9b1cd1c18 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.830 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.842 231810 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.843 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[c4bb6f71-2386-4409-a521-24f9f4a0045b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.845 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.855 231810 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.855 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[9a5f3236-d042-48c2-a7ee-9c7e613424ec]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d3b4384eec44', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.857 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.870 231810 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.870 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d03615-8217-4950-a750-822620e6ee70]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.872 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[9b10575b-35dc-4af9-b3ef-0b407cebe80a]: (4, '2e858761-3292-4a17-b38f-a169c3064289') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.873 226310 DEBUG oslo_concurrency.processutils [None req-ac459ce6-5d12-4689-9238-0ce9b1cd1c18 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.904 226310 DEBUG oslo_concurrency.processutils [None req-ac459ce6-5d12-4689-9238-0ce9b1cd1c18 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.908 226310 DEBUG os_brick.initiator.connectors.lightos [None req-ac459ce6-5d12-4689-9238-0ce9b1cd1c18 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.909 226310 DEBUG os_brick.initiator.connectors.lightos [None req-ac459ce6-5d12-4689-9238-0ce9b1cd1c18 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.909 226310 DEBUG os_brick.initiator.connectors.lightos [None req-ac459ce6-5d12-4689-9238-0ce9b1cd1c18 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.910 226310 DEBUG os_brick.utils [None req-ac459ce6-5d12-4689-9238-0ce9b1cd1c18 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] <== get_connector_properties: return (80ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d3b4384eec44', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2e858761-3292-4a17-b38f-a169c3064289', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 02:50:28 np0005539564 nova_compute[226295]: 2025-11-29 07:50:28.911 226310 DEBUG nova.virt.block_device [None req-ac459ce6-5d12-4689-9238-0ce9b1cd1c18 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Updating existing volume attachment record: fe599284-9dea-4a95-aebc-ba8f7e86a4d9 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 02:50:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:50:29 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3165290517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:50:29 np0005539564 nova_compute[226295]: 2025-11-29 07:50:29.019 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:50:29 np0005539564 nova_compute[226295]: 2025-11-29 07:50:29.026 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:50:29 np0005539564 nova_compute[226295]: 2025-11-29 07:50:29.047 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:50:29 np0005539564 nova_compute[226295]: 2025-11-29 07:50:29.081 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 02:50:29 np0005539564 nova_compute[226295]: 2025-11-29 07:50:29.081 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:50:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:50:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:29.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:29 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:29.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:29 np0005539564 nova_compute[226295]: 2025-11-29 07:50:29.902 226310 DEBUG nova.objects.instance [None req-ac459ce6-5d12-4689-9238-0ce9b1cd1c18 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lazy-loading 'flavor' on Instance uuid 55a96092-65f2-4612-a809-0f145c804f96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:50:29 np0005539564 nova_compute[226295]: 2025-11-29 07:50:29.935 226310 DEBUG nova.virt.libvirt.driver [None req-ac459ce6-5d12-4689-9238-0ce9b1cd1c18 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Attempting to attach volume 3466633d-ae13-4d07-b35e-af08eaa91384 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 02:50:29 np0005539564 nova_compute[226295]: 2025-11-29 07:50:29.938 226310 DEBUG nova.virt.libvirt.guest [None req-ac459ce6-5d12-4689-9238-0ce9b1cd1c18 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 02:50:29 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 02:50:29 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-3466633d-ae13-4d07-b35e-af08eaa91384">
Nov 29 02:50:29 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 02:50:29 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 02:50:29 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 02:50:29 np0005539564 nova_compute[226295]:  </source>
Nov 29 02:50:29 np0005539564 nova_compute[226295]:  <auth username="openstack">
Nov 29 02:50:29 np0005539564 nova_compute[226295]:    <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:50:29 np0005539564 nova_compute[226295]:  </auth>
Nov 29 02:50:29 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 02:50:29 np0005539564 nova_compute[226295]:  <serial>3466633d-ae13-4d07-b35e-af08eaa91384</serial>
Nov 29 02:50:29 np0005539564 nova_compute[226295]:  <shareable/>
Nov 29 02:50:29 np0005539564 nova_compute[226295]: </disk>
Nov 29 02:50:29 np0005539564 nova_compute[226295]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 02:50:29 np0005539564 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Nov 29 02:50:29 np0005539564 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000001d.scope: Consumed 14.180s CPU time.
Nov 29 02:50:29 np0005539564 systemd-machined[190128]: Machine qemu-12-instance-0000001d terminated.
Nov 29 02:50:30 np0005539564 nova_compute[226295]: 2025-11-29 07:50:30.081 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:50:30 np0005539564 nova_compute[226295]: 2025-11-29 07:50:30.109 226310 DEBUG nova.virt.libvirt.driver [None req-ac459ce6-5d12-4689-9238-0ce9b1cd1c18 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:50:30 np0005539564 nova_compute[226295]: 2025-11-29 07:50:30.109 226310 DEBUG nova.virt.libvirt.driver [None req-ac459ce6-5d12-4689-9238-0ce9b1cd1c18 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:50:30 np0005539564 nova_compute[226295]: 2025-11-29 07:50:30.109 226310 DEBUG nova.virt.libvirt.driver [None req-ac459ce6-5d12-4689-9238-0ce9b1cd1c18 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:50:30 np0005539564 nova_compute[226295]: 2025-11-29 07:50:30.110 226310 DEBUG nova.virt.libvirt.driver [None req-ac459ce6-5d12-4689-9238-0ce9b1cd1c18 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] No VIF found with MAC fa:16:3e:81:67:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 02:50:30 np0005539564 nova_compute[226295]: 2025-11-29 07:50:30.272 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:30 np0005539564 nova_compute[226295]: 2025-11-29 07:50:30.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:50:30 np0005539564 nova_compute[226295]: 2025-11-29 07:50:30.384 226310 DEBUG oslo_concurrency.lockutils [None req-ac459ce6-5d12-4689-9238-0ce9b1cd1c18 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "55a96092-65f2-4612-a809-0f145c804f96" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:50:30 np0005539564 nova_compute[226295]: 2025-11-29 07:50:30.678 226310 INFO nova.virt.libvirt.driver [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Instance shutdown successfully after 13 seconds.
Nov 29 02:50:30 np0005539564 nova_compute[226295]: 2025-11-29 07:50:30.683 226310 INFO nova.virt.libvirt.driver [-] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Instance destroyed successfully.
Nov 29 02:50:30 np0005539564 nova_compute[226295]: 2025-11-29 07:50:30.686 226310 DEBUG nova.virt.libvirt.driver [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:50:30 np0005539564 nova_compute[226295]: 2025-11-29 07:50:30.686 226310 DEBUG nova.virt.libvirt.driver [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:50:30 np0005539564 nova_compute[226295]: 2025-11-29 07:50:30.795 226310 DEBUG oslo_concurrency.lockutils [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Acquiring lock "68def5bd-3a13-48c4-abe2-a7d5282f493b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:50:30 np0005539564 nova_compute[226295]: 2025-11-29 07:50:30.796 226310 DEBUG oslo_concurrency.lockutils [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Lock "68def5bd-3a13-48c4-abe2-a7d5282f493b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:50:30 np0005539564 nova_compute[226295]: 2025-11-29 07:50:30.796 226310 DEBUG oslo_concurrency.lockutils [None req-15cefe1d-c837-4b0a-b6ae-d3bff1ab8e0f 3a74818e754b4b7393e65e132a8bcf98 6326848a05cb4a28bef4b0c4d3b5726c - - default default] Lock "68def5bd-3a13-48c4-abe2-a7d5282f493b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:50:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:31.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:31.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e160 e160: 3 total, 3 up, 3 in
Nov 29 02:50:33 np0005539564 nova_compute[226295]: 2025-11-29 07:50:33.479 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:33.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:33.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e161 e161: 3 total, 3 up, 3 in
Nov 29 02:50:35 np0005539564 nova_compute[226295]: 2025-11-29 07:50:35.272 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:35 np0005539564 nova_compute[226295]: 2025-11-29 07:50:35.335 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:50:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:35.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:35.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:37.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:37.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:38 np0005539564 nova_compute[226295]: 2025-11-29 07:50:38.481 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:38 np0005539564 nova_compute[226295]: 2025-11-29 07:50:38.580 226310 INFO nova.compute.manager [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Swapping old allocation on dict_keys(['ea190a43-1246-44b8-8f8b-a61b155a1d3b']) held by migration 00749f9e-ca06-4ea5-9e1f-4c19e99b8a91 for instance
Nov 29 02:50:38 np0005539564 nova_compute[226295]: 2025-11-29 07:50:38.609 226310 DEBUG nova.scheduler.client.report [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Overwriting current allocation {'allocations': {'77f31ad1-818f-4610-8dd1-3fbcd25133f2': {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}, 'generation': 30}}, 'project_id': 'f7e8ae9fdefb4049959228954fb4250e', 'user_id': 'e1c26cd8138e4114b4801d377b39933a', 'consumer_generation': 1} on consumer 68def5bd-3a13-48c4-abe2-a7d5282f493b move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Nov 29 02:50:38 np0005539564 nova_compute[226295]: 2025-11-29 07:50:38.793 226310 DEBUG oslo_concurrency.lockutils [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "refresh_cache-68def5bd-3a13-48c4-abe2-a7d5282f493b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:50:38 np0005539564 nova_compute[226295]: 2025-11-29 07:50:38.794 226310 DEBUG oslo_concurrency.lockutils [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquired lock "refresh_cache-68def5bd-3a13-48c4-abe2-a7d5282f493b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:50:38 np0005539564 nova_compute[226295]: 2025-11-29 07:50:38.794 226310 DEBUG nova.network.neutron [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 02:50:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:39 np0005539564 nova_compute[226295]: 2025-11-29 07:50:39.065 226310 DEBUG nova.network.neutron [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:50:39 np0005539564 nova_compute[226295]: 2025-11-29 07:50:39.387 226310 DEBUG nova.network.neutron [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:50:39 np0005539564 nova_compute[226295]: 2025-11-29 07:50:39.402 226310 DEBUG oslo_concurrency.lockutils [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Releasing lock "refresh_cache-68def5bd-3a13-48c4-abe2-a7d5282f493b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:50:39 np0005539564 nova_compute[226295]: 2025-11-29 07:50:39.402 226310 DEBUG nova.virt.libvirt.driver [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Nov 29 02:50:39 np0005539564 nova_compute[226295]: 2025-11-29 07:50:39.507 226310 DEBUG nova.storage.rbd_utils [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] rolling back rbd image(68def5bd-3a13-48c4-abe2-a7d5282f493b_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Nov 29 02:50:39 np0005539564 nova_compute[226295]: 2025-11-29 07:50:39.625 226310 DEBUG nova.storage.rbd_utils [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] removing snapshot(nova-resize) on rbd image(68def5bd-3a13-48c4-abe2-a7d5282f493b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 02:50:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:50:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:39 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:39.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:39.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e162 e162: 3 total, 3 up, 3 in
Nov 29 02:50:39 np0005539564 nova_compute[226295]: 2025-11-29 07:50:39.997 226310 DEBUG nova.virt.libvirt.driver [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.001 226310 WARNING nova.virt.libvirt.driver [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.007 226310 DEBUG nova.virt.libvirt.host [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.008 226310 DEBUG nova.virt.libvirt.host [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.011 226310 DEBUG nova.virt.libvirt.host [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.012 226310 DEBUG nova.virt.libvirt.host [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.013 226310 DEBUG nova.virt.libvirt.driver [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.013 226310 DEBUG nova.virt.hardware [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.013 226310 DEBUG nova.virt.hardware [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.014 226310 DEBUG nova.virt.hardware [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.014 226310 DEBUG nova.virt.hardware [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.014 226310 DEBUG nova.virt.hardware [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.014 226310 DEBUG nova.virt.hardware [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.015 226310 DEBUG nova.virt.hardware [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.015 226310 DEBUG nova.virt.hardware [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.015 226310 DEBUG nova.virt.hardware [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.016 226310 DEBUG nova.virt.hardware [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.016 226310 DEBUG nova.virt.hardware [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.016 226310 DEBUG nova.objects.instance [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 68def5bd-3a13-48c4-abe2-a7d5282f493b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.033 226310 DEBUG oslo_concurrency.processutils [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.274 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.322 226310 DEBUG oslo_concurrency.lockutils [None req-07fa6906-2de1-4d17-b894-ff7a89ca0072 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Acquiring lock "55a96092-65f2-4612-a809-0f145c804f96" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.322 226310 DEBUG oslo_concurrency.lockutils [None req-07fa6906-2de1-4d17-b894-ff7a89ca0072 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "55a96092-65f2-4612-a809-0f145c804f96" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.337 226310 INFO nova.compute.manager [None req-07fa6906-2de1-4d17-b894-ff7a89ca0072 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Detaching volume 3466633d-ae13-4d07-b35e-af08eaa91384#033[00m
Nov 29 02:50:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:50:40 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/757320924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.480 226310 DEBUG oslo_concurrency.processutils [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.520 226310 DEBUG oslo_concurrency.processutils [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.546 226310 INFO nova.virt.block_device [None req-07fa6906-2de1-4d17-b894-ff7a89ca0072 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Attempting to driver detach volume 3466633d-ae13-4d07-b35e-af08eaa91384 from mountpoint /dev/vdb
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.557 226310 DEBUG nova.virt.libvirt.driver [None req-07fa6906-2de1-4d17-b894-ff7a89ca0072 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Attempting to detach device vdb from instance 55a96092-65f2-4612-a809-0f145c804f96 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.557 226310 DEBUG nova.virt.libvirt.guest [None req-07fa6906-2de1-4d17-b894-ff7a89ca0072 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-3466633d-ae13-4d07-b35e-af08eaa91384">
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  </source>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  <serial>3466633d-ae13-4d07-b35e-af08eaa91384</serial>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  <shareable/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]: </disk>
Nov 29 02:50:40 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.565 226310 INFO nova.virt.libvirt.driver [None req-07fa6906-2de1-4d17-b894-ff7a89ca0072 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Successfully detached device vdb from instance 55a96092-65f2-4612-a809-0f145c804f96 from the persistent domain config.
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.565 226310 DEBUG nova.virt.libvirt.driver [None req-07fa6906-2de1-4d17-b894-ff7a89ca0072 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 55a96092-65f2-4612-a809-0f145c804f96 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.566 226310 DEBUG nova.virt.libvirt.guest [None req-07fa6906-2de1-4d17-b894-ff7a89ca0072 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-3466633d-ae13-4d07-b35e-af08eaa91384">
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  </source>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  <serial>3466633d-ae13-4d07-b35e-af08eaa91384</serial>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  <shareable/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]: </disk>
Nov 29 02:50:40 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.618 226310 DEBUG nova.virt.libvirt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Received event <DeviceRemovedEvent: 1764402640.6180825, 55a96092-65f2-4612-a809-0f145c804f96 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.619 226310 DEBUG nova.virt.libvirt.driver [None req-07fa6906-2de1-4d17-b894-ff7a89ca0072 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 55a96092-65f2-4612-a809-0f145c804f96 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.621 226310 INFO nova.virt.libvirt.driver [None req-07fa6906-2de1-4d17-b894-ff7a89ca0072 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Successfully detached device vdb from instance 55a96092-65f2-4612-a809-0f145c804f96 from the live domain config.
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.911 226310 DEBUG nova.objects.instance [None req-07fa6906-2de1-4d17-b894-ff7a89ca0072 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lazy-loading 'flavor' on Instance uuid 55a96092-65f2-4612-a809-0f145c804f96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:50:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:50:40 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1453412454' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.947 226310 DEBUG oslo_concurrency.lockutils [None req-07fa6906-2de1-4d17-b894-ff7a89ca0072 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "55a96092-65f2-4612-a809-0f145c804f96" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.963 226310 DEBUG oslo_concurrency.processutils [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:50:40 np0005539564 nova_compute[226295]: 2025-11-29 07:50:40.966 226310 DEBUG nova.virt.libvirt.driver [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  <uuid>68def5bd-3a13-48c4-abe2-a7d5282f493b</uuid>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  <name>instance-0000001d</name>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      <nova:name>tempest-MigrationsAdminTest-server-1556132117</nova:name>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 07:50:40</nova:creationTime>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 02:50:40 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:        <nova:user uuid="e1c26cd8138e4114b4801d377b39933a">tempest-MigrationsAdminTest-845185139-project-member</nova:user>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:        <nova:project uuid="f7e8ae9fdefb4049959228954fb4250e">tempest-MigrationsAdminTest-845185139</nova:project>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      <nova:ports/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <system>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      <entry name="serial">68def5bd-3a13-48c4-abe2-a7d5282f493b</entry>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      <entry name="uuid">68def5bd-3a13-48c4-abe2-a7d5282f493b</entry>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    </system>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  <os>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  </clock>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/68def5bd-3a13-48c4-abe2-a7d5282f493b_disk">
Nov 29 02:50:40 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:50:40 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/68def5bd-3a13-48c4-abe2-a7d5282f493b_disk.config">
Nov 29 02:50:40 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:50:40 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/68def5bd-3a13-48c4-abe2-a7d5282f493b/console.log" append="off"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    </serial>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <video>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <input type="keyboard" bus="usb"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 02:50:40 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 02:50:40 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:50:40 np0005539564 nova_compute[226295]: </domain>
Nov 29 02:50:40 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 02:50:41 np0005539564 systemd-machined[190128]: New machine qemu-13-instance-0000001d.
Nov 29 02:50:41 np0005539564 systemd[1]: Started Virtual Machine qemu-13-instance-0000001d.
Nov 29 02:50:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:50:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:41.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:41 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:41.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:42 np0005539564 nova_compute[226295]: 2025-11-29 07:50:42.071 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Removed pending event for 68def5bd-3a13-48c4-abe2-a7d5282f493b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 02:50:42 np0005539564 nova_compute[226295]: 2025-11-29 07:50:42.072 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402642.0712676, 68def5bd-3a13-48c4-abe2-a7d5282f493b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:50:42 np0005539564 nova_compute[226295]: 2025-11-29 07:50:42.072 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] VM Resumed (Lifecycle Event)
Nov 29 02:50:42 np0005539564 nova_compute[226295]: 2025-11-29 07:50:42.074 226310 DEBUG nova.compute.manager [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 02:50:42 np0005539564 nova_compute[226295]: 2025-11-29 07:50:42.076 226310 INFO nova.virt.libvirt.driver [-] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Instance running successfully.
Nov 29 02:50:42 np0005539564 nova_compute[226295]: 2025-11-29 07:50:42.077 226310 DEBUG nova.virt.libvirt.driver [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Nov 29 02:50:42 np0005539564 nova_compute[226295]: 2025-11-29 07:50:42.109 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:50:42 np0005539564 nova_compute[226295]: 2025-11-29 07:50:42.116 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:50:42 np0005539564 nova_compute[226295]: 2025-11-29 07:50:42.152 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Nov 29 02:50:42 np0005539564 nova_compute[226295]: 2025-11-29 07:50:42.152 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402642.0731628, 68def5bd-3a13-48c4-abe2-a7d5282f493b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:50:42 np0005539564 nova_compute[226295]: 2025-11-29 07:50:42.153 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] VM Started (Lifecycle Event)
Nov 29 02:50:42 np0005539564 nova_compute[226295]: 2025-11-29 07:50:42.164 226310 INFO nova.compute.manager [None req-f968d56c-cf46-47cc-b0b3-693c6ab72b46 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Updating instance to original state: 'active'
Nov 29 02:50:42 np0005539564 nova_compute[226295]: 2025-11-29 07:50:42.191 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:50:42 np0005539564 nova_compute[226295]: 2025-11-29 07:50:42.195 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:50:42 np0005539564 nova_compute[226295]: 2025-11-29 07:50:42.222 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Nov 29 02:50:43 np0005539564 nova_compute[226295]: 2025-11-29 07:50:43.483 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:50:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:43.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:43 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:43.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:44 np0005539564 nova_compute[226295]: 2025-11-29 07:50:44.103 226310 DEBUG oslo_concurrency.lockutils [None req-b346d258-2c3b-46a5-b75c-7453b6c19123 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "68def5bd-3a13-48c4-abe2-a7d5282f493b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:44 np0005539564 nova_compute[226295]: 2025-11-29 07:50:44.104 226310 DEBUG oslo_concurrency.lockutils [None req-b346d258-2c3b-46a5-b75c-7453b6c19123 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "68def5bd-3a13-48c4-abe2-a7d5282f493b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:44 np0005539564 nova_compute[226295]: 2025-11-29 07:50:44.106 226310 DEBUG oslo_concurrency.lockutils [None req-b346d258-2c3b-46a5-b75c-7453b6c19123 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "68def5bd-3a13-48c4-abe2-a7d5282f493b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:44 np0005539564 nova_compute[226295]: 2025-11-29 07:50:44.106 226310 DEBUG oslo_concurrency.lockutils [None req-b346d258-2c3b-46a5-b75c-7453b6c19123 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "68def5bd-3a13-48c4-abe2-a7d5282f493b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:44 np0005539564 nova_compute[226295]: 2025-11-29 07:50:44.106 226310 DEBUG oslo_concurrency.lockutils [None req-b346d258-2c3b-46a5-b75c-7453b6c19123 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "68def5bd-3a13-48c4-abe2-a7d5282f493b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:44 np0005539564 nova_compute[226295]: 2025-11-29 07:50:44.107 226310 INFO nova.compute.manager [None req-b346d258-2c3b-46a5-b75c-7453b6c19123 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Terminating instance#033[00m
Nov 29 02:50:44 np0005539564 nova_compute[226295]: 2025-11-29 07:50:44.109 226310 DEBUG oslo_concurrency.lockutils [None req-b346d258-2c3b-46a5-b75c-7453b6c19123 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "refresh_cache-68def5bd-3a13-48c4-abe2-a7d5282f493b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:50:44 np0005539564 nova_compute[226295]: 2025-11-29 07:50:44.109 226310 DEBUG oslo_concurrency.lockutils [None req-b346d258-2c3b-46a5-b75c-7453b6c19123 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquired lock "refresh_cache-68def5bd-3a13-48c4-abe2-a7d5282f493b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:50:44 np0005539564 nova_compute[226295]: 2025-11-29 07:50:44.109 226310 DEBUG nova.network.neutron [None req-b346d258-2c3b-46a5-b75c-7453b6c19123 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:50:44 np0005539564 nova_compute[226295]: 2025-11-29 07:50:44.505 226310 DEBUG nova.network.neutron [None req-b346d258-2c3b-46a5-b75c-7453b6c19123 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:50:44 np0005539564 nova_compute[226295]: 2025-11-29 07:50:44.763 226310 DEBUG nova.network.neutron [None req-b346d258-2c3b-46a5-b75c-7453b6c19123 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:50:44 np0005539564 nova_compute[226295]: 2025-11-29 07:50:44.782 226310 DEBUG oslo_concurrency.lockutils [None req-b346d258-2c3b-46a5-b75c-7453b6c19123 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Releasing lock "refresh_cache-68def5bd-3a13-48c4-abe2-a7d5282f493b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:50:44 np0005539564 nova_compute[226295]: 2025-11-29 07:50:44.783 226310 DEBUG nova.compute.manager [None req-b346d258-2c3b-46a5-b75c-7453b6c19123 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:50:44 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:50:44 np0005539564 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Nov 29 02:50:44 np0005539564 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000001d.scope: Consumed 3.715s CPU time.
Nov 29 02:50:44 np0005539564 systemd-machined[190128]: Machine qemu-13-instance-0000001d terminated.
Nov 29 02:50:45 np0005539564 nova_compute[226295]: 2025-11-29 07:50:45.004 226310 INFO nova.virt.libvirt.driver [-] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Instance destroyed successfully.#033[00m
Nov 29 02:50:45 np0005539564 nova_compute[226295]: 2025-11-29 07:50:45.004 226310 DEBUG nova.objects.instance [None req-b346d258-2c3b-46a5-b75c-7453b6c19123 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'resources' on Instance uuid 68def5bd-3a13-48c4-abe2-a7d5282f493b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:45 np0005539564 nova_compute[226295]: 2025-11-29 07:50:45.276 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:45 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:50:45 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:50:45 np0005539564 nova_compute[226295]: 2025-11-29 07:50:45.828 226310 DEBUG oslo_concurrency.lockutils [None req-67adf746-9237-4f41-8c1d-5ca96447e7a3 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Acquiring lock "55a96092-65f2-4612-a809-0f145c804f96" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:45 np0005539564 nova_compute[226295]: 2025-11-29 07:50:45.829 226310 DEBUG oslo_concurrency.lockutils [None req-67adf746-9237-4f41-8c1d-5ca96447e7a3 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "55a96092-65f2-4612-a809-0f145c804f96" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:45 np0005539564 nova_compute[226295]: 2025-11-29 07:50:45.829 226310 DEBUG oslo_concurrency.lockutils [None req-67adf746-9237-4f41-8c1d-5ca96447e7a3 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Acquiring lock "55a96092-65f2-4612-a809-0f145c804f96-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:45 np0005539564 nova_compute[226295]: 2025-11-29 07:50:45.829 226310 DEBUG oslo_concurrency.lockutils [None req-67adf746-9237-4f41-8c1d-5ca96447e7a3 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "55a96092-65f2-4612-a809-0f145c804f96-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:45 np0005539564 nova_compute[226295]: 2025-11-29 07:50:45.830 226310 DEBUG oslo_concurrency.lockutils [None req-67adf746-9237-4f41-8c1d-5ca96447e7a3 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "55a96092-65f2-4612-a809-0f145c804f96-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:45 np0005539564 nova_compute[226295]: 2025-11-29 07:50:45.831 226310 INFO nova.compute.manager [None req-67adf746-9237-4f41-8c1d-5ca96447e7a3 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Terminating instance#033[00m
Nov 29 02:50:45 np0005539564 nova_compute[226295]: 2025-11-29 07:50:45.832 226310 DEBUG nova.compute.manager [None req-67adf746-9237-4f41-8c1d-5ca96447e7a3 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:50:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:50:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:45.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:45 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:45.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:45 np0005539564 kernel: tap0b323b38-c9 (unregistering): left promiscuous mode
Nov 29 02:50:45 np0005539564 NetworkManager[48997]: <info>  [1764402645.8907] device (tap0b323b38-c9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:50:45 np0005539564 ovn_controller[130591]: 2025-11-29T07:50:45Z|00087|binding|INFO|Releasing lport 0b323b38-c9ec-4cca-a4db-a839bbb3d14b from this chassis (sb_readonly=0)
Nov 29 02:50:45 np0005539564 nova_compute[226295]: 2025-11-29 07:50:45.896 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:45 np0005539564 ovn_controller[130591]: 2025-11-29T07:50:45Z|00088|binding|INFO|Setting lport 0b323b38-c9ec-4cca-a4db-a839bbb3d14b down in Southbound
Nov 29 02:50:45 np0005539564 ovn_controller[130591]: 2025-11-29T07:50:45Z|00089|binding|INFO|Removing iface tap0b323b38-c9 ovn-installed in OVS
Nov 29 02:50:45 np0005539564 nova_compute[226295]: 2025-11-29 07:50:45.900 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:50:45.906 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:67:81 10.100.0.9'], port_security=['fa:16:3e:81:67:81 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '55a96092-65f2-4612-a809-0f145c804f96', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b2c58ae2e706424fa3147694fc571db0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ad0084cd-f9d9-4dc4-8cfd-f48e086021ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61f2e0e3-be06-454a-8b4e-1d0721b87b15, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=0b323b38-c9ec-4cca-a4db-a839bbb3d14b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:50:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:50:45.908 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 0b323b38-c9ec-4cca-a4db-a839bbb3d14b in datapath 5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d unbound from our chassis#033[00m
Nov 29 02:50:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:50:45.911 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:50:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:50:45.912 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[059018a4-8725-47c4-85aa-d5da31a7ebda]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:50:45.913 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d namespace which is not needed anymore#033[00m
Nov 29 02:50:45 np0005539564 nova_compute[226295]: 2025-11-29 07:50:45.918 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:45 np0005539564 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000019.scope: Deactivated successfully.
Nov 29 02:50:45 np0005539564 nova_compute[226295]: 2025-11-29 07:50:45.955 226310 INFO nova.virt.libvirt.driver [None req-b346d258-2c3b-46a5-b75c-7453b6c19123 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Deleting instance files /var/lib/nova/instances/68def5bd-3a13-48c4-abe2-a7d5282f493b_del#033[00m
Nov 29 02:50:45 np0005539564 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000019.scope: Consumed 18.440s CPU time.
Nov 29 02:50:45 np0005539564 nova_compute[226295]: 2025-11-29 07:50:45.955 226310 INFO nova.virt.libvirt.driver [None req-b346d258-2c3b-46a5-b75c-7453b6c19123 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Deletion of /var/lib/nova/instances/68def5bd-3a13-48c4-abe2-a7d5282f493b_del complete#033[00m
Nov 29 02:50:45 np0005539564 systemd-machined[190128]: Machine qemu-11-instance-00000019 terminated.
Nov 29 02:50:46 np0005539564 nova_compute[226295]: 2025-11-29 07:50:46.013 226310 INFO nova.compute.manager [None req-b346d258-2c3b-46a5-b75c-7453b6c19123 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Took 1.23 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:50:46 np0005539564 nova_compute[226295]: 2025-11-29 07:50:46.014 226310 DEBUG oslo.service.loopingcall [None req-b346d258-2c3b-46a5-b75c-7453b6c19123 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:50:46 np0005539564 nova_compute[226295]: 2025-11-29 07:50:46.014 226310 DEBUG nova.compute.manager [-] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:50:46 np0005539564 nova_compute[226295]: 2025-11-29 07:50:46.015 226310 DEBUG nova.network.neutron [-] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:50:46 np0005539564 kernel: tap0b323b38-c9: entered promiscuous mode
Nov 29 02:50:46 np0005539564 kernel: tap0b323b38-c9 (unregistering): left promiscuous mode
Nov 29 02:50:46 np0005539564 neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d[237583]: [NOTICE]   (237587) : haproxy version is 2.8.14-c23fe91
Nov 29 02:50:46 np0005539564 neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d[237583]: [NOTICE]   (237587) : path to executable is /usr/sbin/haproxy
Nov 29 02:50:46 np0005539564 neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d[237583]: [WARNING]  (237587) : Exiting Master process...
Nov 29 02:50:46 np0005539564 neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d[237583]: [WARNING]  (237587) : Exiting Master process...
Nov 29 02:50:46 np0005539564 neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d[237583]: [ALERT]    (237587) : Current worker (237589) exited with code 143 (Terminated)
Nov 29 02:50:46 np0005539564 neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d[237583]: [WARNING]  (237587) : All workers exited. Exiting... (0)
Nov 29 02:50:46 np0005539564 systemd[1]: libpod-2ea30c7c3031237a14946be22252a24cc761243a2d15a55cd09622af9bfcd90f.scope: Deactivated successfully.
Nov 29 02:50:46 np0005539564 nova_compute[226295]: 2025-11-29 07:50:46.060 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:46 np0005539564 podman[238648]: 2025-11-29 07:50:46.068456031 +0000 UTC m=+0.046988823 container died 2ea30c7c3031237a14946be22252a24cc761243a2d15a55cd09622af9bfcd90f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:50:46 np0005539564 nova_compute[226295]: 2025-11-29 07:50:46.070 226310 INFO nova.virt.libvirt.driver [-] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Instance destroyed successfully.#033[00m
Nov 29 02:50:46 np0005539564 nova_compute[226295]: 2025-11-29 07:50:46.071 226310 DEBUG nova.objects.instance [None req-67adf746-9237-4f41-8c1d-5ca96447e7a3 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lazy-loading 'resources' on Instance uuid 55a96092-65f2-4612-a809-0f145c804f96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:46 np0005539564 nova_compute[226295]: 2025-11-29 07:50:46.090 226310 DEBUG nova.virt.libvirt.vif [None req-67adf746-9237-4f41-8c1d-5ca96447e7a3 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:49:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-2079509307',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-2079509307',id=25,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEYgVhV5oCWL/zQAqB0DQOOXmiTf0DMuz+TQcrYDPPKNZbRx/P2PRwEEgf3Xvpb7WhJ4XE5LOnipChRiobaw1mrfCL6W7daqE2XxiRFHktfVRQSPzC2uzKZew970NImApw==',key_name='tempest-keypair-1714751908',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:49:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b2c58ae2e706424fa3147694fc571db0',ramdisk_id='',reservation_id='r-h9zj9wcd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-1774120772',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-1774120772-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:49:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93506ec26b16451c91dc820b139e8707',uuid=55a96092-65f2-4612-a809-0f145c804f96,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0b323b38-c9ec-4cca-a4db-a839bbb3d14b", "address": "fa:16:3e:81:67:81", "network": {"id": "5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1728480926-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2c58ae2e706424fa3147694fc571db0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b323b38-c9", "ovs_interfaceid": "0b323b38-c9ec-4cca-a4db-a839bbb3d14b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:50:46 np0005539564 nova_compute[226295]: 2025-11-29 07:50:46.091 226310 DEBUG nova.network.os_vif_util [None req-67adf746-9237-4f41-8c1d-5ca96447e7a3 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Converting VIF {"id": "0b323b38-c9ec-4cca-a4db-a839bbb3d14b", "address": "fa:16:3e:81:67:81", "network": {"id": "5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1728480926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2c58ae2e706424fa3147694fc571db0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b323b38-c9", "ovs_interfaceid": "0b323b38-c9ec-4cca-a4db-a839bbb3d14b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:50:46 np0005539564 nova_compute[226295]: 2025-11-29 07:50:46.091 226310 DEBUG nova.network.os_vif_util [None req-67adf746-9237-4f41-8c1d-5ca96447e7a3 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:81:67:81,bridge_name='br-int',has_traffic_filtering=True,id=0b323b38-c9ec-4cca-a4db-a839bbb3d14b,network=Network(5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b323b38-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:50:46 np0005539564 nova_compute[226295]: 2025-11-29 07:50:46.092 226310 DEBUG os_vif [None req-67adf746-9237-4f41-8c1d-5ca96447e7a3 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:67:81,bridge_name='br-int',has_traffic_filtering=True,id=0b323b38-c9ec-4cca-a4db-a839bbb3d14b,network=Network(5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b323b38-c9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:50:46 np0005539564 nova_compute[226295]: 2025-11-29 07:50:46.094 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:46 np0005539564 nova_compute[226295]: 2025-11-29 07:50:46.094 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b323b38-c9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:46 np0005539564 nova_compute[226295]: 2025-11-29 07:50:46.096 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:46 np0005539564 nova_compute[226295]: 2025-11-29 07:50:46.097 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:46 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2ea30c7c3031237a14946be22252a24cc761243a2d15a55cd09622af9bfcd90f-userdata-shm.mount: Deactivated successfully.
Nov 29 02:50:46 np0005539564 systemd[1]: var-lib-containers-storage-overlay-5d587100524d928cc6cd5b1e7086851fac0b8aefc827abe58966def7bbb3c098-merged.mount: Deactivated successfully.
Nov 29 02:50:46 np0005539564 nova_compute[226295]: 2025-11-29 07:50:46.104 226310 INFO os_vif [None req-67adf746-9237-4f41-8c1d-5ca96447e7a3 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:67:81,bridge_name='br-int',has_traffic_filtering=True,id=0b323b38-c9ec-4cca-a4db-a839bbb3d14b,network=Network(5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b323b38-c9')#033[00m
Nov 29 02:50:46 np0005539564 podman[238648]: 2025-11-29 07:50:46.112244126 +0000 UTC m=+0.090776908 container cleanup 2ea30c7c3031237a14946be22252a24cc761243a2d15a55cd09622af9bfcd90f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 02:50:46 np0005539564 systemd[1]: libpod-conmon-2ea30c7c3031237a14946be22252a24cc761243a2d15a55cd09622af9bfcd90f.scope: Deactivated successfully.
Nov 29 02:50:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:50:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 13K writes, 57K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 13K writes, 3488 syncs, 3.82 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6455 writes, 30K keys, 6455 commit groups, 1.0 writes per commit group, ingest: 36.28 MB, 0.06 MB/s#012Interval WAL: 6454 writes, 2067 syncs, 3.12 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 02:50:46 np0005539564 nova_compute[226295]: 2025-11-29 07:50:46.636 226310 DEBUG nova.network.neutron [-] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:50:46 np0005539564 nova_compute[226295]: 2025-11-29 07:50:46.657 226310 DEBUG nova.network.neutron [-] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:50:46 np0005539564 nova_compute[226295]: 2025-11-29 07:50:46.677 226310 INFO nova.compute.manager [-] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Took 0.66 seconds to deallocate network for instance.#033[00m
Nov 29 02:50:46 np0005539564 nova_compute[226295]: 2025-11-29 07:50:46.750 226310 DEBUG oslo_concurrency.lockutils [None req-b346d258-2c3b-46a5-b75c-7453b6c19123 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:46 np0005539564 nova_compute[226295]: 2025-11-29 07:50:46.750 226310 DEBUG oslo_concurrency.lockutils [None req-b346d258-2c3b-46a5-b75c-7453b6c19123 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:46 np0005539564 nova_compute[226295]: 2025-11-29 07:50:46.864 226310 DEBUG oslo_concurrency.processutils [None req-b346d258-2c3b-46a5-b75c-7453b6c19123 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:47 np0005539564 nova_compute[226295]: 2025-11-29 07:50:47.332 226310 DEBUG oslo_concurrency.processutils [None req-b346d258-2c3b-46a5-b75c-7453b6c19123 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:47 np0005539564 nova_compute[226295]: 2025-11-29 07:50:47.339 226310 DEBUG nova.compute.provider_tree [None req-b346d258-2c3b-46a5-b75c-7453b6c19123 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:50:47 np0005539564 nova_compute[226295]: 2025-11-29 07:50:47.355 226310 DEBUG nova.scheduler.client.report [None req-b346d258-2c3b-46a5-b75c-7453b6c19123 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:50:47 np0005539564 nova_compute[226295]: 2025-11-29 07:50:47.382 226310 DEBUG oslo_concurrency.lockutils [None req-b346d258-2c3b-46a5-b75c-7453b6c19123 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:47 np0005539564 nova_compute[226295]: 2025-11-29 07:50:47.411 226310 INFO nova.scheduler.client.report [None req-b346d258-2c3b-46a5-b75c-7453b6c19123 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Deleted allocations for instance 68def5bd-3a13-48c4-abe2-a7d5282f493b#033[00m
Nov 29 02:50:47 np0005539564 nova_compute[226295]: 2025-11-29 07:50:47.487 226310 DEBUG oslo_concurrency.lockutils [None req-b346d258-2c3b-46a5-b75c-7453b6c19123 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "68def5bd-3a13-48c4-abe2-a7d5282f493b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.382s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:47.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:47.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e163 e163: 3 total, 3 up, 3 in
Nov 29 02:50:48 np0005539564 podman[238702]: 2025-11-29 07:50:48.01614139 +0000 UTC m=+1.883719879 container remove 2ea30c7c3031237a14946be22252a24cc761243a2d15a55cd09622af9bfcd90f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:50:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:50:48.023 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[40cabb04-9bff-4046-a868-8c0814e8dc5e]: (4, ('Sat Nov 29 07:50:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d (2ea30c7c3031237a14946be22252a24cc761243a2d15a55cd09622af9bfcd90f)\n2ea30c7c3031237a14946be22252a24cc761243a2d15a55cd09622af9bfcd90f\nSat Nov 29 07:50:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d (2ea30c7c3031237a14946be22252a24cc761243a2d15a55cd09622af9bfcd90f)\n2ea30c7c3031237a14946be22252a24cc761243a2d15a55cd09622af9bfcd90f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:50:48.025 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b8558526-6292-45e5-abf3-f844e0af694b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:50:48.026 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ccff1f0-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:48 np0005539564 nova_compute[226295]: 2025-11-29 07:50:48.028 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:48 np0005539564 kernel: tap5ccff1f0-60: left promiscuous mode
Nov 29 02:50:48 np0005539564 nova_compute[226295]: 2025-11-29 07:50:48.045 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:50:48.048 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b9728557-6201-4400-b2b6-424217adb636]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:50:48.073 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1f560cc6-0668-4116-9929-8dc12604fe6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:50:48.074 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[dec83393-a51f-42aa-a6a4-8878ecf473ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:50:48.089 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[724533a0-c975-42c5-b7ed-d1b60bb1c5a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 553231, 'reachable_time': 22840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238746, 'error': None, 'target': 'ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:48 np0005539564 systemd[1]: run-netns-ovnmeta\x2d5ccff1f0\x2d6b4b\x2d41d4\x2da60d\x2d18a7eff7fe9d.mount: Deactivated successfully.
Nov 29 02:50:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:50:48.093 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5ccff1f0-6b4b-41d4-a60d-18a7eff7fe9d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:50:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:50:48.093 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[f3214185-6743-4c60-a6dd-6539597450f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:49.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:49.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:50 np0005539564 nova_compute[226295]: 2025-11-29 07:50:50.026 226310 DEBUG nova.compute.manager [req-1d8b6de6-f8ef-4af8-9b45-ab60d3e99b98 req-4c76fb64-6681-4967-8f1b-a7a9d5e64e26 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Received event network-vif-unplugged-0b323b38-c9ec-4cca-a4db-a839bbb3d14b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:50 np0005539564 nova_compute[226295]: 2025-11-29 07:50:50.026 226310 DEBUG oslo_concurrency.lockutils [req-1d8b6de6-f8ef-4af8-9b45-ab60d3e99b98 req-4c76fb64-6681-4967-8f1b-a7a9d5e64e26 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "55a96092-65f2-4612-a809-0f145c804f96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:50 np0005539564 nova_compute[226295]: 2025-11-29 07:50:50.027 226310 DEBUG oslo_concurrency.lockutils [req-1d8b6de6-f8ef-4af8-9b45-ab60d3e99b98 req-4c76fb64-6681-4967-8f1b-a7a9d5e64e26 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "55a96092-65f2-4612-a809-0f145c804f96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:50 np0005539564 nova_compute[226295]: 2025-11-29 07:50:50.027 226310 DEBUG oslo_concurrency.lockutils [req-1d8b6de6-f8ef-4af8-9b45-ab60d3e99b98 req-4c76fb64-6681-4967-8f1b-a7a9d5e64e26 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "55a96092-65f2-4612-a809-0f145c804f96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:50 np0005539564 nova_compute[226295]: 2025-11-29 07:50:50.027 226310 DEBUG nova.compute.manager [req-1d8b6de6-f8ef-4af8-9b45-ab60d3e99b98 req-4c76fb64-6681-4967-8f1b-a7a9d5e64e26 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] No waiting events found dispatching network-vif-unplugged-0b323b38-c9ec-4cca-a4db-a839bbb3d14b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:50:50 np0005539564 nova_compute[226295]: 2025-11-29 07:50:50.027 226310 DEBUG nova.compute.manager [req-1d8b6de6-f8ef-4af8-9b45-ab60d3e99b98 req-4c76fb64-6681-4967-8f1b-a7a9d5e64e26 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Received event network-vif-unplugged-0b323b38-c9ec-4cca-a4db-a839bbb3d14b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:50:50 np0005539564 nova_compute[226295]: 2025-11-29 07:50:50.028 226310 DEBUG nova.compute.manager [req-1d8b6de6-f8ef-4af8-9b45-ab60d3e99b98 req-4c76fb64-6681-4967-8f1b-a7a9d5e64e26 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Received event network-vif-plugged-0b323b38-c9ec-4cca-a4db-a839bbb3d14b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:50 np0005539564 nova_compute[226295]: 2025-11-29 07:50:50.028 226310 DEBUG oslo_concurrency.lockutils [req-1d8b6de6-f8ef-4af8-9b45-ab60d3e99b98 req-4c76fb64-6681-4967-8f1b-a7a9d5e64e26 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "55a96092-65f2-4612-a809-0f145c804f96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:50 np0005539564 nova_compute[226295]: 2025-11-29 07:50:50.028 226310 DEBUG oslo_concurrency.lockutils [req-1d8b6de6-f8ef-4af8-9b45-ab60d3e99b98 req-4c76fb64-6681-4967-8f1b-a7a9d5e64e26 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "55a96092-65f2-4612-a809-0f145c804f96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:50 np0005539564 nova_compute[226295]: 2025-11-29 07:50:50.028 226310 DEBUG oslo_concurrency.lockutils [req-1d8b6de6-f8ef-4af8-9b45-ab60d3e99b98 req-4c76fb64-6681-4967-8f1b-a7a9d5e64e26 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "55a96092-65f2-4612-a809-0f145c804f96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:50 np0005539564 nova_compute[226295]: 2025-11-29 07:50:50.028 226310 DEBUG nova.compute.manager [req-1d8b6de6-f8ef-4af8-9b45-ab60d3e99b98 req-4c76fb64-6681-4967-8f1b-a7a9d5e64e26 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] No waiting events found dispatching network-vif-plugged-0b323b38-c9ec-4cca-a4db-a839bbb3d14b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:50:50 np0005539564 nova_compute[226295]: 2025-11-29 07:50:50.029 226310 WARNING nova.compute.manager [req-1d8b6de6-f8ef-4af8-9b45-ab60d3e99b98 req-4c76fb64-6681-4967-8f1b-a7a9d5e64e26 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Received unexpected event network-vif-plugged-0b323b38-c9ec-4cca-a4db-a839bbb3d14b for instance with vm_state active and task_state deleting.#033[00m
Nov 29 02:50:50 np0005539564 nova_compute[226295]: 2025-11-29 07:50:50.279 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:51 np0005539564 nova_compute[226295]: 2025-11-29 07:50:51.096 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:51 np0005539564 nova_compute[226295]: 2025-11-29 07:50:51.418 226310 INFO nova.virt.libvirt.driver [None req-67adf746-9237-4f41-8c1d-5ca96447e7a3 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Deleting instance files /var/lib/nova/instances/55a96092-65f2-4612-a809-0f145c804f96_del#033[00m
Nov 29 02:50:51 np0005539564 nova_compute[226295]: 2025-11-29 07:50:51.419 226310 INFO nova.virt.libvirt.driver [None req-67adf746-9237-4f41-8c1d-5ca96447e7a3 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Deletion of /var/lib/nova/instances/55a96092-65f2-4612-a809-0f145c804f96_del complete#033[00m
Nov 29 02:50:51 np0005539564 nova_compute[226295]: 2025-11-29 07:50:51.509 226310 INFO nova.compute.manager [None req-67adf746-9237-4f41-8c1d-5ca96447e7a3 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Took 5.68 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:50:51 np0005539564 nova_compute[226295]: 2025-11-29 07:50:51.510 226310 DEBUG oslo.service.loopingcall [None req-67adf746-9237-4f41-8c1d-5ca96447e7a3 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:50:51 np0005539564 nova_compute[226295]: 2025-11-29 07:50:51.510 226310 DEBUG nova.compute.manager [-] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:50:51 np0005539564 nova_compute[226295]: 2025-11-29 07:50:51.511 226310 DEBUG nova.network.neutron [-] [instance: 55a96092-65f2-4612-a809-0f145c804f96] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:50:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:51.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:51.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:53 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:50:53 np0005539564 podman[238756]: 2025-11-29 07:50:53.530727777 +0000 UTC m=+0.073658615 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:50:53 np0005539564 podman[238754]: 2025-11-29 07:50:53.541848168 +0000 UTC m=+0.085085924 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible)
Nov 29 02:50:53 np0005539564 podman[238748]: 2025-11-29 07:50:53.57033937 +0000 UTC m=+0.113023472 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller)
Nov 29 02:50:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:53.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:53.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:50:53 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3616219678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:50:54 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:50:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e164 e164: 3 total, 3 up, 3 in
Nov 29 02:50:54 np0005539564 nova_compute[226295]: 2025-11-29 07:50:54.625 226310 DEBUG nova.network.neutron [-] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:50:54 np0005539564 nova_compute[226295]: 2025-11-29 07:50:54.650 226310 INFO nova.compute.manager [-] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Took 3.14 seconds to deallocate network for instance.#033[00m
Nov 29 02:50:54 np0005539564 nova_compute[226295]: 2025-11-29 07:50:54.675 226310 DEBUG nova.compute.manager [req-56a6aa5e-1976-481e-a546-885871f25631 req-17d9f3a7-8b44-4851-85bb-0568c3fa7f4d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Received event network-vif-deleted-0b323b38-c9ec-4cca-a4db-a839bbb3d14b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:54 np0005539564 nova_compute[226295]: 2025-11-29 07:50:54.702 226310 DEBUG oslo_concurrency.lockutils [None req-67adf746-9237-4f41-8c1d-5ca96447e7a3 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:54 np0005539564 nova_compute[226295]: 2025-11-29 07:50:54.703 226310 DEBUG oslo_concurrency.lockutils [None req-67adf746-9237-4f41-8c1d-5ca96447e7a3 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:54 np0005539564 nova_compute[226295]: 2025-11-29 07:50:54.785 226310 DEBUG oslo_concurrency.processutils [None req-67adf746-9237-4f41-8c1d-5ca96447e7a3 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:50:55 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/231599509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:50:55 np0005539564 nova_compute[226295]: 2025-11-29 07:50:55.203 226310 DEBUG oslo_concurrency.processutils [None req-67adf746-9237-4f41-8c1d-5ca96447e7a3 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:55 np0005539564 nova_compute[226295]: 2025-11-29 07:50:55.213 226310 DEBUG nova.compute.provider_tree [None req-67adf746-9237-4f41-8c1d-5ca96447e7a3 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:50:55 np0005539564 nova_compute[226295]: 2025-11-29 07:50:55.231 226310 DEBUG nova.scheduler.client.report [None req-67adf746-9237-4f41-8c1d-5ca96447e7a3 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:50:55 np0005539564 nova_compute[226295]: 2025-11-29 07:50:55.263 226310 DEBUG oslo_concurrency.lockutils [None req-67adf746-9237-4f41-8c1d-5ca96447e7a3 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:55 np0005539564 nova_compute[226295]: 2025-11-29 07:50:55.281 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:55 np0005539564 nova_compute[226295]: 2025-11-29 07:50:55.320 226310 INFO nova.scheduler.client.report [None req-67adf746-9237-4f41-8c1d-5ca96447e7a3 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Deleted allocations for instance 55a96092-65f2-4612-a809-0f145c804f96#033[00m
Nov 29 02:50:55 np0005539564 nova_compute[226295]: 2025-11-29 07:50:55.431 226310 DEBUG oslo_concurrency.lockutils [None req-67adf746-9237-4f41-8c1d-5ca96447e7a3 93506ec26b16451c91dc820b139e8707 b2c58ae2e706424fa3147694fc571db0 - - default default] Lock "55a96092-65f2-4612-a809-0f145c804f96" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:55.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:55.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:56 np0005539564 nova_compute[226295]: 2025-11-29 07:50:56.099 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:57.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:57.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e165 e165: 3 total, 3 up, 3 in
Nov 29 02:50:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:50:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:59.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:50:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:59 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:59.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:00 np0005539564 nova_compute[226295]: 2025-11-29 07:51:00.004 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402645.00238, 68def5bd-3a13-48c4-abe2-a7d5282f493b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:51:00 np0005539564 nova_compute[226295]: 2025-11-29 07:51:00.004 226310 INFO nova.compute.manager [-] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:51:00 np0005539564 nova_compute[226295]: 2025-11-29 07:51:00.032 226310 DEBUG nova.compute.manager [None req-fe4ef802-29f7-4ebc-8c9b-a0a97da23576 - - - - - -] [instance: 68def5bd-3a13-48c4-abe2-a7d5282f493b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:51:00 np0005539564 nova_compute[226295]: 2025-11-29 07:51:00.219 226310 DEBUG oslo_concurrency.lockutils [None req-65957b4f-a136-4c9b-bbb8-0f929fe64e3d e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "dc42f6b3-eda5-409e-aac8-68275e50922e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:00 np0005539564 nova_compute[226295]: 2025-11-29 07:51:00.220 226310 DEBUG oslo_concurrency.lockutils [None req-65957b4f-a136-4c9b-bbb8-0f929fe64e3d e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "dc42f6b3-eda5-409e-aac8-68275e50922e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:00 np0005539564 nova_compute[226295]: 2025-11-29 07:51:00.221 226310 DEBUG oslo_concurrency.lockutils [None req-65957b4f-a136-4c9b-bbb8-0f929fe64e3d e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "dc42f6b3-eda5-409e-aac8-68275e50922e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:00 np0005539564 nova_compute[226295]: 2025-11-29 07:51:00.221 226310 DEBUG oslo_concurrency.lockutils [None req-65957b4f-a136-4c9b-bbb8-0f929fe64e3d e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "dc42f6b3-eda5-409e-aac8-68275e50922e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:00 np0005539564 nova_compute[226295]: 2025-11-29 07:51:00.222 226310 DEBUG oslo_concurrency.lockutils [None req-65957b4f-a136-4c9b-bbb8-0f929fe64e3d e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "dc42f6b3-eda5-409e-aac8-68275e50922e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:00 np0005539564 nova_compute[226295]: 2025-11-29 07:51:00.224 226310 INFO nova.compute.manager [None req-65957b4f-a136-4c9b-bbb8-0f929fe64e3d e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Terminating instance#033[00m
Nov 29 02:51:00 np0005539564 nova_compute[226295]: 2025-11-29 07:51:00.225 226310 DEBUG oslo_concurrency.lockutils [None req-65957b4f-a136-4c9b-bbb8-0f929fe64e3d e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "refresh_cache-dc42f6b3-eda5-409e-aac8-68275e50922e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:51:00 np0005539564 nova_compute[226295]: 2025-11-29 07:51:00.226 226310 DEBUG oslo_concurrency.lockutils [None req-65957b4f-a136-4c9b-bbb8-0f929fe64e3d e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquired lock "refresh_cache-dc42f6b3-eda5-409e-aac8-68275e50922e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:51:00 np0005539564 nova_compute[226295]: 2025-11-29 07:51:00.226 226310 DEBUG nova.network.neutron [None req-65957b4f-a136-4c9b-bbb8-0f929fe64e3d e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:51:00 np0005539564 nova_compute[226295]: 2025-11-29 07:51:00.284 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:00 np0005539564 nova_compute[226295]: 2025-11-29 07:51:00.422 226310 DEBUG nova.network.neutron [None req-65957b4f-a136-4c9b-bbb8-0f929fe64e3d e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:51:00 np0005539564 nova_compute[226295]: 2025-11-29 07:51:00.626 226310 DEBUG nova.network.neutron [None req-65957b4f-a136-4c9b-bbb8-0f929fe64e3d e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:51:00 np0005539564 nova_compute[226295]: 2025-11-29 07:51:00.642 226310 DEBUG oslo_concurrency.lockutils [None req-65957b4f-a136-4c9b-bbb8-0f929fe64e3d e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Releasing lock "refresh_cache-dc42f6b3-eda5-409e-aac8-68275e50922e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:51:00 np0005539564 nova_compute[226295]: 2025-11-29 07:51:00.643 226310 DEBUG nova.compute.manager [None req-65957b4f-a136-4c9b-bbb8-0f929fe64e3d e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:51:01 np0005539564 nova_compute[226295]: 2025-11-29 07:51:01.069 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402646.068572, 55a96092-65f2-4612-a809-0f145c804f96 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:51:01 np0005539564 nova_compute[226295]: 2025-11-29 07:51:01.071 226310 INFO nova.compute.manager [-] [instance: 55a96092-65f2-4612-a809-0f145c804f96] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:51:01 np0005539564 nova_compute[226295]: 2025-11-29 07:51:01.122 226310 DEBUG nova.compute.manager [None req-10071954-57b6-4416-b88a-0a9c19d2c85a - - - - - -] [instance: 55a96092-65f2-4612-a809-0f145c804f96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:51:01 np0005539564 nova_compute[226295]: 2025-11-29 07:51:01.122 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:01 np0005539564 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000010.scope: Deactivated successfully.
Nov 29 02:51:01 np0005539564 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000010.scope: Consumed 22.046s CPU time.
Nov 29 02:51:01 np0005539564 systemd-machined[190128]: Machine qemu-8-instance-00000010 terminated.
Nov 29 02:51:01 np0005539564 nova_compute[226295]: 2025-11-29 07:51:01.274 226310 INFO nova.virt.libvirt.driver [-] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Instance destroyed successfully.#033[00m
Nov 29 02:51:01 np0005539564 nova_compute[226295]: 2025-11-29 07:51:01.275 226310 DEBUG nova.objects.instance [None req-65957b4f-a136-4c9b-bbb8-0f929fe64e3d e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'resources' on Instance uuid dc42f6b3-eda5-409e-aac8-68275e50922e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:51:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:01.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:51:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:51:01 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:01.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:51:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e166 e166: 3 total, 3 up, 3 in
Nov 29 02:51:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:51:03.700 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:51:03.700 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:51:03.701 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:03.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:51:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:03 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:03.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:05 np0005539564 nova_compute[226295]: 2025-11-29 07:51:05.286 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:05 np0005539564 nova_compute[226295]: 2025-11-29 07:51:05.904 226310 INFO nova.virt.libvirt.driver [None req-65957b4f-a136-4c9b-bbb8-0f929fe64e3d e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Deleting instance files /var/lib/nova/instances/dc42f6b3-eda5-409e-aac8-68275e50922e_del#033[00m
Nov 29 02:51:05 np0005539564 nova_compute[226295]: 2025-11-29 07:51:05.905 226310 INFO nova.virt.libvirt.driver [None req-65957b4f-a136-4c9b-bbb8-0f929fe64e3d e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Deletion of /var/lib/nova/instances/dc42f6b3-eda5-409e-aac8-68275e50922e_del complete#033[00m
Nov 29 02:51:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:05.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:51:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:05.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:51:06 np0005539564 nova_compute[226295]: 2025-11-29 07:51:06.045 226310 INFO nova.compute.manager [None req-65957b4f-a136-4c9b-bbb8-0f929fe64e3d e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Took 5.40 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:51:06 np0005539564 nova_compute[226295]: 2025-11-29 07:51:06.045 226310 DEBUG oslo.service.loopingcall [None req-65957b4f-a136-4c9b-bbb8-0f929fe64e3d e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:51:06 np0005539564 nova_compute[226295]: 2025-11-29 07:51:06.046 226310 DEBUG nova.compute.manager [-] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:51:06 np0005539564 nova_compute[226295]: 2025-11-29 07:51:06.046 226310 DEBUG nova.network.neutron [-] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:51:06 np0005539564 nova_compute[226295]: 2025-11-29 07:51:06.125 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:06 np0005539564 nova_compute[226295]: 2025-11-29 07:51:06.202 226310 DEBUG nova.network.neutron [-] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:51:06 np0005539564 nova_compute[226295]: 2025-11-29 07:51:06.261 226310 DEBUG nova.network.neutron [-] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:51:06 np0005539564 nova_compute[226295]: 2025-11-29 07:51:06.281 226310 INFO nova.compute.manager [-] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Took 0.24 seconds to deallocate network for instance.#033[00m
Nov 29 02:51:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e167 e167: 3 total, 3 up, 3 in
Nov 29 02:51:06 np0005539564 nova_compute[226295]: 2025-11-29 07:51:06.423 226310 DEBUG oslo_concurrency.lockutils [None req-65957b4f-a136-4c9b-bbb8-0f929fe64e3d e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:06 np0005539564 nova_compute[226295]: 2025-11-29 07:51:06.424 226310 DEBUG oslo_concurrency.lockutils [None req-65957b4f-a136-4c9b-bbb8-0f929fe64e3d e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:06 np0005539564 nova_compute[226295]: 2025-11-29 07:51:06.498 226310 DEBUG oslo_concurrency.processutils [None req-65957b4f-a136-4c9b-bbb8-0f929fe64e3d e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:51:06 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3406527470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:51:06 np0005539564 nova_compute[226295]: 2025-11-29 07:51:06.949 226310 DEBUG oslo_concurrency.processutils [None req-65957b4f-a136-4c9b-bbb8-0f929fe64e3d e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:51:06 np0005539564 nova_compute[226295]: 2025-11-29 07:51:06.958 226310 DEBUG nova.compute.provider_tree [None req-65957b4f-a136-4c9b-bbb8-0f929fe64e3d e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:51:06 np0005539564 nova_compute[226295]: 2025-11-29 07:51:06.983 226310 DEBUG nova.scheduler.client.report [None req-65957b4f-a136-4c9b-bbb8-0f929fe64e3d e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:51:07 np0005539564 nova_compute[226295]: 2025-11-29 07:51:07.028 226310 DEBUG oslo_concurrency.lockutils [None req-65957b4f-a136-4c9b-bbb8-0f929fe64e3d e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:07 np0005539564 nova_compute[226295]: 2025-11-29 07:51:07.072 226310 INFO nova.scheduler.client.report [None req-65957b4f-a136-4c9b-bbb8-0f929fe64e3d e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Deleted allocations for instance dc42f6b3-eda5-409e-aac8-68275e50922e#033[00m
Nov 29 02:51:07 np0005539564 nova_compute[226295]: 2025-11-29 07:51:07.158 226310 DEBUG oslo_concurrency.lockutils [None req-65957b4f-a136-4c9b-bbb8-0f929fe64e3d e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "dc42f6b3-eda5-409e-aac8-68275e50922e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:07.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:07.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:09.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:09.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:10 np0005539564 nova_compute[226295]: 2025-11-29 07:51:10.292 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:11 np0005539564 nova_compute[226295]: 2025-11-29 07:51:11.186 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:11.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:11.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:13.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:13.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:15 np0005539564 nova_compute[226295]: 2025-11-29 07:51:15.293 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:15.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:15.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:16 np0005539564 nova_compute[226295]: 2025-11-29 07:51:16.189 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:16 np0005539564 nova_compute[226295]: 2025-11-29 07:51:16.272 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402661.270639, dc42f6b3-eda5-409e-aac8-68275e50922e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:51:16 np0005539564 nova_compute[226295]: 2025-11-29 07:51:16.273 226310 INFO nova.compute.manager [-] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:51:16 np0005539564 nova_compute[226295]: 2025-11-29 07:51:16.585 226310 DEBUG nova.compute.manager [None req-6191d4c7-91d6-41ec-b0e7-46d94724309b - - - - - -] [instance: dc42f6b3-eda5-409e-aac8-68275e50922e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:51:17 np0005539564 nova_compute[226295]: 2025-11-29 07:51:17.446 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:17 np0005539564 nova_compute[226295]: 2025-11-29 07:51:17.665 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:17.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:17.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:19 np0005539564 nova_compute[226295]: 2025-11-29 07:51:19.644 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:51:19.647 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:51:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:51:19.648 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:51:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:19.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:19.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:20 np0005539564 nova_compute[226295]: 2025-11-29 07:51:20.295 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:21 np0005539564 nova_compute[226295]: 2025-11-29 07:51:21.192 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:21.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:21.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:51:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:23 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:23.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:23.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:24 np0005539564 nova_compute[226295]: 2025-11-29 07:51:24.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:24 np0005539564 nova_compute[226295]: 2025-11-29 07:51:24.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:24 np0005539564 nova_compute[226295]: 2025-11-29 07:51:24.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:51:24 np0005539564 podman[238926]: 2025-11-29 07:51:24.551499367 +0000 UTC m=+0.090280515 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 02:51:24 np0005539564 podman[238927]: 2025-11-29 07:51:24.557238403 +0000 UTC m=+0.085349233 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:51:24 np0005539564 podman[238925]: 2025-11-29 07:51:24.596768983 +0000 UTC m=+0.136422755 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 02:51:25 np0005539564 nova_compute[226295]: 2025-11-29 07:51:25.299 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:25.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:51:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:25 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:25.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:26 np0005539564 nova_compute[226295]: 2025-11-29 07:51:26.241 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:26 np0005539564 nova_compute[226295]: 2025-11-29 07:51:26.336 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:26 np0005539564 nova_compute[226295]: 2025-11-29 07:51:26.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:26 np0005539564 nova_compute[226295]: 2025-11-29 07:51:26.342 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:51:27 np0005539564 nova_compute[226295]: 2025-11-29 07:51:27.275 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:51:27 np0005539564 nova_compute[226295]: 2025-11-29 07:51:27.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:27 np0005539564 nova_compute[226295]: 2025-11-29 07:51:27.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:27.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:27.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:51:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1613726453' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:51:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:51:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1613726453' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:51:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:51:28.651 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:51:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:29 np0005539564 nova_compute[226295]: 2025-11-29 07:51:29.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:29 np0005539564 nova_compute[226295]: 2025-11-29 07:51:29.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:29 np0005539564 nova_compute[226295]: 2025-11-29 07:51:29.371 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:29 np0005539564 nova_compute[226295]: 2025-11-29 07:51:29.372 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:29 np0005539564 nova_compute[226295]: 2025-11-29 07:51:29.372 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:29 np0005539564 nova_compute[226295]: 2025-11-29 07:51:29.372 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:51:29 np0005539564 nova_compute[226295]: 2025-11-29 07:51:29.372 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:51:29 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1765482989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:51:29 np0005539564 nova_compute[226295]: 2025-11-29 07:51:29.837 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:51:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:29.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:51:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:29.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:51:30 np0005539564 nova_compute[226295]: 2025-11-29 07:51:30.015 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:51:30 np0005539564 nova_compute[226295]: 2025-11-29 07:51:30.017 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4849MB free_disk=20.940109252929688GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:51:30 np0005539564 nova_compute[226295]: 2025-11-29 07:51:30.017 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:30 np0005539564 nova_compute[226295]: 2025-11-29 07:51:30.018 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:30 np0005539564 nova_compute[226295]: 2025-11-29 07:51:30.301 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:30 np0005539564 nova_compute[226295]: 2025-11-29 07:51:30.615 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:51:30 np0005539564 nova_compute[226295]: 2025-11-29 07:51:30.615 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:51:30 np0005539564 nova_compute[226295]: 2025-11-29 07:51:30.734 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:51:31 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/123180201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:51:31 np0005539564 nova_compute[226295]: 2025-11-29 07:51:31.229 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:51:31 np0005539564 nova_compute[226295]: 2025-11-29 07:51:31.236 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:51:31 np0005539564 nova_compute[226295]: 2025-11-29 07:51:31.243 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:31.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:31.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:32 np0005539564 nova_compute[226295]: 2025-11-29 07:51:32.854 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:51:32 np0005539564 nova_compute[226295]: 2025-11-29 07:51:32.933 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:51:32 np0005539564 nova_compute[226295]: 2025-11-29 07:51:32.934 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:32 np0005539564 nova_compute[226295]: 2025-11-29 07:51:32.935 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:32 np0005539564 nova_compute[226295]: 2025-11-29 07:51:32.936 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 02:51:32 np0005539564 nova_compute[226295]: 2025-11-29 07:51:32.996 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 02:51:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:51:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:33.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:33 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:33.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:33 np0005539564 nova_compute[226295]: 2025-11-29 07:51:33.998 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:51:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 5160 writes, 27K keys, 5160 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 5160 writes, 5160 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1285 writes, 6362 keys, 1285 commit groups, 1.0 writes per commit group, ingest: 13.69 MB, 0.02 MB/s#012Interval WAL: 1285 writes, 1285 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     14.3      2.33              0.12        14    0.167       0      0       0.0       0.0#012  L6      1/0    9.98 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4     27.0     22.3      5.08              0.36        13    0.391     64K   6935       0.0       0.0#012 Sum      1/0    9.98 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4     18.5     19.8      7.41              0.48        27    0.275     64K   6935       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.8     25.6     25.6      1.67              0.13         8    0.209     22K   2061       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     27.0     22.3      5.08              0.36        13    0.391     64K   6935       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.3      2.33              0.12        13    0.179       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.033, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.14 GB write, 0.06 MB/s write, 0.13 GB read, 0.06 MB/s read, 7.4 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 1.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558dc73191f0#2 capacity: 304.00 MB usage: 13.73 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000131 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(733,13.21 MB,4.34556%) FilterBlock(27,188.42 KB,0.0605282%) IndexBlock(27,343.98 KB,0.110501%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 02:51:35 np0005539564 nova_compute[226295]: 2025-11-29 07:51:35.304 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e168 e168: 3 total, 3 up, 3 in
Nov 29 02:51:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:51:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:35.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:51:35 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:35.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:51:36 np0005539564 nova_compute[226295]: 2025-11-29 07:51:36.247 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:36 np0005539564 nova_compute[226295]: 2025-11-29 07:51:36.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:36 np0005539564 nova_compute[226295]: 2025-11-29 07:51:36.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 02:51:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:51:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:37.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:37 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:37.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:38 np0005539564 nova_compute[226295]: 2025-11-29 07:51:38.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:51:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:39 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:39.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:39.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:40 np0005539564 nova_compute[226295]: 2025-11-29 07:51:40.308 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:41 np0005539564 nova_compute[226295]: 2025-11-29 07:51:41.251 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:51:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:41.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:41 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:41.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:51:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:43.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:43 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:43.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:45 np0005539564 nova_compute[226295]: 2025-11-29 07:51:45.311 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:45.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:45.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:46 np0005539564 nova_compute[226295]: 2025-11-29 07:51:46.252 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 e169: 3 total, 3 up, 3 in
Nov 29 02:51:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:51:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:47 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:47.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:47.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:51:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:49.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:49 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:49.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:50 np0005539564 nova_compute[226295]: 2025-11-29 07:51:50.338 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:51 np0005539564 nova_compute[226295]: 2025-11-29 07:51:51.256 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:51 np0005539564 nova_compute[226295]: 2025-11-29 07:51:51.574 226310 DEBUG nova.virt.libvirt.driver [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Creating tmpfile /var/lib/nova/instances/tmp5fbve2fu to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Nov 29 02:51:51 np0005539564 nova_compute[226295]: 2025-11-29 07:51:51.863 226310 DEBUG nova.compute.manager [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp5fbve2fu',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Nov 29 02:51:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:51.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:51.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:51:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:53.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:53 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:53.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:55 np0005539564 nova_compute[226295]: 2025-11-29 07:51:55.343 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:55 np0005539564 podman[239170]: 2025-11-29 07:51:55.549196691 +0000 UTC m=+0.089668078 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:51:55 np0005539564 podman[239171]: 2025-11-29 07:51:55.550962819 +0000 UTC m=+0.089485663 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 02:51:55 np0005539564 podman[239169]: 2025-11-29 07:51:55.591400964 +0000 UTC m=+0.137811042 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 29 02:51:55 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:51:55 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:51:55 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:51:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:55.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 02:51:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:55 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:55.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:56 np0005539564 nova_compute[226295]: 2025-11-29 07:51:56.258 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:56 np0005539564 nova_compute[226295]: 2025-11-29 07:51:56.780 226310 DEBUG nova.compute.manager [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp5fbve2fu',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='bae55d85-4263-4efe-895d-a762627b52ff',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Nov 29 02:51:57 np0005539564 nova_compute[226295]: 2025-11-29 07:51:57.861 226310 DEBUG oslo_concurrency.lockutils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Acquiring lock "refresh_cache-bae55d85-4263-4efe-895d-a762627b52ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:51:57 np0005539564 nova_compute[226295]: 2025-11-29 07:51:57.862 226310 DEBUG oslo_concurrency.lockutils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Acquired lock "refresh_cache-bae55d85-4263-4efe-895d-a762627b52ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:51:57 np0005539564 nova_compute[226295]: 2025-11-29 07:51:57.862 226310 DEBUG nova.network.neutron [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:51:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:57.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:57.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:58 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 29 02:51:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:59.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:51:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:59.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:00 np0005539564 nova_compute[226295]: 2025-11-29 07:52:00.345 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:01 np0005539564 nova_compute[226295]: 2025-11-29 07:52:01.016 226310 DEBUG nova.network.neutron [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Updating instance_info_cache with network_info: [{"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:52:01 np0005539564 nova_compute[226295]: 2025-11-29 07:52:01.261 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:01 np0005539564 nova_compute[226295]: 2025-11-29 07:52:01.342 226310 DEBUG oslo_concurrency.lockutils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Releasing lock "refresh_cache-bae55d85-4263-4efe-895d-a762627b52ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:52:01 np0005539564 nova_compute[226295]: 2025-11-29 07:52:01.345 226310 DEBUG os_brick.utils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 02:52:01 np0005539564 nova_compute[226295]: 2025-11-29 07:52:01.347 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:01 np0005539564 nova_compute[226295]: 2025-11-29 07:52:01.360 231810 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:01 np0005539564 nova_compute[226295]: 2025-11-29 07:52:01.361 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[3dffae77-4a12-4bf4-a403-cd7ba27541c0]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:01 np0005539564 nova_compute[226295]: 2025-11-29 07:52:01.362 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:01 np0005539564 nova_compute[226295]: 2025-11-29 07:52:01.372 231810 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:01 np0005539564 nova_compute[226295]: 2025-11-29 07:52:01.372 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[ec4c5b39-483c-47af-b84d-18479e030ca1]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d3b4384eec44', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:01 np0005539564 nova_compute[226295]: 2025-11-29 07:52:01.373 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:01 np0005539564 nova_compute[226295]: 2025-11-29 07:52:01.388 231810 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:01 np0005539564 nova_compute[226295]: 2025-11-29 07:52:01.388 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[ae33feab-8625-458e-8bb1-a720df083119]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:01 np0005539564 nova_compute[226295]: 2025-11-29 07:52:01.390 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[29f1a698-7e42-4e1d-acff-cbddead9e8ea]: (4, '2e858761-3292-4a17-b38f-a169c3064289') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:01 np0005539564 nova_compute[226295]: 2025-11-29 07:52:01.390 226310 DEBUG oslo_concurrency.processutils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:01 np0005539564 nova_compute[226295]: 2025-11-29 07:52:01.413 226310 DEBUG oslo_concurrency.processutils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:01 np0005539564 nova_compute[226295]: 2025-11-29 07:52:01.415 226310 DEBUG os_brick.initiator.connectors.lightos [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 02:52:01 np0005539564 nova_compute[226295]: 2025-11-29 07:52:01.416 226310 DEBUG os_brick.initiator.connectors.lightos [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 02:52:01 np0005539564 nova_compute[226295]: 2025-11-29 07:52:01.416 226310 DEBUG os_brick.initiator.connectors.lightos [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 02:52:01 np0005539564 nova_compute[226295]: 2025-11-29 07:52:01.417 226310 DEBUG os_brick.utils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] <== get_connector_properties: return (71ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d3b4384eec44', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2e858761-3292-4a17-b38f-a169c3064289', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 02:52:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:01.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:01.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:52:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:52:02 np0005539564 nova_compute[226295]: 2025-11-29 07:52:02.846 226310 DEBUG nova.virt.libvirt.driver [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp5fbve2fu',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='bae55d85-4263-4efe-895d-a762627b52ff',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={4a9f4928-146a-4c56-bbea-7dd9c7945b0c='b199b9a2-3ce8-4c17-bca4-a1228e4d21e5'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Nov 29 02:52:02 np0005539564 nova_compute[226295]: 2025-11-29 07:52:02.848 226310 DEBUG nova.virt.libvirt.driver [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Creating instance directory: /var/lib/nova/instances/bae55d85-4263-4efe-895d-a762627b52ff pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Nov 29 02:52:02 np0005539564 nova_compute[226295]: 2025-11-29 07:52:02.848 226310 DEBUG nova.virt.libvirt.driver [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Ensure instance console log exists: /var/lib/nova/instances/bae55d85-4263-4efe-895d-a762627b52ff/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:52:02 np0005539564 nova_compute[226295]: 2025-11-29 07:52:02.849 226310 DEBUG nova.virt.libvirt.driver [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901#033[00m
Nov 29 02:52:02 np0005539564 nova_compute[226295]: 2025-11-29 07:52:02.853 226310 DEBUG nova.virt.libvirt.driver [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Nov 29 02:52:02 np0005539564 nova_compute[226295]: 2025-11-29 07:52:02.855 226310 DEBUG nova.virt.libvirt.vif [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:51:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1291961647',display_name='tempest-LiveMigrationTest-server-1291961647',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1291961647',id=31,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:51:45Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='73f3d0f2c9aa4ba29984fc9e6a7ed869',ramdisk_id='',reservation_id='r-48rnb3n0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-561693451',owner_user_name='tempest-LiveMigrationTest-561693451-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:51:45Z,user_data=None,user_id='37531d9f927d40ecadd246429b5b598d',uuid=bae55d85-4263-4efe-895d-a762627b52ff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:52:02 np0005539564 nova_compute[226295]: 2025-11-29 07:52:02.856 226310 DEBUG nova.network.os_vif_util [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Converting VIF {"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:52:02 np0005539564 nova_compute[226295]: 2025-11-29 07:52:02.857 226310 DEBUG nova.network.os_vif_util [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:17:72,bridge_name='br-int',has_traffic_filtering=True,id=32326edd-9157-4611-83ff-41c84380e739,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32326edd-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:52:02 np0005539564 nova_compute[226295]: 2025-11-29 07:52:02.858 226310 DEBUG os_vif [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:17:72,bridge_name='br-int',has_traffic_filtering=True,id=32326edd-9157-4611-83ff-41c84380e739,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32326edd-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:52:02 np0005539564 nova_compute[226295]: 2025-11-29 07:52:02.859 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:02 np0005539564 nova_compute[226295]: 2025-11-29 07:52:02.860 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:52:02 np0005539564 nova_compute[226295]: 2025-11-29 07:52:02.860 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:52:02 np0005539564 nova_compute[226295]: 2025-11-29 07:52:02.865 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:02 np0005539564 nova_compute[226295]: 2025-11-29 07:52:02.865 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32326edd-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:52:02 np0005539564 nova_compute[226295]: 2025-11-29 07:52:02.866 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap32326edd-91, col_values=(('external_ids', {'iface-id': '32326edd-9157-4611-83ff-41c84380e739', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:05:17:72', 'vm-uuid': 'bae55d85-4263-4efe-895d-a762627b52ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:52:02 np0005539564 nova_compute[226295]: 2025-11-29 07:52:02.868 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:02 np0005539564 NetworkManager[48997]: <info>  [1764402722.8696] manager: (tap32326edd-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Nov 29 02:52:02 np0005539564 nova_compute[226295]: 2025-11-29 07:52:02.870 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:52:02 np0005539564 nova_compute[226295]: 2025-11-29 07:52:02.883 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:02 np0005539564 nova_compute[226295]: 2025-11-29 07:52:02.884 226310 INFO os_vif [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:17:72,bridge_name='br-int',has_traffic_filtering=True,id=32326edd-9157-4611-83ff-41c84380e739,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32326edd-91')#033[00m
Nov 29 02:52:02 np0005539564 nova_compute[226295]: 2025-11-29 07:52:02.888 226310 DEBUG nova.virt.libvirt.driver [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Nov 29 02:52:02 np0005539564 nova_compute[226295]: 2025-11-29 07:52:02.889 226310 DEBUG nova.compute.manager [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp5fbve2fu',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='bae55d85-4263-4efe-895d-a762627b52ff',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={4a9f4928-146a-4c56-bbea-7dd9c7945b0c='b199b9a2-3ce8-4c17-bca4-a1228e4d21e5'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Nov 29 02:52:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:03.701 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:03.702 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:03.702 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:03.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:03.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:05 np0005539564 nova_compute[226295]: 2025-11-29 07:52:05.348 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:05 np0005539564 nova_compute[226295]: 2025-11-29 07:52:05.799 226310 DEBUG nova.network.neutron [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Port 32326edd-9157-4611-83ff-41c84380e739 updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Nov 29 02:52:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:05.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:52:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:06.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:52:06 np0005539564 nova_compute[226295]: 2025-11-29 07:52:06.790 226310 DEBUG nova.compute.manager [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp5fbve2fu',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='bae55d85-4263-4efe-895d-a762627b52ff',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={4a9f4928-146a-4c56-bbea-7dd9c7945b0c='b199b9a2-3ce8-4c17-bca4-a1228e4d21e5'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Nov 29 02:52:06 np0005539564 systemd[1]: Starting libvirt proxy daemon...
Nov 29 02:52:06 np0005539564 systemd[1]: Started libvirt proxy daemon.
Nov 29 02:52:07 np0005539564 kernel: tap32326edd-91: entered promiscuous mode
Nov 29 02:52:07 np0005539564 NetworkManager[48997]: <info>  [1764402727.1802] manager: (tap32326edd-91): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Nov 29 02:52:07 np0005539564 nova_compute[226295]: 2025-11-29 07:52:07.179 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:07 np0005539564 ovn_controller[130591]: 2025-11-29T07:52:07Z|00090|binding|INFO|Claiming lport 32326edd-9157-4611-83ff-41c84380e739 for this additional chassis.
Nov 29 02:52:07 np0005539564 ovn_controller[130591]: 2025-11-29T07:52:07Z|00091|binding|INFO|32326edd-9157-4611-83ff-41c84380e739: Claiming fa:16:3e:05:17:72 10.100.0.4
Nov 29 02:52:07 np0005539564 nova_compute[226295]: 2025-11-29 07:52:07.188 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:07 np0005539564 systemd-machined[190128]: New machine qemu-14-instance-0000001f.
Nov 29 02:52:07 np0005539564 systemd-udevd[239324]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:52:07 np0005539564 NetworkManager[48997]: <info>  [1764402727.2374] device (tap32326edd-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:52:07 np0005539564 NetworkManager[48997]: <info>  [1764402727.2395] device (tap32326edd-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:52:07 np0005539564 systemd[1]: Started Virtual Machine qemu-14-instance-0000001f.
Nov 29 02:52:07 np0005539564 nova_compute[226295]: 2025-11-29 07:52:07.281 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:07 np0005539564 ovn_controller[130591]: 2025-11-29T07:52:07Z|00092|binding|INFO|Setting lport 32326edd-9157-4611-83ff-41c84380e739 ovn-installed in OVS
Nov 29 02:52:07 np0005539564 nova_compute[226295]: 2025-11-29 07:52:07.291 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:07 np0005539564 nova_compute[226295]: 2025-11-29 07:52:07.734 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402727.7334886, bae55d85-4263-4efe-895d-a762627b52ff => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:52:07 np0005539564 nova_compute[226295]: 2025-11-29 07:52:07.735 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] VM Started (Lifecycle Event)#033[00m
Nov 29 02:52:07 np0005539564 nova_compute[226295]: 2025-11-29 07:52:07.868 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:07 np0005539564 nova_compute[226295]: 2025-11-29 07:52:07.946 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:52:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:08.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:08.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:08 np0005539564 nova_compute[226295]: 2025-11-29 07:52:08.242 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402728.2411427, bae55d85-4263-4efe-895d-a762627b52ff => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:52:08 np0005539564 nova_compute[226295]: 2025-11-29 07:52:08.243 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:52:08 np0005539564 nova_compute[226295]: 2025-11-29 07:52:08.476 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:52:08 np0005539564 nova_compute[226295]: 2025-11-29 07:52:08.481 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:52:08 np0005539564 nova_compute[226295]: 2025-11-29 07:52:08.606 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Nov 29 02:52:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:10.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:10.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:10 np0005539564 nova_compute[226295]: 2025-11-29 07:52:10.392 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:12.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:12.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:12 np0005539564 nova_compute[226295]: 2025-11-29 07:52:12.871 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:14.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:14.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:15 np0005539564 nova_compute[226295]: 2025-11-29 07:52:15.395 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:15 np0005539564 ovn_controller[130591]: 2025-11-29T07:52:15Z|00093|binding|INFO|Claiming lport 32326edd-9157-4611-83ff-41c84380e739 for this chassis.
Nov 29 02:52:15 np0005539564 ovn_controller[130591]: 2025-11-29T07:52:15Z|00094|binding|INFO|32326edd-9157-4611-83ff-41c84380e739: Claiming fa:16:3e:05:17:72 10.100.0.4
Nov 29 02:52:15 np0005539564 ovn_controller[130591]: 2025-11-29T07:52:15Z|00095|binding|INFO|Setting lport 32326edd-9157-4611-83ff-41c84380e739 up in Southbound
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.527 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:17:72 10.100.0.4'], port_security=['fa:16:3e:05:17:72 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'bae55d85-4263-4efe-895d-a762627b52ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b746034c-0143-4024-986c-673efea114a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '73f3d0f2c9aa4ba29984fc9e6a7ed869', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'b8e2487b-c1a1-47ed-b1d0-0dfc2829d236', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e550898e-f197-49d8-b2f0-71b93775fb71, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=32326edd-9157-4611-83ff-41c84380e739) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.529 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 32326edd-9157-4611-83ff-41c84380e739 in datapath b746034c-0143-4024-986c-673efea114a3 bound to our chassis#033[00m
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.532 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b746034c-0143-4024-986c-673efea114a3#033[00m
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.553 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c01e245b-e08d-4d0a-b37d-c0962ded0bbc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.555 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb746034c-01 in ovnmeta-b746034c-0143-4024-986c-673efea114a3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.559 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb746034c-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.559 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d21f6a-24b1-4076-a203-41c4316a1f99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.560 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[44afebce-649f-4357-9bf8-a5f7291a41b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.580 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[d529252e-2637-4590-a2d2-469a1bd1389d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.603 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[209ea6a0-2e65-4077-8cac-ac971823d4a5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.647 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[c7c87d78-1f48-4f93-9973-f91d47af351a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:15 np0005539564 NetworkManager[48997]: <info>  [1764402735.6574] manager: (tapb746034c-00): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.656 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b9a420d1-9254-42e3-89a5-51ebc1443b47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.691 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[c484c459-f955-43f1-9ac1-d80a69885fc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.694 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[8fdcca06-131a-4bb9-92c6-ec6aa7c13a02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:15 np0005539564 systemd-udevd[239383]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:52:15 np0005539564 NetworkManager[48997]: <info>  [1764402735.7192] device (tapb746034c-00): carrier: link connected
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.723 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[da919fec-6405-43bf-9a30-25ba7fdcd360]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.744 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[aa1f7c54-3a00-4dd5-95ce-4423d3d23c1f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb746034c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:8c:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571040, 'reachable_time': 41447, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239401, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.762 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[528c072f-d062-4a32-8da9-ec87951e0dfc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe64:8cc1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571040, 'tstamp': 571040}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239402, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.783 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8e57bcb3-90b8-4d02-aa4e-8374beec87af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb746034c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:8c:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571040, 'reachable_time': 41447, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 239403, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.814 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1d9bd4af-bcee-4e41-82cc-6c7a2e61007d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.880 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[430f7534-a77f-455c-95c8-dc33ca4be1e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.882 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb746034c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.882 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.882 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb746034c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:52:15 np0005539564 nova_compute[226295]: 2025-11-29 07:52:15.884 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:15 np0005539564 NetworkManager[48997]: <info>  [1764402735.8852] manager: (tapb746034c-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Nov 29 02:52:15 np0005539564 kernel: tapb746034c-00: entered promiscuous mode
Nov 29 02:52:15 np0005539564 nova_compute[226295]: 2025-11-29 07:52:15.886 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.888 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb746034c-00, col_values=(('external_ids', {'iface-id': '193f2fed-77bd-4c35-9dcd-f198bbb1915e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:52:15 np0005539564 nova_compute[226295]: 2025-11-29 07:52:15.889 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:15 np0005539564 ovn_controller[130591]: 2025-11-29T07:52:15Z|00096|binding|INFO|Releasing lport 193f2fed-77bd-4c35-9dcd-f198bbb1915e from this chassis (sb_readonly=0)
Nov 29 02:52:15 np0005539564 nova_compute[226295]: 2025-11-29 07:52:15.917 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.919 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b746034c-0143-4024-986c-673efea114a3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b746034c-0143-4024-986c-673efea114a3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.920 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[96fc3d69-f09f-480d-8008-3118f7f3d63d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.921 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-b746034c-0143-4024-986c-673efea114a3
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/b746034c-0143-4024-986c-673efea114a3.pid.haproxy
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID b746034c-0143-4024-986c-673efea114a3
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:52:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:15.922 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'env', 'PROCESS_TAG=haproxy-b746034c-0143-4024-986c-673efea114a3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b746034c-0143-4024-986c-673efea114a3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:52:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:16.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:16.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:16 np0005539564 podman[239436]: 2025-11-29 07:52:16.339560656 +0000 UTC m=+0.041424192 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:52:16 np0005539564 podman[239436]: 2025-11-29 07:52:16.526693412 +0000 UTC m=+0.228556968 container create eeb0362f62587ecaffea55a033bd54db7b3e587ac5740c2792af547a4c7970c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:52:16 np0005539564 systemd[1]: Started libpod-conmon-eeb0362f62587ecaffea55a033bd54db7b3e587ac5740c2792af547a4c7970c1.scope.
Nov 29 02:52:16 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:52:16 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1e197ff300bf70d0b2ba6c3e958466f365a34ff5d7e84d10e2e0fe6d8647ef2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:52:16 np0005539564 podman[239436]: 2025-11-29 07:52:16.636179336 +0000 UTC m=+0.338042942 container init eeb0362f62587ecaffea55a033bd54db7b3e587ac5740c2792af547a4c7970c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 02:52:16 np0005539564 podman[239436]: 2025-11-29 07:52:16.643139805 +0000 UTC m=+0.345003351 container start eeb0362f62587ecaffea55a033bd54db7b3e587ac5740c2792af547a4c7970c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:52:16 np0005539564 neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3[239451]: [NOTICE]   (239455) : New worker (239457) forked
Nov 29 02:52:16 np0005539564 neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3[239451]: [NOTICE]   (239455) : Loading success.
Nov 29 02:52:17 np0005539564 nova_compute[226295]: 2025-11-29 07:52:17.874 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:18.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:18.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:20.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:20.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:20 np0005539564 nova_compute[226295]: 2025-11-29 07:52:20.398 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:22.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:22.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:22 np0005539564 nova_compute[226295]: 2025-11-29 07:52:22.877 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:23 np0005539564 nova_compute[226295]: 2025-11-29 07:52:23.414 226310 INFO nova.compute.manager [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Post operation of migration started#033[00m
Nov 29 02:52:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:24.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:52:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:24.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:52:24 np0005539564 nova_compute[226295]: 2025-11-29 07:52:24.647 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:24.647 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:52:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:24.648 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:52:24 np0005539564 nova_compute[226295]: 2025-11-29 07:52:24.659 226310 DEBUG oslo_concurrency.lockutils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Acquiring lock "refresh_cache-bae55d85-4263-4efe-895d-a762627b52ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:52:24 np0005539564 nova_compute[226295]: 2025-11-29 07:52:24.660 226310 DEBUG oslo_concurrency.lockutils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Acquired lock "refresh_cache-bae55d85-4263-4efe-895d-a762627b52ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:52:24 np0005539564 nova_compute[226295]: 2025-11-29 07:52:24.661 226310 DEBUG nova.network.neutron [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:52:25 np0005539564 nova_compute[226295]: 2025-11-29 07:52:25.378 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:25 np0005539564 nova_compute[226295]: 2025-11-29 07:52:25.379 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:52:25 np0005539564 nova_compute[226295]: 2025-11-29 07:52:25.400 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:26.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:26.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:26 np0005539564 nova_compute[226295]: 2025-11-29 07:52:26.336 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:26 np0005539564 nova_compute[226295]: 2025-11-29 07:52:26.341 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:26 np0005539564 podman[239473]: 2025-11-29 07:52:26.540053392 +0000 UTC m=+0.076641877 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 02:52:26 np0005539564 podman[239467]: 2025-11-29 07:52:26.552861048 +0000 UTC m=+0.091733435 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 02:52:26 np0005539564 podman[239466]: 2025-11-29 07:52:26.564681668 +0000 UTC m=+0.120987146 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:52:27 np0005539564 nova_compute[226295]: 2025-11-29 07:52:27.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:27 np0005539564 nova_compute[226295]: 2025-11-29 07:52:27.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:52:27 np0005539564 nova_compute[226295]: 2025-11-29 07:52:27.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:52:27 np0005539564 nova_compute[226295]: 2025-11-29 07:52:27.384 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:52:27 np0005539564 nova_compute[226295]: 2025-11-29 07:52:27.385 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:27 np0005539564 nova_compute[226295]: 2025-11-29 07:52:27.884 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:52:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:28.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:52:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:52:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:28.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:52:28 np0005539564 nova_compute[226295]: 2025-11-29 07:52:28.368 226310 DEBUG nova.network.neutron [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Updating instance_info_cache with network_info: [{"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:52:28 np0005539564 nova_compute[226295]: 2025-11-29 07:52:28.432 226310 DEBUG oslo_concurrency.lockutils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Releasing lock "refresh_cache-bae55d85-4263-4efe-895d-a762627b52ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:52:28 np0005539564 nova_compute[226295]: 2025-11-29 07:52:28.463 226310 DEBUG oslo_concurrency.lockutils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:28 np0005539564 nova_compute[226295]: 2025-11-29 07:52:28.464 226310 DEBUG oslo_concurrency.lockutils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:28 np0005539564 nova_compute[226295]: 2025-11-29 07:52:28.464 226310 DEBUG oslo_concurrency.lockutils [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:28 np0005539564 nova_compute[226295]: 2025-11-29 07:52:28.469 226310 INFO nova.virt.libvirt.driver [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Nov 29 02:52:28 np0005539564 virtqemud[225880]: Domain id=14 name='instance-0000001f' uuid=bae55d85-4263-4efe-895d-a762627b52ff is tainted: custom-monitor
Nov 29 02:52:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:28.650 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:52:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:29 np0005539564 nova_compute[226295]: 2025-11-29 07:52:29.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:29 np0005539564 nova_compute[226295]: 2025-11-29 07:52:29.480 226310 INFO nova.virt.libvirt.driver [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Nov 29 02:52:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:30.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:30.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:30 np0005539564 nova_compute[226295]: 2025-11-29 07:52:30.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:30 np0005539564 nova_compute[226295]: 2025-11-29 07:52:30.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:30 np0005539564 nova_compute[226295]: 2025-11-29 07:52:30.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:30 np0005539564 nova_compute[226295]: 2025-11-29 07:52:30.401 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:30 np0005539564 nova_compute[226295]: 2025-11-29 07:52:30.402 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:30 np0005539564 nova_compute[226295]: 2025-11-29 07:52:30.402 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:30 np0005539564 nova_compute[226295]: 2025-11-29 07:52:30.403 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:52:30 np0005539564 nova_compute[226295]: 2025-11-29 07:52:30.403 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:30 np0005539564 nova_compute[226295]: 2025-11-29 07:52:30.428 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:30 np0005539564 nova_compute[226295]: 2025-11-29 07:52:30.486 226310 INFO nova.virt.libvirt.driver [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Nov 29 02:52:30 np0005539564 nova_compute[226295]: 2025-11-29 07:52:30.493 226310 DEBUG nova.compute.manager [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:52:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:52:30 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3376242000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:52:30 np0005539564 nova_compute[226295]: 2025-11-29 07:52:30.876 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:52:31 np0005539564 nova_compute[226295]: 2025-11-29 07:52:31.383 226310 DEBUG nova.objects.instance [None req-6ccc4d96-f0cf-4efb-bd1d-02cbb9275ea9 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 29 02:52:31 np0005539564 nova_compute[226295]: 2025-11-29 07:52:31.431 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:52:31 np0005539564 nova_compute[226295]: 2025-11-29 07:52:31.432 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:52:31 np0005539564 nova_compute[226295]: 2025-11-29 07:52:31.613 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:52:31 np0005539564 nova_compute[226295]: 2025-11-29 07:52:31.614 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4619MB free_disk=20.921886444091797GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 02:52:31 np0005539564 nova_compute[226295]: 2025-11-29 07:52:31.614 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:52:31 np0005539564 nova_compute[226295]: 2025-11-29 07:52:31.615 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:52:31 np0005539564 nova_compute[226295]: 2025-11-29 07:52:31.684 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Applying migration context for instance bae55d85-4263-4efe-895d-a762627b52ff as it has an incoming, in-progress migration c0345181-2f6f-4518-ac40-055a95731c2a. Migration status is running _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950
Nov 29 02:52:31 np0005539564 nova_compute[226295]: 2025-11-29 07:52:31.684 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 29 02:52:31 np0005539564 nova_compute[226295]: 2025-11-29 07:52:31.701 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Nov 29 02:52:31 np0005539564 nova_compute[226295]: 2025-11-29 07:52:31.736 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance bae55d85-4263-4efe-895d-a762627b52ff actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 02:52:31 np0005539564 nova_compute[226295]: 2025-11-29 07:52:31.737 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 02:52:31 np0005539564 nova_compute[226295]: 2025-11-29 07:52:31.737 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 02:52:31 np0005539564 nova_compute[226295]: 2025-11-29 07:52:31.755 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing inventories for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 02:52:31 np0005539564 nova_compute[226295]: 2025-11-29 07:52:31.958 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating ProviderTree inventory for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 02:52:31 np0005539564 nova_compute[226295]: 2025-11-29 07:52:31.958 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating inventory in ProviderTree for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 02:52:31 np0005539564 nova_compute[226295]: 2025-11-29 07:52:31.989 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing aggregate associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 02:52:32 np0005539564 nova_compute[226295]: 2025-11-29 07:52:32.034 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing trait associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 02:52:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:32.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:32.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:32 np0005539564 nova_compute[226295]: 2025-11-29 07:52:32.075 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:52:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:52:32 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1785374527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:52:32 np0005539564 nova_compute[226295]: 2025-11-29 07:52:32.524 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:52:32 np0005539564 nova_compute[226295]: 2025-11-29 07:52:32.535 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:52:32 np0005539564 nova_compute[226295]: 2025-11-29 07:52:32.570 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:52:32 np0005539564 nova_compute[226295]: 2025-11-29 07:52:32.603 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 02:52:32 np0005539564 nova_compute[226295]: 2025-11-29 07:52:32.604 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.989s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:52:32 np0005539564 nova_compute[226295]: 2025-11-29 07:52:32.887 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:52:33 np0005539564 nova_compute[226295]: 2025-11-29 07:52:33.313 226310 DEBUG oslo_concurrency.lockutils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Acquiring lock "8d9a0a31-ea5e-4820-846a-57c5d8338b25" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:52:33 np0005539564 nova_compute[226295]: 2025-11-29 07:52:33.314 226310 DEBUG oslo_concurrency.lockutils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lock "8d9a0a31-ea5e-4820-846a-57c5d8338b25" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:52:33 np0005539564 nova_compute[226295]: 2025-11-29 07:52:33.348 226310 DEBUG nova.compute.manager [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 02:52:33 np0005539564 nova_compute[226295]: 2025-11-29 07:52:33.490 226310 DEBUG oslo_concurrency.lockutils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:52:33 np0005539564 nova_compute[226295]: 2025-11-29 07:52:33.491 226310 DEBUG oslo_concurrency.lockutils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:52:33 np0005539564 nova_compute[226295]: 2025-11-29 07:52:33.499 226310 DEBUG nova.virt.hardware [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 02:52:33 np0005539564 nova_compute[226295]: 2025-11-29 07:52:33.500 226310 INFO nova.compute.claims [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Claim successful on node compute-1.ctlplane.example.com
Nov 29 02:52:33 np0005539564 nova_compute[226295]: 2025-11-29 07:52:33.673 226310 DEBUG oslo_concurrency.processutils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:52:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:34.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:34.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:52:34 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1095754472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:52:34 np0005539564 nova_compute[226295]: 2025-11-29 07:52:34.169 226310 DEBUG oslo_concurrency.processutils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:52:34 np0005539564 nova_compute[226295]: 2025-11-29 07:52:34.178 226310 DEBUG nova.compute.provider_tree [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:52:34 np0005539564 nova_compute[226295]: 2025-11-29 07:52:34.201 226310 DEBUG nova.scheduler.client.report [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:52:34 np0005539564 nova_compute[226295]: 2025-11-29 07:52:34.227 226310 DEBUG oslo_concurrency.lockutils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:52:34 np0005539564 nova_compute[226295]: 2025-11-29 07:52:34.228 226310 DEBUG nova.compute.manager [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 02:52:34 np0005539564 nova_compute[226295]: 2025-11-29 07:52:34.344 226310 DEBUG nova.compute.manager [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 29 02:52:34 np0005539564 nova_compute[226295]: 2025-11-29 07:52:34.371 226310 INFO nova.virt.libvirt.driver [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 02:52:34 np0005539564 nova_compute[226295]: 2025-11-29 07:52:34.398 226310 DEBUG nova.compute.manager [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 02:52:34 np0005539564 nova_compute[226295]: 2025-11-29 07:52:34.510 226310 DEBUG nova.compute.manager [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 02:52:34 np0005539564 nova_compute[226295]: 2025-11-29 07:52:34.511 226310 DEBUG nova.virt.libvirt.driver [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 02:52:34 np0005539564 nova_compute[226295]: 2025-11-29 07:52:34.512 226310 INFO nova.virt.libvirt.driver [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Creating image(s)
Nov 29 02:52:34 np0005539564 nova_compute[226295]: 2025-11-29 07:52:34.536 226310 DEBUG nova.storage.rbd_utils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] rbd image 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:52:34 np0005539564 nova_compute[226295]: 2025-11-29 07:52:34.566 226310 DEBUG nova.storage.rbd_utils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] rbd image 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:52:34 np0005539564 nova_compute[226295]: 2025-11-29 07:52:34.592 226310 DEBUG nova.storage.rbd_utils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] rbd image 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:52:34 np0005539564 nova_compute[226295]: 2025-11-29 07:52:34.596 226310 DEBUG oslo_concurrency.processutils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:52:34 np0005539564 nova_compute[226295]: 2025-11-29 07:52:34.651 226310 DEBUG oslo_concurrency.processutils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:52:34 np0005539564 nova_compute[226295]: 2025-11-29 07:52:34.652 226310 DEBUG oslo_concurrency.lockutils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:52:34 np0005539564 nova_compute[226295]: 2025-11-29 07:52:34.653 226310 DEBUG oslo_concurrency.lockutils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:52:34 np0005539564 nova_compute[226295]: 2025-11-29 07:52:34.653 226310 DEBUG oslo_concurrency.lockutils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:52:34 np0005539564 nova_compute[226295]: 2025-11-29 07:52:34.696 226310 DEBUG nova.storage.rbd_utils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] rbd image 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:52:34 np0005539564 nova_compute[226295]: 2025-11-29 07:52:34.700 226310 DEBUG oslo_concurrency.processutils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.077 226310 DEBUG oslo_concurrency.processutils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.377s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.175 226310 DEBUG nova.storage.rbd_utils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] resizing rbd image 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.306 226310 DEBUG nova.objects.instance [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lazy-loading 'migration_context' on Instance uuid 8d9a0a31-ea5e-4820-846a-57c5d8338b25 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.328 226310 DEBUG nova.virt.libvirt.driver [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.329 226310 DEBUG nova.virt.libvirt.driver [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Ensure instance console log exists: /var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.329 226310 DEBUG oslo_concurrency.lockutils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.330 226310 DEBUG oslo_concurrency.lockutils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.330 226310 DEBUG oslo_concurrency.lockutils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.332 226310 DEBUG nova.virt.libvirt.driver [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.337 226310 WARNING nova.virt.libvirt.driver [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.341 226310 DEBUG nova.virt.libvirt.host [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.343 226310 DEBUG nova.virt.libvirt.host [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.346 226310 DEBUG nova.virt.libvirt.host [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.347 226310 DEBUG nova.virt.libvirt.host [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.348 226310 DEBUG nova.virt.libvirt.driver [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.348 226310 DEBUG nova.virt.hardware [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.349 226310 DEBUG nova.virt.hardware [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.349 226310 DEBUG nova.virt.hardware [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.350 226310 DEBUG nova.virt.hardware [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.350 226310 DEBUG nova.virt.hardware [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.350 226310 DEBUG nova.virt.hardware [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.350 226310 DEBUG nova.virt.hardware [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.351 226310 DEBUG nova.virt.hardware [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.351 226310 DEBUG nova.virt.hardware [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.351 226310 DEBUG nova.virt.hardware [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.352 226310 DEBUG nova.virt.hardware [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.355 226310 DEBUG oslo_concurrency.processutils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.404 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:52:35 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/723722729' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.772 226310 DEBUG oslo_concurrency.processutils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.810 226310 DEBUG nova.storage.rbd_utils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] rbd image 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:52:35 np0005539564 nova_compute[226295]: 2025-11-29 07:52:35.817 226310 DEBUG oslo_concurrency.processutils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:36.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:36.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:36 np0005539564 nova_compute[226295]: 2025-11-29 07:52:36.235 226310 DEBUG nova.virt.libvirt.driver [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Check if temp file /var/lib/nova/instances/tmprlccsjil exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m
Nov 29 02:52:36 np0005539564 nova_compute[226295]: 2025-11-29 07:52:36.236 226310 DEBUG nova.compute.manager [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmprlccsjil',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='bae55d85-4263-4efe-895d-a762627b52ff',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m
Nov 29 02:52:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:52:36 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1532400460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:52:36 np0005539564 nova_compute[226295]: 2025-11-29 07:52:36.393 226310 DEBUG oslo_concurrency.processutils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:36 np0005539564 nova_compute[226295]: 2025-11-29 07:52:36.395 226310 DEBUG nova.objects.instance [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8d9a0a31-ea5e-4820-846a-57c5d8338b25 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:52:36 np0005539564 nova_compute[226295]: 2025-11-29 07:52:36.698 226310 DEBUG nova.virt.libvirt.driver [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:52:36 np0005539564 nova_compute[226295]:  <uuid>8d9a0a31-ea5e-4820-846a-57c5d8338b25</uuid>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:  <name>instance-00000022</name>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServersAdmin275Test-server-2077152952</nova:name>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 07:52:35</nova:creationTime>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 02:52:36 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:        <nova:user uuid="ca7da72f49a74a5b9e3fff8d172e592b">tempest-ServersAdmin275Test-1236821265-project-member</nova:user>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:        <nova:project uuid="3d210fcdedbb4d709720bcce6eaf61e1">tempest-ServersAdmin275Test-1236821265</nova:project>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      <nova:ports/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <system>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      <entry name="serial">8d9a0a31-ea5e-4820-846a-57c5d8338b25</entry>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      <entry name="uuid">8d9a0a31-ea5e-4820-846a-57c5d8338b25</entry>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    </system>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:  <os>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:  </clock>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk">
Nov 29 02:52:36 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:52:36 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk.config">
Nov 29 02:52:36 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:52:36 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25/console.log" append="off"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    </serial>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <video>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 02:52:36 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 02:52:36 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:52:36 np0005539564 nova_compute[226295]: </domain>
Nov 29 02:52:36 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:52:37 np0005539564 nova_compute[226295]: 2025-11-29 07:52:37.158 226310 DEBUG nova.virt.libvirt.driver [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:52:37 np0005539564 nova_compute[226295]: 2025-11-29 07:52:37.159 226310 DEBUG nova.virt.libvirt.driver [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:52:37 np0005539564 nova_compute[226295]: 2025-11-29 07:52:37.160 226310 INFO nova.virt.libvirt.driver [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Using config drive#033[00m
Nov 29 02:52:37 np0005539564 nova_compute[226295]: 2025-11-29 07:52:37.212 226310 DEBUG nova.storage.rbd_utils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] rbd image 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:52:37 np0005539564 nova_compute[226295]: 2025-11-29 07:52:37.734 226310 INFO nova.virt.libvirt.driver [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Creating config drive at /var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25/disk.config#033[00m
Nov 29 02:52:37 np0005539564 nova_compute[226295]: 2025-11-29 07:52:37.743 226310 DEBUG oslo_concurrency.processutils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj6mv020_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:37 np0005539564 nova_compute[226295]: 2025-11-29 07:52:37.894 226310 DEBUG oslo_concurrency.processutils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj6mv020_" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:37 np0005539564 nova_compute[226295]: 2025-11-29 07:52:37.947 226310 DEBUG nova.storage.rbd_utils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] rbd image 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:52:37 np0005539564 nova_compute[226295]: 2025-11-29 07:52:37.954 226310 DEBUG oslo_concurrency.processutils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25/disk.config 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:37 np0005539564 nova_compute[226295]: 2025-11-29 07:52:37.993 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:38.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:38.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:38 np0005539564 nova_compute[226295]: 2025-11-29 07:52:38.787 226310 DEBUG oslo_concurrency.processutils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25/disk.config 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.833s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:38 np0005539564 nova_compute[226295]: 2025-11-29 07:52:38.788 226310 INFO nova.virt.libvirt.driver [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Deleting local config drive /var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25/disk.config because it was imported into RBD.#033[00m
Nov 29 02:52:38 np0005539564 systemd-machined[190128]: New machine qemu-15-instance-00000022.
Nov 29 02:52:38 np0005539564 systemd[1]: Started Virtual Machine qemu-15-instance-00000022.
Nov 29 02:52:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:39 np0005539564 nova_compute[226295]: 2025-11-29 07:52:39.560 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402759.5596914, 8d9a0a31-ea5e-4820-846a-57c5d8338b25 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:52:39 np0005539564 nova_compute[226295]: 2025-11-29 07:52:39.561 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:52:39 np0005539564 nova_compute[226295]: 2025-11-29 07:52:39.564 226310 DEBUG nova.compute.manager [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:52:39 np0005539564 nova_compute[226295]: 2025-11-29 07:52:39.565 226310 DEBUG nova.virt.libvirt.driver [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:52:39 np0005539564 nova_compute[226295]: 2025-11-29 07:52:39.570 226310 INFO nova.virt.libvirt.driver [-] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Instance spawned successfully.#033[00m
Nov 29 02:52:39 np0005539564 nova_compute[226295]: 2025-11-29 07:52:39.571 226310 DEBUG nova.virt.libvirt.driver [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:52:39 np0005539564 nova_compute[226295]: 2025-11-29 07:52:39.589 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:52:39 np0005539564 nova_compute[226295]: 2025-11-29 07:52:39.599 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:52:39 np0005539564 nova_compute[226295]: 2025-11-29 07:52:39.607 226310 DEBUG nova.virt.libvirt.driver [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:52:39 np0005539564 nova_compute[226295]: 2025-11-29 07:52:39.607 226310 DEBUG nova.virt.libvirt.driver [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:52:39 np0005539564 nova_compute[226295]: 2025-11-29 07:52:39.608 226310 DEBUG nova.virt.libvirt.driver [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:52:39 np0005539564 nova_compute[226295]: 2025-11-29 07:52:39.609 226310 DEBUG nova.virt.libvirt.driver [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:52:39 np0005539564 nova_compute[226295]: 2025-11-29 07:52:39.609 226310 DEBUG nova.virt.libvirt.driver [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:52:39 np0005539564 nova_compute[226295]: 2025-11-29 07:52:39.610 226310 DEBUG nova.virt.libvirt.driver [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:52:39 np0005539564 nova_compute[226295]: 2025-11-29 07:52:39.619 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:52:39 np0005539564 nova_compute[226295]: 2025-11-29 07:52:39.620 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402759.5610905, 8d9a0a31-ea5e-4820-846a-57c5d8338b25 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:52:39 np0005539564 nova_compute[226295]: 2025-11-29 07:52:39.620 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] VM Started (Lifecycle Event)#033[00m
Nov 29 02:52:39 np0005539564 nova_compute[226295]: 2025-11-29 07:52:39.642 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:52:39 np0005539564 nova_compute[226295]: 2025-11-29 07:52:39.646 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:52:39 np0005539564 nova_compute[226295]: 2025-11-29 07:52:39.669 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:52:39 np0005539564 nova_compute[226295]: 2025-11-29 07:52:39.677 226310 INFO nova.compute.manager [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Took 5.17 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:52:39 np0005539564 nova_compute[226295]: 2025-11-29 07:52:39.678 226310 DEBUG nova.compute.manager [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:52:39 np0005539564 nova_compute[226295]: 2025-11-29 07:52:39.750 226310 INFO nova.compute.manager [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Took 6.29 seconds to build instance.#033[00m
Nov 29 02:52:39 np0005539564 nova_compute[226295]: 2025-11-29 07:52:39.777 226310 DEBUG oslo_concurrency.lockutils [None req-81e60748-30b5-4e7f-8c40-f8d00ff458a1 ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lock "8d9a0a31-ea5e-4820-846a-57c5d8338b25" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:52:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:40.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:52:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:40.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:40 np0005539564 nova_compute[226295]: 2025-11-29 07:52:40.406 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:42.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:42.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:42 np0005539564 nova_compute[226295]: 2025-11-29 07:52:42.131 226310 INFO nova.compute.manager [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Rebuilding instance#033[00m
Nov 29 02:52:42 np0005539564 nova_compute[226295]: 2025-11-29 07:52:42.438 226310 DEBUG nova.objects.instance [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 8d9a0a31-ea5e-4820-846a-57c5d8338b25 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:52:42 np0005539564 nova_compute[226295]: 2025-11-29 07:52:42.458 226310 DEBUG nova.compute.manager [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:52:42 np0005539564 nova_compute[226295]: 2025-11-29 07:52:42.521 226310 DEBUG nova.objects.instance [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lazy-loading 'pci_requests' on Instance uuid 8d9a0a31-ea5e-4820-846a-57c5d8338b25 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:52:42 np0005539564 nova_compute[226295]: 2025-11-29 07:52:42.577 226310 DEBUG nova.objects.instance [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8d9a0a31-ea5e-4820-846a-57c5d8338b25 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:52:42 np0005539564 nova_compute[226295]: 2025-11-29 07:52:42.591 226310 DEBUG nova.objects.instance [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lazy-loading 'resources' on Instance uuid 8d9a0a31-ea5e-4820-846a-57c5d8338b25 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:52:42 np0005539564 nova_compute[226295]: 2025-11-29 07:52:42.598 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:42 np0005539564 nova_compute[226295]: 2025-11-29 07:52:42.605 226310 DEBUG nova.objects.instance [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lazy-loading 'migration_context' on Instance uuid 8d9a0a31-ea5e-4820-846a-57c5d8338b25 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:52:42 np0005539564 nova_compute[226295]: 2025-11-29 07:52:42.614 226310 DEBUG nova.objects.instance [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 02:52:42 np0005539564 nova_compute[226295]: 2025-11-29 07:52:42.620 226310 DEBUG nova.virt.libvirt.driver [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 02:52:42 np0005539564 nova_compute[226295]: 2025-11-29 07:52:42.998 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:44.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:44.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:44 np0005539564 nova_compute[226295]: 2025-11-29 07:52:44.135 226310 DEBUG nova.compute.manager [req-89d8337a-8ae8-4d37-b0c2-5eb31261e61b req-b26c26d1-4ca3-4ab4-90c4-73742e841de4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-unplugged-32326edd-9157-4611-83ff-41c84380e739 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:52:44 np0005539564 nova_compute[226295]: 2025-11-29 07:52:44.136 226310 DEBUG oslo_concurrency.lockutils [req-89d8337a-8ae8-4d37-b0c2-5eb31261e61b req-b26c26d1-4ca3-4ab4-90c4-73742e841de4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bae55d85-4263-4efe-895d-a762627b52ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:44 np0005539564 nova_compute[226295]: 2025-11-29 07:52:44.136 226310 DEBUG oslo_concurrency.lockutils [req-89d8337a-8ae8-4d37-b0c2-5eb31261e61b req-b26c26d1-4ca3-4ab4-90c4-73742e841de4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:44 np0005539564 nova_compute[226295]: 2025-11-29 07:52:44.136 226310 DEBUG oslo_concurrency.lockutils [req-89d8337a-8ae8-4d37-b0c2-5eb31261e61b req-b26c26d1-4ca3-4ab4-90c4-73742e841de4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:44 np0005539564 nova_compute[226295]: 2025-11-29 07:52:44.137 226310 DEBUG nova.compute.manager [req-89d8337a-8ae8-4d37-b0c2-5eb31261e61b req-b26c26d1-4ca3-4ab4-90c4-73742e841de4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] No waiting events found dispatching network-vif-unplugged-32326edd-9157-4611-83ff-41c84380e739 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:52:44 np0005539564 nova_compute[226295]: 2025-11-29 07:52:44.137 226310 DEBUG nova.compute.manager [req-89d8337a-8ae8-4d37-b0c2-5eb31261e61b req-b26c26d1-4ca3-4ab4-90c4-73742e841de4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-unplugged-32326edd-9157-4611-83ff-41c84380e739 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:52:45 np0005539564 nova_compute[226295]: 2025-11-29 07:52:45.409 226310 INFO nova.compute.manager [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Took 7.22 seconds for pre_live_migration on destination host compute-2.ctlplane.example.com.#033[00m
Nov 29 02:52:45 np0005539564 nova_compute[226295]: 2025-11-29 07:52:45.409 226310 DEBUG nova.compute.manager [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:52:45 np0005539564 nova_compute[226295]: 2025-11-29 07:52:45.411 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:45 np0005539564 nova_compute[226295]: 2025-11-29 07:52:45.450 226310 DEBUG nova.compute.manager [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmprlccsjil',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='bae55d85-4263-4efe-895d-a762627b52ff',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(16c28f83-7e6d-4691-b078-3048583134a7),old_vol_attachment_ids={4a9f4928-146a-4c56-bbea-7dd9c7945b0c='e91a474d-25b3-4d61-89c1-080b5b4408d2'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Nov 29 02:52:45 np0005539564 nova_compute[226295]: 2025-11-29 07:52:45.454 226310 DEBUG nova.objects.instance [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lazy-loading 'migration_context' on Instance uuid bae55d85-4263-4efe-895d-a762627b52ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:52:45 np0005539564 nova_compute[226295]: 2025-11-29 07:52:45.456 226310 DEBUG nova.virt.libvirt.driver [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Nov 29 02:52:45 np0005539564 nova_compute[226295]: 2025-11-29 07:52:45.459 226310 DEBUG nova.virt.libvirt.driver [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Nov 29 02:52:45 np0005539564 nova_compute[226295]: 2025-11-29 07:52:45.460 226310 DEBUG nova.virt.libvirt.driver [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Nov 29 02:52:45 np0005539564 nova_compute[226295]: 2025-11-29 07:52:45.486 226310 DEBUG nova.virt.libvirt.migration [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Find same serial number: pos=1, serial=4a9f4928-146a-4c56-bbea-7dd9c7945b0c _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242#033[00m
Nov 29 02:52:45 np0005539564 nova_compute[226295]: 2025-11-29 07:52:45.488 226310 DEBUG nova.virt.libvirt.vif [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:51:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1291961647',display_name='tempest-LiveMigrationTest-server-1291961647',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1291961647',id=31,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:51:45Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='73f3d0f2c9aa4ba29984fc9e6a7ed869',ramdisk_id='',reservation_id='r-48rnb3n0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-561693451',owner_user_name='tempest-LiveMigrationTest-561693451-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:52:31Z,user_data=None,user_id='37531d9f927d40ecadd246429b5b598d',uuid=bae55d85-4263-4efe-895d-a762627b52ff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:52:45 np0005539564 nova_compute[226295]: 2025-11-29 07:52:45.489 226310 DEBUG nova.network.os_vif_util [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Converting VIF {"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:52:45 np0005539564 nova_compute[226295]: 2025-11-29 07:52:45.490 226310 DEBUG nova.network.os_vif_util [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:05:17:72,bridge_name='br-int',has_traffic_filtering=True,id=32326edd-9157-4611-83ff-41c84380e739,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32326edd-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:52:45 np0005539564 nova_compute[226295]: 2025-11-29 07:52:45.491 226310 DEBUG nova.virt.libvirt.migration [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Updating guest XML with vif config: <interface type="ethernet">
Nov 29 02:52:45 np0005539564 nova_compute[226295]:  <mac address="fa:16:3e:05:17:72"/>
Nov 29 02:52:45 np0005539564 nova_compute[226295]:  <model type="virtio"/>
Nov 29 02:52:45 np0005539564 nova_compute[226295]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:52:45 np0005539564 nova_compute[226295]:  <mtu size="1442"/>
Nov 29 02:52:45 np0005539564 nova_compute[226295]:  <target dev="tap32326edd-91"/>
Nov 29 02:52:45 np0005539564 nova_compute[226295]: </interface>
Nov 29 02:52:45 np0005539564 nova_compute[226295]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Nov 29 02:52:45 np0005539564 nova_compute[226295]: 2025-11-29 07:52:45.492 226310 DEBUG nova.virt.libvirt.driver [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Nov 29 02:52:45 np0005539564 nova_compute[226295]: 2025-11-29 07:52:45.963 226310 DEBUG nova.virt.libvirt.migration [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:52:45 np0005539564 nova_compute[226295]: 2025-11-29 07:52:45.964 226310 INFO nova.virt.libvirt.migration [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Nov 29 02:52:46 np0005539564 nova_compute[226295]: 2025-11-29 07:52:46.060 226310 INFO nova.virt.libvirt.driver [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Nov 29 02:52:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:46.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:46.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:46 np0005539564 nova_compute[226295]: 2025-11-29 07:52:46.235 226310 DEBUG nova.compute.manager [req-f08cbe52-ae38-4fef-9154-ed4852cc6f8d req-f698fb37-b08c-4bc2-adf1-ee689073d552 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:52:46 np0005539564 nova_compute[226295]: 2025-11-29 07:52:46.235 226310 DEBUG oslo_concurrency.lockutils [req-f08cbe52-ae38-4fef-9154-ed4852cc6f8d req-f698fb37-b08c-4bc2-adf1-ee689073d552 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bae55d85-4263-4efe-895d-a762627b52ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:46 np0005539564 nova_compute[226295]: 2025-11-29 07:52:46.235 226310 DEBUG oslo_concurrency.lockutils [req-f08cbe52-ae38-4fef-9154-ed4852cc6f8d req-f698fb37-b08c-4bc2-adf1-ee689073d552 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:46 np0005539564 nova_compute[226295]: 2025-11-29 07:52:46.236 226310 DEBUG oslo_concurrency.lockutils [req-f08cbe52-ae38-4fef-9154-ed4852cc6f8d req-f698fb37-b08c-4bc2-adf1-ee689073d552 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:46 np0005539564 nova_compute[226295]: 2025-11-29 07:52:46.236 226310 DEBUG nova.compute.manager [req-f08cbe52-ae38-4fef-9154-ed4852cc6f8d req-f698fb37-b08c-4bc2-adf1-ee689073d552 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] No waiting events found dispatching network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:52:46 np0005539564 nova_compute[226295]: 2025-11-29 07:52:46.236 226310 WARNING nova.compute.manager [req-f08cbe52-ae38-4fef-9154-ed4852cc6f8d req-f698fb37-b08c-4bc2-adf1-ee689073d552 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received unexpected event network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:52:46 np0005539564 nova_compute[226295]: 2025-11-29 07:52:46.237 226310 DEBUG nova.compute.manager [req-f08cbe52-ae38-4fef-9154-ed4852cc6f8d req-f698fb37-b08c-4bc2-adf1-ee689073d552 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-changed-32326edd-9157-4611-83ff-41c84380e739 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:52:46 np0005539564 nova_compute[226295]: 2025-11-29 07:52:46.237 226310 DEBUG nova.compute.manager [req-f08cbe52-ae38-4fef-9154-ed4852cc6f8d req-f698fb37-b08c-4bc2-adf1-ee689073d552 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Refreshing instance network info cache due to event network-changed-32326edd-9157-4611-83ff-41c84380e739. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:52:46 np0005539564 nova_compute[226295]: 2025-11-29 07:52:46.238 226310 DEBUG oslo_concurrency.lockutils [req-f08cbe52-ae38-4fef-9154-ed4852cc6f8d req-f698fb37-b08c-4bc2-adf1-ee689073d552 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-bae55d85-4263-4efe-895d-a762627b52ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:52:46 np0005539564 nova_compute[226295]: 2025-11-29 07:52:46.239 226310 DEBUG oslo_concurrency.lockutils [req-f08cbe52-ae38-4fef-9154-ed4852cc6f8d req-f698fb37-b08c-4bc2-adf1-ee689073d552 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-bae55d85-4263-4efe-895d-a762627b52ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:52:46 np0005539564 nova_compute[226295]: 2025-11-29 07:52:46.240 226310 DEBUG nova.network.neutron [req-f08cbe52-ae38-4fef-9154-ed4852cc6f8d req-f698fb37-b08c-4bc2-adf1-ee689073d552 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Refreshing network info cache for port 32326edd-9157-4611-83ff-41c84380e739 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:52:46 np0005539564 nova_compute[226295]: 2025-11-29 07:52:46.567 226310 DEBUG nova.virt.libvirt.migration [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:52:46 np0005539564 nova_compute[226295]: 2025-11-29 07:52:46.567 226310 DEBUG nova.virt.libvirt.migration [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Nov 29 02:52:47 np0005539564 nova_compute[226295]: 2025-11-29 07:52:47.071 226310 DEBUG nova.virt.libvirt.migration [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:52:47 np0005539564 nova_compute[226295]: 2025-11-29 07:52:47.072 226310 DEBUG nova.virt.libvirt.migration [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Nov 29 02:52:47 np0005539564 nova_compute[226295]: 2025-11-29 07:52:47.287 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402767.2875335, bae55d85-4263-4efe-895d-a762627b52ff => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:52:47 np0005539564 nova_compute[226295]: 2025-11-29 07:52:47.288 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:52:47 np0005539564 nova_compute[226295]: 2025-11-29 07:52:47.314 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:52:47 np0005539564 nova_compute[226295]: 2025-11-29 07:52:47.317 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:52:47 np0005539564 nova_compute[226295]: 2025-11-29 07:52:47.336 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Nov 29 02:52:47 np0005539564 kernel: tap32326edd-91 (unregistering): left promiscuous mode
Nov 29 02:52:47 np0005539564 NetworkManager[48997]: <info>  [1764402767.6166] device (tap32326edd-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:52:47 np0005539564 nova_compute[226295]: 2025-11-29 07:52:47.628 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:47 np0005539564 nova_compute[226295]: 2025-11-29 07:52:47.631 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:47 np0005539564 ovn_controller[130591]: 2025-11-29T07:52:47Z|00097|binding|INFO|Releasing lport 32326edd-9157-4611-83ff-41c84380e739 from this chassis (sb_readonly=0)
Nov 29 02:52:47 np0005539564 ovn_controller[130591]: 2025-11-29T07:52:47Z|00098|binding|INFO|Setting lport 32326edd-9157-4611-83ff-41c84380e739 down in Southbound
Nov 29 02:52:47 np0005539564 ovn_controller[130591]: 2025-11-29T07:52:47Z|00099|binding|INFO|Removing iface tap32326edd-91 ovn-installed in OVS
Nov 29 02:52:47 np0005539564 nova_compute[226295]: 2025-11-29 07:52:47.653 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:47.672 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:17:72 10.100.0.4'], port_security=['fa:16:3e:05:17:72 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-2.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'c8abfd39-a629-4854-b6ed-e2d68f35f5fb'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'bae55d85-4263-4efe-895d-a762627b52ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b746034c-0143-4024-986c-673efea114a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '73f3d0f2c9aa4ba29984fc9e6a7ed869', 'neutron:revision_number': '18', 'neutron:security_group_ids': 'b8e2487b-c1a1-47ed-b1d0-0dfc2829d236', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e550898e-f197-49d8-b2f0-71b93775fb71, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=32326edd-9157-4611-83ff-41c84380e739) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:52:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:47.673 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 32326edd-9157-4611-83ff-41c84380e739 in datapath b746034c-0143-4024-986c-673efea114a3 unbound from our chassis#033[00m
Nov 29 02:52:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:47.675 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b746034c-0143-4024-986c-673efea114a3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:52:47 np0005539564 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Nov 29 02:52:47 np0005539564 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001f.scope: Consumed 3.250s CPU time.
Nov 29 02:52:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:47.678 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b6216b33-78ff-4b3b-b222-e4211a7bf54b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:47.678 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b746034c-0143-4024-986c-673efea114a3 namespace which is not needed anymore#033[00m
Nov 29 02:52:47 np0005539564 systemd-machined[190128]: Machine qemu-14-instance-0000001f terminated.
Nov 29 02:52:47 np0005539564 virtqemud[225880]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volumes/volume-4a9f4928-146a-4c56-bbea-7dd9c7945b0c: No such file or directory
Nov 29 02:52:47 np0005539564 virtqemud[225880]: Unable to get XATTR trusted.libvirt.security.ref_dac on volumes/volume-4a9f4928-146a-4c56-bbea-7dd9c7945b0c: No such file or directory
Nov 29 02:52:47 np0005539564 nova_compute[226295]: 2025-11-29 07:52:47.880 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:47 np0005539564 nova_compute[226295]: 2025-11-29 07:52:47.889 226310 DEBUG nova.virt.libvirt.guest [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Nov 29 02:52:47 np0005539564 nova_compute[226295]: 2025-11-29 07:52:47.890 226310 INFO nova.virt.libvirt.driver [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Migration operation has completed#033[00m
Nov 29 02:52:47 np0005539564 nova_compute[226295]: 2025-11-29 07:52:47.890 226310 INFO nova.compute.manager [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] _post_live_migration() is started..#033[00m
Nov 29 02:52:47 np0005539564 nova_compute[226295]: 2025-11-29 07:52:47.891 226310 DEBUG nova.virt.libvirt.driver [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Nov 29 02:52:47 np0005539564 nova_compute[226295]: 2025-11-29 07:52:47.892 226310 DEBUG nova.virt.libvirt.driver [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Nov 29 02:52:47 np0005539564 nova_compute[226295]: 2025-11-29 07:52:47.892 226310 DEBUG nova.virt.libvirt.driver [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Nov 29 02:52:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:48.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:52:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:48.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:52:48 np0005539564 nova_compute[226295]: 2025-11-29 07:52:48.358 226310 DEBUG nova.compute.manager [req-f686cf67-728b-4b31-812e-de97d3541830 req-0c2da897-9575-41b5-9795-37fbb5788ead 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-unplugged-32326edd-9157-4611-83ff-41c84380e739 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:52:48 np0005539564 nova_compute[226295]: 2025-11-29 07:52:48.359 226310 DEBUG oslo_concurrency.lockutils [req-f686cf67-728b-4b31-812e-de97d3541830 req-0c2da897-9575-41b5-9795-37fbb5788ead 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bae55d85-4263-4efe-895d-a762627b52ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:48 np0005539564 nova_compute[226295]: 2025-11-29 07:52:48.359 226310 DEBUG oslo_concurrency.lockutils [req-f686cf67-728b-4b31-812e-de97d3541830 req-0c2da897-9575-41b5-9795-37fbb5788ead 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:48 np0005539564 nova_compute[226295]: 2025-11-29 07:52:48.360 226310 DEBUG oslo_concurrency.lockutils [req-f686cf67-728b-4b31-812e-de97d3541830 req-0c2da897-9575-41b5-9795-37fbb5788ead 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:48 np0005539564 nova_compute[226295]: 2025-11-29 07:52:48.360 226310 DEBUG nova.compute.manager [req-f686cf67-728b-4b31-812e-de97d3541830 req-0c2da897-9575-41b5-9795-37fbb5788ead 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] No waiting events found dispatching network-vif-unplugged-32326edd-9157-4611-83ff-41c84380e739 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:52:48 np0005539564 nova_compute[226295]: 2025-11-29 07:52:48.361 226310 DEBUG nova.compute.manager [req-f686cf67-728b-4b31-812e-de97d3541830 req-0c2da897-9575-41b5-9795-37fbb5788ead 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-unplugged-32326edd-9157-4611-83ff-41c84380e739 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:52:48 np0005539564 nova_compute[226295]: 2025-11-29 07:52:48.439 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:48 np0005539564 neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3[239451]: [NOTICE]   (239455) : haproxy version is 2.8.14-c23fe91
Nov 29 02:52:48 np0005539564 neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3[239451]: [NOTICE]   (239455) : path to executable is /usr/sbin/haproxy
Nov 29 02:52:48 np0005539564 neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3[239451]: [WARNING]  (239455) : Exiting Master process...
Nov 29 02:52:48 np0005539564 neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3[239451]: [WARNING]  (239455) : Exiting Master process...
Nov 29 02:52:48 np0005539564 neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3[239451]: [ALERT]    (239455) : Current worker (239457) exited with code 143 (Terminated)
Nov 29 02:52:48 np0005539564 neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3[239451]: [WARNING]  (239455) : All workers exited. Exiting... (0)
Nov 29 02:52:48 np0005539564 systemd[1]: libpod-eeb0362f62587ecaffea55a033bd54db7b3e587ac5740c2792af547a4c7970c1.scope: Deactivated successfully.
Nov 29 02:52:48 np0005539564 podman[239977]: 2025-11-29 07:52:48.821220174 +0000 UTC m=+1.022982804 container died eeb0362f62587ecaffea55a033bd54db7b3e587ac5740c2792af547a4c7970c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 02:52:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:48 np0005539564 nova_compute[226295]: 2025-11-29 07:52:48.978 226310 DEBUG nova.compute.manager [req-be81e4c0-12bd-4cb9-a6b9-4846006233fb req-213c6456-9a97-4a5d-82ea-feed255ccc03 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-unplugged-32326edd-9157-4611-83ff-41c84380e739 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:52:48 np0005539564 nova_compute[226295]: 2025-11-29 07:52:48.979 226310 DEBUG oslo_concurrency.lockutils [req-be81e4c0-12bd-4cb9-a6b9-4846006233fb req-213c6456-9a97-4a5d-82ea-feed255ccc03 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bae55d85-4263-4efe-895d-a762627b52ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:48 np0005539564 nova_compute[226295]: 2025-11-29 07:52:48.980 226310 DEBUG oslo_concurrency.lockutils [req-be81e4c0-12bd-4cb9-a6b9-4846006233fb req-213c6456-9a97-4a5d-82ea-feed255ccc03 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:48 np0005539564 nova_compute[226295]: 2025-11-29 07:52:48.980 226310 DEBUG oslo_concurrency.lockutils [req-be81e4c0-12bd-4cb9-a6b9-4846006233fb req-213c6456-9a97-4a5d-82ea-feed255ccc03 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:48 np0005539564 nova_compute[226295]: 2025-11-29 07:52:48.981 226310 DEBUG nova.compute.manager [req-be81e4c0-12bd-4cb9-a6b9-4846006233fb req-213c6456-9a97-4a5d-82ea-feed255ccc03 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] No waiting events found dispatching network-vif-unplugged-32326edd-9157-4611-83ff-41c84380e739 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:52:48 np0005539564 nova_compute[226295]: 2025-11-29 07:52:48.981 226310 DEBUG nova.compute.manager [req-be81e4c0-12bd-4cb9-a6b9-4846006233fb req-213c6456-9a97-4a5d-82ea-feed255ccc03 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-unplugged-32326edd-9157-4611-83ff-41c84380e739 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:52:49 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eeb0362f62587ecaffea55a033bd54db7b3e587ac5740c2792af547a4c7970c1-userdata-shm.mount: Deactivated successfully.
Nov 29 02:52:49 np0005539564 systemd[1]: var-lib-containers-storage-overlay-d1e197ff300bf70d0b2ba6c3e958466f365a34ff5d7e84d10e2e0fe6d8647ef2-merged.mount: Deactivated successfully.
Nov 29 02:52:49 np0005539564 nova_compute[226295]: 2025-11-29 07:52:49.067 226310 DEBUG nova.network.neutron [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Activated binding for port 32326edd-9157-4611-83ff-41c84380e739 and host compute-2.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Nov 29 02:52:49 np0005539564 nova_compute[226295]: 2025-11-29 07:52:49.068 226310 DEBUG nova.compute.manager [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Nov 29 02:52:49 np0005539564 nova_compute[226295]: 2025-11-29 07:52:49.070 226310 DEBUG nova.virt.libvirt.vif [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:51:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1291961647',display_name='tempest-LiveMigrationTest-server-1291961647',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1291961647',id=31,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:51:45Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='73f3d0f2c9aa4ba29984fc9e6a7ed869',ramdisk_id='',reservation_id='r-48rnb3n0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-561693451',owner_user_name='tempest-LiveMigrationTest-561693451-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:52:34Z,user_data=None,user_id='37531d9f927d40ecadd246429b5b598d',uuid=bae55d85-4263-4efe-895d-a762627b52ff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:52:49 np0005539564 nova_compute[226295]: 2025-11-29 07:52:49.071 226310 DEBUG nova.network.os_vif_util [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Converting VIF {"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:52:49 np0005539564 nova_compute[226295]: 2025-11-29 07:52:49.072 226310 DEBUG nova.network.os_vif_util [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:05:17:72,bridge_name='br-int',has_traffic_filtering=True,id=32326edd-9157-4611-83ff-41c84380e739,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32326edd-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:52:49 np0005539564 nova_compute[226295]: 2025-11-29 07:52:49.073 226310 DEBUG os_vif [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:17:72,bridge_name='br-int',has_traffic_filtering=True,id=32326edd-9157-4611-83ff-41c84380e739,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32326edd-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:52:49 np0005539564 nova_compute[226295]: 2025-11-29 07:52:49.077 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:49 np0005539564 nova_compute[226295]: 2025-11-29 07:52:49.077 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32326edd-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:52:49 np0005539564 nova_compute[226295]: 2025-11-29 07:52:49.098 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:49 np0005539564 nova_compute[226295]: 2025-11-29 07:52:49.103 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:52:49 np0005539564 nova_compute[226295]: 2025-11-29 07:52:49.107 226310 INFO os_vif [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:17:72,bridge_name='br-int',has_traffic_filtering=True,id=32326edd-9157-4611-83ff-41c84380e739,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32326edd-91')#033[00m
Nov 29 02:52:49 np0005539564 nova_compute[226295]: 2025-11-29 07:52:49.107 226310 DEBUG oslo_concurrency.lockutils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:49 np0005539564 nova_compute[226295]: 2025-11-29 07:52:49.108 226310 DEBUG oslo_concurrency.lockutils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:49 np0005539564 nova_compute[226295]: 2025-11-29 07:52:49.109 226310 DEBUG oslo_concurrency.lockutils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:49 np0005539564 nova_compute[226295]: 2025-11-29 07:52:49.109 226310 DEBUG nova.compute.manager [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Nov 29 02:52:49 np0005539564 nova_compute[226295]: 2025-11-29 07:52:49.110 226310 INFO nova.virt.libvirt.driver [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Deleting instance files /var/lib/nova/instances/bae55d85-4263-4efe-895d-a762627b52ff_del#033[00m
Nov 29 02:52:49 np0005539564 nova_compute[226295]: 2025-11-29 07:52:49.111 226310 INFO nova.virt.libvirt.driver [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Deletion of /var/lib/nova/instances/bae55d85-4263-4efe-895d-a762627b52ff_del complete#033[00m
Nov 29 02:52:49 np0005539564 nova_compute[226295]: 2025-11-29 07:52:49.116 226310 DEBUG nova.network.neutron [req-f08cbe52-ae38-4fef-9154-ed4852cc6f8d req-f698fb37-b08c-4bc2-adf1-ee689073d552 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Updated VIF entry in instance network info cache for port 32326edd-9157-4611-83ff-41c84380e739. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:52:49 np0005539564 nova_compute[226295]: 2025-11-29 07:52:49.117 226310 DEBUG nova.network.neutron [req-f08cbe52-ae38-4fef-9154-ed4852cc6f8d req-f698fb37-b08c-4bc2-adf1-ee689073d552 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Updating instance_info_cache with network_info: [{"id": "32326edd-9157-4611-83ff-41c84380e739", "address": "fa:16:3e:05:17:72", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32326edd-91", "ovs_interfaceid": "32326edd-9157-4611-83ff-41c84380e739", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true, "migrating_to": "compute-2.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:52:49 np0005539564 nova_compute[226295]: 2025-11-29 07:52:49.144 226310 DEBUG oslo_concurrency.lockutils [req-f08cbe52-ae38-4fef-9154-ed4852cc6f8d req-f698fb37-b08c-4bc2-adf1-ee689073d552 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-bae55d85-4263-4efe-895d-a762627b52ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:52:49 np0005539564 podman[239977]: 2025-11-29 07:52:49.33820117 +0000 UTC m=+1.539963790 container cleanup eeb0362f62587ecaffea55a033bd54db7b3e587ac5740c2792af547a4c7970c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:52:49 np0005539564 systemd[1]: libpod-conmon-eeb0362f62587ecaffea55a033bd54db7b3e587ac5740c2792af547a4c7970c1.scope: Deactivated successfully.
Nov 29 02:52:49 np0005539564 podman[240018]: 2025-11-29 07:52:49.598137052 +0000 UTC m=+0.216436757 container remove eeb0362f62587ecaffea55a033bd54db7b3e587ac5740c2792af547a4c7970c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:52:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:49.606 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c44e421b-2f15-4da1-8137-8b85d43fe460]: (4, ('Sat Nov 29 07:52:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3 (eeb0362f62587ecaffea55a033bd54db7b3e587ac5740c2792af547a4c7970c1)\neeb0362f62587ecaffea55a033bd54db7b3e587ac5740c2792af547a4c7970c1\nSat Nov 29 07:52:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3 (eeb0362f62587ecaffea55a033bd54db7b3e587ac5740c2792af547a4c7970c1)\neeb0362f62587ecaffea55a033bd54db7b3e587ac5740c2792af547a4c7970c1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:49.609 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6e4326de-cb44-4995-a7f8-64c677cd90aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:49.610 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb746034c-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:52:49 np0005539564 nova_compute[226295]: 2025-11-29 07:52:49.612 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:49 np0005539564 kernel: tapb746034c-00: left promiscuous mode
Nov 29 02:52:49 np0005539564 nova_compute[226295]: 2025-11-29 07:52:49.631 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:49.638 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ab8e3bb3-15ff-4e9c-ae97-f7467dabdee1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:49.660 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c62d0224-a62b-4a19-b365-eee218f3eacc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:49.662 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[130fb544-75af-4267-9fb8-5f9577b2a855]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:49.681 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[14cbf4ed-687e-45f5-8e1d-9656d1885798]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571032, 'reachable_time': 22047, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240034, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:49 np0005539564 systemd[1]: run-netns-ovnmeta\x2db746034c\x2d0143\x2d4024\x2d986c\x2d673efea114a3.mount: Deactivated successfully.
Nov 29 02:52:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:49.688 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b746034c-0143-4024-986c-673efea114a3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:52:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:52:49.688 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[9ede40a5-7c5b-433a-a367-44f3dc810c3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:50.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:52:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:50.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:52:50 np0005539564 nova_compute[226295]: 2025-11-29 07:52:50.451 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:50 np0005539564 nova_compute[226295]: 2025-11-29 07:52:50.558 226310 DEBUG nova.compute.manager [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:52:50 np0005539564 nova_compute[226295]: 2025-11-29 07:52:50.558 226310 DEBUG oslo_concurrency.lockutils [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bae55d85-4263-4efe-895d-a762627b52ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:50 np0005539564 nova_compute[226295]: 2025-11-29 07:52:50.559 226310 DEBUG oslo_concurrency.lockutils [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:50 np0005539564 nova_compute[226295]: 2025-11-29 07:52:50.559 226310 DEBUG oslo_concurrency.lockutils [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:50 np0005539564 nova_compute[226295]: 2025-11-29 07:52:50.560 226310 DEBUG nova.compute.manager [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] No waiting events found dispatching network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:52:50 np0005539564 nova_compute[226295]: 2025-11-29 07:52:50.560 226310 WARNING nova.compute.manager [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received unexpected event network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:52:50 np0005539564 nova_compute[226295]: 2025-11-29 07:52:50.561 226310 DEBUG nova.compute.manager [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:52:50 np0005539564 nova_compute[226295]: 2025-11-29 07:52:50.561 226310 DEBUG oslo_concurrency.lockutils [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bae55d85-4263-4efe-895d-a762627b52ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:50 np0005539564 nova_compute[226295]: 2025-11-29 07:52:50.562 226310 DEBUG oslo_concurrency.lockutils [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:50 np0005539564 nova_compute[226295]: 2025-11-29 07:52:50.562 226310 DEBUG oslo_concurrency.lockutils [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:50 np0005539564 nova_compute[226295]: 2025-11-29 07:52:50.563 226310 DEBUG nova.compute.manager [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] No waiting events found dispatching network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:52:50 np0005539564 nova_compute[226295]: 2025-11-29 07:52:50.563 226310 WARNING nova.compute.manager [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received unexpected event network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:52:50 np0005539564 nova_compute[226295]: 2025-11-29 07:52:50.564 226310 DEBUG nova.compute.manager [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:52:50 np0005539564 nova_compute[226295]: 2025-11-29 07:52:50.564 226310 DEBUG oslo_concurrency.lockutils [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bae55d85-4263-4efe-895d-a762627b52ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:50 np0005539564 nova_compute[226295]: 2025-11-29 07:52:50.564 226310 DEBUG oslo_concurrency.lockutils [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:50 np0005539564 nova_compute[226295]: 2025-11-29 07:52:50.565 226310 DEBUG oslo_concurrency.lockutils [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:50 np0005539564 nova_compute[226295]: 2025-11-29 07:52:50.565 226310 DEBUG nova.compute.manager [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] No waiting events found dispatching network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:52:50 np0005539564 nova_compute[226295]: 2025-11-29 07:52:50.565 226310 WARNING nova.compute.manager [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received unexpected event network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:52:50 np0005539564 nova_compute[226295]: 2025-11-29 07:52:50.566 226310 DEBUG nova.compute.manager [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received event network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:52:50 np0005539564 nova_compute[226295]: 2025-11-29 07:52:50.566 226310 DEBUG oslo_concurrency.lockutils [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bae55d85-4263-4efe-895d-a762627b52ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:50 np0005539564 nova_compute[226295]: 2025-11-29 07:52:50.566 226310 DEBUG oslo_concurrency.lockutils [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:50 np0005539564 nova_compute[226295]: 2025-11-29 07:52:50.566 226310 DEBUG oslo_concurrency.lockutils [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:50 np0005539564 nova_compute[226295]: 2025-11-29 07:52:50.567 226310 DEBUG nova.compute.manager [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] No waiting events found dispatching network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:52:50 np0005539564 nova_compute[226295]: 2025-11-29 07:52:50.567 226310 WARNING nova.compute.manager [req-1a34b834-f6a6-467d-83ec-3fec13b088c0 req-72199870-789f-45c0-ad23-209eb85baf47 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Received unexpected event network-vif-plugged-32326edd-9157-4611-83ff-41c84380e739 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:52:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:52.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:52.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:52 np0005539564 nova_compute[226295]: 2025-11-29 07:52:52.673 226310 DEBUG nova.virt.libvirt.driver [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 02:52:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:54.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:52:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:54.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:52:54 np0005539564 nova_compute[226295]: 2025-11-29 07:52:54.145 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:52:54 np0005539564 nova_compute[226295]: 2025-11-29 07:52:54.749 226310 DEBUG oslo_concurrency.lockutils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Acquiring lock "bae55d85-4263-4efe-895d-a762627b52ff-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:52:54 np0005539564 nova_compute[226295]: 2025-11-29 07:52:54.749 226310 DEBUG oslo_concurrency.lockutils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:52:54 np0005539564 nova_compute[226295]: 2025-11-29 07:52:54.750 226310 DEBUG oslo_concurrency.lockutils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "bae55d85-4263-4efe-895d-a762627b52ff-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:52:54 np0005539564 nova_compute[226295]: 2025-11-29 07:52:54.776 226310 DEBUG oslo_concurrency.lockutils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:52:54 np0005539564 nova_compute[226295]: 2025-11-29 07:52:54.777 226310 DEBUG oslo_concurrency.lockutils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:52:54 np0005539564 nova_compute[226295]: 2025-11-29 07:52:54.777 226310 DEBUG oslo_concurrency.lockutils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:52:54 np0005539564 nova_compute[226295]: 2025-11-29 07:52:54.777 226310 DEBUG nova.compute.resource_tracker [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 02:52:54 np0005539564 nova_compute[226295]: 2025-11-29 07:52:54.778 226310 DEBUG oslo_concurrency.processutils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:52:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:52:55 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2659406463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:52:55 np0005539564 nova_compute[226295]: 2025-11-29 07:52:55.194 226310 DEBUG oslo_concurrency.processutils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:52:55 np0005539564 nova_compute[226295]: 2025-11-29 07:52:55.268 226310 DEBUG nova.virt.libvirt.driver [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] skipping disk for instance-00000022 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:52:55 np0005539564 nova_compute[226295]: 2025-11-29 07:52:55.269 226310 DEBUG nova.virt.libvirt.driver [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] skipping disk for instance-00000022 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:52:55 np0005539564 nova_compute[226295]: 2025-11-29 07:52:55.408 226310 WARNING nova.virt.libvirt.driver [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:52:55 np0005539564 nova_compute[226295]: 2025-11-29 07:52:55.409 226310 DEBUG nova.compute.resource_tracker [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4617MB free_disk=20.858619689941406GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 02:52:55 np0005539564 nova_compute[226295]: 2025-11-29 07:52:55.409 226310 DEBUG oslo_concurrency.lockutils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:52:55 np0005539564 nova_compute[226295]: 2025-11-29 07:52:55.410 226310 DEBUG oslo_concurrency.lockutils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:52:55 np0005539564 nova_compute[226295]: 2025-11-29 07:52:55.453 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:52:55 np0005539564 nova_compute[226295]: 2025-11-29 07:52:55.791 226310 DEBUG nova.compute.resource_tracker [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Migration for instance bae55d85-4263-4efe-895d-a762627b52ff refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Nov 29 02:52:55 np0005539564 nova_compute[226295]: 2025-11-29 07:52:55.816 226310 DEBUG nova.compute.resource_tracker [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Nov 29 02:52:55 np0005539564 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d00000022.scope: Deactivated successfully.
Nov 29 02:52:55 np0005539564 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d00000022.scope: Consumed 13.917s CPU time.
Nov 29 02:52:55 np0005539564 systemd-machined[190128]: Machine qemu-15-instance-00000022 terminated.
Nov 29 02:52:55 np0005539564 nova_compute[226295]: 2025-11-29 07:52:55.856 226310 DEBUG nova.compute.resource_tracker [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Instance 8d9a0a31-ea5e-4820-846a-57c5d8338b25 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 02:52:55 np0005539564 nova_compute[226295]: 2025-11-29 07:52:55.857 226310 DEBUG nova.compute.resource_tracker [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Migration 16c28f83-7e6d-4691-b078-3048583134a7 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Nov 29 02:52:55 np0005539564 nova_compute[226295]: 2025-11-29 07:52:55.857 226310 DEBUG nova.compute.resource_tracker [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 02:52:55 np0005539564 nova_compute[226295]: 2025-11-29 07:52:55.857 226310 DEBUG nova.compute.resource_tracker [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 02:52:55 np0005539564 nova_compute[226295]: 2025-11-29 07:52:55.911 226310 DEBUG oslo_concurrency.processutils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:52:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:52:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:56.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:52:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:52:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:56.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:52:56 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:52:56 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3716945864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:52:56 np0005539564 nova_compute[226295]: 2025-11-29 07:52:56.361 226310 DEBUG oslo_concurrency.processutils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:52:56 np0005539564 nova_compute[226295]: 2025-11-29 07:52:56.368 226310 DEBUG nova.compute.provider_tree [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:52:56 np0005539564 nova_compute[226295]: 2025-11-29 07:52:56.383 226310 DEBUG nova.scheduler.client.report [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:52:56 np0005539564 nova_compute[226295]: 2025-11-29 07:52:56.402 226310 DEBUG nova.compute.resource_tracker [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 02:52:56 np0005539564 nova_compute[226295]: 2025-11-29 07:52:56.403 226310 DEBUG oslo_concurrency.lockutils [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:52:56 np0005539564 nova_compute[226295]: 2025-11-29 07:52:56.408 226310 INFO nova.compute.manager [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Migrating instance to compute-2.ctlplane.example.com finished successfully.
Nov 29 02:52:56 np0005539564 nova_compute[226295]: 2025-11-29 07:52:56.473 226310 INFO nova.scheduler.client.report [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Deleted allocation for migration 16c28f83-7e6d-4691-b078-3048583134a7
Nov 29 02:52:56 np0005539564 nova_compute[226295]: 2025-11-29 07:52:56.474 226310 DEBUG nova.virt.libvirt.driver [None req-263135d4-e359-4cf7-9ec3-6b7f65782ce5 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Nov 29 02:52:56 np0005539564 nova_compute[226295]: 2025-11-29 07:52:56.692 226310 INFO nova.virt.libvirt.driver [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Instance shutdown successfully after 14 seconds.
Nov 29 02:52:56 np0005539564 nova_compute[226295]: 2025-11-29 07:52:56.697 226310 INFO nova.virt.libvirt.driver [-] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Instance destroyed successfully.
Nov 29 02:52:56 np0005539564 nova_compute[226295]: 2025-11-29 07:52:56.701 226310 INFO nova.virt.libvirt.driver [-] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Instance destroyed successfully.
Nov 29 02:52:57 np0005539564 nova_compute[226295]: 2025-11-29 07:52:57.140 226310 INFO nova.virt.libvirt.driver [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Deleting instance files /var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25_del
Nov 29 02:52:57 np0005539564 nova_compute[226295]: 2025-11-29 07:52:57.141 226310 INFO nova.virt.libvirt.driver [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Deletion of /var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25_del complete
Nov 29 02:52:57 np0005539564 nova_compute[226295]: 2025-11-29 07:52:57.287 226310 DEBUG nova.virt.libvirt.driver [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 02:52:57 np0005539564 nova_compute[226295]: 2025-11-29 07:52:57.288 226310 INFO nova.virt.libvirt.driver [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Creating image(s)
Nov 29 02:52:57 np0005539564 nova_compute[226295]: 2025-11-29 07:52:57.320 226310 DEBUG nova.storage.rbd_utils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] rbd image 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:52:57 np0005539564 nova_compute[226295]: 2025-11-29 07:52:57.354 226310 DEBUG nova.storage.rbd_utils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] rbd image 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:52:57 np0005539564 nova_compute[226295]: 2025-11-29 07:52:57.383 226310 DEBUG nova.storage.rbd_utils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] rbd image 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:52:57 np0005539564 nova_compute[226295]: 2025-11-29 07:52:57.388 226310 DEBUG oslo_concurrency.processutils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:52:57 np0005539564 nova_compute[226295]: 2025-11-29 07:52:57.448 226310 DEBUG oslo_concurrency.processutils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:52:57 np0005539564 nova_compute[226295]: 2025-11-29 07:52:57.449 226310 DEBUG oslo_concurrency.lockutils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Acquiring lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:52:57 np0005539564 nova_compute[226295]: 2025-11-29 07:52:57.450 226310 DEBUG oslo_concurrency.lockutils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:52:57 np0005539564 nova_compute[226295]: 2025-11-29 07:52:57.450 226310 DEBUG oslo_concurrency.lockutils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:52:57 np0005539564 nova_compute[226295]: 2025-11-29 07:52:57.482 226310 DEBUG nova.storage.rbd_utils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] rbd image 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:52:57 np0005539564 nova_compute[226295]: 2025-11-29 07:52:57.487 226310 DEBUG oslo_concurrency.processutils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:52:57 np0005539564 podman[240161]: 2025-11-29 07:52:57.49888157 +0000 UTC m=+0.058450092 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:52:57 np0005539564 podman[240162]: 2025-11-29 07:52:57.498819178 +0000 UTC m=+0.052659905 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 02:52:57 np0005539564 podman[240160]: 2025-11-29 07:52:57.52773435 +0000 UTC m=+0.089482151 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 02:52:57 np0005539564 nova_compute[226295]: 2025-11-29 07:52:57.777 226310 DEBUG oslo_concurrency.processutils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:52:57 np0005539564 nova_compute[226295]: 2025-11-29 07:52:57.853 226310 DEBUG nova.storage.rbd_utils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] resizing rbd image 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 02:52:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:58.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:52:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:58.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:58 np0005539564 nova_compute[226295]: 2025-11-29 07:52:58.741 226310 DEBUG nova.virt.libvirt.driver [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:52:58 np0005539564 nova_compute[226295]: 2025-11-29 07:52:58.741 226310 DEBUG nova.virt.libvirt.driver [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Ensure instance console log exists: /var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:52:58 np0005539564 nova_compute[226295]: 2025-11-29 07:52:58.742 226310 DEBUG oslo_concurrency.lockutils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:58 np0005539564 nova_compute[226295]: 2025-11-29 07:52:58.742 226310 DEBUG oslo_concurrency.lockutils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:58 np0005539564 nova_compute[226295]: 2025-11-29 07:52:58.742 226310 DEBUG oslo_concurrency.lockutils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:58 np0005539564 nova_compute[226295]: 2025-11-29 07:52:58.743 226310 DEBUG nova.virt.libvirt.driver [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:40:38Z,direct_url=<?>,disk_format='qcow2',id=ed489666-5fa2-4ea4-8005-7a7505ac1b78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:52:58 np0005539564 nova_compute[226295]: 2025-11-29 07:52:58.746 226310 WARNING nova.virt.libvirt.driver [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 29 02:52:58 np0005539564 nova_compute[226295]: 2025-11-29 07:52:58.893 226310 DEBUG nova.virt.libvirt.host [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:52:58 np0005539564 nova_compute[226295]: 2025-11-29 07:52:58.893 226310 DEBUG nova.virt.libvirt.host [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:52:58 np0005539564 nova_compute[226295]: 2025-11-29 07:52:58.897 226310 DEBUG nova.virt.libvirt.host [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:52:58 np0005539564 nova_compute[226295]: 2025-11-29 07:52:58.897 226310 DEBUG nova.virt.libvirt.host [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:52:58 np0005539564 nova_compute[226295]: 2025-11-29 07:52:58.898 226310 DEBUG nova.virt.libvirt.driver [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:52:58 np0005539564 nova_compute[226295]: 2025-11-29 07:52:58.899 226310 DEBUG nova.virt.hardware [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:40:38Z,direct_url=<?>,disk_format='qcow2',id=ed489666-5fa2-4ea4-8005-7a7505ac1b78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:52:58 np0005539564 nova_compute[226295]: 2025-11-29 07:52:58.899 226310 DEBUG nova.virt.hardware [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:52:58 np0005539564 nova_compute[226295]: 2025-11-29 07:52:58.899 226310 DEBUG nova.virt.hardware [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:52:58 np0005539564 nova_compute[226295]: 2025-11-29 07:52:58.900 226310 DEBUG nova.virt.hardware [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:52:58 np0005539564 nova_compute[226295]: 2025-11-29 07:52:58.900 226310 DEBUG nova.virt.hardware [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:52:58 np0005539564 nova_compute[226295]: 2025-11-29 07:52:58.900 226310 DEBUG nova.virt.hardware [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:52:58 np0005539564 nova_compute[226295]: 2025-11-29 07:52:58.900 226310 DEBUG nova.virt.hardware [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:52:58 np0005539564 nova_compute[226295]: 2025-11-29 07:52:58.900 226310 DEBUG nova.virt.hardware [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:52:58 np0005539564 nova_compute[226295]: 2025-11-29 07:52:58.900 226310 DEBUG nova.virt.hardware [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:52:58 np0005539564 nova_compute[226295]: 2025-11-29 07:52:58.901 226310 DEBUG nova.virt.hardware [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:52:58 np0005539564 nova_compute[226295]: 2025-11-29 07:52:58.901 226310 DEBUG nova.virt.hardware [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:52:58 np0005539564 nova_compute[226295]: 2025-11-29 07:52:58.901 226310 DEBUG nova.objects.instance [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8d9a0a31-ea5e-4820-846a-57c5d8338b25 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:52:58 np0005539564 nova_compute[226295]: 2025-11-29 07:52:58.915 226310 DEBUG oslo_concurrency.processutils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:59 np0005539564 nova_compute[226295]: 2025-11-29 07:52:59.149 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:52:59 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/506622480' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:52:59 np0005539564 nova_compute[226295]: 2025-11-29 07:52:59.383 226310 DEBUG oslo_concurrency.processutils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:59 np0005539564 nova_compute[226295]: 2025-11-29 07:52:59.406 226310 DEBUG nova.storage.rbd_utils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] rbd image 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:52:59 np0005539564 nova_compute[226295]: 2025-11-29 07:52:59.410 226310 DEBUG oslo_concurrency.processutils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:52:59 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1701659343' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:52:59 np0005539564 nova_compute[226295]: 2025-11-29 07:52:59.838 226310 DEBUG oslo_concurrency.processutils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:59 np0005539564 nova_compute[226295]: 2025-11-29 07:52:59.841 226310 DEBUG nova.virt.libvirt.driver [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:52:59 np0005539564 nova_compute[226295]:  <uuid>8d9a0a31-ea5e-4820-846a-57c5d8338b25</uuid>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:  <name>instance-00000022</name>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServersAdmin275Test-server-2077152952</nova:name>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 07:52:58</nova:creationTime>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 02:52:59 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:        <nova:user uuid="ca7da72f49a74a5b9e3fff8d172e592b">tempest-ServersAdmin275Test-1236821265-project-member</nova:user>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:        <nova:project uuid="3d210fcdedbb4d709720bcce6eaf61e1">tempest-ServersAdmin275Test-1236821265</nova:project>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="ed489666-5fa2-4ea4-8005-7a7505ac1b78"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      <nova:ports/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <system>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      <entry name="serial">8d9a0a31-ea5e-4820-846a-57c5d8338b25</entry>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      <entry name="uuid">8d9a0a31-ea5e-4820-846a-57c5d8338b25</entry>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    </system>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:  <os>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:  </clock>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk">
Nov 29 02:52:59 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:52:59 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk.config">
Nov 29 02:52:59 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:52:59 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25/console.log" append="off"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    </serial>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <video>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 02:52:59 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 02:52:59 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:52:59 np0005539564 nova_compute[226295]: </domain>
Nov 29 02:52:59 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:52:59 np0005539564 nova_compute[226295]: 2025-11-29 07:52:59.912 226310 DEBUG nova.virt.libvirt.driver [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:52:59 np0005539564 nova_compute[226295]: 2025-11-29 07:52:59.913 226310 DEBUG nova.virt.libvirt.driver [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:52:59 np0005539564 nova_compute[226295]: 2025-11-29 07:52:59.913 226310 INFO nova.virt.libvirt.driver [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Using config drive#033[00m
Nov 29 02:52:59 np0005539564 nova_compute[226295]: 2025-11-29 07:52:59.940 226310 DEBUG nova.storage.rbd_utils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] rbd image 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:52:59 np0005539564 nova_compute[226295]: 2025-11-29 07:52:59.973 226310 DEBUG nova.objects.instance [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 8d9a0a31-ea5e-4820-846a-57c5d8338b25 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:53:00 np0005539564 nova_compute[226295]: 2025-11-29 07:53:00.035 226310 DEBUG nova.objects.instance [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lazy-loading 'keypairs' on Instance uuid 8d9a0a31-ea5e-4820-846a-57c5d8338b25 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:53:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:00.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:00.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:00 np0005539564 nova_compute[226295]: 2025-11-29 07:53:00.455 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:00 np0005539564 nova_compute[226295]: 2025-11-29 07:53:00.471 226310 INFO nova.virt.libvirt.driver [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Creating config drive at /var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25/disk.config
Nov 29 02:53:00 np0005539564 nova_compute[226295]: 2025-11-29 07:53:00.475 226310 DEBUG oslo_concurrency.processutils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwoq6kgxh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:53:00 np0005539564 nova_compute[226295]: 2025-11-29 07:53:00.600 226310 DEBUG oslo_concurrency.processutils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwoq6kgxh" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:53:00 np0005539564 nova_compute[226295]: 2025-11-29 07:53:00.632 226310 DEBUG nova.storage.rbd_utils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] rbd image 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:53:00 np0005539564 nova_compute[226295]: 2025-11-29 07:53:00.637 226310 DEBUG oslo_concurrency.processutils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25/disk.config 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:53:00 np0005539564 nova_compute[226295]: 2025-11-29 07:53:00.811 226310 DEBUG oslo_concurrency.processutils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25/disk.config 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:53:00 np0005539564 nova_compute[226295]: 2025-11-29 07:53:00.813 226310 INFO nova.virt.libvirt.driver [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Deleting local config drive /var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25/disk.config because it was imported into RBD.
Nov 29 02:53:00 np0005539564 systemd-machined[190128]: New machine qemu-16-instance-00000022.
Nov 29 02:53:00 np0005539564 systemd[1]: Started Virtual Machine qemu-16-instance-00000022.
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:01.445888) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402781445957, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2701, "num_deletes": 514, "total_data_size": 5418557, "memory_usage": 5500760, "flush_reason": "Manual Compaction"}
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402781464089, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 2410991, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26156, "largest_seqno": 28852, "table_properties": {"data_size": 2402493, "index_size": 4352, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 24255, "raw_average_key_size": 20, "raw_value_size": 2381918, "raw_average_value_size": 1973, "num_data_blocks": 192, "num_entries": 1207, "num_filter_entries": 1207, "num_deletions": 514, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402590, "oldest_key_time": 1764402590, "file_creation_time": 1764402781, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 18234 microseconds, and 6974 cpu microseconds.
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:01.464126) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 2410991 bytes OK
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:01.464145) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:01.465506) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:01.465518) EVENT_LOG_v1 {"time_micros": 1764402781465514, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:01.465534) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 5405616, prev total WAL file size 5405616, number of live WAL files 2.
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:01.466647) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353036' seq:72057594037927935, type:22 .. '6C6F676D00373630' seq:0, type:0; will stop at (end)
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(2354KB)], [51(10222KB)]
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402781466682, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 12878533, "oldest_snapshot_seqno": -1}
Nov 29 02:53:01 np0005539564 nova_compute[226295]: 2025-11-29 07:53:01.589 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Removed pending event for 8d9a0a31-ea5e-4820-846a-57c5d8338b25 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 02:53:01 np0005539564 nova_compute[226295]: 2025-11-29 07:53:01.590 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402781.5892422, 8d9a0a31-ea5e-4820-846a-57c5d8338b25 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:53:01 np0005539564 nova_compute[226295]: 2025-11-29 07:53:01.591 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] VM Resumed (Lifecycle Event)
Nov 29 02:53:01 np0005539564 nova_compute[226295]: 2025-11-29 07:53:01.593 226310 DEBUG nova.compute.manager [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 02:53:01 np0005539564 nova_compute[226295]: 2025-11-29 07:53:01.593 226310 DEBUG nova.virt.libvirt.driver [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 02:53:01 np0005539564 nova_compute[226295]: 2025-11-29 07:53:01.597 226310 INFO nova.virt.libvirt.driver [-] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Instance spawned successfully.
Nov 29 02:53:01 np0005539564 nova_compute[226295]: 2025-11-29 07:53:01.598 226310 DEBUG nova.virt.libvirt.driver [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 5710 keys, 10029626 bytes, temperature: kUnknown
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402781604553, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 10029626, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9990962, "index_size": 23258, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14341, "raw_key_size": 146402, "raw_average_key_size": 25, "raw_value_size": 9887814, "raw_average_value_size": 1731, "num_data_blocks": 942, "num_entries": 5710, "num_filter_entries": 5710, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764402781, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:01.604864) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 10029626 bytes
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:01.606328) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 93.3 rd, 72.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 10.0 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(9.5) write-amplify(4.2) OK, records in: 6677, records dropped: 967 output_compression: NoCompression
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:01.606356) EVENT_LOG_v1 {"time_micros": 1764402781606343, "job": 30, "event": "compaction_finished", "compaction_time_micros": 137986, "compaction_time_cpu_micros": 23702, "output_level": 6, "num_output_files": 1, "total_output_size": 10029626, "num_input_records": 6677, "num_output_records": 5710, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402781607149, "job": 30, "event": "table_file_deletion", "file_number": 53}
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402781609856, "job": 30, "event": "table_file_deletion", "file_number": 51}
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:01.466564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:01.610061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:01.610066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:01.610067) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:01.610069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:53:01 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:01.610070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:53:01 np0005539564 nova_compute[226295]: 2025-11-29 07:53:01.618 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:53:01 np0005539564 nova_compute[226295]: 2025-11-29 07:53:01.622 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:53:01 np0005539564 nova_compute[226295]: 2025-11-29 07:53:01.626 226310 DEBUG nova.virt.libvirt.driver [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:53:01 np0005539564 nova_compute[226295]: 2025-11-29 07:53:01.626 226310 DEBUG nova.virt.libvirt.driver [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:53:01 np0005539564 nova_compute[226295]: 2025-11-29 07:53:01.627 226310 DEBUG nova.virt.libvirt.driver [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:53:01 np0005539564 nova_compute[226295]: 2025-11-29 07:53:01.627 226310 DEBUG nova.virt.libvirt.driver [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:53:01 np0005539564 nova_compute[226295]: 2025-11-29 07:53:01.627 226310 DEBUG nova.virt.libvirt.driver [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:53:01 np0005539564 nova_compute[226295]: 2025-11-29 07:53:01.628 226310 DEBUG nova.virt.libvirt.driver [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:53:01 np0005539564 nova_compute[226295]: 2025-11-29 07:53:01.665 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 29 02:53:01 np0005539564 nova_compute[226295]: 2025-11-29 07:53:01.665 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402781.5903463, 8d9a0a31-ea5e-4820-846a-57c5d8338b25 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:53:01 np0005539564 nova_compute[226295]: 2025-11-29 07:53:01.665 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] VM Started (Lifecycle Event)
Nov 29 02:53:01 np0005539564 nova_compute[226295]: 2025-11-29 07:53:01.693 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:53:01 np0005539564 nova_compute[226295]: 2025-11-29 07:53:01.698 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:53:01 np0005539564 nova_compute[226295]: 2025-11-29 07:53:01.712 226310 DEBUG nova.compute.manager [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:53:01 np0005539564 nova_compute[226295]: 2025-11-29 07:53:01.747 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 29 02:53:01 np0005539564 nova_compute[226295]: 2025-11-29 07:53:01.794 226310 DEBUG oslo_concurrency.lockutils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:53:01 np0005539564 nova_compute[226295]: 2025-11-29 07:53:01.796 226310 DEBUG oslo_concurrency.lockutils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:53:01 np0005539564 nova_compute[226295]: 2025-11-29 07:53:01.796 226310 DEBUG nova.objects.instance [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 29 02:53:01 np0005539564 nova_compute[226295]: 2025-11-29 07:53:01.864 226310 DEBUG oslo_concurrency.lockutils [None req-31abb36a-522a-4878-b0b8-0ae1332b1fdd ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:53:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:02.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:02.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:02.462877) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402782462902, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 276, "num_deletes": 251, "total_data_size": 73349, "memory_usage": 79888, "flush_reason": "Manual Compaction"}
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402782470109, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 47910, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28857, "largest_seqno": 29128, "table_properties": {"data_size": 46026, "index_size": 113, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4910, "raw_average_key_size": 18, "raw_value_size": 42336, "raw_average_value_size": 158, "num_data_blocks": 5, "num_entries": 267, "num_filter_entries": 267, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402781, "oldest_key_time": 1764402781, "file_creation_time": 1764402782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 7281 microseconds, and 638 cpu microseconds.
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:02.470155) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 47910 bytes OK
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:02.470173) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:02.471163) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:02.471176) EVENT_LOG_v1 {"time_micros": 1764402782471171, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:02.471190) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 71250, prev total WAL file size 71250, number of live WAL files 2.
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:02.471635) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(46KB)], [54(9794KB)]
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402782471704, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 10077536, "oldest_snapshot_seqno": -1}
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 5467 keys, 7732826 bytes, temperature: kUnknown
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402782529688, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 7732826, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7697811, "index_size": 20249, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 142058, "raw_average_key_size": 25, "raw_value_size": 7600811, "raw_average_value_size": 1390, "num_data_blocks": 808, "num_entries": 5467, "num_filter_entries": 5467, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764402782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:02.529880) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 7732826 bytes
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:02.531022) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.7 rd, 133.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 9.6 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(371.7) write-amplify(161.4) OK, records in: 5977, records dropped: 510 output_compression: NoCompression
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:02.531037) EVENT_LOG_v1 {"time_micros": 1764402782531030, "job": 32, "event": "compaction_finished", "compaction_time_micros": 58033, "compaction_time_cpu_micros": 19611, "output_level": 6, "num_output_files": 1, "total_output_size": 7732826, "num_input_records": 5977, "num_output_records": 5467, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402782531140, "job": 32, "event": "table_file_deletion", "file_number": 56}
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402782532839, "job": 32, "event": "table_file_deletion", "file_number": 54}
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:02.471501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:02.532875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:02.532879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:02.532880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:02.532881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:53:02 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:53:02.533079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:53:02 np0005539564 nova_compute[226295]: 2025-11-29 07:53:02.889 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402767.8887568, bae55d85-4263-4efe-895d-a762627b52ff => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:53:02 np0005539564 nova_compute[226295]: 2025-11-29 07:53:02.890 226310 INFO nova.compute.manager [-] [instance: bae55d85-4263-4efe-895d-a762627b52ff] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:53:02 np0005539564 nova_compute[226295]: 2025-11-29 07:53:02.918 226310 DEBUG nova.compute.manager [None req-39be049a-3339-4017-9198-b9961245eabd - - - - - -] [instance: bae55d85-4263-4efe-895d-a762627b52ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:53:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:53:03.702 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:53:03.702 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:53:03.702 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:04.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:04.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:04 np0005539564 nova_compute[226295]: 2025-11-29 07:53:04.153 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:04 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:53:04 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:53:04 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:53:04 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:53:05 np0005539564 nova_compute[226295]: 2025-11-29 07:53:05.130 226310 INFO nova.compute.manager [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Rebuilding instance#033[00m
Nov 29 02:53:05 np0005539564 nova_compute[226295]: 2025-11-29 07:53:05.456 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:05 np0005539564 nova_compute[226295]: 2025-11-29 07:53:05.476 226310 DEBUG nova.objects.instance [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 8d9a0a31-ea5e-4820-846a-57c5d8338b25 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:53:05 np0005539564 nova_compute[226295]: 2025-11-29 07:53:05.498 226310 DEBUG nova.compute.manager [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:53:05 np0005539564 nova_compute[226295]: 2025-11-29 07:53:05.557 226310 DEBUG nova.objects.instance [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Lazy-loading 'pci_requests' on Instance uuid 8d9a0a31-ea5e-4820-846a-57c5d8338b25 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:53:05 np0005539564 nova_compute[226295]: 2025-11-29 07:53:05.572 226310 DEBUG nova.objects.instance [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8d9a0a31-ea5e-4820-846a-57c5d8338b25 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:53:05 np0005539564 nova_compute[226295]: 2025-11-29 07:53:05.582 226310 DEBUG nova.objects.instance [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Lazy-loading 'resources' on Instance uuid 8d9a0a31-ea5e-4820-846a-57c5d8338b25 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:53:05 np0005539564 nova_compute[226295]: 2025-11-29 07:53:05.596 226310 DEBUG nova.objects.instance [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Lazy-loading 'migration_context' on Instance uuid 8d9a0a31-ea5e-4820-846a-57c5d8338b25 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:53:05 np0005539564 nova_compute[226295]: 2025-11-29 07:53:05.607 226310 DEBUG nova.objects.instance [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 02:53:05 np0005539564 nova_compute[226295]: 2025-11-29 07:53:05.612 226310 DEBUG nova.virt.libvirt.driver [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 02:53:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:06.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:53:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:06.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:53:06 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:53:06 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:53:06 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:53:07 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:53:07 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:53:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:08.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:08.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:09 np0005539564 nova_compute[226295]: 2025-11-29 07:53:09.157 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:10.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:10.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:10 np0005539564 nova_compute[226295]: 2025-11-29 07:53:10.458 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:11 np0005539564 nova_compute[226295]: 2025-11-29 07:53:11.585 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:12.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:12.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:14.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:14.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:14 np0005539564 nova_compute[226295]: 2025-11-29 07:53:14.206 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:15 np0005539564 nova_compute[226295]: 2025-11-29 07:53:15.460 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:15 np0005539564 nova_compute[226295]: 2025-11-29 07:53:15.657 226310 DEBUG nova.virt.libvirt.driver [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 02:53:15 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:53:15 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:53:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:16.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:16.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:18.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:18.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:19 np0005539564 nova_compute[226295]: 2025-11-29 07:53:19.211 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:20 np0005539564 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000022.scope: Deactivated successfully.
Nov 29 02:53:20 np0005539564 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000022.scope: Consumed 13.710s CPU time.
Nov 29 02:53:20 np0005539564 systemd-machined[190128]: Machine qemu-16-instance-00000022 terminated.
Nov 29 02:53:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:20.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:20.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:20 np0005539564 nova_compute[226295]: 2025-11-29 07:53:20.462 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:20 np0005539564 nova_compute[226295]: 2025-11-29 07:53:20.682 226310 INFO nova.virt.libvirt.driver [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Instance shutdown successfully after 15 seconds.#033[00m
Nov 29 02:53:20 np0005539564 nova_compute[226295]: 2025-11-29 07:53:20.688 226310 INFO nova.virt.libvirt.driver [-] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Instance destroyed successfully.#033[00m
Nov 29 02:53:20 np0005539564 nova_compute[226295]: 2025-11-29 07:53:20.695 226310 INFO nova.virt.libvirt.driver [-] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Instance destroyed successfully.#033[00m
Nov 29 02:53:21 np0005539564 nova_compute[226295]: 2025-11-29 07:53:21.481 226310 INFO nova.virt.libvirt.driver [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Deleting instance files /var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25_del#033[00m
Nov 29 02:53:21 np0005539564 nova_compute[226295]: 2025-11-29 07:53:21.482 226310 INFO nova.virt.libvirt.driver [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Deletion of /var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25_del complete#033[00m
Nov 29 02:53:21 np0005539564 nova_compute[226295]: 2025-11-29 07:53:21.630 226310 DEBUG nova.virt.libvirt.driver [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:53:21 np0005539564 nova_compute[226295]: 2025-11-29 07:53:21.631 226310 INFO nova.virt.libvirt.driver [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Creating image(s)#033[00m
Nov 29 02:53:21 np0005539564 nova_compute[226295]: 2025-11-29 07:53:21.670 226310 DEBUG nova.storage.rbd_utils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] rbd image 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:53:21 np0005539564 nova_compute[226295]: 2025-11-29 07:53:21.705 226310 DEBUG nova.storage.rbd_utils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] rbd image 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:53:21 np0005539564 nova_compute[226295]: 2025-11-29 07:53:21.744 226310 DEBUG nova.storage.rbd_utils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] rbd image 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:53:21 np0005539564 nova_compute[226295]: 2025-11-29 07:53:21.751 226310 DEBUG oslo_concurrency.processutils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:21 np0005539564 nova_compute[226295]: 2025-11-29 07:53:21.843 226310 DEBUG oslo_concurrency.processutils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:21 np0005539564 nova_compute[226295]: 2025-11-29 07:53:21.845 226310 DEBUG oslo_concurrency.lockutils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:21 np0005539564 nova_compute[226295]: 2025-11-29 07:53:21.845 226310 DEBUG oslo_concurrency.lockutils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:21 np0005539564 nova_compute[226295]: 2025-11-29 07:53:21.846 226310 DEBUG oslo_concurrency.lockutils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:21 np0005539564 nova_compute[226295]: 2025-11-29 07:53:21.876 226310 DEBUG nova.storage.rbd_utils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] rbd image 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:53:21 np0005539564 nova_compute[226295]: 2025-11-29 07:53:21.881 226310 DEBUG oslo_concurrency.processutils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:22.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:22.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:22 np0005539564 nova_compute[226295]: 2025-11-29 07:53:22.572 226310 DEBUG oslo_concurrency.processutils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.691s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:22 np0005539564 nova_compute[226295]: 2025-11-29 07:53:22.667 226310 DEBUG nova.storage.rbd_utils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] resizing rbd image 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:53:22 np0005539564 nova_compute[226295]: 2025-11-29 07:53:22.792 226310 DEBUG nova.virt.libvirt.driver [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:53:22 np0005539564 nova_compute[226295]: 2025-11-29 07:53:22.793 226310 DEBUG nova.virt.libvirt.driver [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Ensure instance console log exists: /var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:53:22 np0005539564 nova_compute[226295]: 2025-11-29 07:53:22.793 226310 DEBUG oslo_concurrency.lockutils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:22 np0005539564 nova_compute[226295]: 2025-11-29 07:53:22.794 226310 DEBUG oslo_concurrency.lockutils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:22 np0005539564 nova_compute[226295]: 2025-11-29 07:53:22.794 226310 DEBUG oslo_concurrency.lockutils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:22 np0005539564 nova_compute[226295]: 2025-11-29 07:53:22.795 226310 DEBUG nova.virt.libvirt.driver [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:53:22 np0005539564 nova_compute[226295]: 2025-11-29 07:53:22.800 226310 WARNING nova.virt.libvirt.driver [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 29 02:53:22 np0005539564 nova_compute[226295]: 2025-11-29 07:53:22.808 226310 DEBUG nova.virt.libvirt.host [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:53:22 np0005539564 nova_compute[226295]: 2025-11-29 07:53:22.809 226310 DEBUG nova.virt.libvirt.host [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:53:22 np0005539564 nova_compute[226295]: 2025-11-29 07:53:22.813 226310 DEBUG nova.virt.libvirt.host [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:53:22 np0005539564 nova_compute[226295]: 2025-11-29 07:53:22.813 226310 DEBUG nova.virt.libvirt.host [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:53:22 np0005539564 nova_compute[226295]: 2025-11-29 07:53:22.814 226310 DEBUG nova.virt.libvirt.driver [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:53:22 np0005539564 nova_compute[226295]: 2025-11-29 07:53:22.814 226310 DEBUG nova.virt.hardware [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:53:22 np0005539564 nova_compute[226295]: 2025-11-29 07:53:22.815 226310 DEBUG nova.virt.hardware [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:53:22 np0005539564 nova_compute[226295]: 2025-11-29 07:53:22.815 226310 DEBUG nova.virt.hardware [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:53:22 np0005539564 nova_compute[226295]: 2025-11-29 07:53:22.815 226310 DEBUG nova.virt.hardware [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:53:22 np0005539564 nova_compute[226295]: 2025-11-29 07:53:22.815 226310 DEBUG nova.virt.hardware [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:53:22 np0005539564 nova_compute[226295]: 2025-11-29 07:53:22.815 226310 DEBUG nova.virt.hardware [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:53:22 np0005539564 nova_compute[226295]: 2025-11-29 07:53:22.816 226310 DEBUG nova.virt.hardware [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:53:22 np0005539564 nova_compute[226295]: 2025-11-29 07:53:22.816 226310 DEBUG nova.virt.hardware [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:53:22 np0005539564 nova_compute[226295]: 2025-11-29 07:53:22.816 226310 DEBUG nova.virt.hardware [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:53:22 np0005539564 nova_compute[226295]: 2025-11-29 07:53:22.816 226310 DEBUG nova.virt.hardware [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:53:22 np0005539564 nova_compute[226295]: 2025-11-29 07:53:22.816 226310 DEBUG nova.virt.hardware [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:53:22 np0005539564 nova_compute[226295]: 2025-11-29 07:53:22.817 226310 DEBUG nova.objects.instance [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8d9a0a31-ea5e-4820-846a-57c5d8338b25 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:53:22 np0005539564 nova_compute[226295]: 2025-11-29 07:53:22.840 226310 DEBUG oslo_concurrency.processutils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:53:23 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2282278663' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:53:23 np0005539564 nova_compute[226295]: 2025-11-29 07:53:23.325 226310 DEBUG oslo_concurrency.processutils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:23 np0005539564 nova_compute[226295]: 2025-11-29 07:53:23.361 226310 DEBUG nova.storage.rbd_utils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] rbd image 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:53:23 np0005539564 nova_compute[226295]: 2025-11-29 07:53:23.365 226310 DEBUG oslo_concurrency.processutils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:53:23 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2049988182' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:53:23 np0005539564 nova_compute[226295]: 2025-11-29 07:53:23.821 226310 DEBUG oslo_concurrency.processutils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:23 np0005539564 nova_compute[226295]: 2025-11-29 07:53:23.824 226310 DEBUG nova.virt.libvirt.driver [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:53:23 np0005539564 nova_compute[226295]:  <uuid>8d9a0a31-ea5e-4820-846a-57c5d8338b25</uuid>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:  <name>instance-00000022</name>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServersAdmin275Test-server-2077152952</nova:name>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 07:53:22</nova:creationTime>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 02:53:23 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:        <nova:user uuid="ca7da72f49a74a5b9e3fff8d172e592b">tempest-ServersAdmin275Test-1236821265-project-member</nova:user>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:        <nova:project uuid="3d210fcdedbb4d709720bcce6eaf61e1">tempest-ServersAdmin275Test-1236821265</nova:project>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      <nova:ports/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <system>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      <entry name="serial">8d9a0a31-ea5e-4820-846a-57c5d8338b25</entry>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      <entry name="uuid">8d9a0a31-ea5e-4820-846a-57c5d8338b25</entry>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    </system>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:  <os>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:  </clock>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk">
Nov 29 02:53:23 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:53:23 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk.config">
Nov 29 02:53:23 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:53:23 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25/console.log" append="off"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    </serial>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <video>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 02:53:23 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 02:53:23 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:53:23 np0005539564 nova_compute[226295]: </domain>
Nov 29 02:53:23 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:53:23 np0005539564 nova_compute[226295]: 2025-11-29 07:53:23.881 226310 DEBUG nova.virt.libvirt.driver [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:53:23 np0005539564 nova_compute[226295]: 2025-11-29 07:53:23.881 226310 DEBUG nova.virt.libvirt.driver [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:53:23 np0005539564 nova_compute[226295]: 2025-11-29 07:53:23.881 226310 INFO nova.virt.libvirt.driver [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Using config drive#033[00m
Nov 29 02:53:23 np0005539564 nova_compute[226295]: 2025-11-29 07:53:23.906 226310 DEBUG nova.storage.rbd_utils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] rbd image 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:53:23 np0005539564 nova_compute[226295]: 2025-11-29 07:53:23.927 226310 DEBUG nova.objects.instance [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 8d9a0a31-ea5e-4820-846a-57c5d8338b25 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:53:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:24 np0005539564 nova_compute[226295]: 2025-11-29 07:53:24.012 226310 DEBUG nova.objects.instance [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Lazy-loading 'keypairs' on Instance uuid 8d9a0a31-ea5e-4820-846a-57c5d8338b25 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:53:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:24.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:24.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:24 np0005539564 nova_compute[226295]: 2025-11-29 07:53:24.215 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:24 np0005539564 nova_compute[226295]: 2025-11-29 07:53:24.378 226310 INFO nova.virt.libvirt.driver [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Creating config drive at /var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25/disk.config#033[00m
Nov 29 02:53:24 np0005539564 nova_compute[226295]: 2025-11-29 07:53:24.390 226310 DEBUG oslo_concurrency.processutils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpac_ycf3x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:24 np0005539564 nova_compute[226295]: 2025-11-29 07:53:24.528 226310 DEBUG oslo_concurrency.processutils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpac_ycf3x" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:24 np0005539564 nova_compute[226295]: 2025-11-29 07:53:24.580 226310 DEBUG nova.storage.rbd_utils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] rbd image 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:53:24 np0005539564 nova_compute[226295]: 2025-11-29 07:53:24.587 226310 DEBUG oslo_concurrency.processutils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25/disk.config 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:24 np0005539564 nova_compute[226295]: 2025-11-29 07:53:24.765 226310 DEBUG oslo_concurrency.processutils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25/disk.config 8d9a0a31-ea5e-4820-846a-57c5d8338b25_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:24 np0005539564 nova_compute[226295]: 2025-11-29 07:53:24.766 226310 INFO nova.virt.libvirt.driver [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Deleting local config drive /var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25/disk.config because it was imported into RBD.#033[00m
Nov 29 02:53:24 np0005539564 systemd-machined[190128]: New machine qemu-17-instance-00000022.
Nov 29 02:53:24 np0005539564 systemd[1]: Started Virtual Machine qemu-17-instance-00000022.
Nov 29 02:53:25 np0005539564 nova_compute[226295]: 2025-11-29 07:53:25.465 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:25 np0005539564 nova_compute[226295]: 2025-11-29 07:53:25.961 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Removed pending event for 8d9a0a31-ea5e-4820-846a-57c5d8338b25 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 02:53:25 np0005539564 nova_compute[226295]: 2025-11-29 07:53:25.962 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402805.960903, 8d9a0a31-ea5e-4820-846a-57c5d8338b25 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:53:25 np0005539564 nova_compute[226295]: 2025-11-29 07:53:25.962 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:53:25 np0005539564 nova_compute[226295]: 2025-11-29 07:53:25.965 226310 DEBUG nova.compute.manager [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:53:25 np0005539564 nova_compute[226295]: 2025-11-29 07:53:25.965 226310 DEBUG nova.virt.libvirt.driver [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:53:25 np0005539564 nova_compute[226295]: 2025-11-29 07:53:25.969 226310 INFO nova.virt.libvirt.driver [-] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Instance spawned successfully.#033[00m
Nov 29 02:53:25 np0005539564 nova_compute[226295]: 2025-11-29 07:53:25.970 226310 DEBUG nova.virt.libvirt.driver [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:53:25 np0005539564 nova_compute[226295]: 2025-11-29 07:53:25.986 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:53:25 np0005539564 nova_compute[226295]: 2025-11-29 07:53:25.996 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:53:26 np0005539564 nova_compute[226295]: 2025-11-29 07:53:26.002 226310 DEBUG nova.virt.libvirt.driver [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:53:26 np0005539564 nova_compute[226295]: 2025-11-29 07:53:26.003 226310 DEBUG nova.virt.libvirt.driver [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:53:26 np0005539564 nova_compute[226295]: 2025-11-29 07:53:26.004 226310 DEBUG nova.virt.libvirt.driver [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:53:26 np0005539564 nova_compute[226295]: 2025-11-29 07:53:26.005 226310 DEBUG nova.virt.libvirt.driver [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:53:26 np0005539564 nova_compute[226295]: 2025-11-29 07:53:26.005 226310 DEBUG nova.virt.libvirt.driver [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:53:26 np0005539564 nova_compute[226295]: 2025-11-29 07:53:26.006 226310 DEBUG nova.virt.libvirt.driver [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:53:26 np0005539564 nova_compute[226295]: 2025-11-29 07:53:26.036 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 02:53:26 np0005539564 nova_compute[226295]: 2025-11-29 07:53:26.037 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402805.9664543, 8d9a0a31-ea5e-4820-846a-57c5d8338b25 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:53:26 np0005539564 nova_compute[226295]: 2025-11-29 07:53:26.037 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] VM Started (Lifecycle Event)#033[00m
Nov 29 02:53:26 np0005539564 nova_compute[226295]: 2025-11-29 07:53:26.082 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:53:26 np0005539564 nova_compute[226295]: 2025-11-29 07:53:26.087 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:53:26 np0005539564 nova_compute[226295]: 2025-11-29 07:53:26.131 226310 DEBUG nova.compute.manager [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:53:26 np0005539564 nova_compute[226295]: 2025-11-29 07:53:26.133 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 02:53:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:26.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:26.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:26 np0005539564 nova_compute[226295]: 2025-11-29 07:53:26.201 226310 DEBUG oslo_concurrency.lockutils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:26 np0005539564 nova_compute[226295]: 2025-11-29 07:53:26.202 226310 DEBUG oslo_concurrency.lockutils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:26 np0005539564 nova_compute[226295]: 2025-11-29 07:53:26.202 226310 DEBUG nova.objects.instance [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 02:53:26 np0005539564 nova_compute[226295]: 2025-11-29 07:53:26.263 226310 DEBUG oslo_concurrency.lockutils [None req-7ab5c996-fb0b-4953-85df-828909e7907c a6836a8e726f4bed8101c3fc3809d4ee e64bdd40561d41ce91b39f78d29c3144 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:26 np0005539564 nova_compute[226295]: 2025-11-29 07:53:26.352 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:53:27 np0005539564 nova_compute[226295]: 2025-11-29 07:53:27.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:53:27 np0005539564 nova_compute[226295]: 2025-11-29 07:53:27.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:53:27 np0005539564 nova_compute[226295]: 2025-11-29 07:53:27.583 226310 DEBUG oslo_concurrency.lockutils [None req-bc39d98d-960a-4ae7-9082-c75a9364985b ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Acquiring lock "8d9a0a31-ea5e-4820-846a-57c5d8338b25" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:27 np0005539564 nova_compute[226295]: 2025-11-29 07:53:27.584 226310 DEBUG oslo_concurrency.lockutils [None req-bc39d98d-960a-4ae7-9082-c75a9364985b ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lock "8d9a0a31-ea5e-4820-846a-57c5d8338b25" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:27 np0005539564 nova_compute[226295]: 2025-11-29 07:53:27.584 226310 DEBUG oslo_concurrency.lockutils [None req-bc39d98d-960a-4ae7-9082-c75a9364985b ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Acquiring lock "8d9a0a31-ea5e-4820-846a-57c5d8338b25-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:27 np0005539564 nova_compute[226295]: 2025-11-29 07:53:27.585 226310 DEBUG oslo_concurrency.lockutils [None req-bc39d98d-960a-4ae7-9082-c75a9364985b ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lock "8d9a0a31-ea5e-4820-846a-57c5d8338b25-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:27 np0005539564 nova_compute[226295]: 2025-11-29 07:53:27.585 226310 DEBUG oslo_concurrency.lockutils [None req-bc39d98d-960a-4ae7-9082-c75a9364985b ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lock "8d9a0a31-ea5e-4820-846a-57c5d8338b25-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:27 np0005539564 nova_compute[226295]: 2025-11-29 07:53:27.586 226310 INFO nova.compute.manager [None req-bc39d98d-960a-4ae7-9082-c75a9364985b ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Terminating instance#033[00m
Nov 29 02:53:27 np0005539564 nova_compute[226295]: 2025-11-29 07:53:27.587 226310 DEBUG oslo_concurrency.lockutils [None req-bc39d98d-960a-4ae7-9082-c75a9364985b ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Acquiring lock "refresh_cache-8d9a0a31-ea5e-4820-846a-57c5d8338b25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:53:27 np0005539564 nova_compute[226295]: 2025-11-29 07:53:27.587 226310 DEBUG oslo_concurrency.lockutils [None req-bc39d98d-960a-4ae7-9082-c75a9364985b ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Acquired lock "refresh_cache-8d9a0a31-ea5e-4820-846a-57c5d8338b25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:53:27 np0005539564 nova_compute[226295]: 2025-11-29 07:53:27.588 226310 DEBUG nova.network.neutron [None req-bc39d98d-960a-4ae7-9082-c75a9364985b ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:53:27 np0005539564 nova_compute[226295]: 2025-11-29 07:53:27.729 226310 DEBUG nova.network.neutron [None req-bc39d98d-960a-4ae7-9082-c75a9364985b ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:53:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:53:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1601275792' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:53:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:53:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1601275792' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:53:28 np0005539564 nova_compute[226295]: 2025-11-29 07:53:28.124 226310 DEBUG nova.network.neutron [None req-bc39d98d-960a-4ae7-9082-c75a9364985b ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:53:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:28.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:28.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:28 np0005539564 nova_compute[226295]: 2025-11-29 07:53:28.161 226310 DEBUG oslo_concurrency.lockutils [None req-bc39d98d-960a-4ae7-9082-c75a9364985b ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Releasing lock "refresh_cache-8d9a0a31-ea5e-4820-846a-57c5d8338b25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:53:28 np0005539564 nova_compute[226295]: 2025-11-29 07:53:28.162 226310 DEBUG nova.compute.manager [None req-bc39d98d-960a-4ae7-9082-c75a9364985b ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:53:28 np0005539564 nova_compute[226295]: 2025-11-29 07:53:28.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:53:28 np0005539564 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000022.scope: Deactivated successfully.
Nov 29 02:53:28 np0005539564 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000022.scope: Consumed 3.416s CPU time.
Nov 29 02:53:28 np0005539564 systemd-machined[190128]: Machine qemu-17-instance-00000022 terminated.
Nov 29 02:53:28 np0005539564 nova_compute[226295]: 2025-11-29 07:53:28.393 226310 INFO nova.virt.libvirt.driver [-] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Instance destroyed successfully.#033[00m
Nov 29 02:53:28 np0005539564 nova_compute[226295]: 2025-11-29 07:53:28.394 226310 DEBUG nova.objects.instance [None req-bc39d98d-960a-4ae7-9082-c75a9364985b ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lazy-loading 'resources' on Instance uuid 8d9a0a31-ea5e-4820-846a-57c5d8338b25 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:53:28 np0005539564 podman[241184]: 2025-11-29 07:53:28.467234479 +0000 UTC m=+0.070152769 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 29 02:53:28 np0005539564 podman[241181]: 2025-11-29 07:53:28.476421278 +0000 UTC m=+0.099872963 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 02:53:28 np0005539564 podman[241182]: 2025-11-29 07:53:28.495087543 +0000 UTC m=+0.103520392 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 29 02:53:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:29 np0005539564 nova_compute[226295]: 2025-11-29 07:53:29.218 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:29 np0005539564 nova_compute[226295]: 2025-11-29 07:53:29.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:53:29 np0005539564 nova_compute[226295]: 2025-11-29 07:53:29.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:53:29 np0005539564 nova_compute[226295]: 2025-11-29 07:53:29.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:53:29 np0005539564 nova_compute[226295]: 2025-11-29 07:53:29.385 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Nov 29 02:53:29 np0005539564 nova_compute[226295]: 2025-11-29 07:53:29.385 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:53:29 np0005539564 nova_compute[226295]: 2025-11-29 07:53:29.386 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:53:29 np0005539564 nova_compute[226295]: 2025-11-29 07:53:29.386 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:53:29 np0005539564 nova_compute[226295]: 2025-11-29 07:53:29.761 226310 INFO nova.virt.libvirt.driver [None req-bc39d98d-960a-4ae7-9082-c75a9364985b ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Deleting instance files /var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25_del#033[00m
Nov 29 02:53:29 np0005539564 nova_compute[226295]: 2025-11-29 07:53:29.762 226310 INFO nova.virt.libvirt.driver [None req-bc39d98d-960a-4ae7-9082-c75a9364985b ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Deletion of /var/lib/nova/instances/8d9a0a31-ea5e-4820-846a-57c5d8338b25_del complete#033[00m
Nov 29 02:53:30 np0005539564 nova_compute[226295]: 2025-11-29 07:53:30.020 226310 INFO nova.compute.manager [None req-bc39d98d-960a-4ae7-9082-c75a9364985b ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Took 1.86 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:53:30 np0005539564 nova_compute[226295]: 2025-11-29 07:53:30.022 226310 DEBUG oslo.service.loopingcall [None req-bc39d98d-960a-4ae7-9082-c75a9364985b ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:53:30 np0005539564 nova_compute[226295]: 2025-11-29 07:53:30.022 226310 DEBUG nova.compute.manager [-] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:53:30 np0005539564 nova_compute[226295]: 2025-11-29 07:53:30.023 226310 DEBUG nova.network.neutron [-] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:53:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:30.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:30.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:30 np0005539564 nova_compute[226295]: 2025-11-29 07:53:30.508 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:30 np0005539564 nova_compute[226295]: 2025-11-29 07:53:30.788 226310 DEBUG nova.network.neutron [-] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:53:30 np0005539564 nova_compute[226295]: 2025-11-29 07:53:30.852 226310 DEBUG nova.network.neutron [-] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:53:30 np0005539564 nova_compute[226295]: 2025-11-29 07:53:30.943 226310 INFO nova.compute.manager [-] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Took 0.92 seconds to deallocate network for instance.#033[00m
Nov 29 02:53:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:53:31.066 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:53:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:53:31.068 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:53:31 np0005539564 nova_compute[226295]: 2025-11-29 07:53:31.069 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:31 np0005539564 nova_compute[226295]: 2025-11-29 07:53:31.172 226310 DEBUG oslo_concurrency.lockutils [None req-bc39d98d-960a-4ae7-9082-c75a9364985b ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:31 np0005539564 nova_compute[226295]: 2025-11-29 07:53:31.173 226310 DEBUG oslo_concurrency.lockutils [None req-bc39d98d-960a-4ae7-9082-c75a9364985b ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:31 np0005539564 nova_compute[226295]: 2025-11-29 07:53:31.219 226310 DEBUG oslo_concurrency.processutils [None req-bc39d98d-960a-4ae7-9082-c75a9364985b ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:31 np0005539564 nova_compute[226295]: 2025-11-29 07:53:31.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:53:31 np0005539564 nova_compute[226295]: 2025-11-29 07:53:31.371 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:53:31 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2985886005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:53:31 np0005539564 nova_compute[226295]: 2025-11-29 07:53:31.656 226310 DEBUG oslo_concurrency.processutils [None req-bc39d98d-960a-4ae7-9082-c75a9364985b ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:31 np0005539564 nova_compute[226295]: 2025-11-29 07:53:31.664 226310 DEBUG nova.compute.provider_tree [None req-bc39d98d-960a-4ae7-9082-c75a9364985b ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:53:31 np0005539564 nova_compute[226295]: 2025-11-29 07:53:31.691 226310 DEBUG nova.scheduler.client.report [None req-bc39d98d-960a-4ae7-9082-c75a9364985b ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:53:31 np0005539564 nova_compute[226295]: 2025-11-29 07:53:31.714 226310 DEBUG oslo_concurrency.lockutils [None req-bc39d98d-960a-4ae7-9082-c75a9364985b ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:31 np0005539564 nova_compute[226295]: 2025-11-29 07:53:31.719 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:31 np0005539564 nova_compute[226295]: 2025-11-29 07:53:31.719 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:31 np0005539564 nova_compute[226295]: 2025-11-29 07:53:31.719 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:53:31 np0005539564 nova_compute[226295]: 2025-11-29 07:53:31.720 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:31 np0005539564 nova_compute[226295]: 2025-11-29 07:53:31.787 226310 INFO nova.scheduler.client.report [None req-bc39d98d-960a-4ae7-9082-c75a9364985b ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Deleted allocations for instance 8d9a0a31-ea5e-4820-846a-57c5d8338b25#033[00m
Nov 29 02:53:31 np0005539564 nova_compute[226295]: 2025-11-29 07:53:31.857 226310 DEBUG oslo_concurrency.lockutils [None req-bc39d98d-960a-4ae7-9082-c75a9364985b ca7da72f49a74a5b9e3fff8d172e592b 3d210fcdedbb4d709720bcce6eaf61e1 - - default default] Lock "8d9a0a31-ea5e-4820-846a-57c5d8338b25" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:53:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:32.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:53:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:32.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:53:32 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1132834006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:53:32 np0005539564 nova_compute[226295]: 2025-11-29 07:53:32.190 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:32 np0005539564 nova_compute[226295]: 2025-11-29 07:53:32.357 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:53:32 np0005539564 nova_compute[226295]: 2025-11-29 07:53:32.358 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4764MB free_disk=20.913898468017578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:53:32 np0005539564 nova_compute[226295]: 2025-11-29 07:53:32.358 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:32 np0005539564 nova_compute[226295]: 2025-11-29 07:53:32.359 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:32 np0005539564 nova_compute[226295]: 2025-11-29 07:53:32.411 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:53:32 np0005539564 nova_compute[226295]: 2025-11-29 07:53:32.411 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:53:32 np0005539564 nova_compute[226295]: 2025-11-29 07:53:32.428 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:32 np0005539564 nova_compute[226295]: 2025-11-29 07:53:32.836 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:32 np0005539564 nova_compute[226295]: 2025-11-29 07:53:32.845 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:53:32 np0005539564 nova_compute[226295]: 2025-11-29 07:53:32.870 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:53:32 np0005539564 nova_compute[226295]: 2025-11-29 07:53:32.899 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:53:32 np0005539564 nova_compute[226295]: 2025-11-29 07:53:32.899 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:33 np0005539564 nova_compute[226295]: 2025-11-29 07:53:33.901 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:53:33 np0005539564 nova_compute[226295]: 2025-11-29 07:53:33.901 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:53:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:34.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:34.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:34 np0005539564 nova_compute[226295]: 2025-11-29 07:53:34.274 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:53:35.071 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:35 np0005539564 nova_compute[226295]: 2025-11-29 07:53:35.511 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:36.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:36.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:37 np0005539564 nova_compute[226295]: 2025-11-29 07:53:37.698 226310 DEBUG oslo_concurrency.lockutils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Acquiring lock "3c6c02ba-1cb9-4b9b-9089-3a905adade58" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:37 np0005539564 nova_compute[226295]: 2025-11-29 07:53:37.698 226310 DEBUG oslo_concurrency.lockutils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "3c6c02ba-1cb9-4b9b-9089-3a905adade58" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:37 np0005539564 nova_compute[226295]: 2025-11-29 07:53:37.720 226310 DEBUG nova.compute.manager [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:53:37 np0005539564 nova_compute[226295]: 2025-11-29 07:53:37.817 226310 DEBUG oslo_concurrency.lockutils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:37 np0005539564 nova_compute[226295]: 2025-11-29 07:53:37.817 226310 DEBUG oslo_concurrency.lockutils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:37 np0005539564 nova_compute[226295]: 2025-11-29 07:53:37.824 226310 DEBUG nova.virt.hardware [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:53:37 np0005539564 nova_compute[226295]: 2025-11-29 07:53:37.824 226310 INFO nova.compute.claims [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 02:53:37 np0005539564 nova_compute[226295]: 2025-11-29 07:53:37.948 226310 DEBUG oslo_concurrency.processutils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:53:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:38.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:53:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:53:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:38.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:53:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:53:38 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3682029325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:53:38 np0005539564 nova_compute[226295]: 2025-11-29 07:53:38.431 226310 DEBUG oslo_concurrency.processutils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:38 np0005539564 nova_compute[226295]: 2025-11-29 07:53:38.437 226310 DEBUG nova.compute.provider_tree [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:53:38 np0005539564 nova_compute[226295]: 2025-11-29 07:53:38.458 226310 DEBUG nova.scheduler.client.report [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:53:38 np0005539564 nova_compute[226295]: 2025-11-29 07:53:38.480 226310 DEBUG oslo_concurrency.lockutils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:38 np0005539564 nova_compute[226295]: 2025-11-29 07:53:38.481 226310 DEBUG nova.compute.manager [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:53:38 np0005539564 nova_compute[226295]: 2025-11-29 07:53:38.525 226310 DEBUG nova.compute.manager [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:53:38 np0005539564 nova_compute[226295]: 2025-11-29 07:53:38.525 226310 DEBUG nova.network.neutron [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:53:38 np0005539564 nova_compute[226295]: 2025-11-29 07:53:38.560 226310 INFO nova.virt.libvirt.driver [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:53:38 np0005539564 nova_compute[226295]: 2025-11-29 07:53:38.597 226310 DEBUG nova.compute.manager [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:53:38 np0005539564 nova_compute[226295]: 2025-11-29 07:53:38.708 226310 DEBUG nova.compute.manager [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:53:38 np0005539564 nova_compute[226295]: 2025-11-29 07:53:38.710 226310 DEBUG nova.virt.libvirt.driver [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:53:38 np0005539564 nova_compute[226295]: 2025-11-29 07:53:38.710 226310 INFO nova.virt.libvirt.driver [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Creating image(s)#033[00m
Nov 29 02:53:38 np0005539564 nova_compute[226295]: 2025-11-29 07:53:38.744 226310 DEBUG nova.storage.rbd_utils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] rbd image 3c6c02ba-1cb9-4b9b-9089-3a905adade58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:53:38 np0005539564 nova_compute[226295]: 2025-11-29 07:53:38.781 226310 DEBUG nova.storage.rbd_utils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] rbd image 3c6c02ba-1cb9-4b9b-9089-3a905adade58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:53:38 np0005539564 nova_compute[226295]: 2025-11-29 07:53:38.810 226310 DEBUG nova.storage.rbd_utils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] rbd image 3c6c02ba-1cb9-4b9b-9089-3a905adade58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:53:38 np0005539564 nova_compute[226295]: 2025-11-29 07:53:38.816 226310 DEBUG oslo_concurrency.processutils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:38 np0005539564 nova_compute[226295]: 2025-11-29 07:53:38.882 226310 DEBUG oslo_concurrency.processutils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:53:38 np0005539564 nova_compute[226295]: 2025-11-29 07:53:38.884 226310 DEBUG oslo_concurrency.lockutils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:53:38 np0005539564 nova_compute[226295]: 2025-11-29 07:53:38.884 226310 DEBUG oslo_concurrency.lockutils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:53:38 np0005539564 nova_compute[226295]: 2025-11-29 07:53:38.885 226310 DEBUG oslo_concurrency.lockutils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:53:38 np0005539564 nova_compute[226295]: 2025-11-29 07:53:38.913 226310 DEBUG nova.storage.rbd_utils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] rbd image 3c6c02ba-1cb9-4b9b-9089-3a905adade58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:53:38 np0005539564 nova_compute[226295]: 2025-11-29 07:53:38.917 226310 DEBUG oslo_concurrency.processutils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 3c6c02ba-1cb9-4b9b-9089-3a905adade58_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:53:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:38 np0005539564 nova_compute[226295]: 2025-11-29 07:53:38.978 226310 DEBUG nova.network.neutron [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 29 02:53:38 np0005539564 nova_compute[226295]: 2025-11-29 07:53:38.979 226310 DEBUG nova.compute.manager [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.221 226310 DEBUG oslo_concurrency.processutils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 3c6c02ba-1cb9-4b9b-9089-3a905adade58_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.292 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.297 226310 DEBUG nova.storage.rbd_utils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] resizing rbd image 3c6c02ba-1cb9-4b9b-9089-3a905adade58_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.421 226310 DEBUG nova.objects.instance [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lazy-loading 'migration_context' on Instance uuid 3c6c02ba-1cb9-4b9b-9089-3a905adade58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.439 226310 DEBUG nova.virt.libvirt.driver [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.440 226310 DEBUG nova.virt.libvirt.driver [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Ensure instance console log exists: /var/lib/nova/instances/3c6c02ba-1cb9-4b9b-9089-3a905adade58/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.441 226310 DEBUG oslo_concurrency.lockutils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.442 226310 DEBUG oslo_concurrency.lockutils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.442 226310 DEBUG oslo_concurrency.lockutils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.445 226310 DEBUG nova.virt.libvirt.driver [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.451 226310 WARNING nova.virt.libvirt.driver [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.457 226310 DEBUG nova.virt.libvirt.host [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.458 226310 DEBUG nova.virt.libvirt.host [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.463 226310 DEBUG nova.virt.libvirt.host [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.464 226310 DEBUG nova.virt.libvirt.host [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.465 226310 DEBUG nova.virt.libvirt.driver [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.466 226310 DEBUG nova.virt.hardware [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.466 226310 DEBUG nova.virt.hardware [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.467 226310 DEBUG nova.virt.hardware [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.467 226310 DEBUG nova.virt.hardware [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.468 226310 DEBUG nova.virt.hardware [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.468 226310 DEBUG nova.virt.hardware [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.468 226310 DEBUG nova.virt.hardware [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.469 226310 DEBUG nova.virt.hardware [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.469 226310 DEBUG nova.virt.hardware [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.469 226310 DEBUG nova.virt.hardware [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.470 226310 DEBUG nova.virt.hardware [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.474 226310 DEBUG oslo_concurrency.processutils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:53:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:53:39 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1946674025' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:53:39 np0005539564 nova_compute[226295]: 2025-11-29 07:53:39.966 226310 DEBUG oslo_concurrency.processutils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:40 np0005539564 nova_compute[226295]: 2025-11-29 07:53:40.007 226310 DEBUG nova.storage.rbd_utils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] rbd image 3c6c02ba-1cb9-4b9b-9089-3a905adade58_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:53:40 np0005539564 nova_compute[226295]: 2025-11-29 07:53:40.011 226310 DEBUG oslo_concurrency.processutils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:40.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:40.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:53:40 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2366711324' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:53:40 np0005539564 nova_compute[226295]: 2025-11-29 07:53:40.497 226310 DEBUG oslo_concurrency.processutils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:53:40 np0005539564 nova_compute[226295]: 2025-11-29 07:53:40.501 226310 DEBUG nova.objects.instance [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3c6c02ba-1cb9-4b9b-9089-3a905adade58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:53:40 np0005539564 nova_compute[226295]: 2025-11-29 07:53:40.514 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:40 np0005539564 nova_compute[226295]: 2025-11-29 07:53:40.526 226310 DEBUG nova.virt.libvirt.driver [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:53:40 np0005539564 nova_compute[226295]:  <uuid>3c6c02ba-1cb9-4b9b-9089-3a905adade58</uuid>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:  <name>instance-00000026</name>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      <nova:name>tempest-LiveMigrationNegativeTest-server-1747705600</nova:name>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 07:53:39</nova:creationTime>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 02:53:40 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:        <nova:user uuid="6ca0284fe9484539925c684d27654f2f">tempest-LiveMigrationNegativeTest-1757760000-project-member</nova:user>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:        <nova:project uuid="cb511b88d472452f9749846769c119a1">tempest-LiveMigrationNegativeTest-1757760000</nova:project>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      <nova:ports/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <system>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      <entry name="serial">3c6c02ba-1cb9-4b9b-9089-3a905adade58</entry>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      <entry name="uuid">3c6c02ba-1cb9-4b9b-9089-3a905adade58</entry>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    </system>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:  <os>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:  </clock>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/3c6c02ba-1cb9-4b9b-9089-3a905adade58_disk">
Nov 29 02:53:40 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:53:40 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/3c6c02ba-1cb9-4b9b-9089-3a905adade58_disk.config">
Nov 29 02:53:40 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:53:40 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/3c6c02ba-1cb9-4b9b-9089-3a905adade58/console.log" append="off"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    </serial>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <video>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 02:53:40 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 02:53:40 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:53:40 np0005539564 nova_compute[226295]: </domain>
Nov 29 02:53:40 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 02:53:40 np0005539564 nova_compute[226295]: 2025-11-29 07:53:40.592 226310 DEBUG nova.virt.libvirt.driver [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:53:40 np0005539564 nova_compute[226295]: 2025-11-29 07:53:40.593 226310 DEBUG nova.virt.libvirt.driver [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:53:40 np0005539564 nova_compute[226295]: 2025-11-29 07:53:40.594 226310 INFO nova.virt.libvirt.driver [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Using config drive
Nov 29 02:53:40 np0005539564 nova_compute[226295]: 2025-11-29 07:53:40.634 226310 DEBUG nova.storage.rbd_utils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] rbd image 3c6c02ba-1cb9-4b9b-9089-3a905adade58_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:53:40 np0005539564 nova_compute[226295]: 2025-11-29 07:53:40.844 226310 INFO nova.virt.libvirt.driver [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Creating config drive at /var/lib/nova/instances/3c6c02ba-1cb9-4b9b-9089-3a905adade58/disk.config
Nov 29 02:53:40 np0005539564 nova_compute[226295]: 2025-11-29 07:53:40.856 226310 DEBUG oslo_concurrency.processutils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3c6c02ba-1cb9-4b9b-9089-3a905adade58/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyizl39et execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:53:41 np0005539564 nova_compute[226295]: 2025-11-29 07:53:41.003 226310 DEBUG oslo_concurrency.processutils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3c6c02ba-1cb9-4b9b-9089-3a905adade58/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyizl39et" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:53:41 np0005539564 nova_compute[226295]: 2025-11-29 07:53:41.049 226310 DEBUG nova.storage.rbd_utils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] rbd image 3c6c02ba-1cb9-4b9b-9089-3a905adade58_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:53:41 np0005539564 nova_compute[226295]: 2025-11-29 07:53:41.054 226310 DEBUG oslo_concurrency.processutils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3c6c02ba-1cb9-4b9b-9089-3a905adade58/disk.config 3c6c02ba-1cb9-4b9b-9089-3a905adade58_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:53:41 np0005539564 nova_compute[226295]: 2025-11-29 07:53:41.348 226310 DEBUG oslo_concurrency.processutils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3c6c02ba-1cb9-4b9b-9089-3a905adade58/disk.config 3c6c02ba-1cb9-4b9b-9089-3a905adade58_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:53:41 np0005539564 nova_compute[226295]: 2025-11-29 07:53:41.350 226310 INFO nova.virt.libvirt.driver [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Deleting local config drive /var/lib/nova/instances/3c6c02ba-1cb9-4b9b-9089-3a905adade58/disk.config because it was imported into RBD.#033[00m
Nov 29 02:53:41 np0005539564 systemd-machined[190128]: New machine qemu-18-instance-00000026.
Nov 29 02:53:41 np0005539564 systemd[1]: Started Virtual Machine qemu-18-instance-00000026.
Nov 29 02:53:42 np0005539564 nova_compute[226295]: 2025-11-29 07:53:42.108 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402822.107773, 3c6c02ba-1cb9-4b9b-9089-3a905adade58 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:53:42 np0005539564 nova_compute[226295]: 2025-11-29 07:53:42.109 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] VM Resumed (Lifecycle Event)
Nov 29 02:53:42 np0005539564 nova_compute[226295]: 2025-11-29 07:53:42.113 226310 DEBUG nova.compute.manager [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 02:53:42 np0005539564 nova_compute[226295]: 2025-11-29 07:53:42.113 226310 DEBUG nova.virt.libvirt.driver [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 02:53:42 np0005539564 nova_compute[226295]: 2025-11-29 07:53:42.117 226310 INFO nova.virt.libvirt.driver [-] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Instance spawned successfully.
Nov 29 02:53:42 np0005539564 nova_compute[226295]: 2025-11-29 07:53:42.118 226310 DEBUG nova.virt.libvirt.driver [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 02:53:42 np0005539564 nova_compute[226295]: 2025-11-29 07:53:42.149 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:53:42 np0005539564 nova_compute[226295]: 2025-11-29 07:53:42.155 226310 DEBUG nova.virt.libvirt.driver [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:53:42 np0005539564 nova_compute[226295]: 2025-11-29 07:53:42.156 226310 DEBUG nova.virt.libvirt.driver [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:53:42 np0005539564 nova_compute[226295]: 2025-11-29 07:53:42.157 226310 DEBUG nova.virt.libvirt.driver [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:53:42 np0005539564 nova_compute[226295]: 2025-11-29 07:53:42.157 226310 DEBUG nova.virt.libvirt.driver [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:53:42 np0005539564 nova_compute[226295]: 2025-11-29 07:53:42.157 226310 DEBUG nova.virt.libvirt.driver [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:53:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:42 np0005539564 nova_compute[226295]: 2025-11-29 07:53:42.158 226310 DEBUG nova.virt.libvirt.driver [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:53:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:42.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:42 np0005539564 nova_compute[226295]: 2025-11-29 07:53:42.163 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:53:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:42.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:42 np0005539564 nova_compute[226295]: 2025-11-29 07:53:42.195 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:53:42 np0005539564 nova_compute[226295]: 2025-11-29 07:53:42.195 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402822.1097574, 3c6c02ba-1cb9-4b9b-9089-3a905adade58 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:53:42 np0005539564 nova_compute[226295]: 2025-11-29 07:53:42.196 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] VM Started (Lifecycle Event)
Nov 29 02:53:42 np0005539564 nova_compute[226295]: 2025-11-29 07:53:42.221 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:53:42 np0005539564 nova_compute[226295]: 2025-11-29 07:53:42.224 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:53:42 np0005539564 nova_compute[226295]: 2025-11-29 07:53:42.227 226310 INFO nova.compute.manager [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Took 3.52 seconds to spawn the instance on the hypervisor.
Nov 29 02:53:42 np0005539564 nova_compute[226295]: 2025-11-29 07:53:42.227 226310 DEBUG nova.compute.manager [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:53:42 np0005539564 nova_compute[226295]: 2025-11-29 07:53:42.259 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:53:42 np0005539564 nova_compute[226295]: 2025-11-29 07:53:42.296 226310 INFO nova.compute.manager [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Took 4.51 seconds to build instance.
Nov 29 02:53:42 np0005539564 nova_compute[226295]: 2025-11-29 07:53:42.319 226310 DEBUG oslo_concurrency.lockutils [None req-dad03be5-5634-444f-9119-7a0b6c82f67d 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "3c6c02ba-1cb9-4b9b-9089-3a905adade58" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:53:43 np0005539564 nova_compute[226295]: 2025-11-29 07:53:43.391 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402808.390104, 8d9a0a31-ea5e-4820-846a-57c5d8338b25 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:53:43 np0005539564 nova_compute[226295]: 2025-11-29 07:53:43.391 226310 INFO nova.compute.manager [-] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] VM Stopped (Lifecycle Event)
Nov 29 02:53:43 np0005539564 nova_compute[226295]: 2025-11-29 07:53:43.414 226310 DEBUG nova.compute.manager [None req-897f94ea-60f9-4ea4-bd8c-97dfc530cdaf - - - - - -] [instance: 8d9a0a31-ea5e-4820-846a-57c5d8338b25] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:53:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:44 np0005539564 nova_compute[226295]: 2025-11-29 07:53:44.117 226310 DEBUG nova.objects.instance [None req-e5e77a92-4ee1-4904-a2a9-448392efd0b6 aa1abd2a0b5e43b7b0076f858185b1e3 57675be7cc0a40a3ac5ae6b05aa344c7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3c6c02ba-1cb9-4b9b-9089-3a905adade58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:53:44 np0005539564 nova_compute[226295]: 2025-11-29 07:53:44.143 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402824.1437047, 3c6c02ba-1cb9-4b9b-9089-3a905adade58 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:53:44 np0005539564 nova_compute[226295]: 2025-11-29 07:53:44.144 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] VM Paused (Lifecycle Event)
Nov 29 02:53:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:44.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:44 np0005539564 nova_compute[226295]: 2025-11-29 07:53:44.161 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:53:44 np0005539564 nova_compute[226295]: 2025-11-29 07:53:44.165 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:53:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:44.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:44 np0005539564 nova_compute[226295]: 2025-11-29 07:53:44.187 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 29 02:53:44 np0005539564 nova_compute[226295]: 2025-11-29 07:53:44.296 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:45 np0005539564 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000026.scope: Deactivated successfully.
Nov 29 02:53:45 np0005539564 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000026.scope: Consumed 2.886s CPU time.
Nov 29 02:53:45 np0005539564 systemd-machined[190128]: Machine qemu-18-instance-00000026 terminated.
Nov 29 02:53:45 np0005539564 nova_compute[226295]: 2025-11-29 07:53:45.110 226310 DEBUG nova.compute.manager [None req-e5e77a92-4ee1-4904-a2a9-448392efd0b6 aa1abd2a0b5e43b7b0076f858185b1e3 57675be7cc0a40a3ac5ae6b05aa344c7 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:53:45 np0005539564 nova_compute[226295]: 2025-11-29 07:53:45.518 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:46.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:46.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:47 np0005539564 nova_compute[226295]: 2025-11-29 07:53:47.570 226310 DEBUG oslo_concurrency.lockutils [None req-a6b5e644-4964-42eb-bb08-dd65330cd6a1 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Acquiring lock "3c6c02ba-1cb9-4b9b-9089-3a905adade58" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:53:47 np0005539564 nova_compute[226295]: 2025-11-29 07:53:47.570 226310 DEBUG oslo_concurrency.lockutils [None req-a6b5e644-4964-42eb-bb08-dd65330cd6a1 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "3c6c02ba-1cb9-4b9b-9089-3a905adade58" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:53:47 np0005539564 nova_compute[226295]: 2025-11-29 07:53:47.571 226310 DEBUG oslo_concurrency.lockutils [None req-a6b5e644-4964-42eb-bb08-dd65330cd6a1 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Acquiring lock "3c6c02ba-1cb9-4b9b-9089-3a905adade58-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:53:47 np0005539564 nova_compute[226295]: 2025-11-29 07:53:47.571 226310 DEBUG oslo_concurrency.lockutils [None req-a6b5e644-4964-42eb-bb08-dd65330cd6a1 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "3c6c02ba-1cb9-4b9b-9089-3a905adade58-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:53:47 np0005539564 nova_compute[226295]: 2025-11-29 07:53:47.571 226310 DEBUG oslo_concurrency.lockutils [None req-a6b5e644-4964-42eb-bb08-dd65330cd6a1 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "3c6c02ba-1cb9-4b9b-9089-3a905adade58-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:53:47 np0005539564 nova_compute[226295]: 2025-11-29 07:53:47.572 226310 INFO nova.compute.manager [None req-a6b5e644-4964-42eb-bb08-dd65330cd6a1 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Terminating instance
Nov 29 02:53:47 np0005539564 nova_compute[226295]: 2025-11-29 07:53:47.573 226310 DEBUG oslo_concurrency.lockutils [None req-a6b5e644-4964-42eb-bb08-dd65330cd6a1 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Acquiring lock "refresh_cache-3c6c02ba-1cb9-4b9b-9089-3a905adade58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:53:47 np0005539564 nova_compute[226295]: 2025-11-29 07:53:47.573 226310 DEBUG oslo_concurrency.lockutils [None req-a6b5e644-4964-42eb-bb08-dd65330cd6a1 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Acquired lock "refresh_cache-3c6c02ba-1cb9-4b9b-9089-3a905adade58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:53:47 np0005539564 nova_compute[226295]: 2025-11-29 07:53:47.573 226310 DEBUG nova.network.neutron [None req-a6b5e644-4964-42eb-bb08-dd65330cd6a1 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 02:53:48 np0005539564 nova_compute[226295]: 2025-11-29 07:53:48.080 226310 DEBUG nova.network.neutron [None req-a6b5e644-4964-42eb-bb08-dd65330cd6a1 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:53:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:48.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:48.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:48 np0005539564 nova_compute[226295]: 2025-11-29 07:53:48.460 226310 DEBUG nova.network.neutron [None req-a6b5e644-4964-42eb-bb08-dd65330cd6a1 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:53:48 np0005539564 nova_compute[226295]: 2025-11-29 07:53:48.476 226310 DEBUG oslo_concurrency.lockutils [None req-a6b5e644-4964-42eb-bb08-dd65330cd6a1 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Releasing lock "refresh_cache-3c6c02ba-1cb9-4b9b-9089-3a905adade58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:53:48 np0005539564 nova_compute[226295]: 2025-11-29 07:53:48.477 226310 DEBUG nova.compute.manager [None req-a6b5e644-4964-42eb-bb08-dd65330cd6a1 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 02:53:48 np0005539564 nova_compute[226295]: 2025-11-29 07:53:48.487 226310 INFO nova.virt.libvirt.driver [-] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Instance destroyed successfully.
Nov 29 02:53:48 np0005539564 nova_compute[226295]: 2025-11-29 07:53:48.487 226310 DEBUG nova.objects.instance [None req-a6b5e644-4964-42eb-bb08-dd65330cd6a1 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lazy-loading 'resources' on Instance uuid 3c6c02ba-1cb9-4b9b-9089-3a905adade58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:53:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:48 np0005539564 nova_compute[226295]: 2025-11-29 07:53:48.982 226310 INFO nova.virt.libvirt.driver [None req-a6b5e644-4964-42eb-bb08-dd65330cd6a1 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Deleting instance files /var/lib/nova/instances/3c6c02ba-1cb9-4b9b-9089-3a905adade58_del
Nov 29 02:53:48 np0005539564 nova_compute[226295]: 2025-11-29 07:53:48.983 226310 INFO nova.virt.libvirt.driver [None req-a6b5e644-4964-42eb-bb08-dd65330cd6a1 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Deletion of /var/lib/nova/instances/3c6c02ba-1cb9-4b9b-9089-3a905adade58_del complete
Nov 29 02:53:49 np0005539564 nova_compute[226295]: 2025-11-29 07:53:49.072 226310 INFO nova.compute.manager [None req-a6b5e644-4964-42eb-bb08-dd65330cd6a1 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Took 0.59 seconds to destroy the instance on the hypervisor.
Nov 29 02:53:49 np0005539564 nova_compute[226295]: 2025-11-29 07:53:49.073 226310 DEBUG oslo.service.loopingcall [None req-a6b5e644-4964-42eb-bb08-dd65330cd6a1 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 02:53:49 np0005539564 nova_compute[226295]: 2025-11-29 07:53:49.074 226310 DEBUG nova.compute.manager [-] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 02:53:49 np0005539564 nova_compute[226295]: 2025-11-29 07:53:49.074 226310 DEBUG nova.network.neutron [-] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 02:53:49 np0005539564 nova_compute[226295]: 2025-11-29 07:53:49.299 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:49 np0005539564 nova_compute[226295]: 2025-11-29 07:53:49.461 226310 DEBUG nova.network.neutron [-] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:53:49 np0005539564 nova_compute[226295]: 2025-11-29 07:53:49.475 226310 DEBUG nova.network.neutron [-] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:53:49 np0005539564 nova_compute[226295]: 2025-11-29 07:53:49.495 226310 INFO nova.compute.manager [-] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Took 0.42 seconds to deallocate network for instance.
Nov 29 02:53:49 np0005539564 nova_compute[226295]: 2025-11-29 07:53:49.551 226310 DEBUG oslo_concurrency.lockutils [None req-a6b5e644-4964-42eb-bb08-dd65330cd6a1 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:53:49 np0005539564 nova_compute[226295]: 2025-11-29 07:53:49.552 226310 DEBUG oslo_concurrency.lockutils [None req-a6b5e644-4964-42eb-bb08-dd65330cd6a1 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:53:49 np0005539564 ovn_controller[130591]: 2025-11-29T07:53:49Z|00100|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 02:53:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:50.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:50.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:50 np0005539564 nova_compute[226295]: 2025-11-29 07:53:50.458 226310 DEBUG oslo_concurrency.processutils [None req-a6b5e644-4964-42eb-bb08-dd65330cd6a1 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:53:50 np0005539564 nova_compute[226295]: 2025-11-29 07:53:50.519 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:53:50 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1634362885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:53:50 np0005539564 nova_compute[226295]: 2025-11-29 07:53:50.963 226310 DEBUG oslo_concurrency.processutils [None req-a6b5e644-4964-42eb-bb08-dd65330cd6a1 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:50 np0005539564 nova_compute[226295]: 2025-11-29 07:53:50.973 226310 DEBUG nova.compute.provider_tree [None req-a6b5e644-4964-42eb-bb08-dd65330cd6a1 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:53:50 np0005539564 nova_compute[226295]: 2025-11-29 07:53:50.993 226310 DEBUG nova.scheduler.client.report [None req-a6b5e644-4964-42eb-bb08-dd65330cd6a1 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:53:51 np0005539564 nova_compute[226295]: 2025-11-29 07:53:51.031 226310 DEBUG oslo_concurrency.lockutils [None req-a6b5e644-4964-42eb-bb08-dd65330cd6a1 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.479s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:51 np0005539564 nova_compute[226295]: 2025-11-29 07:53:51.094 226310 INFO nova.scheduler.client.report [None req-a6b5e644-4964-42eb-bb08-dd65330cd6a1 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Deleted allocations for instance 3c6c02ba-1cb9-4b9b-9089-3a905adade58#033[00m
Nov 29 02:53:51 np0005539564 nova_compute[226295]: 2025-11-29 07:53:51.188 226310 DEBUG oslo_concurrency.lockutils [None req-a6b5e644-4964-42eb-bb08-dd65330cd6a1 6ca0284fe9484539925c684d27654f2f cb511b88d472452f9749846769c119a1 - - default default] Lock "3c6c02ba-1cb9-4b9b-9089-3a905adade58" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:52.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:52.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:52 np0005539564 nova_compute[226295]: 2025-11-29 07:53:52.981 226310 DEBUG oslo_concurrency.lockutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Acquiring lock "49c55245-ceba-4c0f-a044-8866ffa3b338" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:52 np0005539564 nova_compute[226295]: 2025-11-29 07:53:52.982 226310 DEBUG oslo_concurrency.lockutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "49c55245-ceba-4c0f-a044-8866ffa3b338" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:53 np0005539564 nova_compute[226295]: 2025-11-29 07:53:53.011 226310 DEBUG nova.compute.manager [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:53:53 np0005539564 nova_compute[226295]: 2025-11-29 07:53:53.120 226310 DEBUG oslo_concurrency.lockutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:53 np0005539564 nova_compute[226295]: 2025-11-29 07:53:53.121 226310 DEBUG oslo_concurrency.lockutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:53 np0005539564 nova_compute[226295]: 2025-11-29 07:53:53.127 226310 DEBUG nova.virt.hardware [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:53:53 np0005539564 nova_compute[226295]: 2025-11-29 07:53:53.127 226310 INFO nova.compute.claims [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 02:53:53 np0005539564 nova_compute[226295]: 2025-11-29 07:53:53.275 226310 DEBUG oslo_concurrency.processutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:53:53 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1864284547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:53:53 np0005539564 nova_compute[226295]: 2025-11-29 07:53:53.776 226310 DEBUG oslo_concurrency.processutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:53 np0005539564 nova_compute[226295]: 2025-11-29 07:53:53.784 226310 DEBUG nova.compute.provider_tree [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:53:53 np0005539564 nova_compute[226295]: 2025-11-29 07:53:53.812 226310 DEBUG nova.scheduler.client.report [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:53:53 np0005539564 nova_compute[226295]: 2025-11-29 07:53:53.847 226310 DEBUG oslo_concurrency.lockutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:53 np0005539564 nova_compute[226295]: 2025-11-29 07:53:53.848 226310 DEBUG nova.compute.manager [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:53:53 np0005539564 nova_compute[226295]: 2025-11-29 07:53:53.900 226310 DEBUG nova.compute.manager [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:53:53 np0005539564 nova_compute[226295]: 2025-11-29 07:53:53.901 226310 DEBUG nova.network.neutron [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:53:53 np0005539564 nova_compute[226295]: 2025-11-29 07:53:53.922 226310 INFO nova.virt.libvirt.driver [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:53:53 np0005539564 nova_compute[226295]: 2025-11-29 07:53:53.943 226310 DEBUG nova.compute.manager [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:53:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:54.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:54.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:54 np0005539564 nova_compute[226295]: 2025-11-29 07:53:54.263 226310 DEBUG nova.compute.manager [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:53:54 np0005539564 nova_compute[226295]: 2025-11-29 07:53:54.265 226310 DEBUG nova.virt.libvirt.driver [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:53:54 np0005539564 nova_compute[226295]: 2025-11-29 07:53:54.266 226310 INFO nova.virt.libvirt.driver [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Creating image(s)#033[00m
Nov 29 02:53:54 np0005539564 nova_compute[226295]: 2025-11-29 07:53:54.312 226310 DEBUG nova.storage.rbd_utils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] rbd image 49c55245-ceba-4c0f-a044-8866ffa3b338_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:53:54 np0005539564 nova_compute[226295]: 2025-11-29 07:53:54.358 226310 DEBUG nova.storage.rbd_utils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] rbd image 49c55245-ceba-4c0f-a044-8866ffa3b338_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:53:54 np0005539564 nova_compute[226295]: 2025-11-29 07:53:54.405 226310 DEBUG nova.storage.rbd_utils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] rbd image 49c55245-ceba-4c0f-a044-8866ffa3b338_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:53:54 np0005539564 nova_compute[226295]: 2025-11-29 07:53:54.412 226310 DEBUG oslo_concurrency.processutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:54 np0005539564 nova_compute[226295]: 2025-11-29 07:53:54.448 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:54 np0005539564 nova_compute[226295]: 2025-11-29 07:53:54.513 226310 DEBUG oslo_concurrency.processutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:54 np0005539564 nova_compute[226295]: 2025-11-29 07:53:54.514 226310 DEBUG oslo_concurrency.lockutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:54 np0005539564 nova_compute[226295]: 2025-11-29 07:53:54.516 226310 DEBUG oslo_concurrency.lockutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:54 np0005539564 nova_compute[226295]: 2025-11-29 07:53:54.516 226310 DEBUG oslo_concurrency.lockutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:54 np0005539564 nova_compute[226295]: 2025-11-29 07:53:54.552 226310 DEBUG nova.storage.rbd_utils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] rbd image 49c55245-ceba-4c0f-a044-8866ffa3b338_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:53:54 np0005539564 nova_compute[226295]: 2025-11-29 07:53:54.555 226310 DEBUG oslo_concurrency.processutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 49c55245-ceba-4c0f-a044-8866ffa3b338_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:54 np0005539564 nova_compute[226295]: 2025-11-29 07:53:54.718 226310 DEBUG nova.policy [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8551065d65214410b616d2a71729df0a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '800c0f050e95457384eee582d6da0afa', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:53:55 np0005539564 nova_compute[226295]: 2025-11-29 07:53:55.521 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:55 np0005539564 nova_compute[226295]: 2025-11-29 07:53:55.558 226310 DEBUG oslo_concurrency.processutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 49c55245-ceba-4c0f-a044-8866ffa3b338_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.004s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:55 np0005539564 nova_compute[226295]: 2025-11-29 07:53:55.633 226310 DEBUG nova.storage.rbd_utils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] resizing rbd image 49c55245-ceba-4c0f-a044-8866ffa3b338_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:53:55 np0005539564 nova_compute[226295]: 2025-11-29 07:53:55.746 226310 DEBUG nova.objects.instance [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lazy-loading 'migration_context' on Instance uuid 49c55245-ceba-4c0f-a044-8866ffa3b338 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:53:55 np0005539564 nova_compute[226295]: 2025-11-29 07:53:55.775 226310 DEBUG nova.virt.libvirt.driver [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:53:55 np0005539564 nova_compute[226295]: 2025-11-29 07:53:55.776 226310 DEBUG nova.virt.libvirt.driver [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Ensure instance console log exists: /var/lib/nova/instances/49c55245-ceba-4c0f-a044-8866ffa3b338/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:53:55 np0005539564 nova_compute[226295]: 2025-11-29 07:53:55.777 226310 DEBUG oslo_concurrency.lockutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:55 np0005539564 nova_compute[226295]: 2025-11-29 07:53:55.778 226310 DEBUG oslo_concurrency.lockutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:55 np0005539564 nova_compute[226295]: 2025-11-29 07:53:55.779 226310 DEBUG oslo_concurrency.lockutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:56.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:56.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:56 np0005539564 nova_compute[226295]: 2025-11-29 07:53:56.202 226310 DEBUG nova.network.neutron [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Successfully created port: 3031b707-3c60-4f9c-8619-c6937728baca _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:53:57 np0005539564 nova_compute[226295]: 2025-11-29 07:53:57.897 226310 DEBUG nova.network.neutron [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Successfully updated port: 3031b707-3c60-4f9c-8619-c6937728baca _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:53:57 np0005539564 nova_compute[226295]: 2025-11-29 07:53:57.918 226310 DEBUG oslo_concurrency.lockutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Acquiring lock "refresh_cache-49c55245-ceba-4c0f-a044-8866ffa3b338" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:53:57 np0005539564 nova_compute[226295]: 2025-11-29 07:53:57.919 226310 DEBUG oslo_concurrency.lockutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Acquired lock "refresh_cache-49c55245-ceba-4c0f-a044-8866ffa3b338" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:53:57 np0005539564 nova_compute[226295]: 2025-11-29 07:53:57.919 226310 DEBUG nova.network.neutron [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:53:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:53:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:58.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:53:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:53:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:58.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:58 np0005539564 nova_compute[226295]: 2025-11-29 07:53:58.575 226310 DEBUG nova.compute.manager [req-72640fb1-d9a7-487f-bf67-3c0623cd518a req-942c5ff9-8dec-4070-9abd-dedc80355e46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Received event network-changed-3031b707-3c60-4f9c-8619-c6937728baca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:58 np0005539564 nova_compute[226295]: 2025-11-29 07:53:58.575 226310 DEBUG nova.compute.manager [req-72640fb1-d9a7-487f-bf67-3c0623cd518a req-942c5ff9-8dec-4070-9abd-dedc80355e46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Refreshing instance network info cache due to event network-changed-3031b707-3c60-4f9c-8619-c6937728baca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:53:58 np0005539564 nova_compute[226295]: 2025-11-29 07:53:58.576 226310 DEBUG oslo_concurrency.lockutils [req-72640fb1-d9a7-487f-bf67-3c0623cd518a req-942c5ff9-8dec-4070-9abd-dedc80355e46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-49c55245-ceba-4c0f-a044-8866ffa3b338" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:53:58 np0005539564 nova_compute[226295]: 2025-11-29 07:53:58.836 226310 DEBUG nova.network.neutron [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:53:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:59 np0005539564 nova_compute[226295]: 2025-11-29 07:53:59.452 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:59 np0005539564 podman[241934]: 2025-11-29 07:53:59.551439945 +0000 UTC m=+0.100720176 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:53:59 np0005539564 podman[241935]: 2025-11-29 07:53:59.567967231 +0000 UTC m=+0.110090199 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:53:59 np0005539564 podman[241933]: 2025-11-29 07:53:59.58823794 +0000 UTC m=+0.136944586 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.111 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402825.1100636, 3c6c02ba-1cb9-4b9b-9089-3a905adade58 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.112 226310 INFO nova.compute.manager [-] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.136 226310 DEBUG nova.compute.manager [None req-587778ca-48f8-4fdc-b7bb-43ca79599726 - - - - - -] [instance: 3c6c02ba-1cb9-4b9b-9089-3a905adade58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:00.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:00.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.258 226310 DEBUG nova.network.neutron [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Updating instance_info_cache with network_info: [{"id": "3031b707-3c60-4f9c-8619-c6937728baca", "address": "fa:16:3e:50:1b:9e", "network": {"id": "bbef03be-c0f0-4708-987b-5002a6990bb1", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1189386436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "800c0f050e95457384eee582d6da0afa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3031b707-3c", "ovs_interfaceid": "3031b707-3c60-4f9c-8619-c6937728baca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.285 226310 DEBUG oslo_concurrency.lockutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Releasing lock "refresh_cache-49c55245-ceba-4c0f-a044-8866ffa3b338" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.285 226310 DEBUG nova.compute.manager [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Instance network_info: |[{"id": "3031b707-3c60-4f9c-8619-c6937728baca", "address": "fa:16:3e:50:1b:9e", "network": {"id": "bbef03be-c0f0-4708-987b-5002a6990bb1", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1189386436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "800c0f050e95457384eee582d6da0afa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3031b707-3c", "ovs_interfaceid": "3031b707-3c60-4f9c-8619-c6937728baca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.287 226310 DEBUG oslo_concurrency.lockutils [req-72640fb1-d9a7-487f-bf67-3c0623cd518a req-942c5ff9-8dec-4070-9abd-dedc80355e46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-49c55245-ceba-4c0f-a044-8866ffa3b338" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.287 226310 DEBUG nova.network.neutron [req-72640fb1-d9a7-487f-bf67-3c0623cd518a req-942c5ff9-8dec-4070-9abd-dedc80355e46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Refreshing network info cache for port 3031b707-3c60-4f9c-8619-c6937728baca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.293 226310 DEBUG nova.virt.libvirt.driver [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Start _get_guest_xml network_info=[{"id": "3031b707-3c60-4f9c-8619-c6937728baca", "address": "fa:16:3e:50:1b:9e", "network": {"id": "bbef03be-c0f0-4708-987b-5002a6990bb1", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1189386436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "800c0f050e95457384eee582d6da0afa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3031b707-3c", "ovs_interfaceid": "3031b707-3c60-4f9c-8619-c6937728baca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.300 226310 WARNING nova.virt.libvirt.driver [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.311 226310 DEBUG nova.virt.libvirt.host [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.312 226310 DEBUG nova.virt.libvirt.host [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.317 226310 DEBUG nova.virt.libvirt.host [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.317 226310 DEBUG nova.virt.libvirt.host [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.319 226310 DEBUG nova.virt.libvirt.driver [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.320 226310 DEBUG nova.virt.hardware [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.320 226310 DEBUG nova.virt.hardware [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.321 226310 DEBUG nova.virt.hardware [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.321 226310 DEBUG nova.virt.hardware [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.322 226310 DEBUG nova.virt.hardware [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.322 226310 DEBUG nova.virt.hardware [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.323 226310 DEBUG nova.virt.hardware [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.323 226310 DEBUG nova.virt.hardware [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.323 226310 DEBUG nova.virt.hardware [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.324 226310 DEBUG nova.virt.hardware [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.324 226310 DEBUG nova.virt.hardware [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.329 226310 DEBUG oslo_concurrency.processutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.563 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:54:00 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4110876370' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.769 226310 DEBUG oslo_concurrency.processutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.811 226310 DEBUG nova.storage.rbd_utils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] rbd image 49c55245-ceba-4c0f-a044-8866ffa3b338_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:00 np0005539564 nova_compute[226295]: 2025-11-29 07:54:00.818 226310 DEBUG oslo_concurrency.processutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:54:01 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2958552636' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.269 226310 DEBUG oslo_concurrency.processutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.270 226310 DEBUG nova.virt.libvirt.vif [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:53:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-644957351',display_name='tempest-VolumesAdminNegativeTest-server-644957351',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-644957351',id=39,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='800c0f050e95457384eee582d6da0afa',ramdisk_id='',reservation_id='r-4hs6erl4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-1765250615',owner_user_name='tempest-VolumesAdminNegativeTest-1765250615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:53:53Z,user_data=None,user_id='8551065d65214410b616d2a71729df0a',uuid=49c55245-ceba-4c0f-a044-8866ffa3b338,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3031b707-3c60-4f9c-8619-c6937728baca", "address": "fa:16:3e:50:1b:9e", "network": {"id": "bbef03be-c0f0-4708-987b-5002a6990bb1", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1189386436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "800c0f050e95457384eee582d6da0afa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3031b707-3c", "ovs_interfaceid": "3031b707-3c60-4f9c-8619-c6937728baca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.271 226310 DEBUG nova.network.os_vif_util [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Converting VIF {"id": "3031b707-3c60-4f9c-8619-c6937728baca", "address": "fa:16:3e:50:1b:9e", "network": {"id": "bbef03be-c0f0-4708-987b-5002a6990bb1", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1189386436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "800c0f050e95457384eee582d6da0afa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3031b707-3c", "ovs_interfaceid": "3031b707-3c60-4f9c-8619-c6937728baca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.272 226310 DEBUG nova.network.os_vif_util [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:1b:9e,bridge_name='br-int',has_traffic_filtering=True,id=3031b707-3c60-4f9c-8619-c6937728baca,network=Network(bbef03be-c0f0-4708-987b-5002a6990bb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3031b707-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.274 226310 DEBUG nova.objects.instance [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lazy-loading 'pci_devices' on Instance uuid 49c55245-ceba-4c0f-a044-8866ffa3b338 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.292 226310 DEBUG nova.virt.libvirt.driver [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:54:01 np0005539564 nova_compute[226295]:  <uuid>49c55245-ceba-4c0f-a044-8866ffa3b338</uuid>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:  <name>instance-00000027</name>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <nova:name>tempest-VolumesAdminNegativeTest-server-644957351</nova:name>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 07:54:00</nova:creationTime>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 02:54:01 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:        <nova:user uuid="8551065d65214410b616d2a71729df0a">tempest-VolumesAdminNegativeTest-1765250615-project-member</nova:user>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:        <nova:project uuid="800c0f050e95457384eee582d6da0afa">tempest-VolumesAdminNegativeTest-1765250615</nova:project>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:        <nova:port uuid="3031b707-3c60-4f9c-8619-c6937728baca">
Nov 29 02:54:01 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <system>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <entry name="serial">49c55245-ceba-4c0f-a044-8866ffa3b338</entry>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <entry name="uuid">49c55245-ceba-4c0f-a044-8866ffa3b338</entry>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    </system>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:  <os>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:  </clock>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/49c55245-ceba-4c0f-a044-8866ffa3b338_disk">
Nov 29 02:54:01 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:54:01 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/49c55245-ceba-4c0f-a044-8866ffa3b338_disk.config">
Nov 29 02:54:01 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:54:01 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:50:1b:9e"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <target dev="tap3031b707-3c"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    </interface>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/49c55245-ceba-4c0f-a044-8866ffa3b338/console.log" append="off"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    </serial>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <video>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 02:54:01 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 02:54:01 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:54:01 np0005539564 nova_compute[226295]: </domain>
Nov 29 02:54:01 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.293 226310 DEBUG nova.compute.manager [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Preparing to wait for external event network-vif-plugged-3031b707-3c60-4f9c-8619-c6937728baca prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.294 226310 DEBUG oslo_concurrency.lockutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Acquiring lock "49c55245-ceba-4c0f-a044-8866ffa3b338-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.294 226310 DEBUG oslo_concurrency.lockutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "49c55245-ceba-4c0f-a044-8866ffa3b338-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.294 226310 DEBUG oslo_concurrency.lockutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "49c55245-ceba-4c0f-a044-8866ffa3b338-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.295 226310 DEBUG nova.virt.libvirt.vif [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:53:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-644957351',display_name='tempest-VolumesAdminNegativeTest-server-644957351',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-644957351',id=39,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='800c0f050e95457384eee582d6da0afa',ramdisk_id='',reservation_id='r-4hs6erl4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-1765250615',owner_user_name='tempest-
VolumesAdminNegativeTest-1765250615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:53:53Z,user_data=None,user_id='8551065d65214410b616d2a71729df0a',uuid=49c55245-ceba-4c0f-a044-8866ffa3b338,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3031b707-3c60-4f9c-8619-c6937728baca", "address": "fa:16:3e:50:1b:9e", "network": {"id": "bbef03be-c0f0-4708-987b-5002a6990bb1", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1189386436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "800c0f050e95457384eee582d6da0afa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3031b707-3c", "ovs_interfaceid": "3031b707-3c60-4f9c-8619-c6937728baca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.295 226310 DEBUG nova.network.os_vif_util [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Converting VIF {"id": "3031b707-3c60-4f9c-8619-c6937728baca", "address": "fa:16:3e:50:1b:9e", "network": {"id": "bbef03be-c0f0-4708-987b-5002a6990bb1", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1189386436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "800c0f050e95457384eee582d6da0afa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3031b707-3c", "ovs_interfaceid": "3031b707-3c60-4f9c-8619-c6937728baca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.295 226310 DEBUG nova.network.os_vif_util [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:1b:9e,bridge_name='br-int',has_traffic_filtering=True,id=3031b707-3c60-4f9c-8619-c6937728baca,network=Network(bbef03be-c0f0-4708-987b-5002a6990bb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3031b707-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.296 226310 DEBUG os_vif [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:1b:9e,bridge_name='br-int',has_traffic_filtering=True,id=3031b707-3c60-4f9c-8619-c6937728baca,network=Network(bbef03be-c0f0-4708-987b-5002a6990bb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3031b707-3c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.296 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.297 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.297 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.300 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.300 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3031b707-3c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.301 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3031b707-3c, col_values=(('external_ids', {'iface-id': '3031b707-3c60-4f9c-8619-c6937728baca', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:50:1b:9e', 'vm-uuid': '49c55245-ceba-4c0f-a044-8866ffa3b338'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.303 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:01 np0005539564 NetworkManager[48997]: <info>  [1764402841.3044] manager: (tap3031b707-3c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.304 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.311 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.312 226310 INFO os_vif [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:1b:9e,bridge_name='br-int',has_traffic_filtering=True,id=3031b707-3c60-4f9c-8619-c6937728baca,network=Network(bbef03be-c0f0-4708-987b-5002a6990bb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3031b707-3c')#033[00m
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.390 226310 DEBUG nova.virt.libvirt.driver [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.390 226310 DEBUG nova.virt.libvirt.driver [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.391 226310 DEBUG nova.virt.libvirt.driver [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] No VIF found with MAC fa:16:3e:50:1b:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.391 226310 INFO nova.virt.libvirt.driver [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Using config drive#033[00m
Nov 29 02:54:01 np0005539564 nova_compute[226295]: 2025-11-29 07:54:01.425 226310 DEBUG nova.storage.rbd_utils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] rbd image 49c55245-ceba-4c0f-a044-8866ffa3b338_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:02.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:02.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:02 np0005539564 nova_compute[226295]: 2025-11-29 07:54:02.389 226310 INFO nova.virt.libvirt.driver [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Creating config drive at /var/lib/nova/instances/49c55245-ceba-4c0f-a044-8866ffa3b338/disk.config#033[00m
Nov 29 02:54:02 np0005539564 nova_compute[226295]: 2025-11-29 07:54:02.395 226310 DEBUG oslo_concurrency.processutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/49c55245-ceba-4c0f-a044-8866ffa3b338/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsflcrznu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:02 np0005539564 nova_compute[226295]: 2025-11-29 07:54:02.525 226310 DEBUG oslo_concurrency.processutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/49c55245-ceba-4c0f-a044-8866ffa3b338/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsflcrznu" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:02 np0005539564 nova_compute[226295]: 2025-11-29 07:54:02.569 226310 DEBUG nova.storage.rbd_utils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] rbd image 49c55245-ceba-4c0f-a044-8866ffa3b338_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:02 np0005539564 nova_compute[226295]: 2025-11-29 07:54:02.574 226310 DEBUG oslo_concurrency.processutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/49c55245-ceba-4c0f-a044-8866ffa3b338/disk.config 49c55245-ceba-4c0f-a044-8866ffa3b338_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:02 np0005539564 nova_compute[226295]: 2025-11-29 07:54:02.773 226310 DEBUG oslo_concurrency.processutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/49c55245-ceba-4c0f-a044-8866ffa3b338/disk.config 49c55245-ceba-4c0f-a044-8866ffa3b338_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:02 np0005539564 nova_compute[226295]: 2025-11-29 07:54:02.775 226310 INFO nova.virt.libvirt.driver [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Deleting local config drive /var/lib/nova/instances/49c55245-ceba-4c0f-a044-8866ffa3b338/disk.config because it was imported into RBD.#033[00m
Nov 29 02:54:02 np0005539564 kernel: tap3031b707-3c: entered promiscuous mode
Nov 29 02:54:02 np0005539564 NetworkManager[48997]: <info>  [1764402842.8562] manager: (tap3031b707-3c): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Nov 29 02:54:02 np0005539564 ovn_controller[130591]: 2025-11-29T07:54:02Z|00101|binding|INFO|Claiming lport 3031b707-3c60-4f9c-8619-c6937728baca for this chassis.
Nov 29 02:54:02 np0005539564 ovn_controller[130591]: 2025-11-29T07:54:02Z|00102|binding|INFO|3031b707-3c60-4f9c-8619-c6937728baca: Claiming fa:16:3e:50:1b:9e 10.100.0.4
Nov 29 02:54:02 np0005539564 nova_compute[226295]: 2025-11-29 07:54:02.857 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:02 np0005539564 nova_compute[226295]: 2025-11-29 07:54:02.866 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:02 np0005539564 nova_compute[226295]: 2025-11-29 07:54:02.873 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:02 np0005539564 NetworkManager[48997]: <info>  [1764402842.8936] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Nov 29 02:54:02 np0005539564 nova_compute[226295]: 2025-11-29 07:54:02.892 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:02 np0005539564 NetworkManager[48997]: <info>  [1764402842.8944] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Nov 29 02:54:02 np0005539564 systemd-machined[190128]: New machine qemu-19-instance-00000027.
Nov 29 02:54:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:02.916 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:1b:9e 10.100.0.4'], port_security=['fa:16:3e:50:1b:9e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '49c55245-ceba-4c0f-a044-8866ffa3b338', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bbef03be-c0f0-4708-987b-5002a6990bb1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '800c0f050e95457384eee582d6da0afa', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6183da78-dbab-461e-ba18-f01392e434c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0e8f2a3-aeb4-4405-bc4f-8521ba3c2988, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=3031b707-3c60-4f9c-8619-c6937728baca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:54:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:02.919 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 3031b707-3c60-4f9c-8619-c6937728baca in datapath bbef03be-c0f0-4708-987b-5002a6990bb1 bound to our chassis#033[00m
Nov 29 02:54:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:02.923 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bbef03be-c0f0-4708-987b-5002a6990bb1#033[00m
Nov 29 02:54:02 np0005539564 systemd[1]: Started Virtual Machine qemu-19-instance-00000027.
Nov 29 02:54:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:02.937 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[fd768141-38d5-4c81-a7c2-237fcd564eb0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:02.940 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbbef03be-c1 in ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:54:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:02.944 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbbef03be-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:54:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:02.944 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9ab51eea-d7e8-4dd9-a409-96223de73fa6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:02.946 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[da577fe2-8964-43eb-a15d-a2ab247ca812]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:02 np0005539564 systemd-udevd[242133]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:54:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:02.960 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ad992b-b041-40bb-8afb-5c5f3040559c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:02 np0005539564 NetworkManager[48997]: <info>  [1764402842.9710] device (tap3031b707-3c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:54:02 np0005539564 NetworkManager[48997]: <info>  [1764402842.9725] device (tap3031b707-3c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:02.989 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bbd166c9-a120-407d-a936-49474145267e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:03.023 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[8a5b27d8-d3f6-419e-bd6c-705627f989c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:03.035 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b208cb73-dca0-4098-98fd-c34e2a9c0180]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:03 np0005539564 NetworkManager[48997]: <info>  [1764402843.0363] manager: (tapbbef03be-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/59)
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:03.063 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[ad65a39f-8274-4cd8-8760-a0d6c1d95c0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:03.065 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[10347a35-d877-4446-ac6f-d5a4cdf1804b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:03 np0005539564 NetworkManager[48997]: <info>  [1764402843.0900] device (tapbbef03be-c0): carrier: link connected
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:03.098 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[12d7e652-915e-4a4f-a2db-015114b2b510]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.114 226310 DEBUG nova.network.neutron [req-72640fb1-d9a7-487f-bf67-3c0623cd518a req-942c5ff9-8dec-4070-9abd-dedc80355e46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Updated VIF entry in instance network info cache for port 3031b707-3c60-4f9c-8619-c6937728baca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.115 226310 DEBUG nova.network.neutron [req-72640fb1-d9a7-487f-bf67-3c0623cd518a req-942c5ff9-8dec-4070-9abd-dedc80355e46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Updating instance_info_cache with network_info: [{"id": "3031b707-3c60-4f9c-8619-c6937728baca", "address": "fa:16:3e:50:1b:9e", "network": {"id": "bbef03be-c0f0-4708-987b-5002a6990bb1", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1189386436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "800c0f050e95457384eee582d6da0afa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3031b707-3c", "ovs_interfaceid": "3031b707-3c60-4f9c-8619-c6937728baca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:03.119 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c3600555-4a2c-4086-b53f-2bcb4e1be1f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbbef03be-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:3d:eb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581777, 'reachable_time': 26521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242165, 'error': None, 'target': 'ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.137 226310 DEBUG oslo_concurrency.lockutils [req-72640fb1-d9a7-487f-bf67-3c0623cd518a req-942c5ff9-8dec-4070-9abd-dedc80355e46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-49c55245-ceba-4c0f-a044-8866ffa3b338" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:03.138 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[21bb42dd-2fb9-4754-a5fc-24c993c946c5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee4:3deb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 581777, 'tstamp': 581777}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242166, 'error': None, 'target': 'ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.147 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:03.158 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb9c9fd-9166-4b2a-be59-2d8cd239007e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbbef03be-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:3d:eb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581777, 'reachable_time': 26521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 242167, 'error': None, 'target': 'ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.178 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:03.187 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[315f605b-4545-46ea-87ac-2982ce74b32a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:03 np0005539564 ovn_controller[130591]: 2025-11-29T07:54:03Z|00103|binding|INFO|Setting lport 3031b707-3c60-4f9c-8619-c6937728baca ovn-installed in OVS
Nov 29 02:54:03 np0005539564 ovn_controller[130591]: 2025-11-29T07:54:03Z|00104|binding|INFO|Setting lport 3031b707-3c60-4f9c-8619-c6937728baca up in Southbound
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.191 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:03.260 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[989df98b-8dc9-4edc-9f8d-6627aa3c3560]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:03.262 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbbef03be-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:03.262 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:03.263 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbbef03be-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:03 np0005539564 NetworkManager[48997]: <info>  [1764402843.2675] manager: (tapbbef03be-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.267 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:03 np0005539564 kernel: tapbbef03be-c0: entered promiscuous mode
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.271 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:03.272 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbbef03be-c0, col_values=(('external_ids', {'iface-id': 'cf0990aa-411e-4273-b97e-3c36b1dfaef8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.274 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:03 np0005539564 ovn_controller[130591]: 2025-11-29T07:54:03Z|00105|binding|INFO|Releasing lport cf0990aa-411e-4273-b97e-3c36b1dfaef8 from this chassis (sb_readonly=0)
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.296 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:03.303 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bbef03be-c0f0-4708-987b-5002a6990bb1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bbef03be-c0f0-4708-987b-5002a6990bb1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:03.304 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[74231187-441e-45e5-b080-ebb87abbdfcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:03.304 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-bbef03be-c0f0-4708-987b-5002a6990bb1
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/bbef03be-c0f0-4708-987b-5002a6990bb1.pid.haproxy
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID bbef03be-c0f0-4708-987b-5002a6990bb1
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:03.305 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1', 'env', 'PROCESS_TAG=haproxy-bbef03be-c0f0-4708-987b-5002a6990bb1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bbef03be-c0f0-4708-987b-5002a6990bb1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.574 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402843.5743325, 49c55245-ceba-4c0f-a044-8866ffa3b338 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.575 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] VM Started (Lifecycle Event)#033[00m
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.594 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.599 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402843.5744586, 49c55245-ceba-4c0f-a044-8866ffa3b338 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.599 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.623 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.627 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.646 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:03.703 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:03.704 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:03.704 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:03 np0005539564 podman[242239]: 2025-11-29 07:54:03.709370574 +0000 UTC m=+0.060523459 container create 81dd643b20de4a4090f482f2724d5e6520c009b8288c1a9dd3bc3ab2c0becf52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 02:54:03 np0005539564 systemd[1]: Started libpod-conmon-81dd643b20de4a4090f482f2724d5e6520c009b8288c1a9dd3bc3ab2c0becf52.scope.
Nov 29 02:54:03 np0005539564 podman[242239]: 2025-11-29 07:54:03.673570145 +0000 UTC m=+0.024723120 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:54:03 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:54:03 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c84af3ad6abf888ee8cc5ffb075615558fa13d4a957c3f354a2e310ec5a6b2d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:54:03 np0005539564 podman[242239]: 2025-11-29 07:54:03.794015583 +0000 UTC m=+0.145168558 container init 81dd643b20de4a4090f482f2724d5e6520c009b8288c1a9dd3bc3ab2c0becf52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 02:54:03 np0005539564 podman[242239]: 2025-11-29 07:54:03.798984258 +0000 UTC m=+0.150137183 container start 81dd643b20de4a4090f482f2724d5e6520c009b8288c1a9dd3bc3ab2c0becf52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 02:54:03 np0005539564 neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1[242254]: [NOTICE]   (242258) : New worker (242260) forked
Nov 29 02:54:03 np0005539564 neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1[242254]: [NOTICE]   (242258) : Loading success.
Nov 29 02:54:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.976 226310 DEBUG nova.compute.manager [req-73c78a19-0901-4ec6-9db7-22efa4896906 req-0f26435e-04c1-4f3a-a944-66216d632d85 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Received event network-vif-plugged-3031b707-3c60-4f9c-8619-c6937728baca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.977 226310 DEBUG oslo_concurrency.lockutils [req-73c78a19-0901-4ec6-9db7-22efa4896906 req-0f26435e-04c1-4f3a-a944-66216d632d85 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "49c55245-ceba-4c0f-a044-8866ffa3b338-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.978 226310 DEBUG oslo_concurrency.lockutils [req-73c78a19-0901-4ec6-9db7-22efa4896906 req-0f26435e-04c1-4f3a-a944-66216d632d85 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "49c55245-ceba-4c0f-a044-8866ffa3b338-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.978 226310 DEBUG oslo_concurrency.lockutils [req-73c78a19-0901-4ec6-9db7-22efa4896906 req-0f26435e-04c1-4f3a-a944-66216d632d85 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "49c55245-ceba-4c0f-a044-8866ffa3b338-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.979 226310 DEBUG nova.compute.manager [req-73c78a19-0901-4ec6-9db7-22efa4896906 req-0f26435e-04c1-4f3a-a944-66216d632d85 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Processing event network-vif-plugged-3031b707-3c60-4f9c-8619-c6937728baca _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.980 226310 DEBUG nova.compute.manager [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.985 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402843.9842002, 49c55245-ceba-4c0f-a044-8866ffa3b338 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.986 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.991 226310 DEBUG nova.virt.libvirt.driver [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.999 226310 INFO nova.virt.libvirt.driver [-] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Instance spawned successfully.#033[00m
Nov 29 02:54:03 np0005539564 nova_compute[226295]: 2025-11-29 07:54:03.999 226310 DEBUG nova.virt.libvirt.driver [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:54:04 np0005539564 nova_compute[226295]: 2025-11-29 07:54:04.024 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:04 np0005539564 nova_compute[226295]: 2025-11-29 07:54:04.033 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:54:04 np0005539564 nova_compute[226295]: 2025-11-29 07:54:04.039 226310 DEBUG nova.virt.libvirt.driver [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:04 np0005539564 nova_compute[226295]: 2025-11-29 07:54:04.040 226310 DEBUG nova.virt.libvirt.driver [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:04 np0005539564 nova_compute[226295]: 2025-11-29 07:54:04.041 226310 DEBUG nova.virt.libvirt.driver [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:04 np0005539564 nova_compute[226295]: 2025-11-29 07:54:04.042 226310 DEBUG nova.virt.libvirt.driver [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:04 np0005539564 nova_compute[226295]: 2025-11-29 07:54:04.043 226310 DEBUG nova.virt.libvirt.driver [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:04 np0005539564 nova_compute[226295]: 2025-11-29 07:54:04.044 226310 DEBUG nova.virt.libvirt.driver [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:04 np0005539564 nova_compute[226295]: 2025-11-29 07:54:04.068 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:54:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:04.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:04.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:04 np0005539564 nova_compute[226295]: 2025-11-29 07:54:04.437 226310 INFO nova.compute.manager [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Took 10.17 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:54:04 np0005539564 nova_compute[226295]: 2025-11-29 07:54:04.438 226310 DEBUG nova.compute.manager [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:04 np0005539564 nova_compute[226295]: 2025-11-29 07:54:04.517 226310 INFO nova.compute.manager [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Took 11.42 seconds to build instance.#033[00m
Nov 29 02:54:04 np0005539564 nova_compute[226295]: 2025-11-29 07:54:04.537 226310 DEBUG oslo_concurrency.lockutils [None req-d953b0ce-75f0-40a3-8cc6-3c1b6e410322 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "49c55245-ceba-4c0f-a044-8866ffa3b338" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:05 np0005539564 nova_compute[226295]: 2025-11-29 07:54:05.568 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:06 np0005539564 nova_compute[226295]: 2025-11-29 07:54:06.131 226310 DEBUG nova.compute.manager [req-b8ba63b2-ab6e-47c9-85ad-0dd5fdc3d2b7 req-04a3ff4e-e9ea-4549-9eba-6f1833af467e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Received event network-vif-plugged-3031b707-3c60-4f9c-8619-c6937728baca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:06 np0005539564 nova_compute[226295]: 2025-11-29 07:54:06.132 226310 DEBUG oslo_concurrency.lockutils [req-b8ba63b2-ab6e-47c9-85ad-0dd5fdc3d2b7 req-04a3ff4e-e9ea-4549-9eba-6f1833af467e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "49c55245-ceba-4c0f-a044-8866ffa3b338-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:06 np0005539564 nova_compute[226295]: 2025-11-29 07:54:06.132 226310 DEBUG oslo_concurrency.lockutils [req-b8ba63b2-ab6e-47c9-85ad-0dd5fdc3d2b7 req-04a3ff4e-e9ea-4549-9eba-6f1833af467e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "49c55245-ceba-4c0f-a044-8866ffa3b338-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:06 np0005539564 nova_compute[226295]: 2025-11-29 07:54:06.133 226310 DEBUG oslo_concurrency.lockutils [req-b8ba63b2-ab6e-47c9-85ad-0dd5fdc3d2b7 req-04a3ff4e-e9ea-4549-9eba-6f1833af467e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "49c55245-ceba-4c0f-a044-8866ffa3b338-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:06 np0005539564 nova_compute[226295]: 2025-11-29 07:54:06.133 226310 DEBUG nova.compute.manager [req-b8ba63b2-ab6e-47c9-85ad-0dd5fdc3d2b7 req-04a3ff4e-e9ea-4549-9eba-6f1833af467e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] No waiting events found dispatching network-vif-plugged-3031b707-3c60-4f9c-8619-c6937728baca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:54:06 np0005539564 nova_compute[226295]: 2025-11-29 07:54:06.133 226310 WARNING nova.compute.manager [req-b8ba63b2-ab6e-47c9-85ad-0dd5fdc3d2b7 req-04a3ff4e-e9ea-4549-9eba-6f1833af467e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Received unexpected event network-vif-plugged-3031b707-3c60-4f9c-8619-c6937728baca for instance with vm_state active and task_state None.#033[00m
Nov 29 02:54:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:06.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:06.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:06 np0005539564 nova_compute[226295]: 2025-11-29 07:54:06.303 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:08.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:54:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:08.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:54:08 np0005539564 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 29 02:54:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:09 np0005539564 nova_compute[226295]: 2025-11-29 07:54:09.520 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:10.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:54:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:10.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:54:10 np0005539564 nova_compute[226295]: 2025-11-29 07:54:10.603 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:10 np0005539564 nova_compute[226295]: 2025-11-29 07:54:10.881 226310 DEBUG oslo_concurrency.lockutils [None req-d7e43daf-45a8-4f5a-bade-b7516b44038d 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Acquiring lock "49c55245-ceba-4c0f-a044-8866ffa3b338" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:10 np0005539564 nova_compute[226295]: 2025-11-29 07:54:10.882 226310 DEBUG oslo_concurrency.lockutils [None req-d7e43daf-45a8-4f5a-bade-b7516b44038d 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "49c55245-ceba-4c0f-a044-8866ffa3b338" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:10 np0005539564 nova_compute[226295]: 2025-11-29 07:54:10.883 226310 DEBUG oslo_concurrency.lockutils [None req-d7e43daf-45a8-4f5a-bade-b7516b44038d 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Acquiring lock "49c55245-ceba-4c0f-a044-8866ffa3b338-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:10 np0005539564 nova_compute[226295]: 2025-11-29 07:54:10.883 226310 DEBUG oslo_concurrency.lockutils [None req-d7e43daf-45a8-4f5a-bade-b7516b44038d 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "49c55245-ceba-4c0f-a044-8866ffa3b338-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:10 np0005539564 nova_compute[226295]: 2025-11-29 07:54:10.883 226310 DEBUG oslo_concurrency.lockutils [None req-d7e43daf-45a8-4f5a-bade-b7516b44038d 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "49c55245-ceba-4c0f-a044-8866ffa3b338-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:10 np0005539564 nova_compute[226295]: 2025-11-29 07:54:10.885 226310 INFO nova.compute.manager [None req-d7e43daf-45a8-4f5a-bade-b7516b44038d 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Terminating instance#033[00m
Nov 29 02:54:10 np0005539564 nova_compute[226295]: 2025-11-29 07:54:10.886 226310 DEBUG nova.compute.manager [None req-d7e43daf-45a8-4f5a-bade-b7516b44038d 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:54:10 np0005539564 kernel: tap3031b707-3c (unregistering): left promiscuous mode
Nov 29 02:54:10 np0005539564 NetworkManager[48997]: <info>  [1764402850.9368] device (tap3031b707-3c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:54:10 np0005539564 nova_compute[226295]: 2025-11-29 07:54:10.944 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:10 np0005539564 ovn_controller[130591]: 2025-11-29T07:54:10Z|00106|binding|INFO|Releasing lport 3031b707-3c60-4f9c-8619-c6937728baca from this chassis (sb_readonly=0)
Nov 29 02:54:10 np0005539564 ovn_controller[130591]: 2025-11-29T07:54:10Z|00107|binding|INFO|Setting lport 3031b707-3c60-4f9c-8619-c6937728baca down in Southbound
Nov 29 02:54:10 np0005539564 ovn_controller[130591]: 2025-11-29T07:54:10Z|00108|binding|INFO|Removing iface tap3031b707-3c ovn-installed in OVS
Nov 29 02:54:10 np0005539564 nova_compute[226295]: 2025-11-29 07:54:10.946 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:10.955 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:1b:9e 10.100.0.4'], port_security=['fa:16:3e:50:1b:9e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '49c55245-ceba-4c0f-a044-8866ffa3b338', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bbef03be-c0f0-4708-987b-5002a6990bb1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '800c0f050e95457384eee582d6da0afa', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6183da78-dbab-461e-ba18-f01392e434c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0e8f2a3-aeb4-4405-bc4f-8521ba3c2988, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=3031b707-3c60-4f9c-8619-c6937728baca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:54:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:10.958 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 3031b707-3c60-4f9c-8619-c6937728baca in datapath bbef03be-c0f0-4708-987b-5002a6990bb1 unbound from our chassis#033[00m
Nov 29 02:54:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:10.960 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bbef03be-c0f0-4708-987b-5002a6990bb1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:54:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:10.961 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[319e6574-1bf7-4f4e-851c-f61b8b066efa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:10.962 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1 namespace which is not needed anymore#033[00m
Nov 29 02:54:10 np0005539564 nova_compute[226295]: 2025-11-29 07:54:10.965 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:11 np0005539564 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000027.scope: Deactivated successfully.
Nov 29 02:54:11 np0005539564 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000027.scope: Consumed 7.690s CPU time.
Nov 29 02:54:11 np0005539564 systemd-machined[190128]: Machine qemu-19-instance-00000027 terminated.
Nov 29 02:54:11 np0005539564 neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1[242254]: [NOTICE]   (242258) : haproxy version is 2.8.14-c23fe91
Nov 29 02:54:11 np0005539564 neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1[242254]: [NOTICE]   (242258) : path to executable is /usr/sbin/haproxy
Nov 29 02:54:11 np0005539564 neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1[242254]: [WARNING]  (242258) : Exiting Master process...
Nov 29 02:54:11 np0005539564 neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1[242254]: [ALERT]    (242258) : Current worker (242260) exited with code 143 (Terminated)
Nov 29 02:54:11 np0005539564 neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1[242254]: [WARNING]  (242258) : All workers exited. Exiting... (0)
Nov 29 02:54:11 np0005539564 systemd[1]: libpod-81dd643b20de4a4090f482f2724d5e6520c009b8288c1a9dd3bc3ab2c0becf52.scope: Deactivated successfully.
Nov 29 02:54:11 np0005539564 podman[242293]: 2025-11-29 07:54:11.095733848 +0000 UTC m=+0.044584646 container died 81dd643b20de4a4090f482f2724d5e6520c009b8288c1a9dd3bc3ab2c0becf52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:54:11 np0005539564 nova_compute[226295]: 2025-11-29 07:54:11.107 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:11 np0005539564 nova_compute[226295]: 2025-11-29 07:54:11.112 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:11 np0005539564 nova_compute[226295]: 2025-11-29 07:54:11.120 226310 INFO nova.virt.libvirt.driver [-] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Instance destroyed successfully.#033[00m
Nov 29 02:54:11 np0005539564 nova_compute[226295]: 2025-11-29 07:54:11.121 226310 DEBUG nova.objects.instance [None req-d7e43daf-45a8-4f5a-bade-b7516b44038d 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lazy-loading 'resources' on Instance uuid 49c55245-ceba-4c0f-a044-8866ffa3b338 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:54:11 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-81dd643b20de4a4090f482f2724d5e6520c009b8288c1a9dd3bc3ab2c0becf52-userdata-shm.mount: Deactivated successfully.
Nov 29 02:54:11 np0005539564 nova_compute[226295]: 2025-11-29 07:54:11.142 226310 DEBUG nova.virt.libvirt.vif [None req-d7e43daf-45a8-4f5a-bade-b7516b44038d 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:53:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-644957351',display_name='tempest-VolumesAdminNegativeTest-server-644957351',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-644957351',id=39,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:54:04Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='800c0f050e95457384eee582d6da0afa',ramdisk_id='',reservation_id='r-4hs6erl4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesAdminNegativeTest-1765250615',owner_user_name='tempest-VolumesAdminNegativeTest-1765250615-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:54:04Z,user_data=None,user_id='8551065d65214410b616d2a71729df0a',uuid=49c55245-ceba-4c0f-a044-8866ffa3b338,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3031b707-3c60-4f9c-8619-c6937728baca", "address": "fa:16:3e:50:1b:9e", "network": {"id": "bbef03be-c0f0-4708-987b-5002a6990bb1", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1189386436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "800c0f050e95457384eee582d6da0afa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3031b707-3c", "ovs_interfaceid": "3031b707-3c60-4f9c-8619-c6937728baca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:54:11 np0005539564 nova_compute[226295]: 2025-11-29 07:54:11.143 226310 DEBUG nova.network.os_vif_util [None req-d7e43daf-45a8-4f5a-bade-b7516b44038d 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Converting VIF {"id": "3031b707-3c60-4f9c-8619-c6937728baca", "address": "fa:16:3e:50:1b:9e", "network": {"id": "bbef03be-c0f0-4708-987b-5002a6990bb1", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1189386436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "800c0f050e95457384eee582d6da0afa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3031b707-3c", "ovs_interfaceid": "3031b707-3c60-4f9c-8619-c6937728baca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:54:11 np0005539564 nova_compute[226295]: 2025-11-29 07:54:11.143 226310 DEBUG nova.network.os_vif_util [None req-d7e43daf-45a8-4f5a-bade-b7516b44038d 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:1b:9e,bridge_name='br-int',has_traffic_filtering=True,id=3031b707-3c60-4f9c-8619-c6937728baca,network=Network(bbef03be-c0f0-4708-987b-5002a6990bb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3031b707-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:54:11 np0005539564 nova_compute[226295]: 2025-11-29 07:54:11.144 226310 DEBUG os_vif [None req-d7e43daf-45a8-4f5a-bade-b7516b44038d 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:1b:9e,bridge_name='br-int',has_traffic_filtering=True,id=3031b707-3c60-4f9c-8619-c6937728baca,network=Network(bbef03be-c0f0-4708-987b-5002a6990bb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3031b707-3c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:54:11 np0005539564 systemd[1]: var-lib-containers-storage-overlay-7c84af3ad6abf888ee8cc5ffb075615558fa13d4a957c3f354a2e310ec5a6b2d-merged.mount: Deactivated successfully.
Nov 29 02:54:11 np0005539564 nova_compute[226295]: 2025-11-29 07:54:11.145 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:11 np0005539564 nova_compute[226295]: 2025-11-29 07:54:11.146 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3031b707-3c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:11 np0005539564 nova_compute[226295]: 2025-11-29 07:54:11.147 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:11 np0005539564 nova_compute[226295]: 2025-11-29 07:54:11.149 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:54:11 np0005539564 nova_compute[226295]: 2025-11-29 07:54:11.152 226310 INFO os_vif [None req-d7e43daf-45a8-4f5a-bade-b7516b44038d 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:1b:9e,bridge_name='br-int',has_traffic_filtering=True,id=3031b707-3c60-4f9c-8619-c6937728baca,network=Network(bbef03be-c0f0-4708-987b-5002a6990bb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3031b707-3c')#033[00m
Nov 29 02:54:11 np0005539564 podman[242293]: 2025-11-29 07:54:11.154504528 +0000 UTC m=+0.103355316 container cleanup 81dd643b20de4a4090f482f2724d5e6520c009b8288c1a9dd3bc3ab2c0becf52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:54:11 np0005539564 systemd[1]: libpod-conmon-81dd643b20de4a4090f482f2724d5e6520c009b8288c1a9dd3bc3ab2c0becf52.scope: Deactivated successfully.
Nov 29 02:54:11 np0005539564 podman[242345]: 2025-11-29 07:54:11.233416784 +0000 UTC m=+0.056116370 container remove 81dd643b20de4a4090f482f2724d5e6520c009b8288c1a9dd3bc3ab2c0becf52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 02:54:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:11.239 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0b5a9f54-0f53-44a2-83a1-dab442277f14]: (4, ('Sat Nov 29 07:54:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1 (81dd643b20de4a4090f482f2724d5e6520c009b8288c1a9dd3bc3ab2c0becf52)\n81dd643b20de4a4090f482f2724d5e6520c009b8288c1a9dd3bc3ab2c0becf52\nSat Nov 29 07:54:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1 (81dd643b20de4a4090f482f2724d5e6520c009b8288c1a9dd3bc3ab2c0becf52)\n81dd643b20de4a4090f482f2724d5e6520c009b8288c1a9dd3bc3ab2c0becf52\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:11.241 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[41639880-730b-4138-a7d4-c1bfdfef6623]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:11.242 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbbef03be-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:11 np0005539564 kernel: tapbbef03be-c0: left promiscuous mode
Nov 29 02:54:11 np0005539564 nova_compute[226295]: 2025-11-29 07:54:11.244 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:11 np0005539564 nova_compute[226295]: 2025-11-29 07:54:11.260 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:11.263 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[30da3159-f5de-4d74-89a6-5d9ff4b4c8ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:11.279 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e544d117-d470-4b16-a164-22324388c278]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:11.280 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6223158a-1f01-4946-a04d-7718b41ea4b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:11.294 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d8e60524-72cf-43b1-b2d7-9aac8f92cc1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581769, 'reachable_time': 20061, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242362, 'error': None, 'target': 'ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:11 np0005539564 systemd[1]: run-netns-ovnmeta\x2dbbef03be\x2dc0f0\x2d4708\x2d987b\x2d5002a6990bb1.mount: Deactivated successfully.
Nov 29 02:54:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:11.297 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:54:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:11.297 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[4c4c0278-abd4-4e25-b7d1-0d68ac0e96b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:11 np0005539564 nova_compute[226295]: 2025-11-29 07:54:11.692 226310 INFO nova.virt.libvirt.driver [None req-d7e43daf-45a8-4f5a-bade-b7516b44038d 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Deleting instance files /var/lib/nova/instances/49c55245-ceba-4c0f-a044-8866ffa3b338_del#033[00m
Nov 29 02:54:11 np0005539564 nova_compute[226295]: 2025-11-29 07:54:11.693 226310 INFO nova.virt.libvirt.driver [None req-d7e43daf-45a8-4f5a-bade-b7516b44038d 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Deletion of /var/lib/nova/instances/49c55245-ceba-4c0f-a044-8866ffa3b338_del complete#033[00m
Nov 29 02:54:11 np0005539564 nova_compute[226295]: 2025-11-29 07:54:11.752 226310 INFO nova.compute.manager [None req-d7e43daf-45a8-4f5a-bade-b7516b44038d 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:54:11 np0005539564 nova_compute[226295]: 2025-11-29 07:54:11.754 226310 DEBUG oslo.service.loopingcall [None req-d7e43daf-45a8-4f5a-bade-b7516b44038d 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:54:11 np0005539564 nova_compute[226295]: 2025-11-29 07:54:11.756 226310 DEBUG nova.compute.manager [-] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:54:11 np0005539564 nova_compute[226295]: 2025-11-29 07:54:11.756 226310 DEBUG nova.network.neutron [-] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:54:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:54:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:12.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:54:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:12.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:12 np0005539564 nova_compute[226295]: 2025-11-29 07:54:12.619 226310 DEBUG nova.compute.manager [req-8c55c3fa-ac1b-4f55-a13e-3984a891cf56 req-cad02c04-26c6-471c-8669-5144d1270934 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Received event network-vif-unplugged-3031b707-3c60-4f9c-8619-c6937728baca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:12 np0005539564 nova_compute[226295]: 2025-11-29 07:54:12.620 226310 DEBUG oslo_concurrency.lockutils [req-8c55c3fa-ac1b-4f55-a13e-3984a891cf56 req-cad02c04-26c6-471c-8669-5144d1270934 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "49c55245-ceba-4c0f-a044-8866ffa3b338-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:12 np0005539564 nova_compute[226295]: 2025-11-29 07:54:12.620 226310 DEBUG oslo_concurrency.lockutils [req-8c55c3fa-ac1b-4f55-a13e-3984a891cf56 req-cad02c04-26c6-471c-8669-5144d1270934 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "49c55245-ceba-4c0f-a044-8866ffa3b338-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:12 np0005539564 nova_compute[226295]: 2025-11-29 07:54:12.620 226310 DEBUG oslo_concurrency.lockutils [req-8c55c3fa-ac1b-4f55-a13e-3984a891cf56 req-cad02c04-26c6-471c-8669-5144d1270934 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "49c55245-ceba-4c0f-a044-8866ffa3b338-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:12 np0005539564 nova_compute[226295]: 2025-11-29 07:54:12.621 226310 DEBUG nova.compute.manager [req-8c55c3fa-ac1b-4f55-a13e-3984a891cf56 req-cad02c04-26c6-471c-8669-5144d1270934 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] No waiting events found dispatching network-vif-unplugged-3031b707-3c60-4f9c-8619-c6937728baca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:54:12 np0005539564 nova_compute[226295]: 2025-11-29 07:54:12.621 226310 DEBUG nova.compute.manager [req-8c55c3fa-ac1b-4f55-a13e-3984a891cf56 req-cad02c04-26c6-471c-8669-5144d1270934 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Received event network-vif-unplugged-3031b707-3c60-4f9c-8619-c6937728baca for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:54:13 np0005539564 nova_compute[226295]: 2025-11-29 07:54:13.572 226310 DEBUG nova.network.neutron [-] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:54:13 np0005539564 nova_compute[226295]: 2025-11-29 07:54:13.592 226310 INFO nova.compute.manager [-] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Took 1.84 seconds to deallocate network for instance.#033[00m
Nov 29 02:54:13 np0005539564 nova_compute[226295]: 2025-11-29 07:54:13.655 226310 DEBUG oslo_concurrency.lockutils [None req-d7e43daf-45a8-4f5a-bade-b7516b44038d 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:13 np0005539564 nova_compute[226295]: 2025-11-29 07:54:13.656 226310 DEBUG oslo_concurrency.lockutils [None req-d7e43daf-45a8-4f5a-bade-b7516b44038d 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:13 np0005539564 nova_compute[226295]: 2025-11-29 07:54:13.717 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:13 np0005539564 nova_compute[226295]: 2025-11-29 07:54:13.790 226310 DEBUG oslo_concurrency.processutils [None req-d7e43daf-45a8-4f5a-bade-b7516b44038d 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:14.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:14 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2769963415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:54:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:14.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:14 np0005539564 nova_compute[226295]: 2025-11-29 07:54:14.240 226310 DEBUG oslo_concurrency.processutils [None req-d7e43daf-45a8-4f5a-bade-b7516b44038d 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:14 np0005539564 nova_compute[226295]: 2025-11-29 07:54:14.249 226310 DEBUG nova.compute.provider_tree [None req-d7e43daf-45a8-4f5a-bade-b7516b44038d 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:54:14 np0005539564 nova_compute[226295]: 2025-11-29 07:54:14.270 226310 DEBUG nova.scheduler.client.report [None req-d7e43daf-45a8-4f5a-bade-b7516b44038d 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:54:14 np0005539564 nova_compute[226295]: 2025-11-29 07:54:14.299 226310 DEBUG oslo_concurrency.lockutils [None req-d7e43daf-45a8-4f5a-bade-b7516b44038d 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:14 np0005539564 nova_compute[226295]: 2025-11-29 07:54:14.356 226310 INFO nova.scheduler.client.report [None req-d7e43daf-45a8-4f5a-bade-b7516b44038d 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Deleted allocations for instance 49c55245-ceba-4c0f-a044-8866ffa3b338#033[00m
Nov 29 02:54:14 np0005539564 nova_compute[226295]: 2025-11-29 07:54:14.437 226310 DEBUG oslo_concurrency.lockutils [None req-d7e43daf-45a8-4f5a-bade-b7516b44038d 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "49c55245-ceba-4c0f-a044-8866ffa3b338" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:14 np0005539564 nova_compute[226295]: 2025-11-29 07:54:14.857 226310 DEBUG nova.compute.manager [req-3b9d1e6a-5c2b-4ce8-acfe-442f80fb6ad2 req-4a253b7b-9088-484f-aba5-fcd507dd86a0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Received event network-vif-plugged-3031b707-3c60-4f9c-8619-c6937728baca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:14 np0005539564 nova_compute[226295]: 2025-11-29 07:54:14.858 226310 DEBUG oslo_concurrency.lockutils [req-3b9d1e6a-5c2b-4ce8-acfe-442f80fb6ad2 req-4a253b7b-9088-484f-aba5-fcd507dd86a0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "49c55245-ceba-4c0f-a044-8866ffa3b338-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:14 np0005539564 nova_compute[226295]: 2025-11-29 07:54:14.859 226310 DEBUG oslo_concurrency.lockutils [req-3b9d1e6a-5c2b-4ce8-acfe-442f80fb6ad2 req-4a253b7b-9088-484f-aba5-fcd507dd86a0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "49c55245-ceba-4c0f-a044-8866ffa3b338-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:14 np0005539564 nova_compute[226295]: 2025-11-29 07:54:14.859 226310 DEBUG oslo_concurrency.lockutils [req-3b9d1e6a-5c2b-4ce8-acfe-442f80fb6ad2 req-4a253b7b-9088-484f-aba5-fcd507dd86a0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "49c55245-ceba-4c0f-a044-8866ffa3b338-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:14 np0005539564 nova_compute[226295]: 2025-11-29 07:54:14.860 226310 DEBUG nova.compute.manager [req-3b9d1e6a-5c2b-4ce8-acfe-442f80fb6ad2 req-4a253b7b-9088-484f-aba5-fcd507dd86a0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] No waiting events found dispatching network-vif-plugged-3031b707-3c60-4f9c-8619-c6937728baca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:54:14 np0005539564 nova_compute[226295]: 2025-11-29 07:54:14.860 226310 WARNING nova.compute.manager [req-3b9d1e6a-5c2b-4ce8-acfe-442f80fb6ad2 req-4a253b7b-9088-484f-aba5-fcd507dd86a0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Received unexpected event network-vif-plugged-3031b707-3c60-4f9c-8619-c6937728baca for instance with vm_state deleted and task_state None.#033[00m
Nov 29 02:54:14 np0005539564 nova_compute[226295]: 2025-11-29 07:54:14.861 226310 DEBUG nova.compute.manager [req-3b9d1e6a-5c2b-4ce8-acfe-442f80fb6ad2 req-4a253b7b-9088-484f-aba5-fcd507dd86a0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Received event network-vif-deleted-3031b707-3c60-4f9c-8619-c6937728baca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:15 np0005539564 nova_compute[226295]: 2025-11-29 07:54:15.605 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:15 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:54:15 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:54:15 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:54:16 np0005539564 nova_compute[226295]: 2025-11-29 07:54:16.148 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:16.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:16.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:18.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:54:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:18.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:54:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e170 e170: 3 total, 3 up, 3 in
Nov 29 02:54:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e171 e171: 3 total, 3 up, 3 in
Nov 29 02:54:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:20.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:20.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:20 np0005539564 nova_compute[226295]: 2025-11-29 07:54:20.607 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:21 np0005539564 nova_compute[226295]: 2025-11-29 07:54:21.150 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e172 e172: 3 total, 3 up, 3 in
Nov 29 02:54:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:22.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:22.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:22 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:54:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e173 e173: 3 total, 3 up, 3 in
Nov 29 02:54:23 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:54:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:24.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:24.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:25 np0005539564 nova_compute[226295]: 2025-11-29 07:54:25.610 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:26 np0005539564 nova_compute[226295]: 2025-11-29 07:54:26.119 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402851.1171713, 49c55245-ceba-4c0f-a044-8866ffa3b338 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:26 np0005539564 nova_compute[226295]: 2025-11-29 07:54:26.119 226310 INFO nova.compute.manager [-] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:54:26 np0005539564 nova_compute[226295]: 2025-11-29 07:54:26.152 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:26 np0005539564 nova_compute[226295]: 2025-11-29 07:54:26.157 226310 DEBUG nova.compute.manager [None req-9e2aa0f0-c311-444c-a2fe-6292ccc8c70f - - - - - -] [instance: 49c55245-ceba-4c0f-a044-8866ffa3b338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:26.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:26.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:26 np0005539564 nova_compute[226295]: 2025-11-29 07:54:26.945 226310 DEBUG oslo_concurrency.lockutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Acquiring lock "0fc06623-7a89-42ac-8120-3786201532b9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:26 np0005539564 nova_compute[226295]: 2025-11-29 07:54:26.946 226310 DEBUG oslo_concurrency.lockutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lock "0fc06623-7a89-42ac-8120-3786201532b9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:26 np0005539564 nova_compute[226295]: 2025-11-29 07:54:26.973 226310 DEBUG nova.compute.manager [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:54:27 np0005539564 nova_compute[226295]: 2025-11-29 07:54:27.103 226310 DEBUG oslo_concurrency.lockutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:27 np0005539564 nova_compute[226295]: 2025-11-29 07:54:27.104 226310 DEBUG oslo_concurrency.lockutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:27 np0005539564 nova_compute[226295]: 2025-11-29 07:54:27.114 226310 DEBUG nova.virt.hardware [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:54:27 np0005539564 nova_compute[226295]: 2025-11-29 07:54:27.114 226310 INFO nova.compute.claims [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 02:54:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e174 e174: 3 total, 3 up, 3 in
Nov 29 02:54:27 np0005539564 nova_compute[226295]: 2025-11-29 07:54:27.278 226310 DEBUG oslo_concurrency.processutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3628032023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:54:27 np0005539564 nova_compute[226295]: 2025-11-29 07:54:27.689 226310 DEBUG oslo_concurrency.processutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:27 np0005539564 nova_compute[226295]: 2025-11-29 07:54:27.698 226310 DEBUG nova.compute.provider_tree [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:54:27 np0005539564 nova_compute[226295]: 2025-11-29 07:54:27.854 226310 DEBUG nova.scheduler.client.report [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:54:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:28.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:28.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:28 np0005539564 nova_compute[226295]: 2025-11-29 07:54:28.284 226310 DEBUG oslo_concurrency.lockutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:28 np0005539564 nova_compute[226295]: 2025-11-29 07:54:28.285 226310 DEBUG nova.compute.manager [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:54:28 np0005539564 nova_compute[226295]: 2025-11-29 07:54:28.336 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:28 np0005539564 nova_compute[226295]: 2025-11-29 07:54:28.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:28 np0005539564 nova_compute[226295]: 2025-11-29 07:54:28.342 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:54:28 np0005539564 nova_compute[226295]: 2025-11-29 07:54:28.498 226310 DEBUG nova.compute.manager [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:54:28 np0005539564 nova_compute[226295]: 2025-11-29 07:54:28.499 226310 DEBUG nova.network.neutron [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:54:28 np0005539564 nova_compute[226295]: 2025-11-29 07:54:28.601 226310 INFO nova.virt.libvirt.driver [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Ignoring supplied device name: /dev/sda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:54:28 np0005539564 nova_compute[226295]: 2025-11-29 07:54:28.664 226310 DEBUG nova.compute.manager [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:54:28 np0005539564 nova_compute[226295]: 2025-11-29 07:54:28.819 226310 DEBUG nova.compute.manager [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:54:28 np0005539564 nova_compute[226295]: 2025-11-29 07:54:28.821 226310 DEBUG nova.virt.libvirt.driver [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:54:28 np0005539564 nova_compute[226295]: 2025-11-29 07:54:28.821 226310 INFO nova.virt.libvirt.driver [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Creating image(s)#033[00m
Nov 29 02:54:28 np0005539564 nova_compute[226295]: 2025-11-29 07:54:28.968 226310 DEBUG nova.storage.rbd_utils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] rbd image 0fc06623-7a89-42ac-8120-3786201532b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:29 np0005539564 nova_compute[226295]: 2025-11-29 07:54:29.023 226310 DEBUG nova.storage.rbd_utils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] rbd image 0fc06623-7a89-42ac-8120-3786201532b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:29 np0005539564 nova_compute[226295]: 2025-11-29 07:54:29.103 226310 DEBUG nova.storage.rbd_utils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] rbd image 0fc06623-7a89-42ac-8120-3786201532b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:29 np0005539564 nova_compute[226295]: 2025-11-29 07:54:29.107 226310 DEBUG oslo_concurrency.lockutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Acquiring lock "fd30d21c6e0d4e83bf9e777a8ff91c9ac77f58c9" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:29 np0005539564 nova_compute[226295]: 2025-11-29 07:54:29.108 226310 DEBUG oslo_concurrency.lockutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lock "fd30d21c6e0d4e83bf9e777a8ff91c9ac77f58c9" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:29 np0005539564 nova_compute[226295]: 2025-11-29 07:54:29.113 226310 DEBUG nova.policy [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '45437f208c5a4499acac789fee214724', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '666052d32183417982e59c456a19c744', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:54:29 np0005539564 nova_compute[226295]: 2025-11-29 07:54:29.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:29 np0005539564 nova_compute[226295]: 2025-11-29 07:54:29.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:54:29 np0005539564 nova_compute[226295]: 2025-11-29 07:54:29.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:54:29 np0005539564 nova_compute[226295]: 2025-11-29 07:54:29.389 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 02:54:29 np0005539564 nova_compute[226295]: 2025-11-29 07:54:29.389 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:54:29 np0005539564 nova_compute[226295]: 2025-11-29 07:54:29.390 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:29 np0005539564 nova_compute[226295]: 2025-11-29 07:54:29.391 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:29 np0005539564 nova_compute[226295]: 2025-11-29 07:54:29.988 226310 DEBUG nova.virt.libvirt.imagebackend [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Image locations are: [{'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/67b4b12c-5f96-4d5d-a734-669efbd0af6b/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/67b4b12c-5f96-4d5d-a734-669efbd0af6b/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 02:54:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:30.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:30.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:30 np0005539564 nova_compute[226295]: 2025-11-29 07:54:30.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:30 np0005539564 podman[242646]: 2025-11-29 07:54:30.529709584 +0000 UTC m=+0.069336577 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:54:30 np0005539564 nova_compute[226295]: 2025-11-29 07:54:30.568 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:30 np0005539564 podman[242645]: 2025-11-29 07:54:30.578607596 +0000 UTC m=+0.115657150 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 02:54:30 np0005539564 nova_compute[226295]: 2025-11-29 07:54:30.796 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:30 np0005539564 podman[242644]: 2025-11-29 07:54:30.807086097 +0000 UTC m=+0.345025444 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller)
Nov 29 02:54:30 np0005539564 nova_compute[226295]: 2025-11-29 07:54:30.812 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:31 np0005539564 nova_compute[226295]: 2025-11-29 07:54:31.153 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:31.588 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:54:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:31.591 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:54:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:32.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:32.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:32 np0005539564 nova_compute[226295]: 2025-11-29 07:54:32.429 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:32 np0005539564 nova_compute[226295]: 2025-11-29 07:54:32.430 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:32 np0005539564 nova_compute[226295]: 2025-11-29 07:54:32.431 226310 DEBUG nova.network.neutron [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Successfully created port: b07d3798-fa8b-45a9-9bc7-ac50043bbe5b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:54:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e175 e175: 3 total, 3 up, 3 in
Nov 29 02:54:32 np0005539564 nova_compute[226295]: 2025-11-29 07:54:32.458 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:32 np0005539564 nova_compute[226295]: 2025-11-29 07:54:32.458 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:32 np0005539564 nova_compute[226295]: 2025-11-29 07:54:32.458 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:32 np0005539564 nova_compute[226295]: 2025-11-29 07:54:32.459 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:54:32 np0005539564 nova_compute[226295]: 2025-11-29 07:54:32.459 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:32.592 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:32 np0005539564 nova_compute[226295]: 2025-11-29 07:54:32.719 226310 DEBUG oslo_concurrency.processutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fd30d21c6e0d4e83bf9e777a8ff91c9ac77f58c9.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:32 np0005539564 nova_compute[226295]: 2025-11-29 07:54:32.783 226310 DEBUG oslo_concurrency.processutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fd30d21c6e0d4e83bf9e777a8ff91c9ac77f58c9.part --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:32 np0005539564 nova_compute[226295]: 2025-11-29 07:54:32.784 226310 DEBUG nova.virt.images [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] 67b4b12c-5f96-4d5d-a734-669efbd0af6b was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Nov 29 02:54:32 np0005539564 nova_compute[226295]: 2025-11-29 07:54:32.785 226310 DEBUG nova.privsep.utils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 29 02:54:32 np0005539564 nova_compute[226295]: 2025-11-29 07:54:32.786 226310 DEBUG oslo_concurrency.processutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/fd30d21c6e0d4e83bf9e777a8ff91c9ac77f58c9.part /var/lib/nova/instances/_base/fd30d21c6e0d4e83bf9e777a8ff91c9ac77f58c9.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:32 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3512358792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:54:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:32 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2528106426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:54:32 np0005539564 nova_compute[226295]: 2025-11-29 07:54:32.876 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:33 np0005539564 nova_compute[226295]: 2025-11-29 07:54:33.179 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:54:33 np0005539564 nova_compute[226295]: 2025-11-29 07:54:33.180 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4767MB free_disk=20.964855194091797GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:54:33 np0005539564 nova_compute[226295]: 2025-11-29 07:54:33.180 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:33 np0005539564 nova_compute[226295]: 2025-11-29 07:54:33.181 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:33 np0005539564 nova_compute[226295]: 2025-11-29 07:54:33.288 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 0fc06623-7a89-42ac-8120-3786201532b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:54:33 np0005539564 nova_compute[226295]: 2025-11-29 07:54:33.289 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:54:33 np0005539564 nova_compute[226295]: 2025-11-29 07:54:33.289 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:54:33 np0005539564 nova_compute[226295]: 2025-11-29 07:54:33.387 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:33 np0005539564 nova_compute[226295]: 2025-11-29 07:54:33.647 226310 DEBUG nova.network.neutron [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Successfully updated port: b07d3798-fa8b-45a9-9bc7-ac50043bbe5b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:54:33 np0005539564 nova_compute[226295]: 2025-11-29 07:54:33.683 226310 DEBUG oslo_concurrency.lockutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Acquiring lock "refresh_cache-0fc06623-7a89-42ac-8120-3786201532b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:54:33 np0005539564 nova_compute[226295]: 2025-11-29 07:54:33.684 226310 DEBUG oslo_concurrency.lockutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Acquired lock "refresh_cache-0fc06623-7a89-42ac-8120-3786201532b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:54:33 np0005539564 nova_compute[226295]: 2025-11-29 07:54:33.684 226310 DEBUG nova.network.neutron [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:54:33 np0005539564 nova_compute[226295]: 2025-11-29 07:54:33.790 226310 DEBUG nova.compute.manager [req-51847291-955f-494b-a50a-ef91810d9501 req-0eb9acad-1ad5-4b50-b7db-00bbfa165d76 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Received event network-changed-b07d3798-fa8b-45a9-9bc7-ac50043bbe5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:33 np0005539564 nova_compute[226295]: 2025-11-29 07:54:33.790 226310 DEBUG nova.compute.manager [req-51847291-955f-494b-a50a-ef91810d9501 req-0eb9acad-1ad5-4b50-b7db-00bbfa165d76 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Refreshing instance network info cache due to event network-changed-b07d3798-fa8b-45a9-9bc7-ac50043bbe5b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:54:33 np0005539564 nova_compute[226295]: 2025-11-29 07:54:33.791 226310 DEBUG oslo_concurrency.lockutils [req-51847291-955f-494b-a50a-ef91810d9501 req-0eb9acad-1ad5-4b50-b7db-00bbfa165d76 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-0fc06623-7a89-42ac-8120-3786201532b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:54:33 np0005539564 nova_compute[226295]: 2025-11-29 07:54:33.889 226310 DEBUG nova.network.neutron [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:54:33 np0005539564 nova_compute[226295]: 2025-11-29 07:54:33.978 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:33 np0005539564 nova_compute[226295]: 2025-11-29 07:54:33.984 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:54:34 np0005539564 nova_compute[226295]: 2025-11-29 07:54:34.001 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:54:34 np0005539564 nova_compute[226295]: 2025-11-29 07:54:34.028 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:54:34 np0005539564 nova_compute[226295]: 2025-11-29 07:54:34.029 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:34.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:34.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:34 np0005539564 nova_compute[226295]: 2025-11-29 07:54:34.661 226310 DEBUG nova.network.neutron [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Updating instance_info_cache with network_info: [{"id": "b07d3798-fa8b-45a9-9bc7-ac50043bbe5b", "address": "fa:16:3e:05:69:ab", "network": {"id": "7513bae3-8124-4a28-bf1d-caeeb9cf3823", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-574630212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "666052d32183417982e59c456a19c744", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d3798-fa", "ovs_interfaceid": "b07d3798-fa8b-45a9-9bc7-ac50043bbe5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:54:34 np0005539564 nova_compute[226295]: 2025-11-29 07:54:34.683 226310 DEBUG oslo_concurrency.lockutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Releasing lock "refresh_cache-0fc06623-7a89-42ac-8120-3786201532b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:54:34 np0005539564 nova_compute[226295]: 2025-11-29 07:54:34.684 226310 DEBUG nova.compute.manager [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Instance network_info: |[{"id": "b07d3798-fa8b-45a9-9bc7-ac50043bbe5b", "address": "fa:16:3e:05:69:ab", "network": {"id": "7513bae3-8124-4a28-bf1d-caeeb9cf3823", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-574630212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "666052d32183417982e59c456a19c744", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d3798-fa", "ovs_interfaceid": "b07d3798-fa8b-45a9-9bc7-ac50043bbe5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:54:34 np0005539564 nova_compute[226295]: 2025-11-29 07:54:34.685 226310 DEBUG oslo_concurrency.lockutils [req-51847291-955f-494b-a50a-ef91810d9501 req-0eb9acad-1ad5-4b50-b7db-00bbfa165d76 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-0fc06623-7a89-42ac-8120-3786201532b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:54:34 np0005539564 nova_compute[226295]: 2025-11-29 07:54:34.685 226310 DEBUG nova.network.neutron [req-51847291-955f-494b-a50a-ef91810d9501 req-0eb9acad-1ad5-4b50-b7db-00bbfa165d76 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Refreshing network info cache for port b07d3798-fa8b-45a9-9bc7-ac50043bbe5b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:54:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:34 np0005539564 nova_compute[226295]: 2025-11-29 07:54:34.909 226310 DEBUG oslo_concurrency.processutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/fd30d21c6e0d4e83bf9e777a8ff91c9ac77f58c9.part /var/lib/nova/instances/_base/fd30d21c6e0d4e83bf9e777a8ff91c9ac77f58c9.converted" returned: 0 in 2.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:34 np0005539564 nova_compute[226295]: 2025-11-29 07:54:34.918 226310 DEBUG oslo_concurrency.processutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fd30d21c6e0d4e83bf9e777a8ff91c9ac77f58c9.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:34 np0005539564 nova_compute[226295]: 2025-11-29 07:54:34.951 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:34 np0005539564 nova_compute[226295]: 2025-11-29 07:54:34.952 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:35 np0005539564 nova_compute[226295]: 2025-11-29 07:54:35.024 226310 DEBUG oslo_concurrency.processutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fd30d21c6e0d4e83bf9e777a8ff91c9ac77f58c9.converted --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:35 np0005539564 nova_compute[226295]: 2025-11-29 07:54:35.026 226310 DEBUG oslo_concurrency.lockutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lock "fd30d21c6e0d4e83bf9e777a8ff91c9ac77f58c9" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 5.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:35 np0005539564 nova_compute[226295]: 2025-11-29 07:54:35.090 226310 DEBUG nova.storage.rbd_utils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] rbd image 0fc06623-7a89-42ac-8120-3786201532b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:35 np0005539564 nova_compute[226295]: 2025-11-29 07:54:35.094 226310 DEBUG oslo_concurrency.processutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/fd30d21c6e0d4e83bf9e777a8ff91c9ac77f58c9 0fc06623-7a89-42ac-8120-3786201532b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:35 np0005539564 nova_compute[226295]: 2025-11-29 07:54:35.801 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:36 np0005539564 nova_compute[226295]: 2025-11-29 07:54:36.156 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:36.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:36.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:36 np0005539564 nova_compute[226295]: 2025-11-29 07:54:36.611 226310 DEBUG nova.network.neutron [req-51847291-955f-494b-a50a-ef91810d9501 req-0eb9acad-1ad5-4b50-b7db-00bbfa165d76 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Updated VIF entry in instance network info cache for port b07d3798-fa8b-45a9-9bc7-ac50043bbe5b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:54:36 np0005539564 nova_compute[226295]: 2025-11-29 07:54:36.612 226310 DEBUG nova.network.neutron [req-51847291-955f-494b-a50a-ef91810d9501 req-0eb9acad-1ad5-4b50-b7db-00bbfa165d76 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Updating instance_info_cache with network_info: [{"id": "b07d3798-fa8b-45a9-9bc7-ac50043bbe5b", "address": "fa:16:3e:05:69:ab", "network": {"id": "7513bae3-8124-4a28-bf1d-caeeb9cf3823", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-574630212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "666052d32183417982e59c456a19c744", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d3798-fa", "ovs_interfaceid": "b07d3798-fa8b-45a9-9bc7-ac50043bbe5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:54:36 np0005539564 nova_compute[226295]: 2025-11-29 07:54:36.958 226310 DEBUG oslo_concurrency.lockutils [req-51847291-955f-494b-a50a-ef91810d9501 req-0eb9acad-1ad5-4b50-b7db-00bbfa165d76 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-0fc06623-7a89-42ac-8120-3786201532b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:54:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:38.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:38.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e176 e176: 3 total, 3 up, 3 in
Nov 29 02:54:38 np0005539564 nova_compute[226295]: 2025-11-29 07:54:38.797 226310 DEBUG oslo_concurrency.processutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/fd30d21c6e0d4e83bf9e777a8ff91c9ac77f58c9 0fc06623-7a89-42ac-8120-3786201532b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.702s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:38 np0005539564 nova_compute[226295]: 2025-11-29 07:54:38.896 226310 DEBUG nova.storage.rbd_utils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] resizing rbd image 0fc06623-7a89-42ac-8120-3786201532b9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:54:39 np0005539564 nova_compute[226295]: 2025-11-29 07:54:39.662 226310 DEBUG nova.objects.instance [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lazy-loading 'migration_context' on Instance uuid 0fc06623-7a89-42ac-8120-3786201532b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:54:39 np0005539564 nova_compute[226295]: 2025-11-29 07:54:39.734 226310 DEBUG nova.virt.libvirt.driver [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:54:39 np0005539564 nova_compute[226295]: 2025-11-29 07:54:39.735 226310 DEBUG nova.virt.libvirt.driver [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Ensure instance console log exists: /var/lib/nova/instances/0fc06623-7a89-42ac-8120-3786201532b9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:54:39 np0005539564 nova_compute[226295]: 2025-11-29 07:54:39.735 226310 DEBUG oslo_concurrency.lockutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:39 np0005539564 nova_compute[226295]: 2025-11-29 07:54:39.736 226310 DEBUG oslo_concurrency.lockutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:39 np0005539564 nova_compute[226295]: 2025-11-29 07:54:39.736 226310 DEBUG oslo_concurrency.lockutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:39 np0005539564 nova_compute[226295]: 2025-11-29 07:54:39.739 226310 DEBUG nova.virt.libvirt.driver [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Start _get_guest_xml network_info=[{"id": "b07d3798-fa8b-45a9-9bc7-ac50043bbe5b", "address": "fa:16:3e:05:69:ab", "network": {"id": "7513bae3-8124-4a28-bf1d-caeeb9cf3823", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-574630212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "666052d32183417982e59c456a19c744", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d3798-fa", "ovs_interfaceid": "b07d3798-fa8b-45a9-9bc7-ac50043bbe5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'scsi', 'cdrom_bus': 'scsi', 'mapping': {'root': {'bus': 'scsi', 'dev': 'sda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'scsi', 'dev': 'sda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'scsi', 'dev': 'sdb', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:54:15Z,direct_url=<?>,disk_format='qcow2',id=67b4b12c-5f96-4d5d-a734-669efbd0af6b,min_disk=0,min_ram=0,name='',owner='badfd9b8c0284ad9ba375f3f5932ae19',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:54:18Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/sda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/sda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'scsi', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '67b4b12c-5f96-4d5d-a734-669efbd0af6b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:54:39 np0005539564 nova_compute[226295]: 2025-11-29 07:54:39.751 226310 WARNING nova.virt.libvirt.driver [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:54:39 np0005539564 nova_compute[226295]: 2025-11-29 07:54:39.757 226310 DEBUG nova.virt.libvirt.host [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:54:39 np0005539564 nova_compute[226295]: 2025-11-29 07:54:39.758 226310 DEBUG nova.virt.libvirt.host [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:54:39 np0005539564 nova_compute[226295]: 2025-11-29 07:54:39.762 226310 DEBUG nova.virt.libvirt.host [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:54:39 np0005539564 nova_compute[226295]: 2025-11-29 07:54:39.762 226310 DEBUG nova.virt.libvirt.host [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:54:39 np0005539564 nova_compute[226295]: 2025-11-29 07:54:39.764 226310 DEBUG nova.virt.libvirt.driver [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:54:39 np0005539564 nova_compute[226295]: 2025-11-29 07:54:39.764 226310 DEBUG nova.virt.hardware [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:54:15Z,direct_url=<?>,disk_format='qcow2',id=67b4b12c-5f96-4d5d-a734-669efbd0af6b,min_disk=0,min_ram=0,name='',owner='badfd9b8c0284ad9ba375f3f5932ae19',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:54:18Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:54:39 np0005539564 nova_compute[226295]: 2025-11-29 07:54:39.765 226310 DEBUG nova.virt.hardware [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:54:39 np0005539564 nova_compute[226295]: 2025-11-29 07:54:39.765 226310 DEBUG nova.virt.hardware [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:54:39 np0005539564 nova_compute[226295]: 2025-11-29 07:54:39.765 226310 DEBUG nova.virt.hardware [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:54:39 np0005539564 nova_compute[226295]: 2025-11-29 07:54:39.765 226310 DEBUG nova.virt.hardware [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:54:39 np0005539564 nova_compute[226295]: 2025-11-29 07:54:39.766 226310 DEBUG nova.virt.hardware [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:54:39 np0005539564 nova_compute[226295]: 2025-11-29 07:54:39.766 226310 DEBUG nova.virt.hardware [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:54:39 np0005539564 nova_compute[226295]: 2025-11-29 07:54:39.766 226310 DEBUG nova.virt.hardware [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:54:39 np0005539564 nova_compute[226295]: 2025-11-29 07:54:39.767 226310 DEBUG nova.virt.hardware [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:54:39 np0005539564 nova_compute[226295]: 2025-11-29 07:54:39.767 226310 DEBUG nova.virt.hardware [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:54:39 np0005539564 nova_compute[226295]: 2025-11-29 07:54:39.767 226310 DEBUG nova.virt.hardware [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:54:39 np0005539564 nova_compute[226295]: 2025-11-29 07:54:39.771 226310 DEBUG oslo_concurrency.processutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:54:40 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1238365819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.247 226310 DEBUG oslo_concurrency.processutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:54:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:40.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:54:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:40.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.275 226310 DEBUG nova.storage.rbd_utils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] rbd image 0fc06623-7a89-42ac-8120-3786201532b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.279 226310 DEBUG oslo_concurrency.processutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:54:40 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/87760384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.700 226310 DEBUG oslo_concurrency.processutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.703 226310 DEBUG nova.virt.libvirt.vif [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:54:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachSCSIVolumeTestJSON-server-1512228979',display_name='tempest-AttachSCSIVolumeTestJSON-server-1512228979',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachscsivolumetestjson-server-1512228979',id=41,image_ref='67b4b12c-5f96-4d5d-a734-669efbd0af6b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIZUUCe3RDK0DJen2m5/HBgVzl5rjD15motncQ3j0OXWKfk1EME3vSc9BvXBGapFCDuUl1gx+O7/4X/ve29A0swdy0NAgBeAMNWPiFhtXD2weCKkNIzpOVmQNsHjVz0c1Q==',key_name='tempest-keypair-151253851',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='666052d32183417982e59c456a19c744',ramdisk_id='',reservation_id='r-hhs0rtxm',resources=None,root_device_name='/dev/sda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='67b4b12c-5f96-4d5d-a734-669efbd0af6b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='scsi',image_hw_disk_bus='scsi',image_hw_machine_type='q35',image_hw_scsi_model='virtio-scsi',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachSCSIVolumeTestJSON-1222333833',owner_user_name='tempest-AttachSCSIVolumeTestJSON-1222333833-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:54:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='45437f208c5a4499acac789fee214724',uuid=0fc06623-7a89-42ac-8120-3786201532b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b07d3798-fa8b-45a9-9bc7-ac50043bbe5b", "address": "fa:16:3e:05:69:ab", "network": {"id": "7513bae3-8124-4a28-bf1d-caeeb9cf3823", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-574630212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", 
"version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "666052d32183417982e59c456a19c744", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d3798-fa", "ovs_interfaceid": "b07d3798-fa8b-45a9-9bc7-ac50043bbe5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.704 226310 DEBUG nova.network.os_vif_util [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Converting VIF {"id": "b07d3798-fa8b-45a9-9bc7-ac50043bbe5b", "address": "fa:16:3e:05:69:ab", "network": {"id": "7513bae3-8124-4a28-bf1d-caeeb9cf3823", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-574630212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "666052d32183417982e59c456a19c744", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d3798-fa", "ovs_interfaceid": "b07d3798-fa8b-45a9-9bc7-ac50043bbe5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.706 226310 DEBUG nova.network.os_vif_util [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:69:ab,bridge_name='br-int',has_traffic_filtering=True,id=b07d3798-fa8b-45a9-9bc7-ac50043bbe5b,network=Network(7513bae3-8124-4a28-bf1d-caeeb9cf3823),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07d3798-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.708 226310 DEBUG nova.objects.instance [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0fc06623-7a89-42ac-8120-3786201532b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.803 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.812 226310 DEBUG nova.virt.libvirt.driver [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:54:40 np0005539564 nova_compute[226295]:  <uuid>0fc06623-7a89-42ac-8120-3786201532b9</uuid>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:  <name>instance-00000029</name>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <nova:name>tempest-AttachSCSIVolumeTestJSON-server-1512228979</nova:name>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 07:54:39</nova:creationTime>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 02:54:40 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:        <nova:user uuid="45437f208c5a4499acac789fee214724">tempest-AttachSCSIVolumeTestJSON-1222333833-project-member</nova:user>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:        <nova:project uuid="666052d32183417982e59c456a19c744">tempest-AttachSCSIVolumeTestJSON-1222333833</nova:project>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="67b4b12c-5f96-4d5d-a734-669efbd0af6b"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:        <nova:port uuid="b07d3798-fa8b-45a9-9bc7-ac50043bbe5b">
Nov 29 02:54:40 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <system>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <entry name="serial">0fc06623-7a89-42ac-8120-3786201532b9</entry>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <entry name="uuid">0fc06623-7a89-42ac-8120-3786201532b9</entry>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    </system>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:  <os>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:  </clock>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/0fc06623-7a89-42ac-8120-3786201532b9_disk">
Nov 29 02:54:40 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:54:40 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <target dev="sda" bus="scsi"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <address type="drive" controller="0" unit="0"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/0fc06623-7a89-42ac-8120-3786201532b9_disk.config">
Nov 29 02:54:40 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:54:40 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <target dev="sdb" bus="scsi"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <address type="drive" controller="0" unit="1"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <controller type="scsi" index="0" model="virtio-scsi"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:05:69:ab"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <target dev="tapb07d3798-fa"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    </interface>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/0fc06623-7a89-42ac-8120-3786201532b9/console.log" append="off"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    </serial>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <video>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 02:54:40 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 02:54:40 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:54:40 np0005539564 nova_compute[226295]: </domain>
Nov 29 02:54:40 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.815 226310 DEBUG nova.compute.manager [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Preparing to wait for external event network-vif-plugged-b07d3798-fa8b-45a9-9bc7-ac50043bbe5b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.815 226310 DEBUG oslo_concurrency.lockutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Acquiring lock "0fc06623-7a89-42ac-8120-3786201532b9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.816 226310 DEBUG oslo_concurrency.lockutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lock "0fc06623-7a89-42ac-8120-3786201532b9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.816 226310 DEBUG oslo_concurrency.lockutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lock "0fc06623-7a89-42ac-8120-3786201532b9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.817 226310 DEBUG nova.virt.libvirt.vif [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:54:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachSCSIVolumeTestJSON-server-1512228979',display_name='tempest-AttachSCSIVolumeTestJSON-server-1512228979',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachscsivolumetestjson-server-1512228979',id=41,image_ref='67b4b12c-5f96-4d5d-a734-669efbd0af6b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIZUUCe3RDK0DJen2m5/HBgVzl5rjD15motncQ3j0OXWKfk1EME3vSc9BvXBGapFCDuUl1gx+O7/4X/ve29A0swdy0NAgBeAMNWPiFhtXD2weCKkNIzpOVmQNsHjVz0c1Q==',key_name='tempest-keypair-151253851',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='666052d32183417982e59c456a19c744',ramdisk_id='',reservation_id='r-hhs0rtxm',resources=None,root_device_name='/dev/sda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='67b4b12c-5f96-4d5d-a734-669efbd0af6b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='scsi',image_hw_disk_bus='scsi',image_hw_machine_type='q35',image_hw_scsi_model='virtio-scsi',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachSCSIVolumeTestJSON-1222333833',owner_user_name='tempest-AttachSCSIVolumeTestJSON-1222333833-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:54:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='45437f208c5a4499acac789fee214724',uuid=0fc06623-7a89-42ac-8120-3786201532b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b07d3798-fa8b-45a9-9bc7-ac50043bbe5b", "address": "fa:16:3e:05:69:ab", "network": {"id": "7513bae3-8124-4a28-bf1d-caeeb9cf3823", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-574630212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": 
"gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "666052d32183417982e59c456a19c744", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d3798-fa", "ovs_interfaceid": "b07d3798-fa8b-45a9-9bc7-ac50043bbe5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.818 226310 DEBUG nova.network.os_vif_util [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Converting VIF {"id": "b07d3798-fa8b-45a9-9bc7-ac50043bbe5b", "address": "fa:16:3e:05:69:ab", "network": {"id": "7513bae3-8124-4a28-bf1d-caeeb9cf3823", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-574630212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "666052d32183417982e59c456a19c744", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d3798-fa", "ovs_interfaceid": "b07d3798-fa8b-45a9-9bc7-ac50043bbe5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.819 226310 DEBUG nova.network.os_vif_util [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:69:ab,bridge_name='br-int',has_traffic_filtering=True,id=b07d3798-fa8b-45a9-9bc7-ac50043bbe5b,network=Network(7513bae3-8124-4a28-bf1d-caeeb9cf3823),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07d3798-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.820 226310 DEBUG os_vif [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:69:ab,bridge_name='br-int',has_traffic_filtering=True,id=b07d3798-fa8b-45a9-9bc7-ac50043bbe5b,network=Network(7513bae3-8124-4a28-bf1d-caeeb9cf3823),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07d3798-fa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.821 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.822 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.822 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.828 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.829 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb07d3798-fa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.830 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb07d3798-fa, col_values=(('external_ids', {'iface-id': 'b07d3798-fa8b-45a9-9bc7-ac50043bbe5b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:05:69:ab', 'vm-uuid': '0fc06623-7a89-42ac-8120-3786201532b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:40 np0005539564 NetworkManager[48997]: <info>  [1764402880.8336] manager: (tapb07d3798-fa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.832 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.836 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.840 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.842 226310 INFO os_vif [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:69:ab,bridge_name='br-int',has_traffic_filtering=True,id=b07d3798-fa8b-45a9-9bc7-ac50043bbe5b,network=Network(7513bae3-8124-4a28-bf1d-caeeb9cf3823),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07d3798-fa')#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.906 226310 DEBUG nova.virt.libvirt.driver [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.907 226310 DEBUG nova.virt.libvirt.driver [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] No BDM found with device name sdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.908 226310 DEBUG nova.virt.libvirt.driver [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] No VIF found with MAC fa:16:3e:05:69:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.909 226310 INFO nova.virt.libvirt.driver [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Using config drive#033[00m
Nov 29 02:54:40 np0005539564 nova_compute[226295]: 2025-11-29 07:54:40.965 226310 DEBUG nova.storage.rbd_utils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] rbd image 0fc06623-7a89-42ac-8120-3786201532b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:42.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:42.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e177 e177: 3 total, 3 up, 3 in
Nov 29 02:54:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:44.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:44.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:44 np0005539564 nova_compute[226295]: 2025-11-29 07:54:44.698 226310 INFO nova.virt.libvirt.driver [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Creating config drive at /var/lib/nova/instances/0fc06623-7a89-42ac-8120-3786201532b9/disk.config#033[00m
Nov 29 02:54:44 np0005539564 nova_compute[226295]: 2025-11-29 07:54:44.704 226310 DEBUG oslo_concurrency.processutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0fc06623-7a89-42ac-8120-3786201532b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2s2dbjjt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:44 np0005539564 nova_compute[226295]: 2025-11-29 07:54:44.832 226310 DEBUG oslo_concurrency.processutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0fc06623-7a89-42ac-8120-3786201532b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2s2dbjjt" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:44 np0005539564 nova_compute[226295]: 2025-11-29 07:54:44.863 226310 DEBUG nova.storage.rbd_utils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] rbd image 0fc06623-7a89-42ac-8120-3786201532b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:44 np0005539564 nova_compute[226295]: 2025-11-29 07:54:44.867 226310 DEBUG oslo_concurrency.processutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0fc06623-7a89-42ac-8120-3786201532b9/disk.config 0fc06623-7a89-42ac-8120-3786201532b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:45 np0005539564 nova_compute[226295]: 2025-11-29 07:54:45.336 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:45 np0005539564 nova_compute[226295]: 2025-11-29 07:54:45.841 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:54:45 np0005539564 nova_compute[226295]: 2025-11-29 07:54:45.845 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:54:45 np0005539564 nova_compute[226295]: 2025-11-29 07:54:45.845 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 02:54:45 np0005539564 nova_compute[226295]: 2025-11-29 07:54:45.846 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 02:54:45 np0005539564 nova_compute[226295]: 2025-11-29 07:54:45.851 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:45 np0005539564 nova_compute[226295]: 2025-11-29 07:54:45.852 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 02:54:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:46.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:46.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:46 np0005539564 nova_compute[226295]: 2025-11-29 07:54:46.324 226310 DEBUG oslo_concurrency.processutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0fc06623-7a89-42ac-8120-3786201532b9/disk.config 0fc06623-7a89-42ac-8120-3786201532b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:46 np0005539564 nova_compute[226295]: 2025-11-29 07:54:46.326 226310 INFO nova.virt.libvirt.driver [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Deleting local config drive /var/lib/nova/instances/0fc06623-7a89-42ac-8120-3786201532b9/disk.config because it was imported into RBD.#033[00m
Nov 29 02:54:46 np0005539564 kernel: tapb07d3798-fa: entered promiscuous mode
Nov 29 02:54:46 np0005539564 NetworkManager[48997]: <info>  [1764402886.4111] manager: (tapb07d3798-fa): new Tun device (/org/freedesktop/NetworkManager/Devices/62)
Nov 29 02:54:46 np0005539564 ovn_controller[130591]: 2025-11-29T07:54:46Z|00109|binding|INFO|Claiming lport b07d3798-fa8b-45a9-9bc7-ac50043bbe5b for this chassis.
Nov 29 02:54:46 np0005539564 ovn_controller[130591]: 2025-11-29T07:54:46Z|00110|binding|INFO|b07d3798-fa8b-45a9-9bc7-ac50043bbe5b: Claiming fa:16:3e:05:69:ab 10.100.0.11
Nov 29 02:54:46 np0005539564 nova_compute[226295]: 2025-11-29 07:54:46.411 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.446 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:69:ab 10.100.0.11'], port_security=['fa:16:3e:05:69:ab 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0fc06623-7a89-42ac-8120-3786201532b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7513bae3-8124-4a28-bf1d-caeeb9cf3823', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '666052d32183417982e59c456a19c744', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aa104abb-c6c8-4a6d-8122-2bec4113f6cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af0f5638-6290-4a5b-b24b-dd776f548dd7, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=b07d3798-fa8b-45a9-9bc7-ac50043bbe5b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.449 139780 INFO neutron.agent.ovn.metadata.agent [-] Port b07d3798-fa8b-45a9-9bc7-ac50043bbe5b in datapath 7513bae3-8124-4a28-bf1d-caeeb9cf3823 bound to our chassis#033[00m
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.452 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7513bae3-8124-4a28-bf1d-caeeb9cf3823#033[00m
Nov 29 02:54:46 np0005539564 systemd-udevd[243016]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:54:46 np0005539564 systemd-machined[190128]: New machine qemu-20-instance-00000029.
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.472 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ff377100-e258-47be-add4-c09711a5facb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.473 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7513bae3-81 in ovnmeta-7513bae3-8124-4a28-bf1d-caeeb9cf3823 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.475 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7513bae3-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.475 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e67bfdc7-3be4-4d89-836e-956bb54b4332]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.476 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8abb8dc4-bd08-48e7-a299-7b847e2b5849]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:46 np0005539564 NetworkManager[48997]: <info>  [1764402886.4851] device (tapb07d3798-fa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:54:46 np0005539564 NetworkManager[48997]: <info>  [1764402886.4872] device (tapb07d3798-fa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.490 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[40e7510b-26ce-4f3b-b851-389d536e8bbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:46 np0005539564 systemd[1]: Started Virtual Machine qemu-20-instance-00000029.
Nov 29 02:54:46 np0005539564 nova_compute[226295]: 2025-11-29 07:54:46.508 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:46 np0005539564 nova_compute[226295]: 2025-11-29 07:54:46.512 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:46 np0005539564 ovn_controller[130591]: 2025-11-29T07:54:46Z|00111|binding|INFO|Setting lport b07d3798-fa8b-45a9-9bc7-ac50043bbe5b ovn-installed in OVS
Nov 29 02:54:46 np0005539564 ovn_controller[130591]: 2025-11-29T07:54:46Z|00112|binding|INFO|Setting lport b07d3798-fa8b-45a9-9bc7-ac50043bbe5b up in Southbound
Nov 29 02:54:46 np0005539564 nova_compute[226295]: 2025-11-29 07:54:46.516 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.516 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[49e4e39e-d89c-4df4-b5fd-72fd4b5735e2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.550 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[129b0a80-c134-4d57-a104-33f4e972048f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.556 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[afdf48f3-1c63-4f3f-ad24-8d1a1ba67a5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:46 np0005539564 systemd-udevd[243021]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:54:46 np0005539564 NetworkManager[48997]: <info>  [1764402886.5573] manager: (tap7513bae3-80): new Veth device (/org/freedesktop/NetworkManager/Devices/63)
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.588 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[331dd3e9-3c99-4c62-96de-8017fc0a89c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.591 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[e5889395-b397-40b5-ba0e-784cd045d614]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:46 np0005539564 NetworkManager[48997]: <info>  [1764402886.6122] device (tap7513bae3-80): carrier: link connected
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.617 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[4a7383b2-099f-43d1-a54b-ee4583ef2c4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.631 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b9a45d80-980f-40f2-9fe2-75c68b336448]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7513bae3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:83:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586129, 'reachable_time': 36901, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243050, 'error': None, 'target': 'ovnmeta-7513bae3-8124-4a28-bf1d-caeeb9cf3823', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.645 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6bb2b343-f1a0-4a34-9366-71cdad0c8a63]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe73:837f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586129, 'tstamp': 586129}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243051, 'error': None, 'target': 'ovnmeta-7513bae3-8124-4a28-bf1d-caeeb9cf3823', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.657 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[62439d9b-330d-4cdf-ab11-bfa17aea374d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7513bae3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:83:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586129, 'reachable_time': 36901, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 243052, 'error': None, 'target': 'ovnmeta-7513bae3-8124-4a28-bf1d-caeeb9cf3823', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.682 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[dcca4584-49a8-44ae-8b8e-95baf0e97648]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.741 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[aca87b99-2220-45a6-964f-7e22a95be62b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.743 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7513bae3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.743 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.744 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7513bae3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:46 np0005539564 nova_compute[226295]: 2025-11-29 07:54:46.746 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:46 np0005539564 kernel: tap7513bae3-80: entered promiscuous mode
Nov 29 02:54:46 np0005539564 NetworkManager[48997]: <info>  [1764402886.7478] manager: (tap7513bae3-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Nov 29 02:54:46 np0005539564 nova_compute[226295]: 2025-11-29 07:54:46.752 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.755 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7513bae3-80, col_values=(('external_ids', {'iface-id': '96217e85-d12b-4f5a-a2d2-6dd9bf35a899'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:46 np0005539564 nova_compute[226295]: 2025-11-29 07:54:46.756 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:46 np0005539564 ovn_controller[130591]: 2025-11-29T07:54:46Z|00113|binding|INFO|Releasing lport 96217e85-d12b-4f5a-a2d2-6dd9bf35a899 from this chassis (sb_readonly=0)
Nov 29 02:54:46 np0005539564 nova_compute[226295]: 2025-11-29 07:54:46.758 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.758 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7513bae3-8124-4a28-bf1d-caeeb9cf3823.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7513bae3-8124-4a28-bf1d-caeeb9cf3823.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.759 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[85da3b83-b606-4f8b-b397-f444f74f764f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.759 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-7513bae3-8124-4a28-bf1d-caeeb9cf3823
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/7513bae3-8124-4a28-bf1d-caeeb9cf3823.pid.haproxy
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 7513bae3-8124-4a28-bf1d-caeeb9cf3823
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:54:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:54:46.760 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7513bae3-8124-4a28-bf1d-caeeb9cf3823', 'env', 'PROCESS_TAG=haproxy-7513bae3-8124-4a28-bf1d-caeeb9cf3823', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7513bae3-8124-4a28-bf1d-caeeb9cf3823.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:54:46 np0005539564 nova_compute[226295]: 2025-11-29 07:54:46.780 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:46 np0005539564 nova_compute[226295]: 2025-11-29 07:54:46.964 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402886.9634871, 0fc06623-7a89-42ac-8120-3786201532b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:46 np0005539564 nova_compute[226295]: 2025-11-29 07:54:46.965 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] VM Started (Lifecycle Event)#033[00m
Nov 29 02:54:46 np0005539564 nova_compute[226295]: 2025-11-29 07:54:46.997 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:47 np0005539564 nova_compute[226295]: 2025-11-29 07:54:47.001 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402886.9636583, 0fc06623-7a89-42ac-8120-3786201532b9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:47 np0005539564 nova_compute[226295]: 2025-11-29 07:54:47.001 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:54:47 np0005539564 nova_compute[226295]: 2025-11-29 07:54:47.018 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:47 np0005539564 nova_compute[226295]: 2025-11-29 07:54:47.022 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:54:47 np0005539564 nova_compute[226295]: 2025-11-29 07:54:47.039 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:54:47 np0005539564 podman[243126]: 2025-11-29 07:54:47.169327604 +0000 UTC m=+0.056744325 container create 9f986f40250bdae82a838fb0bbe56616154c4cc1cbfbd24085060c1e988b6d74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7513bae3-8124-4a28-bf1d-caeeb9cf3823, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 02:54:47 np0005539564 systemd[1]: Started libpod-conmon-9f986f40250bdae82a838fb0bbe56616154c4cc1cbfbd24085060c1e988b6d74.scope.
Nov 29 02:54:47 np0005539564 podman[243126]: 2025-11-29 07:54:47.138041798 +0000 UTC m=+0.025458539 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:54:47 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:54:47 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d0a3ebbe08a4461ac522e1af82e78f1e988b9243e322c58133e7063495533b1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:54:47 np0005539564 podman[243126]: 2025-11-29 07:54:47.260758928 +0000 UTC m=+0.148175679 container init 9f986f40250bdae82a838fb0bbe56616154c4cc1cbfbd24085060c1e988b6d74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7513bae3-8124-4a28-bf1d-caeeb9cf3823, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:54:47 np0005539564 podman[243126]: 2025-11-29 07:54:47.266796391 +0000 UTC m=+0.154213102 container start 9f986f40250bdae82a838fb0bbe56616154c4cc1cbfbd24085060c1e988b6d74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7513bae3-8124-4a28-bf1d-caeeb9cf3823, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 02:54:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e178 e178: 3 total, 3 up, 3 in
Nov 29 02:54:47 np0005539564 neutron-haproxy-ovnmeta-7513bae3-8124-4a28-bf1d-caeeb9cf3823[243141]: [NOTICE]   (243145) : New worker (243147) forked
Nov 29 02:54:47 np0005539564 neutron-haproxy-ovnmeta-7513bae3-8124-4a28-bf1d-caeeb9cf3823[243141]: [NOTICE]   (243145) : Loading success.
Nov 29 02:54:47 np0005539564 nova_compute[226295]: 2025-11-29 07:54:47.698 226310 DEBUG nova.compute.manager [req-b7bd6366-837f-45b4-b7eb-eeeb2ff821b1 req-4236cfa6-e63c-4061-80b4-ea665d0d7bd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Received event network-vif-plugged-b07d3798-fa8b-45a9-9bc7-ac50043bbe5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:47 np0005539564 nova_compute[226295]: 2025-11-29 07:54:47.699 226310 DEBUG oslo_concurrency.lockutils [req-b7bd6366-837f-45b4-b7eb-eeeb2ff821b1 req-4236cfa6-e63c-4061-80b4-ea665d0d7bd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0fc06623-7a89-42ac-8120-3786201532b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:47 np0005539564 nova_compute[226295]: 2025-11-29 07:54:47.699 226310 DEBUG oslo_concurrency.lockutils [req-b7bd6366-837f-45b4-b7eb-eeeb2ff821b1 req-4236cfa6-e63c-4061-80b4-ea665d0d7bd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fc06623-7a89-42ac-8120-3786201532b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:47 np0005539564 nova_compute[226295]: 2025-11-29 07:54:47.699 226310 DEBUG oslo_concurrency.lockutils [req-b7bd6366-837f-45b4-b7eb-eeeb2ff821b1 req-4236cfa6-e63c-4061-80b4-ea665d0d7bd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fc06623-7a89-42ac-8120-3786201532b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:47 np0005539564 nova_compute[226295]: 2025-11-29 07:54:47.699 226310 DEBUG nova.compute.manager [req-b7bd6366-837f-45b4-b7eb-eeeb2ff821b1 req-4236cfa6-e63c-4061-80b4-ea665d0d7bd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Processing event network-vif-plugged-b07d3798-fa8b-45a9-9bc7-ac50043bbe5b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:54:47 np0005539564 nova_compute[226295]: 2025-11-29 07:54:47.700 226310 DEBUG nova.compute.manager [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:54:47 np0005539564 nova_compute[226295]: 2025-11-29 07:54:47.703 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402887.7031097, 0fc06623-7a89-42ac-8120-3786201532b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:47 np0005539564 nova_compute[226295]: 2025-11-29 07:54:47.703 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:54:47 np0005539564 nova_compute[226295]: 2025-11-29 07:54:47.704 226310 DEBUG nova.virt.libvirt.driver [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:54:47 np0005539564 nova_compute[226295]: 2025-11-29 07:54:47.707 226310 INFO nova.virt.libvirt.driver [-] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Instance spawned successfully.#033[00m
Nov 29 02:54:47 np0005539564 nova_compute[226295]: 2025-11-29 07:54:47.707 226310 DEBUG nova.virt.libvirt.driver [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Attempting to register defaults for the following image properties: ['hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:54:47 np0005539564 nova_compute[226295]: 2025-11-29 07:54:47.713 226310 DEBUG nova.virt.libvirt.driver [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:47 np0005539564 nova_compute[226295]: 2025-11-29 07:54:47.714 226310 DEBUG nova.virt.libvirt.driver [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:47 np0005539564 nova_compute[226295]: 2025-11-29 07:54:47.715 226310 DEBUG nova.virt.libvirt.driver [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:47 np0005539564 nova_compute[226295]: 2025-11-29 07:54:47.716 226310 DEBUG nova.virt.libvirt.driver [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:47 np0005539564 nova_compute[226295]: 2025-11-29 07:54:47.724 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:47 np0005539564 nova_compute[226295]: 2025-11-29 07:54:47.729 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:54:47 np0005539564 nova_compute[226295]: 2025-11-29 07:54:47.759 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:54:47 np0005539564 nova_compute[226295]: 2025-11-29 07:54:47.810 226310 INFO nova.compute.manager [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Took 18.99 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:54:47 np0005539564 nova_compute[226295]: 2025-11-29 07:54:47.811 226310 DEBUG nova.compute.manager [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:47 np0005539564 nova_compute[226295]: 2025-11-29 07:54:47.872 226310 INFO nova.compute.manager [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Took 20.83 seconds to build instance.#033[00m
Nov 29 02:54:47 np0005539564 nova_compute[226295]: 2025-11-29 07:54:47.890 226310 DEBUG oslo_concurrency.lockutils [None req-33e889c9-7062-46c2-b253-309f68b1af87 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lock "0fc06623-7a89-42ac-8120-3786201532b9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.944s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:48.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:54:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:48.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:54:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e179 e179: 3 total, 3 up, 3 in
Nov 29 02:54:49 np0005539564 nova_compute[226295]: 2025-11-29 07:54:49.881 226310 DEBUG nova.compute.manager [req-3465dc1a-f593-4476-814c-21b30e16922b req-2528f75e-9676-4aab-a397-985d886615ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Received event network-vif-plugged-b07d3798-fa8b-45a9-9bc7-ac50043bbe5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:49 np0005539564 nova_compute[226295]: 2025-11-29 07:54:49.881 226310 DEBUG oslo_concurrency.lockutils [req-3465dc1a-f593-4476-814c-21b30e16922b req-2528f75e-9676-4aab-a397-985d886615ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0fc06623-7a89-42ac-8120-3786201532b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:49 np0005539564 nova_compute[226295]: 2025-11-29 07:54:49.882 226310 DEBUG oslo_concurrency.lockutils [req-3465dc1a-f593-4476-814c-21b30e16922b req-2528f75e-9676-4aab-a397-985d886615ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fc06623-7a89-42ac-8120-3786201532b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:49 np0005539564 nova_compute[226295]: 2025-11-29 07:54:49.882 226310 DEBUG oslo_concurrency.lockutils [req-3465dc1a-f593-4476-814c-21b30e16922b req-2528f75e-9676-4aab-a397-985d886615ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fc06623-7a89-42ac-8120-3786201532b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:49 np0005539564 nova_compute[226295]: 2025-11-29 07:54:49.882 226310 DEBUG nova.compute.manager [req-3465dc1a-f593-4476-814c-21b30e16922b req-2528f75e-9676-4aab-a397-985d886615ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] No waiting events found dispatching network-vif-plugged-b07d3798-fa8b-45a9-9bc7-ac50043bbe5b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:54:49 np0005539564 nova_compute[226295]: 2025-11-29 07:54:49.882 226310 WARNING nova.compute.manager [req-3465dc1a-f593-4476-814c-21b30e16922b req-2528f75e-9676-4aab-a397-985d886615ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Received unexpected event network-vif-plugged-b07d3798-fa8b-45a9-9bc7-ac50043bbe5b for instance with vm_state active and task_state None.#033[00m
Nov 29 02:54:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:50.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:50.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:50 np0005539564 nova_compute[226295]: 2025-11-29 07:54:50.853 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:52 np0005539564 NetworkManager[48997]: <info>  [1764402892.1196] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Nov 29 02:54:52 np0005539564 NetworkManager[48997]: <info>  [1764402892.1205] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Nov 29 02:54:52 np0005539564 nova_compute[226295]: 2025-11-29 07:54:52.119 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:52 np0005539564 ovn_controller[130591]: 2025-11-29T07:54:52Z|00114|binding|INFO|Releasing lport 96217e85-d12b-4f5a-a2d2-6dd9bf35a899 from this chassis (sb_readonly=0)
Nov 29 02:54:52 np0005539564 nova_compute[226295]: 2025-11-29 07:54:52.262 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:52.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:52 np0005539564 nova_compute[226295]: 2025-11-29 07:54:52.275 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:52.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:53 np0005539564 nova_compute[226295]: 2025-11-29 07:54:53.125 226310 DEBUG nova.compute.manager [req-4ee30f7e-175f-4e2c-a682-154d3277a31b req-3495b0de-d38c-4d93-ab56-4e814749980a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Received event network-changed-b07d3798-fa8b-45a9-9bc7-ac50043bbe5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:53 np0005539564 nova_compute[226295]: 2025-11-29 07:54:53.125 226310 DEBUG nova.compute.manager [req-4ee30f7e-175f-4e2c-a682-154d3277a31b req-3495b0de-d38c-4d93-ab56-4e814749980a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Refreshing instance network info cache due to event network-changed-b07d3798-fa8b-45a9-9bc7-ac50043bbe5b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:54:53 np0005539564 nova_compute[226295]: 2025-11-29 07:54:53.126 226310 DEBUG oslo_concurrency.lockutils [req-4ee30f7e-175f-4e2c-a682-154d3277a31b req-3495b0de-d38c-4d93-ab56-4e814749980a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-0fc06623-7a89-42ac-8120-3786201532b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:54:53 np0005539564 nova_compute[226295]: 2025-11-29 07:54:53.126 226310 DEBUG oslo_concurrency.lockutils [req-4ee30f7e-175f-4e2c-a682-154d3277a31b req-3495b0de-d38c-4d93-ab56-4e814749980a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-0fc06623-7a89-42ac-8120-3786201532b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:54:53 np0005539564 nova_compute[226295]: 2025-11-29 07:54:53.126 226310 DEBUG nova.network.neutron [req-4ee30f7e-175f-4e2c-a682-154d3277a31b req-3495b0de-d38c-4d93-ab56-4e814749980a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Refreshing network info cache for port b07d3798-fa8b-45a9-9bc7-ac50043bbe5b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:54:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:54.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:54.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:55 np0005539564 nova_compute[226295]: 2025-11-29 07:54:55.180 226310 DEBUG nova.network.neutron [req-4ee30f7e-175f-4e2c-a682-154d3277a31b req-3495b0de-d38c-4d93-ab56-4e814749980a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Updated VIF entry in instance network info cache for port b07d3798-fa8b-45a9-9bc7-ac50043bbe5b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:54:55 np0005539564 nova_compute[226295]: 2025-11-29 07:54:55.181 226310 DEBUG nova.network.neutron [req-4ee30f7e-175f-4e2c-a682-154d3277a31b req-3495b0de-d38c-4d93-ab56-4e814749980a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Updating instance_info_cache with network_info: [{"id": "b07d3798-fa8b-45a9-9bc7-ac50043bbe5b", "address": "fa:16:3e:05:69:ab", "network": {"id": "7513bae3-8124-4a28-bf1d-caeeb9cf3823", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-574630212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "666052d32183417982e59c456a19c744", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d3798-fa", "ovs_interfaceid": "b07d3798-fa8b-45a9-9bc7-ac50043bbe5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:54:55 np0005539564 nova_compute[226295]: 2025-11-29 07:54:55.201 226310 DEBUG oslo_concurrency.lockutils [req-4ee30f7e-175f-4e2c-a682-154d3277a31b req-3495b0de-d38c-4d93-ab56-4e814749980a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-0fc06623-7a89-42ac-8120-3786201532b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:54:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e180 e180: 3 total, 3 up, 3 in
Nov 29 02:54:55 np0005539564 nova_compute[226295]: 2025-11-29 07:54:55.855 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:56.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:56.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e181 e181: 3 total, 3 up, 3 in
Nov 29 02:54:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:58.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:54:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:58.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:00.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:00.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:00 np0005539564 nova_compute[226295]: 2025-11-29 07:55:00.858 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:01 np0005539564 podman[243169]: 2025-11-29 07:55:01.540625955 +0000 UTC m=+0.073918401 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 02:55:01 np0005539564 podman[243162]: 2025-11-29 07:55:01.542549266 +0000 UTC m=+0.095706529 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:55:01 np0005539564 podman[243161]: 2025-11-29 07:55:01.542860315 +0000 UTC m=+0.100625734 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 02:55:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:02.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:02.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e182 e182: 3 total, 3 up, 3 in
Nov 29 02:55:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:03.705 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:03.706 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:03.706 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:04 np0005539564 ovn_controller[130591]: 2025-11-29T07:55:04Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:05:69:ab 10.100.0.11
Nov 29 02:55:04 np0005539564 ovn_controller[130591]: 2025-11-29T07:55:04Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:05:69:ab 10.100.0.11
Nov 29 02:55:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:04.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.017000452s ======
Nov 29 02:55:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:04.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.017000452s
Nov 29 02:55:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:05 np0005539564 nova_compute[226295]: 2025-11-29 07:55:05.860 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:55:05 np0005539564 nova_compute[226295]: 2025-11-29 07:55:05.862 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:55:05 np0005539564 nova_compute[226295]: 2025-11-29 07:55:05.862 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 02:55:05 np0005539564 nova_compute[226295]: 2025-11-29 07:55:05.862 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 02:55:05 np0005539564 nova_compute[226295]: 2025-11-29 07:55:05.902 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:05 np0005539564 nova_compute[226295]: 2025-11-29 07:55:05.904 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 02:55:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:55:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:06.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:55:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:06.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e183 e183: 3 total, 3 up, 3 in
Nov 29 02:55:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:08.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:08.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:10.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:55:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:10.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:55:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:10 np0005539564 nova_compute[226295]: 2025-11-29 07:55:10.904 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:10 np0005539564 nova_compute[226295]: 2025-11-29 07:55:10.905 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:12.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:12.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:14.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:14.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:15 np0005539564 ovn_controller[130591]: 2025-11-29T07:55:15Z|00115|binding|INFO|Releasing lport 96217e85-d12b-4f5a-a2d2-6dd9bf35a899 from this chassis (sb_readonly=0)
Nov 29 02:55:15 np0005539564 nova_compute[226295]: 2025-11-29 07:55:15.589 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:15 np0005539564 nova_compute[226295]: 2025-11-29 07:55:15.906 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:16.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:16.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:17 np0005539564 nova_compute[226295]: 2025-11-29 07:55:17.116 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:18.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:18.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:20.221 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:55:20 np0005539564 nova_compute[226295]: 2025-11-29 07:55:20.222 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:20.223 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:55:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:20.224 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:20.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:20.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:20 np0005539564 nova_compute[226295]: 2025-11-29 07:55:20.909 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:21 np0005539564 nova_compute[226295]: 2025-11-29 07:55:21.858 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:22.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:22.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:23 np0005539564 nova_compute[226295]: 2025-11-29 07:55:23.709 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:24.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:24.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:25 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:55:25 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:55:25 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:55:25 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:55:25 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:55:25 np0005539564 nova_compute[226295]: 2025-11-29 07:55:25.913 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:26.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:26.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:26 np0005539564 nova_compute[226295]: 2025-11-29 07:55:26.772 226310 DEBUG oslo_concurrency.lockutils [None req-79f44bb0-ce13-4d3f-8dfd-29b83f1826c4 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Acquiring lock "0fc06623-7a89-42ac-8120-3786201532b9" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:26 np0005539564 nova_compute[226295]: 2025-11-29 07:55:26.773 226310 DEBUG oslo_concurrency.lockutils [None req-79f44bb0-ce13-4d3f-8dfd-29b83f1826c4 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lock "0fc06623-7a89-42ac-8120-3786201532b9" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:26 np0005539564 nova_compute[226295]: 2025-11-29 07:55:26.789 226310 DEBUG nova.objects.instance [None req-79f44bb0-ce13-4d3f-8dfd-29b83f1826c4 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lazy-loading 'flavor' on Instance uuid 0fc06623-7a89-42ac-8120-3786201532b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:55:26 np0005539564 nova_compute[226295]: 2025-11-29 07:55:26.828 226310 DEBUG oslo_concurrency.lockutils [None req-79f44bb0-ce13-4d3f-8dfd-29b83f1826c4 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lock "0fc06623-7a89-42ac-8120-3786201532b9" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:27 np0005539564 nova_compute[226295]: 2025-11-29 07:55:27.043 226310 DEBUG oslo_concurrency.lockutils [None req-79f44bb0-ce13-4d3f-8dfd-29b83f1826c4 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Acquiring lock "0fc06623-7a89-42ac-8120-3786201532b9" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:27 np0005539564 nova_compute[226295]: 2025-11-29 07:55:27.043 226310 DEBUG oslo_concurrency.lockutils [None req-79f44bb0-ce13-4d3f-8dfd-29b83f1826c4 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lock "0fc06623-7a89-42ac-8120-3786201532b9" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:27 np0005539564 nova_compute[226295]: 2025-11-29 07:55:27.045 226310 INFO nova.compute.manager [None req-79f44bb0-ce13-4d3f-8dfd-29b83f1826c4 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Attaching volume bd8dea43-45df-40f8-8674-dea0f07a1d34 to /dev/sdc#033[00m
Nov 29 02:55:27 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:55:27 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:55:27 np0005539564 nova_compute[226295]: 2025-11-29 07:55:27.256 226310 DEBUG os_brick.utils [None req-79f44bb0-ce13-4d3f-8dfd-29b83f1826c4 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 02:55:27 np0005539564 nova_compute[226295]: 2025-11-29 07:55:27.258 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:27 np0005539564 nova_compute[226295]: 2025-11-29 07:55:27.270 231810 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:27 np0005539564 nova_compute[226295]: 2025-11-29 07:55:27.271 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[0e92be50-9de8-451b-935b-b872025aa917]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:27 np0005539564 nova_compute[226295]: 2025-11-29 07:55:27.272 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:27 np0005539564 nova_compute[226295]: 2025-11-29 07:55:27.279 231810 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:27 np0005539564 nova_compute[226295]: 2025-11-29 07:55:27.280 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[bf7602da-b1a4-4341-bdd7-e0002189503e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d3b4384eec44', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:27 np0005539564 nova_compute[226295]: 2025-11-29 07:55:27.281 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:27 np0005539564 nova_compute[226295]: 2025-11-29 07:55:27.288 231810 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:27 np0005539564 nova_compute[226295]: 2025-11-29 07:55:27.288 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[fdd49f75-55e4-484d-ae5f-47505bfa99e1]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:27 np0005539564 nova_compute[226295]: 2025-11-29 07:55:27.289 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[575e47be-29e5-4aed-86a7-fbfe49bb0f0d]: (4, '2e858761-3292-4a17-b38f-a169c3064289') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:27 np0005539564 nova_compute[226295]: 2025-11-29 07:55:27.289 226310 DEBUG oslo_concurrency.processutils [None req-79f44bb0-ce13-4d3f-8dfd-29b83f1826c4 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:27 np0005539564 nova_compute[226295]: 2025-11-29 07:55:27.310 226310 DEBUG oslo_concurrency.processutils [None req-79f44bb0-ce13-4d3f-8dfd-29b83f1826c4 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:27 np0005539564 nova_compute[226295]: 2025-11-29 07:55:27.313 226310 DEBUG os_brick.initiator.connectors.lightos [None req-79f44bb0-ce13-4d3f-8dfd-29b83f1826c4 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 02:55:27 np0005539564 nova_compute[226295]: 2025-11-29 07:55:27.314 226310 DEBUG os_brick.initiator.connectors.lightos [None req-79f44bb0-ce13-4d3f-8dfd-29b83f1826c4 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 02:55:27 np0005539564 nova_compute[226295]: 2025-11-29 07:55:27.314 226310 DEBUG os_brick.initiator.connectors.lightos [None req-79f44bb0-ce13-4d3f-8dfd-29b83f1826c4 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 02:55:27 np0005539564 nova_compute[226295]: 2025-11-29 07:55:27.314 226310 DEBUG os_brick.utils [None req-79f44bb0-ce13-4d3f-8dfd-29b83f1826c4 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] <== get_connector_properties: return (57ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d3b4384eec44', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2e858761-3292-4a17-b38f-a169c3064289', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 02:55:27 np0005539564 nova_compute[226295]: 2025-11-29 07:55:27.315 226310 DEBUG nova.virt.block_device [None req-79f44bb0-ce13-4d3f-8dfd-29b83f1826c4 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Updating existing volume attachment record: e8680d66-168c-4035-83b3-eacfa77a6035 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 02:55:27 np0005539564 nova_compute[226295]: 2025-11-29 07:55:27.318 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:28 np0005539564 nova_compute[226295]: 2025-11-29 07:55:28.203 226310 DEBUG nova.objects.instance [None req-79f44bb0-ce13-4d3f-8dfd-29b83f1826c4 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lazy-loading 'flavor' on Instance uuid 0fc06623-7a89-42ac-8120-3786201532b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:55:28 np0005539564 nova_compute[226295]: 2025-11-29 07:55:28.244 226310 DEBUG nova.virt.libvirt.guest [None req-79f44bb0-ce13-4d3f-8dfd-29b83f1826c4 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 02:55:28 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 02:55:28 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-bd8dea43-45df-40f8-8674-dea0f07a1d34">
Nov 29 02:55:28 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 02:55:28 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 02:55:28 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 02:55:28 np0005539564 nova_compute[226295]:  </source>
Nov 29 02:55:28 np0005539564 nova_compute[226295]:  <auth username="openstack">
Nov 29 02:55:28 np0005539564 nova_compute[226295]:    <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:55:28 np0005539564 nova_compute[226295]:  </auth>
Nov 29 02:55:28 np0005539564 nova_compute[226295]:  <target dev="sdc" bus="scsi"/>
Nov 29 02:55:28 np0005539564 nova_compute[226295]:  <serial>bd8dea43-45df-40f8-8674-dea0f07a1d34</serial>
Nov 29 02:55:28 np0005539564 nova_compute[226295]:  <address type="drive" controller="0" unit="2"/>
Nov 29 02:55:28 np0005539564 nova_compute[226295]: </disk>
Nov 29 02:55:28 np0005539564 nova_compute[226295]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 02:55:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:28.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:28.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:28 np0005539564 nova_compute[226295]: 2025-11-29 07:55:28.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:28 np0005539564 nova_compute[226295]: 2025-11-29 07:55:28.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:28 np0005539564 nova_compute[226295]: 2025-11-29 07:55:28.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:55:29 np0005539564 nova_compute[226295]: 2025-11-29 07:55:29.239 226310 DEBUG nova.virt.libvirt.driver [None req-79f44bb0-ce13-4d3f-8dfd-29b83f1826c4 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:55:29 np0005539564 nova_compute[226295]: 2025-11-29 07:55:29.240 226310 DEBUG nova.virt.libvirt.driver [None req-79f44bb0-ce13-4d3f-8dfd-29b83f1826c4 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] No BDM found with device name sdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:55:29 np0005539564 nova_compute[226295]: 2025-11-29 07:55:29.241 226310 DEBUG nova.virt.libvirt.driver [None req-79f44bb0-ce13-4d3f-8dfd-29b83f1826c4 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] No BDM found with device name sdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:55:29 np0005539564 nova_compute[226295]: 2025-11-29 07:55:29.242 226310 DEBUG nova.virt.libvirt.driver [None req-79f44bb0-ce13-4d3f-8dfd-29b83f1826c4 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] No VIF found with MAC fa:16:3e:05:69:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:55:29 np0005539564 nova_compute[226295]: 2025-11-29 07:55:29.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:29 np0005539564 nova_compute[226295]: 2025-11-29 07:55:29.444 226310 DEBUG oslo_concurrency.lockutils [None req-79f44bb0-ce13-4d3f-8dfd-29b83f1826c4 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lock "0fc06623-7a89-42ac-8120-3786201532b9" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:30.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:55:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:30.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:55:30 np0005539564 nova_compute[226295]: 2025-11-29 07:55:30.859 226310 DEBUG oslo_concurrency.lockutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquiring lock "a97f7448-7bd5-4505-abcd-8f4e247b9469" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:30 np0005539564 nova_compute[226295]: 2025-11-29 07:55:30.860 226310 DEBUG oslo_concurrency.lockutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "a97f7448-7bd5-4505-abcd-8f4e247b9469" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:30 np0005539564 nova_compute[226295]: 2025-11-29 07:55:30.893 226310 DEBUG nova.compute.manager [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:55:30 np0005539564 nova_compute[226295]: 2025-11-29 07:55:30.916 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.007 226310 DEBUG oslo_concurrency.lockutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.008 226310 DEBUG oslo_concurrency.lockutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.013 226310 DEBUG nova.virt.hardware [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.014 226310 INFO nova.compute.claims [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.148 226310 DEBUG oslo_concurrency.processutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.371 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.458 226310 DEBUG oslo_concurrency.lockutils [None req-3930a76f-b653-4b8a-a6dd-36791732f6c8 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Acquiring lock "0fc06623-7a89-42ac-8120-3786201532b9" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.459 226310 DEBUG oslo_concurrency.lockutils [None req-3930a76f-b653-4b8a-a6dd-36791732f6c8 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lock "0fc06623-7a89-42ac-8120-3786201532b9" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.474 226310 INFO nova.compute.manager [None req-3930a76f-b653-4b8a-a6dd-36791732f6c8 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Detaching volume bd8dea43-45df-40f8-8674-dea0f07a1d34
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.547 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-0fc06623-7a89-42ac-8120-3786201532b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.548 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-0fc06623-7a89-42ac-8120-3786201532b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.548 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.549 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0fc06623-7a89-42ac-8120-3786201532b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:55:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:55:31 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2649230686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.601 226310 DEBUG oslo_concurrency.processutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.610 226310 DEBUG nova.compute.provider_tree [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.619 226310 INFO nova.virt.block_device [None req-3930a76f-b653-4b8a-a6dd-36791732f6c8 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Attempting to driver detach volume bd8dea43-45df-40f8-8674-dea0f07a1d34 from mountpoint /dev/sdc
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.624 226310 DEBUG nova.scheduler.client.report [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.631 226310 DEBUG nova.virt.libvirt.driver [None req-3930a76f-b653-4b8a-a6dd-36791732f6c8 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Attempting to detach device sdc from instance 0fc06623-7a89-42ac-8120-3786201532b9 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.631 226310 DEBUG nova.virt.libvirt.guest [None req-3930a76f-b653-4b8a-a6dd-36791732f6c8 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 02:55:31 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 02:55:31 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-bd8dea43-45df-40f8-8674-dea0f07a1d34">
Nov 29 02:55:31 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 02:55:31 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 02:55:31 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 02:55:31 np0005539564 nova_compute[226295]:  </source>
Nov 29 02:55:31 np0005539564 nova_compute[226295]:  <target dev="sdc" bus="scsi"/>
Nov 29 02:55:31 np0005539564 nova_compute[226295]:  <serial>bd8dea43-45df-40f8-8674-dea0f07a1d34</serial>
Nov 29 02:55:31 np0005539564 nova_compute[226295]:  <address type="drive" controller="0" bus="0" target="0" unit="2"/>
Nov 29 02:55:31 np0005539564 nova_compute[226295]: </disk>
Nov 29 02:55:31 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.650 226310 DEBUG oslo_concurrency.lockutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.651 226310 DEBUG nova.compute.manager [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.695 226310 DEBUG nova.compute.manager [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.695 226310 DEBUG nova.network.neutron [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.709 226310 INFO nova.virt.libvirt.driver [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.740 226310 DEBUG nova.compute.manager [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.826 226310 INFO nova.virt.libvirt.driver [None req-3930a76f-b653-4b8a-a6dd-36791732f6c8 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Successfully detached device sdc from instance 0fc06623-7a89-42ac-8120-3786201532b9 from the persistent domain config.
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.827 226310 DEBUG nova.virt.libvirt.driver [None req-3930a76f-b653-4b8a-a6dd-36791732f6c8 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] (1/8): Attempting to detach device sdc with device alias scsi0-0-0-2 from instance 0fc06623-7a89-42ac-8120-3786201532b9 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.827 226310 DEBUG nova.virt.libvirt.guest [None req-3930a76f-b653-4b8a-a6dd-36791732f6c8 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 02:55:31 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 02:55:31 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-bd8dea43-45df-40f8-8674-dea0f07a1d34">
Nov 29 02:55:31 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 02:55:31 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 02:55:31 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 02:55:31 np0005539564 nova_compute[226295]:  </source>
Nov 29 02:55:31 np0005539564 nova_compute[226295]:  <target dev="sdc" bus="scsi"/>
Nov 29 02:55:31 np0005539564 nova_compute[226295]:  <serial>bd8dea43-45df-40f8-8674-dea0f07a1d34</serial>
Nov 29 02:55:31 np0005539564 nova_compute[226295]:  <address type="drive" controller="0" bus="0" target="0" unit="2"/>
Nov 29 02:55:31 np0005539564 nova_compute[226295]: </disk>
Nov 29 02:55:31 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.841 226310 DEBUG nova.compute.manager [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.843 226310 DEBUG nova.virt.libvirt.driver [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.843 226310 INFO nova.virt.libvirt.driver [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Creating image(s)
Nov 29 02:55:31 np0005539564 nova_compute[226295]: 2025-11-29 07:55:31.877 226310 DEBUG nova.storage.rbd_utils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] rbd image a97f7448-7bd5-4505-abcd-8f4e247b9469_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:55:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:32.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:55:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:32.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:55:32 np0005539564 podman[243437]: 2025-11-29 07:55:32.537847092 +0000 UTC m=+0.073281814 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 02:55:32 np0005539564 podman[243436]: 2025-11-29 07:55:32.547136103 +0000 UTC m=+0.090809038 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:55:32 np0005539564 podman[243435]: 2025-11-29 07:55:32.584720889 +0000 UTC m=+0.126578065 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 02:55:33 np0005539564 nova_compute[226295]: 2025-11-29 07:55:33.418 226310 DEBUG nova.storage.rbd_utils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] rbd image a97f7448-7bd5-4505-abcd-8f4e247b9469_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:55:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:34.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:34.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:55:34 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1978654288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:55:34 np0005539564 nova_compute[226295]: 2025-11-29 07:55:34.466 226310 DEBUG nova.storage.rbd_utils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] rbd image a97f7448-7bd5-4505-abcd-8f4e247b9469_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:55:34 np0005539564 nova_compute[226295]: 2025-11-29 07:55:34.470 226310 DEBUG oslo_concurrency.processutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:55:34 np0005539564 nova_compute[226295]: 2025-11-29 07:55:34.514 226310 DEBUG nova.virt.libvirt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Received event <DeviceRemovedEvent: 1764402931.9430752, 0fc06623-7a89-42ac-8120-3786201532b9 => scsi0-0-0-2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 02:55:34 np0005539564 nova_compute[226295]: 2025-11-29 07:55:34.518 226310 DEBUG nova.policy [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dbeeaca97c3e4a1b9417ab3e996f721f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '219d722e6a2c4164be5a30e9565f13a0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 02:55:34 np0005539564 nova_compute[226295]: 2025-11-29 07:55:34.522 226310 DEBUG nova.virt.libvirt.driver [None req-3930a76f-b653-4b8a-a6dd-36791732f6c8 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Start waiting for the detach event from libvirt for device sdc with device alias scsi0-0-0-2 for instance 0fc06623-7a89-42ac-8120-3786201532b9 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 02:55:34 np0005539564 nova_compute[226295]: 2025-11-29 07:55:34.526 226310 INFO nova.virt.libvirt.driver [None req-3930a76f-b653-4b8a-a6dd-36791732f6c8 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Successfully detached device sdc from instance 0fc06623-7a89-42ac-8120-3786201532b9 from the live domain config.
Nov 29 02:55:34 np0005539564 nova_compute[226295]: 2025-11-29 07:55:34.564 226310 DEBUG oslo_concurrency.processutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:55:34 np0005539564 nova_compute[226295]: 2025-11-29 07:55:34.565 226310 DEBUG oslo_concurrency.lockutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:34 np0005539564 nova_compute[226295]: 2025-11-29 07:55:34.565 226310 DEBUG oslo_concurrency.lockutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:34 np0005539564 nova_compute[226295]: 2025-11-29 07:55:34.566 226310 DEBUG oslo_concurrency.lockutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:34 np0005539564 nova_compute[226295]: 2025-11-29 07:55:34.751 226310 DEBUG nova.storage.rbd_utils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] rbd image a97f7448-7bd5-4505-abcd-8f4e247b9469_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:55:34 np0005539564 nova_compute[226295]: 2025-11-29 07:55:34.757 226310 DEBUG oslo_concurrency.processutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf a97f7448-7bd5-4505-abcd-8f4e247b9469_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:55:34 np0005539564 nova_compute[226295]: 2025-11-29 07:55:34.810 226310 DEBUG nova.objects.instance [None req-3930a76f-b653-4b8a-a6dd-36791732f6c8 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lazy-loading 'flavor' on Instance uuid 0fc06623-7a89-42ac-8120-3786201532b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:55:34 np0005539564 nova_compute[226295]: 2025-11-29 07:55:34.855 226310 DEBUG oslo_concurrency.lockutils [None req-3930a76f-b653-4b8a-a6dd-36791732f6c8 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lock "0fc06623-7a89-42ac-8120-3786201532b9" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 3.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:35 np0005539564 nova_compute[226295]: 2025-11-29 07:55:35.538 226310 DEBUG oslo_concurrency.lockutils [None req-84761493-e00d-434f-8813-184b7d07d937 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Acquiring lock "0fc06623-7a89-42ac-8120-3786201532b9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:35 np0005539564 nova_compute[226295]: 2025-11-29 07:55:35.538 226310 DEBUG oslo_concurrency.lockutils [None req-84761493-e00d-434f-8813-184b7d07d937 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lock "0fc06623-7a89-42ac-8120-3786201532b9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:35 np0005539564 nova_compute[226295]: 2025-11-29 07:55:35.539 226310 DEBUG oslo_concurrency.lockutils [None req-84761493-e00d-434f-8813-184b7d07d937 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Acquiring lock "0fc06623-7a89-42ac-8120-3786201532b9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:35 np0005539564 nova_compute[226295]: 2025-11-29 07:55:35.539 226310 DEBUG oslo_concurrency.lockutils [None req-84761493-e00d-434f-8813-184b7d07d937 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lock "0fc06623-7a89-42ac-8120-3786201532b9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:35 np0005539564 nova_compute[226295]: 2025-11-29 07:55:35.539 226310 DEBUG oslo_concurrency.lockutils [None req-84761493-e00d-434f-8813-184b7d07d937 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lock "0fc06623-7a89-42ac-8120-3786201532b9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:35 np0005539564 nova_compute[226295]: 2025-11-29 07:55:35.541 226310 INFO nova.compute.manager [None req-84761493-e00d-434f-8813-184b7d07d937 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Terminating instance
Nov 29 02:55:35 np0005539564 nova_compute[226295]: 2025-11-29 07:55:35.542 226310 DEBUG nova.compute.manager [None req-84761493-e00d-434f-8813-184b7d07d937 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 02:55:35 np0005539564 nova_compute[226295]: 2025-11-29 07:55:35.590 226310 DEBUG nova.network.neutron [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Successfully created port: 884285d4-3cba-4999-a7f4-109bc9191410 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 02:55:35 np0005539564 nova_compute[226295]: 2025-11-29 07:55:35.595 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Updating instance_info_cache with network_info: [{"id": "b07d3798-fa8b-45a9-9bc7-ac50043bbe5b", "address": "fa:16:3e:05:69:ab", "network": {"id": "7513bae3-8124-4a28-bf1d-caeeb9cf3823", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-574630212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "666052d32183417982e59c456a19c744", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d3798-fa", "ovs_interfaceid": "b07d3798-fa8b-45a9-9bc7-ac50043bbe5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:55:35 np0005539564 nova_compute[226295]: 2025-11-29 07:55:35.610 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-0fc06623-7a89-42ac-8120-3786201532b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:55:35 np0005539564 nova_compute[226295]: 2025-11-29 07:55:35.611 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 02:55:35 np0005539564 nova_compute[226295]: 2025-11-29 07:55:35.611 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:55:35 np0005539564 nova_compute[226295]: 2025-11-29 07:55:35.611 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:55:35 np0005539564 nova_compute[226295]: 2025-11-29 07:55:35.611 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:55:35 np0005539564 nova_compute[226295]: 2025-11-29 07:55:35.612 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:55:35 np0005539564 nova_compute[226295]: 2025-11-29 07:55:35.612 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:55:35 np0005539564 nova_compute[226295]: 2025-11-29 07:55:35.633 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:35 np0005539564 nova_compute[226295]: 2025-11-29 07:55:35.634 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:35 np0005539564 nova_compute[226295]: 2025-11-29 07:55:35.634 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:35 np0005539564 nova_compute[226295]: 2025-11-29 07:55:35.635 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:55:35 np0005539564 nova_compute[226295]: 2025-11-29 07:55:35.636 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:35 np0005539564 nova_compute[226295]: 2025-11-29 07:55:35.920 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:36.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:36.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:55:36 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/48570772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:55:36 np0005539564 nova_compute[226295]: 2025-11-29 07:55:36.485 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.848s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:37 np0005539564 nova_compute[226295]: 2025-11-29 07:55:37.232 226310 DEBUG nova.network.neutron [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Successfully updated port: 884285d4-3cba-4999-a7f4-109bc9191410 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:55:37 np0005539564 nova_compute[226295]: 2025-11-29 07:55:37.282 226310 DEBUG oslo_concurrency.lockutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquiring lock "refresh_cache-a97f7448-7bd5-4505-abcd-8f4e247b9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:55:37 np0005539564 nova_compute[226295]: 2025-11-29 07:55:37.283 226310 DEBUG oslo_concurrency.lockutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquired lock "refresh_cache-a97f7448-7bd5-4505-abcd-8f4e247b9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:55:37 np0005539564 nova_compute[226295]: 2025-11-29 07:55:37.283 226310 DEBUG nova.network.neutron [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:55:37 np0005539564 kernel: tapb07d3798-fa (unregistering): left promiscuous mode
Nov 29 02:55:37 np0005539564 NetworkManager[48997]: <info>  [1764402937.6541] device (tapb07d3798-fa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:55:37 np0005539564 nova_compute[226295]: 2025-11-29 07:55:37.662 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:55:37Z|00116|binding|INFO|Releasing lport b07d3798-fa8b-45a9-9bc7-ac50043bbe5b from this chassis (sb_readonly=0)
Nov 29 02:55:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:55:37Z|00117|binding|INFO|Setting lport b07d3798-fa8b-45a9-9bc7-ac50043bbe5b down in Southbound
Nov 29 02:55:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:55:37Z|00118|binding|INFO|Removing iface tapb07d3798-fa ovn-installed in OVS
Nov 29 02:55:37 np0005539564 nova_compute[226295]: 2025-11-29 07:55:37.666 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:37 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:37.670 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:69:ab 10.100.0.11'], port_security=['fa:16:3e:05:69:ab 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0fc06623-7a89-42ac-8120-3786201532b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7513bae3-8124-4a28-bf1d-caeeb9cf3823', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '666052d32183417982e59c456a19c744', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aa104abb-c6c8-4a6d-8122-2bec4113f6cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af0f5638-6290-4a5b-b24b-dd776f548dd7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=b07d3798-fa8b-45a9-9bc7-ac50043bbe5b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:55:37 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:37.672 139780 INFO neutron.agent.ovn.metadata.agent [-] Port b07d3798-fa8b-45a9-9bc7-ac50043bbe5b in datapath 7513bae3-8124-4a28-bf1d-caeeb9cf3823 unbound from our chassis#033[00m
Nov 29 02:55:37 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:37.673 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7513bae3-8124-4a28-bf1d-caeeb9cf3823, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:55:37 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:37.674 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[21b9c320-2a50-4791-9da8-a8aa8ef2ae3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:37 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:37.675 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7513bae3-8124-4a28-bf1d-caeeb9cf3823 namespace which is not needed anymore#033[00m
Nov 29 02:55:37 np0005539564 nova_compute[226295]: 2025-11-29 07:55:37.698 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:37 np0005539564 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000029.scope: Deactivated successfully.
Nov 29 02:55:37 np0005539564 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000029.scope: Consumed 16.028s CPU time.
Nov 29 02:55:37 np0005539564 systemd-machined[190128]: Machine qemu-20-instance-00000029 terminated.
Nov 29 02:55:37 np0005539564 nova_compute[226295]: 2025-11-29 07:55:37.764 226310 DEBUG oslo_concurrency.processutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf a97f7448-7bd5-4505-abcd-8f4e247b9469_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:38 np0005539564 nova_compute[226295]: 2025-11-29 07:55:38.289 226310 DEBUG nova.network.neutron [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:55:38 np0005539564 nova_compute[226295]: 2025-11-29 07:55:38.292 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:38 np0005539564 nova_compute[226295]: 2025-11-29 07:55:38.333 226310 INFO nova.virt.libvirt.driver [-] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Instance destroyed successfully.#033[00m
Nov 29 02:55:38 np0005539564 nova_compute[226295]: 2025-11-29 07:55:38.334 226310 DEBUG nova.objects.instance [None req-84761493-e00d-434f-8813-184b7d07d937 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lazy-loading 'resources' on Instance uuid 0fc06623-7a89-42ac-8120-3786201532b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:55:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:38.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:38 np0005539564 nova_compute[226295]: 2025-11-29 07:55:38.339 226310 DEBUG nova.storage.rbd_utils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] resizing rbd image a97f7448-7bd5-4505-abcd-8f4e247b9469_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:55:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:38.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:38 np0005539564 neutron-haproxy-ovnmeta-7513bae3-8124-4a28-bf1d-caeeb9cf3823[243141]: [NOTICE]   (243145) : haproxy version is 2.8.14-c23fe91
Nov 29 02:55:38 np0005539564 neutron-haproxy-ovnmeta-7513bae3-8124-4a28-bf1d-caeeb9cf3823[243141]: [NOTICE]   (243145) : path to executable is /usr/sbin/haproxy
Nov 29 02:55:38 np0005539564 neutron-haproxy-ovnmeta-7513bae3-8124-4a28-bf1d-caeeb9cf3823[243141]: [WARNING]  (243145) : Exiting Master process...
Nov 29 02:55:38 np0005539564 neutron-haproxy-ovnmeta-7513bae3-8124-4a28-bf1d-caeeb9cf3823[243141]: [WARNING]  (243145) : Exiting Master process...
Nov 29 02:55:38 np0005539564 neutron-haproxy-ovnmeta-7513bae3-8124-4a28-bf1d-caeeb9cf3823[243141]: [ALERT]    (243145) : Current worker (243147) exited with code 143 (Terminated)
Nov 29 02:55:38 np0005539564 neutron-haproxy-ovnmeta-7513bae3-8124-4a28-bf1d-caeeb9cf3823[243141]: [WARNING]  (243145) : All workers exited. Exiting... (0)
Nov 29 02:55:38 np0005539564 systemd[1]: libpod-9f986f40250bdae82a838fb0bbe56616154c4cc1cbfbd24085060c1e988b6d74.scope: Deactivated successfully.
Nov 29 02:55:38 np0005539564 podman[243613]: 2025-11-29 07:55:38.934713607 +0000 UTC m=+1.164865542 container died 9f986f40250bdae82a838fb0bbe56616154c4cc1cbfbd24085060c1e988b6d74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7513bae3-8124-4a28-bf1d-caeeb9cf3823, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 02:55:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:40.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:40.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:41 np0005539564 nova_compute[226295]: 2025-11-29 07:55:41.369 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:41 np0005539564 nova_compute[226295]: 2025-11-29 07:55:41.383 226310 DEBUG nova.compute.manager [req-0c39c728-32f5-49fe-98f6-dab18f9be11d req-194dd717-7d5a-49b0-b290-23e660ac21cd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Received event network-changed-884285d4-3cba-4999-a7f4-109bc9191410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:41 np0005539564 nova_compute[226295]: 2025-11-29 07:55:41.384 226310 DEBUG nova.compute.manager [req-0c39c728-32f5-49fe-98f6-dab18f9be11d req-194dd717-7d5a-49b0-b290-23e660ac21cd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Refreshing instance network info cache due to event network-changed-884285d4-3cba-4999-a7f4-109bc9191410. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:55:41 np0005539564 nova_compute[226295]: 2025-11-29 07:55:41.384 226310 DEBUG oslo_concurrency.lockutils [req-0c39c728-32f5-49fe-98f6-dab18f9be11d req-194dd717-7d5a-49b0-b290-23e660ac21cd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-a97f7448-7bd5-4505-abcd-8f4e247b9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:55:41 np0005539564 nova_compute[226295]: 2025-11-29 07:55:41.390 226310 DEBUG nova.compute.manager [req-9722383f-e5e3-49d1-af1f-897ac3748c68 req-0ccd9969-b13b-47ce-8eff-11994c47a00e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Received event network-vif-unplugged-b07d3798-fa8b-45a9-9bc7-ac50043bbe5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:41 np0005539564 nova_compute[226295]: 2025-11-29 07:55:41.390 226310 DEBUG oslo_concurrency.lockutils [req-9722383f-e5e3-49d1-af1f-897ac3748c68 req-0ccd9969-b13b-47ce-8eff-11994c47a00e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0fc06623-7a89-42ac-8120-3786201532b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:41 np0005539564 nova_compute[226295]: 2025-11-29 07:55:41.390 226310 DEBUG oslo_concurrency.lockutils [req-9722383f-e5e3-49d1-af1f-897ac3748c68 req-0ccd9969-b13b-47ce-8eff-11994c47a00e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fc06623-7a89-42ac-8120-3786201532b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:41 np0005539564 nova_compute[226295]: 2025-11-29 07:55:41.391 226310 DEBUG oslo_concurrency.lockutils [req-9722383f-e5e3-49d1-af1f-897ac3748c68 req-0ccd9969-b13b-47ce-8eff-11994c47a00e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fc06623-7a89-42ac-8120-3786201532b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:41 np0005539564 nova_compute[226295]: 2025-11-29 07:55:41.391 226310 DEBUG nova.compute.manager [req-9722383f-e5e3-49d1-af1f-897ac3748c68 req-0ccd9969-b13b-47ce-8eff-11994c47a00e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] No waiting events found dispatching network-vif-unplugged-b07d3798-fa8b-45a9-9bc7-ac50043bbe5b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:55:41 np0005539564 nova_compute[226295]: 2025-11-29 07:55:41.391 226310 DEBUG nova.compute.manager [req-9722383f-e5e3-49d1-af1f-897ac3748c68 req-0ccd9969-b13b-47ce-8eff-11994c47a00e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Received event network-vif-unplugged-b07d3798-fa8b-45a9-9bc7-ac50043bbe5b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:55:41 np0005539564 nova_compute[226295]: 2025-11-29 07:55:41.393 226310 DEBUG nova.compute.manager [req-e8ea64ed-cb4c-45b2-9e47-166d82ae3143 req-6ae76e72-8642-4e6b-bbc6-d38007b2522b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Received event network-vif-plugged-b07d3798-fa8b-45a9-9bc7-ac50043bbe5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:41 np0005539564 nova_compute[226295]: 2025-11-29 07:55:41.394 226310 DEBUG oslo_concurrency.lockutils [req-e8ea64ed-cb4c-45b2-9e47-166d82ae3143 req-6ae76e72-8642-4e6b-bbc6-d38007b2522b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0fc06623-7a89-42ac-8120-3786201532b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:41 np0005539564 nova_compute[226295]: 2025-11-29 07:55:41.394 226310 DEBUG oslo_concurrency.lockutils [req-e8ea64ed-cb4c-45b2-9e47-166d82ae3143 req-6ae76e72-8642-4e6b-bbc6-d38007b2522b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fc06623-7a89-42ac-8120-3786201532b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:41 np0005539564 nova_compute[226295]: 2025-11-29 07:55:41.395 226310 DEBUG oslo_concurrency.lockutils [req-e8ea64ed-cb4c-45b2-9e47-166d82ae3143 req-6ae76e72-8642-4e6b-bbc6-d38007b2522b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fc06623-7a89-42ac-8120-3786201532b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:41 np0005539564 nova_compute[226295]: 2025-11-29 07:55:41.395 226310 DEBUG nova.compute.manager [req-e8ea64ed-cb4c-45b2-9e47-166d82ae3143 req-6ae76e72-8642-4e6b-bbc6-d38007b2522b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] No waiting events found dispatching network-vif-plugged-b07d3798-fa8b-45a9-9bc7-ac50043bbe5b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:55:41 np0005539564 nova_compute[226295]: 2025-11-29 07:55:41.396 226310 WARNING nova.compute.manager [req-e8ea64ed-cb4c-45b2-9e47-166d82ae3143 req-6ae76e72-8642-4e6b-bbc6-d38007b2522b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Received unexpected event network-vif-plugged-b07d3798-fa8b-45a9-9bc7-ac50043bbe5b for instance with vm_state active and task_state deleting.#033[00m
Nov 29 02:55:41 np0005539564 nova_compute[226295]: 2025-11-29 07:55:41.403 226310 DEBUG nova.virt.libvirt.vif [None req-84761493-e00d-434f-8813-184b7d07d937 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:54:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachSCSIVolumeTestJSON-server-1512228979',display_name='tempest-AttachSCSIVolumeTestJSON-server-1512228979',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachscsivolumetestjson-server-1512228979',id=41,image_ref='67b4b12c-5f96-4d5d-a734-669efbd0af6b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIZUUCe3RDK0DJen2m5/HBgVzl5rjD15motncQ3j0OXWKfk1EME3vSc9BvXBGapFCDuUl1gx+O7/4X/ve29A0swdy0NAgBeAMNWPiFhtXD2weCKkNIzpOVmQNsHjVz0c1Q==',key_name='tempest-keypair-151253851',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:54:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='666052d32183417982e59c456a19c744',ramdisk_id='',reservation_id='r-hhs0rtxm',resources=None,root_device_name='/dev/sda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='67b4b12c-5f96-4d5d-a734-669efbd0af6b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='scsi',image_hw_disk_bus='scsi',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_scsi_model='virtio-scsi',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachSCSIVolumeTestJSON-1222333833',owner_user_name='tempest-AttachSCSIVolumeTestJSON-1222333833-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:54:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='45437f208c5a4499acac789fee214724',uuid=0fc06623-7a89-42ac-8120-3786201532b9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b07d3798-fa8b-45a9-9bc7-ac50043bbe5b", "address": "fa:16:3e:05:69:ab", "network": {"id": "7513bae3-8124-4a28-bf1d-caeeb9cf3823", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-574630212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "666052d32183417982e59c456a19c744", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d3798-fa", "ovs_interfaceid": "b07d3798-fa8b-45a9-9bc7-ac50043bbe5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:55:41 np0005539564 nova_compute[226295]: 2025-11-29 07:55:41.404 226310 DEBUG nova.network.os_vif_util [None req-84761493-e00d-434f-8813-184b7d07d937 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Converting VIF {"id": "b07d3798-fa8b-45a9-9bc7-ac50043bbe5b", "address": "fa:16:3e:05:69:ab", "network": {"id": "7513bae3-8124-4a28-bf1d-caeeb9cf3823", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-574630212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "666052d32183417982e59c456a19c744", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d3798-fa", "ovs_interfaceid": "b07d3798-fa8b-45a9-9bc7-ac50043bbe5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:55:41 np0005539564 nova_compute[226295]: 2025-11-29 07:55:41.405 226310 DEBUG nova.network.os_vif_util [None req-84761493-e00d-434f-8813-184b7d07d937 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:05:69:ab,bridge_name='br-int',has_traffic_filtering=True,id=b07d3798-fa8b-45a9-9bc7-ac50043bbe5b,network=Network(7513bae3-8124-4a28-bf1d-caeeb9cf3823),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07d3798-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:55:41 np0005539564 nova_compute[226295]: 2025-11-29 07:55:41.406 226310 DEBUG os_vif [None req-84761493-e00d-434f-8813-184b7d07d937 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:69:ab,bridge_name='br-int',has_traffic_filtering=True,id=b07d3798-fa8b-45a9-9bc7-ac50043bbe5b,network=Network(7513bae3-8124-4a28-bf1d-caeeb9cf3823),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07d3798-fa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:55:41 np0005539564 nova_compute[226295]: 2025-11-29 07:55:41.409 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:41 np0005539564 nova_compute[226295]: 2025-11-29 07:55:41.409 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb07d3798-fa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:41 np0005539564 nova_compute[226295]: 2025-11-29 07:55:41.413 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:55:41 np0005539564 nova_compute[226295]: 2025-11-29 07:55:41.414 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:41 np0005539564 nova_compute[226295]: 2025-11-29 07:55:41.418 226310 INFO os_vif [None req-84761493-e00d-434f-8813-184b7d07d937 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:69:ab,bridge_name='br-int',has_traffic_filtering=True,id=b07d3798-fa8b-45a9-9bc7-ac50043bbe5b,network=Network(7513bae3-8124-4a28-bf1d-caeeb9cf3823),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07d3798-fa')#033[00m
Nov 29 02:55:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:42.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:42.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:42 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9f986f40250bdae82a838fb0bbe56616154c4cc1cbfbd24085060c1e988b6d74-userdata-shm.mount: Deactivated successfully.
Nov 29 02:55:42 np0005539564 systemd[1]: var-lib-containers-storage-overlay-0d0a3ebbe08a4461ac522e1af82e78f1e988b9243e322c58133e7063495533b1-merged.mount: Deactivated successfully.
Nov 29 02:55:42 np0005539564 nova_compute[226295]: 2025-11-29 07:55:42.841 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000029 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:42 np0005539564 nova_compute[226295]: 2025-11-29 07:55:42.842 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000029 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:43 np0005539564 nova_compute[226295]: 2025-11-29 07:55:43.053 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:55:43 np0005539564 nova_compute[226295]: 2025-11-29 07:55:43.055 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4783MB free_disk=20.880298614501953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:55:43 np0005539564 nova_compute[226295]: 2025-11-29 07:55:43.055 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:43 np0005539564 nova_compute[226295]: 2025-11-29 07:55:43.055 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:43 np0005539564 nova_compute[226295]: 2025-11-29 07:55:43.144 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 0fc06623-7a89-42ac-8120-3786201532b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:55:43 np0005539564 nova_compute[226295]: 2025-11-29 07:55:43.145 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance a97f7448-7bd5-4505-abcd-8f4e247b9469 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:55:43 np0005539564 nova_compute[226295]: 2025-11-29 07:55:43.145 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:55:43 np0005539564 nova_compute[226295]: 2025-11-29 07:55:43.145 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:55:43 np0005539564 nova_compute[226295]: 2025-11-29 07:55:43.212 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:43 np0005539564 nova_compute[226295]: 2025-11-29 07:55:43.398 226310 DEBUG nova.network.neutron [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Updating instance_info_cache with network_info: [{"id": "884285d4-3cba-4999-a7f4-109bc9191410", "address": "fa:16:3e:d7:bd:9c", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap884285d4-3c", "ovs_interfaceid": "884285d4-3cba-4999-a7f4-109bc9191410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:55:43 np0005539564 nova_compute[226295]: 2025-11-29 07:55:43.415 226310 DEBUG oslo_concurrency.lockutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Releasing lock "refresh_cache-a97f7448-7bd5-4505-abcd-8f4e247b9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:55:43 np0005539564 nova_compute[226295]: 2025-11-29 07:55:43.416 226310 DEBUG nova.compute.manager [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Instance network_info: |[{"id": "884285d4-3cba-4999-a7f4-109bc9191410", "address": "fa:16:3e:d7:bd:9c", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap884285d4-3c", "ovs_interfaceid": "884285d4-3cba-4999-a7f4-109bc9191410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:55:43 np0005539564 nova_compute[226295]: 2025-11-29 07:55:43.416 226310 DEBUG oslo_concurrency.lockutils [req-0c39c728-32f5-49fe-98f6-dab18f9be11d req-194dd717-7d5a-49b0-b290-23e660ac21cd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-a97f7448-7bd5-4505-abcd-8f4e247b9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:55:43 np0005539564 nova_compute[226295]: 2025-11-29 07:55:43.417 226310 DEBUG nova.network.neutron [req-0c39c728-32f5-49fe-98f6-dab18f9be11d req-194dd717-7d5a-49b0-b290-23e660ac21cd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Refreshing network info cache for port 884285d4-3cba-4999-a7f4-109bc9191410 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:55:43 np0005539564 podman[243613]: 2025-11-29 07:55:43.567242636 +0000 UTC m=+5.797394551 container cleanup 9f986f40250bdae82a838fb0bbe56616154c4cc1cbfbd24085060c1e988b6d74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7513bae3-8124-4a28-bf1d-caeeb9cf3823, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 02:55:43 np0005539564 systemd[1]: libpod-conmon-9f986f40250bdae82a838fb0bbe56616154c4cc1cbfbd24085060c1e988b6d74.scope: Deactivated successfully.
Nov 29 02:55:44 np0005539564 podman[243739]: 2025-11-29 07:55:44.169441877 +0000 UTC m=+0.572156000 container remove 9f986f40250bdae82a838fb0bbe56616154c4cc1cbfbd24085060c1e988b6d74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7513bae3-8124-4a28-bf1d-caeeb9cf3823, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:55:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:44.176 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6cab239c-ae9e-4831-960f-82b2ac9e4fce]: (4, ('Sat Nov 29 07:55:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7513bae3-8124-4a28-bf1d-caeeb9cf3823 (9f986f40250bdae82a838fb0bbe56616154c4cc1cbfbd24085060c1e988b6d74)\n9f986f40250bdae82a838fb0bbe56616154c4cc1cbfbd24085060c1e988b6d74\nSat Nov 29 07:55:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7513bae3-8124-4a28-bf1d-caeeb9cf3823 (9f986f40250bdae82a838fb0bbe56616154c4cc1cbfbd24085060c1e988b6d74)\n9f986f40250bdae82a838fb0bbe56616154c4cc1cbfbd24085060c1e988b6d74\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:44.178 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a0da925c-4229-44e7-9d12-d19b6a0ae12c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:44.179 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7513bae3-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.182 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:44 np0005539564 kernel: tap7513bae3-80: left promiscuous mode
Nov 29 02:55:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:44.199 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4bd2d4-60d9-498b-b057-d45a2803dca5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:44.216 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[86da7de3-ed80-4065-8679-a924521595a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:44.217 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3787d9ee-80e6-447b-b161-b335d8a1c5e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:44.233 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ae30a405-9251-4422-845f-9d0c3f3fd321]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586122, 'reachable_time': 35200, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243776, 'error': None, 'target': 'ovnmeta-7513bae3-8124-4a28-bf1d-caeeb9cf3823', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:44 np0005539564 systemd[1]: run-netns-ovnmeta\x2d7513bae3\x2d8124\x2d4a28\x2dbf1d\x2dcaeeb9cf3823.mount: Deactivated successfully.
Nov 29 02:55:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:44.236 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7513bae3-8124-4a28-bf1d-caeeb9cf3823 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:55:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:44.236 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[76be49c7-a343-4f26-ae05-4448c768c98f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.240 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.249 226310 DEBUG nova.objects.instance [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lazy-loading 'migration_context' on Instance uuid a97f7448-7bd5-4505-abcd-8f4e247b9469 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:55:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:55:44 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2395800221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.343 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:44.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.349 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:55:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:44.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.452 226310 DEBUG nova.virt.libvirt.driver [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.452 226310 DEBUG nova.virt.libvirt.driver [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Ensure instance console log exists: /var/lib/nova/instances/a97f7448-7bd5-4505-abcd-8f4e247b9469/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.453 226310 DEBUG oslo_concurrency.lockutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.453 226310 DEBUG oslo_concurrency.lockutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.453 226310 DEBUG oslo_concurrency.lockutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.455 226310 DEBUG nova.virt.libvirt.driver [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Start _get_guest_xml network_info=[{"id": "884285d4-3cba-4999-a7f4-109bc9191410", "address": "fa:16:3e:d7:bd:9c", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap884285d4-3c", "ovs_interfaceid": "884285d4-3cba-4999-a7f4-109bc9191410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.460 226310 WARNING nova.virt.libvirt.driver [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.465 226310 DEBUG nova.virt.libvirt.host [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.466 226310 DEBUG nova.virt.libvirt.host [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.469 226310 DEBUG nova.virt.libvirt.host [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.470 226310 DEBUG nova.virt.libvirt.host [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.471 226310 DEBUG nova.virt.libvirt.driver [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.471 226310 DEBUG nova.virt.hardware [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.471 226310 DEBUG nova.virt.hardware [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.472 226310 DEBUG nova.virt.hardware [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.472 226310 DEBUG nova.virt.hardware [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.472 226310 DEBUG nova.virt.hardware [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.472 226310 DEBUG nova.virt.hardware [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.473 226310 DEBUG nova.virt.hardware [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.473 226310 DEBUG nova.virt.hardware [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.473 226310 DEBUG nova.virt.hardware [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.473 226310 DEBUG nova.virt.hardware [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.474 226310 DEBUG nova.virt.hardware [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.477 226310 DEBUG oslo_concurrency.processutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.497 226310 DEBUG nova.network.neutron [req-0c39c728-32f5-49fe-98f6-dab18f9be11d req-194dd717-7d5a-49b0-b290-23e660ac21cd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Updated VIF entry in instance network info cache for port 884285d4-3cba-4999-a7f4-109bc9191410. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.498 226310 DEBUG nova.network.neutron [req-0c39c728-32f5-49fe-98f6-dab18f9be11d req-194dd717-7d5a-49b0-b290-23e660ac21cd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Updating instance_info_cache with network_info: [{"id": "884285d4-3cba-4999-a7f4-109bc9191410", "address": "fa:16:3e:d7:bd:9c", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap884285d4-3c", "ovs_interfaceid": "884285d4-3cba-4999-a7f4-109bc9191410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.524 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.603 226310 DEBUG oslo_concurrency.lockutils [req-0c39c728-32f5-49fe-98f6-dab18f9be11d req-194dd717-7d5a-49b0-b290-23e660ac21cd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-a97f7448-7bd5-4505-abcd-8f4e247b9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:55:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:55:44 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2440601012' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.946 226310 DEBUG oslo_concurrency.processutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.981 226310 DEBUG nova.storage.rbd_utils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] rbd image a97f7448-7bd5-4505-abcd-8f4e247b9469_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:55:44 np0005539564 nova_compute[226295]: 2025-11-29 07:55:44.987 226310 DEBUG oslo_concurrency.processutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.009 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.010 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.379 226310 INFO nova.virt.libvirt.driver [None req-84761493-e00d-434f-8813-184b7d07d937 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Deleting instance files /var/lib/nova/instances/0fc06623-7a89-42ac-8120-3786201532b9_del#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.380 226310 INFO nova.virt.libvirt.driver [None req-84761493-e00d-434f-8813-184b7d07d937 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Deletion of /var/lib/nova/instances/0fc06623-7a89-42ac-8120-3786201532b9_del complete#033[00m
Nov 29 02:55:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:55:45 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1167763174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.424 226310 DEBUG oslo_concurrency.processutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.426 226310 DEBUG nova.virt.libvirt.vif [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:55:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1484859336',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1484859336',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1484859336',id=46,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='219d722e6a2c4164be5a30e9565f13a0',ramdisk_id='',reservation_id='r-isgkjphp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-267959441',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-267959441-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:55:31Z,user_data=None,user_id='dbeeaca97c3e4a1b9417ab3e996f721f',uuid=a97f7448-7bd5-4505-abcd-8f4e247b9469,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "884285d4-3cba-4999-a7f4-109bc9191410", "address": "fa:16:3e:d7:bd:9c", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap884285d4-3c", "ovs_interfaceid": "884285d4-3cba-4999-a7f4-109bc9191410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.427 226310 DEBUG nova.network.os_vif_util [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Converting VIF {"id": "884285d4-3cba-4999-a7f4-109bc9191410", "address": "fa:16:3e:d7:bd:9c", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap884285d4-3c", "ovs_interfaceid": "884285d4-3cba-4999-a7f4-109bc9191410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.428 226310 DEBUG nova.network.os_vif_util [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:bd:9c,bridge_name='br-int',has_traffic_filtering=True,id=884285d4-3cba-4999-a7f4-109bc9191410,network=Network(57fc634c-e12c-411b-a4cb-47f24328da03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap884285d4-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.430 226310 DEBUG nova.objects.instance [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lazy-loading 'pci_devices' on Instance uuid a97f7448-7bd5-4505-abcd-8f4e247b9469 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.446 226310 DEBUG nova.virt.libvirt.driver [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:55:45 np0005539564 nova_compute[226295]:  <uuid>a97f7448-7bd5-4505-abcd-8f4e247b9469</uuid>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:  <name>instance-0000002e</name>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1484859336</nova:name>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 07:55:44</nova:creationTime>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 02:55:45 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:        <nova:user uuid="dbeeaca97c3e4a1b9417ab3e996f721f">tempest-ImagesOneServerNegativeTestJSON-267959441-project-member</nova:user>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:        <nova:project uuid="219d722e6a2c4164be5a30e9565f13a0">tempest-ImagesOneServerNegativeTestJSON-267959441</nova:project>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:        <nova:port uuid="884285d4-3cba-4999-a7f4-109bc9191410">
Nov 29 02:55:45 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <system>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <entry name="serial">a97f7448-7bd5-4505-abcd-8f4e247b9469</entry>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <entry name="uuid">a97f7448-7bd5-4505-abcd-8f4e247b9469</entry>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    </system>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:  <os>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:  </clock>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/a97f7448-7bd5-4505-abcd-8f4e247b9469_disk">
Nov 29 02:55:45 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:55:45 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/a97f7448-7bd5-4505-abcd-8f4e247b9469_disk.config">
Nov 29 02:55:45 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:55:45 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:d7:bd:9c"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <target dev="tap884285d4-3c"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    </interface>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/a97f7448-7bd5-4505-abcd-8f4e247b9469/console.log" append="off"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    </serial>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <video>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 02:55:45 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 02:55:45 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:55:45 np0005539564 nova_compute[226295]: </domain>
Nov 29 02:55:45 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.448 226310 DEBUG nova.compute.manager [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Preparing to wait for external event network-vif-plugged-884285d4-3cba-4999-a7f4-109bc9191410 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.449 226310 DEBUG oslo_concurrency.lockutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquiring lock "a97f7448-7bd5-4505-abcd-8f4e247b9469-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.449 226310 DEBUG oslo_concurrency.lockutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "a97f7448-7bd5-4505-abcd-8f4e247b9469-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.449 226310 DEBUG oslo_concurrency.lockutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "a97f7448-7bd5-4505-abcd-8f4e247b9469-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.450 226310 DEBUG nova.virt.libvirt.vif [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:55:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1484859336',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1484859336',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1484859336',id=46,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='219d722e6a2c4164be5a30e9565f13a0',ramdisk_id='',reservation_id='r-isgkjphp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-267959
441',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-267959441-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:55:31Z,user_data=None,user_id='dbeeaca97c3e4a1b9417ab3e996f721f',uuid=a97f7448-7bd5-4505-abcd-8f4e247b9469,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "884285d4-3cba-4999-a7f4-109bc9191410", "address": "fa:16:3e:d7:bd:9c", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap884285d4-3c", "ovs_interfaceid": "884285d4-3cba-4999-a7f4-109bc9191410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.450 226310 DEBUG nova.network.os_vif_util [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Converting VIF {"id": "884285d4-3cba-4999-a7f4-109bc9191410", "address": "fa:16:3e:d7:bd:9c", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap884285d4-3c", "ovs_interfaceid": "884285d4-3cba-4999-a7f4-109bc9191410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.451 226310 DEBUG nova.network.os_vif_util [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:bd:9c,bridge_name='br-int',has_traffic_filtering=True,id=884285d4-3cba-4999-a7f4-109bc9191410,network=Network(57fc634c-e12c-411b-a4cb-47f24328da03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap884285d4-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.451 226310 DEBUG os_vif [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:bd:9c,bridge_name='br-int',has_traffic_filtering=True,id=884285d4-3cba-4999-a7f4-109bc9191410,network=Network(57fc634c-e12c-411b-a4cb-47f24328da03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap884285d4-3c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.452 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.453 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.453 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.455 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.456 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap884285d4-3c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.456 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap884285d4-3c, col_values=(('external_ids', {'iface-id': '884285d4-3cba-4999-a7f4-109bc9191410', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d7:bd:9c', 'vm-uuid': 'a97f7448-7bd5-4505-abcd-8f4e247b9469'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.460 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:45 np0005539564 NetworkManager[48997]: <info>  [1764402945.4619] manager: (tap884285d4-3c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.463 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.470 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.471 226310 INFO os_vif [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:bd:9c,bridge_name='br-int',has_traffic_filtering=True,id=884285d4-3cba-4999-a7f4-109bc9191410,network=Network(57fc634c-e12c-411b-a4cb-47f24328da03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap884285d4-3c')#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.516 226310 INFO nova.compute.manager [None req-84761493-e00d-434f-8813-184b7d07d937 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Took 9.97 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.517 226310 DEBUG oslo.service.loopingcall [None req-84761493-e00d-434f-8813-184b7d07d937 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.517 226310 DEBUG nova.compute.manager [-] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.517 226310 DEBUG nova.network.neutron [-] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.535 226310 DEBUG nova.virt.libvirt.driver [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.535 226310 DEBUG nova.virt.libvirt.driver [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.535 226310 DEBUG nova.virt.libvirt.driver [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] No VIF found with MAC fa:16:3e:d7:bd:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.536 226310 INFO nova.virt.libvirt.driver [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Using config drive#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.561 226310 DEBUG nova.storage.rbd_utils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] rbd image a97f7448-7bd5-4505-abcd-8f4e247b9469_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.923 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.961 226310 INFO nova.virt.libvirt.driver [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Creating config drive at /var/lib/nova/instances/a97f7448-7bd5-4505-abcd-8f4e247b9469/disk.config#033[00m
Nov 29 02:55:45 np0005539564 nova_compute[226295]: 2025-11-29 07:55:45.966 226310 DEBUG oslo_concurrency.processutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a97f7448-7bd5-4505-abcd-8f4e247b9469/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbfyv8i55 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:46 np0005539564 nova_compute[226295]: 2025-11-29 07:55:46.098 226310 DEBUG oslo_concurrency.processutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a97f7448-7bd5-4505-abcd-8f4e247b9469/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbfyv8i55" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:46 np0005539564 nova_compute[226295]: 2025-11-29 07:55:46.130 226310 DEBUG nova.storage.rbd_utils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] rbd image a97f7448-7bd5-4505-abcd-8f4e247b9469_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:55:46 np0005539564 nova_compute[226295]: 2025-11-29 07:55:46.134 226310 DEBUG oslo_concurrency.processutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a97f7448-7bd5-4505-abcd-8f4e247b9469/disk.config a97f7448-7bd5-4505-abcd-8f4e247b9469_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:46 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:55:46 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:55:46 np0005539564 nova_compute[226295]: 2025-11-29 07:55:46.302 226310 DEBUG oslo_concurrency.processutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a97f7448-7bd5-4505-abcd-8f4e247b9469/disk.config a97f7448-7bd5-4505-abcd-8f4e247b9469_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:46 np0005539564 nova_compute[226295]: 2025-11-29 07:55:46.303 226310 INFO nova.virt.libvirt.driver [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Deleting local config drive /var/lib/nova/instances/a97f7448-7bd5-4505-abcd-8f4e247b9469/disk.config because it was imported into RBD.#033[00m
Nov 29 02:55:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:46.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:46 np0005539564 kernel: tap884285d4-3c: entered promiscuous mode
Nov 29 02:55:46 np0005539564 NetworkManager[48997]: <info>  [1764402946.3660] manager: (tap884285d4-3c): new Tun device (/org/freedesktop/NetworkManager/Devices/68)
Nov 29 02:55:46 np0005539564 ovn_controller[130591]: 2025-11-29T07:55:46Z|00119|binding|INFO|Claiming lport 884285d4-3cba-4999-a7f4-109bc9191410 for this chassis.
Nov 29 02:55:46 np0005539564 ovn_controller[130591]: 2025-11-29T07:55:46Z|00120|binding|INFO|884285d4-3cba-4999-a7f4-109bc9191410: Claiming fa:16:3e:d7:bd:9c 10.100.0.10
Nov 29 02:55:46 np0005539564 nova_compute[226295]: 2025-11-29 07:55:46.366 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:55:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:46.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:55:46 np0005539564 systemd-udevd[243779]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:55:46 np0005539564 NetworkManager[48997]: <info>  [1764402946.3827] device (tap884285d4-3c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:55:46 np0005539564 NetworkManager[48997]: <info>  [1764402946.3835] device (tap884285d4-3c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:55:46 np0005539564 nova_compute[226295]: 2025-11-29 07:55:46.384 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:46 np0005539564 ovn_controller[130591]: 2025-11-29T07:55:46Z|00121|binding|INFO|Setting lport 884285d4-3cba-4999-a7f4-109bc9191410 ovn-installed in OVS
Nov 29 02:55:46 np0005539564 ovn_controller[130591]: 2025-11-29T07:55:46Z|00122|binding|INFO|Setting lport 884285d4-3cba-4999-a7f4-109bc9191410 up in Southbound
Nov 29 02:55:46 np0005539564 nova_compute[226295]: 2025-11-29 07:55:46.386 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.388 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:bd:9c 10.100.0.10'], port_security=['fa:16:3e:d7:bd:9c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a97f7448-7bd5-4505-abcd-8f4e247b9469', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57fc634c-e12c-411b-a4cb-47f24328da03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '219d722e6a2c4164be5a30e9565f13a0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '402e53f5-f525-45fd-8980-0b9445b1b6de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7363dfa8-cf81-4433-ae14-b180682ce437, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=884285d4-3cba-4999-a7f4-109bc9191410) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.389 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 884285d4-3cba-4999-a7f4-109bc9191410 in datapath 57fc634c-e12c-411b-a4cb-47f24328da03 bound to our chassis#033[00m
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.390 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 57fc634c-e12c-411b-a4cb-47f24328da03#033[00m
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.401 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[187153e3-68db-4643-94fc-1c8cce62daf7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.402 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap57fc634c-e1 in ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.405 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap57fc634c-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.405 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d1e8a13e-8832-44c1-ae13-03dfacd5a1cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.405 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7822972f-6057-4a87-8307-b518e49021e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:46 np0005539564 systemd-machined[190128]: New machine qemu-21-instance-0000002e.
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.418 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[5b9283f5-b42c-4ad3-ae2c-7cd4e562df32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:46 np0005539564 systemd[1]: Started Virtual Machine qemu-21-instance-0000002e.
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.434 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d86210d0-3b33-410d-9ff7-46f8b5b14a17]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.460 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[c07f253f-f75f-4038-9ea3-5dcdcc884f75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.465 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[934924d2-6fe9-4fe6-8de3-f89bebd85fca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:46 np0005539564 NetworkManager[48997]: <info>  [1764402946.4669] manager: (tap57fc634c-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/69)
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.494 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[48f777d4-6462-409b-947a-9d14e2b9adb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.498 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[c3db71a2-fe22-4a39-8c92-07f7066ce2df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:46 np0005539564 NetworkManager[48997]: <info>  [1764402946.5201] device (tap57fc634c-e0): carrier: link connected
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.525 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[b391d398-29b3-412e-9311-7b2e24e25d3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.552 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ed4fdefc-8fe6-4d64-8a86-eb3c47196b0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap57fc634c-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b0:45:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592120, 'reachable_time': 33445, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244003, 'error': None, 'target': 'ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.567 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ce04994e-7870-44fc-9365-42a42d224752]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb0:458f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592120, 'tstamp': 592120}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244004, 'error': None, 'target': 'ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.588 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7a59287b-5f39-4933-9e67-c61c722c4ed5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap57fc634c-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b0:45:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592120, 'reachable_time': 33445, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 244005, 'error': None, 'target': 'ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.619 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[73d48054-c92d-49ef-b2d4-342dbf132719]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.686 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[02d40acf-bac1-48fd-a19b-302d8560b6c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.688 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap57fc634c-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.688 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.689 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap57fc634c-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:46 np0005539564 nova_compute[226295]: 2025-11-29 07:55:46.690 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:46 np0005539564 kernel: tap57fc634c-e0: entered promiscuous mode
Nov 29 02:55:46 np0005539564 NetworkManager[48997]: <info>  [1764402946.6929] manager: (tap57fc634c-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Nov 29 02:55:46 np0005539564 nova_compute[226295]: 2025-11-29 07:55:46.692 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.693 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap57fc634c-e0, col_values=(('external_ids', {'iface-id': '423869da-3f5f-4bc2-bc87-d86bc5a0c7ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:46 np0005539564 nova_compute[226295]: 2025-11-29 07:55:46.694 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:46 np0005539564 ovn_controller[130591]: 2025-11-29T07:55:46Z|00123|binding|INFO|Releasing lport 423869da-3f5f-4bc2-bc87-d86bc5a0c7ce from this chassis (sb_readonly=0)
Nov 29 02:55:46 np0005539564 nova_compute[226295]: 2025-11-29 07:55:46.711 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.712 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/57fc634c-e12c-411b-a4cb-47f24328da03.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/57fc634c-e12c-411b-a4cb-47f24328da03.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.713 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e5288c83-702b-4c07-ab6f-ee1f0cef3149]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.713 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-57fc634c-e12c-411b-a4cb-47f24328da03
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/57fc634c-e12c-411b-a4cb-47f24328da03.pid.haproxy
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 57fc634c-e12c-411b-a4cb-47f24328da03
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:55:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:55:46.714 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03', 'env', 'PROCESS_TAG=haproxy-57fc634c-e12c-411b-a4cb-47f24328da03', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/57fc634c-e12c-411b-a4cb-47f24328da03.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:55:46 np0005539564 nova_compute[226295]: 2025-11-29 07:55:46.758 226310 DEBUG nova.network.neutron [-] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:55:46 np0005539564 nova_compute[226295]: 2025-11-29 07:55:46.786 226310 INFO nova.compute.manager [-] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Took 1.27 seconds to deallocate network for instance.#033[00m
Nov 29 02:55:46 np0005539564 nova_compute[226295]: 2025-11-29 07:55:46.793 226310 DEBUG nova.compute.manager [req-8fa71334-cb49-4916-a1c4-5e7d48f38bf8 req-225ec37e-3264-4c99-aceb-cf977860b851 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Received event network-vif-plugged-884285d4-3cba-4999-a7f4-109bc9191410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:46 np0005539564 nova_compute[226295]: 2025-11-29 07:55:46.794 226310 DEBUG oslo_concurrency.lockutils [req-8fa71334-cb49-4916-a1c4-5e7d48f38bf8 req-225ec37e-3264-4c99-aceb-cf977860b851 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a97f7448-7bd5-4505-abcd-8f4e247b9469-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:46 np0005539564 nova_compute[226295]: 2025-11-29 07:55:46.794 226310 DEBUG oslo_concurrency.lockutils [req-8fa71334-cb49-4916-a1c4-5e7d48f38bf8 req-225ec37e-3264-4c99-aceb-cf977860b851 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a97f7448-7bd5-4505-abcd-8f4e247b9469-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:46 np0005539564 nova_compute[226295]: 2025-11-29 07:55:46.795 226310 DEBUG oslo_concurrency.lockutils [req-8fa71334-cb49-4916-a1c4-5e7d48f38bf8 req-225ec37e-3264-4c99-aceb-cf977860b851 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a97f7448-7bd5-4505-abcd-8f4e247b9469-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:46 np0005539564 nova_compute[226295]: 2025-11-29 07:55:46.795 226310 DEBUG nova.compute.manager [req-8fa71334-cb49-4916-a1c4-5e7d48f38bf8 req-225ec37e-3264-4c99-aceb-cf977860b851 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Processing event network-vif-plugged-884285d4-3cba-4999-a7f4-109bc9191410 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:55:46 np0005539564 nova_compute[226295]: 2025-11-29 07:55:46.837 226310 DEBUG oslo_concurrency.lockutils [None req-84761493-e00d-434f-8813-184b7d07d937 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:46 np0005539564 nova_compute[226295]: 2025-11-29 07:55:46.838 226310 DEBUG oslo_concurrency.lockutils [None req-84761493-e00d-434f-8813-184b7d07d937 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:46 np0005539564 nova_compute[226295]: 2025-11-29 07:55:46.890 226310 DEBUG nova.compute.manager [req-08c2f0ba-3cfb-433a-99bd-55b98711057b req-39a73d68-4c5c-4cba-b997-ed13740725f6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Received event network-vif-deleted-b07d3798-fa8b-45a9-9bc7-ac50043bbe5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:47 np0005539564 nova_compute[226295]: 2025-11-29 07:55:47.671 226310 DEBUG oslo_concurrency.processutils [None req-84761493-e00d-434f-8813-184b7d07d937 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:47 np0005539564 podman[244056]: 2025-11-29 07:55:47.755257599 +0000 UTC m=+0.065732220 container create aeee77d98229a1527debfe7dd33b86948dc82bbc8338ed7a652452c785ea313a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 02:55:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:47 np0005539564 systemd[1]: Started libpod-conmon-aeee77d98229a1527debfe7dd33b86948dc82bbc8338ed7a652452c785ea313a.scope.
Nov 29 02:55:47 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:55:47 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0642390ad11eac377b76e99486cfbeeb070cd89e61983baace1ce24441953827/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:55:47 np0005539564 podman[244056]: 2025-11-29 07:55:47.709785788 +0000 UTC m=+0.020260429 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:55:47 np0005539564 podman[244056]: 2025-11-29 07:55:47.817506833 +0000 UTC m=+0.127981474 container init aeee77d98229a1527debfe7dd33b86948dc82bbc8338ed7a652452c785ea313a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:55:47 np0005539564 podman[244056]: 2025-11-29 07:55:47.825104439 +0000 UTC m=+0.135579060 container start aeee77d98229a1527debfe7dd33b86948dc82bbc8338ed7a652452c785ea313a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 02:55:47 np0005539564 nova_compute[226295]: 2025-11-29 07:55:47.824 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402947.824101, a97f7448-7bd5-4505-abcd-8f4e247b9469 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:55:47 np0005539564 nova_compute[226295]: 2025-11-29 07:55:47.825 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] VM Started (Lifecycle Event)#033[00m
Nov 29 02:55:47 np0005539564 nova_compute[226295]: 2025-11-29 07:55:47.828 226310 DEBUG nova.compute.manager [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:55:47 np0005539564 nova_compute[226295]: 2025-11-29 07:55:47.832 226310 DEBUG nova.virt.libvirt.driver [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:55:47 np0005539564 nova_compute[226295]: 2025-11-29 07:55:47.837 226310 INFO nova.virt.libvirt.driver [-] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Instance spawned successfully.#033[00m
Nov 29 02:55:47 np0005539564 nova_compute[226295]: 2025-11-29 07:55:47.837 226310 DEBUG nova.virt.libvirt.driver [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:55:47 np0005539564 neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03[244095]: [NOTICE]   (244119) : New worker (244121) forked
Nov 29 02:55:47 np0005539564 neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03[244095]: [NOTICE]   (244119) : Loading success.
Nov 29 02:55:47 np0005539564 nova_compute[226295]: 2025-11-29 07:55:47.891 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:47 np0005539564 nova_compute[226295]: 2025-11-29 07:55:47.899 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:55:47 np0005539564 nova_compute[226295]: 2025-11-29 07:55:47.909 226310 DEBUG nova.virt.libvirt.driver [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:55:47 np0005539564 nova_compute[226295]: 2025-11-29 07:55:47.909 226310 DEBUG nova.virt.libvirt.driver [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:55:47 np0005539564 nova_compute[226295]: 2025-11-29 07:55:47.909 226310 DEBUG nova.virt.libvirt.driver [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:55:47 np0005539564 nova_compute[226295]: 2025-11-29 07:55:47.910 226310 DEBUG nova.virt.libvirt.driver [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:55:47 np0005539564 nova_compute[226295]: 2025-11-29 07:55:47.910 226310 DEBUG nova.virt.libvirt.driver [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:55:47 np0005539564 nova_compute[226295]: 2025-11-29 07:55:47.911 226310 DEBUG nova.virt.libvirt.driver [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:55:47 np0005539564 nova_compute[226295]: 2025-11-29 07:55:47.944 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:55:47 np0005539564 nova_compute[226295]: 2025-11-29 07:55:47.948 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402947.8243842, a97f7448-7bd5-4505-abcd-8f4e247b9469 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:55:47 np0005539564 nova_compute[226295]: 2025-11-29 07:55:47.948 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:55:47 np0005539564 nova_compute[226295]: 2025-11-29 07:55:47.986 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:47 np0005539564 nova_compute[226295]: 2025-11-29 07:55:47.989 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402947.8318336, a97f7448-7bd5-4505-abcd-8f4e247b9469 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:55:47 np0005539564 nova_compute[226295]: 2025-11-29 07:55:47.989 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:55:47 np0005539564 nova_compute[226295]: 2025-11-29 07:55:47.997 226310 INFO nova.compute.manager [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Took 16.16 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:55:47 np0005539564 nova_compute[226295]: 2025-11-29 07:55:47.998 226310 DEBUG nova.compute.manager [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:48 np0005539564 nova_compute[226295]: 2025-11-29 07:55:48.007 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:48 np0005539564 nova_compute[226295]: 2025-11-29 07:55:48.011 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:55:48 np0005539564 nova_compute[226295]: 2025-11-29 07:55:48.033 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:55:48 np0005539564 nova_compute[226295]: 2025-11-29 07:55:48.087 226310 INFO nova.compute.manager [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Took 17.11 seconds to build instance.#033[00m
Nov 29 02:55:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:55:48 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1755307174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:55:48 np0005539564 nova_compute[226295]: 2025-11-29 07:55:48.109 226310 DEBUG oslo_concurrency.lockutils [None req-3c06c01b-a379-40b4-bca9-3728bb4c55e7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "a97f7448-7bd5-4505-abcd-8f4e247b9469" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.249s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:48 np0005539564 nova_compute[226295]: 2025-11-29 07:55:48.120 226310 DEBUG oslo_concurrency.processutils [None req-84761493-e00d-434f-8813-184b7d07d937 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:55:48 np0005539564 nova_compute[226295]: 2025-11-29 07:55:48.127 226310 DEBUG nova.compute.provider_tree [None req-84761493-e00d-434f-8813-184b7d07d937 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:55:48 np0005539564 nova_compute[226295]: 2025-11-29 07:55:48.151 226310 DEBUG nova.scheduler.client.report [None req-84761493-e00d-434f-8813-184b7d07d937 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:55:48 np0005539564 nova_compute[226295]: 2025-11-29 07:55:48.172 226310 DEBUG oslo_concurrency.lockutils [None req-84761493-e00d-434f-8813-184b7d07d937 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:48 np0005539564 nova_compute[226295]: 2025-11-29 07:55:48.202 226310 INFO nova.scheduler.client.report [None req-84761493-e00d-434f-8813-184b7d07d937 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Deleted allocations for instance 0fc06623-7a89-42ac-8120-3786201532b9
Nov 29 02:55:48 np0005539564 nova_compute[226295]: 2025-11-29 07:55:48.264 226310 DEBUG oslo_concurrency.lockutils [None req-84761493-e00d-434f-8813-184b7d07d937 45437f208c5a4499acac789fee214724 666052d32183417982e59c456a19c744 - - default default] Lock "0fc06623-7a89-42ac-8120-3786201532b9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:55:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:48.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:55:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:48.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:48 np0005539564 nova_compute[226295]: 2025-11-29 07:55:48.956 226310 DEBUG nova.compute.manager [req-b3760a68-33e6-4430-8eb4-4d7e4a0623b0 req-93e51c1b-a43e-49d5-9105-2c01ddfd133b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Received event network-vif-plugged-884285d4-3cba-4999-a7f4-109bc9191410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:55:48 np0005539564 nova_compute[226295]: 2025-11-29 07:55:48.957 226310 DEBUG oslo_concurrency.lockutils [req-b3760a68-33e6-4430-8eb4-4d7e4a0623b0 req-93e51c1b-a43e-49d5-9105-2c01ddfd133b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a97f7448-7bd5-4505-abcd-8f4e247b9469-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:48 np0005539564 nova_compute[226295]: 2025-11-29 07:55:48.958 226310 DEBUG oslo_concurrency.lockutils [req-b3760a68-33e6-4430-8eb4-4d7e4a0623b0 req-93e51c1b-a43e-49d5-9105-2c01ddfd133b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a97f7448-7bd5-4505-abcd-8f4e247b9469-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:48 np0005539564 nova_compute[226295]: 2025-11-29 07:55:48.958 226310 DEBUG oslo_concurrency.lockutils [req-b3760a68-33e6-4430-8eb4-4d7e4a0623b0 req-93e51c1b-a43e-49d5-9105-2c01ddfd133b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a97f7448-7bd5-4505-abcd-8f4e247b9469-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:48 np0005539564 nova_compute[226295]: 2025-11-29 07:55:48.959 226310 DEBUG nova.compute.manager [req-b3760a68-33e6-4430-8eb4-4d7e4a0623b0 req-93e51c1b-a43e-49d5-9105-2c01ddfd133b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] No waiting events found dispatching network-vif-plugged-884285d4-3cba-4999-a7f4-109bc9191410 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:55:48 np0005539564 nova_compute[226295]: 2025-11-29 07:55:48.959 226310 WARNING nova.compute.manager [req-b3760a68-33e6-4430-8eb4-4d7e4a0623b0 req-93e51c1b-a43e-49d5-9105-2c01ddfd133b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Received unexpected event network-vif-plugged-884285d4-3cba-4999-a7f4-109bc9191410 for instance with vm_state active and task_state None.
Nov 29 02:55:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:50.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:55:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:50.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:55:50 np0005539564 nova_compute[226295]: 2025-11-29 07:55:50.461 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e184 e184: 3 total, 3 up, 3 in
Nov 29 02:55:50 np0005539564 nova_compute[226295]: 2025-11-29 07:55:50.925 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:52.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:52.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:53 np0005539564 nova_compute[226295]: 2025-11-29 07:55:53.293 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402937.775314, 0fc06623-7a89-42ac-8120-3786201532b9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:55:53 np0005539564 nova_compute[226295]: 2025-11-29 07:55:53.293 226310 INFO nova.compute.manager [-] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] VM Stopped (Lifecycle Event)
Nov 29 02:55:53 np0005539564 nova_compute[226295]: 2025-11-29 07:55:53.323 226310 DEBUG nova.compute.manager [None req-c0e07793-9f46-48b1-8618-ea9eb600f49b - - - - - -] [instance: 0fc06623-7a89-42ac-8120-3786201532b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:55:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:54.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:54.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:55 np0005539564 nova_compute[226295]: 2025-11-29 07:55:55.464 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:55 np0005539564 nova_compute[226295]: 2025-11-29 07:55:55.927 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:56.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:56.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:57 np0005539564 ovn_controller[130591]: 2025-11-29T07:55:57Z|00124|binding|INFO|Releasing lport 423869da-3f5f-4bc2-bc87-d86bc5a0c7ce from this chassis (sb_readonly=0)
Nov 29 02:55:57 np0005539564 nova_compute[226295]: 2025-11-29 07:55:57.067 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:57 np0005539564 ovn_controller[130591]: 2025-11-29T07:55:57Z|00125|binding|INFO|Releasing lport 423869da-3f5f-4bc2-bc87-d86bc5a0c7ce from this chassis (sb_readonly=0)
Nov 29 02:55:57 np0005539564 nova_compute[226295]: 2025-11-29 07:55:57.256 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:57 np0005539564 nova_compute[226295]: 2025-11-29 07:55:57.300 226310 DEBUG nova.compute.manager [None req-54893735-f517-427f-9c66-d0e76b203cff dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:55:57 np0005539564 nova_compute[226295]: 2025-11-29 07:55:57.340 226310 INFO nova.compute.manager [None req-54893735-f517-427f-9c66-d0e76b203cff dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] instance snapshotting
Nov 29 02:55:57 np0005539564 nova_compute[226295]: 2025-11-29 07:55:57.617 226310 INFO nova.virt.libvirt.driver [None req-54893735-f517-427f-9c66-d0e76b203cff dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Beginning live snapshot process
Nov 29 02:55:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:57 np0005539564 nova_compute[226295]: 2025-11-29 07:55:57.807 226310 DEBUG nova.virt.libvirt.imagebackend [None req-54893735-f517-427f-9c66-d0e76b203cff dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 02:55:58 np0005539564 nova_compute[226295]: 2025-11-29 07:55:58.095 226310 DEBUG nova.storage.rbd_utils [None req-54893735-f517-427f-9c66-d0e76b203cff dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] creating snapshot(8b7a9e01bbb5404aa9defc53df5823a3) on rbd image(a97f7448-7bd5-4505-abcd-8f4e247b9469_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 02:55:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 02:55:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:58.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 02:55:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:55:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:58.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e185 e185: 3 total, 3 up, 3 in
Nov 29 02:55:58 np0005539564 nova_compute[226295]: 2025-11-29 07:55:58.748 226310 DEBUG nova.storage.rbd_utils [None req-54893735-f517-427f-9c66-d0e76b203cff dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] cloning vms/a97f7448-7bd5-4505-abcd-8f4e247b9469_disk@8b7a9e01bbb5404aa9defc53df5823a3 to images/58caa074-39ec-4b44-8c78-60814a120dd3 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 02:55:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e186 e186: 3 total, 3 up, 3 in
Nov 29 02:55:59 np0005539564 nova_compute[226295]: 2025-11-29 07:55:59.197 226310 DEBUG nova.storage.rbd_utils [None req-54893735-f517-427f-9c66-d0e76b203cff dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] flattening images/58caa074-39ec-4b44-8c78-60814a120dd3 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 02:55:59 np0005539564 nova_compute[226295]: 2025-11-29 07:55:59.477 226310 DEBUG nova.storage.rbd_utils [None req-54893735-f517-427f-9c66-d0e76b203cff dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] removing snapshot(8b7a9e01bbb5404aa9defc53df5823a3) on rbd image(a97f7448-7bd5-4505-abcd-8f4e247b9469_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 02:56:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:00.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:00.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:00 np0005539564 nova_compute[226295]: 2025-11-29 07:56:00.466 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:00 np0005539564 nova_compute[226295]: 2025-11-29 07:56:00.929 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:01 np0005539564 nova_compute[226295]: 2025-11-29 07:56:01.429 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:01.430 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 02:56:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:01.432 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 02:56:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:02.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:02.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e187 e187: 3 total, 3 up, 3 in
Nov 29 02:56:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:03 np0005539564 nova_compute[226295]: 2025-11-29 07:56:03.327 226310 DEBUG nova.storage.rbd_utils [None req-54893735-f517-427f-9c66-d0e76b203cff dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] creating snapshot(snap) on rbd image(58caa074-39ec-4b44-8c78-60814a120dd3) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 02:56:03 np0005539564 podman[244279]: 2025-11-29 07:56:03.533059927 +0000 UTC m=+0.063501809 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:56:03 np0005539564 podman[244278]: 2025-11-29 07:56:03.544061585 +0000 UTC m=+0.077622631 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Nov 29 02:56:03 np0005539564 podman[244277]: 2025-11-29 07:56:03.559654246 +0000 UTC m=+0.093289964 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 29 02:56:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:03.705 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:56:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:03.706 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:56:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:03.706 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:56:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:04.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:04.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e188 e188: 3 total, 3 up, 3 in
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver [None req-54893735-f517-427f-9c66-d0e76b203cff dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 58caa074-39ec-4b44-8c78-60814a120dd3 could not be found.
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 58caa074-39ec-4b44-8c78-60814a120dd3
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver 
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver 
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 58caa074-39ec-4b44-8c78-60814a120dd3 could not be found.
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.315 226310 ERROR nova.virt.libvirt.driver #033[00m
Nov 29 02:56:05 np0005539564 ovn_controller[130591]: 2025-11-29T07:56:05Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d7:bd:9c 10.100.0.10
Nov 29 02:56:05 np0005539564 ovn_controller[130591]: 2025-11-29T07:56:05Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d7:bd:9c 10.100.0.10
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.469 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.573 226310 DEBUG nova.storage.rbd_utils [None req-54893735-f517-427f-9c66-d0e76b203cff dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] removing snapshot(snap) on rbd image(58caa074-39ec-4b44-8c78-60814a120dd3) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 02:56:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e189 e189: 3 total, 3 up, 3 in
Nov 29 02:56:05 np0005539564 nova_compute[226295]: 2025-11-29 07:56:05.931 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:06.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:06.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:08.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:08.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e190 e190: 3 total, 3 up, 3 in
Nov 29 02:56:08 np0005539564 nova_compute[226295]: 2025-11-29 07:56:08.994 226310 WARNING nova.compute.manager [None req-54893735-f517-427f-9c66-d0e76b203cff dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Image not found during snapshot: nova.exception.ImageNotFound: Image 58caa074-39ec-4b44-8c78-60814a120dd3 could not be found.#033[00m
Nov 29 02:56:09 np0005539564 nova_compute[226295]: 2025-11-29 07:56:09.819 226310 DEBUG oslo_concurrency.lockutils [None req-f0771e6a-53be-4c60-b132-33f75475d5c4 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquiring lock "a97f7448-7bd5-4505-abcd-8f4e247b9469" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:09 np0005539564 nova_compute[226295]: 2025-11-29 07:56:09.820 226310 DEBUG oslo_concurrency.lockutils [None req-f0771e6a-53be-4c60-b132-33f75475d5c4 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "a97f7448-7bd5-4505-abcd-8f4e247b9469" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:09 np0005539564 nova_compute[226295]: 2025-11-29 07:56:09.821 226310 DEBUG oslo_concurrency.lockutils [None req-f0771e6a-53be-4c60-b132-33f75475d5c4 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquiring lock "a97f7448-7bd5-4505-abcd-8f4e247b9469-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:09 np0005539564 nova_compute[226295]: 2025-11-29 07:56:09.821 226310 DEBUG oslo_concurrency.lockutils [None req-f0771e6a-53be-4c60-b132-33f75475d5c4 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "a97f7448-7bd5-4505-abcd-8f4e247b9469-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:09 np0005539564 nova_compute[226295]: 2025-11-29 07:56:09.822 226310 DEBUG oslo_concurrency.lockutils [None req-f0771e6a-53be-4c60-b132-33f75475d5c4 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "a97f7448-7bd5-4505-abcd-8f4e247b9469-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:09 np0005539564 nova_compute[226295]: 2025-11-29 07:56:09.824 226310 INFO nova.compute.manager [None req-f0771e6a-53be-4c60-b132-33f75475d5c4 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Terminating instance#033[00m
Nov 29 02:56:09 np0005539564 nova_compute[226295]: 2025-11-29 07:56:09.825 226310 DEBUG nova.compute.manager [None req-f0771e6a-53be-4c60-b132-33f75475d5c4 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:56:09 np0005539564 kernel: tap884285d4-3c (unregistering): left promiscuous mode
Nov 29 02:56:09 np0005539564 NetworkManager[48997]: <info>  [1764402969.8918] device (tap884285d4-3c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:56:09 np0005539564 ovn_controller[130591]: 2025-11-29T07:56:09Z|00126|binding|INFO|Releasing lport 884285d4-3cba-4999-a7f4-109bc9191410 from this chassis (sb_readonly=0)
Nov 29 02:56:09 np0005539564 ovn_controller[130591]: 2025-11-29T07:56:09Z|00127|binding|INFO|Setting lport 884285d4-3cba-4999-a7f4-109bc9191410 down in Southbound
Nov 29 02:56:09 np0005539564 ovn_controller[130591]: 2025-11-29T07:56:09Z|00128|binding|INFO|Removing iface tap884285d4-3c ovn-installed in OVS
Nov 29 02:56:09 np0005539564 nova_compute[226295]: 2025-11-29 07:56:09.905 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:09 np0005539564 nova_compute[226295]: 2025-11-29 07:56:09.908 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:09.913 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:bd:9c 10.100.0.10'], port_security=['fa:16:3e:d7:bd:9c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a97f7448-7bd5-4505-abcd-8f4e247b9469', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57fc634c-e12c-411b-a4cb-47f24328da03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '219d722e6a2c4164be5a30e9565f13a0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '402e53f5-f525-45fd-8980-0b9445b1b6de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7363dfa8-cf81-4433-ae14-b180682ce437, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=884285d4-3cba-4999-a7f4-109bc9191410) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:56:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:09.916 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 884285d4-3cba-4999-a7f4-109bc9191410 in datapath 57fc634c-e12c-411b-a4cb-47f24328da03 unbound from our chassis#033[00m
Nov 29 02:56:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:09.919 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 57fc634c-e12c-411b-a4cb-47f24328da03, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:56:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:09.921 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f138f4bd-2193-449b-a815-2ed9f54d0c64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:09.922 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03 namespace which is not needed anymore#033[00m
Nov 29 02:56:09 np0005539564 nova_compute[226295]: 2025-11-29 07:56:09.947 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:09 np0005539564 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d0000002e.scope: Deactivated successfully.
Nov 29 02:56:09 np0005539564 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d0000002e.scope: Consumed 14.520s CPU time.
Nov 29 02:56:09 np0005539564 systemd-machined[190128]: Machine qemu-21-instance-0000002e terminated.
Nov 29 02:56:10 np0005539564 neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03[244095]: [NOTICE]   (244119) : haproxy version is 2.8.14-c23fe91
Nov 29 02:56:10 np0005539564 neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03[244095]: [NOTICE]   (244119) : path to executable is /usr/sbin/haproxy
Nov 29 02:56:10 np0005539564 neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03[244095]: [WARNING]  (244119) : Exiting Master process...
Nov 29 02:56:10 np0005539564 neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03[244095]: [ALERT]    (244119) : Current worker (244121) exited with code 143 (Terminated)
Nov 29 02:56:10 np0005539564 neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03[244095]: [WARNING]  (244119) : All workers exited. Exiting... (0)
Nov 29 02:56:10 np0005539564 systemd[1]: libpod-aeee77d98229a1527debfe7dd33b86948dc82bbc8338ed7a652452c785ea313a.scope: Deactivated successfully.
Nov 29 02:56:10 np0005539564 podman[244400]: 2025-11-29 07:56:10.05326464 +0000 UTC m=+0.049965333 container died aeee77d98229a1527debfe7dd33b86948dc82bbc8338ed7a652452c785ea313a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 02:56:10 np0005539564 nova_compute[226295]: 2025-11-29 07:56:10.060 226310 INFO nova.virt.libvirt.driver [-] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Instance destroyed successfully.#033[00m
Nov 29 02:56:10 np0005539564 nova_compute[226295]: 2025-11-29 07:56:10.060 226310 DEBUG nova.objects.instance [None req-f0771e6a-53be-4c60-b132-33f75475d5c4 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lazy-loading 'resources' on Instance uuid a97f7448-7bd5-4505-abcd-8f4e247b9469 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:10 np0005539564 nova_compute[226295]: 2025-11-29 07:56:10.076 226310 DEBUG nova.virt.libvirt.vif [None req-f0771e6a-53be-4c60-b132-33f75475d5c4 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:55:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1484859336',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1484859336',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1484859336',id=46,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:55:48Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='219d722e6a2c4164be5a30e9565f13a0',ramdisk_id='',reservation_id='r-isgkjphp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-267959441',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-267959441-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:56:08Z,user_data=None,user_id='dbeeaca97c3e4a1b9417ab3e996f721f',uuid=a97f7448-7bd5-4505-abcd-8f4e247b9469,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "884285d4-3cba-4999-a7f4-109bc9191410", "address": "fa:16:3e:d7:bd:9c", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap884285d4-3c", "ovs_interfaceid": "884285d4-3cba-4999-a7f4-109bc9191410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:56:10 np0005539564 nova_compute[226295]: 2025-11-29 07:56:10.076 226310 DEBUG nova.network.os_vif_util [None req-f0771e6a-53be-4c60-b132-33f75475d5c4 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Converting VIF {"id": "884285d4-3cba-4999-a7f4-109bc9191410", "address": "fa:16:3e:d7:bd:9c", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap884285d4-3c", "ovs_interfaceid": "884285d4-3cba-4999-a7f4-109bc9191410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:56:10 np0005539564 nova_compute[226295]: 2025-11-29 07:56:10.077 226310 DEBUG nova.network.os_vif_util [None req-f0771e6a-53be-4c60-b132-33f75475d5c4 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:bd:9c,bridge_name='br-int',has_traffic_filtering=True,id=884285d4-3cba-4999-a7f4-109bc9191410,network=Network(57fc634c-e12c-411b-a4cb-47f24328da03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap884285d4-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:56:10 np0005539564 nova_compute[226295]: 2025-11-29 07:56:10.077 226310 DEBUG os_vif [None req-f0771e6a-53be-4c60-b132-33f75475d5c4 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:bd:9c,bridge_name='br-int',has_traffic_filtering=True,id=884285d4-3cba-4999-a7f4-109bc9191410,network=Network(57fc634c-e12c-411b-a4cb-47f24328da03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap884285d4-3c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:56:10 np0005539564 nova_compute[226295]: 2025-11-29 07:56:10.078 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:10 np0005539564 nova_compute[226295]: 2025-11-29 07:56:10.079 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap884285d4-3c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:10 np0005539564 nova_compute[226295]: 2025-11-29 07:56:10.081 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:10 np0005539564 nova_compute[226295]: 2025-11-29 07:56:10.083 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:56:10 np0005539564 nova_compute[226295]: 2025-11-29 07:56:10.085 226310 INFO os_vif [None req-f0771e6a-53be-4c60-b132-33f75475d5c4 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:bd:9c,bridge_name='br-int',has_traffic_filtering=True,id=884285d4-3cba-4999-a7f4-109bc9191410,network=Network(57fc634c-e12c-411b-a4cb-47f24328da03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap884285d4-3c')#033[00m
Nov 29 02:56:10 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aeee77d98229a1527debfe7dd33b86948dc82bbc8338ed7a652452c785ea313a-userdata-shm.mount: Deactivated successfully.
Nov 29 02:56:10 np0005539564 systemd[1]: var-lib-containers-storage-overlay-0642390ad11eac377b76e99486cfbeeb070cd89e61983baace1ce24441953827-merged.mount: Deactivated successfully.
Nov 29 02:56:10 np0005539564 podman[244400]: 2025-11-29 07:56:10.103356955 +0000 UTC m=+0.100057638 container cleanup aeee77d98229a1527debfe7dd33b86948dc82bbc8338ed7a652452c785ea313a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:56:10 np0005539564 systemd[1]: libpod-conmon-aeee77d98229a1527debfe7dd33b86948dc82bbc8338ed7a652452c785ea313a.scope: Deactivated successfully.
Nov 29 02:56:10 np0005539564 podman[244458]: 2025-11-29 07:56:10.159801822 +0000 UTC m=+0.036921369 container remove aeee77d98229a1527debfe7dd33b86948dc82bbc8338ed7a652452c785ea313a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:56:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:10.165 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[21d651da-c34b-40dc-9fc8-9527c4295c6e]: (4, ('Sat Nov 29 07:56:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03 (aeee77d98229a1527debfe7dd33b86948dc82bbc8338ed7a652452c785ea313a)\naeee77d98229a1527debfe7dd33b86948dc82bbc8338ed7a652452c785ea313a\nSat Nov 29 07:56:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03 (aeee77d98229a1527debfe7dd33b86948dc82bbc8338ed7a652452c785ea313a)\naeee77d98229a1527debfe7dd33b86948dc82bbc8338ed7a652452c785ea313a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:10.167 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9b0a57cb-5eb4-4519-97da-cca0b77bf97d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:10.168 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap57fc634c-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:10 np0005539564 nova_compute[226295]: 2025-11-29 07:56:10.171 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:10 np0005539564 nova_compute[226295]: 2025-11-29 07:56:10.182 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:10 np0005539564 kernel: tap57fc634c-e0: left promiscuous mode
Nov 29 02:56:10 np0005539564 nova_compute[226295]: 2025-11-29 07:56:10.184 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:10.187 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1d7908af-f121-4e35-a96d-267e6e653e80]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:10.205 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7760b103-e653-4c2f-a59e-f9208412dafd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:10.206 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9110a80a-e619-45a6-94bc-c3c61e8f7bbe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:10.222 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[79c605d6-a1c9-4911-ae66-c7bd7cfda728]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592113, 'reachable_time': 21274, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244476, 'error': None, 'target': 'ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:10.225 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:56:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:10.225 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[1c33d0fe-8690-4c2f-b4ce-00291d28a5c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:10 np0005539564 systemd[1]: run-netns-ovnmeta\x2d57fc634c\x2de12c\x2d411b\x2da4cb\x2d47f24328da03.mount: Deactivated successfully.
Nov 29 02:56:10 np0005539564 nova_compute[226295]: 2025-11-29 07:56:10.300 226310 DEBUG nova.compute.manager [req-5f0f5c15-2a0a-4a7f-8ed2-15f490e0578d req-17c3713f-7eba-4a68-bdde-5d2c43065b60 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Received event network-vif-unplugged-884285d4-3cba-4999-a7f4-109bc9191410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:10 np0005539564 nova_compute[226295]: 2025-11-29 07:56:10.301 226310 DEBUG oslo_concurrency.lockutils [req-5f0f5c15-2a0a-4a7f-8ed2-15f490e0578d req-17c3713f-7eba-4a68-bdde-5d2c43065b60 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a97f7448-7bd5-4505-abcd-8f4e247b9469-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:10 np0005539564 nova_compute[226295]: 2025-11-29 07:56:10.301 226310 DEBUG oslo_concurrency.lockutils [req-5f0f5c15-2a0a-4a7f-8ed2-15f490e0578d req-17c3713f-7eba-4a68-bdde-5d2c43065b60 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a97f7448-7bd5-4505-abcd-8f4e247b9469-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:10 np0005539564 nova_compute[226295]: 2025-11-29 07:56:10.301 226310 DEBUG oslo_concurrency.lockutils [req-5f0f5c15-2a0a-4a7f-8ed2-15f490e0578d req-17c3713f-7eba-4a68-bdde-5d2c43065b60 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a97f7448-7bd5-4505-abcd-8f4e247b9469-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:10 np0005539564 nova_compute[226295]: 2025-11-29 07:56:10.302 226310 DEBUG nova.compute.manager [req-5f0f5c15-2a0a-4a7f-8ed2-15f490e0578d req-17c3713f-7eba-4a68-bdde-5d2c43065b60 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] No waiting events found dispatching network-vif-unplugged-884285d4-3cba-4999-a7f4-109bc9191410 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:56:10 np0005539564 nova_compute[226295]: 2025-11-29 07:56:10.302 226310 DEBUG nova.compute.manager [req-5f0f5c15-2a0a-4a7f-8ed2-15f490e0578d req-17c3713f-7eba-4a68-bdde-5d2c43065b60 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Received event network-vif-unplugged-884285d4-3cba-4999-a7f4-109bc9191410 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:56:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:10.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:56:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:10.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:56:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:10.435 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:10 np0005539564 nova_compute[226295]: 2025-11-29 07:56:10.933 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:10 np0005539564 nova_compute[226295]: 2025-11-29 07:56:10.938 226310 INFO nova.virt.libvirt.driver [None req-f0771e6a-53be-4c60-b132-33f75475d5c4 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Deleting instance files /var/lib/nova/instances/a97f7448-7bd5-4505-abcd-8f4e247b9469_del#033[00m
Nov 29 02:56:10 np0005539564 nova_compute[226295]: 2025-11-29 07:56:10.940 226310 INFO nova.virt.libvirt.driver [None req-f0771e6a-53be-4c60-b132-33f75475d5c4 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Deletion of /var/lib/nova/instances/a97f7448-7bd5-4505-abcd-8f4e247b9469_del complete#033[00m
Nov 29 02:56:11 np0005539564 nova_compute[226295]: 2025-11-29 07:56:11.005 226310 INFO nova.compute.manager [None req-f0771e6a-53be-4c60-b132-33f75475d5c4 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Took 1.18 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:56:11 np0005539564 nova_compute[226295]: 2025-11-29 07:56:11.006 226310 DEBUG oslo.service.loopingcall [None req-f0771e6a-53be-4c60-b132-33f75475d5c4 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:56:11 np0005539564 nova_compute[226295]: 2025-11-29 07:56:11.006 226310 DEBUG nova.compute.manager [-] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:56:11 np0005539564 nova_compute[226295]: 2025-11-29 07:56:11.006 226310 DEBUG nova.network.neutron [-] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:56:11 np0005539564 nova_compute[226295]: 2025-11-29 07:56:11.797 226310 DEBUG nova.network.neutron [-] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:56:11 np0005539564 nova_compute[226295]: 2025-11-29 07:56:11.822 226310 INFO nova.compute.manager [-] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Took 0.82 seconds to deallocate network for instance.#033[00m
Nov 29 02:56:11 np0005539564 nova_compute[226295]: 2025-11-29 07:56:11.890 226310 DEBUG oslo_concurrency.lockutils [None req-f0771e6a-53be-4c60-b132-33f75475d5c4 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:11 np0005539564 nova_compute[226295]: 2025-11-29 07:56:11.891 226310 DEBUG oslo_concurrency.lockutils [None req-f0771e6a-53be-4c60-b132-33f75475d5c4 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:11 np0005539564 nova_compute[226295]: 2025-11-29 07:56:11.954 226310 DEBUG oslo_concurrency.processutils [None req-f0771e6a-53be-4c60-b132-33f75475d5c4 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:56:12 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1247941767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:56:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:12.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:12 np0005539564 nova_compute[226295]: 2025-11-29 07:56:12.394 226310 DEBUG oslo_concurrency.processutils [None req-f0771e6a-53be-4c60-b132-33f75475d5c4 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:12.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:12 np0005539564 nova_compute[226295]: 2025-11-29 07:56:12.412 226310 DEBUG nova.compute.manager [req-4fb28f78-5674-4ca3-9b4a-abbee96637de req-3347cbd4-91b1-408d-9671-031348153335 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Received event network-vif-plugged-884285d4-3cba-4999-a7f4-109bc9191410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:12 np0005539564 nova_compute[226295]: 2025-11-29 07:56:12.413 226310 DEBUG oslo_concurrency.lockutils [req-4fb28f78-5674-4ca3-9b4a-abbee96637de req-3347cbd4-91b1-408d-9671-031348153335 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a97f7448-7bd5-4505-abcd-8f4e247b9469-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:12 np0005539564 nova_compute[226295]: 2025-11-29 07:56:12.413 226310 DEBUG oslo_concurrency.lockutils [req-4fb28f78-5674-4ca3-9b4a-abbee96637de req-3347cbd4-91b1-408d-9671-031348153335 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a97f7448-7bd5-4505-abcd-8f4e247b9469-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:12 np0005539564 nova_compute[226295]: 2025-11-29 07:56:12.414 226310 DEBUG oslo_concurrency.lockutils [req-4fb28f78-5674-4ca3-9b4a-abbee96637de req-3347cbd4-91b1-408d-9671-031348153335 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a97f7448-7bd5-4505-abcd-8f4e247b9469-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:12 np0005539564 nova_compute[226295]: 2025-11-29 07:56:12.414 226310 DEBUG nova.compute.manager [req-4fb28f78-5674-4ca3-9b4a-abbee96637de req-3347cbd4-91b1-408d-9671-031348153335 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] No waiting events found dispatching network-vif-plugged-884285d4-3cba-4999-a7f4-109bc9191410 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:56:12 np0005539564 nova_compute[226295]: 2025-11-29 07:56:12.415 226310 WARNING nova.compute.manager [req-4fb28f78-5674-4ca3-9b4a-abbee96637de req-3347cbd4-91b1-408d-9671-031348153335 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Received unexpected event network-vif-plugged-884285d4-3cba-4999-a7f4-109bc9191410 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 02:56:12 np0005539564 nova_compute[226295]: 2025-11-29 07:56:12.419 226310 DEBUG nova.compute.provider_tree [None req-f0771e6a-53be-4c60-b132-33f75475d5c4 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:56:12 np0005539564 nova_compute[226295]: 2025-11-29 07:56:12.440 226310 DEBUG nova.scheduler.client.report [None req-f0771e6a-53be-4c60-b132-33f75475d5c4 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:56:12 np0005539564 nova_compute[226295]: 2025-11-29 07:56:12.473 226310 DEBUG oslo_concurrency.lockutils [None req-f0771e6a-53be-4c60-b132-33f75475d5c4 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:12 np0005539564 nova_compute[226295]: 2025-11-29 07:56:12.495 226310 INFO nova.scheduler.client.report [None req-f0771e6a-53be-4c60-b132-33f75475d5c4 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Deleted allocations for instance a97f7448-7bd5-4505-abcd-8f4e247b9469#033[00m
Nov 29 02:56:12 np0005539564 nova_compute[226295]: 2025-11-29 07:56:12.563 226310 DEBUG oslo_concurrency.lockutils [None req-f0771e6a-53be-4c60-b132-33f75475d5c4 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "a97f7448-7bd5-4505-abcd-8f4e247b9469" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:13 np0005539564 nova_compute[226295]: 2025-11-29 07:56:13.617 226310 DEBUG nova.compute.manager [req-277d9ce5-b33f-4edc-8084-7f3ca38e98c1 req-c2fc296a-ccdc-493c-a458-eb89a062e088 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Received event network-vif-deleted-884285d4-3cba-4999-a7f4-109bc9191410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e191 e191: 3 total, 3 up, 3 in
Nov 29 02:56:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 02:56:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:14.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 02:56:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:14.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:15 np0005539564 nova_compute[226295]: 2025-11-29 07:56:15.081 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:15 np0005539564 nova_compute[226295]: 2025-11-29 07:56:15.937 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:16.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:56:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:16.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:56:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:18.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:18.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:20 np0005539564 nova_compute[226295]: 2025-11-29 07:56:20.083 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:20.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:20.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:20 np0005539564 nova_compute[226295]: 2025-11-29 07:56:20.937 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:56:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:22.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:56:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:22.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:23 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Nov 29 02:56:23 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:56:23.784800) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:56:23 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Nov 29 02:56:23 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402983784901, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2538, "num_deletes": 259, "total_data_size": 5840196, "memory_usage": 5906160, "flush_reason": "Manual Compaction"}
Nov 29 02:56:23 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Nov 29 02:56:23 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402983852342, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 3805713, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29133, "largest_seqno": 31666, "table_properties": {"data_size": 3795240, "index_size": 6711, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22599, "raw_average_key_size": 21, "raw_value_size": 3773958, "raw_average_value_size": 3527, "num_data_blocks": 289, "num_entries": 1070, "num_filter_entries": 1070, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402783, "oldest_key_time": 1764402783, "file_creation_time": 1764402983, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:56:23 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 67595 microseconds, and 10286 cpu microseconds.
Nov 29 02:56:23 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:56:23 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:56:23.852406) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 3805713 bytes OK
Nov 29 02:56:23 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:56:23.852433) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Nov 29 02:56:23 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:56:23.860059) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Nov 29 02:56:23 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:56:23.860101) EVENT_LOG_v1 {"time_micros": 1764402983860092, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:56:23 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:56:23.860126) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:56:23 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 5828807, prev total WAL file size 5828807, number of live WAL files 2.
Nov 29 02:56:23 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:56:23 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:56:23.861389) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 29 02:56:23 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:56:23 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(3716KB)], [57(7551KB)]
Nov 29 02:56:23 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402983861449, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 11538539, "oldest_snapshot_seqno": -1}
Nov 29 02:56:24 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 6001 keys, 9552505 bytes, temperature: kUnknown
Nov 29 02:56:24 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402984010547, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 9552505, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9512321, "index_size": 24074, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15045, "raw_key_size": 154328, "raw_average_key_size": 25, "raw_value_size": 9404277, "raw_average_value_size": 1567, "num_data_blocks": 966, "num_entries": 6001, "num_filter_entries": 6001, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764402983, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:56:24 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:56:24 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:56:24.011106) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 9552505 bytes
Nov 29 02:56:24 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:56:24.072642) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 77.3 rd, 64.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 7.4 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(5.5) write-amplify(2.5) OK, records in: 6537, records dropped: 536 output_compression: NoCompression
Nov 29 02:56:24 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:56:24.072688) EVENT_LOG_v1 {"time_micros": 1764402984072670, "job": 34, "event": "compaction_finished", "compaction_time_micros": 149286, "compaction_time_cpu_micros": 44723, "output_level": 6, "num_output_files": 1, "total_output_size": 9552505, "num_input_records": 6537, "num_output_records": 6001, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:56:24 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:56:24 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402984074203, "job": 34, "event": "table_file_deletion", "file_number": 59}
Nov 29 02:56:24 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:56:24 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402984076453, "job": 34, "event": "table_file_deletion", "file_number": 57}
Nov 29 02:56:24 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:56:23.861216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:56:24 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:56:24.076611) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:56:24 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:56:24.076616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:56:24 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:56:24.076618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:56:24 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:56:24.076619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:56:24 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:56:24.076620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:56:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:24.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:56:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:24.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:56:24 np0005539564 nova_compute[226295]: 2025-11-29 07:56:24.781 226310 DEBUG oslo_concurrency.lockutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Acquiring lock "4cc38edb-3892-4cbc-ad22-7206fb37109d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:24 np0005539564 nova_compute[226295]: 2025-11-29 07:56:24.782 226310 DEBUG oslo_concurrency.lockutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Lock "4cc38edb-3892-4cbc-ad22-7206fb37109d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:24 np0005539564 nova_compute[226295]: 2025-11-29 07:56:24.811 226310 DEBUG nova.compute.manager [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:56:24 np0005539564 nova_compute[226295]: 2025-11-29 07:56:24.913 226310 DEBUG oslo_concurrency.lockutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:24 np0005539564 nova_compute[226295]: 2025-11-29 07:56:24.913 226310 DEBUG oslo_concurrency.lockutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:24 np0005539564 nova_compute[226295]: 2025-11-29 07:56:24.921 226310 DEBUG nova.virt.hardware [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:56:24 np0005539564 nova_compute[226295]: 2025-11-29 07:56:24.922 226310 INFO nova.compute.claims [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.011 226310 DEBUG oslo_concurrency.processutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.057 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402970.0555944, a97f7448-7bd5-4505-abcd-8f4e247b9469 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.057 226310 INFO nova.compute.manager [-] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.077 226310 DEBUG nova.compute.manager [None req-f792362b-2fd8-488a-ab0c-98a7b575fcef - - - - - -] [instance: a97f7448-7bd5-4505-abcd-8f4e247b9469] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.085 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:56:25 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1529461897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.445 226310 DEBUG oslo_concurrency.processutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.452 226310 DEBUG nova.compute.provider_tree [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.467 226310 DEBUG nova.scheduler.client.report [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.490 226310 DEBUG oslo_concurrency.lockutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.491 226310 DEBUG nova.compute.manager [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.531 226310 DEBUG nova.compute.manager [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.532 226310 DEBUG nova.network.neutron [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.552 226310 INFO nova.virt.libvirt.driver [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.570 226310 DEBUG nova.compute.manager [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.652 226310 DEBUG nova.compute.manager [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.654 226310 DEBUG nova.virt.libvirt.driver [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.655 226310 INFO nova.virt.libvirt.driver [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Creating image(s)#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.688 226310 DEBUG nova.storage.rbd_utils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] rbd image 4cc38edb-3892-4cbc-ad22-7206fb37109d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.724 226310 DEBUG nova.storage.rbd_utils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] rbd image 4cc38edb-3892-4cbc-ad22-7206fb37109d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.762 226310 DEBUG nova.storage.rbd_utils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] rbd image 4cc38edb-3892-4cbc-ad22-7206fb37109d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.767 226310 DEBUG oslo_concurrency.processutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.795 226310 DEBUG nova.policy [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b2176b32c3ee4e8fb0a6fa804f37c849', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bbde6bd5a35f4b9ea2c82fe2a2ad72fe', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.830 226310 DEBUG oslo_concurrency.processutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.831 226310 DEBUG oslo_concurrency.lockutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.832 226310 DEBUG oslo_concurrency.lockutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.832 226310 DEBUG oslo_concurrency.lockutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.869 226310 DEBUG nova.storage.rbd_utils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] rbd image 4cc38edb-3892-4cbc-ad22-7206fb37109d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.873 226310 DEBUG oslo_concurrency.processutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 4cc38edb-3892-4cbc-ad22-7206fb37109d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:25 np0005539564 nova_compute[226295]: 2025-11-29 07:56:25.939 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:26 np0005539564 nova_compute[226295]: 2025-11-29 07:56:26.397 226310 DEBUG oslo_concurrency.processutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 4cc38edb-3892-4cbc-ad22-7206fb37109d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:26.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:26.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:26 np0005539564 nova_compute[226295]: 2025-11-29 07:56:26.481 226310 DEBUG nova.storage.rbd_utils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] resizing rbd image 4cc38edb-3892-4cbc-ad22-7206fb37109d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:56:26 np0005539564 nova_compute[226295]: 2025-11-29 07:56:26.676 226310 DEBUG nova.network.neutron [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Successfully created port: 15f7dd7c-2638-443f-9e11-42032d9a683b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:56:27 np0005539564 nova_compute[226295]: 2025-11-29 07:56:27.008 226310 DEBUG nova.objects.instance [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Lazy-loading 'migration_context' on Instance uuid 4cc38edb-3892-4cbc-ad22-7206fb37109d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:27 np0005539564 nova_compute[226295]: 2025-11-29 07:56:27.022 226310 DEBUG nova.virt.libvirt.driver [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:56:27 np0005539564 nova_compute[226295]: 2025-11-29 07:56:27.022 226310 DEBUG nova.virt.libvirt.driver [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Ensure instance console log exists: /var/lib/nova/instances/4cc38edb-3892-4cbc-ad22-7206fb37109d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:56:27 np0005539564 nova_compute[226295]: 2025-11-29 07:56:27.023 226310 DEBUG oslo_concurrency.lockutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:27 np0005539564 nova_compute[226295]: 2025-11-29 07:56:27.023 226310 DEBUG oslo_concurrency.lockutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:27 np0005539564 nova_compute[226295]: 2025-11-29 07:56:27.023 226310 DEBUG oslo_concurrency.lockutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:56:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3663190925' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:56:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:56:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3663190925' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:56:28 np0005539564 nova_compute[226295]: 2025-11-29 07:56:28.051 226310 DEBUG nova.network.neutron [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Successfully updated port: 15f7dd7c-2638-443f-9e11-42032d9a683b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:56:28 np0005539564 nova_compute[226295]: 2025-11-29 07:56:28.078 226310 DEBUG oslo_concurrency.lockutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Acquiring lock "refresh_cache-4cc38edb-3892-4cbc-ad22-7206fb37109d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:56:28 np0005539564 nova_compute[226295]: 2025-11-29 07:56:28.079 226310 DEBUG oslo_concurrency.lockutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Acquired lock "refresh_cache-4cc38edb-3892-4cbc-ad22-7206fb37109d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:56:28 np0005539564 nova_compute[226295]: 2025-11-29 07:56:28.079 226310 DEBUG nova.network.neutron [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:56:28 np0005539564 nova_compute[226295]: 2025-11-29 07:56:28.252 226310 DEBUG nova.compute.manager [req-c8c5b3be-ec34-4b6f-bf5d-80129fca8a3d req-d95422be-15f5-4382-962a-95ceb10c593a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Received event network-changed-15f7dd7c-2638-443f-9e11-42032d9a683b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:28 np0005539564 nova_compute[226295]: 2025-11-29 07:56:28.253 226310 DEBUG nova.compute.manager [req-c8c5b3be-ec34-4b6f-bf5d-80129fca8a3d req-d95422be-15f5-4382-962a-95ceb10c593a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Refreshing instance network info cache due to event network-changed-15f7dd7c-2638-443f-9e11-42032d9a683b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:56:28 np0005539564 nova_compute[226295]: 2025-11-29 07:56:28.253 226310 DEBUG oslo_concurrency.lockutils [req-c8c5b3be-ec34-4b6f-bf5d-80129fca8a3d req-d95422be-15f5-4382-962a-95ceb10c593a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-4cc38edb-3892-4cbc-ad22-7206fb37109d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:56:28 np0005539564 nova_compute[226295]: 2025-11-29 07:56:28.340 226310 DEBUG nova.network.neutron [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:56:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:28.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:28.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:29 np0005539564 nova_compute[226295]: 2025-11-29 07:56:29.295 226310 DEBUG nova.network.neutron [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Updating instance_info_cache with network_info: [{"id": "15f7dd7c-2638-443f-9e11-42032d9a683b", "address": "fa:16:3e:77:d8:c1", "network": {"id": "664396be-cab9-4a01-a67d-9d46d772620f", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1287151791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bbde6bd5a35f4b9ea2c82fe2a2ad72fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15f7dd7c-26", "ovs_interfaceid": "15f7dd7c-2638-443f-9e11-42032d9a683b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:56:29 np0005539564 nova_compute[226295]: 2025-11-29 07:56:29.320 226310 DEBUG oslo_concurrency.lockutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Releasing lock "refresh_cache-4cc38edb-3892-4cbc-ad22-7206fb37109d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:56:29 np0005539564 nova_compute[226295]: 2025-11-29 07:56:29.321 226310 DEBUG nova.compute.manager [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Instance network_info: |[{"id": "15f7dd7c-2638-443f-9e11-42032d9a683b", "address": "fa:16:3e:77:d8:c1", "network": {"id": "664396be-cab9-4a01-a67d-9d46d772620f", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1287151791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bbde6bd5a35f4b9ea2c82fe2a2ad72fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15f7dd7c-26", "ovs_interfaceid": "15f7dd7c-2638-443f-9e11-42032d9a683b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:56:29 np0005539564 nova_compute[226295]: 2025-11-29 07:56:29.322 226310 DEBUG oslo_concurrency.lockutils [req-c8c5b3be-ec34-4b6f-bf5d-80129fca8a3d req-d95422be-15f5-4382-962a-95ceb10c593a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-4cc38edb-3892-4cbc-ad22-7206fb37109d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:56:29 np0005539564 nova_compute[226295]: 2025-11-29 07:56:29.323 226310 DEBUG nova.network.neutron [req-c8c5b3be-ec34-4b6f-bf5d-80129fca8a3d req-d95422be-15f5-4382-962a-95ceb10c593a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Refreshing network info cache for port 15f7dd7c-2638-443f-9e11-42032d9a683b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:56:29 np0005539564 nova_compute[226295]: 2025-11-29 07:56:29.326 226310 DEBUG nova.virt.libvirt.driver [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Start _get_guest_xml network_info=[{"id": "15f7dd7c-2638-443f-9e11-42032d9a683b", "address": "fa:16:3e:77:d8:c1", "network": {"id": "664396be-cab9-4a01-a67d-9d46d772620f", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1287151791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bbde6bd5a35f4b9ea2c82fe2a2ad72fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15f7dd7c-26", "ovs_interfaceid": "15f7dd7c-2638-443f-9e11-42032d9a683b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:56:29 np0005539564 nova_compute[226295]: 2025-11-29 07:56:29.330 226310 WARNING nova.virt.libvirt.driver [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:56:29 np0005539564 nova_compute[226295]: 2025-11-29 07:56:29.335 226310 DEBUG nova.virt.libvirt.host [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:56:29 np0005539564 nova_compute[226295]: 2025-11-29 07:56:29.336 226310 DEBUG nova.virt.libvirt.host [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:56:29 np0005539564 nova_compute[226295]: 2025-11-29 07:56:29.341 226310 DEBUG nova.virt.libvirt.host [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:56:29 np0005539564 nova_compute[226295]: 2025-11-29 07:56:29.341 226310 DEBUG nova.virt.libvirt.host [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:56:29 np0005539564 nova_compute[226295]: 2025-11-29 07:56:29.344 226310 DEBUG nova.virt.libvirt.driver [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:56:29 np0005539564 nova_compute[226295]: 2025-11-29 07:56:29.345 226310 DEBUG nova.virt.hardware [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:56:29 np0005539564 nova_compute[226295]: 2025-11-29 07:56:29.345 226310 DEBUG nova.virt.hardware [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:56:29 np0005539564 nova_compute[226295]: 2025-11-29 07:56:29.346 226310 DEBUG nova.virt.hardware [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:56:29 np0005539564 nova_compute[226295]: 2025-11-29 07:56:29.346 226310 DEBUG nova.virt.hardware [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:56:29 np0005539564 nova_compute[226295]: 2025-11-29 07:56:29.346 226310 DEBUG nova.virt.hardware [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:56:29 np0005539564 nova_compute[226295]: 2025-11-29 07:56:29.347 226310 DEBUG nova.virt.hardware [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:56:29 np0005539564 nova_compute[226295]: 2025-11-29 07:56:29.347 226310 DEBUG nova.virt.hardware [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:56:29 np0005539564 nova_compute[226295]: 2025-11-29 07:56:29.347 226310 DEBUG nova.virt.hardware [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:56:29 np0005539564 nova_compute[226295]: 2025-11-29 07:56:29.348 226310 DEBUG nova.virt.hardware [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:56:29 np0005539564 nova_compute[226295]: 2025-11-29 07:56:29.348 226310 DEBUG nova.virt.hardware [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:56:29 np0005539564 nova_compute[226295]: 2025-11-29 07:56:29.348 226310 DEBUG nova.virt.hardware [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:56:29 np0005539564 nova_compute[226295]: 2025-11-29 07:56:29.352 226310 DEBUG oslo_concurrency.processutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:29 np0005539564 nova_compute[226295]: 2025-11-29 07:56:29.776 226310 DEBUG oslo_concurrency.processutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:29 np0005539564 nova_compute[226295]: 2025-11-29 07:56:29.815 226310 DEBUG nova.storage.rbd_utils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] rbd image 4cc38edb-3892-4cbc-ad22-7206fb37109d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:29 np0005539564 nova_compute[226295]: 2025-11-29 07:56:29.820 226310 DEBUG oslo_concurrency.processutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.087 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:56:30 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4230568782' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.286 226310 DEBUG oslo_concurrency.processutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.288 226310 DEBUG nova.virt.libvirt.vif [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:56:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-208473173',display_name='tempest-ImagesNegativeTestJSON-server-208473173',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-208473173',id=49,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bbde6bd5a35f4b9ea2c82fe2a2ad72fe',ramdisk_id='',reservation_id='r-v5ymn0cy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-1920320020',owner_user_name='tempest-ImagesNegativeTestJSON-1920320020-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:56:25Z,user_data=None,user_id='b2176b32c3ee4e8fb0a6fa804f37c849',uuid=4cc38edb-3892-4cbc-ad22-7206fb37109d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "15f7dd7c-2638-443f-9e11-42032d9a683b", "address": "fa:16:3e:77:d8:c1", "network": {"id": "664396be-cab9-4a01-a67d-9d46d772620f", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1287151791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bbde6bd5a35f4b9ea2c82fe2a2ad72fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15f7dd7c-26", "ovs_interfaceid": "15f7dd7c-2638-443f-9e11-42032d9a683b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.288 226310 DEBUG nova.network.os_vif_util [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Converting VIF {"id": "15f7dd7c-2638-443f-9e11-42032d9a683b", "address": "fa:16:3e:77:d8:c1", "network": {"id": "664396be-cab9-4a01-a67d-9d46d772620f", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1287151791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bbde6bd5a35f4b9ea2c82fe2a2ad72fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15f7dd7c-26", "ovs_interfaceid": "15f7dd7c-2638-443f-9e11-42032d9a683b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.289 226310 DEBUG nova.network.os_vif_util [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:d8:c1,bridge_name='br-int',has_traffic_filtering=True,id=15f7dd7c-2638-443f-9e11-42032d9a683b,network=Network(664396be-cab9-4a01-a67d-9d46d772620f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15f7dd7c-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.290 226310 DEBUG nova.objects.instance [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Lazy-loading 'pci_devices' on Instance uuid 4cc38edb-3892-4cbc-ad22-7206fb37109d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.312 226310 DEBUG nova.virt.libvirt.driver [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:56:30 np0005539564 nova_compute[226295]:  <uuid>4cc38edb-3892-4cbc-ad22-7206fb37109d</uuid>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:  <name>instance-00000031</name>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <nova:name>tempest-ImagesNegativeTestJSON-server-208473173</nova:name>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 07:56:29</nova:creationTime>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 02:56:30 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:        <nova:user uuid="b2176b32c3ee4e8fb0a6fa804f37c849">tempest-ImagesNegativeTestJSON-1920320020-project-member</nova:user>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:        <nova:project uuid="bbde6bd5a35f4b9ea2c82fe2a2ad72fe">tempest-ImagesNegativeTestJSON-1920320020</nova:project>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:        <nova:port uuid="15f7dd7c-2638-443f-9e11-42032d9a683b">
Nov 29 02:56:30 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <system>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <entry name="serial">4cc38edb-3892-4cbc-ad22-7206fb37109d</entry>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <entry name="uuid">4cc38edb-3892-4cbc-ad22-7206fb37109d</entry>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    </system>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:  <os>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:  </clock>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/4cc38edb-3892-4cbc-ad22-7206fb37109d_disk">
Nov 29 02:56:30 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:56:30 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/4cc38edb-3892-4cbc-ad22-7206fb37109d_disk.config">
Nov 29 02:56:30 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:56:30 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:77:d8:c1"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <target dev="tap15f7dd7c-26"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    </interface>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/4cc38edb-3892-4cbc-ad22-7206fb37109d/console.log" append="off"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    </serial>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <video>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 02:56:30 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 02:56:30 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:56:30 np0005539564 nova_compute[226295]: </domain>
Nov 29 02:56:30 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.315 226310 DEBUG nova.compute.manager [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Preparing to wait for external event network-vif-plugged-15f7dd7c-2638-443f-9e11-42032d9a683b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.316 226310 DEBUG oslo_concurrency.lockutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Acquiring lock "4cc38edb-3892-4cbc-ad22-7206fb37109d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.317 226310 DEBUG oslo_concurrency.lockutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Lock "4cc38edb-3892-4cbc-ad22-7206fb37109d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.317 226310 DEBUG oslo_concurrency.lockutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Lock "4cc38edb-3892-4cbc-ad22-7206fb37109d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.319 226310 DEBUG nova.virt.libvirt.vif [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:56:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-208473173',display_name='tempest-ImagesNegativeTestJSON-server-208473173',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-208473173',id=49,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bbde6bd5a35f4b9ea2c82fe2a2ad72fe',ramdisk_id='',reservation_id='r-v5ymn0cy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-1920320020',owner_user_name='tempest-ImagesNe
gativeTestJSON-1920320020-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:56:25Z,user_data=None,user_id='b2176b32c3ee4e8fb0a6fa804f37c849',uuid=4cc38edb-3892-4cbc-ad22-7206fb37109d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "15f7dd7c-2638-443f-9e11-42032d9a683b", "address": "fa:16:3e:77:d8:c1", "network": {"id": "664396be-cab9-4a01-a67d-9d46d772620f", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1287151791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bbde6bd5a35f4b9ea2c82fe2a2ad72fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15f7dd7c-26", "ovs_interfaceid": "15f7dd7c-2638-443f-9e11-42032d9a683b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.319 226310 DEBUG nova.network.os_vif_util [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Converting VIF {"id": "15f7dd7c-2638-443f-9e11-42032d9a683b", "address": "fa:16:3e:77:d8:c1", "network": {"id": "664396be-cab9-4a01-a67d-9d46d772620f", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1287151791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bbde6bd5a35f4b9ea2c82fe2a2ad72fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15f7dd7c-26", "ovs_interfaceid": "15f7dd7c-2638-443f-9e11-42032d9a683b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.320 226310 DEBUG nova.network.os_vif_util [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:d8:c1,bridge_name='br-int',has_traffic_filtering=True,id=15f7dd7c-2638-443f-9e11-42032d9a683b,network=Network(664396be-cab9-4a01-a67d-9d46d772620f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15f7dd7c-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.324 226310 DEBUG os_vif [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:d8:c1,bridge_name='br-int',has_traffic_filtering=True,id=15f7dd7c-2638-443f-9e11-42032d9a683b,network=Network(664396be-cab9-4a01-a67d-9d46d772620f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15f7dd7c-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.325 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.325 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.326 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.330 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.330 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15f7dd7c-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.331 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap15f7dd7c-26, col_values=(('external_ids', {'iface-id': '15f7dd7c-2638-443f-9e11-42032d9a683b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:d8:c1', 'vm-uuid': '4cc38edb-3892-4cbc-ad22-7206fb37109d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:30 np0005539564 NetworkManager[48997]: <info>  [1764402990.3609] manager: (tap15f7dd7c-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.361 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.364 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.366 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.367 226310 INFO os_vif [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:d8:c1,bridge_name='br-int',has_traffic_filtering=True,id=15f7dd7c-2638-443f-9e11-42032d9a683b,network=Network(664396be-cab9-4a01-a67d-9d46d772620f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15f7dd7c-26')#033[00m
Nov 29 02:56:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:30.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.433 226310 DEBUG nova.virt.libvirt.driver [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.434 226310 DEBUG nova.virt.libvirt.driver [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.434 226310 DEBUG nova.virt.libvirt.driver [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] No VIF found with MAC fa:16:3e:77:d8:c1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.434 226310 INFO nova.virt.libvirt.driver [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Using config drive#033[00m
Nov 29 02:56:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:30.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.467 226310 DEBUG nova.storage.rbd_utils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] rbd image 4cc38edb-3892-4cbc-ad22-7206fb37109d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.623 226310 DEBUG nova.network.neutron [req-c8c5b3be-ec34-4b6f-bf5d-80129fca8a3d req-d95422be-15f5-4382-962a-95ceb10c593a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Updated VIF entry in instance network info cache for port 15f7dd7c-2638-443f-9e11-42032d9a683b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.624 226310 DEBUG nova.network.neutron [req-c8c5b3be-ec34-4b6f-bf5d-80129fca8a3d req-d95422be-15f5-4382-962a-95ceb10c593a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Updating instance_info_cache with network_info: [{"id": "15f7dd7c-2638-443f-9e11-42032d9a683b", "address": "fa:16:3e:77:d8:c1", "network": {"id": "664396be-cab9-4a01-a67d-9d46d772620f", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1287151791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bbde6bd5a35f4b9ea2c82fe2a2ad72fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15f7dd7c-26", "ovs_interfaceid": "15f7dd7c-2638-443f-9e11-42032d9a683b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.641 226310 DEBUG oslo_concurrency.lockutils [req-c8c5b3be-ec34-4b6f-bf5d-80129fca8a3d req-d95422be-15f5-4382-962a-95ceb10c593a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-4cc38edb-3892-4cbc-ad22-7206fb37109d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.941 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.985 226310 INFO nova.virt.libvirt.driver [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Creating config drive at /var/lib/nova/instances/4cc38edb-3892-4cbc-ad22-7206fb37109d/disk.config#033[00m
Nov 29 02:56:30 np0005539564 nova_compute[226295]: 2025-11-29 07:56:30.991 226310 DEBUG oslo_concurrency.processutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4cc38edb-3892-4cbc-ad22-7206fb37109d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpugt_10mv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:31 np0005539564 nova_compute[226295]: 2025-11-29 07:56:31.119 226310 DEBUG oslo_concurrency.processutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4cc38edb-3892-4cbc-ad22-7206fb37109d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpugt_10mv" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:31 np0005539564 nova_compute[226295]: 2025-11-29 07:56:31.157 226310 DEBUG nova.storage.rbd_utils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] rbd image 4cc38edb-3892-4cbc-ad22-7206fb37109d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:31 np0005539564 nova_compute[226295]: 2025-11-29 07:56:31.160 226310 DEBUG oslo_concurrency.processutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4cc38edb-3892-4cbc-ad22-7206fb37109d/disk.config 4cc38edb-3892-4cbc-ad22-7206fb37109d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:31 np0005539564 nova_compute[226295]: 2025-11-29 07:56:31.747 226310 DEBUG oslo_concurrency.processutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4cc38edb-3892-4cbc-ad22-7206fb37109d/disk.config 4cc38edb-3892-4cbc-ad22-7206fb37109d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:31 np0005539564 nova_compute[226295]: 2025-11-29 07:56:31.748 226310 INFO nova.virt.libvirt.driver [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Deleting local config drive /var/lib/nova/instances/4cc38edb-3892-4cbc-ad22-7206fb37109d/disk.config because it was imported into RBD.#033[00m
Nov 29 02:56:31 np0005539564 kernel: tap15f7dd7c-26: entered promiscuous mode
Nov 29 02:56:31 np0005539564 NetworkManager[48997]: <info>  [1764402991.8000] manager: (tap15f7dd7c-26): new Tun device (/org/freedesktop/NetworkManager/Devices/72)
Nov 29 02:56:31 np0005539564 ovn_controller[130591]: 2025-11-29T07:56:31Z|00129|binding|INFO|Claiming lport 15f7dd7c-2638-443f-9e11-42032d9a683b for this chassis.
Nov 29 02:56:31 np0005539564 ovn_controller[130591]: 2025-11-29T07:56:31Z|00130|binding|INFO|15f7dd7c-2638-443f-9e11-42032d9a683b: Claiming fa:16:3e:77:d8:c1 10.100.0.4
Nov 29 02:56:31 np0005539564 nova_compute[226295]: 2025-11-29 07:56:31.830 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:31 np0005539564 nova_compute[226295]: 2025-11-29 07:56:31.839 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:31.849 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:d8:c1 10.100.0.4'], port_security=['fa:16:3e:77:d8:c1 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4cc38edb-3892-4cbc-ad22-7206fb37109d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-664396be-cab9-4a01-a67d-9d46d772620f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bbde6bd5a35f4b9ea2c82fe2a2ad72fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3bc4520f-600a-42e7-9efe-48278863e446', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5663cb46-aa4a-49db-ac84-48413043d4dd, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=15f7dd7c-2638-443f-9e11-42032d9a683b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:56:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:31.851 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 15f7dd7c-2638-443f-9e11-42032d9a683b in datapath 664396be-cab9-4a01-a67d-9d46d772620f bound to our chassis#033[00m
Nov 29 02:56:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:31.854 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 664396be-cab9-4a01-a67d-9d46d772620f#033[00m
Nov 29 02:56:31 np0005539564 systemd-udevd[244824]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:56:31 np0005539564 systemd-machined[190128]: New machine qemu-22-instance-00000031.
Nov 29 02:56:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:31.867 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[de9455dd-9220-484a-bc81-e822f0606aa1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:31 np0005539564 NetworkManager[48997]: <info>  [1764402991.8685] device (tap15f7dd7c-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:56:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:31.868 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap664396be-c1 in ovnmeta-664396be-cab9-4a01-a67d-9d46d772620f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:56:31 np0005539564 NetworkManager[48997]: <info>  [1764402991.8692] device (tap15f7dd7c-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:56:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:31.871 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap664396be-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:56:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:31.871 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b86cfd32-8613-4311-abd6-6e0d821b1830]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:31.872 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[60b60e09-e9f0-45d2-bfbb-01c1e21f08fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:31 np0005539564 systemd[1]: Started Virtual Machine qemu-22-instance-00000031.
Nov 29 02:56:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:31.884 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[e06c6859-e9fb-4b91-a218-856a12804e85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:31.909 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7b9eef3a-948b-4e01-a135-77544ff6a2e8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:31 np0005539564 ovn_controller[130591]: 2025-11-29T07:56:31Z|00131|binding|INFO|Setting lport 15f7dd7c-2638-443f-9e11-42032d9a683b ovn-installed in OVS
Nov 29 02:56:31 np0005539564 ovn_controller[130591]: 2025-11-29T07:56:31Z|00132|binding|INFO|Setting lport 15f7dd7c-2638-443f-9e11-42032d9a683b up in Southbound
Nov 29 02:56:31 np0005539564 nova_compute[226295]: 2025-11-29 07:56:31.918 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:31.941 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[28deb6f0-a3a9-4406-8e63-429dca53b124]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:31.949 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[fc0cab2f-666f-49fd-9e33-a957191a4ce9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:31 np0005539564 systemd-udevd[244828]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:56:31 np0005539564 NetworkManager[48997]: <info>  [1764402991.9515] manager: (tap664396be-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/73)
Nov 29 02:56:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:31.990 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[2a103e4f-94de-4f1c-934a-cccc2b9e2ce1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:31.993 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[42769ef2-3c7f-4882-8bb1-02ca29ab9619]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:32 np0005539564 NetworkManager[48997]: <info>  [1764402992.0181] device (tap664396be-c0): carrier: link connected
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:32.029 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[a6fd57e8-1930-4e38-9d21-b0761a6e466f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:32.048 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e6b77bb7-c690-4661-8acb-cd43bf36cc65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap664396be-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:60:f5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596669, 'reachable_time': 39579, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244858, 'error': None, 'target': 'ovnmeta-664396be-cab9-4a01-a67d-9d46d772620f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:32.064 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[394aa467-34f8-4f49-8e11-a80643110b66]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fede:60f5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 596669, 'tstamp': 596669}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244859, 'error': None, 'target': 'ovnmeta-664396be-cab9-4a01-a67d-9d46d772620f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:32.091 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[533005d8-72b5-4abc-b039-d7e10881c8fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap664396be-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:60:f5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596669, 'reachable_time': 39579, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 244860, 'error': None, 'target': 'ovnmeta-664396be-cab9-4a01-a67d-9d46d772620f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:32.124 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[49fef735-f797-45e5-aedd-db2e90aad2ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:32.174 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6c19fdc2-65d7-4c33-b9bb-b327761de0fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:32.175 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap664396be-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:32.175 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:32.176 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap664396be-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:32 np0005539564 NetworkManager[48997]: <info>  [1764402992.1788] manager: (tap664396be-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Nov 29 02:56:32 np0005539564 nova_compute[226295]: 2025-11-29 07:56:32.178 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:32 np0005539564 kernel: tap664396be-c0: entered promiscuous mode
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:32.183 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap664396be-c0, col_values=(('external_ids', {'iface-id': '3eaa17d1-f77a-40bf-a2ab-083903e4ec78'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:32 np0005539564 nova_compute[226295]: 2025-11-29 07:56:32.184 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:32 np0005539564 ovn_controller[130591]: 2025-11-29T07:56:32Z|00133|binding|INFO|Releasing lport 3eaa17d1-f77a-40bf-a2ab-083903e4ec78 from this chassis (sb_readonly=0)
Nov 29 02:56:32 np0005539564 nova_compute[226295]: 2025-11-29 07:56:32.197 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:32.199 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/664396be-cab9-4a01-a67d-9d46d772620f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/664396be-cab9-4a01-a67d-9d46d772620f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:32.200 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d22a003a-8e79-41ef-8b4d-25ec26c65ca9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:32.201 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-664396be-cab9-4a01-a67d-9d46d772620f
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/664396be-cab9-4a01-a67d-9d46d772620f.pid.haproxy
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 664396be-cab9-4a01-a67d-9d46d772620f
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:56:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:32.202 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-664396be-cab9-4a01-a67d-9d46d772620f', 'env', 'PROCESS_TAG=haproxy-664396be-cab9-4a01-a67d-9d46d772620f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/664396be-cab9-4a01-a67d-9d46d772620f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:56:32 np0005539564 nova_compute[226295]: 2025-11-29 07:56:32.404 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402992.404082, 4cc38edb-3892-4cbc-ad22-7206fb37109d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:56:32 np0005539564 nova_compute[226295]: 2025-11-29 07:56:32.409 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] VM Started (Lifecycle Event)#033[00m
Nov 29 02:56:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:32.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:32 np0005539564 nova_compute[226295]: 2025-11-29 07:56:32.429 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:32 np0005539564 nova_compute[226295]: 2025-11-29 07:56:32.434 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402992.4047997, 4cc38edb-3892-4cbc-ad22-7206fb37109d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:56:32 np0005539564 nova_compute[226295]: 2025-11-29 07:56:32.434 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:56:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:32.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:32 np0005539564 nova_compute[226295]: 2025-11-29 07:56:32.454 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:32 np0005539564 nova_compute[226295]: 2025-11-29 07:56:32.458 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:56:32 np0005539564 nova_compute[226295]: 2025-11-29 07:56:32.476 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:56:32 np0005539564 podman[244934]: 2025-11-29 07:56:32.554165222 +0000 UTC m=+0.052079699 container create 94dfd4955290733c02a32ee9a4c889dfe14eb861ff8b78ec61d75ca43c2e75df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-664396be-cab9-4a01-a67d-9d46d772620f, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:56:32 np0005539564 systemd[1]: Started libpod-conmon-94dfd4955290733c02a32ee9a4c889dfe14eb861ff8b78ec61d75ca43c2e75df.scope.
Nov 29 02:56:32 np0005539564 podman[244934]: 2025-11-29 07:56:32.524365036 +0000 UTC m=+0.022279543 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:56:32 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:56:32 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9cfd5cd0a5d33b2c5b12a40557ee0b9735fc66345f380febbc12a8a3592444/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:56:32 np0005539564 podman[244934]: 2025-11-29 07:56:32.653759137 +0000 UTC m=+0.151673704 container init 94dfd4955290733c02a32ee9a4c889dfe14eb861ff8b78ec61d75ca43c2e75df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-664396be-cab9-4a01-a67d-9d46d772620f, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:56:32 np0005539564 podman[244934]: 2025-11-29 07:56:32.660752355 +0000 UTC m=+0.158666872 container start 94dfd4955290733c02a32ee9a4c889dfe14eb861ff8b78ec61d75ca43c2e75df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-664396be-cab9-4a01-a67d-9d46d772620f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:56:32 np0005539564 neutron-haproxy-ovnmeta-664396be-cab9-4a01-a67d-9d46d772620f[244949]: [NOTICE]   (244953) : New worker (244955) forked
Nov 29 02:56:32 np0005539564 neutron-haproxy-ovnmeta-664396be-cab9-4a01-a67d-9d46d772620f[244949]: [NOTICE]   (244953) : Loading success.
Nov 29 02:56:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:34.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:34.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:34 np0005539564 podman[244966]: 2025-11-29 07:56:34.549831329 +0000 UTC m=+0.076775548 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:56:34 np0005539564 podman[244965]: 2025-11-29 07:56:34.550177808 +0000 UTC m=+0.082918324 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 02:56:34 np0005539564 podman[244964]: 2025-11-29 07:56:34.582744399 +0000 UTC m=+0.126435111 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.360 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.412 226310 DEBUG nova.compute.manager [req-edf5c3fe-27a9-40b1-9e01-79dc6e913ca2 req-36149755-1764-48aa-b279-dc00d3b67e4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Received event network-vif-plugged-15f7dd7c-2638-443f-9e11-42032d9a683b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.415 226310 DEBUG oslo_concurrency.lockutils [req-edf5c3fe-27a9-40b1-9e01-79dc6e913ca2 req-36149755-1764-48aa-b279-dc00d3b67e4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4cc38edb-3892-4cbc-ad22-7206fb37109d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.416 226310 DEBUG oslo_concurrency.lockutils [req-edf5c3fe-27a9-40b1-9e01-79dc6e913ca2 req-36149755-1764-48aa-b279-dc00d3b67e4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4cc38edb-3892-4cbc-ad22-7206fb37109d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.417 226310 DEBUG oslo_concurrency.lockutils [req-edf5c3fe-27a9-40b1-9e01-79dc6e913ca2 req-36149755-1764-48aa-b279-dc00d3b67e4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4cc38edb-3892-4cbc-ad22-7206fb37109d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.417 226310 DEBUG nova.compute.manager [req-edf5c3fe-27a9-40b1-9e01-79dc6e913ca2 req-36149755-1764-48aa-b279-dc00d3b67e4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Processing event network-vif-plugged-15f7dd7c-2638-443f-9e11-42032d9a683b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.417 226310 DEBUG nova.compute.manager [req-edf5c3fe-27a9-40b1-9e01-79dc6e913ca2 req-36149755-1764-48aa-b279-dc00d3b67e4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Received event network-vif-plugged-15f7dd7c-2638-443f-9e11-42032d9a683b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.418 226310 DEBUG oslo_concurrency.lockutils [req-edf5c3fe-27a9-40b1-9e01-79dc6e913ca2 req-36149755-1764-48aa-b279-dc00d3b67e4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4cc38edb-3892-4cbc-ad22-7206fb37109d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.418 226310 DEBUG oslo_concurrency.lockutils [req-edf5c3fe-27a9-40b1-9e01-79dc6e913ca2 req-36149755-1764-48aa-b279-dc00d3b67e4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4cc38edb-3892-4cbc-ad22-7206fb37109d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.419 226310 DEBUG oslo_concurrency.lockutils [req-edf5c3fe-27a9-40b1-9e01-79dc6e913ca2 req-36149755-1764-48aa-b279-dc00d3b67e4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4cc38edb-3892-4cbc-ad22-7206fb37109d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.419 226310 DEBUG nova.compute.manager [req-edf5c3fe-27a9-40b1-9e01-79dc6e913ca2 req-36149755-1764-48aa-b279-dc00d3b67e4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] No waiting events found dispatching network-vif-plugged-15f7dd7c-2638-443f-9e11-42032d9a683b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.419 226310 WARNING nova.compute.manager [req-edf5c3fe-27a9-40b1-9e01-79dc6e913ca2 req-36149755-1764-48aa-b279-dc00d3b67e4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Received unexpected event network-vif-plugged-15f7dd7c-2638-443f-9e11-42032d9a683b for instance with vm_state building and task_state spawning.#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.421 226310 DEBUG nova.compute.manager [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.425 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764402995.425426, 4cc38edb-3892-4cbc-ad22-7206fb37109d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.426 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.429 226310 DEBUG nova.virt.libvirt.driver [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.434 226310 INFO nova.virt.libvirt.driver [-] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Instance spawned successfully.#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.434 226310 DEBUG nova.virt.libvirt.driver [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.482 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.491 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.497 226310 DEBUG nova.virt.libvirt.driver [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.498 226310 DEBUG nova.virt.libvirt.driver [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.499 226310 DEBUG nova.virt.libvirt.driver [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.500 226310 DEBUG nova.virt.libvirt.driver [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.501 226310 DEBUG nova.virt.libvirt.driver [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.502 226310 DEBUG nova.virt.libvirt.driver [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.511 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.561 226310 INFO nova.compute.manager [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Took 9.91 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.561 226310 DEBUG nova.compute.manager [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.673 226310 INFO nova.compute.manager [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Took 10.82 seconds to build instance.#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.932 226310 DEBUG oslo_concurrency.lockutils [None req-37cef10d-bc44-4c03-bcb9-87177821b9d7 b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Lock "4cc38edb-3892-4cbc-ad22-7206fb37109d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:35 np0005539564 nova_compute[226295]: 2025-11-29 07:56:35.944 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:36.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:36.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e192 e192: 3 total, 3 up, 3 in
Nov 29 02:56:37 np0005539564 nova_compute[226295]: 2025-11-29 07:56:37.469 226310 DEBUG oslo_concurrency.lockutils [None req-6dbcc3d4-4463-4208-b73f-77e3856fa38b b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Acquiring lock "4cc38edb-3892-4cbc-ad22-7206fb37109d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:37 np0005539564 nova_compute[226295]: 2025-11-29 07:56:37.469 226310 DEBUG oslo_concurrency.lockutils [None req-6dbcc3d4-4463-4208-b73f-77e3856fa38b b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Lock "4cc38edb-3892-4cbc-ad22-7206fb37109d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:37 np0005539564 nova_compute[226295]: 2025-11-29 07:56:37.469 226310 DEBUG oslo_concurrency.lockutils [None req-6dbcc3d4-4463-4208-b73f-77e3856fa38b b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Acquiring lock "4cc38edb-3892-4cbc-ad22-7206fb37109d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:37 np0005539564 nova_compute[226295]: 2025-11-29 07:56:37.470 226310 DEBUG oslo_concurrency.lockutils [None req-6dbcc3d4-4463-4208-b73f-77e3856fa38b b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Lock "4cc38edb-3892-4cbc-ad22-7206fb37109d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:37 np0005539564 nova_compute[226295]: 2025-11-29 07:56:37.470 226310 DEBUG oslo_concurrency.lockutils [None req-6dbcc3d4-4463-4208-b73f-77e3856fa38b b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Lock "4cc38edb-3892-4cbc-ad22-7206fb37109d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:37 np0005539564 nova_compute[226295]: 2025-11-29 07:56:37.471 226310 INFO nova.compute.manager [None req-6dbcc3d4-4463-4208-b73f-77e3856fa38b b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Terminating instance#033[00m
Nov 29 02:56:37 np0005539564 nova_compute[226295]: 2025-11-29 07:56:37.472 226310 DEBUG nova.compute.manager [None req-6dbcc3d4-4463-4208-b73f-77e3856fa38b b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:56:37 np0005539564 kernel: tap15f7dd7c-26 (unregistering): left promiscuous mode
Nov 29 02:56:37 np0005539564 NetworkManager[48997]: <info>  [1764402997.5125] device (tap15f7dd7c-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:56:37 np0005539564 nova_compute[226295]: 2025-11-29 07:56:37.519 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:56:37Z|00134|binding|INFO|Releasing lport 15f7dd7c-2638-443f-9e11-42032d9a683b from this chassis (sb_readonly=0)
Nov 29 02:56:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:56:37Z|00135|binding|INFO|Setting lport 15f7dd7c-2638-443f-9e11-42032d9a683b down in Southbound
Nov 29 02:56:37 np0005539564 ovn_controller[130591]: 2025-11-29T07:56:37Z|00136|binding|INFO|Removing iface tap15f7dd7c-26 ovn-installed in OVS
Nov 29 02:56:37 np0005539564 nova_compute[226295]: 2025-11-29 07:56:37.523 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:37 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:37.527 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:d8:c1 10.100.0.4'], port_security=['fa:16:3e:77:d8:c1 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4cc38edb-3892-4cbc-ad22-7206fb37109d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-664396be-cab9-4a01-a67d-9d46d772620f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bbde6bd5a35f4b9ea2c82fe2a2ad72fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3bc4520f-600a-42e7-9efe-48278863e446', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5663cb46-aa4a-49db-ac84-48413043d4dd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=15f7dd7c-2638-443f-9e11-42032d9a683b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:56:37 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:37.530 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 15f7dd7c-2638-443f-9e11-42032d9a683b in datapath 664396be-cab9-4a01-a67d-9d46d772620f unbound from our chassis#033[00m
Nov 29 02:56:37 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:37.532 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 664396be-cab9-4a01-a67d-9d46d772620f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:56:37 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:37.533 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[cebcc02b-ce05-49ac-b1fd-73fd07b00575]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:37 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:37.534 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-664396be-cab9-4a01-a67d-9d46d772620f namespace which is not needed anymore#033[00m
Nov 29 02:56:37 np0005539564 nova_compute[226295]: 2025-11-29 07:56:37.551 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:37 np0005539564 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000031.scope: Deactivated successfully.
Nov 29 02:56:37 np0005539564 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000031.scope: Consumed 2.743s CPU time.
Nov 29 02:56:37 np0005539564 systemd-machined[190128]: Machine qemu-22-instance-00000031 terminated.
Nov 29 02:56:37 np0005539564 nova_compute[226295]: 2025-11-29 07:56:37.760 226310 DEBUG nova.compute.manager [req-4a790226-bb79-4c6a-9ce4-dca95ee9f94d req-2e7f6e35-adf8-4c92-b12b-64aa92a576e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Received event network-vif-unplugged-15f7dd7c-2638-443f-9e11-42032d9a683b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:37 np0005539564 nova_compute[226295]: 2025-11-29 07:56:37.761 226310 DEBUG oslo_concurrency.lockutils [req-4a790226-bb79-4c6a-9ce4-dca95ee9f94d req-2e7f6e35-adf8-4c92-b12b-64aa92a576e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4cc38edb-3892-4cbc-ad22-7206fb37109d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:37 np0005539564 nova_compute[226295]: 2025-11-29 07:56:37.762 226310 DEBUG oslo_concurrency.lockutils [req-4a790226-bb79-4c6a-9ce4-dca95ee9f94d req-2e7f6e35-adf8-4c92-b12b-64aa92a576e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4cc38edb-3892-4cbc-ad22-7206fb37109d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:37 np0005539564 nova_compute[226295]: 2025-11-29 07:56:37.762 226310 DEBUG oslo_concurrency.lockutils [req-4a790226-bb79-4c6a-9ce4-dca95ee9f94d req-2e7f6e35-adf8-4c92-b12b-64aa92a576e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4cc38edb-3892-4cbc-ad22-7206fb37109d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:37 np0005539564 nova_compute[226295]: 2025-11-29 07:56:37.762 226310 DEBUG nova.compute.manager [req-4a790226-bb79-4c6a-9ce4-dca95ee9f94d req-2e7f6e35-adf8-4c92-b12b-64aa92a576e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] No waiting events found dispatching network-vif-unplugged-15f7dd7c-2638-443f-9e11-42032d9a683b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:56:37 np0005539564 nova_compute[226295]: 2025-11-29 07:56:37.763 226310 DEBUG nova.compute.manager [req-4a790226-bb79-4c6a-9ce4-dca95ee9f94d req-2e7f6e35-adf8-4c92-b12b-64aa92a576e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Received event network-vif-unplugged-15f7dd7c-2638-443f-9e11-42032d9a683b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:56:37 np0005539564 nova_compute[226295]: 2025-11-29 07:56:37.771 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:37 np0005539564 nova_compute[226295]: 2025-11-29 07:56:37.787 226310 INFO nova.virt.libvirt.driver [-] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Instance destroyed successfully.#033[00m
Nov 29 02:56:37 np0005539564 nova_compute[226295]: 2025-11-29 07:56:37.787 226310 DEBUG nova.objects.instance [None req-6dbcc3d4-4463-4208-b73f-77e3856fa38b b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Lazy-loading 'resources' on Instance uuid 4cc38edb-3892-4cbc-ad22-7206fb37109d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:37 np0005539564 nova_compute[226295]: 2025-11-29 07:56:37.804 226310 DEBUG nova.virt.libvirt.vif [None req-6dbcc3d4-4463-4208-b73f-77e3856fa38b b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:56:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-208473173',display_name='tempest-ImagesNegativeTestJSON-server-208473173',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-208473173',id=49,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:56:35Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bbde6bd5a35f4b9ea2c82fe2a2ad72fe',ramdisk_id='',reservation_id='r-v5ymn0cy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesNegativeTestJSON-1920320020',owner_user_name='tempest-ImagesNegativeTestJSON-1920320020-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:56:35Z,user_data=None,user_id='b2176b32c3ee4e8fb0a6fa804f37c849',uuid=4cc38edb-3892-4cbc-ad22-7206fb37109d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "15f7dd7c-2638-443f-9e11-42032d9a683b", "address": "fa:16:3e:77:d8:c1", "network": {"id": "664396be-cab9-4a01-a67d-9d46d772620f", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1287151791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bbde6bd5a35f4b9ea2c82fe2a2ad72fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15f7dd7c-26", "ovs_interfaceid": "15f7dd7c-2638-443f-9e11-42032d9a683b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:56:37 np0005539564 nova_compute[226295]: 2025-11-29 07:56:37.804 226310 DEBUG nova.network.os_vif_util [None req-6dbcc3d4-4463-4208-b73f-77e3856fa38b b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Converting VIF {"id": "15f7dd7c-2638-443f-9e11-42032d9a683b", "address": "fa:16:3e:77:d8:c1", "network": {"id": "664396be-cab9-4a01-a67d-9d46d772620f", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1287151791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bbde6bd5a35f4b9ea2c82fe2a2ad72fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15f7dd7c-26", "ovs_interfaceid": "15f7dd7c-2638-443f-9e11-42032d9a683b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:56:37 np0005539564 nova_compute[226295]: 2025-11-29 07:56:37.806 226310 DEBUG nova.network.os_vif_util [None req-6dbcc3d4-4463-4208-b73f-77e3856fa38b b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:d8:c1,bridge_name='br-int',has_traffic_filtering=True,id=15f7dd7c-2638-443f-9e11-42032d9a683b,network=Network(664396be-cab9-4a01-a67d-9d46d772620f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15f7dd7c-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:56:37 np0005539564 nova_compute[226295]: 2025-11-29 07:56:37.806 226310 DEBUG os_vif [None req-6dbcc3d4-4463-4208-b73f-77e3856fa38b b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:d8:c1,bridge_name='br-int',has_traffic_filtering=True,id=15f7dd7c-2638-443f-9e11-42032d9a683b,network=Network(664396be-cab9-4a01-a67d-9d46d772620f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15f7dd7c-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:56:37 np0005539564 nova_compute[226295]: 2025-11-29 07:56:37.809 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:37 np0005539564 nova_compute[226295]: 2025-11-29 07:56:37.809 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15f7dd7c-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:37 np0005539564 nova_compute[226295]: 2025-11-29 07:56:37.815 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:56:37 np0005539564 nova_compute[226295]: 2025-11-29 07:56:37.818 226310 INFO os_vif [None req-6dbcc3d4-4463-4208-b73f-77e3856fa38b b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:d8:c1,bridge_name='br-int',has_traffic_filtering=True,id=15f7dd7c-2638-443f-9e11-42032d9a683b,network=Network(664396be-cab9-4a01-a67d-9d46d772620f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15f7dd7c-26')#033[00m
Nov 29 02:56:37 np0005539564 neutron-haproxy-ovnmeta-664396be-cab9-4a01-a67d-9d46d772620f[244949]: [NOTICE]   (244953) : haproxy version is 2.8.14-c23fe91
Nov 29 02:56:37 np0005539564 neutron-haproxy-ovnmeta-664396be-cab9-4a01-a67d-9d46d772620f[244949]: [NOTICE]   (244953) : path to executable is /usr/sbin/haproxy
Nov 29 02:56:37 np0005539564 neutron-haproxy-ovnmeta-664396be-cab9-4a01-a67d-9d46d772620f[244949]: [WARNING]  (244953) : Exiting Master process...
Nov 29 02:56:37 np0005539564 neutron-haproxy-ovnmeta-664396be-cab9-4a01-a67d-9d46d772620f[244949]: [ALERT]    (244953) : Current worker (244955) exited with code 143 (Terminated)
Nov 29 02:56:37 np0005539564 neutron-haproxy-ovnmeta-664396be-cab9-4a01-a67d-9d46d772620f[244949]: [WARNING]  (244953) : All workers exited. Exiting... (0)
Nov 29 02:56:37 np0005539564 systemd[1]: libpod-94dfd4955290733c02a32ee9a4c889dfe14eb861ff8b78ec61d75ca43c2e75df.scope: Deactivated successfully.
Nov 29 02:56:37 np0005539564 podman[245049]: 2025-11-29 07:56:37.865458712 +0000 UTC m=+0.234990338 container died 94dfd4955290733c02a32ee9a4c889dfe14eb861ff8b78ec61d75ca43c2e75df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-664396be-cab9-4a01-a67d-9d46d772620f, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 02:56:37 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-94dfd4955290733c02a32ee9a4c889dfe14eb861ff8b78ec61d75ca43c2e75df-userdata-shm.mount: Deactivated successfully.
Nov 29 02:56:37 np0005539564 systemd[1]: var-lib-containers-storage-overlay-4d9cfd5cd0a5d33b2c5b12a40557ee0b9735fc66345f380febbc12a8a3592444-merged.mount: Deactivated successfully.
Nov 29 02:56:37 np0005539564 podman[245049]: 2025-11-29 07:56:37.909656958 +0000 UTC m=+0.279188533 container cleanup 94dfd4955290733c02a32ee9a4c889dfe14eb861ff8b78ec61d75ca43c2e75df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-664396be-cab9-4a01-a67d-9d46d772620f, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 02:56:37 np0005539564 systemd[1]: libpod-conmon-94dfd4955290733c02a32ee9a4c889dfe14eb861ff8b78ec61d75ca43c2e75df.scope: Deactivated successfully.
Nov 29 02:56:37 np0005539564 podman[245106]: 2025-11-29 07:56:37.997266008 +0000 UTC m=+0.059661205 container remove 94dfd4955290733c02a32ee9a4c889dfe14eb861ff8b78ec61d75ca43c2e75df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-664396be-cab9-4a01-a67d-9d46d772620f, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 02:56:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:38.005 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[39612523-bd0f-4910-bc6a-bfa79f0c330d]: (4, ('Sat Nov 29 07:56:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-664396be-cab9-4a01-a67d-9d46d772620f (94dfd4955290733c02a32ee9a4c889dfe14eb861ff8b78ec61d75ca43c2e75df)\n94dfd4955290733c02a32ee9a4c889dfe14eb861ff8b78ec61d75ca43c2e75df\nSat Nov 29 07:56:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-664396be-cab9-4a01-a67d-9d46d772620f (94dfd4955290733c02a32ee9a4c889dfe14eb861ff8b78ec61d75ca43c2e75df)\n94dfd4955290733c02a32ee9a4c889dfe14eb861ff8b78ec61d75ca43c2e75df\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:38.006 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3dca806c-f4c8-40e2-b800-96e198afc184]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:38.008 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap664396be-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:38 np0005539564 kernel: tap664396be-c0: left promiscuous mode
Nov 29 02:56:38 np0005539564 nova_compute[226295]: 2025-11-29 07:56:38.010 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:38.014 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5c0208f3-f7fd-41ff-ae91-2356bb24dd1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:38 np0005539564 nova_compute[226295]: 2025-11-29 07:56:38.025 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:38.028 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ff5e1fef-c14a-4db8-8629-31a5b54bb984]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:38.030 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[fe0619e5-a5ca-4ec0-a60e-26dd1e05fd57]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e193 e193: 3 total, 3 up, 3 in
Nov 29 02:56:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:38.044 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3214616d-2ccf-40de-9ab1-ad76eeb9d19d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596661, 'reachable_time': 34469, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245121, 'error': None, 'target': 'ovnmeta-664396be-cab9-4a01-a67d-9d46d772620f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:38 np0005539564 systemd[1]: run-netns-ovnmeta\x2d664396be\x2dcab9\x2d4a01\x2da67d\x2d9d46d772620f.mount: Deactivated successfully.
Nov 29 02:56:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:38.047 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-664396be-cab9-4a01-a67d-9d46d772620f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:56:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:38.047 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[654a7fab-9647-48dd-931c-88ce76975df2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:38 np0005539564 nova_compute[226295]: 2025-11-29 07:56:38.264 226310 INFO nova.virt.libvirt.driver [None req-6dbcc3d4-4463-4208-b73f-77e3856fa38b b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Deleting instance files /var/lib/nova/instances/4cc38edb-3892-4cbc-ad22-7206fb37109d_del#033[00m
Nov 29 02:56:38 np0005539564 nova_compute[226295]: 2025-11-29 07:56:38.265 226310 INFO nova.virt.libvirt.driver [None req-6dbcc3d4-4463-4208-b73f-77e3856fa38b b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Deletion of /var/lib/nova/instances/4cc38edb-3892-4cbc-ad22-7206fb37109d_del complete#033[00m
Nov 29 02:56:38 np0005539564 nova_compute[226295]: 2025-11-29 07:56:38.340 226310 INFO nova.compute.manager [None req-6dbcc3d4-4463-4208-b73f-77e3856fa38b b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:56:38 np0005539564 nova_compute[226295]: 2025-11-29 07:56:38.341 226310 DEBUG oslo.service.loopingcall [None req-6dbcc3d4-4463-4208-b73f-77e3856fa38b b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:56:38 np0005539564 nova_compute[226295]: 2025-11-29 07:56:38.341 226310 DEBUG nova.compute.manager [-] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:56:38 np0005539564 nova_compute[226295]: 2025-11-29 07:56:38.341 226310 DEBUG nova.network.neutron [-] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:56:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:38.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:56:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:38.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:56:38 np0005539564 nova_compute[226295]: 2025-11-29 07:56:38.741 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:38 np0005539564 nova_compute[226295]: 2025-11-29 07:56:38.742 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:38 np0005539564 nova_compute[226295]: 2025-11-29 07:56:38.742 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:56:38 np0005539564 nova_compute[226295]: 2025-11-29 07:56:38.743 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:56:38 np0005539564 nova_compute[226295]: 2025-11-29 07:56:38.761 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Nov 29 02:56:38 np0005539564 nova_compute[226295]: 2025-11-29 07:56:38.762 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:56:38 np0005539564 nova_compute[226295]: 2025-11-29 07:56:38.762 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:38 np0005539564 nova_compute[226295]: 2025-11-29 07:56:38.762 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:38 np0005539564 nova_compute[226295]: 2025-11-29 07:56:38.762 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:38 np0005539564 nova_compute[226295]: 2025-11-29 07:56:38.763 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:38 np0005539564 nova_compute[226295]: 2025-11-29 07:56:38.763 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:38 np0005539564 nova_compute[226295]: 2025-11-29 07:56:38.763 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:38 np0005539564 nova_compute[226295]: 2025-11-29 07:56:38.763 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:56:38 np0005539564 nova_compute[226295]: 2025-11-29 07:56:38.763 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:38 np0005539564 nova_compute[226295]: 2025-11-29 07:56:38.781 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:38 np0005539564 nova_compute[226295]: 2025-11-29 07:56:38.781 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:38 np0005539564 nova_compute[226295]: 2025-11-29 07:56:38.781 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:38 np0005539564 nova_compute[226295]: 2025-11-29 07:56:38.781 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:56:38 np0005539564 nova_compute[226295]: 2025-11-29 07:56:38.782 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e194 e194: 3 total, 3 up, 3 in
Nov 29 02:56:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:56:39 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/294212724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:56:39 np0005539564 nova_compute[226295]: 2025-11-29 07:56:39.221 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:39 np0005539564 nova_compute[226295]: 2025-11-29 07:56:39.396 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:56:39 np0005539564 nova_compute[226295]: 2025-11-29 07:56:39.397 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4746MB free_disk=20.946483612060547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:56:39 np0005539564 nova_compute[226295]: 2025-11-29 07:56:39.397 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:39 np0005539564 nova_compute[226295]: 2025-11-29 07:56:39.397 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:39 np0005539564 nova_compute[226295]: 2025-11-29 07:56:39.473 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 4cc38edb-3892-4cbc-ad22-7206fb37109d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:56:39 np0005539564 nova_compute[226295]: 2025-11-29 07:56:39.473 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:56:39 np0005539564 nova_compute[226295]: 2025-11-29 07:56:39.473 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:56:39 np0005539564 nova_compute[226295]: 2025-11-29 07:56:39.592 226310 DEBUG nova.network.neutron [-] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:56:39 np0005539564 nova_compute[226295]: 2025-11-29 07:56:39.625 226310 INFO nova.compute.manager [-] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Took 1.28 seconds to deallocate network for instance.#033[00m
Nov 29 02:56:39 np0005539564 nova_compute[226295]: 2025-11-29 07:56:39.682 226310 DEBUG oslo_concurrency.lockutils [None req-6dbcc3d4-4463-4208-b73f-77e3856fa38b b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:39 np0005539564 nova_compute[226295]: 2025-11-29 07:56:39.688 226310 DEBUG nova.compute.manager [req-93c6aeb6-444a-485d-a806-202abc8d9f42 req-3847fc86-a73f-4830-bf6e-aca3c2049d54 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Received event network-vif-deleted-15f7dd7c-2638-443f-9e11-42032d9a683b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:39 np0005539564 nova_compute[226295]: 2025-11-29 07:56:39.695 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:39 np0005539564 nova_compute[226295]: 2025-11-29 07:56:39.864 226310 DEBUG nova.compute.manager [req-73479ac7-ffdd-42ba-a8b1-6495f7a9ed56 req-aff77148-6d99-40cc-97a6-7472f87d0154 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Received event network-vif-plugged-15f7dd7c-2638-443f-9e11-42032d9a683b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:39 np0005539564 nova_compute[226295]: 2025-11-29 07:56:39.865 226310 DEBUG oslo_concurrency.lockutils [req-73479ac7-ffdd-42ba-a8b1-6495f7a9ed56 req-aff77148-6d99-40cc-97a6-7472f87d0154 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4cc38edb-3892-4cbc-ad22-7206fb37109d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:39 np0005539564 nova_compute[226295]: 2025-11-29 07:56:39.865 226310 DEBUG oslo_concurrency.lockutils [req-73479ac7-ffdd-42ba-a8b1-6495f7a9ed56 req-aff77148-6d99-40cc-97a6-7472f87d0154 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4cc38edb-3892-4cbc-ad22-7206fb37109d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:39 np0005539564 nova_compute[226295]: 2025-11-29 07:56:39.865 226310 DEBUG oslo_concurrency.lockutils [req-73479ac7-ffdd-42ba-a8b1-6495f7a9ed56 req-aff77148-6d99-40cc-97a6-7472f87d0154 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4cc38edb-3892-4cbc-ad22-7206fb37109d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:39 np0005539564 nova_compute[226295]: 2025-11-29 07:56:39.865 226310 DEBUG nova.compute.manager [req-73479ac7-ffdd-42ba-a8b1-6495f7a9ed56 req-aff77148-6d99-40cc-97a6-7472f87d0154 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] No waiting events found dispatching network-vif-plugged-15f7dd7c-2638-443f-9e11-42032d9a683b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:56:39 np0005539564 nova_compute[226295]: 2025-11-29 07:56:39.865 226310 WARNING nova.compute.manager [req-73479ac7-ffdd-42ba-a8b1-6495f7a9ed56 req-aff77148-6d99-40cc-97a6-7472f87d0154 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Received unexpected event network-vif-plugged-15f7dd7c-2638-443f-9e11-42032d9a683b for instance with vm_state deleted and task_state None.#033[00m
Nov 29 02:56:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:56:40 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4174121997' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:56:40 np0005539564 nova_compute[226295]: 2025-11-29 07:56:40.152 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:40 np0005539564 nova_compute[226295]: 2025-11-29 07:56:40.161 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:56:40 np0005539564 nova_compute[226295]: 2025-11-29 07:56:40.179 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:56:40 np0005539564 nova_compute[226295]: 2025-11-29 07:56:40.205 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:56:40 np0005539564 nova_compute[226295]: 2025-11-29 07:56:40.205 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:40 np0005539564 nova_compute[226295]: 2025-11-29 07:56:40.206 226310 DEBUG oslo_concurrency.lockutils [None req-6dbcc3d4-4463-4208-b73f-77e3856fa38b b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:40 np0005539564 nova_compute[226295]: 2025-11-29 07:56:40.207 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:40 np0005539564 nova_compute[226295]: 2025-11-29 07:56:40.207 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 02:56:40 np0005539564 nova_compute[226295]: 2025-11-29 07:56:40.223 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 02:56:40 np0005539564 nova_compute[226295]: 2025-11-29 07:56:40.273 226310 DEBUG oslo_concurrency.processutils [None req-6dbcc3d4-4463-4208-b73f-77e3856fa38b b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:40 np0005539564 nova_compute[226295]: 2025-11-29 07:56:40.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:40 np0005539564 nova_compute[226295]: 2025-11-29 07:56:40.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 02:56:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:40.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:40.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:56:40 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2157266730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:56:40 np0005539564 nova_compute[226295]: 2025-11-29 07:56:40.759 226310 DEBUG oslo_concurrency.processutils [None req-6dbcc3d4-4463-4208-b73f-77e3856fa38b b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:40 np0005539564 nova_compute[226295]: 2025-11-29 07:56:40.768 226310 DEBUG nova.compute.provider_tree [None req-6dbcc3d4-4463-4208-b73f-77e3856fa38b b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:56:40 np0005539564 nova_compute[226295]: 2025-11-29 07:56:40.790 226310 DEBUG nova.scheduler.client.report [None req-6dbcc3d4-4463-4208-b73f-77e3856fa38b b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:56:40 np0005539564 nova_compute[226295]: 2025-11-29 07:56:40.824 226310 DEBUG oslo_concurrency.lockutils [None req-6dbcc3d4-4463-4208-b73f-77e3856fa38b b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:40 np0005539564 nova_compute[226295]: 2025-11-29 07:56:40.852 226310 INFO nova.scheduler.client.report [None req-6dbcc3d4-4463-4208-b73f-77e3856fa38b b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Deleted allocations for instance 4cc38edb-3892-4cbc-ad22-7206fb37109d#033[00m
Nov 29 02:56:40 np0005539564 nova_compute[226295]: 2025-11-29 07:56:40.932 226310 DEBUG oslo_concurrency.lockutils [None req-6dbcc3d4-4463-4208-b73f-77e3856fa38b b2176b32c3ee4e8fb0a6fa804f37c849 bbde6bd5a35f4b9ea2c82fe2a2ad72fe - - default default] Lock "4cc38edb-3892-4cbc-ad22-7206fb37109d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:40 np0005539564 nova_compute[226295]: 2025-11-29 07:56:40.947 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e195 e195: 3 total, 3 up, 3 in
Nov 29 02:56:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:56:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:42.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:56:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:42.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:42 np0005539564 nova_compute[226295]: 2025-11-29 07:56:42.812 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:44 np0005539564 nova_compute[226295]: 2025-11-29 07:56:44.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:44.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:44.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:45 np0005539564 nova_compute[226295]: 2025-11-29 07:56:45.349 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:45 np0005539564 nova_compute[226295]: 2025-11-29 07:56:45.968 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:46.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:46.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:46.904 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:56:46 np0005539564 nova_compute[226295]: 2025-11-29 07:56:46.904 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:46.905 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:56:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:47 np0005539564 nova_compute[226295]: 2025-11-29 07:56:47.814 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:48 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:56:48 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:56:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:48.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:48.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e196 e196: 3 total, 3 up, 3 in
Nov 29 02:56:49 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:56:49 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:56:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:50.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:50.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:50 np0005539564 nova_compute[226295]: 2025-11-29 07:56:50.968 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:51 np0005539564 nova_compute[226295]: 2025-11-29 07:56:51.880 226310 DEBUG oslo_concurrency.lockutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquiring lock "726f94b2-1110-4b06-b0a7-a82c1832f942" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:51 np0005539564 nova_compute[226295]: 2025-11-29 07:56:51.880 226310 DEBUG oslo_concurrency.lockutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "726f94b2-1110-4b06-b0a7-a82c1832f942" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:51 np0005539564 nova_compute[226295]: 2025-11-29 07:56:51.900 226310 DEBUG nova.compute.manager [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:56:52 np0005539564 nova_compute[226295]: 2025-11-29 07:56:52.060 226310 DEBUG oslo_concurrency.lockutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:52 np0005539564 nova_compute[226295]: 2025-11-29 07:56:52.061 226310 DEBUG oslo_concurrency.lockutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:52 np0005539564 nova_compute[226295]: 2025-11-29 07:56:52.073 226310 DEBUG nova.virt.hardware [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:56:52 np0005539564 nova_compute[226295]: 2025-11-29 07:56:52.074 226310 INFO nova.compute.claims [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 02:56:52 np0005539564 nova_compute[226295]: 2025-11-29 07:56:52.281 226310 DEBUG oslo_concurrency.processutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:52.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:52.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:56:52 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/782884504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:56:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:52 np0005539564 nova_compute[226295]: 2025-11-29 07:56:52.786 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402997.7847965, 4cc38edb-3892-4cbc-ad22-7206fb37109d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:56:52 np0005539564 nova_compute[226295]: 2025-11-29 07:56:52.787 226310 INFO nova.compute.manager [-] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:56:52 np0005539564 nova_compute[226295]: 2025-11-29 07:56:52.790 226310 DEBUG oslo_concurrency.processutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:52 np0005539564 nova_compute[226295]: 2025-11-29 07:56:52.797 226310 DEBUG nova.compute.provider_tree [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:56:52 np0005539564 nova_compute[226295]: 2025-11-29 07:56:52.817 226310 DEBUG nova.compute.manager [None req-e8afbcab-7e2f-4cff-8578-df1eebdd072e - - - - - -] [instance: 4cc38edb-3892-4cbc-ad22-7206fb37109d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:52 np0005539564 nova_compute[226295]: 2025-11-29 07:56:52.818 226310 DEBUG nova.scheduler.client.report [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:56:52 np0005539564 nova_compute[226295]: 2025-11-29 07:56:52.822 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:52 np0005539564 nova_compute[226295]: 2025-11-29 07:56:52.847 226310 DEBUG oslo_concurrency.lockutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:52 np0005539564 nova_compute[226295]: 2025-11-29 07:56:52.848 226310 DEBUG nova.compute.manager [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:56:52 np0005539564 nova_compute[226295]: 2025-11-29 07:56:52.947 226310 DEBUG nova.compute.manager [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:56:52 np0005539564 nova_compute[226295]: 2025-11-29 07:56:52.948 226310 DEBUG nova.network.neutron [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:56:53 np0005539564 nova_compute[226295]: 2025-11-29 07:56:53.091 226310 INFO nova.virt.libvirt.driver [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:56:53 np0005539564 nova_compute[226295]: 2025-11-29 07:56:53.116 226310 DEBUG nova.compute.manager [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:56:53 np0005539564 nova_compute[226295]: 2025-11-29 07:56:53.154 226310 DEBUG nova.policy [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dbeeaca97c3e4a1b9417ab3e996f721f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '219d722e6a2c4164be5a30e9565f13a0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:56:53 np0005539564 nova_compute[226295]: 2025-11-29 07:56:53.234 226310 DEBUG nova.compute.manager [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:56:53 np0005539564 nova_compute[226295]: 2025-11-29 07:56:53.235 226310 DEBUG nova.virt.libvirt.driver [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:56:53 np0005539564 nova_compute[226295]: 2025-11-29 07:56:53.236 226310 INFO nova.virt.libvirt.driver [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Creating image(s)#033[00m
Nov 29 02:56:53 np0005539564 nova_compute[226295]: 2025-11-29 07:56:53.269 226310 DEBUG nova.storage.rbd_utils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] rbd image 726f94b2-1110-4b06-b0a7-a82c1832f942_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:53 np0005539564 nova_compute[226295]: 2025-11-29 07:56:53.299 226310 DEBUG nova.storage.rbd_utils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] rbd image 726f94b2-1110-4b06-b0a7-a82c1832f942_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:53 np0005539564 nova_compute[226295]: 2025-11-29 07:56:53.329 226310 DEBUG nova.storage.rbd_utils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] rbd image 726f94b2-1110-4b06-b0a7-a82c1832f942_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:53 np0005539564 nova_compute[226295]: 2025-11-29 07:56:53.334 226310 DEBUG oslo_concurrency.processutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:53 np0005539564 nova_compute[226295]: 2025-11-29 07:56:53.391 226310 DEBUG oslo_concurrency.processutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:53 np0005539564 nova_compute[226295]: 2025-11-29 07:56:53.392 226310 DEBUG oslo_concurrency.lockutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:53 np0005539564 nova_compute[226295]: 2025-11-29 07:56:53.393 226310 DEBUG oslo_concurrency.lockutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:53 np0005539564 nova_compute[226295]: 2025-11-29 07:56:53.393 226310 DEBUG oslo_concurrency.lockutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:53 np0005539564 nova_compute[226295]: 2025-11-29 07:56:53.427 226310 DEBUG nova.storage.rbd_utils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] rbd image 726f94b2-1110-4b06-b0a7-a82c1832f942_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:53 np0005539564 nova_compute[226295]: 2025-11-29 07:56:53.431 226310 DEBUG oslo_concurrency.processutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 726f94b2-1110-4b06-b0a7-a82c1832f942_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e197 e197: 3 total, 3 up, 3 in
Nov 29 02:56:53 np0005539564 nova_compute[226295]: 2025-11-29 07:56:53.763 226310 DEBUG oslo_concurrency.processutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 726f94b2-1110-4b06-b0a7-a82c1832f942_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:53 np0005539564 nova_compute[226295]: 2025-11-29 07:56:53.855 226310 DEBUG nova.storage.rbd_utils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] resizing rbd image 726f94b2-1110-4b06-b0a7-a82c1832f942_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:56:53 np0005539564 nova_compute[226295]: 2025-11-29 07:56:53.986 226310 DEBUG nova.objects.instance [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lazy-loading 'migration_context' on Instance uuid 726f94b2-1110-4b06-b0a7-a82c1832f942 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:54 np0005539564 nova_compute[226295]: 2025-11-29 07:56:54.010 226310 DEBUG nova.virt.libvirt.driver [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:56:54 np0005539564 nova_compute[226295]: 2025-11-29 07:56:54.011 226310 DEBUG nova.virt.libvirt.driver [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Ensure instance console log exists: /var/lib/nova/instances/726f94b2-1110-4b06-b0a7-a82c1832f942/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:56:54 np0005539564 nova_compute[226295]: 2025-11-29 07:56:54.012 226310 DEBUG oslo_concurrency.lockutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:54 np0005539564 nova_compute[226295]: 2025-11-29 07:56:54.013 226310 DEBUG oslo_concurrency.lockutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:54 np0005539564 nova_compute[226295]: 2025-11-29 07:56:54.014 226310 DEBUG oslo_concurrency.lockutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e198 e198: 3 total, 3 up, 3 in
Nov 29 02:56:54 np0005539564 nova_compute[226295]: 2025-11-29 07:56:54.228 226310 DEBUG nova.network.neutron [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Successfully created port: 5143962e-d252-4cba-b5b3-5c06789ff1d6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:56:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:54.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:54.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:55 np0005539564 nova_compute[226295]: 2025-11-29 07:56:55.244 226310 DEBUG nova.network.neutron [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Successfully updated port: 5143962e-d252-4cba-b5b3-5c06789ff1d6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:56:55 np0005539564 nova_compute[226295]: 2025-11-29 07:56:55.260 226310 DEBUG oslo_concurrency.lockutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquiring lock "refresh_cache-726f94b2-1110-4b06-b0a7-a82c1832f942" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:56:55 np0005539564 nova_compute[226295]: 2025-11-29 07:56:55.260 226310 DEBUG oslo_concurrency.lockutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquired lock "refresh_cache-726f94b2-1110-4b06-b0a7-a82c1832f942" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:56:55 np0005539564 nova_compute[226295]: 2025-11-29 07:56:55.260 226310 DEBUG nova.network.neutron [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:56:55 np0005539564 nova_compute[226295]: 2025-11-29 07:56:55.402 226310 DEBUG nova.compute.manager [req-300b127f-af5f-4673-9146-b6d5b19dc548 req-c3926212-7162-40fe-94a7-d89cf16616bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Received event network-changed-5143962e-d252-4cba-b5b3-5c06789ff1d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:55 np0005539564 nova_compute[226295]: 2025-11-29 07:56:55.402 226310 DEBUG nova.compute.manager [req-300b127f-af5f-4673-9146-b6d5b19dc548 req-c3926212-7162-40fe-94a7-d89cf16616bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Refreshing instance network info cache due to event network-changed-5143962e-d252-4cba-b5b3-5c06789ff1d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:56:55 np0005539564 nova_compute[226295]: 2025-11-29 07:56:55.403 226310 DEBUG oslo_concurrency.lockutils [req-300b127f-af5f-4673-9146-b6d5b19dc548 req-c3926212-7162-40fe-94a7-d89cf16616bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-726f94b2-1110-4b06-b0a7-a82c1832f942" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:56:55 np0005539564 nova_compute[226295]: 2025-11-29 07:56:55.541 226310 DEBUG nova.network.neutron [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:56:55 np0005539564 nova_compute[226295]: 2025-11-29 07:56:55.969 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:56.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:56.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:56 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:56:56.907 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.022 226310 DEBUG nova.network.neutron [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Updating instance_info_cache with network_info: [{"id": "5143962e-d252-4cba-b5b3-5c06789ff1d6", "address": "fa:16:3e:43:cc:5e", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5143962e-d2", "ovs_interfaceid": "5143962e-d252-4cba-b5b3-5c06789ff1d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.048 226310 DEBUG oslo_concurrency.lockutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Releasing lock "refresh_cache-726f94b2-1110-4b06-b0a7-a82c1832f942" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.049 226310 DEBUG nova.compute.manager [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Instance network_info: |[{"id": "5143962e-d252-4cba-b5b3-5c06789ff1d6", "address": "fa:16:3e:43:cc:5e", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5143962e-d2", "ovs_interfaceid": "5143962e-d252-4cba-b5b3-5c06789ff1d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.049 226310 DEBUG oslo_concurrency.lockutils [req-300b127f-af5f-4673-9146-b6d5b19dc548 req-c3926212-7162-40fe-94a7-d89cf16616bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-726f94b2-1110-4b06-b0a7-a82c1832f942" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.049 226310 DEBUG nova.network.neutron [req-300b127f-af5f-4673-9146-b6d5b19dc548 req-c3926212-7162-40fe-94a7-d89cf16616bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Refreshing network info cache for port 5143962e-d252-4cba-b5b3-5c06789ff1d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.052 226310 DEBUG nova.virt.libvirt.driver [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Start _get_guest_xml network_info=[{"id": "5143962e-d252-4cba-b5b3-5c06789ff1d6", "address": "fa:16:3e:43:cc:5e", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5143962e-d2", "ovs_interfaceid": "5143962e-d252-4cba-b5b3-5c06789ff1d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.057 226310 WARNING nova.virt.libvirt.driver [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.063 226310 DEBUG nova.virt.libvirt.host [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.064 226310 DEBUG nova.virt.libvirt.host [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.070 226310 DEBUG nova.virt.libvirt.host [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.070 226310 DEBUG nova.virt.libvirt.host [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.072 226310 DEBUG nova.virt.libvirt.driver [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.072 226310 DEBUG nova.virt.hardware [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.073 226310 DEBUG nova.virt.hardware [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.073 226310 DEBUG nova.virt.hardware [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.073 226310 DEBUG nova.virt.hardware [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.073 226310 DEBUG nova.virt.hardware [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.073 226310 DEBUG nova.virt.hardware [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.074 226310 DEBUG nova.virt.hardware [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.074 226310 DEBUG nova.virt.hardware [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.074 226310 DEBUG nova.virt.hardware [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.075 226310 DEBUG nova.virt.hardware [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.076 226310 DEBUG nova.virt.hardware [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.079 226310 DEBUG oslo_concurrency.processutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:56:57 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1559886720' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.510 226310 DEBUG oslo_concurrency.processutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.537 226310 DEBUG nova.storage.rbd_utils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] rbd image 726f94b2-1110-4b06-b0a7-a82c1832f942_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.543 226310 DEBUG oslo_concurrency.processutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:57 np0005539564 nova_compute[226295]: 2025-11-29 07:56:57.824 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:58 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:56:58 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:56:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:56:58 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2083138911' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.043 226310 DEBUG oslo_concurrency.processutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.046 226310 DEBUG nova.virt.libvirt.vif [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:56:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1644728623',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1644728623',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1644728623',id=50,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='219d722e6a2c4164be5a30e9565f13a0',ramdisk_id='',reservation_id='r-335fbtdy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-267959441',owner
_user_name='tempest-ImagesOneServerNegativeTestJSON-267959441-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:56:53Z,user_data=None,user_id='dbeeaca97c3e4a1b9417ab3e996f721f',uuid=726f94b2-1110-4b06-b0a7-a82c1832f942,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5143962e-d252-4cba-b5b3-5c06789ff1d6", "address": "fa:16:3e:43:cc:5e", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5143962e-d2", "ovs_interfaceid": "5143962e-d252-4cba-b5b3-5c06789ff1d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.046 226310 DEBUG nova.network.os_vif_util [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Converting VIF {"id": "5143962e-d252-4cba-b5b3-5c06789ff1d6", "address": "fa:16:3e:43:cc:5e", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5143962e-d2", "ovs_interfaceid": "5143962e-d252-4cba-b5b3-5c06789ff1d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.048 226310 DEBUG nova.network.os_vif_util [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:cc:5e,bridge_name='br-int',has_traffic_filtering=True,id=5143962e-d252-4cba-b5b3-5c06789ff1d6,network=Network(57fc634c-e12c-411b-a4cb-47f24328da03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5143962e-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.049 226310 DEBUG nova.objects.instance [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 726f94b2-1110-4b06-b0a7-a82c1832f942 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.077 226310 DEBUG nova.virt.libvirt.driver [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:56:58 np0005539564 nova_compute[226295]:  <uuid>726f94b2-1110-4b06-b0a7-a82c1832f942</uuid>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:  <name>instance-00000032</name>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1644728623</nova:name>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 07:56:57</nova:creationTime>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 02:56:58 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:        <nova:user uuid="dbeeaca97c3e4a1b9417ab3e996f721f">tempest-ImagesOneServerNegativeTestJSON-267959441-project-member</nova:user>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:        <nova:project uuid="219d722e6a2c4164be5a30e9565f13a0">tempest-ImagesOneServerNegativeTestJSON-267959441</nova:project>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:        <nova:port uuid="5143962e-d252-4cba-b5b3-5c06789ff1d6">
Nov 29 02:56:58 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <system>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <entry name="serial">726f94b2-1110-4b06-b0a7-a82c1832f942</entry>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <entry name="uuid">726f94b2-1110-4b06-b0a7-a82c1832f942</entry>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    </system>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:  <os>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:  </clock>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/726f94b2-1110-4b06-b0a7-a82c1832f942_disk">
Nov 29 02:56:58 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:56:58 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/726f94b2-1110-4b06-b0a7-a82c1832f942_disk.config">
Nov 29 02:56:58 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:56:58 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:43:cc:5e"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <target dev="tap5143962e-d2"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    </interface>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/726f94b2-1110-4b06-b0a7-a82c1832f942/console.log" append="off"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    </serial>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <video>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 02:56:58 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 02:56:58 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:56:58 np0005539564 nova_compute[226295]: </domain>
Nov 29 02:56:58 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.079 226310 DEBUG nova.compute.manager [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Preparing to wait for external event network-vif-plugged-5143962e-d252-4cba-b5b3-5c06789ff1d6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.079 226310 DEBUG oslo_concurrency.lockutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquiring lock "726f94b2-1110-4b06-b0a7-a82c1832f942-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.080 226310 DEBUG oslo_concurrency.lockutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "726f94b2-1110-4b06-b0a7-a82c1832f942-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.080 226310 DEBUG oslo_concurrency.lockutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "726f94b2-1110-4b06-b0a7-a82c1832f942-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.081 226310 DEBUG nova.virt.libvirt.vif [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:56:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1644728623',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1644728623',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1644728623',id=50,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='219d722e6a2c4164be5a30e9565f13a0',ramdisk_id='',reservation_id='r-335fbtdy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-267959
441',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-267959441-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:56:53Z,user_data=None,user_id='dbeeaca97c3e4a1b9417ab3e996f721f',uuid=726f94b2-1110-4b06-b0a7-a82c1832f942,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5143962e-d252-4cba-b5b3-5c06789ff1d6", "address": "fa:16:3e:43:cc:5e", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5143962e-d2", "ovs_interfaceid": "5143962e-d252-4cba-b5b3-5c06789ff1d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.081 226310 DEBUG nova.network.os_vif_util [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Converting VIF {"id": "5143962e-d252-4cba-b5b3-5c06789ff1d6", "address": "fa:16:3e:43:cc:5e", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5143962e-d2", "ovs_interfaceid": "5143962e-d252-4cba-b5b3-5c06789ff1d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.082 226310 DEBUG nova.network.os_vif_util [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:cc:5e,bridge_name='br-int',has_traffic_filtering=True,id=5143962e-d252-4cba-b5b3-5c06789ff1d6,network=Network(57fc634c-e12c-411b-a4cb-47f24328da03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5143962e-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.082 226310 DEBUG os_vif [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:cc:5e,bridge_name='br-int',has_traffic_filtering=True,id=5143962e-d252-4cba-b5b3-5c06789ff1d6,network=Network(57fc634c-e12c-411b-a4cb-47f24328da03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5143962e-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.082 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.083 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.083 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.085 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.086 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5143962e-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.086 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5143962e-d2, col_values=(('external_ids', {'iface-id': '5143962e-d252-4cba-b5b3-5c06789ff1d6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:cc:5e', 'vm-uuid': '726f94b2-1110-4b06-b0a7-a82c1832f942'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.133 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:58 np0005539564 NetworkManager[48997]: <info>  [1764403018.1343] manager: (tap5143962e-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.136 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.141 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.143 226310 INFO os_vif [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:cc:5e,bridge_name='br-int',has_traffic_filtering=True,id=5143962e-d252-4cba-b5b3-5c06789ff1d6,network=Network(57fc634c-e12c-411b-a4cb-47f24328da03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5143962e-d2')#033[00m
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.203 226310 DEBUG nova.virt.libvirt.driver [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.203 226310 DEBUG nova.virt.libvirt.driver [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.204 226310 DEBUG nova.virt.libvirt.driver [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] No VIF found with MAC fa:16:3e:43:cc:5e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.205 226310 INFO nova.virt.libvirt.driver [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Using config drive#033[00m
Nov 29 02:56:58 np0005539564 nova_compute[226295]: 2025-11-29 07:56:58.231 226310 DEBUG nova.storage.rbd_utils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] rbd image 726f94b2-1110-4b06-b0a7-a82c1832f942_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:56:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:58.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:56:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:56:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:58.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:59 np0005539564 nova_compute[226295]: 2025-11-29 07:56:59.310 226310 INFO nova.virt.libvirt.driver [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Creating config drive at /var/lib/nova/instances/726f94b2-1110-4b06-b0a7-a82c1832f942/disk.config#033[00m
Nov 29 02:56:59 np0005539564 nova_compute[226295]: 2025-11-29 07:56:59.321 226310 DEBUG oslo_concurrency.processutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/726f94b2-1110-4b06-b0a7-a82c1832f942/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkknpuxcl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:59 np0005539564 nova_compute[226295]: 2025-11-29 07:56:59.466 226310 DEBUG oslo_concurrency.processutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/726f94b2-1110-4b06-b0a7-a82c1832f942/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkknpuxcl" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:59 np0005539564 nova_compute[226295]: 2025-11-29 07:56:59.493 226310 DEBUG nova.storage.rbd_utils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] rbd image 726f94b2-1110-4b06-b0a7-a82c1832f942_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:59 np0005539564 nova_compute[226295]: 2025-11-29 07:56:59.498 226310 DEBUG oslo_concurrency.processutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/726f94b2-1110-4b06-b0a7-a82c1832f942/disk.config 726f94b2-1110-4b06-b0a7-a82c1832f942_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:59 np0005539564 nova_compute[226295]: 2025-11-29 07:56:59.533 226310 DEBUG nova.network.neutron [req-300b127f-af5f-4673-9146-b6d5b19dc548 req-c3926212-7162-40fe-94a7-d89cf16616bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Updated VIF entry in instance network info cache for port 5143962e-d252-4cba-b5b3-5c06789ff1d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:56:59 np0005539564 nova_compute[226295]: 2025-11-29 07:56:59.534 226310 DEBUG nova.network.neutron [req-300b127f-af5f-4673-9146-b6d5b19dc548 req-c3926212-7162-40fe-94a7-d89cf16616bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Updating instance_info_cache with network_info: [{"id": "5143962e-d252-4cba-b5b3-5c06789ff1d6", "address": "fa:16:3e:43:cc:5e", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5143962e-d2", "ovs_interfaceid": "5143962e-d252-4cba-b5b3-5c06789ff1d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:56:59 np0005539564 nova_compute[226295]: 2025-11-29 07:56:59.554 226310 DEBUG oslo_concurrency.lockutils [req-300b127f-af5f-4673-9146-b6d5b19dc548 req-c3926212-7162-40fe-94a7-d89cf16616bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-726f94b2-1110-4b06-b0a7-a82c1832f942" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:57:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:00.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:00.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:00 np0005539564 nova_compute[226295]: 2025-11-29 07:57:00.624 226310 DEBUG oslo_concurrency.processutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/726f94b2-1110-4b06-b0a7-a82c1832f942/disk.config 726f94b2-1110-4b06-b0a7-a82c1832f942_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:00 np0005539564 nova_compute[226295]: 2025-11-29 07:57:00.625 226310 INFO nova.virt.libvirt.driver [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Deleting local config drive /var/lib/nova/instances/726f94b2-1110-4b06-b0a7-a82c1832f942/disk.config because it was imported into RBD.#033[00m
Nov 29 02:57:00 np0005539564 kernel: tap5143962e-d2: entered promiscuous mode
Nov 29 02:57:00 np0005539564 NetworkManager[48997]: <info>  [1764403020.7023] manager: (tap5143962e-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Nov 29 02:57:00 np0005539564 ovn_controller[130591]: 2025-11-29T07:57:00Z|00137|binding|INFO|Claiming lport 5143962e-d252-4cba-b5b3-5c06789ff1d6 for this chassis.
Nov 29 02:57:00 np0005539564 ovn_controller[130591]: 2025-11-29T07:57:00Z|00138|binding|INFO|5143962e-d252-4cba-b5b3-5c06789ff1d6: Claiming fa:16:3e:43:cc:5e 10.100.0.11
Nov 29 02:57:00 np0005539564 nova_compute[226295]: 2025-11-29 07:57:00.701 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:00 np0005539564 nova_compute[226295]: 2025-11-29 07:57:00.707 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:00.729 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:cc:5e 10.100.0.11'], port_security=['fa:16:3e:43:cc:5e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '726f94b2-1110-4b06-b0a7-a82c1832f942', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57fc634c-e12c-411b-a4cb-47f24328da03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '219d722e6a2c4164be5a30e9565f13a0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '402e53f5-f525-45fd-8980-0b9445b1b6de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7363dfa8-cf81-4433-ae14-b180682ce437, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=5143962e-d252-4cba-b5b3-5c06789ff1d6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:57:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:00.731 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 5143962e-d252-4cba-b5b3-5c06789ff1d6 in datapath 57fc634c-e12c-411b-a4cb-47f24328da03 bound to our chassis#033[00m
Nov 29 02:57:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:00.732 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 57fc634c-e12c-411b-a4cb-47f24328da03#033[00m
Nov 29 02:57:00 np0005539564 systemd-udevd[245697]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:57:00 np0005539564 systemd-machined[190128]: New machine qemu-23-instance-00000032.
Nov 29 02:57:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:00.753 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[06ce3007-d496-4334-b29a-75e6314edf98]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:00.754 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap57fc634c-e1 in ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:57:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:00.759 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap57fc634c-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:57:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:00.760 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[abe9e370-a87f-40ee-aa27-f8adbd9b5cf1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:00.761 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9ecda71b-1630-4bc4-b025-12c2637043d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:00 np0005539564 NetworkManager[48997]: <info>  [1764403020.7621] device (tap5143962e-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:57:00 np0005539564 NetworkManager[48997]: <info>  [1764403020.7635] device (tap5143962e-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:57:00 np0005539564 nova_compute[226295]: 2025-11-29 07:57:00.774 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:00 np0005539564 systemd[1]: Started Virtual Machine qemu-23-instance-00000032.
Nov 29 02:57:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:00.775 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[76838203-34c1-406e-a177-09c3408e6870]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:00 np0005539564 ovn_controller[130591]: 2025-11-29T07:57:00Z|00139|binding|INFO|Setting lport 5143962e-d252-4cba-b5b3-5c06789ff1d6 ovn-installed in OVS
Nov 29 02:57:00 np0005539564 ovn_controller[130591]: 2025-11-29T07:57:00Z|00140|binding|INFO|Setting lport 5143962e-d252-4cba-b5b3-5c06789ff1d6 up in Southbound
Nov 29 02:57:00 np0005539564 nova_compute[226295]: 2025-11-29 07:57:00.779 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:00.795 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2d74cb7b-3e33-4975-8db7-7694d61aee2d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:00.833 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[c136bde7-2c65-42d7-9cb5-4955fe1ab219]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:00.839 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6b28c9e9-7f2d-4f23-9ef6-3fe064ee9cf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:00 np0005539564 NetworkManager[48997]: <info>  [1764403020.8401] manager: (tap57fc634c-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/77)
Nov 29 02:57:00 np0005539564 systemd-udevd[245701]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:57:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:00.875 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[1ccd84c5-81c2-4c00-909c-2b5f79f3d5ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:00.878 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[d75b5a37-f5f8-4a71-b6c4-9426835fd1a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:00 np0005539564 NetworkManager[48997]: <info>  [1764403020.9018] device (tap57fc634c-e0): carrier: link connected
Nov 29 02:57:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:00.907 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[b366fdbf-8fbc-40fd-a8ac-88f0038d3e9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:00.926 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[65d56bc3-2561-4c2a-b8e7-04f38f901dfe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap57fc634c-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b0:45:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599558, 'reachable_time': 21778, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245731, 'error': None, 'target': 'ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:00.948 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e6b8bb3b-91de-4f53-a967-219630661d64]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb0:458f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599558, 'tstamp': 599558}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245732, 'error': None, 'target': 'ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:00.966 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[77372e09-f1bf-4d3c-b106-497e87488594]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap57fc634c-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b0:45:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599558, 'reachable_time': 21778, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 245733, 'error': None, 'target': 'ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:00 np0005539564 nova_compute[226295]: 2025-11-29 07:57:00.973 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:00.997 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ac9aee06-21ac-4b7a-a141-e7010e180f3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:01.056 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e73119-b935-4cca-9d8f-6c3e34756f5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:01.058 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap57fc634c-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:01.058 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:01.059 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap57fc634c-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:01 np0005539564 NetworkManager[48997]: <info>  [1764403021.0616] manager: (tap57fc634c-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.062 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:01 np0005539564 kernel: tap57fc634c-e0: entered promiscuous mode
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:01.067 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap57fc634c-e0, col_values=(('external_ids', {'iface-id': '423869da-3f5f-4bc2-bc87-d86bc5a0c7ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.067 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.070 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:01 np0005539564 ovn_controller[130591]: 2025-11-29T07:57:01Z|00141|binding|INFO|Releasing lport 423869da-3f5f-4bc2-bc87-d86bc5a0c7ce from this chassis (sb_readonly=0)
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.100 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:01.104 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/57fc634c-e12c-411b-a4cb-47f24328da03.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/57fc634c-e12c-411b-a4cb-47f24328da03.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.103 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:01.105 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[80278b34-65e7-47d1-a122-fd2c966843a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:01.106 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-57fc634c-e12c-411b-a4cb-47f24328da03
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/57fc634c-e12c-411b-a4cb-47f24328da03.pid.haproxy
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 57fc634c-e12c-411b-a4cb-47f24328da03
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:57:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:01.106 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03', 'env', 'PROCESS_TAG=haproxy-57fc634c-e12c-411b-a4cb-47f24328da03', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/57fc634c-e12c-411b-a4cb-47f24328da03.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.199 226310 DEBUG nova.compute.manager [req-4ce19c2d-997d-4810-8238-0ef6ee84113b req-10cf7465-2fdf-473a-bbbe-13f551f07efa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Received event network-vif-plugged-5143962e-d252-4cba-b5b3-5c06789ff1d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.199 226310 DEBUG oslo_concurrency.lockutils [req-4ce19c2d-997d-4810-8238-0ef6ee84113b req-10cf7465-2fdf-473a-bbbe-13f551f07efa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "726f94b2-1110-4b06-b0a7-a82c1832f942-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.200 226310 DEBUG oslo_concurrency.lockutils [req-4ce19c2d-997d-4810-8238-0ef6ee84113b req-10cf7465-2fdf-473a-bbbe-13f551f07efa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "726f94b2-1110-4b06-b0a7-a82c1832f942-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.200 226310 DEBUG oslo_concurrency.lockutils [req-4ce19c2d-997d-4810-8238-0ef6ee84113b req-10cf7465-2fdf-473a-bbbe-13f551f07efa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "726f94b2-1110-4b06-b0a7-a82c1832f942-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.200 226310 DEBUG nova.compute.manager [req-4ce19c2d-997d-4810-8238-0ef6ee84113b req-10cf7465-2fdf-473a-bbbe-13f551f07efa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Processing event network-vif-plugged-5143962e-d252-4cba-b5b3-5c06789ff1d6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.367 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403021.367, 726f94b2-1110-4b06-b0a7-a82c1832f942 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.368 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] VM Started (Lifecycle Event)#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.371 226310 DEBUG nova.compute.manager [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.377 226310 DEBUG nova.virt.libvirt.driver [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.382 226310 INFO nova.virt.libvirt.driver [-] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Instance spawned successfully.#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.382 226310 DEBUG nova.virt.libvirt.driver [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.391 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.414 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.418 226310 DEBUG nova.virt.libvirt.driver [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.420 226310 DEBUG nova.virt.libvirt.driver [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.420 226310 DEBUG nova.virt.libvirt.driver [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.421 226310 DEBUG nova.virt.libvirt.driver [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.421 226310 DEBUG nova.virt.libvirt.driver [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.422 226310 DEBUG nova.virt.libvirt.driver [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.455 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.456 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403021.3671553, 726f94b2-1110-4b06-b0a7-a82c1832f942 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.456 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.488 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.492 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403021.3755555, 726f94b2-1110-4b06-b0a7-a82c1832f942 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.492 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:57:01 np0005539564 podman[245807]: 2025-11-29 07:57:01.516538128 +0000 UTC m=+0.054391553 container create 6ebc04581badc09b77a0b75e66ede8cdeeb8761f85d089e3c7695b814be19b33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.516 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.518 226310 INFO nova.compute.manager [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Took 8.28 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.524 226310 DEBUG nova.compute.manager [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.528 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:57:01 np0005539564 systemd[1]: Started libpod-conmon-6ebc04581badc09b77a0b75e66ede8cdeeb8761f85d089e3c7695b814be19b33.scope.
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.560 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:57:01 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:57:01 np0005539564 podman[245807]: 2025-11-29 07:57:01.491013918 +0000 UTC m=+0.028867353 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:57:01 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5436b34de3fa4c32b34fe8605c117429689276b18abd34ea3be4c1801d727aad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:57:01 np0005539564 podman[245807]: 2025-11-29 07:57:01.601123606 +0000 UTC m=+0.138977031 container init 6ebc04581badc09b77a0b75e66ede8cdeeb8761f85d089e3c7695b814be19b33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.605 226310 INFO nova.compute.manager [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Took 9.63 seconds to build instance.#033[00m
Nov 29 02:57:01 np0005539564 podman[245807]: 2025-11-29 07:57:01.61159746 +0000 UTC m=+0.149450925 container start 6ebc04581badc09b77a0b75e66ede8cdeeb8761f85d089e3c7695b814be19b33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 02:57:01 np0005539564 nova_compute[226295]: 2025-11-29 07:57:01.623 226310 DEBUG oslo_concurrency.lockutils [None req-0101471b-55f5-4b6b-ac9a-4fc634560564 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "726f94b2-1110-4b06-b0a7-a82c1832f942" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:01 np0005539564 neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03[245823]: [NOTICE]   (245827) : New worker (245829) forked
Nov 29 02:57:01 np0005539564 neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03[245823]: [NOTICE]   (245827) : Loading success.
Nov 29 02:57:02 np0005539564 nova_compute[226295]: 2025-11-29 07:57:02.392 226310 DEBUG oslo_concurrency.lockutils [None req-6b2e567d-50a5-47ea-9fe3-19904e88ff31 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquiring lock "726f94b2-1110-4b06-b0a7-a82c1832f942" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:02 np0005539564 nova_compute[226295]: 2025-11-29 07:57:02.392 226310 DEBUG oslo_concurrency.lockutils [None req-6b2e567d-50a5-47ea-9fe3-19904e88ff31 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "726f94b2-1110-4b06-b0a7-a82c1832f942" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:02 np0005539564 nova_compute[226295]: 2025-11-29 07:57:02.393 226310 DEBUG oslo_concurrency.lockutils [None req-6b2e567d-50a5-47ea-9fe3-19904e88ff31 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquiring lock "726f94b2-1110-4b06-b0a7-a82c1832f942-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:02 np0005539564 nova_compute[226295]: 2025-11-29 07:57:02.393 226310 DEBUG oslo_concurrency.lockutils [None req-6b2e567d-50a5-47ea-9fe3-19904e88ff31 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "726f94b2-1110-4b06-b0a7-a82c1832f942-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:02 np0005539564 nova_compute[226295]: 2025-11-29 07:57:02.393 226310 DEBUG oslo_concurrency.lockutils [None req-6b2e567d-50a5-47ea-9fe3-19904e88ff31 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "726f94b2-1110-4b06-b0a7-a82c1832f942-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:02 np0005539564 nova_compute[226295]: 2025-11-29 07:57:02.394 226310 INFO nova.compute.manager [None req-6b2e567d-50a5-47ea-9fe3-19904e88ff31 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Terminating instance#033[00m
Nov 29 02:57:02 np0005539564 nova_compute[226295]: 2025-11-29 07:57:02.395 226310 DEBUG nova.compute.manager [None req-6b2e567d-50a5-47ea-9fe3-19904e88ff31 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:57:02 np0005539564 kernel: tap5143962e-d2 (unregistering): left promiscuous mode
Nov 29 02:57:02 np0005539564 NetworkManager[48997]: <info>  [1764403022.4260] device (tap5143962e-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:57:02 np0005539564 nova_compute[226295]: 2025-11-29 07:57:02.430 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:02 np0005539564 ovn_controller[130591]: 2025-11-29T07:57:02Z|00142|binding|INFO|Releasing lport 5143962e-d252-4cba-b5b3-5c06789ff1d6 from this chassis (sb_readonly=0)
Nov 29 02:57:02 np0005539564 ovn_controller[130591]: 2025-11-29T07:57:02Z|00143|binding|INFO|Setting lport 5143962e-d252-4cba-b5b3-5c06789ff1d6 down in Southbound
Nov 29 02:57:02 np0005539564 ovn_controller[130591]: 2025-11-29T07:57:02Z|00144|binding|INFO|Removing iface tap5143962e-d2 ovn-installed in OVS
Nov 29 02:57:02 np0005539564 nova_compute[226295]: 2025-11-29 07:57:02.432 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:02.436 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:cc:5e 10.100.0.11'], port_security=['fa:16:3e:43:cc:5e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '726f94b2-1110-4b06-b0a7-a82c1832f942', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57fc634c-e12c-411b-a4cb-47f24328da03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '219d722e6a2c4164be5a30e9565f13a0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '402e53f5-f525-45fd-8980-0b9445b1b6de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7363dfa8-cf81-4433-ae14-b180682ce437, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=5143962e-d252-4cba-b5b3-5c06789ff1d6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:57:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:02.438 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 5143962e-d252-4cba-b5b3-5c06789ff1d6 in datapath 57fc634c-e12c-411b-a4cb-47f24328da03 unbound from our chassis#033[00m
Nov 29 02:57:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:02.439 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 57fc634c-e12c-411b-a4cb-47f24328da03, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:57:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:02.440 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[aaae63b7-e140-480f-b097-760822745172]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:02.440 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03 namespace which is not needed anymore#033[00m
Nov 29 02:57:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:02.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:02 np0005539564 nova_compute[226295]: 2025-11-29 07:57:02.470 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:02.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:02 np0005539564 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000032.scope: Deactivated successfully.
Nov 29 02:57:02 np0005539564 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000032.scope: Consumed 1.657s CPU time.
Nov 29 02:57:02 np0005539564 systemd-machined[190128]: Machine qemu-23-instance-00000032 terminated.
Nov 29 02:57:02 np0005539564 neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03[245823]: [NOTICE]   (245827) : haproxy version is 2.8.14-c23fe91
Nov 29 02:57:02 np0005539564 neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03[245823]: [NOTICE]   (245827) : path to executable is /usr/sbin/haproxy
Nov 29 02:57:02 np0005539564 neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03[245823]: [WARNING]  (245827) : Exiting Master process...
Nov 29 02:57:02 np0005539564 neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03[245823]: [WARNING]  (245827) : Exiting Master process...
Nov 29 02:57:02 np0005539564 neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03[245823]: [ALERT]    (245827) : Current worker (245829) exited with code 143 (Terminated)
Nov 29 02:57:02 np0005539564 neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03[245823]: [WARNING]  (245827) : All workers exited. Exiting... (0)
Nov 29 02:57:02 np0005539564 systemd[1]: libpod-6ebc04581badc09b77a0b75e66ede8cdeeb8761f85d089e3c7695b814be19b33.scope: Deactivated successfully.
Nov 29 02:57:02 np0005539564 podman[245860]: 2025-11-29 07:57:02.582031621 +0000 UTC m=+0.049280473 container died 6ebc04581badc09b77a0b75e66ede8cdeeb8761f85d089e3c7695b814be19b33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:57:02 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6ebc04581badc09b77a0b75e66ede8cdeeb8761f85d089e3c7695b814be19b33-userdata-shm.mount: Deactivated successfully.
Nov 29 02:57:02 np0005539564 systemd[1]: var-lib-containers-storage-overlay-5436b34de3fa4c32b34fe8605c117429689276b18abd34ea3be4c1801d727aad-merged.mount: Deactivated successfully.
Nov 29 02:57:02 np0005539564 NetworkManager[48997]: <info>  [1764403022.6185] manager: (tap5143962e-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Nov 29 02:57:02 np0005539564 podman[245860]: 2025-11-29 07:57:02.630096341 +0000 UTC m=+0.097345173 container cleanup 6ebc04581badc09b77a0b75e66ede8cdeeb8761f85d089e3c7695b814be19b33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 02:57:02 np0005539564 nova_compute[226295]: 2025-11-29 07:57:02.634 226310 INFO nova.virt.libvirt.driver [-] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Instance destroyed successfully.#033[00m
Nov 29 02:57:02 np0005539564 nova_compute[226295]: 2025-11-29 07:57:02.635 226310 DEBUG nova.objects.instance [None req-6b2e567d-50a5-47ea-9fe3-19904e88ff31 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lazy-loading 'resources' on Instance uuid 726f94b2-1110-4b06-b0a7-a82c1832f942 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:57:02 np0005539564 systemd[1]: libpod-conmon-6ebc04581badc09b77a0b75e66ede8cdeeb8761f85d089e3c7695b814be19b33.scope: Deactivated successfully.
Nov 29 02:57:02 np0005539564 nova_compute[226295]: 2025-11-29 07:57:02.659 226310 DEBUG nova.virt.libvirt.vif [None req-6b2e567d-50a5-47ea-9fe3-19904e88ff31 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:56:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1644728623',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1644728623',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1644728623',id=50,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:57:01Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='219d722e6a2c4164be5a30e9565f13a0',ramdisk_id='',reservation_id='r-335fbtdy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-267959441',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-267959441-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:57:01Z,user_data=None,user_id='dbeeaca97c3e4a1b9417ab3e996f721f',uuid=726f94b2-1110-4b06-b0a7-a82c1832f942,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5143962e-d252-4cba-b5b3-5c06789ff1d6", "address": "fa:16:3e:43:cc:5e", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5143962e-d2", "ovs_interfaceid": "5143962e-d252-4cba-b5b3-5c06789ff1d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:57:02 np0005539564 nova_compute[226295]: 2025-11-29 07:57:02.660 226310 DEBUG nova.network.os_vif_util [None req-6b2e567d-50a5-47ea-9fe3-19904e88ff31 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Converting VIF {"id": "5143962e-d252-4cba-b5b3-5c06789ff1d6", "address": "fa:16:3e:43:cc:5e", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5143962e-d2", "ovs_interfaceid": "5143962e-d252-4cba-b5b3-5c06789ff1d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:57:02 np0005539564 nova_compute[226295]: 2025-11-29 07:57:02.661 226310 DEBUG nova.network.os_vif_util [None req-6b2e567d-50a5-47ea-9fe3-19904e88ff31 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:cc:5e,bridge_name='br-int',has_traffic_filtering=True,id=5143962e-d252-4cba-b5b3-5c06789ff1d6,network=Network(57fc634c-e12c-411b-a4cb-47f24328da03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5143962e-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:57:02 np0005539564 nova_compute[226295]: 2025-11-29 07:57:02.661 226310 DEBUG os_vif [None req-6b2e567d-50a5-47ea-9fe3-19904e88ff31 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:cc:5e,bridge_name='br-int',has_traffic_filtering=True,id=5143962e-d252-4cba-b5b3-5c06789ff1d6,network=Network(57fc634c-e12c-411b-a4cb-47f24328da03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5143962e-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:57:02 np0005539564 nova_compute[226295]: 2025-11-29 07:57:02.665 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:02 np0005539564 nova_compute[226295]: 2025-11-29 07:57:02.666 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5143962e-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:02 np0005539564 nova_compute[226295]: 2025-11-29 07:57:02.689 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:02 np0005539564 nova_compute[226295]: 2025-11-29 07:57:02.690 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:02 np0005539564 nova_compute[226295]: 2025-11-29 07:57:02.693 226310 INFO os_vif [None req-6b2e567d-50a5-47ea-9fe3-19904e88ff31 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:cc:5e,bridge_name='br-int',has_traffic_filtering=True,id=5143962e-d252-4cba-b5b3-5c06789ff1d6,network=Network(57fc634c-e12c-411b-a4cb-47f24328da03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5143962e-d2')#033[00m
Nov 29 02:57:02 np0005539564 podman[245898]: 2025-11-29 07:57:02.729217253 +0000 UTC m=+0.071998289 container remove 6ebc04581badc09b77a0b75e66ede8cdeeb8761f85d089e3c7695b814be19b33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 02:57:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:02.734 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[36ee32d6-e395-4482-96f3-5dfeb188cd90]: (4, ('Sat Nov 29 07:57:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03 (6ebc04581badc09b77a0b75e66ede8cdeeb8761f85d089e3c7695b814be19b33)\n6ebc04581badc09b77a0b75e66ede8cdeeb8761f85d089e3c7695b814be19b33\nSat Nov 29 07:57:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03 (6ebc04581badc09b77a0b75e66ede8cdeeb8761f85d089e3c7695b814be19b33)\n6ebc04581badc09b77a0b75e66ede8cdeeb8761f85d089e3c7695b814be19b33\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:02.736 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[231f8d32-a52b-4791-8984-b7d49bd34224]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:02.738 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap57fc634c-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:02 np0005539564 nova_compute[226295]: 2025-11-29 07:57:02.740 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:02 np0005539564 kernel: tap57fc634c-e0: left promiscuous mode
Nov 29 02:57:02 np0005539564 nova_compute[226295]: 2025-11-29 07:57:02.757 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:02.761 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7d018703-1518-4b0e-93ee-b9664845cce8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:02.776 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f081ac3c-6ed3-4229-9a69-1e7df6c71608]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:02.778 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[80084c74-f547-433c-92b7-737554a8247e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:02.795 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[cda70b32-1eb5-45e1-93ca-be1c9e7b4e95]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599550, 'reachable_time': 18556, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245929, 'error': None, 'target': 'ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:02 np0005539564 systemd[1]: run-netns-ovnmeta\x2d57fc634c\x2de12c\x2d411b\x2da4cb\x2d47f24328da03.mount: Deactivated successfully.
Nov 29 02:57:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:02.800 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:57:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:02.800 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[cb5391f6-da3c-4958-abff-31139f4d11d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.139 226310 INFO nova.virt.libvirt.driver [None req-6b2e567d-50a5-47ea-9fe3-19904e88ff31 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Deleting instance files /var/lib/nova/instances/726f94b2-1110-4b06-b0a7-a82c1832f942_del#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.140 226310 INFO nova.virt.libvirt.driver [None req-6b2e567d-50a5-47ea-9fe3-19904e88ff31 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Deletion of /var/lib/nova/instances/726f94b2-1110-4b06-b0a7-a82c1832f942_del complete#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.195 226310 INFO nova.compute.manager [None req-6b2e567d-50a5-47ea-9fe3-19904e88ff31 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.196 226310 DEBUG oslo.service.loopingcall [None req-6b2e567d-50a5-47ea-9fe3-19904e88ff31 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.196 226310 DEBUG nova.compute.manager [-] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.196 226310 DEBUG nova.network.neutron [-] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.307 226310 DEBUG nova.compute.manager [req-d7f97de8-29df-4358-96c1-0898965b9b7c req-d8f563ad-1b2a-4934-ab26-8949c247b341 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Received event network-vif-plugged-5143962e-d252-4cba-b5b3-5c06789ff1d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.307 226310 DEBUG oslo_concurrency.lockutils [req-d7f97de8-29df-4358-96c1-0898965b9b7c req-d8f563ad-1b2a-4934-ab26-8949c247b341 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "726f94b2-1110-4b06-b0a7-a82c1832f942-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.307 226310 DEBUG oslo_concurrency.lockutils [req-d7f97de8-29df-4358-96c1-0898965b9b7c req-d8f563ad-1b2a-4934-ab26-8949c247b341 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "726f94b2-1110-4b06-b0a7-a82c1832f942-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.308 226310 DEBUG oslo_concurrency.lockutils [req-d7f97de8-29df-4358-96c1-0898965b9b7c req-d8f563ad-1b2a-4934-ab26-8949c247b341 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "726f94b2-1110-4b06-b0a7-a82c1832f942-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.308 226310 DEBUG nova.compute.manager [req-d7f97de8-29df-4358-96c1-0898965b9b7c req-d8f563ad-1b2a-4934-ab26-8949c247b341 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] No waiting events found dispatching network-vif-plugged-5143962e-d252-4cba-b5b3-5c06789ff1d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.308 226310 WARNING nova.compute.manager [req-d7f97de8-29df-4358-96c1-0898965b9b7c req-d8f563ad-1b2a-4934-ab26-8949c247b341 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Received unexpected event network-vif-plugged-5143962e-d252-4cba-b5b3-5c06789ff1d6 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.308 226310 DEBUG nova.compute.manager [req-d7f97de8-29df-4358-96c1-0898965b9b7c req-d8f563ad-1b2a-4934-ab26-8949c247b341 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Received event network-vif-unplugged-5143962e-d252-4cba-b5b3-5c06789ff1d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.308 226310 DEBUG oslo_concurrency.lockutils [req-d7f97de8-29df-4358-96c1-0898965b9b7c req-d8f563ad-1b2a-4934-ab26-8949c247b341 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "726f94b2-1110-4b06-b0a7-a82c1832f942-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.309 226310 DEBUG oslo_concurrency.lockutils [req-d7f97de8-29df-4358-96c1-0898965b9b7c req-d8f563ad-1b2a-4934-ab26-8949c247b341 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "726f94b2-1110-4b06-b0a7-a82c1832f942-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.309 226310 DEBUG oslo_concurrency.lockutils [req-d7f97de8-29df-4358-96c1-0898965b9b7c req-d8f563ad-1b2a-4934-ab26-8949c247b341 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "726f94b2-1110-4b06-b0a7-a82c1832f942-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.309 226310 DEBUG nova.compute.manager [req-d7f97de8-29df-4358-96c1-0898965b9b7c req-d8f563ad-1b2a-4934-ab26-8949c247b341 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] No waiting events found dispatching network-vif-unplugged-5143962e-d252-4cba-b5b3-5c06789ff1d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.309 226310 DEBUG nova.compute.manager [req-d7f97de8-29df-4358-96c1-0898965b9b7c req-d8f563ad-1b2a-4934-ab26-8949c247b341 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Received event network-vif-unplugged-5143962e-d252-4cba-b5b3-5c06789ff1d6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.310 226310 DEBUG nova.compute.manager [req-d7f97de8-29df-4358-96c1-0898965b9b7c req-d8f563ad-1b2a-4934-ab26-8949c247b341 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Received event network-vif-plugged-5143962e-d252-4cba-b5b3-5c06789ff1d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.310 226310 DEBUG oslo_concurrency.lockutils [req-d7f97de8-29df-4358-96c1-0898965b9b7c req-d8f563ad-1b2a-4934-ab26-8949c247b341 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "726f94b2-1110-4b06-b0a7-a82c1832f942-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.310 226310 DEBUG oslo_concurrency.lockutils [req-d7f97de8-29df-4358-96c1-0898965b9b7c req-d8f563ad-1b2a-4934-ab26-8949c247b341 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "726f94b2-1110-4b06-b0a7-a82c1832f942-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.310 226310 DEBUG oslo_concurrency.lockutils [req-d7f97de8-29df-4358-96c1-0898965b9b7c req-d8f563ad-1b2a-4934-ab26-8949c247b341 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "726f94b2-1110-4b06-b0a7-a82c1832f942-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.310 226310 DEBUG nova.compute.manager [req-d7f97de8-29df-4358-96c1-0898965b9b7c req-d8f563ad-1b2a-4934-ab26-8949c247b341 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] No waiting events found dispatching network-vif-plugged-5143962e-d252-4cba-b5b3-5c06789ff1d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.311 226310 WARNING nova.compute.manager [req-d7f97de8-29df-4358-96c1-0898965b9b7c req-d8f563ad-1b2a-4934-ab26-8949c247b341 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Received unexpected event network-vif-plugged-5143962e-d252-4cba-b5b3-5c06789ff1d6 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.507 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.549 226310 WARNING nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.550 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Triggering sync for uuid 726f94b2-1110-4b06-b0a7-a82c1832f942 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 02:57:03 np0005539564 nova_compute[226295]: 2025-11-29 07:57:03.550 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "726f94b2-1110-4b06-b0a7-a82c1832f942" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:03.706 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:03.707 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:03.707 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:04.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:04.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:04 np0005539564 nova_compute[226295]: 2025-11-29 07:57:04.539 226310 DEBUG nova.network.neutron [-] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:57:04 np0005539564 nova_compute[226295]: 2025-11-29 07:57:04.555 226310 INFO nova.compute.manager [-] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Took 1.36 seconds to deallocate network for instance.#033[00m
Nov 29 02:57:04 np0005539564 nova_compute[226295]: 2025-11-29 07:57:04.659 226310 DEBUG oslo_concurrency.lockutils [None req-6b2e567d-50a5-47ea-9fe3-19904e88ff31 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:04 np0005539564 nova_compute[226295]: 2025-11-29 07:57:04.660 226310 DEBUG oslo_concurrency.lockutils [None req-6b2e567d-50a5-47ea-9fe3-19904e88ff31 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:04 np0005539564 nova_compute[226295]: 2025-11-29 07:57:04.677 226310 DEBUG nova.compute.manager [req-158a337a-34df-40b9-bfc2-d52c17190207 req-740f884e-d5a1-452f-af1b-80dec87b565a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Received event network-vif-deleted-5143962e-d252-4cba-b5b3-5c06789ff1d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:57:04 np0005539564 nova_compute[226295]: 2025-11-29 07:57:04.746 226310 DEBUG oslo_concurrency.processutils [None req-6b2e567d-50a5-47ea-9fe3-19904e88ff31 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:05 np0005539564 podman[245944]: 2025-11-29 07:57:05.527623365 +0000 UTC m=+0.082713459 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Nov 29 02:57:05 np0005539564 podman[245943]: 2025-11-29 07:57:05.528034126 +0000 UTC m=+0.088461974 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 02:57:05 np0005539564 podman[245942]: 2025-11-29 07:57:05.555943551 +0000 UTC m=+0.118852676 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller)
Nov 29 02:57:05 np0005539564 nova_compute[226295]: 2025-11-29 07:57:05.978 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:57:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:06.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:57:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:06.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:57:06 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2915494390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:57:06 np0005539564 nova_compute[226295]: 2025-11-29 07:57:06.594 226310 DEBUG oslo_concurrency.processutils [None req-6b2e567d-50a5-47ea-9fe3-19904e88ff31 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.848s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:06 np0005539564 nova_compute[226295]: 2025-11-29 07:57:06.603 226310 DEBUG nova.compute.provider_tree [None req-6b2e567d-50a5-47ea-9fe3-19904e88ff31 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:57:06 np0005539564 nova_compute[226295]: 2025-11-29 07:57:06.622 226310 DEBUG nova.scheduler.client.report [None req-6b2e567d-50a5-47ea-9fe3-19904e88ff31 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:57:06 np0005539564 nova_compute[226295]: 2025-11-29 07:57:06.658 226310 DEBUG oslo_concurrency.lockutils [None req-6b2e567d-50a5-47ea-9fe3-19904e88ff31 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:06 np0005539564 nova_compute[226295]: 2025-11-29 07:57:06.690 226310 INFO nova.scheduler.client.report [None req-6b2e567d-50a5-47ea-9fe3-19904e88ff31 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Deleted allocations for instance 726f94b2-1110-4b06-b0a7-a82c1832f942#033[00m
Nov 29 02:57:06 np0005539564 nova_compute[226295]: 2025-11-29 07:57:06.934 226310 DEBUG oslo_concurrency.lockutils [None req-6b2e567d-50a5-47ea-9fe3-19904e88ff31 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "726f94b2-1110-4b06-b0a7-a82c1832f942" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:06 np0005539564 nova_compute[226295]: 2025-11-29 07:57:06.935 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "726f94b2-1110-4b06-b0a7-a82c1832f942" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 3.385s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:06 np0005539564 nova_compute[226295]: 2025-11-29 07:57:06.936 226310 INFO nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Nov 29 02:57:06 np0005539564 nova_compute[226295]: 2025-11-29 07:57:06.936 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "726f94b2-1110-4b06-b0a7-a82c1832f942" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:07 np0005539564 nova_compute[226295]: 2025-11-29 07:57:07.690 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:07 np0005539564 ceph-mgr[82125]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2945860420
Nov 29 02:57:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:08.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:08.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:10.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:57:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:10.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:57:10 np0005539564 nova_compute[226295]: 2025-11-29 07:57:10.980 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:12.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:12.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:12 np0005539564 nova_compute[226295]: 2025-11-29 07:57:12.692 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e199 e199: 3 total, 3 up, 3 in
Nov 29 02:57:14 np0005539564 nova_compute[226295]: 2025-11-29 07:57:14.030 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:14.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:57:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:14.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:57:15 np0005539564 nova_compute[226295]: 2025-11-29 07:57:15.982 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:57:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:16.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:57:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:16.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:17 np0005539564 nova_compute[226295]: 2025-11-29 07:57:17.634 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403022.6322653, 726f94b2-1110-4b06-b0a7-a82c1832f942 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:57:17 np0005539564 nova_compute[226295]: 2025-11-29 07:57:17.635 226310 INFO nova.compute.manager [-] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:57:17 np0005539564 nova_compute[226295]: 2025-11-29 07:57:17.655 226310 DEBUG nova.compute.manager [None req-fea47a88-b934-4f66-9dcc-141124364b08 - - - - - -] [instance: 726f94b2-1110-4b06-b0a7-a82c1832f942] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:57:17 np0005539564 nova_compute[226295]: 2025-11-29 07:57:17.694 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:18.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:18.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e200 e200: 3 total, 3 up, 3 in
Nov 29 02:57:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:20.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:57:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:20.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:57:20 np0005539564 nova_compute[226295]: 2025-11-29 07:57:20.985 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:22.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:22.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:22 np0005539564 nova_compute[226295]: 2025-11-29 07:57:22.755 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:24.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:24.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:24.795 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:57:24 np0005539564 nova_compute[226295]: 2025-11-29 07:57:24.796 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:24.797 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:57:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e201 e201: 3 total, 3 up, 3 in
Nov 29 02:57:25 np0005539564 nova_compute[226295]: 2025-11-29 07:57:25.991 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:26.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:26.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e202 e202: 3 total, 3 up, 3 in
Nov 29 02:57:27 np0005539564 nova_compute[226295]: 2025-11-29 07:57:27.757 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:28 np0005539564 nova_compute[226295]: 2025-11-29 07:57:28.380 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:57:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:28.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:28.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:29 np0005539564 nova_compute[226295]: 2025-11-29 07:57:29.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:57:29 np0005539564 nova_compute[226295]: 2025-11-29 07:57:29.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:57:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e203 e203: 3 total, 3 up, 3 in
Nov 29 02:57:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:30.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:30.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:30 np0005539564 nova_compute[226295]: 2025-11-29 07:57:30.994 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:31 np0005539564 nova_compute[226295]: 2025-11-29 07:57:31.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:57:31 np0005539564 nova_compute[226295]: 2025-11-29 07:57:31.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:57:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:57:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:32.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:57:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:32.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:32 np0005539564 nova_compute[226295]: 2025-11-29 07:57:32.759 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:57:32.799 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:33 np0005539564 nova_compute[226295]: 2025-11-29 07:57:33.133 226310 DEBUG oslo_concurrency.lockutils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Acquiring lock "a371a1b6-e687-429a-a4d8-338fe777c73e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:33 np0005539564 nova_compute[226295]: 2025-11-29 07:57:33.134 226310 DEBUG oslo_concurrency.lockutils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "a371a1b6-e687-429a-a4d8-338fe777c73e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:33 np0005539564 nova_compute[226295]: 2025-11-29 07:57:33.154 226310 DEBUG nova.compute.manager [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:57:33 np0005539564 nova_compute[226295]: 2025-11-29 07:57:33.233 226310 DEBUG oslo_concurrency.lockutils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:33 np0005539564 nova_compute[226295]: 2025-11-29 07:57:33.234 226310 DEBUG oslo_concurrency.lockutils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:33 np0005539564 nova_compute[226295]: 2025-11-29 07:57:33.240 226310 DEBUG nova.virt.hardware [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:57:33 np0005539564 nova_compute[226295]: 2025-11-29 07:57:33.241 226310 INFO nova.compute.claims [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 02:57:33 np0005539564 nova_compute[226295]: 2025-11-29 07:57:33.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:57:33 np0005539564 nova_compute[226295]: 2025-11-29 07:57:33.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:57:33 np0005539564 nova_compute[226295]: 2025-11-29 07:57:33.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:57:33 np0005539564 nova_compute[226295]: 2025-11-29 07:57:33.367 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 02:57:33 np0005539564 nova_compute[226295]: 2025-11-29 07:57:33.368 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:57:33 np0005539564 nova_compute[226295]: 2025-11-29 07:57:33.444 226310 DEBUG nova.scheduler.client.report [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Refreshing inventories for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 02:57:33 np0005539564 nova_compute[226295]: 2025-11-29 07:57:33.462 226310 DEBUG nova.scheduler.client.report [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Updating ProviderTree inventory for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 02:57:33 np0005539564 nova_compute[226295]: 2025-11-29 07:57:33.463 226310 DEBUG nova.compute.provider_tree [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Updating inventory in ProviderTree for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:57:33 np0005539564 nova_compute[226295]: 2025-11-29 07:57:33.479 226310 DEBUG nova.scheduler.client.report [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Refreshing aggregate associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 02:57:33 np0005539564 nova_compute[226295]: 2025-11-29 07:57:33.504 226310 DEBUG nova.scheduler.client.report [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Refreshing trait associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 02:57:33 np0005539564 nova_compute[226295]: 2025-11-29 07:57:33.550 226310 DEBUG oslo_concurrency.processutils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:57:34 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2663283178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:57:34 np0005539564 nova_compute[226295]: 2025-11-29 07:57:34.024 226310 DEBUG oslo_concurrency.processutils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:34 np0005539564 nova_compute[226295]: 2025-11-29 07:57:34.033 226310 DEBUG nova.compute.provider_tree [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:57:34 np0005539564 nova_compute[226295]: 2025-11-29 07:57:34.057 226310 DEBUG nova.scheduler.client.report [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:57:34 np0005539564 nova_compute[226295]: 2025-11-29 07:57:34.105 226310 DEBUG oslo_concurrency.lockutils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:34 np0005539564 nova_compute[226295]: 2025-11-29 07:57:34.107 226310 DEBUG nova.compute.manager [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:57:34 np0005539564 nova_compute[226295]: 2025-11-29 07:57:34.160 226310 DEBUG nova.compute.manager [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:57:34 np0005539564 nova_compute[226295]: 2025-11-29 07:57:34.160 226310 DEBUG nova.network.neutron [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:57:34 np0005539564 nova_compute[226295]: 2025-11-29 07:57:34.179 226310 INFO nova.virt.libvirt.driver [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:57:34 np0005539564 nova_compute[226295]: 2025-11-29 07:57:34.197 226310 DEBUG nova.compute.manager [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:57:34 np0005539564 nova_compute[226295]: 2025-11-29 07:57:34.285 226310 DEBUG nova.compute.manager [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:57:34 np0005539564 nova_compute[226295]: 2025-11-29 07:57:34.286 226310 DEBUG nova.virt.libvirt.driver [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:57:34 np0005539564 nova_compute[226295]: 2025-11-29 07:57:34.287 226310 INFO nova.virt.libvirt.driver [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Creating image(s)#033[00m
Nov 29 02:57:34 np0005539564 nova_compute[226295]: 2025-11-29 07:57:34.318 226310 DEBUG nova.storage.rbd_utils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] rbd image a371a1b6-e687-429a-a4d8-338fe777c73e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:34 np0005539564 nova_compute[226295]: 2025-11-29 07:57:34.350 226310 DEBUG nova.storage.rbd_utils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] rbd image a371a1b6-e687-429a-a4d8-338fe777c73e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:34 np0005539564 nova_compute[226295]: 2025-11-29 07:57:34.383 226310 DEBUG nova.storage.rbd_utils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] rbd image a371a1b6-e687-429a-a4d8-338fe777c73e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:34 np0005539564 nova_compute[226295]: 2025-11-29 07:57:34.388 226310 DEBUG oslo_concurrency.processutils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:34 np0005539564 nova_compute[226295]: 2025-11-29 07:57:34.419 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:57:34 np0005539564 nova_compute[226295]: 2025-11-29 07:57:34.419 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:57:34 np0005539564 nova_compute[226295]: 2025-11-29 07:57:34.481 226310 DEBUG oslo_concurrency.processutils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:34 np0005539564 nova_compute[226295]: 2025-11-29 07:57:34.482 226310 DEBUG oslo_concurrency.lockutils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:34 np0005539564 nova_compute[226295]: 2025-11-29 07:57:34.483 226310 DEBUG oslo_concurrency.lockutils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:34 np0005539564 nova_compute[226295]: 2025-11-29 07:57:34.483 226310 DEBUG oslo_concurrency.lockutils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:34 np0005539564 nova_compute[226295]: 2025-11-29 07:57:34.507 226310 DEBUG nova.storage.rbd_utils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] rbd image a371a1b6-e687-429a-a4d8-338fe777c73e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:34.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:34 np0005539564 nova_compute[226295]: 2025-11-29 07:57:34.511 226310 DEBUG oslo_concurrency.processutils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf a371a1b6-e687-429a-a4d8-338fe777c73e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:34.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:34 np0005539564 nova_compute[226295]: 2025-11-29 07:57:34.815 226310 DEBUG nova.network.neutron [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 29 02:57:34 np0005539564 nova_compute[226295]: 2025-11-29 07:57:34.815 226310 DEBUG nova.compute.manager [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.112 226310 DEBUG oslo_concurrency.processutils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf a371a1b6-e687-429a-a4d8-338fe777c73e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.218 226310 DEBUG nova.storage.rbd_utils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] resizing rbd image a371a1b6-e687-429a-a4d8-338fe777c73e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.396 226310 DEBUG nova.objects.instance [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lazy-loading 'migration_context' on Instance uuid a371a1b6-e687-429a-a4d8-338fe777c73e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.424 226310 DEBUG nova.virt.libvirt.driver [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.425 226310 DEBUG nova.virt.libvirt.driver [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Ensure instance console log exists: /var/lib/nova/instances/a371a1b6-e687-429a-a4d8-338fe777c73e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.426 226310 DEBUG oslo_concurrency.lockutils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.426 226310 DEBUG oslo_concurrency.lockutils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.426 226310 DEBUG oslo_concurrency.lockutils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.428 226310 DEBUG nova.virt.libvirt.driver [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.433 226310 WARNING nova.virt.libvirt.driver [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.439 226310 DEBUG nova.virt.libvirt.host [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.440 226310 DEBUG nova.virt.libvirt.host [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.444 226310 DEBUG nova.virt.libvirt.host [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.445 226310 DEBUG nova.virt.libvirt.host [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.446 226310 DEBUG nova.virt.libvirt.driver [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.446 226310 DEBUG nova.virt.hardware [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.446 226310 DEBUG nova.virt.hardware [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.447 226310 DEBUG nova.virt.hardware [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.447 226310 DEBUG nova.virt.hardware [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.447 226310 DEBUG nova.virt.hardware [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.447 226310 DEBUG nova.virt.hardware [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.447 226310 DEBUG nova.virt.hardware [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.448 226310 DEBUG nova.virt.hardware [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.448 226310 DEBUG nova.virt.hardware [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.448 226310 DEBUG nova.virt.hardware [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.448 226310 DEBUG nova.virt.hardware [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.451 226310 DEBUG oslo_concurrency.processutils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:57:35 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2192504686' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.921 226310 DEBUG oslo_concurrency.processutils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.960 226310 DEBUG nova.storage.rbd_utils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] rbd image a371a1b6-e687-429a-a4d8-338fe777c73e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.966 226310 DEBUG oslo_concurrency.processutils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:35 np0005539564 nova_compute[226295]: 2025-11-29 07:57:35.998 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:36 np0005539564 nova_compute[226295]: 2025-11-29 07:57:36.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:57:36 np0005539564 nova_compute[226295]: 2025-11-29 07:57:36.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:57:36 np0005539564 nova_compute[226295]: 2025-11-29 07:57:36.374 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:36 np0005539564 nova_compute[226295]: 2025-11-29 07:57:36.375 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:36 np0005539564 nova_compute[226295]: 2025-11-29 07:57:36.375 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:36 np0005539564 nova_compute[226295]: 2025-11-29 07:57:36.375 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:57:36 np0005539564 nova_compute[226295]: 2025-11-29 07:57:36.376 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:57:36 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3443759431' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:57:36 np0005539564 nova_compute[226295]: 2025-11-29 07:57:36.407 226310 DEBUG oslo_concurrency.processutils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:36 np0005539564 nova_compute[226295]: 2025-11-29 07:57:36.410 226310 DEBUG nova.objects.instance [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lazy-loading 'pci_devices' on Instance uuid a371a1b6-e687-429a-a4d8-338fe777c73e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:57:36 np0005539564 nova_compute[226295]: 2025-11-29 07:57:36.434 226310 DEBUG nova.virt.libvirt.driver [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:57:36 np0005539564 nova_compute[226295]:  <uuid>a371a1b6-e687-429a-a4d8-338fe777c73e</uuid>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:  <name>instance-00000034</name>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      <nova:name>tempest-ListImageFiltersTestJSON-server-1749169198</nova:name>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 07:57:35</nova:creationTime>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 02:57:36 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:        <nova:user uuid="c2aeea466c9049d3a023483ec2e5b4f6">tempest-ListImageFiltersTestJSON-667978844-project-member</nova:user>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:        <nova:project uuid="30d42ce85b6840c6942b24cf4a7b9d64">tempest-ListImageFiltersTestJSON-667978844</nova:project>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      <nova:ports/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <system>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      <entry name="serial">a371a1b6-e687-429a-a4d8-338fe777c73e</entry>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      <entry name="uuid">a371a1b6-e687-429a-a4d8-338fe777c73e</entry>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    </system>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:  <os>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:  </clock>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/a371a1b6-e687-429a-a4d8-338fe777c73e_disk">
Nov 29 02:57:36 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:57:36 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/a371a1b6-e687-429a-a4d8-338fe777c73e_disk.config">
Nov 29 02:57:36 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:57:36 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/a371a1b6-e687-429a-a4d8-338fe777c73e/console.log" append="off"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    </serial>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <video>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:36 np0005539564 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:36 np0005539564 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 02:57:36 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 02:57:36 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:57:36 np0005539564 nova_compute[226295]: </domain>
Nov 29 02:57:36 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:57:36 np0005539564 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:57:36 np0005539564 podman[246274]: 2025-11-29 07:57:36.51022976 +0000 UTC m=+0.057913497 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 02:57:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:36.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:36 np0005539564 podman[246273]: 2025-11-29 07:57:36.517824915 +0000 UTC m=+0.070737764 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125)
Nov 29 02:57:36 np0005539564 nova_compute[226295]: 2025-11-29 07:57:36.534 226310 DEBUG nova.virt.libvirt.driver [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:57:36 np0005539564 nova_compute[226295]: 2025-11-29 07:57:36.534 226310 DEBUG nova.virt.libvirt.driver [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:57:36 np0005539564 nova_compute[226295]: 2025-11-29 07:57:36.535 226310 INFO nova.virt.libvirt.driver [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Using config drive#033[00m
Nov 29 02:57:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:36.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:36 np0005539564 nova_compute[226295]: 2025-11-29 07:57:36.565 226310 DEBUG nova.storage.rbd_utils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] rbd image a371a1b6-e687-429a-a4d8-338fe777c73e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:36 np0005539564 podman[246272]: 2025-11-29 07:57:36.571565799 +0000 UTC m=+0.125118135 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:57:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:57:36 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1285861010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:57:36 np0005539564 nova_compute[226295]: 2025-11-29 07:57:36.816 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:36 np0005539564 nova_compute[226295]: 2025-11-29 07:57:36.826 226310 INFO nova.virt.libvirt.driver [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Creating config drive at /var/lib/nova/instances/a371a1b6-e687-429a-a4d8-338fe777c73e/disk.config#033[00m
Nov 29 02:57:36 np0005539564 nova_compute[226295]: 2025-11-29 07:57:36.836 226310 DEBUG oslo_concurrency.processutils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a371a1b6-e687-429a-a4d8-338fe777c73e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppzs53twq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:36 np0005539564 nova_compute[226295]: 2025-11-29 07:57:36.941 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:57:36 np0005539564 nova_compute[226295]: 2025-11-29 07:57:36.941 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:57:36 np0005539564 nova_compute[226295]: 2025-11-29 07:57:36.975 226310 DEBUG oslo_concurrency.processutils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a371a1b6-e687-429a-a4d8-338fe777c73e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppzs53twq" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:37 np0005539564 nova_compute[226295]: 2025-11-29 07:57:37.019 226310 DEBUG nova.storage.rbd_utils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] rbd image a371a1b6-e687-429a-a4d8-338fe777c73e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:37 np0005539564 nova_compute[226295]: 2025-11-29 07:57:37.023 226310 DEBUG oslo_concurrency.processutils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a371a1b6-e687-429a-a4d8-338fe777c73e/disk.config a371a1b6-e687-429a-a4d8-338fe777c73e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:37 np0005539564 nova_compute[226295]: 2025-11-29 07:57:37.218 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:57:37 np0005539564 nova_compute[226295]: 2025-11-29 07:57:37.219 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4704MB free_disk=20.960533142089844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:57:37 np0005539564 nova_compute[226295]: 2025-11-29 07:57:37.220 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:37 np0005539564 nova_compute[226295]: 2025-11-29 07:57:37.220 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:37 np0005539564 nova_compute[226295]: 2025-11-29 07:57:37.251 226310 DEBUG oslo_concurrency.processutils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a371a1b6-e687-429a-a4d8-338fe777c73e/disk.config a371a1b6-e687-429a-a4d8-338fe777c73e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.228s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:37 np0005539564 nova_compute[226295]: 2025-11-29 07:57:37.252 226310 INFO nova.virt.libvirt.driver [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Deleting local config drive /var/lib/nova/instances/a371a1b6-e687-429a-a4d8-338fe777c73e/disk.config because it was imported into RBD.#033[00m
Nov 29 02:57:37 np0005539564 nova_compute[226295]: 2025-11-29 07:57:37.287 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance a371a1b6-e687-429a-a4d8-338fe777c73e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:57:37 np0005539564 nova_compute[226295]: 2025-11-29 07:57:37.287 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:57:37 np0005539564 nova_compute[226295]: 2025-11-29 07:57:37.288 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:57:37 np0005539564 systemd-machined[190128]: New machine qemu-24-instance-00000034.
Nov 29 02:57:37 np0005539564 nova_compute[226295]: 2025-11-29 07:57:37.325 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:57:37 np0005539564 systemd[1]: Started Virtual Machine qemu-24-instance-00000034.
Nov 29 02:57:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:57:37 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1639132123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:57:37 np0005539564 nova_compute[226295]: 2025-11-29 07:57:37.814 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:37 np0005539564 nova_compute[226295]: 2025-11-29 07:57:37.834 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:57:37 np0005539564 nova_compute[226295]: 2025-11-29 07:57:37.842 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:57:37 np0005539564 nova_compute[226295]: 2025-11-29 07:57:37.858 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:57:37 np0005539564 nova_compute[226295]: 2025-11-29 07:57:37.895 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 02:57:37 np0005539564 nova_compute[226295]: 2025-11-29 07:57:37.895 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:57:37 np0005539564 nova_compute[226295]: 2025-11-29 07:57:37.930 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403057.9302964, a371a1b6-e687-429a-a4d8-338fe777c73e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:57:37 np0005539564 nova_compute[226295]: 2025-11-29 07:57:37.932 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] VM Resumed (Lifecycle Event)
Nov 29 02:57:37 np0005539564 nova_compute[226295]: 2025-11-29 07:57:37.936 226310 DEBUG nova.compute.manager [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 02:57:37 np0005539564 nova_compute[226295]: 2025-11-29 07:57:37.937 226310 DEBUG nova.virt.libvirt.driver [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 02:57:37 np0005539564 nova_compute[226295]: 2025-11-29 07:57:37.942 226310 INFO nova.virt.libvirt.driver [-] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Instance spawned successfully.
Nov 29 02:57:37 np0005539564 nova_compute[226295]: 2025-11-29 07:57:37.943 226310 DEBUG nova.virt.libvirt.driver [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 02:57:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:38 np0005539564 nova_compute[226295]: 2025-11-29 07:57:38.018 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:57:38 np0005539564 nova_compute[226295]: 2025-11-29 07:57:38.025 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:57:38 np0005539564 nova_compute[226295]: 2025-11-29 07:57:38.076 226310 DEBUG nova.virt.libvirt.driver [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:57:38 np0005539564 nova_compute[226295]: 2025-11-29 07:57:38.077 226310 DEBUG nova.virt.libvirt.driver [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:57:38 np0005539564 nova_compute[226295]: 2025-11-29 07:57:38.078 226310 DEBUG nova.virt.libvirt.driver [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:57:38 np0005539564 nova_compute[226295]: 2025-11-29 07:57:38.079 226310 DEBUG nova.virt.libvirt.driver [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:57:38 np0005539564 nova_compute[226295]: 2025-11-29 07:57:38.080 226310 DEBUG nova.virt.libvirt.driver [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:57:38 np0005539564 nova_compute[226295]: 2025-11-29 07:57:38.081 226310 DEBUG nova.virt.libvirt.driver [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:57:38 np0005539564 nova_compute[226295]: 2025-11-29 07:57:38.102 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:57:38 np0005539564 nova_compute[226295]: 2025-11-29 07:57:38.103 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403057.934366, a371a1b6-e687-429a-a4d8-338fe777c73e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:57:38 np0005539564 nova_compute[226295]: 2025-11-29 07:57:38.103 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] VM Started (Lifecycle Event)
Nov 29 02:57:38 np0005539564 nova_compute[226295]: 2025-11-29 07:57:38.224 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:57:38 np0005539564 nova_compute[226295]: 2025-11-29 07:57:38.228 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:57:38 np0005539564 nova_compute[226295]: 2025-11-29 07:57:38.277 226310 INFO nova.compute.manager [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Took 3.99 seconds to spawn the instance on the hypervisor.
Nov 29 02:57:38 np0005539564 nova_compute[226295]: 2025-11-29 07:57:38.278 226310 DEBUG nova.compute.manager [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:57:38 np0005539564 nova_compute[226295]: 2025-11-29 07:57:38.280 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:57:38 np0005539564 nova_compute[226295]: 2025-11-29 07:57:38.473 226310 INFO nova.compute.manager [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Took 5.27 seconds to build instance.
Nov 29 02:57:38 np0005539564 nova_compute[226295]: 2025-11-29 07:57:38.492 226310 DEBUG oslo_concurrency.lockutils [None req-eff178f8-7d6e-4955-8397-39215248f393 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "a371a1b6-e687-429a-a4d8-338fe777c73e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.358s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:57:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:38.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:38.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:40.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:40.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:41 np0005539564 nova_compute[226295]: 2025-11-29 07:57:41.000 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:42.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:42.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:42 np0005539564 nova_compute[226295]: 2025-11-29 07:57:42.817 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:44.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:44.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:45 np0005539564 nova_compute[226295]: 2025-11-29 07:57:45.462 226310 DEBUG nova.compute.manager [None req-571a6874-8d3a-4e94-b7bd-639830c46036 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:57:45 np0005539564 nova_compute[226295]: 2025-11-29 07:57:45.516 226310 INFO nova.compute.manager [None req-571a6874-8d3a-4e94-b7bd-639830c46036 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] instance snapshotting
Nov 29 02:57:45 np0005539564 nova_compute[226295]: 2025-11-29 07:57:45.788 226310 INFO nova.virt.libvirt.driver [None req-571a6874-8d3a-4e94-b7bd-639830c46036 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Beginning live snapshot process
Nov 29 02:57:45 np0005539564 nova_compute[226295]: 2025-11-29 07:57:45.961 226310 DEBUG nova.virt.libvirt.imagebackend [None req-571a6874-8d3a-4e94-b7bd-639830c46036 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 02:57:46 np0005539564 nova_compute[226295]: 2025-11-29 07:57:46.003 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:46 np0005539564 nova_compute[226295]: 2025-11-29 07:57:46.263 226310 DEBUG nova.storage.rbd_utils [None req-571a6874-8d3a-4e94-b7bd-639830c46036 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] creating snapshot(6c4f105dd0ad42e3be1969060c2e87e8) on rbd image(a371a1b6-e687-429a-a4d8-338fe777c73e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 02:57:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:46.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:46.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e204 e204: 3 total, 3 up, 3 in
Nov 29 02:57:47 np0005539564 nova_compute[226295]: 2025-11-29 07:57:47.272 226310 DEBUG nova.storage.rbd_utils [None req-571a6874-8d3a-4e94-b7bd-639830c46036 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] cloning vms/a371a1b6-e687-429a-a4d8-338fe777c73e_disk@6c4f105dd0ad42e3be1969060c2e87e8 to images/e89bf772-b42e-4a81-949b-a562a33a3e0c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 02:57:47 np0005539564 nova_compute[226295]: 2025-11-29 07:57:47.416 226310 DEBUG nova.storage.rbd_utils [None req-571a6874-8d3a-4e94-b7bd-639830c46036 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] flattening images/e89bf772-b42e-4a81-949b-a562a33a3e0c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 02:57:47 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Nov 29 02:57:47 np0005539564 nova_compute[226295]: 2025-11-29 07:57:47.864 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:47 np0005539564 nova_compute[226295]: 2025-11-29 07:57:47.880 226310 DEBUG nova.storage.rbd_utils [None req-571a6874-8d3a-4e94-b7bd-639830c46036 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] removing snapshot(6c4f105dd0ad42e3be1969060c2e87e8) on rbd image(a371a1b6-e687-429a-a4d8-338fe777c73e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 02:57:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e205 e205: 3 total, 3 up, 3 in
Nov 29 02:57:48 np0005539564 nova_compute[226295]: 2025-11-29 07:57:48.261 226310 DEBUG nova.storage.rbd_utils [None req-571a6874-8d3a-4e94-b7bd-639830c46036 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] creating snapshot(snap) on rbd image(e89bf772-b42e-4a81-949b-a562a33a3e0c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 02:57:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:57:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:48.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:57:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:48.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e206 e206: 3 total, 3 up, 3 in
Nov 29 02:57:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:50.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:50.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:50 np0005539564 nova_compute[226295]: 2025-11-29 07:57:50.832 226310 INFO nova.virt.libvirt.driver [None req-571a6874-8d3a-4e94-b7bd-639830c46036 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Snapshot image upload complete
Nov 29 02:57:50 np0005539564 nova_compute[226295]: 2025-11-29 07:57:50.833 226310 INFO nova.compute.manager [None req-571a6874-8d3a-4e94-b7bd-639830c46036 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Took 5.31 seconds to snapshot the instance on the hypervisor.
Nov 29 02:57:51 np0005539564 nova_compute[226295]: 2025-11-29 07:57:51.006 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:51 np0005539564 ovn_controller[130591]: 2025-11-29T07:57:51Z|00145|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 02:57:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:52.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:52.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:52 np0005539564 nova_compute[226295]: 2025-11-29 07:57:52.866 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:54.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:54.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:56 np0005539564 nova_compute[226295]: 2025-11-29 07:57:56.009 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.008000213s ======
Nov 29 02:57:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:56.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.008000213s
Nov 29 02:57:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:56.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:56 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e207 e207: 3 total, 3 up, 3 in
Nov 29 02:57:57 np0005539564 nova_compute[226295]: 2025-11-29 07:57:57.913 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.003000080s ======
Nov 29 02:57:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:58.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Nov 29 02:57:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:57:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:58.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:58 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:57:58 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:57:58 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:58:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e208 e208: 3 total, 3 up, 3 in
Nov 29 02:58:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:58:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:00.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:58:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:00.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:01 np0005539564 nova_compute[226295]: 2025-11-29 07:58:01.012 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e209 e209: 3 total, 3 up, 3 in
Nov 29 02:58:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:58:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:02.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:58:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:02.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:02 np0005539564 nova_compute[226295]: 2025-11-29 07:58:02.916 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e210 e210: 3 total, 3 up, 3 in
Nov 29 02:58:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:58:03.707 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:58:03.707 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:58:03.708 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:04.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:04.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:06 np0005539564 nova_compute[226295]: 2025-11-29 07:58:06.015 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:06.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:06.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:07 np0005539564 podman[246774]: 2025-11-29 07:58:07.537357961 +0000 UTC m=+0.070553029 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:58:07 np0005539564 podman[246773]: 2025-11-29 07:58:07.542758388 +0000 UTC m=+0.083017508 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:58:07 np0005539564 podman[246772]: 2025-11-29 07:58:07.597116078 +0000 UTC m=+0.135952639 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 02:58:07 np0005539564 nova_compute[226295]: 2025-11-29 07:58:07.916 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e211 e211: 3 total, 3 up, 3 in
Nov 29 02:58:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:58:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:08.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:58:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:08.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:08 np0005539564 nova_compute[226295]: 2025-11-29 07:58:08.973 226310 DEBUG nova.compute.manager [None req-ce156f31-84ad-4edc-b99d-291f7fb5a5a8 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:58:09 np0005539564 nova_compute[226295]: 2025-11-29 07:58:09.032 226310 INFO nova.compute.manager [None req-ce156f31-84ad-4edc-b99d-291f7fb5a5a8 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] instance snapshotting#033[00m
Nov 29 02:58:09 np0005539564 nova_compute[226295]: 2025-11-29 07:58:09.363 226310 INFO nova.virt.libvirt.driver [None req-ce156f31-84ad-4edc-b99d-291f7fb5a5a8 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Beginning live snapshot process#033[00m
Nov 29 02:58:09 np0005539564 nova_compute[226295]: 2025-11-29 07:58:09.574 226310 DEBUG nova.virt.libvirt.imagebackend [None req-ce156f31-84ad-4edc-b99d-291f7fb5a5a8 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 02:58:09 np0005539564 nova_compute[226295]: 2025-11-29 07:58:09.940 226310 DEBUG nova.storage.rbd_utils [None req-ce156f31-84ad-4edc-b99d-291f7fb5a5a8 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] creating snapshot(005004c81ed743a5bbfeb6f6df888344) on rbd image(a371a1b6-e687-429a-a4d8-338fe777c73e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 02:58:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:10.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:10.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:10 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:58:10 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:58:11 np0005539564 nova_compute[226295]: 2025-11-29 07:58:11.051 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e212 e212: 3 total, 3 up, 3 in
Nov 29 02:58:11 np0005539564 nova_compute[226295]: 2025-11-29 07:58:11.271 226310 DEBUG nova.storage.rbd_utils [None req-ce156f31-84ad-4edc-b99d-291f7fb5a5a8 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] cloning vms/a371a1b6-e687-429a-a4d8-338fe777c73e_disk@005004c81ed743a5bbfeb6f6df888344 to images/36972a45-a1b4-44eb-9fc1-abe990709f21 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 02:58:11 np0005539564 nova_compute[226295]: 2025-11-29 07:58:11.753 226310 DEBUG nova.storage.rbd_utils [None req-ce156f31-84ad-4edc-b99d-291f7fb5a5a8 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] flattening images/36972a45-a1b4-44eb-9fc1-abe990709f21 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 02:58:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e213 e213: 3 total, 3 up, 3 in
Nov 29 02:58:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:12.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:12.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:12 np0005539564 nova_compute[226295]: 2025-11-29 07:58:12.917 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:13 np0005539564 nova_compute[226295]: 2025-11-29 07:58:13.254 226310 DEBUG nova.storage.rbd_utils [None req-ce156f31-84ad-4edc-b99d-291f7fb5a5a8 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] removing snapshot(005004c81ed743a5bbfeb6f6df888344) on rbd image(a371a1b6-e687-429a-a4d8-338fe777c73e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 02:58:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e214 e214: 3 total, 3 up, 3 in
Nov 29 02:58:13 np0005539564 nova_compute[226295]: 2025-11-29 07:58:13.985 226310 DEBUG nova.storage.rbd_utils [None req-ce156f31-84ad-4edc-b99d-291f7fb5a5a8 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] creating snapshot(snap) on rbd image(36972a45-a1b4-44eb-9fc1-abe990709f21) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 02:58:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:14.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:14.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e215 e215: 3 total, 3 up, 3 in
Nov 29 02:58:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e216 e216: 3 total, 3 up, 3 in
Nov 29 02:58:16 np0005539564 nova_compute[226295]: 2025-11-29 07:58:16.053 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:58:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:16.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:58:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:16.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:17 np0005539564 nova_compute[226295]: 2025-11-29 07:58:17.948 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:18 np0005539564 nova_compute[226295]: 2025-11-29 07:58:18.556 226310 INFO nova.virt.libvirt.driver [None req-ce156f31-84ad-4edc-b99d-291f7fb5a5a8 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Snapshot image upload complete#033[00m
Nov 29 02:58:18 np0005539564 nova_compute[226295]: 2025-11-29 07:58:18.556 226310 INFO nova.compute.manager [None req-ce156f31-84ad-4edc-b99d-291f7fb5a5a8 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Took 9.52 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 29 02:58:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:18.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:18.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:18 np0005539564 nova_compute[226295]: 2025-11-29 07:58:18.918 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:58:18.917 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:58:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:58:18.919 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:58:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:20.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:58:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:20.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:58:21 np0005539564 nova_compute[226295]: 2025-11-29 07:58:21.056 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:22.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:22.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e217 e217: 3 total, 3 up, 3 in
Nov 29 02:58:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:58:22.921 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:58:22 np0005539564 nova_compute[226295]: 2025-11-29 07:58:22.950 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:24.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:24.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e218 e218: 3 total, 3 up, 3 in
Nov 29 02:58:26 np0005539564 nova_compute[226295]: 2025-11-29 07:58:26.059 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:26.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:26.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:27 np0005539564 nova_compute[226295]: 2025-11-29 07:58:27.951 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:28.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:28.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:29 np0005539564 nova_compute[226295]: 2025-11-29 07:58:29.889 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:30.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:58:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:30.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:58:31 np0005539564 nova_compute[226295]: 2025-11-29 07:58:31.063 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:31 np0005539564 nova_compute[226295]: 2025-11-29 07:58:31.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:31 np0005539564 nova_compute[226295]: 2025-11-29 07:58:31.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:58:32 np0005539564 nova_compute[226295]: 2025-11-29 07:58:32.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:32 np0005539564 nova_compute[226295]: 2025-11-29 07:58:32.345 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:32.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:58:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:32.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:58:32 np0005539564 nova_compute[226295]: 2025-11-29 07:58:32.953 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:33 np0005539564 nova_compute[226295]: 2025-11-29 07:58:33.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:33 np0005539564 nova_compute[226295]: 2025-11-29 07:58:33.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:58:33 np0005539564 nova_compute[226295]: 2025-11-29 07:58:33.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:58:33 np0005539564 nova_compute[226295]: 2025-11-29 07:58:33.573 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-a371a1b6-e687-429a-a4d8-338fe777c73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:58:33 np0005539564 nova_compute[226295]: 2025-11-29 07:58:33.574 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-a371a1b6-e687-429a-a4d8-338fe777c73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:58:33 np0005539564 nova_compute[226295]: 2025-11-29 07:58:33.574 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:58:33 np0005539564 nova_compute[226295]: 2025-11-29 07:58:33.575 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a371a1b6-e687-429a-a4d8-338fe777c73e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:34 np0005539564 nova_compute[226295]: 2025-11-29 07:58:34.146 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:58:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:34.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:34.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:34 np0005539564 nova_compute[226295]: 2025-11-29 07:58:34.748 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:58:34 np0005539564 nova_compute[226295]: 2025-11-29 07:58:34.767 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-a371a1b6-e687-429a-a4d8-338fe777c73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:58:34 np0005539564 nova_compute[226295]: 2025-11-29 07:58:34.768 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:58:34 np0005539564 nova_compute[226295]: 2025-11-29 07:58:34.769 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:35 np0005539564 nova_compute[226295]: 2025-11-29 07:58:35.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:36 np0005539564 nova_compute[226295]: 2025-11-29 07:58:36.068 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:36 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:58:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:36.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:36.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:37 np0005539564 nova_compute[226295]: 2025-11-29 07:58:37.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:37 np0005539564 nova_compute[226295]: 2025-11-29 07:58:37.665 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:37 np0005539564 nova_compute[226295]: 2025-11-29 07:58:37.666 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:37 np0005539564 nova_compute[226295]: 2025-11-29 07:58:37.667 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:37 np0005539564 nova_compute[226295]: 2025-11-29 07:58:37.667 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:58:37 np0005539564 nova_compute[226295]: 2025-11-29 07:58:37.668 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:37 np0005539564 nova_compute[226295]: 2025-11-29 07:58:37.955 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:58:38 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/955524022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:58:38 np0005539564 nova_compute[226295]: 2025-11-29 07:58:38.153 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:38 np0005539564 podman[247053]: 2025-11-29 07:58:38.242100791 +0000 UTC m=+0.044073844 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent)
Nov 29 02:58:38 np0005539564 podman[247052]: 2025-11-29 07:58:38.249736857 +0000 UTC m=+0.055106502 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:58:38 np0005539564 podman[247051]: 2025-11-29 07:58:38.301895017 +0000 UTC m=+0.110109978 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 02:58:38 np0005539564 nova_compute[226295]: 2025-11-29 07:58:38.347 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:58:38 np0005539564 nova_compute[226295]: 2025-11-29 07:58:38.348 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:58:38 np0005539564 nova_compute[226295]: 2025-11-29 07:58:38.569 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:58:38 np0005539564 nova_compute[226295]: 2025-11-29 07:58:38.570 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4568MB free_disk=20.851024627685547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:58:38 np0005539564 nova_compute[226295]: 2025-11-29 07:58:38.571 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:38 np0005539564 nova_compute[226295]: 2025-11-29 07:58:38.571 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:38.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:38.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:38 np0005539564 nova_compute[226295]: 2025-11-29 07:58:38.863 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance a371a1b6-e687-429a-a4d8-338fe777c73e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:58:38 np0005539564 nova_compute[226295]: 2025-11-29 07:58:38.863 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:58:38 np0005539564 nova_compute[226295]: 2025-11-29 07:58:38.864 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:58:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:38 np0005539564 nova_compute[226295]: 2025-11-29 07:58:38.931 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:58:39 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3329070535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:58:39 np0005539564 nova_compute[226295]: 2025-11-29 07:58:39.416 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:39 np0005539564 nova_compute[226295]: 2025-11-29 07:58:39.423 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:58:39 np0005539564 nova_compute[226295]: 2025-11-29 07:58:39.481 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:58:39 np0005539564 nova_compute[226295]: 2025-11-29 07:58:39.516 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:58:39 np0005539564 nova_compute[226295]: 2025-11-29 07:58:39.517 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e219 e219: 3 total, 3 up, 3 in
Nov 29 02:58:40 np0005539564 nova_compute[226295]: 2025-11-29 07:58:40.520 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:40.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:40.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:41 np0005539564 nova_compute[226295]: 2025-11-29 07:58:41.072 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e220 e220: 3 total, 3 up, 3 in
Nov 29 02:58:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:42.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:42.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:42 np0005539564 nova_compute[226295]: 2025-11-29 07:58:42.997 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:43 np0005539564 nova_compute[226295]: 2025-11-29 07:58:43.705 226310 DEBUG oslo_concurrency.lockutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "5e56bdc6-e188-4475-a5b5-41dec34857ee" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:43 np0005539564 nova_compute[226295]: 2025-11-29 07:58:43.706 226310 DEBUG oslo_concurrency.lockutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "5e56bdc6-e188-4475-a5b5-41dec34857ee" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:43 np0005539564 nova_compute[226295]: 2025-11-29 07:58:43.763 226310 DEBUG nova.compute.manager [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:58:43 np0005539564 nova_compute[226295]: 2025-11-29 07:58:43.842 226310 DEBUG oslo_concurrency.lockutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:43 np0005539564 nova_compute[226295]: 2025-11-29 07:58:43.843 226310 DEBUG oslo_concurrency.lockutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:43 np0005539564 nova_compute[226295]: 2025-11-29 07:58:43.849 226310 DEBUG nova.virt.hardware [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:58:43 np0005539564 nova_compute[226295]: 2025-11-29 07:58:43.850 226310 INFO nova.compute.claims [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 02:58:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:43 np0005539564 nova_compute[226295]: 2025-11-29 07:58:43.987 226310 DEBUG oslo_concurrency.processutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:58:44 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2982941923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:58:44 np0005539564 nova_compute[226295]: 2025-11-29 07:58:44.447 226310 DEBUG oslo_concurrency.processutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:44 np0005539564 nova_compute[226295]: 2025-11-29 07:58:44.454 226310 DEBUG nova.compute.provider_tree [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:58:44 np0005539564 nova_compute[226295]: 2025-11-29 07:58:44.481 226310 DEBUG nova.scheduler.client.report [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:58:44 np0005539564 nova_compute[226295]: 2025-11-29 07:58:44.509 226310 DEBUG oslo_concurrency.lockutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:44 np0005539564 nova_compute[226295]: 2025-11-29 07:58:44.510 226310 DEBUG nova.compute.manager [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:58:44 np0005539564 nova_compute[226295]: 2025-11-29 07:58:44.590 226310 DEBUG nova.compute.manager [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:58:44 np0005539564 nova_compute[226295]: 2025-11-29 07:58:44.591 226310 DEBUG nova.network.neutron [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:58:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:44.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:44 np0005539564 nova_compute[226295]: 2025-11-29 07:58:44.630 226310 INFO nova.virt.libvirt.driver [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:58:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:44.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e221 e221: 3 total, 3 up, 3 in
Nov 29 02:58:44 np0005539564 nova_compute[226295]: 2025-11-29 07:58:44.699 226310 DEBUG nova.compute.manager [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:58:44 np0005539564 nova_compute[226295]: 2025-11-29 07:58:44.906 226310 DEBUG nova.compute.manager [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:58:44 np0005539564 nova_compute[226295]: 2025-11-29 07:58:44.908 226310 DEBUG nova.virt.libvirt.driver [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:58:44 np0005539564 nova_compute[226295]: 2025-11-29 07:58:44.908 226310 INFO nova.virt.libvirt.driver [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Creating image(s)#033[00m
Nov 29 02:58:44 np0005539564 nova_compute[226295]: 2025-11-29 07:58:44.940 226310 DEBUG nova.storage.rbd_utils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image 5e56bdc6-e188-4475-a5b5-41dec34857ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:58:44 np0005539564 nova_compute[226295]: 2025-11-29 07:58:44.972 226310 DEBUG nova.storage.rbd_utils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image 5e56bdc6-e188-4475-a5b5-41dec34857ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:58:45 np0005539564 nova_compute[226295]: 2025-11-29 07:58:45.004 226310 DEBUG nova.storage.rbd_utils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image 5e56bdc6-e188-4475-a5b5-41dec34857ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:58:45 np0005539564 nova_compute[226295]: 2025-11-29 07:58:45.009 226310 DEBUG oslo_concurrency.processutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:45 np0005539564 nova_compute[226295]: 2025-11-29 07:58:45.074 226310 DEBUG oslo_concurrency.processutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:45 np0005539564 nova_compute[226295]: 2025-11-29 07:58:45.075 226310 DEBUG oslo_concurrency.lockutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:45 np0005539564 nova_compute[226295]: 2025-11-29 07:58:45.076 226310 DEBUG oslo_concurrency.lockutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:45 np0005539564 nova_compute[226295]: 2025-11-29 07:58:45.076 226310 DEBUG oslo_concurrency.lockutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:45 np0005539564 nova_compute[226295]: 2025-11-29 07:58:45.113 226310 DEBUG nova.storage.rbd_utils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image 5e56bdc6-e188-4475-a5b5-41dec34857ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:58:45 np0005539564 nova_compute[226295]: 2025-11-29 07:58:45.118 226310 DEBUG oslo_concurrency.processutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 5e56bdc6-e188-4475-a5b5-41dec34857ee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:45 np0005539564 nova_compute[226295]: 2025-11-29 07:58:45.227 226310 DEBUG nova.policy [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a814d0c4600e45d9a1fac7bac5b7e69e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f69605de164b4c27ae715521263676fe', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:58:45 np0005539564 nova_compute[226295]: 2025-11-29 07:58:45.414 226310 DEBUG oslo_concurrency.processutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 5e56bdc6-e188-4475-a5b5-41dec34857ee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:45 np0005539564 nova_compute[226295]: 2025-11-29 07:58:45.508 226310 DEBUG nova.storage.rbd_utils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] resizing rbd image 5e56bdc6-e188-4475-a5b5-41dec34857ee_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:58:45 np0005539564 nova_compute[226295]: 2025-11-29 07:58:45.633 226310 DEBUG nova.objects.instance [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'migration_context' on Instance uuid 5e56bdc6-e188-4475-a5b5-41dec34857ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:45 np0005539564 nova_compute[226295]: 2025-11-29 07:58:45.660 226310 DEBUG nova.virt.libvirt.driver [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:58:45 np0005539564 nova_compute[226295]: 2025-11-29 07:58:45.660 226310 DEBUG nova.virt.libvirt.driver [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Ensure instance console log exists: /var/lib/nova/instances/5e56bdc6-e188-4475-a5b5-41dec34857ee/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:58:45 np0005539564 nova_compute[226295]: 2025-11-29 07:58:45.661 226310 DEBUG oslo_concurrency.lockutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:45 np0005539564 nova_compute[226295]: 2025-11-29 07:58:45.661 226310 DEBUG oslo_concurrency.lockutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:45 np0005539564 nova_compute[226295]: 2025-11-29 07:58:45.661 226310 DEBUG oslo_concurrency.lockutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:46 np0005539564 nova_compute[226295]: 2025-11-29 07:58:46.074 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:46.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:46.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:47 np0005539564 nova_compute[226295]: 2025-11-29 07:58:47.032 226310 DEBUG nova.network.neutron [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Successfully created port: da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:58:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e222 e222: 3 total, 3 up, 3 in
Nov 29 02:58:48 np0005539564 nova_compute[226295]: 2025-11-29 07:58:47.999 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:58:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:48.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:58:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:48.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:48 np0005539564 nova_compute[226295]: 2025-11-29 07:58:48.852 226310 DEBUG nova.network.neutron [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Successfully updated port: da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:58:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:48 np0005539564 nova_compute[226295]: 2025-11-29 07:58:48.899 226310 DEBUG oslo_concurrency.lockutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "refresh_cache-5e56bdc6-e188-4475-a5b5-41dec34857ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:58:48 np0005539564 nova_compute[226295]: 2025-11-29 07:58:48.900 226310 DEBUG oslo_concurrency.lockutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquired lock "refresh_cache-5e56bdc6-e188-4475-a5b5-41dec34857ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:58:48 np0005539564 nova_compute[226295]: 2025-11-29 07:58:48.900 226310 DEBUG nova.network.neutron [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:58:49 np0005539564 nova_compute[226295]: 2025-11-29 07:58:49.099 226310 DEBUG nova.compute.manager [req-f85b7d85-e12a-454c-a137-45cb3256cdca req-8b204740-e244-489e-b26a-d1798a0cd154 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Received event network-changed-da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:58:49 np0005539564 nova_compute[226295]: 2025-11-29 07:58:49.100 226310 DEBUG nova.compute.manager [req-f85b7d85-e12a-454c-a137-45cb3256cdca req-8b204740-e244-489e-b26a-d1798a0cd154 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Refreshing instance network info cache due to event network-changed-da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:58:49 np0005539564 nova_compute[226295]: 2025-11-29 07:58:49.100 226310 DEBUG oslo_concurrency.lockutils [req-f85b7d85-e12a-454c-a137-45cb3256cdca req-8b204740-e244-489e-b26a-d1798a0cd154 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-5e56bdc6-e188-4475-a5b5-41dec34857ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:58:49 np0005539564 nova_compute[226295]: 2025-11-29 07:58:49.336 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:49 np0005539564 nova_compute[226295]: 2025-11-29 07:58:49.455 226310 DEBUG nova.network.neutron [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:58:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:58:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:50.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:58:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:50.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e223 e223: 3 total, 3 up, 3 in
Nov 29 02:58:51 np0005539564 nova_compute[226295]: 2025-11-29 07:58:51.076 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.371 226310 DEBUG nova.network.neutron [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Updating instance_info_cache with network_info: [{"id": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "address": "fa:16:3e:5b:d2:c3", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41dfe7-58", "ovs_interfaceid": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.465 226310 DEBUG oslo_concurrency.lockutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Releasing lock "refresh_cache-5e56bdc6-e188-4475-a5b5-41dec34857ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.466 226310 DEBUG nova.compute.manager [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Instance network_info: |[{"id": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "address": "fa:16:3e:5b:d2:c3", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41dfe7-58", "ovs_interfaceid": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.466 226310 DEBUG oslo_concurrency.lockutils [req-f85b7d85-e12a-454c-a137-45cb3256cdca req-8b204740-e244-489e-b26a-d1798a0cd154 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-5e56bdc6-e188-4475-a5b5-41dec34857ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.467 226310 DEBUG nova.network.neutron [req-f85b7d85-e12a-454c-a137-45cb3256cdca req-8b204740-e244-489e-b26a-d1798a0cd154 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Refreshing network info cache for port da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.470 226310 DEBUG nova.virt.libvirt.driver [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Start _get_guest_xml network_info=[{"id": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "address": "fa:16:3e:5b:d2:c3", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41dfe7-58", "ovs_interfaceid": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.476 226310 WARNING nova.virt.libvirt.driver [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.481 226310 DEBUG nova.virt.libvirt.host [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.482 226310 DEBUG nova.virt.libvirt.host [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.485 226310 DEBUG nova.virt.libvirt.host [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.486 226310 DEBUG nova.virt.libvirt.host [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.487 226310 DEBUG nova.virt.libvirt.driver [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.487 226310 DEBUG nova.virt.hardware [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.488 226310 DEBUG nova.virt.hardware [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.488 226310 DEBUG nova.virt.hardware [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.489 226310 DEBUG nova.virt.hardware [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.489 226310 DEBUG nova.virt.hardware [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.489 226310 DEBUG nova.virt.hardware [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.490 226310 DEBUG nova.virt.hardware [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.490 226310 DEBUG nova.virt.hardware [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.490 226310 DEBUG nova.virt.hardware [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.490 226310 DEBUG nova.virt.hardware [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.491 226310 DEBUG nova.virt.hardware [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.494 226310 DEBUG oslo_concurrency.processutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:52.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:52.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.701 226310 DEBUG oslo_concurrency.lockutils [None req-9a214f9d-2292-4781-90ef-c116f3c50c35 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Acquiring lock "a371a1b6-e687-429a-a4d8-338fe777c73e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.702 226310 DEBUG oslo_concurrency.lockutils [None req-9a214f9d-2292-4781-90ef-c116f3c50c35 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "a371a1b6-e687-429a-a4d8-338fe777c73e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.702 226310 DEBUG oslo_concurrency.lockutils [None req-9a214f9d-2292-4781-90ef-c116f3c50c35 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Acquiring lock "a371a1b6-e687-429a-a4d8-338fe777c73e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.702 226310 DEBUG oslo_concurrency.lockutils [None req-9a214f9d-2292-4781-90ef-c116f3c50c35 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "a371a1b6-e687-429a-a4d8-338fe777c73e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.702 226310 DEBUG oslo_concurrency.lockutils [None req-9a214f9d-2292-4781-90ef-c116f3c50c35 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "a371a1b6-e687-429a-a4d8-338fe777c73e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.704 226310 INFO nova.compute.manager [None req-9a214f9d-2292-4781-90ef-c116f3c50c35 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Terminating instance#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.704 226310 DEBUG oslo_concurrency.lockutils [None req-9a214f9d-2292-4781-90ef-c116f3c50c35 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Acquiring lock "refresh_cache-a371a1b6-e687-429a-a4d8-338fe777c73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.705 226310 DEBUG oslo_concurrency.lockutils [None req-9a214f9d-2292-4781-90ef-c116f3c50c35 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Acquired lock "refresh_cache-a371a1b6-e687-429a-a4d8-338fe777c73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:58:52 np0005539564 nova_compute[226295]: 2025-11-29 07:58:52.705 226310 DEBUG nova.network.neutron [None req-9a214f9d-2292-4781-90ef-c116f3c50c35 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:58:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:58:52 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/212407046' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.000 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.155 226310 DEBUG nova.network.neutron [None req-9a214f9d-2292-4781-90ef-c116f3c50c35 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.175 226310 DEBUG oslo_concurrency.processutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.680s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.215 226310 DEBUG nova.storage.rbd_utils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image 5e56bdc6-e188-4475-a5b5-41dec34857ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.219 226310 DEBUG oslo_concurrency.processutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.546 226310 DEBUG nova.network.neutron [None req-9a214f9d-2292-4781-90ef-c116f3c50c35 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.581 226310 DEBUG oslo_concurrency.lockutils [None req-9a214f9d-2292-4781-90ef-c116f3c50c35 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Releasing lock "refresh_cache-a371a1b6-e687-429a-a4d8-338fe777c73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.582 226310 DEBUG nova.compute.manager [None req-9a214f9d-2292-4781-90ef-c116f3c50c35 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:58:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:58:53 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2854085679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.624 226310 DEBUG oslo_concurrency.processutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.626 226310 DEBUG nova.virt.libvirt.vif [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:58:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1207742721',display_name='tempest-AttachInterfacesTestJSON-server-1207742721',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1207742721',id=56,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDycEVDRdfXO8pLXmtjTRtnr0BF71pXddXuCIscLHPIEn50XzNEMPicg3w04Q889ueh17/8w/QBi2AydIP+WMWiGgPA2kSG2wH1FLSX5aM1oFvCoFB8oXZBEaftkGJwedg==',key_name='tempest-keypair-1494504529',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-3d1scc5u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:58:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=5e56bdc6-e188-4475-a5b5-41dec34857ee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "address": "fa:16:3e:5b:d2:c3", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41dfe7-58", "ovs_interfaceid": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.627 226310 DEBUG nova.network.os_vif_util [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "address": "fa:16:3e:5b:d2:c3", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41dfe7-58", "ovs_interfaceid": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.628 226310 DEBUG nova.network.os_vif_util [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:d2:c3,bridge_name='br-int',has_traffic_filtering=True,id=da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda41dfe7-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.630 226310 DEBUG nova.objects.instance [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'pci_devices' on Instance uuid 5e56bdc6-e188-4475-a5b5-41dec34857ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.650 226310 DEBUG nova.virt.libvirt.driver [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:58:53 np0005539564 nova_compute[226295]:  <uuid>5e56bdc6-e188-4475-a5b5-41dec34857ee</uuid>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:  <name>instance-00000038</name>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <nova:name>tempest-AttachInterfacesTestJSON-server-1207742721</nova:name>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 07:58:52</nova:creationTime>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 02:58:53 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:        <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:        <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:        <nova:port uuid="da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb">
Nov 29 02:58:53 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <system>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <entry name="serial">5e56bdc6-e188-4475-a5b5-41dec34857ee</entry>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <entry name="uuid">5e56bdc6-e188-4475-a5b5-41dec34857ee</entry>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    </system>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:  <os>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:  </clock>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/5e56bdc6-e188-4475-a5b5-41dec34857ee_disk">
Nov 29 02:58:53 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:58:53 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/5e56bdc6-e188-4475-a5b5-41dec34857ee_disk.config">
Nov 29 02:58:53 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 02:58:53 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:5b:d2:c3"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <target dev="tapda41dfe7-58"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    </interface>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/5e56bdc6-e188-4475-a5b5-41dec34857ee/console.log" append="off"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    </serial>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <video>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 02:58:53 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 02:58:53 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:58:53 np0005539564 nova_compute[226295]: </domain>
Nov 29 02:58:53 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.652 226310 DEBUG nova.compute.manager [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Preparing to wait for external event network-vif-plugged-da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.652 226310 DEBUG oslo_concurrency.lockutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.652 226310 DEBUG oslo_concurrency.lockutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.652 226310 DEBUG oslo_concurrency.lockutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.653 226310 DEBUG nova.virt.libvirt.vif [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:58:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1207742721',display_name='tempest-AttachInterfacesTestJSON-server-1207742721',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1207742721',id=56,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDycEVDRdfXO8pLXmtjTRtnr0BF71pXddXuCIscLHPIEn50XzNEMPicg3w04Q889ueh17/8w/QBi2AydIP+WMWiGgPA2kSG2wH1FLSX5aM1oFvCoFB8oXZBEaftkGJwedg==',key_name='tempest-keypair-1494504529',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-3d1scc5u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:58:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=5e56bdc6-e188-4475-a5b5-41dec34857ee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "address": "fa:16:3e:5b:d2:c3", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41dfe7-58", "ovs_interfaceid": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.653 226310 DEBUG nova.network.os_vif_util [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "address": "fa:16:3e:5b:d2:c3", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41dfe7-58", "ovs_interfaceid": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.653 226310 DEBUG nova.network.os_vif_util [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:d2:c3,bridge_name='br-int',has_traffic_filtering=True,id=da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda41dfe7-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.654 226310 DEBUG os_vif [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:d2:c3,bridge_name='br-int',has_traffic_filtering=True,id=da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda41dfe7-58') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.654 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.655 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.655 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.660 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.660 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda41dfe7-58, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.661 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapda41dfe7-58, col_values=(('external_ids', {'iface-id': 'da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:d2:c3', 'vm-uuid': '5e56bdc6-e188-4475-a5b5-41dec34857ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:58:53 np0005539564 NetworkManager[48997]: <info>  [1764403133.6637] manager: (tapda41dfe7-58): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.665 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.670 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.671 226310 INFO os_vif [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:d2:c3,bridge_name='br-int',has_traffic_filtering=True,id=da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda41dfe7-58')#033[00m
Nov 29 02:58:53 np0005539564 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000034.scope: Deactivated successfully.
Nov 29 02:58:53 np0005539564 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000034.scope: Consumed 17.244s CPU time.
Nov 29 02:58:53 np0005539564 systemd-machined[190128]: Machine qemu-24-instance-00000034 terminated.
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.804 226310 INFO nova.virt.libvirt.driver [-] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Instance destroyed successfully.#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.805 226310 DEBUG nova.objects.instance [None req-9a214f9d-2292-4781-90ef-c116f3c50c35 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lazy-loading 'resources' on Instance uuid a371a1b6-e687-429a-a4d8-338fe777c73e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.970 226310 DEBUG nova.virt.libvirt.driver [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.972 226310 DEBUG nova.virt.libvirt.driver [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.972 226310 DEBUG nova.virt.libvirt.driver [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No VIF found with MAC fa:16:3e:5b:d2:c3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:58:53 np0005539564 nova_compute[226295]: 2025-11-29 07:58:53.973 226310 INFO nova.virt.libvirt.driver [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Using config drive#033[00m
Nov 29 02:58:54 np0005539564 nova_compute[226295]: 2025-11-29 07:58:54.010 226310 DEBUG nova.storage.rbd_utils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image 5e56bdc6-e188-4475-a5b5-41dec34857ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:58:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:58:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:54.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:58:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:54.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:54 np0005539564 nova_compute[226295]: 2025-11-29 07:58:54.922 226310 INFO nova.virt.libvirt.driver [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Creating config drive at /var/lib/nova/instances/5e56bdc6-e188-4475-a5b5-41dec34857ee/disk.config#033[00m
Nov 29 02:58:54 np0005539564 nova_compute[226295]: 2025-11-29 07:58:54.933 226310 DEBUG oslo_concurrency.processutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5e56bdc6-e188-4475-a5b5-41dec34857ee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp09dkgw5q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:55 np0005539564 nova_compute[226295]: 2025-11-29 07:58:55.071 226310 DEBUG oslo_concurrency.processutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5e56bdc6-e188-4475-a5b5-41dec34857ee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp09dkgw5q" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:55 np0005539564 nova_compute[226295]: 2025-11-29 07:58:55.112 226310 DEBUG nova.storage.rbd_utils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image 5e56bdc6-e188-4475-a5b5-41dec34857ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:58:55 np0005539564 nova_compute[226295]: 2025-11-29 07:58:55.119 226310 DEBUG oslo_concurrency.processutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5e56bdc6-e188-4475-a5b5-41dec34857ee/disk.config 5e56bdc6-e188-4475-a5b5-41dec34857ee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:55 np0005539564 nova_compute[226295]: 2025-11-29 07:58:55.193 226310 DEBUG nova.network.neutron [req-f85b7d85-e12a-454c-a137-45cb3256cdca req-8b204740-e244-489e-b26a-d1798a0cd154 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Updated VIF entry in instance network info cache for port da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:58:55 np0005539564 nova_compute[226295]: 2025-11-29 07:58:55.195 226310 DEBUG nova.network.neutron [req-f85b7d85-e12a-454c-a137-45cb3256cdca req-8b204740-e244-489e-b26a-d1798a0cd154 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Updating instance_info_cache with network_info: [{"id": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "address": "fa:16:3e:5b:d2:c3", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41dfe7-58", "ovs_interfaceid": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:58:55 np0005539564 nova_compute[226295]: 2025-11-29 07:58:55.209 226310 DEBUG oslo_concurrency.lockutils [req-f85b7d85-e12a-454c-a137-45cb3256cdca req-8b204740-e244-489e-b26a-d1798a0cd154 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-5e56bdc6-e188-4475-a5b5-41dec34857ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:58:56 np0005539564 nova_compute[226295]: 2025-11-29 07:58:56.079 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:56.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:56.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:56 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:58:56.877 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:58:56 np0005539564 nova_compute[226295]: 2025-11-29 07:58:56.878 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:56 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:58:56.879 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:58:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:58.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:58 np0005539564 nova_compute[226295]: 2025-11-29 07:58:58.664 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:58:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:58.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:00 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 02:59:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:00.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:00.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:01 np0005539564 nova_compute[226295]: 2025-11-29 07:59:01.081 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:02.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:02.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:03 np0005539564 nova_compute[226295]: 2025-11-29 07:59:03.709 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:03.708 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:03.709 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:03.709 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e224 e224: 3 total, 3 up, 3 in
Nov 29 02:59:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:03.882 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:59:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:04.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:04.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:06 np0005539564 nova_compute[226295]: 2025-11-29 07:59:06.084 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:06.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:06.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:07 np0005539564 nova_compute[226295]: 2025-11-29 07:59:07.336 226310 INFO nova.virt.libvirt.driver [None req-9a214f9d-2292-4781-90ef-c116f3c50c35 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Deleting instance files /var/lib/nova/instances/a371a1b6-e687-429a-a4d8-338fe777c73e_del#033[00m
Nov 29 02:59:07 np0005539564 nova_compute[226295]: 2025-11-29 07:59:07.337 226310 INFO nova.virt.libvirt.driver [None req-9a214f9d-2292-4781-90ef-c116f3c50c35 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Deletion of /var/lib/nova/instances/a371a1b6-e687-429a-a4d8-338fe777c73e_del complete#033[00m
Nov 29 02:59:08 np0005539564 podman[247472]: 2025-11-29 07:59:08.546656312 +0000 UTC m=+0.069857831 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 02:59:08 np0005539564 podman[247471]: 2025-11-29 07:59:08.546700043 +0000 UTC m=+0.072058750 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 02:59:08 np0005539564 podman[247470]: 2025-11-29 07:59:08.594901768 +0000 UTC m=+0.131940110 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 02:59:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:08.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:08.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:08 np0005539564 nova_compute[226295]: 2025-11-29 07:59:08.710 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:08 np0005539564 nova_compute[226295]: 2025-11-29 07:59:08.804 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403133.8026752, a371a1b6-e687-429a-a4d8-338fe777c73e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:59:08 np0005539564 nova_compute[226295]: 2025-11-29 07:59:08.804 226310 INFO nova.compute.manager [-] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:59:08 np0005539564 nova_compute[226295]: 2025-11-29 07:59:08.846 226310 DEBUG nova.compute.manager [None req-13b7b817-8faf-4c23-ba54-709cd0214c96 - - - - - -] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:59:08 np0005539564 nova_compute[226295]: 2025-11-29 07:59:08.850 226310 DEBUG nova.compute.manager [None req-13b7b817-8faf-4c23-ba54-709cd0214c96 - - - - - -] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:59:08 np0005539564 nova_compute[226295]: 2025-11-29 07:59:08.878 226310 INFO nova.compute.manager [None req-13b7b817-8faf-4c23-ba54-709cd0214c96 - - - - - -] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Nov 29 02:59:09 np0005539564 nova_compute[226295]: 2025-11-29 07:59:09.264 226310 INFO nova.compute.manager [None req-9a214f9d-2292-4781-90ef-c116f3c50c35 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Took 15.68 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:59:09 np0005539564 nova_compute[226295]: 2025-11-29 07:59:09.265 226310 DEBUG oslo.service.loopingcall [None req-9a214f9d-2292-4781-90ef-c116f3c50c35 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:59:09 np0005539564 nova_compute[226295]: 2025-11-29 07:59:09.266 226310 DEBUG nova.compute.manager [-] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:59:09 np0005539564 nova_compute[226295]: 2025-11-29 07:59:09.268 226310 DEBUG nova.network.neutron [-] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:59:10 np0005539564 nova_compute[226295]: 2025-11-29 07:59:10.184 226310 DEBUG nova.network.neutron [-] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:59:10 np0005539564 nova_compute[226295]: 2025-11-29 07:59:10.227 226310 DEBUG oslo_concurrency.processutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5e56bdc6-e188-4475-a5b5-41dec34857ee/disk.config 5e56bdc6-e188-4475-a5b5-41dec34857ee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 15.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:59:10 np0005539564 nova_compute[226295]: 2025-11-29 07:59:10.227 226310 INFO nova.virt.libvirt.driver [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Deleting local config drive /var/lib/nova/instances/5e56bdc6-e188-4475-a5b5-41dec34857ee/disk.config because it was imported into RBD.#033[00m
Nov 29 02:59:10 np0005539564 nova_compute[226295]: 2025-11-29 07:59:10.229 226310 DEBUG nova.network.neutron [-] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:59:10 np0005539564 nova_compute[226295]: 2025-11-29 07:59:10.256 226310 INFO nova.compute.manager [-] [instance: a371a1b6-e687-429a-a4d8-338fe777c73e] Took 0.99 seconds to deallocate network for instance.#033[00m
Nov 29 02:59:10 np0005539564 kernel: tapda41dfe7-58: entered promiscuous mode
Nov 29 02:59:10 np0005539564 ovn_controller[130591]: 2025-11-29T07:59:10Z|00146|binding|INFO|Claiming lport da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb for this chassis.
Nov 29 02:59:10 np0005539564 ovn_controller[130591]: 2025-11-29T07:59:10Z|00147|binding|INFO|da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb: Claiming fa:16:3e:5b:d2:c3 10.100.0.12
Nov 29 02:59:10 np0005539564 nova_compute[226295]: 2025-11-29 07:59:10.306 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:10 np0005539564 NetworkManager[48997]: <info>  [1764403150.3079] manager: (tapda41dfe7-58): new Tun device (/org/freedesktop/NetworkManager/Devices/81)
Nov 29 02:59:10 np0005539564 nova_compute[226295]: 2025-11-29 07:59:10.314 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.329 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:d2:c3 10.100.0.12'], port_security=['fa:16:3e:5b:d2:c3 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5e56bdc6-e188-4475-a5b5-41dec34857ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-738e99b4-b58e-4eff-b209-c4aa3748c994', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f69605de164b4c27ae715521263676fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4015fd67-b711-4b9b-a6df-c2ac0c4e22ee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05e918c3-f77d-4277-9e74-f8ddcf4ab8e9, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.331 139780 INFO neutron.agent.ovn.metadata.agent [-] Port da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb in datapath 738e99b4-b58e-4eff-b209-c4aa3748c994 bound to our chassis#033[00m
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.333 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 738e99b4-b58e-4eff-b209-c4aa3748c994#033[00m
Nov 29 02:59:10 np0005539564 systemd-udevd[247546]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.344 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[67d53ecd-d65f-48ea-b9fc-ae4032a91298]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.346 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap738e99b4-b1 in ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.350 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap738e99b4-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.350 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3aeec240-b9eb-43e3-bb3c-3e9da9e873b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.351 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[fbc84c25-3790-40a7-a49d-5d5f42978ff8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:10 np0005539564 systemd-machined[190128]: New machine qemu-25-instance-00000038.
Nov 29 02:59:10 np0005539564 NetworkManager[48997]: <info>  [1764403150.3685] device (tapda41dfe7-58): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.367 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[31b4944c-055a-4fd8-9c53-c04c069f95bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:10 np0005539564 NetworkManager[48997]: <info>  [1764403150.3696] device (tapda41dfe7-58): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:59:10 np0005539564 systemd[1]: Started Virtual Machine qemu-25-instance-00000038.
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.389 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[711e3f2a-cb9f-45ab-acca-769e1a3d10c8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:10 np0005539564 nova_compute[226295]: 2025-11-29 07:59:10.397 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:10 np0005539564 ovn_controller[130591]: 2025-11-29T07:59:10Z|00148|binding|INFO|Setting lport da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb ovn-installed in OVS
Nov 29 02:59:10 np0005539564 ovn_controller[130591]: 2025-11-29T07:59:10Z|00149|binding|INFO|Setting lport da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb up in Southbound
Nov 29 02:59:10 np0005539564 nova_compute[226295]: 2025-11-29 07:59:10.401 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.420 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[0b9d9256-784f-48fd-b52c-01d1b1787fe8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.427 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[54a31fc7-2550-4168-ae25-25765270d6e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:10 np0005539564 NetworkManager[48997]: <info>  [1764403150.4294] manager: (tap738e99b4-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/82)
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.456 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[f11dd640-9893-40c7-97d1-eeb69f933fc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.459 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[64ad1851-6202-4c10-8264-060d6a6ab9e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:10 np0005539564 NetworkManager[48997]: <info>  [1764403150.4824] device (tap738e99b4-b0): carrier: link connected
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.487 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[74215441-4447-43d9-ab6d-763afb0a8a2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:10 np0005539564 nova_compute[226295]: 2025-11-29 07:59:10.489 226310 DEBUG oslo_concurrency.lockutils [None req-9a214f9d-2292-4781-90ef-c116f3c50c35 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:10 np0005539564 nova_compute[226295]: 2025-11-29 07:59:10.490 226310 DEBUG oslo_concurrency.lockutils [None req-9a214f9d-2292-4781-90ef-c116f3c50c35 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.512 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc5777e-4863-4c87-b0ae-a7976bc04cba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap738e99b4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:be:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 612516, 'reachable_time': 30271, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247582, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.540 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b3433b11-da6a-4018-864f-d232ac6f4e81]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe98:bee3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 612516, 'tstamp': 612516}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247583, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.568 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[fda60770-2dab-444e-9c0b-381c3c4651e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap738e99b4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:be:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 612516, 'reachable_time': 30271, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 247584, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.608 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3c7317dc-61eb-440d-888e-3c1c5c6bb313]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:10.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:10 np0005539564 nova_compute[226295]: 2025-11-29 07:59:10.678 226310 DEBUG oslo_concurrency.processutils [None req-9a214f9d-2292-4781-90ef-c116f3c50c35 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.687 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f0714f66-614a-4522-8650-6a821c0f89d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.689 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap738e99b4-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.690 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.691 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap738e99b4-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:59:10 np0005539564 NetworkManager[48997]: <info>  [1764403150.6947] manager: (tap738e99b4-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.701 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap738e99b4-b0, col_values=(('external_ids', {'iface-id': '2a1fcde6-d99a-4732-a125-d24eb08c8766'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:59:10 np0005539564 ovn_controller[130591]: 2025-11-29T07:59:10Z|00150|binding|INFO|Releasing lport 2a1fcde6-d99a-4732-a125-d24eb08c8766 from this chassis (sb_readonly=0)
Nov 29 02:59:10 np0005539564 nova_compute[226295]: 2025-11-29 07:59:10.705 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:10 np0005539564 kernel: tap738e99b4-b0: entered promiscuous mode
Nov 29 02:59:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:10.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.731 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/738e99b4-b58e-4eff-b209-c4aa3748c994.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/738e99b4-b58e-4eff-b209-c4aa3748c994.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.732 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0450edad-ee5e-4c25-ae70-d76fe6ea9564]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.733 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-738e99b4-b58e-4eff-b209-c4aa3748c994
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/738e99b4-b58e-4eff-b209-c4aa3748c994.pid.haproxy
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 738e99b4-b58e-4eff-b209-c4aa3748c994
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 02:59:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:10.734 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'env', 'PROCESS_TAG=haproxy-738e99b4-b58e-4eff-b209-c4aa3748c994', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/738e99b4-b58e-4eff-b209-c4aa3748c994.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 02:59:10 np0005539564 nova_compute[226295]: 2025-11-29 07:59:10.734 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:11 np0005539564 nova_compute[226295]: 2025-11-29 07:59:11.085 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:11 np0005539564 podman[247733]: 2025-11-29 07:59:11.080319322 +0000 UTC m=+0.032835319 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:59:11 np0005539564 nova_compute[226295]: 2025-11-29 07:59:11.536 226310 DEBUG nova.compute.manager [req-2fb30957-33c0-4271-bb01-4fd6f1d991e3 req-422467da-44eb-46f4-a5cb-30fadb262881 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Received event network-vif-plugged-da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:59:11 np0005539564 nova_compute[226295]: 2025-11-29 07:59:11.536 226310 DEBUG oslo_concurrency.lockutils [req-2fb30957-33c0-4271-bb01-4fd6f1d991e3 req-422467da-44eb-46f4-a5cb-30fadb262881 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:59:11 np0005539564 nova_compute[226295]: 2025-11-29 07:59:11.536 226310 DEBUG oslo_concurrency.lockutils [req-2fb30957-33c0-4271-bb01-4fd6f1d991e3 req-422467da-44eb-46f4-a5cb-30fadb262881 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:59:11 np0005539564 nova_compute[226295]: 2025-11-29 07:59:11.537 226310 DEBUG oslo_concurrency.lockutils [req-2fb30957-33c0-4271-bb01-4fd6f1d991e3 req-422467da-44eb-46f4-a5cb-30fadb262881 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:59:11 np0005539564 nova_compute[226295]: 2025-11-29 07:59:11.537 226310 DEBUG nova.compute.manager [req-2fb30957-33c0-4271-bb01-4fd6f1d991e3 req-422467da-44eb-46f4-a5cb-30fadb262881 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Processing event network-vif-plugged-da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 02:59:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:59:12 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3976739805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:59:12 np0005539564 nova_compute[226295]: 2025-11-29 07:59:12.098 226310 DEBUG oslo_concurrency.processutils [None req-9a214f9d-2292-4781-90ef-c116f3c50c35 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:59:12 np0005539564 nova_compute[226295]: 2025-11-29 07:59:12.106 226310 DEBUG nova.compute.provider_tree [None req-9a214f9d-2292-4781-90ef-c116f3c50c35 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:59:12 np0005539564 nova_compute[226295]: 2025-11-29 07:59:12.130 226310 DEBUG nova.scheduler.client.report [None req-9a214f9d-2292-4781-90ef-c116f3c50c35 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:59:12 np0005539564 nova_compute[226295]: 2025-11-29 07:59:12.175 226310 DEBUG oslo_concurrency.lockutils [None req-9a214f9d-2292-4781-90ef-c116f3c50c35 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:59:12 np0005539564 nova_compute[226295]: 2025-11-29 07:59:12.250 226310 INFO nova.scheduler.client.report [None req-9a214f9d-2292-4781-90ef-c116f3c50c35 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Deleted allocations for instance a371a1b6-e687-429a-a4d8-338fe777c73e
Nov 29 02:59:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:12 np0005539564 nova_compute[226295]: 2025-11-29 07:59:12.462 226310 DEBUG oslo_concurrency.lockutils [None req-9a214f9d-2292-4781-90ef-c116f3c50c35 c2aeea466c9049d3a023483ec2e5b4f6 30d42ce85b6840c6942b24cf4a7b9d64 - - default default] Lock "a371a1b6-e687-429a-a4d8-338fe777c73e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 19.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:59:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:12.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:59:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:12.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:59:13 np0005539564 podman[247733]: 2025-11-29 07:59:13.114547922 +0000 UTC m=+2.067063909 container create 140101a4525744b420ccf61d89daef5d561888b4631880d75e6ba97d7e5a9886 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 02:59:13 np0005539564 systemd[1]: Started libpod-conmon-140101a4525744b420ccf61d89daef5d561888b4631880d75e6ba97d7e5a9886.scope.
Nov 29 02:59:13 np0005539564 systemd[1]: Started libcrun container.
Nov 29 02:59:13 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4caee926296f262832ced588e44fc1f0b708c8f3b9f64cb1a710ebbc5a66f545/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.428 226310 DEBUG nova.compute.manager [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.428 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403153.4274595, 5e56bdc6-e188-4475-a5b5-41dec34857ee => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.429 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] VM Started (Lifecycle Event)
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.431 226310 DEBUG nova.virt.libvirt.driver [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 02:59:13 np0005539564 podman[247733]: 2025-11-29 07:59:13.433837739 +0000 UTC m=+2.386353706 container init 140101a4525744b420ccf61d89daef5d561888b4631880d75e6ba97d7e5a9886 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.434 226310 INFO nova.virt.libvirt.driver [-] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Instance spawned successfully.
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.434 226310 DEBUG nova.virt.libvirt.driver [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 02:59:13 np0005539564 podman[247733]: 2025-11-29 07:59:13.443572482 +0000 UTC m=+2.396088429 container start 140101a4525744b420ccf61d89daef5d561888b4631880d75e6ba97d7e5a9886 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.464 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.467 226310 DEBUG nova.virt.libvirt.driver [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.468 226310 DEBUG nova.virt.libvirt.driver [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.468 226310 DEBUG nova.virt.libvirt.driver [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.468 226310 DEBUG nova.virt.libvirt.driver [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.469 226310 DEBUG nova.virt.libvirt.driver [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.469 226310 DEBUG nova.virt.libvirt.driver [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:59:13 np0005539564 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[247820]: [NOTICE]   (247831) : New worker (247833) forked
Nov 29 02:59:13 np0005539564 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[247820]: [NOTICE]   (247831) : Loading success.
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.475 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.507 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.508 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403153.4275753, 5e56bdc6-e188-4475-a5b5-41dec34857ee => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.508 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] VM Paused (Lifecycle Event)
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.534 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.537 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403153.4308856, 5e56bdc6-e188-4475-a5b5-41dec34857ee => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.538 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] VM Resumed (Lifecycle Event)
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.571 226310 INFO nova.compute.manager [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Took 28.66 seconds to spawn the instance on the hypervisor.
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.571 226310 DEBUG nova.compute.manager [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.574 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.580 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.615 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.668 226310 INFO nova.compute.manager [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Took 29.85 seconds to build instance.
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.691 226310 DEBUG nova.compute.manager [req-783947e3-7e38-4f5a-8f9e-46dc87784603 req-b8982bdd-3e39-4c9a-b081-d1f01d915ce2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Received event network-vif-plugged-da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.692 226310 DEBUG oslo_concurrency.lockutils [req-783947e3-7e38-4f5a-8f9e-46dc87784603 req-b8982bdd-3e39-4c9a-b081-d1f01d915ce2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.692 226310 DEBUG oslo_concurrency.lockutils [req-783947e3-7e38-4f5a-8f9e-46dc87784603 req-b8982bdd-3e39-4c9a-b081-d1f01d915ce2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.692 226310 DEBUG oslo_concurrency.lockutils [req-783947e3-7e38-4f5a-8f9e-46dc87784603 req-b8982bdd-3e39-4c9a-b081-d1f01d915ce2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.692 226310 DEBUG nova.compute.manager [req-783947e3-7e38-4f5a-8f9e-46dc87784603 req-b8982bdd-3e39-4c9a-b081-d1f01d915ce2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] No waiting events found dispatching network-vif-plugged-da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.692 226310 WARNING nova.compute.manager [req-783947e3-7e38-4f5a-8f9e-46dc87784603 req-b8982bdd-3e39-4c9a-b081-d1f01d915ce2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Received unexpected event network-vif-plugged-da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb for instance with vm_state active and task_state None.
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.693 226310 DEBUG oslo_concurrency.lockutils [None req-ddd85b7c-5293-43c5-883f-bcb4b10311d3 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "5e56bdc6-e188-4475-a5b5-41dec34857ee" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 29.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:59:13 np0005539564 nova_compute[226295]: 2025-11-29 07:59:13.712 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:14.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:14.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e225 e225: 3 total, 3 up, 3 in
Nov 29 02:59:16 np0005539564 nova_compute[226295]: 2025-11-29 07:59:16.088 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:16.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:16.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:59:18 np0005539564 nova_compute[226295]: 2025-11-29 07:59:18.259 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:18 np0005539564 NetworkManager[48997]: <info>  [1764403158.2643] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Nov 29 02:59:18 np0005539564 NetworkManager[48997]: <info>  [1764403158.2649] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Nov 29 02:59:18 np0005539564 nova_compute[226295]: 2025-11-29 07:59:18.375 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:18 np0005539564 ovn_controller[130591]: 2025-11-29T07:59:18Z|00151|binding|INFO|Releasing lport 2a1fcde6-d99a-4732-a125-d24eb08c8766 from this chassis (sb_readonly=0)
Nov 29 02:59:18 np0005539564 nova_compute[226295]: 2025-11-29 07:59:18.388 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:59:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:18.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:59:18 np0005539564 nova_compute[226295]: 2025-11-29 07:59:18.714 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:18.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:19 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:59:19 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:59:19 np0005539564 nova_compute[226295]: 2025-11-29 07:59:19.526 226310 DEBUG nova.compute.manager [req-f8bcb975-4217-44e4-b37b-5f713de3d046 req-328c6cb2-590d-4696-aa92-2f55ca770f16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Received event network-changed-da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:59:19 np0005539564 nova_compute[226295]: 2025-11-29 07:59:19.526 226310 DEBUG nova.compute.manager [req-f8bcb975-4217-44e4-b37b-5f713de3d046 req-328c6cb2-590d-4696-aa92-2f55ca770f16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Refreshing instance network info cache due to event network-changed-da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:59:19 np0005539564 nova_compute[226295]: 2025-11-29 07:59:19.526 226310 DEBUG oslo_concurrency.lockutils [req-f8bcb975-4217-44e4-b37b-5f713de3d046 req-328c6cb2-590d-4696-aa92-2f55ca770f16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-5e56bdc6-e188-4475-a5b5-41dec34857ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:59:19 np0005539564 nova_compute[226295]: 2025-11-29 07:59:19.527 226310 DEBUG oslo_concurrency.lockutils [req-f8bcb975-4217-44e4-b37b-5f713de3d046 req-328c6cb2-590d-4696-aa92-2f55ca770f16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-5e56bdc6-e188-4475-a5b5-41dec34857ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:59:19 np0005539564 nova_compute[226295]: 2025-11-29 07:59:19.527 226310 DEBUG nova.network.neutron [req-f8bcb975-4217-44e4-b37b-5f713de3d046 req-328c6cb2-590d-4696-aa92-2f55ca770f16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Refreshing network info cache for port da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:59:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:59:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:59:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e226 e226: 3 total, 3 up, 3 in
Nov 29 02:59:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:20.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:20.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:21 np0005539564 nova_compute[226295]: 2025-11-29 07:59:21.091 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:22.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:22.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:23 np0005539564 nova_compute[226295]: 2025-11-29 07:59:23.310 226310 DEBUG nova.network.neutron [req-f8bcb975-4217-44e4-b37b-5f713de3d046 req-328c6cb2-590d-4696-aa92-2f55ca770f16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Updated VIF entry in instance network info cache for port da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:59:23 np0005539564 nova_compute[226295]: 2025-11-29 07:59:23.311 226310 DEBUG nova.network.neutron [req-f8bcb975-4217-44e4-b37b-5f713de3d046 req-328c6cb2-590d-4696-aa92-2f55ca770f16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Updating instance_info_cache with network_info: [{"id": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "address": "fa:16:3e:5b:d2:c3", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41dfe7-58", "ovs_interfaceid": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:59:23 np0005539564 nova_compute[226295]: 2025-11-29 07:59:23.380 226310 DEBUG oslo_concurrency.lockutils [req-f8bcb975-4217-44e4-b37b-5f713de3d046 req-328c6cb2-590d-4696-aa92-2f55ca770f16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-5e56bdc6-e188-4475-a5b5-41dec34857ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:59:23 np0005539564 nova_compute[226295]: 2025-11-29 07:59:23.716 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:24.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:24.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:26 np0005539564 nova_compute[226295]: 2025-11-29 07:59:26.095 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:26.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:26.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e227 e227: 3 total, 3 up, 3 in
Nov 29 02:59:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:28 np0005539564 nova_compute[226295]: 2025-11-29 07:59:28.432 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:28.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:28 np0005539564 nova_compute[226295]: 2025-11-29 07:59:28.718 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:28.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:29 np0005539564 ovn_controller[130591]: 2025-11-29T07:59:29Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5b:d2:c3 10.100.0.12
Nov 29 02:59:29 np0005539564 ovn_controller[130591]: 2025-11-29T07:59:29Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5b:d2:c3 10.100.0.12
Nov 29 02:59:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:30.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:30.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:31 np0005539564 nova_compute[226295]: 2025-11-29 07:59:31.097 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:31 np0005539564 nova_compute[226295]: 2025-11-29 07:59:31.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:31 np0005539564 nova_compute[226295]: 2025-11-29 07:59:31.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:59:32 np0005539564 nova_compute[226295]: 2025-11-29 07:59:32.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:32 np0005539564 nova_compute[226295]: 2025-11-29 07:59:32.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:32.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:32.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e228 e228: 3 total, 3 up, 3 in
Nov 29 02:59:33 np0005539564 nova_compute[226295]: 2025-11-29 07:59:33.720 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e229 e229: 3 total, 3 up, 3 in
Nov 29 02:59:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:34.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:34.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:35 np0005539564 nova_compute[226295]: 2025-11-29 07:59:35.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:35 np0005539564 nova_compute[226295]: 2025-11-29 07:59:35.342 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:59:35 np0005539564 nova_compute[226295]: 2025-11-29 07:59:35.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:59:36 np0005539564 nova_compute[226295]: 2025-11-29 07:59:36.100 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:36 np0005539564 nova_compute[226295]: 2025-11-29 07:59:36.233 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-5e56bdc6-e188-4475-a5b5-41dec34857ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:59:36 np0005539564 nova_compute[226295]: 2025-11-29 07:59:36.233 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-5e56bdc6-e188-4475-a5b5-41dec34857ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:59:36 np0005539564 nova_compute[226295]: 2025-11-29 07:59:36.233 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:59:36 np0005539564 nova_compute[226295]: 2025-11-29 07:59:36.234 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5e56bdc6-e188-4475-a5b5-41dec34857ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:59:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:59:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:36.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:59:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:36.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:59:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:59:38 np0005539564 nova_compute[226295]: 2025-11-29 07:59:38.085 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Updating instance_info_cache with network_info: [{"id": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "address": "fa:16:3e:5b:d2:c3", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41dfe7-58", "ovs_interfaceid": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:59:38 np0005539564 nova_compute[226295]: 2025-11-29 07:59:38.103 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-5e56bdc6-e188-4475-a5b5-41dec34857ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:59:38 np0005539564 nova_compute[226295]: 2025-11-29 07:59:38.104 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:59:38 np0005539564 nova_compute[226295]: 2025-11-29 07:59:38.105 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:38 np0005539564 nova_compute[226295]: 2025-11-29 07:59:38.105 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:38 np0005539564 nova_compute[226295]: 2025-11-29 07:59:38.173 226310 DEBUG oslo_concurrency.lockutils [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "interface-5e56bdc6-e188-4475-a5b5-41dec34857ee-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:38 np0005539564 nova_compute[226295]: 2025-11-29 07:59:38.174 226310 DEBUG oslo_concurrency.lockutils [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "interface-5e56bdc6-e188-4475-a5b5-41dec34857ee-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:38 np0005539564 nova_compute[226295]: 2025-11-29 07:59:38.175 226310 DEBUG nova.objects.instance [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'flavor' on Instance uuid 5e56bdc6-e188-4475-a5b5-41dec34857ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:59:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:38 np0005539564 nova_compute[226295]: 2025-11-29 07:59:38.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:38 np0005539564 nova_compute[226295]: 2025-11-29 07:59:38.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:38 np0005539564 nova_compute[226295]: 2025-11-29 07:59:38.389 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:38 np0005539564 nova_compute[226295]: 2025-11-29 07:59:38.390 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:38 np0005539564 nova_compute[226295]: 2025-11-29 07:59:38.390 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:38 np0005539564 nova_compute[226295]: 2025-11-29 07:59:38.391 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:59:38 np0005539564 nova_compute[226295]: 2025-11-29 07:59:38.391 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:59:38 np0005539564 nova_compute[226295]: 2025-11-29 07:59:38.655 226310 DEBUG nova.objects.instance [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'pci_requests' on Instance uuid 5e56bdc6-e188-4475-a5b5-41dec34857ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:59:38 np0005539564 nova_compute[226295]: 2025-11-29 07:59:38.676 226310 DEBUG nova.network.neutron [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:59:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:38.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:38 np0005539564 nova_compute[226295]: 2025-11-29 07:59:38.724 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:38.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:59:38 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1208097719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:59:39 np0005539564 nova_compute[226295]: 2025-11-29 07:59:39.011 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:59:39 np0005539564 nova_compute[226295]: 2025-11-29 07:59:39.118 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000038 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:59:39 np0005539564 nova_compute[226295]: 2025-11-29 07:59:39.119 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000038 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:59:39 np0005539564 podman[247921]: 2025-11-29 07:59:39.171941152 +0000 UTC m=+0.089907794 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 02:59:39 np0005539564 podman[247920]: 2025-11-29 07:59:39.183048222 +0000 UTC m=+0.109024930 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:59:39 np0005539564 nova_compute[226295]: 2025-11-29 07:59:39.217 226310 DEBUG nova.policy [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a814d0c4600e45d9a1fac7bac5b7e69e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f69605de164b4c27ae715521263676fe', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:59:39 np0005539564 podman[247919]: 2025-11-29 07:59:39.227705099 +0000 UTC m=+0.157244024 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Nov 29 02:59:39 np0005539564 nova_compute[226295]: 2025-11-29 07:59:39.361 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:59:39 np0005539564 nova_compute[226295]: 2025-11-29 07:59:39.364 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4533MB free_disk=20.897350311279297GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:59:39 np0005539564 nova_compute[226295]: 2025-11-29 07:59:39.365 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:39 np0005539564 nova_compute[226295]: 2025-11-29 07:59:39.365 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:39 np0005539564 nova_compute[226295]: 2025-11-29 07:59:39.441 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 5e56bdc6-e188-4475-a5b5-41dec34857ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:59:39 np0005539564 nova_compute[226295]: 2025-11-29 07:59:39.442 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:59:39 np0005539564 nova_compute[226295]: 2025-11-29 07:59:39.442 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:59:39 np0005539564 nova_compute[226295]: 2025-11-29 07:59:39.480 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:59:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:59:40 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1698824759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:59:40 np0005539564 nova_compute[226295]: 2025-11-29 07:59:40.215 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.735s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:59:40 np0005539564 nova_compute[226295]: 2025-11-29 07:59:40.221 226310 DEBUG nova.network.neutron [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Successfully created port: abd6dd85-61a4-4a09-8f10-22994ccc6546 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:59:40 np0005539564 nova_compute[226295]: 2025-11-29 07:59:40.230 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:59:40 np0005539564 nova_compute[226295]: 2025-11-29 07:59:40.248 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:59:40 np0005539564 nova_compute[226295]: 2025-11-29 07:59:40.270 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:59:40 np0005539564 nova_compute[226295]: 2025-11-29 07:59:40.271 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:40.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:40.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e230 e230: 3 total, 3 up, 3 in
Nov 29 02:59:41 np0005539564 nova_compute[226295]: 2025-11-29 07:59:41.104 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:42 np0005539564 nova_compute[226295]: 2025-11-29 07:59:42.192 226310 DEBUG nova.network.neutron [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Successfully updated port: abd6dd85-61a4-4a09-8f10-22994ccc6546 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:59:42 np0005539564 nova_compute[226295]: 2025-11-29 07:59:42.207 226310 DEBUG oslo_concurrency.lockutils [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "refresh_cache-5e56bdc6-e188-4475-a5b5-41dec34857ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:59:42 np0005539564 nova_compute[226295]: 2025-11-29 07:59:42.208 226310 DEBUG oslo_concurrency.lockutils [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquired lock "refresh_cache-5e56bdc6-e188-4475-a5b5-41dec34857ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:59:42 np0005539564 nova_compute[226295]: 2025-11-29 07:59:42.208 226310 DEBUG nova.network.neutron [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:59:42 np0005539564 nova_compute[226295]: 2025-11-29 07:59:42.470 226310 WARNING nova.network.neutron [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] 738e99b4-b58e-4eff-b209-c4aa3748c994 already exists in list: networks containing: ['738e99b4-b58e-4eff-b209-c4aa3748c994']. ignoring it#033[00m
Nov 29 02:59:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:42.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:42 np0005539564 nova_compute[226295]: 2025-11-29 07:59:42.769 226310 DEBUG nova.compute.manager [req-61e39658-3a55-4863-b7f2-04bf12aed229 req-7e2a4515-71e4-49e8-8a77-a0f72d52bbdc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Received event network-changed-abd6dd85-61a4-4a09-8f10-22994ccc6546 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:59:42 np0005539564 nova_compute[226295]: 2025-11-29 07:59:42.770 226310 DEBUG nova.compute.manager [req-61e39658-3a55-4863-b7f2-04bf12aed229 req-7e2a4515-71e4-49e8-8a77-a0f72d52bbdc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Refreshing instance network info cache due to event network-changed-abd6dd85-61a4-4a09-8f10-22994ccc6546. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:59:42 np0005539564 nova_compute[226295]: 2025-11-29 07:59:42.770 226310 DEBUG oslo_concurrency.lockutils [req-61e39658-3a55-4863-b7f2-04bf12aed229 req-7e2a4515-71e4-49e8-8a77-a0f72d52bbdc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-5e56bdc6-e188-4475-a5b5-41dec34857ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:59:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:59:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:42.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:59:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e231 e231: 3 total, 3 up, 3 in
Nov 29 02:59:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:43 np0005539564 nova_compute[226295]: 2025-11-29 07:59:43.727 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:59:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:44.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:59:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:59:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:44.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.105 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.217 226310 DEBUG nova.network.neutron [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Updating instance_info_cache with network_info: [{"id": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "address": "fa:16:3e:5b:d2:c3", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41dfe7-58", "ovs_interfaceid": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "address": "fa:16:3e:09:70:da", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabd6dd85-61", "ovs_interfaceid": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.238 226310 DEBUG oslo_concurrency.lockutils [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Releasing lock "refresh_cache-5e56bdc6-e188-4475-a5b5-41dec34857ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.240 226310 DEBUG oslo_concurrency.lockutils [req-61e39658-3a55-4863-b7f2-04bf12aed229 req-7e2a4515-71e4-49e8-8a77-a0f72d52bbdc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-5e56bdc6-e188-4475-a5b5-41dec34857ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.240 226310 DEBUG nova.network.neutron [req-61e39658-3a55-4863-b7f2-04bf12aed229 req-7e2a4515-71e4-49e8-8a77-a0f72d52bbdc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Refreshing network info cache for port abd6dd85-61a4-4a09-8f10-22994ccc6546 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.244 226310 DEBUG nova.virt.libvirt.vif [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:58:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1207742721',display_name='tempest-AttachInterfacesTestJSON-server-1207742721',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1207742721',id=56,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDycEVDRdfXO8pLXmtjTRtnr0BF71pXddXuCIscLHPIEn50XzNEMPicg3w04Q889ueh17/8w/QBi2AydIP+WMWiGgPA2kSG2wH1FLSX5aM1oFvCoFB8oXZBEaftkGJwedg==',key_name='tempest-keypair-1494504529',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:59:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-3d1scc5u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:59:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=5e56bdc6-e188-4475-a5b5-41dec34857ee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "address": "fa:16:3e:09:70:da", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabd6dd85-61", "ovs_interfaceid": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.245 226310 DEBUG nova.network.os_vif_util [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "address": "fa:16:3e:09:70:da", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabd6dd85-61", "ovs_interfaceid": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.246 226310 DEBUG nova.network.os_vif_util [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:70:da,bridge_name='br-int',has_traffic_filtering=True,id=abd6dd85-61a4-4a09-8f10-22994ccc6546,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabd6dd85-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.246 226310 DEBUG os_vif [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:70:da,bridge_name='br-int',has_traffic_filtering=True,id=abd6dd85-61a4-4a09-8f10-22994ccc6546,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabd6dd85-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.247 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.248 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.248 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.254 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.254 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabd6dd85-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.255 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapabd6dd85-61, col_values=(('external_ids', {'iface-id': 'abd6dd85-61a4-4a09-8f10-22994ccc6546', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:70:da', 'vm-uuid': '5e56bdc6-e188-4475-a5b5-41dec34857ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.295 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:46 np0005539564 NetworkManager[48997]: <info>  [1764403186.2970] manager: (tapabd6dd85-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.298 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.307 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.310 226310 INFO os_vif [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:70:da,bridge_name='br-int',has_traffic_filtering=True,id=abd6dd85-61a4-4a09-8f10-22994ccc6546,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabd6dd85-61')#033[00m
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.312 226310 DEBUG nova.virt.libvirt.vif [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:58:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1207742721',display_name='tempest-AttachInterfacesTestJSON-server-1207742721',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1207742721',id=56,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDycEVDRdfXO8pLXmtjTRtnr0BF71pXddXuCIscLHPIEn50XzNEMPicg3w04Q889ueh17/8w/QBi2AydIP+WMWiGgPA2kSG2wH1FLSX5aM1oFvCoFB8oXZBEaftkGJwedg==',key_name='tempest-keypair-1494504529',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:59:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-3d1scc5u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:59:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=5e56bdc6-e188-4475-a5b5-41dec34857ee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "address": "fa:16:3e:09:70:da", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabd6dd85-61", "ovs_interfaceid": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.313 226310 DEBUG nova.network.os_vif_util [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "address": "fa:16:3e:09:70:da", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabd6dd85-61", "ovs_interfaceid": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.314 226310 DEBUG nova.network.os_vif_util [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:70:da,bridge_name='br-int',has_traffic_filtering=True,id=abd6dd85-61a4-4a09-8f10-22994ccc6546,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabd6dd85-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.317 226310 DEBUG nova.virt.libvirt.guest [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] attach device xml: <interface type="ethernet">
Nov 29 02:59:46 np0005539564 nova_compute[226295]:  <mac address="fa:16:3e:09:70:da"/>
Nov 29 02:59:46 np0005539564 nova_compute[226295]:  <model type="virtio"/>
Nov 29 02:59:46 np0005539564 nova_compute[226295]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:59:46 np0005539564 nova_compute[226295]:  <mtu size="1442"/>
Nov 29 02:59:46 np0005539564 nova_compute[226295]:  <target dev="tapabd6dd85-61"/>
Nov 29 02:59:46 np0005539564 nova_compute[226295]: </interface>
Nov 29 02:59:46 np0005539564 nova_compute[226295]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 02:59:46 np0005539564 kernel: tapabd6dd85-61: entered promiscuous mode
Nov 29 02:59:46 np0005539564 NetworkManager[48997]: <info>  [1764403186.3346] manager: (tapabd6dd85-61): new Tun device (/org/freedesktop/NetworkManager/Devices/87)
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.336 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:46 np0005539564 ovn_controller[130591]: 2025-11-29T07:59:46Z|00152|binding|INFO|Claiming lport abd6dd85-61a4-4a09-8f10-22994ccc6546 for this chassis.
Nov 29 02:59:46 np0005539564 ovn_controller[130591]: 2025-11-29T07:59:46Z|00153|binding|INFO|abd6dd85-61a4-4a09-8f10-22994ccc6546: Claiming fa:16:3e:09:70:da 10.100.0.14
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.352 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:46 np0005539564 ovn_controller[130591]: 2025-11-29T07:59:46Z|00154|binding|INFO|Setting lport abd6dd85-61a4-4a09-8f10-22994ccc6546 ovn-installed in OVS
Nov 29 02:59:46 np0005539564 ovn_controller[130591]: 2025-11-29T07:59:46Z|00155|binding|INFO|Setting lport abd6dd85-61a4-4a09-8f10-22994ccc6546 up in Southbound
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.356 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:46.357 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:70:da 10.100.0.14'], port_security=['fa:16:3e:09:70:da 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '5e56bdc6-e188-4475-a5b5-41dec34857ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-738e99b4-b58e-4eff-b209-c4aa3748c994', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f69605de164b4c27ae715521263676fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3edda898-8529-43cc-9949-7b5bcfbbe45d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05e918c3-f77d-4277-9e74-f8ddcf4ab8e9, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=abd6dd85-61a4-4a09-8f10-22994ccc6546) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:59:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:46.360 139780 INFO neutron.agent.ovn.metadata.agent [-] Port abd6dd85-61a4-4a09-8f10-22994ccc6546 in datapath 738e99b4-b58e-4eff-b209-c4aa3748c994 bound to our chassis#033[00m
Nov 29 02:59:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:46.362 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 738e99b4-b58e-4eff-b209-c4aa3748c994#033[00m
Nov 29 02:59:46 np0005539564 systemd-udevd[248013]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:59:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:46.381 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7b153b06-1a44-4bb0-8bb9-d91fa5051047]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:46 np0005539564 NetworkManager[48997]: <info>  [1764403186.3966] device (tapabd6dd85-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:59:46 np0005539564 NetworkManager[48997]: <info>  [1764403186.3984] device (tapabd6dd85-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:59:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:46.431 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[d070cadb-b820-4ca6-a9ba-750104cdd7a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:46.436 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[dbd2e847-1dd8-4e9c-8ef8-e3612b3282ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:46.480 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[5b10613b-5230-4df6-97da-4d9a62aec59b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:46.509 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[df1868c7-844e-4f6d-bb66-dde93a873e84]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap738e99b4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:be:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 612516, 'reachable_time': 30271, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248020, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:46.536 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e995f0f8-8449-46d5-a9d5-4e92e67da75c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap738e99b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 612531, 'tstamp': 612531}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248021, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap738e99b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 612536, 'tstamp': 612536}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248021, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:46.539 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap738e99b4-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.541 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.543 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:46.543 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap738e99b4-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:59:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:46.543 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:59:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:46.544 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap738e99b4-b0, col_values=(('external_ids', {'iface-id': '2a1fcde6-d99a-4732-a125-d24eb08c8766'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:59:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:46.544 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.698 226310 DEBUG nova.virt.libvirt.driver [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.698 226310 DEBUG nova.virt.libvirt.driver [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.699 226310 DEBUG nova.virt.libvirt.driver [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No VIF found with MAC fa:16:3e:5b:d2:c3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.699 226310 DEBUG nova.virt.libvirt.driver [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No VIF found with MAC fa:16:3e:09:70:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:59:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:46.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.752 226310 DEBUG nova.virt.libvirt.guest [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:59:46 np0005539564 nova_compute[226295]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:59:46 np0005539564 nova_compute[226295]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1207742721</nova:name>
Nov 29 02:59:46 np0005539564 nova_compute[226295]:  <nova:creationTime>2025-11-29 07:59:46</nova:creationTime>
Nov 29 02:59:46 np0005539564 nova_compute[226295]:  <nova:flavor name="m1.nano">
Nov 29 02:59:46 np0005539564 nova_compute[226295]:    <nova:memory>128</nova:memory>
Nov 29 02:59:46 np0005539564 nova_compute[226295]:    <nova:disk>1</nova:disk>
Nov 29 02:59:46 np0005539564 nova_compute[226295]:    <nova:swap>0</nova:swap>
Nov 29 02:59:46 np0005539564 nova_compute[226295]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:59:46 np0005539564 nova_compute[226295]:    <nova:vcpus>1</nova:vcpus>
Nov 29 02:59:46 np0005539564 nova_compute[226295]:  </nova:flavor>
Nov 29 02:59:46 np0005539564 nova_compute[226295]:  <nova:owner>
Nov 29 02:59:46 np0005539564 nova_compute[226295]:    <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 02:59:46 np0005539564 nova_compute[226295]:    <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 02:59:46 np0005539564 nova_compute[226295]:  </nova:owner>
Nov 29 02:59:46 np0005539564 nova_compute[226295]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:59:46 np0005539564 nova_compute[226295]:  <nova:ports>
Nov 29 02:59:46 np0005539564 nova_compute[226295]:    <nova:port uuid="da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb">
Nov 29 02:59:46 np0005539564 nova_compute[226295]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 02:59:46 np0005539564 nova_compute[226295]:    </nova:port>
Nov 29 02:59:46 np0005539564 nova_compute[226295]:    <nova:port uuid="abd6dd85-61a4-4a09-8f10-22994ccc6546">
Nov 29 02:59:46 np0005539564 nova_compute[226295]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 02:59:46 np0005539564 nova_compute[226295]:    </nova:port>
Nov 29 02:59:46 np0005539564 nova_compute[226295]:  </nova:ports>
Nov 29 02:59:46 np0005539564 nova_compute[226295]: </nova:instance>
Nov 29 02:59:46 np0005539564 nova_compute[226295]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 02:59:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:46.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:46 np0005539564 nova_compute[226295]: 2025-11-29 07:59:46.785 226310 DEBUG oslo_concurrency.lockutils [None req-453b626b-e9d8-45b2-acc6-275df39ed281 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "interface-5e56bdc6-e188-4475-a5b5-41dec34857ee-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 8.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e232 e232: 3 total, 3 up, 3 in
Nov 29 02:59:47 np0005539564 nova_compute[226295]: 2025-11-29 07:59:47.248 226310 DEBUG nova.compute.manager [req-2706d406-e673-4d21-bfac-074df0699a76 req-8e816afe-8afb-4bfa-b8bc-a48eb8096d60 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Received event network-vif-plugged-abd6dd85-61a4-4a09-8f10-22994ccc6546 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:59:47 np0005539564 nova_compute[226295]: 2025-11-29 07:59:47.248 226310 DEBUG oslo_concurrency.lockutils [req-2706d406-e673-4d21-bfac-074df0699a76 req-8e816afe-8afb-4bfa-b8bc-a48eb8096d60 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:47 np0005539564 nova_compute[226295]: 2025-11-29 07:59:47.249 226310 DEBUG oslo_concurrency.lockutils [req-2706d406-e673-4d21-bfac-074df0699a76 req-8e816afe-8afb-4bfa-b8bc-a48eb8096d60 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:47 np0005539564 nova_compute[226295]: 2025-11-29 07:59:47.249 226310 DEBUG oslo_concurrency.lockutils [req-2706d406-e673-4d21-bfac-074df0699a76 req-8e816afe-8afb-4bfa-b8bc-a48eb8096d60 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:47 np0005539564 nova_compute[226295]: 2025-11-29 07:59:47.249 226310 DEBUG nova.compute.manager [req-2706d406-e673-4d21-bfac-074df0699a76 req-8e816afe-8afb-4bfa-b8bc-a48eb8096d60 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] No waiting events found dispatching network-vif-plugged-abd6dd85-61a4-4a09-8f10-22994ccc6546 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:59:47 np0005539564 nova_compute[226295]: 2025-11-29 07:59:47.250 226310 WARNING nova.compute.manager [req-2706d406-e673-4d21-bfac-074df0699a76 req-8e816afe-8afb-4bfa-b8bc-a48eb8096d60 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Received unexpected event network-vif-plugged-abd6dd85-61a4-4a09-8f10-22994ccc6546 for instance with vm_state active and task_state None.#033[00m
Nov 29 02:59:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:59:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:48.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:59:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:48.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:49 np0005539564 ovn_controller[130591]: 2025-11-29T07:59:49Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:09:70:da 10.100.0.14
Nov 29 02:59:49 np0005539564 ovn_controller[130591]: 2025-11-29T07:59:49Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:09:70:da 10.100.0.14
Nov 29 02:59:49 np0005539564 nova_compute[226295]: 2025-11-29 07:59:49.488 226310 DEBUG nova.compute.manager [req-65d56947-4015-4876-9bcc-6fd0cd3579cb req-12158d16-6393-4043-823e-0ade684590a6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Received event network-vif-plugged-abd6dd85-61a4-4a09-8f10-22994ccc6546 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:59:49 np0005539564 nova_compute[226295]: 2025-11-29 07:59:49.489 226310 DEBUG oslo_concurrency.lockutils [req-65d56947-4015-4876-9bcc-6fd0cd3579cb req-12158d16-6393-4043-823e-0ade684590a6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:49 np0005539564 nova_compute[226295]: 2025-11-29 07:59:49.489 226310 DEBUG oslo_concurrency.lockutils [req-65d56947-4015-4876-9bcc-6fd0cd3579cb req-12158d16-6393-4043-823e-0ade684590a6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:49 np0005539564 nova_compute[226295]: 2025-11-29 07:59:49.489 226310 DEBUG oslo_concurrency.lockutils [req-65d56947-4015-4876-9bcc-6fd0cd3579cb req-12158d16-6393-4043-823e-0ade684590a6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:49 np0005539564 nova_compute[226295]: 2025-11-29 07:59:49.489 226310 DEBUG nova.compute.manager [req-65d56947-4015-4876-9bcc-6fd0cd3579cb req-12158d16-6393-4043-823e-0ade684590a6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] No waiting events found dispatching network-vif-plugged-abd6dd85-61a4-4a09-8f10-22994ccc6546 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:59:49 np0005539564 nova_compute[226295]: 2025-11-29 07:59:49.490 226310 WARNING nova.compute.manager [req-65d56947-4015-4876-9bcc-6fd0cd3579cb req-12158d16-6393-4043-823e-0ade684590a6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Received unexpected event network-vif-plugged-abd6dd85-61a4-4a09-8f10-22994ccc6546 for instance with vm_state active and task_state None.#033[00m
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.394 226310 DEBUG nova.network.neutron [req-61e39658-3a55-4863-b7f2-04bf12aed229 req-7e2a4515-71e4-49e8-8a77-a0f72d52bbdc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Updated VIF entry in instance network info cache for port abd6dd85-61a4-4a09-8f10-22994ccc6546. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.395 226310 DEBUG nova.network.neutron [req-61e39658-3a55-4863-b7f2-04bf12aed229 req-7e2a4515-71e4-49e8-8a77-a0f72d52bbdc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Updating instance_info_cache with network_info: [{"id": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "address": "fa:16:3e:5b:d2:c3", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41dfe7-58", "ovs_interfaceid": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "address": "fa:16:3e:09:70:da", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabd6dd85-61", "ovs_interfaceid": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.440 226310 DEBUG oslo_concurrency.lockutils [req-61e39658-3a55-4863-b7f2-04bf12aed229 req-7e2a4515-71e4-49e8-8a77-a0f72d52bbdc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-5e56bdc6-e188-4475-a5b5-41dec34857ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.462 226310 DEBUG oslo_concurrency.lockutils [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "interface-5e56bdc6-e188-4475-a5b5-41dec34857ee-abd6dd85-61a4-4a09-8f10-22994ccc6546" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.463 226310 DEBUG oslo_concurrency.lockutils [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "interface-5e56bdc6-e188-4475-a5b5-41dec34857ee-abd6dd85-61a4-4a09-8f10-22994ccc6546" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.510 226310 DEBUG nova.objects.instance [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'flavor' on Instance uuid 5e56bdc6-e188-4475-a5b5-41dec34857ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.542 226310 DEBUG nova.virt.libvirt.vif [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:58:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1207742721',display_name='tempest-AttachInterfacesTestJSON-server-1207742721',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1207742721',id=56,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDycEVDRdfXO8pLXmtjTRtnr0BF71pXddXuCIscLHPIEn50XzNEMPicg3w04Q889ueh17/8w/QBi2AydIP+WMWiGgPA2kSG2wH1FLSX5aM1oFvCoFB8oXZBEaftkGJwedg==',key_name='tempest-keypair-1494504529',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:59:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-3d1scc5u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:59:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=5e56bdc6-e188-4475-a5b5-41dec34857ee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "address": "fa:16:3e:09:70:da", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabd6dd85-61", "ovs_interfaceid": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.543 226310 DEBUG nova.network.os_vif_util [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "address": "fa:16:3e:09:70:da", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabd6dd85-61", "ovs_interfaceid": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.544 226310 DEBUG nova.network.os_vif_util [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:70:da,bridge_name='br-int',has_traffic_filtering=True,id=abd6dd85-61a4-4a09-8f10-22994ccc6546,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabd6dd85-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.549 226310 DEBUG nova.virt.libvirt.guest [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:09:70:da"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapabd6dd85-61"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.552 226310 DEBUG nova.virt.libvirt.guest [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:09:70:da"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapabd6dd85-61"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.555 226310 DEBUG nova.virt.libvirt.driver [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Attempting to detach device tapabd6dd85-61 from instance 5e56bdc6-e188-4475-a5b5-41dec34857ee from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.556 226310 DEBUG nova.virt.libvirt.guest [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] detach device xml: <interface type="ethernet">
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <mac address="fa:16:3e:09:70:da"/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <model type="virtio"/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <mtu size="1442"/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <target dev="tapabd6dd85-61"/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]: </interface>
Nov 29 02:59:50 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.695 226310 DEBUG nova.virt.libvirt.guest [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:09:70:da"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapabd6dd85-61"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.701 226310 DEBUG nova.virt.libvirt.guest [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:09:70:da"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapabd6dd85-61"/></interface>not found in domain: <domain type='kvm' id='25'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <name>instance-00000038</name>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <uuid>5e56bdc6-e188-4475-a5b5-41dec34857ee</uuid>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1207742721</nova:name>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <nova:creationTime>2025-11-29 07:59:46</nova:creationTime>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <nova:flavor name="m1.nano">
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:memory>128</nova:memory>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:disk>1</nova:disk>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:swap>0</nova:swap>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:vcpus>1</nova:vcpus>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </nova:flavor>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <nova:owner>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </nova:owner>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <nova:ports>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:port uuid="da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb">
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </nova:port>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:port uuid="abd6dd85-61a4-4a09-8f10-22994ccc6546">
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </nova:port>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </nova:ports>
Nov 29 02:59:50 np0005539564 nova_compute[226295]: </nova:instance>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <memory unit='KiB'>131072</memory>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <vcpu placement='static'>1</vcpu>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <resource>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <partition>/machine</partition>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </resource>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <sysinfo type='smbios'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <system>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <entry name='manufacturer'>RDO</entry>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <entry name='serial'>5e56bdc6-e188-4475-a5b5-41dec34857ee</entry>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <entry name='uuid'>5e56bdc6-e188-4475-a5b5-41dec34857ee</entry>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <entry name='family'>Virtual Machine</entry>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </system>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <os>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <boot dev='hd'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <smbios mode='sysinfo'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <vmcoreinfo state='on'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <model fallback='forbid'>Nehalem</model>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <feature policy='require' name='x2apic'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <feature policy='require' name='hypervisor'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <feature policy='require' name='vme'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <clock offset='utc'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <timer name='hpet' present='no'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </clock>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <on_poweroff>destroy</on_poweroff>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <on_reboot>restart</on_reboot>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <on_crash>destroy</on_crash>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <disk type='network' device='disk'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <auth username='openstack'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <source protocol='rbd' name='vms/5e56bdc6-e188-4475-a5b5-41dec34857ee_disk' index='2'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:        <host name='192.168.122.100' port='6789'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:        <host name='192.168.122.102' port='6789'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:        <host name='192.168.122.101' port='6789'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target dev='vda' bus='virtio'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='virtio-disk0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <disk type='network' device='cdrom'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <auth username='openstack'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <source protocol='rbd' name='vms/5e56bdc6-e188-4475-a5b5-41dec34857ee_disk.config' index='1'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:        <host name='192.168.122.100' port='6789'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:        <host name='192.168.122.102' port='6789'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:        <host name='192.168.122.101' port='6789'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target dev='sda' bus='sata'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <readonly/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='sata0-0-0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pcie.0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='1' port='0x10'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.1'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='2' port='0x11'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.2'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='3' port='0x12'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.3'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='4' port='0x13'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.4'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='5' port='0x14'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.5'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='6' port='0x15'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.6'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='7' port='0x16'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.7'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='8' port='0x17'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.8'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='9' port='0x18'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.9'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='10' port='0x19'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.10'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='11' port='0x1a'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.11'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='12' port='0x1b'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.12'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='13' port='0x1c'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.13'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='14' port='0x1d'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.14'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='15' port='0x1e'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.15'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='16' port='0x1f'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.16'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='17' port='0x20'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.17'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='18' port='0x21'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.18'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='19' port='0x22'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.19'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='20' port='0x23'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.20'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='21' port='0x24'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.21'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='22' port='0x25'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.22'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='23' port='0x26'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.23'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='24' port='0x27'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.24'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='25' port='0x28'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.25'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-pci-bridge'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.26'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='usb'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='sata' index='0'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='ide'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <interface type='ethernet'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <mac address='fa:16:3e:5b:d2:c3'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target dev='tapda41dfe7-58'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model type='virtio'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <mtu size='1442'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='net0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </interface>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <interface type='ethernet'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <mac address='fa:16:3e:09:70:da'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target dev='tapabd6dd85-61'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model type='virtio'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <mtu size='1442'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='net1'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </interface>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <serial type='pty'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <source path='/dev/pts/0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <log file='/var/lib/nova/instances/5e56bdc6-e188-4475-a5b5-41dec34857ee/console.log' append='off'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target type='isa-serial' port='0'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:        <model name='isa-serial'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      </target>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='serial0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </serial>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <console type='pty' tty='/dev/pts/0'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <source path='/dev/pts/0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <log file='/var/lib/nova/instances/5e56bdc6-e188-4475-a5b5-41dec34857ee/console.log' append='off'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target type='serial' port='0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='serial0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </console>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <input type='tablet' bus='usb'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='input0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='usb' bus='0' port='1'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </input>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <input type='mouse' bus='ps2'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='input1'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </input>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <input type='keyboard' bus='ps2'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='input2'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </input>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <listen type='address' address='::0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </graphics>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <audio id='1' type='none'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <video>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='video0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <watchdog model='itco' action='reset'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='watchdog0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </watchdog>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <memballoon model='virtio'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <stats period='10'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='balloon0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <rng model='virtio'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <backend model='random'>/dev/urandom</backend>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='rng0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <label>system_u:system_r:svirt_t:s0:c294,c305</label>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c294,c305</imagelabel>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </seclabel>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <label>+107:+107</label>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <imagelabel>+107:+107</imagelabel>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </seclabel>
Nov 29 02:59:50 np0005539564 nova_compute[226295]: </domain>
Nov 29 02:59:50 np0005539564 nova_compute[226295]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.702 226310 INFO nova.virt.libvirt.driver [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully detached device tapabd6dd85-61 from instance 5e56bdc6-e188-4475-a5b5-41dec34857ee from the persistent domain config.
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.703 226310 DEBUG nova.virt.libvirt.driver [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] (1/8): Attempting to detach device tapabd6dd85-61 with device alias net1 from instance 5e56bdc6-e188-4475-a5b5-41dec34857ee from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.703 226310 DEBUG nova.virt.libvirt.guest [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] detach device xml: <interface type="ethernet">
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <mac address="fa:16:3e:09:70:da"/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <model type="virtio"/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <mtu size="1442"/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <target dev="tapabd6dd85-61"/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]: </interface>
Nov 29 02:59:50 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 02:59:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:50.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:59:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:50.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:59:50 np0005539564 kernel: tapabd6dd85-61 (unregistering): left promiscuous mode
Nov 29 02:59:50 np0005539564 NetworkManager[48997]: <info>  [1764403190.8252] device (tapabd6dd85-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:59:50 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Nov 29 02:59:50 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:59:50.830967) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:59:50 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Nov 29 02:59:50 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403190831001, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2646, "num_deletes": 267, "total_data_size": 6125664, "memory_usage": 6193072, "flush_reason": "Manual Compaction"}
Nov 29 02:59:50 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Nov 29 02:59:50 np0005539564 ovn_controller[130591]: 2025-11-29T07:59:50Z|00156|binding|INFO|Releasing lport abd6dd85-61a4-4a09-8f10-22994ccc6546 from this chassis (sb_readonly=0)
Nov 29 02:59:50 np0005539564 ovn_controller[130591]: 2025-11-29T07:59:50Z|00157|binding|INFO|Setting lport abd6dd85-61a4-4a09-8f10-22994ccc6546 down in Southbound
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.892 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:50 np0005539564 ovn_controller[130591]: 2025-11-29T07:59:50Z|00158|binding|INFO|Removing iface tapabd6dd85-61 ovn-installed in OVS
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.894 226310 DEBUG nova.virt.libvirt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Received event <DeviceRemovedEvent: 1764403190.8931983, 5e56bdc6-e188-4475-a5b5-41dec34857ee => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.895 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.895 226310 DEBUG nova.virt.libvirt.driver [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Start waiting for the detach event from libvirt for device tapabd6dd85-61 with device alias net1 for instance 5e56bdc6-e188-4475-a5b5-41dec34857ee _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.896 226310 DEBUG nova.virt.libvirt.guest [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:09:70:da"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapabd6dd85-61"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.899 226310 DEBUG nova.virt.libvirt.guest [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:09:70:da"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapabd6dd85-61"/></interface>not found in domain: <domain type='kvm' id='25'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <name>instance-00000038</name>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <uuid>5e56bdc6-e188-4475-a5b5-41dec34857ee</uuid>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1207742721</nova:name>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <nova:creationTime>2025-11-29 07:59:46</nova:creationTime>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <nova:flavor name="m1.nano">
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:memory>128</nova:memory>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:disk>1</nova:disk>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:swap>0</nova:swap>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:vcpus>1</nova:vcpus>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </nova:flavor>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <nova:owner>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </nova:owner>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <nova:ports>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:port uuid="da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb">
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </nova:port>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:port uuid="abd6dd85-61a4-4a09-8f10-22994ccc6546">
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </nova:port>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </nova:ports>
Nov 29 02:59:50 np0005539564 nova_compute[226295]: </nova:instance>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <memory unit='KiB'>131072</memory>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <vcpu placement='static'>1</vcpu>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <resource>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <partition>/machine</partition>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </resource>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <sysinfo type='smbios'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <system>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <entry name='manufacturer'>RDO</entry>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <entry name='serial'>5e56bdc6-e188-4475-a5b5-41dec34857ee</entry>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <entry name='uuid'>5e56bdc6-e188-4475-a5b5-41dec34857ee</entry>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <entry name='family'>Virtual Machine</entry>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </system>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <os>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <boot dev='hd'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <smbios mode='sysinfo'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <vmcoreinfo state='on'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <model fallback='forbid'>Nehalem</model>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <feature policy='require' name='x2apic'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <feature policy='require' name='hypervisor'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <feature policy='require' name='vme'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <clock offset='utc'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <timer name='hpet' present='no'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </clock>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <on_poweroff>destroy</on_poweroff>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <on_reboot>restart</on_reboot>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <on_crash>destroy</on_crash>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <disk type='network' device='disk'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <auth username='openstack'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <source protocol='rbd' name='vms/5e56bdc6-e188-4475-a5b5-41dec34857ee_disk' index='2'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:        <host name='192.168.122.100' port='6789'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:        <host name='192.168.122.102' port='6789'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:        <host name='192.168.122.101' port='6789'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target dev='vda' bus='virtio'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='virtio-disk0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <disk type='network' device='cdrom'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <auth username='openstack'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <source protocol='rbd' name='vms/5e56bdc6-e188-4475-a5b5-41dec34857ee_disk.config' index='1'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:        <host name='192.168.122.100' port='6789'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:        <host name='192.168.122.102' port='6789'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:        <host name='192.168.122.101' port='6789'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target dev='sda' bus='sata'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <readonly/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='sata0-0-0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pcie.0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='1' port='0x10'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.1'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='2' port='0x11'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.2'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='3' port='0x12'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.3'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='4' port='0x13'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.4'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='5' port='0x14'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.5'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='6' port='0x15'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.6'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='7' port='0x16'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.7'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='8' port='0x17'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.8'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='9' port='0x18'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.9'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='10' port='0x19'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.10'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='11' port='0x1a'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.11'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='12' port='0x1b'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.12'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='13' port='0x1c'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.13'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='14' port='0x1d'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.14'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='15' port='0x1e'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.15'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='16' port='0x1f'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.16'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='17' port='0x20'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.17'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='18' port='0x21'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.18'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='19' port='0x22'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.19'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='20' port='0x23'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.20'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='21' port='0x24'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.21'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='22' port='0x25'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.22'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='23' port='0x26'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.23'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='24' port='0x27'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.24'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target chassis='25' port='0x28'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.25'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model name='pcie-pci-bridge'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='pci.26'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='usb'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <controller type='sata' index='0'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='ide'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <interface type='ethernet'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <mac address='fa:16:3e:5b:d2:c3'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target dev='tapda41dfe7-58'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model type='virtio'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <mtu size='1442'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='net0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </interface>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <serial type='pty'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <source path='/dev/pts/0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <log file='/var/lib/nova/instances/5e56bdc6-e188-4475-a5b5-41dec34857ee/console.log' append='off'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target type='isa-serial' port='0'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:        <model name='isa-serial'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      </target>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='serial0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </serial>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <console type='pty' tty='/dev/pts/0'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <source path='/dev/pts/0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <log file='/var/lib/nova/instances/5e56bdc6-e188-4475-a5b5-41dec34857ee/console.log' append='off'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <target type='serial' port='0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='serial0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </console>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <input type='tablet' bus='usb'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='input0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='usb' bus='0' port='1'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </input>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <input type='mouse' bus='ps2'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='input1'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </input>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <input type='keyboard' bus='ps2'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='input2'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </input>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <listen type='address' address='::0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </graphics>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <audio id='1' type='none'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <video>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='video0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <watchdog model='itco' action='reset'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='watchdog0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </watchdog>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <memballoon model='virtio'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <stats period='10'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='balloon0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <rng model='virtio'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <backend model='random'>/dev/urandom</backend>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <alias name='rng0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <label>system_u:system_r:svirt_t:s0:c294,c305</label>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c294,c305</imagelabel>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </seclabel>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <label>+107:+107</label>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <imagelabel>+107:+107</imagelabel>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </seclabel>
Nov 29 02:59:50 np0005539564 nova_compute[226295]: </domain>
Nov 29 02:59:50 np0005539564 nova_compute[226295]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.899 226310 INFO nova.virt.libvirt.driver [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully detached device tapabd6dd85-61 from instance 5e56bdc6-e188-4475-a5b5-41dec34857ee from the live domain config.#033[00m
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.900 226310 DEBUG nova.virt.libvirt.vif [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:58:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1207742721',display_name='tempest-AttachInterfacesTestJSON-server-1207742721',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1207742721',id=56,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDycEVDRdfXO8pLXmtjTRtnr0BF71pXddXuCIscLHPIEn50XzNEMPicg3w04Q889ueh17/8w/QBi2AydIP+WMWiGgPA2kSG2wH1FLSX5aM1oFvCoFB8oXZBEaftkGJwedg==',key_name='tempest-keypair-1494504529',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:59:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-3d1scc5u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:59:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=5e56bdc6-e188-4475-a5b5-41dec34857ee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "address": "fa:16:3e:09:70:da", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabd6dd85-61", "ovs_interfaceid": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.900 226310 DEBUG nova.network.os_vif_util [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "address": "fa:16:3e:09:70:da", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabd6dd85-61", "ovs_interfaceid": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.901 226310 DEBUG nova.network.os_vif_util [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:70:da,bridge_name='br-int',has_traffic_filtering=True,id=abd6dd85-61a4-4a09-8f10-22994ccc6546,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabd6dd85-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.901 226310 DEBUG os_vif [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:70:da,bridge_name='br-int',has_traffic_filtering=True,id=abd6dd85-61a4-4a09-8f10-22994ccc6546,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabd6dd85-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.903 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.903 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabd6dd85-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.904 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.906 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.907 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.909 226310 INFO os_vif [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:70:da,bridge_name='br-int',has_traffic_filtering=True,id=abd6dd85-61a4-4a09-8f10-22994ccc6546,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabd6dd85-61')#033[00m
Nov 29 02:59:50 np0005539564 nova_compute[226295]: 2025-11-29 07:59:50.910 226310 DEBUG nova.virt.libvirt.guest [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1207742721</nova:name>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <nova:creationTime>2025-11-29 07:59:50</nova:creationTime>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <nova:flavor name="m1.nano">
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:memory>128</nova:memory>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:disk>1</nova:disk>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:swap>0</nova:swap>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:vcpus>1</nova:vcpus>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </nova:flavor>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <nova:owner>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </nova:owner>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  <nova:ports>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    <nova:port uuid="da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb">
Nov 29 02:59:50 np0005539564 nova_compute[226295]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:    </nova:port>
Nov 29 02:59:50 np0005539564 nova_compute[226295]:  </nova:ports>
Nov 29 02:59:50 np0005539564 nova_compute[226295]: </nova:instance>
Nov 29 02:59:50 np0005539564 nova_compute[226295]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 02:59:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:50.994 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:70:da 10.100.0.14'], port_security=['fa:16:3e:09:70:da 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '5e56bdc6-e188-4475-a5b5-41dec34857ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-738e99b4-b58e-4eff-b209-c4aa3748c994', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f69605de164b4c27ae715521263676fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3edda898-8529-43cc-9949-7b5bcfbbe45d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05e918c3-f77d-4277-9e74-f8ddcf4ab8e9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=abd6dd85-61a4-4a09-8f10-22994ccc6546) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:59:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:50.995 139780 INFO neutron.agent.ovn.metadata.agent [-] Port abd6dd85-61a4-4a09-8f10-22994ccc6546 in datapath 738e99b4-b58e-4eff-b209-c4aa3748c994 unbound from our chassis#033[00m
Nov 29 02:59:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:50.996 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 738e99b4-b58e-4eff-b209-c4aa3748c994#033[00m
Nov 29 02:59:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:51.015 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b195d5c6-cfcf-4aa0-a3ae-eb9eb69025eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:51.052 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[9565798c-1421-4ed9-943c-872e5ee92c6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:51.056 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[dae81a2a-4705-40d5-8ef5-469dc1f08a10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:51.095 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[8be9cdbf-2ef0-4185-a8b9-c42cf1b483dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:51 np0005539564 nova_compute[226295]: 2025-11-29 07:59:51.108 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:51.121 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f6b73771-ef47-4424-92a9-a24c59e9b70d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap738e99b4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:be:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 612516, 'reachable_time': 30271, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248035, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:51.143 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[59f67ace-6ba3-46e6-a6bc-c0a23ff5635e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap738e99b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 612531, 'tstamp': 612531}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248036, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap738e99b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 612536, 'tstamp': 612536}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248036, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:51.144 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap738e99b4-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:59:51 np0005539564 nova_compute[226295]: 2025-11-29 07:59:51.146 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:51 np0005539564 nova_compute[226295]: 2025-11-29 07:59:51.147 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:51.147 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap738e99b4-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:59:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:51.147 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:59:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:51.148 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap738e99b4-b0, col_values=(('external_ids', {'iface-id': '2a1fcde6-d99a-4732-a125-d24eb08c8766'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:59:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:51.148 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:59:51 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403191333399, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 4002433, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31671, "largest_seqno": 34312, "table_properties": {"data_size": 3991328, "index_size": 7215, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 23577, "raw_average_key_size": 21, "raw_value_size": 3969064, "raw_average_value_size": 3598, "num_data_blocks": 310, "num_entries": 1103, "num_filter_entries": 1103, "num_deletions": 267, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402984, "oldest_key_time": 1764402984, "file_creation_time": 1764403190, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:59:51 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 502509 microseconds, and 8694 cpu microseconds.
Nov 29 02:59:51 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:59:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:59:51.333467) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 4002433 bytes OK
Nov 29 02:59:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:59:51.333494) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Nov 29 02:59:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:59:51.661727) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Nov 29 02:59:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:59:51.661789) EVENT_LOG_v1 {"time_micros": 1764403191661774, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:59:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:59:51.661821) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:59:51 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 6113739, prev total WAL file size 6113739, number of live WAL files 2.
Nov 29 02:59:51 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:59:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:59:51.665803) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 29 02:59:51 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:59:51 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(3908KB)], [60(9328KB)]
Nov 29 02:59:51 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403191665866, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 13554938, "oldest_snapshot_seqno": -1}
Nov 29 02:59:52 np0005539564 nova_compute[226295]: 2025-11-29 07:59:52.080 226310 DEBUG nova.compute.manager [req-e89b9ea4-5b99-4c79-8b84-a465e5d81a35 req-7e718537-62fe-40b1-8b4b-22d66029f660 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Received event network-vif-unplugged-abd6dd85-61a4-4a09-8f10-22994ccc6546 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:59:52 np0005539564 nova_compute[226295]: 2025-11-29 07:59:52.080 226310 DEBUG oslo_concurrency.lockutils [req-e89b9ea4-5b99-4c79-8b84-a465e5d81a35 req-7e718537-62fe-40b1-8b4b-22d66029f660 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:52 np0005539564 nova_compute[226295]: 2025-11-29 07:59:52.081 226310 DEBUG oslo_concurrency.lockutils [req-e89b9ea4-5b99-4c79-8b84-a465e5d81a35 req-7e718537-62fe-40b1-8b4b-22d66029f660 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:52 np0005539564 nova_compute[226295]: 2025-11-29 07:59:52.081 226310 DEBUG oslo_concurrency.lockutils [req-e89b9ea4-5b99-4c79-8b84-a465e5d81a35 req-7e718537-62fe-40b1-8b4b-22d66029f660 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:52 np0005539564 nova_compute[226295]: 2025-11-29 07:59:52.082 226310 DEBUG nova.compute.manager [req-e89b9ea4-5b99-4c79-8b84-a465e5d81a35 req-7e718537-62fe-40b1-8b4b-22d66029f660 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] No waiting events found dispatching network-vif-unplugged-abd6dd85-61a4-4a09-8f10-22994ccc6546 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:59:52 np0005539564 nova_compute[226295]: 2025-11-29 07:59:52.082 226310 WARNING nova.compute.manager [req-e89b9ea4-5b99-4c79-8b84-a465e5d81a35 req-7e718537-62fe-40b1-8b4b-22d66029f660 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Received unexpected event network-vif-unplugged-abd6dd85-61a4-4a09-8f10-22994ccc6546 for instance with vm_state active and task_state None.#033[00m
Nov 29 02:59:52 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 6567 keys, 11704005 bytes, temperature: kUnknown
Nov 29 02:59:52 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403192160515, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 11704005, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11657786, "index_size": 28699, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16453, "raw_key_size": 167489, "raw_average_key_size": 25, "raw_value_size": 11537627, "raw_average_value_size": 1756, "num_data_blocks": 1154, "num_entries": 6567, "num_filter_entries": 6567, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764403191, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:59:52 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:59:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:59:52.160758) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 11704005 bytes
Nov 29 02:59:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:59:52.162888) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 27.4 rd, 23.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 9.1 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(6.3) write-amplify(2.9) OK, records in: 7104, records dropped: 537 output_compression: NoCompression
Nov 29 02:59:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:59:52.162930) EVENT_LOG_v1 {"time_micros": 1764403192162902, "job": 36, "event": "compaction_finished", "compaction_time_micros": 494708, "compaction_time_cpu_micros": 53761, "output_level": 6, "num_output_files": 1, "total_output_size": 11704005, "num_input_records": 7104, "num_output_records": 6567, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:59:52 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:59:52 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403192163972, "job": 36, "event": "table_file_deletion", "file_number": 62}
Nov 29 02:59:52 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:59:52 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403192165747, "job": 36, "event": "table_file_deletion", "file_number": 60}
Nov 29 02:59:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:59:51.665400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:59:52.165897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:59:52.165904) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:59:52.165906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:59:52.165909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-07:59:52.165931) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:52 np0005539564 nova_compute[226295]: 2025-11-29 07:59:52.204 226310 DEBUG oslo_concurrency.lockutils [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "refresh_cache-5e56bdc6-e188-4475-a5b5-41dec34857ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:59:52 np0005539564 nova_compute[226295]: 2025-11-29 07:59:52.204 226310 DEBUG oslo_concurrency.lockutils [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquired lock "refresh_cache-5e56bdc6-e188-4475-a5b5-41dec34857ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:59:52 np0005539564 nova_compute[226295]: 2025-11-29 07:59:52.205 226310 DEBUG nova.network.neutron [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:59:52 np0005539564 nova_compute[226295]: 2025-11-29 07:59:52.535 226310 DEBUG oslo_concurrency.lockutils [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "5e56bdc6-e188-4475-a5b5-41dec34857ee" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:52 np0005539564 nova_compute[226295]: 2025-11-29 07:59:52.535 226310 DEBUG oslo_concurrency.lockutils [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "5e56bdc6-e188-4475-a5b5-41dec34857ee" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:52 np0005539564 nova_compute[226295]: 2025-11-29 07:59:52.535 226310 DEBUG oslo_concurrency.lockutils [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:52 np0005539564 nova_compute[226295]: 2025-11-29 07:59:52.536 226310 DEBUG oslo_concurrency.lockutils [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:52 np0005539564 nova_compute[226295]: 2025-11-29 07:59:52.536 226310 DEBUG oslo_concurrency.lockutils [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:52 np0005539564 nova_compute[226295]: 2025-11-29 07:59:52.537 226310 INFO nova.compute.manager [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Terminating instance#033[00m
Nov 29 02:59:52 np0005539564 nova_compute[226295]: 2025-11-29 07:59:52.539 226310 DEBUG nova.compute.manager [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:59:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:52.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 02:59:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:52.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 02:59:52 np0005539564 nova_compute[226295]: 2025-11-29 07:59:52.847 226310 DEBUG nova.compute.manager [req-d8e8186c-d614-45bf-8903-45ce38af819c req-cd4dac24-22aa-4870-9cd6-dae172505555 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Received event network-vif-deleted-abd6dd85-61a4-4a09-8f10-22994ccc6546 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:59:52 np0005539564 nova_compute[226295]: 2025-11-29 07:59:52.847 226310 INFO nova.compute.manager [req-d8e8186c-d614-45bf-8903-45ce38af819c req-cd4dac24-22aa-4870-9cd6-dae172505555 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Neutron deleted interface abd6dd85-61a4-4a09-8f10-22994ccc6546; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 02:59:52 np0005539564 nova_compute[226295]: 2025-11-29 07:59:52.848 226310 DEBUG nova.network.neutron [req-d8e8186c-d614-45bf-8903-45ce38af819c req-cd4dac24-22aa-4870-9cd6-dae172505555 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Updating instance_info_cache with network_info: [{"id": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "address": "fa:16:3e:5b:d2:c3", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41dfe7-58", "ovs_interfaceid": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:59:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:54 np0005539564 nova_compute[226295]: 2025-11-29 07:59:54.256 226310 DEBUG nova.compute.manager [req-098cb026-468e-4633-9f1a-705c360b6eae req-995d664e-5814-4d93-bf39-329bd35fe282 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Received event network-vif-plugged-abd6dd85-61a4-4a09-8f10-22994ccc6546 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:59:54 np0005539564 nova_compute[226295]: 2025-11-29 07:59:54.256 226310 DEBUG oslo_concurrency.lockutils [req-098cb026-468e-4633-9f1a-705c360b6eae req-995d664e-5814-4d93-bf39-329bd35fe282 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:54 np0005539564 nova_compute[226295]: 2025-11-29 07:59:54.257 226310 DEBUG oslo_concurrency.lockutils [req-098cb026-468e-4633-9f1a-705c360b6eae req-995d664e-5814-4d93-bf39-329bd35fe282 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:54 np0005539564 nova_compute[226295]: 2025-11-29 07:59:54.258 226310 DEBUG oslo_concurrency.lockutils [req-098cb026-468e-4633-9f1a-705c360b6eae req-995d664e-5814-4d93-bf39-329bd35fe282 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:54 np0005539564 nova_compute[226295]: 2025-11-29 07:59:54.258 226310 DEBUG nova.compute.manager [req-098cb026-468e-4633-9f1a-705c360b6eae req-995d664e-5814-4d93-bf39-329bd35fe282 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] No waiting events found dispatching network-vif-plugged-abd6dd85-61a4-4a09-8f10-22994ccc6546 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:59:54 np0005539564 nova_compute[226295]: 2025-11-29 07:59:54.258 226310 WARNING nova.compute.manager [req-098cb026-468e-4633-9f1a-705c360b6eae req-995d664e-5814-4d93-bf39-329bd35fe282 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Received unexpected event network-vif-plugged-abd6dd85-61a4-4a09-8f10-22994ccc6546 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 02:59:54 np0005539564 nova_compute[226295]: 2025-11-29 07:59:54.361 226310 INFO nova.network.neutron [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Port abd6dd85-61a4-4a09-8f10-22994ccc6546 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 29 02:59:54 np0005539564 nova_compute[226295]: 2025-11-29 07:59:54.362 226310 DEBUG nova.network.neutron [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Updating instance_info_cache with network_info: [{"id": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "address": "fa:16:3e:5b:d2:c3", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41dfe7-58", "ovs_interfaceid": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:59:54 np0005539564 nova_compute[226295]: 2025-11-29 07:59:54.382 226310 DEBUG oslo_concurrency.lockutils [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Releasing lock "refresh_cache-5e56bdc6-e188-4475-a5b5-41dec34857ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:59:54 np0005539564 nova_compute[226295]: 2025-11-29 07:59:54.416 226310 DEBUG oslo_concurrency.lockutils [None req-e360e9c1-9921-4857-91a7-60135e99c514 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "interface-5e56bdc6-e188-4475-a5b5-41dec34857ee-abd6dd85-61a4-4a09-8f10-22994ccc6546" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:54.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:54.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:54 np0005539564 nova_compute[226295]: 2025-11-29 07:59:54.857 226310 DEBUG oslo_concurrency.lockutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "3886780f-7115-4500-9cdd-6e5aae5d95f9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:54 np0005539564 nova_compute[226295]: 2025-11-29 07:59:54.857 226310 DEBUG oslo_concurrency.lockutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "3886780f-7115-4500-9cdd-6e5aae5d95f9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:54 np0005539564 nova_compute[226295]: 2025-11-29 07:59:54.884 226310 DEBUG nova.compute.manager [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:59:54 np0005539564 nova_compute[226295]: 2025-11-29 07:59:54.984 226310 DEBUG oslo_concurrency.lockutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:54 np0005539564 nova_compute[226295]: 2025-11-29 07:59:54.984 226310 DEBUG oslo_concurrency.lockutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:54 np0005539564 nova_compute[226295]: 2025-11-29 07:59:54.993 226310 DEBUG nova.virt.hardware [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:59:54 np0005539564 nova_compute[226295]: 2025-11-29 07:59:54.993 226310 INFO nova.compute.claims [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 02:59:55 np0005539564 nova_compute[226295]: 2025-11-29 07:59:55.126 226310 DEBUG oslo_concurrency.processutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:59:55 np0005539564 nova_compute[226295]: 2025-11-29 07:59:55.555 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:55 np0005539564 nova_compute[226295]: 2025-11-29 07:59:55.905 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:56 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:59:56 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4006629938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.071 226310 DEBUG oslo_concurrency.processutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.945s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.079 226310 DEBUG nova.compute.provider_tree [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.109 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.180 226310 DEBUG nova.scheduler.client.report [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.213 226310 DEBUG oslo_concurrency.lockutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.214 226310 DEBUG nova.compute.manager [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:59:56 np0005539564 kernel: tapda41dfe7-58 (unregistering): left promiscuous mode
Nov 29 02:59:56 np0005539564 NetworkManager[48997]: <info>  [1764403196.2444] device (tapda41dfe7-58): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:59:56 np0005539564 ovn_controller[130591]: 2025-11-29T07:59:56Z|00159|binding|INFO|Releasing lport da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb from this chassis (sb_readonly=0)
Nov 29 02:59:56 np0005539564 ovn_controller[130591]: 2025-11-29T07:59:56Z|00160|binding|INFO|Setting lport da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb down in Southbound
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.251 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:56 np0005539564 ovn_controller[130591]: 2025-11-29T07:59:56Z|00161|binding|INFO|Removing iface tapda41dfe7-58 ovn-installed in OVS
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.255 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:56 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:56.263 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:d2:c3 10.100.0.12'], port_security=['fa:16:3e:5b:d2:c3 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5e56bdc6-e188-4475-a5b5-41dec34857ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-738e99b4-b58e-4eff-b209-c4aa3748c994', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f69605de164b4c27ae715521263676fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4015fd67-b711-4b9b-a6df-c2ac0c4e22ee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.176'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05e918c3-f77d-4277-9e74-f8ddcf4ab8e9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:59:56 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:56.264 139780 INFO neutron.agent.ovn.metadata.agent [-] Port da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb in datapath 738e99b4-b58e-4eff-b209-c4aa3748c994 unbound from our chassis#033[00m
Nov 29 02:59:56 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:56.265 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 738e99b4-b58e-4eff-b209-c4aa3748c994, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:59:56 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:56.267 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[30668cfa-0a48-49d8-b805-c8b7bc25faa5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:56 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:56.267 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994 namespace which is not needed anymore#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.282 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.283 226310 DEBUG nova.compute.manager [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.284 226310 DEBUG nova.network.neutron [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.313 226310 INFO nova.virt.libvirt.driver [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:59:56 np0005539564 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000038.scope: Deactivated successfully.
Nov 29 02:59:56 np0005539564 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000038.scope: Consumed 15.208s CPU time.
Nov 29 02:59:56 np0005539564 systemd-machined[190128]: Machine qemu-25-instance-00000038 terminated.
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.337 226310 DEBUG nova.compute.manager [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.348 226310 DEBUG nova.objects.instance [req-d8e8186c-d614-45bf-8903-45ce38af819c req-cd4dac24-22aa-4870-9cd6-dae172505555 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lazy-loading 'system_metadata' on Instance uuid 5e56bdc6-e188-4475-a5b5-41dec34857ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.378 226310 INFO nova.virt.libvirt.driver [-] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Instance destroyed successfully.#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.379 226310 DEBUG nova.objects.instance [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'resources' on Instance uuid 5e56bdc6-e188-4475-a5b5-41dec34857ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.421 226310 DEBUG nova.objects.instance [req-d8e8186c-d614-45bf-8903-45ce38af819c req-cd4dac24-22aa-4870-9cd6-dae172505555 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lazy-loading 'flavor' on Instance uuid 5e56bdc6-e188-4475-a5b5-41dec34857ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.427 226310 DEBUG nova.virt.libvirt.vif [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:58:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1207742721',display_name='tempest-AttachInterfacesTestJSON-server-1207742721',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1207742721',id=56,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDycEVDRdfXO8pLXmtjTRtnr0BF71pXddXuCIscLHPIEn50XzNEMPicg3w04Q889ueh17/8w/QBi2AydIP+WMWiGgPA2kSG2wH1FLSX5aM1oFvCoFB8oXZBEaftkGJwedg==',key_name='tempest-keypair-1494504529',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:59:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-3d1scc5u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:59:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=5e56bdc6-e188-4475-a5b5-41dec34857ee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "address": "fa:16:3e:5b:d2:c3", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41dfe7-58", "ovs_interfaceid": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.428 226310 DEBUG nova.network.os_vif_util [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "address": "fa:16:3e:5b:d2:c3", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41dfe7-58", "ovs_interfaceid": "da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.428 226310 DEBUG nova.network.os_vif_util [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5b:d2:c3,bridge_name='br-int',has_traffic_filtering=True,id=da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda41dfe7-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.429 226310 DEBUG os_vif [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:d2:c3,bridge_name='br-int',has_traffic_filtering=True,id=da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda41dfe7-58') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.431 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.431 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda41dfe7-58, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.432 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.434 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.436 226310 INFO os_vif [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:d2:c3,bridge_name='br-int',has_traffic_filtering=True,id=da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda41dfe7-58')#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.437 226310 DEBUG nova.virt.libvirt.vif [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:58:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1207742721',display_name='tempest-AttachInterfacesTestJSON-server-1207742721',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1207742721',id=56,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDycEVDRdfXO8pLXmtjTRtnr0BF71pXddXuCIscLHPIEn50XzNEMPicg3w04Q889ueh17/8w/QBi2AydIP+WMWiGgPA2kSG2wH1FLSX5aM1oFvCoFB8oXZBEaftkGJwedg==',key_name='tempest-keypair-1494504529',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:59:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-3d1scc5u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:59:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=5e56bdc6-e188-4475-a5b5-41dec34857ee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "address": "fa:16:3e:09:70:da", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabd6dd85-61", "ovs_interfaceid": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.437 226310 DEBUG nova.network.os_vif_util [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "address": "fa:16:3e:09:70:da", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabd6dd85-61", "ovs_interfaceid": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.438 226310 DEBUG nova.network.os_vif_util [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:70:da,bridge_name='br-int',has_traffic_filtering=True,id=abd6dd85-61a4-4a09-8f10-22994ccc6546,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabd6dd85-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.438 226310 DEBUG os_vif [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:70:da,bridge_name='br-int',has_traffic_filtering=True,id=abd6dd85-61a4-4a09-8f10-22994ccc6546,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabd6dd85-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.439 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.439 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabd6dd85-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.439 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.441 226310 INFO os_vif [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:70:da,bridge_name='br-int',has_traffic_filtering=True,id=abd6dd85-61a4-4a09-8f10-22994ccc6546,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabd6dd85-61')#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.464 226310 DEBUG nova.virt.libvirt.vif [req-d8e8186c-d614-45bf-8903-45ce38af819c req-cd4dac24-22aa-4870-9cd6-dae172505555 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:58:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1207742721',display_name='tempest-AttachInterfacesTestJSON-server-1207742721',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1207742721',id=56,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDycEVDRdfXO8pLXmtjTRtnr0BF71pXddXuCIscLHPIEn50XzNEMPicg3w04Q889ueh17/8w/QBi2AydIP+WMWiGgPA2kSG2wH1FLSX5aM1oFvCoFB8oXZBEaftkGJwedg==',key_name='tempest-keypair-1494504529',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:59:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-3d1scc5u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:59:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=5e56bdc6-e188-4475-a5b5-41dec34857ee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "address": "fa:16:3e:09:70:da", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabd6dd85-61", "ovs_interfaceid": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.466 226310 DEBUG nova.network.os_vif_util [req-d8e8186c-d614-45bf-8903-45ce38af819c req-cd4dac24-22aa-4870-9cd6-dae172505555 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Converting VIF {"id": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "address": "fa:16:3e:09:70:da", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabd6dd85-61", "ovs_interfaceid": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.466 226310 DEBUG nova.network.os_vif_util [req-d8e8186c-d614-45bf-8903-45ce38af819c req-cd4dac24-22aa-4870-9cd6-dae172505555 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:70:da,bridge_name='br-int',has_traffic_filtering=True,id=abd6dd85-61a4-4a09-8f10-22994ccc6546,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabd6dd85-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.470 226310 DEBUG nova.virt.libvirt.guest [req-d8e8186c-d614-45bf-8903-45ce38af819c req-cd4dac24-22aa-4870-9cd6-dae172505555 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:09:70:da"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapabd6dd85-61"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.475 226310 DEBUG nova.virt.libvirt.guest [req-d8e8186c-d614-45bf-8903-45ce38af819c req-cd4dac24-22aa-4870-9cd6-dae172505555 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:09:70:da"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapabd6dd85-61"/></interface>not found in domain: <domain type='kvm'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  <name>instance-00000038</name>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  <uuid>5e56bdc6-e188-4475-a5b5-41dec34857ee</uuid>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <nova:name>tempest-AttachInterfacesTestJSON-server-1207742721</nova:name>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 07:58:52</nova:creationTime>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 02:59:56 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:        <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:        <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:        <nova:port uuid="da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb">
Nov 29 02:59:56 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  <memory unit='KiB'>131072</memory>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  <vcpu placement='static'>1</vcpu>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  <sysinfo type='smbios'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <system>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <entry name='manufacturer'>RDO</entry>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <entry name='serial'>5e56bdc6-e188-4475-a5b5-41dec34857ee</entry>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <entry name='uuid'>5e56bdc6-e188-4475-a5b5-41dec34857ee</entry>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <entry name='family'>Virtual Machine</entry>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </system>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  <os>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <boot dev='hd'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <smbios mode='sysinfo'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  </os>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  <features>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <vmcoreinfo state='on'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  </features>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  <cpu mode='custom' match='exact' check='partial'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <model fallback='allow'>Nehalem</model>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  <clock offset='utc'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <timer name='hpet' present='no'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  </clock>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  <on_poweroff>destroy</on_poweroff>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  <on_reboot>restart</on_reboot>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  <on_crash>destroy</on_crash>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  <devices>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <disk type='network' device='disk'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <auth username='openstack'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <source protocol='rbd' name='vms/5e56bdc6-e188-4475-a5b5-41dec34857ee_disk'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:        <host name='192.168.122.100' port='6789'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:        <host name='192.168.122.102' port='6789'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:        <host name='192.168.122.101' port='6789'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target dev='vda' bus='virtio'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <disk type='network' device='cdrom'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <auth username='openstack'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      </auth>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <source protocol='rbd' name='vms/5e56bdc6-e188-4475-a5b5-41dec34857ee_disk.config'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:        <host name='192.168.122.100' port='6789'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:        <host name='192.168.122.102' port='6789'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:        <host name='192.168.122.101' port='6789'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      </source>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target dev='sda' bus='sata'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <readonly/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </disk>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='pci' index='0' model='pcie-root'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target chassis='1' port='0x10'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target chassis='2' port='0x11'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target chassis='3' port='0x12'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target chassis='4' port='0x13'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target chassis='5' port='0x14'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target chassis='6' port='0x15'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target chassis='7' port='0x16'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target chassis='8' port='0x17'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target chassis='9' port='0x18'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target chassis='10' port='0x19'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target chassis='11' port='0x1a'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target chassis='12' port='0x1b'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target chassis='13' port='0x1c'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target chassis='14' port='0x1d'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target chassis='15' port='0x1e'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target chassis='16' port='0x1f'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target chassis='17' port='0x20'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target chassis='18' port='0x21'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target chassis='19' port='0x22'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target chassis='20' port='0x23'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target chassis='21' port='0x24'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target chassis='22' port='0x25'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target chassis='23' port='0x26'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target chassis='24' port='0x27'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target chassis='25' port='0x28'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model name='pcie-pci-bridge'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <controller type='sata' index='0'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </controller>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <interface type='ethernet'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <mac address='fa:16:3e:5b:d2:c3'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target dev='tapda41dfe7-58'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model type='virtio'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <mtu size='1442'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </interface>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <serial type='pty'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <log file='/var/lib/nova/instances/5e56bdc6-e188-4475-a5b5-41dec34857ee/console.log' append='off'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target type='isa-serial' port='0'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:        <model name='isa-serial'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      </target>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </serial>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <console type='pty'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <log file='/var/lib/nova/instances/5e56bdc6-e188-4475-a5b5-41dec34857ee/console.log' append='off'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <target type='serial' port='0'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </console>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <input type='tablet' bus='usb'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='usb' bus='0' port='1'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </input>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <input type='mouse' bus='ps2'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <input type='keyboard' bus='ps2'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <listen type='address' address='::0'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </graphics>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <audio id='1' type='none'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <video>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </video>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <watchdog model='itco' action='reset'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <memballoon model='virtio'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <stats period='10'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <rng model='virtio'>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <backend model='random'>/dev/urandom</backend>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </rng>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  </devices>
Nov 29 02:59:56 np0005539564 nova_compute[226295]: </domain>
Nov 29 02:59:56 np0005539564 nova_compute[226295]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.475 226310 WARNING nova.virt.libvirt.driver [req-d8e8186c-d614-45bf-8903-45ce38af819c req-cd4dac24-22aa-4870-9cd6-dae172505555 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Detaching interface fa:16:3e:09:70:da failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapabd6dd85-61' not found.
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.476 226310 DEBUG nova.virt.libvirt.vif [req-d8e8186c-d614-45bf-8903-45ce38af819c req-cd4dac24-22aa-4870-9cd6-dae172505555 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:58:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1207742721',display_name='tempest-AttachInterfacesTestJSON-server-1207742721',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1207742721',id=56,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDycEVDRdfXO8pLXmtjTRtnr0BF71pXddXuCIscLHPIEn50XzNEMPicg3w04Q889ueh17/8w/QBi2AydIP+WMWiGgPA2kSG2wH1FLSX5aM1oFvCoFB8oXZBEaftkGJwedg==',key_name='tempest-keypair-1494504529',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:59:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-3d1scc5u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:59:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=5e56bdc6-e188-4475-a5b5-41dec34857ee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "address": "fa:16:3e:09:70:da", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabd6dd85-61", "ovs_interfaceid": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.476 226310 DEBUG nova.network.os_vif_util [req-d8e8186c-d614-45bf-8903-45ce38af819c req-cd4dac24-22aa-4870-9cd6-dae172505555 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Converting VIF {"id": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "address": "fa:16:3e:09:70:da", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabd6dd85-61", "ovs_interfaceid": "abd6dd85-61a4-4a09-8f10-22994ccc6546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.477 226310 DEBUG nova.network.os_vif_util [req-d8e8186c-d614-45bf-8903-45ce38af819c req-cd4dac24-22aa-4870-9cd6-dae172505555 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:70:da,bridge_name='br-int',has_traffic_filtering=True,id=abd6dd85-61a4-4a09-8f10-22994ccc6546,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabd6dd85-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.477 226310 DEBUG os_vif [req-d8e8186c-d614-45bf-8903-45ce38af819c req-cd4dac24-22aa-4870-9cd6-dae172505555 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:70:da,bridge_name='br-int',has_traffic_filtering=True,id=abd6dd85-61a4-4a09-8f10-22994ccc6546,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabd6dd85-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.479 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.479 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabd6dd85-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.479 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.482 226310 INFO os_vif [req-d8e8186c-d614-45bf-8903-45ce38af819c req-cd4dac24-22aa-4870-9cd6-dae172505555 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:70:da,bridge_name='br-int',has_traffic_filtering=True,id=abd6dd85-61a4-4a09-8f10-22994ccc6546,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabd6dd85-61')
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.483 226310 DEBUG nova.virt.libvirt.guest [req-d8e8186c-d614-45bf-8903-45ce38af819c req-cd4dac24-22aa-4870-9cd6-dae172505555 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1207742721</nova:name>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  <nova:creationTime>2025-11-29 07:59:56</nova:creationTime>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  <nova:flavor name="m1.nano">
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <nova:memory>128</nova:memory>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <nova:disk>1</nova:disk>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <nova:swap>0</nova:swap>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <nova:vcpus>1</nova:vcpus>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  </nova:flavor>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  <nova:owner>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  </nova:owner>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  <nova:ports>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    <nova:port uuid="da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb">
Nov 29 02:59:56 np0005539564 nova_compute[226295]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:    </nova:port>
Nov 29 02:59:56 np0005539564 nova_compute[226295]:  </nova:ports>
Nov 29 02:59:56 np0005539564 nova_compute[226295]: </nova:instance>
Nov 29 02:59:56 np0005539564 nova_compute[226295]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.524 226310 DEBUG nova.compute.manager [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.526 226310 DEBUG nova.virt.libvirt.driver [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.527 226310 INFO nova.virt.libvirt.driver [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Creating image(s)
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.565 226310 DEBUG nova.storage.rbd_utils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 3886780f-7115-4500-9cdd-6e5aae5d95f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.607 226310 DEBUG nova.storage.rbd_utils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 3886780f-7115-4500-9cdd-6e5aae5d95f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.650 226310 DEBUG nova.storage.rbd_utils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 3886780f-7115-4500-9cdd-6e5aae5d95f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.656 226310 DEBUG oslo_concurrency.processutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.696 226310 DEBUG nova.policy [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f7d59bea260d4752aa29379967636c0b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4d8c5b7e3ca74bc1880eb616b04711f7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 02:59:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:56.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.753 226310 DEBUG oslo_concurrency.processutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.754 226310 DEBUG oslo_concurrency.lockutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.755 226310 DEBUG oslo_concurrency.lockutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:59:56 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.755 226310 DEBUG oslo_concurrency.lockutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:59:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:56.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:56 np0005539564 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[247820]: [NOTICE]   (247831) : haproxy version is 2.8.14-c23fe91
Nov 29 02:59:56 np0005539564 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[247820]: [NOTICE]   (247831) : path to executable is /usr/sbin/haproxy
Nov 29 02:59:56 np0005539564 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[247820]: [WARNING]  (247831) : Exiting Master process...
Nov 29 02:59:56 np0005539564 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[247820]: [ALERT]    (247831) : Current worker (247833) exited with code 143 (Terminated)
Nov 29 02:59:56 np0005539564 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[247820]: [WARNING]  (247831) : All workers exited. Exiting... (0)
Nov 29 02:59:56 np0005539564 systemd[1]: libpod-140101a4525744b420ccf61d89daef5d561888b4631880d75e6ba97d7e5a9886.scope: Deactivated successfully.
Nov 29 02:59:56 np0005539564 podman[248096]: 2025-11-29 07:59:56.947313298 +0000 UTC m=+0.543352810 container died 140101a4525744b420ccf61d89daef5d561888b4631880d75e6ba97d7e5a9886 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:59:57 np0005539564 nova_compute[226295]: 2025-11-29 07:59:56.999 226310 DEBUG nova.storage.rbd_utils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 3886780f-7115-4500-9cdd-6e5aae5d95f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:59:57 np0005539564 nova_compute[226295]: 2025-11-29 07:59:57.005 226310 DEBUG oslo_concurrency.processutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 3886780f-7115-4500-9cdd-6e5aae5d95f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:59:57 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-140101a4525744b420ccf61d89daef5d561888b4631880d75e6ba97d7e5a9886-userdata-shm.mount: Deactivated successfully.
Nov 29 02:59:57 np0005539564 systemd[1]: var-lib-containers-storage-overlay-4caee926296f262832ced588e44fc1f0b708c8f3b9f64cb1a710ebbc5a66f545-merged.mount: Deactivated successfully.
Nov 29 02:59:57 np0005539564 podman[248096]: 2025-11-29 07:59:57.077756576 +0000 UTC m=+0.673796088 container cleanup 140101a4525744b420ccf61d89daef5d561888b4631880d75e6ba97d7e5a9886 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:59:57 np0005539564 systemd[1]: libpod-conmon-140101a4525744b420ccf61d89daef5d561888b4631880d75e6ba97d7e5a9886.scope: Deactivated successfully.
Nov 29 02:59:57 np0005539564 podman[248226]: 2025-11-29 07:59:57.181275776 +0000 UTC m=+0.066998013 container remove 140101a4525744b420ccf61d89daef5d561888b4631880d75e6ba97d7e5a9886 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 02:59:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:57.193 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2c6a22d4-f5ee-4ac8-b3e5-6b70ce73d51a]: (4, ('Sat Nov 29 07:59:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994 (140101a4525744b420ccf61d89daef5d561888b4631880d75e6ba97d7e5a9886)\n140101a4525744b420ccf61d89daef5d561888b4631880d75e6ba97d7e5a9886\nSat Nov 29 07:59:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994 (140101a4525744b420ccf61d89daef5d561888b4631880d75e6ba97d7e5a9886)\n140101a4525744b420ccf61d89daef5d561888b4631880d75e6ba97d7e5a9886\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:59:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:57.195 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0c51cf5a-2fa5-43c1-bb4e-921f7d8559b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:59:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:57.196 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap738e99b4-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:59:57 np0005539564 nova_compute[226295]: 2025-11-29 07:59:57.198 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:57 np0005539564 kernel: tap738e99b4-b0: left promiscuous mode
Nov 29 02:59:57 np0005539564 nova_compute[226295]: 2025-11-29 07:59:57.201 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:57.204 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[dfcd35ed-f22c-4028-b256-217ca1343f69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:59:57 np0005539564 nova_compute[226295]: 2025-11-29 07:59:57.214 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:57.221 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[54b91c8b-2898-4834-97b8-e5e65e386ba4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:59:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:57.223 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1ce58fe4-1042-4372-83fd-68c61fef98f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:59:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:57.245 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4933de6f-2bb8-4188-803e-175ad1fc3ee2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 612509, 'reachable_time': 32149, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248252, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:57 np0005539564 systemd[1]: run-netns-ovnmeta\x2d738e99b4\x2db58e\x2d4eff\x2db209\x2dc4aa3748c994.mount: Deactivated successfully.
Nov 29 02:59:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:57.253 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:59:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:57.254 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[2c9a1df4-3ace-4140-a030-7ea5758de31a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:57 np0005539564 nova_compute[226295]: 2025-11-29 07:59:57.371 226310 DEBUG oslo_concurrency.processutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 3886780f-7115-4500-9cdd-6e5aae5d95f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.366s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:59:57 np0005539564 nova_compute[226295]: 2025-11-29 07:59:57.440 226310 DEBUG nova.storage.rbd_utils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] resizing rbd image 3886780f-7115-4500-9cdd-6e5aae5d95f9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:59:57 np0005539564 nova_compute[226295]: 2025-11-29 07:59:57.551 226310 DEBUG nova.objects.instance [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lazy-loading 'migration_context' on Instance uuid 3886780f-7115-4500-9cdd-6e5aae5d95f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:59:57 np0005539564 nova_compute[226295]: 2025-11-29 07:59:57.571 226310 DEBUG nova.virt.libvirt.driver [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:59:57 np0005539564 nova_compute[226295]: 2025-11-29 07:59:57.572 226310 DEBUG nova.virt.libvirt.driver [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Ensure instance console log exists: /var/lib/nova/instances/3886780f-7115-4500-9cdd-6e5aae5d95f9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:59:57 np0005539564 nova_compute[226295]: 2025-11-29 07:59:57.572 226310 DEBUG oslo_concurrency.lockutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:57 np0005539564 nova_compute[226295]: 2025-11-29 07:59:57.573 226310 DEBUG oslo_concurrency.lockutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:57 np0005539564 nova_compute[226295]: 2025-11-29 07:59:57.573 226310 DEBUG oslo_concurrency.lockutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:57 np0005539564 nova_compute[226295]: 2025-11-29 07:59:57.743 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:57.743 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:59:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:57.745 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:59:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 07:59:57.746 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:59:57 np0005539564 nova_compute[226295]: 2025-11-29 07:59:57.798 226310 INFO nova.virt.libvirt.driver [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Deleting instance files /var/lib/nova/instances/5e56bdc6-e188-4475-a5b5-41dec34857ee_del#033[00m
Nov 29 02:59:57 np0005539564 nova_compute[226295]: 2025-11-29 07:59:57.800 226310 INFO nova.virt.libvirt.driver [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Deletion of /var/lib/nova/instances/5e56bdc6-e188-4475-a5b5-41dec34857ee_del complete#033[00m
Nov 29 02:59:57 np0005539564 nova_compute[226295]: 2025-11-29 07:59:57.902 226310 INFO nova.compute.manager [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Took 5.36 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:59:57 np0005539564 nova_compute[226295]: 2025-11-29 07:59:57.905 226310 DEBUG oslo.service.loopingcall [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:59:57 np0005539564 nova_compute[226295]: 2025-11-29 07:59:57.906 226310 DEBUG nova.compute.manager [-] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:59:57 np0005539564 nova_compute[226295]: 2025-11-29 07:59:57.907 226310 DEBUG nova.network.neutron [-] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:59:57 np0005539564 nova_compute[226295]: 2025-11-29 07:59:57.993 226310 DEBUG nova.network.neutron [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Successfully created port: aa9c9321-95f5-459f-816a-f038233bdb6f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:59:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e233 e233: 3 total, 3 up, 3 in
Nov 29 02:59:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:58 np0005539564 nova_compute[226295]: 2025-11-29 07:59:58.512 226310 DEBUG nova.compute.manager [req-666f2d95-2f8f-44fb-ac27-5786075fbd77 req-293c13c0-7ddc-4b4f-8de3-8631d27586d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Received event network-vif-unplugged-da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:59:58 np0005539564 nova_compute[226295]: 2025-11-29 07:59:58.513 226310 DEBUG oslo_concurrency.lockutils [req-666f2d95-2f8f-44fb-ac27-5786075fbd77 req-293c13c0-7ddc-4b4f-8de3-8631d27586d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:58 np0005539564 nova_compute[226295]: 2025-11-29 07:59:58.513 226310 DEBUG oslo_concurrency.lockutils [req-666f2d95-2f8f-44fb-ac27-5786075fbd77 req-293c13c0-7ddc-4b4f-8de3-8631d27586d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:58 np0005539564 nova_compute[226295]: 2025-11-29 07:59:58.514 226310 DEBUG oslo_concurrency.lockutils [req-666f2d95-2f8f-44fb-ac27-5786075fbd77 req-293c13c0-7ddc-4b4f-8de3-8631d27586d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:58 np0005539564 nova_compute[226295]: 2025-11-29 07:59:58.514 226310 DEBUG nova.compute.manager [req-666f2d95-2f8f-44fb-ac27-5786075fbd77 req-293c13c0-7ddc-4b4f-8de3-8631d27586d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] No waiting events found dispatching network-vif-unplugged-da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:59:58 np0005539564 nova_compute[226295]: 2025-11-29 07:59:58.515 226310 DEBUG nova.compute.manager [req-666f2d95-2f8f-44fb-ac27-5786075fbd77 req-293c13c0-7ddc-4b4f-8de3-8631d27586d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Received event network-vif-unplugged-da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:59:58 np0005539564 nova_compute[226295]: 2025-11-29 07:59:58.515 226310 DEBUG nova.compute.manager [req-666f2d95-2f8f-44fb-ac27-5786075fbd77 req-293c13c0-7ddc-4b4f-8de3-8631d27586d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Received event network-vif-plugged-da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:59:58 np0005539564 nova_compute[226295]: 2025-11-29 07:59:58.516 226310 DEBUG oslo_concurrency.lockutils [req-666f2d95-2f8f-44fb-ac27-5786075fbd77 req-293c13c0-7ddc-4b4f-8de3-8631d27586d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:58 np0005539564 nova_compute[226295]: 2025-11-29 07:59:58.516 226310 DEBUG oslo_concurrency.lockutils [req-666f2d95-2f8f-44fb-ac27-5786075fbd77 req-293c13c0-7ddc-4b4f-8de3-8631d27586d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:58 np0005539564 nova_compute[226295]: 2025-11-29 07:59:58.516 226310 DEBUG oslo_concurrency.lockutils [req-666f2d95-2f8f-44fb-ac27-5786075fbd77 req-293c13c0-7ddc-4b4f-8de3-8631d27586d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5e56bdc6-e188-4475-a5b5-41dec34857ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:58 np0005539564 nova_compute[226295]: 2025-11-29 07:59:58.517 226310 DEBUG nova.compute.manager [req-666f2d95-2f8f-44fb-ac27-5786075fbd77 req-293c13c0-7ddc-4b4f-8de3-8631d27586d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] No waiting events found dispatching network-vif-plugged-da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:59:58 np0005539564 nova_compute[226295]: 2025-11-29 07:59:58.517 226310 WARNING nova.compute.manager [req-666f2d95-2f8f-44fb-ac27-5786075fbd77 req-293c13c0-7ddc-4b4f-8de3-8631d27586d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Received unexpected event network-vif-plugged-da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb for instance with vm_state active and task_state deleting.#033[00m
Nov 29 02:59:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:58.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 02:59:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:58.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:59 np0005539564 nova_compute[226295]: 2025-11-29 07:59:59.083 226310 DEBUG nova.network.neutron [-] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:59:59 np0005539564 nova_compute[226295]: 2025-11-29 07:59:59.115 226310 INFO nova.compute.manager [-] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Took 1.21 seconds to deallocate network for instance.#033[00m
Nov 29 02:59:59 np0005539564 nova_compute[226295]: 2025-11-29 07:59:59.184 226310 DEBUG oslo_concurrency.lockutils [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:59 np0005539564 nova_compute[226295]: 2025-11-29 07:59:59.184 226310 DEBUG oslo_concurrency.lockutils [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:59 np0005539564 nova_compute[226295]: 2025-11-29 07:59:59.215 226310 DEBUG nova.compute.manager [req-efc65946-2522-42ab-8ea6-2215c2efb4e5 req-9ab53195-88c9-4f1c-8695-9d15afe3eedd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Received event network-vif-deleted-da41dfe7-58cc-4f6b-a0ca-f4793d86fdfb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:59:59 np0005539564 nova_compute[226295]: 2025-11-29 07:59:59.282 226310 DEBUG oslo_concurrency.processutils [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:59:59 np0005539564 nova_compute[226295]: 2025-11-29 07:59:59.514 226310 DEBUG nova.network.neutron [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Successfully updated port: aa9c9321-95f5-459f-816a-f038233bdb6f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:59:59 np0005539564 nova_compute[226295]: 2025-11-29 07:59:59.572 226310 DEBUG oslo_concurrency.lockutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "refresh_cache-3886780f-7115-4500-9cdd-6e5aae5d95f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:59:59 np0005539564 nova_compute[226295]: 2025-11-29 07:59:59.573 226310 DEBUG oslo_concurrency.lockutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquired lock "refresh_cache-3886780f-7115-4500-9cdd-6e5aae5d95f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:59:59 np0005539564 nova_compute[226295]: 2025-11-29 07:59:59.573 226310 DEBUG nova.network.neutron [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:59:59 np0005539564 nova_compute[226295]: 2025-11-29 07:59:59.628 226310 DEBUG nova.compute.manager [req-9a163cd5-b4a6-4f0b-a757-47823d332826 req-305adf94-ffc1-4d61-8268-76f0921e57db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Received event network-changed-aa9c9321-95f5-459f-816a-f038233bdb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:59:59 np0005539564 nova_compute[226295]: 2025-11-29 07:59:59.629 226310 DEBUG nova.compute.manager [req-9a163cd5-b4a6-4f0b-a757-47823d332826 req-305adf94-ffc1-4d61-8268-76f0921e57db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Refreshing instance network info cache due to event network-changed-aa9c9321-95f5-459f-816a-f038233bdb6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:59:59 np0005539564 nova_compute[226295]: 2025-11-29 07:59:59.630 226310 DEBUG oslo_concurrency.lockutils [req-9a163cd5-b4a6-4f0b-a757-47823d332826 req-305adf94-ffc1-4d61-8268-76f0921e57db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-3886780f-7115-4500-9cdd-6e5aae5d95f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:59:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:59:59 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2478403709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:59:59 np0005539564 nova_compute[226295]: 2025-11-29 07:59:59.808 226310 DEBUG oslo_concurrency.processutils [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:59:59 np0005539564 nova_compute[226295]: 2025-11-29 07:59:59.815 226310 DEBUG nova.compute.provider_tree [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:59:59 np0005539564 nova_compute[226295]: 2025-11-29 07:59:59.832 226310 DEBUG nova.scheduler.client.report [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:59:59 np0005539564 nova_compute[226295]: 2025-11-29 07:59:59.869 226310 DEBUG oslo_concurrency.lockutils [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:59 np0005539564 nova_compute[226295]: 2025-11-29 07:59:59.911 226310 INFO nova.scheduler.client.report [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Deleted allocations for instance 5e56bdc6-e188-4475-a5b5-41dec34857ee#033[00m
Nov 29 02:59:59 np0005539564 nova_compute[226295]: 2025-11-29 07:59:59.955 226310 DEBUG nova.network.neutron [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:59:59 np0005539564 nova_compute[226295]: 2025-11-29 07:59:59.995 226310 DEBUG oslo_concurrency.lockutils [None req-acb4cb6f-6621-4b0c-98af-58e06bd828c4 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "5e56bdc6-e188-4475-a5b5-41dec34857ee" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.460s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:00 np0005539564 ceph-mon[81769]: overall HEALTH_OK
Nov 29 03:00:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:00.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:00.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:01 np0005539564 nova_compute[226295]: 2025-11-29 08:00:01.111 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:01 np0005539564 nova_compute[226295]: 2025-11-29 08:00:01.433 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.035 226310 DEBUG nova.network.neutron [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Updating instance_info_cache with network_info: [{"id": "aa9c9321-95f5-459f-816a-f038233bdb6f", "address": "fa:16:3e:bc:15:7d", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9c9321-95", "ovs_interfaceid": "aa9c9321-95f5-459f-816a-f038233bdb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.065 226310 DEBUG oslo_concurrency.lockutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Releasing lock "refresh_cache-3886780f-7115-4500-9cdd-6e5aae5d95f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.066 226310 DEBUG nova.compute.manager [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Instance network_info: |[{"id": "aa9c9321-95f5-459f-816a-f038233bdb6f", "address": "fa:16:3e:bc:15:7d", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9c9321-95", "ovs_interfaceid": "aa9c9321-95f5-459f-816a-f038233bdb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.067 226310 DEBUG oslo_concurrency.lockutils [req-9a163cd5-b4a6-4f0b-a757-47823d332826 req-305adf94-ffc1-4d61-8268-76f0921e57db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-3886780f-7115-4500-9cdd-6e5aae5d95f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.067 226310 DEBUG nova.network.neutron [req-9a163cd5-b4a6-4f0b-a757-47823d332826 req-305adf94-ffc1-4d61-8268-76f0921e57db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Refreshing network info cache for port aa9c9321-95f5-459f-816a-f038233bdb6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.071 226310 DEBUG nova.virt.libvirt.driver [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Start _get_guest_xml network_info=[{"id": "aa9c9321-95f5-459f-816a-f038233bdb6f", "address": "fa:16:3e:bc:15:7d", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9c9321-95", "ovs_interfaceid": "aa9c9321-95f5-459f-816a-f038233bdb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.078 226310 WARNING nova.virt.libvirt.driver [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.084 226310 DEBUG nova.virt.libvirt.host [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.086 226310 DEBUG nova.virt.libvirt.host [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.113 226310 DEBUG nova.virt.libvirt.host [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.114 226310 DEBUG nova.virt.libvirt.host [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.116 226310 DEBUG nova.virt.libvirt.driver [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.116 226310 DEBUG nova.virt.hardware [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.117 226310 DEBUG nova.virt.hardware [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.117 226310 DEBUG nova.virt.hardware [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.117 226310 DEBUG nova.virt.hardware [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.117 226310 DEBUG nova.virt.hardware [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.118 226310 DEBUG nova.virt.hardware [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.118 226310 DEBUG nova.virt.hardware [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.118 226310 DEBUG nova.virt.hardware [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.118 226310 DEBUG nova.virt.hardware [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.119 226310 DEBUG nova.virt.hardware [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.119 226310 DEBUG nova.virt.hardware [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.124 226310 DEBUG oslo_concurrency.processutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.246 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:00:02 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1426721546' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.576 226310 DEBUG oslo_concurrency.processutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.608 226310 DEBUG nova.storage.rbd_utils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 3886780f-7115-4500-9cdd-6e5aae5d95f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:02 np0005539564 nova_compute[226295]: 2025-11-29 08:00:02.614 226310 DEBUG oslo_concurrency.processutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:00:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:02.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:00:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:02.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:00:03 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/911836961' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.065 226310 DEBUG oslo_concurrency.processutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.067 226310 DEBUG nova.virt.libvirt.vif [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:59:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-476331396',display_name='tempest-ImagesTestJSON-server-476331396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-476331396',id=58,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d8c5b7e3ca74bc1880eb616b04711f7',ramdisk_id='',reservation_id='r-8xzl5l41',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-911260095',owner_user_name='tempest-ImagesTestJSON-911260095-project-member'},tags=TagL
ist,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:59:56Z,user_data=None,user_id='f7d59bea260d4752aa29379967636c0b',uuid=3886780f-7115-4500-9cdd-6e5aae5d95f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa9c9321-95f5-459f-816a-f038233bdb6f", "address": "fa:16:3e:bc:15:7d", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9c9321-95", "ovs_interfaceid": "aa9c9321-95f5-459f-816a-f038233bdb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.067 226310 DEBUG nova.network.os_vif_util [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converting VIF {"id": "aa9c9321-95f5-459f-816a-f038233bdb6f", "address": "fa:16:3e:bc:15:7d", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9c9321-95", "ovs_interfaceid": "aa9c9321-95f5-459f-816a-f038233bdb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.068 226310 DEBUG nova.network.os_vif_util [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:15:7d,bridge_name='br-int',has_traffic_filtering=True,id=aa9c9321-95f5-459f-816a-f038233bdb6f,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa9c9321-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.069 226310 DEBUG nova.objects.instance [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3886780f-7115-4500-9cdd-6e5aae5d95f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.115 226310 DEBUG nova.virt.libvirt.driver [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:00:03 np0005539564 nova_compute[226295]:  <uuid>3886780f-7115-4500-9cdd-6e5aae5d95f9</uuid>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:  <name>instance-0000003a</name>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <nova:name>tempest-ImagesTestJSON-server-476331396</nova:name>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:00:02</nova:creationTime>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:00:03 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:        <nova:user uuid="f7d59bea260d4752aa29379967636c0b">tempest-ImagesTestJSON-911260095-project-member</nova:user>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:        <nova:project uuid="4d8c5b7e3ca74bc1880eb616b04711f7">tempest-ImagesTestJSON-911260095</nova:project>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:        <nova:port uuid="aa9c9321-95f5-459f-816a-f038233bdb6f">
Nov 29 03:00:03 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <entry name="serial">3886780f-7115-4500-9cdd-6e5aae5d95f9</entry>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <entry name="uuid">3886780f-7115-4500-9cdd-6e5aae5d95f9</entry>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/3886780f-7115-4500-9cdd-6e5aae5d95f9_disk">
Nov 29 03:00:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:00:03 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/3886780f-7115-4500-9cdd-6e5aae5d95f9_disk.config">
Nov 29 03:00:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:00:03 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:bc:15:7d"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <target dev="tapaa9c9321-95"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/3886780f-7115-4500-9cdd-6e5aae5d95f9/console.log" append="off"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:00:03 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:00:03 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:00:03 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:00:03 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.117 226310 DEBUG nova.compute.manager [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Preparing to wait for external event network-vif-plugged-aa9c9321-95f5-459f-816a-f038233bdb6f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.118 226310 DEBUG oslo_concurrency.lockutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "3886780f-7115-4500-9cdd-6e5aae5d95f9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.119 226310 DEBUG oslo_concurrency.lockutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "3886780f-7115-4500-9cdd-6e5aae5d95f9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.120 226310 DEBUG oslo_concurrency.lockutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "3886780f-7115-4500-9cdd-6e5aae5d95f9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.121 226310 DEBUG nova.virt.libvirt.vif [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:59:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-476331396',display_name='tempest-ImagesTestJSON-server-476331396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-476331396',id=58,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d8c5b7e3ca74bc1880eb616b04711f7',ramdisk_id='',reservation_id='r-8xzl5l41',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-911260095',owner_user_name='tempest-ImagesTestJSON-911260095-project-member'}
,tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:59:56Z,user_data=None,user_id='f7d59bea260d4752aa29379967636c0b',uuid=3886780f-7115-4500-9cdd-6e5aae5d95f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa9c9321-95f5-459f-816a-f038233bdb6f", "address": "fa:16:3e:bc:15:7d", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9c9321-95", "ovs_interfaceid": "aa9c9321-95f5-459f-816a-f038233bdb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.122 226310 DEBUG nova.network.os_vif_util [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converting VIF {"id": "aa9c9321-95f5-459f-816a-f038233bdb6f", "address": "fa:16:3e:bc:15:7d", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9c9321-95", "ovs_interfaceid": "aa9c9321-95f5-459f-816a-f038233bdb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.123 226310 DEBUG nova.network.os_vif_util [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:15:7d,bridge_name='br-int',has_traffic_filtering=True,id=aa9c9321-95f5-459f-816a-f038233bdb6f,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa9c9321-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.124 226310 DEBUG os_vif [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:15:7d,bridge_name='br-int',has_traffic_filtering=True,id=aa9c9321-95f5-459f-816a-f038233bdb6f,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa9c9321-95') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.125 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.126 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.127 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.131 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.132 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa9c9321-95, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.133 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaa9c9321-95, col_values=(('external_ids', {'iface-id': 'aa9c9321-95f5-459f-816a-f038233bdb6f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bc:15:7d', 'vm-uuid': '3886780f-7115-4500-9cdd-6e5aae5d95f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:03 np0005539564 NetworkManager[48997]: <info>  [1764403203.1370] manager: (tapaa9c9321-95): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.138 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.144 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.146 226310 INFO os_vif [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:15:7d,bridge_name='br-int',has_traffic_filtering=True,id=aa9c9321-95f5-459f-816a-f038233bdb6f,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa9c9321-95')#033[00m
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.229 226310 DEBUG nova.virt.libvirt.driver [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.230 226310 DEBUG nova.virt.libvirt.driver [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.230 226310 DEBUG nova.virt.libvirt.driver [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] No VIF found with MAC fa:16:3e:bc:15:7d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.230 226310 INFO nova.virt.libvirt.driver [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Using config drive#033[00m
Nov 29 03:00:03 np0005539564 nova_compute[226295]: 2025-11-29 08:00:03.257 226310 DEBUG nova.storage.rbd_utils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 3886780f-7115-4500-9cdd-6e5aae5d95f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:03.710 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:03.711 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:03.711 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:04 np0005539564 nova_compute[226295]: 2025-11-29 08:00:04.400 226310 INFO nova.virt.libvirt.driver [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Creating config drive at /var/lib/nova/instances/3886780f-7115-4500-9cdd-6e5aae5d95f9/disk.config#033[00m
Nov 29 03:00:04 np0005539564 nova_compute[226295]: 2025-11-29 08:00:04.407 226310 DEBUG oslo_concurrency.processutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3886780f-7115-4500-9cdd-6e5aae5d95f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvpkkr3iz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:04 np0005539564 nova_compute[226295]: 2025-11-29 08:00:04.443 226310 DEBUG nova.network.neutron [req-9a163cd5-b4a6-4f0b-a757-47823d332826 req-305adf94-ffc1-4d61-8268-76f0921e57db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Updated VIF entry in instance network info cache for port aa9c9321-95f5-459f-816a-f038233bdb6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:00:04 np0005539564 nova_compute[226295]: 2025-11-29 08:00:04.446 226310 DEBUG nova.network.neutron [req-9a163cd5-b4a6-4f0b-a757-47823d332826 req-305adf94-ffc1-4d61-8268-76f0921e57db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Updating instance_info_cache with network_info: [{"id": "aa9c9321-95f5-459f-816a-f038233bdb6f", "address": "fa:16:3e:bc:15:7d", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9c9321-95", "ovs_interfaceid": "aa9c9321-95f5-459f-816a-f038233bdb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:04 np0005539564 nova_compute[226295]: 2025-11-29 08:00:04.549 226310 DEBUG oslo_concurrency.processutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3886780f-7115-4500-9cdd-6e5aae5d95f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvpkkr3iz" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:04 np0005539564 nova_compute[226295]: 2025-11-29 08:00:04.580 226310 DEBUG nova.storage.rbd_utils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 3886780f-7115-4500-9cdd-6e5aae5d95f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:04 np0005539564 nova_compute[226295]: 2025-11-29 08:00:04.585 226310 DEBUG oslo_concurrency.processutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3886780f-7115-4500-9cdd-6e5aae5d95f9/disk.config 3886780f-7115-4500-9cdd-6e5aae5d95f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:04 np0005539564 nova_compute[226295]: 2025-11-29 08:00:04.621 226310 DEBUG oslo_concurrency.lockutils [req-9a163cd5-b4a6-4f0b-a757-47823d332826 req-305adf94-ffc1-4d61-8268-76f0921e57db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-3886780f-7115-4500-9cdd-6e5aae5d95f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:00:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:00:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:04.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:00:04 np0005539564 nova_compute[226295]: 2025-11-29 08:00:04.777 226310 DEBUG oslo_concurrency.processutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3886780f-7115-4500-9cdd-6e5aae5d95f9/disk.config 3886780f-7115-4500-9cdd-6e5aae5d95f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:04 np0005539564 nova_compute[226295]: 2025-11-29 08:00:04.778 226310 INFO nova.virt.libvirt.driver [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Deleting local config drive /var/lib/nova/instances/3886780f-7115-4500-9cdd-6e5aae5d95f9/disk.config because it was imported into RBD.#033[00m
Nov 29 03:00:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:04.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:04 np0005539564 kernel: tapaa9c9321-95: entered promiscuous mode
Nov 29 03:00:04 np0005539564 NetworkManager[48997]: <info>  [1764403204.8654] manager: (tapaa9c9321-95): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Nov 29 03:00:04 np0005539564 ovn_controller[130591]: 2025-11-29T08:00:04Z|00162|binding|INFO|Claiming lport aa9c9321-95f5-459f-816a-f038233bdb6f for this chassis.
Nov 29 03:00:04 np0005539564 ovn_controller[130591]: 2025-11-29T08:00:04Z|00163|binding|INFO|aa9c9321-95f5-459f-816a-f038233bdb6f: Claiming fa:16:3e:bc:15:7d 10.100.0.12
Nov 29 03:00:04 np0005539564 nova_compute[226295]: 2025-11-29 08:00:04.868 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:04.880 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:15:7d 10.100.0.12'], port_security=['fa:16:3e:bc:15:7d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3886780f-7115-4500-9cdd-6e5aae5d95f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7471f45a-da60-4567-a888-2a87ff526609', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d8c5b7e3ca74bc1880eb616b04711f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'baf6db0c-e075-4519-aa02-9bbd4c984eba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bee78a1-1254-4dfe-ba24-259feeb5ade5, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=aa9c9321-95f5-459f-816a-f038233bdb6f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:00:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:04.882 139780 INFO neutron.agent.ovn.metadata.agent [-] Port aa9c9321-95f5-459f-816a-f038233bdb6f in datapath 7471f45a-da60-4567-a888-2a87ff526609 bound to our chassis#033[00m
Nov 29 03:00:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:04.883 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7471f45a-da60-4567-a888-2a87ff526609#033[00m
Nov 29 03:00:04 np0005539564 ovn_controller[130591]: 2025-11-29T08:00:04Z|00164|binding|INFO|Setting lport aa9c9321-95f5-459f-816a-f038233bdb6f ovn-installed in OVS
Nov 29 03:00:04 np0005539564 ovn_controller[130591]: 2025-11-29T08:00:04Z|00165|binding|INFO|Setting lport aa9c9321-95f5-459f-816a-f038233bdb6f up in Southbound
Nov 29 03:00:04 np0005539564 nova_compute[226295]: 2025-11-29 08:00:04.897 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:04 np0005539564 systemd-udevd[248485]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:00:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:04.904 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8132ea48-3017-433b-87ff-7465a29e42b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:04.905 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7471f45a-d1 in ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:00:04 np0005539564 nova_compute[226295]: 2025-11-29 08:00:04.905 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:04.907 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7471f45a-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:00:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:04.907 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b0b89486-bd47-4a7e-9b5d-2d8637f3bea3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:04.908 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[192fc941-387d-48d9-88a0-75be0cf7c299]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:04 np0005539564 systemd-machined[190128]: New machine qemu-26-instance-0000003a.
Nov 29 03:00:04 np0005539564 NetworkManager[48997]: <info>  [1764403204.9166] device (tapaa9c9321-95): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:00:04 np0005539564 NetworkManager[48997]: <info>  [1764403204.9175] device (tapaa9c9321-95): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:00:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:04.921 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[e89a6405-58b1-4cc6-951b-75326eb2bfbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:04 np0005539564 systemd[1]: Started Virtual Machine qemu-26-instance-0000003a.
Nov 29 03:00:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:04.934 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3f13999a-1747-4272-8178-71a55a04b74e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:04.971 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[c423e949-786a-4fff-a3c2-bed01c3ed4b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:04.979 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[aabc2913-a940-4fcf-8844-4e64969a654d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:04 np0005539564 NetworkManager[48997]: <info>  [1764403204.9806] manager: (tap7471f45a-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/90)
Nov 29 03:00:04 np0005539564 systemd-udevd[248489]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:05.022 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[d3807300-137d-4c81-97f6-82b467726aae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:05.026 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[004d969f-6a8b-423d-a2c0-1089ed97319b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:05 np0005539564 NetworkManager[48997]: <info>  [1764403205.0580] device (tap7471f45a-d0): carrier: link connected
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:05.067 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[ca7548e7-0835-4c07-acab-f57662ded70a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:05.090 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[923f4c15-902f-408c-b254-c3be2841eb6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7471f45a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:d7:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617973, 'reachable_time': 41352, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248518, 'error': None, 'target': 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:05.107 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f36cf6b6-8254-41ac-b84e-2386ee7c16dc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6d:d764'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 617973, 'tstamp': 617973}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248519, 'error': None, 'target': 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.128 226310 DEBUG nova.compute.manager [req-468c2cb6-3247-4ba6-a81f-e67f8050f9d6 req-af5a05b3-eb40-4846-84e9-97177c730a56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Received event network-vif-plugged-aa9c9321-95f5-459f-816a-f038233bdb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.128 226310 DEBUG oslo_concurrency.lockutils [req-468c2cb6-3247-4ba6-a81f-e67f8050f9d6 req-af5a05b3-eb40-4846-84e9-97177c730a56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "3886780f-7115-4500-9cdd-6e5aae5d95f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.128 226310 DEBUG oslo_concurrency.lockutils [req-468c2cb6-3247-4ba6-a81f-e67f8050f9d6 req-af5a05b3-eb40-4846-84e9-97177c730a56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "3886780f-7115-4500-9cdd-6e5aae5d95f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.129 226310 DEBUG oslo_concurrency.lockutils [req-468c2cb6-3247-4ba6-a81f-e67f8050f9d6 req-af5a05b3-eb40-4846-84e9-97177c730a56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "3886780f-7115-4500-9cdd-6e5aae5d95f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.129 226310 DEBUG nova.compute.manager [req-468c2cb6-3247-4ba6-a81f-e67f8050f9d6 req-af5a05b3-eb40-4846-84e9-97177c730a56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Processing event network-vif-plugged-aa9c9321-95f5-459f-816a-f038233bdb6f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:05.130 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a55cfd13-902b-4e99-912b-afc0d5ac77f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7471f45a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:d7:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617973, 'reachable_time': 41352, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248520, 'error': None, 'target': 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:05.161 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[71ea9727-55c5-4541-9f5f-a8282561f0a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:05.216 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1be232e4-991d-43f8-8197-510580073fa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:05.218 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7471f45a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:05.219 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:05.219 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7471f45a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.221 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:05 np0005539564 kernel: tap7471f45a-d0: entered promiscuous mode
Nov 29 03:00:05 np0005539564 NetworkManager[48997]: <info>  [1764403205.2235] manager: (tap7471f45a-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:05.225 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7471f45a-d0, col_values=(('external_ids', {'iface-id': '06264566-5ffe-42a3-ad44-b3f54b7d79bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:05 np0005539564 ovn_controller[130591]: 2025-11-29T08:00:05Z|00166|binding|INFO|Releasing lport 06264566-5ffe-42a3-ad44-b3f54b7d79bb from this chassis (sb_readonly=0)
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.227 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.241 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:05.242 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7471f45a-da60-4567-a888-2a87ff526609.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7471f45a-da60-4567-a888-2a87ff526609.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:05.244 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4957332c-b53d-4917-963a-2d2935cda867]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:05.244 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-7471f45a-da60-4567-a888-2a87ff526609
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/7471f45a-da60-4567-a888-2a87ff526609.pid.haproxy
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 7471f45a-da60-4567-a888-2a87ff526609
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:00:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:05.245 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'env', 'PROCESS_TAG=haproxy-7471f45a-da60-4567-a888-2a87ff526609', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7471f45a-da60-4567-a888-2a87ff526609.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.415 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403205.414813, 3886780f-7115-4500-9cdd-6e5aae5d95f9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.417 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] VM Started (Lifecycle Event)#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.419 226310 DEBUG nova.compute.manager [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.424 226310 DEBUG nova.virt.libvirt.driver [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.430 226310 INFO nova.virt.libvirt.driver [-] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Instance spawned successfully.#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.430 226310 DEBUG nova.virt.libvirt.driver [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.451 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.458 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.472 226310 DEBUG nova.virt.libvirt.driver [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.473 226310 DEBUG nova.virt.libvirt.driver [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.473 226310 DEBUG nova.virt.libvirt.driver [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.474 226310 DEBUG nova.virt.libvirt.driver [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.474 226310 DEBUG nova.virt.libvirt.driver [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.475 226310 DEBUG nova.virt.libvirt.driver [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.479 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.479 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403205.416143, 3886780f-7115-4500-9cdd-6e5aae5d95f9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.480 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.511 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.519 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403205.422656, 3886780f-7115-4500-9cdd-6e5aae5d95f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.519 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.539 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.543 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.550 226310 INFO nova.compute.manager [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Took 9.03 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.551 226310 DEBUG nova.compute.manager [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.576 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.623 226310 INFO nova.compute.manager [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Took 10.68 seconds to build instance.#033[00m
Nov 29 03:00:05 np0005539564 nova_compute[226295]: 2025-11-29 08:00:05.639 226310 DEBUG oslo_concurrency.lockutils [None req-b8d13f58-5f0d-4aa4-bebc-0ea75a478f5b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "3886780f-7115-4500-9cdd-6e5aae5d95f9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:05 np0005539564 podman[248595]: 2025-11-29 08:00:05.705103571 +0000 UTC m=+0.091763483 container create c12a1b95db055ccec2389cffe141a3612668bdf13a0a5edf8436f6c7c6773a60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:00:05 np0005539564 podman[248595]: 2025-11-29 08:00:05.6562526 +0000 UTC m=+0.042912572 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:00:05 np0005539564 systemd[1]: Started libpod-conmon-c12a1b95db055ccec2389cffe141a3612668bdf13a0a5edf8436f6c7c6773a60.scope.
Nov 29 03:00:05 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:00:05 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f22c8af08a7178251869a6253a0a1b4e8e5938ecfb312f6e9c84e35b84ad2d6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:00:05 np0005539564 podman[248595]: 2025-11-29 08:00:05.812137857 +0000 UTC m=+0.198797829 container init c12a1b95db055ccec2389cffe141a3612668bdf13a0a5edf8436f6c7c6773a60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:00:05 np0005539564 podman[248595]: 2025-11-29 08:00:05.818271852 +0000 UTC m=+0.204931774 container start c12a1b95db055ccec2389cffe141a3612668bdf13a0a5edf8436f6c7c6773a60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:00:05 np0005539564 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[248609]: [NOTICE]   (248613) : New worker (248615) forked
Nov 29 03:00:05 np0005539564 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[248609]: [NOTICE]   (248613) : Loading success.
Nov 29 03:00:06 np0005539564 nova_compute[226295]: 2025-11-29 08:00:06.114 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:06.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:06.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:07 np0005539564 nova_compute[226295]: 2025-11-29 08:00:07.243 226310 DEBUG nova.compute.manager [req-c1bfc908-eaf9-4921-a555-5203b7131206 req-0b7b18b8-a47e-4d5c-8905-9d1c50a5306f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Received event network-vif-plugged-aa9c9321-95f5-459f-816a-f038233bdb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:07 np0005539564 nova_compute[226295]: 2025-11-29 08:00:07.246 226310 DEBUG oslo_concurrency.lockutils [req-c1bfc908-eaf9-4921-a555-5203b7131206 req-0b7b18b8-a47e-4d5c-8905-9d1c50a5306f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "3886780f-7115-4500-9cdd-6e5aae5d95f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:07 np0005539564 nova_compute[226295]: 2025-11-29 08:00:07.246 226310 DEBUG oslo_concurrency.lockutils [req-c1bfc908-eaf9-4921-a555-5203b7131206 req-0b7b18b8-a47e-4d5c-8905-9d1c50a5306f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "3886780f-7115-4500-9cdd-6e5aae5d95f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:07 np0005539564 nova_compute[226295]: 2025-11-29 08:00:07.246 226310 DEBUG oslo_concurrency.lockutils [req-c1bfc908-eaf9-4921-a555-5203b7131206 req-0b7b18b8-a47e-4d5c-8905-9d1c50a5306f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "3886780f-7115-4500-9cdd-6e5aae5d95f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:07 np0005539564 nova_compute[226295]: 2025-11-29 08:00:07.246 226310 DEBUG nova.compute.manager [req-c1bfc908-eaf9-4921-a555-5203b7131206 req-0b7b18b8-a47e-4d5c-8905-9d1c50a5306f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] No waiting events found dispatching network-vif-plugged-aa9c9321-95f5-459f-816a-f038233bdb6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:00:07 np0005539564 nova_compute[226295]: 2025-11-29 08:00:07.246 226310 WARNING nova.compute.manager [req-c1bfc908-eaf9-4921-a555-5203b7131206 req-0b7b18b8-a47e-4d5c-8905-9d1c50a5306f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Received unexpected event network-vif-plugged-aa9c9321-95f5-459f-816a-f038233bdb6f for instance with vm_state active and task_state None.#033[00m
Nov 29 03:00:08 np0005539564 nova_compute[226295]: 2025-11-29 08:00:08.141 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:08.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:08.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:08 np0005539564 nova_compute[226295]: 2025-11-29 08:00:08.964 226310 DEBUG nova.objects.instance [None req-4a44c910-65fd-4380-9346-859a3b529f59 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3886780f-7115-4500-9cdd-6e5aae5d95f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:00:08 np0005539564 nova_compute[226295]: 2025-11-29 08:00:08.997 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403208.9970329, 3886780f-7115-4500-9cdd-6e5aae5d95f9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:00:08 np0005539564 nova_compute[226295]: 2025-11-29 08:00:08.997 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:00:09 np0005539564 nova_compute[226295]: 2025-11-29 08:00:09.023 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:09 np0005539564 nova_compute[226295]: 2025-11-29 08:00:09.026 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:00:09 np0005539564 nova_compute[226295]: 2025-11-29 08:00:09.070 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 29 03:00:09 np0005539564 podman[248628]: 2025-11-29 08:00:09.502822427 +0000 UTC m=+0.058657838 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:00:09 np0005539564 podman[248627]: 2025-11-29 08:00:09.527470233 +0000 UTC m=+0.080090927 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:00:09 np0005539564 podman[248629]: 2025-11-29 08:00:09.542568032 +0000 UTC m=+0.081867375 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:00:10 np0005539564 kernel: tapaa9c9321-95 (unregistering): left promiscuous mode
Nov 29 03:00:10 np0005539564 NetworkManager[48997]: <info>  [1764403210.0910] device (tapaa9c9321-95): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:00:10 np0005539564 ovn_controller[130591]: 2025-11-29T08:00:10Z|00167|binding|INFO|Releasing lport aa9c9321-95f5-459f-816a-f038233bdb6f from this chassis (sb_readonly=0)
Nov 29 03:00:10 np0005539564 ovn_controller[130591]: 2025-11-29T08:00:10Z|00168|binding|INFO|Setting lport aa9c9321-95f5-459f-816a-f038233bdb6f down in Southbound
Nov 29 03:00:10 np0005539564 ovn_controller[130591]: 2025-11-29T08:00:10Z|00169|binding|INFO|Removing iface tapaa9c9321-95 ovn-installed in OVS
Nov 29 03:00:10 np0005539564 nova_compute[226295]: 2025-11-29 08:00:10.132 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:10.138 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:15:7d 10.100.0.12'], port_security=['fa:16:3e:bc:15:7d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3886780f-7115-4500-9cdd-6e5aae5d95f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7471f45a-da60-4567-a888-2a87ff526609', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d8c5b7e3ca74bc1880eb616b04711f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'baf6db0c-e075-4519-aa02-9bbd4c984eba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bee78a1-1254-4dfe-ba24-259feeb5ade5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=aa9c9321-95f5-459f-816a-f038233bdb6f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:00:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:10.139 139780 INFO neutron.agent.ovn.metadata.agent [-] Port aa9c9321-95f5-459f-816a-f038233bdb6f in datapath 7471f45a-da60-4567-a888-2a87ff526609 unbound from our chassis#033[00m
Nov 29 03:00:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:10.141 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7471f45a-da60-4567-a888-2a87ff526609, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:00:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:10.142 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[46ccc403-d613-4038-b03e-5445b76f360e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:10.143 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 namespace which is not needed anymore#033[00m
Nov 29 03:00:10 np0005539564 nova_compute[226295]: 2025-11-29 08:00:10.152 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:10 np0005539564 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Nov 29 03:00:10 np0005539564 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000003a.scope: Consumed 4.259s CPU time.
Nov 29 03:00:10 np0005539564 systemd-machined[190128]: Machine qemu-26-instance-0000003a terminated.
Nov 29 03:00:10 np0005539564 nova_compute[226295]: 2025-11-29 08:00:10.265 226310 DEBUG nova.compute.manager [None req-4a44c910-65fd-4380-9346-859a3b529f59 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:10 np0005539564 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[248609]: [NOTICE]   (248613) : haproxy version is 2.8.14-c23fe91
Nov 29 03:00:10 np0005539564 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[248609]: [NOTICE]   (248613) : path to executable is /usr/sbin/haproxy
Nov 29 03:00:10 np0005539564 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[248609]: [WARNING]  (248613) : Exiting Master process...
Nov 29 03:00:10 np0005539564 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[248609]: [ALERT]    (248613) : Current worker (248615) exited with code 143 (Terminated)
Nov 29 03:00:10 np0005539564 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[248609]: [WARNING]  (248613) : All workers exited. Exiting... (0)
Nov 29 03:00:10 np0005539564 systemd[1]: libpod-c12a1b95db055ccec2389cffe141a3612668bdf13a0a5edf8436f6c7c6773a60.scope: Deactivated successfully.
Nov 29 03:00:10 np0005539564 podman[248709]: 2025-11-29 08:00:10.285826188 +0000 UTC m=+0.055142322 container died c12a1b95db055ccec2389cffe141a3612668bdf13a0a5edf8436f6c7c6773a60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:00:10 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c12a1b95db055ccec2389cffe141a3612668bdf13a0a5edf8436f6c7c6773a60-userdata-shm.mount: Deactivated successfully.
Nov 29 03:00:10 np0005539564 systemd[1]: var-lib-containers-storage-overlay-2f22c8af08a7178251869a6253a0a1b4e8e5938ecfb312f6e9c84e35b84ad2d6-merged.mount: Deactivated successfully.
Nov 29 03:00:10 np0005539564 podman[248709]: 2025-11-29 08:00:10.326374406 +0000 UTC m=+0.095690530 container cleanup c12a1b95db055ccec2389cffe141a3612668bdf13a0a5edf8436f6c7c6773a60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:00:10 np0005539564 systemd[1]: libpod-conmon-c12a1b95db055ccec2389cffe141a3612668bdf13a0a5edf8436f6c7c6773a60.scope: Deactivated successfully.
Nov 29 03:00:10 np0005539564 podman[248749]: 2025-11-29 08:00:10.409646707 +0000 UTC m=+0.053487997 container remove c12a1b95db055ccec2389cffe141a3612668bdf13a0a5edf8436f6c7c6773a60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:00:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:10.414 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[19730d38-6497-4dd9-9179-53a62744d90c]: (4, ('Sat Nov 29 08:00:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 (c12a1b95db055ccec2389cffe141a3612668bdf13a0a5edf8436f6c7c6773a60)\nc12a1b95db055ccec2389cffe141a3612668bdf13a0a5edf8436f6c7c6773a60\nSat Nov 29 08:00:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 (c12a1b95db055ccec2389cffe141a3612668bdf13a0a5edf8436f6c7c6773a60)\nc12a1b95db055ccec2389cffe141a3612668bdf13a0a5edf8436f6c7c6773a60\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:10.417 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc678d8-f7d8-47c1-bc3e-8c9a8439657f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:10.419 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7471f45a-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:10 np0005539564 nova_compute[226295]: 2025-11-29 08:00:10.420 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:10 np0005539564 kernel: tap7471f45a-d0: left promiscuous mode
Nov 29 03:00:10 np0005539564 nova_compute[226295]: 2025-11-29 08:00:10.438 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:10 np0005539564 nova_compute[226295]: 2025-11-29 08:00:10.440 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:10.442 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bc1509d0-9179-4e16-b809-b023827be116]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:10.460 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3ea20486-b2e8-479b-86d0-864384ef68f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:10.462 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[370dc6ff-97ca-42dc-a98b-9647e75a07f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:10.478 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[454c6a4e-4d6f-4be6-9e00-e2cc3aa3e057]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617964, 'reachable_time': 40811, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248768, 'error': None, 'target': 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:10.482 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:00:10 np0005539564 systemd[1]: run-netns-ovnmeta\x2d7471f45a\x2dda60\x2d4567\x2da888\x2d2a87ff526609.mount: Deactivated successfully.
Nov 29 03:00:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:10.482 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[165d4248-df37-4317-a686-0538446fbb0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:10 np0005539564 nova_compute[226295]: 2025-11-29 08:00:10.603 226310 DEBUG nova.compute.manager [req-e04bfb5c-c2a1-45b1-9145-8300059f9170 req-5e862ac1-f43b-4136-b313-8185799c6aa6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Received event network-vif-unplugged-aa9c9321-95f5-459f-816a-f038233bdb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:10 np0005539564 nova_compute[226295]: 2025-11-29 08:00:10.604 226310 DEBUG oslo_concurrency.lockutils [req-e04bfb5c-c2a1-45b1-9145-8300059f9170 req-5e862ac1-f43b-4136-b313-8185799c6aa6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "3886780f-7115-4500-9cdd-6e5aae5d95f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:10 np0005539564 nova_compute[226295]: 2025-11-29 08:00:10.605 226310 DEBUG oslo_concurrency.lockutils [req-e04bfb5c-c2a1-45b1-9145-8300059f9170 req-5e862ac1-f43b-4136-b313-8185799c6aa6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "3886780f-7115-4500-9cdd-6e5aae5d95f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:10 np0005539564 nova_compute[226295]: 2025-11-29 08:00:10.605 226310 DEBUG oslo_concurrency.lockutils [req-e04bfb5c-c2a1-45b1-9145-8300059f9170 req-5e862ac1-f43b-4136-b313-8185799c6aa6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "3886780f-7115-4500-9cdd-6e5aae5d95f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:10 np0005539564 nova_compute[226295]: 2025-11-29 08:00:10.605 226310 DEBUG nova.compute.manager [req-e04bfb5c-c2a1-45b1-9145-8300059f9170 req-5e862ac1-f43b-4136-b313-8185799c6aa6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] No waiting events found dispatching network-vif-unplugged-aa9c9321-95f5-459f-816a-f038233bdb6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:00:10 np0005539564 nova_compute[226295]: 2025-11-29 08:00:10.606 226310 WARNING nova.compute.manager [req-e04bfb5c-c2a1-45b1-9145-8300059f9170 req-5e862ac1-f43b-4136-b313-8185799c6aa6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Received unexpected event network-vif-unplugged-aa9c9321-95f5-459f-816a-f038233bdb6f for instance with vm_state suspended and task_state None.#033[00m
Nov 29 03:00:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:10.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:10.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:11 np0005539564 nova_compute[226295]: 2025-11-29 08:00:11.116 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:11 np0005539564 nova_compute[226295]: 2025-11-29 08:00:11.377 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403196.3757882, 5e56bdc6-e188-4475-a5b5-41dec34857ee => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:00:11 np0005539564 nova_compute[226295]: 2025-11-29 08:00:11.378 226310 INFO nova.compute.manager [-] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:00:11 np0005539564 nova_compute[226295]: 2025-11-29 08:00:11.401 226310 DEBUG nova.compute.manager [None req-44ae6e47-7294-4a03-bf92-343d5ad87649 - - - - - -] [instance: 5e56bdc6-e188-4475-a5b5-41dec34857ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:11 np0005539564 nova_compute[226295]: 2025-11-29 08:00:11.635 226310 DEBUG nova.compute.manager [None req-1502e2f9-3f39-4c44-89ea-b73552d48dfb f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:11 np0005539564 nova_compute[226295]: 2025-11-29 08:00:11.692 226310 INFO nova.compute.manager [None req-1502e2f9-3f39-4c44-89ea-b73552d48dfb f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] instance snapshotting#033[00m
Nov 29 03:00:11 np0005539564 nova_compute[226295]: 2025-11-29 08:00:11.693 226310 WARNING nova.compute.manager [None req-1502e2f9-3f39-4c44-89ea-b73552d48dfb f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] trying to snapshot a non-running instance: (state: 4 expected: 1)#033[00m
Nov 29 03:00:12 np0005539564 nova_compute[226295]: 2025-11-29 08:00:12.060 226310 INFO nova.virt.libvirt.driver [None req-1502e2f9-3f39-4c44-89ea-b73552d48dfb f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Beginning cold snapshot process#033[00m
Nov 29 03:00:12 np0005539564 nova_compute[226295]: 2025-11-29 08:00:12.398 226310 DEBUG nova.virt.libvirt.imagebackend [None req-1502e2f9-3f39-4c44-89ea-b73552d48dfb f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:00:12 np0005539564 nova_compute[226295]: 2025-11-29 08:00:12.678 226310 DEBUG nova.storage.rbd_utils [None req-1502e2f9-3f39-4c44-89ea-b73552d48dfb f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] creating snapshot(777b6dda2517427c9f8d71a24674604a) on rbd image(3886780f-7115-4500-9cdd-6e5aae5d95f9_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:00:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:12.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:12 np0005539564 nova_compute[226295]: 2025-11-29 08:00:12.769 226310 DEBUG nova.compute.manager [req-01cf34ec-27ad-40db-bccd-4e803aba5d36 req-e6f967d2-75e9-445f-88eb-7e1415fed7a9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Received event network-vif-plugged-aa9c9321-95f5-459f-816a-f038233bdb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:12 np0005539564 nova_compute[226295]: 2025-11-29 08:00:12.770 226310 DEBUG oslo_concurrency.lockutils [req-01cf34ec-27ad-40db-bccd-4e803aba5d36 req-e6f967d2-75e9-445f-88eb-7e1415fed7a9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "3886780f-7115-4500-9cdd-6e5aae5d95f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:12 np0005539564 nova_compute[226295]: 2025-11-29 08:00:12.770 226310 DEBUG oslo_concurrency.lockutils [req-01cf34ec-27ad-40db-bccd-4e803aba5d36 req-e6f967d2-75e9-445f-88eb-7e1415fed7a9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "3886780f-7115-4500-9cdd-6e5aae5d95f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:12 np0005539564 nova_compute[226295]: 2025-11-29 08:00:12.771 226310 DEBUG oslo_concurrency.lockutils [req-01cf34ec-27ad-40db-bccd-4e803aba5d36 req-e6f967d2-75e9-445f-88eb-7e1415fed7a9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "3886780f-7115-4500-9cdd-6e5aae5d95f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:12 np0005539564 nova_compute[226295]: 2025-11-29 08:00:12.771 226310 DEBUG nova.compute.manager [req-01cf34ec-27ad-40db-bccd-4e803aba5d36 req-e6f967d2-75e9-445f-88eb-7e1415fed7a9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] No waiting events found dispatching network-vif-plugged-aa9c9321-95f5-459f-816a-f038233bdb6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:00:12 np0005539564 nova_compute[226295]: 2025-11-29 08:00:12.772 226310 WARNING nova.compute.manager [req-01cf34ec-27ad-40db-bccd-4e803aba5d36 req-e6f967d2-75e9-445f-88eb-7e1415fed7a9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Received unexpected event network-vif-plugged-aa9c9321-95f5-459f-816a-f038233bdb6f for instance with vm_state suspended and task_state image_uploading.#033[00m
Nov 29 03:00:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:12.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e234 e234: 3 total, 3 up, 3 in
Nov 29 03:00:13 np0005539564 nova_compute[226295]: 2025-11-29 08:00:13.148 226310 DEBUG nova.storage.rbd_utils [None req-1502e2f9-3f39-4c44-89ea-b73552d48dfb f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] cloning vms/3886780f-7115-4500-9cdd-6e5aae5d95f9_disk@777b6dda2517427c9f8d71a24674604a to images/5db9d02f-2f54-48e1-952b-46fcc86d3d02 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:00:13 np0005539564 nova_compute[226295]: 2025-11-29 08:00:13.191 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:13 np0005539564 nova_compute[226295]: 2025-11-29 08:00:13.309 226310 DEBUG nova.storage.rbd_utils [None req-1502e2f9-3f39-4c44-89ea-b73552d48dfb f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] flattening images/5db9d02f-2f54-48e1-952b-46fcc86d3d02 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:00:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:13 np0005539564 nova_compute[226295]: 2025-11-29 08:00:13.850 226310 DEBUG nova.storage.rbd_utils [None req-1502e2f9-3f39-4c44-89ea-b73552d48dfb f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] removing snapshot(777b6dda2517427c9f8d71a24674604a) on rbd image(3886780f-7115-4500-9cdd-6e5aae5d95f9_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:00:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:14.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:14.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e235 e235: 3 total, 3 up, 3 in
Nov 29 03:00:15 np0005539564 nova_compute[226295]: 2025-11-29 08:00:15.547 226310 DEBUG nova.storage.rbd_utils [None req-1502e2f9-3f39-4c44-89ea-b73552d48dfb f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] creating snapshot(snap) on rbd image(5db9d02f-2f54-48e1-952b-46fcc86d3d02) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:00:16 np0005539564 nova_compute[226295]: 2025-11-29 08:00:16.119 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e236 e236: 3 total, 3 up, 3 in
Nov 29 03:00:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:16.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:00:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:16.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:00:18 np0005539564 nova_compute[226295]: 2025-11-29 08:00:18.193 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:18 np0005539564 nova_compute[226295]: 2025-11-29 08:00:18.391 226310 INFO nova.virt.libvirt.driver [None req-1502e2f9-3f39-4c44-89ea-b73552d48dfb f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Snapshot image upload complete#033[00m
Nov 29 03:00:18 np0005539564 nova_compute[226295]: 2025-11-29 08:00:18.392 226310 INFO nova.compute.manager [None req-1502e2f9-3f39-4c44-89ea-b73552d48dfb f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Took 6.70 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 29 03:00:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:18.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:00:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:18.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:00:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:20.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:00:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:20.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:00:21 np0005539564 nova_compute[226295]: 2025-11-29 08:00:21.121 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e237 e237: 3 total, 3 up, 3 in
Nov 29 03:00:21 np0005539564 nova_compute[226295]: 2025-11-29 08:00:21.931 226310 DEBUG oslo_concurrency.lockutils [None req-7938c4ca-6dc5-4c80-8982-f20da3116e42 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "3886780f-7115-4500-9cdd-6e5aae5d95f9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:21 np0005539564 nova_compute[226295]: 2025-11-29 08:00:21.932 226310 DEBUG oslo_concurrency.lockutils [None req-7938c4ca-6dc5-4c80-8982-f20da3116e42 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "3886780f-7115-4500-9cdd-6e5aae5d95f9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:21 np0005539564 nova_compute[226295]: 2025-11-29 08:00:21.932 226310 DEBUG oslo_concurrency.lockutils [None req-7938c4ca-6dc5-4c80-8982-f20da3116e42 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "3886780f-7115-4500-9cdd-6e5aae5d95f9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:21 np0005539564 nova_compute[226295]: 2025-11-29 08:00:21.933 226310 DEBUG oslo_concurrency.lockutils [None req-7938c4ca-6dc5-4c80-8982-f20da3116e42 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "3886780f-7115-4500-9cdd-6e5aae5d95f9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:21 np0005539564 nova_compute[226295]: 2025-11-29 08:00:21.933 226310 DEBUG oslo_concurrency.lockutils [None req-7938c4ca-6dc5-4c80-8982-f20da3116e42 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "3886780f-7115-4500-9cdd-6e5aae5d95f9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:21 np0005539564 nova_compute[226295]: 2025-11-29 08:00:21.934 226310 INFO nova.compute.manager [None req-7938c4ca-6dc5-4c80-8982-f20da3116e42 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Terminating instance#033[00m
Nov 29 03:00:21 np0005539564 nova_compute[226295]: 2025-11-29 08:00:21.936 226310 DEBUG nova.compute.manager [None req-7938c4ca-6dc5-4c80-8982-f20da3116e42 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:00:21 np0005539564 nova_compute[226295]: 2025-11-29 08:00:21.943 226310 INFO nova.virt.libvirt.driver [-] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Instance destroyed successfully.#033[00m
Nov 29 03:00:21 np0005539564 nova_compute[226295]: 2025-11-29 08:00:21.943 226310 DEBUG nova.objects.instance [None req-7938c4ca-6dc5-4c80-8982-f20da3116e42 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lazy-loading 'resources' on Instance uuid 3886780f-7115-4500-9cdd-6e5aae5d95f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:00:21 np0005539564 nova_compute[226295]: 2025-11-29 08:00:21.959 226310 DEBUG nova.virt.libvirt.vif [None req-7938c4ca-6dc5-4c80-8982-f20da3116e42 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:59:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-476331396',display_name='tempest-ImagesTestJSON-server-476331396',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-476331396',id=58,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:00:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='4d8c5b7e3ca74bc1880eb616b04711f7',ramdisk_id='',reservation_id='r-8xzl5l41',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ImagesTestJSON-911260095',owner_user_name='tempest-ImagesTestJSON-911260095-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:00:18Z,user_data=None,user_id='f7d59bea260d4752aa29379967636c0b',uuid=3886780f-7115-4500-9cdd-6e5aae5d95f9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "aa9c9321-95f5-459f-816a-f038233bdb6f", "address": "fa:16:3e:bc:15:7d", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9c9321-95", "ovs_interfaceid": "aa9c9321-95f5-459f-816a-f038233bdb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:00:21 np0005539564 nova_compute[226295]: 2025-11-29 08:00:21.960 226310 DEBUG nova.network.os_vif_util [None req-7938c4ca-6dc5-4c80-8982-f20da3116e42 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converting VIF {"id": "aa9c9321-95f5-459f-816a-f038233bdb6f", "address": "fa:16:3e:bc:15:7d", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9c9321-95", "ovs_interfaceid": "aa9c9321-95f5-459f-816a-f038233bdb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:00:21 np0005539564 nova_compute[226295]: 2025-11-29 08:00:21.961 226310 DEBUG nova.network.os_vif_util [None req-7938c4ca-6dc5-4c80-8982-f20da3116e42 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:15:7d,bridge_name='br-int',has_traffic_filtering=True,id=aa9c9321-95f5-459f-816a-f038233bdb6f,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa9c9321-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:00:21 np0005539564 nova_compute[226295]: 2025-11-29 08:00:21.961 226310 DEBUG os_vif [None req-7938c4ca-6dc5-4c80-8982-f20da3116e42 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:15:7d,bridge_name='br-int',has_traffic_filtering=True,id=aa9c9321-95f5-459f-816a-f038233bdb6f,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa9c9321-95') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:00:21 np0005539564 nova_compute[226295]: 2025-11-29 08:00:21.963 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:21 np0005539564 nova_compute[226295]: 2025-11-29 08:00:21.964 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa9c9321-95, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:21 np0005539564 nova_compute[226295]: 2025-11-29 08:00:21.966 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:21 np0005539564 nova_compute[226295]: 2025-11-29 08:00:21.969 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:00:21 np0005539564 nova_compute[226295]: 2025-11-29 08:00:21.971 226310 INFO os_vif [None req-7938c4ca-6dc5-4c80-8982-f20da3116e42 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:15:7d,bridge_name='br-int',has_traffic_filtering=True,id=aa9c9321-95f5-459f-816a-f038233bdb6f,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa9c9321-95')#033[00m
Nov 29 03:00:22 np0005539564 nova_compute[226295]: 2025-11-29 08:00:22.457 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:22 np0005539564 nova_compute[226295]: 2025-11-29 08:00:22.678 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:22.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:22.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e238 e238: 3 total, 3 up, 3 in
Nov 29 03:00:24 np0005539564 nova_compute[226295]: 2025-11-29 08:00:24.705 226310 INFO nova.virt.libvirt.driver [None req-7938c4ca-6dc5-4c80-8982-f20da3116e42 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Deleting instance files /var/lib/nova/instances/3886780f-7115-4500-9cdd-6e5aae5d95f9_del#033[00m
Nov 29 03:00:24 np0005539564 nova_compute[226295]: 2025-11-29 08:00:24.706 226310 INFO nova.virt.libvirt.driver [None req-7938c4ca-6dc5-4c80-8982-f20da3116e42 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Deletion of /var/lib/nova/instances/3886780f-7115-4500-9cdd-6e5aae5d95f9_del complete#033[00m
Nov 29 03:00:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:24.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:24 np0005539564 nova_compute[226295]: 2025-11-29 08:00:24.833 226310 INFO nova.compute.manager [None req-7938c4ca-6dc5-4c80-8982-f20da3116e42 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Took 2.90 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:00:24 np0005539564 nova_compute[226295]: 2025-11-29 08:00:24.834 226310 DEBUG oslo.service.loopingcall [None req-7938c4ca-6dc5-4c80-8982-f20da3116e42 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:00:24 np0005539564 nova_compute[226295]: 2025-11-29 08:00:24.834 226310 DEBUG nova.compute.manager [-] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:00:24 np0005539564 nova_compute[226295]: 2025-11-29 08:00:24.834 226310 DEBUG nova.network.neutron [-] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:00:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:24.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:25 np0005539564 nova_compute[226295]: 2025-11-29 08:00:25.267 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403210.2662468, 3886780f-7115-4500-9cdd-6e5aae5d95f9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:00:25 np0005539564 nova_compute[226295]: 2025-11-29 08:00:25.268 226310 INFO nova.compute.manager [-] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] VM Stopped (Lifecycle Event)
Nov 29 03:00:25 np0005539564 nova_compute[226295]: 2025-11-29 08:00:25.293 226310 DEBUG nova.compute.manager [None req-37e0323f-a03f-4dc9-b47c-7486df360549 - - - - - -] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:00:25 np0005539564 nova_compute[226295]: 2025-11-29 08:00:25.863 226310 DEBUG nova.network.neutron [-] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:00:25 np0005539564 nova_compute[226295]: 2025-11-29 08:00:25.886 226310 INFO nova.compute.manager [-] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Took 1.05 seconds to deallocate network for instance.
Nov 29 03:00:25 np0005539564 nova_compute[226295]: 2025-11-29 08:00:25.902 226310 DEBUG nova.compute.manager [req-d5eca5eb-ba20-4d25-a6f2-d332f2de82f1 req-86935f4b-e096-4a1c-b278-a45a8c272b69 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Received event network-vif-deleted-aa9c9321-95f5-459f-816a-f038233bdb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:00:25 np0005539564 nova_compute[226295]: 2025-11-29 08:00:25.902 226310 INFO nova.compute.manager [req-d5eca5eb-ba20-4d25-a6f2-d332f2de82f1 req-86935f4b-e096-4a1c-b278-a45a8c272b69 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Neutron deleted interface aa9c9321-95f5-459f-816a-f038233bdb6f; detaching it from the instance and deleting it from the info cache
Nov 29 03:00:25 np0005539564 nova_compute[226295]: 2025-11-29 08:00:25.903 226310 DEBUG nova.network.neutron [req-d5eca5eb-ba20-4d25-a6f2-d332f2de82f1 req-86935f4b-e096-4a1c-b278-a45a8c272b69 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:00:25 np0005539564 nova_compute[226295]: 2025-11-29 08:00:25.928 226310 DEBUG nova.compute.manager [req-d5eca5eb-ba20-4d25-a6f2-d332f2de82f1 req-86935f4b-e096-4a1c-b278-a45a8c272b69 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3886780f-7115-4500-9cdd-6e5aae5d95f9] Detach interface failed, port_id=aa9c9321-95f5-459f-816a-f038233bdb6f, reason: Instance 3886780f-7115-4500-9cdd-6e5aae5d95f9 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 29 03:00:25 np0005539564 nova_compute[226295]: 2025-11-29 08:00:25.966 226310 DEBUG oslo_concurrency.lockutils [None req-7938c4ca-6dc5-4c80-8982-f20da3116e42 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:00:25 np0005539564 nova_compute[226295]: 2025-11-29 08:00:25.967 226310 DEBUG oslo_concurrency.lockutils [None req-7938c4ca-6dc5-4c80-8982-f20da3116e42 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:00:26 np0005539564 nova_compute[226295]: 2025-11-29 08:00:26.024 226310 DEBUG oslo_concurrency.processutils [None req-7938c4ca-6dc5-4c80-8982-f20da3116e42 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:00:26 np0005539564 nova_compute[226295]: 2025-11-29 08:00:26.124 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:00:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:00:26 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/522656446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:00:26 np0005539564 nova_compute[226295]: 2025-11-29 08:00:26.518 226310 DEBUG oslo_concurrency.processutils [None req-7938c4ca-6dc5-4c80-8982-f20da3116e42 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:00:26 np0005539564 nova_compute[226295]: 2025-11-29 08:00:26.527 226310 DEBUG nova.compute.provider_tree [None req-7938c4ca-6dc5-4c80-8982-f20da3116e42 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:00:26 np0005539564 nova_compute[226295]: 2025-11-29 08:00:26.544 226310 DEBUG nova.scheduler.client.report [None req-7938c4ca-6dc5-4c80-8982-f20da3116e42 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:00:26 np0005539564 nova_compute[226295]: 2025-11-29 08:00:26.585 226310 DEBUG oslo_concurrency.lockutils [None req-7938c4ca-6dc5-4c80-8982-f20da3116e42 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:00:26 np0005539564 nova_compute[226295]: 2025-11-29 08:00:26.655 226310 INFO nova.scheduler.client.report [None req-7938c4ca-6dc5-4c80-8982-f20da3116e42 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Deleted allocations for instance 3886780f-7115-4500-9cdd-6e5aae5d95f9
Nov 29 03:00:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:00:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:26.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:00:26 np0005539564 nova_compute[226295]: 2025-11-29 08:00:26.790 226310 DEBUG oslo_concurrency.lockutils [None req-7938c4ca-6dc5-4c80-8982-f20da3116e42 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "3886780f-7115-4500-9cdd-6e5aae5d95f9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:00:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:26.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:26 np0005539564 nova_compute[226295]: 2025-11-29 08:00:26.967 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:00:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e239 e239: 3 total, 3 up, 3 in
Nov 29 03:00:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:00:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:28.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:00:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:28.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:30.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:30.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:31 np0005539564 nova_compute[226295]: 2025-11-29 08:00:31.156 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:00:31 np0005539564 nova_compute[226295]: 2025-11-29 08:00:31.265 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:00:31 np0005539564 nova_compute[226295]: 2025-11-29 08:00:31.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:00:31 np0005539564 nova_compute[226295]: 2025-11-29 08:00:31.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:00:31 np0005539564 nova_compute[226295]: 2025-11-29 08:00:31.969 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:00:32 np0005539564 nova_compute[226295]: 2025-11-29 08:00:32.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:00:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:32.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:32.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:34 np0005539564 nova_compute[226295]: 2025-11-29 08:00:34.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:00:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:00:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:34.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:00:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:34.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:35 np0005539564 nova_compute[226295]: 2025-11-29 08:00:35.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:00:35 np0005539564 nova_compute[226295]: 2025-11-29 08:00:35.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:00:35 np0005539564 nova_compute[226295]: 2025-11-29 08:00:35.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:00:35 np0005539564 nova_compute[226295]: 2025-11-29 08:00:35.379 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 03:00:36 np0005539564 nova_compute[226295]: 2025-11-29 08:00:36.158 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:00:36 np0005539564 podman[249126]: 2025-11-29 08:00:36.262386082 +0000 UTC m=+0.408193614 container exec 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:00:36 np0005539564 nova_compute[226295]: 2025-11-29 08:00:36.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:00:36 np0005539564 podman[249126]: 2025-11-29 08:00:36.562238433 +0000 UTC m=+0.708045885 container exec_died 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:00:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:36.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:36.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:36 np0005539564 nova_compute[226295]: 2025-11-29 08:00:36.942 226310 DEBUG oslo_concurrency.lockutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Acquiring lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:00:36 np0005539564 nova_compute[226295]: 2025-11-29 08:00:36.942 226310 DEBUG oslo_concurrency.lockutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:00:36 np0005539564 nova_compute[226295]: 2025-11-29 08:00:36.960 226310 DEBUG nova.compute.manager [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:00:37 np0005539564 nova_compute[226295]: 2025-11-29 08:00:37.075 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:00:37 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:37.076 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:00:37 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:37.077 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:00:37 np0005539564 nova_compute[226295]: 2025-11-29 08:00:37.082 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:00:37 np0005539564 nova_compute[226295]: 2025-11-29 08:00:37.109 226310 DEBUG oslo_concurrency.lockutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:00:37 np0005539564 nova_compute[226295]: 2025-11-29 08:00:37.109 226310 DEBUG oslo_concurrency.lockutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:00:37 np0005539564 nova_compute[226295]: 2025-11-29 08:00:37.116 226310 DEBUG nova.virt.hardware [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:00:37 np0005539564 nova_compute[226295]: 2025-11-29 08:00:37.116 226310 INFO nova.compute.claims [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Claim successful on node compute-1.ctlplane.example.com
Nov 29 03:00:37 np0005539564 nova_compute[226295]: 2025-11-29 08:00:37.251 226310 DEBUG oslo_concurrency.processutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:00:37 np0005539564 nova_compute[226295]: 2025-11-29 08:00:37.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:00:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:00:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:00:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:00:37 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4066026751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:00:37 np0005539564 nova_compute[226295]: 2025-11-29 08:00:37.690 226310 DEBUG oslo_concurrency.processutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:00:37 np0005539564 nova_compute[226295]: 2025-11-29 08:00:37.698 226310 DEBUG nova.compute.provider_tree [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:00:37 np0005539564 nova_compute[226295]: 2025-11-29 08:00:37.718 226310 DEBUG nova.scheduler.client.report [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:00:37 np0005539564 nova_compute[226295]: 2025-11-29 08:00:37.754 226310 DEBUG oslo_concurrency.lockutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:00:37 np0005539564 nova_compute[226295]: 2025-11-29 08:00:37.755 226310 DEBUG nova.compute.manager [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:00:37 np0005539564 nova_compute[226295]: 2025-11-29 08:00:37.829 226310 DEBUG nova.compute.manager [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:00:37 np0005539564 nova_compute[226295]: 2025-11-29 08:00:37.829 226310 DEBUG nova.network.neutron [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:00:37 np0005539564 nova_compute[226295]: 2025-11-29 08:00:37.861 226310 INFO nova.virt.libvirt.driver [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:00:37 np0005539564 nova_compute[226295]: 2025-11-29 08:00:37.887 226310 DEBUG nova.compute.manager [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:00:38 np0005539564 nova_compute[226295]: 2025-11-29 08:00:38.032 226310 DEBUG nova.compute.manager [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:00:38 np0005539564 nova_compute[226295]: 2025-11-29 08:00:38.033 226310 DEBUG nova.virt.libvirt.driver [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:00:38 np0005539564 nova_compute[226295]: 2025-11-29 08:00:38.034 226310 INFO nova.virt.libvirt.driver [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Creating image(s)
Nov 29 03:00:38 np0005539564 nova_compute[226295]: 2025-11-29 08:00:38.062 226310 DEBUG nova.storage.rbd_utils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] rbd image 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:00:38 np0005539564 nova_compute[226295]: 2025-11-29 08:00:38.098 226310 DEBUG nova.storage.rbd_utils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] rbd image 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:00:38 np0005539564 nova_compute[226295]: 2025-11-29 08:00:38.139 226310 DEBUG nova.storage.rbd_utils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] rbd image 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:00:38 np0005539564 nova_compute[226295]: 2025-11-29 08:00:38.143 226310 DEBUG oslo_concurrency.processutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:00:38 np0005539564 nova_compute[226295]: 2025-11-29 08:00:38.238 226310 DEBUG oslo_concurrency.processutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:00:38 np0005539564 nova_compute[226295]: 2025-11-29 08:00:38.239 226310 DEBUG oslo_concurrency.lockutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:00:38 np0005539564 nova_compute[226295]: 2025-11-29 08:00:38.241 226310 DEBUG oslo_concurrency.lockutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:00:38 np0005539564 nova_compute[226295]: 2025-11-29 08:00:38.241 226310 DEBUG oslo_concurrency.lockutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:00:38 np0005539564 nova_compute[226295]: 2025-11-29 08:00:38.279 226310 DEBUG nova.storage.rbd_utils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] rbd image 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:00:38 np0005539564 nova_compute[226295]: 2025-11-29 08:00:38.283 226310 DEBUG oslo_concurrency.processutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:38 np0005539564 nova_compute[226295]: 2025-11-29 08:00:38.373 226310 DEBUG nova.policy [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d4e5ab1ae494327abcb3693ba332586', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6fce027870d041328a9b9968bfe90665', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:00:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:38 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:00:38 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:00:38 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:00:38 np0005539564 nova_compute[226295]: 2025-11-29 08:00:38.661 226310 DEBUG oslo_concurrency.processutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.377s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:38 np0005539564 nova_compute[226295]: 2025-11-29 08:00:38.770 226310 DEBUG nova.storage.rbd_utils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] resizing rbd image 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:00:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:38.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:38.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:39 np0005539564 nova_compute[226295]: 2025-11-29 08:00:39.227 226310 DEBUG nova.objects.instance [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lazy-loading 'migration_context' on Instance uuid 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:00:39 np0005539564 nova_compute[226295]: 2025-11-29 08:00:39.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:39 np0005539564 nova_compute[226295]: 2025-11-29 08:00:39.354 226310 DEBUG nova.virt.libvirt.driver [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:00:39 np0005539564 nova_compute[226295]: 2025-11-29 08:00:39.355 226310 DEBUG nova.virt.libvirt.driver [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Ensure instance console log exists: /var/lib/nova/instances/179785bb-7a72-4a5f-b2a6-ba2b4a10cba2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:00:39 np0005539564 nova_compute[226295]: 2025-11-29 08:00:39.356 226310 DEBUG oslo_concurrency.lockutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:39 np0005539564 nova_compute[226295]: 2025-11-29 08:00:39.356 226310 DEBUG oslo_concurrency.lockutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:39 np0005539564 nova_compute[226295]: 2025-11-29 08:00:39.357 226310 DEBUG oslo_concurrency.lockutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:39 np0005539564 nova_compute[226295]: 2025-11-29 08:00:39.471 226310 DEBUG nova.network.neutron [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Successfully created port: 31fb5a22-c39e-440e-b8e6-681b1ea8baed _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:00:40 np0005539564 nova_compute[226295]: 2025-11-29 08:00:40.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:40 np0005539564 nova_compute[226295]: 2025-11-29 08:00:40.376 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:40 np0005539564 nova_compute[226295]: 2025-11-29 08:00:40.377 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:40 np0005539564 nova_compute[226295]: 2025-11-29 08:00:40.377 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:40 np0005539564 nova_compute[226295]: 2025-11-29 08:00:40.378 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:00:40 np0005539564 nova_compute[226295]: 2025-11-29 08:00:40.379 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:40 np0005539564 podman[249567]: 2025-11-29 08:00:40.514419517 +0000 UTC m=+0.096707738 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:00:40 np0005539564 podman[249568]: 2025-11-29 08:00:40.514336405 +0000 UTC m=+0.095868235 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 03:00:40 np0005539564 podman[249565]: 2025-11-29 08:00:40.557947374 +0000 UTC m=+0.137596813 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:00:40 np0005539564 nova_compute[226295]: 2025-11-29 08:00:40.575 226310 DEBUG nova.network.neutron [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Successfully updated port: 31fb5a22-c39e-440e-b8e6-681b1ea8baed _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:00:40 np0005539564 nova_compute[226295]: 2025-11-29 08:00:40.596 226310 DEBUG oslo_concurrency.lockutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Acquiring lock "refresh_cache-179785bb-7a72-4a5f-b2a6-ba2b4a10cba2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:00:40 np0005539564 nova_compute[226295]: 2025-11-29 08:00:40.597 226310 DEBUG oslo_concurrency.lockutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Acquired lock "refresh_cache-179785bb-7a72-4a5f-b2a6-ba2b4a10cba2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:00:40 np0005539564 nova_compute[226295]: 2025-11-29 08:00:40.597 226310 DEBUG nova.network.neutron [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:00:40 np0005539564 nova_compute[226295]: 2025-11-29 08:00:40.705 226310 DEBUG nova.compute.manager [req-59115c49-93c1-46eb-b226-a3614d4dc6eb req-518e264e-d3bb-448d-b69c-a1d6a56cbd84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Received event network-changed-31fb5a22-c39e-440e-b8e6-681b1ea8baed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:40 np0005539564 nova_compute[226295]: 2025-11-29 08:00:40.706 226310 DEBUG nova.compute.manager [req-59115c49-93c1-46eb-b226-a3614d4dc6eb req-518e264e-d3bb-448d-b69c-a1d6a56cbd84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Refreshing instance network info cache due to event network-changed-31fb5a22-c39e-440e-b8e6-681b1ea8baed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:00:40 np0005539564 nova_compute[226295]: 2025-11-29 08:00:40.707 226310 DEBUG oslo_concurrency.lockutils [req-59115c49-93c1-46eb-b226-a3614d4dc6eb req-518e264e-d3bb-448d-b69c-a1d6a56cbd84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-179785bb-7a72-4a5f-b2a6-ba2b4a10cba2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:00:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:40.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:00:40 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2556255479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:00:40 np0005539564 nova_compute[226295]: 2025-11-29 08:00:40.853 226310 DEBUG nova.network.neutron [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:00:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:40.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:40 np0005539564 nova_compute[226295]: 2025-11-29 08:00:40.887 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.146 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.149 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4676MB free_disk=20.877899169921875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.149 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.150 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.159 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.221 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.222 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.222 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:00:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e240 e240: 3 total, 3 up, 3 in
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.257 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:00:41 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3165533345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.751 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.758 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.777 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.809 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.809 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.852 226310 DEBUG nova.network.neutron [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Updating instance_info_cache with network_info: [{"id": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "address": "fa:16:3e:d7:5a:51", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fb5a22-c3", "ovs_interfaceid": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.873 226310 DEBUG oslo_concurrency.lockutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Releasing lock "refresh_cache-179785bb-7a72-4a5f-b2a6-ba2b4a10cba2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.874 226310 DEBUG nova.compute.manager [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Instance network_info: |[{"id": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "address": "fa:16:3e:d7:5a:51", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fb5a22-c3", "ovs_interfaceid": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.875 226310 DEBUG oslo_concurrency.lockutils [req-59115c49-93c1-46eb-b226-a3614d4dc6eb req-518e264e-d3bb-448d-b69c-a1d6a56cbd84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-179785bb-7a72-4a5f-b2a6-ba2b4a10cba2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.875 226310 DEBUG nova.network.neutron [req-59115c49-93c1-46eb-b226-a3614d4dc6eb req-518e264e-d3bb-448d-b69c-a1d6a56cbd84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Refreshing network info cache for port 31fb5a22-c39e-440e-b8e6-681b1ea8baed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.881 226310 DEBUG nova.virt.libvirt.driver [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Start _get_guest_xml network_info=[{"id": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "address": "fa:16:3e:d7:5a:51", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fb5a22-c3", "ovs_interfaceid": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.885 226310 WARNING nova.virt.libvirt.driver [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.891 226310 DEBUG nova.virt.libvirt.host [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.892 226310 DEBUG nova.virt.libvirt.host [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.896 226310 DEBUG nova.virt.libvirt.host [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.897 226310 DEBUG nova.virt.libvirt.host [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.900 226310 DEBUG nova.virt.libvirt.driver [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.900 226310 DEBUG nova.virt.hardware [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.901 226310 DEBUG nova.virt.hardware [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.902 226310 DEBUG nova.virt.hardware [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.902 226310 DEBUG nova.virt.hardware [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.903 226310 DEBUG nova.virt.hardware [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.903 226310 DEBUG nova.virt.hardware [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.903 226310 DEBUG nova.virt.hardware [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.904 226310 DEBUG nova.virt.hardware [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.904 226310 DEBUG nova.virt.hardware [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.905 226310 DEBUG nova.virt.hardware [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.905 226310 DEBUG nova.virt.hardware [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:00:41 np0005539564 nova_compute[226295]: 2025-11-29 08:00:41.912 226310 DEBUG oslo_concurrency.processutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:42 np0005539564 nova_compute[226295]: 2025-11-29 08:00:42.078 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e241 e241: 3 total, 3 up, 3 in
Nov 29 03:00:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:00:42 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2429097201' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:00:42 np0005539564 nova_compute[226295]: 2025-11-29 08:00:42.433 226310 DEBUG oslo_concurrency.processutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:42 np0005539564 nova_compute[226295]: 2025-11-29 08:00:42.480 226310 DEBUG nova.storage.rbd_utils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] rbd image 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:42 np0005539564 nova_compute[226295]: 2025-11-29 08:00:42.486 226310 DEBUG oslo_concurrency.processutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:42.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:42.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:00:42 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3539241879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:00:42 np0005539564 nova_compute[226295]: 2025-11-29 08:00:42.980 226310 DEBUG oslo_concurrency.processutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:42 np0005539564 nova_compute[226295]: 2025-11-29 08:00:42.982 226310 DEBUG nova.virt.libvirt.vif [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:00:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-2089001117',display_name='tempest-SecurityGroupsTestJSON-server-2089001117',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-2089001117',id=62,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6fce027870d041328a9b9968bfe90665',ramdisk_id='',reservation_id='r-a4qh74ua',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1868555561',owner_user_name='tempest-SecurityGroupsTestJSON-1868555561-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:00:37Z,user_data=None,user_id='8d4e5ab1ae494327abcb3693ba332586',uuid=179785bb-7a72-4a5f-b2a6-ba2b4a10cba2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "address": "fa:16:3e:d7:5a:51", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fb5a22-c3", "ovs_interfaceid": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:00:42 np0005539564 nova_compute[226295]: 2025-11-29 08:00:42.983 226310 DEBUG nova.network.os_vif_util [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Converting VIF {"id": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "address": "fa:16:3e:d7:5a:51", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fb5a22-c3", "ovs_interfaceid": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:00:42 np0005539564 nova_compute[226295]: 2025-11-29 08:00:42.984 226310 DEBUG nova.network.os_vif_util [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:5a:51,bridge_name='br-int',has_traffic_filtering=True,id=31fb5a22-c39e-440e-b8e6-681b1ea8baed,network=Network(b2063759-3e65-4e4b-b3aa-6d737d865479),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fb5a22-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:00:42 np0005539564 nova_compute[226295]: 2025-11-29 08:00:42.986 226310 DEBUG nova.objects.instance [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lazy-loading 'pci_devices' on Instance uuid 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:00:43 np0005539564 nova_compute[226295]: 2025-11-29 08:00:43.005 226310 DEBUG nova.virt.libvirt.driver [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:00:43 np0005539564 nova_compute[226295]:  <uuid>179785bb-7a72-4a5f-b2a6-ba2b4a10cba2</uuid>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:  <name>instance-0000003e</name>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <nova:name>tempest-SecurityGroupsTestJSON-server-2089001117</nova:name>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:00:41</nova:creationTime>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:00:43 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:        <nova:user uuid="8d4e5ab1ae494327abcb3693ba332586">tempest-SecurityGroupsTestJSON-1868555561-project-member</nova:user>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:        <nova:project uuid="6fce027870d041328a9b9968bfe90665">tempest-SecurityGroupsTestJSON-1868555561</nova:project>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:        <nova:port uuid="31fb5a22-c39e-440e-b8e6-681b1ea8baed">
Nov 29 03:00:43 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <entry name="serial">179785bb-7a72-4a5f-b2a6-ba2b4a10cba2</entry>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <entry name="uuid">179785bb-7a72-4a5f-b2a6-ba2b4a10cba2</entry>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/179785bb-7a72-4a5f-b2a6-ba2b4a10cba2_disk">
Nov 29 03:00:43 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:00:43 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/179785bb-7a72-4a5f-b2a6-ba2b4a10cba2_disk.config">
Nov 29 03:00:43 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:00:43 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:d7:5a:51"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <target dev="tap31fb5a22-c3"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/179785bb-7a72-4a5f-b2a6-ba2b4a10cba2/console.log" append="off"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:00:43 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:00:43 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:00:43 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:00:43 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:00:43 np0005539564 nova_compute[226295]: 2025-11-29 08:00:43.007 226310 DEBUG nova.compute.manager [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Preparing to wait for external event network-vif-plugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:00:43 np0005539564 nova_compute[226295]: 2025-11-29 08:00:43.008 226310 DEBUG oslo_concurrency.lockutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Acquiring lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:43 np0005539564 nova_compute[226295]: 2025-11-29 08:00:43.008 226310 DEBUG oslo_concurrency.lockutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:43 np0005539564 nova_compute[226295]: 2025-11-29 08:00:43.009 226310 DEBUG oslo_concurrency.lockutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:43 np0005539564 nova_compute[226295]: 2025-11-29 08:00:43.010 226310 DEBUG nova.virt.libvirt.vif [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:00:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-2089001117',display_name='tempest-SecurityGroupsTestJSON-server-2089001117',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-2089001117',id=62,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6fce027870d041328a9b9968bfe90665',ramdisk_id='',reservation_id='r-a4qh74ua',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1868555561',owner_user_name='tempest-Secur
ityGroupsTestJSON-1868555561-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:00:37Z,user_data=None,user_id='8d4e5ab1ae494327abcb3693ba332586',uuid=179785bb-7a72-4a5f-b2a6-ba2b4a10cba2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "address": "fa:16:3e:d7:5a:51", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fb5a22-c3", "ovs_interfaceid": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:00:43 np0005539564 nova_compute[226295]: 2025-11-29 08:00:43.011 226310 DEBUG nova.network.os_vif_util [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Converting VIF {"id": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "address": "fa:16:3e:d7:5a:51", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fb5a22-c3", "ovs_interfaceid": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:00:43 np0005539564 nova_compute[226295]: 2025-11-29 08:00:43.012 226310 DEBUG nova.network.os_vif_util [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:5a:51,bridge_name='br-int',has_traffic_filtering=True,id=31fb5a22-c39e-440e-b8e6-681b1ea8baed,network=Network(b2063759-3e65-4e4b-b3aa-6d737d865479),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fb5a22-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:00:43 np0005539564 nova_compute[226295]: 2025-11-29 08:00:43.013 226310 DEBUG os_vif [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:5a:51,bridge_name='br-int',has_traffic_filtering=True,id=31fb5a22-c39e-440e-b8e6-681b1ea8baed,network=Network(b2063759-3e65-4e4b-b3aa-6d737d865479),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fb5a22-c3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:00:43 np0005539564 nova_compute[226295]: 2025-11-29 08:00:43.014 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:43 np0005539564 nova_compute[226295]: 2025-11-29 08:00:43.015 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:43 np0005539564 nova_compute[226295]: 2025-11-29 08:00:43.015 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:00:43 np0005539564 nova_compute[226295]: 2025-11-29 08:00:43.020 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:43 np0005539564 nova_compute[226295]: 2025-11-29 08:00:43.020 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31fb5a22-c3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:43 np0005539564 nova_compute[226295]: 2025-11-29 08:00:43.021 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31fb5a22-c3, col_values=(('external_ids', {'iface-id': '31fb5a22-c39e-440e-b8e6-681b1ea8baed', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d7:5a:51', 'vm-uuid': '179785bb-7a72-4a5f-b2a6-ba2b4a10cba2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:43 np0005539564 NetworkManager[48997]: <info>  [1764403243.0243] manager: (tap31fb5a22-c3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Nov 29 03:00:43 np0005539564 nova_compute[226295]: 2025-11-29 08:00:43.027 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:43 np0005539564 nova_compute[226295]: 2025-11-29 08:00:43.031 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:43 np0005539564 nova_compute[226295]: 2025-11-29 08:00:43.032 226310 INFO os_vif [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:5a:51,bridge_name='br-int',has_traffic_filtering=True,id=31fb5a22-c39e-440e-b8e6-681b1ea8baed,network=Network(b2063759-3e65-4e4b-b3aa-6d737d865479),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fb5a22-c3')#033[00m
Nov 29 03:00:43 np0005539564 nova_compute[226295]: 2025-11-29 08:00:43.249 226310 DEBUG nova.network.neutron [req-59115c49-93c1-46eb-b226-a3614d4dc6eb req-518e264e-d3bb-448d-b69c-a1d6a56cbd84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Updated VIF entry in instance network info cache for port 31fb5a22-c39e-440e-b8e6-681b1ea8baed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:00:43 np0005539564 nova_compute[226295]: 2025-11-29 08:00:43.250 226310 DEBUG nova.network.neutron [req-59115c49-93c1-46eb-b226-a3614d4dc6eb req-518e264e-d3bb-448d-b69c-a1d6a56cbd84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Updating instance_info_cache with network_info: [{"id": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "address": "fa:16:3e:d7:5a:51", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fb5a22-c3", "ovs_interfaceid": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:43 np0005539564 nova_compute[226295]: 2025-11-29 08:00:43.264 226310 DEBUG oslo_concurrency.lockutils [req-59115c49-93c1-46eb-b226-a3614d4dc6eb req-518e264e-d3bb-448d-b69c-a1d6a56cbd84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-179785bb-7a72-4a5f-b2a6-ba2b4a10cba2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:00:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:43 np0005539564 nova_compute[226295]: 2025-11-29 08:00:43.528 226310 DEBUG nova.virt.libvirt.driver [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:00:43 np0005539564 nova_compute[226295]: 2025-11-29 08:00:43.528 226310 DEBUG nova.virt.libvirt.driver [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:00:43 np0005539564 nova_compute[226295]: 2025-11-29 08:00:43.529 226310 DEBUG nova.virt.libvirt.driver [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] No VIF found with MAC fa:16:3e:d7:5a:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:00:43 np0005539564 nova_compute[226295]: 2025-11-29 08:00:43.529 226310 INFO nova.virt.libvirt.driver [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Using config drive#033[00m
Nov 29 03:00:43 np0005539564 nova_compute[226295]: 2025-11-29 08:00:43.564 226310 DEBUG nova.storage.rbd_utils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] rbd image 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e242 e242: 3 total, 3 up, 3 in
Nov 29 03:00:44 np0005539564 nova_compute[226295]: 2025-11-29 08:00:44.563 226310 INFO nova.virt.libvirt.driver [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Creating config drive at /var/lib/nova/instances/179785bb-7a72-4a5f-b2a6-ba2b4a10cba2/disk.config#033[00m
Nov 29 03:00:44 np0005539564 nova_compute[226295]: 2025-11-29 08:00:44.570 226310 DEBUG oslo_concurrency.processutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/179785bb-7a72-4a5f-b2a6-ba2b4a10cba2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppaf36q7v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:44 np0005539564 nova_compute[226295]: 2025-11-29 08:00:44.726 226310 DEBUG oslo_concurrency.processutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/179785bb-7a72-4a5f-b2a6-ba2b4a10cba2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppaf36q7v" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:44.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:44.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:45.079 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:46 np0005539564 nova_compute[226295]: 2025-11-29 08:00:46.317 226310 DEBUG nova.storage.rbd_utils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] rbd image 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:46 np0005539564 nova_compute[226295]: 2025-11-29 08:00:46.322 226310 DEBUG oslo_concurrency.processutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/179785bb-7a72-4a5f-b2a6-ba2b4a10cba2/disk.config 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:46 np0005539564 nova_compute[226295]: 2025-11-29 08:00:46.359 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:00:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 23K writes, 95K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s#012Cumulative WAL: 23K writes, 7166 syncs, 3.24 writes per sync, written: 0.09 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 9917 writes, 38K keys, 9917 commit groups, 1.0 writes per commit group, ingest: 36.66 MB, 0.06 MB/s#012Interval WAL: 9917 writes, 3678 syncs, 2.70 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:00:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:46.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:46.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:48 np0005539564 nova_compute[226295]: 2025-11-29 08:00:48.024 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:48 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 03:00:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:48.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:48.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:50.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:50.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:51 np0005539564 nova_compute[226295]: 2025-11-29 08:00:51.207 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:52.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:00:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:52.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:00:53 np0005539564 nova_compute[226295]: 2025-11-29 08:00:53.027 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:54 np0005539564 nova_compute[226295]: 2025-11-29 08:00:54.759 226310 DEBUG oslo_concurrency.processutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/179785bb-7a72-4a5f-b2a6-ba2b4a10cba2/disk.config 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 8.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:54 np0005539564 nova_compute[226295]: 2025-11-29 08:00:54.760 226310 INFO nova.virt.libvirt.driver [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Deleting local config drive /var/lib/nova/instances/179785bb-7a72-4a5f-b2a6-ba2b4a10cba2/disk.config because it was imported into RBD.#033[00m
Nov 29 03:00:54 np0005539564 virtqemud[225880]: End of file while reading data: Input/output error
Nov 29 03:00:54 np0005539564 nova_compute[226295]: 2025-11-29 08:00:54.803 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:54.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:54 np0005539564 kernel: tap31fb5a22-c3: entered promiscuous mode
Nov 29 03:00:54 np0005539564 NetworkManager[48997]: <info>  [1764403254.8392] manager: (tap31fb5a22-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/93)
Nov 29 03:00:54 np0005539564 nova_compute[226295]: 2025-11-29 08:00:54.839 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:54 np0005539564 ovn_controller[130591]: 2025-11-29T08:00:54Z|00170|binding|INFO|Claiming lport 31fb5a22-c39e-440e-b8e6-681b1ea8baed for this chassis.
Nov 29 03:00:54 np0005539564 ovn_controller[130591]: 2025-11-29T08:00:54Z|00171|binding|INFO|31fb5a22-c39e-440e-b8e6-681b1ea8baed: Claiming fa:16:3e:d7:5a:51 10.100.0.7
Nov 29 03:00:54 np0005539564 nova_compute[226295]: 2025-11-29 08:00:54.842 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:54.852 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:5a:51 10.100.0.7'], port_security=['fa:16:3e:d7:5a:51 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '179785bb-7a72-4a5f-b2a6-ba2b4a10cba2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2063759-3e65-4e4b-b3aa-6d737d865479', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6fce027870d041328a9b9968bfe90665', 'neutron:revision_number': '2', 'neutron:security_group_ids': '90056085-c762-483e-89c7-ac78dc504f10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbb6126a-1e4d-4a00-9500-8124c46f02a3, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=31fb5a22-c39e-440e-b8e6-681b1ea8baed) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:00:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:54.854 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 31fb5a22-c39e-440e-b8e6-681b1ea8baed in datapath b2063759-3e65-4e4b-b3aa-6d737d865479 bound to our chassis#033[00m
Nov 29 03:00:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:54.855 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b2063759-3e65-4e4b-b3aa-6d737d865479#033[00m
Nov 29 03:00:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:54.878 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[af6e938b-ed4a-4d09-be3b-2b4a502aa54f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:54 np0005539564 systemd-udevd[249807]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:00:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:54.880 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb2063759-31 in ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:00:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:54.883 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb2063759-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:00:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:54.883 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6c2fe420-84f7-489f-a51b-23eb077f1c24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:54.884 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bed42b12-ffef-4dd7-9bcf-c4524888d09d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:54 np0005539564 systemd-machined[190128]: New machine qemu-27-instance-0000003e.
Nov 29 03:00:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:54.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:54 np0005539564 systemd[1]: Started Virtual Machine qemu-27-instance-0000003e.
Nov 29 03:00:54 np0005539564 NetworkManager[48997]: <info>  [1764403254.9012] device (tap31fb5a22-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:00:54 np0005539564 NetworkManager[48997]: <info>  [1764403254.9024] device (tap31fb5a22-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:00:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:54.901 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[c859372e-632b-4ca2-a1bc-ba5031cbc5b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:54 np0005539564 nova_compute[226295]: 2025-11-29 08:00:54.908 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:54 np0005539564 ovn_controller[130591]: 2025-11-29T08:00:54Z|00172|binding|INFO|Setting lport 31fb5a22-c39e-440e-b8e6-681b1ea8baed ovn-installed in OVS
Nov 29 03:00:54 np0005539564 ovn_controller[130591]: 2025-11-29T08:00:54Z|00173|binding|INFO|Setting lport 31fb5a22-c39e-440e-b8e6-681b1ea8baed up in Southbound
Nov 29 03:00:54 np0005539564 nova_compute[226295]: 2025-11-29 08:00:54.913 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:54.924 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5b5a7168-a400-48fe-b35b-f81ed3b6824f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:54.963 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[1f648994-0e6d-4538-8fce-7a8d1f5ca275]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:54.969 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[03b706cd-dbc6-4487-8def-13e2e435922c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:54 np0005539564 NetworkManager[48997]: <info>  [1764403254.9711] manager: (tapb2063759-30): new Veth device (/org/freedesktop/NetworkManager/Devices/94)
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:55.008 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[ed0e11cb-c867-4dcc-af73-e4fdd2ad3085]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:55.011 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[a0135474-1c97-46b6-a82f-5d970b43a42e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:55 np0005539564 NetworkManager[48997]: <info>  [1764403255.0412] device (tapb2063759-30): carrier: link connected
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:55.046 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[6c5369d2-fa77-4f69-8356-f6f9d6a97172]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:55.072 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e31d03-1316-4c7d-825b-d467d72fa4c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb2063759-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:76:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622972, 'reachable_time': 33842, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249840, 'error': None, 'target': 'ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:55.092 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3251d431-80bb-4539-8ff9-7542dd57dfe4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4e:7680'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622972, 'tstamp': 622972}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249841, 'error': None, 'target': 'ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:55.116 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a71f83f9-d40a-4a47-91d5-580a1dbe9a7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb2063759-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:76:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622972, 'reachable_time': 33842, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249842, 'error': None, 'target': 'ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:55.158 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4520d9ec-3db0-4539-b827-5f2a703de4c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:55.232 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6e30af04-54c3-45c5-a0c4-137d1eca4b0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:55.235 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2063759-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:55.235 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:55.236 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2063759-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:55 np0005539564 NetworkManager[48997]: <info>  [1764403255.2392] manager: (tapb2063759-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Nov 29 03:00:55 np0005539564 kernel: tapb2063759-30: entered promiscuous mode
Nov 29 03:00:55 np0005539564 nova_compute[226295]: 2025-11-29 08:00:55.238 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:55.242 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb2063759-30, col_values=(('external_ids', {'iface-id': '56d6fe86-a22b-4b4c-87cc-d5e908ba5810'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:55 np0005539564 nova_compute[226295]: 2025-11-29 08:00:55.243 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:55 np0005539564 ovn_controller[130591]: 2025-11-29T08:00:55Z|00174|binding|INFO|Releasing lport 56d6fe86-a22b-4b4c-87cc-d5e908ba5810 from this chassis (sb_readonly=0)
Nov 29 03:00:55 np0005539564 nova_compute[226295]: 2025-11-29 08:00:55.257 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:55.259 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b2063759-3e65-4e4b-b3aa-6d737d865479.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b2063759-3e65-4e4b-b3aa-6d737d865479.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:55.261 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[58ce79ac-7fa8-4308-b69e-4ebee9e69861]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:55.262 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-b2063759-3e65-4e4b-b3aa-6d737d865479
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/b2063759-3e65-4e4b-b3aa-6d737d865479.pid.haproxy
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID b2063759-3e65-4e4b-b3aa-6d737d865479
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:00:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:00:55.265 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479', 'env', 'PROCESS_TAG=haproxy-b2063759-3e65-4e4b-b3aa-6d737d865479', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b2063759-3e65-4e4b-b3aa-6d737d865479.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:00:55 np0005539564 nova_compute[226295]: 2025-11-29 08:00:55.312 226310 DEBUG nova.compute.manager [req-2df9836f-a58b-4655-a000-7a4b7426c850 req-a27baf23-84ca-49d5-8540-bd4d84ff5938 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Received event network-vif-plugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:55 np0005539564 nova_compute[226295]: 2025-11-29 08:00:55.313 226310 DEBUG oslo_concurrency.lockutils [req-2df9836f-a58b-4655-a000-7a4b7426c850 req-a27baf23-84ca-49d5-8540-bd4d84ff5938 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:55 np0005539564 nova_compute[226295]: 2025-11-29 08:00:55.313 226310 DEBUG oslo_concurrency.lockutils [req-2df9836f-a58b-4655-a000-7a4b7426c850 req-a27baf23-84ca-49d5-8540-bd4d84ff5938 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:55 np0005539564 nova_compute[226295]: 2025-11-29 08:00:55.313 226310 DEBUG oslo_concurrency.lockutils [req-2df9836f-a58b-4655-a000-7a4b7426c850 req-a27baf23-84ca-49d5-8540-bd4d84ff5938 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:55 np0005539564 nova_compute[226295]: 2025-11-29 08:00:55.314 226310 DEBUG nova.compute.manager [req-2df9836f-a58b-4655-a000-7a4b7426c850 req-a27baf23-84ca-49d5-8540-bd4d84ff5938 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Processing event network-vif-plugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:00:55 np0005539564 podman[249899]: 2025-11-29 08:00:55.684128725 +0000 UTC m=+0.028563214 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:00:56 np0005539564 nova_compute[226295]: 2025-11-29 08:00:56.210 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:56.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:00:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:56.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:00:57 np0005539564 nova_compute[226295]: 2025-11-29 08:00:57.434 226310 DEBUG nova.compute.manager [req-5a2081d5-9b3a-414e-bd3a-d63f8c92838a req-eaba5b17-3dde-40ef-84ee-d1bee9de7fc1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Received event network-vif-plugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:57 np0005539564 nova_compute[226295]: 2025-11-29 08:00:57.435 226310 DEBUG oslo_concurrency.lockutils [req-5a2081d5-9b3a-414e-bd3a-d63f8c92838a req-eaba5b17-3dde-40ef-84ee-d1bee9de7fc1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:57 np0005539564 nova_compute[226295]: 2025-11-29 08:00:57.436 226310 DEBUG oslo_concurrency.lockutils [req-5a2081d5-9b3a-414e-bd3a-d63f8c92838a req-eaba5b17-3dde-40ef-84ee-d1bee9de7fc1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:57 np0005539564 nova_compute[226295]: 2025-11-29 08:00:57.436 226310 DEBUG oslo_concurrency.lockutils [req-5a2081d5-9b3a-414e-bd3a-d63f8c92838a req-eaba5b17-3dde-40ef-84ee-d1bee9de7fc1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:57 np0005539564 nova_compute[226295]: 2025-11-29 08:00:57.437 226310 DEBUG nova.compute.manager [req-5a2081d5-9b3a-414e-bd3a-d63f8c92838a req-eaba5b17-3dde-40ef-84ee-d1bee9de7fc1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] No waiting events found dispatching network-vif-plugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:00:57 np0005539564 nova_compute[226295]: 2025-11-29 08:00:57.437 226310 WARNING nova.compute.manager [req-5a2081d5-9b3a-414e-bd3a-d63f8c92838a req-eaba5b17-3dde-40ef-84ee-d1bee9de7fc1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Received unexpected event network-vif-plugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.029 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e243 e243: 3 total, 3 up, 3 in
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.621 226310 DEBUG nova.compute.manager [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.624 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403258.6188033, 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.624 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] VM Started (Lifecycle Event)
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.630 226310 DEBUG nova.virt.libvirt.driver [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.634 226310 INFO nova.virt.libvirt.driver [-] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Instance spawned successfully.
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.635 226310 DEBUG nova.virt.libvirt.driver [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.658 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.663 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.669 226310 DEBUG nova.virt.libvirt.driver [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.669 226310 DEBUG nova.virt.libvirt.driver [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.670 226310 DEBUG nova.virt.libvirt.driver [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.670 226310 DEBUG nova.virt.libvirt.driver [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.670 226310 DEBUG nova.virt.libvirt.driver [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.671 226310 DEBUG nova.virt.libvirt.driver [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.704 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.704 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403258.622905, 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.705 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] VM Paused (Lifecycle Event)
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.736 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.741 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403258.6278627, 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.741 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] VM Resumed (Lifecycle Event)
Nov 29 03:00:58 np0005539564 podman[249899]: 2025-11-29 08:00:58.745992244 +0000 UTC m=+3.090426753 container create 7a9b0d804ba16e65aec732ae2fb5990d0590aa3e6ae9451663b672bcf16abef0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.763 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.768 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.772 226310 INFO nova.compute.manager [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Took 20.74 seconds to spawn the instance on the hypervisor.
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.773 226310 DEBUG nova.compute.manager [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.804 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:00:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:58.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.846 226310 INFO nova.compute.manager [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Took 21.77 seconds to build instance.
Nov 29 03:00:58 np0005539564 nova_compute[226295]: 2025-11-29 08:00:58.863 226310 DEBUG oslo_concurrency.lockutils [None req-c7df8277-df14-4795-8d1e-209a47ba451d 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:00:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:00:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:58.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:59 np0005539564 systemd[1]: Started libpod-conmon-7a9b0d804ba16e65aec732ae2fb5990d0590aa3e6ae9451663b672bcf16abef0.scope.
Nov 29 03:00:59 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:00:59 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cb4a89f8d6f5146e7dda4c0aff439a6a458ca7d1e603bd911fd521859e806e7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:00:59 np0005539564 podman[249899]: 2025-11-29 08:00:59.791856576 +0000 UTC m=+4.136291085 container init 7a9b0d804ba16e65aec732ae2fb5990d0590aa3e6ae9451663b672bcf16abef0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:00:59 np0005539564 podman[249899]: 2025-11-29 08:00:59.804059267 +0000 UTC m=+4.148493766 container start 7a9b0d804ba16e65aec732ae2fb5990d0590aa3e6ae9451663b672bcf16abef0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:00:59 np0005539564 neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479[249932]: [NOTICE]   (249936) : New worker (249938) forked
Nov 29 03:00:59 np0005539564 neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479[249932]: [NOTICE]   (249936) : Loading success.
Nov 29 03:01:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:00.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:00.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:01 np0005539564 nova_compute[226295]: 2025-11-29 08:01:01.212 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:02 np0005539564 nova_compute[226295]: 2025-11-29 08:01:02.714 226310 DEBUG nova.compute.manager [req-57fca4a9-5e95-43d2-b81f-1f86df620bba req-cd600455-f43b-4aeb-a48b-31a53b7084df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Received event network-changed-31fb5a22-c39e-440e-b8e6-681b1ea8baed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:01:02 np0005539564 nova_compute[226295]: 2025-11-29 08:01:02.715 226310 DEBUG nova.compute.manager [req-57fca4a9-5e95-43d2-b81f-1f86df620bba req-cd600455-f43b-4aeb-a48b-31a53b7084df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Refreshing instance network info cache due to event network-changed-31fb5a22-c39e-440e-b8e6-681b1ea8baed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:01:02 np0005539564 nova_compute[226295]: 2025-11-29 08:01:02.716 226310 DEBUG oslo_concurrency.lockutils [req-57fca4a9-5e95-43d2-b81f-1f86df620bba req-cd600455-f43b-4aeb-a48b-31a53b7084df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-179785bb-7a72-4a5f-b2a6-ba2b4a10cba2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:01:02 np0005539564 nova_compute[226295]: 2025-11-29 08:01:02.716 226310 DEBUG oslo_concurrency.lockutils [req-57fca4a9-5e95-43d2-b81f-1f86df620bba req-cd600455-f43b-4aeb-a48b-31a53b7084df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-179785bb-7a72-4a5f-b2a6-ba2b4a10cba2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:01:02 np0005539564 nova_compute[226295]: 2025-11-29 08:01:02.716 226310 DEBUG nova.network.neutron [req-57fca4a9-5e95-43d2-b81f-1f86df620bba req-cd600455-f43b-4aeb-a48b-31a53b7084df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Refreshing network info cache for port 31fb5a22-c39e-440e-b8e6-681b1ea8baed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:01:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:02.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:01:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:01:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:01:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:02.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:01:03 np0005539564 nova_compute[226295]: 2025-11-29 08:01:03.031 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:03.711 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:01:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:03.712 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:01:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:03.713 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:01:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:01:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:04.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:01:04 np0005539564 nova_compute[226295]: 2025-11-29 08:01:04.889 226310 DEBUG oslo_concurrency.lockutils [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Acquiring lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:01:04 np0005539564 nova_compute[226295]: 2025-11-29 08:01:04.890 226310 DEBUG oslo_concurrency.lockutils [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:01:04 np0005539564 nova_compute[226295]: 2025-11-29 08:01:04.890 226310 INFO nova.compute.manager [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Rebooting instance
Nov 29 03:01:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:04.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:05 np0005539564 nova_compute[226295]: 2025-11-29 08:01:05.078 226310 DEBUG oslo_concurrency.lockutils [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Acquiring lock "refresh_cache-179785bb-7a72-4a5f-b2a6-ba2b4a10cba2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:01:05 np0005539564 nova_compute[226295]: 2025-11-29 08:01:05.334 226310 DEBUG nova.network.neutron [req-57fca4a9-5e95-43d2-b81f-1f86df620bba req-cd600455-f43b-4aeb-a48b-31a53b7084df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Updated VIF entry in instance network info cache for port 31fb5a22-c39e-440e-b8e6-681b1ea8baed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:01:05 np0005539564 nova_compute[226295]: 2025-11-29 08:01:05.335 226310 DEBUG nova.network.neutron [req-57fca4a9-5e95-43d2-b81f-1f86df620bba req-cd600455-f43b-4aeb-a48b-31a53b7084df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Updating instance_info_cache with network_info: [{"id": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "address": "fa:16:3e:d7:5a:51", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fb5a22-c3", "ovs_interfaceid": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:01:05 np0005539564 nova_compute[226295]: 2025-11-29 08:01:05.379 226310 DEBUG oslo_concurrency.lockutils [req-57fca4a9-5e95-43d2-b81f-1f86df620bba req-cd600455-f43b-4aeb-a48b-31a53b7084df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-179785bb-7a72-4a5f-b2a6-ba2b4a10cba2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:01:05 np0005539564 nova_compute[226295]: 2025-11-29 08:01:05.381 226310 DEBUG oslo_concurrency.lockutils [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Acquired lock "refresh_cache-179785bb-7a72-4a5f-b2a6-ba2b4a10cba2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:01:05 np0005539564 nova_compute[226295]: 2025-11-29 08:01:05.381 226310 DEBUG nova.network.neutron [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:01:06 np0005539564 nova_compute[226295]: 2025-11-29 08:01:06.214 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:06.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:06.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:07 np0005539564 nova_compute[226295]: 2025-11-29 08:01:07.706 226310 DEBUG nova.network.neutron [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Updating instance_info_cache with network_info: [{"id": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "address": "fa:16:3e:d7:5a:51", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fb5a22-c3", "ovs_interfaceid": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:01:07 np0005539564 nova_compute[226295]: 2025-11-29 08:01:07.736 226310 DEBUG oslo_concurrency.lockutils [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Releasing lock "refresh_cache-179785bb-7a72-4a5f-b2a6-ba2b4a10cba2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:01:07 np0005539564 nova_compute[226295]: 2025-11-29 08:01:07.738 226310 DEBUG nova.compute.manager [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.035 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:08 np0005539564 kernel: tap31fb5a22-c3 (unregistering): left promiscuous mode
Nov 29 03:01:08 np0005539564 NetworkManager[48997]: <info>  [1764403268.2243] device (tap31fb5a22-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:01:08 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:08Z|00175|binding|INFO|Releasing lport 31fb5a22-c39e-440e-b8e6-681b1ea8baed from this chassis (sb_readonly=0)
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.243 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:08 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:08Z|00176|binding|INFO|Setting lport 31fb5a22-c39e-440e-b8e6-681b1ea8baed down in Southbound
Nov 29 03:01:08 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:08Z|00177|binding|INFO|Removing iface tap31fb5a22-c3 ovn-installed in OVS
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.247 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:08.252 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:5a:51 10.100.0.7'], port_security=['fa:16:3e:d7:5a:51 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '179785bb-7a72-4a5f-b2a6-ba2b4a10cba2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2063759-3e65-4e4b-b3aa-6d737d865479', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6fce027870d041328a9b9968bfe90665', 'neutron:revision_number': '5', 'neutron:security_group_ids': '90056085-c762-483e-89c7-ac78dc504f10 ab4b8eb5-f93a-42e6-b64e-9e0384452452', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbb6126a-1e4d-4a00-9500-8124c46f02a3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=31fb5a22-c39e-440e-b8e6-681b1ea8baed) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:01:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:08.254 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 31fb5a22-c39e-440e-b8e6-681b1ea8baed in datapath b2063759-3e65-4e4b-b3aa-6d737d865479 unbound from our chassis
Nov 29 03:01:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:08.257 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b2063759-3e65-4e4b-b3aa-6d737d865479, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:01:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:08.259 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[cb0ebd13-5087-40e3-a86a-2caef4b10289]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:01:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:08.260 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479 namespace which is not needed anymore
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.276 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:08 np0005539564 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Nov 29 03:01:08 np0005539564 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000003e.scope: Consumed 10.140s CPU time.
Nov 29 03:01:08 np0005539564 systemd-machined[190128]: Machine qemu-27-instance-0000003e terminated.
Nov 29 03:01:08 np0005539564 neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479[249932]: [NOTICE]   (249936) : haproxy version is 2.8.14-c23fe91
Nov 29 03:01:08 np0005539564 neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479[249932]: [NOTICE]   (249936) : path to executable is /usr/sbin/haproxy
Nov 29 03:01:08 np0005539564 neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479[249932]: [WARNING]  (249936) : Exiting Master process...
Nov 29 03:01:08 np0005539564 neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479[249932]: [WARNING]  (249936) : Exiting Master process...
Nov 29 03:01:08 np0005539564 neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479[249932]: [ALERT]    (249936) : Current worker (249938) exited with code 143 (Terminated)
Nov 29 03:01:08 np0005539564 neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479[249932]: [WARNING]  (249936) : All workers exited. Exiting... (0)
Nov 29 03:01:08 np0005539564 systemd[1]: libpod-7a9b0d804ba16e65aec732ae2fb5990d0590aa3e6ae9451663b672bcf16abef0.scope: Deactivated successfully.
Nov 29 03:01:08 np0005539564 podman[250033]: 2025-11-29 08:01:08.468223187 +0000 UTC m=+0.056259952 container died 7a9b0d804ba16e65aec732ae2fb5990d0590aa3e6ae9451663b672bcf16abef0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:01:08 np0005539564 kernel: tap31fb5a22-c3: entered promiscuous mode
Nov 29 03:01:08 np0005539564 NetworkManager[48997]: <info>  [1764403268.4918] manager: (tap31fb5a22-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/96)
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.492 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:08 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:08Z|00178|binding|INFO|Claiming lport 31fb5a22-c39e-440e-b8e6-681b1ea8baed for this chassis.
Nov 29 03:01:08 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:08Z|00179|binding|INFO|31fb5a22-c39e-440e-b8e6-681b1ea8baed: Claiming fa:16:3e:d7:5a:51 10.100.0.7
Nov 29 03:01:08 np0005539564 kernel: tap31fb5a22-c3 (unregistering): left promiscuous mode
Nov 29 03:01:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:08.504 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:5a:51 10.100.0.7'], port_security=['fa:16:3e:d7:5a:51 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '179785bb-7a72-4a5f-b2a6-ba2b4a10cba2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2063759-3e65-4e4b-b3aa-6d737d865479', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6fce027870d041328a9b9968bfe90665', 'neutron:revision_number': '5', 'neutron:security_group_ids': '90056085-c762-483e-89c7-ac78dc504f10 ab4b8eb5-f93a-42e6-b64e-9e0384452452', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbb6126a-1e4d-4a00-9500-8124c46f02a3, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=31fb5a22-c39e-440e-b8e6-681b1ea8baed) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:01:08 np0005539564 systemd[1]: var-lib-containers-storage-overlay-3cb4a89f8d6f5146e7dda4c0aff439a6a458ca7d1e603bd911fd521859e806e7-merged.mount: Deactivated successfully.
Nov 29 03:01:08 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7a9b0d804ba16e65aec732ae2fb5990d0590aa3e6ae9451663b672bcf16abef0-userdata-shm.mount: Deactivated successfully.
Nov 29 03:01:08 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:08Z|00180|binding|INFO|Setting lport 31fb5a22-c39e-440e-b8e6-681b1ea8baed ovn-installed in OVS
Nov 29 03:01:08 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:08Z|00181|binding|INFO|Setting lport 31fb5a22-c39e-440e-b8e6-681b1ea8baed up in Southbound
Nov 29 03:01:08 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:08Z|00182|binding|INFO|Releasing lport 31fb5a22-c39e-440e-b8e6-681b1ea8baed from this chassis (sb_readonly=1)
Nov 29 03:01:08 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:08Z|00183|if_status|INFO|Not setting lport 31fb5a22-c39e-440e-b8e6-681b1ea8baed down as sb is readonly
Nov 29 03:01:08 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:08Z|00184|binding|INFO|Removing iface tap31fb5a22-c3 ovn-installed in OVS
Nov 29 03:01:08 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:08Z|00185|binding|INFO|Releasing lport 31fb5a22-c39e-440e-b8e6-681b1ea8baed from this chassis (sb_readonly=0)
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.522 226310 DEBUG nova.compute.manager [req-f5d7d4c2-9726-4541-8375-57555d1c7175 req-3fa678e1-c0c4-44e1-8e0d-311e61a4f87b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Received event network-vif-unplugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:08 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:08Z|00186|binding|INFO|Setting lport 31fb5a22-c39e-440e-b8e6-681b1ea8baed down in Southbound
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.523 226310 DEBUG oslo_concurrency.lockutils [req-f5d7d4c2-9726-4541-8375-57555d1c7175 req-3fa678e1-c0c4-44e1-8e0d-311e61a4f87b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.524 226310 DEBUG oslo_concurrency.lockutils [req-f5d7d4c2-9726-4541-8375-57555d1c7175 req-3fa678e1-c0c4-44e1-8e0d-311e61a4f87b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.525 226310 DEBUG oslo_concurrency.lockutils [req-f5d7d4c2-9726-4541-8375-57555d1c7175 req-3fa678e1-c0c4-44e1-8e0d-311e61a4f87b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.526 226310 DEBUG nova.compute.manager [req-f5d7d4c2-9726-4541-8375-57555d1c7175 req-3fa678e1-c0c4-44e1-8e0d-311e61a4f87b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] No waiting events found dispatching network-vif-unplugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:08 np0005539564 podman[250033]: 2025-11-29 08:01:08.526694689 +0000 UTC m=+0.114731434 container cleanup 7a9b0d804ba16e65aec732ae2fb5990d0590aa3e6ae9451663b672bcf16abef0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.526 226310 WARNING nova.compute.manager [req-f5d7d4c2-9726-4541-8375-57555d1c7175 req-3fa678e1-c0c4-44e1-8e0d-311e61a4f87b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Received unexpected event network-vif-unplugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.527 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:08.528 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:5a:51 10.100.0.7'], port_security=['fa:16:3e:d7:5a:51 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '179785bb-7a72-4a5f-b2a6-ba2b4a10cba2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2063759-3e65-4e4b-b3aa-6d737d865479', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6fce027870d041328a9b9968bfe90665', 'neutron:revision_number': '5', 'neutron:security_group_ids': '90056085-c762-483e-89c7-ac78dc504f10 ab4b8eb5-f93a-42e6-b64e-9e0384452452', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbb6126a-1e4d-4a00-9500-8124c46f02a3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=31fb5a22-c39e-440e-b8e6-681b1ea8baed) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.533 226310 INFO nova.virt.libvirt.driver [-] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Instance destroyed successfully.#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.534 226310 DEBUG nova.objects.instance [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lazy-loading 'resources' on Instance uuid 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.537 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:08 np0005539564 systemd[1]: libpod-conmon-7a9b0d804ba16e65aec732ae2fb5990d0590aa3e6ae9451663b672bcf16abef0.scope: Deactivated successfully.
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.550 226310 DEBUG nova.virt.libvirt.vif [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:00:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-2089001117',display_name='tempest-SecurityGroupsTestJSON-server-2089001117',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-2089001117',id=62,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:00:58Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6fce027870d041328a9b9968bfe90665',ramdisk_id='',reservation_id='r-a4qh74ua',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1868555561',owner_user_name='tempest-SecurityGroupsTestJSON-1868555561-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:01:07Z,user_data=None,user_id='8d4e5ab1ae494327abcb3693ba332586',uuid=179785bb-7a72-4a5f-b2a6-ba2b4a10cba2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "address": "fa:16:3e:d7:5a:51", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fb5a22-c3", "ovs_interfaceid": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.551 226310 DEBUG nova.network.os_vif_util [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Converting VIF {"id": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "address": "fa:16:3e:d7:5a:51", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fb5a22-c3", "ovs_interfaceid": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.555 226310 DEBUG nova.network.os_vif_util [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d7:5a:51,bridge_name='br-int',has_traffic_filtering=True,id=31fb5a22-c39e-440e-b8e6-681b1ea8baed,network=Network(b2063759-3e65-4e4b-b3aa-6d737d865479),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fb5a22-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.556 226310 DEBUG os_vif [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:5a:51,bridge_name='br-int',has_traffic_filtering=True,id=31fb5a22-c39e-440e-b8e6-681b1ea8baed,network=Network(b2063759-3e65-4e4b-b3aa-6d737d865479),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fb5a22-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.557 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.558 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31fb5a22-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.559 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.560 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.560 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.563 226310 INFO os_vif [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:5a:51,bridge_name='br-int',has_traffic_filtering=True,id=31fb5a22-c39e-440e-b8e6-681b1ea8baed,network=Network(b2063759-3e65-4e4b-b3aa-6d737d865479),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fb5a22-c3')#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.573 226310 DEBUG nova.virt.libvirt.driver [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Start _get_guest_xml network_info=[{"id": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "address": "fa:16:3e:d7:5a:51", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fb5a22-c3", "ovs_interfaceid": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.580 226310 WARNING nova.virt.libvirt.driver [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.588 226310 DEBUG nova.virt.libvirt.host [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.589 226310 DEBUG nova.virt.libvirt.host [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.593 226310 DEBUG nova.virt.libvirt.host [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.593 226310 DEBUG nova.virt.libvirt.host [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.595 226310 DEBUG nova.virt.libvirt.driver [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.595 226310 DEBUG nova.virt.hardware [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.596 226310 DEBUG nova.virt.hardware [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.597 226310 DEBUG nova.virt.hardware [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.597 226310 DEBUG nova.virt.hardware [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.598 226310 DEBUG nova.virt.hardware [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.598 226310 DEBUG nova.virt.hardware [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.598 226310 DEBUG nova.virt.hardware [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.599 226310 DEBUG nova.virt.hardware [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.599 226310 DEBUG nova.virt.hardware [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.600 226310 DEBUG nova.virt.hardware [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.600 226310 DEBUG nova.virt.hardware [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.600 226310 DEBUG nova.objects.instance [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:08 np0005539564 podman[250065]: 2025-11-29 08:01:08.617877706 +0000 UTC m=+0.049187702 container remove 7a9b0d804ba16e65aec732ae2fb5990d0590aa3e6ae9451663b672bcf16abef0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.619 226310 DEBUG oslo_concurrency.processutils [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:08.626 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[917f44fa-80b3-4365-b45a-f015577ee6fc]: (4, ('Sat Nov 29 08:01:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479 (7a9b0d804ba16e65aec732ae2fb5990d0590aa3e6ae9451663b672bcf16abef0)\n7a9b0d804ba16e65aec732ae2fb5990d0590aa3e6ae9451663b672bcf16abef0\nSat Nov 29 08:01:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479 (7a9b0d804ba16e65aec732ae2fb5990d0590aa3e6ae9451663b672bcf16abef0)\n7a9b0d804ba16e65aec732ae2fb5990d0590aa3e6ae9451663b672bcf16abef0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:08.629 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d419a698-b52b-4b75-8e05-77ea22191edf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:08.630 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2063759-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:08 np0005539564 kernel: tapb2063759-30: left promiscuous mode
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.655 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:08 np0005539564 nova_compute[226295]: 2025-11-29 08:01:08.663 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:08.667 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[12e56f90-dc3b-4f47-83ba-a1537b6ed077]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:08.683 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[cb4fc268-099f-47cd-a0b0-715634a02be4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:08.684 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2ef19288-bda0-4407-86e3-9239e91eb420]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:08.708 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[131eecd9-3f1c-4593-bdd8-485dc6b10b5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622963, 'reachable_time': 30067, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250081, 'error': None, 'target': 'ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:08 np0005539564 systemd[1]: run-netns-ovnmeta\x2db2063759\x2d3e65\x2d4e4b\x2db3aa\x2d6d737d865479.mount: Deactivated successfully.
Nov 29 03:01:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:08.713 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:01:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:08.714 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[269328ea-692f-432b-9b10-d989a12db692]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:08.715 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 31fb5a22-c39e-440e-b8e6-681b1ea8baed in datapath b2063759-3e65-4e4b-b3aa-6d737d865479 unbound from our chassis#033[00m
Nov 29 03:01:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:08.718 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b2063759-3e65-4e4b-b3aa-6d737d865479, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:01:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:08.719 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[210da22a-5bdd-48ff-94cb-d77c3cdd072d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:08.720 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 31fb5a22-c39e-440e-b8e6-681b1ea8baed in datapath b2063759-3e65-4e4b-b3aa-6d737d865479 unbound from our chassis#033[00m
Nov 29 03:01:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:08.723 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b2063759-3e65-4e4b-b3aa-6d737d865479, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:01:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:08.723 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4cf4f7b6-1921-48c5-bcd0-de0f08114eeb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:01:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:08.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:01:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:08.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:01:09 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/299655272' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:01:09 np0005539564 nova_compute[226295]: 2025-11-29 08:01:09.088 226310 DEBUG oslo_concurrency.processutils [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:09 np0005539564 nova_compute[226295]: 2025-11-29 08:01:09.133 226310 DEBUG oslo_concurrency.processutils [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:01:09 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/951801490' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:01:09 np0005539564 nova_compute[226295]: 2025-11-29 08:01:09.590 226310 DEBUG oslo_concurrency.processutils [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:09 np0005539564 nova_compute[226295]: 2025-11-29 08:01:09.594 226310 DEBUG nova.virt.libvirt.vif [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:00:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-2089001117',display_name='tempest-SecurityGroupsTestJSON-server-2089001117',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-2089001117',id=62,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:00:58Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6fce027870d041328a9b9968bfe90665',ramdisk_id='',reservation_id='r-a4qh74ua',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1868555561',owner_user_name='tempest-SecurityGroupsTestJSON-1868555561-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:01:07Z,user_data=None,user_id='8d4e5ab1ae494327abcb3693ba332586',uuid=179785bb-7a72-4a5f-b2a6-ba2b4a10cba2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "address": "fa:16:3e:d7:5a:51", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fb5a22-c3", "ovs_interfaceid": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:01:09 np0005539564 nova_compute[226295]: 2025-11-29 08:01:09.595 226310 DEBUG nova.network.os_vif_util [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Converting VIF {"id": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "address": "fa:16:3e:d7:5a:51", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fb5a22-c3", "ovs_interfaceid": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:09 np0005539564 nova_compute[226295]: 2025-11-29 08:01:09.597 226310 DEBUG nova.network.os_vif_util [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d7:5a:51,bridge_name='br-int',has_traffic_filtering=True,id=31fb5a22-c39e-440e-b8e6-681b1ea8baed,network=Network(b2063759-3e65-4e4b-b3aa-6d737d865479),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fb5a22-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:09 np0005539564 nova_compute[226295]: 2025-11-29 08:01:09.600 226310 DEBUG nova.objects.instance [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lazy-loading 'pci_devices' on Instance uuid 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:09 np0005539564 nova_compute[226295]: 2025-11-29 08:01:09.624 226310 DEBUG nova.virt.libvirt.driver [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:01:09 np0005539564 nova_compute[226295]:  <uuid>179785bb-7a72-4a5f-b2a6-ba2b4a10cba2</uuid>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:  <name>instance-0000003e</name>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <nova:name>tempest-SecurityGroupsTestJSON-server-2089001117</nova:name>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:01:08</nova:creationTime>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:01:09 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:        <nova:user uuid="8d4e5ab1ae494327abcb3693ba332586">tempest-SecurityGroupsTestJSON-1868555561-project-member</nova:user>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:        <nova:project uuid="6fce027870d041328a9b9968bfe90665">tempest-SecurityGroupsTestJSON-1868555561</nova:project>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:        <nova:port uuid="31fb5a22-c39e-440e-b8e6-681b1ea8baed">
Nov 29 03:01:09 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <entry name="serial">179785bb-7a72-4a5f-b2a6-ba2b4a10cba2</entry>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <entry name="uuid">179785bb-7a72-4a5f-b2a6-ba2b4a10cba2</entry>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/179785bb-7a72-4a5f-b2a6-ba2b4a10cba2_disk">
Nov 29 03:01:09 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:01:09 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/179785bb-7a72-4a5f-b2a6-ba2b4a10cba2_disk.config">
Nov 29 03:01:09 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:01:09 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:d7:5a:51"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <target dev="tap31fb5a22-c3"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/179785bb-7a72-4a5f-b2a6-ba2b4a10cba2/console.log" append="off"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <input type="keyboard" bus="usb"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:01:09 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:01:09 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:01:09 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:01:09 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:01:09 np0005539564 nova_compute[226295]: 2025-11-29 08:01:09.626 226310 DEBUG nova.virt.libvirt.driver [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:01:09 np0005539564 nova_compute[226295]: 2025-11-29 08:01:09.627 226310 DEBUG nova.virt.libvirt.driver [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:01:09 np0005539564 nova_compute[226295]: 2025-11-29 08:01:09.629 226310 DEBUG nova.virt.libvirt.vif [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:00:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-2089001117',display_name='tempest-SecurityGroupsTestJSON-server-2089001117',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-2089001117',id=62,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:00:58Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='6fce027870d041328a9b9968bfe90665',ramdisk_id='',reservation_id='r-a4qh74ua',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='v
irtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1868555561',owner_user_name='tempest-SecurityGroupsTestJSON-1868555561-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:01:07Z,user_data=None,user_id='8d4e5ab1ae494327abcb3693ba332586',uuid=179785bb-7a72-4a5f-b2a6-ba2b4a10cba2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "address": "fa:16:3e:d7:5a:51", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fb5a22-c3", "ovs_interfaceid": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:01:09 np0005539564 nova_compute[226295]: 2025-11-29 08:01:09.629 226310 DEBUG nova.network.os_vif_util [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Converting VIF {"id": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "address": "fa:16:3e:d7:5a:51", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fb5a22-c3", "ovs_interfaceid": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:09 np0005539564 nova_compute[226295]: 2025-11-29 08:01:09.631 226310 DEBUG nova.network.os_vif_util [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d7:5a:51,bridge_name='br-int',has_traffic_filtering=True,id=31fb5a22-c39e-440e-b8e6-681b1ea8baed,network=Network(b2063759-3e65-4e4b-b3aa-6d737d865479),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fb5a22-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:09 np0005539564 nova_compute[226295]: 2025-11-29 08:01:09.631 226310 DEBUG os_vif [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:5a:51,bridge_name='br-int',has_traffic_filtering=True,id=31fb5a22-c39e-440e-b8e6-681b1ea8baed,network=Network(b2063759-3e65-4e4b-b3aa-6d737d865479),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fb5a22-c3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:01:09 np0005539564 nova_compute[226295]: 2025-11-29 08:01:09.633 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:09 np0005539564 nova_compute[226295]: 2025-11-29 08:01:09.634 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:09 np0005539564 nova_compute[226295]: 2025-11-29 08:01:09.635 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:09 np0005539564 nova_compute[226295]: 2025-11-29 08:01:09.640 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:09 np0005539564 nova_compute[226295]: 2025-11-29 08:01:09.640 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31fb5a22-c3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:09 np0005539564 nova_compute[226295]: 2025-11-29 08:01:09.641 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31fb5a22-c3, col_values=(('external_ids', {'iface-id': '31fb5a22-c39e-440e-b8e6-681b1ea8baed', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d7:5a:51', 'vm-uuid': '179785bb-7a72-4a5f-b2a6-ba2b4a10cba2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:09 np0005539564 nova_compute[226295]: 2025-11-29 08:01:09.643 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:09 np0005539564 NetworkManager[48997]: <info>  [1764403269.6449] manager: (tap31fb5a22-c3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Nov 29 03:01:09 np0005539564 nova_compute[226295]: 2025-11-29 08:01:09.646 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:01:09 np0005539564 nova_compute[226295]: 2025-11-29 08:01:09.650 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:09 np0005539564 nova_compute[226295]: 2025-11-29 08:01:09.652 226310 INFO os_vif [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:5a:51,bridge_name='br-int',has_traffic_filtering=True,id=31fb5a22-c39e-440e-b8e6-681b1ea8baed,network=Network(b2063759-3e65-4e4b-b3aa-6d737d865479),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fb5a22-c3')#033[00m
Nov 29 03:01:09 np0005539564 kernel: tap31fb5a22-c3: entered promiscuous mode
Nov 29 03:01:09 np0005539564 systemd-udevd[250012]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:01:09 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:09Z|00187|binding|INFO|Claiming lport 31fb5a22-c39e-440e-b8e6-681b1ea8baed for this chassis.
Nov 29 03:01:09 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:09Z|00188|binding|INFO|31fb5a22-c39e-440e-b8e6-681b1ea8baed: Claiming fa:16:3e:d7:5a:51 10.100.0.7
Nov 29 03:01:09 np0005539564 nova_compute[226295]: 2025-11-29 08:01:09.784 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:09 np0005539564 NetworkManager[48997]: <info>  [1764403269.7866] manager: (tap31fb5a22-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/98)
Nov 29 03:01:09 np0005539564 NetworkManager[48997]: <info>  [1764403269.8093] device (tap31fb5a22-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:01:09 np0005539564 NetworkManager[48997]: <info>  [1764403269.8108] device (tap31fb5a22-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:01:09 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:09Z|00189|binding|INFO|Setting lport 31fb5a22-c39e-440e-b8e6-681b1ea8baed ovn-installed in OVS
Nov 29 03:01:09 np0005539564 nova_compute[226295]: 2025-11-29 08:01:09.819 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:09 np0005539564 nova_compute[226295]: 2025-11-29 08:01:09.821 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:09 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:09Z|00190|binding|INFO|Setting lport 31fb5a22-c39e-440e-b8e6-681b1ea8baed up in Southbound
Nov 29 03:01:09 np0005539564 systemd-machined[190128]: New machine qemu-28-instance-0000003e.
Nov 29 03:01:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:09.825 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:5a:51 10.100.0.7'], port_security=['fa:16:3e:d7:5a:51 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '179785bb-7a72-4a5f-b2a6-ba2b4a10cba2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2063759-3e65-4e4b-b3aa-6d737d865479', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6fce027870d041328a9b9968bfe90665', 'neutron:revision_number': '7', 'neutron:security_group_ids': '90056085-c762-483e-89c7-ac78dc504f10 ab4b8eb5-f93a-42e6-b64e-9e0384452452', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbb6126a-1e4d-4a00-9500-8124c46f02a3, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=31fb5a22-c39e-440e-b8e6-681b1ea8baed) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:01:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:09.827 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 31fb5a22-c39e-440e-b8e6-681b1ea8baed in datapath b2063759-3e65-4e4b-b3aa-6d737d865479 bound to our chassis#033[00m
Nov 29 03:01:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:09.829 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b2063759-3e65-4e4b-b3aa-6d737d865479#033[00m
Nov 29 03:01:09 np0005539564 systemd[1]: Started Virtual Machine qemu-28-instance-0000003e.
Nov 29 03:01:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:09.841 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0a4bed22-3b6d-4d45-9901-bd390970fce6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:09.843 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb2063759-31 in ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:01:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:09.845 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb2063759-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:01:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:09.845 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a809b8f8-31ee-4081-b581-1c3298149bab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:09.846 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[faaa6045-7900-4e8b-9767-968d4056fa7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:09.862 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[01be17e0-2a78-43b4-83fc-beab1415d65b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:09.889 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e07a1519-128a-4893-946e-e22bc2055bb9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:09.928 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[114a733a-5c91-4241-94f9-67c59b49ea11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:09.934 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[845bca43-27d6-44d7-8d1f-3dba95ab8cea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:09 np0005539564 NetworkManager[48997]: <info>  [1764403269.9358] manager: (tapb2063759-30): new Veth device (/org/freedesktop/NetworkManager/Devices/99)
Nov 29 03:01:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:09.988 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[e67d4475-9dda-466c-b30f-22fb6397891a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:09.994 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[57e187e2-c359-4ccd-9fd4-a504fe921b7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:10 np0005539564 NetworkManager[48997]: <info>  [1764403270.0265] device (tapb2063759-30): carrier: link connected
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:10.033 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[fedc705a-353b-4074-8c4c-7f9705b7b47e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:10.057 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[395354d7-a094-4f9c-a935-5fef80ab694a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb2063759-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:76:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624470, 'reachable_time': 27860, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250188, 'error': None, 'target': 'ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:10.074 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b7ba53bf-46ea-4b15-a68d-c39710057ca1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4e:7680'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624470, 'tstamp': 624470}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250189, 'error': None, 'target': 'ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:10.094 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[94dd83a1-799f-433b-b37b-61af72cc4fb8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb2063759-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:76:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624470, 'reachable_time': 27860, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250190, 'error': None, 'target': 'ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:10.126 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ddee87ca-6e09-4770-bf4d-2194c79dbe12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:10.193 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[33215d7d-1f93-40c4-89e9-c487761f04dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:10.195 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2063759-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:10.195 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:10.196 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2063759-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.247 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:10 np0005539564 kernel: tapb2063759-30: entered promiscuous mode
Nov 29 03:01:10 np0005539564 NetworkManager[48997]: <info>  [1764403270.2481] manager: (tapb2063759-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.249 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:10.251 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb2063759-30, col_values=(('external_ids', {'iface-id': '56d6fe86-a22b-4b4c-87cc-d5e908ba5810'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.252 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:10 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:10Z|00191|binding|INFO|Releasing lport 56d6fe86-a22b-4b4c-87cc-d5e908ba5810 from this chassis (sb_readonly=0)
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.265 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:10.266 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b2063759-3e65-4e4b-b3aa-6d737d865479.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b2063759-3e65-4e4b-b3aa-6d737d865479.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:10.267 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad93c94-3eeb-47a5-a1ec-839f76513dbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:10.268 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-b2063759-3e65-4e4b-b3aa-6d737d865479
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/b2063759-3e65-4e4b-b3aa-6d737d865479.pid.haproxy
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID b2063759-3e65-4e4b-b3aa-6d737d865479
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:01:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:10.269 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479', 'env', 'PROCESS_TAG=haproxy-b2063759-3e65-4e4b-b3aa-6d737d865479', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b2063759-3e65-4e4b-b3aa-6d737d865479.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.560 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Removed pending event for 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.561 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403270.559475, 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.561 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.563 226310 DEBUG nova.compute.manager [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.568 226310 INFO nova.virt.libvirt.driver [-] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Instance rebooted successfully.#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.569 226310 DEBUG nova.compute.manager [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.593 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.596 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.620 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.621 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403270.5605958, 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.621 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] VM Started (Lifecycle Event)#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.640 226310 DEBUG oslo_concurrency.lockutils [None req-a500fb9a-940f-428b-9fde-4a4691af07a1 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.645 226310 DEBUG nova.compute.manager [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Received event network-vif-plugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.646 226310 DEBUG oslo_concurrency.lockutils [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.646 226310 DEBUG oslo_concurrency.lockutils [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.646 226310 DEBUG oslo_concurrency.lockutils [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.646 226310 DEBUG nova.compute.manager [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] No waiting events found dispatching network-vif-plugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.647 226310 WARNING nova.compute.manager [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Received unexpected event network-vif-plugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.647 226310 DEBUG nova.compute.manager [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Received event network-vif-plugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.647 226310 DEBUG oslo_concurrency.lockutils [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.647 226310 DEBUG oslo_concurrency.lockutils [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.648 226310 DEBUG oslo_concurrency.lockutils [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.648 226310 DEBUG nova.compute.manager [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] No waiting events found dispatching network-vif-plugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.648 226310 WARNING nova.compute.manager [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Received unexpected event network-vif-plugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.648 226310 DEBUG nova.compute.manager [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Received event network-vif-plugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.648 226310 DEBUG oslo_concurrency.lockutils [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.649 226310 DEBUG oslo_concurrency.lockutils [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.649 226310 DEBUG oslo_concurrency.lockutils [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.649 226310 DEBUG nova.compute.manager [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] No waiting events found dispatching network-vif-plugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.649 226310 WARNING nova.compute.manager [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Received unexpected event network-vif-plugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.649 226310 DEBUG nova.compute.manager [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Received event network-vif-unplugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.650 226310 DEBUG oslo_concurrency.lockutils [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.650 226310 DEBUG oslo_concurrency.lockutils [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.650 226310 DEBUG oslo_concurrency.lockutils [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.650 226310 DEBUG nova.compute.manager [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] No waiting events found dispatching network-vif-unplugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.650 226310 WARNING nova.compute.manager [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Received unexpected event network-vif-unplugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.652 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:10 np0005539564 nova_compute[226295]: 2025-11-29 08:01:10.656 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:01:10 np0005539564 podman[250264]: 2025-11-29 08:01:10.745733168 +0000 UTC m=+0.077405544 container create bd2e52594888ed76688a11ea32d54eb979b43eb90b3115dd963940841191b9da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 03:01:10 np0005539564 systemd[1]: Started libpod-conmon-bd2e52594888ed76688a11ea32d54eb979b43eb90b3115dd963940841191b9da.scope.
Nov 29 03:01:10 np0005539564 podman[250264]: 2025-11-29 08:01:10.702211452 +0000 UTC m=+0.033883888 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:01:10 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:01:10 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a392a1a28df7d20b8535f22020076dabdf637105c6f4eaa8f2ccdd1eea70c326/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:01:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:10.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:10 np0005539564 podman[250264]: 2025-11-29 08:01:10.842764204 +0000 UTC m=+0.174436550 container init bd2e52594888ed76688a11ea32d54eb979b43eb90b3115dd963940841191b9da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:01:10 np0005539564 podman[250264]: 2025-11-29 08:01:10.851404947 +0000 UTC m=+0.183077283 container start bd2e52594888ed76688a11ea32d54eb979b43eb90b3115dd963940841191b9da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:01:10 np0005539564 neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479[250282]: [NOTICE]   (250321) : New worker (250334) forked
Nov 29 03:01:10 np0005539564 neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479[250282]: [NOTICE]   (250321) : Loading success.
Nov 29 03:01:10 np0005539564 podman[250279]: 2025-11-29 08:01:10.877193625 +0000 UTC m=+0.074501776 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:01:10 np0005539564 podman[250278]: 2025-11-29 08:01:10.905149451 +0000 UTC m=+0.111710733 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd)
Nov 29 03:01:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:10.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:10 np0005539564 podman[250277]: 2025-11-29 08:01:10.941382361 +0000 UTC m=+0.141944910 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:01:11 np0005539564 nova_compute[226295]: 2025-11-29 08:01:11.217 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:12.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:12 np0005539564 nova_compute[226295]: 2025-11-29 08:01:12.880 226310 DEBUG nova.compute.manager [req-23e85367-732f-4a10-8d76-7072b01699a9 req-c8a00adf-78b4-4588-9ed6-e0377c324827 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Received event network-vif-plugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:12 np0005539564 nova_compute[226295]: 2025-11-29 08:01:12.882 226310 DEBUG oslo_concurrency.lockutils [req-23e85367-732f-4a10-8d76-7072b01699a9 req-c8a00adf-78b4-4588-9ed6-e0377c324827 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:12 np0005539564 nova_compute[226295]: 2025-11-29 08:01:12.882 226310 DEBUG oslo_concurrency.lockutils [req-23e85367-732f-4a10-8d76-7072b01699a9 req-c8a00adf-78b4-4588-9ed6-e0377c324827 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:12 np0005539564 nova_compute[226295]: 2025-11-29 08:01:12.883 226310 DEBUG oslo_concurrency.lockutils [req-23e85367-732f-4a10-8d76-7072b01699a9 req-c8a00adf-78b4-4588-9ed6-e0377c324827 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:12 np0005539564 nova_compute[226295]: 2025-11-29 08:01:12.883 226310 DEBUG nova.compute.manager [req-23e85367-732f-4a10-8d76-7072b01699a9 req-c8a00adf-78b4-4588-9ed6-e0377c324827 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] No waiting events found dispatching network-vif-plugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:12 np0005539564 nova_compute[226295]: 2025-11-29 08:01:12.883 226310 WARNING nova.compute.manager [req-23e85367-732f-4a10-8d76-7072b01699a9 req-c8a00adf-78b4-4588-9ed6-e0377c324827 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Received unexpected event network-vif-plugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed for instance with vm_state active and task_state None.#033[00m
Nov 29 03:01:12 np0005539564 nova_compute[226295]: 2025-11-29 08:01:12.884 226310 DEBUG nova.compute.manager [req-23e85367-732f-4a10-8d76-7072b01699a9 req-c8a00adf-78b4-4588-9ed6-e0377c324827 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Received event network-vif-plugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:12 np0005539564 nova_compute[226295]: 2025-11-29 08:01:12.884 226310 DEBUG oslo_concurrency.lockutils [req-23e85367-732f-4a10-8d76-7072b01699a9 req-c8a00adf-78b4-4588-9ed6-e0377c324827 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:12 np0005539564 nova_compute[226295]: 2025-11-29 08:01:12.884 226310 DEBUG oslo_concurrency.lockutils [req-23e85367-732f-4a10-8d76-7072b01699a9 req-c8a00adf-78b4-4588-9ed6-e0377c324827 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:12 np0005539564 nova_compute[226295]: 2025-11-29 08:01:12.885 226310 DEBUG oslo_concurrency.lockutils [req-23e85367-732f-4a10-8d76-7072b01699a9 req-c8a00adf-78b4-4588-9ed6-e0377c324827 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:12 np0005539564 nova_compute[226295]: 2025-11-29 08:01:12.885 226310 DEBUG nova.compute.manager [req-23e85367-732f-4a10-8d76-7072b01699a9 req-c8a00adf-78b4-4588-9ed6-e0377c324827 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] No waiting events found dispatching network-vif-plugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:12 np0005539564 nova_compute[226295]: 2025-11-29 08:01:12.885 226310 WARNING nova.compute.manager [req-23e85367-732f-4a10-8d76-7072b01699a9 req-c8a00adf-78b4-4588-9ed6-e0377c324827 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Received unexpected event network-vif-plugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed for instance with vm_state active and task_state None.#033[00m
Nov 29 03:01:12 np0005539564 nova_compute[226295]: 2025-11-29 08:01:12.886 226310 DEBUG nova.compute.manager [req-23e85367-732f-4a10-8d76-7072b01699a9 req-c8a00adf-78b4-4588-9ed6-e0377c324827 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Received event network-vif-plugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:12 np0005539564 nova_compute[226295]: 2025-11-29 08:01:12.886 226310 DEBUG oslo_concurrency.lockutils [req-23e85367-732f-4a10-8d76-7072b01699a9 req-c8a00adf-78b4-4588-9ed6-e0377c324827 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:12 np0005539564 nova_compute[226295]: 2025-11-29 08:01:12.886 226310 DEBUG oslo_concurrency.lockutils [req-23e85367-732f-4a10-8d76-7072b01699a9 req-c8a00adf-78b4-4588-9ed6-e0377c324827 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:12 np0005539564 nova_compute[226295]: 2025-11-29 08:01:12.887 226310 DEBUG oslo_concurrency.lockutils [req-23e85367-732f-4a10-8d76-7072b01699a9 req-c8a00adf-78b4-4588-9ed6-e0377c324827 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:12 np0005539564 nova_compute[226295]: 2025-11-29 08:01:12.887 226310 DEBUG nova.compute.manager [req-23e85367-732f-4a10-8d76-7072b01699a9 req-c8a00adf-78b4-4588-9ed6-e0377c324827 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] No waiting events found dispatching network-vif-plugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:12 np0005539564 nova_compute[226295]: 2025-11-29 08:01:12.887 226310 WARNING nova.compute.manager [req-23e85367-732f-4a10-8d76-7072b01699a9 req-c8a00adf-78b4-4588-9ed6-e0377c324827 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Received unexpected event network-vif-plugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed for instance with vm_state active and task_state None.#033[00m
Nov 29 03:01:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:12.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:14 np0005539564 nova_compute[226295]: 2025-11-29 08:01:14.645 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:14 np0005539564 nova_compute[226295]: 2025-11-29 08:01:14.678 226310 DEBUG nova.compute.manager [req-c80b3b72-07aa-4c12-ba9d-3137f4a37eb7 req-944d71cf-4f9c-490a-b408-81f896220a17 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Received event network-changed-31fb5a22-c39e-440e-b8e6-681b1ea8baed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:14 np0005539564 nova_compute[226295]: 2025-11-29 08:01:14.678 226310 DEBUG nova.compute.manager [req-c80b3b72-07aa-4c12-ba9d-3137f4a37eb7 req-944d71cf-4f9c-490a-b408-81f896220a17 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Refreshing instance network info cache due to event network-changed-31fb5a22-c39e-440e-b8e6-681b1ea8baed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:01:14 np0005539564 nova_compute[226295]: 2025-11-29 08:01:14.679 226310 DEBUG oslo_concurrency.lockutils [req-c80b3b72-07aa-4c12-ba9d-3137f4a37eb7 req-944d71cf-4f9c-490a-b408-81f896220a17 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-179785bb-7a72-4a5f-b2a6-ba2b4a10cba2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:01:14 np0005539564 nova_compute[226295]: 2025-11-29 08:01:14.679 226310 DEBUG oslo_concurrency.lockutils [req-c80b3b72-07aa-4c12-ba9d-3137f4a37eb7 req-944d71cf-4f9c-490a-b408-81f896220a17 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-179785bb-7a72-4a5f-b2a6-ba2b4a10cba2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:01:14 np0005539564 nova_compute[226295]: 2025-11-29 08:01:14.680 226310 DEBUG nova.network.neutron [req-c80b3b72-07aa-4c12-ba9d-3137f4a37eb7 req-944d71cf-4f9c-490a-b408-81f896220a17 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Refreshing network info cache for port 31fb5a22-c39e-440e-b8e6-681b1ea8baed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:01:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:14.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:14.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:15 np0005539564 nova_compute[226295]: 2025-11-29 08:01:15.707 226310 DEBUG oslo_concurrency.lockutils [None req-7a4c3e51-6791-489a-9a10-3cb35d9411c3 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Acquiring lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:15 np0005539564 nova_compute[226295]: 2025-11-29 08:01:15.708 226310 DEBUG oslo_concurrency.lockutils [None req-7a4c3e51-6791-489a-9a10-3cb35d9411c3 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:15 np0005539564 nova_compute[226295]: 2025-11-29 08:01:15.708 226310 DEBUG oslo_concurrency.lockutils [None req-7a4c3e51-6791-489a-9a10-3cb35d9411c3 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Acquiring lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:15 np0005539564 nova_compute[226295]: 2025-11-29 08:01:15.709 226310 DEBUG oslo_concurrency.lockutils [None req-7a4c3e51-6791-489a-9a10-3cb35d9411c3 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:15 np0005539564 nova_compute[226295]: 2025-11-29 08:01:15.709 226310 DEBUG oslo_concurrency.lockutils [None req-7a4c3e51-6791-489a-9a10-3cb35d9411c3 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:15 np0005539564 nova_compute[226295]: 2025-11-29 08:01:15.710 226310 INFO nova.compute.manager [None req-7a4c3e51-6791-489a-9a10-3cb35d9411c3 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Terminating instance#033[00m
Nov 29 03:01:15 np0005539564 nova_compute[226295]: 2025-11-29 08:01:15.712 226310 DEBUG nova.compute.manager [None req-7a4c3e51-6791-489a-9a10-3cb35d9411c3 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:01:15 np0005539564 kernel: tap31fb5a22-c3 (unregistering): left promiscuous mode
Nov 29 03:01:15 np0005539564 NetworkManager[48997]: <info>  [1764403275.9948] device (tap31fb5a22-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:01:16 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:16Z|00192|binding|INFO|Releasing lport 31fb5a22-c39e-440e-b8e6-681b1ea8baed from this chassis (sb_readonly=0)
Nov 29 03:01:16 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:16Z|00193|binding|INFO|Setting lport 31fb5a22-c39e-440e-b8e6-681b1ea8baed down in Southbound
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.003 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:16 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:16Z|00194|binding|INFO|Removing iface tap31fb5a22-c3 ovn-installed in OVS
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.006 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:16.010 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:5a:51 10.100.0.7'], port_security=['fa:16:3e:d7:5a:51 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '179785bb-7a72-4a5f-b2a6-ba2b4a10cba2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2063759-3e65-4e4b-b3aa-6d737d865479', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6fce027870d041328a9b9968bfe90665', 'neutron:revision_number': '10', 'neutron:security_group_ids': '5340cc5f-917b-4548-adf8-4cf28483b4be 90056085-c762-483e-89c7-ac78dc504f10 ab4b8eb5-f93a-42e6-b64e-9e0384452452', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbb6126a-1e4d-4a00-9500-8124c46f02a3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=31fb5a22-c39e-440e-b8e6-681b1ea8baed) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:01:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:16.012 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 31fb5a22-c39e-440e-b8e6-681b1ea8baed in datapath b2063759-3e65-4e4b-b3aa-6d737d865479 unbound from our chassis#033[00m
Nov 29 03:01:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:16.014 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b2063759-3e65-4e4b-b3aa-6d737d865479, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:01:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:16.015 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6bacb9a6-a8e7-4e70-86ac-9c124bed1be6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:16.016 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479 namespace which is not needed anymore#033[00m
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.028 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:16 np0005539564 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Nov 29 03:01:16 np0005539564 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000003e.scope: Consumed 6.151s CPU time.
Nov 29 03:01:16 np0005539564 systemd-machined[190128]: Machine qemu-28-instance-0000003e terminated.
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.146 226310 INFO nova.virt.libvirt.driver [-] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Instance destroyed successfully.#033[00m
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.148 226310 DEBUG nova.objects.instance [None req-7a4c3e51-6791-489a-9a10-3cb35d9411c3 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lazy-loading 'resources' on Instance uuid 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.165 226310 DEBUG nova.virt.libvirt.vif [None req-7a4c3e51-6791-489a-9a10-3cb35d9411c3 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:00:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-2089001117',display_name='tempest-SecurityGroupsTestJSON-server-2089001117',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-2089001117',id=62,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:00:58Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6fce027870d041328a9b9968bfe90665',ramdisk_id='',reservation_id='r-a4qh74ua',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1868555561',owner_user_name='tempest-SecurityGroupsTestJSON-1868555561-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:01:10Z,user_data=None,user_id='8d4e5ab1ae494327abcb3693ba332586',uuid=179785bb-7a72-4a5f-b2a6-ba2b4a10cba2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "address": "fa:16:3e:d7:5a:51", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fb5a22-c3", "ovs_interfaceid": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.165 226310 DEBUG nova.network.os_vif_util [None req-7a4c3e51-6791-489a-9a10-3cb35d9411c3 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Converting VIF {"id": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "address": "fa:16:3e:d7:5a:51", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fb5a22-c3", "ovs_interfaceid": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.166 226310 DEBUG nova.network.os_vif_util [None req-7a4c3e51-6791-489a-9a10-3cb35d9411c3 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d7:5a:51,bridge_name='br-int',has_traffic_filtering=True,id=31fb5a22-c39e-440e-b8e6-681b1ea8baed,network=Network(b2063759-3e65-4e4b-b3aa-6d737d865479),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fb5a22-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.167 226310 DEBUG os_vif [None req-7a4c3e51-6791-489a-9a10-3cb35d9411c3 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:5a:51,bridge_name='br-int',has_traffic_filtering=True,id=31fb5a22-c39e-440e-b8e6-681b1ea8baed,network=Network(b2063759-3e65-4e4b-b3aa-6d737d865479),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fb5a22-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.170 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.171 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31fb5a22-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.173 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.175 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.177 226310 INFO os_vif [None req-7a4c3e51-6791-489a-9a10-3cb35d9411c3 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:5a:51,bridge_name='br-int',has_traffic_filtering=True,id=31fb5a22-c39e-440e-b8e6-681b1ea8baed,network=Network(b2063759-3e65-4e4b-b3aa-6d737d865479),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fb5a22-c3')#033[00m
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.225 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:16 np0005539564 neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479[250282]: [NOTICE]   (250321) : haproxy version is 2.8.14-c23fe91
Nov 29 03:01:16 np0005539564 neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479[250282]: [NOTICE]   (250321) : path to executable is /usr/sbin/haproxy
Nov 29 03:01:16 np0005539564 neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479[250282]: [WARNING]  (250321) : Exiting Master process...
Nov 29 03:01:16 np0005539564 neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479[250282]: [WARNING]  (250321) : Exiting Master process...
Nov 29 03:01:16 np0005539564 neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479[250282]: [ALERT]    (250321) : Current worker (250334) exited with code 143 (Terminated)
Nov 29 03:01:16 np0005539564 neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479[250282]: [WARNING]  (250321) : All workers exited. Exiting... (0)
Nov 29 03:01:16 np0005539564 systemd[1]: libpod-bd2e52594888ed76688a11ea32d54eb979b43eb90b3115dd963940841191b9da.scope: Deactivated successfully.
Nov 29 03:01:16 np0005539564 podman[250381]: 2025-11-29 08:01:16.241642392 +0000 UTC m=+0.108253849 container died bd2e52594888ed76688a11ea32d54eb979b43eb90b3115dd963940841191b9da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:01:16 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bd2e52594888ed76688a11ea32d54eb979b43eb90b3115dd963940841191b9da-userdata-shm.mount: Deactivated successfully.
Nov 29 03:01:16 np0005539564 systemd[1]: var-lib-containers-storage-overlay-a392a1a28df7d20b8535f22020076dabdf637105c6f4eaa8f2ccdd1eea70c326-merged.mount: Deactivated successfully.
Nov 29 03:01:16 np0005539564 podman[250381]: 2025-11-29 08:01:16.320874455 +0000 UTC m=+0.187485872 container cleanup bd2e52594888ed76688a11ea32d54eb979b43eb90b3115dd963940841191b9da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:01:16 np0005539564 systemd[1]: libpod-conmon-bd2e52594888ed76688a11ea32d54eb979b43eb90b3115dd963940841191b9da.scope: Deactivated successfully.
Nov 29 03:01:16 np0005539564 podman[250440]: 2025-11-29 08:01:16.39974781 +0000 UTC m=+0.046477279 container remove bd2e52594888ed76688a11ea32d54eb979b43eb90b3115dd963940841191b9da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:01:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:16.405 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3940af36-8798-4c46-a289-d6118d3936f1]: (4, ('Sat Nov 29 08:01:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479 (bd2e52594888ed76688a11ea32d54eb979b43eb90b3115dd963940841191b9da)\nbd2e52594888ed76688a11ea32d54eb979b43eb90b3115dd963940841191b9da\nSat Nov 29 08:01:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479 (bd2e52594888ed76688a11ea32d54eb979b43eb90b3115dd963940841191b9da)\nbd2e52594888ed76688a11ea32d54eb979b43eb90b3115dd963940841191b9da\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:16.407 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[35639bf2-c359-472c-8198-1c5ce3d55b75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:16.408 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2063759-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.410 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:16 np0005539564 kernel: tapb2063759-30: left promiscuous mode
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.427 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:16.430 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1f73a77b-1ea5-467a-893e-409078adb212]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:16.451 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9c5a644e-a803-4d6f-9e13-d3cb8021f7f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:16.453 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[aff0c414-8f48-45d2-bc44-dad3b2f0fbe7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:16.472 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a9135734-439e-4dff-a1be-20b0e4e4c613]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624460, 'reachable_time': 20420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250454, 'error': None, 'target': 'ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:16 np0005539564 systemd[1]: run-netns-ovnmeta\x2db2063759\x2d3e65\x2d4e4b\x2db3aa\x2d6d737d865479.mount: Deactivated successfully.
Nov 29 03:01:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:16.477 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:01:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:16.478 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[96f56024-79be-455a-9a22-1fd4cefbac4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.649 226310 DEBUG nova.compute.manager [req-3c7a9dd1-2f63-4757-9386-635955416dda req-f2a02aaa-0b80-44b0-b1a3-cb82c240cad2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Received event network-vif-unplugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.649 226310 DEBUG oslo_concurrency.lockutils [req-3c7a9dd1-2f63-4757-9386-635955416dda req-f2a02aaa-0b80-44b0-b1a3-cb82c240cad2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.649 226310 DEBUG oslo_concurrency.lockutils [req-3c7a9dd1-2f63-4757-9386-635955416dda req-f2a02aaa-0b80-44b0-b1a3-cb82c240cad2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.650 226310 DEBUG oslo_concurrency.lockutils [req-3c7a9dd1-2f63-4757-9386-635955416dda req-f2a02aaa-0b80-44b0-b1a3-cb82c240cad2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.650 226310 DEBUG nova.compute.manager [req-3c7a9dd1-2f63-4757-9386-635955416dda req-f2a02aaa-0b80-44b0-b1a3-cb82c240cad2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] No waiting events found dispatching network-vif-unplugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.650 226310 DEBUG nova.compute.manager [req-3c7a9dd1-2f63-4757-9386-635955416dda req-f2a02aaa-0b80-44b0-b1a3-cb82c240cad2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Received event network-vif-unplugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.692 226310 INFO nova.virt.libvirt.driver [None req-7a4c3e51-6791-489a-9a10-3cb35d9411c3 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Deleting instance files /var/lib/nova/instances/179785bb-7a72-4a5f-b2a6-ba2b4a10cba2_del#033[00m
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.693 226310 INFO nova.virt.libvirt.driver [None req-7a4c3e51-6791-489a-9a10-3cb35d9411c3 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Deletion of /var/lib/nova/instances/179785bb-7a72-4a5f-b2a6-ba2b4a10cba2_del complete#033[00m
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.774 226310 INFO nova.compute.manager [None req-7a4c3e51-6791-489a-9a10-3cb35d9411c3 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Took 1.06 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.775 226310 DEBUG oslo.service.loopingcall [None req-7a4c3e51-6791-489a-9a10-3cb35d9411c3 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.775 226310 DEBUG nova.compute.manager [-] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:01:16 np0005539564 nova_compute[226295]: 2025-11-29 08:01:16.775 226310 DEBUG nova.network.neutron [-] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:01:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:16.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:16.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:17 np0005539564 nova_compute[226295]: 2025-11-29 08:01:17.143 226310 DEBUG nova.network.neutron [req-c80b3b72-07aa-4c12-ba9d-3137f4a37eb7 req-944d71cf-4f9c-490a-b408-81f896220a17 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Updated VIF entry in instance network info cache for port 31fb5a22-c39e-440e-b8e6-681b1ea8baed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:01:17 np0005539564 nova_compute[226295]: 2025-11-29 08:01:17.144 226310 DEBUG nova.network.neutron [req-c80b3b72-07aa-4c12-ba9d-3137f4a37eb7 req-944d71cf-4f9c-490a-b408-81f896220a17 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Updating instance_info_cache with network_info: [{"id": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "address": "fa:16:3e:d7:5a:51", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fb5a22-c3", "ovs_interfaceid": "31fb5a22-c39e-440e-b8e6-681b1ea8baed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:17 np0005539564 nova_compute[226295]: 2025-11-29 08:01:17.160 226310 DEBUG oslo_concurrency.lockutils [req-c80b3b72-07aa-4c12-ba9d-3137f4a37eb7 req-944d71cf-4f9c-490a-b408-81f896220a17 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-179785bb-7a72-4a5f-b2a6-ba2b4a10cba2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:01:17 np0005539564 nova_compute[226295]: 2025-11-29 08:01:17.625 226310 DEBUG nova.network.neutron [-] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:17 np0005539564 nova_compute[226295]: 2025-11-29 08:01:17.643 226310 INFO nova.compute.manager [-] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Took 0.87 seconds to deallocate network for instance.#033[00m
Nov 29 03:01:17 np0005539564 nova_compute[226295]: 2025-11-29 08:01:17.695 226310 DEBUG oslo_concurrency.lockutils [None req-7a4c3e51-6791-489a-9a10-3cb35d9411c3 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:17 np0005539564 nova_compute[226295]: 2025-11-29 08:01:17.696 226310 DEBUG oslo_concurrency.lockutils [None req-7a4c3e51-6791-489a-9a10-3cb35d9411c3 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:17 np0005539564 nova_compute[226295]: 2025-11-29 08:01:17.772 226310 DEBUG oslo_concurrency.processutils [None req-7a4c3e51-6791-489a-9a10-3cb35d9411c3 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:01:18 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1927726628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:01:18 np0005539564 nova_compute[226295]: 2025-11-29 08:01:18.261 226310 DEBUG oslo_concurrency.processutils [None req-7a4c3e51-6791-489a-9a10-3cb35d9411c3 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:18 np0005539564 nova_compute[226295]: 2025-11-29 08:01:18.271 226310 DEBUG nova.compute.provider_tree [None req-7a4c3e51-6791-489a-9a10-3cb35d9411c3 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:01:18 np0005539564 nova_compute[226295]: 2025-11-29 08:01:18.295 226310 DEBUG nova.scheduler.client.report [None req-7a4c3e51-6791-489a-9a10-3cb35d9411c3 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:01:18 np0005539564 nova_compute[226295]: 2025-11-29 08:01:18.332 226310 DEBUG oslo_concurrency.lockutils [None req-7a4c3e51-6791-489a-9a10-3cb35d9411c3 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:18 np0005539564 nova_compute[226295]: 2025-11-29 08:01:18.386 226310 INFO nova.scheduler.client.report [None req-7a4c3e51-6791-489a-9a10-3cb35d9411c3 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Deleted allocations for instance 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2#033[00m
Nov 29 03:01:18 np0005539564 nova_compute[226295]: 2025-11-29 08:01:18.476 226310 DEBUG oslo_concurrency.lockutils [None req-7a4c3e51-6791-489a-9a10-3cb35d9411c3 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:18 np0005539564 nova_compute[226295]: 2025-11-29 08:01:18.819 226310 DEBUG nova.compute.manager [req-15607edf-b027-4bda-8421-1859b35add38 req-ecc3e286-9713-4ec2-b417-2c6597b923f2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Received event network-vif-plugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:18 np0005539564 nova_compute[226295]: 2025-11-29 08:01:18.820 226310 DEBUG oslo_concurrency.lockutils [req-15607edf-b027-4bda-8421-1859b35add38 req-ecc3e286-9713-4ec2-b417-2c6597b923f2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:18 np0005539564 nova_compute[226295]: 2025-11-29 08:01:18.821 226310 DEBUG oslo_concurrency.lockutils [req-15607edf-b027-4bda-8421-1859b35add38 req-ecc3e286-9713-4ec2-b417-2c6597b923f2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:18 np0005539564 nova_compute[226295]: 2025-11-29 08:01:18.822 226310 DEBUG oslo_concurrency.lockutils [req-15607edf-b027-4bda-8421-1859b35add38 req-ecc3e286-9713-4ec2-b417-2c6597b923f2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "179785bb-7a72-4a5f-b2a6-ba2b4a10cba2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:18 np0005539564 nova_compute[226295]: 2025-11-29 08:01:18.822 226310 DEBUG nova.compute.manager [req-15607edf-b027-4bda-8421-1859b35add38 req-ecc3e286-9713-4ec2-b417-2c6597b923f2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] No waiting events found dispatching network-vif-plugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:18 np0005539564 nova_compute[226295]: 2025-11-29 08:01:18.822 226310 WARNING nova.compute.manager [req-15607edf-b027-4bda-8421-1859b35add38 req-ecc3e286-9713-4ec2-b417-2c6597b923f2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Received unexpected event network-vif-plugged-31fb5a22-c39e-440e-b8e6-681b1ea8baed for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:01:18 np0005539564 nova_compute[226295]: 2025-11-29 08:01:18.844 226310 DEBUG nova.compute.manager [req-1ad30eed-3ee9-4d38-b7c8-01d708308e8e req-fcb63804-2d20-48f2-aa57-2dcad2fe4b38 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Received event network-vif-deleted-31fb5a22-c39e-440e-b8e6-681b1ea8baed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:18.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:18.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:20.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:20.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:21 np0005539564 nova_compute[226295]: 2025-11-29 08:01:21.175 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:21 np0005539564 nova_compute[226295]: 2025-11-29 08:01:21.221 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e244 e244: 3 total, 3 up, 3 in
Nov 29 03:01:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:01:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:22.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:01:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:01:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:22.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:01:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:24.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:01:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:24.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:01:26 np0005539564 nova_compute[226295]: 2025-11-29 08:01:26.177 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:26 np0005539564 nova_compute[226295]: 2025-11-29 08:01:26.203 226310 DEBUG oslo_concurrency.lockutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "441c7c0c-4457-414d-8c62-68dda0364b56" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:26 np0005539564 nova_compute[226295]: 2025-11-29 08:01:26.204 226310 DEBUG oslo_concurrency.lockutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "441c7c0c-4457-414d-8c62-68dda0364b56" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:26 np0005539564 nova_compute[226295]: 2025-11-29 08:01:26.223 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:26 np0005539564 nova_compute[226295]: 2025-11-29 08:01:26.226 226310 DEBUG nova.compute.manager [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:01:26 np0005539564 nova_compute[226295]: 2025-11-29 08:01:26.297 226310 DEBUG oslo_concurrency.lockutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:26 np0005539564 nova_compute[226295]: 2025-11-29 08:01:26.298 226310 DEBUG oslo_concurrency.lockutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:26 np0005539564 nova_compute[226295]: 2025-11-29 08:01:26.305 226310 DEBUG nova.virt.hardware [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:01:26 np0005539564 nova_compute[226295]: 2025-11-29 08:01:26.305 226310 INFO nova.compute.claims [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:01:26 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Nov 29 03:01:26 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:01:26.322367) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:01:26 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Nov 29 03:01:26 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403286322421, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1539, "num_deletes": 511, "total_data_size": 2681835, "memory_usage": 2736192, "flush_reason": "Manual Compaction"}
Nov 29 03:01:26 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Nov 29 03:01:26 np0005539564 nova_compute[226295]: 2025-11-29 08:01:26.397 226310 DEBUG oslo_concurrency.processutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:26 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403286607806, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 1183926, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34317, "largest_seqno": 35851, "table_properties": {"data_size": 1178372, "index_size": 2309, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17020, "raw_average_key_size": 20, "raw_value_size": 1164511, "raw_average_value_size": 1370, "num_data_blocks": 101, "num_entries": 850, "num_filter_entries": 850, "num_deletions": 511, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403191, "oldest_key_time": 1764403191, "file_creation_time": 1764403286, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:01:26 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 285521 microseconds, and 8667 cpu microseconds.
Nov 29 03:01:26 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:01:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:26.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:26 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:01:26.607881) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 1183926 bytes OK
Nov 29 03:01:26 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:01:26.607958) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Nov 29 03:01:26 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:01:26.917647) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Nov 29 03:01:26 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:01:26.917721) EVENT_LOG_v1 {"time_micros": 1764403286917702, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:01:26 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:01:26.917755) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:01:26 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 2673551, prev total WAL file size 2689620, number of live WAL files 2.
Nov 29 03:01:26 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:01:26 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:01:26.919776) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303130' seq:72057594037927935, type:22 .. '6D6772737461740031323631' seq:0, type:0; will stop at (end)
Nov 29 03:01:26 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:01:26 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(1156KB)], [63(11MB)]
Nov 29 03:01:26 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403286919825, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 12887931, "oldest_snapshot_seqno": -1}
Nov 29 03:01:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:26.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:27 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 6414 keys, 9454039 bytes, temperature: kUnknown
Nov 29 03:01:27 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403287087029, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 9454039, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9411695, "index_size": 25210, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16069, "raw_key_size": 165970, "raw_average_key_size": 25, "raw_value_size": 9296899, "raw_average_value_size": 1449, "num_data_blocks": 1003, "num_entries": 6414, "num_filter_entries": 6414, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764403286, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:01:27 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:01:27 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:01:27.087343) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9454039 bytes
Nov 29 03:01:27 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:01:27.090121) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 77.0 rd, 56.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 11.2 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(18.9) write-amplify(8.0) OK, records in: 7417, records dropped: 1003 output_compression: NoCompression
Nov 29 03:01:27 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:01:27.090147) EVENT_LOG_v1 {"time_micros": 1764403287090133, "job": 38, "event": "compaction_finished", "compaction_time_micros": 167290, "compaction_time_cpu_micros": 23636, "output_level": 6, "num_output_files": 1, "total_output_size": 9454039, "num_input_records": 7417, "num_output_records": 6414, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:01:27 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:01:27 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403287090556, "job": 38, "event": "table_file_deletion", "file_number": 65}
Nov 29 03:01:27 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:01:27 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403287095253, "job": 38, "event": "table_file_deletion", "file_number": 63}
Nov 29 03:01:27 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:01:26.919557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:01:27 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:01:27.095414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:01:27 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:01:27.095428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:01:27 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:01:27.095432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:01:27 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:01:27.095437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:01:27 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:01:27.095442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:01:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:01:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3977386029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:01:27 np0005539564 nova_compute[226295]: 2025-11-29 08:01:27.193 226310 DEBUG oslo_concurrency.processutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.795s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:27 np0005539564 nova_compute[226295]: 2025-11-29 08:01:27.200 226310 DEBUG nova.compute.provider_tree [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:01:27 np0005539564 nova_compute[226295]: 2025-11-29 08:01:27.216 226310 DEBUG nova.scheduler.client.report [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:01:27 np0005539564 nova_compute[226295]: 2025-11-29 08:01:27.249 226310 DEBUG oslo_concurrency.lockutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:27 np0005539564 nova_compute[226295]: 2025-11-29 08:01:27.250 226310 DEBUG nova.compute.manager [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:01:27 np0005539564 nova_compute[226295]: 2025-11-29 08:01:27.302 226310 DEBUG nova.compute.manager [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:01:27 np0005539564 nova_compute[226295]: 2025-11-29 08:01:27.303 226310 DEBUG nova.network.neutron [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:01:27 np0005539564 nova_compute[226295]: 2025-11-29 08:01:27.340 226310 INFO nova.virt.libvirt.driver [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:01:27 np0005539564 nova_compute[226295]: 2025-11-29 08:01:27.358 226310 DEBUG nova.compute.manager [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:01:27 np0005539564 nova_compute[226295]: 2025-11-29 08:01:27.525 226310 DEBUG nova.compute.manager [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:01:27 np0005539564 nova_compute[226295]: 2025-11-29 08:01:27.527 226310 DEBUG nova.virt.libvirt.driver [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:01:27 np0005539564 nova_compute[226295]: 2025-11-29 08:01:27.527 226310 INFO nova.virt.libvirt.driver [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Creating image(s)#033[00m
Nov 29 03:01:27 np0005539564 nova_compute[226295]: 2025-11-29 08:01:27.564 226310 DEBUG nova.storage.rbd_utils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 441c7c0c-4457-414d-8c62-68dda0364b56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:27 np0005539564 nova_compute[226295]: 2025-11-29 08:01:27.600 226310 DEBUG nova.storage.rbd_utils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 441c7c0c-4457-414d-8c62-68dda0364b56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:27 np0005539564 nova_compute[226295]: 2025-11-29 08:01:27.634 226310 DEBUG nova.storage.rbd_utils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 441c7c0c-4457-414d-8c62-68dda0364b56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:27 np0005539564 nova_compute[226295]: 2025-11-29 08:01:27.638 226310 DEBUG oslo_concurrency.processutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:27 np0005539564 nova_compute[226295]: 2025-11-29 08:01:27.674 226310 DEBUG nova.policy [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f7d59bea260d4752aa29379967636c0b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4d8c5b7e3ca74bc1880eb616b04711f7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:01:27 np0005539564 nova_compute[226295]: 2025-11-29 08:01:27.716 226310 DEBUG oslo_concurrency.processutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:27 np0005539564 nova_compute[226295]: 2025-11-29 08:01:27.717 226310 DEBUG oslo_concurrency.lockutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:27 np0005539564 nova_compute[226295]: 2025-11-29 08:01:27.718 226310 DEBUG oslo_concurrency.lockutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:27 np0005539564 nova_compute[226295]: 2025-11-29 08:01:27.719 226310 DEBUG oslo_concurrency.lockutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:27 np0005539564 nova_compute[226295]: 2025-11-29 08:01:27.756 226310 DEBUG nova.storage.rbd_utils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 441c7c0c-4457-414d-8c62-68dda0364b56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:27 np0005539564 nova_compute[226295]: 2025-11-29 08:01:27.762 226310 DEBUG oslo_concurrency.processutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 441c7c0c-4457-414d-8c62-68dda0364b56_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:01:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1018320197' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:01:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:01:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1018320197' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:01:28 np0005539564 nova_compute[226295]: 2025-11-29 08:01:28.128 226310 DEBUG oslo_concurrency.processutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 441c7c0c-4457-414d-8c62-68dda0364b56_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.366s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:28 np0005539564 nova_compute[226295]: 2025-11-29 08:01:28.250 226310 DEBUG nova.storage.rbd_utils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] resizing rbd image 441c7c0c-4457-414d-8c62-68dda0364b56_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:01:28 np0005539564 nova_compute[226295]: 2025-11-29 08:01:28.405 226310 DEBUG nova.objects.instance [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lazy-loading 'migration_context' on Instance uuid 441c7c0c-4457-414d-8c62-68dda0364b56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:28 np0005539564 nova_compute[226295]: 2025-11-29 08:01:28.458 226310 DEBUG nova.virt.libvirt.driver [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:01:28 np0005539564 nova_compute[226295]: 2025-11-29 08:01:28.459 226310 DEBUG nova.virt.libvirt.driver [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Ensure instance console log exists: /var/lib/nova/instances/441c7c0c-4457-414d-8c62-68dda0364b56/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:01:28 np0005539564 nova_compute[226295]: 2025-11-29 08:01:28.460 226310 DEBUG oslo_concurrency.lockutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:28 np0005539564 nova_compute[226295]: 2025-11-29 08:01:28.460 226310 DEBUG oslo_concurrency.lockutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:28 np0005539564 nova_compute[226295]: 2025-11-29 08:01:28.460 226310 DEBUG oslo_concurrency.lockutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:28 np0005539564 nova_compute[226295]: 2025-11-29 08:01:28.537 226310 DEBUG nova.network.neutron [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Successfully created port: 7a721704-a73d-481b-865f-39574594741f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:01:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:28.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:28.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:29 np0005539564 nova_compute[226295]: 2025-11-29 08:01:29.703 226310 DEBUG nova.network.neutron [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Successfully updated port: 7a721704-a73d-481b-865f-39574594741f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:01:29 np0005539564 nova_compute[226295]: 2025-11-29 08:01:29.958 226310 DEBUG oslo_concurrency.lockutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "refresh_cache-441c7c0c-4457-414d-8c62-68dda0364b56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:01:29 np0005539564 nova_compute[226295]: 2025-11-29 08:01:29.958 226310 DEBUG oslo_concurrency.lockutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquired lock "refresh_cache-441c7c0c-4457-414d-8c62-68dda0364b56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:01:29 np0005539564 nova_compute[226295]: 2025-11-29 08:01:29.959 226310 DEBUG nova.network.neutron [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:01:29 np0005539564 nova_compute[226295]: 2025-11-29 08:01:29.965 226310 DEBUG nova.compute.manager [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Received event network-changed-7a721704-a73d-481b-865f-39574594741f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:29 np0005539564 nova_compute[226295]: 2025-11-29 08:01:29.965 226310 DEBUG nova.compute.manager [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Refreshing instance network info cache due to event network-changed-7a721704-a73d-481b-865f-39574594741f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:01:29 np0005539564 nova_compute[226295]: 2025-11-29 08:01:29.965 226310 DEBUG oslo_concurrency.lockutils [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-441c7c0c-4457-414d-8c62-68dda0364b56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:01:29 np0005539564 nova_compute[226295]: 2025-11-29 08:01:29.971 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:29.971 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:01:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:29.973 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:01:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:29.973 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:30 np0005539564 nova_compute[226295]: 2025-11-29 08:01:30.342 226310 DEBUG nova.network.neutron [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:01:30 np0005539564 nova_compute[226295]: 2025-11-29 08:01:30.356 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e245 e245: 3 total, 3 up, 3 in
Nov 29 03:01:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:30.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:30.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:31 np0005539564 nova_compute[226295]: 2025-11-29 08:01:31.145 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403276.1434872, 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:01:31 np0005539564 nova_compute[226295]: 2025-11-29 08:01:31.145 226310 INFO nova.compute.manager [-] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:01:31 np0005539564 nova_compute[226295]: 2025-11-29 08:01:31.174 226310 DEBUG nova.compute.manager [None req-7f65945b-e684-4b8b-bd80-a57a1768d77a - - - - - -] [instance: 179785bb-7a72-4a5f-b2a6-ba2b4a10cba2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:31 np0005539564 nova_compute[226295]: 2025-11-29 08:01:31.180 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:31 np0005539564 nova_compute[226295]: 2025-11-29 08:01:31.263 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:32 np0005539564 nova_compute[226295]: 2025-11-29 08:01:32.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:32 np0005539564 nova_compute[226295]: 2025-11-29 08:01:32.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:01:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:32.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:32.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:33 np0005539564 nova_compute[226295]: 2025-11-29 08:01:33.459 226310 DEBUG nova.network.neutron [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Updating instance_info_cache with network_info: [{"id": "7a721704-a73d-481b-865f-39574594741f", "address": "fa:16:3e:53:83:da", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a721704-a7", "ovs_interfaceid": "7a721704-a73d-481b-865f-39574594741f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:33 np0005539564 nova_compute[226295]: 2025-11-29 08:01:33.499 226310 DEBUG oslo_concurrency.lockutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Releasing lock "refresh_cache-441c7c0c-4457-414d-8c62-68dda0364b56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:01:33 np0005539564 nova_compute[226295]: 2025-11-29 08:01:33.500 226310 DEBUG nova.compute.manager [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Instance network_info: |[{"id": "7a721704-a73d-481b-865f-39574594741f", "address": "fa:16:3e:53:83:da", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a721704-a7", "ovs_interfaceid": "7a721704-a73d-481b-865f-39574594741f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:01:33 np0005539564 nova_compute[226295]: 2025-11-29 08:01:33.500 226310 DEBUG oslo_concurrency.lockutils [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-441c7c0c-4457-414d-8c62-68dda0364b56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:01:33 np0005539564 nova_compute[226295]: 2025-11-29 08:01:33.501 226310 DEBUG nova.network.neutron [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Refreshing network info cache for port 7a721704-a73d-481b-865f-39574594741f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:01:33 np0005539564 nova_compute[226295]: 2025-11-29 08:01:33.503 226310 DEBUG nova.virt.libvirt.driver [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Start _get_guest_xml network_info=[{"id": "7a721704-a73d-481b-865f-39574594741f", "address": "fa:16:3e:53:83:da", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a721704-a7", "ovs_interfaceid": "7a721704-a73d-481b-865f-39574594741f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:01:33 np0005539564 nova_compute[226295]: 2025-11-29 08:01:33.508 226310 WARNING nova.virt.libvirt.driver [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:01:33 np0005539564 nova_compute[226295]: 2025-11-29 08:01:33.523 226310 DEBUG nova.virt.libvirt.host [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:01:33 np0005539564 nova_compute[226295]: 2025-11-29 08:01:33.523 226310 DEBUG nova.virt.libvirt.host [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:01:33 np0005539564 nova_compute[226295]: 2025-11-29 08:01:33.526 226310 DEBUG nova.virt.libvirt.host [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:01:33 np0005539564 nova_compute[226295]: 2025-11-29 08:01:33.527 226310 DEBUG nova.virt.libvirt.host [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:01:33 np0005539564 nova_compute[226295]: 2025-11-29 08:01:33.528 226310 DEBUG nova.virt.libvirt.driver [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:01:33 np0005539564 nova_compute[226295]: 2025-11-29 08:01:33.528 226310 DEBUG nova.virt.hardware [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:01:33 np0005539564 nova_compute[226295]: 2025-11-29 08:01:33.528 226310 DEBUG nova.virt.hardware [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:01:33 np0005539564 nova_compute[226295]: 2025-11-29 08:01:33.529 226310 DEBUG nova.virt.hardware [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:01:33 np0005539564 nova_compute[226295]: 2025-11-29 08:01:33.529 226310 DEBUG nova.virt.hardware [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:01:33 np0005539564 nova_compute[226295]: 2025-11-29 08:01:33.529 226310 DEBUG nova.virt.hardware [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:01:33 np0005539564 nova_compute[226295]: 2025-11-29 08:01:33.529 226310 DEBUG nova.virt.hardware [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:01:33 np0005539564 nova_compute[226295]: 2025-11-29 08:01:33.529 226310 DEBUG nova.virt.hardware [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:01:33 np0005539564 nova_compute[226295]: 2025-11-29 08:01:33.530 226310 DEBUG nova.virt.hardware [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:01:33 np0005539564 nova_compute[226295]: 2025-11-29 08:01:33.530 226310 DEBUG nova.virt.hardware [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:01:33 np0005539564 nova_compute[226295]: 2025-11-29 08:01:33.530 226310 DEBUG nova.virt.hardware [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:01:33 np0005539564 nova_compute[226295]: 2025-11-29 08:01:33.530 226310 DEBUG nova.virt.hardware [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:01:33 np0005539564 nova_compute[226295]: 2025-11-29 08:01:33.533 226310 DEBUG oslo_concurrency.processutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:01:33 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1470695734' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:01:34 np0005539564 nova_compute[226295]: 2025-11-29 08:01:34.013 226310 DEBUG oslo_concurrency.processutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:34 np0005539564 nova_compute[226295]: 2025-11-29 08:01:34.049 226310 DEBUG nova.storage.rbd_utils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 441c7c0c-4457-414d-8c62-68dda0364b56_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:34 np0005539564 nova_compute[226295]: 2025-11-29 08:01:34.054 226310 DEBUG oslo_concurrency.processutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:34 np0005539564 nova_compute[226295]: 2025-11-29 08:01:34.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:34 np0005539564 nova_compute[226295]: 2025-11-29 08:01:34.534 226310 DEBUG oslo_concurrency.processutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:34 np0005539564 nova_compute[226295]: 2025-11-29 08:01:34.537 226310 DEBUG nova.virt.libvirt.vif [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:01:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-314491992',display_name='tempest-ImagesTestJSON-server-314491992',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-314491992',id=65,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d8c5b7e3ca74bc1880eb616b04711f7',ramdisk_id='',reservation_id='r-20tvm0jd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-911260095',owner_user_name='tempest-ImagesTestJSON-911260095-project-member'},tags=TagL
ist,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:01:27Z,user_data=None,user_id='f7d59bea260d4752aa29379967636c0b',uuid=441c7c0c-4457-414d-8c62-68dda0364b56,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7a721704-a73d-481b-865f-39574594741f", "address": "fa:16:3e:53:83:da", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a721704-a7", "ovs_interfaceid": "7a721704-a73d-481b-865f-39574594741f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:01:34 np0005539564 nova_compute[226295]: 2025-11-29 08:01:34.537 226310 DEBUG nova.network.os_vif_util [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converting VIF {"id": "7a721704-a73d-481b-865f-39574594741f", "address": "fa:16:3e:53:83:da", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a721704-a7", "ovs_interfaceid": "7a721704-a73d-481b-865f-39574594741f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:34 np0005539564 nova_compute[226295]: 2025-11-29 08:01:34.539 226310 DEBUG nova.network.os_vif_util [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:83:da,bridge_name='br-int',has_traffic_filtering=True,id=7a721704-a73d-481b-865f-39574594741f,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a721704-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:34 np0005539564 nova_compute[226295]: 2025-11-29 08:01:34.541 226310 DEBUG nova.objects.instance [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 441c7c0c-4457-414d-8c62-68dda0364b56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:01:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 6799 writes, 35K keys, 6799 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s#012Cumulative WAL: 6799 writes, 6799 syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1639 writes, 8538 keys, 1639 commit groups, 1.0 writes per commit group, ingest: 16.52 MB, 0.03 MB/s#012Interval WAL: 1639 writes, 1639 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.8      3.21              0.15        19    0.169       0      0       0.0       0.0#012  L6      1/0    9.02 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   3.6     32.0     26.2      6.09              0.53        18    0.338     98K    10K       0.0       0.0#012 Sum      1/0    9.02 MB   0.0      0.2     0.0      0.1       0.2      0.1       0.0   4.6     21.0     21.9      9.30              0.68        37    0.251     98K    10K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.2     30.8     30.3      1.89              0.20        10    0.189     33K   3553       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   0.0     32.0     26.2      6.09              0.53        18    0.338     98K    10K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.8      3.21              0.15        18    0.178       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.0 total, 600.0 interval#012Flush(GB): cumulative 0.043, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.20 GB write, 0.07 MB/s write, 0.19 GB read, 0.07 MB/s read, 9.3 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558dc73191f0#2 capacity: 304.00 MB usage: 22.19 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000203 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1200,21.43 MB,7.04941%) FilterBlock(37,284.73 KB,0.0914674%) IndexBlock(37,493.89 KB,0.158656%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 03:01:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:34.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:34.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:35 np0005539564 nova_compute[226295]: 2025-11-29 08:01:35.345 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:35 np0005539564 nova_compute[226295]: 2025-11-29 08:01:35.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:01:35 np0005539564 nova_compute[226295]: 2025-11-29 08:01:35.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:01:35 np0005539564 nova_compute[226295]: 2025-11-29 08:01:35.600 226310 DEBUG nova.virt.libvirt.driver [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:01:35 np0005539564 nova_compute[226295]:  <uuid>441c7c0c-4457-414d-8c62-68dda0364b56</uuid>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:  <name>instance-00000041</name>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <nova:name>tempest-ImagesTestJSON-server-314491992</nova:name>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:01:33</nova:creationTime>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:01:35 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:        <nova:user uuid="f7d59bea260d4752aa29379967636c0b">tempest-ImagesTestJSON-911260095-project-member</nova:user>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:        <nova:project uuid="4d8c5b7e3ca74bc1880eb616b04711f7">tempest-ImagesTestJSON-911260095</nova:project>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:        <nova:port uuid="7a721704-a73d-481b-865f-39574594741f">
Nov 29 03:01:35 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <entry name="serial">441c7c0c-4457-414d-8c62-68dda0364b56</entry>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <entry name="uuid">441c7c0c-4457-414d-8c62-68dda0364b56</entry>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/441c7c0c-4457-414d-8c62-68dda0364b56_disk">
Nov 29 03:01:35 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:01:35 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/441c7c0c-4457-414d-8c62-68dda0364b56_disk.config">
Nov 29 03:01:35 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:01:35 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:53:83:da"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <target dev="tap7a721704-a7"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/441c7c0c-4457-414d-8c62-68dda0364b56/console.log" append="off"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:01:35 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:01:35 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:01:35 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:01:35 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:01:35 np0005539564 nova_compute[226295]: 2025-11-29 08:01:35.602 226310 DEBUG nova.compute.manager [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Preparing to wait for external event network-vif-plugged-7a721704-a73d-481b-865f-39574594741f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:01:35 np0005539564 nova_compute[226295]: 2025-11-29 08:01:35.602 226310 DEBUG oslo_concurrency.lockutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "441c7c0c-4457-414d-8c62-68dda0364b56-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:35 np0005539564 nova_compute[226295]: 2025-11-29 08:01:35.603 226310 DEBUG oslo_concurrency.lockutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "441c7c0c-4457-414d-8c62-68dda0364b56-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:35 np0005539564 nova_compute[226295]: 2025-11-29 08:01:35.603 226310 DEBUG oslo_concurrency.lockutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "441c7c0c-4457-414d-8c62-68dda0364b56-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:35 np0005539564 nova_compute[226295]: 2025-11-29 08:01:35.604 226310 DEBUG nova.virt.libvirt.vif [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:01:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-314491992',display_name='tempest-ImagesTestJSON-server-314491992',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-314491992',id=65,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d8c5b7e3ca74bc1880eb616b04711f7',ramdisk_id='',reservation_id='r-20tvm0jd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-911260095',owner_user_name='tempest-ImagesTestJSON-911260095-project-member'}
,tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:01:27Z,user_data=None,user_id='f7d59bea260d4752aa29379967636c0b',uuid=441c7c0c-4457-414d-8c62-68dda0364b56,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7a721704-a73d-481b-865f-39574594741f", "address": "fa:16:3e:53:83:da", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a721704-a7", "ovs_interfaceid": "7a721704-a73d-481b-865f-39574594741f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:01:35 np0005539564 nova_compute[226295]: 2025-11-29 08:01:35.605 226310 DEBUG nova.network.os_vif_util [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converting VIF {"id": "7a721704-a73d-481b-865f-39574594741f", "address": "fa:16:3e:53:83:da", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a721704-a7", "ovs_interfaceid": "7a721704-a73d-481b-865f-39574594741f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:35 np0005539564 nova_compute[226295]: 2025-11-29 08:01:35.606 226310 DEBUG nova.network.os_vif_util [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:83:da,bridge_name='br-int',has_traffic_filtering=True,id=7a721704-a73d-481b-865f-39574594741f,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a721704-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:35 np0005539564 nova_compute[226295]: 2025-11-29 08:01:35.606 226310 DEBUG os_vif [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:83:da,bridge_name='br-int',has_traffic_filtering=True,id=7a721704-a73d-481b-865f-39574594741f,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a721704-a7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:01:35 np0005539564 nova_compute[226295]: 2025-11-29 08:01:35.607 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:35 np0005539564 nova_compute[226295]: 2025-11-29 08:01:35.608 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:35 np0005539564 nova_compute[226295]: 2025-11-29 08:01:35.609 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:35 np0005539564 nova_compute[226295]: 2025-11-29 08:01:35.613 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:01:35 np0005539564 nova_compute[226295]: 2025-11-29 08:01:35.614 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:01:35 np0005539564 nova_compute[226295]: 2025-11-29 08:01:35.615 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:35 np0005539564 nova_compute[226295]: 2025-11-29 08:01:35.616 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a721704-a7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:35 np0005539564 nova_compute[226295]: 2025-11-29 08:01:35.617 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7a721704-a7, col_values=(('external_ids', {'iface-id': '7a721704-a73d-481b-865f-39574594741f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:83:da', 'vm-uuid': '441c7c0c-4457-414d-8c62-68dda0364b56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:35 np0005539564 nova_compute[226295]: 2025-11-29 08:01:35.619 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:35 np0005539564 NetworkManager[48997]: <info>  [1764403295.6215] manager: (tap7a721704-a7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Nov 29 03:01:35 np0005539564 nova_compute[226295]: 2025-11-29 08:01:35.623 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:01:35 np0005539564 nova_compute[226295]: 2025-11-29 08:01:35.629 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:35 np0005539564 nova_compute[226295]: 2025-11-29 08:01:35.631 226310 INFO os_vif [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:83:da,bridge_name='br-int',has_traffic_filtering=True,id=7a721704-a73d-481b-865f-39574594741f,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a721704-a7')#033[00m
Nov 29 03:01:36 np0005539564 nova_compute[226295]: 2025-11-29 08:01:36.265 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:36 np0005539564 nova_compute[226295]: 2025-11-29 08:01:36.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:36 np0005539564 nova_compute[226295]: 2025-11-29 08:01:36.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:36.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:36 np0005539564 nova_compute[226295]: 2025-11-29 08:01:36.928 226310 DEBUG nova.virt.libvirt.driver [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:01:36 np0005539564 nova_compute[226295]: 2025-11-29 08:01:36.929 226310 DEBUG nova.virt.libvirt.driver [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:01:36 np0005539564 nova_compute[226295]: 2025-11-29 08:01:36.930 226310 DEBUG nova.virt.libvirt.driver [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] No VIF found with MAC fa:16:3e:53:83:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:01:36 np0005539564 nova_compute[226295]: 2025-11-29 08:01:36.931 226310 INFO nova.virt.libvirt.driver [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Using config drive#033[00m
Nov 29 03:01:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:01:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:36.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:01:37 np0005539564 nova_compute[226295]: 2025-11-29 08:01:37.867 226310 DEBUG nova.storage.rbd_utils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 441c7c0c-4457-414d-8c62-68dda0364b56_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:38 np0005539564 nova_compute[226295]: 2025-11-29 08:01:38.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:38 np0005539564 nova_compute[226295]: 2025-11-29 08:01:38.833 226310 DEBUG nova.network.neutron [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Updated VIF entry in instance network info cache for port 7a721704-a73d-481b-865f-39574594741f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:01:38 np0005539564 nova_compute[226295]: 2025-11-29 08:01:38.833 226310 DEBUG nova.network.neutron [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Updating instance_info_cache with network_info: [{"id": "7a721704-a73d-481b-865f-39574594741f", "address": "fa:16:3e:53:83:da", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a721704-a7", "ovs_interfaceid": "7a721704-a73d-481b-865f-39574594741f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:38.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:38.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:39 np0005539564 nova_compute[226295]: 2025-11-29 08:01:39.431 226310 DEBUG oslo_concurrency.lockutils [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-441c7c0c-4457-414d-8c62-68dda0364b56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:01:39 np0005539564 nova_compute[226295]: 2025-11-29 08:01:39.889 226310 INFO nova.virt.libvirt.driver [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Creating config drive at /var/lib/nova/instances/441c7c0c-4457-414d-8c62-68dda0364b56/disk.config#033[00m
Nov 29 03:01:39 np0005539564 nova_compute[226295]: 2025-11-29 08:01:39.895 226310 DEBUG oslo_concurrency.processutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/441c7c0c-4457-414d-8c62-68dda0364b56/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1eb86wq_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:40 np0005539564 nova_compute[226295]: 2025-11-29 08:01:40.033 226310 DEBUG oslo_concurrency.processutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/441c7c0c-4457-414d-8c62-68dda0364b56/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1eb86wq_" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:40 np0005539564 nova_compute[226295]: 2025-11-29 08:01:40.090 226310 DEBUG nova.storage.rbd_utils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 441c7c0c-4457-414d-8c62-68dda0364b56_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:40 np0005539564 nova_compute[226295]: 2025-11-29 08:01:40.096 226310 DEBUG oslo_concurrency.processutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/441c7c0c-4457-414d-8c62-68dda0364b56/disk.config 441c7c0c-4457-414d-8c62-68dda0364b56_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:40 np0005539564 nova_compute[226295]: 2025-11-29 08:01:40.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:40 np0005539564 nova_compute[226295]: 2025-11-29 08:01:40.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:01:40 np0005539564 nova_compute[226295]: 2025-11-29 08:01:40.620 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:40.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:40.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:41 np0005539564 nova_compute[226295]: 2025-11-29 08:01:41.304 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:41 np0005539564 podman[250788]: 2025-11-29 08:01:41.569431173 +0000 UTC m=+0.095917848 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 03:01:41 np0005539564 podman[250789]: 2025-11-29 08:01:41.592761803 +0000 UTC m=+0.111112919 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:01:41 np0005539564 podman[250787]: 2025-11-29 08:01:41.618500587 +0000 UTC m=+0.151712113 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:01:41 np0005539564 nova_compute[226295]: 2025-11-29 08:01:41.854 226310 DEBUG oslo_concurrency.processutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/441c7c0c-4457-414d-8c62-68dda0364b56/disk.config 441c7c0c-4457-414d-8c62-68dda0364b56_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.758s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:41 np0005539564 nova_compute[226295]: 2025-11-29 08:01:41.855 226310 INFO nova.virt.libvirt.driver [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Deleting local config drive /var/lib/nova/instances/441c7c0c-4457-414d-8c62-68dda0364b56/disk.config because it was imported into RBD.#033[00m
Nov 29 03:01:41 np0005539564 kernel: tap7a721704-a7: entered promiscuous mode
Nov 29 03:01:41 np0005539564 NetworkManager[48997]: <info>  [1764403301.9351] manager: (tap7a721704-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/102)
Nov 29 03:01:41 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:41Z|00195|binding|INFO|Claiming lport 7a721704-a73d-481b-865f-39574594741f for this chassis.
Nov 29 03:01:41 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:41Z|00196|binding|INFO|7a721704-a73d-481b-865f-39574594741f: Claiming fa:16:3e:53:83:da 10.100.0.4
Nov 29 03:01:41 np0005539564 nova_compute[226295]: 2025-11-29 08:01:41.936 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:41 np0005539564 nova_compute[226295]: 2025-11-29 08:01:41.939 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:41 np0005539564 nova_compute[226295]: 2025-11-29 08:01:41.942 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:41 np0005539564 systemd-udevd[250862]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:01:41 np0005539564 NetworkManager[48997]: <info>  [1764403301.9876] device (tap7a721704-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:01:41 np0005539564 NetworkManager[48997]: <info>  [1764403301.9889] device (tap7a721704-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:01:42 np0005539564 nova_compute[226295]: 2025-11-29 08:01:42.029 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:42 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:42Z|00197|binding|INFO|Setting lport 7a721704-a73d-481b-865f-39574594741f ovn-installed in OVS
Nov 29 03:01:42 np0005539564 nova_compute[226295]: 2025-11-29 08:01:42.034 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:42 np0005539564 systemd-machined[190128]: New machine qemu-29-instance-00000041.
Nov 29 03:01:42 np0005539564 systemd[1]: Started Virtual Machine qemu-29-instance-00000041.
Nov 29 03:01:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:42.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:42.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:43 np0005539564 nova_compute[226295]: 2025-11-29 08:01:43.213 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403303.21279, 441c7c0c-4457-414d-8c62-68dda0364b56 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:01:43 np0005539564 nova_compute[226295]: 2025-11-29 08:01:43.214 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] VM Started (Lifecycle Event)#033[00m
Nov 29 03:01:43 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:43Z|00198|binding|INFO|Setting lport 7a721704-a73d-481b-865f-39574594741f up in Southbound
Nov 29 03:01:43 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:43.769 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:83:da 10.100.0.4'], port_security=['fa:16:3e:53:83:da 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '441c7c0c-4457-414d-8c62-68dda0364b56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7471f45a-da60-4567-a888-2a87ff526609', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d8c5b7e3ca74bc1880eb616b04711f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'baf6db0c-e075-4519-aa02-9bbd4c984eba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bee78a1-1254-4dfe-ba24-259feeb5ade5, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=7a721704-a73d-481b-865f-39574594741f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:01:43 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:43.771 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 7a721704-a73d-481b-865f-39574594741f in datapath 7471f45a-da60-4567-a888-2a87ff526609 bound to our chassis#033[00m
Nov 29 03:01:43 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:43.773 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7471f45a-da60-4567-a888-2a87ff526609#033[00m
Nov 29 03:01:43 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:43.791 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[364de3e1-1bdd-403b-93da-9a14e546784a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:43 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:43.792 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7471f45a-d1 in ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:01:43 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:43.796 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7471f45a-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:01:43 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:43.796 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5bdbfba6-f8e0-49d6-8868-a971eb3aa923]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:43 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:43.798 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[dcd9859d-b849-4d25-abc4-29fed73b45f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:43 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:43.813 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[6b682626-88d8-4ca9-af8e-a61b5fc5cb43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:43 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:43.845 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e305c141-443b-47fa-91b8-8332024cfb37]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:43 np0005539564 nova_compute[226295]: 2025-11-29 08:01:43.850 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:01:43 np0005539564 nova_compute[226295]: 2025-11-29 08:01:43.851 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:43 np0005539564 nova_compute[226295]: 2025-11-29 08:01:43.851 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:01:43 np0005539564 nova_compute[226295]: 2025-11-29 08:01:43.859 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:43 np0005539564 nova_compute[226295]: 2025-11-29 08:01:43.865 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403303.2133636, 441c7c0c-4457-414d-8c62-68dda0364b56 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:01:43 np0005539564 nova_compute[226295]: 2025-11-29 08:01:43.865 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:01:43 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:43.894 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[9dfedd81-f49f-421d-a1fd-41d1ad096d71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:43 np0005539564 nova_compute[226295]: 2025-11-29 08:01:43.899 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:43 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:43.901 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[344dc558-0e3f-4daf-b3ca-3eb541e44434]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:43 np0005539564 NetworkManager[48997]: <info>  [1764403303.9043] manager: (tap7471f45a-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/103)
Nov 29 03:01:43 np0005539564 nova_compute[226295]: 2025-11-29 08:01:43.905 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:01:43 np0005539564 nova_compute[226295]: 2025-11-29 08:01:43.939 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:01:43 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:43.940 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[73688c99-130f-4247-8c73-89f1ce4f54d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:43 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:43.945 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[4492e168-0421-427e-a06f-5d8e041cca5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:43 np0005539564 NetworkManager[48997]: <info>  [1764403303.9777] device (tap7471f45a-d0): carrier: link connected
Nov 29 03:01:43 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:43.983 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[f37819b9-f0ee-4942-8ff6-10e24f325d20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:44.012 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[49643585-8904-403b-b4e9-f251639ec59d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7471f45a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:d7:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 627865, 'reachable_time': 20567, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250940, 'error': None, 'target': 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:44.037 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0e7bc542-5650-42be-9e8b-9424a254c097]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6d:d764'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 627865, 'tstamp': 627865}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250941, 'error': None, 'target': 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:44.060 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[098421be-ebfc-4e9d-8c56-03fcb2b91ddc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7471f45a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:d7:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 627865, 'reachable_time': 20567, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250942, 'error': None, 'target': 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:44.099 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[34a5bd00-705f-4054-9aaa-d1aba4ae88cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:44.197 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[99ce2f14-c7ea-4358-98fe-c50e2bafa30a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:44.199 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7471f45a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:44.200 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:44.201 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7471f45a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:44 np0005539564 kernel: tap7471f45a-d0: entered promiscuous mode
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.204 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:44 np0005539564 NetworkManager[48997]: <info>  [1764403304.2052] manager: (tap7471f45a-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:44.209 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7471f45a-d0, col_values=(('external_ids', {'iface-id': '06264566-5ffe-42a3-ad44-b3f54b7d79bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:44 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:44Z|00199|binding|INFO|Releasing lport 06264566-5ffe-42a3-ad44-b3f54b7d79bb from this chassis (sb_readonly=0)
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.211 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.212 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:44.213 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7471f45a-da60-4567-a888-2a87ff526609.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7471f45a-da60-4567-a888-2a87ff526609.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:44.214 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8c4d7bc2-fe52-41b6-bb10-eb0dca99ffae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:44.216 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-7471f45a-da60-4567-a888-2a87ff526609
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/7471f45a-da60-4567-a888-2a87ff526609.pid.haproxy
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 7471f45a-da60-4567-a888-2a87ff526609
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:01:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:01:44.218 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'env', 'PROCESS_TAG=haproxy-7471f45a-da60-4567-a888-2a87ff526609', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7471f45a-da60-4567-a888-2a87ff526609.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.225 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.406 226310 DEBUG nova.compute.manager [req-dc3559e1-9843-45a1-b381-2217535039c3 req-9f287068-974b-443f-988b-b4d921f592c2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Received event network-vif-plugged-7a721704-a73d-481b-865f-39574594741f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.407 226310 DEBUG oslo_concurrency.lockutils [req-dc3559e1-9843-45a1-b381-2217535039c3 req-9f287068-974b-443f-988b-b4d921f592c2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "441c7c0c-4457-414d-8c62-68dda0364b56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.407 226310 DEBUG oslo_concurrency.lockutils [req-dc3559e1-9843-45a1-b381-2217535039c3 req-9f287068-974b-443f-988b-b4d921f592c2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "441c7c0c-4457-414d-8c62-68dda0364b56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.407 226310 DEBUG oslo_concurrency.lockutils [req-dc3559e1-9843-45a1-b381-2217535039c3 req-9f287068-974b-443f-988b-b4d921f592c2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "441c7c0c-4457-414d-8c62-68dda0364b56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.407 226310 DEBUG nova.compute.manager [req-dc3559e1-9843-45a1-b381-2217535039c3 req-9f287068-974b-443f-988b-b4d921f592c2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Processing event network-vif-plugged-7a721704-a73d-481b-865f-39574594741f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.408 226310 DEBUG nova.compute.manager [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.411 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403304.411208, 441c7c0c-4457-414d-8c62-68dda0364b56 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.411 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.426 226310 DEBUG nova.virt.libvirt.driver [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.429 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.432 226310 INFO nova.virt.libvirt.driver [-] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Instance spawned successfully.#033[00m
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.432 226310 DEBUG nova.virt.libvirt.driver [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.435 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.455 226310 DEBUG nova.virt.libvirt.driver [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.455 226310 DEBUG nova.virt.libvirt.driver [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.456 226310 DEBUG nova.virt.libvirt.driver [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.456 226310 DEBUG nova.virt.libvirt.driver [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.457 226310 DEBUG nova.virt.libvirt.driver [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.458 226310 DEBUG nova.virt.libvirt.driver [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.462 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.560 226310 INFO nova.compute.manager [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Took 17.03 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.561 226310 DEBUG nova.compute.manager [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.663 226310 INFO nova.compute.manager [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Took 18.39 seconds to build instance.#033[00m
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.709 226310 DEBUG oslo_concurrency.lockutils [None req-5ed3ee19-606d-485e-bfd5-821c77f37132 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "441c7c0c-4457-414d-8c62-68dda0364b56" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:44 np0005539564 podman[250975]: 2025-11-29 08:01:44.623928595 +0000 UTC m=+0.038424417 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:01:44 np0005539564 podman[250975]: 2025-11-29 08:01:44.816489789 +0000 UTC m=+0.230985641 container create 91642ea7f648954f7a52f5cac66920cf13383548d00c096a5c30fd161fcd05a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:01:44 np0005539564 systemd[1]: Started libpod-conmon-91642ea7f648954f7a52f5cac66920cf13383548d00c096a5c30fd161fcd05a8.scope.
Nov 29 03:01:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:44.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.895 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:44 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:01:44 np0005539564 nova_compute[226295]: 2025-11-29 08:01:44.897 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:44 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/790c0bf1c55e1f5ae7b74a057c2cb6b3c66fb94008ac26f0e0868304cacaeead/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:01:44 np0005539564 podman[250975]: 2025-11-29 08:01:44.918782988 +0000 UTC m=+0.333278870 container init 91642ea7f648954f7a52f5cac66920cf13383548d00c096a5c30fd161fcd05a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:01:44 np0005539564 podman[250975]: 2025-11-29 08:01:44.928290235 +0000 UTC m=+0.342786087 container start 91642ea7f648954f7a52f5cac66920cf13383548d00c096a5c30fd161fcd05a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:01:44 np0005539564 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[250990]: [NOTICE]   (250994) : New worker (250996) forked
Nov 29 03:01:44 np0005539564 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[250990]: [NOTICE]   (250994) : Loading success.
Nov 29 03:01:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:44.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:45 np0005539564 nova_compute[226295]: 2025-11-29 08:01:45.215 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:45 np0005539564 nova_compute[226295]: 2025-11-29 08:01:45.216 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:45 np0005539564 nova_compute[226295]: 2025-11-29 08:01:45.217 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:45 np0005539564 nova_compute[226295]: 2025-11-29 08:01:45.217 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:01:45 np0005539564 nova_compute[226295]: 2025-11-29 08:01:45.218 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:45 np0005539564 nova_compute[226295]: 2025-11-29 08:01:45.666 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:01:45 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/808961810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:01:45 np0005539564 nova_compute[226295]: 2025-11-29 08:01:45.721 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:46 np0005539564 nova_compute[226295]: 2025-11-29 08:01:46.307 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:46.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:46.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:47 np0005539564 nova_compute[226295]: 2025-11-29 08:01:47.019 226310 DEBUG nova.compute.manager [req-b5030ae8-d02d-4a70-94a8-8d125949d29b req-4a44da0f-dd4c-4e7d-9e78-d209519fa442 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Received event network-vif-plugged-7a721704-a73d-481b-865f-39574594741f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:47 np0005539564 nova_compute[226295]: 2025-11-29 08:01:47.020 226310 DEBUG oslo_concurrency.lockutils [req-b5030ae8-d02d-4a70-94a8-8d125949d29b req-4a44da0f-dd4c-4e7d-9e78-d209519fa442 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "441c7c0c-4457-414d-8c62-68dda0364b56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:47 np0005539564 nova_compute[226295]: 2025-11-29 08:01:47.020 226310 DEBUG oslo_concurrency.lockutils [req-b5030ae8-d02d-4a70-94a8-8d125949d29b req-4a44da0f-dd4c-4e7d-9e78-d209519fa442 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "441c7c0c-4457-414d-8c62-68dda0364b56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:47 np0005539564 nova_compute[226295]: 2025-11-29 08:01:47.020 226310 DEBUG oslo_concurrency.lockutils [req-b5030ae8-d02d-4a70-94a8-8d125949d29b req-4a44da0f-dd4c-4e7d-9e78-d209519fa442 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "441c7c0c-4457-414d-8c62-68dda0364b56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:47 np0005539564 nova_compute[226295]: 2025-11-29 08:01:47.020 226310 DEBUG nova.compute.manager [req-b5030ae8-d02d-4a70-94a8-8d125949d29b req-4a44da0f-dd4c-4e7d-9e78-d209519fa442 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] No waiting events found dispatching network-vif-plugged-7a721704-a73d-481b-865f-39574594741f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:47 np0005539564 nova_compute[226295]: 2025-11-29 08:01:47.021 226310 WARNING nova.compute.manager [req-b5030ae8-d02d-4a70-94a8-8d125949d29b req-4a44da0f-dd4c-4e7d-9e78-d209519fa442 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Received unexpected event network-vif-plugged-7a721704-a73d-481b-865f-39574594741f for instance with vm_state active and task_state None.#033[00m
Nov 29 03:01:47 np0005539564 nova_compute[226295]: 2025-11-29 08:01:47.130 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000041 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:01:47 np0005539564 nova_compute[226295]: 2025-11-29 08:01:47.131 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000041 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:01:47 np0005539564 nova_compute[226295]: 2025-11-29 08:01:47.314 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:01:47 np0005539564 nova_compute[226295]: 2025-11-29 08:01:47.315 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4626MB free_disk=20.921955108642578GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:01:47 np0005539564 nova_compute[226295]: 2025-11-29 08:01:47.315 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:47 np0005539564 nova_compute[226295]: 2025-11-29 08:01:47.316 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:47 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:47Z|00200|binding|INFO|Releasing lport 06264566-5ffe-42a3-ad44-b3f54b7d79bb from this chassis (sb_readonly=0)
Nov 29 03:01:47 np0005539564 nova_compute[226295]: 2025-11-29 08:01:47.480 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:47 np0005539564 nova_compute[226295]: 2025-11-29 08:01:47.553 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 441c7c0c-4457-414d-8c62-68dda0364b56 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:01:47 np0005539564 nova_compute[226295]: 2025-11-29 08:01:47.554 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:01:47 np0005539564 nova_compute[226295]: 2025-11-29 08:01:47.554 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:01:47 np0005539564 nova_compute[226295]: 2025-11-29 08:01:47.683 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:01:48 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1890790093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:01:48 np0005539564 nova_compute[226295]: 2025-11-29 08:01:48.149 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:48 np0005539564 nova_compute[226295]: 2025-11-29 08:01:48.156 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:01:48 np0005539564 nova_compute[226295]: 2025-11-29 08:01:48.333 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:01:48 np0005539564 nova_compute[226295]: 2025-11-29 08:01:48.408 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:01:48 np0005539564 nova_compute[226295]: 2025-11-29 08:01:48.409 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:48.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:48.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:50 np0005539564 nova_compute[226295]: 2025-11-29 08:01:50.101 226310 DEBUG nova.compute.manager [None req-d61e8fb5-3c9d-4a1b-82eb-5c27da10c97f f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:50 np0005539564 nova_compute[226295]: 2025-11-29 08:01:50.178 226310 INFO nova.compute.manager [None req-d61e8fb5-3c9d-4a1b-82eb-5c27da10c97f f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] instance snapshotting#033[00m
Nov 29 03:01:50 np0005539564 nova_compute[226295]: 2025-11-29 08:01:50.671 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:50 np0005539564 nova_compute[226295]: 2025-11-29 08:01:50.776 226310 INFO nova.virt.libvirt.driver [None req-d61e8fb5-3c9d-4a1b-82eb-5c27da10c97f f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Beginning live snapshot process#033[00m
Nov 29 03:01:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:50.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:50.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:51 np0005539564 nova_compute[226295]: 2025-11-29 08:01:51.310 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:52 np0005539564 nova_compute[226295]: 2025-11-29 08:01:52.306 226310 DEBUG nova.virt.libvirt.imagebackend [None req-d61e8fb5-3c9d-4a1b-82eb-5c27da10c97f f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:01:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:52.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:52.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:53 np0005539564 nova_compute[226295]: 2025-11-29 08:01:53.155 226310 DEBUG nova.storage.rbd_utils [None req-d61e8fb5-3c9d-4a1b-82eb-5c27da10c97f f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] creating snapshot(235f122e94aa4c23870a5e455f7626db) on rbd image(441c7c0c-4457-414d-8c62-68dda0364b56_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:01:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e246 e246: 3 total, 3 up, 3 in
Nov 29 03:01:54 np0005539564 nova_compute[226295]: 2025-11-29 08:01:54.764 226310 DEBUG nova.storage.rbd_utils [None req-d61e8fb5-3c9d-4a1b-82eb-5c27da10c97f f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] cloning vms/441c7c0c-4457-414d-8c62-68dda0364b56_disk@235f122e94aa4c23870a5e455f7626db to images/4e6c48a5-2a39-4c6b-9d29-a38ef7a8b4b6 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:01:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:54.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:54 np0005539564 nova_compute[226295]: 2025-11-29 08:01:54.932 226310 DEBUG nova.storage.rbd_utils [None req-d61e8fb5-3c9d-4a1b-82eb-5c27da10c97f f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] flattening images/4e6c48a5-2a39-4c6b-9d29-a38ef7a8b4b6 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:01:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:54.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:55 np0005539564 nova_compute[226295]: 2025-11-29 08:01:55.279 226310 DEBUG nova.storage.rbd_utils [None req-d61e8fb5-3c9d-4a1b-82eb-5c27da10c97f f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] removing snapshot(235f122e94aa4c23870a5e455f7626db) on rbd image(441c7c0c-4457-414d-8c62-68dda0364b56_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:01:55 np0005539564 nova_compute[226295]: 2025-11-29 08:01:55.714 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e247 e247: 3 total, 3 up, 3 in
Nov 29 03:01:55 np0005539564 nova_compute[226295]: 2025-11-29 08:01:55.782 226310 DEBUG nova.storage.rbd_utils [None req-d61e8fb5-3c9d-4a1b-82eb-5c27da10c97f f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] creating snapshot(snap) on rbd image(4e6c48a5-2a39-4c6b-9d29-a38ef7a8b4b6) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:01:56 np0005539564 nova_compute[226295]: 2025-11-29 08:01:56.312 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:56 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e248 e248: 3 total, 3 up, 3 in
Nov 29 03:01:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:56.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:57.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver [None req-d61e8fb5-3c9d-4a1b-82eb-5c27da10c97f f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 4e6c48a5-2a39-4c6b-9d29-a38ef7a8b4b6 could not be found.
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 4e6c48a5-2a39-4c6b-9d29-a38ef7a8b4b6
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver 
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver 
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 4e6c48a5-2a39-4c6b-9d29-a38ef7a8b4b6 could not be found.
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.297 226310 ERROR nova.virt.libvirt.driver #033[00m
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.347 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:57 np0005539564 nova_compute[226295]: 2025-11-29 08:01:57.368 226310 DEBUG nova.storage.rbd_utils [None req-d61e8fb5-3c9d-4a1b-82eb-5c27da10c97f f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] removing snapshot(snap) on rbd image(4e6c48a5-2a39-4c6b-9d29-a38ef7a8b4b6) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:01:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e249 e249: 3 total, 3 up, 3 in
Nov 29 03:01:58 np0005539564 nova_compute[226295]: 2025-11-29 08:01:58.205 226310 WARNING nova.compute.manager [None req-d61e8fb5-3c9d-4a1b-82eb-5c27da10c97f f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Image not found during snapshot: nova.exception.ImageNotFound: Image 4e6c48a5-2a39-4c6b-9d29-a38ef7a8b4b6 could not be found.#033[00m
Nov 29 03:01:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:58.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:58 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:58Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:53:83:da 10.100.0.4
Nov 29 03:01:58 np0005539564 ovn_controller[130591]: 2025-11-29T08:01:58Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:53:83:da 10.100.0.4
Nov 29 03:01:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:01:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:59.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:00 np0005539564 nova_compute[226295]: 2025-11-29 08:02:00.718 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:00.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:01.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:01 np0005539564 nova_compute[226295]: 2025-11-29 08:02:01.316 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:01 np0005539564 nova_compute[226295]: 2025-11-29 08:02:01.720 226310 DEBUG oslo_concurrency.lockutils [None req-c8b3807d-1c2e-41cf-bc9b-3e160fccdfc1 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "441c7c0c-4457-414d-8c62-68dda0364b56" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:01 np0005539564 nova_compute[226295]: 2025-11-29 08:02:01.721 226310 DEBUG oslo_concurrency.lockutils [None req-c8b3807d-1c2e-41cf-bc9b-3e160fccdfc1 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "441c7c0c-4457-414d-8c62-68dda0364b56" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:01 np0005539564 nova_compute[226295]: 2025-11-29 08:02:01.721 226310 DEBUG oslo_concurrency.lockutils [None req-c8b3807d-1c2e-41cf-bc9b-3e160fccdfc1 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "441c7c0c-4457-414d-8c62-68dda0364b56-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:01 np0005539564 nova_compute[226295]: 2025-11-29 08:02:01.722 226310 DEBUG oslo_concurrency.lockutils [None req-c8b3807d-1c2e-41cf-bc9b-3e160fccdfc1 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "441c7c0c-4457-414d-8c62-68dda0364b56-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:01 np0005539564 nova_compute[226295]: 2025-11-29 08:02:01.722 226310 DEBUG oslo_concurrency.lockutils [None req-c8b3807d-1c2e-41cf-bc9b-3e160fccdfc1 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "441c7c0c-4457-414d-8c62-68dda0364b56-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:01 np0005539564 nova_compute[226295]: 2025-11-29 08:02:01.724 226310 INFO nova.compute.manager [None req-c8b3807d-1c2e-41cf-bc9b-3e160fccdfc1 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Terminating instance#033[00m
Nov 29 03:02:01 np0005539564 nova_compute[226295]: 2025-11-29 08:02:01.726 226310 DEBUG nova.compute.manager [None req-c8b3807d-1c2e-41cf-bc9b-3e160fccdfc1 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:02:01 np0005539564 kernel: tap7a721704-a7 (unregistering): left promiscuous mode
Nov 29 03:02:01 np0005539564 NetworkManager[48997]: <info>  [1764403321.7927] device (tap7a721704-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:02:01 np0005539564 ovn_controller[130591]: 2025-11-29T08:02:01Z|00201|binding|INFO|Releasing lport 7a721704-a73d-481b-865f-39574594741f from this chassis (sb_readonly=0)
Nov 29 03:02:01 np0005539564 ovn_controller[130591]: 2025-11-29T08:02:01Z|00202|binding|INFO|Setting lport 7a721704-a73d-481b-865f-39574594741f down in Southbound
Nov 29 03:02:01 np0005539564 ovn_controller[130591]: 2025-11-29T08:02:01Z|00203|binding|INFO|Removing iface tap7a721704-a7 ovn-installed in OVS
Nov 29 03:02:01 np0005539564 nova_compute[226295]: 2025-11-29 08:02:01.802 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:01 np0005539564 nova_compute[226295]: 2025-11-29 08:02:01.806 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:01.819 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:83:da 10.100.0.4'], port_security=['fa:16:3e:53:83:da 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '441c7c0c-4457-414d-8c62-68dda0364b56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7471f45a-da60-4567-a888-2a87ff526609', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d8c5b7e3ca74bc1880eb616b04711f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'baf6db0c-e075-4519-aa02-9bbd4c984eba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bee78a1-1254-4dfe-ba24-259feeb5ade5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=7a721704-a73d-481b-865f-39574594741f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:02:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:01.821 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 7a721704-a73d-481b-865f-39574594741f in datapath 7471f45a-da60-4567-a888-2a87ff526609 unbound from our chassis#033[00m
Nov 29 03:02:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:01.823 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7471f45a-da60-4567-a888-2a87ff526609, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:02:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:01.825 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9a3dbc5a-5ded-49f8-b226-3b37852d8069]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:01.827 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 namespace which is not needed anymore#033[00m
Nov 29 03:02:01 np0005539564 nova_compute[226295]: 2025-11-29 08:02:01.836 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:01 np0005539564 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000041.scope: Deactivated successfully.
Nov 29 03:02:01 np0005539564 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000041.scope: Consumed 15.247s CPU time.
Nov 29 03:02:01 np0005539564 systemd-machined[190128]: Machine qemu-29-instance-00000041 terminated.
Nov 29 03:02:01 np0005539564 nova_compute[226295]: 2025-11-29 08:02:01.980 226310 INFO nova.virt.libvirt.driver [-] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Instance destroyed successfully.#033[00m
Nov 29 03:02:01 np0005539564 nova_compute[226295]: 2025-11-29 08:02:01.982 226310 DEBUG nova.objects.instance [None req-c8b3807d-1c2e-41cf-bc9b-3e160fccdfc1 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lazy-loading 'resources' on Instance uuid 441c7c0c-4457-414d-8c62-68dda0364b56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:02:02 np0005539564 nova_compute[226295]: 2025-11-29 08:02:02.009 226310 DEBUG nova.virt.libvirt.vif [None req-c8b3807d-1c2e-41cf-bc9b-3e160fccdfc1 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:01:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-314491992',display_name='tempest-ImagesTestJSON-server-314491992',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-314491992',id=65,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:01:44Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4d8c5b7e3ca74bc1880eb616b04711f7',ramdisk_id='',reservation_id='r-20tvm0jd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-911260095',owner_user_name='tempest-ImagesTestJSON-911260095-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:01:58Z,user_data=None,user_id='f7d59bea260d4752aa29379967636c0b',uuid=441c7c0c-4457-414d-8c62-68dda0364b56,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7a721704-a73d-481b-865f-39574594741f", "address": "fa:16:3e:53:83:da", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a721704-a7", "ovs_interfaceid": "7a721704-a73d-481b-865f-39574594741f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:02:02 np0005539564 nova_compute[226295]: 2025-11-29 08:02:02.009 226310 DEBUG nova.network.os_vif_util [None req-c8b3807d-1c2e-41cf-bc9b-3e160fccdfc1 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converting VIF {"id": "7a721704-a73d-481b-865f-39574594741f", "address": "fa:16:3e:53:83:da", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a721704-a7", "ovs_interfaceid": "7a721704-a73d-481b-865f-39574594741f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:02:02 np0005539564 nova_compute[226295]: 2025-11-29 08:02:02.010 226310 DEBUG nova.network.os_vif_util [None req-c8b3807d-1c2e-41cf-bc9b-3e160fccdfc1 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:83:da,bridge_name='br-int',has_traffic_filtering=True,id=7a721704-a73d-481b-865f-39574594741f,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a721704-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:02:02 np0005539564 nova_compute[226295]: 2025-11-29 08:02:02.010 226310 DEBUG os_vif [None req-c8b3807d-1c2e-41cf-bc9b-3e160fccdfc1 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:83:da,bridge_name='br-int',has_traffic_filtering=True,id=7a721704-a73d-481b-865f-39574594741f,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a721704-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:02:02 np0005539564 nova_compute[226295]: 2025-11-29 08:02:02.012 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:02 np0005539564 nova_compute[226295]: 2025-11-29 08:02:02.013 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a721704-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:02 np0005539564 nova_compute[226295]: 2025-11-29 08:02:02.014 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:02 np0005539564 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[250990]: [NOTICE]   (250994) : haproxy version is 2.8.14-c23fe91
Nov 29 03:02:02 np0005539564 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[250990]: [NOTICE]   (250994) : path to executable is /usr/sbin/haproxy
Nov 29 03:02:02 np0005539564 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[250990]: [WARNING]  (250994) : Exiting Master process...
Nov 29 03:02:02 np0005539564 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[250990]: [WARNING]  (250994) : Exiting Master process...
Nov 29 03:02:02 np0005539564 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[250990]: [ALERT]    (250994) : Current worker (250996) exited with code 143 (Terminated)
Nov 29 03:02:02 np0005539564 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[250990]: [WARNING]  (250994) : All workers exited. Exiting... (0)
Nov 29 03:02:02 np0005539564 nova_compute[226295]: 2025-11-29 08:02:02.017 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:02 np0005539564 systemd[1]: libpod-91642ea7f648954f7a52f5cac66920cf13383548d00c096a5c30fd161fcd05a8.scope: Deactivated successfully.
Nov 29 03:02:02 np0005539564 nova_compute[226295]: 2025-11-29 08:02:02.020 226310 INFO os_vif [None req-c8b3807d-1c2e-41cf-bc9b-3e160fccdfc1 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:83:da,bridge_name='br-int',has_traffic_filtering=True,id=7a721704-a73d-481b-865f-39574594741f,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a721704-a7')#033[00m
Nov 29 03:02:02 np0005539564 podman[251253]: 2025-11-29 08:02:02.026657631 +0000 UTC m=+0.067679746 container died 91642ea7f648954f7a52f5cac66920cf13383548d00c096a5c30fd161fcd05a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:02:02 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-91642ea7f648954f7a52f5cac66920cf13383548d00c096a5c30fd161fcd05a8-userdata-shm.mount: Deactivated successfully.
Nov 29 03:02:02 np0005539564 systemd[1]: var-lib-containers-storage-overlay-790c0bf1c55e1f5ae7b74a057c2cb6b3c66fb94008ac26f0e0868304cacaeead-merged.mount: Deactivated successfully.
Nov 29 03:02:02 np0005539564 podman[251253]: 2025-11-29 08:02:02.080118113 +0000 UTC m=+0.121140238 container cleanup 91642ea7f648954f7a52f5cac66920cf13383548d00c096a5c30fd161fcd05a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 03:02:02 np0005539564 systemd[1]: libpod-conmon-91642ea7f648954f7a52f5cac66920cf13383548d00c096a5c30fd161fcd05a8.scope: Deactivated successfully.
Nov 29 03:02:02 np0005539564 podman[251308]: 2025-11-29 08:02:02.150508302 +0000 UTC m=+0.048487999 container remove 91642ea7f648954f7a52f5cac66920cf13383548d00c096a5c30fd161fcd05a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:02:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:02.158 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f80c6a-d35e-4e0c-958c-1f4be0a7605a]: (4, ('Sat Nov 29 08:02:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 (91642ea7f648954f7a52f5cac66920cf13383548d00c096a5c30fd161fcd05a8)\n91642ea7f648954f7a52f5cac66920cf13383548d00c096a5c30fd161fcd05a8\nSat Nov 29 08:02:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 (91642ea7f648954f7a52f5cac66920cf13383548d00c096a5c30fd161fcd05a8)\n91642ea7f648954f7a52f5cac66920cf13383548d00c096a5c30fd161fcd05a8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:02.161 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[38444331-9826-4e79-bbb1-1308b01a1010]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:02.164 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7471f45a-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:02 np0005539564 kernel: tap7471f45a-d0: left promiscuous mode
Nov 29 03:02:02 np0005539564 nova_compute[226295]: 2025-11-29 08:02:02.167 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:02 np0005539564 nova_compute[226295]: 2025-11-29 08:02:02.181 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:02.185 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7aa76c97-4830-4e0c-a354-0531d559eb20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:02 np0005539564 nova_compute[226295]: 2025-11-29 08:02:02.196 226310 DEBUG nova.compute.manager [req-36b1efc4-0ad5-4463-a136-d8d9be33971b req-3db66c0d-2997-467d-8b1f-2965f1d633df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Received event network-vif-unplugged-7a721704-a73d-481b-865f-39574594741f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:02:02 np0005539564 nova_compute[226295]: 2025-11-29 08:02:02.197 226310 DEBUG oslo_concurrency.lockutils [req-36b1efc4-0ad5-4463-a136-d8d9be33971b req-3db66c0d-2997-467d-8b1f-2965f1d633df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "441c7c0c-4457-414d-8c62-68dda0364b56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:02 np0005539564 nova_compute[226295]: 2025-11-29 08:02:02.197 226310 DEBUG oslo_concurrency.lockutils [req-36b1efc4-0ad5-4463-a136-d8d9be33971b req-3db66c0d-2997-467d-8b1f-2965f1d633df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "441c7c0c-4457-414d-8c62-68dda0364b56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:02 np0005539564 nova_compute[226295]: 2025-11-29 08:02:02.197 226310 DEBUG oslo_concurrency.lockutils [req-36b1efc4-0ad5-4463-a136-d8d9be33971b req-3db66c0d-2997-467d-8b1f-2965f1d633df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "441c7c0c-4457-414d-8c62-68dda0364b56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:02 np0005539564 nova_compute[226295]: 2025-11-29 08:02:02.198 226310 DEBUG nova.compute.manager [req-36b1efc4-0ad5-4463-a136-d8d9be33971b req-3db66c0d-2997-467d-8b1f-2965f1d633df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] No waiting events found dispatching network-vif-unplugged-7a721704-a73d-481b-865f-39574594741f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:02:02 np0005539564 nova_compute[226295]: 2025-11-29 08:02:02.198 226310 DEBUG nova.compute.manager [req-36b1efc4-0ad5-4463-a136-d8d9be33971b req-3db66c0d-2997-467d-8b1f-2965f1d633df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Received event network-vif-unplugged-7a721704-a73d-481b-865f-39574594741f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:02:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:02.204 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[13fde345-dcc3-4115-b9e2-b0698d9633dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:02.205 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d904f79d-f872-4317-81fe-26ee95a0f623]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:02.224 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[aca7dcf3-39d9-4ca3-a8d2-80446c55cc2d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 627856, 'reachable_time': 39487, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251337, 'error': None, 'target': 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:02 np0005539564 systemd[1]: run-netns-ovnmeta\x2d7471f45a\x2dda60\x2d4567\x2da888\x2d2a87ff526609.mount: Deactivated successfully.
Nov 29 03:02:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:02.228 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:02:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:02.228 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[7efd09e9-1095-41ad-82a5-3417baff1c54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:02.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:03.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:03.711 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:02:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:03.712 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:02:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:03.712 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:02:04 np0005539564 nova_compute[226295]: 2025-11-29 08:02:04.532 226310 DEBUG nova.compute.manager [req-47bbba97-b7bd-458d-ba88-924ea7aaabc1 req-290a7e92-1979-4de8-a1dd-e7713090cd04 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Received event network-vif-plugged-7a721704-a73d-481b-865f-39574594741f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:02:04 np0005539564 nova_compute[226295]: 2025-11-29 08:02:04.533 226310 DEBUG oslo_concurrency.lockutils [req-47bbba97-b7bd-458d-ba88-924ea7aaabc1 req-290a7e92-1979-4de8-a1dd-e7713090cd04 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "441c7c0c-4457-414d-8c62-68dda0364b56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:02:04 np0005539564 nova_compute[226295]: 2025-11-29 08:02:04.533 226310 DEBUG oslo_concurrency.lockutils [req-47bbba97-b7bd-458d-ba88-924ea7aaabc1 req-290a7e92-1979-4de8-a1dd-e7713090cd04 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "441c7c0c-4457-414d-8c62-68dda0364b56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:02:04 np0005539564 nova_compute[226295]: 2025-11-29 08:02:04.533 226310 DEBUG oslo_concurrency.lockutils [req-47bbba97-b7bd-458d-ba88-924ea7aaabc1 req-290a7e92-1979-4de8-a1dd-e7713090cd04 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "441c7c0c-4457-414d-8c62-68dda0364b56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:02:04 np0005539564 nova_compute[226295]: 2025-11-29 08:02:04.533 226310 DEBUG nova.compute.manager [req-47bbba97-b7bd-458d-ba88-924ea7aaabc1 req-290a7e92-1979-4de8-a1dd-e7713090cd04 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] No waiting events found dispatching network-vif-plugged-7a721704-a73d-481b-865f-39574594741f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:02:04 np0005539564 nova_compute[226295]: 2025-11-29 08:02:04.534 226310 WARNING nova.compute.manager [req-47bbba97-b7bd-458d-ba88-924ea7aaabc1 req-290a7e92-1979-4de8-a1dd-e7713090cd04 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Received unexpected event network-vif-plugged-7a721704-a73d-481b-865f-39574594741f for instance with vm_state active and task_state deleting.
Nov 29 03:02:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:04 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:02:04 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:02:04 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:02:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:04.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:05.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:05 np0005539564 nova_compute[226295]: 2025-11-29 08:02:05.111 226310 INFO nova.virt.libvirt.driver [None req-c8b3807d-1c2e-41cf-bc9b-3e160fccdfc1 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Deleting instance files /var/lib/nova/instances/441c7c0c-4457-414d-8c62-68dda0364b56_del
Nov 29 03:02:05 np0005539564 nova_compute[226295]: 2025-11-29 08:02:05.113 226310 INFO nova.virt.libvirt.driver [None req-c8b3807d-1c2e-41cf-bc9b-3e160fccdfc1 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Deletion of /var/lib/nova/instances/441c7c0c-4457-414d-8c62-68dda0364b56_del complete
Nov 29 03:02:05 np0005539564 nova_compute[226295]: 2025-11-29 08:02:05.225 226310 INFO nova.compute.manager [None req-c8b3807d-1c2e-41cf-bc9b-3e160fccdfc1 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Took 3.50 seconds to destroy the instance on the hypervisor.
Nov 29 03:02:05 np0005539564 nova_compute[226295]: 2025-11-29 08:02:05.225 226310 DEBUG oslo.service.loopingcall [None req-c8b3807d-1c2e-41cf-bc9b-3e160fccdfc1 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 03:02:05 np0005539564 nova_compute[226295]: 2025-11-29 08:02:05.226 226310 DEBUG nova.compute.manager [-] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 03:02:05 np0005539564 nova_compute[226295]: 2025-11-29 08:02:05.226 226310 DEBUG nova.network.neutron [-] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 03:02:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 e250: 3 total, 3 up, 3 in
Nov 29 03:02:06 np0005539564 nova_compute[226295]: 2025-11-29 08:02:06.319 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:06 np0005539564 nova_compute[226295]: 2025-11-29 08:02:06.485 226310 DEBUG nova.network.neutron [-] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:02:06 np0005539564 nova_compute[226295]: 2025-11-29 08:02:06.513 226310 INFO nova.compute.manager [-] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Took 1.29 seconds to deallocate network for instance.
Nov 29 03:02:06 np0005539564 nova_compute[226295]: 2025-11-29 08:02:06.566 226310 DEBUG oslo_concurrency.lockutils [None req-c8b3807d-1c2e-41cf-bc9b-3e160fccdfc1 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:02:06 np0005539564 nova_compute[226295]: 2025-11-29 08:02:06.566 226310 DEBUG oslo_concurrency.lockutils [None req-c8b3807d-1c2e-41cf-bc9b-3e160fccdfc1 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:02:06 np0005539564 nova_compute[226295]: 2025-11-29 08:02:06.663 226310 DEBUG oslo_concurrency.processutils [None req-c8b3807d-1c2e-41cf-bc9b-3e160fccdfc1 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:02:06 np0005539564 nova_compute[226295]: 2025-11-29 08:02:06.699 226310 DEBUG nova.compute.manager [req-cc61ef6e-b5f2-42cb-b25c-5f1a759ad8b2 req-998e1c79-68cd-4c4e-96d1-5c24e0fffe18 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Received event network-vif-deleted-7a721704-a73d-481b-865f-39574594741f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:02:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:06.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:07.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:07 np0005539564 nova_compute[226295]: 2025-11-29 08:02:07.070 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:02:07 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2464851194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:02:07 np0005539564 nova_compute[226295]: 2025-11-29 08:02:07.219 226310 DEBUG oslo_concurrency.processutils [None req-c8b3807d-1c2e-41cf-bc9b-3e160fccdfc1 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:02:07 np0005539564 nova_compute[226295]: 2025-11-29 08:02:07.228 226310 DEBUG nova.compute.provider_tree [None req-c8b3807d-1c2e-41cf-bc9b-3e160fccdfc1 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:02:07 np0005539564 nova_compute[226295]: 2025-11-29 08:02:07.450 226310 DEBUG nova.scheduler.client.report [None req-c8b3807d-1c2e-41cf-bc9b-3e160fccdfc1 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:02:07 np0005539564 nova_compute[226295]: 2025-11-29 08:02:07.472 226310 DEBUG oslo_concurrency.lockutils [None req-c8b3807d-1c2e-41cf-bc9b-3e160fccdfc1 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:02:07 np0005539564 nova_compute[226295]: 2025-11-29 08:02:07.546 226310 INFO nova.scheduler.client.report [None req-c8b3807d-1c2e-41cf-bc9b-3e160fccdfc1 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Deleted allocations for instance 441c7c0c-4457-414d-8c62-68dda0364b56
Nov 29 03:02:07 np0005539564 nova_compute[226295]: 2025-11-29 08:02:07.644 226310 DEBUG oslo_concurrency.lockutils [None req-c8b3807d-1c2e-41cf-bc9b-3e160fccdfc1 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "441c7c0c-4457-414d-8c62-68dda0364b56" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:02:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:08.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:09.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:10.766 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:02:10 np0005539564 nova_compute[226295]: 2025-11-29 08:02:10.766 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:10.768 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:02:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:10.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:11.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:11 np0005539564 nova_compute[226295]: 2025-11-29 08:02:11.322 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:12 np0005539564 nova_compute[226295]: 2025-11-29 08:02:12.075 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:12 np0005539564 podman[251483]: 2025-11-29 08:02:12.523173116 +0000 UTC m=+0.071467810 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 03:02:12 np0005539564 podman[251482]: 2025-11-29 08:02:12.53704051 +0000 UTC m=+0.079266380 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:02:12 np0005539564 podman[251481]: 2025-11-29 08:02:12.560224315 +0000 UTC m=+0.110189643 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:02:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:12.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:13.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:13 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:02:13 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:02:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:14.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:15.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:15 np0005539564 nova_compute[226295]: 2025-11-29 08:02:15.984 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:16 np0005539564 nova_compute[226295]: 2025-11-29 08:02:16.326 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:16.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:16 np0005539564 nova_compute[226295]: 2025-11-29 08:02:16.977 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403321.9756675, 441c7c0c-4457-414d-8c62-68dda0364b56 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:02:16 np0005539564 nova_compute[226295]: 2025-11-29 08:02:16.978 226310 INFO nova.compute.manager [-] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] VM Stopped (Lifecycle Event)
Nov 29 03:02:17 np0005539564 nova_compute[226295]: 2025-11-29 08:02:17.001 226310 DEBUG nova.compute.manager [None req-90aa6b71-87fc-48a7-8a73-fde9af45665d - - - - - -] [instance: 441c7c0c-4457-414d-8c62-68dda0364b56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:02:17 np0005539564 nova_compute[226295]: 2025-11-29 08:02:17.078 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:17.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:18.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:19.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:19.771 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:02:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:20.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:21.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:21 np0005539564 nova_compute[226295]: 2025-11-29 08:02:21.328 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:22 np0005539564 nova_compute[226295]: 2025-11-29 08:02:22.081 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:22 np0005539564 nova_compute[226295]: 2025-11-29 08:02:22.663 226310 DEBUG oslo_concurrency.lockutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:02:22 np0005539564 nova_compute[226295]: 2025-11-29 08:02:22.664 226310 DEBUG oslo_concurrency.lockutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:02:22 np0005539564 nova_compute[226295]: 2025-11-29 08:02:22.695 226310 DEBUG nova.compute.manager [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:02:22 np0005539564 nova_compute[226295]: 2025-11-29 08:02:22.837 226310 DEBUG oslo_concurrency.lockutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:02:22 np0005539564 nova_compute[226295]: 2025-11-29 08:02:22.838 226310 DEBUG oslo_concurrency.lockutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:02:22 np0005539564 nova_compute[226295]: 2025-11-29 08:02:22.850 226310 DEBUG nova.virt.hardware [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:02:22 np0005539564 nova_compute[226295]: 2025-11-29 08:02:22.851 226310 INFO nova.compute.claims [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Claim successful on node compute-1.ctlplane.example.com
Nov 29 03:02:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:22.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:22 np0005539564 nova_compute[226295]: 2025-11-29 08:02:22.988 226310 DEBUG oslo_concurrency.processutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:23.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:23 np0005539564 nova_compute[226295]: 2025-11-29 08:02:23.487 226310 DEBUG oslo_concurrency.processutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
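The `ceph df --format=json` call above is how Nova's RBD image backend sizes the cluster. A sketch of picking one pool's stats out of that JSON; the payload shape here (top-level `stats` and `pools` keys, per-pool `stats`) matches what recent Ceph releases emit but should be treated as an assumption, and the sample values are illustrative, not from this host:

```python
import json


def pool_usage(ceph_df_json, pool_name):
    """Return one pool's stats dict from `ceph df --format=json` output."""
    doc = json.loads(ceph_df_json)
    for pool in doc["pools"]:
        if pool["name"] == pool_name:
            return pool["stats"]
    raise KeyError(pool_name)


# Trimmed, illustrative payload:
sample = json.dumps({
    "stats": {"total_bytes": 64424509440, "total_avail_bytes": 42949672960},
    "pools": [{"name": "vms", "id": 4,
               "stats": {"bytes_used": 1073741824, "max_avail": 21474836480}}],
})
print(pool_usage(sample, "vms"))
```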
Nov 29 03:02:23 np0005539564 nova_compute[226295]: 2025-11-29 08:02:23.495 226310 DEBUG nova.compute.provider_tree [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:02:23 np0005539564 nova_compute[226295]: 2025-11-29 08:02:23.512 226310 DEBUG nova.scheduler.client.report [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
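The inventory dict in the line above determines how much Placement will actually schedule on this node: effective capacity per resource class is `(total - reserved) * allocation_ratio`. Applied to the reported values:

```python
# Inventory as reported for provider ea190a43-... in the log line above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}


def capacity(inv):
    """Effective schedulable capacity per resource class:
    (total - reserved) * allocation_ratio, the Placement capacity formula."""
    return {rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
            for rc, v in inv.items()}


print(capacity(inventory))
# 32 vCPUs (4x oversubscribed), 7168 MiB RAM, ~17.1 GiB disk (undersubscribed)
```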
Nov 29 03:02:23 np0005539564 nova_compute[226295]: 2025-11-29 08:02:23.540 226310 DEBUG oslo_concurrency.lockutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:23 np0005539564 nova_compute[226295]: 2025-11-29 08:02:23.541 226310 DEBUG nova.compute.manager [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:02:23 np0005539564 nova_compute[226295]: 2025-11-29 08:02:23.591 226310 DEBUG nova.compute.manager [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:02:23 np0005539564 nova_compute[226295]: 2025-11-29 08:02:23.592 226310 DEBUG nova.network.neutron [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:02:23 np0005539564 nova_compute[226295]: 2025-11-29 08:02:23.622 226310 INFO nova.virt.libvirt.driver [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:02:23 np0005539564 nova_compute[226295]: 2025-11-29 08:02:23.640 226310 DEBUG nova.compute.manager [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:02:23 np0005539564 nova_compute[226295]: 2025-11-29 08:02:23.826 226310 DEBUG nova.compute.manager [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:02:23 np0005539564 nova_compute[226295]: 2025-11-29 08:02:23.828 226310 DEBUG nova.virt.libvirt.driver [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:02:23 np0005539564 nova_compute[226295]: 2025-11-29 08:02:23.829 226310 INFO nova.virt.libvirt.driver [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Creating image(s)#033[00m
Nov 29 03:02:23 np0005539564 nova_compute[226295]: 2025-11-29 08:02:23.874 226310 DEBUG nova.storage.rbd_utils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image fb5b9f0e-9622-448b-8fa7-6c96fcc794cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:02:23 np0005539564 nova_compute[226295]: 2025-11-29 08:02:23.922 226310 DEBUG nova.storage.rbd_utils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image fb5b9f0e-9622-448b-8fa7-6c96fcc794cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:02:23 np0005539564 nova_compute[226295]: 2025-11-29 08:02:23.958 226310 DEBUG nova.storage.rbd_utils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image fb5b9f0e-9622-448b-8fa7-6c96fcc794cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:02:23 np0005539564 nova_compute[226295]: 2025-11-29 08:02:23.963 226310 DEBUG oslo_concurrency.processutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:23 np0005539564 nova_compute[226295]: 2025-11-29 08:02:23.994 226310 DEBUG nova.policy [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a814d0c4600e45d9a1fac7bac5b7e69e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f69605de164b4c27ae715521263676fe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:02:24 np0005539564 nova_compute[226295]: 2025-11-29 08:02:24.044 226310 DEBUG oslo_concurrency.processutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
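The `qemu-img info` call above is wrapped in `oslo_concurrency.prlimit`, which caps the child's address space (`--as`, bytes) and CPU time (`--cpu`, seconds) before exec'ing the real command after `--` — a guard against malicious images wedging qemu-img. A sketch that rebuilds that argv (the image path here is a stand-in, not the cached base image from the log):

```python
def prlimit_cmd(cmd, address_space=1 << 30, cpu_seconds=30):
    """Wrap `cmd` the way the log line above shows: resource-limited via
    oslo_concurrency.prlimit, with a C locale forced through env."""
    return ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
            f"--as={address_space}", f"--cpu={cpu_seconds}", "--",
            "env", "LC_ALL=C", "LANG=C"] + cmd


argv = prlimit_cmd(["qemu-img", "info", "/tmp/disk.img",
                    "--force-share", "--output=json"])
print(" ".join(argv))
```

`--force-share` lets qemu-img read an image another process may have open; `--output=json` keeps the result machine-parseable.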
Nov 29 03:02:24 np0005539564 nova_compute[226295]: 2025-11-29 08:02:24.045 226310 DEBUG oslo_concurrency.lockutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:24 np0005539564 nova_compute[226295]: 2025-11-29 08:02:24.045 226310 DEBUG oslo_concurrency.lockutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:24 np0005539564 nova_compute[226295]: 2025-11-29 08:02:24.046 226310 DEBUG oslo_concurrency.lockutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:24 np0005539564 nova_compute[226295]: 2025-11-29 08:02:24.076 226310 DEBUG nova.storage.rbd_utils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image fb5b9f0e-9622-448b-8fa7-6c96fcc794cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:02:24 np0005539564 nova_compute[226295]: 2025-11-29 08:02:24.081 226310 DEBUG oslo_concurrency.processutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf fb5b9f0e-9622-448b-8fa7-6c96fcc794cf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:24 np0005539564 nova_compute[226295]: 2025-11-29 08:02:24.559 226310 DEBUG oslo_concurrency.processutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf fb5b9f0e-9622-448b-8fa7-6c96fcc794cf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:24 np0005539564 nova_compute[226295]: 2025-11-29 08:02:24.674 226310 DEBUG nova.storage.rbd_utils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] resizing rbd image fb5b9f0e-9622-448b-8fa7-6c96fcc794cf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
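The two steps above — importing the cached base image into the `vms` pool, then growing the clone to the flavor's 1 GiB root disk — can be reconstructed as rbd CLI invocations. A sketch only: the import matches the logged command, but the resize at rbd_utils.py:288 actually goes through librbd, not the CLI, so the second command is an equivalent, not what Nova ran:

```python
def rbd_cmds(pool, base_path, image, size_bytes, client="openstack",
             conf="/etc/ceph/ceph.conf"):
    """Build the import + resize command lines for a Nova RBD root disk."""
    common = ["--id", client, "--conf", conf]
    return [
        ["rbd", "import", "--pool", pool, base_path, image,
         "--image-format=2"] + common,
        # rbd's --size is in MiB by default.
        ["rbd", "resize", "--pool", pool, "--size",
         str(size_bytes // (1 << 20)), image] + common,
    ]


imp, res = rbd_cmds(
    "vms",
    "/var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf",
    "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf_disk",
    1073741824)
```

`--image-format=2` is required for RBD features like cloning and layering; format 1 is long deprecated.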
Nov 29 03:02:24 np0005539564 nova_compute[226295]: 2025-11-29 08:02:24.823 226310 DEBUG nova.objects.instance [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'migration_context' on Instance uuid fb5b9f0e-9622-448b-8fa7-6c96fcc794cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:02:24 np0005539564 nova_compute[226295]: 2025-11-29 08:02:24.876 226310 DEBUG nova.virt.libvirt.driver [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:02:24 np0005539564 nova_compute[226295]: 2025-11-29 08:02:24.877 226310 DEBUG nova.virt.libvirt.driver [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Ensure instance console log exists: /var/lib/nova/instances/fb5b9f0e-9622-448b-8fa7-6c96fcc794cf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:02:24 np0005539564 nova_compute[226295]: 2025-11-29 08:02:24.878 226310 DEBUG oslo_concurrency.lockutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:24 np0005539564 nova_compute[226295]: 2025-11-29 08:02:24.878 226310 DEBUG oslo_concurrency.lockutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:24 np0005539564 nova_compute[226295]: 2025-11-29 08:02:24.879 226310 DEBUG oslo_concurrency.lockutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:24.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:25.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:26 np0005539564 nova_compute[226295]: 2025-11-29 08:02:26.331 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:26.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:27 np0005539564 nova_compute[226295]: 2025-11-29 08:02:27.084 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:27.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:27 np0005539564 nova_compute[226295]: 2025-11-29 08:02:27.322 226310 DEBUG nova.network.neutron [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Successfully created port: 94d7210e-a29d-439a-9e36-bdd02b75076a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:02:28 np0005539564 nova_compute[226295]: 2025-11-29 08:02:28.515 226310 DEBUG nova.network.neutron [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Successfully updated port: 94d7210e-a29d-439a-9e36-bdd02b75076a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:02:28 np0005539564 nova_compute[226295]: 2025-11-29 08:02:28.529 226310 DEBUG oslo_concurrency.lockutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:02:28 np0005539564 nova_compute[226295]: 2025-11-29 08:02:28.530 226310 DEBUG oslo_concurrency.lockutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquired lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:02:28 np0005539564 nova_compute[226295]: 2025-11-29 08:02:28.530 226310 DEBUG nova.network.neutron [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:02:28 np0005539564 nova_compute[226295]: 2025-11-29 08:02:28.703 226310 DEBUG nova.compute.manager [req-47935594-054a-4029-ac05-d5dffbc4e37b req-a308fd1b-7650-4e5e-91d3-45b3e484008e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Received event network-changed-94d7210e-a29d-439a-9e36-bdd02b75076a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:02:28 np0005539564 nova_compute[226295]: 2025-11-29 08:02:28.704 226310 DEBUG nova.compute.manager [req-47935594-054a-4029-ac05-d5dffbc4e37b req-a308fd1b-7650-4e5e-91d3-45b3e484008e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Refreshing instance network info cache due to event network-changed-94d7210e-a29d-439a-9e36-bdd02b75076a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:02:28 np0005539564 nova_compute[226295]: 2025-11-29 08:02:28.704 226310 DEBUG oslo_concurrency.lockutils [req-47935594-054a-4029-ac05-d5dffbc4e37b req-a308fd1b-7650-4e5e-91d3-45b3e484008e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:02:28 np0005539564 nova_compute[226295]: 2025-11-29 08:02:28.854 226310 DEBUG nova.network.neutron [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:02:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:28.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:29.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:30.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:31.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:31 np0005539564 nova_compute[226295]: 2025-11-29 08:02:31.335 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:31 np0005539564 nova_compute[226295]: 2025-11-29 08:02:31.727 226310 DEBUG nova.network.neutron [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Updating instance_info_cache with network_info: [{"id": "94d7210e-a29d-439a-9e36-bdd02b75076a", "address": "fa:16:3e:72:b9:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d7210e-a2", "ovs_interfaceid": "94d7210e-a29d-439a-9e36-bdd02b75076a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:02:31 np0005539564 nova_compute[226295]: 2025-11-29 08:02:31.760 226310 DEBUG oslo_concurrency.lockutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Releasing lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:02:31 np0005539564 nova_compute[226295]: 2025-11-29 08:02:31.761 226310 DEBUG nova.compute.manager [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Instance network_info: |[{"id": "94d7210e-a29d-439a-9e36-bdd02b75076a", "address": "fa:16:3e:72:b9:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d7210e-a2", "ovs_interfaceid": "94d7210e-a29d-439a-9e36-bdd02b75076a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
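The network_info blob cached above is verbose; when reading such logs it usually reduces to a few fields per VIF (device name, MAC, fixed IPs, MTU). A sketch over a trimmed copy of the structure from the log:

```python
# Trimmed copy of the network_info entry cached in the log lines above.
network_info = [{
    "id": "94d7210e-a29d-439a-9e36-bdd02b75076a",
    "address": "fa:16:3e:72:b9:77",
    "network": {
        "id": "738e99b4-b58e-4eff-b209-c4aa3748c994",
        "bridge": "br-int",
        "subnets": [{
            "cidr": "10.100.0.0/28",
            "ips": [{"address": "10.100.0.5", "type": "fixed"}],
        }],
        "meta": {"mtu": 1442},
    },
    "devname": "tap94d7210e-a2",
    "active": False,
}]


def summarize(vifs):
    """Yield (devname, mac, fixed_ips, mtu) per VIF in a network_info list."""
    for vif in vifs:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"] if ip["type"] == "fixed"]
        yield vif["devname"], vif["address"], ips, vif["network"]["meta"]["mtu"]


for row in summarize(network_info):
    print(row)
```

Note `"active": false` at this point: the port exists in Neutron but OVN has not yet bound and brought it up, which is why Nova later waits for the network-vif-plugged event.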
Nov 29 03:02:31 np0005539564 nova_compute[226295]: 2025-11-29 08:02:31.762 226310 DEBUG oslo_concurrency.lockutils [req-47935594-054a-4029-ac05-d5dffbc4e37b req-a308fd1b-7650-4e5e-91d3-45b3e484008e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:02:31 np0005539564 nova_compute[226295]: 2025-11-29 08:02:31.762 226310 DEBUG nova.network.neutron [req-47935594-054a-4029-ac05-d5dffbc4e37b req-a308fd1b-7650-4e5e-91d3-45b3e484008e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Refreshing network info cache for port 94d7210e-a29d-439a-9e36-bdd02b75076a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:02:31 np0005539564 nova_compute[226295]: 2025-11-29 08:02:31.768 226310 DEBUG nova.virt.libvirt.driver [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Start _get_guest_xml network_info=[{"id": "94d7210e-a29d-439a-9e36-bdd02b75076a", "address": "fa:16:3e:72:b9:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d7210e-a2", "ovs_interfaceid": "94d7210e-a29d-439a-9e36-bdd02b75076a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:02:31 np0005539564 nova_compute[226295]: 2025-11-29 08:02:31.777 226310 WARNING nova.virt.libvirt.driver [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:02:31 np0005539564 nova_compute[226295]: 2025-11-29 08:02:31.784 226310 DEBUG nova.virt.libvirt.host [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:02:31 np0005539564 nova_compute[226295]: 2025-11-29 08:02:31.785 226310 DEBUG nova.virt.libvirt.host [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:02:31 np0005539564 nova_compute[226295]: 2025-11-29 08:02:31.793 226310 DEBUG nova.virt.libvirt.host [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:02:31 np0005539564 nova_compute[226295]: 2025-11-29 08:02:31.794 226310 DEBUG nova.virt.libvirt.host [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:02:31 np0005539564 nova_compute[226295]: 2025-11-29 08:02:31.796 226310 DEBUG nova.virt.libvirt.driver [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:02:31 np0005539564 nova_compute[226295]: 2025-11-29 08:02:31.796 226310 DEBUG nova.virt.hardware [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:02:31 np0005539564 nova_compute[226295]: 2025-11-29 08:02:31.797 226310 DEBUG nova.virt.hardware [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:02:31 np0005539564 nova_compute[226295]: 2025-11-29 08:02:31.797 226310 DEBUG nova.virt.hardware [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:02:31 np0005539564 nova_compute[226295]: 2025-11-29 08:02:31.797 226310 DEBUG nova.virt.hardware [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:02:31 np0005539564 nova_compute[226295]: 2025-11-29 08:02:31.797 226310 DEBUG nova.virt.hardware [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:02:31 np0005539564 nova_compute[226295]: 2025-11-29 08:02:31.798 226310 DEBUG nova.virt.hardware [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:02:31 np0005539564 nova_compute[226295]: 2025-11-29 08:02:31.798 226310 DEBUG nova.virt.hardware [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:02:31 np0005539564 nova_compute[226295]: 2025-11-29 08:02:31.798 226310 DEBUG nova.virt.hardware [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:02:31 np0005539564 nova_compute[226295]: 2025-11-29 08:02:31.799 226310 DEBUG nova.virt.hardware [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:02:31 np0005539564 nova_compute[226295]: 2025-11-29 08:02:31.799 226310 DEBUG nova.virt.hardware [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:02:31 np0005539564 nova_compute[226295]: 2025-11-29 08:02:31.800 226310 DEBUG nova.virt.hardware [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:02:31 np0005539564 nova_compute[226295]: 2025-11-29 08:02:31.803 226310 DEBUG oslo_concurrency.processutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.087 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:02:32 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1365884465' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.301 226310 DEBUG oslo_concurrency.processutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.331 226310 DEBUG nova.storage.rbd_utils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image fb5b9f0e-9622-448b-8fa7-6c96fcc794cf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.337 226310 DEBUG oslo_concurrency.processutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.395 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:02:32 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/39922280' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.817 226310 DEBUG oslo_concurrency.processutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.820 226310 DEBUG nova.virt.libvirt.vif [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:02:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-881754980',display_name='tempest-tempest.common.compute-instance-881754980',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-881754980',id=67,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH0n9ZXn8H6JY1sjbCx/j99/wL1zxZy5QsBH0AsdRjLOqctx/oeY65gmDs4R5NwjnXMvJp27i+F5qDtP4SKtjrI8QpPaqSfAsVXkzWb4UIDMJE826KgCbMST4VlNYE+GQA==',key_name='tempest-keypair-1734268386',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-f4q2v3tm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:02:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=fb5b9f0e-9622-448b-8fa7-6c96fcc794cf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94d7210e-a29d-439a-9e36-bdd02b75076a", "address": "fa:16:3e:72:b9:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d7210e-a2", "ovs_interfaceid": "94d7210e-a29d-439a-9e36-bdd02b75076a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.820 226310 DEBUG nova.network.os_vif_util [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "94d7210e-a29d-439a-9e36-bdd02b75076a", "address": "fa:16:3e:72:b9:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d7210e-a2", "ovs_interfaceid": "94d7210e-a29d-439a-9e36-bdd02b75076a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.822 226310 DEBUG nova.network.os_vif_util [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:b9:77,bridge_name='br-int',has_traffic_filtering=True,id=94d7210e-a29d-439a-9e36-bdd02b75076a,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94d7210e-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.823 226310 DEBUG nova.objects.instance [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'pci_devices' on Instance uuid fb5b9f0e-9622-448b-8fa7-6c96fcc794cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.892 226310 DEBUG nova.virt.libvirt.driver [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:02:32 np0005539564 nova_compute[226295]:  <uuid>fb5b9f0e-9622-448b-8fa7-6c96fcc794cf</uuid>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:  <name>instance-00000043</name>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <nova:name>tempest-tempest.common.compute-instance-881754980</nova:name>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:02:31</nova:creationTime>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:02:32 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:        <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:        <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:        <nova:port uuid="94d7210e-a29d-439a-9e36-bdd02b75076a">
Nov 29 03:02:32 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <entry name="serial">fb5b9f0e-9622-448b-8fa7-6c96fcc794cf</entry>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <entry name="uuid">fb5b9f0e-9622-448b-8fa7-6c96fcc794cf</entry>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/fb5b9f0e-9622-448b-8fa7-6c96fcc794cf_disk">
Nov 29 03:02:32 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:02:32 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/fb5b9f0e-9622-448b-8fa7-6c96fcc794cf_disk.config">
Nov 29 03:02:32 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:02:32 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:72:b9:77"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <target dev="tap94d7210e-a2"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/fb5b9f0e-9622-448b-8fa7-6c96fcc794cf/console.log" append="off"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:02:32 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:02:32 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:02:32 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:02:32 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.894 226310 DEBUG nova.compute.manager [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Preparing to wait for external event network-vif-plugged-94d7210e-a29d-439a-9e36-bdd02b75076a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.894 226310 DEBUG oslo_concurrency.lockutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.894 226310 DEBUG oslo_concurrency.lockutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.895 226310 DEBUG oslo_concurrency.lockutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.896 226310 DEBUG nova.virt.libvirt.vif [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:02:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-881754980',display_name='tempest-tempest.common.compute-instance-881754980',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-881754980',id=67,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH0n9ZXn8H6JY1sjbCx/j99/wL1zxZy5QsBH0AsdRjLOqctx/oeY65gmDs4R5NwjnXMvJp27i+F5qDtP4SKtjrI8QpPaqSfAsVXkzWb4UIDMJE826KgCbMST4VlNYE+GQA==',key_name='tempest-keypair-1734268386',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-f4q2v3tm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:02:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=fb5b9f0e-9622-448b-8fa7-6c96fcc794cf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94d7210e-a29d-439a-9e36-bdd02b75076a", "address": "fa:16:3e:72:b9:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d7210e-a2", "ovs_interfaceid": "94d7210e-a29d-439a-9e36-bdd02b75076a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.896 226310 DEBUG nova.network.os_vif_util [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "94d7210e-a29d-439a-9e36-bdd02b75076a", "address": "fa:16:3e:72:b9:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d7210e-a2", "ovs_interfaceid": "94d7210e-a29d-439a-9e36-bdd02b75076a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.897 226310 DEBUG nova.network.os_vif_util [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:b9:77,bridge_name='br-int',has_traffic_filtering=True,id=94d7210e-a29d-439a-9e36-bdd02b75076a,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94d7210e-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.897 226310 DEBUG os_vif [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:b9:77,bridge_name='br-int',has_traffic_filtering=True,id=94d7210e-a29d-439a-9e36-bdd02b75076a,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94d7210e-a2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.898 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.899 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.899 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.903 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.903 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap94d7210e-a2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.904 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap94d7210e-a2, col_values=(('external_ids', {'iface-id': '94d7210e-a29d-439a-9e36-bdd02b75076a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:72:b9:77', 'vm-uuid': 'fb5b9f0e-9622-448b-8fa7-6c96fcc794cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:32 np0005539564 NetworkManager[48997]: <info>  [1764403352.9241] manager: (tap94d7210e-a2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.924 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.927 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.933 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:32 np0005539564 nova_compute[226295]: 2025-11-29 08:02:32.935 226310 INFO os_vif [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:b9:77,bridge_name='br-int',has_traffic_filtering=True,id=94d7210e-a29d-439a-9e36-bdd02b75076a,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94d7210e-a2')#033[00m
Nov 29 03:02:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:32.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:33 np0005539564 nova_compute[226295]: 2025-11-29 08:02:33.021 226310 DEBUG nova.virt.libvirt.driver [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:02:33 np0005539564 nova_compute[226295]: 2025-11-29 08:02:33.021 226310 DEBUG nova.virt.libvirt.driver [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:02:33 np0005539564 nova_compute[226295]: 2025-11-29 08:02:33.022 226310 DEBUG nova.virt.libvirt.driver [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No VIF found with MAC fa:16:3e:72:b9:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:02:33 np0005539564 nova_compute[226295]: 2025-11-29 08:02:33.022 226310 INFO nova.virt.libvirt.driver [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Using config drive#033[00m
Nov 29 03:02:33 np0005539564 nova_compute[226295]: 2025-11-29 08:02:33.057 226310 DEBUG nova.storage.rbd_utils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image fb5b9f0e-9622-448b-8fa7-6c96fcc794cf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:02:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:33.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:33 np0005539564 nova_compute[226295]: 2025-11-29 08:02:33.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:33 np0005539564 nova_compute[226295]: 2025-11-29 08:02:33.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:02:34 np0005539564 nova_compute[226295]: 2025-11-29 08:02:34.158 226310 INFO nova.virt.libvirt.driver [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Creating config drive at /var/lib/nova/instances/fb5b9f0e-9622-448b-8fa7-6c96fcc794cf/disk.config#033[00m
Nov 29 03:02:34 np0005539564 nova_compute[226295]: 2025-11-29 08:02:34.170 226310 DEBUG oslo_concurrency.processutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fb5b9f0e-9622-448b-8fa7-6c96fcc794cf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu4xwwwm5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:34 np0005539564 nova_compute[226295]: 2025-11-29 08:02:34.323 226310 DEBUG oslo_concurrency.processutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fb5b9f0e-9622-448b-8fa7-6c96fcc794cf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu4xwwwm5" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:34 np0005539564 nova_compute[226295]: 2025-11-29 08:02:34.373 226310 DEBUG nova.storage.rbd_utils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image fb5b9f0e-9622-448b-8fa7-6c96fcc794cf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:02:34 np0005539564 nova_compute[226295]: 2025-11-29 08:02:34.379 226310 DEBUG oslo_concurrency.processutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fb5b9f0e-9622-448b-8fa7-6c96fcc794cf/disk.config fb5b9f0e-9622-448b-8fa7-6c96fcc794cf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:34 np0005539564 nova_compute[226295]: 2025-11-29 08:02:34.881 226310 DEBUG oslo_concurrency.processutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fb5b9f0e-9622-448b-8fa7-6c96fcc794cf/disk.config fb5b9f0e-9622-448b-8fa7-6c96fcc794cf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:34 np0005539564 nova_compute[226295]: 2025-11-29 08:02:34.882 226310 INFO nova.virt.libvirt.driver [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Deleting local config drive /var/lib/nova/instances/fb5b9f0e-9622-448b-8fa7-6c96fcc794cf/disk.config because it was imported into RBD.#033[00m
Nov 29 03:02:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:34.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:34 np0005539564 kernel: tap94d7210e-a2: entered promiscuous mode
Nov 29 03:02:34 np0005539564 NetworkManager[48997]: <info>  [1764403354.9692] manager: (tap94d7210e-a2): new Tun device (/org/freedesktop/NetworkManager/Devices/106)
Nov 29 03:02:34 np0005539564 nova_compute[226295]: 2025-11-29 08:02:34.969 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:34 np0005539564 ovn_controller[130591]: 2025-11-29T08:02:34Z|00204|binding|INFO|Claiming lport 94d7210e-a29d-439a-9e36-bdd02b75076a for this chassis.
Nov 29 03:02:34 np0005539564 ovn_controller[130591]: 2025-11-29T08:02:34Z|00205|binding|INFO|94d7210e-a29d-439a-9e36-bdd02b75076a: Claiming fa:16:3e:72:b9:77 10.100.0.5
Nov 29 03:02:34 np0005539564 nova_compute[226295]: 2025-11-29 08:02:34.977 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:34 np0005539564 nova_compute[226295]: 2025-11-29 08:02:34.980 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:34 np0005539564 nova_compute[226295]: 2025-11-29 08:02:34.983 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:35 np0005539564 NetworkManager[48997]: <info>  [1764403355.0017] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Nov 29 03:02:35 np0005539564 NetworkManager[48997]: <info>  [1764403355.0022] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Nov 29 03:02:35 np0005539564 nova_compute[226295]: 2025-11-29 08:02:35.001 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:35 np0005539564 systemd-machined[190128]: New machine qemu-30-instance-00000043.
Nov 29 03:02:35 np0005539564 systemd[1]: Started Virtual Machine qemu-30-instance-00000043.
Nov 29 03:02:35 np0005539564 systemd-udevd[251920]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.048 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:b9:77 10.100.0.5'], port_security=['fa:16:3e:72:b9:77 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'fb5b9f0e-9622-448b-8fa7-6c96fcc794cf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-738e99b4-b58e-4eff-b209-c4aa3748c994', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f69605de164b4c27ae715521263676fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bf7ccb70-ed00-453b-b589-5d95da7defbd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05e918c3-f77d-4277-9e74-f8ddcf4ab8e9, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=94d7210e-a29d-439a-9e36-bdd02b75076a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.050 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 94d7210e-a29d-439a-9e36-bdd02b75076a in datapath 738e99b4-b58e-4eff-b209-c4aa3748c994 bound to our chassis#033[00m
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.052 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 738e99b4-b58e-4eff-b209-c4aa3748c994#033[00m
Nov 29 03:02:35 np0005539564 NetworkManager[48997]: <info>  [1764403355.0683] device (tap94d7210e-a2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:02:35 np0005539564 NetworkManager[48997]: <info>  [1764403355.0692] device (tap94d7210e-a2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.070 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0018ad76-fa4c-456f-9e47-42ac1304bf56]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.071 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap738e99b4-b1 in ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.074 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap738e99b4-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.075 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ed8ebb7e-9f0e-42c5-97da-68e2ee7d4dd4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.076 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc20fb5-c1ee-475b-adac-67d7a42aa0c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.095 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[75c7ea04-0bef-4c3a-aaf5-e6920465edcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:35.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.128 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0c139a2e-d195-4678-a32d-8ba8fdf4590a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.168 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d59e71-419b-462f-ac12-476c3b281f39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:35 np0005539564 systemd-udevd[251923]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:02:35 np0005539564 NetworkManager[48997]: <info>  [1764403355.1781] manager: (tap738e99b4-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/109)
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.176 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b661dd7f-1747-40cf-9c50-11f5f9bdeb93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:35 np0005539564 nova_compute[226295]: 2025-11-29 08:02:35.208 226310 DEBUG nova.network.neutron [req-47935594-054a-4029-ac05-d5dffbc4e37b req-a308fd1b-7650-4e5e-91d3-45b3e484008e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Updated VIF entry in instance network info cache for port 94d7210e-a29d-439a-9e36-bdd02b75076a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:02:35 np0005539564 nova_compute[226295]: 2025-11-29 08:02:35.209 226310 DEBUG nova.network.neutron [req-47935594-054a-4029-ac05-d5dffbc4e37b req-a308fd1b-7650-4e5e-91d3-45b3e484008e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Updating instance_info_cache with network_info: [{"id": "94d7210e-a29d-439a-9e36-bdd02b75076a", "address": "fa:16:3e:72:b9:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d7210e-a2", "ovs_interfaceid": "94d7210e-a29d-439a-9e36-bdd02b75076a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.211 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[79a607d9-2b4e-43ae-9fec-b7b7c24e1d99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.214 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[b23d63c8-3e41-4962-b0fc-32ac2e978b05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:35 np0005539564 nova_compute[226295]: 2025-11-29 08:02:35.226 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:35 np0005539564 NetworkManager[48997]: <info>  [1764403355.2364] device (tap738e99b4-b0): carrier: link connected
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.243 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[34573ef9-ab91-4afb-a938-7286408b1124]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:35 np0005539564 nova_compute[226295]: 2025-11-29 08:02:35.252 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.264 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f77aba5b-f1ef-463d-aee9-8ff33b05083f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap738e99b4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:be:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632991, 'reachable_time': 33612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251953, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:35 np0005539564 ovn_controller[130591]: 2025-11-29T08:02:35Z|00206|binding|INFO|Setting lport 94d7210e-a29d-439a-9e36-bdd02b75076a ovn-installed in OVS
Nov 29 03:02:35 np0005539564 ovn_controller[130591]: 2025-11-29T08:02:35Z|00207|binding|INFO|Setting lport 94d7210e-a29d-439a-9e36-bdd02b75076a up in Southbound
Nov 29 03:02:35 np0005539564 nova_compute[226295]: 2025-11-29 08:02:35.276 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.287 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0f27bc52-b79e-4374-adfa-cfbd20d2250e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe98:bee3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632991, 'tstamp': 632991}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251954, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:35 np0005539564 nova_compute[226295]: 2025-11-29 08:02:35.299 226310 DEBUG oslo_concurrency.lockutils [req-47935594-054a-4029-ac05-d5dffbc4e37b req-a308fd1b-7650-4e5e-91d3-45b3e484008e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.309 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b2b52c3e-074d-47a5-bfa6-92ad538e63c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap738e99b4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:be:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632991, 'reachable_time': 33612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251955, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:35 np0005539564 nova_compute[226295]: 2025-11-29 08:02:35.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.353 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[49ceb3e6-f2a4-4c70-ae4e-67a7b1b2e196]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.425 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6e81b8cd-c718-4c11-ad26-6b4b89a20cdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.427 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap738e99b4-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.427 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.428 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap738e99b4-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:35 np0005539564 nova_compute[226295]: 2025-11-29 08:02:35.430 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:35 np0005539564 NetworkManager[48997]: <info>  [1764403355.4306] manager: (tap738e99b4-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Nov 29 03:02:35 np0005539564 kernel: tap738e99b4-b0: entered promiscuous mode
Nov 29 03:02:35 np0005539564 nova_compute[226295]: 2025-11-29 08:02:35.433 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.435 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap738e99b4-b0, col_values=(('external_ids', {'iface-id': '2a1fcde6-d99a-4732-a125-d24eb08c8766'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:35 np0005539564 ovn_controller[130591]: 2025-11-29T08:02:35Z|00208|binding|INFO|Releasing lport 2a1fcde6-d99a-4732-a125-d24eb08c8766 from this chassis (sb_readonly=0)
Nov 29 03:02:35 np0005539564 nova_compute[226295]: 2025-11-29 08:02:35.436 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:35 np0005539564 nova_compute[226295]: 2025-11-29 08:02:35.453 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.455 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/738e99b4-b58e-4eff-b209-c4aa3748c994.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/738e99b4-b58e-4eff-b209-c4aa3748c994.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.457 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[62232cfb-136a-4dea-a158-93508c4ffe5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.458 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-738e99b4-b58e-4eff-b209-c4aa3748c994
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/738e99b4-b58e-4eff-b209-c4aa3748c994.pid.haproxy
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 738e99b4-b58e-4eff-b209-c4aa3748c994
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:02:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:35.460 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'env', 'PROCESS_TAG=haproxy-738e99b4-b58e-4eff-b209-c4aa3748c994', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/738e99b4-b58e-4eff-b209-c4aa3748c994.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:02:35 np0005539564 nova_compute[226295]: 2025-11-29 08:02:35.607 226310 DEBUG nova.compute.manager [req-8330d287-f8fc-47aa-8d6c-00ce35b4d6c4 req-0281599c-eb6c-4a7f-aef2-ca8aaa976520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Received event network-vif-plugged-94d7210e-a29d-439a-9e36-bdd02b75076a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:02:35 np0005539564 nova_compute[226295]: 2025-11-29 08:02:35.608 226310 DEBUG oslo_concurrency.lockutils [req-8330d287-f8fc-47aa-8d6c-00ce35b4d6c4 req-0281599c-eb6c-4a7f-aef2-ca8aaa976520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:35 np0005539564 nova_compute[226295]: 2025-11-29 08:02:35.609 226310 DEBUG oslo_concurrency.lockutils [req-8330d287-f8fc-47aa-8d6c-00ce35b4d6c4 req-0281599c-eb6c-4a7f-aef2-ca8aaa976520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:35 np0005539564 nova_compute[226295]: 2025-11-29 08:02:35.610 226310 DEBUG oslo_concurrency.lockutils [req-8330d287-f8fc-47aa-8d6c-00ce35b4d6c4 req-0281599c-eb6c-4a7f-aef2-ca8aaa976520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:35 np0005539564 nova_compute[226295]: 2025-11-29 08:02:35.610 226310 DEBUG nova.compute.manager [req-8330d287-f8fc-47aa-8d6c-00ce35b4d6c4 req-0281599c-eb6c-4a7f-aef2-ca8aaa976520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Processing event network-vif-plugged-94d7210e-a29d-439a-9e36-bdd02b75076a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:02:36 np0005539564 podman[251987]: 2025-11-29 08:02:35.908943704 +0000 UTC m=+0.043052313 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:02:36 np0005539564 podman[251987]: 2025-11-29 08:02:36.17316788 +0000 UTC m=+0.307276439 container create b960a952631bd5b25b12965999ffa2adcae6c09b59627b96438430f5c027877b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:02:36 np0005539564 systemd[1]: Started libpod-conmon-b960a952631bd5b25b12965999ffa2adcae6c09b59627b96438430f5c027877b.scope.
Nov 29 03:02:36 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:02:36 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d62aca359b73303445f0c00c0f95de6a857f01c9ba830216964a081760013fa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:36 np0005539564 podman[251987]: 2025-11-29 08:02:36.299175879 +0000 UTC m=+0.433284448 container init b960a952631bd5b25b12965999ffa2adcae6c09b59627b96438430f5c027877b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:02:36 np0005539564 podman[251987]: 2025-11-29 08:02:36.310526385 +0000 UTC m=+0.444634994 container start b960a952631bd5b25b12965999ffa2adcae6c09b59627b96438430f5c027877b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 03:02:36 np0005539564 nova_compute[226295]: 2025-11-29 08:02:36.337 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:36 np0005539564 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[252038]: [NOTICE]   (252047) : New worker (252050) forked
Nov 29 03:02:36 np0005539564 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[252038]: [NOTICE]   (252047) : Loading success.
Nov 29 03:02:36 np0005539564 nova_compute[226295]: 2025-11-29 08:02:36.399 226310 DEBUG nova.compute.manager [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:02:36 np0005539564 nova_compute[226295]: 2025-11-29 08:02:36.400 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403356.398285, fb5b9f0e-9622-448b-8fa7-6c96fcc794cf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:02:36 np0005539564 nova_compute[226295]: 2025-11-29 08:02:36.400 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] VM Started (Lifecycle Event)#033[00m
Nov 29 03:02:36 np0005539564 nova_compute[226295]: 2025-11-29 08:02:36.406 226310 DEBUG nova.virt.libvirt.driver [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:02:36 np0005539564 nova_compute[226295]: 2025-11-29 08:02:36.410 226310 INFO nova.virt.libvirt.driver [-] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Instance spawned successfully.#033[00m
Nov 29 03:02:36 np0005539564 nova_compute[226295]: 2025-11-29 08:02:36.410 226310 DEBUG nova.virt.libvirt.driver [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:02:36 np0005539564 nova_compute[226295]: 2025-11-29 08:02:36.426 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:02:36 np0005539564 nova_compute[226295]: 2025-11-29 08:02:36.430 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:02:36 np0005539564 nova_compute[226295]: 2025-11-29 08:02:36.500 226310 DEBUG nova.virt.libvirt.driver [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:02:36 np0005539564 nova_compute[226295]: 2025-11-29 08:02:36.502 226310 DEBUG nova.virt.libvirt.driver [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:02:36 np0005539564 nova_compute[226295]: 2025-11-29 08:02:36.502 226310 DEBUG nova.virt.libvirt.driver [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:02:36 np0005539564 nova_compute[226295]: 2025-11-29 08:02:36.503 226310 DEBUG nova.virt.libvirt.driver [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:02:36 np0005539564 nova_compute[226295]: 2025-11-29 08:02:36.504 226310 DEBUG nova.virt.libvirt.driver [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:02:36 np0005539564 nova_compute[226295]: 2025-11-29 08:02:36.505 226310 DEBUG nova.virt.libvirt.driver [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:02:36 np0005539564 nova_compute[226295]: 2025-11-29 08:02:36.526 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:02:36 np0005539564 nova_compute[226295]: 2025-11-29 08:02:36.527 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403356.398621, fb5b9f0e-9622-448b-8fa7-6c96fcc794cf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:02:36 np0005539564 nova_compute[226295]: 2025-11-29 08:02:36.528 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:02:36 np0005539564 nova_compute[226295]: 2025-11-29 08:02:36.644 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:02:36 np0005539564 nova_compute[226295]: 2025-11-29 08:02:36.650 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403356.4031746, fb5b9f0e-9622-448b-8fa7-6c96fcc794cf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:02:36 np0005539564 nova_compute[226295]: 2025-11-29 08:02:36.650 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:02:36 np0005539564 nova_compute[226295]: 2025-11-29 08:02:36.702 226310 INFO nova.compute.manager [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Took 12.88 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:02:36 np0005539564 nova_compute[226295]: 2025-11-29 08:02:36.703 226310 DEBUG nova.compute.manager [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:02:36 np0005539564 nova_compute[226295]: 2025-11-29 08:02:36.706 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:02:36 np0005539564 nova_compute[226295]: 2025-11-29 08:02:36.717 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:02:36 np0005539564 nova_compute[226295]: 2025-11-29 08:02:36.816 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:02:36 np0005539564 nova_compute[226295]: 2025-11-29 08:02:36.844 226310 INFO nova.compute.manager [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Took 14.05 seconds to build instance.#033[00m
Nov 29 03:02:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:36.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:37 np0005539564 nova_compute[226295]: 2025-11-29 08:02:37.026 226310 DEBUG oslo_concurrency.lockutils [None req-ea6d816f-0d17-4b50-bd4b-71f40f27c3d9 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.362s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:37.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:37 np0005539564 nova_compute[226295]: 2025-11-29 08:02:37.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:37 np0005539564 nova_compute[226295]: 2025-11-29 08:02:37.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:02:37 np0005539564 nova_compute[226295]: 2025-11-29 08:02:37.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:02:37 np0005539564 nova_compute[226295]: 2025-11-29 08:02:37.508 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:02:37 np0005539564 nova_compute[226295]: 2025-11-29 08:02:37.509 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:02:37 np0005539564 nova_compute[226295]: 2025-11-29 08:02:37.509 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:02:37 np0005539564 nova_compute[226295]: 2025-11-29 08:02:37.509 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid fb5b9f0e-9622-448b-8fa7-6c96fcc794cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:02:37 np0005539564 nova_compute[226295]: 2025-11-29 08:02:37.769 226310 DEBUG nova.compute.manager [req-b360c10b-587b-4eca-bf38-555a12cbbf70 req-e9fecc08-1eec-4795-bdf6-1f5d95bf34e8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Received event network-vif-plugged-94d7210e-a29d-439a-9e36-bdd02b75076a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:02:37 np0005539564 nova_compute[226295]: 2025-11-29 08:02:37.770 226310 DEBUG oslo_concurrency.lockutils [req-b360c10b-587b-4eca-bf38-555a12cbbf70 req-e9fecc08-1eec-4795-bdf6-1f5d95bf34e8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:37 np0005539564 nova_compute[226295]: 2025-11-29 08:02:37.770 226310 DEBUG oslo_concurrency.lockutils [req-b360c10b-587b-4eca-bf38-555a12cbbf70 req-e9fecc08-1eec-4795-bdf6-1f5d95bf34e8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:37 np0005539564 nova_compute[226295]: 2025-11-29 08:02:37.771 226310 DEBUG oslo_concurrency.lockutils [req-b360c10b-587b-4eca-bf38-555a12cbbf70 req-e9fecc08-1eec-4795-bdf6-1f5d95bf34e8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:37 np0005539564 nova_compute[226295]: 2025-11-29 08:02:37.772 226310 DEBUG nova.compute.manager [req-b360c10b-587b-4eca-bf38-555a12cbbf70 req-e9fecc08-1eec-4795-bdf6-1f5d95bf34e8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] No waiting events found dispatching network-vif-plugged-94d7210e-a29d-439a-9e36-bdd02b75076a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:02:37 np0005539564 nova_compute[226295]: 2025-11-29 08:02:37.772 226310 WARNING nova.compute.manager [req-b360c10b-587b-4eca-bf38-555a12cbbf70 req-e9fecc08-1eec-4795-bdf6-1f5d95bf34e8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Received unexpected event network-vif-plugged-94d7210e-a29d-439a-9e36-bdd02b75076a for instance with vm_state active and task_state None.#033[00m
Nov 29 03:02:37 np0005539564 nova_compute[226295]: 2025-11-29 08:02:37.925 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:38.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:39.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:39 np0005539564 nova_compute[226295]: 2025-11-29 08:02:39.314 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:39 np0005539564 nova_compute[226295]: 2025-11-29 08:02:39.675 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Updating instance_info_cache with network_info: [{"id": "94d7210e-a29d-439a-9e36-bdd02b75076a", "address": "fa:16:3e:72:b9:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d7210e-a2", "ovs_interfaceid": "94d7210e-a29d-439a-9e36-bdd02b75076a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:02:39 np0005539564 nova_compute[226295]: 2025-11-29 08:02:39.691 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:02:39 np0005539564 nova_compute[226295]: 2025-11-29 08:02:39.692 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:02:39 np0005539564 nova_compute[226295]: 2025-11-29 08:02:39.694 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:39 np0005539564 nova_compute[226295]: 2025-11-29 08:02:39.695 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:40 np0005539564 nova_compute[226295]: 2025-11-29 08:02:40.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:40.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:41.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:41 np0005539564 nova_compute[226295]: 2025-11-29 08:02:41.339 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:42 np0005539564 nova_compute[226295]: 2025-11-29 08:02:42.953 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:42.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:43.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:43 np0005539564 nova_compute[226295]: 2025-11-29 08:02:43.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:43 np0005539564 nova_compute[226295]: 2025-11-29 08:02:43.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:43 np0005539564 nova_compute[226295]: 2025-11-29 08:02:43.371 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:43 np0005539564 nova_compute[226295]: 2025-11-29 08:02:43.372 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:43 np0005539564 nova_compute[226295]: 2025-11-29 08:02:43.372 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:43 np0005539564 nova_compute[226295]: 2025-11-29 08:02:43.372 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:02:43 np0005539564 nova_compute[226295]: 2025-11-29 08:02:43.373 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:43 np0005539564 nova_compute[226295]: 2025-11-29 08:02:43.488 226310 DEBUG nova.compute.manager [req-289dde87-e5e2-41b2-ad65-ce8fb9f3fe5a req-c772fd74-bf31-4143-8bc1-061869b71837 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Received event network-changed-94d7210e-a29d-439a-9e36-bdd02b75076a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:02:43 np0005539564 nova_compute[226295]: 2025-11-29 08:02:43.490 226310 DEBUG nova.compute.manager [req-289dde87-e5e2-41b2-ad65-ce8fb9f3fe5a req-c772fd74-bf31-4143-8bc1-061869b71837 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Refreshing instance network info cache due to event network-changed-94d7210e-a29d-439a-9e36-bdd02b75076a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:02:43 np0005539564 nova_compute[226295]: 2025-11-29 08:02:43.490 226310 DEBUG oslo_concurrency.lockutils [req-289dde87-e5e2-41b2-ad65-ce8fb9f3fe5a req-c772fd74-bf31-4143-8bc1-061869b71837 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:02:43 np0005539564 nova_compute[226295]: 2025-11-29 08:02:43.491 226310 DEBUG oslo_concurrency.lockutils [req-289dde87-e5e2-41b2-ad65-ce8fb9f3fe5a req-c772fd74-bf31-4143-8bc1-061869b71837 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:02:43 np0005539564 nova_compute[226295]: 2025-11-29 08:02:43.491 226310 DEBUG nova.network.neutron [req-289dde87-e5e2-41b2-ad65-ce8fb9f3fe5a req-c772fd74-bf31-4143-8bc1-061869b71837 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Refreshing network info cache for port 94d7210e-a29d-439a-9e36-bdd02b75076a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:02:43 np0005539564 podman[252064]: 2025-11-29 08:02:43.548118033 +0000 UTC m=+0.087224134 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:02:43 np0005539564 podman[252063]: 2025-11-29 08:02:43.561949406 +0000 UTC m=+0.102220279 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 03:02:43 np0005539564 podman[252062]: 2025-11-29 08:02:43.578649176 +0000 UTC m=+0.126107152 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:02:43 np0005539564 nova_compute[226295]: 2025-11-29 08:02:43.911 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:44 np0005539564 nova_compute[226295]: 2025-11-29 08:02:44.012 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000043 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:02:44 np0005539564 nova_compute[226295]: 2025-11-29 08:02:44.013 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000043 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:02:44 np0005539564 nova_compute[226295]: 2025-11-29 08:02:44.231 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:02:44 np0005539564 nova_compute[226295]: 2025-11-29 08:02:44.233 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4513MB free_disk=20.9168701171875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:02:44 np0005539564 nova_compute[226295]: 2025-11-29 08:02:44.233 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:44 np0005539564 nova_compute[226295]: 2025-11-29 08:02:44.233 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:44 np0005539564 nova_compute[226295]: 2025-11-29 08:02:44.384 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance fb5b9f0e-9622-448b-8fa7-6c96fcc794cf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:02:44 np0005539564 nova_compute[226295]: 2025-11-29 08:02:44.385 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:02:44 np0005539564 nova_compute[226295]: 2025-11-29 08:02:44.386 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:02:44 np0005539564 nova_compute[226295]: 2025-11-29 08:02:44.412 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing inventories for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:02:44 np0005539564 nova_compute[226295]: 2025-11-29 08:02:44.496 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating ProviderTree inventory for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:02:44 np0005539564 nova_compute[226295]: 2025-11-29 08:02:44.497 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating inventory in ProviderTree for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:02:44 np0005539564 nova_compute[226295]: 2025-11-29 08:02:44.520 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing aggregate associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:02:44 np0005539564 nova_compute[226295]: 2025-11-29 08:02:44.543 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing trait associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:02:44 np0005539564 nova_compute[226295]: 2025-11-29 08:02:44.583 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:44 np0005539564 nova_compute[226295]: 2025-11-29 08:02:44.804 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:44.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:02:45 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2126488096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:02:45 np0005539564 nova_compute[226295]: 2025-11-29 08:02:45.077 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:45 np0005539564 nova_compute[226295]: 2025-11-29 08:02:45.085 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:02:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:45.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:45 np0005539564 nova_compute[226295]: 2025-11-29 08:02:45.149 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:02:45 np0005539564 nova_compute[226295]: 2025-11-29 08:02:45.177 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:02:45 np0005539564 nova_compute[226295]: 2025-11-29 08:02:45.177 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.944s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:45 np0005539564 nova_compute[226295]: 2025-11-29 08:02:45.365 226310 DEBUG nova.network.neutron [req-289dde87-e5e2-41b2-ad65-ce8fb9f3fe5a req-c772fd74-bf31-4143-8bc1-061869b71837 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Updated VIF entry in instance network info cache for port 94d7210e-a29d-439a-9e36-bdd02b75076a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:02:45 np0005539564 nova_compute[226295]: 2025-11-29 08:02:45.366 226310 DEBUG nova.network.neutron [req-289dde87-e5e2-41b2-ad65-ce8fb9f3fe5a req-c772fd74-bf31-4143-8bc1-061869b71837 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Updating instance_info_cache with network_info: [{"id": "94d7210e-a29d-439a-9e36-bdd02b75076a", "address": "fa:16:3e:72:b9:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d7210e-a2", "ovs_interfaceid": "94d7210e-a29d-439a-9e36-bdd02b75076a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:02:45 np0005539564 nova_compute[226295]: 2025-11-29 08:02:45.429 226310 DEBUG oslo_concurrency.lockutils [req-289dde87-e5e2-41b2-ad65-ce8fb9f3fe5a req-c772fd74-bf31-4143-8bc1-061869b71837 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:02:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:46 np0005539564 nova_compute[226295]: 2025-11-29 08:02:46.345 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:46 np0005539564 nova_compute[226295]: 2025-11-29 08:02:46.588 226310 DEBUG nova.compute.manager [req-10dfc7e4-6ace-47f4-b197-c5bbe072b3f3 req-d8f73c77-4606-4e3c-94f0-fb7bd7ad8d60 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Received event network-changed-94d7210e-a29d-439a-9e36-bdd02b75076a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:02:46 np0005539564 nova_compute[226295]: 2025-11-29 08:02:46.589 226310 DEBUG nova.compute.manager [req-10dfc7e4-6ace-47f4-b197-c5bbe072b3f3 req-d8f73c77-4606-4e3c-94f0-fb7bd7ad8d60 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Refreshing instance network info cache due to event network-changed-94d7210e-a29d-439a-9e36-bdd02b75076a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:02:46 np0005539564 nova_compute[226295]: 2025-11-29 08:02:46.589 226310 DEBUG oslo_concurrency.lockutils [req-10dfc7e4-6ace-47f4-b197-c5bbe072b3f3 req-d8f73c77-4606-4e3c-94f0-fb7bd7ad8d60 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:02:46 np0005539564 nova_compute[226295]: 2025-11-29 08:02:46.590 226310 DEBUG oslo_concurrency.lockutils [req-10dfc7e4-6ace-47f4-b197-c5bbe072b3f3 req-d8f73c77-4606-4e3c-94f0-fb7bd7ad8d60 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:02:46 np0005539564 nova_compute[226295]: 2025-11-29 08:02:46.590 226310 DEBUG nova.network.neutron [req-10dfc7e4-6ace-47f4-b197-c5bbe072b3f3 req-d8f73c77-4606-4e3c-94f0-fb7bd7ad8d60 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Refreshing network info cache for port 94d7210e-a29d-439a-9e36-bdd02b75076a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:02:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:46.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:47.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:47 np0005539564 nova_compute[226295]: 2025-11-29 08:02:47.956 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:48.112 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:02:48 np0005539564 nova_compute[226295]: 2025-11-29 08:02:48.113 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:48.114 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:02:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:48.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:49.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:50.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:51.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:51 np0005539564 nova_compute[226295]: 2025-11-29 08:02:51.348 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:51 np0005539564 nova_compute[226295]: 2025-11-29 08:02:51.402 226310 DEBUG nova.network.neutron [req-10dfc7e4-6ace-47f4-b197-c5bbe072b3f3 req-d8f73c77-4606-4e3c-94f0-fb7bd7ad8d60 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Updated VIF entry in instance network info cache for port 94d7210e-a29d-439a-9e36-bdd02b75076a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:02:51 np0005539564 nova_compute[226295]: 2025-11-29 08:02:51.403 226310 DEBUG nova.network.neutron [req-10dfc7e4-6ace-47f4-b197-c5bbe072b3f3 req-d8f73c77-4606-4e3c-94f0-fb7bd7ad8d60 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Updating instance_info_cache with network_info: [{"id": "94d7210e-a29d-439a-9e36-bdd02b75076a", "address": "fa:16:3e:72:b9:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d7210e-a2", "ovs_interfaceid": "94d7210e-a29d-439a-9e36-bdd02b75076a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:02:51 np0005539564 nova_compute[226295]: 2025-11-29 08:02:51.428 226310 DEBUG oslo_concurrency.lockutils [req-10dfc7e4-6ace-47f4-b197-c5bbe072b3f3 req-d8f73c77-4606-4e3c-94f0-fb7bd7ad8d60 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:02:51 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Nov 29 03:02:51 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 29 03:02:51 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Nov 29 03:02:51 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 29 03:02:51 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Nov 29 03:02:51 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Nov 29 03:02:51 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 29 03:02:51 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Nov 29 03:02:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:52.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:53 np0005539564 nova_compute[226295]: 2025-11-29 08:02:53.006 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:53.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:54 np0005539564 ovn_controller[130591]: 2025-11-29T08:02:54Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:72:b9:77 10.100.0.5
Nov 29 03:02:54 np0005539564 ovn_controller[130591]: 2025-11-29T08:02:54Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:72:b9:77 10.100.0.5
Nov 29 03:02:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:54.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:02:55.116 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:55.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:56 np0005539564 nova_compute[226295]: 2025-11-29 08:02:56.351 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:56.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:57.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:58 np0005539564 nova_compute[226295]: 2025-11-29 08:02:58.009 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:58 np0005539564 nova_compute[226295]: 2025-11-29 08:02:58.169 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:58.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:02:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:59.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:00.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:01.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:01 np0005539564 nova_compute[226295]: 2025-11-29 08:03:01.355 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:02.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:03 np0005539564 nova_compute[226295]: 2025-11-29 08:03:03.014 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:03.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:03.712 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:03.713 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:03.713 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:04.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:05.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:06 np0005539564 nova_compute[226295]: 2025-11-29 08:03:06.358 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:07.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:07.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:08 np0005539564 nova_compute[226295]: 2025-11-29 08:03:08.017 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:09.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:09.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:11.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:11.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:11 np0005539564 nova_compute[226295]: 2025-11-29 08:03:11.362 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:13.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:13 np0005539564 nova_compute[226295]: 2025-11-29 08:03:13.021 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:13.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:13 np0005539564 podman[252193]: 2025-11-29 08:03:13.753858945 +0000 UTC m=+0.061488189 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:03:13 np0005539564 podman[252191]: 2025-11-29 08:03:13.832203039 +0000 UTC m=+0.149450042 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:03:13 np0005539564 podman[252192]: 2025-11-29 08:03:13.839928957 +0000 UTC m=+0.146309887 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:03:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:15.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:15 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:03:15 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:03:15 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:03:15 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:03:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:15.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:16 np0005539564 nova_compute[226295]: 2025-11-29 08:03:16.366 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:17.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:17.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0.
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:03:17.559524) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403397559569, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1462, "num_deletes": 254, "total_data_size": 3226254, "memory_usage": 3273392, "flush_reason": "Manual Compaction"}
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403397579896, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 2117411, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35856, "largest_seqno": 37313, "table_properties": {"data_size": 2111085, "index_size": 3525, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13942, "raw_average_key_size": 20, "raw_value_size": 2098245, "raw_average_value_size": 3090, "num_data_blocks": 154, "num_entries": 679, "num_filter_entries": 679, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403286, "oldest_key_time": 1764403286, "file_creation_time": 1764403397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 20477 microseconds, and 9779 cpu microseconds.
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:03:17.579996) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 2117411 bytes OK
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:03:17.580028) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:03:17.582236) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:03:17.582265) EVENT_LOG_v1 {"time_micros": 1764403397582255, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:03:17.582294) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 3219366, prev total WAL file size 3219366, number of live WAL files 2.
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:03:17.584314) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(2067KB)], [66(9232KB)]
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403397584408, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 11571450, "oldest_snapshot_seqno": -1}
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 6564 keys, 9586057 bytes, temperature: kUnknown
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403397715522, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 9586057, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9542664, "index_size": 25873, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16453, "raw_key_size": 169844, "raw_average_key_size": 25, "raw_value_size": 9425238, "raw_average_value_size": 1435, "num_data_blocks": 1027, "num_entries": 6564, "num_filter_entries": 6564, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764403397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:03:17.716052) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9586057 bytes
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:03:17.723444) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 88.2 rd, 73.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.0 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(10.0) write-amplify(4.5) OK, records in: 7093, records dropped: 529 output_compression: NoCompression
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:03:17.723478) EVENT_LOG_v1 {"time_micros": 1764403397723463, "job": 40, "event": "compaction_finished", "compaction_time_micros": 131245, "compaction_time_cpu_micros": 22297, "output_level": 6, "num_output_files": 1, "total_output_size": 9586057, "num_input_records": 7093, "num_output_records": 6564, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403397724389, "job": 40, "event": "table_file_deletion", "file_number": 68}
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403397728327, "job": 40, "event": "table_file_deletion", "file_number": 66}
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:03:17.584077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:03:17.728383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:03:17.728391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:03:17.728394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:03:17.728397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:03:17.728401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:03:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:03:18 np0005539564 nova_compute[226295]: 2025-11-29 08:03:18.025 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:03:18 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2705964177' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:03:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:19.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:19.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:21.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:21.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:21 np0005539564 nova_compute[226295]: 2025-11-29 08:03:21.368 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:22 np0005539564 nova_compute[226295]: 2025-11-29 08:03:22.798 226310 DEBUG oslo_concurrency.lockutils [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "interface-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-c8838967-6481-4acd-b59f-0be782c9a361" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:22 np0005539564 nova_compute[226295]: 2025-11-29 08:03:22.799 226310 DEBUG oslo_concurrency.lockutils [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "interface-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-c8838967-6481-4acd-b59f-0be782c9a361" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:22 np0005539564 nova_compute[226295]: 2025-11-29 08:03:22.800 226310 DEBUG nova.objects.instance [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'flavor' on Instance uuid fb5b9f0e-9622-448b-8fa7-6c96fcc794cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:03:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:23.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:23 np0005539564 nova_compute[226295]: 2025-11-29 08:03:23.050 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:23.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:25.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:25.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:25 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:03:25 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:03:26 np0005539564 nova_compute[226295]: 2025-11-29 08:03:26.370 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:26 np0005539564 nova_compute[226295]: 2025-11-29 08:03:26.458 226310 DEBUG nova.objects.instance [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'pci_requests' on Instance uuid fb5b9f0e-9622-448b-8fa7-6c96fcc794cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:03:26 np0005539564 nova_compute[226295]: 2025-11-29 08:03:26.472 226310 DEBUG nova.network.neutron [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:03:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:27.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:27.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:27 np0005539564 nova_compute[226295]: 2025-11-29 08:03:27.308 226310 DEBUG nova.compute.manager [req-33c736af-0b2e-4fe9-b8dd-8459053aab51 req-50000829-e926-4c46-a95c-f100b0c3b0ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Received event network-changed-94d7210e-a29d-439a-9e36-bdd02b75076a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:03:27 np0005539564 nova_compute[226295]: 2025-11-29 08:03:27.309 226310 DEBUG nova.compute.manager [req-33c736af-0b2e-4fe9-b8dd-8459053aab51 req-50000829-e926-4c46-a95c-f100b0c3b0ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Refreshing instance network info cache due to event network-changed-94d7210e-a29d-439a-9e36-bdd02b75076a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:03:27 np0005539564 nova_compute[226295]: 2025-11-29 08:03:27.309 226310 DEBUG oslo_concurrency.lockutils [req-33c736af-0b2e-4fe9-b8dd-8459053aab51 req-50000829-e926-4c46-a95c-f100b0c3b0ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:03:27 np0005539564 nova_compute[226295]: 2025-11-29 08:03:27.310 226310 DEBUG oslo_concurrency.lockutils [req-33c736af-0b2e-4fe9-b8dd-8459053aab51 req-50000829-e926-4c46-a95c-f100b0c3b0ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:03:27 np0005539564 nova_compute[226295]: 2025-11-29 08:03:27.310 226310 DEBUG nova.network.neutron [req-33c736af-0b2e-4fe9-b8dd-8459053aab51 req-50000829-e926-4c46-a95c-f100b0c3b0ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Refreshing network info cache for port 94d7210e-a29d-439a-9e36-bdd02b75076a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:03:27 np0005539564 nova_compute[226295]: 2025-11-29 08:03:27.348 226310 DEBUG nova.policy [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a814d0c4600e45d9a1fac7bac5b7e69e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f69605de164b4c27ae715521263676fe', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:03:28 np0005539564 nova_compute[226295]: 2025-11-29 08:03:28.054 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:29.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:29.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:30 np0005539564 nova_compute[226295]: 2025-11-29 08:03:30.473 226310 DEBUG nova.network.neutron [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Successfully updated port: c8838967-6481-4acd-b59f-0be782c9a361 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:03:30 np0005539564 nova_compute[226295]: 2025-11-29 08:03:30.494 226310 DEBUG oslo_concurrency.lockutils [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:03:30 np0005539564 nova_compute[226295]: 2025-11-29 08:03:30.584 226310 DEBUG nova.compute.manager [req-af21ac2c-591f-4daa-9a42-a5cc8100320b req-931a1395-966a-4a90-9642-fdfdcf6240a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Received event network-changed-c8838967-6481-4acd-b59f-0be782c9a361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:03:30 np0005539564 nova_compute[226295]: 2025-11-29 08:03:30.584 226310 DEBUG nova.compute.manager [req-af21ac2c-591f-4daa-9a42-a5cc8100320b req-931a1395-966a-4a90-9642-fdfdcf6240a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Refreshing instance network info cache due to event network-changed-c8838967-6481-4acd-b59f-0be782c9a361. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:03:30 np0005539564 nova_compute[226295]: 2025-11-29 08:03:30.584 226310 DEBUG oslo_concurrency.lockutils [req-af21ac2c-591f-4daa-9a42-a5cc8100320b req-931a1395-966a-4a90-9642-fdfdcf6240a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:03:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:30 np0005539564 nova_compute[226295]: 2025-11-29 08:03:30.629 226310 DEBUG nova.network.neutron [req-33c736af-0b2e-4fe9-b8dd-8459053aab51 req-50000829-e926-4c46-a95c-f100b0c3b0ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Updated VIF entry in instance network info cache for port 94d7210e-a29d-439a-9e36-bdd02b75076a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:03:30 np0005539564 nova_compute[226295]: 2025-11-29 08:03:30.629 226310 DEBUG nova.network.neutron [req-33c736af-0b2e-4fe9-b8dd-8459053aab51 req-50000829-e926-4c46-a95c-f100b0c3b0ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Updating instance_info_cache with network_info: [{"id": "94d7210e-a29d-439a-9e36-bdd02b75076a", "address": "fa:16:3e:72:b9:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d7210e-a2", "ovs_interfaceid": "94d7210e-a29d-439a-9e36-bdd02b75076a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:03:30 np0005539564 nova_compute[226295]: 2025-11-29 08:03:30.650 226310 DEBUG oslo_concurrency.lockutils [req-33c736af-0b2e-4fe9-b8dd-8459053aab51 req-50000829-e926-4c46-a95c-f100b0c3b0ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:03:30 np0005539564 nova_compute[226295]: 2025-11-29 08:03:30.651 226310 DEBUG oslo_concurrency.lockutils [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquired lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:03:30 np0005539564 nova_compute[226295]: 2025-11-29 08:03:30.651 226310 DEBUG nova.network.neutron [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:03:30 np0005539564 nova_compute[226295]: 2025-11-29 08:03:30.877 226310 WARNING nova.network.neutron [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] 738e99b4-b58e-4eff-b209-c4aa3748c994 already exists in list: networks containing: ['738e99b4-b58e-4eff-b209-c4aa3748c994']. ignoring it#033[00m
Nov 29 03:03:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:31.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:31.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:31 np0005539564 nova_compute[226295]: 2025-11-29 08:03:31.372 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:32 np0005539564 nova_compute[226295]: 2025-11-29 08:03:32.585 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:33.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.057 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.105 226310 DEBUG nova.network.neutron [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Updating instance_info_cache with network_info: [{"id": "94d7210e-a29d-439a-9e36-bdd02b75076a", "address": "fa:16:3e:72:b9:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d7210e-a2", "ovs_interfaceid": "94d7210e-a29d-439a-9e36-bdd02b75076a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c8838967-6481-4acd-b59f-0be782c9a361", "address": "fa:16:3e:b1:8e:de", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8838967-64", "ovs_interfaceid": "c8838967-6481-4acd-b59f-0be782c9a361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.156 226310 DEBUG oslo_concurrency.lockutils [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Releasing lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.159 226310 DEBUG oslo_concurrency.lockutils [req-af21ac2c-591f-4daa-9a42-a5cc8100320b req-931a1395-966a-4a90-9642-fdfdcf6240a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.159 226310 DEBUG nova.network.neutron [req-af21ac2c-591f-4daa-9a42-a5cc8100320b req-931a1395-966a-4a90-9642-fdfdcf6240a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Refreshing network info cache for port c8838967-6481-4acd-b59f-0be782c9a361 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.166 226310 DEBUG nova.virt.libvirt.vif [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:02:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-881754980',display_name='tempest-tempest.common.compute-instance-881754980',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-881754980',id=67,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH0n9ZXn8H6JY1sjbCx/j99/wL1zxZy5QsBH0AsdRjLOqctx/oeY65gmDs4R5NwjnXMvJp27i+F5qDtP4SKtjrI8QpPaqSfAsVXkzWb4UIDMJE826KgCbMST4VlNYE+GQA==',key_name='tempest-keypair-1734268386',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:02:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-f4q2v3tm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:02:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=fb5b9f0e-9622-448b-8fa7-6c96fcc794cf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c8838967-6481-4acd-b59f-0be782c9a361", "address": "fa:16:3e:b1:8e:de", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8838967-64", "ovs_interfaceid": "c8838967-6481-4acd-b59f-0be782c9a361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.167 226310 DEBUG nova.network.os_vif_util [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "c8838967-6481-4acd-b59f-0be782c9a361", "address": "fa:16:3e:b1:8e:de", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8838967-64", "ovs_interfaceid": "c8838967-6481-4acd-b59f-0be782c9a361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.169 226310 DEBUG nova.network.os_vif_util [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:8e:de,bridge_name='br-int',has_traffic_filtering=True,id=c8838967-6481-4acd-b59f-0be782c9a361,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc8838967-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.170 226310 DEBUG os_vif [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:8e:de,bridge_name='br-int',has_traffic_filtering=True,id=c8838967-6481-4acd-b59f-0be782c9a361,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc8838967-64') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.171 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.171 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.172 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.176 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.177 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc8838967-64, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.178 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc8838967-64, col_values=(('external_ids', {'iface-id': 'c8838967-6481-4acd-b59f-0be782c9a361', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b1:8e:de', 'vm-uuid': 'fb5b9f0e-9622-448b-8fa7-6c96fcc794cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.180 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:33 np0005539564 NetworkManager[48997]: <info>  [1764403413.1819] manager: (tapc8838967-64): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.183 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.191 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.193 226310 INFO os_vif [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:8e:de,bridge_name='br-int',has_traffic_filtering=True,id=c8838967-6481-4acd-b59f-0be782c9a361,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc8838967-64')#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.194 226310 DEBUG nova.virt.libvirt.vif [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:02:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-881754980',display_name='tempest-tempest.common.compute-instance-881754980',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-881754980',id=67,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH0n9ZXn8H6JY1sjbCx/j99/wL1zxZy5QsBH0AsdRjLOqctx/oeY65gmDs4R5NwjnXMvJp27i+F5qDtP4SKtjrI8QpPaqSfAsVXkzWb4UIDMJE826KgCbMST4VlNYE+GQA==',key_name='tempest-keypair-1734268386',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:02:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-f4q2v3tm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:02:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=fb5b9f0e-9622-448b-8fa7-6c96fcc794cf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c8838967-6481-4acd-b59f-0be782c9a361", "address": "fa:16:3e:b1:8e:de", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8838967-64", "ovs_interfaceid": "c8838967-6481-4acd-b59f-0be782c9a361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.195 226310 DEBUG nova.network.os_vif_util [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "c8838967-6481-4acd-b59f-0be782c9a361", "address": "fa:16:3e:b1:8e:de", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8838967-64", "ovs_interfaceid": "c8838967-6481-4acd-b59f-0be782c9a361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.196 226310 DEBUG nova.network.os_vif_util [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:8e:de,bridge_name='br-int',has_traffic_filtering=True,id=c8838967-6481-4acd-b59f-0be782c9a361,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc8838967-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.200 226310 DEBUG nova.virt.libvirt.guest [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] attach device xml: <interface type="ethernet">
Nov 29 03:03:33 np0005539564 nova_compute[226295]:  <mac address="fa:16:3e:b1:8e:de"/>
Nov 29 03:03:33 np0005539564 nova_compute[226295]:  <model type="virtio"/>
Nov 29 03:03:33 np0005539564 nova_compute[226295]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:03:33 np0005539564 nova_compute[226295]:  <mtu size="1442"/>
Nov 29 03:03:33 np0005539564 nova_compute[226295]:  <target dev="tapc8838967-64"/>
Nov 29 03:03:33 np0005539564 nova_compute[226295]: </interface>
Nov 29 03:03:33 np0005539564 nova_compute[226295]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:03:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:33.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:33 np0005539564 kernel: tapc8838967-64: entered promiscuous mode
Nov 29 03:03:33 np0005539564 NetworkManager[48997]: <info>  [1764403413.2235] manager: (tapc8838967-64): new Tun device (/org/freedesktop/NetworkManager/Devices/112)
Nov 29 03:03:33 np0005539564 ovn_controller[130591]: 2025-11-29T08:03:33Z|00209|binding|INFO|Claiming lport c8838967-6481-4acd-b59f-0be782c9a361 for this chassis.
Nov 29 03:03:33 np0005539564 ovn_controller[130591]: 2025-11-29T08:03:33Z|00210|binding|INFO|c8838967-6481-4acd-b59f-0be782c9a361: Claiming fa:16:3e:b1:8e:de 10.100.0.9
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.223 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:33.229 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:8e:de 10.100.0.9'], port_security=['fa:16:3e:b1:8e:de 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1620065171', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'fb5b9f0e-9622-448b-8fa7-6c96fcc794cf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-738e99b4-b58e-4eff-b209-c4aa3748c994', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1620065171', 'neutron:project_id': 'f69605de164b4c27ae715521263676fe', 'neutron:revision_number': '7', 'neutron:security_group_ids': '3edda898-8529-43cc-9949-7b5bcfbbe45d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05e918c3-f77d-4277-9e74-f8ddcf4ab8e9, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=c8838967-6481-4acd-b59f-0be782c9a361) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:03:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:33.230 139780 INFO neutron.agent.ovn.metadata.agent [-] Port c8838967-6481-4acd-b59f-0be782c9a361 in datapath 738e99b4-b58e-4eff-b209-c4aa3748c994 bound to our chassis#033[00m
Nov 29 03:03:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:33.232 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 738e99b4-b58e-4eff-b209-c4aa3748c994#033[00m
Nov 29 03:03:33 np0005539564 ovn_controller[130591]: 2025-11-29T08:03:33Z|00211|binding|INFO|Setting lport c8838967-6481-4acd-b59f-0be782c9a361 ovn-installed in OVS
Nov 29 03:03:33 np0005539564 ovn_controller[130591]: 2025-11-29T08:03:33Z|00212|binding|INFO|Setting lport c8838967-6481-4acd-b59f-0be782c9a361 up in Southbound
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.242 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:33.248 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0b499582-f8ed-4bb1-b9b0-fbabd7bddc65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:33 np0005539564 systemd-udevd[252543]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:03:33 np0005539564 NetworkManager[48997]: <info>  [1764403413.2752] device (tapc8838967-64): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:03:33 np0005539564 NetworkManager[48997]: <info>  [1764403413.2758] device (tapc8838967-64): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:03:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:33.288 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[a2f04e89-d100-488d-a4f3-a5cd52c196e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:33.291 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[735032f2-d92f-4dfe-a276-f9b7c210eaf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:33.331 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[ab26d7f2-ab02-4e8f-ab7a-035cf3d9816c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:33.358 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0cc02a30-d857-4a71-8a69-566e2fed8361]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap738e99b4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:be:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632991, 'reachable_time': 33612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252550, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:33.380 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6ffc58e0-5ea2-470f-a8a9-ab87520b25a0]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap738e99b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633006, 'tstamp': 633006}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252551, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap738e99b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633010, 'tstamp': 633010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252551, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:33.382 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap738e99b4-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.385 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:33.387 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap738e99b4-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:33.388 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:03:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:33.388 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap738e99b4-b0, col_values=(('external_ids', {'iface-id': '2a1fcde6-d99a-4732-a125-d24eb08c8766'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:33.389 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.396 226310 DEBUG nova.virt.libvirt.driver [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.397 226310 DEBUG nova.virt.libvirt.driver [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.398 226310 DEBUG nova.virt.libvirt.driver [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No VIF found with MAC fa:16:3e:72:b9:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.398 226310 DEBUG nova.virt.libvirt.driver [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No VIF found with MAC fa:16:3e:b1:8e:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.451 226310 DEBUG nova.virt.libvirt.guest [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:03:33 np0005539564 nova_compute[226295]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:03:33 np0005539564 nova_compute[226295]:  <nova:name>tempest-tempest.common.compute-instance-881754980</nova:name>
Nov 29 03:03:33 np0005539564 nova_compute[226295]:  <nova:creationTime>2025-11-29 08:03:33</nova:creationTime>
Nov 29 03:03:33 np0005539564 nova_compute[226295]:  <nova:flavor name="m1.nano">
Nov 29 03:03:33 np0005539564 nova_compute[226295]:    <nova:memory>128</nova:memory>
Nov 29 03:03:33 np0005539564 nova_compute[226295]:    <nova:disk>1</nova:disk>
Nov 29 03:03:33 np0005539564 nova_compute[226295]:    <nova:swap>0</nova:swap>
Nov 29 03:03:33 np0005539564 nova_compute[226295]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:03:33 np0005539564 nova_compute[226295]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:03:33 np0005539564 nova_compute[226295]:  </nova:flavor>
Nov 29 03:03:33 np0005539564 nova_compute[226295]:  <nova:owner>
Nov 29 03:03:33 np0005539564 nova_compute[226295]:    <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 03:03:33 np0005539564 nova_compute[226295]:    <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 03:03:33 np0005539564 nova_compute[226295]:  </nova:owner>
Nov 29 03:03:33 np0005539564 nova_compute[226295]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:03:33 np0005539564 nova_compute[226295]:  <nova:ports>
Nov 29 03:03:33 np0005539564 nova_compute[226295]:    <nova:port uuid="94d7210e-a29d-439a-9e36-bdd02b75076a">
Nov 29 03:03:33 np0005539564 nova_compute[226295]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:03:33 np0005539564 nova_compute[226295]:    </nova:port>
Nov 29 03:03:33 np0005539564 nova_compute[226295]:    <nova:port uuid="c8838967-6481-4acd-b59f-0be782c9a361">
Nov 29 03:03:33 np0005539564 nova_compute[226295]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:03:33 np0005539564 nova_compute[226295]:    </nova:port>
Nov 29 03:03:33 np0005539564 nova_compute[226295]:  </nova:ports>
Nov 29 03:03:33 np0005539564 nova_compute[226295]: </nova:instance>
Nov 29 03:03:33 np0005539564 nova_compute[226295]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:03:33 np0005539564 nova_compute[226295]: 2025-11-29 08:03:33.495 226310 DEBUG oslo_concurrency.lockutils [None req-97d2b501-1654-49ce-8a4a-4a0611becef5 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "interface-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-c8838967-6481-4acd-b59f-0be782c9a361" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 10.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:34 np0005539564 nova_compute[226295]: 2025-11-29 08:03:34.780 226310 DEBUG nova.compute.manager [req-ccb2765e-9556-4554-90c9-991c0b5ae81f req-ef4cd9fb-9b29-4725-9ee3-92868f6fc3d2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Received event network-vif-plugged-c8838967-6481-4acd-b59f-0be782c9a361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:03:34 np0005539564 nova_compute[226295]: 2025-11-29 08:03:34.780 226310 DEBUG oslo_concurrency.lockutils [req-ccb2765e-9556-4554-90c9-991c0b5ae81f req-ef4cd9fb-9b29-4725-9ee3-92868f6fc3d2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:34 np0005539564 nova_compute[226295]: 2025-11-29 08:03:34.781 226310 DEBUG oslo_concurrency.lockutils [req-ccb2765e-9556-4554-90c9-991c0b5ae81f req-ef4cd9fb-9b29-4725-9ee3-92868f6fc3d2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:34 np0005539564 nova_compute[226295]: 2025-11-29 08:03:34.781 226310 DEBUG oslo_concurrency.lockutils [req-ccb2765e-9556-4554-90c9-991c0b5ae81f req-ef4cd9fb-9b29-4725-9ee3-92868f6fc3d2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:34 np0005539564 nova_compute[226295]: 2025-11-29 08:03:34.781 226310 DEBUG nova.compute.manager [req-ccb2765e-9556-4554-90c9-991c0b5ae81f req-ef4cd9fb-9b29-4725-9ee3-92868f6fc3d2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] No waiting events found dispatching network-vif-plugged-c8838967-6481-4acd-b59f-0be782c9a361 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:03:34 np0005539564 nova_compute[226295]: 2025-11-29 08:03:34.781 226310 WARNING nova.compute.manager [req-ccb2765e-9556-4554-90c9-991c0b5ae81f req-ef4cd9fb-9b29-4725-9ee3-92868f6fc3d2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Received unexpected event network-vif-plugged-c8838967-6481-4acd-b59f-0be782c9a361 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:03:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:35.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:35.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:35 np0005539564 ovn_controller[130591]: 2025-11-29T08:03:35Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b1:8e:de 10.100.0.9
Nov 29 03:03:35 np0005539564 ovn_controller[130591]: 2025-11-29T08:03:35Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b1:8e:de 10.100.0.9
Nov 29 03:03:35 np0005539564 nova_compute[226295]: 2025-11-29 08:03:35.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:35 np0005539564 nova_compute[226295]: 2025-11-29 08:03:35.345 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:35 np0005539564 nova_compute[226295]: 2025-11-29 08:03:35.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:03:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:35 np0005539564 nova_compute[226295]: 2025-11-29 08:03:35.674 226310 DEBUG nova.network.neutron [req-af21ac2c-591f-4daa-9a42-a5cc8100320b req-931a1395-966a-4a90-9642-fdfdcf6240a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Updated VIF entry in instance network info cache for port c8838967-6481-4acd-b59f-0be782c9a361. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:03:35 np0005539564 nova_compute[226295]: 2025-11-29 08:03:35.675 226310 DEBUG nova.network.neutron [req-af21ac2c-591f-4daa-9a42-a5cc8100320b req-931a1395-966a-4a90-9642-fdfdcf6240a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Updating instance_info_cache with network_info: [{"id": "94d7210e-a29d-439a-9e36-bdd02b75076a", "address": "fa:16:3e:72:b9:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d7210e-a2", "ovs_interfaceid": "94d7210e-a29d-439a-9e36-bdd02b75076a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c8838967-6481-4acd-b59f-0be782c9a361", "address": "fa:16:3e:b1:8e:de", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8838967-64", "ovs_interfaceid": "c8838967-6481-4acd-b59f-0be782c9a361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:03:35 np0005539564 nova_compute[226295]: 2025-11-29 08:03:35.710 226310 DEBUG oslo_concurrency.lockutils [req-af21ac2c-591f-4daa-9a42-a5cc8100320b req-931a1395-966a-4a90-9642-fdfdcf6240a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.033 226310 DEBUG oslo_concurrency.lockutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "f4450ee9-fb6d-4c17-bee5-84291eedd055" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.033 226310 DEBUG oslo_concurrency.lockutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "f4450ee9-fb6d-4c17-bee5-84291eedd055" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.065 226310 DEBUG nova.compute.manager [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.351 226310 DEBUG oslo_concurrency.lockutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.352 226310 DEBUG oslo_concurrency.lockutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.360 226310 DEBUG nova.virt.hardware [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.361 226310 INFO nova.compute.claims [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.376 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.388 226310 DEBUG oslo_concurrency.lockutils [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "interface-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-c8838967-6481-4acd-b59f-0be782c9a361" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.389 226310 DEBUG oslo_concurrency.lockutils [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "interface-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-c8838967-6481-4acd-b59f-0be782c9a361" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.417 226310 DEBUG nova.objects.instance [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'flavor' on Instance uuid fb5b9f0e-9622-448b-8fa7-6c96fcc794cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.439 226310 DEBUG nova.virt.libvirt.vif [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:02:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-881754980',display_name='tempest-tempest.common.compute-instance-881754980',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-881754980',id=67,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH0n9ZXn8H6JY1sjbCx/j99/wL1zxZy5QsBH0AsdRjLOqctx/oeY65gmDs4R5NwjnXMvJp27i+F5qDtP4SKtjrI8QpPaqSfAsVXkzWb4UIDMJE826KgCbMST4VlNYE+GQA==',key_name='tempest-keypair-1734268386',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:02:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-f4q2v3tm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:02:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=fb5b9f0e-9622-448b-8fa7-6c96fcc794cf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c8838967-6481-4acd-b59f-0be782c9a361", "address": "fa:16:3e:b1:8e:de", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8838967-64", "ovs_interfaceid": "c8838967-6481-4acd-b59f-0be782c9a361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.439 226310 DEBUG nova.network.os_vif_util [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "c8838967-6481-4acd-b59f-0be782c9a361", "address": "fa:16:3e:b1:8e:de", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8838967-64", "ovs_interfaceid": "c8838967-6481-4acd-b59f-0be782c9a361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.441 226310 DEBUG nova.network.os_vif_util [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:8e:de,bridge_name='br-int',has_traffic_filtering=True,id=c8838967-6481-4acd-b59f-0be782c9a361,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc8838967-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.452 226310 DEBUG nova.virt.libvirt.guest [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:b1:8e:de"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc8838967-64"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.455 226310 DEBUG nova.virt.libvirt.guest [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:b1:8e:de"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc8838967-64"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.457 226310 DEBUG nova.virt.libvirt.driver [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Attempting to detach device tapc8838967-64 from instance fb5b9f0e-9622-448b-8fa7-6c96fcc794cf from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.457 226310 DEBUG nova.virt.libvirt.guest [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] detach device xml: <interface type="ethernet">
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <mac address="fa:16:3e:b1:8e:de"/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <model type="virtio"/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <mtu size="1442"/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <target dev="tapc8838967-64"/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]: </interface>
Nov 29 03:03:36 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.462 226310 DEBUG nova.virt.libvirt.guest [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:b1:8e:de"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc8838967-64"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.469 226310 DEBUG nova.virt.libvirt.guest [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:b1:8e:de"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc8838967-64"/></interface>not found in domain: <domain type='kvm' id='30'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <name>instance-00000043</name>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <uuid>fb5b9f0e-9622-448b-8fa7-6c96fcc794cf</uuid>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <nova:name>tempest-tempest.common.compute-instance-881754980</nova:name>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <nova:creationTime>2025-11-29 08:03:33</nova:creationTime>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <nova:flavor name="m1.nano">
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:memory>128</nova:memory>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:disk>1</nova:disk>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:swap>0</nova:swap>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </nova:flavor>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <nova:owner>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </nova:owner>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <nova:ports>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:port uuid="94d7210e-a29d-439a-9e36-bdd02b75076a">
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </nova:port>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:port uuid="c8838967-6481-4acd-b59f-0be782c9a361">
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </nova:port>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </nova:ports>
Nov 29 03:03:36 np0005539564 nova_compute[226295]: </nova:instance>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <memory unit='KiB'>131072</memory>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <resource>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <partition>/machine</partition>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </resource>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <sysinfo type='smbios'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <entry name='serial'>fb5b9f0e-9622-448b-8fa7-6c96fcc794cf</entry>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <entry name='uuid'>fb5b9f0e-9622-448b-8fa7-6c96fcc794cf</entry>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <boot dev='hd'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <smbios mode='sysinfo'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <vmcoreinfo state='on'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <feature policy='require' name='x2apic'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <feature policy='require' name='vme'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <clock offset='utc'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <timer name='hpet' present='no'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <on_reboot>restart</on_reboot>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <on_crash>destroy</on_crash>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <disk type='network' device='disk'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <auth username='openstack'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <source protocol='rbd' name='vms/fb5b9f0e-9622-448b-8fa7-6c96fcc794cf_disk' index='2'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target dev='vda' bus='virtio'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='virtio-disk0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <disk type='network' device='cdrom'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <auth username='openstack'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <source protocol='rbd' name='vms/fb5b9f0e-9622-448b-8fa7-6c96fcc794cf_disk.config' index='1'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target dev='sda' bus='sata'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <readonly/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='sata0-0-0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pcie.0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='1' port='0x10'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.1'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='2' port='0x11'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.2'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='3' port='0x12'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.3'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='4' port='0x13'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.4'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='5' port='0x14'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.5'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='6' port='0x15'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.6'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='7' port='0x16'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.7'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='8' port='0x17'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.8'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='9' port='0x18'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.9'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='10' port='0x19'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.10'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='11' port='0x1a'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.11'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='12' port='0x1b'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.12'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='13' port='0x1c'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.13'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='14' port='0x1d'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.14'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='15' port='0x1e'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.15'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='16' port='0x1f'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.16'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='17' port='0x20'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.17'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='18' port='0x21'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.18'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='19' port='0x22'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.19'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='20' port='0x23'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.20'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='21' port='0x24'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.21'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='22' port='0x25'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.22'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='23' port='0x26'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.23'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='24' port='0x27'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.24'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='25' port='0x28'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.25'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-pci-bridge'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.26'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='usb'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='sata' index='0'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='ide'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <interface type='ethernet'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <mac address='fa:16:3e:72:b9:77'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target dev='tap94d7210e-a2'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model type='virtio'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <mtu size='1442'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='net0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <interface type='ethernet'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <mac address='fa:16:3e:b1:8e:de'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target dev='tapc8838967-64'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model type='virtio'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <mtu size='1442'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='net1'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <serial type='pty'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <source path='/dev/pts/0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <log file='/var/lib/nova/instances/fb5b9f0e-9622-448b-8fa7-6c96fcc794cf/console.log' append='off'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target type='isa-serial' port='0'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:        <model name='isa-serial'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      </target>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='serial0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <console type='pty' tty='/dev/pts/0'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <source path='/dev/pts/0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <log file='/var/lib/nova/instances/fb5b9f0e-9622-448b-8fa7-6c96fcc794cf/console.log' append='off'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target type='serial' port='0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='serial0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </console>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <input type='tablet' bus='usb'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='input0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </input>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <input type='mouse' bus='ps2'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='input1'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </input>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <input type='keyboard' bus='ps2'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='input2'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </input>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <listen type='address' address='::0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </graphics>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <audio id='1' type='none'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='video0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <watchdog model='itco' action='reset'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='watchdog0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </watchdog>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <memballoon model='virtio'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <stats period='10'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='balloon0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <rng model='virtio'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='rng0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <label>system_u:system_r:svirt_t:s0:c658,c925</label>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c658,c925</imagelabel>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </seclabel>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <label>+107:+107</label>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </seclabel>
Nov 29 03:03:36 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:03:36 np0005539564 nova_compute[226295]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.470 226310 INFO nova.virt.libvirt.driver [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully detached device tapc8838967-64 from instance fb5b9f0e-9622-448b-8fa7-6c96fcc794cf from the persistent domain config.#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.471 226310 DEBUG nova.virt.libvirt.driver [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] (1/8): Attempting to detach device tapc8838967-64 with device alias net1 from instance fb5b9f0e-9622-448b-8fa7-6c96fcc794cf from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.471 226310 DEBUG nova.virt.libvirt.guest [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] detach device xml: <interface type="ethernet">
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <mac address="fa:16:3e:b1:8e:de"/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <model type="virtio"/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <mtu size="1442"/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <target dev="tapc8838967-64"/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]: </interface>
Nov 29 03:03:36 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.498 226310 DEBUG oslo_concurrency.processutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:36 np0005539564 kernel: tapc8838967-64 (unregistering): left promiscuous mode
Nov 29 03:03:36 np0005539564 NetworkManager[48997]: <info>  [1764403416.5942] device (tapc8838967-64): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:03:36 np0005539564 ovn_controller[130591]: 2025-11-29T08:03:36Z|00213|binding|INFO|Releasing lport c8838967-6481-4acd-b59f-0be782c9a361 from this chassis (sb_readonly=0)
Nov 29 03:03:36 np0005539564 ovn_controller[130591]: 2025-11-29T08:03:36Z|00214|binding|INFO|Setting lport c8838967-6481-4acd-b59f-0be782c9a361 down in Southbound
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.606 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:36 np0005539564 ovn_controller[130591]: 2025-11-29T08:03:36Z|00215|binding|INFO|Removing iface tapc8838967-64 ovn-installed in OVS
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.620 226310 DEBUG nova.virt.libvirt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Received event <DeviceRemovedEvent: 1764403416.6198044, fb5b9f0e-9622-448b-8fa7-6c96fcc794cf => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.622 226310 DEBUG nova.virt.libvirt.driver [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Start waiting for the detach event from libvirt for device tapc8838967-64 with device alias net1 for instance fb5b9f0e-9622-448b-8fa7-6c96fcc794cf _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.624 226310 DEBUG nova.virt.libvirt.guest [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:b1:8e:de"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc8838967-64"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.626 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:36.632 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:8e:de 10.100.0.9'], port_security=['fa:16:3e:b1:8e:de 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1620065171', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'fb5b9f0e-9622-448b-8fa7-6c96fcc794cf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-738e99b4-b58e-4eff-b209-c4aa3748c994', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1620065171', 'neutron:project_id': 'f69605de164b4c27ae715521263676fe', 'neutron:revision_number': '9', 'neutron:security_group_ids': '3edda898-8529-43cc-9949-7b5bcfbbe45d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05e918c3-f77d-4277-9e74-f8ddcf4ab8e9, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=c8838967-6481-4acd-b59f-0be782c9a361) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:03:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:36.634 139780 INFO neutron.agent.ovn.metadata.agent [-] Port c8838967-6481-4acd-b59f-0be782c9a361 in datapath 738e99b4-b58e-4eff-b209-c4aa3748c994 unbound from our chassis#033[00m
Nov 29 03:03:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:36.637 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 738e99b4-b58e-4eff-b209-c4aa3748c994#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.644 226310 DEBUG nova.virt.libvirt.guest [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:b1:8e:de"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc8838967-64"/></interface>not found in domain: <domain type='kvm' id='30'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <name>instance-00000043</name>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <uuid>fb5b9f0e-9622-448b-8fa7-6c96fcc794cf</uuid>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <nova:name>tempest-tempest.common.compute-instance-881754980</nova:name>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <nova:creationTime>2025-11-29 08:03:33</nova:creationTime>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <nova:flavor name="m1.nano">
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:memory>128</nova:memory>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:disk>1</nova:disk>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:swap>0</nova:swap>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </nova:flavor>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <nova:owner>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </nova:owner>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <nova:ports>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:port uuid="94d7210e-a29d-439a-9e36-bdd02b75076a">
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </nova:port>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:port uuid="c8838967-6481-4acd-b59f-0be782c9a361">
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </nova:port>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </nova:ports>
Nov 29 03:03:36 np0005539564 nova_compute[226295]: </nova:instance>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <memory unit='KiB'>131072</memory>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <resource>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <partition>/machine</partition>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </resource>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <sysinfo type='smbios'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <entry name='serial'>fb5b9f0e-9622-448b-8fa7-6c96fcc794cf</entry>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <entry name='uuid'>fb5b9f0e-9622-448b-8fa7-6c96fcc794cf</entry>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <boot dev='hd'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <smbios mode='sysinfo'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <vmcoreinfo state='on'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <feature policy='require' name='x2apic'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <feature policy='require' name='vme'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <clock offset='utc'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <timer name='hpet' present='no'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <on_reboot>restart</on_reboot>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <on_crash>destroy</on_crash>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <disk type='network' device='disk'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <auth username='openstack'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <source protocol='rbd' name='vms/fb5b9f0e-9622-448b-8fa7-6c96fcc794cf_disk' index='2'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target dev='vda' bus='virtio'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='virtio-disk0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <disk type='network' device='cdrom'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <auth username='openstack'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <source protocol='rbd' name='vms/fb5b9f0e-9622-448b-8fa7-6c96fcc794cf_disk.config' index='1'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target dev='sda' bus='sata'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <readonly/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='sata0-0-0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pcie.0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='1' port='0x10'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.1'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='2' port='0x11'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.2'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='3' port='0x12'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.3'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='4' port='0x13'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.4'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='5' port='0x14'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.5'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='6' port='0x15'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.6'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='7' port='0x16'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.7'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='8' port='0x17'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.8'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='9' port='0x18'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.9'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='10' port='0x19'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.10'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='11' port='0x1a'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.11'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='12' port='0x1b'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.12'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='13' port='0x1c'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.13'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='14' port='0x1d'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.14'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='15' port='0x1e'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.15'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='16' port='0x1f'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.16'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='17' port='0x20'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.17'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='18' port='0x21'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.18'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='19' port='0x22'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.19'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='20' port='0x23'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.20'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='21' port='0x24'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.21'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='22' port='0x25'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.22'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='23' port='0x26'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.23'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='24' port='0x27'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.24'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target chassis='25' port='0x28'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.25'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model name='pcie-pci-bridge'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='pci.26'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='usb'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <controller type='sata' index='0'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='ide'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <interface type='ethernet'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <mac address='fa:16:3e:72:b9:77'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target dev='tap94d7210e-a2'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model type='virtio'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <mtu size='1442'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='net0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <serial type='pty'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <source path='/dev/pts/0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <log file='/var/lib/nova/instances/fb5b9f0e-9622-448b-8fa7-6c96fcc794cf/console.log' append='off'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target type='isa-serial' port='0'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:        <model name='isa-serial'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      </target>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='serial0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <console type='pty' tty='/dev/pts/0'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <source path='/dev/pts/0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <log file='/var/lib/nova/instances/fb5b9f0e-9622-448b-8fa7-6c96fcc794cf/console.log' append='off'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <target type='serial' port='0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='serial0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </console>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <input type='tablet' bus='usb'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='input0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </input>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <input type='mouse' bus='ps2'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='input1'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </input>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <input type='keyboard' bus='ps2'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='input2'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </input>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <listen type='address' address='::0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </graphics>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <audio id='1' type='none'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='video0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <watchdog model='itco' action='reset'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='watchdog0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </watchdog>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <memballoon model='virtio'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <stats period='10'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='balloon0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <rng model='virtio'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <alias name='rng0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <label>system_u:system_r:svirt_t:s0:c658,c925</label>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c658,c925</imagelabel>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </seclabel>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <label>+107:+107</label>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </seclabel>
Nov 29 03:03:36 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:03:36 np0005539564 nova_compute[226295]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.645 226310 INFO nova.virt.libvirt.driver [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully detached device tapc8838967-64 from instance fb5b9f0e-9622-448b-8fa7-6c96fcc794cf from the live domain config.#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.651 226310 DEBUG nova.virt.libvirt.vif [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:02:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-881754980',display_name='tempest-tempest.common.compute-instance-881754980',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-881754980',id=67,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH0n9ZXn8H6JY1sjbCx/j99/wL1zxZy5QsBH0AsdRjLOqctx/oeY65gmDs4R5NwjnXMvJp27i+F5qDtP4SKtjrI8QpPaqSfAsVXkzWb4UIDMJE826KgCbMST4VlNYE+GQA==',key_name='tempest-keypair-1734268386',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:02:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-f4q2v3tm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:02:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=fb5b9f0e-9622-448b-8fa7-6c96fcc794cf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c8838967-6481-4acd-b59f-0be782c9a361", "address": "fa:16:3e:b1:8e:de", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8838967-64", "ovs_interfaceid": "c8838967-6481-4acd-b59f-0be782c9a361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.651 226310 DEBUG nova.network.os_vif_util [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "c8838967-6481-4acd-b59f-0be782c9a361", "address": "fa:16:3e:b1:8e:de", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8838967-64", "ovs_interfaceid": "c8838967-6481-4acd-b59f-0be782c9a361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.653 226310 DEBUG nova.network.os_vif_util [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:8e:de,bridge_name='br-int',has_traffic_filtering=True,id=c8838967-6481-4acd-b59f-0be782c9a361,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc8838967-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.654 226310 DEBUG os_vif [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:8e:de,bridge_name='br-int',has_traffic_filtering=True,id=c8838967-6481-4acd-b59f-0be782c9a361,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc8838967-64') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.657 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:36.657 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[37bebd6e-0499-4c8e-a282-39b3da78607a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.659 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8838967-64, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.664 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.668 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.671 226310 INFO os_vif [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:8e:de,bridge_name='br-int',has_traffic_filtering=True,id=c8838967-6481-4acd-b59f-0be782c9a361,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc8838967-64')#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.672 226310 DEBUG nova.virt.libvirt.guest [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <nova:name>tempest-tempest.common.compute-instance-881754980</nova:name>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <nova:creationTime>2025-11-29 08:03:36</nova:creationTime>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <nova:flavor name="m1.nano">
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:memory>128</nova:memory>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:disk>1</nova:disk>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:swap>0</nova:swap>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </nova:flavor>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <nova:owner>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </nova:owner>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  <nova:ports>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    <nova:port uuid="94d7210e-a29d-439a-9e36-bdd02b75076a">
Nov 29 03:03:36 np0005539564 nova_compute[226295]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:    </nova:port>
Nov 29 03:03:36 np0005539564 nova_compute[226295]:  </nova:ports>
Nov 29 03:03:36 np0005539564 nova_compute[226295]: </nova:instance>
Nov 29 03:03:36 np0005539564 nova_compute[226295]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:03:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:36.691 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[54e5a1ed-cd57-4512-86d5-dd3252258469]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:36.694 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[a5218c42-2bb9-4014-9dbd-522aa86716f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:36.738 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[84c7fdb4-abdb-4f61-9b09-a6fc8ee45284]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:36.761 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[53ecc563-de76-4260-bc1e-3ad4882b68ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap738e99b4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:be:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632991, 'reachable_time': 33612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252584, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:36.783 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3b34265c-69df-4ae6-ba1d-c6a8e8805a60]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap738e99b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633006, 'tstamp': 633006}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252585, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap738e99b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633010, 'tstamp': 633010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252585, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:36.786 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap738e99b4-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.788 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:36.790 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap738e99b4-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:36.791 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:03:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:36.791 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap738e99b4-b0, col_values=(('external_ids', {'iface-id': '2a1fcde6-d99a-4732-a125-d24eb08c8766'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:36.792 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.882 226310 DEBUG nova.compute.manager [req-21f401af-ea87-4477-9278-ece8cdc44b54 req-03fc4743-7582-4c50-991b-3b724aabc197 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Received event network-vif-plugged-c8838967-6481-4acd-b59f-0be782c9a361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.883 226310 DEBUG oslo_concurrency.lockutils [req-21f401af-ea87-4477-9278-ece8cdc44b54 req-03fc4743-7582-4c50-991b-3b724aabc197 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.883 226310 DEBUG oslo_concurrency.lockutils [req-21f401af-ea87-4477-9278-ece8cdc44b54 req-03fc4743-7582-4c50-991b-3b724aabc197 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.884 226310 DEBUG oslo_concurrency.lockutils [req-21f401af-ea87-4477-9278-ece8cdc44b54 req-03fc4743-7582-4c50-991b-3b724aabc197 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.884 226310 DEBUG nova.compute.manager [req-21f401af-ea87-4477-9278-ece8cdc44b54 req-03fc4743-7582-4c50-991b-3b724aabc197 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] No waiting events found dispatching network-vif-plugged-c8838967-6481-4acd-b59f-0be782c9a361 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.885 226310 WARNING nova.compute.manager [req-21f401af-ea87-4477-9278-ece8cdc44b54 req-03fc4743-7582-4c50-991b-3b724aabc197 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Received unexpected event network-vif-plugged-c8838967-6481-4acd-b59f-0be782c9a361 for instance with vm_state active and task_state None.
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.982 226310 DEBUG oslo_concurrency.processutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:03:36 np0005539564 nova_compute[226295]: 2025-11-29 08:03:36.990 226310 DEBUG nova.compute.provider_tree [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.011 226310 DEBUG nova.scheduler.client.report [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:03:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:37.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.047 226310 DEBUG oslo_concurrency.lockutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.048 226310 DEBUG nova.compute.manager [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.120 226310 DEBUG nova.compute.manager [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.120 226310 DEBUG nova.network.neutron [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.153 226310 INFO nova.virt.libvirt.driver [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.179 226310 DEBUG nova.compute.manager [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:03:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:37.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.307 226310 DEBUG nova.compute.manager [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.309 226310 DEBUG nova.virt.libvirt.driver [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.310 226310 INFO nova.virt.libvirt.driver [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Creating image(s)
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.349 226310 DEBUG nova.storage.rbd_utils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image f4450ee9-fb6d-4c17-bee5-84291eedd055_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.386 226310 DEBUG nova.storage.rbd_utils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image f4450ee9-fb6d-4c17-bee5-84291eedd055_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.421 226310 DEBUG nova.storage.rbd_utils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image f4450ee9-fb6d-4c17-bee5-84291eedd055_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.426 226310 DEBUG oslo_concurrency.processutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.457 226310 DEBUG nova.policy [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ef8e9cc962eb4827954df3c42cc34798', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f8bc2a2616a34ba1a18b3211e406993f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.462 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.463 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.463 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.485 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.509 226310 DEBUG oslo_concurrency.processutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.509 226310 DEBUG oslo_concurrency.lockutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.510 226310 DEBUG oslo_concurrency.lockutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.510 226310 DEBUG oslo_concurrency.lockutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.557 226310 DEBUG nova.storage.rbd_utils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image f4450ee9-fb6d-4c17-bee5-84291eedd055_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.563 226310 DEBUG oslo_concurrency.processutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf f4450ee9-fb6d-4c17-bee5-84291eedd055_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.706 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.707 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.709 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.709 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid fb5b9f0e-9622-448b-8fa7-6c96fcc794cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:03:37 np0005539564 nova_compute[226295]: 2025-11-29 08:03:37.978 226310 DEBUG oslo_concurrency.processutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf f4450ee9-fb6d-4c17-bee5-84291eedd055_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:03:38 np0005539564 nova_compute[226295]: 2025-11-29 08:03:38.084 226310 DEBUG nova.storage.rbd_utils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] resizing rbd image f4450ee9-fb6d-4c17-bee5-84291eedd055_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:03:38 np0005539564 nova_compute[226295]: 2025-11-29 08:03:38.238 226310 DEBUG nova.objects.instance [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'migration_context' on Instance uuid f4450ee9-fb6d-4c17-bee5-84291eedd055 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:03:38 np0005539564 nova_compute[226295]: 2025-11-29 08:03:38.263 226310 DEBUG nova.virt.libvirt.driver [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:03:38 np0005539564 nova_compute[226295]: 2025-11-29 08:03:38.264 226310 DEBUG nova.virt.libvirt.driver [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Ensure instance console log exists: /var/lib/nova/instances/f4450ee9-fb6d-4c17-bee5-84291eedd055/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:03:38 np0005539564 nova_compute[226295]: 2025-11-29 08:03:38.265 226310 DEBUG oslo_concurrency.lockutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:03:38 np0005539564 nova_compute[226295]: 2025-11-29 08:03:38.265 226310 DEBUG oslo_concurrency.lockutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:03:38 np0005539564 nova_compute[226295]: 2025-11-29 08:03:38.265 226310 DEBUG oslo_concurrency.lockutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:03:38 np0005539564 nova_compute[226295]: 2025-11-29 08:03:38.348 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:38.349 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:03:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:38.350 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:03:39 np0005539564 nova_compute[226295]: 2025-11-29 08:03:39.013 226310 DEBUG nova.compute.manager [req-69a4cb8f-7177-4a96-b8dd-2464874d94ff req-0e61a90d-72aa-47ed-a8cc-39860b2831f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Received event network-vif-unplugged-c8838967-6481-4acd-b59f-0be782c9a361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:03:39 np0005539564 nova_compute[226295]: 2025-11-29 08:03:39.013 226310 DEBUG oslo_concurrency.lockutils [req-69a4cb8f-7177-4a96-b8dd-2464874d94ff req-0e61a90d-72aa-47ed-a8cc-39860b2831f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:03:39 np0005539564 nova_compute[226295]: 2025-11-29 08:03:39.014 226310 DEBUG oslo_concurrency.lockutils [req-69a4cb8f-7177-4a96-b8dd-2464874d94ff req-0e61a90d-72aa-47ed-a8cc-39860b2831f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:03:39 np0005539564 nova_compute[226295]: 2025-11-29 08:03:39.014 226310 DEBUG oslo_concurrency.lockutils [req-69a4cb8f-7177-4a96-b8dd-2464874d94ff req-0e61a90d-72aa-47ed-a8cc-39860b2831f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:03:39 np0005539564 nova_compute[226295]: 2025-11-29 08:03:39.015 226310 DEBUG nova.compute.manager [req-69a4cb8f-7177-4a96-b8dd-2464874d94ff req-0e61a90d-72aa-47ed-a8cc-39860b2831f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] No waiting events found dispatching network-vif-unplugged-c8838967-6481-4acd-b59f-0be782c9a361 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:03:39 np0005539564 nova_compute[226295]: 2025-11-29 08:03:39.015 226310 WARNING nova.compute.manager [req-69a4cb8f-7177-4a96-b8dd-2464874d94ff req-0e61a90d-72aa-47ed-a8cc-39860b2831f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Received unexpected event network-vif-unplugged-c8838967-6481-4acd-b59f-0be782c9a361 for instance with vm_state active and task_state None.
Nov 29 03:03:39 np0005539564 nova_compute[226295]: 2025-11-29 08:03:39.015 226310 DEBUG nova.compute.manager [req-69a4cb8f-7177-4a96-b8dd-2464874d94ff req-0e61a90d-72aa-47ed-a8cc-39860b2831f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Received event network-vif-plugged-c8838967-6481-4acd-b59f-0be782c9a361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:03:39 np0005539564 nova_compute[226295]: 2025-11-29 08:03:39.016 226310 DEBUG oslo_concurrency.lockutils [req-69a4cb8f-7177-4a96-b8dd-2464874d94ff req-0e61a90d-72aa-47ed-a8cc-39860b2831f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:03:39 np0005539564 nova_compute[226295]: 2025-11-29 08:03:39.016 226310 DEBUG oslo_concurrency.lockutils [req-69a4cb8f-7177-4a96-b8dd-2464874d94ff req-0e61a90d-72aa-47ed-a8cc-39860b2831f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:03:39 np0005539564 nova_compute[226295]: 2025-11-29 08:03:39.017 226310 DEBUG oslo_concurrency.lockutils [req-69a4cb8f-7177-4a96-b8dd-2464874d94ff req-0e61a90d-72aa-47ed-a8cc-39860b2831f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:03:39 np0005539564 nova_compute[226295]: 2025-11-29 08:03:39.017 226310 DEBUG nova.compute.manager [req-69a4cb8f-7177-4a96-b8dd-2464874d94ff req-0e61a90d-72aa-47ed-a8cc-39860b2831f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] No waiting events found dispatching network-vif-plugged-c8838967-6481-4acd-b59f-0be782c9a361 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:03:39 np0005539564 nova_compute[226295]: 2025-11-29 08:03:39.018 226310 WARNING nova.compute.manager [req-69a4cb8f-7177-4a96-b8dd-2464874d94ff req-0e61a90d-72aa-47ed-a8cc-39860b2831f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Received unexpected event network-vif-plugged-c8838967-6481-4acd-b59f-0be782c9a361 for instance with vm_state active and task_state None.
Nov 29 03:03:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:39.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:39.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:39 np0005539564 nova_compute[226295]: 2025-11-29 08:03:39.366 226310 DEBUG nova.network.neutron [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Successfully created port: 263015f4-a368-4943-b126-e3cbfb8f9053 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.207 226310 DEBUG oslo_concurrency.lockutils [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.216 226310 DEBUG oslo_concurrency.lockutils [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.217 226310 DEBUG oslo_concurrency.lockutils [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.218 226310 DEBUG oslo_concurrency.lockutils [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.218 226310 DEBUG oslo_concurrency.lockutils [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.219 226310 DEBUG oslo_concurrency.lockutils [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.221 226310 INFO nova.compute.manager [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Terminating instance
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.223 226310 DEBUG nova.compute.manager [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:03:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:40.351 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:03:40 np0005539564 kernel: tap94d7210e-a2 (unregistering): left promiscuous mode
Nov 29 03:03:40 np0005539564 NetworkManager[48997]: <info>  [1764403420.3915] device (tap94d7210e-a2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:03:40 np0005539564 ovn_controller[130591]: 2025-11-29T08:03:40Z|00216|binding|INFO|Releasing lport 94d7210e-a29d-439a-9e36-bdd02b75076a from this chassis (sb_readonly=0)
Nov 29 03:03:40 np0005539564 ovn_controller[130591]: 2025-11-29T08:03:40Z|00217|binding|INFO|Setting lport 94d7210e-a29d-439a-9e36-bdd02b75076a down in Southbound
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.397 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:40 np0005539564 ovn_controller[130591]: 2025-11-29T08:03:40Z|00218|binding|INFO|Removing iface tap94d7210e-a2 ovn-installed in OVS
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.399 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:40 np0005539564 ovn_controller[130591]: 2025-11-29T08:03:40Z|00219|binding|INFO|Releasing lport 2a1fcde6-d99a-4732-a125-d24eb08c8766 from this chassis (sb_readonly=0)
Nov 29 03:03:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:40.412 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:b9:77 10.100.0.5'], port_security=['fa:16:3e:72:b9:77 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'fb5b9f0e-9622-448b-8fa7-6c96fcc794cf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-738e99b4-b58e-4eff-b209-c4aa3748c994', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f69605de164b4c27ae715521263676fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bf7ccb70-ed00-453b-b589-5d95da7defbd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05e918c3-f77d-4277-9e74-f8ddcf4ab8e9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=94d7210e-a29d-439a-9e36-bdd02b75076a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:03:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:40.414 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 94d7210e-a29d-439a-9e36-bdd02b75076a in datapath 738e99b4-b58e-4eff-b209-c4aa3748c994 unbound from our chassis#033[00m
Nov 29 03:03:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:40.415 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 738e99b4-b58e-4eff-b209-c4aa3748c994, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.415 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:40.416 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ccb064f6-bc3f-4842-95c0-33373a1ed21a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:40.417 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994 namespace which is not needed anymore#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.422 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:40 np0005539564 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000043.scope: Deactivated successfully.
Nov 29 03:03:40 np0005539564 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000043.scope: Consumed 18.037s CPU time.
Nov 29 03:03:40 np0005539564 systemd-machined[190128]: Machine qemu-30-instance-00000043 terminated.
Nov 29 03:03:40 np0005539564 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[252038]: [NOTICE]   (252047) : haproxy version is 2.8.14-c23fe91
Nov 29 03:03:40 np0005539564 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[252038]: [NOTICE]   (252047) : path to executable is /usr/sbin/haproxy
Nov 29 03:03:40 np0005539564 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[252038]: [WARNING]  (252047) : Exiting Master process...
Nov 29 03:03:40 np0005539564 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[252038]: [ALERT]    (252047) : Current worker (252050) exited with code 143 (Terminated)
Nov 29 03:03:40 np0005539564 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[252038]: [WARNING]  (252047) : All workers exited. Exiting... (0)
Nov 29 03:03:40 np0005539564 systemd[1]: libpod-b960a952631bd5b25b12965999ffa2adcae6c09b59627b96438430f5c027877b.scope: Deactivated successfully.
Nov 29 03:03:40 np0005539564 podman[252777]: 2025-11-29 08:03:40.594158911 +0000 UTC m=+0.057049939 container died b960a952631bd5b25b12965999ffa2adcae6c09b59627b96438430f5c027877b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:03:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:40 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b960a952631bd5b25b12965999ffa2adcae6c09b59627b96438430f5c027877b-userdata-shm.mount: Deactivated successfully.
Nov 29 03:03:40 np0005539564 systemd[1]: var-lib-containers-storage-overlay-7d62aca359b73303445f0c00c0f95de6a857f01c9ba830216964a081760013fa-merged.mount: Deactivated successfully.
Nov 29 03:03:40 np0005539564 podman[252777]: 2025-11-29 08:03:40.675067254 +0000 UTC m=+0.137958332 container cleanup b960a952631bd5b25b12965999ffa2adcae6c09b59627b96438430f5c027877b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:03:40 np0005539564 systemd[1]: libpod-conmon-b960a952631bd5b25b12965999ffa2adcae6c09b59627b96438430f5c027877b.scope: Deactivated successfully.
Nov 29 03:03:40 np0005539564 ovn_controller[130591]: 2025-11-29T08:03:40Z|00220|binding|INFO|Releasing lport 2a1fcde6-d99a-4732-a125-d24eb08c8766 from this chassis (sb_readonly=0)
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.689 226310 INFO nova.virt.libvirt.driver [-] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Instance destroyed successfully.#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.690 226310 DEBUG nova.objects.instance [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'resources' on Instance uuid fb5b9f0e-9622-448b-8fa7-6c96fcc794cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.694 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.706 226310 DEBUG nova.virt.libvirt.vif [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:02:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-881754980',display_name='tempest-tempest.common.compute-instance-881754980',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-881754980',id=67,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH0n9ZXn8H6JY1sjbCx/j99/wL1zxZy5QsBH0AsdRjLOqctx/oeY65gmDs4R5NwjnXMvJp27i+F5qDtP4SKtjrI8QpPaqSfAsVXkzWb4UIDMJE826KgCbMST4VlNYE+GQA==',key_name='tempest-keypair-1734268386',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:02:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-f4q2v3tm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:02:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=fb5b9f0e-9622-448b-8fa7-6c96fcc794cf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "94d7210e-a29d-439a-9e36-bdd02b75076a", "address": "fa:16:3e:72:b9:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d7210e-a2", "ovs_interfaceid": "94d7210e-a29d-439a-9e36-bdd02b75076a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.706 226310 DEBUG nova.network.os_vif_util [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "94d7210e-a29d-439a-9e36-bdd02b75076a", "address": "fa:16:3e:72:b9:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d7210e-a2", "ovs_interfaceid": "94d7210e-a29d-439a-9e36-bdd02b75076a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.707 226310 DEBUG nova.network.os_vif_util [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:72:b9:77,bridge_name='br-int',has_traffic_filtering=True,id=94d7210e-a29d-439a-9e36-bdd02b75076a,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94d7210e-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.707 226310 DEBUG os_vif [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:72:b9:77,bridge_name='br-int',has_traffic_filtering=True,id=94d7210e-a29d-439a-9e36-bdd02b75076a,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94d7210e-a2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.709 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.709 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94d7210e-a2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.710 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.711 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.714 226310 INFO os_vif [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:72:b9:77,bridge_name='br-int',has_traffic_filtering=True,id=94d7210e-a29d-439a-9e36-bdd02b75076a,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94d7210e-a2')#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.714 226310 DEBUG nova.virt.libvirt.vif [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:02:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-881754980',display_name='tempest-tempest.common.compute-instance-881754980',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-881754980',id=67,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH0n9ZXn8H6JY1sjbCx/j99/wL1zxZy5QsBH0AsdRjLOqctx/oeY65gmDs4R5NwjnXMvJp27i+F5qDtP4SKtjrI8QpPaqSfAsVXkzWb4UIDMJE826KgCbMST4VlNYE+GQA==',key_name='tempest-keypair-1734268386',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:02:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-f4q2v3tm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:02:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=fb5b9f0e-9622-448b-8fa7-6c96fcc794cf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c8838967-6481-4acd-b59f-0be782c9a361", "address": "fa:16:3e:b1:8e:de", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8838967-64", "ovs_interfaceid": "c8838967-6481-4acd-b59f-0be782c9a361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.715 226310 DEBUG nova.network.os_vif_util [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "c8838967-6481-4acd-b59f-0be782c9a361", "address": "fa:16:3e:b1:8e:de", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8838967-64", "ovs_interfaceid": "c8838967-6481-4acd-b59f-0be782c9a361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.715 226310 DEBUG nova.network.os_vif_util [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:8e:de,bridge_name='br-int',has_traffic_filtering=True,id=c8838967-6481-4acd-b59f-0be782c9a361,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc8838967-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.716 226310 DEBUG os_vif [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:8e:de,bridge_name='br-int',has_traffic_filtering=True,id=c8838967-6481-4acd-b59f-0be782c9a361,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc8838967-64') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.718 226310 DEBUG nova.network.neutron [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Successfully updated port: 263015f4-a368-4943-b126-e3cbfb8f9053 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.719 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.719 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8838967-64, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.719 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.722 226310 INFO os_vif [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:8e:de,bridge_name='br-int',has_traffic_filtering=True,id=c8838967-6481-4acd-b59f-0be782c9a361,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc8838967-64')#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.762 226310 DEBUG oslo_concurrency.lockutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "refresh_cache-f4450ee9-fb6d-4c17-bee5-84291eedd055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.764 226310 DEBUG oslo_concurrency.lockutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquired lock "refresh_cache-f4450ee9-fb6d-4c17-bee5-84291eedd055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.765 226310 DEBUG nova.network.neutron [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:03:40 np0005539564 podman[252818]: 2025-11-29 08:03:40.774268299 +0000 UTC m=+0.062213089 container remove b960a952631bd5b25b12965999ffa2adcae6c09b59627b96438430f5c027877b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:03:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:40.783 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e3eb69ce-aca9-4fc0-bde5-66a7bd0aff75]: (4, ('Sat Nov 29 08:03:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994 (b960a952631bd5b25b12965999ffa2adcae6c09b59627b96438430f5c027877b)\nb960a952631bd5b25b12965999ffa2adcae6c09b59627b96438430f5c027877b\nSat Nov 29 08:03:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994 (b960a952631bd5b25b12965999ffa2adcae6c09b59627b96438430f5c027877b)\nb960a952631bd5b25b12965999ffa2adcae6c09b59627b96438430f5c027877b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:40.785 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b57b8abf-0f6f-444c-b833-b90822e501cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:40.786 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap738e99b4-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:40 np0005539564 kernel: tap738e99b4-b0: left promiscuous mode
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.793 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.800 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:40.805 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ea9e314d-af56-43b8-8850-e3a54e2f1fa7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:40.826 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f0197e-018f-4727-9989-e9c5433961f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:40.828 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[cf39f063-0fd2-4759-8f29-390621debe2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:40.845 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[020ff6f7-f9e5-4b77-830a-cc853edec709]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632984, 'reachable_time': 16485, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252850, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:40 np0005539564 systemd[1]: run-netns-ovnmeta\x2d738e99b4\x2db58e\x2d4eff\x2db209\x2dc4aa3748c994.mount: Deactivated successfully.
Nov 29 03:03:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:40.851 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:03:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:40.851 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[84feead1-9447-420d-bb19-9190f22a37bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.874 226310 DEBUG nova.compute.manager [req-360a603f-c09a-49bd-8da1-44a081c11a6d req-16ef910d-2ca5-4f17-841e-02a76857f2f2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Received event network-vif-unplugged-94d7210e-a29d-439a-9e36-bdd02b75076a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.875 226310 DEBUG oslo_concurrency.lockutils [req-360a603f-c09a-49bd-8da1-44a081c11a6d req-16ef910d-2ca5-4f17-841e-02a76857f2f2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.876 226310 DEBUG oslo_concurrency.lockutils [req-360a603f-c09a-49bd-8da1-44a081c11a6d req-16ef910d-2ca5-4f17-841e-02a76857f2f2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.876 226310 DEBUG oslo_concurrency.lockutils [req-360a603f-c09a-49bd-8da1-44a081c11a6d req-16ef910d-2ca5-4f17-841e-02a76857f2f2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.876 226310 DEBUG nova.compute.manager [req-360a603f-c09a-49bd-8da1-44a081c11a6d req-16ef910d-2ca5-4f17-841e-02a76857f2f2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] No waiting events found dispatching network-vif-unplugged-94d7210e-a29d-439a-9e36-bdd02b75076a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.877 226310 DEBUG nova.compute.manager [req-360a603f-c09a-49bd-8da1-44a081c11a6d req-16ef910d-2ca5-4f17-841e-02a76857f2f2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Received event network-vif-unplugged-94d7210e-a29d-439a-9e36-bdd02b75076a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.895 226310 DEBUG nova.compute.manager [req-7dd5477b-4b45-4403-aee6-4cd7cf7a555a req-e343d86d-9b0b-4e6e-a884-556e20d04cef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Received event network-changed-263015f4-a368-4943-b126-e3cbfb8f9053 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.896 226310 DEBUG nova.compute.manager [req-7dd5477b-4b45-4403-aee6-4cd7cf7a555a req-e343d86d-9b0b-4e6e-a884-556e20d04cef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Refreshing instance network info cache due to event network-changed-263015f4-a368-4943-b126-e3cbfb8f9053. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.896 226310 DEBUG oslo_concurrency.lockutils [req-7dd5477b-4b45-4403-aee6-4cd7cf7a555a req-e343d86d-9b0b-4e6e-a884-556e20d04cef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-f4450ee9-fb6d-4c17-bee5-84291eedd055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:03:40 np0005539564 nova_compute[226295]: 2025-11-29 08:03:40.986 226310 DEBUG nova.network.neutron [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:03:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:41.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:41.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:41 np0005539564 nova_compute[226295]: 2025-11-29 08:03:41.274 226310 INFO nova.virt.libvirt.driver [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Deleting instance files /var/lib/nova/instances/fb5b9f0e-9622-448b-8fa7-6c96fcc794cf_del#033[00m
Nov 29 03:03:41 np0005539564 nova_compute[226295]: 2025-11-29 08:03:41.275 226310 INFO nova.virt.libvirt.driver [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Deletion of /var/lib/nova/instances/fb5b9f0e-9622-448b-8fa7-6c96fcc794cf_del complete#033[00m
Nov 29 03:03:41 np0005539564 nova_compute[226295]: 2025-11-29 08:03:41.339 226310 INFO nova.compute.manager [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Took 1.12 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:03:41 np0005539564 nova_compute[226295]: 2025-11-29 08:03:41.340 226310 DEBUG oslo.service.loopingcall [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:03:41 np0005539564 nova_compute[226295]: 2025-11-29 08:03:41.340 226310 DEBUG nova.compute.manager [-] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:03:41 np0005539564 nova_compute[226295]: 2025-11-29 08:03:41.340 226310 DEBUG nova.network.neutron [-] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:03:41 np0005539564 nova_compute[226295]: 2025-11-29 08:03:41.376 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:41 np0005539564 nova_compute[226295]: 2025-11-29 08:03:41.970 226310 DEBUG nova.network.neutron [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Updating instance_info_cache with network_info: [{"id": "263015f4-a368-4943-b126-e3cbfb8f9053", "address": "fa:16:3e:3e:fc:68", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap263015f4-a3", "ovs_interfaceid": "263015f4-a368-4943-b126-e3cbfb8f9053", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:03:41 np0005539564 nova_compute[226295]: 2025-11-29 08:03:41.978 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Updating instance_info_cache with network_info: [{"id": "94d7210e-a29d-439a-9e36-bdd02b75076a", "address": "fa:16:3e:72:b9:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d7210e-a2", "ovs_interfaceid": "94d7210e-a29d-439a-9e36-bdd02b75076a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c8838967-6481-4acd-b59f-0be782c9a361", "address": "fa:16:3e:b1:8e:de", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8838967-64", "ovs_interfaceid": "c8838967-6481-4acd-b59f-0be782c9a361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.003 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.004 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.005 226310 DEBUG oslo_concurrency.lockutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Releasing lock "refresh_cache-f4450ee9-fb6d-4c17-bee5-84291eedd055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.005 226310 DEBUG nova.compute.manager [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Instance network_info: |[{"id": "263015f4-a368-4943-b126-e3cbfb8f9053", "address": "fa:16:3e:3e:fc:68", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap263015f4-a3", "ovs_interfaceid": "263015f4-a368-4943-b126-e3cbfb8f9053", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.011 226310 DEBUG oslo_concurrency.lockutils [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquired lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.012 226310 DEBUG nova.network.neutron [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.014 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.015 226310 DEBUG oslo_concurrency.lockutils [req-7dd5477b-4b45-4403-aee6-4cd7cf7a555a req-e343d86d-9b0b-4e6e-a884-556e20d04cef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-f4450ee9-fb6d-4c17-bee5-84291eedd055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.015 226310 DEBUG nova.network.neutron [req-7dd5477b-4b45-4403-aee6-4cd7cf7a555a req-e343d86d-9b0b-4e6e-a884-556e20d04cef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Refreshing network info cache for port 263015f4-a368-4943-b126-e3cbfb8f9053 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.020 226310 DEBUG nova.virt.libvirt.driver [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Start _get_guest_xml network_info=[{"id": "263015f4-a368-4943-b126-e3cbfb8f9053", "address": "fa:16:3e:3e:fc:68", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap263015f4-a3", "ovs_interfaceid": "263015f4-a368-4943-b126-e3cbfb8f9053", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.024 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.032 226310 WARNING nova.virt.libvirt.driver [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.040 226310 DEBUG nova.virt.libvirt.host [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.041 226310 DEBUG nova.virt.libvirt.host [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.046 226310 DEBUG nova.virt.libvirt.host [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.046 226310 DEBUG nova.virt.libvirt.host [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.048 226310 DEBUG nova.virt.libvirt.driver [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.048 226310 DEBUG nova.virt.hardware [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.048 226310 DEBUG nova.virt.hardware [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.049 226310 DEBUG nova.virt.hardware [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.049 226310 DEBUG nova.virt.hardware [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.049 226310 DEBUG nova.virt.hardware [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.049 226310 DEBUG nova.virt.hardware [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.049 226310 DEBUG nova.virt.hardware [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.050 226310 DEBUG nova.virt.hardware [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.050 226310 DEBUG nova.virt.hardware [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.050 226310 DEBUG nova.virt.hardware [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.050 226310 DEBUG nova.virt.hardware [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.053 226310 DEBUG oslo_concurrency.processutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:03:42 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3670933379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.541 226310 DEBUG oslo_concurrency.processutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.584 226310 DEBUG nova.storage.rbd_utils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image f4450ee9-fb6d-4c17-bee5-84291eedd055_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:42 np0005539564 nova_compute[226295]: 2025-11-29 08:03:42.590 226310 DEBUG oslo_concurrency.processutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.039 226310 DEBUG nova.compute.manager [req-e7d5a958-2803-432a-a8be-c898025cdc11 req-bc8a320c-670d-412a-89e8-618431225cfe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Received event network-vif-plugged-94d7210e-a29d-439a-9e36-bdd02b75076a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.040 226310 DEBUG oslo_concurrency.lockutils [req-e7d5a958-2803-432a-a8be-c898025cdc11 req-bc8a320c-670d-412a-89e8-618431225cfe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.041 226310 DEBUG oslo_concurrency.lockutils [req-e7d5a958-2803-432a-a8be-c898025cdc11 req-bc8a320c-670d-412a-89e8-618431225cfe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.041 226310 DEBUG oslo_concurrency.lockutils [req-e7d5a958-2803-432a-a8be-c898025cdc11 req-bc8a320c-670d-412a-89e8-618431225cfe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.042 226310 DEBUG nova.compute.manager [req-e7d5a958-2803-432a-a8be-c898025cdc11 req-bc8a320c-670d-412a-89e8-618431225cfe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] No waiting events found dispatching network-vif-plugged-94d7210e-a29d-439a-9e36-bdd02b75076a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.042 226310 WARNING nova.compute.manager [req-e7d5a958-2803-432a-a8be-c898025cdc11 req-bc8a320c-670d-412a-89e8-618431225cfe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Received unexpected event network-vif-plugged-94d7210e-a29d-439a-9e36-bdd02b75076a for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:03:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:03:43 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2714462634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:03:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:43.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.068 226310 DEBUG oslo_concurrency.processutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.070 226310 DEBUG nova.virt.libvirt.vif [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:03:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-670290351',display_name='tempest-DeleteServersTestJSON-server-670290351',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-670290351',id=71,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8bc2a2616a34ba1a18b3211e406993f',ramdisk_id='',reservation_id='r-wvjgm0bw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-69711189',owner_user_name='tempest-DeleteServersTestJSON-69
711189-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:03:37Z,user_data=None,user_id='ef8e9cc962eb4827954df3c42cc34798',uuid=f4450ee9-fb6d-4c17-bee5-84291eedd055,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "263015f4-a368-4943-b126-e3cbfb8f9053", "address": "fa:16:3e:3e:fc:68", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap263015f4-a3", "ovs_interfaceid": "263015f4-a368-4943-b126-e3cbfb8f9053", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.071 226310 DEBUG nova.network.os_vif_util [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converting VIF {"id": "263015f4-a368-4943-b126-e3cbfb8f9053", "address": "fa:16:3e:3e:fc:68", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap263015f4-a3", "ovs_interfaceid": "263015f4-a368-4943-b126-e3cbfb8f9053", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.072 226310 DEBUG nova.network.os_vif_util [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:fc:68,bridge_name='br-int',has_traffic_filtering=True,id=263015f4-a368-4943-b126-e3cbfb8f9053,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap263015f4-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.075 226310 DEBUG nova.objects.instance [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'pci_devices' on Instance uuid f4450ee9-fb6d-4c17-bee5-84291eedd055 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.098 226310 DEBUG nova.virt.libvirt.driver [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:03:43 np0005539564 nova_compute[226295]:  <uuid>f4450ee9-fb6d-4c17-bee5-84291eedd055</uuid>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:  <name>instance-00000047</name>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <nova:name>tempest-DeleteServersTestJSON-server-670290351</nova:name>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:03:42</nova:creationTime>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:03:43 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:        <nova:user uuid="ef8e9cc962eb4827954df3c42cc34798">tempest-DeleteServersTestJSON-69711189-project-member</nova:user>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:        <nova:project uuid="f8bc2a2616a34ba1a18b3211e406993f">tempest-DeleteServersTestJSON-69711189</nova:project>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:        <nova:port uuid="263015f4-a368-4943-b126-e3cbfb8f9053">
Nov 29 03:03:43 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <entry name="serial">f4450ee9-fb6d-4c17-bee5-84291eedd055</entry>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <entry name="uuid">f4450ee9-fb6d-4c17-bee5-84291eedd055</entry>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/f4450ee9-fb6d-4c17-bee5-84291eedd055_disk">
Nov 29 03:03:43 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:03:43 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/f4450ee9-fb6d-4c17-bee5-84291eedd055_disk.config">
Nov 29 03:03:43 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:03:43 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:3e:fc:68"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <target dev="tap263015f4-a3"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/f4450ee9-fb6d-4c17-bee5-84291eedd055/console.log" append="off"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:03:43 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:03:43 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:03:43 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:03:43 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.101 226310 DEBUG nova.compute.manager [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Preparing to wait for external event network-vif-plugged-263015f4-a368-4943-b126-e3cbfb8f9053 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.101 226310 DEBUG oslo_concurrency.lockutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "f4450ee9-fb6d-4c17-bee5-84291eedd055-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.102 226310 DEBUG oslo_concurrency.lockutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "f4450ee9-fb6d-4c17-bee5-84291eedd055-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.102 226310 DEBUG oslo_concurrency.lockutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "f4450ee9-fb6d-4c17-bee5-84291eedd055-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.103 226310 DEBUG nova.virt.libvirt.vif [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:03:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-670290351',display_name='tempest-DeleteServersTestJSON-server-670290351',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-670290351',id=71,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8bc2a2616a34ba1a18b3211e406993f',ramdisk_id='',reservation_id='r-wvjgm0bw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-69711189',owner_user_name='tempest-DeleteServersT
estJSON-69711189-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:03:37Z,user_data=None,user_id='ef8e9cc962eb4827954df3c42cc34798',uuid=f4450ee9-fb6d-4c17-bee5-84291eedd055,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "263015f4-a368-4943-b126-e3cbfb8f9053", "address": "fa:16:3e:3e:fc:68", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap263015f4-a3", "ovs_interfaceid": "263015f4-a368-4943-b126-e3cbfb8f9053", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.104 226310 DEBUG nova.network.os_vif_util [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converting VIF {"id": "263015f4-a368-4943-b126-e3cbfb8f9053", "address": "fa:16:3e:3e:fc:68", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap263015f4-a3", "ovs_interfaceid": "263015f4-a368-4943-b126-e3cbfb8f9053", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.105 226310 DEBUG nova.network.os_vif_util [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:fc:68,bridge_name='br-int',has_traffic_filtering=True,id=263015f4-a368-4943-b126-e3cbfb8f9053,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap263015f4-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.105 226310 DEBUG os_vif [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:fc:68,bridge_name='br-int',has_traffic_filtering=True,id=263015f4-a368-4943-b126-e3cbfb8f9053,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap263015f4-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.106 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.106 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.107 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.111 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.111 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap263015f4-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.112 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap263015f4-a3, col_values=(('external_ids', {'iface-id': '263015f4-a368-4943-b126-e3cbfb8f9053', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3e:fc:68', 'vm-uuid': 'f4450ee9-fb6d-4c17-bee5-84291eedd055'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.113 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:43 np0005539564 NetworkManager[48997]: <info>  [1764403423.1145] manager: (tap263015f4-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/113)
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.115 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.119 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.119 226310 INFO os_vif [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:fc:68,bridge_name='br-int',has_traffic_filtering=True,id=263015f4-a368-4943-b126-e3cbfb8f9053,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap263015f4-a3')#033[00m
Nov 29 03:03:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:43.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.305 226310 DEBUG nova.virt.libvirt.driver [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.306 226310 DEBUG nova.virt.libvirt.driver [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.306 226310 DEBUG nova.virt.libvirt.driver [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No VIF found with MAC fa:16:3e:3e:fc:68, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.307 226310 INFO nova.virt.libvirt.driver [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Using config drive#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.353 226310 DEBUG nova.storage.rbd_utils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image f4450ee9-fb6d-4c17-bee5-84291eedd055_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.363 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.396 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.397 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.397 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.398 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.399 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.729 226310 DEBUG nova.network.neutron [-] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.756 226310 INFO nova.compute.manager [-] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Took 2.42 seconds to deallocate network for instance.#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.805 226310 DEBUG oslo_concurrency.lockutils [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.806 226310 DEBUG oslo_concurrency.lockutils [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.862 226310 DEBUG nova.compute.manager [req-30de376e-3363-427e-98bd-996527b9732c req-d3644603-f0b0-48c7-8d03-0e7c75f809c9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Received event network-vif-deleted-94d7210e-a29d-439a-9e36-bdd02b75076a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.915 226310 DEBUG oslo_concurrency.processutils [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:03:43 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3840238596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.966 226310 INFO nova.virt.libvirt.driver [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Creating config drive at /var/lib/nova/instances/f4450ee9-fb6d-4c17-bee5-84291eedd055/disk.config#033[00m
Nov 29 03:03:43 np0005539564 nova_compute[226295]: 2025-11-29 08:03:43.976 226310 DEBUG oslo_concurrency.processutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f4450ee9-fb6d-4c17-bee5-84291eedd055/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp06scu0tg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.004 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.087 226310 INFO nova.network.neutron [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Port c8838967-6481-4acd-b59f-0be782c9a361 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.088 226310 DEBUG nova.network.neutron [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Updating instance_info_cache with network_info: [{"id": "94d7210e-a29d-439a-9e36-bdd02b75076a", "address": "fa:16:3e:72:b9:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d7210e-a2", "ovs_interfaceid": "94d7210e-a29d-439a-9e36-bdd02b75076a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.096 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.097 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.123 226310 DEBUG oslo_concurrency.lockutils [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Releasing lock "refresh_cache-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.128 226310 DEBUG oslo_concurrency.processutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f4450ee9-fb6d-4c17-bee5-84291eedd055/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp06scu0tg" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.181 226310 DEBUG nova.storage.rbd_utils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image f4450ee9-fb6d-4c17-bee5-84291eedd055_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.186 226310 DEBUG oslo_concurrency.processutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f4450ee9-fb6d-4c17-bee5-84291eedd055/disk.config f4450ee9-fb6d-4c17-bee5-84291eedd055_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.226 226310 DEBUG oslo_concurrency.lockutils [None req-2fb63092-444d-4bae-abf3-22910a5deaaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "interface-fb5b9f0e-9622-448b-8fa7-6c96fcc794cf-c8838967-6481-4acd-b59f-0be782c9a361" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 7.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.409 226310 DEBUG oslo_concurrency.processutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f4450ee9-fb6d-4c17-bee5-84291eedd055/disk.config f4450ee9-fb6d-4c17-bee5-84291eedd055_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.410 226310 INFO nova.virt.libvirt.driver [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Deleting local config drive /var/lib/nova/instances/f4450ee9-fb6d-4c17-bee5-84291eedd055/disk.config because it was imported into RBD.#033[00m
Nov 29 03:03:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:03:44 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2959529658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:03:44 np0005539564 kernel: tap263015f4-a3: entered promiscuous mode
Nov 29 03:03:44 np0005539564 NetworkManager[48997]: <info>  [1764403424.4897] manager: (tap263015f4-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/114)
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.491 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.494 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:44 np0005539564 ovn_controller[130591]: 2025-11-29T08:03:44Z|00221|binding|INFO|Claiming lport 263015f4-a368-4943-b126-e3cbfb8f9053 for this chassis.
Nov 29 03:03:44 np0005539564 ovn_controller[130591]: 2025-11-29T08:03:44Z|00222|binding|INFO|263015f4-a368-4943-b126-e3cbfb8f9053: Claiming fa:16:3e:3e:fc:68 10.100.0.14
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.502 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:fc:68 10.100.0.14'], port_security=['fa:16:3e:3e:fc:68 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f4450ee9-fb6d-4c17-bee5-84291eedd055', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5e42602-d72e-4beb-864d-714bd1635da9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8bc2a2616a34ba1a18b3211e406993f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82ddc102-a213-473f-abf3-dc5f60e4fa79', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46241d46-2f65-4ed5-b860-f30a985d632f, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=263015f4-a368-4943-b126-e3cbfb8f9053) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.503 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 263015f4-a368-4943-b126-e3cbfb8f9053 in datapath d5e42602-d72e-4beb-864d-714bd1635da9 bound to our chassis#033[00m
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.505 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d5e42602-d72e-4beb-864d-714bd1635da9#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.504 226310 DEBUG oslo_concurrency.processutils [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.520 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[14f978c4-e96f-441d-890b-801d4a7f4308]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.521 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd5e42602-d1 in ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.522 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd5e42602-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.523 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c4c45cde-86a7-4bdf-9f56-e7f0bd88a563]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.523 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[79063d74-b18c-4d16-919a-ec50f98325ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.524 226310 DEBUG nova.compute.provider_tree [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:03:44 np0005539564 systemd-udevd[253079]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.535 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[7c4cd6d9-c393-4f20-92f2-b91994fd828b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.538 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.539 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4652MB free_disk=20.876293182373047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.540 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.542 226310 DEBUG nova.scheduler.client.report [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:03:44 np0005539564 NetworkManager[48997]: <info>  [1764403424.5544] device (tap263015f4-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:03:44 np0005539564 NetworkManager[48997]: <info>  [1764403424.5560] device (tap263015f4-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.556 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:44 np0005539564 podman[253021]: 2025-11-29 08:03:44.557386486 +0000 UTC m=+0.098628432 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 03:03:44 np0005539564 systemd-machined[190128]: New machine qemu-31-instance-00000047.
Nov 29 03:03:44 np0005539564 ovn_controller[130591]: 2025-11-29T08:03:44Z|00223|binding|INFO|Setting lport 263015f4-a368-4943-b126-e3cbfb8f9053 ovn-installed in OVS
Nov 29 03:03:44 np0005539564 ovn_controller[130591]: 2025-11-29T08:03:44Z|00224|binding|INFO|Setting lport 263015f4-a368-4943-b126-e3cbfb8f9053 up in Southbound
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.562 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:44 np0005539564 podman[253018]: 2025-11-29 08:03:44.56348721 +0000 UTC m=+0.111874719 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0)
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.568 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e27e1f95-949e-477b-9b85-554ecf362a25]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:44 np0005539564 systemd[1]: Started Virtual Machine qemu-31-instance-00000047.
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.585 226310 DEBUG oslo_concurrency.lockutils [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.588 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:44 np0005539564 podman[253017]: 2025-11-29 08:03:44.590978462 +0000 UTC m=+0.137192032 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125)
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.596 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[f3d74f1f-40fd-4f79-9643-ca892faa36d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.602 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[33c0a3bd-9988-4bf8-9ba8-84e363ae94d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:44 np0005539564 NetworkManager[48997]: <info>  [1764403424.6039] manager: (tapd5e42602-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/115)
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.633 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[ce6d40a5-5c0d-4526-9ef5-af1524f857b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.636 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[89235b18-8991-46ca-8950-fbe5e146549f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.638 226310 INFO nova.scheduler.client.report [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Deleted allocations for instance fb5b9f0e-9622-448b-8fa7-6c96fcc794cf#033[00m
Nov 29 03:03:44 np0005539564 NetworkManager[48997]: <info>  [1764403424.6574] device (tapd5e42602-d0): carrier: link connected
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.665 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5dd5f1-f464-4b8d-8ff3-6300554f6c93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.667 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance f4450ee9-fb6d-4c17-bee5-84291eedd055 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.667 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.667 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.682 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a3044457-ae4e-41ff-9a10-ebe12b690d6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5e42602-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:37:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639933, 'reachable_time': 39857, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253126, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.685 226310 DEBUG nova.network.neutron [req-7dd5477b-4b45-4403-aee6-4cd7cf7a555a req-e343d86d-9b0b-4e6e-a884-556e20d04cef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Updated VIF entry in instance network info cache for port 263015f4-a368-4943-b126-e3cbfb8f9053. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.686 226310 DEBUG nova.network.neutron [req-7dd5477b-4b45-4403-aee6-4cd7cf7a555a req-e343d86d-9b0b-4e6e-a884-556e20d04cef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Updating instance_info_cache with network_info: [{"id": "263015f4-a368-4943-b126-e3cbfb8f9053", "address": "fa:16:3e:3e:fc:68", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap263015f4-a3", "ovs_interfaceid": "263015f4-a368-4943-b126-e3cbfb8f9053", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.702 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0005ae05-2f9f-42b7-bf99-e4d1862d7789]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecd:370b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 639933, 'tstamp': 639933}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253127, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.711 226310 DEBUG oslo_concurrency.lockutils [req-7dd5477b-4b45-4403-aee6-4cd7cf7a555a req-e343d86d-9b0b-4e6e-a884-556e20d04cef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-f4450ee9-fb6d-4c17-bee5-84291eedd055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.723 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.722 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[137377a0-6971-4e53-9ae3-f2ec1a2fc7e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5e42602-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:37:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639933, 'reachable_time': 39857, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253128, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.756 226310 DEBUG oslo_concurrency.lockutils [None req-64cf955d-f7b6-48a5-916a-243db1058b36 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "fb5b9f0e-9622-448b-8fa7-6c96fcc794cf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.769 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3ccd99a3-b968-4615-861e-1598e4b0d486]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.839 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1c814604-9862-48f2-a28f-4049196d000e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.841 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5e42602-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.841 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.841 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5e42602-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.843 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:44 np0005539564 NetworkManager[48997]: <info>  [1764403424.8440] manager: (tapd5e42602-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Nov 29 03:03:44 np0005539564 kernel: tapd5e42602-d0: entered promiscuous mode
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.846 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd5e42602-d0, col_values=(('external_ids', {'iface-id': 'b61ef3f5-e0b1-44f8-9b21-acba8a1ead2e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:44 np0005539564 ovn_controller[130591]: 2025-11-29T08:03:44Z|00225|binding|INFO|Releasing lport b61ef3f5-e0b1-44f8-9b21-acba8a1ead2e from this chassis (sb_readonly=0)
Nov 29 03:03:44 np0005539564 nova_compute[226295]: 2025-11-29 08:03:44.859 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.860 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d5e42602-d72e-4beb-864d-714bd1635da9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d5e42602-d72e-4beb-864d-714bd1635da9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.861 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[fb40ba5f-0e68-4e88-a453-2e711c8a6d3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.862 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-d5e42602-d72e-4beb-864d-714bd1635da9
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/d5e42602-d72e-4beb-864d-714bd1635da9.pid.haproxy
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID d5e42602-d72e-4beb-864d-714bd1635da9
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:03:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:44.863 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'env', 'PROCESS_TAG=haproxy-d5e42602-d72e-4beb-864d-714bd1635da9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d5e42602-d72e-4beb-864d-714bd1635da9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:03:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:45.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:03:45 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1476560591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.193 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.200 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.217 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:03:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:45.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.239 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.239 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:45 np0005539564 podman[253198]: 2025-11-29 08:03:45.29269694 +0000 UTC m=+0.077618825 container create a23d68f290ea517149a9da4c33b99df7396355fcb44045e18080c9ba7c969670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:03:45 np0005539564 systemd[1]: Started libpod-conmon-a23d68f290ea517149a9da4c33b99df7396355fcb44045e18080c9ba7c969670.scope.
Nov 29 03:03:45 np0005539564 podman[253198]: 2025-11-29 08:03:45.250596564 +0000 UTC m=+0.035518529 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.353 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403425.3525774, f4450ee9-fb6d-4c17-bee5-84291eedd055 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.354 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] VM Started (Lifecycle Event)#033[00m
Nov 29 03:03:45 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:03:45 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3626c251746b940e51f4cff987e9c954493cfec273e6ac977f00eff75fbad7e4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:45 np0005539564 podman[253198]: 2025-11-29 08:03:45.38762644 +0000 UTC m=+0.172548335 container init a23d68f290ea517149a9da4c33b99df7396355fcb44045e18080c9ba7c969670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:03:45 np0005539564 podman[253198]: 2025-11-29 08:03:45.398689399 +0000 UTC m=+0.183611294 container start a23d68f290ea517149a9da4c33b99df7396355fcb44045e18080c9ba7c969670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.405 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.412 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403425.3529778, f4450ee9-fb6d-4c17-bee5-84291eedd055 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.412 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:03:45 np0005539564 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[253235]: [NOTICE]   (253239) : New worker (253241) forked
Nov 29 03:03:45 np0005539564 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[253235]: [NOTICE]   (253239) : Loading success.
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.450 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.456 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.482 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:03:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.971 226310 DEBUG nova.compute.manager [req-de13420d-7127-40a1-b15e-a5b627918177 req-348c326d-94d4-44bf-97cf-3aeca8ff23fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Received event network-vif-plugged-263015f4-a368-4943-b126-e3cbfb8f9053 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.972 226310 DEBUG oslo_concurrency.lockutils [req-de13420d-7127-40a1-b15e-a5b627918177 req-348c326d-94d4-44bf-97cf-3aeca8ff23fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f4450ee9-fb6d-4c17-bee5-84291eedd055-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.973 226310 DEBUG oslo_concurrency.lockutils [req-de13420d-7127-40a1-b15e-a5b627918177 req-348c326d-94d4-44bf-97cf-3aeca8ff23fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f4450ee9-fb6d-4c17-bee5-84291eedd055-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.973 226310 DEBUG oslo_concurrency.lockutils [req-de13420d-7127-40a1-b15e-a5b627918177 req-348c326d-94d4-44bf-97cf-3aeca8ff23fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f4450ee9-fb6d-4c17-bee5-84291eedd055-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.974 226310 DEBUG nova.compute.manager [req-de13420d-7127-40a1-b15e-a5b627918177 req-348c326d-94d4-44bf-97cf-3aeca8ff23fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Processing event network-vif-plugged-263015f4-a368-4943-b126-e3cbfb8f9053 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.974 226310 DEBUG nova.compute.manager [req-de13420d-7127-40a1-b15e-a5b627918177 req-348c326d-94d4-44bf-97cf-3aeca8ff23fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Received event network-vif-plugged-263015f4-a368-4943-b126-e3cbfb8f9053 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.975 226310 DEBUG oslo_concurrency.lockutils [req-de13420d-7127-40a1-b15e-a5b627918177 req-348c326d-94d4-44bf-97cf-3aeca8ff23fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f4450ee9-fb6d-4c17-bee5-84291eedd055-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.975 226310 DEBUG oslo_concurrency.lockutils [req-de13420d-7127-40a1-b15e-a5b627918177 req-348c326d-94d4-44bf-97cf-3aeca8ff23fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f4450ee9-fb6d-4c17-bee5-84291eedd055-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.976 226310 DEBUG oslo_concurrency.lockutils [req-de13420d-7127-40a1-b15e-a5b627918177 req-348c326d-94d4-44bf-97cf-3aeca8ff23fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f4450ee9-fb6d-4c17-bee5-84291eedd055-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.976 226310 DEBUG nova.compute.manager [req-de13420d-7127-40a1-b15e-a5b627918177 req-348c326d-94d4-44bf-97cf-3aeca8ff23fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] No waiting events found dispatching network-vif-plugged-263015f4-a368-4943-b126-e3cbfb8f9053 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.976 226310 WARNING nova.compute.manager [req-de13420d-7127-40a1-b15e-a5b627918177 req-348c326d-94d4-44bf-97cf-3aeca8ff23fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Received unexpected event network-vif-plugged-263015f4-a368-4943-b126-e3cbfb8f9053 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.978 226310 DEBUG nova.compute.manager [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.983 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403425.9827907, f4450ee9-fb6d-4c17-bee5-84291eedd055 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.983 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.987 226310 DEBUG nova.virt.libvirt.driver [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.993 226310 INFO nova.virt.libvirt.driver [-] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Instance spawned successfully.#033[00m
Nov 29 03:03:45 np0005539564 nova_compute[226295]: 2025-11-29 08:03:45.994 226310 DEBUG nova.virt.libvirt.driver [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:03:46 np0005539564 nova_compute[226295]: 2025-11-29 08:03:46.031 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:03:46 np0005539564 nova_compute[226295]: 2025-11-29 08:03:46.042 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:03:46 np0005539564 nova_compute[226295]: 2025-11-29 08:03:46.051 226310 DEBUG nova.virt.libvirt.driver [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:46 np0005539564 nova_compute[226295]: 2025-11-29 08:03:46.052 226310 DEBUG nova.virt.libvirt.driver [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:46 np0005539564 nova_compute[226295]: 2025-11-29 08:03:46.053 226310 DEBUG nova.virt.libvirt.driver [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:46 np0005539564 nova_compute[226295]: 2025-11-29 08:03:46.053 226310 DEBUG nova.virt.libvirt.driver [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:46 np0005539564 nova_compute[226295]: 2025-11-29 08:03:46.054 226310 DEBUG nova.virt.libvirt.driver [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:46 np0005539564 nova_compute[226295]: 2025-11-29 08:03:46.055 226310 DEBUG nova.virt.libvirt.driver [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:46 np0005539564 nova_compute[226295]: 2025-11-29 08:03:46.070 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:03:46 np0005539564 nova_compute[226295]: 2025-11-29 08:03:46.219 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:46 np0005539564 nova_compute[226295]: 2025-11-29 08:03:46.228 226310 INFO nova.compute.manager [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Took 8.92 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:03:46 np0005539564 nova_compute[226295]: 2025-11-29 08:03:46.229 226310 DEBUG nova.compute.manager [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:03:46 np0005539564 nova_compute[226295]: 2025-11-29 08:03:46.322 226310 INFO nova.compute.manager [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Took 10.04 seconds to build instance.#033[00m
Nov 29 03:03:46 np0005539564 nova_compute[226295]: 2025-11-29 08:03:46.347 226310 DEBUG oslo_concurrency.lockutils [None req-fc060e83-b61b-47b8-93bf-e60bb801752e ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "f4450ee9-fb6d-4c17-bee5-84291eedd055" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:46 np0005539564 nova_compute[226295]: 2025-11-29 08:03:46.379 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:47.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:47.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:47 np0005539564 nova_compute[226295]: 2025-11-29 08:03:47.247 226310 INFO nova.compute.manager [None req-635e1e9a-e3c1-46f1-b5be-24891d62a692 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Pausing#033[00m
Nov 29 03:03:47 np0005539564 nova_compute[226295]: 2025-11-29 08:03:47.249 226310 DEBUG nova.objects.instance [None req-635e1e9a-e3c1-46f1-b5be-24891d62a692 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'flavor' on Instance uuid f4450ee9-fb6d-4c17-bee5-84291eedd055 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:03:47 np0005539564 nova_compute[226295]: 2025-11-29 08:03:47.284 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403427.283759, f4450ee9-fb6d-4c17-bee5-84291eedd055 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:03:47 np0005539564 nova_compute[226295]: 2025-11-29 08:03:47.284 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:03:47 np0005539564 nova_compute[226295]: 2025-11-29 08:03:47.289 226310 DEBUG nova.compute.manager [None req-635e1e9a-e3c1-46f1-b5be-24891d62a692 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:03:47 np0005539564 nova_compute[226295]: 2025-11-29 08:03:47.314 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:03:47 np0005539564 nova_compute[226295]: 2025-11-29 08:03:47.319 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:03:47 np0005539564 nova_compute[226295]: 2025-11-29 08:03:47.368 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Nov 29 03:03:48 np0005539564 nova_compute[226295]: 2025-11-29 08:03:48.116 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:49.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:49.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.120 226310 DEBUG oslo_concurrency.lockutils [None req-37c49fdf-cd88-4203-af3e-1aa4e04116fa ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "f4450ee9-fb6d-4c17-bee5-84291eedd055" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.121 226310 DEBUG oslo_concurrency.lockutils [None req-37c49fdf-cd88-4203-af3e-1aa4e04116fa ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "f4450ee9-fb6d-4c17-bee5-84291eedd055" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.122 226310 DEBUG oslo_concurrency.lockutils [None req-37c49fdf-cd88-4203-af3e-1aa4e04116fa ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "f4450ee9-fb6d-4c17-bee5-84291eedd055-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.122 226310 DEBUG oslo_concurrency.lockutils [None req-37c49fdf-cd88-4203-af3e-1aa4e04116fa ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "f4450ee9-fb6d-4c17-bee5-84291eedd055-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.123 226310 DEBUG oslo_concurrency.lockutils [None req-37c49fdf-cd88-4203-af3e-1aa4e04116fa ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "f4450ee9-fb6d-4c17-bee5-84291eedd055-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.125 226310 INFO nova.compute.manager [None req-37c49fdf-cd88-4203-af3e-1aa4e04116fa ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Terminating instance#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.126 226310 DEBUG nova.compute.manager [None req-37c49fdf-cd88-4203-af3e-1aa4e04116fa ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:03:50 np0005539564 kernel: tap263015f4-a3 (unregistering): left promiscuous mode
Nov 29 03:03:50 np0005539564 NetworkManager[48997]: <info>  [1764403430.1785] device (tap263015f4-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:03:50 np0005539564 ovn_controller[130591]: 2025-11-29T08:03:50Z|00226|binding|INFO|Releasing lport 263015f4-a368-4943-b126-e3cbfb8f9053 from this chassis (sb_readonly=0)
Nov 29 03:03:50 np0005539564 ovn_controller[130591]: 2025-11-29T08:03:50Z|00227|binding|INFO|Setting lport 263015f4-a368-4943-b126-e3cbfb8f9053 down in Southbound
Nov 29 03:03:50 np0005539564 ovn_controller[130591]: 2025-11-29T08:03:50Z|00228|binding|INFO|Removing iface tap263015f4-a3 ovn-installed in OVS
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.198 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:50.207 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:fc:68 10.100.0.14'], port_security=['fa:16:3e:3e:fc:68 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f4450ee9-fb6d-4c17-bee5-84291eedd055', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5e42602-d72e-4beb-864d-714bd1635da9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8bc2a2616a34ba1a18b3211e406993f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82ddc102-a213-473f-abf3-dc5f60e4fa79', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46241d46-2f65-4ed5-b860-f30a985d632f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=263015f4-a368-4943-b126-e3cbfb8f9053) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:03:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:50.211 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 263015f4-a368-4943-b126-e3cbfb8f9053 in datapath d5e42602-d72e-4beb-864d-714bd1635da9 unbound from our chassis#033[00m
Nov 29 03:03:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:50.215 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d5e42602-d72e-4beb-864d-714bd1635da9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:03:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:50.216 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[595c0524-f304-403f-9445-389a0776f50b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:50.217 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 namespace which is not needed anymore#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.233 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:50 np0005539564 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000047.scope: Deactivated successfully.
Nov 29 03:03:50 np0005539564 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000047.scope: Consumed 2.248s CPU time.
Nov 29 03:03:50 np0005539564 systemd-machined[190128]: Machine qemu-31-instance-00000047 terminated.
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.357 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.366 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.376 226310 INFO nova.virt.libvirt.driver [-] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Instance destroyed successfully.#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.376 226310 DEBUG nova.objects.instance [None req-37c49fdf-cd88-4203-af3e-1aa4e04116fa ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'resources' on Instance uuid f4450ee9-fb6d-4c17-bee5-84291eedd055 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.409 226310 DEBUG nova.virt.libvirt.vif [None req-37c49fdf-cd88-4203-af3e-1aa4e04116fa ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:03:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-670290351',display_name='tempest-DeleteServersTestJSON-server-670290351',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-670290351',id=71,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:03:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='f8bc2a2616a34ba1a18b3211e406993f',ramdisk_id='',reservation_id='r-wvjgm0bw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-69711189',owner_user_name='tempest-DeleteServersTestJSON-69711189-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:03:47Z,user_data=None,user_id='ef8e9cc962eb4827954df3c42cc34798',uuid=f4450ee9-fb6d-4c17-bee5-84291eedd055,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "263015f4-a368-4943-b126-e3cbfb8f9053", "address": "fa:16:3e:3e:fc:68", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap263015f4-a3", "ovs_interfaceid": "263015f4-a368-4943-b126-e3cbfb8f9053", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.410 226310 DEBUG nova.network.os_vif_util [None req-37c49fdf-cd88-4203-af3e-1aa4e04116fa ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converting VIF {"id": "263015f4-a368-4943-b126-e3cbfb8f9053", "address": "fa:16:3e:3e:fc:68", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap263015f4-a3", "ovs_interfaceid": "263015f4-a368-4943-b126-e3cbfb8f9053", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.412 226310 DEBUG nova.network.os_vif_util [None req-37c49fdf-cd88-4203-af3e-1aa4e04116fa ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:fc:68,bridge_name='br-int',has_traffic_filtering=True,id=263015f4-a368-4943-b126-e3cbfb8f9053,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap263015f4-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.413 226310 DEBUG os_vif [None req-37c49fdf-cd88-4203-af3e-1aa4e04116fa ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:fc:68,bridge_name='br-int',has_traffic_filtering=True,id=263015f4-a368-4943-b126-e3cbfb8f9053,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap263015f4-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.417 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.417 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap263015f4-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.419 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.421 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.424 226310 INFO os_vif [None req-37c49fdf-cd88-4203-af3e-1aa4e04116fa ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:fc:68,bridge_name='br-int',has_traffic_filtering=True,id=263015f4-a368-4943-b126-e3cbfb8f9053,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap263015f4-a3')#033[00m
Nov 29 03:03:50 np0005539564 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[253235]: [NOTICE]   (253239) : haproxy version is 2.8.14-c23fe91
Nov 29 03:03:50 np0005539564 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[253235]: [NOTICE]   (253239) : path to executable is /usr/sbin/haproxy
Nov 29 03:03:50 np0005539564 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[253235]: [WARNING]  (253239) : Exiting Master process...
Nov 29 03:03:50 np0005539564 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[253235]: [ALERT]    (253239) : Current worker (253241) exited with code 143 (Terminated)
Nov 29 03:03:50 np0005539564 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[253235]: [WARNING]  (253239) : All workers exited. Exiting... (0)
Nov 29 03:03:50 np0005539564 systemd[1]: libpod-a23d68f290ea517149a9da4c33b99df7396355fcb44045e18080c9ba7c969670.scope: Deactivated successfully.
Nov 29 03:03:50 np0005539564 podman[253278]: 2025-11-29 08:03:50.438963756 +0000 UTC m=+0.061529060 container died a23d68f290ea517149a9da4c33b99df7396355fcb44045e18080c9ba7c969670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:03:50 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a23d68f290ea517149a9da4c33b99df7396355fcb44045e18080c9ba7c969670-userdata-shm.mount: Deactivated successfully.
Nov 29 03:03:50 np0005539564 systemd[1]: var-lib-containers-storage-overlay-3626c251746b940e51f4cff987e9c954493cfec273e6ac977f00eff75fbad7e4-merged.mount: Deactivated successfully.
Nov 29 03:03:50 np0005539564 podman[253278]: 2025-11-29 08:03:50.479842309 +0000 UTC m=+0.102407583 container cleanup a23d68f290ea517149a9da4c33b99df7396355fcb44045e18080c9ba7c969670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 03:03:50 np0005539564 systemd[1]: libpod-conmon-a23d68f290ea517149a9da4c33b99df7396355fcb44045e18080c9ba7c969670.scope: Deactivated successfully.
Nov 29 03:03:50 np0005539564 podman[253332]: 2025-11-29 08:03:50.561511722 +0000 UTC m=+0.055362274 container remove a23d68f290ea517149a9da4c33b99df7396355fcb44045e18080c9ba7c969670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:03:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:50.568 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e05df5e1-3052-47b2-a3fb-8f3089fd7d13]: (4, ('Sat Nov 29 08:03:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 (a23d68f290ea517149a9da4c33b99df7396355fcb44045e18080c9ba7c969670)\na23d68f290ea517149a9da4c33b99df7396355fcb44045e18080c9ba7c969670\nSat Nov 29 08:03:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 (a23d68f290ea517149a9da4c33b99df7396355fcb44045e18080c9ba7c969670)\na23d68f290ea517149a9da4c33b99df7396355fcb44045e18080c9ba7c969670\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:50.570 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7cef210c-e414-442f-8f47-91b18cc23b57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:50.572 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5e42602-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:50 np0005539564 kernel: tapd5e42602-d0: left promiscuous mode
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.613 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.640 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:50.644 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ee28d783-6f8e-4975-b2e8-40e5df5e9ce6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:50.660 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0d037734-251a-4acd-8362-53ea8b6aad17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:50.662 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[68f109ee-444f-454d-a335-219cde02229c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:50.679 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[52806bca-644c-4278-bd37-27d65eabda86]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639927, 'reachable_time': 19659, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253348, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:50 np0005539564 systemd[1]: run-netns-ovnmeta\x2dd5e42602\x2dd72e\x2d4beb\x2d864d\x2d714bd1635da9.mount: Deactivated successfully.
Nov 29 03:03:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:50.683 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:03:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:03:50.683 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[224b53d6-32da-47c3-852e-b1d17fb7346c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.795 226310 DEBUG nova.compute.manager [req-87ba06e7-fb8e-4b41-9ef6-e0f1f27b28bd req-7bf41ce7-be5a-440e-b22e-eedb6538c219 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Received event network-vif-unplugged-263015f4-a368-4943-b126-e3cbfb8f9053 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.795 226310 DEBUG oslo_concurrency.lockutils [req-87ba06e7-fb8e-4b41-9ef6-e0f1f27b28bd req-7bf41ce7-be5a-440e-b22e-eedb6538c219 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f4450ee9-fb6d-4c17-bee5-84291eedd055-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.796 226310 DEBUG oslo_concurrency.lockutils [req-87ba06e7-fb8e-4b41-9ef6-e0f1f27b28bd req-7bf41ce7-be5a-440e-b22e-eedb6538c219 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f4450ee9-fb6d-4c17-bee5-84291eedd055-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.796 226310 DEBUG oslo_concurrency.lockutils [req-87ba06e7-fb8e-4b41-9ef6-e0f1f27b28bd req-7bf41ce7-be5a-440e-b22e-eedb6538c219 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f4450ee9-fb6d-4c17-bee5-84291eedd055-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.796 226310 DEBUG nova.compute.manager [req-87ba06e7-fb8e-4b41-9ef6-e0f1f27b28bd req-7bf41ce7-be5a-440e-b22e-eedb6538c219 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] No waiting events found dispatching network-vif-unplugged-263015f4-a368-4943-b126-e3cbfb8f9053 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.796 226310 DEBUG nova.compute.manager [req-87ba06e7-fb8e-4b41-9ef6-e0f1f27b28bd req-7bf41ce7-be5a-440e-b22e-eedb6538c219 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Received event network-vif-unplugged-263015f4-a368-4943-b126-e3cbfb8f9053 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.990 226310 INFO nova.virt.libvirt.driver [None req-37c49fdf-cd88-4203-af3e-1aa4e04116fa ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Deleting instance files /var/lib/nova/instances/f4450ee9-fb6d-4c17-bee5-84291eedd055_del#033[00m
Nov 29 03:03:50 np0005539564 nova_compute[226295]: 2025-11-29 08:03:50.991 226310 INFO nova.virt.libvirt.driver [None req-37c49fdf-cd88-4203-af3e-1aa4e04116fa ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Deletion of /var/lib/nova/instances/f4450ee9-fb6d-4c17-bee5-84291eedd055_del complete#033[00m
Nov 29 03:03:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:51.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:51 np0005539564 nova_compute[226295]: 2025-11-29 08:03:51.223 226310 INFO nova.compute.manager [None req-37c49fdf-cd88-4203-af3e-1aa4e04116fa ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Took 1.10 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:03:51 np0005539564 nova_compute[226295]: 2025-11-29 08:03:51.223 226310 DEBUG oslo.service.loopingcall [None req-37c49fdf-cd88-4203-af3e-1aa4e04116fa ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:03:51 np0005539564 nova_compute[226295]: 2025-11-29 08:03:51.224 226310 DEBUG nova.compute.manager [-] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:03:51 np0005539564 nova_compute[226295]: 2025-11-29 08:03:51.224 226310 DEBUG nova.network.neutron [-] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:03:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:51.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:51 np0005539564 nova_compute[226295]: 2025-11-29 08:03:51.381 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:52 np0005539564 nova_compute[226295]: 2025-11-29 08:03:52.131 226310 DEBUG nova.network.neutron [-] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:03:52 np0005539564 nova_compute[226295]: 2025-11-29 08:03:52.153 226310 INFO nova.compute.manager [-] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Took 0.93 seconds to deallocate network for instance.#033[00m
Nov 29 03:03:52 np0005539564 nova_compute[226295]: 2025-11-29 08:03:52.215 226310 DEBUG oslo_concurrency.lockutils [None req-37c49fdf-cd88-4203-af3e-1aa4e04116fa ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:52 np0005539564 nova_compute[226295]: 2025-11-29 08:03:52.215 226310 DEBUG oslo_concurrency.lockutils [None req-37c49fdf-cd88-4203-af3e-1aa4e04116fa ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:52 np0005539564 nova_compute[226295]: 2025-11-29 08:03:52.285 226310 DEBUG oslo_concurrency.processutils [None req-37c49fdf-cd88-4203-af3e-1aa4e04116fa ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:03:52 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/472195757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:03:52 np0005539564 nova_compute[226295]: 2025-11-29 08:03:52.888 226310 DEBUG oslo_concurrency.processutils [None req-37c49fdf-cd88-4203-af3e-1aa4e04116fa ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:52 np0005539564 nova_compute[226295]: 2025-11-29 08:03:52.898 226310 DEBUG nova.compute.provider_tree [None req-37c49fdf-cd88-4203-af3e-1aa4e04116fa ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:03:52 np0005539564 nova_compute[226295]: 2025-11-29 08:03:52.918 226310 DEBUG nova.compute.manager [req-b72b484d-442c-473c-978b-07304ed8b7a8 req-c6dca2f3-cab6-4a45-8c2b-dcd66f810b73 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Received event network-vif-plugged-263015f4-a368-4943-b126-e3cbfb8f9053 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:03:52 np0005539564 nova_compute[226295]: 2025-11-29 08:03:52.919 226310 DEBUG oslo_concurrency.lockutils [req-b72b484d-442c-473c-978b-07304ed8b7a8 req-c6dca2f3-cab6-4a45-8c2b-dcd66f810b73 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f4450ee9-fb6d-4c17-bee5-84291eedd055-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:52 np0005539564 nova_compute[226295]: 2025-11-29 08:03:52.920 226310 DEBUG oslo_concurrency.lockutils [req-b72b484d-442c-473c-978b-07304ed8b7a8 req-c6dca2f3-cab6-4a45-8c2b-dcd66f810b73 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f4450ee9-fb6d-4c17-bee5-84291eedd055-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:52 np0005539564 nova_compute[226295]: 2025-11-29 08:03:52.921 226310 DEBUG oslo_concurrency.lockutils [req-b72b484d-442c-473c-978b-07304ed8b7a8 req-c6dca2f3-cab6-4a45-8c2b-dcd66f810b73 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f4450ee9-fb6d-4c17-bee5-84291eedd055-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:52 np0005539564 nova_compute[226295]: 2025-11-29 08:03:52.922 226310 DEBUG nova.compute.manager [req-b72b484d-442c-473c-978b-07304ed8b7a8 req-c6dca2f3-cab6-4a45-8c2b-dcd66f810b73 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] No waiting events found dispatching network-vif-plugged-263015f4-a368-4943-b126-e3cbfb8f9053 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:03:52 np0005539564 nova_compute[226295]: 2025-11-29 08:03:52.922 226310 WARNING nova.compute.manager [req-b72b484d-442c-473c-978b-07304ed8b7a8 req-c6dca2f3-cab6-4a45-8c2b-dcd66f810b73 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Received unexpected event network-vif-plugged-263015f4-a368-4943-b126-e3cbfb8f9053 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:03:52 np0005539564 nova_compute[226295]: 2025-11-29 08:03:52.923 226310 DEBUG nova.compute.manager [req-b72b484d-442c-473c-978b-07304ed8b7a8 req-c6dca2f3-cab6-4a45-8c2b-dcd66f810b73 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Received event network-vif-deleted-263015f4-a368-4943-b126-e3cbfb8f9053 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:03:52 np0005539564 nova_compute[226295]: 2025-11-29 08:03:52.926 226310 DEBUG nova.scheduler.client.report [None req-37c49fdf-cd88-4203-af3e-1aa4e04116fa ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:03:52 np0005539564 nova_compute[226295]: 2025-11-29 08:03:52.954 226310 DEBUG oslo_concurrency.lockutils [None req-37c49fdf-cd88-4203-af3e-1aa4e04116fa ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:52 np0005539564 nova_compute[226295]: 2025-11-29 08:03:52.988 226310 INFO nova.scheduler.client.report [None req-37c49fdf-cd88-4203-af3e-1aa4e04116fa ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Deleted allocations for instance f4450ee9-fb6d-4c17-bee5-84291eedd055#033[00m
Nov 29 03:03:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:53.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:53 np0005539564 nova_compute[226295]: 2025-11-29 08:03:53.090 226310 DEBUG oslo_concurrency.lockutils [None req-37c49fdf-cd88-4203-af3e-1aa4e04116fa ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "f4450ee9-fb6d-4c17-bee5-84291eedd055" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:53.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:55.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:55.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:55 np0005539564 nova_compute[226295]: 2025-11-29 08:03:55.422 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:55 np0005539564 nova_compute[226295]: 2025-11-29 08:03:55.688 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403420.6875277, fb5b9f0e-9622-448b-8fa7-6c96fcc794cf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:03:55 np0005539564 nova_compute[226295]: 2025-11-29 08:03:55.689 226310 INFO nova.compute.manager [-] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:03:55 np0005539564 nova_compute[226295]: 2025-11-29 08:03:55.719 226310 DEBUG nova.compute.manager [None req-e3c448d9-80cb-47f2-9b8c-f928c52b1ca1 - - - - - -] [instance: fb5b9f0e-9622-448b-8fa7-6c96fcc794cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:03:56 np0005539564 nova_compute[226295]: 2025-11-29 08:03:56.414 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:56 np0005539564 nova_compute[226295]: 2025-11-29 08:03:56.887 226310 DEBUG oslo_concurrency.lockutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "5ae9fe1c-0566-4112-ac09-e04deb899d41" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:56 np0005539564 nova_compute[226295]: 2025-11-29 08:03:56.890 226310 DEBUG oslo_concurrency.lockutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "5ae9fe1c-0566-4112-ac09-e04deb899d41" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:56 np0005539564 nova_compute[226295]: 2025-11-29 08:03:56.924 226310 DEBUG nova.compute.manager [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:03:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:57.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:57 np0005539564 nova_compute[226295]: 2025-11-29 08:03:57.230 226310 DEBUG oslo_concurrency.lockutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:57 np0005539564 nova_compute[226295]: 2025-11-29 08:03:57.231 226310 DEBUG oslo_concurrency.lockutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:57.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:57 np0005539564 nova_compute[226295]: 2025-11-29 08:03:57.241 226310 DEBUG nova.virt.hardware [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:03:57 np0005539564 nova_compute[226295]: 2025-11-29 08:03:57.242 226310 INFO nova.compute.claims [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:03:57 np0005539564 nova_compute[226295]: 2025-11-29 08:03:57.433 226310 DEBUG oslo_concurrency.processutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:03:57 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3775957241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:03:57 np0005539564 nova_compute[226295]: 2025-11-29 08:03:57.929 226310 DEBUG oslo_concurrency.processutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:57 np0005539564 nova_compute[226295]: 2025-11-29 08:03:57.936 226310 DEBUG nova.compute.provider_tree [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:03:57 np0005539564 nova_compute[226295]: 2025-11-29 08:03:57.964 226310 DEBUG nova.scheduler.client.report [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:03:58 np0005539564 nova_compute[226295]: 2025-11-29 08:03:58.005 226310 DEBUG oslo_concurrency.lockutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:58 np0005539564 nova_compute[226295]: 2025-11-29 08:03:58.006 226310 DEBUG nova.compute.manager [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:03:58 np0005539564 nova_compute[226295]: 2025-11-29 08:03:58.057 226310 DEBUG nova.compute.manager [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:03:58 np0005539564 nova_compute[226295]: 2025-11-29 08:03:58.058 226310 DEBUG nova.network.neutron [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:03:58 np0005539564 nova_compute[226295]: 2025-11-29 08:03:58.074 226310 INFO nova.virt.libvirt.driver [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:03:58 np0005539564 nova_compute[226295]: 2025-11-29 08:03:58.105 226310 DEBUG nova.compute.manager [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:03:58 np0005539564 nova_compute[226295]: 2025-11-29 08:03:58.227 226310 DEBUG nova.compute.manager [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:03:58 np0005539564 nova_compute[226295]: 2025-11-29 08:03:58.229 226310 DEBUG nova.virt.libvirt.driver [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:03:58 np0005539564 nova_compute[226295]: 2025-11-29 08:03:58.229 226310 INFO nova.virt.libvirt.driver [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Creating image(s)#033[00m
Nov 29 03:03:58 np0005539564 nova_compute[226295]: 2025-11-29 08:03:58.265 226310 DEBUG nova.storage.rbd_utils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image 5ae9fe1c-0566-4112-ac09-e04deb899d41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:58 np0005539564 nova_compute[226295]: 2025-11-29 08:03:58.304 226310 DEBUG nova.storage.rbd_utils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image 5ae9fe1c-0566-4112-ac09-e04deb899d41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:58 np0005539564 nova_compute[226295]: 2025-11-29 08:03:58.344 226310 DEBUG nova.storage.rbd_utils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image 5ae9fe1c-0566-4112-ac09-e04deb899d41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:58 np0005539564 nova_compute[226295]: 2025-11-29 08:03:58.349 226310 DEBUG oslo_concurrency.processutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:58 np0005539564 nova_compute[226295]: 2025-11-29 08:03:58.452 226310 DEBUG oslo_concurrency.processutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:58 np0005539564 nova_compute[226295]: 2025-11-29 08:03:58.453 226310 DEBUG oslo_concurrency.lockutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:58 np0005539564 nova_compute[226295]: 2025-11-29 08:03:58.454 226310 DEBUG oslo_concurrency.lockutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:58 np0005539564 nova_compute[226295]: 2025-11-29 08:03:58.454 226310 DEBUG oslo_concurrency.lockutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:58 np0005539564 nova_compute[226295]: 2025-11-29 08:03:58.497 226310 DEBUG nova.storage.rbd_utils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image 5ae9fe1c-0566-4112-ac09-e04deb899d41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:58 np0005539564 nova_compute[226295]: 2025-11-29 08:03:58.503 226310 DEBUG oslo_concurrency.processutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 5ae9fe1c-0566-4112-ac09-e04deb899d41_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:58 np0005539564 nova_compute[226295]: 2025-11-29 08:03:58.537 226310 DEBUG nova.policy [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ef8e9cc962eb4827954df3c42cc34798', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f8bc2a2616a34ba1a18b3211e406993f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:03:58 np0005539564 nova_compute[226295]: 2025-11-29 08:03:58.828 226310 DEBUG oslo_concurrency.processutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 5ae9fe1c-0566-4112-ac09-e04deb899d41_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:58 np0005539564 nova_compute[226295]: 2025-11-29 08:03:58.910 226310 DEBUG nova.storage.rbd_utils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] resizing rbd image 5ae9fe1c-0566-4112-ac09-e04deb899d41_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:03:59 np0005539564 nova_compute[226295]: 2025-11-29 08:03:59.042 226310 DEBUG nova.objects.instance [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'migration_context' on Instance uuid 5ae9fe1c-0566-4112-ac09-e04deb899d41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:03:59 np0005539564 nova_compute[226295]: 2025-11-29 08:03:59.057 226310 DEBUG nova.virt.libvirt.driver [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:03:59 np0005539564 nova_compute[226295]: 2025-11-29 08:03:59.058 226310 DEBUG nova.virt.libvirt.driver [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Ensure instance console log exists: /var/lib/nova/instances/5ae9fe1c-0566-4112-ac09-e04deb899d41/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:03:59 np0005539564 nova_compute[226295]: 2025-11-29 08:03:59.059 226310 DEBUG oslo_concurrency.lockutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:59 np0005539564 nova_compute[226295]: 2025-11-29 08:03:59.060 226310 DEBUG oslo_concurrency.lockutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:59 np0005539564 nova_compute[226295]: 2025-11-29 08:03:59.060 226310 DEBUG oslo_concurrency.lockutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:59.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:03:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:59.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:59 np0005539564 nova_compute[226295]: 2025-11-29 08:03:59.801 226310 DEBUG nova.network.neutron [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Successfully created port: ff7f077c-a428-4a43-86c8-7e1210f51fab _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:04:00 np0005539564 nova_compute[226295]: 2025-11-29 08:04:00.427 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:00 np0005539564 nova_compute[226295]: 2025-11-29 08:04:00.753 226310 DEBUG nova.network.neutron [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Successfully updated port: ff7f077c-a428-4a43-86c8-7e1210f51fab _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:04:00 np0005539564 nova_compute[226295]: 2025-11-29 08:04:00.779 226310 DEBUG oslo_concurrency.lockutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "refresh_cache-5ae9fe1c-0566-4112-ac09-e04deb899d41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:04:00 np0005539564 nova_compute[226295]: 2025-11-29 08:04:00.779 226310 DEBUG oslo_concurrency.lockutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquired lock "refresh_cache-5ae9fe1c-0566-4112-ac09-e04deb899d41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:04:00 np0005539564 nova_compute[226295]: 2025-11-29 08:04:00.780 226310 DEBUG nova.network.neutron [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:04:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:01.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:01 np0005539564 nova_compute[226295]: 2025-11-29 08:04:01.120 226310 DEBUG nova.network.neutron [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:04:01 np0005539564 nova_compute[226295]: 2025-11-29 08:04:01.183 226310 DEBUG nova.compute.manager [req-d014b0f1-79b3-4c7c-8b2d-f2d8e5b646bc req-11fb323f-a89d-42a3-b9ad-fea6e954d04f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Received event network-changed-ff7f077c-a428-4a43-86c8-7e1210f51fab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:01 np0005539564 nova_compute[226295]: 2025-11-29 08:04:01.184 226310 DEBUG nova.compute.manager [req-d014b0f1-79b3-4c7c-8b2d-f2d8e5b646bc req-11fb323f-a89d-42a3-b9ad-fea6e954d04f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Refreshing instance network info cache due to event network-changed-ff7f077c-a428-4a43-86c8-7e1210f51fab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:04:01 np0005539564 nova_compute[226295]: 2025-11-29 08:04:01.184 226310 DEBUG oslo_concurrency.lockutils [req-d014b0f1-79b3-4c7c-8b2d-f2d8e5b646bc req-11fb323f-a89d-42a3-b9ad-fea6e954d04f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-5ae9fe1c-0566-4112-ac09-e04deb899d41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:04:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:01.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:01 np0005539564 nova_compute[226295]: 2025-11-29 08:04:01.417 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:02 np0005539564 nova_compute[226295]: 2025-11-29 08:04:02.290 226310 DEBUG nova.network.neutron [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Updating instance_info_cache with network_info: [{"id": "ff7f077c-a428-4a43-86c8-7e1210f51fab", "address": "fa:16:3e:5e:18:47", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff7f077c-a4", "ovs_interfaceid": "ff7f077c-a428-4a43-86c8-7e1210f51fab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:02 np0005539564 nova_compute[226295]: 2025-11-29 08:04:02.340 226310 DEBUG oslo_concurrency.lockutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Releasing lock "refresh_cache-5ae9fe1c-0566-4112-ac09-e04deb899d41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:04:02 np0005539564 nova_compute[226295]: 2025-11-29 08:04:02.341 226310 DEBUG nova.compute.manager [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Instance network_info: |[{"id": "ff7f077c-a428-4a43-86c8-7e1210f51fab", "address": "fa:16:3e:5e:18:47", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff7f077c-a4", "ovs_interfaceid": "ff7f077c-a428-4a43-86c8-7e1210f51fab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:04:02 np0005539564 nova_compute[226295]: 2025-11-29 08:04:02.342 226310 DEBUG oslo_concurrency.lockutils [req-d014b0f1-79b3-4c7c-8b2d-f2d8e5b646bc req-11fb323f-a89d-42a3-b9ad-fea6e954d04f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-5ae9fe1c-0566-4112-ac09-e04deb899d41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:04:02 np0005539564 nova_compute[226295]: 2025-11-29 08:04:02.343 226310 DEBUG nova.network.neutron [req-d014b0f1-79b3-4c7c-8b2d-f2d8e5b646bc req-11fb323f-a89d-42a3-b9ad-fea6e954d04f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Refreshing network info cache for port ff7f077c-a428-4a43-86c8-7e1210f51fab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:04:02 np0005539564 nova_compute[226295]: 2025-11-29 08:04:02.348 226310 DEBUG nova.virt.libvirt.driver [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Start _get_guest_xml network_info=[{"id": "ff7f077c-a428-4a43-86c8-7e1210f51fab", "address": "fa:16:3e:5e:18:47", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff7f077c-a4", "ovs_interfaceid": "ff7f077c-a428-4a43-86c8-7e1210f51fab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:04:02 np0005539564 nova_compute[226295]: 2025-11-29 08:04:02.356 226310 WARNING nova.virt.libvirt.driver [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:04:02 np0005539564 nova_compute[226295]: 2025-11-29 08:04:02.362 226310 DEBUG nova.virt.libvirt.host [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:04:02 np0005539564 nova_compute[226295]: 2025-11-29 08:04:02.363 226310 DEBUG nova.virt.libvirt.host [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:04:02 np0005539564 nova_compute[226295]: 2025-11-29 08:04:02.368 226310 DEBUG nova.virt.libvirt.host [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:04:02 np0005539564 nova_compute[226295]: 2025-11-29 08:04:02.370 226310 DEBUG nova.virt.libvirt.host [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:04:02 np0005539564 nova_compute[226295]: 2025-11-29 08:04:02.373 226310 DEBUG nova.virt.libvirt.driver [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:04:02 np0005539564 nova_compute[226295]: 2025-11-29 08:04:02.373 226310 DEBUG nova.virt.hardware [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:04:02 np0005539564 nova_compute[226295]: 2025-11-29 08:04:02.374 226310 DEBUG nova.virt.hardware [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:04:02 np0005539564 nova_compute[226295]: 2025-11-29 08:04:02.374 226310 DEBUG nova.virt.hardware [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:04:02 np0005539564 nova_compute[226295]: 2025-11-29 08:04:02.375 226310 DEBUG nova.virt.hardware [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:04:02 np0005539564 nova_compute[226295]: 2025-11-29 08:04:02.375 226310 DEBUG nova.virt.hardware [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:04:02 np0005539564 nova_compute[226295]: 2025-11-29 08:04:02.375 226310 DEBUG nova.virt.hardware [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:04:02 np0005539564 nova_compute[226295]: 2025-11-29 08:04:02.376 226310 DEBUG nova.virt.hardware [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:04:02 np0005539564 nova_compute[226295]: 2025-11-29 08:04:02.376 226310 DEBUG nova.virt.hardware [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:04:02 np0005539564 nova_compute[226295]: 2025-11-29 08:04:02.377 226310 DEBUG nova.virt.hardware [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:04:02 np0005539564 nova_compute[226295]: 2025-11-29 08:04:02.377 226310 DEBUG nova.virt.hardware [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:04:02 np0005539564 nova_compute[226295]: 2025-11-29 08:04:02.377 226310 DEBUG nova.virt.hardware [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:04:02 np0005539564 nova_compute[226295]: 2025-11-29 08:04:02.386 226310 DEBUG oslo_concurrency.processutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:04:02 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/548541423' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:04:02 np0005539564 nova_compute[226295]: 2025-11-29 08:04:02.906 226310 DEBUG oslo_concurrency.processutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:02 np0005539564 nova_compute[226295]: 2025-11-29 08:04:02.949 226310 DEBUG nova.storage.rbd_utils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image 5ae9fe1c-0566-4112-ac09-e04deb899d41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:02 np0005539564 nova_compute[226295]: 2025-11-29 08:04:02.954 226310 DEBUG oslo_concurrency.processutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:03.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:03.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:04:03 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2929208264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.409 226310 DEBUG oslo_concurrency.processutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.413 226310 DEBUG nova.virt.libvirt.vif [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:03:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1390090872',display_name='tempest-DeleteServersTestJSON-server-1390090872',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1390090872',id=72,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8bc2a2616a34ba1a18b3211e406993f',ramdisk_id='',reservation_id='r-jpkefmoa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-69711189',owner_user_name='tempest-DeleteServersTestJSON
-69711189-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:03:58Z,user_data=None,user_id='ef8e9cc962eb4827954df3c42cc34798',uuid=5ae9fe1c-0566-4112-ac09-e04deb899d41,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff7f077c-a428-4a43-86c8-7e1210f51fab", "address": "fa:16:3e:5e:18:47", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff7f077c-a4", "ovs_interfaceid": "ff7f077c-a428-4a43-86c8-7e1210f51fab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.413 226310 DEBUG nova.network.os_vif_util [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converting VIF {"id": "ff7f077c-a428-4a43-86c8-7e1210f51fab", "address": "fa:16:3e:5e:18:47", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff7f077c-a4", "ovs_interfaceid": "ff7f077c-a428-4a43-86c8-7e1210f51fab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.414 226310 DEBUG nova.network.os_vif_util [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:18:47,bridge_name='br-int',has_traffic_filtering=True,id=ff7f077c-a428-4a43-86c8-7e1210f51fab,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff7f077c-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.416 226310 DEBUG nova.objects.instance [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'pci_devices' on Instance uuid 5ae9fe1c-0566-4112-ac09-e04deb899d41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.471 226310 DEBUG nova.virt.libvirt.driver [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:04:03 np0005539564 nova_compute[226295]:  <uuid>5ae9fe1c-0566-4112-ac09-e04deb899d41</uuid>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:  <name>instance-00000048</name>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <nova:name>tempest-DeleteServersTestJSON-server-1390090872</nova:name>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:04:02</nova:creationTime>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:04:03 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:        <nova:user uuid="ef8e9cc962eb4827954df3c42cc34798">tempest-DeleteServersTestJSON-69711189-project-member</nova:user>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:        <nova:project uuid="f8bc2a2616a34ba1a18b3211e406993f">tempest-DeleteServersTestJSON-69711189</nova:project>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:        <nova:port uuid="ff7f077c-a428-4a43-86c8-7e1210f51fab">
Nov 29 03:04:03 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <entry name="serial">5ae9fe1c-0566-4112-ac09-e04deb899d41</entry>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <entry name="uuid">5ae9fe1c-0566-4112-ac09-e04deb899d41</entry>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/5ae9fe1c-0566-4112-ac09-e04deb899d41_disk">
Nov 29 03:04:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:04:03 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/5ae9fe1c-0566-4112-ac09-e04deb899d41_disk.config">
Nov 29 03:04:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:04:03 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:5e:18:47"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <target dev="tapff7f077c-a4"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/5ae9fe1c-0566-4112-ac09-e04deb899d41/console.log" append="off"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:04:03 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:04:03 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:04:03 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:04:03 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.474 226310 DEBUG nova.compute.manager [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Preparing to wait for external event network-vif-plugged-ff7f077c-a428-4a43-86c8-7e1210f51fab prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.476 226310 DEBUG oslo_concurrency.lockutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "5ae9fe1c-0566-4112-ac09-e04deb899d41-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.476 226310 DEBUG oslo_concurrency.lockutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "5ae9fe1c-0566-4112-ac09-e04deb899d41-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.477 226310 DEBUG oslo_concurrency.lockutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "5ae9fe1c-0566-4112-ac09-e04deb899d41-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.478 226310 DEBUG nova.virt.libvirt.vif [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:03:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1390090872',display_name='tempest-DeleteServersTestJSON-server-1390090872',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1390090872',id=72,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8bc2a2616a34ba1a18b3211e406993f',ramdisk_id='',reservation_id='r-jpkefmoa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-69711189',owner_user_name='tempest-DeleteServe
rsTestJSON-69711189-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:03:58Z,user_data=None,user_id='ef8e9cc962eb4827954df3c42cc34798',uuid=5ae9fe1c-0566-4112-ac09-e04deb899d41,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff7f077c-a428-4a43-86c8-7e1210f51fab", "address": "fa:16:3e:5e:18:47", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff7f077c-a4", "ovs_interfaceid": "ff7f077c-a428-4a43-86c8-7e1210f51fab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.479 226310 DEBUG nova.network.os_vif_util [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converting VIF {"id": "ff7f077c-a428-4a43-86c8-7e1210f51fab", "address": "fa:16:3e:5e:18:47", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff7f077c-a4", "ovs_interfaceid": "ff7f077c-a428-4a43-86c8-7e1210f51fab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.480 226310 DEBUG nova.network.os_vif_util [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:18:47,bridge_name='br-int',has_traffic_filtering=True,id=ff7f077c-a428-4a43-86c8-7e1210f51fab,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff7f077c-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.481 226310 DEBUG os_vif [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:18:47,bridge_name='br-int',has_traffic_filtering=True,id=ff7f077c-a428-4a43-86c8-7e1210f51fab,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff7f077c-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.482 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.483 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.484 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.490 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.491 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff7f077c-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.492 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapff7f077c-a4, col_values=(('external_ids', {'iface-id': 'ff7f077c-a428-4a43-86c8-7e1210f51fab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:18:47', 'vm-uuid': '5ae9fe1c-0566-4112-ac09-e04deb899d41'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.495 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:03 np0005539564 NetworkManager[48997]: <info>  [1764403443.4961] manager: (tapff7f077c-a4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/117)
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.501 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.503 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.504 226310 INFO os_vif [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:18:47,bridge_name='br-int',has_traffic_filtering=True,id=ff7f077c-a428-4a43-86c8-7e1210f51fab,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff7f077c-a4')#033[00m
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.580 226310 DEBUG nova.virt.libvirt.driver [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.581 226310 DEBUG nova.virt.libvirt.driver [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.581 226310 DEBUG nova.virt.libvirt.driver [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No VIF found with MAC fa:16:3e:5e:18:47, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.582 226310 INFO nova.virt.libvirt.driver [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Using config drive#033[00m
Nov 29 03:04:03 np0005539564 nova_compute[226295]: 2025-11-29 08:04:03.614 226310 DEBUG nova.storage.rbd_utils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image 5ae9fe1c-0566-4112-ac09-e04deb899d41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:03.713 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:03.714 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:03.714 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:04 np0005539564 nova_compute[226295]: 2025-11-29 08:04:04.645 226310 INFO nova.virt.libvirt.driver [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Creating config drive at /var/lib/nova/instances/5ae9fe1c-0566-4112-ac09-e04deb899d41/disk.config#033[00m
Nov 29 03:04:04 np0005539564 nova_compute[226295]: 2025-11-29 08:04:04.658 226310 DEBUG oslo_concurrency.processutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5ae9fe1c-0566-4112-ac09-e04deb899d41/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdjjqtatz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:04 np0005539564 nova_compute[226295]: 2025-11-29 08:04:04.808 226310 DEBUG oslo_concurrency.processutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5ae9fe1c-0566-4112-ac09-e04deb899d41/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdjjqtatz" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:04 np0005539564 nova_compute[226295]: 2025-11-29 08:04:04.852 226310 DEBUG nova.storage.rbd_utils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image 5ae9fe1c-0566-4112-ac09-e04deb899d41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:04 np0005539564 nova_compute[226295]: 2025-11-29 08:04:04.857 226310 DEBUG oslo_concurrency.processutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5ae9fe1c-0566-4112-ac09-e04deb899d41/disk.config 5ae9fe1c-0566-4112-ac09-e04deb899d41_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:05 np0005539564 nova_compute[226295]: 2025-11-29 08:04:05.070 226310 DEBUG oslo_concurrency.processutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5ae9fe1c-0566-4112-ac09-e04deb899d41/disk.config 5ae9fe1c-0566-4112-ac09-e04deb899d41_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:05 np0005539564 nova_compute[226295]: 2025-11-29 08:04:05.071 226310 INFO nova.virt.libvirt.driver [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Deleting local config drive /var/lib/nova/instances/5ae9fe1c-0566-4112-ac09-e04deb899d41/disk.config because it was imported into RBD.#033[00m
Nov 29 03:04:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:05.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:05 np0005539564 kernel: tapff7f077c-a4: entered promiscuous mode
Nov 29 03:04:05 np0005539564 NetworkManager[48997]: <info>  [1764403445.1592] manager: (tapff7f077c-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/118)
Nov 29 03:04:05 np0005539564 ovn_controller[130591]: 2025-11-29T08:04:05Z|00229|binding|INFO|Claiming lport ff7f077c-a428-4a43-86c8-7e1210f51fab for this chassis.
Nov 29 03:04:05 np0005539564 ovn_controller[130591]: 2025-11-29T08:04:05Z|00230|binding|INFO|ff7f077c-a428-4a43-86c8-7e1210f51fab: Claiming fa:16:3e:5e:18:47 10.100.0.13
Nov 29 03:04:05 np0005539564 nova_compute[226295]: 2025-11-29 08:04:05.161 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.170 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:18:47 10.100.0.13'], port_security=['fa:16:3e:5e:18:47 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5ae9fe1c-0566-4112-ac09-e04deb899d41', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5e42602-d72e-4beb-864d-714bd1635da9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8bc2a2616a34ba1a18b3211e406993f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82ddc102-a213-473f-abf3-dc5f60e4fa79', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46241d46-2f65-4ed5-b860-f30a985d632f, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=ff7f077c-a428-4a43-86c8-7e1210f51fab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.174 139780 INFO neutron.agent.ovn.metadata.agent [-] Port ff7f077c-a428-4a43-86c8-7e1210f51fab in datapath d5e42602-d72e-4beb-864d-714bd1635da9 bound to our chassis#033[00m
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.176 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d5e42602-d72e-4beb-864d-714bd1635da9#033[00m
Nov 29 03:04:05 np0005539564 nova_compute[226295]: 2025-11-29 08:04:05.179 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:05 np0005539564 ovn_controller[130591]: 2025-11-29T08:04:05Z|00231|binding|INFO|Setting lport ff7f077c-a428-4a43-86c8-7e1210f51fab ovn-installed in OVS
Nov 29 03:04:05 np0005539564 ovn_controller[130591]: 2025-11-29T08:04:05Z|00232|binding|INFO|Setting lport ff7f077c-a428-4a43-86c8-7e1210f51fab up in Southbound
Nov 29 03:04:05 np0005539564 nova_compute[226295]: 2025-11-29 08:04:05.185 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.197 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6c7c13ed-7497-4447-a5dd-6ca8e3504eb3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.199 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd5e42602-d1 in ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.202 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd5e42602-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.202 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9598512e-01f5-469f-b0a5-16aa9ba45b86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:05 np0005539564 systemd-udevd[253699]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.204 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[eaac433b-88bd-46b5-8f69-112634aeffbc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:05 np0005539564 systemd-machined[190128]: New machine qemu-32-instance-00000048.
Nov 29 03:04:05 np0005539564 NetworkManager[48997]: <info>  [1764403445.2274] device (tapff7f077c-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.225 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[125587df-c58e-4802-9d6e-701ffce0be85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:05 np0005539564 NetworkManager[48997]: <info>  [1764403445.2307] device (tapff7f077c-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:04:05 np0005539564 systemd[1]: Started Virtual Machine qemu-32-instance-00000048.
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.249 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c5db4209-b7d2-4942-aa54-d17af698e436]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:05.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.284 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[f2aa218b-1dad-4dd5-a635-b9e9eb27ad56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:05 np0005539564 NetworkManager[48997]: <info>  [1764403445.2897] manager: (tapd5e42602-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/119)
Nov 29 03:04:05 np0005539564 systemd-udevd[253703]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.289 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5d70d25f-2884-46c0-bbf1-0f7fd2c5048b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.332 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[63447f61-e784-4183-a696-7434011f82e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.335 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[ca8f7814-7e2a-4b54-a18d-e569089aeee1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:05 np0005539564 NetworkManager[48997]: <info>  [1764403445.3638] device (tapd5e42602-d0): carrier: link connected
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.369 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[3344c131-6dec-476f-8c2b-2e03fcdaee5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:05 np0005539564 nova_compute[226295]: 2025-11-29 08:04:05.374 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403430.3734186, f4450ee9-fb6d-4c17-bee5-84291eedd055 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:05 np0005539564 nova_compute[226295]: 2025-11-29 08:04:05.375 226310 INFO nova.compute.manager [-] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.389 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[18ba6ea6-4bac-43ab-be83-8d65a961ee96]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5e42602-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:37:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642004, 'reachable_time': 17955, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253732, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.405 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1bdc2213-ddb6-4cd5-839b-9277eb6f8fe5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecd:370b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642004, 'tstamp': 642004}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253733, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:05 np0005539564 nova_compute[226295]: 2025-11-29 08:04:05.407 226310 DEBUG nova.compute.manager [None req-f560e44d-2615-4c79-a6f4-01cf85c0d7d7 - - - - - -] [instance: f4450ee9-fb6d-4c17-bee5-84291eedd055] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.424 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[cd021509-1d8a-425a-8c37-c3ad29d40024]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5e42602-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:37:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642004, 'reachable_time': 17955, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253734, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.461 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4ef24b39-b23f-409d-bb9c-3983e7ec1a80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.530 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ca7ff557-a643-4929-884c-2e3e2b4ca6ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.531 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5e42602-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.532 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.532 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5e42602-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:05 np0005539564 nova_compute[226295]: 2025-11-29 08:04:05.535 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:05 np0005539564 kernel: tapd5e42602-d0: entered promiscuous mode
Nov 29 03:04:05 np0005539564 NetworkManager[48997]: <info>  [1764403445.5356] manager: (tapd5e42602-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/120)
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.539 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd5e42602-d0, col_values=(('external_ids', {'iface-id': 'b61ef3f5-e0b1-44f8-9b21-acba8a1ead2e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:05 np0005539564 ovn_controller[130591]: 2025-11-29T08:04:05Z|00233|binding|INFO|Releasing lport b61ef3f5-e0b1-44f8-9b21-acba8a1ead2e from this chassis (sb_readonly=0)
Nov 29 03:04:05 np0005539564 nova_compute[226295]: 2025-11-29 08:04:05.540 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:05 np0005539564 nova_compute[226295]: 2025-11-29 08:04:05.547 226310 DEBUG nova.compute.manager [req-606a3f53-2680-455a-9867-2b28f35237de req-c0462b86-2f34-40f0-8e29-44c093af4663 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Received event network-vif-plugged-ff7f077c-a428-4a43-86c8-7e1210f51fab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:05 np0005539564 nova_compute[226295]: 2025-11-29 08:04:05.548 226310 DEBUG oslo_concurrency.lockutils [req-606a3f53-2680-455a-9867-2b28f35237de req-c0462b86-2f34-40f0-8e29-44c093af4663 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5ae9fe1c-0566-4112-ac09-e04deb899d41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:05 np0005539564 nova_compute[226295]: 2025-11-29 08:04:05.548 226310 DEBUG oslo_concurrency.lockutils [req-606a3f53-2680-455a-9867-2b28f35237de req-c0462b86-2f34-40f0-8e29-44c093af4663 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5ae9fe1c-0566-4112-ac09-e04deb899d41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:05 np0005539564 nova_compute[226295]: 2025-11-29 08:04:05.549 226310 DEBUG oslo_concurrency.lockutils [req-606a3f53-2680-455a-9867-2b28f35237de req-c0462b86-2f34-40f0-8e29-44c093af4663 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5ae9fe1c-0566-4112-ac09-e04deb899d41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:05 np0005539564 nova_compute[226295]: 2025-11-29 08:04:05.549 226310 DEBUG nova.compute.manager [req-606a3f53-2680-455a-9867-2b28f35237de req-c0462b86-2f34-40f0-8e29-44c093af4663 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Processing event network-vif-plugged-ff7f077c-a428-4a43-86c8-7e1210f51fab _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:04:05 np0005539564 nova_compute[226295]: 2025-11-29 08:04:05.555 226310 DEBUG nova.network.neutron [req-d014b0f1-79b3-4c7c-8b2d-f2d8e5b646bc req-11fb323f-a89d-42a3-b9ad-fea6e954d04f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Updated VIF entry in instance network info cache for port ff7f077c-a428-4a43-86c8-7e1210f51fab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:04:05 np0005539564 nova_compute[226295]: 2025-11-29 08:04:05.555 226310 DEBUG nova.network.neutron [req-d014b0f1-79b3-4c7c-8b2d-f2d8e5b646bc req-11fb323f-a89d-42a3-b9ad-fea6e954d04f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Updating instance_info_cache with network_info: [{"id": "ff7f077c-a428-4a43-86c8-7e1210f51fab", "address": "fa:16:3e:5e:18:47", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff7f077c-a4", "ovs_interfaceid": "ff7f077c-a428-4a43-86c8-7e1210f51fab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:05 np0005539564 nova_compute[226295]: 2025-11-29 08:04:05.560 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.561 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d5e42602-d72e-4beb-864d-714bd1635da9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d5e42602-d72e-4beb-864d-714bd1635da9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.563 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[18520a2c-6729-4e73-a6b1-fd42bd8707b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.563 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-d5e42602-d72e-4beb-864d-714bd1635da9
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/d5e42602-d72e-4beb-864d-714bd1635da9.pid.haproxy
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID d5e42602-d72e-4beb-864d-714bd1635da9
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:04:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:05.564 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'env', 'PROCESS_TAG=haproxy-d5e42602-d72e-4beb-864d-714bd1635da9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d5e42602-d72e-4beb-864d-714bd1635da9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:04:05 np0005539564 nova_compute[226295]: 2025-11-29 08:04:05.584 226310 DEBUG oslo_concurrency.lockutils [req-d014b0f1-79b3-4c7c-8b2d-f2d8e5b646bc req-11fb323f-a89d-42a3-b9ad-fea6e954d04f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-5ae9fe1c-0566-4112-ac09-e04deb899d41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:04:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:05 np0005539564 podman[253766]: 2025-11-29 08:04:05.987100086 +0000 UTC m=+0.062157059 container create 0936695a7eea1e6076cf8c8b32c355addb4ad9267572914b1503f4bf33856c24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:04:06 np0005539564 systemd[1]: Started libpod-conmon-0936695a7eea1e6076cf8c8b32c355addb4ad9267572914b1503f4bf33856c24.scope.
Nov 29 03:04:06 np0005539564 podman[253766]: 2025-11-29 08:04:05.953085727 +0000 UTC m=+0.028142760 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:04:06 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:04:06 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27292a16adfc9a037d30a5cd244e9d9230a1d2b94aa54209b299c2e392476813/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:06 np0005539564 podman[253766]: 2025-11-29 08:04:06.096115436 +0000 UTC m=+0.171172439 container init 0936695a7eea1e6076cf8c8b32c355addb4ad9267572914b1503f4bf33856c24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 03:04:06 np0005539564 podman[253766]: 2025-11-29 08:04:06.102000335 +0000 UTC m=+0.177057308 container start 0936695a7eea1e6076cf8c8b32c355addb4ad9267572914b1503f4bf33856c24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:04:06 np0005539564 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[253781]: [NOTICE]   (253785) : New worker (253787) forked
Nov 29 03:04:06 np0005539564 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[253781]: [NOTICE]   (253785) : Loading success.
Nov 29 03:04:06 np0005539564 nova_compute[226295]: 2025-11-29 08:04:06.419 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:07.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:07.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.581 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403447.5814097, 5ae9fe1c-0566-4112-ac09-e04deb899d41 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.582 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] VM Started (Lifecycle Event)#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.584 226310 DEBUG nova.compute.manager [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.588 226310 DEBUG nova.virt.libvirt.driver [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.593 226310 INFO nova.virt.libvirt.driver [-] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Instance spawned successfully.#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.593 226310 DEBUG nova.virt.libvirt.driver [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.613 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.620 226310 DEBUG nova.virt.libvirt.driver [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.620 226310 DEBUG nova.virt.libvirt.driver [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.621 226310 DEBUG nova.virt.libvirt.driver [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.621 226310 DEBUG nova.virt.libvirt.driver [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.621 226310 DEBUG nova.virt.libvirt.driver [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.622 226310 DEBUG nova.virt.libvirt.driver [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.626 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.652 226310 DEBUG nova.compute.manager [req-e5f9e0eb-0719-48cb-a65f-5ff6a87bb8f3 req-05bb24b3-49ca-4046-b5a8-088a279320f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Received event network-vif-plugged-ff7f077c-a428-4a43-86c8-7e1210f51fab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.653 226310 DEBUG oslo_concurrency.lockutils [req-e5f9e0eb-0719-48cb-a65f-5ff6a87bb8f3 req-05bb24b3-49ca-4046-b5a8-088a279320f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5ae9fe1c-0566-4112-ac09-e04deb899d41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.653 226310 DEBUG oslo_concurrency.lockutils [req-e5f9e0eb-0719-48cb-a65f-5ff6a87bb8f3 req-05bb24b3-49ca-4046-b5a8-088a279320f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5ae9fe1c-0566-4112-ac09-e04deb899d41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.653 226310 DEBUG oslo_concurrency.lockutils [req-e5f9e0eb-0719-48cb-a65f-5ff6a87bb8f3 req-05bb24b3-49ca-4046-b5a8-088a279320f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5ae9fe1c-0566-4112-ac09-e04deb899d41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.653 226310 DEBUG nova.compute.manager [req-e5f9e0eb-0719-48cb-a65f-5ff6a87bb8f3 req-05bb24b3-49ca-4046-b5a8-088a279320f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] No waiting events found dispatching network-vif-plugged-ff7f077c-a428-4a43-86c8-7e1210f51fab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.654 226310 WARNING nova.compute.manager [req-e5f9e0eb-0719-48cb-a65f-5ff6a87bb8f3 req-05bb24b3-49ca-4046-b5a8-088a279320f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Received unexpected event network-vif-plugged-ff7f077c-a428-4a43-86c8-7e1210f51fab for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.665 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.665 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403447.5817091, 5ae9fe1c-0566-4112-ac09-e04deb899d41 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.665 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.687 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.691 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403447.5874138, 5ae9fe1c-0566-4112-ac09-e04deb899d41 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.691 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.699 226310 INFO nova.compute.manager [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Took 9.47 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.699 226310 DEBUG nova.compute.manager [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.706 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.710 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.738 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.763 226310 INFO nova.compute.manager [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Took 10.58 seconds to build instance.#033[00m
Nov 29 03:04:07 np0005539564 nova_compute[226295]: 2025-11-29 08:04:07.788 226310 DEBUG oslo_concurrency.lockutils [None req-f3de0b1d-7549-4a80-aeee-1d665579d596 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "5ae9fe1c-0566-4112-ac09-e04deb899d41" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:08 np0005539564 nova_compute[226295]: 2025-11-29 08:04:08.494 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:09.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:09.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:11.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:11 np0005539564 nova_compute[226295]: 2025-11-29 08:04:11.198 226310 DEBUG oslo_concurrency.lockutils [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "5ae9fe1c-0566-4112-ac09-e04deb899d41" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:11 np0005539564 nova_compute[226295]: 2025-11-29 08:04:11.199 226310 DEBUG oslo_concurrency.lockutils [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "5ae9fe1c-0566-4112-ac09-e04deb899d41" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:11 np0005539564 nova_compute[226295]: 2025-11-29 08:04:11.200 226310 INFO nova.compute.manager [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Shelving#033[00m
Nov 29 03:04:11 np0005539564 nova_compute[226295]: 2025-11-29 08:04:11.226 226310 DEBUG nova.virt.libvirt.driver [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:04:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:11.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:11 np0005539564 nova_compute[226295]: 2025-11-29 08:04:11.421 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:13.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:13.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:13 np0005539564 nova_compute[226295]: 2025-11-29 08:04:13.496 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:15.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:15.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:15 np0005539564 podman[253840]: 2025-11-29 08:04:15.527867077 +0000 UTC m=+0.078910619 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:04:15 np0005539564 podman[253841]: 2025-11-29 08:04:15.546377507 +0000 UTC m=+0.096018411 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:04:15 np0005539564 podman[253839]: 2025-11-29 08:04:15.555218085 +0000 UTC m=+0.107197742 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 29 03:04:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:16 np0005539564 nova_compute[226295]: 2025-11-29 08:04:16.425 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:17.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:17.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:18 np0005539564 nova_compute[226295]: 2025-11-29 08:04:18.500 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:19.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:19.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:20 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Nov 29 03:04:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:21.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:21.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:21 np0005539564 nova_compute[226295]: 2025-11-29 08:04:21.287 226310 DEBUG nova.virt.libvirt.driver [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:04:21 np0005539564 nova_compute[226295]: 2025-11-29 08:04:21.429 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:22 np0005539564 ovn_controller[130591]: 2025-11-29T08:04:22Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5e:18:47 10.100.0.13
Nov 29 03:04:22 np0005539564 ovn_controller[130591]: 2025-11-29T08:04:22Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5e:18:47 10.100.0.13
Nov 29 03:04:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:23.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:23.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:23 np0005539564 nova_compute[226295]: 2025-11-29 08:04:23.502 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:25.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:25 np0005539564 kernel: tapff7f077c-a4 (unregistering): left promiscuous mode
Nov 29 03:04:25 np0005539564 NetworkManager[48997]: <info>  [1764403465.2776] device (tapff7f077c-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:04:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:25.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:25 np0005539564 ovn_controller[130591]: 2025-11-29T08:04:25Z|00234|binding|INFO|Releasing lport ff7f077c-a428-4a43-86c8-7e1210f51fab from this chassis (sb_readonly=0)
Nov 29 03:04:25 np0005539564 ovn_controller[130591]: 2025-11-29T08:04:25Z|00235|binding|INFO|Setting lport ff7f077c-a428-4a43-86c8-7e1210f51fab down in Southbound
Nov 29 03:04:25 np0005539564 ovn_controller[130591]: 2025-11-29T08:04:25Z|00236|binding|INFO|Removing iface tapff7f077c-a4 ovn-installed in OVS
Nov 29 03:04:25 np0005539564 nova_compute[226295]: 2025-11-29 08:04:25.296 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:25.300 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:18:47 10.100.0.13'], port_security=['fa:16:3e:5e:18:47 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5ae9fe1c-0566-4112-ac09-e04deb899d41', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5e42602-d72e-4beb-864d-714bd1635da9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8bc2a2616a34ba1a18b3211e406993f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82ddc102-a213-473f-abf3-dc5f60e4fa79', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46241d46-2f65-4ed5-b860-f30a985d632f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=ff7f077c-a428-4a43-86c8-7e1210f51fab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:04:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:25.301 139780 INFO neutron.agent.ovn.metadata.agent [-] Port ff7f077c-a428-4a43-86c8-7e1210f51fab in datapath d5e42602-d72e-4beb-864d-714bd1635da9 unbound from our chassis#033[00m
Nov 29 03:04:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:25.303 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d5e42602-d72e-4beb-864d-714bd1635da9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:04:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:25.305 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[581e14db-5df9-4523-9349-29b651dab3e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:25.305 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 namespace which is not needed anymore#033[00m
Nov 29 03:04:25 np0005539564 nova_compute[226295]: 2025-11-29 08:04:25.316 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:25 np0005539564 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d00000048.scope: Deactivated successfully.
Nov 29 03:04:25 np0005539564 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d00000048.scope: Consumed 17.362s CPU time.
Nov 29 03:04:25 np0005539564 systemd-machined[190128]: Machine qemu-32-instance-00000048 terminated.
Nov 29 03:04:25 np0005539564 nova_compute[226295]: 2025-11-29 08:04:25.535 226310 INFO nova.virt.libvirt.driver [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Instance shutdown successfully after 14 seconds.#033[00m
Nov 29 03:04:25 np0005539564 nova_compute[226295]: 2025-11-29 08:04:25.541 226310 INFO nova.virt.libvirt.driver [-] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Instance destroyed successfully.#033[00m
Nov 29 03:04:25 np0005539564 nova_compute[226295]: 2025-11-29 08:04:25.541 226310 DEBUG nova.objects.instance [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'numa_topology' on Instance uuid 5ae9fe1c-0566-4112-ac09-e04deb899d41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:25 np0005539564 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[253781]: [NOTICE]   (253785) : haproxy version is 2.8.14-c23fe91
Nov 29 03:04:25 np0005539564 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[253781]: [NOTICE]   (253785) : path to executable is /usr/sbin/haproxy
Nov 29 03:04:25 np0005539564 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[253781]: [WARNING]  (253785) : Exiting Master process...
Nov 29 03:04:25 np0005539564 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[253781]: [WARNING]  (253785) : Exiting Master process...
Nov 29 03:04:25 np0005539564 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[253781]: [ALERT]    (253785) : Current worker (253787) exited with code 143 (Terminated)
Nov 29 03:04:25 np0005539564 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[253781]: [WARNING]  (253785) : All workers exited. Exiting... (0)
Nov 29 03:04:25 np0005539564 systemd[1]: libpod-0936695a7eea1e6076cf8c8b32c355addb4ad9267572914b1503f4bf33856c24.scope: Deactivated successfully.
Nov 29 03:04:25 np0005539564 podman[253928]: 2025-11-29 08:04:25.565952647 +0000 UTC m=+0.147138641 container died 0936695a7eea1e6076cf8c8b32c355addb4ad9267572914b1503f4bf33856c24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:04:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:25 np0005539564 nova_compute[226295]: 2025-11-29 08:04:25.643 226310 DEBUG nova.compute.manager [req-967de337-0ba4-4a87-b4c3-e26a49fabbe5 req-89961d68-a716-4a80-a58e-6a2bbfa73bea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Received event network-vif-unplugged-ff7f077c-a428-4a43-86c8-7e1210f51fab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:25 np0005539564 nova_compute[226295]: 2025-11-29 08:04:25.644 226310 DEBUG oslo_concurrency.lockutils [req-967de337-0ba4-4a87-b4c3-e26a49fabbe5 req-89961d68-a716-4a80-a58e-6a2bbfa73bea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5ae9fe1c-0566-4112-ac09-e04deb899d41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:25 np0005539564 nova_compute[226295]: 2025-11-29 08:04:25.644 226310 DEBUG oslo_concurrency.lockutils [req-967de337-0ba4-4a87-b4c3-e26a49fabbe5 req-89961d68-a716-4a80-a58e-6a2bbfa73bea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5ae9fe1c-0566-4112-ac09-e04deb899d41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:25 np0005539564 nova_compute[226295]: 2025-11-29 08:04:25.645 226310 DEBUG oslo_concurrency.lockutils [req-967de337-0ba4-4a87-b4c3-e26a49fabbe5 req-89961d68-a716-4a80-a58e-6a2bbfa73bea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5ae9fe1c-0566-4112-ac09-e04deb899d41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:25 np0005539564 nova_compute[226295]: 2025-11-29 08:04:25.645 226310 DEBUG nova.compute.manager [req-967de337-0ba4-4a87-b4c3-e26a49fabbe5 req-89961d68-a716-4a80-a58e-6a2bbfa73bea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] No waiting events found dispatching network-vif-unplugged-ff7f077c-a428-4a43-86c8-7e1210f51fab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:04:25 np0005539564 nova_compute[226295]: 2025-11-29 08:04:25.645 226310 WARNING nova.compute.manager [req-967de337-0ba4-4a87-b4c3-e26a49fabbe5 req-89961d68-a716-4a80-a58e-6a2bbfa73bea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Received unexpected event network-vif-unplugged-ff7f077c-a428-4a43-86c8-7e1210f51fab for instance with vm_state active and task_state shelving.#033[00m
Nov 29 03:04:25 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0936695a7eea1e6076cf8c8b32c355addb4ad9267572914b1503f4bf33856c24-userdata-shm.mount: Deactivated successfully.
Nov 29 03:04:25 np0005539564 systemd[1]: var-lib-containers-storage-overlay-27292a16adfc9a037d30a5cd244e9d9230a1d2b94aa54209b299c2e392476813-merged.mount: Deactivated successfully.
Nov 29 03:04:25 np0005539564 podman[253928]: 2025-11-29 08:04:25.669034846 +0000 UTC m=+0.250220840 container cleanup 0936695a7eea1e6076cf8c8b32c355addb4ad9267572914b1503f4bf33856c24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:04:25 np0005539564 systemd[1]: libpod-conmon-0936695a7eea1e6076cf8c8b32c355addb4ad9267572914b1503f4bf33856c24.scope: Deactivated successfully.
Nov 29 03:04:25 np0005539564 nova_compute[226295]: 2025-11-29 08:04:25.827 226310 INFO nova.virt.libvirt.driver [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Beginning cold snapshot process#033[00m
Nov 29 03:04:25 np0005539564 podman[254042]: 2025-11-29 08:04:25.894025426 +0000 UTC m=+0.198085635 container remove 0936695a7eea1e6076cf8c8b32c355addb4ad9267572914b1503f4bf33856c24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:04:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:25.900 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[20a8af31-d9cc-4e2c-bac3-d6d9bb64c6dc]: (4, ('Sat Nov 29 08:04:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 (0936695a7eea1e6076cf8c8b32c355addb4ad9267572914b1503f4bf33856c24)\n0936695a7eea1e6076cf8c8b32c355addb4ad9267572914b1503f4bf33856c24\nSat Nov 29 08:04:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 (0936695a7eea1e6076cf8c8b32c355addb4ad9267572914b1503f4bf33856c24)\n0936695a7eea1e6076cf8c8b32c355addb4ad9267572914b1503f4bf33856c24\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:25.903 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3708e99f-fff1-43a5-aca4-048e6ab83d47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:25.905 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5e42602-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:25 np0005539564 nova_compute[226295]: 2025-11-29 08:04:25.908 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:25 np0005539564 kernel: tapd5e42602-d0: left promiscuous mode
Nov 29 03:04:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:25.948 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[486809bf-6251-4390-832d-aa8e704dd267]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:25.963 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[46b2d125-bbfa-4831-b52b-8ece56d6376d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:25.965 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[07c6db48-1237-4565-a64d-96912971d0e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:25.980 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[62ae0f88-22b6-4550-9a2a-c67f571fa32c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641996, 'reachable_time': 39459, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254129, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:25 np0005539564 nova_compute[226295]: 2025-11-29 08:04:25.983 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:25 np0005539564 systemd[1]: run-netns-ovnmeta\x2dd5e42602\x2dd72e\x2d4beb\x2d864d\x2d714bd1635da9.mount: Deactivated successfully.
Nov 29 03:04:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:25.985 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:04:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:25.986 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[24cdb76f-d944-469b-8ad1-4a8e38a0ac4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:25 np0005539564 nova_compute[226295]: 2025-11-29 08:04:25.991 226310 DEBUG nova.virt.libvirt.imagebackend [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:04:26 np0005539564 nova_compute[226295]: 2025-11-29 08:04:26.313 226310 DEBUG nova.storage.rbd_utils [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] creating snapshot(be4240fff82340dfa4311218816723b6) on rbd image(5ae9fe1c-0566-4112-ac09-e04deb899d41_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:04:26 np0005539564 nova_compute[226295]: 2025-11-29 08:04:26.431 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:26 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:04:26 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:04:26 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:04:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e251 e251: 3 total, 3 up, 3 in
Nov 29 03:04:26 np0005539564 nova_compute[226295]: 2025-11-29 08:04:26.977 226310 DEBUG nova.storage.rbd_utils [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] cloning vms/5ae9fe1c-0566-4112-ac09-e04deb899d41_disk@be4240fff82340dfa4311218816723b6 to images/79df671d-c55d-4ae7-bed4-66b1126df827 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:04:27 np0005539564 nova_compute[226295]: 2025-11-29 08:04:27.103 226310 DEBUG nova.storage.rbd_utils [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] flattening images/79df671d-c55d-4ae7-bed4-66b1126df827 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:04:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:27.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:27.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:27 np0005539564 nova_compute[226295]: 2025-11-29 08:04:27.781 226310 DEBUG nova.compute.manager [req-d68e5d62-5a26-4963-b8d3-24ae0cb28b2e req-cba991b5-49a3-4e14-af6b-959561796874 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Received event network-vif-plugged-ff7f077c-a428-4a43-86c8-7e1210f51fab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:27 np0005539564 nova_compute[226295]: 2025-11-29 08:04:27.782 226310 DEBUG oslo_concurrency.lockutils [req-d68e5d62-5a26-4963-b8d3-24ae0cb28b2e req-cba991b5-49a3-4e14-af6b-959561796874 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5ae9fe1c-0566-4112-ac09-e04deb899d41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:27 np0005539564 nova_compute[226295]: 2025-11-29 08:04:27.782 226310 DEBUG oslo_concurrency.lockutils [req-d68e5d62-5a26-4963-b8d3-24ae0cb28b2e req-cba991b5-49a3-4e14-af6b-959561796874 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5ae9fe1c-0566-4112-ac09-e04deb899d41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:27 np0005539564 nova_compute[226295]: 2025-11-29 08:04:27.782 226310 DEBUG oslo_concurrency.lockutils [req-d68e5d62-5a26-4963-b8d3-24ae0cb28b2e req-cba991b5-49a3-4e14-af6b-959561796874 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5ae9fe1c-0566-4112-ac09-e04deb899d41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:27 np0005539564 nova_compute[226295]: 2025-11-29 08:04:27.783 226310 DEBUG nova.compute.manager [req-d68e5d62-5a26-4963-b8d3-24ae0cb28b2e req-cba991b5-49a3-4e14-af6b-959561796874 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] No waiting events found dispatching network-vif-plugged-ff7f077c-a428-4a43-86c8-7e1210f51fab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:04:27 np0005539564 nova_compute[226295]: 2025-11-29 08:04:27.783 226310 WARNING nova.compute.manager [req-d68e5d62-5a26-4963-b8d3-24ae0cb28b2e req-cba991b5-49a3-4e14-af6b-959561796874 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Received unexpected event network-vif-plugged-ff7f077c-a428-4a43-86c8-7e1210f51fab for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Nov 29 03:04:27 np0005539564 nova_compute[226295]: 2025-11-29 08:04:27.801 226310 DEBUG nova.storage.rbd_utils [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] removing snapshot(be4240fff82340dfa4311218816723b6) on rbd image(5ae9fe1c-0566-4112-ac09-e04deb899d41_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:04:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e252 e252: 3 total, 3 up, 3 in
Nov 29 03:04:27 np0005539564 nova_compute[226295]: 2025-11-29 08:04:27.998 226310 DEBUG nova.storage.rbd_utils [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] creating snapshot(snap) on rbd image(79df671d-c55d-4ae7-bed4-66b1126df827) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:04:28 np0005539564 nova_compute[226295]: 2025-11-29 08:04:28.504 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:29.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:29.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e253 e253: 3 total, 3 up, 3 in
Nov 29 03:04:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:31.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:31.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:31 np0005539564 nova_compute[226295]: 2025-11-29 08:04:31.434 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:31 np0005539564 nova_compute[226295]: 2025-11-29 08:04:31.554 226310 INFO nova.virt.libvirt.driver [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Snapshot image upload complete#033[00m
Nov 29 03:04:31 np0005539564 nova_compute[226295]: 2025-11-29 08:04:31.555 226310 DEBUG nova.compute.manager [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:31 np0005539564 nova_compute[226295]: 2025-11-29 08:04:31.616 226310 INFO nova.compute.manager [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Shelve offloading#033[00m
Nov 29 03:04:31 np0005539564 nova_compute[226295]: 2025-11-29 08:04:31.622 226310 INFO nova.virt.libvirt.driver [-] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Instance destroyed successfully.#033[00m
Nov 29 03:04:31 np0005539564 nova_compute[226295]: 2025-11-29 08:04:31.623 226310 DEBUG nova.compute.manager [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:31 np0005539564 nova_compute[226295]: 2025-11-29 08:04:31.625 226310 DEBUG oslo_concurrency.lockutils [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "refresh_cache-5ae9fe1c-0566-4112-ac09-e04deb899d41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:04:31 np0005539564 nova_compute[226295]: 2025-11-29 08:04:31.625 226310 DEBUG oslo_concurrency.lockutils [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquired lock "refresh_cache-5ae9fe1c-0566-4112-ac09-e04deb899d41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:04:31 np0005539564 nova_compute[226295]: 2025-11-29 08:04:31.626 226310 DEBUG nova.network.neutron [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:04:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:33.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:33.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:33 np0005539564 nova_compute[226295]: 2025-11-29 08:04:33.505 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:34 np0005539564 nova_compute[226295]: 2025-11-29 08:04:34.337 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:34 np0005539564 nova_compute[226295]: 2025-11-29 08:04:34.844 226310 DEBUG nova.network.neutron [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Updating instance_info_cache with network_info: [{"id": "ff7f077c-a428-4a43-86c8-7e1210f51fab", "address": "fa:16:3e:5e:18:47", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff7f077c-a4", "ovs_interfaceid": "ff7f077c-a428-4a43-86c8-7e1210f51fab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:34 np0005539564 nova_compute[226295]: 2025-11-29 08:04:34.867 226310 DEBUG oslo_concurrency.lockutils [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Releasing lock "refresh_cache-5ae9fe1c-0566-4112-ac09-e04deb899d41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:04:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:04:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:04:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:35.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:35.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:35 np0005539564 nova_compute[226295]: 2025-11-29 08:04:35.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:36 np0005539564 nova_compute[226295]: 2025-11-29 08:04:36.438 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e254 e254: 3 total, 3 up, 3 in
Nov 29 03:04:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:37.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:37 np0005539564 nova_compute[226295]: 2025-11-29 08:04:37.179 226310 INFO nova.virt.libvirt.driver [-] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Instance destroyed successfully.#033[00m
Nov 29 03:04:37 np0005539564 nova_compute[226295]: 2025-11-29 08:04:37.179 226310 DEBUG nova.objects.instance [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'resources' on Instance uuid 5ae9fe1c-0566-4112-ac09-e04deb899d41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:37 np0005539564 nova_compute[226295]: 2025-11-29 08:04:37.197 226310 DEBUG nova.virt.libvirt.vif [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:03:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1390090872',display_name='tempest-DeleteServersTestJSON-server-1390090872',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1390090872',id=72,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:04:07Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f8bc2a2616a34ba1a18b3211e406993f',ramdisk_id='',reservation_id='r-jpkefmoa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-69711189',owner_user_name='tempest-DeleteServersTestJSON-69711189-project-member',shelved_at='2025-11-29T08:04:31.555416',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='79df671d-c55d-4ae7-bed4-66b1126df827'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:04:25Z,user_data=None,user_id='ef8e9cc962eb4827954df3c42cc34798',uuid=5ae9fe1c-0566-4112-ac09-e04deb899d41,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "ff7f077c-a428-4a43-86c8-7e1210f51fab", "address": "fa:16:3e:5e:18:47", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff7f077c-a4", "ovs_interfaceid": "ff7f077c-a428-4a43-86c8-7e1210f51fab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:04:37 np0005539564 nova_compute[226295]: 2025-11-29 08:04:37.198 226310 DEBUG nova.network.os_vif_util [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converting VIF {"id": "ff7f077c-a428-4a43-86c8-7e1210f51fab", "address": "fa:16:3e:5e:18:47", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff7f077c-a4", "ovs_interfaceid": "ff7f077c-a428-4a43-86c8-7e1210f51fab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:04:37 np0005539564 nova_compute[226295]: 2025-11-29 08:04:37.199 226310 DEBUG nova.network.os_vif_util [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:18:47,bridge_name='br-int',has_traffic_filtering=True,id=ff7f077c-a428-4a43-86c8-7e1210f51fab,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff7f077c-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:04:37 np0005539564 nova_compute[226295]: 2025-11-29 08:04:37.200 226310 DEBUG os_vif [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:18:47,bridge_name='br-int',has_traffic_filtering=True,id=ff7f077c-a428-4a43-86c8-7e1210f51fab,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff7f077c-a4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:04:37 np0005539564 nova_compute[226295]: 2025-11-29 08:04:37.203 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:37 np0005539564 nova_compute[226295]: 2025-11-29 08:04:37.204 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff7f077c-a4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:37 np0005539564 nova_compute[226295]: 2025-11-29 08:04:37.206 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:37 np0005539564 nova_compute[226295]: 2025-11-29 08:04:37.209 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:04:37 np0005539564 nova_compute[226295]: 2025-11-29 08:04:37.214 226310 INFO os_vif [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:18:47,bridge_name='br-int',has_traffic_filtering=True,id=ff7f077c-a428-4a43-86c8-7e1210f51fab,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff7f077c-a4')#033[00m
Nov 29 03:04:37 np0005539564 nova_compute[226295]: 2025-11-29 08:04:37.285 226310 DEBUG nova.compute.manager [req-7d3cbe84-ae6f-49b7-86a5-ac16ac8175b9 req-d3b3693c-d614-4c0b-9e83-189b3ffd4eb6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Received event network-changed-ff7f077c-a428-4a43-86c8-7e1210f51fab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:37 np0005539564 nova_compute[226295]: 2025-11-29 08:04:37.285 226310 DEBUG nova.compute.manager [req-7d3cbe84-ae6f-49b7-86a5-ac16ac8175b9 req-d3b3693c-d614-4c0b-9e83-189b3ffd4eb6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Refreshing instance network info cache due to event network-changed-ff7f077c-a428-4a43-86c8-7e1210f51fab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:04:37 np0005539564 nova_compute[226295]: 2025-11-29 08:04:37.286 226310 DEBUG oslo_concurrency.lockutils [req-7d3cbe84-ae6f-49b7-86a5-ac16ac8175b9 req-d3b3693c-d614-4c0b-9e83-189b3ffd4eb6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-5ae9fe1c-0566-4112-ac09-e04deb899d41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:04:37 np0005539564 nova_compute[226295]: 2025-11-29 08:04:37.286 226310 DEBUG oslo_concurrency.lockutils [req-7d3cbe84-ae6f-49b7-86a5-ac16ac8175b9 req-d3b3693c-d614-4c0b-9e83-189b3ffd4eb6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-5ae9fe1c-0566-4112-ac09-e04deb899d41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:04:37 np0005539564 nova_compute[226295]: 2025-11-29 08:04:37.286 226310 DEBUG nova.network.neutron [req-7d3cbe84-ae6f-49b7-86a5-ac16ac8175b9 req-d3b3693c-d614-4c0b-9e83-189b3ffd4eb6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Refreshing network info cache for port ff7f077c-a428-4a43-86c8-7e1210f51fab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:04:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:37.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:37 np0005539564 nova_compute[226295]: 2025-11-29 08:04:37.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:37 np0005539564 nova_compute[226295]: 2025-11-29 08:04:37.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:04:37 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:37.650 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:04:37 np0005539564 nova_compute[226295]: 2025-11-29 08:04:37.651 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:37 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:37.652 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:04:37 np0005539564 nova_compute[226295]: 2025-11-29 08:04:37.666 226310 INFO nova.virt.libvirt.driver [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Deleting instance files /var/lib/nova/instances/5ae9fe1c-0566-4112-ac09-e04deb899d41_del#033[00m
Nov 29 03:04:37 np0005539564 nova_compute[226295]: 2025-11-29 08:04:37.667 226310 INFO nova.virt.libvirt.driver [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Deletion of /var/lib/nova/instances/5ae9fe1c-0566-4112-ac09-e04deb899d41_del complete#033[00m
Nov 29 03:04:37 np0005539564 nova_compute[226295]: 2025-11-29 08:04:37.823 226310 INFO nova.scheduler.client.report [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Deleted allocations for instance 5ae9fe1c-0566-4112-ac09-e04deb899d41#033[00m
Nov 29 03:04:37 np0005539564 nova_compute[226295]: 2025-11-29 08:04:37.869 226310 DEBUG oslo_concurrency.lockutils [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:37 np0005539564 nova_compute[226295]: 2025-11-29 08:04:37.870 226310 DEBUG oslo_concurrency.lockutils [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:37 np0005539564 nova_compute[226295]: 2025-11-29 08:04:37.986 226310 DEBUG oslo_concurrency.processutils [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:38 np0005539564 nova_compute[226295]: 2025-11-29 08:04:38.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:38 np0005539564 nova_compute[226295]: 2025-11-29 08:04:38.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:04:38 np0005539564 nova_compute[226295]: 2025-11-29 08:04:38.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:04:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:38 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3157694843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:38 np0005539564 nova_compute[226295]: 2025-11-29 08:04:38.458 226310 DEBUG oslo_concurrency.processutils [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:38 np0005539564 nova_compute[226295]: 2025-11-29 08:04:38.468 226310 DEBUG nova.compute.provider_tree [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:04:38 np0005539564 nova_compute[226295]: 2025-11-29 08:04:38.495 226310 DEBUG nova.scheduler.client.report [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:04:38 np0005539564 nova_compute[226295]: 2025-11-29 08:04:38.525 226310 DEBUG oslo_concurrency.lockutils [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:38 np0005539564 nova_compute[226295]: 2025-11-29 08:04:38.565 226310 INFO nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Updating ports in neutron#033[00m
Nov 29 03:04:38 np0005539564 nova_compute[226295]: 2025-11-29 08:04:38.601 226310 DEBUG oslo_concurrency.lockutils [None req-51e11a3e-24b5-4f83-baed-d594d5d674b9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "5ae9fe1c-0566-4112-ac09-e04deb899d41" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 27.402s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:39.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:39.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:39 np0005539564 nova_compute[226295]: 2025-11-29 08:04:39.388 226310 INFO nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Updating port ff7f077c-a428-4a43-86c8-7e1210f51fab with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 29 03:04:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e255 e255: 3 total, 3 up, 3 in
Nov 29 03:04:39 np0005539564 nova_compute[226295]: 2025-11-29 08:04:39.757 226310 DEBUG nova.network.neutron [req-7d3cbe84-ae6f-49b7-86a5-ac16ac8175b9 req-d3b3693c-d614-4c0b-9e83-189b3ffd4eb6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Updated VIF entry in instance network info cache for port ff7f077c-a428-4a43-86c8-7e1210f51fab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:04:39 np0005539564 nova_compute[226295]: 2025-11-29 08:04:39.758 226310 DEBUG nova.network.neutron [req-7d3cbe84-ae6f-49b7-86a5-ac16ac8175b9 req-d3b3693c-d614-4c0b-9e83-189b3ffd4eb6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Updating instance_info_cache with network_info: [{"id": "ff7f077c-a428-4a43-86c8-7e1210f51fab", "address": "fa:16:3e:5e:18:47", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": null, "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapff7f077c-a4", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:39 np0005539564 nova_compute[226295]: 2025-11-29 08:04:39.780 226310 DEBUG oslo_concurrency.lockutils [req-7d3cbe84-ae6f-49b7-86a5-ac16ac8175b9 req-d3b3693c-d614-4c0b-9e83-189b3ffd4eb6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-5ae9fe1c-0566-4112-ac09-e04deb899d41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:04:40 np0005539564 nova_compute[226295]: 2025-11-29 08:04:40.326 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-5ae9fe1c-0566-4112-ac09-e04deb899d41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:04:40 np0005539564 nova_compute[226295]: 2025-11-29 08:04:40.327 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-5ae9fe1c-0566-4112-ac09-e04deb899d41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:04:40 np0005539564 nova_compute[226295]: 2025-11-29 08:04:40.327 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:04:40 np0005539564 nova_compute[226295]: 2025-11-29 08:04:40.328 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5ae9fe1c-0566-4112-ac09-e04deb899d41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:40 np0005539564 nova_compute[226295]: 2025-11-29 08:04:40.536 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403465.5351253, 5ae9fe1c-0566-4112-ac09-e04deb899d41 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:40 np0005539564 nova_compute[226295]: 2025-11-29 08:04:40.537 226310 INFO nova.compute.manager [-] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:04:40 np0005539564 nova_compute[226295]: 2025-11-29 08:04:40.554 226310 DEBUG nova.compute.manager [None req-37f67e9a-c384-4158-b930-be7d10d86c9e - - - - - -] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:04:40.654 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:41.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:41.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:41 np0005539564 nova_compute[226295]: 2025-11-29 08:04:41.442 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:42 np0005539564 nova_compute[226295]: 2025-11-29 08:04:42.036 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Updating instance_info_cache with network_info: [{"id": "ff7f077c-a428-4a43-86c8-7e1210f51fab", "address": "fa:16:3e:5e:18:47", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff7f077c-a4", "ovs_interfaceid": "ff7f077c-a428-4a43-86c8-7e1210f51fab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:42 np0005539564 nova_compute[226295]: 2025-11-29 08:04:42.071 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-5ae9fe1c-0566-4112-ac09-e04deb899d41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:04:42 np0005539564 nova_compute[226295]: 2025-11-29 08:04:42.072 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 5ae9fe1c-0566-4112-ac09-e04deb899d41] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:04:42 np0005539564 nova_compute[226295]: 2025-11-29 08:04:42.072 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:42 np0005539564 nova_compute[226295]: 2025-11-29 08:04:42.072 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:42 np0005539564 nova_compute[226295]: 2025-11-29 08:04:42.208 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:43.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:43.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:43 np0005539564 nova_compute[226295]: 2025-11-29 08:04:43.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:44 np0005539564 nova_compute[226295]: 2025-11-29 08:04:44.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:44 np0005539564 nova_compute[226295]: 2025-11-29 08:04:44.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:44 np0005539564 nova_compute[226295]: 2025-11-29 08:04:44.454 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:44 np0005539564 nova_compute[226295]: 2025-11-29 08:04:44.455 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:44 np0005539564 nova_compute[226295]: 2025-11-29 08:04:44.456 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:44 np0005539564 nova_compute[226295]: 2025-11-29 08:04:44.456 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:04:44 np0005539564 nova_compute[226295]: 2025-11-29 08:04:44.457 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:44 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2607916270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:44 np0005539564 nova_compute[226295]: 2025-11-29 08:04:44.942 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:45.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:45 np0005539564 nova_compute[226295]: 2025-11-29 08:04:45.218 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:04:45 np0005539564 nova_compute[226295]: 2025-11-29 08:04:45.223 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4646MB free_disk=20.95074462890625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:04:45 np0005539564 nova_compute[226295]: 2025-11-29 08:04:45.223 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:45 np0005539564 nova_compute[226295]: 2025-11-29 08:04:45.224 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:45 np0005539564 nova_compute[226295]: 2025-11-29 08:04:45.290 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:04:45 np0005539564 nova_compute[226295]: 2025-11-29 08:04:45.290 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:04:45 np0005539564 nova_compute[226295]: 2025-11-29 08:04:45.311 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:45.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:45 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/767972050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:45 np0005539564 nova_compute[226295]: 2025-11-29 08:04:45.801 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:45 np0005539564 nova_compute[226295]: 2025-11-29 08:04:45.811 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:04:45 np0005539564 nova_compute[226295]: 2025-11-29 08:04:45.862 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:04:45 np0005539564 nova_compute[226295]: 2025-11-29 08:04:45.912 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:04:45 np0005539564 nova_compute[226295]: 2025-11-29 08:04:45.912 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:46 np0005539564 nova_compute[226295]: 2025-11-29 08:04:46.444 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e256 e256: 3 total, 3 up, 3 in
Nov 29 03:04:46 np0005539564 podman[254397]: 2025-11-29 08:04:46.548359327 +0000 UTC m=+0.099096565 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:04:46 np0005539564 podman[254396]: 2025-11-29 08:04:46.557869893 +0000 UTC m=+0.108196740 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:04:46 np0005539564 podman[254395]: 2025-11-29 08:04:46.59072302 +0000 UTC m=+0.135856746 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:04:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:47.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:47 np0005539564 nova_compute[226295]: 2025-11-29 08:04:47.211 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:47.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:49.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:49.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:51.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:51.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:51 np0005539564 nova_compute[226295]: 2025-11-29 08:04:51.482 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:52 np0005539564 nova_compute[226295]: 2025-11-29 08:04:52.213 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:53.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:53.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:55.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:55.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:56 np0005539564 nova_compute[226295]: 2025-11-29 08:04:56.483 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:57.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:57 np0005539564 nova_compute[226295]: 2025-11-29 08:04:57.216 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:57.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:59.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:04:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:59.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:59 np0005539564 nova_compute[226295]: 2025-11-29 08:04:59.905 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:01.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:01.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:01 np0005539564 nova_compute[226295]: 2025-11-29 08:05:01.485 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:02 np0005539564 nova_compute[226295]: 2025-11-29 08:05:02.218 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:03.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:03.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:03.715 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:03.716 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:03.716 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:04 np0005539564 nova_compute[226295]: 2025-11-29 08:05:04.709 226310 DEBUG oslo_concurrency.lockutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Acquiring lock "13e6ea90-7ec8-4c88-a019-2683f4e42de1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:04 np0005539564 nova_compute[226295]: 2025-11-29 08:05:04.709 226310 DEBUG oslo_concurrency.lockutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Lock "13e6ea90-7ec8-4c88-a019-2683f4e42de1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:04 np0005539564 nova_compute[226295]: 2025-11-29 08:05:04.742 226310 DEBUG nova.compute.manager [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:05:04 np0005539564 nova_compute[226295]: 2025-11-29 08:05:04.827 226310 DEBUG oslo_concurrency.lockutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:04 np0005539564 nova_compute[226295]: 2025-11-29 08:05:04.828 226310 DEBUG oslo_concurrency.lockutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:04 np0005539564 nova_compute[226295]: 2025-11-29 08:05:04.837 226310 DEBUG nova.virt.hardware [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:05:04 np0005539564 nova_compute[226295]: 2025-11-29 08:05:04.838 226310 INFO nova.compute.claims [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:05:04 np0005539564 nova_compute[226295]: 2025-11-29 08:05:04.980 226310 DEBUG oslo_concurrency.processutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:05.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:05.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:05 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2534689414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:05:05 np0005539564 nova_compute[226295]: 2025-11-29 08:05:05.467 226310 DEBUG oslo_concurrency.processutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:05 np0005539564 nova_compute[226295]: 2025-11-29 08:05:05.476 226310 DEBUG nova.compute.provider_tree [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:05:05 np0005539564 nova_compute[226295]: 2025-11-29 08:05:05.495 226310 DEBUG nova.scheduler.client.report [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:05:05 np0005539564 nova_compute[226295]: 2025-11-29 08:05:05.518 226310 DEBUG oslo_concurrency.lockutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:05 np0005539564 nova_compute[226295]: 2025-11-29 08:05:05.520 226310 DEBUG nova.compute.manager [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:05:05 np0005539564 nova_compute[226295]: 2025-11-29 08:05:05.568 226310 DEBUG nova.compute.manager [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:05:05 np0005539564 nova_compute[226295]: 2025-11-29 08:05:05.569 226310 DEBUG nova.network.neutron [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:05:05 np0005539564 nova_compute[226295]: 2025-11-29 08:05:05.589 226310 INFO nova.virt.libvirt.driver [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:05:05 np0005539564 nova_compute[226295]: 2025-11-29 08:05:05.606 226310 DEBUG nova.compute.manager [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:05:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:05 np0005539564 nova_compute[226295]: 2025-11-29 08:05:05.692 226310 DEBUG nova.compute.manager [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:05:05 np0005539564 nova_compute[226295]: 2025-11-29 08:05:05.694 226310 DEBUG nova.virt.libvirt.driver [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:05:05 np0005539564 nova_compute[226295]: 2025-11-29 08:05:05.695 226310 INFO nova.virt.libvirt.driver [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Creating image(s)#033[00m
Nov 29 03:05:05 np0005539564 nova_compute[226295]: 2025-11-29 08:05:05.753 226310 DEBUG nova.storage.rbd_utils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] rbd image 13e6ea90-7ec8-4c88-a019-2683f4e42de1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:05 np0005539564 nova_compute[226295]: 2025-11-29 08:05:05.784 226310 DEBUG nova.storage.rbd_utils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] rbd image 13e6ea90-7ec8-4c88-a019-2683f4e42de1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:05 np0005539564 nova_compute[226295]: 2025-11-29 08:05:05.818 226310 DEBUG nova.storage.rbd_utils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] rbd image 13e6ea90-7ec8-4c88-a019-2683f4e42de1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:05 np0005539564 nova_compute[226295]: 2025-11-29 08:05:05.823 226310 DEBUG oslo_concurrency.processutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:05 np0005539564 nova_compute[226295]: 2025-11-29 08:05:05.851 226310 DEBUG nova.policy [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b752bbc38da346cd906c48b7e558600a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1cbb1a55279047568ac52b0498dba447', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:05:05 np0005539564 nova_compute[226295]: 2025-11-29 08:05:05.891 226310 DEBUG oslo_concurrency.processutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:05 np0005539564 nova_compute[226295]: 2025-11-29 08:05:05.892 226310 DEBUG oslo_concurrency.lockutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:05 np0005539564 nova_compute[226295]: 2025-11-29 08:05:05.893 226310 DEBUG oslo_concurrency.lockutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:05 np0005539564 nova_compute[226295]: 2025-11-29 08:05:05.893 226310 DEBUG oslo_concurrency.lockutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:05 np0005539564 nova_compute[226295]: 2025-11-29 08:05:05.924 226310 DEBUG nova.storage.rbd_utils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] rbd image 13e6ea90-7ec8-4c88-a019-2683f4e42de1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:05 np0005539564 nova_compute[226295]: 2025-11-29 08:05:05.928 226310 DEBUG oslo_concurrency.processutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 13e6ea90-7ec8-4c88-a019-2683f4e42de1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:06 np0005539564 nova_compute[226295]: 2025-11-29 08:05:06.251 226310 DEBUG oslo_concurrency.processutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 13e6ea90-7ec8-4c88-a019-2683f4e42de1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:06 np0005539564 nova_compute[226295]: 2025-11-29 08:05:06.350 226310 DEBUG nova.storage.rbd_utils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] resizing rbd image 13e6ea90-7ec8-4c88-a019-2683f4e42de1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:05:06 np0005539564 nova_compute[226295]: 2025-11-29 08:05:06.565 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:06 np0005539564 nova_compute[226295]: 2025-11-29 08:05:06.581 226310 DEBUG nova.objects.instance [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Lazy-loading 'migration_context' on Instance uuid 13e6ea90-7ec8-4c88-a019-2683f4e42de1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:06 np0005539564 nova_compute[226295]: 2025-11-29 08:05:06.614 226310 DEBUG nova.virt.libvirt.driver [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:05:06 np0005539564 nova_compute[226295]: 2025-11-29 08:05:06.614 226310 DEBUG nova.virt.libvirt.driver [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Ensure instance console log exists: /var/lib/nova/instances/13e6ea90-7ec8-4c88-a019-2683f4e42de1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:05:06 np0005539564 nova_compute[226295]: 2025-11-29 08:05:06.616 226310 DEBUG oslo_concurrency.lockutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:06 np0005539564 nova_compute[226295]: 2025-11-29 08:05:06.616 226310 DEBUG oslo_concurrency.lockutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:06 np0005539564 nova_compute[226295]: 2025-11-29 08:05:06.617 226310 DEBUG oslo_concurrency.lockutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:07.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:07 np0005539564 nova_compute[226295]: 2025-11-29 08:05:07.221 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:05:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:07.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:05:07 np0005539564 nova_compute[226295]: 2025-11-29 08:05:07.629 226310 DEBUG nova.network.neutron [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Successfully created port: 4ddd5798-a43c-4151-b21a-d47f5386c760 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:05:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:09.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:09.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:09 np0005539564 nova_compute[226295]: 2025-11-29 08:05:09.716 226310 DEBUG nova.network.neutron [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Successfully updated port: 4ddd5798-a43c-4151-b21a-d47f5386c760 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:05:09 np0005539564 nova_compute[226295]: 2025-11-29 08:05:09.742 226310 DEBUG oslo_concurrency.lockutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Acquiring lock "refresh_cache-13e6ea90-7ec8-4c88-a019-2683f4e42de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:05:09 np0005539564 nova_compute[226295]: 2025-11-29 08:05:09.742 226310 DEBUG oslo_concurrency.lockutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Acquired lock "refresh_cache-13e6ea90-7ec8-4c88-a019-2683f4e42de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:05:09 np0005539564 nova_compute[226295]: 2025-11-29 08:05:09.742 226310 DEBUG nova.network.neutron [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:05:09 np0005539564 nova_compute[226295]: 2025-11-29 08:05:09.852 226310 DEBUG nova.compute.manager [req-069eed65-6fb7-46fc-802f-318f7317d9e6 req-98e1d694-da44-403d-9109-6a83c5a01e8e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Received event network-changed-4ddd5798-a43c-4151-b21a-d47f5386c760 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:09 np0005539564 nova_compute[226295]: 2025-11-29 08:05:09.852 226310 DEBUG nova.compute.manager [req-069eed65-6fb7-46fc-802f-318f7317d9e6 req-98e1d694-da44-403d-9109-6a83c5a01e8e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Refreshing instance network info cache due to event network-changed-4ddd5798-a43c-4151-b21a-d47f5386c760. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:05:09 np0005539564 nova_compute[226295]: 2025-11-29 08:05:09.853 226310 DEBUG oslo_concurrency.lockutils [req-069eed65-6fb7-46fc-802f-318f7317d9e6 req-98e1d694-da44-403d-9109-6a83c5a01e8e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-13e6ea90-7ec8-4c88-a019-2683f4e42de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:05:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:10 np0005539564 nova_compute[226295]: 2025-11-29 08:05:10.629 226310 DEBUG nova.network.neutron [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:05:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:05:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:11.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:05:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:11.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:11 np0005539564 nova_compute[226295]: 2025-11-29 08:05:11.543 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:12 np0005539564 nova_compute[226295]: 2025-11-29 08:05:12.223 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:13 np0005539564 nova_compute[226295]: 2025-11-29 08:05:13.186 226310 DEBUG nova.network.neutron [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Updating instance_info_cache with network_info: [{"id": "4ddd5798-a43c-4151-b21a-d47f5386c760", "address": "fa:16:3e:fe:dc:a4", "network": {"id": "5d3318ae-73ee-4f34-b7fc-49f8b6b1a835", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-686480058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cbb1a55279047568ac52b0498dba447", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ddd5798-a4", "ovs_interfaceid": "4ddd5798-a43c-4151-b21a-d47f5386c760", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:05:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:13.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:13 np0005539564 nova_compute[226295]: 2025-11-29 08:05:13.217 226310 DEBUG oslo_concurrency.lockutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Releasing lock "refresh_cache-13e6ea90-7ec8-4c88-a019-2683f4e42de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:05:13 np0005539564 nova_compute[226295]: 2025-11-29 08:05:13.217 226310 DEBUG nova.compute.manager [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Instance network_info: |[{"id": "4ddd5798-a43c-4151-b21a-d47f5386c760", "address": "fa:16:3e:fe:dc:a4", "network": {"id": "5d3318ae-73ee-4f34-b7fc-49f8b6b1a835", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-686480058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cbb1a55279047568ac52b0498dba447", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ddd5798-a4", "ovs_interfaceid": "4ddd5798-a43c-4151-b21a-d47f5386c760", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:05:13 np0005539564 nova_compute[226295]: 2025-11-29 08:05:13.218 226310 DEBUG oslo_concurrency.lockutils [req-069eed65-6fb7-46fc-802f-318f7317d9e6 req-98e1d694-da44-403d-9109-6a83c5a01e8e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-13e6ea90-7ec8-4c88-a019-2683f4e42de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:05:13 np0005539564 nova_compute[226295]: 2025-11-29 08:05:13.218 226310 DEBUG nova.network.neutron [req-069eed65-6fb7-46fc-802f-318f7317d9e6 req-98e1d694-da44-403d-9109-6a83c5a01e8e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Refreshing network info cache for port 4ddd5798-a43c-4151-b21a-d47f5386c760 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:05:13 np0005539564 nova_compute[226295]: 2025-11-29 08:05:13.221 226310 DEBUG nova.virt.libvirt.driver [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Start _get_guest_xml network_info=[{"id": "4ddd5798-a43c-4151-b21a-d47f5386c760", "address": "fa:16:3e:fe:dc:a4", "network": {"id": "5d3318ae-73ee-4f34-b7fc-49f8b6b1a835", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-686480058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cbb1a55279047568ac52b0498dba447", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ddd5798-a4", "ovs_interfaceid": "4ddd5798-a43c-4151-b21a-d47f5386c760", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:05:13 np0005539564 nova_compute[226295]: 2025-11-29 08:05:13.227 226310 WARNING nova.virt.libvirt.driver [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:05:13 np0005539564 nova_compute[226295]: 2025-11-29 08:05:13.231 226310 DEBUG nova.virt.libvirt.host [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:05:13 np0005539564 nova_compute[226295]: 2025-11-29 08:05:13.232 226310 DEBUG nova.virt.libvirt.host [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:05:13 np0005539564 nova_compute[226295]: 2025-11-29 08:05:13.243 226310 DEBUG nova.virt.libvirt.host [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:05:13 np0005539564 nova_compute[226295]: 2025-11-29 08:05:13.244 226310 DEBUG nova.virt.libvirt.host [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:05:13 np0005539564 nova_compute[226295]: 2025-11-29 08:05:13.246 226310 DEBUG nova.virt.libvirt.driver [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:05:13 np0005539564 nova_compute[226295]: 2025-11-29 08:05:13.246 226310 DEBUG nova.virt.hardware [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:05:13 np0005539564 nova_compute[226295]: 2025-11-29 08:05:13.247 226310 DEBUG nova.virt.hardware [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:05:13 np0005539564 nova_compute[226295]: 2025-11-29 08:05:13.248 226310 DEBUG nova.virt.hardware [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:05:13 np0005539564 nova_compute[226295]: 2025-11-29 08:05:13.248 226310 DEBUG nova.virt.hardware [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:05:13 np0005539564 nova_compute[226295]: 2025-11-29 08:05:13.249 226310 DEBUG nova.virt.hardware [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:05:13 np0005539564 nova_compute[226295]: 2025-11-29 08:05:13.249 226310 DEBUG nova.virt.hardware [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:05:13 np0005539564 nova_compute[226295]: 2025-11-29 08:05:13.249 226310 DEBUG nova.virt.hardware [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:05:13 np0005539564 nova_compute[226295]: 2025-11-29 08:05:13.250 226310 DEBUG nova.virt.hardware [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:05:13 np0005539564 nova_compute[226295]: 2025-11-29 08:05:13.250 226310 DEBUG nova.virt.hardware [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:05:13 np0005539564 nova_compute[226295]: 2025-11-29 08:05:13.251 226310 DEBUG nova.virt.hardware [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:05:13 np0005539564 nova_compute[226295]: 2025-11-29 08:05:13.251 226310 DEBUG nova.virt.hardware [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:05:13 np0005539564 nova_compute[226295]: 2025-11-29 08:05:13.256 226310 DEBUG oslo_concurrency.processutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:13.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:05:13 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2285732094' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:05:13 np0005539564 nova_compute[226295]: 2025-11-29 08:05:13.715 226310 DEBUG oslo_concurrency.processutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:13 np0005539564 nova_compute[226295]: 2025-11-29 08:05:13.742 226310 DEBUG nova.storage.rbd_utils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] rbd image 13e6ea90-7ec8-4c88-a019-2683f4e42de1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:13 np0005539564 nova_compute[226295]: 2025-11-29 08:05:13.747 226310 DEBUG oslo_concurrency.processutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:05:14 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4264389366' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.178 226310 DEBUG oslo_concurrency.processutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.181 226310 DEBUG nova.virt.libvirt.vif [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1719767373',display_name='tempest-ServersTestManualDisk-server-1719767373',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1719767373',id=76,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKruuDCg5LihbmBeh3rQ1uRRivwD95slN55putXkJKDRBjte7r88isXAkHElOVQsnRhmZnoBiesUIQlDu98QwoQArYe3nhWgn48hUDQ+dJhXj6KdyL5XIyRrsfeCBrTuTA==',key_name='tempest-keypair-1369263509',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1cbb1a55279047568ac52b0498dba447',ramdisk_id='',reservation_id='r-70gheif7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-866726591',owner_user_name='tempest-ServersTestManualDisk-866726591-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b752bbc38da346cd906c48b7e558600a',uuid=13e6ea90-7ec8-4c88-a019-2683f4e42de1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4ddd5798-a43c-4151-b21a-d47f5386c760", "address": "fa:16:3e:fe:dc:a4", "network": {"id": "5d3318ae-73ee-4f34-b7fc-49f8b6b1a835", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-686480058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cbb1a55279047568ac52b0498dba447", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ddd5798-a4", "ovs_interfaceid": "4ddd5798-a43c-4151-b21a-d47f5386c760", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.181 226310 DEBUG nova.network.os_vif_util [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Converting VIF {"id": "4ddd5798-a43c-4151-b21a-d47f5386c760", "address": "fa:16:3e:fe:dc:a4", "network": {"id": "5d3318ae-73ee-4f34-b7fc-49f8b6b1a835", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-686480058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cbb1a55279047568ac52b0498dba447", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ddd5798-a4", "ovs_interfaceid": "4ddd5798-a43c-4151-b21a-d47f5386c760", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.182 226310 DEBUG nova.network.os_vif_util [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:dc:a4,bridge_name='br-int',has_traffic_filtering=True,id=4ddd5798-a43c-4151-b21a-d47f5386c760,network=Network(5d3318ae-73ee-4f34-b7fc-49f8b6b1a835),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ddd5798-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.184 226310 DEBUG nova.objects.instance [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Lazy-loading 'pci_devices' on Instance uuid 13e6ea90-7ec8-4c88-a019-2683f4e42de1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.214 226310 DEBUG nova.virt.libvirt.driver [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:05:14 np0005539564 nova_compute[226295]:  <uuid>13e6ea90-7ec8-4c88-a019-2683f4e42de1</uuid>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:  <name>instance-0000004c</name>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServersTestManualDisk-server-1719767373</nova:name>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:05:13</nova:creationTime>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:05:14 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:        <nova:user uuid="b752bbc38da346cd906c48b7e558600a">tempest-ServersTestManualDisk-866726591-project-member</nova:user>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:        <nova:project uuid="1cbb1a55279047568ac52b0498dba447">tempest-ServersTestManualDisk-866726591</nova:project>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:        <nova:port uuid="4ddd5798-a43c-4151-b21a-d47f5386c760">
Nov 29 03:05:14 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <entry name="serial">13e6ea90-7ec8-4c88-a019-2683f4e42de1</entry>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <entry name="uuid">13e6ea90-7ec8-4c88-a019-2683f4e42de1</entry>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/13e6ea90-7ec8-4c88-a019-2683f4e42de1_disk">
Nov 29 03:05:14 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:05:14 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/13e6ea90-7ec8-4c88-a019-2683f4e42de1_disk.config">
Nov 29 03:05:14 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:05:14 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:fe:dc:a4"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <target dev="tap4ddd5798-a4"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/13e6ea90-7ec8-4c88-a019-2683f4e42de1/console.log" append="off"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:05:14 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:05:14 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:05:14 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:05:14 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.216 226310 DEBUG nova.compute.manager [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Preparing to wait for external event network-vif-plugged-4ddd5798-a43c-4151-b21a-d47f5386c760 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.217 226310 DEBUG oslo_concurrency.lockutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Acquiring lock "13e6ea90-7ec8-4c88-a019-2683f4e42de1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.217 226310 DEBUG oslo_concurrency.lockutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Lock "13e6ea90-7ec8-4c88-a019-2683f4e42de1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.218 226310 DEBUG oslo_concurrency.lockutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Lock "13e6ea90-7ec8-4c88-a019-2683f4e42de1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.219 226310 DEBUG nova.virt.libvirt.vif [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1719767373',display_name='tempest-ServersTestManualDisk-server-1719767373',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1719767373',id=76,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKruuDCg5LihbmBeh3rQ1uRRivwD95slN55putXkJKDRBjte7r88isXAkHElOVQsnRhmZnoBiesUIQlDu98QwoQArYe3nhWgn48hUDQ+dJhXj6KdyL5XIyRrsfeCBrTuTA==',key_name='tempest-keypair-1369263509',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1cbb1a55279047568ac52b0498dba447',ramdisk_id='',reservation_id='r-70gheif7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-866726591',owner_user_name='tempest-ServersTestManualDisk-866726591-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b752bbc38da346cd906c48b7e558600a',uuid=13e6ea90-7ec8-4c88-a019-2683f4e42de1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4ddd5798-a43c-4151-b21a-d47f5386c760", "address": "fa:16:3e:fe:dc:a4", "network": {"id": "5d3318ae-73ee-4f34-b7fc-49f8b6b1a835", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-686480058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cbb1a55279047568ac52b0498dba447", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ddd5798-a4", "ovs_interfaceid": "4ddd5798-a43c-4151-b21a-d47f5386c760", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.219 226310 DEBUG nova.network.os_vif_util [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Converting VIF {"id": "4ddd5798-a43c-4151-b21a-d47f5386c760", "address": "fa:16:3e:fe:dc:a4", "network": {"id": "5d3318ae-73ee-4f34-b7fc-49f8b6b1a835", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-686480058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cbb1a55279047568ac52b0498dba447", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ddd5798-a4", "ovs_interfaceid": "4ddd5798-a43c-4151-b21a-d47f5386c760", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.220 226310 DEBUG nova.network.os_vif_util [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:dc:a4,bridge_name='br-int',has_traffic_filtering=True,id=4ddd5798-a43c-4151-b21a-d47f5386c760,network=Network(5d3318ae-73ee-4f34-b7fc-49f8b6b1a835),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ddd5798-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.220 226310 DEBUG os_vif [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:dc:a4,bridge_name='br-int',has_traffic_filtering=True,id=4ddd5798-a43c-4151-b21a-d47f5386c760,network=Network(5d3318ae-73ee-4f34-b7fc-49f8b6b1a835),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ddd5798-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.221 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.222 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.223 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.228 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.229 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ddd5798-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.230 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4ddd5798-a4, col_values=(('external_ids', {'iface-id': '4ddd5798-a43c-4151-b21a-d47f5386c760', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:dc:a4', 'vm-uuid': '13e6ea90-7ec8-4c88-a019-2683f4e42de1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.262 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:14 np0005539564 NetworkManager[48997]: <info>  [1764403514.2637] manager: (tap4ddd5798-a4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/121)
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.266 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.269 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.270 226310 INFO os_vif [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:dc:a4,bridge_name='br-int',has_traffic_filtering=True,id=4ddd5798-a43c-4151-b21a-d47f5386c760,network=Network(5d3318ae-73ee-4f34-b7fc-49f8b6b1a835),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ddd5798-a4')#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.346 226310 DEBUG nova.virt.libvirt.driver [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.347 226310 DEBUG nova.virt.libvirt.driver [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.347 226310 DEBUG nova.virt.libvirt.driver [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] No VIF found with MAC fa:16:3e:fe:dc:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.349 226310 INFO nova.virt.libvirt.driver [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Using config drive#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.391 226310 DEBUG nova.storage.rbd_utils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] rbd image 13e6ea90-7ec8-4c88-a019-2683f4e42de1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.573 226310 DEBUG nova.network.neutron [req-069eed65-6fb7-46fc-802f-318f7317d9e6 req-98e1d694-da44-403d-9109-6a83c5a01e8e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Updated VIF entry in instance network info cache for port 4ddd5798-a43c-4151-b21a-d47f5386c760. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.574 226310 DEBUG nova.network.neutron [req-069eed65-6fb7-46fc-802f-318f7317d9e6 req-98e1d694-da44-403d-9109-6a83c5a01e8e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Updating instance_info_cache with network_info: [{"id": "4ddd5798-a43c-4151-b21a-d47f5386c760", "address": "fa:16:3e:fe:dc:a4", "network": {"id": "5d3318ae-73ee-4f34-b7fc-49f8b6b1a835", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-686480058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cbb1a55279047568ac52b0498dba447", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ddd5798-a4", "ovs_interfaceid": "4ddd5798-a43c-4151-b21a-d47f5386c760", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.590 226310 DEBUG oslo_concurrency.lockutils [req-069eed65-6fb7-46fc-802f-318f7317d9e6 req-98e1d694-da44-403d-9109-6a83c5a01e8e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-13e6ea90-7ec8-4c88-a019-2683f4e42de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.828 226310 INFO nova.virt.libvirt.driver [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Creating config drive at /var/lib/nova/instances/13e6ea90-7ec8-4c88-a019-2683f4e42de1/disk.config#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.841 226310 DEBUG oslo_concurrency.processutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/13e6ea90-7ec8-4c88-a019-2683f4e42de1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf3byigqy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:14 np0005539564 nova_compute[226295]: 2025-11-29 08:05:14.995 226310 DEBUG oslo_concurrency.processutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/13e6ea90-7ec8-4c88-a019-2683f4e42de1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf3byigqy" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.041 226310 DEBUG nova.storage.rbd_utils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] rbd image 13e6ea90-7ec8-4c88-a019-2683f4e42de1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.047 226310 DEBUG oslo_concurrency.processutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/13e6ea90-7ec8-4c88-a019-2683f4e42de1/disk.config 13e6ea90-7ec8-4c88-a019-2683f4e42de1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:15.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.221 226310 DEBUG oslo_concurrency.processutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/13e6ea90-7ec8-4c88-a019-2683f4e42de1/disk.config 13e6ea90-7ec8-4c88-a019-2683f4e42de1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.222 226310 INFO nova.virt.libvirt.driver [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Deleting local config drive /var/lib/nova/instances/13e6ea90-7ec8-4c88-a019-2683f4e42de1/disk.config because it was imported into RBD.#033[00m
Nov 29 03:05:15 np0005539564 kernel: tap4ddd5798-a4: entered promiscuous mode
Nov 29 03:05:15 np0005539564 NetworkManager[48997]: <info>  [1764403515.2871] manager: (tap4ddd5798-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/122)
Nov 29 03:05:15 np0005539564 ovn_controller[130591]: 2025-11-29T08:05:15Z|00237|binding|INFO|Claiming lport 4ddd5798-a43c-4151-b21a-d47f5386c760 for this chassis.
Nov 29 03:05:15 np0005539564 ovn_controller[130591]: 2025-11-29T08:05:15Z|00238|binding|INFO|4ddd5798-a43c-4151-b21a-d47f5386c760: Claiming fa:16:3e:fe:dc:a4 10.100.0.13
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.339 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.347 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.355 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:dc:a4 10.100.0.13'], port_security=['fa:16:3e:fe:dc:a4 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '13e6ea90-7ec8-4c88-a019-2683f4e42de1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d3318ae-73ee-4f34-b7fc-49f8b6b1a835', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1cbb1a55279047568ac52b0498dba447', 'neutron:revision_number': '2', 'neutron:security_group_ids': '86ed8466-f9dc-4f3f-ad3f-bb34ae8f3f19', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2abbc96e-1b96-4e2e-8fb2-60a9d48131f8, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=4ddd5798-a43c-4151-b21a-d47f5386c760) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.356 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 4ddd5798-a43c-4151-b21a-d47f5386c760 in datapath 5d3318ae-73ee-4f34-b7fc-49f8b6b1a835 bound to our chassis#033[00m
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.358 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d3318ae-73ee-4f34-b7fc-49f8b6b1a835#033[00m
Nov 29 03:05:15 np0005539564 systemd-machined[190128]: New machine qemu-33-instance-0000004c.
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.371 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[72c1ed57-e34f-42b0-8f76-08e455ee9825]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.372 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5d3318ae-71 in ovnmeta-5d3318ae-73ee-4f34-b7fc-49f8b6b1a835 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.375 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5d3318ae-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.375 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0cdb236f-6f7e-4da1-a6b0-126497a21891]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.376 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[98632234-d10e-49b1-b0ce-93f9435ebfbd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:05:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:15.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.392 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b2f494-650c-4824-b529-f5c0cdc1af34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:15 np0005539564 ovn_controller[130591]: 2025-11-29T08:05:15Z|00239|binding|INFO|Setting lport 4ddd5798-a43c-4151-b21a-d47f5386c760 ovn-installed in OVS
Nov 29 03:05:15 np0005539564 ovn_controller[130591]: 2025-11-29T08:05:15Z|00240|binding|INFO|Setting lport 4ddd5798-a43c-4151-b21a-d47f5386c760 up in Southbound
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.416 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:15 np0005539564 systemd[1]: Started Virtual Machine qemu-33-instance-0000004c.
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.421 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f581f537-d959-4043-aba3-0e6798846e40]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:15 np0005539564 systemd-udevd[254790]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:05:15 np0005539564 NetworkManager[48997]: <info>  [1764403515.4652] device (tap4ddd5798-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:05:15 np0005539564 NetworkManager[48997]: <info>  [1764403515.4661] device (tap4ddd5798-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.462 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[5fac7ad8-0260-4d4d-9567-9adfee7dbf79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:15 np0005539564 NetworkManager[48997]: <info>  [1764403515.4719] manager: (tap5d3318ae-70): new Veth device (/org/freedesktop/NetworkManager/Devices/123)
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.471 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9e39e432-aff7-45d4-bb19-407f3ad4c5d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:15 np0005539564 systemd-udevd[254792]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.505 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[780b2f36-6aba-436a-abe9-60c691203572]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.508 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[dfa998ba-2d0a-4f64-9149-4787acd899a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:15 np0005539564 NetworkManager[48997]: <info>  [1764403515.5357] device (tap5d3318ae-70): carrier: link connected
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.541 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[e4a5e567-230c-4844-9e50-00a4515a9628]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.562 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1b18b7ee-1ea2-4c86-a709-1576a486af54]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d3318ae-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:ce:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649021, 'reachable_time': 31736, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254818, 'error': None, 'target': 'ovnmeta-5d3318ae-73ee-4f34-b7fc-49f8b6b1a835', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.581 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[fdd1c6b0-de12-4ac3-95be-8f8f96995099]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb9:cedf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 649021, 'tstamp': 649021}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254819, 'error': None, 'target': 'ovnmeta-5d3318ae-73ee-4f34-b7fc-49f8b6b1a835', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.606 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[50537090-8afe-45cb-9bb0-4377ef3cc9c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d3318ae-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:ce:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649021, 'reachable_time': 31736, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254820, 'error': None, 'target': 'ovnmeta-5d3318ae-73ee-4f34-b7fc-49f8b6b1a835', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.646 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[aadc39c9-4789-44d4-87f4-b720c6d914d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.674 226310 DEBUG nova.compute.manager [req-763de1d6-283d-4dd1-8dbc-4099f64d8f60 req-25456928-3045-4abf-b03f-ccbd6fd5a09c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Received event network-vif-plugged-4ddd5798-a43c-4151-b21a-d47f5386c760 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.675 226310 DEBUG oslo_concurrency.lockutils [req-763de1d6-283d-4dd1-8dbc-4099f64d8f60 req-25456928-3045-4abf-b03f-ccbd6fd5a09c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "13e6ea90-7ec8-4c88-a019-2683f4e42de1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.676 226310 DEBUG oslo_concurrency.lockutils [req-763de1d6-283d-4dd1-8dbc-4099f64d8f60 req-25456928-3045-4abf-b03f-ccbd6fd5a09c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "13e6ea90-7ec8-4c88-a019-2683f4e42de1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.676 226310 DEBUG oslo_concurrency.lockutils [req-763de1d6-283d-4dd1-8dbc-4099f64d8f60 req-25456928-3045-4abf-b03f-ccbd6fd5a09c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "13e6ea90-7ec8-4c88-a019-2683f4e42de1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.677 226310 DEBUG nova.compute.manager [req-763de1d6-283d-4dd1-8dbc-4099f64d8f60 req-25456928-3045-4abf-b03f-ccbd6fd5a09c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Processing event network-vif-plugged-4ddd5798-a43c-4151-b21a-d47f5386c760 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.704 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ea3874fb-9604-4ad7-af02-bc24e49fb799]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.706 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d3318ae-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.706 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.706 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d3318ae-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.708 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:15 np0005539564 NetworkManager[48997]: <info>  [1764403515.7088] manager: (tap5d3318ae-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/124)
Nov 29 03:05:15 np0005539564 kernel: tap5d3318ae-70: entered promiscuous mode
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.711 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d3318ae-70, col_values=(('external_ids', {'iface-id': '9ddd00c5-4a4b-4a7b-a840-a2b80dd6b9b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.712 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:15 np0005539564 ovn_controller[130591]: 2025-11-29T08:05:15Z|00241|binding|INFO|Releasing lport 9ddd00c5-4a4b-4a7b-a840-a2b80dd6b9b4 from this chassis (sb_readonly=0)
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.732 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.735 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.735 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5d3318ae-73ee-4f34-b7fc-49f8b6b1a835.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5d3318ae-73ee-4f34-b7fc-49f8b6b1a835.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.736 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ea0fad06-3b42-4995-a0fa-89452b4529d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.737 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-5d3318ae-73ee-4f34-b7fc-49f8b6b1a835
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/5d3318ae-73ee-4f34-b7fc-49f8b6b1a835.pid.haproxy
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 5d3318ae-73ee-4f34-b7fc-49f8b6b1a835
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:05:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:15.739 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5d3318ae-73ee-4f34-b7fc-49f8b6b1a835', 'env', 'PROCESS_TAG=haproxy-5d3318ae-73ee-4f34-b7fc-49f8b6b1a835', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5d3318ae-73ee-4f34-b7fc-49f8b6b1a835.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.865 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403515.8651872, 13e6ea90-7ec8-4c88-a019-2683f4e42de1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.867 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] VM Started (Lifecycle Event)#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.870 226310 DEBUG nova.compute.manager [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.874 226310 DEBUG nova.virt.libvirt.driver [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.878 226310 INFO nova.virt.libvirt.driver [-] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Instance spawned successfully.#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.879 226310 DEBUG nova.virt.libvirt.driver [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.889 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.898 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.906 226310 DEBUG nova.virt.libvirt.driver [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.907 226310 DEBUG nova.virt.libvirt.driver [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.908 226310 DEBUG nova.virt.libvirt.driver [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.909 226310 DEBUG nova.virt.libvirt.driver [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.910 226310 DEBUG nova.virt.libvirt.driver [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.910 226310 DEBUG nova.virt.libvirt.driver [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.918 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.919 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403515.8665502, 13e6ea90-7ec8-4c88-a019-2683f4e42de1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.919 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.952 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.955 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403515.8751934, 13e6ea90-7ec8-4c88-a019-2683f4e42de1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.955 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.986 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:15 np0005539564 nova_compute[226295]: 2025-11-29 08:05:15.989 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:05:16 np0005539564 nova_compute[226295]: 2025-11-29 08:05:16.018 226310 INFO nova.compute.manager [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Took 10.33 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:05:16 np0005539564 nova_compute[226295]: 2025-11-29 08:05:16.018 226310 DEBUG nova.compute.manager [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:16 np0005539564 nova_compute[226295]: 2025-11-29 08:05:16.020 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:05:16 np0005539564 nova_compute[226295]: 2025-11-29 08:05:16.102 226310 INFO nova.compute.manager [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Took 11.31 seconds to build instance.#033[00m
Nov 29 03:05:16 np0005539564 nova_compute[226295]: 2025-11-29 08:05:16.135 226310 DEBUG oslo_concurrency.lockutils [None req-ceb8a2cb-2c9e-4f94-b299-cf87f3bafe5e b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Lock "13e6ea90-7ec8-4c88-a019-2683f4e42de1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:16 np0005539564 podman[254891]: 2025-11-29 08:05:16.155896867 +0000 UTC m=+0.047252844 container create 142ee4849728bc659dc69d804bc30ffae2993880933cfad29ae7747c43587a36 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d3318ae-73ee-4f34-b7fc-49f8b6b1a835, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:05:16 np0005539564 systemd[1]: Started libpod-conmon-142ee4849728bc659dc69d804bc30ffae2993880933cfad29ae7747c43587a36.scope.
Nov 29 03:05:16 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:05:16 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd6d7401b1140e1db965484ce3f04d788d3b4c8856cd6ea8cc4de8f4d527e681/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:16 np0005539564 podman[254891]: 2025-11-29 08:05:16.13042711 +0000 UTC m=+0.021783107 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:05:16 np0005539564 podman[254891]: 2025-11-29 08:05:16.234896989 +0000 UTC m=+0.126252996 container init 142ee4849728bc659dc69d804bc30ffae2993880933cfad29ae7747c43587a36 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d3318ae-73ee-4f34-b7fc-49f8b6b1a835, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:05:16 np0005539564 podman[254891]: 2025-11-29 08:05:16.241897777 +0000 UTC m=+0.133253764 container start 142ee4849728bc659dc69d804bc30ffae2993880933cfad29ae7747c43587a36 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d3318ae-73ee-4f34-b7fc-49f8b6b1a835, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:05:16 np0005539564 neutron-haproxy-ovnmeta-5d3318ae-73ee-4f34-b7fc-49f8b6b1a835[254906]: [NOTICE]   (254910) : New worker (254912) forked
Nov 29 03:05:16 np0005539564 neutron-haproxy-ovnmeta-5d3318ae-73ee-4f34-b7fc-49f8b6b1a835[254906]: [NOTICE]   (254910) : Loading success.
Nov 29 03:05:16 np0005539564 nova_compute[226295]: 2025-11-29 08:05:16.568 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:17.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:17.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:17 np0005539564 podman[254923]: 2025-11-29 08:05:17.507843475 +0000 UTC m=+0.058757616 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:05:17 np0005539564 podman[254922]: 2025-11-29 08:05:17.515524863 +0000 UTC m=+0.058496029 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:05:17 np0005539564 podman[254921]: 2025-11-29 08:05:17.538991595 +0000 UTC m=+0.097446449 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:05:18 np0005539564 nova_compute[226295]: 2025-11-29 08:05:18.153 226310 DEBUG nova.compute.manager [req-74c6fe1e-faca-4983-b94b-2129b34463e8 req-6548bb0c-731c-445c-b256-5964e91cf3e8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Received event network-vif-plugged-4ddd5798-a43c-4151-b21a-d47f5386c760 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:18 np0005539564 nova_compute[226295]: 2025-11-29 08:05:18.154 226310 DEBUG oslo_concurrency.lockutils [req-74c6fe1e-faca-4983-b94b-2129b34463e8 req-6548bb0c-731c-445c-b256-5964e91cf3e8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "13e6ea90-7ec8-4c88-a019-2683f4e42de1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:18 np0005539564 nova_compute[226295]: 2025-11-29 08:05:18.155 226310 DEBUG oslo_concurrency.lockutils [req-74c6fe1e-faca-4983-b94b-2129b34463e8 req-6548bb0c-731c-445c-b256-5964e91cf3e8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "13e6ea90-7ec8-4c88-a019-2683f4e42de1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:18 np0005539564 nova_compute[226295]: 2025-11-29 08:05:18.155 226310 DEBUG oslo_concurrency.lockutils [req-74c6fe1e-faca-4983-b94b-2129b34463e8 req-6548bb0c-731c-445c-b256-5964e91cf3e8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "13e6ea90-7ec8-4c88-a019-2683f4e42de1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:18 np0005539564 nova_compute[226295]: 2025-11-29 08:05:18.155 226310 DEBUG nova.compute.manager [req-74c6fe1e-faca-4983-b94b-2129b34463e8 req-6548bb0c-731c-445c-b256-5964e91cf3e8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] No waiting events found dispatching network-vif-plugged-4ddd5798-a43c-4151-b21a-d47f5386c760 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:05:18 np0005539564 nova_compute[226295]: 2025-11-29 08:05:18.156 226310 WARNING nova.compute.manager [req-74c6fe1e-faca-4983-b94b-2129b34463e8 req-6548bb0c-731c-445c-b256-5964e91cf3e8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Received unexpected event network-vif-plugged-4ddd5798-a43c-4151-b21a-d47f5386c760 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:05:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:19.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:19.223 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:05:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:19.225 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:05:19 np0005539564 nova_compute[226295]: 2025-11-29 08:05:19.223 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:19 np0005539564 nova_compute[226295]: 2025-11-29 08:05:19.262 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:19.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:20 np0005539564 NetworkManager[48997]: <info>  [1764403520.1092] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/125)
Nov 29 03:05:20 np0005539564 NetworkManager[48997]: <info>  [1764403520.1101] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/126)
Nov 29 03:05:20 np0005539564 nova_compute[226295]: 2025-11-29 08:05:20.110 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:20.228 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:20 np0005539564 nova_compute[226295]: 2025-11-29 08:05:20.295 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:20 np0005539564 ovn_controller[130591]: 2025-11-29T08:05:20Z|00242|binding|INFO|Releasing lport 9ddd00c5-4a4b-4a7b-a840-a2b80dd6b9b4 from this chassis (sb_readonly=0)
Nov 29 03:05:20 np0005539564 nova_compute[226295]: 2025-11-29 08:05:20.315 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:20 np0005539564 nova_compute[226295]: 2025-11-29 08:05:20.807 226310 DEBUG nova.compute.manager [req-90f1b2fc-7583-4737-ac47-02ffb641c5b2 req-8bf14923-35c9-42e9-a09a-59e4b10d089c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Received event network-changed-4ddd5798-a43c-4151-b21a-d47f5386c760 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:20 np0005539564 nova_compute[226295]: 2025-11-29 08:05:20.808 226310 DEBUG nova.compute.manager [req-90f1b2fc-7583-4737-ac47-02ffb641c5b2 req-8bf14923-35c9-42e9-a09a-59e4b10d089c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Refreshing instance network info cache due to event network-changed-4ddd5798-a43c-4151-b21a-d47f5386c760. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:05:20 np0005539564 nova_compute[226295]: 2025-11-29 08:05:20.809 226310 DEBUG oslo_concurrency.lockutils [req-90f1b2fc-7583-4737-ac47-02ffb641c5b2 req-8bf14923-35c9-42e9-a09a-59e4b10d089c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-13e6ea90-7ec8-4c88-a019-2683f4e42de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:05:20 np0005539564 nova_compute[226295]: 2025-11-29 08:05:20.809 226310 DEBUG oslo_concurrency.lockutils [req-90f1b2fc-7583-4737-ac47-02ffb641c5b2 req-8bf14923-35c9-42e9-a09a-59e4b10d089c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-13e6ea90-7ec8-4c88-a019-2683f4e42de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:05:20 np0005539564 nova_compute[226295]: 2025-11-29 08:05:20.810 226310 DEBUG nova.network.neutron [req-90f1b2fc-7583-4737-ac47-02ffb641c5b2 req-8bf14923-35c9-42e9-a09a-59e4b10d089c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Refreshing network info cache for port 4ddd5798-a43c-4151-b21a-d47f5386c760 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:05:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:21.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:21.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:21 np0005539564 nova_compute[226295]: 2025-11-29 08:05:21.574 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:22 np0005539564 ovn_controller[130591]: 2025-11-29T08:05:22Z|00243|binding|INFO|Releasing lport 9ddd00c5-4a4b-4a7b-a840-a2b80dd6b9b4 from this chassis (sb_readonly=0)
Nov 29 03:05:22 np0005539564 nova_compute[226295]: 2025-11-29 08:05:22.651 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:22 np0005539564 nova_compute[226295]: 2025-11-29 08:05:22.796 226310 DEBUG nova.network.neutron [req-90f1b2fc-7583-4737-ac47-02ffb641c5b2 req-8bf14923-35c9-42e9-a09a-59e4b10d089c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Updated VIF entry in instance network info cache for port 4ddd5798-a43c-4151-b21a-d47f5386c760. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:05:22 np0005539564 nova_compute[226295]: 2025-11-29 08:05:22.799 226310 DEBUG nova.network.neutron [req-90f1b2fc-7583-4737-ac47-02ffb641c5b2 req-8bf14923-35c9-42e9-a09a-59e4b10d089c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Updating instance_info_cache with network_info: [{"id": "4ddd5798-a43c-4151-b21a-d47f5386c760", "address": "fa:16:3e:fe:dc:a4", "network": {"id": "5d3318ae-73ee-4f34-b7fc-49f8b6b1a835", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-686480058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cbb1a55279047568ac52b0498dba447", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ddd5798-a4", "ovs_interfaceid": "4ddd5798-a43c-4151-b21a-d47f5386c760", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:05:22 np0005539564 nova_compute[226295]: 2025-11-29 08:05:22.828 226310 DEBUG oslo_concurrency.lockutils [req-90f1b2fc-7583-4737-ac47-02ffb641c5b2 req-8bf14923-35c9-42e9-a09a-59e4b10d089c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-13e6ea90-7ec8-4c88-a019-2683f4e42de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:05:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:23.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:05:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:23.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:05:24 np0005539564 nova_compute[226295]: 2025-11-29 08:05:24.311 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:25.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:25.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:26 np0005539564 ovn_controller[130591]: 2025-11-29T08:05:26Z|00244|binding|INFO|Releasing lport 9ddd00c5-4a4b-4a7b-a840-a2b80dd6b9b4 from this chassis (sb_readonly=0)
Nov 29 03:05:26 np0005539564 nova_compute[226295]: 2025-11-29 08:05:26.638 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:27.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:27.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:05:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:29.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:05:29 np0005539564 nova_compute[226295]: 2025-11-29 08:05:29.314 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:29.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:31.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:31 np0005539564 ovn_controller[130591]: 2025-11-29T08:05:31Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fe:dc:a4 10.100.0.13
Nov 29 03:05:31 np0005539564 ovn_controller[130591]: 2025-11-29T08:05:31Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fe:dc:a4 10.100.0.13
Nov 29 03:05:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:31.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:31 np0005539564 nova_compute[226295]: 2025-11-29 08:05:31.642 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:33.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:33.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:34 np0005539564 nova_compute[226295]: 2025-11-29 08:05:34.324 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:34 np0005539564 nova_compute[226295]: 2025-11-29 08:05:34.357 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:05:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:35.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:05:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:35.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:35 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:05:35 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:05:35 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:05:35 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:05:36 np0005539564 nova_compute[226295]: 2025-11-29 08:05:36.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:36 np0005539564 nova_compute[226295]: 2025-11-29 08:05:36.646 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:36 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:05:36 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:05:36 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:05:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:05:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:37.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:05:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:37.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:37 np0005539564 nova_compute[226295]: 2025-11-29 08:05:37.885 226310 DEBUG oslo_concurrency.lockutils [None req-b43ba048-42e1-4330-a5d0-bbfdf96c325b b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Acquiring lock "13e6ea90-7ec8-4c88-a019-2683f4e42de1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:37 np0005539564 nova_compute[226295]: 2025-11-29 08:05:37.886 226310 DEBUG oslo_concurrency.lockutils [None req-b43ba048-42e1-4330-a5d0-bbfdf96c325b b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Lock "13e6ea90-7ec8-4c88-a019-2683f4e42de1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:37 np0005539564 nova_compute[226295]: 2025-11-29 08:05:37.886 226310 DEBUG oslo_concurrency.lockutils [None req-b43ba048-42e1-4330-a5d0-bbfdf96c325b b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Acquiring lock "13e6ea90-7ec8-4c88-a019-2683f4e42de1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:37 np0005539564 nova_compute[226295]: 2025-11-29 08:05:37.886 226310 DEBUG oslo_concurrency.lockutils [None req-b43ba048-42e1-4330-a5d0-bbfdf96c325b b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Lock "13e6ea90-7ec8-4c88-a019-2683f4e42de1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:37 np0005539564 nova_compute[226295]: 2025-11-29 08:05:37.886 226310 DEBUG oslo_concurrency.lockutils [None req-b43ba048-42e1-4330-a5d0-bbfdf96c325b b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Lock "13e6ea90-7ec8-4c88-a019-2683f4e42de1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:37 np0005539564 nova_compute[226295]: 2025-11-29 08:05:37.887 226310 INFO nova.compute.manager [None req-b43ba048-42e1-4330-a5d0-bbfdf96c325b b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Terminating instance#033[00m
Nov 29 03:05:37 np0005539564 nova_compute[226295]: 2025-11-29 08:05:37.888 226310 DEBUG nova.compute.manager [None req-b43ba048-42e1-4330-a5d0-bbfdf96c325b b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:05:37 np0005539564 kernel: tap4ddd5798-a4 (unregistering): left promiscuous mode
Nov 29 03:05:37 np0005539564 NetworkManager[48997]: <info>  [1764403537.9396] device (tap4ddd5798-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:05:37 np0005539564 ovn_controller[130591]: 2025-11-29T08:05:37Z|00245|binding|INFO|Releasing lport 4ddd5798-a43c-4151-b21a-d47f5386c760 from this chassis (sb_readonly=0)
Nov 29 03:05:37 np0005539564 ovn_controller[130591]: 2025-11-29T08:05:37Z|00246|binding|INFO|Setting lport 4ddd5798-a43c-4151-b21a-d47f5386c760 down in Southbound
Nov 29 03:05:37 np0005539564 nova_compute[226295]: 2025-11-29 08:05:37.952 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:37 np0005539564 ovn_controller[130591]: 2025-11-29T08:05:37Z|00247|binding|INFO|Removing iface tap4ddd5798-a4 ovn-installed in OVS
Nov 29 03:05:37 np0005539564 nova_compute[226295]: 2025-11-29 08:05:37.955 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:37 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:37.961 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:dc:a4 10.100.0.13'], port_security=['fa:16:3e:fe:dc:a4 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '13e6ea90-7ec8-4c88-a019-2683f4e42de1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d3318ae-73ee-4f34-b7fc-49f8b6b1a835', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1cbb1a55279047568ac52b0498dba447', 'neutron:revision_number': '4', 'neutron:security_group_ids': '86ed8466-f9dc-4f3f-ad3f-bb34ae8f3f19', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2abbc96e-1b96-4e2e-8fb2-60a9d48131f8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=4ddd5798-a43c-4151-b21a-d47f5386c760) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:05:37 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:37.964 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 4ddd5798-a43c-4151-b21a-d47f5386c760 in datapath 5d3318ae-73ee-4f34-b7fc-49f8b6b1a835 unbound from our chassis#033[00m
Nov 29 03:05:37 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:37.966 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d3318ae-73ee-4f34-b7fc-49f8b6b1a835, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:05:37 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:37.968 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[cc2e5c11-db8a-477c-8424-b122772d39a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:37 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:37.969 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5d3318ae-73ee-4f34-b7fc-49f8b6b1a835 namespace which is not needed anymore#033[00m
Nov 29 03:05:37 np0005539564 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:37.997 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:38 np0005539564 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000004c.scope: Consumed 14.758s CPU time.
Nov 29 03:05:38 np0005539564 systemd-machined[190128]: Machine qemu-33-instance-0000004c terminated.
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.133 226310 INFO nova.virt.libvirt.driver [-] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Instance destroyed successfully.#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.134 226310 DEBUG nova.objects.instance [None req-b43ba048-42e1-4330-a5d0-bbfdf96c325b b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Lazy-loading 'resources' on Instance uuid 13e6ea90-7ec8-4c88-a019-2683f4e42de1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.148 226310 DEBUG nova.virt.libvirt.vif [None req-b43ba048-42e1-4330-a5d0-bbfdf96c325b b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:05:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1719767373',display_name='tempest-ServersTestManualDisk-server-1719767373',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1719767373',id=76,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKruuDCg5LihbmBeh3rQ1uRRivwD95slN55putXkJKDRBjte7r88isXAkHElOVQsnRhmZnoBiesUIQlDu98QwoQArYe3nhWgn48hUDQ+dJhXj6KdyL5XIyRrsfeCBrTuTA==',key_name='tempest-keypair-1369263509',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:05:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1cbb1a55279047568ac52b0498dba447',ramdisk_id='',reservation_id='r-70gheif7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-866726591',owner_user_name='tempest-ServersTestManualDisk-866726591-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:05:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b752bbc38da346cd906c48b7e558600a',uuid=13e6ea90-7ec8-4c88-a019-2683f4e42de1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4ddd5798-a43c-4151-b21a-d47f5386c760", "address": "fa:16:3e:fe:dc:a4", "network": {"id": "5d3318ae-73ee-4f34-b7fc-49f8b6b1a835", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-686480058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cbb1a55279047568ac52b0498dba447", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ddd5798-a4", "ovs_interfaceid": "4ddd5798-a43c-4151-b21a-d47f5386c760", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.149 226310 DEBUG nova.network.os_vif_util [None req-b43ba048-42e1-4330-a5d0-bbfdf96c325b b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Converting VIF {"id": "4ddd5798-a43c-4151-b21a-d47f5386c760", "address": "fa:16:3e:fe:dc:a4", "network": {"id": "5d3318ae-73ee-4f34-b7fc-49f8b6b1a835", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-686480058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cbb1a55279047568ac52b0498dba447", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ddd5798-a4", "ovs_interfaceid": "4ddd5798-a43c-4151-b21a-d47f5386c760", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.149 226310 DEBUG nova.network.os_vif_util [None req-b43ba048-42e1-4330-a5d0-bbfdf96c325b b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fe:dc:a4,bridge_name='br-int',has_traffic_filtering=True,id=4ddd5798-a43c-4151-b21a-d47f5386c760,network=Network(5d3318ae-73ee-4f34-b7fc-49f8b6b1a835),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ddd5798-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.150 226310 DEBUG os_vif [None req-b43ba048-42e1-4330-a5d0-bbfdf96c325b b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:dc:a4,bridge_name='br-int',has_traffic_filtering=True,id=4ddd5798-a43c-4151-b21a-d47f5386c760,network=Network(5d3318ae-73ee-4f34-b7fc-49f8b6b1a835),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ddd5798-a4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.151 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.151 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ddd5798-a4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.153 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.154 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.157 226310 INFO os_vif [None req-b43ba048-42e1-4330-a5d0-bbfdf96c325b b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:dc:a4,bridge_name='br-int',has_traffic_filtering=True,id=4ddd5798-a43c-4151-b21a-d47f5386c760,network=Network(5d3318ae-73ee-4f34-b7fc-49f8b6b1a835),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ddd5798-a4')#033[00m
Nov 29 03:05:38 np0005539564 neutron-haproxy-ovnmeta-5d3318ae-73ee-4f34-b7fc-49f8b6b1a835[254906]: [NOTICE]   (254910) : haproxy version is 2.8.14-c23fe91
Nov 29 03:05:38 np0005539564 neutron-haproxy-ovnmeta-5d3318ae-73ee-4f34-b7fc-49f8b6b1a835[254906]: [NOTICE]   (254910) : path to executable is /usr/sbin/haproxy
Nov 29 03:05:38 np0005539564 neutron-haproxy-ovnmeta-5d3318ae-73ee-4f34-b7fc-49f8b6b1a835[254906]: [WARNING]  (254910) : Exiting Master process...
Nov 29 03:05:38 np0005539564 neutron-haproxy-ovnmeta-5d3318ae-73ee-4f34-b7fc-49f8b6b1a835[254906]: [ALERT]    (254910) : Current worker (254912) exited with code 143 (Terminated)
Nov 29 03:05:38 np0005539564 neutron-haproxy-ovnmeta-5d3318ae-73ee-4f34-b7fc-49f8b6b1a835[254906]: [WARNING]  (254910) : All workers exited. Exiting... (0)
Nov 29 03:05:38 np0005539564 systemd[1]: libpod-142ee4849728bc659dc69d804bc30ffae2993880933cfad29ae7747c43587a36.scope: Deactivated successfully.
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.185 226310 DEBUG nova.compute.manager [req-15ad6dd1-a5bf-420e-ba8f-9438bc0eee93 req-48df59b2-7559-4aab-905c-1cc7cf406b7e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Received event network-vif-unplugged-4ddd5798-a43c-4151-b21a-d47f5386c760 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.185 226310 DEBUG oslo_concurrency.lockutils [req-15ad6dd1-a5bf-420e-ba8f-9438bc0eee93 req-48df59b2-7559-4aab-905c-1cc7cf406b7e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "13e6ea90-7ec8-4c88-a019-2683f4e42de1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.186 226310 DEBUG oslo_concurrency.lockutils [req-15ad6dd1-a5bf-420e-ba8f-9438bc0eee93 req-48df59b2-7559-4aab-905c-1cc7cf406b7e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "13e6ea90-7ec8-4c88-a019-2683f4e42de1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.186 226310 DEBUG oslo_concurrency.lockutils [req-15ad6dd1-a5bf-420e-ba8f-9438bc0eee93 req-48df59b2-7559-4aab-905c-1cc7cf406b7e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "13e6ea90-7ec8-4c88-a019-2683f4e42de1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:38 np0005539564 conmon[254906]: conmon 142ee4849728bc659dc6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-142ee4849728bc659dc69d804bc30ffae2993880933cfad29ae7747c43587a36.scope/container/memory.events
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.186 226310 DEBUG nova.compute.manager [req-15ad6dd1-a5bf-420e-ba8f-9438bc0eee93 req-48df59b2-7559-4aab-905c-1cc7cf406b7e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] No waiting events found dispatching network-vif-unplugged-4ddd5798-a43c-4151-b21a-d47f5386c760 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.187 226310 DEBUG nova.compute.manager [req-15ad6dd1-a5bf-420e-ba8f-9438bc0eee93 req-48df59b2-7559-4aab-905c-1cc7cf406b7e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Received event network-vif-unplugged-4ddd5798-a43c-4151-b21a-d47f5386c760 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:05:38 np0005539564 podman[255147]: 2025-11-29 08:05:38.191214169 +0000 UTC m=+0.066546576 container died 142ee4849728bc659dc69d804bc30ffae2993880933cfad29ae7747c43587a36 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d3318ae-73ee-4f34-b7fc-49f8b6b1a835, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:05:38 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-142ee4849728bc659dc69d804bc30ffae2993880933cfad29ae7747c43587a36-userdata-shm.mount: Deactivated successfully.
Nov 29 03:05:38 np0005539564 systemd[1]: var-lib-containers-storage-overlay-cd6d7401b1140e1db965484ce3f04d788d3b4c8856cd6ea8cc4de8f4d527e681-merged.mount: Deactivated successfully.
Nov 29 03:05:38 np0005539564 podman[255147]: 2025-11-29 08:05:38.239962794 +0000 UTC m=+0.115295161 container cleanup 142ee4849728bc659dc69d804bc30ffae2993880933cfad29ae7747c43587a36 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d3318ae-73ee-4f34-b7fc-49f8b6b1a835, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:05:38 np0005539564 systemd[1]: libpod-conmon-142ee4849728bc659dc69d804bc30ffae2993880933cfad29ae7747c43587a36.scope: Deactivated successfully.
Nov 29 03:05:38 np0005539564 podman[255202]: 2025-11-29 08:05:38.321587406 +0000 UTC m=+0.052469337 container remove 142ee4849728bc659dc69d804bc30ffae2993880933cfad29ae7747c43587a36 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d3318ae-73ee-4f34-b7fc-49f8b6b1a835, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:05:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:38.331 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c0819c88-aa29-416c-b621-51ff0ff98c6c]: (4, ('Sat Nov 29 08:05:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5d3318ae-73ee-4f34-b7fc-49f8b6b1a835 (142ee4849728bc659dc69d804bc30ffae2993880933cfad29ae7747c43587a36)\n142ee4849728bc659dc69d804bc30ffae2993880933cfad29ae7747c43587a36\nSat Nov 29 08:05:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5d3318ae-73ee-4f34-b7fc-49f8b6b1a835 (142ee4849728bc659dc69d804bc30ffae2993880933cfad29ae7747c43587a36)\n142ee4849728bc659dc69d804bc30ffae2993880933cfad29ae7747c43587a36\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:38.334 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[373630fd-c2fe-4706-99bd-e7cc8b7dac26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:38.335 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d3318ae-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:38 np0005539564 kernel: tap5d3318ae-70: left promiscuous mode
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.338 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.342 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:05:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:38.344 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0f8709ec-e159-4246-ad21-88b7dd77fe3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.353 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:38.358 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0c4ddc71-1313-49fc-9012-e0d187bb0a0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:38.360 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[80d0e828-dac4-499b-8a8a-0c87f959eacd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:38.378 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[87e08ae9-6181-4a3b-afce-5fb09eae1585]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649013, 'reachable_time': 31139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255217, 'error': None, 'target': 'ovnmeta-5d3318ae-73ee-4f34-b7fc-49f8b6b1a835', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:38.382 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5d3318ae-73ee-4f34-b7fc-49f8b6b1a835 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:05:38 np0005539564 systemd[1]: run-netns-ovnmeta\x2d5d3318ae\x2d73ee\x2d4f34\x2db7fc\x2d49f8b6b1a835.mount: Deactivated successfully.
Nov 29 03:05:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:38.382 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[9e3e630a-71de-463d-8d8e-c4595d8cdd03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.443 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.443 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.444 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.445 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.628 226310 INFO nova.virt.libvirt.driver [None req-b43ba048-42e1-4330-a5d0-bbfdf96c325b b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Deleting instance files /var/lib/nova/instances/13e6ea90-7ec8-4c88-a019-2683f4e42de1_del#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.629 226310 INFO nova.virt.libvirt.driver [None req-b43ba048-42e1-4330-a5d0-bbfdf96c325b b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Deletion of /var/lib/nova/instances/13e6ea90-7ec8-4c88-a019-2683f4e42de1_del complete#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.702 226310 INFO nova.compute.manager [None req-b43ba048-42e1-4330-a5d0-bbfdf96c325b b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.703 226310 DEBUG oslo.service.loopingcall [None req-b43ba048-42e1-4330-a5d0-bbfdf96c325b b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.705 226310 DEBUG nova.compute.manager [-] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.705 226310 DEBUG nova.network.neutron [-] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:05:38 np0005539564 nova_compute[226295]: 2025-11-29 08:05:38.944 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:39.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:39.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:39 np0005539564 nova_compute[226295]: 2025-11-29 08:05:39.864 226310 DEBUG nova.network.neutron [-] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:05:39 np0005539564 nova_compute[226295]: 2025-11-29 08:05:39.915 226310 INFO nova.compute.manager [-] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Took 1.21 seconds to deallocate network for instance.#033[00m
Nov 29 03:05:39 np0005539564 nova_compute[226295]: 2025-11-29 08:05:39.967 226310 DEBUG nova.compute.manager [req-7c8e4224-5bd5-48ee-810f-bd2d6de6f1c5 req-c41f6d67-b8c7-4044-bf0a-a5c39c9292e5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Received event network-vif-deleted-4ddd5798-a43c-4151-b21a-d47f5386c760 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:39 np0005539564 nova_compute[226295]: 2025-11-29 08:05:39.988 226310 DEBUG oslo_concurrency.lockutils [None req-b43ba048-42e1-4330-a5d0-bbfdf96c325b b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:39 np0005539564 nova_compute[226295]: 2025-11-29 08:05:39.989 226310 DEBUG oslo_concurrency.lockutils [None req-b43ba048-42e1-4330-a5d0-bbfdf96c325b b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:40 np0005539564 nova_compute[226295]: 2025-11-29 08:05:40.087 226310 DEBUG oslo_concurrency.processutils [None req-b43ba048-42e1-4330-a5d0-bbfdf96c325b b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:40 np0005539564 nova_compute[226295]: 2025-11-29 08:05:40.273 226310 DEBUG nova.compute.manager [req-9d5df5f3-4c43-4875-8c04-ff3f2980cdf4 req-64bfb507-dd5f-4096-bbfd-08513649b7c5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Received event network-vif-plugged-4ddd5798-a43c-4151-b21a-d47f5386c760 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:40 np0005539564 nova_compute[226295]: 2025-11-29 08:05:40.274 226310 DEBUG oslo_concurrency.lockutils [req-9d5df5f3-4c43-4875-8c04-ff3f2980cdf4 req-64bfb507-dd5f-4096-bbfd-08513649b7c5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "13e6ea90-7ec8-4c88-a019-2683f4e42de1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:40 np0005539564 nova_compute[226295]: 2025-11-29 08:05:40.274 226310 DEBUG oslo_concurrency.lockutils [req-9d5df5f3-4c43-4875-8c04-ff3f2980cdf4 req-64bfb507-dd5f-4096-bbfd-08513649b7c5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "13e6ea90-7ec8-4c88-a019-2683f4e42de1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:40 np0005539564 nova_compute[226295]: 2025-11-29 08:05:40.274 226310 DEBUG oslo_concurrency.lockutils [req-9d5df5f3-4c43-4875-8c04-ff3f2980cdf4 req-64bfb507-dd5f-4096-bbfd-08513649b7c5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "13e6ea90-7ec8-4c88-a019-2683f4e42de1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:40 np0005539564 nova_compute[226295]: 2025-11-29 08:05:40.274 226310 DEBUG nova.compute.manager [req-9d5df5f3-4c43-4875-8c04-ff3f2980cdf4 req-64bfb507-dd5f-4096-bbfd-08513649b7c5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] No waiting events found dispatching network-vif-plugged-4ddd5798-a43c-4151-b21a-d47f5386c760 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:05:40 np0005539564 nova_compute[226295]: 2025-11-29 08:05:40.275 226310 WARNING nova.compute.manager [req-9d5df5f3-4c43-4875-8c04-ff3f2980cdf4 req-64bfb507-dd5f-4096-bbfd-08513649b7c5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Received unexpected event network-vif-plugged-4ddd5798-a43c-4151-b21a-d47f5386c760 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:05:40 np0005539564 nova_compute[226295]: 2025-11-29 08:05:40.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:40 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3865215440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:05:40 np0005539564 nova_compute[226295]: 2025-11-29 08:05:40.614 226310 DEBUG oslo_concurrency.processutils [None req-b43ba048-42e1-4330-a5d0-bbfdf96c325b b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:40 np0005539564 nova_compute[226295]: 2025-11-29 08:05:40.623 226310 DEBUG nova.compute.provider_tree [None req-b43ba048-42e1-4330-a5d0-bbfdf96c325b b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:05:40 np0005539564 nova_compute[226295]: 2025-11-29 08:05:40.642 226310 DEBUG nova.scheduler.client.report [None req-b43ba048-42e1-4330-a5d0-bbfdf96c325b b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:05:40 np0005539564 nova_compute[226295]: 2025-11-29 08:05:40.668 226310 DEBUG oslo_concurrency.lockutils [None req-b43ba048-42e1-4330-a5d0-bbfdf96c325b b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:40 np0005539564 nova_compute[226295]: 2025-11-29 08:05:40.701 226310 INFO nova.scheduler.client.report [None req-b43ba048-42e1-4330-a5d0-bbfdf96c325b b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Deleted allocations for instance 13e6ea90-7ec8-4c88-a019-2683f4e42de1#033[00m
Nov 29 03:05:40 np0005539564 nova_compute[226295]: 2025-11-29 08:05:40.792 226310 DEBUG oslo_concurrency.lockutils [None req-b43ba048-42e1-4330-a5d0-bbfdf96c325b b752bbc38da346cd906c48b7e558600a 1cbb1a55279047568ac52b0498dba447 - - default default] Lock "13e6ea90-7ec8-4c88-a019-2683f4e42de1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:41.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:41 np0005539564 nova_compute[226295]: 2025-11-29 08:05:41.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:05:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:41.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:05:41 np0005539564 nova_compute[226295]: 2025-11-29 08:05:41.649 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:42 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:05:42 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:05:43 np0005539564 nova_compute[226295]: 2025-11-29 08:05:43.076 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:43 np0005539564 nova_compute[226295]: 2025-11-29 08:05:43.153 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:05:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:43.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:05:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:43.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:45.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:45 np0005539564 nova_compute[226295]: 2025-11-29 08:05:45.328 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:45 np0005539564 nova_compute[226295]: 2025-11-29 08:05:45.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:45 np0005539564 nova_compute[226295]: 2025-11-29 08:05:45.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:45 np0005539564 nova_compute[226295]: 2025-11-29 08:05:45.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:45 np0005539564 nova_compute[226295]: 2025-11-29 08:05:45.387 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:45 np0005539564 nova_compute[226295]: 2025-11-29 08:05:45.388 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:45 np0005539564 nova_compute[226295]: 2025-11-29 08:05:45.389 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:45 np0005539564 nova_compute[226295]: 2025-11-29 08:05:45.389 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:05:45 np0005539564 nova_compute[226295]: 2025-11-29 08:05:45.390 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:45.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:45 np0005539564 nova_compute[226295]: 2025-11-29 08:05:45.549 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:45 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4115316929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:05:45 np0005539564 nova_compute[226295]: 2025-11-29 08:05:45.831 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:46 np0005539564 nova_compute[226295]: 2025-11-29 08:05:46.081 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:05:46 np0005539564 nova_compute[226295]: 2025-11-29 08:05:46.084 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4635MB free_disk=20.927486419677734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:05:46 np0005539564 nova_compute[226295]: 2025-11-29 08:05:46.084 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:46 np0005539564 nova_compute[226295]: 2025-11-29 08:05:46.085 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:46 np0005539564 nova_compute[226295]: 2025-11-29 08:05:46.188 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:05:46 np0005539564 nova_compute[226295]: 2025-11-29 08:05:46.188 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:05:46 np0005539564 nova_compute[226295]: 2025-11-29 08:05:46.204 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:46 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3552444880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:05:46 np0005539564 nova_compute[226295]: 2025-11-29 08:05:46.652 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:46 np0005539564 nova_compute[226295]: 2025-11-29 08:05:46.657 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:46 np0005539564 nova_compute[226295]: 2025-11-29 08:05:46.665 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:05:46 np0005539564 nova_compute[226295]: 2025-11-29 08:05:46.769 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:05:46 np0005539564 nova_compute[226295]: 2025-11-29 08:05:46.832 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:05:46 np0005539564 nova_compute[226295]: 2025-11-29 08:05:46.833 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:47.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:47 np0005539564 nova_compute[226295]: 2025-11-29 08:05:47.383 226310 DEBUG oslo_concurrency.lockutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:47 np0005539564 nova_compute[226295]: 2025-11-29 08:05:47.384 226310 DEBUG oslo_concurrency.lockutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:47 np0005539564 nova_compute[226295]: 2025-11-29 08:05:47.424 226310 DEBUG nova.compute.manager [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:05:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:47.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:47 np0005539564 nova_compute[226295]: 2025-11-29 08:05:47.523 226310 DEBUG oslo_concurrency.lockutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:47 np0005539564 nova_compute[226295]: 2025-11-29 08:05:47.524 226310 DEBUG oslo_concurrency.lockutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:47 np0005539564 nova_compute[226295]: 2025-11-29 08:05:47.532 226310 DEBUG nova.virt.hardware [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:05:47 np0005539564 nova_compute[226295]: 2025-11-29 08:05:47.532 226310 INFO nova.compute.claims [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:05:47 np0005539564 nova_compute[226295]: 2025-11-29 08:05:47.692 226310 DEBUG oslo_concurrency.processutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:48 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3851234073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:05:48 np0005539564 nova_compute[226295]: 2025-11-29 08:05:48.156 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:48 np0005539564 nova_compute[226295]: 2025-11-29 08:05:48.176 226310 DEBUG oslo_concurrency.processutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:48 np0005539564 nova_compute[226295]: 2025-11-29 08:05:48.181 226310 DEBUG nova.compute.provider_tree [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:05:48 np0005539564 nova_compute[226295]: 2025-11-29 08:05:48.205 226310 DEBUG nova.scheduler.client.report [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:05:48 np0005539564 nova_compute[226295]: 2025-11-29 08:05:48.241 226310 DEBUG oslo_concurrency.lockutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:48 np0005539564 nova_compute[226295]: 2025-11-29 08:05:48.241 226310 DEBUG nova.compute.manager [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:05:48 np0005539564 nova_compute[226295]: 2025-11-29 08:05:48.295 226310 DEBUG nova.compute.manager [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:05:48 np0005539564 nova_compute[226295]: 2025-11-29 08:05:48.296 226310 DEBUG nova.network.neutron [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:05:48 np0005539564 nova_compute[226295]: 2025-11-29 08:05:48.320 226310 INFO nova.virt.libvirt.driver [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:05:48 np0005539564 nova_compute[226295]: 2025-11-29 08:05:48.342 226310 DEBUG nova.compute.manager [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:05:48 np0005539564 nova_compute[226295]: 2025-11-29 08:05:48.484 226310 DEBUG nova.compute.manager [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:05:48 np0005539564 nova_compute[226295]: 2025-11-29 08:05:48.486 226310 DEBUG nova.virt.libvirt.driver [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:05:48 np0005539564 nova_compute[226295]: 2025-11-29 08:05:48.486 226310 INFO nova.virt.libvirt.driver [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Creating image(s)#033[00m
Nov 29 03:05:48 np0005539564 nova_compute[226295]: 2025-11-29 08:05:48.525 226310 DEBUG nova.storage.rbd_utils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image 8f9bb224-0119-4a96-9859-d3afda2ab1ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:48 np0005539564 podman[255361]: 2025-11-29 08:05:48.529687901 +0000 UTC m=+0.069843785 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:05:48 np0005539564 podman[255359]: 2025-11-29 08:05:48.569110094 +0000 UTC m=+0.114263203 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:05:48 np0005539564 nova_compute[226295]: 2025-11-29 08:05:48.572 226310 DEBUG nova.storage.rbd_utils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image 8f9bb224-0119-4a96-9859-d3afda2ab1ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:48 np0005539564 nova_compute[226295]: 2025-11-29 08:05:48.609 226310 DEBUG nova.storage.rbd_utils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image 8f9bb224-0119-4a96-9859-d3afda2ab1ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:48 np0005539564 nova_compute[226295]: 2025-11-29 08:05:48.615 226310 DEBUG oslo_concurrency.processutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:48 np0005539564 nova_compute[226295]: 2025-11-29 08:05:48.653 226310 DEBUG nova.policy [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ef8e9cc962eb4827954df3c42cc34798', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f8bc2a2616a34ba1a18b3211e406993f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:05:48 np0005539564 nova_compute[226295]: 2025-11-29 08:05:48.691 226310 DEBUG oslo_concurrency.processutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:48 np0005539564 nova_compute[226295]: 2025-11-29 08:05:48.693 226310 DEBUG oslo_concurrency.lockutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:48 np0005539564 nova_compute[226295]: 2025-11-29 08:05:48.695 226310 DEBUG oslo_concurrency.lockutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:48 np0005539564 nova_compute[226295]: 2025-11-29 08:05:48.696 226310 DEBUG oslo_concurrency.lockutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:49 np0005539564 podman[255360]: 2025-11-29 08:05:49.166625541 +0000 UTC m=+0.715215602 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Nov 29 03:05:49 np0005539564 nova_compute[226295]: 2025-11-29 08:05:49.191 226310 DEBUG nova.storage.rbd_utils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image 8f9bb224-0119-4a96-9859-d3afda2ab1ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:49 np0005539564 nova_compute[226295]: 2025-11-29 08:05:49.198 226310 DEBUG oslo_concurrency.processutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 8f9bb224-0119-4a96-9859-d3afda2ab1ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:49.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:49.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:49 np0005539564 nova_compute[226295]: 2025-11-29 08:05:49.629 226310 DEBUG nova.network.neutron [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Successfully created port: 8b53507f-acc1-4e75-a82d-55c4a7d7abd7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:05:49 np0005539564 nova_compute[226295]: 2025-11-29 08:05:49.795 226310 DEBUG oslo_concurrency.processutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 8f9bb224-0119-4a96-9859-d3afda2ab1ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:49 np0005539564 nova_compute[226295]: 2025-11-29 08:05:49.896 226310 DEBUG nova.storage.rbd_utils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] resizing rbd image 8f9bb224-0119-4a96-9859-d3afda2ab1ce_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:05:50 np0005539564 nova_compute[226295]: 2025-11-29 08:05:50.062 226310 DEBUG nova.objects.instance [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'migration_context' on Instance uuid 8f9bb224-0119-4a96-9859-d3afda2ab1ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:50 np0005539564 nova_compute[226295]: 2025-11-29 08:05:50.079 226310 DEBUG nova.virt.libvirt.driver [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:05:50 np0005539564 nova_compute[226295]: 2025-11-29 08:05:50.080 226310 DEBUG nova.virt.libvirt.driver [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Ensure instance console log exists: /var/lib/nova/instances/8f9bb224-0119-4a96-9859-d3afda2ab1ce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:05:50 np0005539564 nova_compute[226295]: 2025-11-29 08:05:50.080 226310 DEBUG oslo_concurrency.lockutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:50 np0005539564 nova_compute[226295]: 2025-11-29 08:05:50.081 226310 DEBUG oslo_concurrency.lockutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:50 np0005539564 nova_compute[226295]: 2025-11-29 08:05:50.081 226310 DEBUG oslo_concurrency.lockutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:50 np0005539564 nova_compute[226295]: 2025-11-29 08:05:50.939 226310 DEBUG nova.network.neutron [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Successfully updated port: 8b53507f-acc1-4e75-a82d-55c4a7d7abd7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:05:50 np0005539564 nova_compute[226295]: 2025-11-29 08:05:50.956 226310 DEBUG oslo_concurrency.lockutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "refresh_cache-8f9bb224-0119-4a96-9859-d3afda2ab1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:05:50 np0005539564 nova_compute[226295]: 2025-11-29 08:05:50.956 226310 DEBUG oslo_concurrency.lockutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquired lock "refresh_cache-8f9bb224-0119-4a96-9859-d3afda2ab1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:05:50 np0005539564 nova_compute[226295]: 2025-11-29 08:05:50.957 226310 DEBUG nova.network.neutron [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:05:51 np0005539564 nova_compute[226295]: 2025-11-29 08:05:51.079 226310 DEBUG nova.compute.manager [req-9be725e3-867a-48e2-b7b5-4a22fb3e83c1 req-dd429f43-d2c5-4a9f-ac00-fc06cb90ec1d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Received event network-changed-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:51 np0005539564 nova_compute[226295]: 2025-11-29 08:05:51.080 226310 DEBUG nova.compute.manager [req-9be725e3-867a-48e2-b7b5-4a22fb3e83c1 req-dd429f43-d2c5-4a9f-ac00-fc06cb90ec1d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Refreshing instance network info cache due to event network-changed-8b53507f-acc1-4e75-a82d-55c4a7d7abd7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:05:51 np0005539564 nova_compute[226295]: 2025-11-29 08:05:51.081 226310 DEBUG oslo_concurrency.lockutils [req-9be725e3-867a-48e2-b7b5-4a22fb3e83c1 req-dd429f43-d2c5-4a9f-ac00-fc06cb90ec1d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-8f9bb224-0119-4a96-9859-d3afda2ab1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:05:51 np0005539564 nova_compute[226295]: 2025-11-29 08:05:51.162 226310 DEBUG nova.network.neutron [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:05:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:51.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:51.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:51 np0005539564 nova_compute[226295]: 2025-11-29 08:05:51.654 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:52 np0005539564 nova_compute[226295]: 2025-11-29 08:05:52.118 226310 DEBUG nova.network.neutron [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Updating instance_info_cache with network_info: [{"id": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "address": "fa:16:3e:6c:22:bc", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b53507f-ac", "ovs_interfaceid": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:05:52 np0005539564 nova_compute[226295]: 2025-11-29 08:05:52.264 226310 DEBUG oslo_concurrency.lockutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Releasing lock "refresh_cache-8f9bb224-0119-4a96-9859-d3afda2ab1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:05:52 np0005539564 nova_compute[226295]: 2025-11-29 08:05:52.264 226310 DEBUG nova.compute.manager [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Instance network_info: |[{"id": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "address": "fa:16:3e:6c:22:bc", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b53507f-ac", "ovs_interfaceid": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:05:52 np0005539564 nova_compute[226295]: 2025-11-29 08:05:52.265 226310 DEBUG oslo_concurrency.lockutils [req-9be725e3-867a-48e2-b7b5-4a22fb3e83c1 req-dd429f43-d2c5-4a9f-ac00-fc06cb90ec1d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-8f9bb224-0119-4a96-9859-d3afda2ab1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:05:52 np0005539564 nova_compute[226295]: 2025-11-29 08:05:52.266 226310 DEBUG nova.network.neutron [req-9be725e3-867a-48e2-b7b5-4a22fb3e83c1 req-dd429f43-d2c5-4a9f-ac00-fc06cb90ec1d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Refreshing network info cache for port 8b53507f-acc1-4e75-a82d-55c4a7d7abd7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:05:52 np0005539564 nova_compute[226295]: 2025-11-29 08:05:52.272 226310 DEBUG nova.virt.libvirt.driver [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Start _get_guest_xml network_info=[{"id": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "address": "fa:16:3e:6c:22:bc", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b53507f-ac", "ovs_interfaceid": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:05:52 np0005539564 nova_compute[226295]: 2025-11-29 08:05:52.282 226310 WARNING nova.virt.libvirt.driver [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:05:52 np0005539564 nova_compute[226295]: 2025-11-29 08:05:52.417 226310 DEBUG nova.virt.libvirt.host [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:05:52 np0005539564 nova_compute[226295]: 2025-11-29 08:05:52.418 226310 DEBUG nova.virt.libvirt.host [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:05:52 np0005539564 nova_compute[226295]: 2025-11-29 08:05:52.424 226310 DEBUG nova.virt.libvirt.host [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:05:52 np0005539564 nova_compute[226295]: 2025-11-29 08:05:52.425 226310 DEBUG nova.virt.libvirt.host [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:05:52 np0005539564 nova_compute[226295]: 2025-11-29 08:05:52.427 226310 DEBUG nova.virt.libvirt.driver [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:05:52 np0005539564 nova_compute[226295]: 2025-11-29 08:05:52.427 226310 DEBUG nova.virt.hardware [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:05:52 np0005539564 nova_compute[226295]: 2025-11-29 08:05:52.428 226310 DEBUG nova.virt.hardware [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:05:52 np0005539564 nova_compute[226295]: 2025-11-29 08:05:52.429 226310 DEBUG nova.virt.hardware [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:05:52 np0005539564 nova_compute[226295]: 2025-11-29 08:05:52.429 226310 DEBUG nova.virt.hardware [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:05:52 np0005539564 nova_compute[226295]: 2025-11-29 08:05:52.430 226310 DEBUG nova.virt.hardware [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:05:52 np0005539564 nova_compute[226295]: 2025-11-29 08:05:52.430 226310 DEBUG nova.virt.hardware [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:05:52 np0005539564 nova_compute[226295]: 2025-11-29 08:05:52.431 226310 DEBUG nova.virt.hardware [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:05:52 np0005539564 nova_compute[226295]: 2025-11-29 08:05:52.432 226310 DEBUG nova.virt.hardware [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:05:52 np0005539564 nova_compute[226295]: 2025-11-29 08:05:52.432 226310 DEBUG nova.virt.hardware [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:05:52 np0005539564 nova_compute[226295]: 2025-11-29 08:05:52.432 226310 DEBUG nova.virt.hardware [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:05:52 np0005539564 nova_compute[226295]: 2025-11-29 08:05:52.433 226310 DEBUG nova.virt.hardware [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:05:52 np0005539564 nova_compute[226295]: 2025-11-29 08:05:52.438 226310 DEBUG oslo_concurrency.processutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:05:52 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3825691271' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:05:52 np0005539564 nova_compute[226295]: 2025-11-29 08:05:52.915 226310 DEBUG oslo_concurrency.processutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:52 np0005539564 nova_compute[226295]: 2025-11-29 08:05:52.960 226310 DEBUG nova.storage.rbd_utils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image 8f9bb224-0119-4a96-9859-d3afda2ab1ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:52 np0005539564 nova_compute[226295]: 2025-11-29 08:05:52.967 226310 DEBUG oslo_concurrency.processutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.130 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403538.129857, 13e6ea90-7ec8-4c88-a019-2683f4e42de1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.132 226310 INFO nova.compute.manager [-] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.160 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.195 226310 DEBUG nova.compute.manager [None req-8ec256b2-3b71-4bb0-8eb7-d0f9584d117d - - - - - -] [instance: 13e6ea90-7ec8-4c88-a019-2683f4e42de1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:53.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:05:53 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/785794356' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:05:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.438 226310 DEBUG oslo_concurrency.processutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:53.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.439 226310 DEBUG nova.virt.libvirt.vif [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-373302301',display_name='tempest-DeleteServersTestJSON-server-373302301',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-373302301',id=80,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8bc2a2616a34ba1a18b3211e406993f',ramdisk_id='',reservation_id='r-xg03xyph',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-69711189',owner_user_name='tempest-DeleteServersTestJSON-69
711189-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:48Z,user_data=None,user_id='ef8e9cc962eb4827954df3c42cc34798',uuid=8f9bb224-0119-4a96-9859-d3afda2ab1ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "address": "fa:16:3e:6c:22:bc", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b53507f-ac", "ovs_interfaceid": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.439 226310 DEBUG nova.network.os_vif_util [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converting VIF {"id": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "address": "fa:16:3e:6c:22:bc", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b53507f-ac", "ovs_interfaceid": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.440 226310 DEBUG nova.network.os_vif_util [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:22:bc,bridge_name='br-int',has_traffic_filtering=True,id=8b53507f-acc1-4e75-a82d-55c4a7d7abd7,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b53507f-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.441 226310 DEBUG nova.objects.instance [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'pci_devices' on Instance uuid 8f9bb224-0119-4a96-9859-d3afda2ab1ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.462 226310 DEBUG nova.virt.libvirt.driver [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:05:53 np0005539564 nova_compute[226295]:  <uuid>8f9bb224-0119-4a96-9859-d3afda2ab1ce</uuid>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:  <name>instance-00000050</name>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <nova:name>tempest-DeleteServersTestJSON-server-373302301</nova:name>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:05:52</nova:creationTime>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:05:53 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:        <nova:user uuid="ef8e9cc962eb4827954df3c42cc34798">tempest-DeleteServersTestJSON-69711189-project-member</nova:user>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:        <nova:project uuid="f8bc2a2616a34ba1a18b3211e406993f">tempest-DeleteServersTestJSON-69711189</nova:project>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:        <nova:port uuid="8b53507f-acc1-4e75-a82d-55c4a7d7abd7">
Nov 29 03:05:53 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <entry name="serial">8f9bb224-0119-4a96-9859-d3afda2ab1ce</entry>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <entry name="uuid">8f9bb224-0119-4a96-9859-d3afda2ab1ce</entry>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/8f9bb224-0119-4a96-9859-d3afda2ab1ce_disk">
Nov 29 03:05:53 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:05:53 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/8f9bb224-0119-4a96-9859-d3afda2ab1ce_disk.config">
Nov 29 03:05:53 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:05:53 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:6c:22:bc"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <target dev="tap8b53507f-ac"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/8f9bb224-0119-4a96-9859-d3afda2ab1ce/console.log" append="off"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:05:53 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:05:53 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:05:53 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:05:53 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.464 226310 DEBUG nova.compute.manager [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Preparing to wait for external event network-vif-plugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.465 226310 DEBUG oslo_concurrency.lockutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.466 226310 DEBUG oslo_concurrency.lockutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.467 226310 DEBUG oslo_concurrency.lockutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.469 226310 DEBUG nova.virt.libvirt.vif [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-373302301',display_name='tempest-DeleteServersTestJSON-server-373302301',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-373302301',id=80,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8bc2a2616a34ba1a18b3211e406993f',ramdisk_id='',reservation_id='r-xg03xyph',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-69711189',owner_user_name='tempest-DeleteServersTestJSON-69711189-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:48Z,user_data=None,user_id='ef8e9cc962eb4827954df3c42cc34798',uuid=8f9bb224-0119-4a96-9859-d3afda2ab1ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "address": "fa:16:3e:6c:22:bc", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b53507f-ac", "ovs_interfaceid": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.469 226310 DEBUG nova.network.os_vif_util [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converting VIF {"id": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "address": "fa:16:3e:6c:22:bc", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b53507f-ac", "ovs_interfaceid": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.471 226310 DEBUG nova.network.os_vif_util [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:22:bc,bridge_name='br-int',has_traffic_filtering=True,id=8b53507f-acc1-4e75-a82d-55c4a7d7abd7,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b53507f-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.472 226310 DEBUG os_vif [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:22:bc,bridge_name='br-int',has_traffic_filtering=True,id=8b53507f-acc1-4e75-a82d-55c4a7d7abd7,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b53507f-ac') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.473 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.474 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.475 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.481 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.481 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b53507f-ac, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.482 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8b53507f-ac, col_values=(('external_ids', {'iface-id': '8b53507f-acc1-4e75-a82d-55c4a7d7abd7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6c:22:bc', 'vm-uuid': '8f9bb224-0119-4a96-9859-d3afda2ab1ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.485 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:53 np0005539564 NetworkManager[48997]: <info>  [1764403553.4862] manager: (tap8b53507f-ac): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/127)
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.488 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.496 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.498 226310 INFO os_vif [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:22:bc,bridge_name='br-int',has_traffic_filtering=True,id=8b53507f-acc1-4e75-a82d-55c4a7d7abd7,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b53507f-ac')#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.556 226310 DEBUG nova.virt.libvirt.driver [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.557 226310 DEBUG nova.virt.libvirt.driver [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.557 226310 DEBUG nova.virt.libvirt.driver [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No VIF found with MAC fa:16:3e:6c:22:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.558 226310 INFO nova.virt.libvirt.driver [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Using config drive#033[00m
Nov 29 03:05:53 np0005539564 nova_compute[226295]: 2025-11-29 08:05:53.598 226310 DEBUG nova.storage.rbd_utils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image 8f9bb224-0119-4a96-9859-d3afda2ab1ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:54 np0005539564 nova_compute[226295]: 2025-11-29 08:05:54.055 226310 DEBUG nova.network.neutron [req-9be725e3-867a-48e2-b7b5-4a22fb3e83c1 req-dd429f43-d2c5-4a9f-ac00-fc06cb90ec1d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Updated VIF entry in instance network info cache for port 8b53507f-acc1-4e75-a82d-55c4a7d7abd7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:05:54 np0005539564 nova_compute[226295]: 2025-11-29 08:05:54.056 226310 DEBUG nova.network.neutron [req-9be725e3-867a-48e2-b7b5-4a22fb3e83c1 req-dd429f43-d2c5-4a9f-ac00-fc06cb90ec1d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Updating instance_info_cache with network_info: [{"id": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "address": "fa:16:3e:6c:22:bc", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b53507f-ac", "ovs_interfaceid": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:05:54 np0005539564 nova_compute[226295]: 2025-11-29 08:05:54.097 226310 DEBUG oslo_concurrency.lockutils [req-9be725e3-867a-48e2-b7b5-4a22fb3e83c1 req-dd429f43-d2c5-4a9f-ac00-fc06cb90ec1d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-8f9bb224-0119-4a96-9859-d3afda2ab1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:05:54 np0005539564 nova_compute[226295]: 2025-11-29 08:05:54.161 226310 INFO nova.virt.libvirt.driver [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Creating config drive at /var/lib/nova/instances/8f9bb224-0119-4a96-9859-d3afda2ab1ce/disk.config#033[00m
Nov 29 03:05:54 np0005539564 nova_compute[226295]: 2025-11-29 08:05:54.167 226310 DEBUG oslo_concurrency.processutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8f9bb224-0119-4a96-9859-d3afda2ab1ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxg7ef9mp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:54 np0005539564 nova_compute[226295]: 2025-11-29 08:05:54.321 226310 DEBUG oslo_concurrency.processutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8f9bb224-0119-4a96-9859-d3afda2ab1ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxg7ef9mp" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:54 np0005539564 nova_compute[226295]: 2025-11-29 08:05:54.371 226310 DEBUG nova.storage.rbd_utils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image 8f9bb224-0119-4a96-9859-d3afda2ab1ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:54 np0005539564 nova_compute[226295]: 2025-11-29 08:05:54.376 226310 DEBUG oslo_concurrency.processutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8f9bb224-0119-4a96-9859-d3afda2ab1ce/disk.config 8f9bb224-0119-4a96-9859-d3afda2ab1ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:54 np0005539564 nova_compute[226295]: 2025-11-29 08:05:54.597 226310 DEBUG oslo_concurrency.processutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8f9bb224-0119-4a96-9859-d3afda2ab1ce/disk.config 8f9bb224-0119-4a96-9859-d3afda2ab1ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.220s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:54 np0005539564 nova_compute[226295]: 2025-11-29 08:05:54.598 226310 INFO nova.virt.libvirt.driver [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Deleting local config drive /var/lib/nova/instances/8f9bb224-0119-4a96-9859-d3afda2ab1ce/disk.config because it was imported into RBD.#033[00m
Nov 29 03:05:54 np0005539564 kernel: tap8b53507f-ac: entered promiscuous mode
Nov 29 03:05:54 np0005539564 ovn_controller[130591]: 2025-11-29T08:05:54Z|00248|binding|INFO|Claiming lport 8b53507f-acc1-4e75-a82d-55c4a7d7abd7 for this chassis.
Nov 29 03:05:54 np0005539564 ovn_controller[130591]: 2025-11-29T08:05:54Z|00249|binding|INFO|8b53507f-acc1-4e75-a82d-55c4a7d7abd7: Claiming fa:16:3e:6c:22:bc 10.100.0.7
Nov 29 03:05:54 np0005539564 NetworkManager[48997]: <info>  [1764403554.6748] manager: (tap8b53507f-ac): new Tun device (/org/freedesktop/NetworkManager/Devices/128)
Nov 29 03:05:54 np0005539564 nova_compute[226295]: 2025-11-29 08:05:54.674 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:54.684 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:22:bc 10.100.0.7'], port_security=['fa:16:3e:6c:22:bc 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '8f9bb224-0119-4a96-9859-d3afda2ab1ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5e42602-d72e-4beb-864d-714bd1635da9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8bc2a2616a34ba1a18b3211e406993f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82ddc102-a213-473f-abf3-dc5f60e4fa79', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46241d46-2f65-4ed5-b860-f30a985d632f, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=8b53507f-acc1-4e75-a82d-55c4a7d7abd7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:05:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:54.685 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 8b53507f-acc1-4e75-a82d-55c4a7d7abd7 in datapath d5e42602-d72e-4beb-864d-714bd1635da9 bound to our chassis#033[00m
Nov 29 03:05:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:54.686 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d5e42602-d72e-4beb-864d-714bd1635da9#033[00m
Nov 29 03:05:54 np0005539564 systemd-udevd[255722]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:05:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:54.705 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[fa09d7b4-08b7-41da-8694-f43bfdd8c2e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:54.706 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd5e42602-d1 in ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:05:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:54.712 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd5e42602-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:05:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:54.712 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0b2f9572-bead-44e9-8058-801e295a62d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:54.713 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c5faf7b4-8f0b-4d5c-aac6-7c9c3a6fcb54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:54 np0005539564 NetworkManager[48997]: <info>  [1764403554.7178] device (tap8b53507f-ac): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:05:54 np0005539564 NetworkManager[48997]: <info>  [1764403554.7190] device (tap8b53507f-ac): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:05:54 np0005539564 systemd-machined[190128]: New machine qemu-34-instance-00000050.
Nov 29 03:05:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:54.733 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[ff826459-a210-46bc-9f08-460418b2afcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:54 np0005539564 systemd[1]: Started Virtual Machine qemu-34-instance-00000050.
Nov 29 03:05:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:54.761 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4669065b-a523-42e8-a13a-c88e6102e8f4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:54 np0005539564 nova_compute[226295]: 2025-11-29 08:05:54.771 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:54 np0005539564 ovn_controller[130591]: 2025-11-29T08:05:54Z|00250|binding|INFO|Setting lport 8b53507f-acc1-4e75-a82d-55c4a7d7abd7 ovn-installed in OVS
Nov 29 03:05:54 np0005539564 ovn_controller[130591]: 2025-11-29T08:05:54Z|00251|binding|INFO|Setting lport 8b53507f-acc1-4e75-a82d-55c4a7d7abd7 up in Southbound
Nov 29 03:05:54 np0005539564 nova_compute[226295]: 2025-11-29 08:05:54.776 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:54.801 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[3de38200-fe4b-40c0-a5bc-9d317ed7181d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:54 np0005539564 NetworkManager[48997]: <info>  [1764403554.8098] manager: (tapd5e42602-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/129)
Nov 29 03:05:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:54.809 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[406dd97c-fce6-4070-9e5d-abab664b6498]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:54.836 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[fe7eb226-139b-494e-8665-3bfa92f052b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:54.839 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa29fa4-45eb-435c-87e5-bcd734137654]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:54 np0005539564 NetworkManager[48997]: <info>  [1764403554.8688] device (tapd5e42602-d0): carrier: link connected
Nov 29 03:05:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:54.875 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[7cd36f15-37cb-4c5e-81c4-fd247a10b147]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:54.898 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f6875f81-0e93-4b34-b894-a67defd04694]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5e42602-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:37:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652954, 'reachable_time': 43405, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255757, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:54.922 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[71e068ad-dcb8-42bc-912a-9cb557055272]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecd:370b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652954, 'tstamp': 652954}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255758, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:54.947 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[02a1c675-b62a-498c-8926-56010358ee84]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5e42602-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:37:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652954, 'reachable_time': 43405, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255759, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:54.989 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[92a1d235-d29c-4c26-9a26-baba7bd1e833]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:55.069 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3705450e-e092-46bc-be2a-7fa4edcc31f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:55.071 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5e42602-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:55.072 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:55.073 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5e42602-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.075 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:55 np0005539564 kernel: tapd5e42602-d0: entered promiscuous mode
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.077 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:55 np0005539564 NetworkManager[48997]: <info>  [1764403555.0785] manager: (tapd5e42602-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/130)
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:55.079 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd5e42602-d0, col_values=(('external_ids', {'iface-id': 'b61ef3f5-e0b1-44f8-9b21-acba8a1ead2e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:55 np0005539564 ovn_controller[130591]: 2025-11-29T08:05:55Z|00252|binding|INFO|Releasing lport b61ef3f5-e0b1-44f8-9b21-acba8a1ead2e from this chassis (sb_readonly=0)
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.080 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.081 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:55.082 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d5e42602-d72e-4beb-864d-714bd1635da9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d5e42602-d72e-4beb-864d-714bd1635da9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:55.083 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[79bfd28e-1774-4419-a0d1-3efd50845c15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:55.084 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-d5e42602-d72e-4beb-864d-714bd1635da9
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/d5e42602-d72e-4beb-864d-714bd1635da9.pid.haproxy
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID d5e42602-d72e-4beb-864d-714bd1635da9
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:05:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:05:55.085 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'env', 'PROCESS_TAG=haproxy-d5e42602-d72e-4beb-864d-714bd1635da9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d5e42602-d72e-4beb-864d-714bd1635da9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.094 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:55.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.389 226310 DEBUG nova.compute.manager [req-33a40fa1-13ea-4f5d-b5ff-eb7ffe574883 req-a13fcbbb-1f2a-465e-a8a7-30b61f2c8ed8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Received event network-vif-plugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.390 226310 DEBUG oslo_concurrency.lockutils [req-33a40fa1-13ea-4f5d-b5ff-eb7ffe574883 req-a13fcbbb-1f2a-465e-a8a7-30b61f2c8ed8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.390 226310 DEBUG oslo_concurrency.lockutils [req-33a40fa1-13ea-4f5d-b5ff-eb7ffe574883 req-a13fcbbb-1f2a-465e-a8a7-30b61f2c8ed8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.391 226310 DEBUG oslo_concurrency.lockutils [req-33a40fa1-13ea-4f5d-b5ff-eb7ffe574883 req-a13fcbbb-1f2a-465e-a8a7-30b61f2c8ed8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.391 226310 DEBUG nova.compute.manager [req-33a40fa1-13ea-4f5d-b5ff-eb7ffe574883 req-a13fcbbb-1f2a-465e-a8a7-30b61f2c8ed8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Processing event network-vif-plugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:05:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:05:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:55.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:05:55 np0005539564 podman[255804]: 2025-11-29 08:05:55.487520782 +0000 UTC m=+0.040874564 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:05:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:55 np0005539564 podman[255804]: 2025-11-29 08:05:55.631146326 +0000 UTC m=+0.184500078 container create 7f4a5b2fef5c8a604117182bfb4b1c23b4b3e34e5acb320cbdb341b3a255facc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:05:55 np0005539564 systemd[1]: Started libpod-conmon-7f4a5b2fef5c8a604117182bfb4b1c23b4b3e34e5acb320cbdb341b3a255facc.scope.
Nov 29 03:05:55 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:05:55 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1b826c6f1882d8ae1bf4c613886bb845bd0b57c4fc7deeed2b6f932a9a23dc9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:55 np0005539564 podman[255804]: 2025-11-29 08:05:55.718240045 +0000 UTC m=+0.271593767 container init 7f4a5b2fef5c8a604117182bfb4b1c23b4b3e34e5acb320cbdb341b3a255facc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.719 226310 DEBUG nova.compute.manager [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.720 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403555.718795, 8f9bb224-0119-4a96-9859-d3afda2ab1ce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.720 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] VM Started (Lifecycle Event)#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.723 226310 DEBUG nova.virt.libvirt.driver [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:05:55 np0005539564 podman[255804]: 2025-11-29 08:05:55.725220073 +0000 UTC m=+0.278573835 container start 7f4a5b2fef5c8a604117182bfb4b1c23b4b3e34e5acb320cbdb341b3a255facc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.726 226310 INFO nova.virt.libvirt.driver [-] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Instance spawned successfully.#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.726 226310 DEBUG nova.virt.libvirt.driver [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:05:55 np0005539564 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[255846]: [NOTICE]   (255851) : New worker (255853) forked
Nov 29 03:05:55 np0005539564 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[255846]: [NOTICE]   (255851) : Loading success.
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.749 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.757 226310 DEBUG nova.virt.libvirt.driver [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.758 226310 DEBUG nova.virt.libvirt.driver [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.758 226310 DEBUG nova.virt.libvirt.driver [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.759 226310 DEBUG nova.virt.libvirt.driver [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.759 226310 DEBUG nova.virt.libvirt.driver [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.760 226310 DEBUG nova.virt.libvirt.driver [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.764 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.790 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.790 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403555.718951, 8f9bb224-0119-4a96-9859-d3afda2ab1ce => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.791 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.812 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.817 226310 INFO nova.compute.manager [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Took 7.33 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.817 226310 DEBUG nova.compute.manager [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.819 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403555.7231038, 8f9bb224-0119-4a96-9859-d3afda2ab1ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.819 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.847 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.858 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.877 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.902 226310 INFO nova.compute.manager [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Took 8.40 seconds to build instance.#033[00m
Nov 29 03:05:55 np0005539564 nova_compute[226295]: 2025-11-29 08:05:55.923 226310 DEBUG oslo_concurrency.lockutils [None req-5a6d3e8f-58bd-437c-934b-00a8ed9d04f9 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.538s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:56 np0005539564 nova_compute[226295]: 2025-11-29 08:05:56.657 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:57.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:57.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:57 np0005539564 nova_compute[226295]: 2025-11-29 08:05:57.732 226310 DEBUG nova.compute.manager [req-8ce41535-c805-4686-98bf-ae83903157c9 req-35456070-7672-431e-9705-205925924ae1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Received event network-vif-plugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:57 np0005539564 nova_compute[226295]: 2025-11-29 08:05:57.733 226310 DEBUG oslo_concurrency.lockutils [req-8ce41535-c805-4686-98bf-ae83903157c9 req-35456070-7672-431e-9705-205925924ae1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:57 np0005539564 nova_compute[226295]: 2025-11-29 08:05:57.734 226310 DEBUG oslo_concurrency.lockutils [req-8ce41535-c805-4686-98bf-ae83903157c9 req-35456070-7672-431e-9705-205925924ae1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:57 np0005539564 nova_compute[226295]: 2025-11-29 08:05:57.734 226310 DEBUG oslo_concurrency.lockutils [req-8ce41535-c805-4686-98bf-ae83903157c9 req-35456070-7672-431e-9705-205925924ae1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:57 np0005539564 nova_compute[226295]: 2025-11-29 08:05:57.734 226310 DEBUG nova.compute.manager [req-8ce41535-c805-4686-98bf-ae83903157c9 req-35456070-7672-431e-9705-205925924ae1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] No waiting events found dispatching network-vif-plugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:05:57 np0005539564 nova_compute[226295]: 2025-11-29 08:05:57.735 226310 WARNING nova.compute.manager [req-8ce41535-c805-4686-98bf-ae83903157c9 req-35456070-7672-431e-9705-205925924ae1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Received unexpected event network-vif-plugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 for instance with vm_state active and task_state resize_prep.#033[00m
Nov 29 03:05:58 np0005539564 nova_compute[226295]: 2025-11-29 08:05:58.487 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:05:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:59.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:05:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:05:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:59.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:01 np0005539564 nova_compute[226295]: 2025-11-29 08:06:01.143 226310 DEBUG oslo_concurrency.lockutils [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "refresh_cache-8f9bb224-0119-4a96-9859-d3afda2ab1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:06:01 np0005539564 nova_compute[226295]: 2025-11-29 08:06:01.143 226310 DEBUG oslo_concurrency.lockutils [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquired lock "refresh_cache-8f9bb224-0119-4a96-9859-d3afda2ab1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:06:01 np0005539564 nova_compute[226295]: 2025-11-29 08:06:01.144 226310 DEBUG nova.network.neutron [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:06:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:01.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:01.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:01 np0005539564 nova_compute[226295]: 2025-11-29 08:06:01.659 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:03.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:03.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:06:03 np0005539564 nova_compute[226295]: 2025-11-29 08:06:03.513 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:06:03.715 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:06:03.716 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:06:03.717 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:05.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:05.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:06 np0005539564 nova_compute[226295]: 2025-11-29 08:06:06.125 226310 DEBUG nova.network.neutron [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Updating instance_info_cache with network_info: [{"id": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "address": "fa:16:3e:6c:22:bc", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b53507f-ac", "ovs_interfaceid": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:06 np0005539564 nova_compute[226295]: 2025-11-29 08:06:06.144 226310 DEBUG oslo_concurrency.lockutils [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Releasing lock "refresh_cache-8f9bb224-0119-4a96-9859-d3afda2ab1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:06:06 np0005539564 nova_compute[226295]: 2025-11-29 08:06:06.216 226310 DEBUG nova.virt.libvirt.driver [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Nov 29 03:06:06 np0005539564 nova_compute[226295]: 2025-11-29 08:06:06.216 226310 DEBUG nova.virt.libvirt.volume.remotefs [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Creating file /var/lib/nova/instances/8f9bb224-0119-4a96-9859-d3afda2ab1ce/f8afba9a0f704b08a03926b2e5821bc4.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Nov 29 03:06:06 np0005539564 nova_compute[226295]: 2025-11-29 08:06:06.217 226310 DEBUG oslo_concurrency.processutils [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/8f9bb224-0119-4a96-9859-d3afda2ab1ce/f8afba9a0f704b08a03926b2e5821bc4.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:06 np0005539564 nova_compute[226295]: 2025-11-29 08:06:06.662 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:06 np0005539564 nova_compute[226295]: 2025-11-29 08:06:06.715 226310 DEBUG oslo_concurrency.processutils [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/8f9bb224-0119-4a96-9859-d3afda2ab1ce/f8afba9a0f704b08a03926b2e5821bc4.tmp" returned: 1 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:06 np0005539564 nova_compute[226295]: 2025-11-29 08:06:06.716 226310 DEBUG oslo_concurrency.processutils [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/8f9bb224-0119-4a96-9859-d3afda2ab1ce/f8afba9a0f704b08a03926b2e5821bc4.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 03:06:06 np0005539564 nova_compute[226295]: 2025-11-29 08:06:06.717 226310 DEBUG nova.virt.libvirt.volume.remotefs [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Creating directory /var/lib/nova/instances/8f9bb224-0119-4a96-9859-d3afda2ab1ce on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Nov 29 03:06:06 np0005539564 nova_compute[226295]: 2025-11-29 08:06:06.717 226310 DEBUG oslo_concurrency.processutils [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/8f9bb224-0119-4a96-9859-d3afda2ab1ce execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:06 np0005539564 nova_compute[226295]: 2025-11-29 08:06:06.956 226310 DEBUG oslo_concurrency.processutils [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/8f9bb224-0119-4a96-9859-d3afda2ab1ce" returned: 0 in 0.239s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:06 np0005539564 nova_compute[226295]: 2025-11-29 08:06:06.963 226310 DEBUG nova.virt.libvirt.driver [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:06:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:07.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:07.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:08 np0005539564 nova_compute[226295]: 2025-11-29 08:06:08.515 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:09.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:09.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:10 np0005539564 ovn_controller[130591]: 2025-11-29T08:06:10Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6c:22:bc 10.100.0.7
Nov 29 03:06:10 np0005539564 ovn_controller[130591]: 2025-11-29T08:06:10Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6c:22:bc 10.100.0.7
Nov 29 03:06:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:11.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:11.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:06:11 np0005539564 nova_compute[226295]: 2025-11-29 08:06:11.665 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:13.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:13.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:06:13 np0005539564 nova_compute[226295]: 2025-11-29 08:06:13.520 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:15.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:15.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:16 np0005539564 nova_compute[226295]: 2025-11-29 08:06:16.667 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:17 np0005539564 nova_compute[226295]: 2025-11-29 08:06:17.019 226310 DEBUG nova.virt.libvirt.driver [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:06:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:17.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:17.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:06:18 np0005539564 nova_compute[226295]: 2025-11-29 08:06:18.525 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:19.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:19 np0005539564 kernel: tap8b53507f-ac (unregistering): left promiscuous mode
Nov 29 03:06:19 np0005539564 NetworkManager[48997]: <info>  [1764403579.3249] device (tap8b53507f-ac): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:06:19 np0005539564 ovn_controller[130591]: 2025-11-29T08:06:19Z|00253|binding|INFO|Releasing lport 8b53507f-acc1-4e75-a82d-55c4a7d7abd7 from this chassis (sb_readonly=0)
Nov 29 03:06:19 np0005539564 nova_compute[226295]: 2025-11-29 08:06:19.335 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:19 np0005539564 ovn_controller[130591]: 2025-11-29T08:06:19Z|00254|binding|INFO|Setting lport 8b53507f-acc1-4e75-a82d-55c4a7d7abd7 down in Southbound
Nov 29 03:06:19 np0005539564 ovn_controller[130591]: 2025-11-29T08:06:19Z|00255|binding|INFO|Removing iface tap8b53507f-ac ovn-installed in OVS
Nov 29 03:06:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:06:19.342 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:22:bc 10.100.0.7'], port_security=['fa:16:3e:6c:22:bc 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '8f9bb224-0119-4a96-9859-d3afda2ab1ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5e42602-d72e-4beb-864d-714bd1635da9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8bc2a2616a34ba1a18b3211e406993f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82ddc102-a213-473f-abf3-dc5f60e4fa79', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46241d46-2f65-4ed5-b860-f30a985d632f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=8b53507f-acc1-4e75-a82d-55c4a7d7abd7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:06:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:06:19.344 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 8b53507f-acc1-4e75-a82d-55c4a7d7abd7 in datapath d5e42602-d72e-4beb-864d-714bd1635da9 unbound from our chassis#033[00m
Nov 29 03:06:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:06:19.346 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d5e42602-d72e-4beb-864d-714bd1635da9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:06:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:06:19.347 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[08b74d7f-9e65-4305-bcc6-8e069a9fe87b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:06:19.348 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 namespace which is not needed anymore#033[00m
Nov 29 03:06:19 np0005539564 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000050.scope: Deactivated successfully.
Nov 29 03:06:19 np0005539564 nova_compute[226295]: 2025-11-29 08:06:19.426 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:19 np0005539564 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000050.scope: Consumed 15.364s CPU time.
Nov 29 03:06:19 np0005539564 systemd-machined[190128]: Machine qemu-34-instance-00000050 terminated.
Nov 29 03:06:19 np0005539564 podman[255867]: 2025-11-29 08:06:19.476922291 +0000 UTC m=+0.101014535 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:06:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:19.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:19 np0005539564 podman[255869]: 2025-11-29 08:06:19.480656362 +0000 UTC m=+0.108240160 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 03:06:19 np0005539564 podman[255866]: 2025-11-29 08:06:19.497115016 +0000 UTC m=+0.132773502 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:06:19 np0005539564 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[255846]: [NOTICE]   (255851) : haproxy version is 2.8.14-c23fe91
Nov 29 03:06:19 np0005539564 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[255846]: [NOTICE]   (255851) : path to executable is /usr/sbin/haproxy
Nov 29 03:06:19 np0005539564 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[255846]: [WARNING]  (255851) : Exiting Master process...
Nov 29 03:06:19 np0005539564 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[255846]: [ALERT]    (255851) : Current worker (255853) exited with code 143 (Terminated)
Nov 29 03:06:19 np0005539564 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[255846]: [WARNING]  (255851) : All workers exited. Exiting... (0)
Nov 29 03:06:19 np0005539564 systemd[1]: libpod-7f4a5b2fef5c8a604117182bfb4b1c23b4b3e34e5acb320cbdb341b3a255facc.scope: Deactivated successfully.
Nov 29 03:06:19 np0005539564 podman[255943]: 2025-11-29 08:06:19.531667028 +0000 UTC m=+0.053514055 container died 7f4a5b2fef5c8a604117182bfb4b1c23b4b3e34e5acb320cbdb341b3a255facc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:06:19 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7f4a5b2fef5c8a604117182bfb4b1c23b4b3e34e5acb320cbdb341b3a255facc-userdata-shm.mount: Deactivated successfully.
Nov 29 03:06:19 np0005539564 systemd[1]: var-lib-containers-storage-overlay-f1b826c6f1882d8ae1bf4c613886bb845bd0b57c4fc7deeed2b6f932a9a23dc9-merged.mount: Deactivated successfully.
Nov 29 03:06:19 np0005539564 podman[255943]: 2025-11-29 08:06:19.573555847 +0000 UTC m=+0.095402874 container cleanup 7f4a5b2fef5c8a604117182bfb4b1c23b4b3e34e5acb320cbdb341b3a255facc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:06:19 np0005539564 systemd[1]: libpod-conmon-7f4a5b2fef5c8a604117182bfb4b1c23b4b3e34e5acb320cbdb341b3a255facc.scope: Deactivated successfully.
Nov 29 03:06:19 np0005539564 podman[255991]: 2025-11-29 08:06:19.631597333 +0000 UTC m=+0.038559301 container remove 7f4a5b2fef5c8a604117182bfb4b1c23b4b3e34e5acb320cbdb341b3a255facc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:06:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:06:19.636 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7b0bd9d1-b843-4064-80d3-a91d5700f544]: (4, ('Sat Nov 29 08:06:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 (7f4a5b2fef5c8a604117182bfb4b1c23b4b3e34e5acb320cbdb341b3a255facc)\n7f4a5b2fef5c8a604117182bfb4b1c23b4b3e34e5acb320cbdb341b3a255facc\nSat Nov 29 08:06:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 (7f4a5b2fef5c8a604117182bfb4b1c23b4b3e34e5acb320cbdb341b3a255facc)\n7f4a5b2fef5c8a604117182bfb4b1c23b4b3e34e5acb320cbdb341b3a255facc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:06:19.638 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[589e04f7-58f4-421c-8509-7ef4c84f550e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:06:19.639 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5e42602-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:19 np0005539564 kernel: tapd5e42602-d0: left promiscuous mode
Nov 29 03:06:19 np0005539564 nova_compute[226295]: 2025-11-29 08:06:19.641 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:19 np0005539564 nova_compute[226295]: 2025-11-29 08:06:19.658 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:19 np0005539564 nova_compute[226295]: 2025-11-29 08:06:19.660 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:06:19.666 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2e1e2b8d-6eb6-4a4a-b02c-382d7efbb7f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:06:19.682 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6d3a85c5-e8bc-4564-811e-51d275518773]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:06:19.684 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a98250c7-f88d-4a2c-a30b-799d27661f2d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:06:19.710 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0a0f73-b7d8-4278-b078-2ab38392beee]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652947, 'reachable_time': 37061, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256010, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:06:19.716 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:06:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:06:19.716 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[3e07de77-061f-4669-b019-d54b6d5a4374]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:19 np0005539564 systemd[1]: run-netns-ovnmeta\x2dd5e42602\x2dd72e\x2d4beb\x2d864d\x2d714bd1635da9.mount: Deactivated successfully.
Nov 29 03:06:19 np0005539564 nova_compute[226295]: 2025-11-29 08:06:19.971 226310 DEBUG nova.compute.manager [req-e19d496d-28ee-4d20-98df-e9f91fb178d3 req-b7f477a3-0eea-47c5-954b-5dc0943609f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Received event network-vif-unplugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:19 np0005539564 nova_compute[226295]: 2025-11-29 08:06:19.972 226310 DEBUG oslo_concurrency.lockutils [req-e19d496d-28ee-4d20-98df-e9f91fb178d3 req-b7f477a3-0eea-47c5-954b-5dc0943609f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:19 np0005539564 nova_compute[226295]: 2025-11-29 08:06:19.972 226310 DEBUG oslo_concurrency.lockutils [req-e19d496d-28ee-4d20-98df-e9f91fb178d3 req-b7f477a3-0eea-47c5-954b-5dc0943609f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:19 np0005539564 nova_compute[226295]: 2025-11-29 08:06:19.973 226310 DEBUG oslo_concurrency.lockutils [req-e19d496d-28ee-4d20-98df-e9f91fb178d3 req-b7f477a3-0eea-47c5-954b-5dc0943609f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:19 np0005539564 nova_compute[226295]: 2025-11-29 08:06:19.973 226310 DEBUG nova.compute.manager [req-e19d496d-28ee-4d20-98df-e9f91fb178d3 req-b7f477a3-0eea-47c5-954b-5dc0943609f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] No waiting events found dispatching network-vif-unplugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:19 np0005539564 nova_compute[226295]: 2025-11-29 08:06:19.974 226310 WARNING nova.compute.manager [req-e19d496d-28ee-4d20-98df-e9f91fb178d3 req-b7f477a3-0eea-47c5-954b-5dc0943609f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Received unexpected event network-vif-unplugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:06:20 np0005539564 nova_compute[226295]: 2025-11-29 08:06:20.037 226310 INFO nova.virt.libvirt.driver [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Instance shutdown successfully after 13 seconds.#033[00m
Nov 29 03:06:20 np0005539564 nova_compute[226295]: 2025-11-29 08:06:20.045 226310 INFO nova.virt.libvirt.driver [-] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Instance destroyed successfully.#033[00m
Nov 29 03:06:20 np0005539564 nova_compute[226295]: 2025-11-29 08:06:20.047 226310 DEBUG nova.virt.libvirt.vif [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:05:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-373302301',display_name='tempest-DeleteServersTestJSON-server-373302301',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-373302301',id=80,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:05:55Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f8bc2a2616a34ba1a18b3211e406993f',ramdisk_id='',reservation_id='r-xg03xyph',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_
model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-69711189',owner_user_name='tempest-DeleteServersTestJSON-69711189-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:06:00Z,user_data=None,user_id='ef8e9cc962eb4827954df3c42cc34798',uuid=8f9bb224-0119-4a96-9859-d3afda2ab1ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "address": "fa:16:3e:6c:22:bc", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-2144636506-network", "vif_mac": "fa:16:3e:6c:22:bc"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b53507f-ac", "ovs_interfaceid": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:06:20 np0005539564 nova_compute[226295]: 2025-11-29 08:06:20.048 226310 DEBUG nova.network.os_vif_util [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converting VIF {"id": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "address": "fa:16:3e:6c:22:bc", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-2144636506-network", "vif_mac": "fa:16:3e:6c:22:bc"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b53507f-ac", "ovs_interfaceid": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:20 np0005539564 nova_compute[226295]: 2025-11-29 08:06:20.049 226310 DEBUG nova.network.os_vif_util [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6c:22:bc,bridge_name='br-int',has_traffic_filtering=True,id=8b53507f-acc1-4e75-a82d-55c4a7d7abd7,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b53507f-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:06:20 np0005539564 nova_compute[226295]: 2025-11-29 08:06:20.050 226310 DEBUG os_vif [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:22:bc,bridge_name='br-int',has_traffic_filtering=True,id=8b53507f-acc1-4e75-a82d-55c4a7d7abd7,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b53507f-ac') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:06:20 np0005539564 nova_compute[226295]: 2025-11-29 08:06:20.056 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:20 np0005539564 nova_compute[226295]: 2025-11-29 08:06:20.057 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b53507f-ac, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:20 np0005539564 nova_compute[226295]: 2025-11-29 08:06:20.060 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:20 np0005539564 nova_compute[226295]: 2025-11-29 08:06:20.062 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:06:20 np0005539564 nova_compute[226295]: 2025-11-29 08:06:20.065 226310 INFO os_vif [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:22:bc,bridge_name='br-int',has_traffic_filtering=True,id=8b53507f-acc1-4e75-a82d-55c4a7d7abd7,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b53507f-ac')#033[00m
Nov 29 03:06:20 np0005539564 nova_compute[226295]: 2025-11-29 08:06:20.071 226310 DEBUG nova.virt.libvirt.driver [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:06:20 np0005539564 nova_compute[226295]: 2025-11-29 08:06:20.072 226310 DEBUG nova.virt.libvirt.driver [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:06:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:20 np0005539564 nova_compute[226295]: 2025-11-29 08:06:20.719 226310 DEBUG neutronclient.v2_0.client [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 8b53507f-acc1-4e75-a82d-55c4a7d7abd7 for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:06:20 np0005539564 nova_compute[226295]: 2025-11-29 08:06:20.843 226310 DEBUG oslo_concurrency.lockutils [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:20 np0005539564 nova_compute[226295]: 2025-11-29 08:06:20.844 226310 DEBUG oslo_concurrency.lockutils [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:20 np0005539564 nova_compute[226295]: 2025-11-29 08:06:20.845 226310 DEBUG oslo_concurrency.lockutils [None req-c6b57dd5-ebe0-46fa-9318-e22d518c6211 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:21.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:21.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:06:21 np0005539564 nova_compute[226295]: 2025-11-29 08:06:21.670 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:22 np0005539564 nova_compute[226295]: 2025-11-29 08:06:22.062 226310 DEBUG nova.compute.manager [req-0131e6c1-b887-421e-8cc7-92af8b40b8d2 req-82086366-982c-455c-9942-fee4ec06d576 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Received event network-vif-plugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:22 np0005539564 nova_compute[226295]: 2025-11-29 08:06:22.062 226310 DEBUG oslo_concurrency.lockutils [req-0131e6c1-b887-421e-8cc7-92af8b40b8d2 req-82086366-982c-455c-9942-fee4ec06d576 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:22 np0005539564 nova_compute[226295]: 2025-11-29 08:06:22.062 226310 DEBUG oslo_concurrency.lockutils [req-0131e6c1-b887-421e-8cc7-92af8b40b8d2 req-82086366-982c-455c-9942-fee4ec06d576 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:22 np0005539564 nova_compute[226295]: 2025-11-29 08:06:22.062 226310 DEBUG oslo_concurrency.lockutils [req-0131e6c1-b887-421e-8cc7-92af8b40b8d2 req-82086366-982c-455c-9942-fee4ec06d576 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:22 np0005539564 nova_compute[226295]: 2025-11-29 08:06:22.063 226310 DEBUG nova.compute.manager [req-0131e6c1-b887-421e-8cc7-92af8b40b8d2 req-82086366-982c-455c-9942-fee4ec06d576 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] No waiting events found dispatching network-vif-plugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:22 np0005539564 nova_compute[226295]: 2025-11-29 08:06:22.063 226310 WARNING nova.compute.manager [req-0131e6c1-b887-421e-8cc7-92af8b40b8d2 req-82086366-982c-455c-9942-fee4ec06d576 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Received unexpected event network-vif-plugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:06:22 np0005539564 nova_compute[226295]: 2025-11-29 08:06:22.288 226310 DEBUG nova.compute.manager [req-cd3aaec5-12f0-47ad-a0a7-050738034469 req-13f615f9-68a6-4fb8-8597-5344ca15e628 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Received event network-changed-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:22 np0005539564 nova_compute[226295]: 2025-11-29 08:06:22.288 226310 DEBUG nova.compute.manager [req-cd3aaec5-12f0-47ad-a0a7-050738034469 req-13f615f9-68a6-4fb8-8597-5344ca15e628 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Refreshing instance network info cache due to event network-changed-8b53507f-acc1-4e75-a82d-55c4a7d7abd7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:06:22 np0005539564 nova_compute[226295]: 2025-11-29 08:06:22.288 226310 DEBUG oslo_concurrency.lockutils [req-cd3aaec5-12f0-47ad-a0a7-050738034469 req-13f615f9-68a6-4fb8-8597-5344ca15e628 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-8f9bb224-0119-4a96-9859-d3afda2ab1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:06:22 np0005539564 nova_compute[226295]: 2025-11-29 08:06:22.289 226310 DEBUG oslo_concurrency.lockutils [req-cd3aaec5-12f0-47ad-a0a7-050738034469 req-13f615f9-68a6-4fb8-8597-5344ca15e628 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-8f9bb224-0119-4a96-9859-d3afda2ab1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:06:22 np0005539564 nova_compute[226295]: 2025-11-29 08:06:22.289 226310 DEBUG nova.network.neutron [req-cd3aaec5-12f0-47ad-a0a7-050738034469 req-13f615f9-68a6-4fb8-8597-5344ca15e628 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Refreshing network info cache for port 8b53507f-acc1-4e75-a82d-55c4a7d7abd7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:06:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:23.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:23.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:06:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:06:24.205 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:06:24 np0005539564 nova_compute[226295]: 2025-11-29 08:06:24.206 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:06:24.207 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:06:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e257 e257: 3 total, 3 up, 3 in
Nov 29 03:06:24 np0005539564 nova_compute[226295]: 2025-11-29 08:06:24.851 226310 DEBUG nova.network.neutron [req-cd3aaec5-12f0-47ad-a0a7-050738034469 req-13f615f9-68a6-4fb8-8597-5344ca15e628 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Updated VIF entry in instance network info cache for port 8b53507f-acc1-4e75-a82d-55c4a7d7abd7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:06:24 np0005539564 nova_compute[226295]: 2025-11-29 08:06:24.851 226310 DEBUG nova.network.neutron [req-cd3aaec5-12f0-47ad-a0a7-050738034469 req-13f615f9-68a6-4fb8-8597-5344ca15e628 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Updating instance_info_cache with network_info: [{"id": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "address": "fa:16:3e:6c:22:bc", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b53507f-ac", "ovs_interfaceid": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:24 np0005539564 nova_compute[226295]: 2025-11-29 08:06:24.872 226310 DEBUG oslo_concurrency.lockutils [req-cd3aaec5-12f0-47ad-a0a7-050738034469 req-13f615f9-68a6-4fb8-8597-5344ca15e628 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-8f9bb224-0119-4a96-9859-d3afda2ab1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:06:25 np0005539564 nova_compute[226295]: 2025-11-29 08:06:25.060 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:25.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:25.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:26 np0005539564 nova_compute[226295]: 2025-11-29 08:06:26.673 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:27 np0005539564 nova_compute[226295]: 2025-11-29 08:06:27.181 226310 DEBUG oslo_concurrency.lockutils [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:27 np0005539564 nova_compute[226295]: 2025-11-29 08:06:27.181 226310 DEBUG oslo_concurrency.lockutils [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:27 np0005539564 nova_compute[226295]: 2025-11-29 08:06:27.182 226310 DEBUG nova.compute.manager [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Going to confirm migration 11 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Nov 29 03:06:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:27.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:27.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:27 np0005539564 nova_compute[226295]: 2025-11-29 08:06:27.677 226310 DEBUG neutronclient.v2_0.client [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 8b53507f-acc1-4e75-a82d-55c4a7d7abd7 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:06:27 np0005539564 nova_compute[226295]: 2025-11-29 08:06:27.678 226310 DEBUG oslo_concurrency.lockutils [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "refresh_cache-8f9bb224-0119-4a96-9859-d3afda2ab1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:06:27 np0005539564 nova_compute[226295]: 2025-11-29 08:06:27.678 226310 DEBUG oslo_concurrency.lockutils [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquired lock "refresh_cache-8f9bb224-0119-4a96-9859-d3afda2ab1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:06:27 np0005539564 nova_compute[226295]: 2025-11-29 08:06:27.678 226310 DEBUG nova.network.neutron [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:06:27 np0005539564 nova_compute[226295]: 2025-11-29 08:06:27.678 226310 DEBUG nova.objects.instance [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'info_cache' on Instance uuid 8f9bb224-0119-4a96-9859-d3afda2ab1ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:27 np0005539564 nova_compute[226295]: 2025-11-29 08:06:27.830 226310 DEBUG nova.compute.manager [req-95f11cd6-04da-4cbd-804f-5a1964f7bd3e req-f2622ee9-94cf-447e-8869-7d8091c3a0f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Received event network-vif-plugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:27 np0005539564 nova_compute[226295]: 2025-11-29 08:06:27.831 226310 DEBUG oslo_concurrency.lockutils [req-95f11cd6-04da-4cbd-804f-5a1964f7bd3e req-f2622ee9-94cf-447e-8869-7d8091c3a0f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:27 np0005539564 nova_compute[226295]: 2025-11-29 08:06:27.831 226310 DEBUG oslo_concurrency.lockutils [req-95f11cd6-04da-4cbd-804f-5a1964f7bd3e req-f2622ee9-94cf-447e-8869-7d8091c3a0f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:27 np0005539564 nova_compute[226295]: 2025-11-29 08:06:27.832 226310 DEBUG oslo_concurrency.lockutils [req-95f11cd6-04da-4cbd-804f-5a1964f7bd3e req-f2622ee9-94cf-447e-8869-7d8091c3a0f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:27 np0005539564 nova_compute[226295]: 2025-11-29 08:06:27.832 226310 DEBUG nova.compute.manager [req-95f11cd6-04da-4cbd-804f-5a1964f7bd3e req-f2622ee9-94cf-447e-8869-7d8091c3a0f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] No waiting events found dispatching network-vif-plugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:27 np0005539564 nova_compute[226295]: 2025-11-29 08:06:27.833 226310 WARNING nova.compute.manager [req-95f11cd6-04da-4cbd-804f-5a1964f7bd3e req-f2622ee9-94cf-447e-8869-7d8091c3a0f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Received unexpected event network-vif-plugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 for instance with vm_state resized and task_state deleting.#033[00m
Nov 29 03:06:27 np0005539564 nova_compute[226295]: 2025-11-29 08:06:27.833 226310 DEBUG nova.compute.manager [req-95f11cd6-04da-4cbd-804f-5a1964f7bd3e req-f2622ee9-94cf-447e-8869-7d8091c3a0f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Received event network-vif-plugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:27 np0005539564 nova_compute[226295]: 2025-11-29 08:06:27.834 226310 DEBUG oslo_concurrency.lockutils [req-95f11cd6-04da-4cbd-804f-5a1964f7bd3e req-f2622ee9-94cf-447e-8869-7d8091c3a0f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:27 np0005539564 nova_compute[226295]: 2025-11-29 08:06:27.834 226310 DEBUG oslo_concurrency.lockutils [req-95f11cd6-04da-4cbd-804f-5a1964f7bd3e req-f2622ee9-94cf-447e-8869-7d8091c3a0f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:27 np0005539564 nova_compute[226295]: 2025-11-29 08:06:27.835 226310 DEBUG oslo_concurrency.lockutils [req-95f11cd6-04da-4cbd-804f-5a1964f7bd3e req-f2622ee9-94cf-447e-8869-7d8091c3a0f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:27 np0005539564 nova_compute[226295]: 2025-11-29 08:06:27.835 226310 DEBUG nova.compute.manager [req-95f11cd6-04da-4cbd-804f-5a1964f7bd3e req-f2622ee9-94cf-447e-8869-7d8091c3a0f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] No waiting events found dispatching network-vif-plugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:27 np0005539564 nova_compute[226295]: 2025-11-29 08:06:27.836 226310 WARNING nova.compute.manager [req-95f11cd6-04da-4cbd-804f-5a1964f7bd3e req-f2622ee9-94cf-447e-8869-7d8091c3a0f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Received unexpected event network-vif-plugged-8b53507f-acc1-4e75-a82d-55c4a7d7abd7 for instance with vm_state resized and task_state deleting.#033[00m
Nov 29 03:06:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:29.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:29.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:30 np0005539564 nova_compute[226295]: 2025-11-29 08:06:30.063 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:30 np0005539564 nova_compute[226295]: 2025-11-29 08:06:30.142 226310 DEBUG nova.network.neutron [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Updating instance_info_cache with network_info: [{"id": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "address": "fa:16:3e:6c:22:bc", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b53507f-ac", "ovs_interfaceid": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:30 np0005539564 nova_compute[226295]: 2025-11-29 08:06:30.199 226310 DEBUG oslo_concurrency.lockutils [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Releasing lock "refresh_cache-8f9bb224-0119-4a96-9859-d3afda2ab1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:06:30 np0005539564 nova_compute[226295]: 2025-11-29 08:06:30.200 226310 DEBUG nova.objects.instance [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'migration_context' on Instance uuid 8f9bb224-0119-4a96-9859-d3afda2ab1ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:30 np0005539564 nova_compute[226295]: 2025-11-29 08:06:30.375 226310 DEBUG nova.storage.rbd_utils [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] removing snapshot(nova-resize) on rbd image(8f9bb224-0119-4a96-9859-d3afda2ab1ce_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:06:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e258 e258: 3 total, 3 up, 3 in
Nov 29 03:06:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:30 np0005539564 nova_compute[226295]: 2025-11-29 08:06:30.633 226310 DEBUG nova.virt.libvirt.vif [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:05:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-373302301',display_name='tempest-DeleteServersTestJSON-server-373302301',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-373302301',id=80,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:06:26Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f8bc2a2616a34ba1a18b3211e406993f',ramdisk_id='',reservation_id='r-xg03xyph',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-69711189',owner_user_name='tempest-DeleteServersTestJSON-69711189-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:06:27Z,user_data=None,user_id='ef8e9cc962eb4827954df3c42cc34798',uuid=8f9bb224-0119-4a96-9859-d3afda2ab1ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "address": "fa:16:3e:6c:22:bc", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b53507f-ac", "ovs_interfaceid": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:06:30 np0005539564 nova_compute[226295]: 2025-11-29 08:06:30.634 226310 DEBUG nova.network.os_vif_util [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converting VIF {"id": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "address": "fa:16:3e:6c:22:bc", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b53507f-ac", "ovs_interfaceid": "8b53507f-acc1-4e75-a82d-55c4a7d7abd7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:30 np0005539564 nova_compute[226295]: 2025-11-29 08:06:30.635 226310 DEBUG nova.network.os_vif_util [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6c:22:bc,bridge_name='br-int',has_traffic_filtering=True,id=8b53507f-acc1-4e75-a82d-55c4a7d7abd7,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b53507f-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:06:30 np0005539564 nova_compute[226295]: 2025-11-29 08:06:30.635 226310 DEBUG os_vif [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:22:bc,bridge_name='br-int',has_traffic_filtering=True,id=8b53507f-acc1-4e75-a82d-55c4a7d7abd7,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b53507f-ac') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:06:30 np0005539564 nova_compute[226295]: 2025-11-29 08:06:30.636 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:30 np0005539564 nova_compute[226295]: 2025-11-29 08:06:30.637 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b53507f-ac, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:30 np0005539564 nova_compute[226295]: 2025-11-29 08:06:30.637 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:06:30 np0005539564 nova_compute[226295]: 2025-11-29 08:06:30.640 226310 INFO os_vif [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:22:bc,bridge_name='br-int',has_traffic_filtering=True,id=8b53507f-acc1-4e75-a82d-55c4a7d7abd7,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b53507f-ac')#033[00m
Nov 29 03:06:30 np0005539564 nova_compute[226295]: 2025-11-29 08:06:30.640 226310 DEBUG oslo_concurrency.lockutils [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:30 np0005539564 nova_compute[226295]: 2025-11-29 08:06:30.640 226310 DEBUG oslo_concurrency.lockutils [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:30 np0005539564 nova_compute[226295]: 2025-11-29 08:06:30.770 226310 DEBUG oslo_concurrency.processutils [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/395441916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:31 np0005539564 nova_compute[226295]: 2025-11-29 08:06:31.302 226310 DEBUG oslo_concurrency.processutils [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:31.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:31 np0005539564 nova_compute[226295]: 2025-11-29 08:06:31.310 226310 DEBUG nova.compute.provider_tree [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:06:31 np0005539564 nova_compute[226295]: 2025-11-29 08:06:31.428 226310 DEBUG nova.scheduler.client.report [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:06:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:31.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0.
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:31.513692) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403591513732, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2357, "num_deletes": 259, "total_data_size": 5386538, "memory_usage": 5468216, "flush_reason": "Manual Compaction"}
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started
Nov 29 03:06:31 np0005539564 nova_compute[226295]: 2025-11-29 08:06:31.562 226310 DEBUG oslo_concurrency.lockutils [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.922s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403591570364, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 3530624, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37318, "largest_seqno": 39670, "table_properties": {"data_size": 3521047, "index_size": 5943, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20532, "raw_average_key_size": 20, "raw_value_size": 3501591, "raw_average_value_size": 3501, "num_data_blocks": 258, "num_entries": 1000, "num_filter_entries": 1000, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403397, "oldest_key_time": 1764403397, "file_creation_time": 1764403591, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 56752 microseconds, and 13569 cpu microseconds.
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:31.570437) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 3530624 bytes OK
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:31.570465) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:31.573302) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:31.573334) EVENT_LOG_v1 {"time_micros": 1764403591573324, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:31.573362) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 5376032, prev total WAL file size 5376032, number of live WAL files 2.
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:31.576150) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303131' seq:72057594037927935, type:22 .. '6C6F676D0031323634' seq:0, type:0; will stop at (end)
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(3447KB)], [69(9361KB)]
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403591576187, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 13116681, "oldest_snapshot_seqno": -1}
Nov 29 03:06:31 np0005539564 nova_compute[226295]: 2025-11-29 08:06:31.675 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 7027 keys, 12955440 bytes, temperature: kUnknown
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403591747264, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 12955440, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12905546, "index_size": 31222, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17605, "raw_key_size": 180759, "raw_average_key_size": 25, "raw_value_size": 12776864, "raw_average_value_size": 1818, "num_data_blocks": 1250, "num_entries": 7027, "num_filter_entries": 7027, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764403591, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:06:31 np0005539564 nova_compute[226295]: 2025-11-29 08:06:31.747 226310 INFO nova.scheduler.client.report [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Deleted allocation for migration c0d2572c-886f-4b30-9307-63a2558060db#033[00m
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:31.747687) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 12955440 bytes
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:31.751260) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 76.6 rd, 75.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 9.1 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(7.4) write-amplify(3.7) OK, records in: 7564, records dropped: 537 output_compression: NoCompression
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:31.751298) EVENT_LOG_v1 {"time_micros": 1764403591751281, "job": 42, "event": "compaction_finished", "compaction_time_micros": 171207, "compaction_time_cpu_micros": 49134, "output_level": 6, "num_output_files": 1, "total_output_size": 12955440, "num_input_records": 7564, "num_output_records": 7027, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403591752703, "job": 42, "event": "table_file_deletion", "file_number": 71}
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403591755577, "job": 42, "event": "table_file_deletion", "file_number": 69}
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:31.576069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:31.755680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:31.755687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:31.755689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:31.755691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:06:31 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:31.755693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:06:31 np0005539564 nova_compute[226295]: 2025-11-29 08:06:31.833 226310 DEBUG oslo_concurrency.lockutils [None req-a98fb486-adf6-4a0a-9a77-bc52c99c05e6 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "8f9bb224-0119-4a96-9859-d3afda2ab1ce" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 4.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:33.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:33.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:06:34.209 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:34 np0005539564 nova_compute[226295]: 2025-11-29 08:06:34.572 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403579.571101, 8f9bb224-0119-4a96-9859-d3afda2ab1ce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:34 np0005539564 nova_compute[226295]: 2025-11-29 08:06:34.573 226310 INFO nova.compute.manager [-] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:06:34 np0005539564 nova_compute[226295]: 2025-11-29 08:06:34.633 226310 DEBUG nova.compute.manager [None req-e8ede931-86e1-46eb-b231-56bf80338549 - - - - - -] [instance: 8f9bb224-0119-4a96-9859-d3afda2ab1ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:35 np0005539564 nova_compute[226295]: 2025-11-29 08:06:35.067 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:35.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:35.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:36 np0005539564 nova_compute[226295]: 2025-11-29 08:06:36.678 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:36 np0005539564 nova_compute[226295]: 2025-11-29 08:06:36.825 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:37.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:06:37 np0005539564 nova_compute[226295]: 2025-11-29 08:06:37.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:37.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:37 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0.
Nov 29 03:06:37 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:37.845341) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:06:37 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73
Nov 29 03:06:37 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403597845551, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 325, "num_deletes": 251, "total_data_size": 174985, "memory_usage": 182632, "flush_reason": "Manual Compaction"}
Nov 29 03:06:37 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started
Nov 29 03:06:37 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403597848540, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 114908, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39675, "largest_seqno": 39995, "table_properties": {"data_size": 112839, "index_size": 233, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5269, "raw_average_key_size": 18, "raw_value_size": 108801, "raw_average_value_size": 383, "num_data_blocks": 10, "num_entries": 284, "num_filter_entries": 284, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403591, "oldest_key_time": 1764403591, "file_creation_time": 1764403597, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:06:37 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 3441 microseconds, and 846 cpu microseconds.
Nov 29 03:06:37 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:06:37 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:37.848780) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 114908 bytes OK
Nov 29 03:06:37 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:37.848897) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started
Nov 29 03:06:37 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:37.850870) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done
Nov 29 03:06:37 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:37.850886) EVENT_LOG_v1 {"time_micros": 1764403597850882, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:06:37 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:37.850900) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:06:37 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 172687, prev total WAL file size 172687, number of live WAL files 2.
Nov 29 03:06:37 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:06:37 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:37.851768) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Nov 29 03:06:37 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:06:37 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(112KB)], [72(12MB)]
Nov 29 03:06:37 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403597851803, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 13070348, "oldest_snapshot_seqno": -1}
Nov 29 03:06:38 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 6801 keys, 11021346 bytes, temperature: kUnknown
Nov 29 03:06:38 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403598014635, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 11021346, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10974797, "index_size": 28453, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17029, "raw_key_size": 176774, "raw_average_key_size": 25, "raw_value_size": 10851667, "raw_average_value_size": 1595, "num_data_blocks": 1126, "num_entries": 6801, "num_filter_entries": 6801, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764403597, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:06:38 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:06:38 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:38.015022) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 11021346 bytes
Nov 29 03:06:38 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:38.017197) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.2 rd, 67.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 12.4 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(209.7) write-amplify(95.9) OK, records in: 7311, records dropped: 510 output_compression: NoCompression
Nov 29 03:06:38 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:38.017234) EVENT_LOG_v1 {"time_micros": 1764403598017220, "job": 44, "event": "compaction_finished", "compaction_time_micros": 162920, "compaction_time_cpu_micros": 39692, "output_level": 6, "num_output_files": 1, "total_output_size": 11021346, "num_input_records": 7311, "num_output_records": 6801, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:06:38 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:06:38 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403598017444, "job": 44, "event": "table_file_deletion", "file_number": 74}
Nov 29 03:06:38 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:06:38 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403598022525, "job": 44, "event": "table_file_deletion", "file_number": 72}
Nov 29 03:06:38 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:37.851728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:06:38 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:38.022609) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:06:38 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:38.022617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:06:38 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:38.022621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:06:38 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:38.022625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:06:38 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:06:38.022629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:06:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:39.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:39.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:06:40 np0005539564 nova_compute[226295]: 2025-11-29 08:06:40.069 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:40 np0005539564 nova_compute[226295]: 2025-11-29 08:06:40.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:40 np0005539564 nova_compute[226295]: 2025-11-29 08:06:40.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:06:40 np0005539564 nova_compute[226295]: 2025-11-29 08:06:40.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:06:40 np0005539564 nova_compute[226295]: 2025-11-29 08:06:40.367 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:06:40 np0005539564 nova_compute[226295]: 2025-11-29 08:06:40.368 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:40 np0005539564 nova_compute[226295]: 2025-11-29 08:06:40.368 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:40 np0005539564 nova_compute[226295]: 2025-11-29 08:06:40.368 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:06:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:41.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:41 np0005539564 nova_compute[226295]: 2025-11-29 08:06:41.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:41 np0005539564 nova_compute[226295]: 2025-11-29 08:06:41.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:06:41 np0005539564 nova_compute[226295]: 2025-11-29 08:06:41.374 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:06:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:06:41 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2209813400' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:06:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:06:41 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2209813400' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:06:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:41.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e259 e259: 3 total, 3 up, 3 in
Nov 29 03:06:41 np0005539564 nova_compute[226295]: 2025-11-29 08:06:41.682 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:42 np0005539564 nova_compute[226295]: 2025-11-29 08:06:42.346 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:42 np0005539564 nova_compute[226295]: 2025-11-29 08:06:42.346 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:06:42 np0005539564 nova_compute[226295]: 2025-11-29 08:06:42.702 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:43.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:43 np0005539564 nova_compute[226295]: 2025-11-29 08:06:43.346 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:43.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:06:44 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:06:44 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:06:44 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:06:45 np0005539564 nova_compute[226295]: 2025-11-29 08:06:45.071 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:45.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:45 np0005539564 nova_compute[226295]: 2025-11-29 08:06:45.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:45.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:46 np0005539564 nova_compute[226295]: 2025-11-29 08:06:46.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:46 np0005539564 nova_compute[226295]: 2025-11-29 08:06:46.685 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:47.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:47 np0005539564 nova_compute[226295]: 2025-11-29 08:06:47.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:47 np0005539564 nova_compute[226295]: 2025-11-29 08:06:47.428 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:47 np0005539564 nova_compute[226295]: 2025-11-29 08:06:47.428 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:47 np0005539564 nova_compute[226295]: 2025-11-29 08:06:47.428 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:47 np0005539564 nova_compute[226295]: 2025-11-29 08:06:47.428 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:06:47 np0005539564 nova_compute[226295]: 2025-11-29 08:06:47.429 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:47.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:06:47 np0005539564 nova_compute[226295]: 2025-11-29 08:06:47.907 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:47 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3859663900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:47 np0005539564 nova_compute[226295]: 2025-11-29 08:06:47.951 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:48 np0005539564 nova_compute[226295]: 2025-11-29 08:06:48.120 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:06:48 np0005539564 nova_compute[226295]: 2025-11-29 08:06:48.121 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4638MB free_disk=20.946483612060547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:06:48 np0005539564 nova_compute[226295]: 2025-11-29 08:06:48.121 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:48 np0005539564 nova_compute[226295]: 2025-11-29 08:06:48.121 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:48 np0005539564 nova_compute[226295]: 2025-11-29 08:06:48.287 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:06:48 np0005539564 nova_compute[226295]: 2025-11-29 08:06:48.288 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:06:48 np0005539564 nova_compute[226295]: 2025-11-29 08:06:48.426 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:48 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2208697596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:48 np0005539564 nova_compute[226295]: 2025-11-29 08:06:48.887 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:48 np0005539564 nova_compute[226295]: 2025-11-29 08:06:48.896 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:06:48 np0005539564 nova_compute[226295]: 2025-11-29 08:06:48.931 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:06:48 np0005539564 nova_compute[226295]: 2025-11-29 08:06:48.962 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:06:48 np0005539564 nova_compute[226295]: 2025-11-29 08:06:48.963 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:49.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:49.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:50 np0005539564 nova_compute[226295]: 2025-11-29 08:06:50.073 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:50 np0005539564 podman[256249]: 2025-11-29 08:06:50.544528977 +0000 UTC m=+0.086827993 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 03:06:50 np0005539564 podman[256250]: 2025-11-29 08:06:50.552902713 +0000 UTC m=+0.099871006 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:06:50 np0005539564 podman[256248]: 2025-11-29 08:06:50.572244305 +0000 UTC m=+0.114689605 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:06:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:51.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:51.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:51 np0005539564 nova_compute[226295]: 2025-11-29 08:06:51.687 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:51 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:06:51 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:06:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:53.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:53.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:55 np0005539564 nova_compute[226295]: 2025-11-29 08:06:55.075 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:55.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:06:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:55.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:56 np0005539564 nova_compute[226295]: 2025-11-29 08:06:56.690 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:57.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:57.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:59.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:06:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:59.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:00 np0005539564 nova_compute[226295]: 2025-11-29 08:07:00.077 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:01 np0005539564 nova_compute[226295]: 2025-11-29 08:07:01.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:07:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:01.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:07:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:01.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:01 np0005539564 nova_compute[226295]: 2025-11-29 08:07:01.693 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:02 np0005539564 nova_compute[226295]: 2025-11-29 08:07:02.361 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:03.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:03.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:03.717 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:03.717 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:03.717 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:05 np0005539564 nova_compute[226295]: 2025-11-29 08:07:05.079 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:05.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:05.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:06 np0005539564 nova_compute[226295]: 2025-11-29 08:07:06.696 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:07:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:07.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:07:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:07.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:08 np0005539564 nova_compute[226295]: 2025-11-29 08:07:08.469 226310 DEBUG oslo_concurrency.lockutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Acquiring lock "13ac8506-e387-462d-a5ea-7f44a5e08994" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:08 np0005539564 nova_compute[226295]: 2025-11-29 08:07:08.470 226310 DEBUG oslo_concurrency.lockutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:08 np0005539564 nova_compute[226295]: 2025-11-29 08:07:08.500 226310 DEBUG nova.compute.manager [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:07:08 np0005539564 nova_compute[226295]: 2025-11-29 08:07:08.628 226310 DEBUG oslo_concurrency.lockutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:08 np0005539564 nova_compute[226295]: 2025-11-29 08:07:08.628 226310 DEBUG oslo_concurrency.lockutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:08 np0005539564 nova_compute[226295]: 2025-11-29 08:07:08.638 226310 DEBUG nova.virt.hardware [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:07:08 np0005539564 nova_compute[226295]: 2025-11-29 08:07:08.638 226310 INFO nova.compute.claims [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:07:08 np0005539564 nova_compute[226295]: 2025-11-29 08:07:08.905 226310 DEBUG oslo_concurrency.processutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:07:09 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4227337823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:07:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:07:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:09.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:07:09 np0005539564 nova_compute[226295]: 2025-11-29 08:07:09.367 226310 DEBUG oslo_concurrency.processutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:09 np0005539564 nova_compute[226295]: 2025-11-29 08:07:09.378 226310 DEBUG nova.compute.provider_tree [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:07:09 np0005539564 nova_compute[226295]: 2025-11-29 08:07:09.413 226310 DEBUG nova.scheduler.client.report [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:07:09 np0005539564 nova_compute[226295]: 2025-11-29 08:07:09.458 226310 DEBUG oslo_concurrency.lockutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:09 np0005539564 nova_compute[226295]: 2025-11-29 08:07:09.459 226310 DEBUG nova.compute.manager [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:07:09 np0005539564 nova_compute[226295]: 2025-11-29 08:07:09.530 226310 DEBUG nova.compute.manager [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:07:09 np0005539564 nova_compute[226295]: 2025-11-29 08:07:09.531 226310 DEBUG nova.network.neutron [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:07:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:09 np0005539564 nova_compute[226295]: 2025-11-29 08:07:09.563 226310 INFO nova.virt.libvirt.driver [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:07:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:09.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:09 np0005539564 nova_compute[226295]: 2025-11-29 08:07:09.601 226310 DEBUG nova.compute.manager [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:07:09 np0005539564 nova_compute[226295]: 2025-11-29 08:07:09.735 226310 DEBUG nova.compute.manager [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:07:09 np0005539564 nova_compute[226295]: 2025-11-29 08:07:09.737 226310 DEBUG nova.virt.libvirt.driver [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:07:09 np0005539564 nova_compute[226295]: 2025-11-29 08:07:09.738 226310 INFO nova.virt.libvirt.driver [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Creating image(s)#033[00m
Nov 29 03:07:09 np0005539564 nova_compute[226295]: 2025-11-29 08:07:09.783 226310 DEBUG nova.storage.rbd_utils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] rbd image 13ac8506-e387-462d-a5ea-7f44a5e08994_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:09 np0005539564 nova_compute[226295]: 2025-11-29 08:07:09.831 226310 DEBUG nova.storage.rbd_utils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] rbd image 13ac8506-e387-462d-a5ea-7f44a5e08994_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:09 np0005539564 nova_compute[226295]: 2025-11-29 08:07:09.878 226310 DEBUG nova.storage.rbd_utils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] rbd image 13ac8506-e387-462d-a5ea-7f44a5e08994_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:09 np0005539564 nova_compute[226295]: 2025-11-29 08:07:09.883 226310 DEBUG oslo_concurrency.processutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:09 np0005539564 nova_compute[226295]: 2025-11-29 08:07:09.971 226310 DEBUG oslo_concurrency.processutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:09 np0005539564 nova_compute[226295]: 2025-11-29 08:07:09.973 226310 DEBUG oslo_concurrency.lockutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:09 np0005539564 nova_compute[226295]: 2025-11-29 08:07:09.974 226310 DEBUG oslo_concurrency.lockutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:09 np0005539564 nova_compute[226295]: 2025-11-29 08:07:09.975 226310 DEBUG oslo_concurrency.lockutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:10 np0005539564 nova_compute[226295]: 2025-11-29 08:07:10.019 226310 DEBUG nova.storage.rbd_utils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] rbd image 13ac8506-e387-462d-a5ea-7f44a5e08994_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:10 np0005539564 nova_compute[226295]: 2025-11-29 08:07:10.025 226310 DEBUG oslo_concurrency.processutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 13ac8506-e387-462d-a5ea-7f44a5e08994_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:10 np0005539564 nova_compute[226295]: 2025-11-29 08:07:10.057 226310 DEBUG nova.policy [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '484a7cf7f6cc49de97903a4efa4db0a5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3fc18ed0bcfe45d99b2965a6745bb628', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:07:10 np0005539564 nova_compute[226295]: 2025-11-29 08:07:10.081 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:10 np0005539564 nova_compute[226295]: 2025-11-29 08:07:10.377 226310 DEBUG oslo_concurrency.processutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 13ac8506-e387-462d-a5ea-7f44a5e08994_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.352s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:10 np0005539564 nova_compute[226295]: 2025-11-29 08:07:10.495 226310 DEBUG nova.storage.rbd_utils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] resizing rbd image 13ac8506-e387-462d-a5ea-7f44a5e08994_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:07:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:10 np0005539564 nova_compute[226295]: 2025-11-29 08:07:10.647 226310 DEBUG nova.objects.instance [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lazy-loading 'migration_context' on Instance uuid 13ac8506-e387-462d-a5ea-7f44a5e08994 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:10 np0005539564 nova_compute[226295]: 2025-11-29 08:07:10.667 226310 DEBUG nova.virt.libvirt.driver [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:07:10 np0005539564 nova_compute[226295]: 2025-11-29 08:07:10.668 226310 DEBUG nova.virt.libvirt.driver [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Ensure instance console log exists: /var/lib/nova/instances/13ac8506-e387-462d-a5ea-7f44a5e08994/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:07:10 np0005539564 nova_compute[226295]: 2025-11-29 08:07:10.668 226310 DEBUG oslo_concurrency.lockutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:10 np0005539564 nova_compute[226295]: 2025-11-29 08:07:10.669 226310 DEBUG oslo_concurrency.lockutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:10 np0005539564 nova_compute[226295]: 2025-11-29 08:07:10.669 226310 DEBUG oslo_concurrency.lockutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:11 np0005539564 nova_compute[226295]: 2025-11-29 08:07:11.276 226310 DEBUG nova.network.neutron [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Successfully created port: 0860edd1-f244-48a4-b41b-54ead40fd453 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:07:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:11.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:11.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:11 np0005539564 nova_compute[226295]: 2025-11-29 08:07:11.699 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:13.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:13.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:15 np0005539564 nova_compute[226295]: 2025-11-29 08:07:15.048 226310 DEBUG nova.network.neutron [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Successfully created port: 67d8d30d-6a2f-4232-8d4c-2605fda704a1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:07:15 np0005539564 nova_compute[226295]: 2025-11-29 08:07:15.083 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:15.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:15.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:16 np0005539564 nova_compute[226295]: 2025-11-29 08:07:16.702 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:17 np0005539564 nova_compute[226295]: 2025-11-29 08:07:17.038 226310 DEBUG nova.network.neutron [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Successfully updated port: 0860edd1-f244-48a4-b41b-54ead40fd453 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:07:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:17.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:17.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:18 np0005539564 nova_compute[226295]: 2025-11-29 08:07:18.031 226310 DEBUG nova.compute.manager [req-ed1e4a07-7b7a-477d-8799-13942c03daf7 req-1c59934a-2d78-4ec6-a22e-6d26b6113451 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Received event network-changed-0860edd1-f244-48a4-b41b-54ead40fd453 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:18 np0005539564 nova_compute[226295]: 2025-11-29 08:07:18.032 226310 DEBUG nova.compute.manager [req-ed1e4a07-7b7a-477d-8799-13942c03daf7 req-1c59934a-2d78-4ec6-a22e-6d26b6113451 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Refreshing instance network info cache due to event network-changed-0860edd1-f244-48a4-b41b-54ead40fd453. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:07:18 np0005539564 nova_compute[226295]: 2025-11-29 08:07:18.032 226310 DEBUG oslo_concurrency.lockutils [req-ed1e4a07-7b7a-477d-8799-13942c03daf7 req-1c59934a-2d78-4ec6-a22e-6d26b6113451 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-13ac8506-e387-462d-a5ea-7f44a5e08994" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:07:18 np0005539564 nova_compute[226295]: 2025-11-29 08:07:18.033 226310 DEBUG oslo_concurrency.lockutils [req-ed1e4a07-7b7a-477d-8799-13942c03daf7 req-1c59934a-2d78-4ec6-a22e-6d26b6113451 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-13ac8506-e387-462d-a5ea-7f44a5e08994" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:07:18 np0005539564 nova_compute[226295]: 2025-11-29 08:07:18.033 226310 DEBUG nova.network.neutron [req-ed1e4a07-7b7a-477d-8799-13942c03daf7 req-1c59934a-2d78-4ec6-a22e-6d26b6113451 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Refreshing network info cache for port 0860edd1-f244-48a4-b41b-54ead40fd453 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:07:18 np0005539564 nova_compute[226295]: 2025-11-29 08:07:18.891 226310 DEBUG nova.network.neutron [req-ed1e4a07-7b7a-477d-8799-13942c03daf7 req-1c59934a-2d78-4ec6-a22e-6d26b6113451 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:07:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:07:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:19.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:07:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:19.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:19 np0005539564 nova_compute[226295]: 2025-11-29 08:07:19.660 226310 DEBUG nova.network.neutron [req-ed1e4a07-7b7a-477d-8799-13942c03daf7 req-1c59934a-2d78-4ec6-a22e-6d26b6113451 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:07:19 np0005539564 nova_compute[226295]: 2025-11-29 08:07:19.687 226310 DEBUG oslo_concurrency.lockutils [req-ed1e4a07-7b7a-477d-8799-13942c03daf7 req-1c59934a-2d78-4ec6-a22e-6d26b6113451 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-13ac8506-e387-462d-a5ea-7f44a5e08994" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:07:20 np0005539564 nova_compute[226295]: 2025-11-29 08:07:20.085 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:20 np0005539564 nova_compute[226295]: 2025-11-29 08:07:20.703 226310 DEBUG nova.network.neutron [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Successfully updated port: 67d8d30d-6a2f-4232-8d4c-2605fda704a1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:07:20 np0005539564 nova_compute[226295]: 2025-11-29 08:07:20.733 226310 DEBUG oslo_concurrency.lockutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Acquiring lock "refresh_cache-13ac8506-e387-462d-a5ea-7f44a5e08994" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:07:20 np0005539564 nova_compute[226295]: 2025-11-29 08:07:20.733 226310 DEBUG oslo_concurrency.lockutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Acquired lock "refresh_cache-13ac8506-e387-462d-a5ea-7f44a5e08994" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:07:20 np0005539564 nova_compute[226295]: 2025-11-29 08:07:20.734 226310 DEBUG nova.network.neutron [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:07:21 np0005539564 nova_compute[226295]: 2025-11-29 08:07:21.026 226310 DEBUG nova.network.neutron [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:07:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:07:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:21.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:07:21 np0005539564 nova_compute[226295]: 2025-11-29 08:07:21.507 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:21 np0005539564 nova_compute[226295]: 2025-11-29 08:07:21.528 226310 WARNING nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.#033[00m
Nov 29 03:07:21 np0005539564 nova_compute[226295]: 2025-11-29 08:07:21.529 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Triggering sync for uuid 13ac8506-e387-462d-a5ea-7f44a5e08994 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 03:07:21 np0005539564 nova_compute[226295]: 2025-11-29 08:07:21.530 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "13ac8506-e387-462d-a5ea-7f44a5e08994" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:21 np0005539564 podman[256545]: 2025-11-29 08:07:21.543860524 +0000 UTC m=+0.083718149 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:07:21 np0005539564 podman[256546]: 2025-11-29 08:07:21.554502831 +0000 UTC m=+0.081367986 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:07:21 np0005539564 podman[256544]: 2025-11-29 08:07:21.570452761 +0000 UTC m=+0.118583730 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:07:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:21.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:21 np0005539564 nova_compute[226295]: 2025-11-29 08:07:21.723 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:22 np0005539564 nova_compute[226295]: 2025-11-29 08:07:22.514 226310 DEBUG nova.compute.manager [req-c5202e1a-2451-473a-a4be-a41b7d7759b5 req-69b65f74-aa14-4ebe-8499-8227e04a6590 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Received event network-changed-67d8d30d-6a2f-4232-8d4c-2605fda704a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:22 np0005539564 nova_compute[226295]: 2025-11-29 08:07:22.515 226310 DEBUG nova.compute.manager [req-c5202e1a-2451-473a-a4be-a41b7d7759b5 req-69b65f74-aa14-4ebe-8499-8227e04a6590 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Refreshing instance network info cache due to event network-changed-67d8d30d-6a2f-4232-8d4c-2605fda704a1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:07:22 np0005539564 nova_compute[226295]: 2025-11-29 08:07:22.515 226310 DEBUG oslo_concurrency.lockutils [req-c5202e1a-2451-473a-a4be-a41b7d7759b5 req-69b65f74-aa14-4ebe-8499-8227e04a6590 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-13ac8506-e387-462d-a5ea-7f44a5e08994" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:07:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:23.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:23.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:23 np0005539564 nova_compute[226295]: 2025-11-29 08:07:23.800 226310 DEBUG nova.network.neutron [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Updating instance_info_cache with network_info: [{"id": "0860edd1-f244-48a4-b41b-54ead40fd453", "address": "fa:16:3e:cd:e5:00", "network": {"id": "701efe72-3dbf-4709-9b1f-12b18fde4750", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1063142837", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0860edd1-f2", "ovs_interfaceid": "0860edd1-f244-48a4-b41b-54ead40fd453", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "67d8d30d-6a2f-4232-8d4c-2605fda704a1", "address": "fa:16:3e:5c:1b:96", "network": {"id": "41c0b1c4-2921-43cc-8517-bb7bf56a02a6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-238160308", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.236", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67d8d30d-6a", "ovs_interfaceid": "67d8d30d-6a2f-4232-8d4c-2605fda704a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:07:23 np0005539564 nova_compute[226295]: 2025-11-29 08:07:23.837 226310 DEBUG oslo_concurrency.lockutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Releasing lock "refresh_cache-13ac8506-e387-462d-a5ea-7f44a5e08994" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:07:23 np0005539564 nova_compute[226295]: 2025-11-29 08:07:23.838 226310 DEBUG nova.compute.manager [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Instance network_info: |[{"id": "0860edd1-f244-48a4-b41b-54ead40fd453", "address": "fa:16:3e:cd:e5:00", "network": {"id": "701efe72-3dbf-4709-9b1f-12b18fde4750", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1063142837", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0860edd1-f2", "ovs_interfaceid": "0860edd1-f244-48a4-b41b-54ead40fd453", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "67d8d30d-6a2f-4232-8d4c-2605fda704a1", "address": "fa:16:3e:5c:1b:96", "network": {"id": "41c0b1c4-2921-43cc-8517-bb7bf56a02a6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-238160308", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.236", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67d8d30d-6a", "ovs_interfaceid": "67d8d30d-6a2f-4232-8d4c-2605fda704a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:07:23 np0005539564 nova_compute[226295]: 2025-11-29 08:07:23.839 226310 DEBUG oslo_concurrency.lockutils [req-c5202e1a-2451-473a-a4be-a41b7d7759b5 req-69b65f74-aa14-4ebe-8499-8227e04a6590 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-13ac8506-e387-462d-a5ea-7f44a5e08994" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:07:23 np0005539564 nova_compute[226295]: 2025-11-29 08:07:23.840 226310 DEBUG nova.network.neutron [req-c5202e1a-2451-473a-a4be-a41b7d7759b5 req-69b65f74-aa14-4ebe-8499-8227e04a6590 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Refreshing network info cache for port 67d8d30d-6a2f-4232-8d4c-2605fda704a1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:07:23 np0005539564 nova_compute[226295]: 2025-11-29 08:07:23.849 226310 DEBUG nova.virt.libvirt.driver [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Start _get_guest_xml network_info=[{"id": "0860edd1-f244-48a4-b41b-54ead40fd453", "address": "fa:16:3e:cd:e5:00", "network": {"id": "701efe72-3dbf-4709-9b1f-12b18fde4750", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1063142837", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0860edd1-f2", "ovs_interfaceid": "0860edd1-f244-48a4-b41b-54ead40fd453", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "67d8d30d-6a2f-4232-8d4c-2605fda704a1", "address": "fa:16:3e:5c:1b:96", "network": {"id": "41c0b1c4-2921-43cc-8517-bb7bf56a02a6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-238160308", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.236", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67d8d30d-6a", "ovs_interfaceid": "67d8d30d-6a2f-4232-8d4c-2605fda704a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:07:23 np0005539564 nova_compute[226295]: 2025-11-29 08:07:23.859 226310 WARNING nova.virt.libvirt.driver [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:07:23 np0005539564 nova_compute[226295]: 2025-11-29 08:07:23.870 226310 DEBUG nova.virt.libvirt.host [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:07:23 np0005539564 nova_compute[226295]: 2025-11-29 08:07:23.871 226310 DEBUG nova.virt.libvirt.host [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:07:23 np0005539564 nova_compute[226295]: 2025-11-29 08:07:23.878 226310 DEBUG nova.virt.libvirt.host [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:07:23 np0005539564 nova_compute[226295]: 2025-11-29 08:07:23.878 226310 DEBUG nova.virt.libvirt.host [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:07:23 np0005539564 nova_compute[226295]: 2025-11-29 08:07:23.880 226310 DEBUG nova.virt.libvirt.driver [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:07:23 np0005539564 nova_compute[226295]: 2025-11-29 08:07:23.881 226310 DEBUG nova.virt.hardware [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:07:23 np0005539564 nova_compute[226295]: 2025-11-29 08:07:23.881 226310 DEBUG nova.virt.hardware [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:07:23 np0005539564 nova_compute[226295]: 2025-11-29 08:07:23.882 226310 DEBUG nova.virt.hardware [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:07:23 np0005539564 nova_compute[226295]: 2025-11-29 08:07:23.882 226310 DEBUG nova.virt.hardware [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:07:23 np0005539564 nova_compute[226295]: 2025-11-29 08:07:23.883 226310 DEBUG nova.virt.hardware [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:07:23 np0005539564 nova_compute[226295]: 2025-11-29 08:07:23.883 226310 DEBUG nova.virt.hardware [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:07:23 np0005539564 nova_compute[226295]: 2025-11-29 08:07:23.883 226310 DEBUG nova.virt.hardware [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:07:23 np0005539564 nova_compute[226295]: 2025-11-29 08:07:23.884 226310 DEBUG nova.virt.hardware [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:07:23 np0005539564 nova_compute[226295]: 2025-11-29 08:07:23.884 226310 DEBUG nova.virt.hardware [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:07:23 np0005539564 nova_compute[226295]: 2025-11-29 08:07:23.885 226310 DEBUG nova.virt.hardware [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:07:23 np0005539564 nova_compute[226295]: 2025-11-29 08:07:23.885 226310 DEBUG nova.virt.hardware [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:07:23 np0005539564 nova_compute[226295]: 2025-11-29 08:07:23.890 226310 DEBUG oslo_concurrency.processutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:07:24 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3806903345' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.349 226310 DEBUG oslo_concurrency.processutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.415 226310 DEBUG nova.storage.rbd_utils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] rbd image 13ac8506-e387-462d-a5ea-7f44a5e08994_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.422 226310 DEBUG oslo_concurrency.processutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:07:24 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/243536570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.863 226310 DEBUG oslo_concurrency.processutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.866 226310 DEBUG nova.virt.libvirt.vif [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:07:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-514053367',display_name='tempest-ServersTestMultiNic-server-514053367',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-514053367',id=84,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fc18ed0bcfe45d99b2965a6745bb628',ramdisk_id='',reservation_id='r-dqvofqxn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1863571577',owner_user_name='tempest-ServersTestMultiNic-1863571577
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:07:09Z,user_data=None,user_id='484a7cf7f6cc49de97903a4efa4db0a5',uuid=13ac8506-e387-462d-a5ea-7f44a5e08994,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0860edd1-f244-48a4-b41b-54ead40fd453", "address": "fa:16:3e:cd:e5:00", "network": {"id": "701efe72-3dbf-4709-9b1f-12b18fde4750", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1063142837", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0860edd1-f2", "ovs_interfaceid": "0860edd1-f244-48a4-b41b-54ead40fd453", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.867 226310 DEBUG nova.network.os_vif_util [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converting VIF {"id": "0860edd1-f244-48a4-b41b-54ead40fd453", "address": "fa:16:3e:cd:e5:00", "network": {"id": "701efe72-3dbf-4709-9b1f-12b18fde4750", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1063142837", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0860edd1-f2", "ovs_interfaceid": "0860edd1-f244-48a4-b41b-54ead40fd453", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.869 226310 DEBUG nova.network.os_vif_util [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:e5:00,bridge_name='br-int',has_traffic_filtering=True,id=0860edd1-f244-48a4-b41b-54ead40fd453,network=Network(701efe72-3dbf-4709-9b1f-12b18fde4750),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0860edd1-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.871 226310 DEBUG nova.virt.libvirt.vif [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:07:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-514053367',display_name='tempest-ServersTestMultiNic-server-514053367',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-514053367',id=84,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fc18ed0bcfe45d99b2965a6745bb628',ramdisk_id='',reservation_id='r-dqvofqxn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1863571577',owner_user_name='tempest-ServersTestMultiNic-1863571577
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:07:09Z,user_data=None,user_id='484a7cf7f6cc49de97903a4efa4db0a5',uuid=13ac8506-e387-462d-a5ea-7f44a5e08994,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "67d8d30d-6a2f-4232-8d4c-2605fda704a1", "address": "fa:16:3e:5c:1b:96", "network": {"id": "41c0b1c4-2921-43cc-8517-bb7bf56a02a6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-238160308", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.236", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67d8d30d-6a", "ovs_interfaceid": "67d8d30d-6a2f-4232-8d4c-2605fda704a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.871 226310 DEBUG nova.network.os_vif_util [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converting VIF {"id": "67d8d30d-6a2f-4232-8d4c-2605fda704a1", "address": "fa:16:3e:5c:1b:96", "network": {"id": "41c0b1c4-2921-43cc-8517-bb7bf56a02a6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-238160308", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.236", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67d8d30d-6a", "ovs_interfaceid": "67d8d30d-6a2f-4232-8d4c-2605fda704a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.872 226310 DEBUG nova.network.os_vif_util [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:1b:96,bridge_name='br-int',has_traffic_filtering=True,id=67d8d30d-6a2f-4232-8d4c-2605fda704a1,network=Network(41c0b1c4-2921-43cc-8517-bb7bf56a02a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67d8d30d-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.873 226310 DEBUG nova.objects.instance [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lazy-loading 'pci_devices' on Instance uuid 13ac8506-e387-462d-a5ea-7f44a5e08994 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.906 226310 DEBUG nova.virt.libvirt.driver [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:07:24 np0005539564 nova_compute[226295]:  <uuid>13ac8506-e387-462d-a5ea-7f44a5e08994</uuid>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:  <name>instance-00000054</name>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServersTestMultiNic-server-514053367</nova:name>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:07:23</nova:creationTime>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:07:24 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:        <nova:user uuid="484a7cf7f6cc49de97903a4efa4db0a5">tempest-ServersTestMultiNic-1863571577-project-member</nova:user>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:        <nova:project uuid="3fc18ed0bcfe45d99b2965a6745bb628">tempest-ServersTestMultiNic-1863571577</nova:project>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:        <nova:port uuid="0860edd1-f244-48a4-b41b-54ead40fd453">
Nov 29 03:07:24 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.167" ipVersion="4"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:        <nova:port uuid="67d8d30d-6a2f-4232-8d4c-2605fda704a1">
Nov 29 03:07:24 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.1.236" ipVersion="4"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <entry name="serial">13ac8506-e387-462d-a5ea-7f44a5e08994</entry>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <entry name="uuid">13ac8506-e387-462d-a5ea-7f44a5e08994</entry>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/13ac8506-e387-462d-a5ea-7f44a5e08994_disk">
Nov 29 03:07:24 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:07:24 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/13ac8506-e387-462d-a5ea-7f44a5e08994_disk.config">
Nov 29 03:07:24 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:07:24 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:cd:e5:00"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <target dev="tap0860edd1-f2"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:5c:1b:96"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <target dev="tap67d8d30d-6a"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/13ac8506-e387-462d-a5ea-7f44a5e08994/console.log" append="off"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:07:24 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:07:24 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:07:24 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:07:24 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.908 226310 DEBUG nova.compute.manager [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Preparing to wait for external event network-vif-plugged-0860edd1-f244-48a4-b41b-54ead40fd453 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.909 226310 DEBUG oslo_concurrency.lockutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Acquiring lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.910 226310 DEBUG oslo_concurrency.lockutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.911 226310 DEBUG oslo_concurrency.lockutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.912 226310 DEBUG nova.compute.manager [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Preparing to wait for external event network-vif-plugged-67d8d30d-6a2f-4232-8d4c-2605fda704a1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.912 226310 DEBUG oslo_concurrency.lockutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Acquiring lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.913 226310 DEBUG oslo_concurrency.lockutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.913 226310 DEBUG oslo_concurrency.lockutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.915 226310 DEBUG nova.virt.libvirt.vif [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:07:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-514053367',display_name='tempest-ServersTestMultiNic-server-514053367',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-514053367',id=84,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fc18ed0bcfe45d99b2965a6745bb628',ramdisk_id='',reservation_id='r-dqvofqxn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1863571577',owner_user_name='tempest-ServersTestMultiNic-
1863571577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:07:09Z,user_data=None,user_id='484a7cf7f6cc49de97903a4efa4db0a5',uuid=13ac8506-e387-462d-a5ea-7f44a5e08994,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0860edd1-f244-48a4-b41b-54ead40fd453", "address": "fa:16:3e:cd:e5:00", "network": {"id": "701efe72-3dbf-4709-9b1f-12b18fde4750", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1063142837", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0860edd1-f2", "ovs_interfaceid": "0860edd1-f244-48a4-b41b-54ead40fd453", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.916 226310 DEBUG nova.network.os_vif_util [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converting VIF {"id": "0860edd1-f244-48a4-b41b-54ead40fd453", "address": "fa:16:3e:cd:e5:00", "network": {"id": "701efe72-3dbf-4709-9b1f-12b18fde4750", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1063142837", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0860edd1-f2", "ovs_interfaceid": "0860edd1-f244-48a4-b41b-54ead40fd453", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.918 226310 DEBUG nova.network.os_vif_util [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:e5:00,bridge_name='br-int',has_traffic_filtering=True,id=0860edd1-f244-48a4-b41b-54ead40fd453,network=Network(701efe72-3dbf-4709-9b1f-12b18fde4750),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0860edd1-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.919 226310 DEBUG os_vif [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:e5:00,bridge_name='br-int',has_traffic_filtering=True,id=0860edd1-f244-48a4-b41b-54ead40fd453,network=Network(701efe72-3dbf-4709-9b1f-12b18fde4750),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0860edd1-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.920 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.921 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.921 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.926 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.926 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0860edd1-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.927 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0860edd1-f2, col_values=(('external_ids', {'iface-id': '0860edd1-f244-48a4-b41b-54ead40fd453', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cd:e5:00', 'vm-uuid': '13ac8506-e387-462d-a5ea-7f44a5e08994'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.929 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:24 np0005539564 NetworkManager[48997]: <info>  [1764403644.9311] manager: (tap0860edd1-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.932 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.942 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.943 226310 INFO os_vif [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:e5:00,bridge_name='br-int',has_traffic_filtering=True,id=0860edd1-f244-48a4-b41b-54ead40fd453,network=Network(701efe72-3dbf-4709-9b1f-12b18fde4750),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0860edd1-f2')#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.944 226310 DEBUG nova.virt.libvirt.vif [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:07:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-514053367',display_name='tempest-ServersTestMultiNic-server-514053367',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-514053367',id=84,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fc18ed0bcfe45d99b2965a6745bb628',ramdisk_id='',reservation_id='r-dqvofqxn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1863571577',owner_user_name='tempest-ServersTestMultiNic-
1863571577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:07:09Z,user_data=None,user_id='484a7cf7f6cc49de97903a4efa4db0a5',uuid=13ac8506-e387-462d-a5ea-7f44a5e08994,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "67d8d30d-6a2f-4232-8d4c-2605fda704a1", "address": "fa:16:3e:5c:1b:96", "network": {"id": "41c0b1c4-2921-43cc-8517-bb7bf56a02a6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-238160308", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.236", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67d8d30d-6a", "ovs_interfaceid": "67d8d30d-6a2f-4232-8d4c-2605fda704a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.945 226310 DEBUG nova.network.os_vif_util [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converting VIF {"id": "67d8d30d-6a2f-4232-8d4c-2605fda704a1", "address": "fa:16:3e:5c:1b:96", "network": {"id": "41c0b1c4-2921-43cc-8517-bb7bf56a02a6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-238160308", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.236", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67d8d30d-6a", "ovs_interfaceid": "67d8d30d-6a2f-4232-8d4c-2605fda704a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.945 226310 DEBUG nova.network.os_vif_util [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:1b:96,bridge_name='br-int',has_traffic_filtering=True,id=67d8d30d-6a2f-4232-8d4c-2605fda704a1,network=Network(41c0b1c4-2921-43cc-8517-bb7bf56a02a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67d8d30d-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.946 226310 DEBUG os_vif [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:1b:96,bridge_name='br-int',has_traffic_filtering=True,id=67d8d30d-6a2f-4232-8d4c-2605fda704a1,network=Network(41c0b1c4-2921-43cc-8517-bb7bf56a02a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67d8d30d-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.947 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.947 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.948 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.950 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.950 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67d8d30d-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.951 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap67d8d30d-6a, col_values=(('external_ids', {'iface-id': '67d8d30d-6a2f-4232-8d4c-2605fda704a1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5c:1b:96', 'vm-uuid': '13ac8506-e387-462d-a5ea-7f44a5e08994'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.952 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:24 np0005539564 NetworkManager[48997]: <info>  [1764403644.9537] manager: (tap67d8d30d-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.955 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.963 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:24 np0005539564 nova_compute[226295]: 2025-11-29 08:07:24.964 226310 INFO os_vif [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:1b:96,bridge_name='br-int',has_traffic_filtering=True,id=67d8d30d-6a2f-4232-8d4c-2605fda704a1,network=Network(41c0b1c4-2921-43cc-8517-bb7bf56a02a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67d8d30d-6a')#033[00m
Nov 29 03:07:25 np0005539564 nova_compute[226295]: 2025-11-29 08:07:25.036 226310 DEBUG nova.virt.libvirt.driver [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:07:25 np0005539564 nova_compute[226295]: 2025-11-29 08:07:25.037 226310 DEBUG nova.virt.libvirt.driver [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:07:25 np0005539564 nova_compute[226295]: 2025-11-29 08:07:25.037 226310 DEBUG nova.virt.libvirt.driver [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] No VIF found with MAC fa:16:3e:cd:e5:00, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:07:25 np0005539564 nova_compute[226295]: 2025-11-29 08:07:25.037 226310 DEBUG nova.virt.libvirt.driver [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] No VIF found with MAC fa:16:3e:5c:1b:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:07:25 np0005539564 nova_compute[226295]: 2025-11-29 08:07:25.038 226310 INFO nova.virt.libvirt.driver [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Using config drive#033[00m
Nov 29 03:07:25 np0005539564 nova_compute[226295]: 2025-11-29 08:07:25.071 226310 DEBUG nova.storage.rbd_utils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] rbd image 13ac8506-e387-462d-a5ea-7f44a5e08994_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:25.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:25 np0005539564 nova_compute[226295]: 2025-11-29 08:07:25.569 226310 INFO nova.virt.libvirt.driver [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Creating config drive at /var/lib/nova/instances/13ac8506-e387-462d-a5ea-7f44a5e08994/disk.config#033[00m
Nov 29 03:07:25 np0005539564 nova_compute[226295]: 2025-11-29 08:07:25.579 226310 DEBUG oslo_concurrency.processutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/13ac8506-e387-462d-a5ea-7f44a5e08994/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpejrry1le execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:25.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:25 np0005539564 nova_compute[226295]: 2025-11-29 08:07:25.723 226310 DEBUG oslo_concurrency.processutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/13ac8506-e387-462d-a5ea-7f44a5e08994/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpejrry1le" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:25 np0005539564 nova_compute[226295]: 2025-11-29 08:07:25.750 226310 DEBUG nova.storage.rbd_utils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] rbd image 13ac8506-e387-462d-a5ea-7f44a5e08994_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:25 np0005539564 nova_compute[226295]: 2025-11-29 08:07:25.754 226310 DEBUG oslo_concurrency.processutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/13ac8506-e387-462d-a5ea-7f44a5e08994/disk.config 13ac8506-e387-462d-a5ea-7f44a5e08994_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:26 np0005539564 nova_compute[226295]: 2025-11-29 08:07:26.020 226310 DEBUG nova.network.neutron [req-c5202e1a-2451-473a-a4be-a41b7d7759b5 req-69b65f74-aa14-4ebe-8499-8227e04a6590 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Updated VIF entry in instance network info cache for port 67d8d30d-6a2f-4232-8d4c-2605fda704a1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:07:26 np0005539564 nova_compute[226295]: 2025-11-29 08:07:26.021 226310 DEBUG nova.network.neutron [req-c5202e1a-2451-473a-a4be-a41b7d7759b5 req-69b65f74-aa14-4ebe-8499-8227e04a6590 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Updating instance_info_cache with network_info: [{"id": "0860edd1-f244-48a4-b41b-54ead40fd453", "address": "fa:16:3e:cd:e5:00", "network": {"id": "701efe72-3dbf-4709-9b1f-12b18fde4750", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1063142837", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0860edd1-f2", "ovs_interfaceid": "0860edd1-f244-48a4-b41b-54ead40fd453", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "67d8d30d-6a2f-4232-8d4c-2605fda704a1", "address": "fa:16:3e:5c:1b:96", "network": {"id": "41c0b1c4-2921-43cc-8517-bb7bf56a02a6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-238160308", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.236", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67d8d30d-6a", "ovs_interfaceid": "67d8d30d-6a2f-4232-8d4c-2605fda704a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:07:26 np0005539564 nova_compute[226295]: 2025-11-29 08:07:26.047 226310 DEBUG oslo_concurrency.lockutils [req-c5202e1a-2451-473a-a4be-a41b7d7759b5 req-69b65f74-aa14-4ebe-8499-8227e04a6590 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-13ac8506-e387-462d-a5ea-7f44a5e08994" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:07:26 np0005539564 nova_compute[226295]: 2025-11-29 08:07:26.725 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:26 np0005539564 nova_compute[226295]: 2025-11-29 08:07:26.862 226310 DEBUG oslo_concurrency.processutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/13ac8506-e387-462d-a5ea-7f44a5e08994/disk.config 13ac8506-e387-462d-a5ea-7f44a5e08994_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:26 np0005539564 nova_compute[226295]: 2025-11-29 08:07:26.863 226310 INFO nova.virt.libvirt.driver [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Deleting local config drive /var/lib/nova/instances/13ac8506-e387-462d-a5ea-7f44a5e08994/disk.config because it was imported into RBD.#033[00m
Nov 29 03:07:26 np0005539564 kernel: tap0860edd1-f2: entered promiscuous mode
Nov 29 03:07:26 np0005539564 NetworkManager[48997]: <info>  [1764403646.9387] manager: (tap0860edd1-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/133)
Nov 29 03:07:26 np0005539564 nova_compute[226295]: 2025-11-29 08:07:26.939 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:26 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:26Z|00256|binding|INFO|Claiming lport 0860edd1-f244-48a4-b41b-54ead40fd453 for this chassis.
Nov 29 03:07:26 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:26Z|00257|binding|INFO|0860edd1-f244-48a4-b41b-54ead40fd453: Claiming fa:16:3e:cd:e5:00 10.100.0.167
Nov 29 03:07:26 np0005539564 kernel: tap67d8d30d-6a: entered promiscuous mode
Nov 29 03:07:26 np0005539564 NetworkManager[48997]: <info>  [1764403646.9564] manager: (tap67d8d30d-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/134)
Nov 29 03:07:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:26.962 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:e5:00 10.100.0.167'], port_security=['fa:16:3e:cd:e5:00 10.100.0.167'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.167/24', 'neutron:device_id': '13ac8506-e387-462d-a5ea-7f44a5e08994', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-701efe72-3dbf-4709-9b1f-12b18fde4750', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fc18ed0bcfe45d99b2965a6745bb628', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cac99d96-ef5a-494d-8c9f-2ce7162e2cd2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=73075667-2556-4258-8653-baacec20b314, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=0860edd1-f244-48a4-b41b-54ead40fd453) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:07:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:26.963 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 0860edd1-f244-48a4-b41b-54ead40fd453 in datapath 701efe72-3dbf-4709-9b1f-12b18fde4750 bound to our chassis#033[00m
Nov 29 03:07:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:26.964 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 701efe72-3dbf-4709-9b1f-12b18fde4750#033[00m
Nov 29 03:07:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:26.976 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7f9d7268-c87e-4e13-871e-48f897d7960b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:26.978 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap701efe72-31 in ovnmeta-701efe72-3dbf-4709-9b1f-12b18fde4750 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:07:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:26.979 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap701efe72-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:07:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:26.980 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[71d701d2-4c07-4317-8af5-7128de355e9e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:26.981 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e6ba49ec-5b30-4b51-a13d-1edd056d861d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:26 np0005539564 systemd-udevd[256751]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:07:26 np0005539564 systemd-udevd[256752]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:07:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:26.991 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[f1eaeb63-91df-44d6-a27d-1e053349c1e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:27 np0005539564 NetworkManager[48997]: <info>  [1764403647.0031] device (tap67d8d30d-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:07:27 np0005539564 NetworkManager[48997]: <info>  [1764403647.0045] device (tap67d8d30d-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:07:27 np0005539564 NetworkManager[48997]: <info>  [1764403647.0095] device (tap0860edd1-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:07:27 np0005539564 NetworkManager[48997]: <info>  [1764403647.0107] device (tap0860edd1-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.010 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:27 np0005539564 systemd-machined[190128]: New machine qemu-35-instance-00000054.
Nov 29 03:07:27 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:27Z|00258|binding|INFO|Claiming lport 67d8d30d-6a2f-4232-8d4c-2605fda704a1 for this chassis.
Nov 29 03:07:27 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:27Z|00259|binding|INFO|67d8d30d-6a2f-4232-8d4c-2605fda704a1: Claiming fa:16:3e:5c:1b:96 10.100.1.236
Nov 29 03:07:27 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:27Z|00260|binding|INFO|Setting lport 0860edd1-f244-48a4-b41b-54ead40fd453 ovn-installed in OVS
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.015 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.015 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:27 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:27Z|00261|binding|INFO|Setting lport 0860edd1-f244-48a4-b41b-54ead40fd453 up in Southbound
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:27.019 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:1b:96 10.100.1.236'], port_security=['fa:16:3e:5c:1b:96 10.100.1.236'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.236/24', 'neutron:device_id': '13ac8506-e387-462d-a5ea-7f44a5e08994', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41c0b1c4-2921-43cc-8517-bb7bf56a02a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fc18ed0bcfe45d99b2965a6745bb628', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cac99d96-ef5a-494d-8c9f-2ce7162e2cd2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5c11960-77e8-4615-a0c4-34021e41e6d2, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=67d8d30d-6a2f-4232-8d4c-2605fda704a1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:27.020 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0d3c03-9106-4c4e-832e-46b93eda911e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:27.045 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[d5e61dbb-7f21-4f34-a51c-e3ada7f10e79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:27 np0005539564 systemd[1]: Started Virtual Machine qemu-35-instance-00000054.
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:27.051 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1820d06e-909e-45c0-8f6a-00bdf5b3e071]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:27 np0005539564 NetworkManager[48997]: <info>  [1764403647.0520] manager: (tap701efe72-30): new Veth device (/org/freedesktop/NetworkManager/Devices/135)
Nov 29 03:07:27 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:27Z|00262|binding|INFO|Setting lport 67d8d30d-6a2f-4232-8d4c-2605fda704a1 ovn-installed in OVS
Nov 29 03:07:27 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:27Z|00263|binding|INFO|Setting lport 67d8d30d-6a2f-4232-8d4c-2605fda704a1 up in Southbound
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.054 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:27.080 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[5761a0a5-87ba-4380-8820-78909f380346]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:27.082 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[c5d25310-4c81-48ac-87dc-d2d886ef74aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:27 np0005539564 NetworkManager[48997]: <info>  [1764403647.1004] device (tap701efe72-30): carrier: link connected
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:27.105 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[c0394fb4-7bd0-4932-8e62-f7d5b7b44987]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:27.120 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[72c18fc9-96aa-47d5-adab-03fb1f915f04]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap701efe72-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:2d:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662178, 'reachable_time': 18352, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256780, 'error': None, 'target': 'ovnmeta-701efe72-3dbf-4709-9b1f-12b18fde4750', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:27.139 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1a0cc3c9-3a98-4ca3-a308-d137d0f13523]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:2d25'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662178, 'tstamp': 662178}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256784, 'error': None, 'target': 'ovnmeta-701efe72-3dbf-4709-9b1f-12b18fde4750', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:27.154 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5cacbf3a-20bb-4cd7-8785-20d5a51f7276]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap701efe72-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:2d:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662178, 'reachable_time': 18352, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256786, 'error': None, 'target': 'ovnmeta-701efe72-3dbf-4709-9b1f-12b18fde4750', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:27.179 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[29c091da-5c49-44de-9573-67bda220289f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:27.252 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b3e2bfef-7f05-4521-b741-f065a701f3ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:27.254 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap701efe72-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:27.254 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:27.254 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap701efe72-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.256 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:27 np0005539564 kernel: tap701efe72-30: entered promiscuous mode
Nov 29 03:07:27 np0005539564 NetworkManager[48997]: <info>  [1764403647.2582] manager: (tap701efe72-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/136)
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.259 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:27.261 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap701efe72-30, col_values=(('external_ids', {'iface-id': '9ebb0491-27ce-4329-beef-d4588e454bc1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.262 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:27 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:27Z|00264|binding|INFO|Releasing lport 9ebb0491-27ce-4329-beef-d4588e454bc1 from this chassis (sb_readonly=0)
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.281 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:27.283 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/701efe72-3dbf-4709-9b1f-12b18fde4750.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/701efe72-3dbf-4709-9b1f-12b18fde4750.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:27.284 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9084ab29-7cbd-476e-8ede-b07c52601004]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:27.285 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-701efe72-3dbf-4709-9b1f-12b18fde4750
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/701efe72-3dbf-4709-9b1f-12b18fde4750.pid.haproxy
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 701efe72-3dbf-4709-9b1f-12b18fde4750
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:27.285 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-701efe72-3dbf-4709-9b1f-12b18fde4750', 'env', 'PROCESS_TAG=haproxy-701efe72-3dbf-4709-9b1f-12b18fde4750', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/701efe72-3dbf-4709-9b1f-12b18fde4750.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.357 226310 DEBUG nova.compute.manager [req-61e062c5-ef3c-439b-b34b-a6bffc84bfcf req-8c990fcd-bc13-42ae-a45c-5a52bbb2acf8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Received event network-vif-plugged-0860edd1-f244-48a4-b41b-54ead40fd453 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.358 226310 DEBUG oslo_concurrency.lockutils [req-61e062c5-ef3c-439b-b34b-a6bffc84bfcf req-8c990fcd-bc13-42ae-a45c-5a52bbb2acf8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.358 226310 DEBUG oslo_concurrency.lockutils [req-61e062c5-ef3c-439b-b34b-a6bffc84bfcf req-8c990fcd-bc13-42ae-a45c-5a52bbb2acf8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.359 226310 DEBUG oslo_concurrency.lockutils [req-61e062c5-ef3c-439b-b34b-a6bffc84bfcf req-8c990fcd-bc13-42ae-a45c-5a52bbb2acf8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.359 226310 DEBUG nova.compute.manager [req-61e062c5-ef3c-439b-b34b-a6bffc84bfcf req-8c990fcd-bc13-42ae-a45c-5a52bbb2acf8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Processing event network-vif-plugged-0860edd1-f244-48a4-b41b-54ead40fd453 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.364 226310 DEBUG nova.compute.manager [req-a580a4a2-e767-4111-a85d-78c4b66c57b8 req-ec8c291a-04c4-4ce8-a7c3-631b61c3af57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Received event network-vif-plugged-67d8d30d-6a2f-4232-8d4c-2605fda704a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.364 226310 DEBUG oslo_concurrency.lockutils [req-a580a4a2-e767-4111-a85d-78c4b66c57b8 req-ec8c291a-04c4-4ce8-a7c3-631b61c3af57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.364 226310 DEBUG oslo_concurrency.lockutils [req-a580a4a2-e767-4111-a85d-78c4b66c57b8 req-ec8c291a-04c4-4ce8-a7c3-631b61c3af57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.364 226310 DEBUG oslo_concurrency.lockutils [req-a580a4a2-e767-4111-a85d-78c4b66c57b8 req-ec8c291a-04c4-4ce8-a7c3-631b61c3af57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.365 226310 DEBUG nova.compute.manager [req-a580a4a2-e767-4111-a85d-78c4b66c57b8 req-ec8c291a-04c4-4ce8-a7c3-631b61c3af57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Processing event network-vif-plugged-67d8d30d-6a2f-4232-8d4c-2605fda704a1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:07:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:27.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.511 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403647.5110762, 13ac8506-e387-462d-a5ea-7f44a5e08994 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.512 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] VM Started (Lifecycle Event)#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.517 226310 DEBUG nova.compute.manager [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.534 226310 DEBUG nova.virt.libvirt.driver [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.541 226310 INFO nova.virt.libvirt.driver [-] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Instance spawned successfully.#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.541 226310 DEBUG nova.virt.libvirt.driver [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.544 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.549 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.572 226310 DEBUG nova.virt.libvirt.driver [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.573 226310 DEBUG nova.virt.libvirt.driver [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.573 226310 DEBUG nova.virt.libvirt.driver [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.574 226310 DEBUG nova.virt.libvirt.driver [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.575 226310 DEBUG nova.virt.libvirt.driver [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.575 226310 DEBUG nova.virt.libvirt.driver [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.581 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.581 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403647.5165415, 13ac8506-e387-462d-a5ea-7f44a5e08994 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.582 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:07:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:27.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.643 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.648 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403647.5216002, 13ac8506-e387-462d-a5ea-7f44a5e08994 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.648 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.676 226310 INFO nova.compute.manager [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Took 17.94 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.677 226310 DEBUG nova.compute.manager [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.680 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.692 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.734 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:07:27 np0005539564 podman[256859]: 2025-11-29 08:07:27.755774114 +0000 UTC m=+0.066527465 container create 367d392268840612eaf33952fa956f365247e7baf4c81eff1de9e38d3c3ceddc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-701efe72-3dbf-4709-9b1f-12b18fde4750, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.764 226310 INFO nova.compute.manager [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Took 19.18 seconds to build instance.#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.805 226310 DEBUG oslo_concurrency.lockutils [None req-7e8d9354-01b5-42e4-9cc0-3b234a8a2410 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.805 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 6.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.805 226310 INFO nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.805 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:27 np0005539564 podman[256859]: 2025-11-29 08:07:27.7196461 +0000 UTC m=+0.030399491 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:07:27 np0005539564 nova_compute[226295]: 2025-11-29 08:07:27.979 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:27.982 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:07:28 np0005539564 systemd[1]: Started libpod-conmon-367d392268840612eaf33952fa956f365247e7baf4c81eff1de9e38d3c3ceddc.scope.
Nov 29 03:07:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:07:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3621953641' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:07:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:07:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3621953641' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:07:28 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:07:28 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ff8ea7df95a2a136387b6e376ec6b23a213f56c2c65c9cc940955ddf19c3e71/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:28 np0005539564 podman[256859]: 2025-11-29 08:07:28.073536696 +0000 UTC m=+0.384290117 container init 367d392268840612eaf33952fa956f365247e7baf4c81eff1de9e38d3c3ceddc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-701efe72-3dbf-4709-9b1f-12b18fde4750, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:07:28 np0005539564 podman[256859]: 2025-11-29 08:07:28.082461887 +0000 UTC m=+0.393215258 container start 367d392268840612eaf33952fa956f365247e7baf4c81eff1de9e38d3c3ceddc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-701efe72-3dbf-4709-9b1f-12b18fde4750, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:07:28 np0005539564 neutron-haproxy-ovnmeta-701efe72-3dbf-4709-9b1f-12b18fde4750[256874]: [NOTICE]   (256878) : New worker (256880) forked
Nov 29 03:07:28 np0005539564 neutron-haproxy-ovnmeta-701efe72-3dbf-4709-9b1f-12b18fde4750[256874]: [NOTICE]   (256878) : Loading success.
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:28.160 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 67d8d30d-6a2f-4232-8d4c-2605fda704a1 in datapath 41c0b1c4-2921-43cc-8517-bb7bf56a02a6 unbound from our chassis#033[00m
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:28.165 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41c0b1c4-2921-43cc-8517-bb7bf56a02a6#033[00m
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:28.183 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[795343f2-b14c-47ef-9bad-9fdb860ac4a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:28.185 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap41c0b1c4-21 in ovnmeta-41c0b1c4-2921-43cc-8517-bb7bf56a02a6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:28.189 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap41c0b1c4-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:28.189 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e84af3fc-2a3a-4a13-b0df-d428ac5fabf9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:28.190 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7272b855-aa9b-4d6c-a1d5-3f267d0245cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:28.209 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[7283f49e-fce8-41e7-a6e2-0b8a008579e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:28.230 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b3d93693-6792-4fd8-9680-bbf96f04de11]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:28.286 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[f42a3108-46d8-4167-828e-fbf0380660fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:28 np0005539564 NetworkManager[48997]: <info>  [1764403648.3031] manager: (tap41c0b1c4-20): new Veth device (/org/freedesktop/NetworkManager/Devices/137)
Nov 29 03:07:28 np0005539564 systemd-udevd[256765]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:28.302 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[cb2ef899-fee6-4c0c-9ae3-ab67b00b46ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:28.346 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[8abbec91-dbde-469b-add4-86acc6cca48f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:28.350 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[480b35dd-2104-4eae-b48a-f15b874a1eef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:28 np0005539564 NetworkManager[48997]: <info>  [1764403648.3795] device (tap41c0b1c4-20): carrier: link connected
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:28.384 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[acdfe160-b2e8-46ca-838a-ff1dd1cc9ce0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:28.404 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[59ea6335-7adc-4f91-a3ab-4f4f214d6f56]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41c0b1c4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:cd:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662306, 'reachable_time': 17444, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256899, 'error': None, 'target': 'ovnmeta-41c0b1c4-2921-43cc-8517-bb7bf56a02a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:28.432 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[aac7a2a6-b7b5-4f5d-afeb-f6462d482996]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febd:cdb6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662306, 'tstamp': 662306}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256900, 'error': None, 'target': 'ovnmeta-41c0b1c4-2921-43cc-8517-bb7bf56a02a6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:28.469 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b0a9575f-8ddf-4e2e-ba0a-482762d3b606]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41c0b1c4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:cd:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662306, 'reachable_time': 17444, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256901, 'error': None, 'target': 'ovnmeta-41c0b1c4-2921-43cc-8517-bb7bf56a02a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:28.531 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bd1cbc71-856f-4912-928d-1f25c09e6214]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:28.613 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a633090f-5929-490f-9ec1-c3386f9ed140]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:28.616 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41c0b1c4-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:28.616 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:28.617 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41c0b1c4-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:28 np0005539564 nova_compute[226295]: 2025-11-29 08:07:28.661 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:28 np0005539564 NetworkManager[48997]: <info>  [1764403648.6643] manager: (tap41c0b1c4-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/138)
Nov 29 03:07:28 np0005539564 kernel: tap41c0b1c4-20: entered promiscuous mode
Nov 29 03:07:28 np0005539564 nova_compute[226295]: 2025-11-29 08:07:28.668 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:28.670 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41c0b1c4-20, col_values=(('external_ids', {'iface-id': '7a7eb3ab-67f7-4173-9847-ca5f6b24b433'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:28 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:28Z|00265|binding|INFO|Releasing lport 7a7eb3ab-67f7-4173-9847-ca5f6b24b433 from this chassis (sb_readonly=0)
Nov 29 03:07:28 np0005539564 nova_compute[226295]: 2025-11-29 08:07:28.672 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:28 np0005539564 nova_compute[226295]: 2025-11-29 08:07:28.673 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:28.674 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/41c0b1c4-2921-43cc-8517-bb7bf56a02a6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/41c0b1c4-2921-43cc-8517-bb7bf56a02a6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:28.676 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea7aa10-202b-4e34-a8c6-d12f9e8a0485]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:28.677 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-41c0b1c4-2921-43cc-8517-bb7bf56a02a6
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/41c0b1c4-2921-43cc-8517-bb7bf56a02a6.pid.haproxy
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 41c0b1c4-2921-43cc-8517-bb7bf56a02a6
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:07:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:28.678 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-41c0b1c4-2921-43cc-8517-bb7bf56a02a6', 'env', 'PROCESS_TAG=haproxy-41c0b1c4-2921-43cc-8517-bb7bf56a02a6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/41c0b1c4-2921-43cc-8517-bb7bf56a02a6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:07:28 np0005539564 nova_compute[226295]: 2025-11-29 08:07:28.696 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:29 np0005539564 podman[256933]: 2025-11-29 08:07:29.16178595 +0000 UTC m=+0.095179508 container create 2f251e8b41cc7bf66219b70ce6b34f88dca98cbd2db393cfac7d7518b99a9596 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41c0b1c4-2921-43cc-8517-bb7bf56a02a6, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:07:29 np0005539564 podman[256933]: 2025-11-29 08:07:29.111436022 +0000 UTC m=+0.044829630 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:07:29 np0005539564 systemd[1]: Started libpod-conmon-2f251e8b41cc7bf66219b70ce6b34f88dca98cbd2db393cfac7d7518b99a9596.scope.
Nov 29 03:07:29 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:07:29 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ebcc21189444dcc361af3814278be9ded242004de44c35e7a99b5f29f70f5cb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:29 np0005539564 podman[256933]: 2025-11-29 08:07:29.256481605 +0000 UTC m=+0.189875153 container init 2f251e8b41cc7bf66219b70ce6b34f88dca98cbd2db393cfac7d7518b99a9596 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41c0b1c4-2921-43cc-8517-bb7bf56a02a6, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:07:29 np0005539564 podman[256933]: 2025-11-29 08:07:29.266377531 +0000 UTC m=+0.199771089 container start 2f251e8b41cc7bf66219b70ce6b34f88dca98cbd2db393cfac7d7518b99a9596 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41c0b1c4-2921-43cc-8517-bb7bf56a02a6, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:07:29 np0005539564 neutron-haproxy-ovnmeta-41c0b1c4-2921-43cc-8517-bb7bf56a02a6[256948]: [NOTICE]   (256952) : New worker (256954) forked
Nov 29 03:07:29 np0005539564 neutron-haproxy-ovnmeta-41c0b1c4-2921-43cc-8517-bb7bf56a02a6[256948]: [NOTICE]   (256952) : Loading success.
Nov 29 03:07:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:29.333 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:07:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.397 226310 DEBUG oslo_concurrency.lockutils [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Acquiring lock "13ac8506-e387-462d-a5ea-7f44a5e08994" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:29.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.398 226310 DEBUG oslo_concurrency.lockutils [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.399 226310 DEBUG oslo_concurrency.lockutils [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Acquiring lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.399 226310 DEBUG oslo_concurrency.lockutils [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.400 226310 DEBUG oslo_concurrency.lockutils [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.401 226310 INFO nova.compute.manager [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Terminating instance#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.403 226310 DEBUG nova.compute.manager [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:07:29 np0005539564 kernel: tap0860edd1-f2 (unregistering): left promiscuous mode
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.449 226310 DEBUG nova.compute.manager [req-a1c04c15-23f3-4f4c-bee0-275ee994677e req-e66d180c-b27a-426d-b58b-fab1e13f6399 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Received event network-vif-plugged-67d8d30d-6a2f-4232-8d4c-2605fda704a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.451 226310 DEBUG oslo_concurrency.lockutils [req-a1c04c15-23f3-4f4c-bee0-275ee994677e req-e66d180c-b27a-426d-b58b-fab1e13f6399 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.453 226310 DEBUG oslo_concurrency.lockutils [req-a1c04c15-23f3-4f4c-bee0-275ee994677e req-e66d180c-b27a-426d-b58b-fab1e13f6399 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.453 226310 DEBUG oslo_concurrency.lockutils [req-a1c04c15-23f3-4f4c-bee0-275ee994677e req-e66d180c-b27a-426d-b58b-fab1e13f6399 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:29 np0005539564 NetworkManager[48997]: <info>  [1764403649.4543] device (tap0860edd1-f2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.454 226310 DEBUG nova.compute.manager [req-a1c04c15-23f3-4f4c-bee0-275ee994677e req-e66d180c-b27a-426d-b58b-fab1e13f6399 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] No waiting events found dispatching network-vif-plugged-67d8d30d-6a2f-4232-8d4c-2605fda704a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.454 226310 WARNING nova.compute.manager [req-a1c04c15-23f3-4f4c-bee0-275ee994677e req-e66d180c-b27a-426d-b58b-fab1e13f6399 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Received unexpected event network-vif-plugged-67d8d30d-6a2f-4232-8d4c-2605fda704a1 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:07:29 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:29Z|00266|binding|INFO|Releasing lport 0860edd1-f244-48a4-b41b-54ead40fd453 from this chassis (sb_readonly=0)
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.465 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:29 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:29Z|00267|binding|INFO|Setting lport 0860edd1-f244-48a4-b41b-54ead40fd453 down in Southbound
Nov 29 03:07:29 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:29Z|00268|binding|INFO|Removing iface tap0860edd1-f2 ovn-installed in OVS
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.469 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:29 np0005539564 kernel: tap67d8d30d-6a (unregistering): left promiscuous mode
Nov 29 03:07:29 np0005539564 NetworkManager[48997]: <info>  [1764403649.4766] device (tap67d8d30d-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:07:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:29.477 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:e5:00 10.100.0.167'], port_security=['fa:16:3e:cd:e5:00 10.100.0.167'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.167/24', 'neutron:device_id': '13ac8506-e387-462d-a5ea-7f44a5e08994', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-701efe72-3dbf-4709-9b1f-12b18fde4750', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fc18ed0bcfe45d99b2965a6745bb628', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cac99d96-ef5a-494d-8c9f-2ce7162e2cd2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=73075667-2556-4258-8653-baacec20b314, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=0860edd1-f244-48a4-b41b-54ead40fd453) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:07:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:29.479 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 0860edd1-f244-48a4-b41b-54ead40fd453 in datapath 701efe72-3dbf-4709-9b1f-12b18fde4750 unbound from our chassis#033[00m
Nov 29 03:07:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:29.484 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 701efe72-3dbf-4709-9b1f-12b18fde4750, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:07:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:29.485 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ab11cc4c-24c0-4481-90a5-2892132337bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:29.486 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-701efe72-3dbf-4709-9b1f-12b18fde4750 namespace which is not needed anymore#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.505 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:29 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:29Z|00269|binding|INFO|Releasing lport 67d8d30d-6a2f-4232-8d4c-2605fda704a1 from this chassis (sb_readonly=0)
Nov 29 03:07:29 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:29Z|00270|binding|INFO|Setting lport 67d8d30d-6a2f-4232-8d4c-2605fda704a1 down in Southbound
Nov 29 03:07:29 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:29Z|00271|binding|INFO|Removing iface tap67d8d30d-6a ovn-installed in OVS
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.516 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.520 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:29.530 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:1b:96 10.100.1.236'], port_security=['fa:16:3e:5c:1b:96 10.100.1.236'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.236/24', 'neutron:device_id': '13ac8506-e387-462d-a5ea-7f44a5e08994', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41c0b1c4-2921-43cc-8517-bb7bf56a02a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fc18ed0bcfe45d99b2965a6745bb628', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cac99d96-ef5a-494d-8c9f-2ce7162e2cd2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5c11960-77e8-4615-a0c4-34021e41e6d2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=67d8d30d-6a2f-4232-8d4c-2605fda704a1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.532 226310 DEBUG nova.compute.manager [req-c3a38dea-a4e0-4348-958c-75868d71c8c3 req-60c2362d-bf0b-41c9-82a4-64e71539f306 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Received event network-vif-plugged-0860edd1-f244-48a4-b41b-54ead40fd453 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.533 226310 DEBUG oslo_concurrency.lockutils [req-c3a38dea-a4e0-4348-958c-75868d71c8c3 req-60c2362d-bf0b-41c9-82a4-64e71539f306 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.533 226310 DEBUG oslo_concurrency.lockutils [req-c3a38dea-a4e0-4348-958c-75868d71c8c3 req-60c2362d-bf0b-41c9-82a4-64e71539f306 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.534 226310 DEBUG oslo_concurrency.lockutils [req-c3a38dea-a4e0-4348-958c-75868d71c8c3 req-60c2362d-bf0b-41c9-82a4-64e71539f306 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.534 226310 DEBUG nova.compute.manager [req-c3a38dea-a4e0-4348-958c-75868d71c8c3 req-60c2362d-bf0b-41c9-82a4-64e71539f306 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] No waiting events found dispatching network-vif-plugged-0860edd1-f244-48a4-b41b-54ead40fd453 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.535 226310 WARNING nova.compute.manager [req-c3a38dea-a4e0-4348-958c-75868d71c8c3 req-60c2362d-bf0b-41c9-82a4-64e71539f306 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Received unexpected event network-vif-plugged-0860edd1-f244-48a4-b41b-54ead40fd453 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:07:29 np0005539564 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d00000054.scope: Deactivated successfully.
Nov 29 03:07:29 np0005539564 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d00000054.scope: Consumed 2.471s CPU time.
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.560 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:29 np0005539564 systemd-machined[190128]: Machine qemu-35-instance-00000054 terminated.
Nov 29 03:07:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:29.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:29 np0005539564 NetworkManager[48997]: <info>  [1764403649.6341] manager: (tap0860edd1-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/139)
Nov 29 03:07:29 np0005539564 NetworkManager[48997]: <info>  [1764403649.6515] manager: (tap67d8d30d-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/140)
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.670 226310 INFO nova.virt.libvirt.driver [-] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Instance destroyed successfully.#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.671 226310 DEBUG nova.objects.instance [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lazy-loading 'resources' on Instance uuid 13ac8506-e387-462d-a5ea-7f44a5e08994 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.689 226310 DEBUG nova.virt.libvirt.vif [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:07:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-514053367',display_name='tempest-ServersTestMultiNic-server-514053367',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-514053367',id=84,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3fc18ed0bcfe45d99b2965a6745bb628',ramdisk_id='',reservation_id='r-dqvofqxn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1863571577',owner_user_name='tempest-ServersTestMultiNic-1863571577-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:07:27Z,user_data=None,user_id='484a7cf7f6cc49de97903a4efa4db0a5',uuid=13ac8506-e387-462d-a5ea-7f44a5e08994,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0860edd1-f244-48a4-b41b-54ead40fd453", "address": "fa:16:3e:cd:e5:00", "network": {"id": "701efe72-3dbf-4709-9b1f-12b18fde4750", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1063142837", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0860edd1-f2", "ovs_interfaceid": "0860edd1-f244-48a4-b41b-54ead40fd453", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.690 226310 DEBUG nova.network.os_vif_util [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converting VIF {"id": "0860edd1-f244-48a4-b41b-54ead40fd453", "address": "fa:16:3e:cd:e5:00", "network": {"id": "701efe72-3dbf-4709-9b1f-12b18fde4750", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1063142837", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0860edd1-f2", "ovs_interfaceid": "0860edd1-f244-48a4-b41b-54ead40fd453", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.691 226310 DEBUG nova.network.os_vif_util [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:e5:00,bridge_name='br-int',has_traffic_filtering=True,id=0860edd1-f244-48a4-b41b-54ead40fd453,network=Network(701efe72-3dbf-4709-9b1f-12b18fde4750),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0860edd1-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.691 226310 DEBUG os_vif [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:e5:00,bridge_name='br-int',has_traffic_filtering=True,id=0860edd1-f244-48a4-b41b-54ead40fd453,network=Network(701efe72-3dbf-4709-9b1f-12b18fde4750),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0860edd1-f2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.693 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.694 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0860edd1-f2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.742 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.744 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:07:29 np0005539564 neutron-haproxy-ovnmeta-701efe72-3dbf-4709-9b1f-12b18fde4750[256874]: [NOTICE]   (256878) : haproxy version is 2.8.14-c23fe91
Nov 29 03:07:29 np0005539564 neutron-haproxy-ovnmeta-701efe72-3dbf-4709-9b1f-12b18fde4750[256874]: [NOTICE]   (256878) : path to executable is /usr/sbin/haproxy
Nov 29 03:07:29 np0005539564 neutron-haproxy-ovnmeta-701efe72-3dbf-4709-9b1f-12b18fde4750[256874]: [WARNING]  (256878) : Exiting Master process...
Nov 29 03:07:29 np0005539564 neutron-haproxy-ovnmeta-701efe72-3dbf-4709-9b1f-12b18fde4750[256874]: [ALERT]    (256878) : Current worker (256880) exited with code 143 (Terminated)
Nov 29 03:07:29 np0005539564 neutron-haproxy-ovnmeta-701efe72-3dbf-4709-9b1f-12b18fde4750[256874]: [WARNING]  (256878) : All workers exited. Exiting... (0)
Nov 29 03:07:29 np0005539564 systemd[1]: libpod-367d392268840612eaf33952fa956f365247e7baf4c81eff1de9e38d3c3ceddc.scope: Deactivated successfully.
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.749 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.752 226310 INFO os_vif [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:e5:00,bridge_name='br-int',has_traffic_filtering=True,id=0860edd1-f244-48a4-b41b-54ead40fd453,network=Network(701efe72-3dbf-4709-9b1f-12b18fde4750),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0860edd1-f2')#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.753 226310 DEBUG nova.virt.libvirt.vif [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:07:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-514053367',display_name='tempest-ServersTestMultiNic-server-514053367',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-514053367',id=84,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3fc18ed0bcfe45d99b2965a6745bb628',ramdisk_id='',reservation_id='r-dqvofqxn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1863571577',owner_user_name='tempest-ServersTestMultiNic-1863571577-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:07:27Z,user_data=None,user_id='484a7cf7f6cc49de97903a4efa4db0a5',uuid=13ac8506-e387-462d-a5ea-7f44a5e08994,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "67d8d30d-6a2f-4232-8d4c-2605fda704a1", "address": "fa:16:3e:5c:1b:96", "network": {"id": "41c0b1c4-2921-43cc-8517-bb7bf56a02a6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-238160308", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.236", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67d8d30d-6a", "ovs_interfaceid": "67d8d30d-6a2f-4232-8d4c-2605fda704a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.754 226310 DEBUG nova.network.os_vif_util [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converting VIF {"id": "67d8d30d-6a2f-4232-8d4c-2605fda704a1", "address": "fa:16:3e:5c:1b:96", "network": {"id": "41c0b1c4-2921-43cc-8517-bb7bf56a02a6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-238160308", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.236", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67d8d30d-6a", "ovs_interfaceid": "67d8d30d-6a2f-4232-8d4c-2605fda704a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.754 226310 DEBUG nova.network.os_vif_util [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:1b:96,bridge_name='br-int',has_traffic_filtering=True,id=67d8d30d-6a2f-4232-8d4c-2605fda704a1,network=Network(41c0b1c4-2921-43cc-8517-bb7bf56a02a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67d8d30d-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.755 226310 DEBUG os_vif [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:1b:96,bridge_name='br-int',has_traffic_filtering=True,id=67d8d30d-6a2f-4232-8d4c-2605fda704a1,network=Network(41c0b1c4-2921-43cc-8517-bb7bf56a02a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67d8d30d-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.756 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:29 np0005539564 podman[256992]: 2025-11-29 08:07:29.756387319 +0000 UTC m=+0.106989847 container died 367d392268840612eaf33952fa956f365247e7baf4c81eff1de9e38d3c3ceddc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-701efe72-3dbf-4709-9b1f-12b18fde4750, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.756 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67d8d30d-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.758 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.761 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.764 226310 INFO os_vif [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:1b:96,bridge_name='br-int',has_traffic_filtering=True,id=67d8d30d-6a2f-4232-8d4c-2605fda704a1,network=Network(41c0b1c4-2921-43cc-8517-bb7bf56a02a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67d8d30d-6a')#033[00m
Nov 29 03:07:29 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-367d392268840612eaf33952fa956f365247e7baf4c81eff1de9e38d3c3ceddc-userdata-shm.mount: Deactivated successfully.
Nov 29 03:07:29 np0005539564 systemd[1]: var-lib-containers-storage-overlay-7ff8ea7df95a2a136387b6e376ec6b23a213f56c2c65c9cc940955ddf19c3e71-merged.mount: Deactivated successfully.
Nov 29 03:07:29 np0005539564 podman[256992]: 2025-11-29 08:07:29.823798888 +0000 UTC m=+0.174401386 container cleanup 367d392268840612eaf33952fa956f365247e7baf4c81eff1de9e38d3c3ceddc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-701efe72-3dbf-4709-9b1f-12b18fde4750, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:07:29 np0005539564 systemd[1]: libpod-conmon-367d392268840612eaf33952fa956f365247e7baf4c81eff1de9e38d3c3ceddc.scope: Deactivated successfully.
Nov 29 03:07:29 np0005539564 podman[257057]: 2025-11-29 08:07:29.913182949 +0000 UTC m=+0.054350397 container remove 367d392268840612eaf33952fa956f365247e7baf4c81eff1de9e38d3c3ceddc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-701efe72-3dbf-4709-9b1f-12b18fde4750, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:07:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:29.923 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b25da202-58b6-456d-9d0f-605ee12ad6b8]: (4, ('Sat Nov 29 08:07:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-701efe72-3dbf-4709-9b1f-12b18fde4750 (367d392268840612eaf33952fa956f365247e7baf4c81eff1de9e38d3c3ceddc)\n367d392268840612eaf33952fa956f365247e7baf4c81eff1de9e38d3c3ceddc\nSat Nov 29 08:07:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-701efe72-3dbf-4709-9b1f-12b18fde4750 (367d392268840612eaf33952fa956f365247e7baf4c81eff1de9e38d3c3ceddc)\n367d392268840612eaf33952fa956f365247e7baf4c81eff1de9e38d3c3ceddc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:29.925 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5b61a08c-4bc4-40a2-b354-b7ecdcb63f6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:29.926 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap701efe72-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.929 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:29 np0005539564 kernel: tap701efe72-30: left promiscuous mode
Nov 29 03:07:29 np0005539564 nova_compute[226295]: 2025-11-29 08:07:29.958 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:29.962 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[515631c8-9ebc-4d81-92e6-9f972f21385d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:29.982 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ece8dd-aad6-4a1b-b0b6-5d099a634a91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:29.984 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[53f072d1-cad1-4b16-b22c-dd0817b15a17]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:30.011 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[88a2b69f-9e05-40b0-a724-8e8705e5e4e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662172, 'reachable_time': 39153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257070, 'error': None, 'target': 'ovnmeta-701efe72-3dbf-4709-9b1f-12b18fde4750', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:30.016 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-701efe72-3dbf-4709-9b1f-12b18fde4750 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:07:30 np0005539564 systemd[1]: run-netns-ovnmeta\x2d701efe72\x2d3dbf\x2d4709\x2d9b1f\x2d12b18fde4750.mount: Deactivated successfully.
Nov 29 03:07:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:30.016 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[a5c10dff-4d1b-4b86-af98-2c314d6caa09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:30.019 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 67d8d30d-6a2f-4232-8d4c-2605fda704a1 in datapath 41c0b1c4-2921-43cc-8517-bb7bf56a02a6 unbound from our chassis#033[00m
Nov 29 03:07:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:30.023 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41c0b1c4-2921-43cc-8517-bb7bf56a02a6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:07:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:30.024 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2ddfc4c4-67a5-4389-b661-dc50274e9793]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:30.025 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-41c0b1c4-2921-43cc-8517-bb7bf56a02a6 namespace which is not needed anymore#033[00m
Nov 29 03:07:30 np0005539564 neutron-haproxy-ovnmeta-41c0b1c4-2921-43cc-8517-bb7bf56a02a6[256948]: [NOTICE]   (256952) : haproxy version is 2.8.14-c23fe91
Nov 29 03:07:30 np0005539564 neutron-haproxy-ovnmeta-41c0b1c4-2921-43cc-8517-bb7bf56a02a6[256948]: [NOTICE]   (256952) : path to executable is /usr/sbin/haproxy
Nov 29 03:07:30 np0005539564 neutron-haproxy-ovnmeta-41c0b1c4-2921-43cc-8517-bb7bf56a02a6[256948]: [WARNING]  (256952) : Exiting Master process...
Nov 29 03:07:30 np0005539564 neutron-haproxy-ovnmeta-41c0b1c4-2921-43cc-8517-bb7bf56a02a6[256948]: [WARNING]  (256952) : Exiting Master process...
Nov 29 03:07:30 np0005539564 neutron-haproxy-ovnmeta-41c0b1c4-2921-43cc-8517-bb7bf56a02a6[256948]: [ALERT]    (256952) : Current worker (256954) exited with code 143 (Terminated)
Nov 29 03:07:30 np0005539564 neutron-haproxy-ovnmeta-41c0b1c4-2921-43cc-8517-bb7bf56a02a6[256948]: [WARNING]  (256952) : All workers exited. Exiting... (0)
Nov 29 03:07:30 np0005539564 systemd[1]: libpod-2f251e8b41cc7bf66219b70ce6b34f88dca98cbd2db393cfac7d7518b99a9596.scope: Deactivated successfully.
Nov 29 03:07:30 np0005539564 podman[257093]: 2025-11-29 08:07:30.203817959 +0000 UTC m=+0.058729016 container died 2f251e8b41cc7bf66219b70ce6b34f88dca98cbd2db393cfac7d7518b99a9596 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41c0b1c4-2921-43cc-8517-bb7bf56a02a6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:07:30 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2f251e8b41cc7bf66219b70ce6b34f88dca98cbd2db393cfac7d7518b99a9596-userdata-shm.mount: Deactivated successfully.
Nov 29 03:07:30 np0005539564 systemd[1]: var-lib-containers-storage-overlay-1ebcc21189444dcc361af3814278be9ded242004de44c35e7a99b5f29f70f5cb-merged.mount: Deactivated successfully.
Nov 29 03:07:30 np0005539564 podman[257093]: 2025-11-29 08:07:30.265161213 +0000 UTC m=+0.120072210 container cleanup 2f251e8b41cc7bf66219b70ce6b34f88dca98cbd2db393cfac7d7518b99a9596 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41c0b1c4-2921-43cc-8517-bb7bf56a02a6, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:07:30 np0005539564 systemd[1]: libpod-conmon-2f251e8b41cc7bf66219b70ce6b34f88dca98cbd2db393cfac7d7518b99a9596.scope: Deactivated successfully.
Nov 29 03:07:30 np0005539564 nova_compute[226295]: 2025-11-29 08:07:30.313 226310 INFO nova.virt.libvirt.driver [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Deleting instance files /var/lib/nova/instances/13ac8506-e387-462d-a5ea-7f44a5e08994_del#033[00m
Nov 29 03:07:30 np0005539564 nova_compute[226295]: 2025-11-29 08:07:30.314 226310 INFO nova.virt.libvirt.driver [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Deletion of /var/lib/nova/instances/13ac8506-e387-462d-a5ea-7f44a5e08994_del complete#033[00m
Nov 29 03:07:30 np0005539564 podman[257124]: 2025-11-29 08:07:30.346783125 +0000 UTC m=+0.047813561 container remove 2f251e8b41cc7bf66219b70ce6b34f88dca98cbd2db393cfac7d7518b99a9596 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41c0b1c4-2921-43cc-8517-bb7bf56a02a6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:07:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:30.353 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1c4f2b3a-2b11-4b58-a50c-15b2f687e4cb]: (4, ('Sat Nov 29 08:07:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-41c0b1c4-2921-43cc-8517-bb7bf56a02a6 (2f251e8b41cc7bf66219b70ce6b34f88dca98cbd2db393cfac7d7518b99a9596)\n2f251e8b41cc7bf66219b70ce6b34f88dca98cbd2db393cfac7d7518b99a9596\nSat Nov 29 08:07:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-41c0b1c4-2921-43cc-8517-bb7bf56a02a6 (2f251e8b41cc7bf66219b70ce6b34f88dca98cbd2db393cfac7d7518b99a9596)\n2f251e8b41cc7bf66219b70ce6b34f88dca98cbd2db393cfac7d7518b99a9596\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:30.358 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0442a372-e25d-40d8-b28f-eb6968471379]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:30.359 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41c0b1c4-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:30 np0005539564 kernel: tap41c0b1c4-20: left promiscuous mode
Nov 29 03:07:30 np0005539564 nova_compute[226295]: 2025-11-29 08:07:30.362 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:30 np0005539564 nova_compute[226295]: 2025-11-29 08:07:30.376 226310 INFO nova.compute.manager [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Took 0.97 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:07:30 np0005539564 nova_compute[226295]: 2025-11-29 08:07:30.377 226310 DEBUG oslo.service.loopingcall [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:07:30 np0005539564 nova_compute[226295]: 2025-11-29 08:07:30.378 226310 DEBUG nova.compute.manager [-] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:07:30 np0005539564 nova_compute[226295]: 2025-11-29 08:07:30.378 226310 DEBUG nova.network.neutron [-] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:07:30 np0005539564 nova_compute[226295]: 2025-11-29 08:07:30.383 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:30.385 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5d5f8fba-9119-4c8f-b371-a4c363169e83]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:30.400 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2a1f616d-48be-493c-b63a-10c43bd6d186]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:30.402 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e609c375-29fb-43e6-985c-b260b364ba6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:30.421 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1e0a9208-aaee-42b4-8d18-bb03f99b863d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662296, 'reachable_time': 26868, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257138, 'error': None, 'target': 'ovnmeta-41c0b1c4-2921-43cc-8517-bb7bf56a02a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:30.423 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-41c0b1c4-2921-43cc-8517-bb7bf56a02a6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:07:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:30.424 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[e1b4b535-c189-4030-b3d2-c855af18e371]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:30 np0005539564 systemd[1]: run-netns-ovnmeta\x2d41c0b1c4\x2d2921\x2d43cc\x2d8517\x2dbb7bf56a02a6.mount: Deactivated successfully.
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.122 226310 DEBUG nova.compute.manager [req-d17a2efe-b2f0-4dab-8764-fba1540b3400 req-21c1f771-9a18-4fc3-a49f-9d4cea86daf4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Received event network-vif-deleted-0860edd1-f244-48a4-b41b-54ead40fd453 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.123 226310 INFO nova.compute.manager [req-d17a2efe-b2f0-4dab-8764-fba1540b3400 req-21c1f771-9a18-4fc3-a49f-9d4cea86daf4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Neutron deleted interface 0860edd1-f244-48a4-b41b-54ead40fd453; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.124 226310 DEBUG nova.network.neutron [req-d17a2efe-b2f0-4dab-8764-fba1540b3400 req-21c1f771-9a18-4fc3-a49f-9d4cea86daf4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Updating instance_info_cache with network_info: [{"id": "67d8d30d-6a2f-4232-8d4c-2605fda704a1", "address": "fa:16:3e:5c:1b:96", "network": {"id": "41c0b1c4-2921-43cc-8517-bb7bf56a02a6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-238160308", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.236", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fc18ed0bcfe45d99b2965a6745bb628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67d8d30d-6a", "ovs_interfaceid": "67d8d30d-6a2f-4232-8d4c-2605fda704a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.159 226310 DEBUG nova.compute.manager [req-d17a2efe-b2f0-4dab-8764-fba1540b3400 req-21c1f771-9a18-4fc3-a49f-9d4cea86daf4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Detach interface failed, port_id=0860edd1-f244-48a4-b41b-54ead40fd453, reason: Instance 13ac8506-e387-462d-a5ea-7f44a5e08994 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:07:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:31.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.549 226310 DEBUG nova.compute.manager [req-a6e46421-c9fb-4d05-bca0-1050b5799f35 req-21b7ebdd-69b9-4989-a560-2546e8ddc14d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Received event network-vif-unplugged-67d8d30d-6a2f-4232-8d4c-2605fda704a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.549 226310 DEBUG oslo_concurrency.lockutils [req-a6e46421-c9fb-4d05-bca0-1050b5799f35 req-21b7ebdd-69b9-4989-a560-2546e8ddc14d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.550 226310 DEBUG oslo_concurrency.lockutils [req-a6e46421-c9fb-4d05-bca0-1050b5799f35 req-21b7ebdd-69b9-4989-a560-2546e8ddc14d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.550 226310 DEBUG oslo_concurrency.lockutils [req-a6e46421-c9fb-4d05-bca0-1050b5799f35 req-21b7ebdd-69b9-4989-a560-2546e8ddc14d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.551 226310 DEBUG nova.compute.manager [req-a6e46421-c9fb-4d05-bca0-1050b5799f35 req-21b7ebdd-69b9-4989-a560-2546e8ddc14d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] No waiting events found dispatching network-vif-unplugged-67d8d30d-6a2f-4232-8d4c-2605fda704a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.551 226310 DEBUG nova.compute.manager [req-a6e46421-c9fb-4d05-bca0-1050b5799f35 req-21b7ebdd-69b9-4989-a560-2546e8ddc14d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Received event network-vif-unplugged-67d8d30d-6a2f-4232-8d4c-2605fda704a1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.552 226310 DEBUG nova.compute.manager [req-a6e46421-c9fb-4d05-bca0-1050b5799f35 req-21b7ebdd-69b9-4989-a560-2546e8ddc14d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Received event network-vif-plugged-67d8d30d-6a2f-4232-8d4c-2605fda704a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.552 226310 DEBUG oslo_concurrency.lockutils [req-a6e46421-c9fb-4d05-bca0-1050b5799f35 req-21b7ebdd-69b9-4989-a560-2546e8ddc14d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.552 226310 DEBUG oslo_concurrency.lockutils [req-a6e46421-c9fb-4d05-bca0-1050b5799f35 req-21b7ebdd-69b9-4989-a560-2546e8ddc14d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.553 226310 DEBUG oslo_concurrency.lockutils [req-a6e46421-c9fb-4d05-bca0-1050b5799f35 req-21b7ebdd-69b9-4989-a560-2546e8ddc14d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.553 226310 DEBUG nova.compute.manager [req-a6e46421-c9fb-4d05-bca0-1050b5799f35 req-21b7ebdd-69b9-4989-a560-2546e8ddc14d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] No waiting events found dispatching network-vif-plugged-67d8d30d-6a2f-4232-8d4c-2605fda704a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.553 226310 WARNING nova.compute.manager [req-a6e46421-c9fb-4d05-bca0-1050b5799f35 req-21b7ebdd-69b9-4989-a560-2546e8ddc14d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Received unexpected event network-vif-plugged-67d8d30d-6a2f-4232-8d4c-2605fda704a1 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.560 226310 DEBUG nova.network.neutron [-] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.578 226310 INFO nova.compute.manager [-] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Took 1.20 seconds to deallocate network for instance.#033[00m
Nov 29 03:07:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:31.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.644 226310 DEBUG oslo_concurrency.lockutils [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.645 226310 DEBUG oslo_concurrency.lockutils [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.673 226310 DEBUG nova.compute.manager [req-b0869071-bf5e-4391-8a44-6ed10f8357cf req-1efe78f9-62da-4cca-b971-b70b3a8c5892 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Received event network-vif-unplugged-0860edd1-f244-48a4-b41b-54ead40fd453 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.673 226310 DEBUG oslo_concurrency.lockutils [req-b0869071-bf5e-4391-8a44-6ed10f8357cf req-1efe78f9-62da-4cca-b971-b70b3a8c5892 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.674 226310 DEBUG oslo_concurrency.lockutils [req-b0869071-bf5e-4391-8a44-6ed10f8357cf req-1efe78f9-62da-4cca-b971-b70b3a8c5892 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.674 226310 DEBUG oslo_concurrency.lockutils [req-b0869071-bf5e-4391-8a44-6ed10f8357cf req-1efe78f9-62da-4cca-b971-b70b3a8c5892 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.674 226310 DEBUG nova.compute.manager [req-b0869071-bf5e-4391-8a44-6ed10f8357cf req-1efe78f9-62da-4cca-b971-b70b3a8c5892 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] No waiting events found dispatching network-vif-unplugged-0860edd1-f244-48a4-b41b-54ead40fd453 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.675 226310 WARNING nova.compute.manager [req-b0869071-bf5e-4391-8a44-6ed10f8357cf req-1efe78f9-62da-4cca-b971-b70b3a8c5892 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Received unexpected event network-vif-unplugged-0860edd1-f244-48a4-b41b-54ead40fd453 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.675 226310 DEBUG nova.compute.manager [req-b0869071-bf5e-4391-8a44-6ed10f8357cf req-1efe78f9-62da-4cca-b971-b70b3a8c5892 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Received event network-vif-plugged-0860edd1-f244-48a4-b41b-54ead40fd453 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.675 226310 DEBUG oslo_concurrency.lockutils [req-b0869071-bf5e-4391-8a44-6ed10f8357cf req-1efe78f9-62da-4cca-b971-b70b3a8c5892 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.676 226310 DEBUG oslo_concurrency.lockutils [req-b0869071-bf5e-4391-8a44-6ed10f8357cf req-1efe78f9-62da-4cca-b971-b70b3a8c5892 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.676 226310 DEBUG oslo_concurrency.lockutils [req-b0869071-bf5e-4391-8a44-6ed10f8357cf req-1efe78f9-62da-4cca-b971-b70b3a8c5892 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.676 226310 DEBUG nova.compute.manager [req-b0869071-bf5e-4391-8a44-6ed10f8357cf req-1efe78f9-62da-4cca-b971-b70b3a8c5892 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] No waiting events found dispatching network-vif-plugged-0860edd1-f244-48a4-b41b-54ead40fd453 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.677 226310 WARNING nova.compute.manager [req-b0869071-bf5e-4391-8a44-6ed10f8357cf req-1efe78f9-62da-4cca-b971-b70b3a8c5892 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Received unexpected event network-vif-plugged-0860edd1-f244-48a4-b41b-54ead40fd453 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.724 226310 DEBUG oslo_concurrency.processutils [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:31 np0005539564 nova_compute[226295]: 2025-11-29 08:07:31.765 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:07:32 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1174842904' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:07:32 np0005539564 nova_compute[226295]: 2025-11-29 08:07:32.196 226310 DEBUG oslo_concurrency.processutils [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:32 np0005539564 nova_compute[226295]: 2025-11-29 08:07:32.207 226310 DEBUG nova.compute.provider_tree [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:07:32 np0005539564 nova_compute[226295]: 2025-11-29 08:07:32.341 226310 DEBUG nova.scheduler.client.report [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:07:32 np0005539564 nova_compute[226295]: 2025-11-29 08:07:32.436 226310 DEBUG oslo_concurrency.lockutils [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:32 np0005539564 nova_compute[226295]: 2025-11-29 08:07:32.544 226310 INFO nova.scheduler.client.report [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Deleted allocations for instance 13ac8506-e387-462d-a5ea-7f44a5e08994#033[00m
Nov 29 03:07:32 np0005539564 nova_compute[226295]: 2025-11-29 08:07:32.815 226310 DEBUG oslo_concurrency.lockutils [None req-affdcd85-1416-499a-9d68-1435a1a38124 484a7cf7f6cc49de97903a4efa4db0a5 3fc18ed0bcfe45d99b2965a6745bb628 - - default default] Lock "13ac8506-e387-462d-a5ea-7f44a5e08994" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.417s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:32 np0005539564 nova_compute[226295]: 2025-11-29 08:07:32.892 226310 DEBUG nova.compute.manager [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Nov 29 03:07:33 np0005539564 nova_compute[226295]: 2025-11-29 08:07:33.038 226310 DEBUG oslo_concurrency.lockutils [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:33 np0005539564 nova_compute[226295]: 2025-11-29 08:07:33.039 226310 DEBUG oslo_concurrency.lockutils [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:33 np0005539564 nova_compute[226295]: 2025-11-29 08:07:33.080 226310 DEBUG nova.objects.instance [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'pci_requests' on Instance uuid a28f7dd6-9c8c-46f4-9ce0-7d40194d9749 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:33 np0005539564 nova_compute[226295]: 2025-11-29 08:07:33.116 226310 DEBUG nova.virt.hardware [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:07:33 np0005539564 nova_compute[226295]: 2025-11-29 08:07:33.116 226310 INFO nova.compute.claims [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:07:33 np0005539564 nova_compute[226295]: 2025-11-29 08:07:33.117 226310 DEBUG nova.objects.instance [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'resources' on Instance uuid a28f7dd6-9c8c-46f4-9ce0-7d40194d9749 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:33 np0005539564 nova_compute[226295]: 2025-11-29 08:07:33.132 226310 DEBUG nova.objects.instance [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'pci_devices' on Instance uuid a28f7dd6-9c8c-46f4-9ce0-7d40194d9749 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:33 np0005539564 nova_compute[226295]: 2025-11-29 08:07:33.178 226310 INFO nova.compute.resource_tracker [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Updating resource usage from migration 475e32d2-8c8f-4918-aa47-f1847e144c60#033[00m
Nov 29 03:07:33 np0005539564 nova_compute[226295]: 2025-11-29 08:07:33.178 226310 DEBUG nova.compute.resource_tracker [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Starting to track incoming migration 475e32d2-8c8f-4918-aa47-f1847e144c60 with flavor a3833334-6e3e-4b1c-bf74-bdd1055a9e9b _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 03:07:33 np0005539564 nova_compute[226295]: 2025-11-29 08:07:33.249 226310 DEBUG oslo_concurrency.processutils [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:33 np0005539564 nova_compute[226295]: 2025-11-29 08:07:33.299 226310 DEBUG nova.compute.manager [req-4846efc9-c996-4dc2-9d59-13890822e77e req-8067e708-ce55-41c7-9524-d25dc3028f5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Received event network-vif-deleted-67d8d30d-6a2f-4232-8d4c-2605fda704a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:33.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:33.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:33 np0005539564 nova_compute[226295]: 2025-11-29 08:07:33.602 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:34 np0005539564 nova_compute[226295]: 2025-11-29 08:07:34.261 226310 DEBUG oslo_concurrency.lockutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Acquiring lock "14addc5e-27d3-46d3-a93f-b22b3f400873" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:34 np0005539564 nova_compute[226295]: 2025-11-29 08:07:34.261 226310 DEBUG oslo_concurrency.lockutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:34 np0005539564 nova_compute[226295]: 2025-11-29 08:07:34.289 226310 DEBUG nova.compute.manager [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:07:34 np0005539564 nova_compute[226295]: 2025-11-29 08:07:34.394 226310 DEBUG oslo_concurrency.lockutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:34 np0005539564 nova_compute[226295]: 2025-11-29 08:07:34.759 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:07:34 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/49730514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:07:34 np0005539564 nova_compute[226295]: 2025-11-29 08:07:34.913 226310 DEBUG oslo_concurrency.processutils [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.664s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:34 np0005539564 nova_compute[226295]: 2025-11-29 08:07:34.919 226310 DEBUG nova.compute.provider_tree [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:07:34 np0005539564 nova_compute[226295]: 2025-11-29 08:07:34.948 226310 DEBUG nova.scheduler.client.report [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:07:34 np0005539564 nova_compute[226295]: 2025-11-29 08:07:34.985 226310 DEBUG oslo_concurrency.lockutils [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.946s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:34 np0005539564 nova_compute[226295]: 2025-11-29 08:07:34.985 226310 INFO nova.compute.manager [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Migrating#033[00m
Nov 29 03:07:34 np0005539564 nova_compute[226295]: 2025-11-29 08:07:34.991 226310 DEBUG oslo_concurrency.lockutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:34 np0005539564 nova_compute[226295]: 2025-11-29 08:07:34.998 226310 DEBUG nova.virt.hardware [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:07:34 np0005539564 nova_compute[226295]: 2025-11-29 08:07:34.998 226310 INFO nova.compute.claims [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:07:35 np0005539564 nova_compute[226295]: 2025-11-29 08:07:35.157 226310 DEBUG oslo_concurrency.processutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:35.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:35.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:07:35 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1009189897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:07:35 np0005539564 nova_compute[226295]: 2025-11-29 08:07:35.705 226310 DEBUG oslo_concurrency.processutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:35 np0005539564 nova_compute[226295]: 2025-11-29 08:07:35.713 226310 DEBUG nova.compute.provider_tree [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:07:35 np0005539564 nova_compute[226295]: 2025-11-29 08:07:35.734 226310 DEBUG nova.scheduler.client.report [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:07:35 np0005539564 nova_compute[226295]: 2025-11-29 08:07:35.772 226310 DEBUG oslo_concurrency.lockutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:35 np0005539564 nova_compute[226295]: 2025-11-29 08:07:35.773 226310 DEBUG nova.compute.manager [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:07:35 np0005539564 nova_compute[226295]: 2025-11-29 08:07:35.826 226310 DEBUG nova.compute.manager [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:07:35 np0005539564 nova_compute[226295]: 2025-11-29 08:07:35.827 226310 DEBUG nova.network.neutron [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:07:35 np0005539564 nova_compute[226295]: 2025-11-29 08:07:35.851 226310 INFO nova.virt.libvirt.driver [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:07:35 np0005539564 nova_compute[226295]: 2025-11-29 08:07:35.879 226310 DEBUG nova.compute.manager [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:07:36 np0005539564 nova_compute[226295]: 2025-11-29 08:07:36.004 226310 DEBUG nova.compute.manager [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:07:36 np0005539564 nova_compute[226295]: 2025-11-29 08:07:36.006 226310 DEBUG nova.virt.libvirt.driver [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:07:36 np0005539564 nova_compute[226295]: 2025-11-29 08:07:36.006 226310 INFO nova.virt.libvirt.driver [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Creating image(s)#033[00m
Nov 29 03:07:36 np0005539564 nova_compute[226295]: 2025-11-29 08:07:36.039 226310 DEBUG nova.storage.rbd_utils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] rbd image 14addc5e-27d3-46d3-a93f-b22b3f400873_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:36 np0005539564 nova_compute[226295]: 2025-11-29 08:07:36.069 226310 DEBUG nova.storage.rbd_utils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] rbd image 14addc5e-27d3-46d3-a93f-b22b3f400873_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:36 np0005539564 nova_compute[226295]: 2025-11-29 08:07:36.094 226310 DEBUG nova.storage.rbd_utils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] rbd image 14addc5e-27d3-46d3-a93f-b22b3f400873_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:36 np0005539564 nova_compute[226295]: 2025-11-29 08:07:36.097 226310 DEBUG oslo_concurrency.processutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:36 np0005539564 nova_compute[226295]: 2025-11-29 08:07:36.123 226310 DEBUG nova.policy [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '20d37020e7484e3ead9c61a89db491b1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a6104c57e0814f16958b14707debf843', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:07:36 np0005539564 nova_compute[226295]: 2025-11-29 08:07:36.157 226310 DEBUG oslo_concurrency.processutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:36 np0005539564 nova_compute[226295]: 2025-11-29 08:07:36.159 226310 DEBUG oslo_concurrency.lockutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:36 np0005539564 nova_compute[226295]: 2025-11-29 08:07:36.160 226310 DEBUG oslo_concurrency.lockutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:36 np0005539564 nova_compute[226295]: 2025-11-29 08:07:36.160 226310 DEBUG oslo_concurrency.lockutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:36 np0005539564 nova_compute[226295]: 2025-11-29 08:07:36.194 226310 DEBUG nova.storage.rbd_utils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] rbd image 14addc5e-27d3-46d3-a93f-b22b3f400873_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:36 np0005539564 nova_compute[226295]: 2025-11-29 08:07:36.199 226310 DEBUG oslo_concurrency.processutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 14addc5e-27d3-46d3-a93f-b22b3f400873_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:36 np0005539564 nova_compute[226295]: 2025-11-29 08:07:36.730 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:37 np0005539564 systemd[1]: Created slice User Slice of UID 42436.
Nov 29 03:07:37 np0005539564 systemd[1]: Starting User Runtime Directory /run/user/42436...
Nov 29 03:07:37 np0005539564 systemd-logind[785]: New session 54 of user nova.
Nov 29 03:07:37 np0005539564 nova_compute[226295]: 2025-11-29 08:07:37.023 226310 DEBUG oslo_concurrency.processutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 14addc5e-27d3-46d3-a93f-b22b3f400873_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.824s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:37 np0005539564 systemd[1]: Finished User Runtime Directory /run/user/42436.
Nov 29 03:07:37 np0005539564 systemd[1]: Starting User Manager for UID 42436...
Nov 29 03:07:37 np0005539564 nova_compute[226295]: 2025-11-29 08:07:37.133 226310 DEBUG nova.storage.rbd_utils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] resizing rbd image 14addc5e-27d3-46d3-a93f-b22b3f400873_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:07:37 np0005539564 nova_compute[226295]: 2025-11-29 08:07:37.177 226310 DEBUG nova.network.neutron [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Successfully created port: 7fa1d4da-9208-4220-ae2e-26aada9fc93b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:07:37 np0005539564 systemd[257314]: Queued start job for default target Main User Target.
Nov 29 03:07:37 np0005539564 systemd[257314]: Created slice User Application Slice.
Nov 29 03:07:37 np0005539564 systemd[257314]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:07:37 np0005539564 systemd[257314]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 03:07:37 np0005539564 systemd[257314]: Reached target Paths.
Nov 29 03:07:37 np0005539564 systemd[257314]: Reached target Timers.
Nov 29 03:07:37 np0005539564 systemd[257314]: Starting D-Bus User Message Bus Socket...
Nov 29 03:07:37 np0005539564 systemd[257314]: Starting Create User's Volatile Files and Directories...
Nov 29 03:07:37 np0005539564 systemd[257314]: Finished Create User's Volatile Files and Directories.
Nov 29 03:07:37 np0005539564 systemd[257314]: Listening on D-Bus User Message Bus Socket.
Nov 29 03:07:37 np0005539564 systemd[257314]: Reached target Sockets.
Nov 29 03:07:37 np0005539564 systemd[257314]: Reached target Basic System.
Nov 29 03:07:37 np0005539564 systemd[257314]: Reached target Main User Target.
Nov 29 03:07:37 np0005539564 systemd[257314]: Startup finished in 158ms.
Nov 29 03:07:37 np0005539564 systemd[1]: Started User Manager for UID 42436.
Nov 29 03:07:37 np0005539564 nova_compute[226295]: 2025-11-29 08:07:37.272 226310 DEBUG nova.objects.instance [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lazy-loading 'migration_context' on Instance uuid 14addc5e-27d3-46d3-a93f-b22b3f400873 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:37 np0005539564 systemd[1]: Started Session 54 of User nova.
Nov 29 03:07:37 np0005539564 nova_compute[226295]: 2025-11-29 08:07:37.287 226310 DEBUG nova.virt.libvirt.driver [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:07:37 np0005539564 nova_compute[226295]: 2025-11-29 08:07:37.287 226310 DEBUG nova.virt.libvirt.driver [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Ensure instance console log exists: /var/lib/nova/instances/14addc5e-27d3-46d3-a93f-b22b3f400873/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:07:37 np0005539564 nova_compute[226295]: 2025-11-29 08:07:37.288 226310 DEBUG oslo_concurrency.lockutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:37 np0005539564 nova_compute[226295]: 2025-11-29 08:07:37.289 226310 DEBUG oslo_concurrency.lockutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:37 np0005539564 nova_compute[226295]: 2025-11-29 08:07:37.289 226310 DEBUG oslo_concurrency.lockutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:37 np0005539564 systemd[1]: session-54.scope: Deactivated successfully.
Nov 29 03:07:37 np0005539564 systemd-logind[785]: Session 54 logged out. Waiting for processes to exit.
Nov 29 03:07:37 np0005539564 systemd-logind[785]: Removed session 54.
Nov 29 03:07:37 np0005539564 nova_compute[226295]: 2025-11-29 08:07:37.358 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:37.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:37 np0005539564 systemd-logind[785]: New session 56 of user nova.
Nov 29 03:07:37 np0005539564 systemd[1]: Started Session 56 of User nova.
Nov 29 03:07:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:07:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:37.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:07:37 np0005539564 systemd[1]: session-56.scope: Deactivated successfully.
Nov 29 03:07:37 np0005539564 systemd-logind[785]: Session 56 logged out. Waiting for processes to exit.
Nov 29 03:07:37 np0005539564 systemd-logind[785]: Removed session 56.
Nov 29 03:07:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:38.335 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:38 np0005539564 nova_compute[226295]: 2025-11-29 08:07:38.443 226310 DEBUG nova.network.neutron [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Successfully updated port: 7fa1d4da-9208-4220-ae2e-26aada9fc93b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:07:38 np0005539564 nova_compute[226295]: 2025-11-29 08:07:38.461 226310 DEBUG oslo_concurrency.lockutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Acquiring lock "refresh_cache-14addc5e-27d3-46d3-a93f-b22b3f400873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:07:38 np0005539564 nova_compute[226295]: 2025-11-29 08:07:38.461 226310 DEBUG oslo_concurrency.lockutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Acquired lock "refresh_cache-14addc5e-27d3-46d3-a93f-b22b3f400873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:07:38 np0005539564 nova_compute[226295]: 2025-11-29 08:07:38.462 226310 DEBUG nova.network.neutron [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:07:38 np0005539564 nova_compute[226295]: 2025-11-29 08:07:38.712 226310 DEBUG nova.network.neutron [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:07:39 np0005539564 nova_compute[226295]: 2025-11-29 08:07:39.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:39.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:39.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:39 np0005539564 nova_compute[226295]: 2025-11-29 08:07:39.763 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:39 np0005539564 nova_compute[226295]: 2025-11-29 08:07:39.833 226310 DEBUG nova.compute.manager [req-ca59fa65-ddc6-45f6-9dbf-95830995291c req-bf9fcec1-f38d-4161-a246-cdb83d95eaf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Received event network-changed-7fa1d4da-9208-4220-ae2e-26aada9fc93b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:39 np0005539564 nova_compute[226295]: 2025-11-29 08:07:39.833 226310 DEBUG nova.compute.manager [req-ca59fa65-ddc6-45f6-9dbf-95830995291c req-bf9fcec1-f38d-4161-a246-cdb83d95eaf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Refreshing instance network info cache due to event network-changed-7fa1d4da-9208-4220-ae2e-26aada9fc93b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:07:39 np0005539564 nova_compute[226295]: 2025-11-29 08:07:39.834 226310 DEBUG oslo_concurrency.lockutils [req-ca59fa65-ddc6-45f6-9dbf-95830995291c req-bf9fcec1-f38d-4161-a246-cdb83d95eaf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-14addc5e-27d3-46d3-a93f-b22b3f400873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.229 226310 DEBUG nova.network.neutron [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Updating instance_info_cache with network_info: [{"id": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "address": "fa:16:3e:fe:f0:45", "network": {"id": "f96ca160-f806-4467-a92a-7669548852b0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1120864816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa1d4da-92", "ovs_interfaceid": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.275 226310 DEBUG oslo_concurrency.lockutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Releasing lock "refresh_cache-14addc5e-27d3-46d3-a93f-b22b3f400873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.276 226310 DEBUG nova.compute.manager [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Instance network_info: |[{"id": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "address": "fa:16:3e:fe:f0:45", "network": {"id": "f96ca160-f806-4467-a92a-7669548852b0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1120864816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa1d4da-92", "ovs_interfaceid": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.276 226310 DEBUG oslo_concurrency.lockutils [req-ca59fa65-ddc6-45f6-9dbf-95830995291c req-bf9fcec1-f38d-4161-a246-cdb83d95eaf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-14addc5e-27d3-46d3-a93f-b22b3f400873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.277 226310 DEBUG nova.network.neutron [req-ca59fa65-ddc6-45f6-9dbf-95830995291c req-bf9fcec1-f38d-4161-a246-cdb83d95eaf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Refreshing network info cache for port 7fa1d4da-9208-4220-ae2e-26aada9fc93b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.281 226310 DEBUG nova.virt.libvirt.driver [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Start _get_guest_xml network_info=[{"id": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "address": "fa:16:3e:fe:f0:45", "network": {"id": "f96ca160-f806-4467-a92a-7669548852b0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1120864816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa1d4da-92", "ovs_interfaceid": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.290 226310 WARNING nova.virt.libvirt.driver [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.299 226310 DEBUG nova.virt.libvirt.host [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.300 226310 DEBUG nova.virt.libvirt.host [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.310 226310 DEBUG nova.virt.libvirt.host [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.311 226310 DEBUG nova.virt.libvirt.host [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.314 226310 DEBUG nova.virt.libvirt.driver [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.314 226310 DEBUG nova.virt.hardware [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.315 226310 DEBUG nova.virt.hardware [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.316 226310 DEBUG nova.virt.hardware [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.316 226310 DEBUG nova.virt.hardware [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.317 226310 DEBUG nova.virt.hardware [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.317 226310 DEBUG nova.virt.hardware [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.318 226310 DEBUG nova.virt.hardware [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.318 226310 DEBUG nova.virt.hardware [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.319 226310 DEBUG nova.virt.hardware [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.319 226310 DEBUG nova.virt.hardware [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.320 226310 DEBUG nova.virt.hardware [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.325 226310 DEBUG oslo_concurrency.processutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.368 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.368 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:07:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:07:40 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/108798779' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.800 226310 DEBUG oslo_concurrency.processutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.847 226310 DEBUG nova.storage.rbd_utils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] rbd image 14addc5e-27d3-46d3-a93f-b22b3f400873_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:40 np0005539564 nova_compute[226295]: 2025-11-29 08:07:40.853 226310 DEBUG oslo_concurrency.processutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:07:41 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2229134424' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.323 226310 DEBUG oslo_concurrency.processutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.327 226310 DEBUG nova.virt.libvirt.vif [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:07:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1793423903',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1793423903',id=86,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKbPhmRzTkQm2LL7llHwXmde9U0uI+rBMNbooDLe9TK8uFGu2MWDNNFlxdqcFGpEXOxxGpH+eZsXb5EWj4gaq3WgxVHUcO3ffL3yywL7QaK4fsxNbY1WerF6n4fC7HmUGg==',key_name='tempest-keypair-921015683',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a6104c57e0814f16958b14707debf843',ramdisk_id='',reservation_id='r-fjo7fgu9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio
',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedAttachmentsTest-392498407',owner_user_name='tempest-TaggedAttachmentsTest-392498407-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:07:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='20d37020e7484e3ead9c61a89db491b1',uuid=14addc5e-27d3-46d3-a93f-b22b3f400873,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "address": "fa:16:3e:fe:f0:45", "network": {"id": "f96ca160-f806-4467-a92a-7669548852b0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1120864816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa1d4da-92", "ovs_interfaceid": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.327 226310 DEBUG nova.network.os_vif_util [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Converting VIF {"id": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "address": "fa:16:3e:fe:f0:45", "network": {"id": "f96ca160-f806-4467-a92a-7669548852b0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1120864816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa1d4da-92", "ovs_interfaceid": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.329 226310 DEBUG nova.network.os_vif_util [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:f0:45,bridge_name='br-int',has_traffic_filtering=True,id=7fa1d4da-9208-4220-ae2e-26aada9fc93b,network=Network(f96ca160-f806-4467-a92a-7669548852b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa1d4da-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.332 226310 DEBUG nova.objects.instance [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lazy-loading 'pci_devices' on Instance uuid 14addc5e-27d3-46d3-a93f-b22b3f400873 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:07:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:41.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:41.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.673 226310 DEBUG nova.virt.libvirt.driver [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:07:41 np0005539564 nova_compute[226295]:  <uuid>14addc5e-27d3-46d3-a93f-b22b3f400873</uuid>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:  <name>instance-00000056</name>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <nova:name>tempest-device-tagging-server-1793423903</nova:name>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:07:40</nova:creationTime>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:07:41 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:        <nova:user uuid="20d37020e7484e3ead9c61a89db491b1">tempest-TaggedAttachmentsTest-392498407-project-member</nova:user>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:        <nova:project uuid="a6104c57e0814f16958b14707debf843">tempest-TaggedAttachmentsTest-392498407</nova:project>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:        <nova:port uuid="7fa1d4da-9208-4220-ae2e-26aada9fc93b">
Nov 29 03:07:41 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <entry name="serial">14addc5e-27d3-46d3-a93f-b22b3f400873</entry>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <entry name="uuid">14addc5e-27d3-46d3-a93f-b22b3f400873</entry>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/14addc5e-27d3-46d3-a93f-b22b3f400873_disk">
Nov 29 03:07:41 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:07:41 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/14addc5e-27d3-46d3-a93f-b22b3f400873_disk.config">
Nov 29 03:07:41 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:07:41 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:fe:f0:45"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <target dev="tap7fa1d4da-92"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/14addc5e-27d3-46d3-a93f-b22b3f400873/console.log" append="off"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:07:41 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:07:41 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:07:41 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:07:41 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.675 226310 DEBUG nova.compute.manager [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Preparing to wait for external event network-vif-plugged-7fa1d4da-9208-4220-ae2e-26aada9fc93b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.676 226310 DEBUG oslo_concurrency.lockutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Acquiring lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.677 226310 DEBUG oslo_concurrency.lockutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.677 226310 DEBUG oslo_concurrency.lockutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.679 226310 DEBUG nova.virt.libvirt.vif [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:07:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1793423903',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1793423903',id=86,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKbPhmRzTkQm2LL7llHwXmde9U0uI+rBMNbooDLe9TK8uFGu2MWDNNFlxdqcFGpEXOxxGpH+eZsXb5EWj4gaq3WgxVHUcO3ffL3yywL7QaK4fsxNbY1WerF6n4fC7HmUGg==',key_name='tempest-keypair-921015683',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a6104c57e0814f16958b14707debf843',ramdisk_id='',reservation_id='r-fjo7fgu9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedAttachmentsTest-392498407',owner_user_name='tempest-TaggedAttachmentsTest-392498407-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:07:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='20d37020e7484e3ead9c61a89db491b1',uuid=14addc5e-27d3-46d3-a93f-b22b3f400873,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "address": "fa:16:3e:fe:f0:45", "network": {"id": "f96ca160-f806-4467-a92a-7669548852b0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1120864816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa1d4da-92", "ovs_interfaceid": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.680 226310 DEBUG nova.network.os_vif_util [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Converting VIF {"id": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "address": "fa:16:3e:fe:f0:45", "network": {"id": "f96ca160-f806-4467-a92a-7669548852b0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1120864816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa1d4da-92", "ovs_interfaceid": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.681 226310 DEBUG nova.network.os_vif_util [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:f0:45,bridge_name='br-int',has_traffic_filtering=True,id=7fa1d4da-9208-4220-ae2e-26aada9fc93b,network=Network(f96ca160-f806-4467-a92a-7669548852b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa1d4da-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.682 226310 DEBUG os_vif [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:f0:45,bridge_name='br-int',has_traffic_filtering=True,id=7fa1d4da-9208-4220-ae2e-26aada9fc93b,network=Network(f96ca160-f806-4467-a92a-7669548852b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa1d4da-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.683 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.684 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.685 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.690 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.690 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7fa1d4da-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.691 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7fa1d4da-92, col_values=(('external_ids', {'iface-id': '7fa1d4da-9208-4220-ae2e-26aada9fc93b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:f0:45', 'vm-uuid': '14addc5e-27d3-46d3-a93f-b22b3f400873'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.693 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:41 np0005539564 NetworkManager[48997]: <info>  [1764403661.6944] manager: (tap7fa1d4da-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.696 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.703 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.705 226310 INFO os_vif [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:f0:45,bridge_name='br-int',has_traffic_filtering=True,id=7fa1d4da-9208-4220-ae2e-26aada9fc93b,network=Network(f96ca160-f806-4467-a92a-7669548852b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa1d4da-92')#033[00m
Nov 29 03:07:41 np0005539564 nova_compute[226295]: 2025-11-29 08:07:41.731 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:42 np0005539564 nova_compute[226295]: 2025-11-29 08:07:42.080 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:07:42 np0005539564 nova_compute[226295]: 2025-11-29 08:07:42.080 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:07:42 np0005539564 nova_compute[226295]: 2025-11-29 08:07:42.157 226310 DEBUG nova.virt.libvirt.driver [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:07:42 np0005539564 nova_compute[226295]: 2025-11-29 08:07:42.158 226310 DEBUG nova.virt.libvirt.driver [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:07:42 np0005539564 nova_compute[226295]: 2025-11-29 08:07:42.158 226310 DEBUG nova.virt.libvirt.driver [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] No VIF found with MAC fa:16:3e:fe:f0:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:07:42 np0005539564 nova_compute[226295]: 2025-11-29 08:07:42.159 226310 INFO nova.virt.libvirt.driver [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Using config drive#033[00m
Nov 29 03:07:42 np0005539564 nova_compute[226295]: 2025-11-29 08:07:42.202 226310 DEBUG nova.storage.rbd_utils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] rbd image 14addc5e-27d3-46d3-a93f-b22b3f400873_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:42 np0005539564 nova_compute[226295]: 2025-11-29 08:07:42.234 226310 DEBUG nova.network.neutron [req-ca59fa65-ddc6-45f6-9dbf-95830995291c req-bf9fcec1-f38d-4161-a246-cdb83d95eaf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Updated VIF entry in instance network info cache for port 7fa1d4da-9208-4220-ae2e-26aada9fc93b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:07:42 np0005539564 nova_compute[226295]: 2025-11-29 08:07:42.235 226310 DEBUG nova.network.neutron [req-ca59fa65-ddc6-45f6-9dbf-95830995291c req-bf9fcec1-f38d-4161-a246-cdb83d95eaf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Updating instance_info_cache with network_info: [{"id": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "address": "fa:16:3e:fe:f0:45", "network": {"id": "f96ca160-f806-4467-a92a-7669548852b0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1120864816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa1d4da-92", "ovs_interfaceid": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:07:42 np0005539564 nova_compute[226295]: 2025-11-29 08:07:42.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:42 np0005539564 nova_compute[226295]: 2025-11-29 08:07:42.387 226310 DEBUG oslo_concurrency.lockutils [req-ca59fa65-ddc6-45f6-9dbf-95830995291c req-bf9fcec1-f38d-4161-a246-cdb83d95eaf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-14addc5e-27d3-46d3-a93f-b22b3f400873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:07:43 np0005539564 nova_compute[226295]: 2025-11-29 08:07:43.033 226310 INFO nova.virt.libvirt.driver [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Creating config drive at /var/lib/nova/instances/14addc5e-27d3-46d3-a93f-b22b3f400873/disk.config#033[00m
Nov 29 03:07:43 np0005539564 nova_compute[226295]: 2025-11-29 08:07:43.040 226310 DEBUG oslo_concurrency.processutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/14addc5e-27d3-46d3-a93f-b22b3f400873/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwy_1en3y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:43 np0005539564 nova_compute[226295]: 2025-11-29 08:07:43.190 226310 DEBUG oslo_concurrency.processutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/14addc5e-27d3-46d3-a93f-b22b3f400873/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwy_1en3y" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:43 np0005539564 nova_compute[226295]: 2025-11-29 08:07:43.229 226310 DEBUG nova.storage.rbd_utils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] rbd image 14addc5e-27d3-46d3-a93f-b22b3f400873_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:43 np0005539564 nova_compute[226295]: 2025-11-29 08:07:43.234 226310 DEBUG oslo_concurrency.processutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/14addc5e-27d3-46d3-a93f-b22b3f400873/disk.config 14addc5e-27d3-46d3-a93f-b22b3f400873_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:07:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:43.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:07:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:43.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:43 np0005539564 nova_compute[226295]: 2025-11-29 08:07:43.873 226310 DEBUG oslo_concurrency.processutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/14addc5e-27d3-46d3-a93f-b22b3f400873/disk.config 14addc5e-27d3-46d3-a93f-b22b3f400873_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:43 np0005539564 nova_compute[226295]: 2025-11-29 08:07:43.874 226310 INFO nova.virt.libvirt.driver [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Deleting local config drive /var/lib/nova/instances/14addc5e-27d3-46d3-a93f-b22b3f400873/disk.config because it was imported into RBD.#033[00m
Nov 29 03:07:43 np0005539564 kernel: tap7fa1d4da-92: entered promiscuous mode
Nov 29 03:07:43 np0005539564 NetworkManager[48997]: <info>  [1764403663.9391] manager: (tap7fa1d4da-92): new Tun device (/org/freedesktop/NetworkManager/Devices/142)
Nov 29 03:07:43 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:43Z|00272|binding|INFO|Claiming lport 7fa1d4da-9208-4220-ae2e-26aada9fc93b for this chassis.
Nov 29 03:07:43 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:43Z|00273|binding|INFO|7fa1d4da-9208-4220-ae2e-26aada9fc93b: Claiming fa:16:3e:fe:f0:45 10.100.0.6
Nov 29 03:07:43 np0005539564 nova_compute[226295]: 2025-11-29 08:07:43.939 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:43 np0005539564 nova_compute[226295]: 2025-11-29 08:07:43.941 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:43 np0005539564 systemd-machined[190128]: New machine qemu-36-instance-00000056.
Nov 29 03:07:44 np0005539564 systemd[1]: Started Virtual Machine qemu-36-instance-00000056.
Nov 29 03:07:44 np0005539564 systemd-udevd[257537]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.038 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.043 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:44 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:44Z|00274|binding|INFO|Setting lport 7fa1d4da-9208-4220-ae2e-26aada9fc93b ovn-installed in OVS
Nov 29 03:07:44 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:44Z|00275|binding|INFO|Setting lport 7fa1d4da-9208-4220-ae2e-26aada9fc93b up in Southbound
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.045 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:f0:45 10.100.0.6'], port_security=['fa:16:3e:fe:f0:45 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '14addc5e-27d3-46d3-a93f-b22b3f400873', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f96ca160-f806-4467-a92a-7669548852b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6104c57e0814f16958b14707debf843', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bb58df77-ecf1-472b-b584-68079923e549', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26b0970c-a33b-47cb-8349-4ba02d2b286b, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=7fa1d4da-9208-4220-ae2e-26aada9fc93b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.047 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 7fa1d4da-9208-4220-ae2e-26aada9fc93b in datapath f96ca160-f806-4467-a92a-7669548852b0 bound to our chassis#033[00m
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.049 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f96ca160-f806-4467-a92a-7669548852b0#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.049 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:44 np0005539564 NetworkManager[48997]: <info>  [1764403664.0542] device (tap7fa1d4da-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:07:44 np0005539564 NetworkManager[48997]: <info>  [1764403664.0553] device (tap7fa1d4da-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.062 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[272ff4fa-c4c3-4afb-ac8e-1a3b41d54be6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.063 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf96ca160-f1 in ovnmeta-f96ca160-f806-4467-a92a-7669548852b0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.065 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf96ca160-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.066 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a5da67b0-a572-44af-9d1b-2cafff190e15]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.066 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e7189e31-3ae9-446c-9a6c-a592bc9b9c85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.077 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[af44ce01-daa8-4ddb-af38-40b29db69084]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.102 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d430eeaf-5eb8-4d75-b7b7-c6d85c1f8789]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.142 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[9b483653-4fd0-4c7b-a3dd-b190c15b6e70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.149 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5aaa1443-2c1f-4d2d-9fd1-a11bec572139]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:44 np0005539564 NetworkManager[48997]: <info>  [1764403664.1540] manager: (tapf96ca160-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/143)
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.193 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0e158f-8989-4d2f-8751-84e32eb598fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.197 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[cb0455c4-41da-4b42-8f32-be43ee5476d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:44 np0005539564 NetworkManager[48997]: <info>  [1764403664.2320] device (tapf96ca160-f0): carrier: link connected
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.240 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[7b943443-001f-4e09-b68b-30d967ca8af3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.266 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff6758b-a209-4bd4-adb9-44be7badc6be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf96ca160-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:53:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663891, 'reachable_time': 42349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257570, 'error': None, 'target': 'ovnmeta-f96ca160-f806-4467-a92a-7669548852b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.288 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[77807ddf-3a48-4942-8d97-7bfa3776f6b1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:538b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663891, 'tstamp': 663891}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257571, 'error': None, 'target': 'ovnmeta-f96ca160-f806-4467-a92a-7669548852b0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.314 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[aba8d40d-8340-4a90-b6d3-8b67e7b3fd73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf96ca160-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:53:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663891, 'reachable_time': 42349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 257572, 'error': None, 'target': 'ovnmeta-f96ca160-f806-4467-a92a-7669548852b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.353 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[aadae89b-cff0-4738-ac02-76541aa28e66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.465 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bf8c30da-c92c-418c-8324-e463b1bbcba6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.467 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf96ca160-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.467 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.468 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf96ca160-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.470 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:44 np0005539564 NetworkManager[48997]: <info>  [1764403664.4714] manager: (tapf96ca160-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/144)
Nov 29 03:07:44 np0005539564 kernel: tapf96ca160-f0: entered promiscuous mode
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.474 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.475 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf96ca160-f0, col_values=(('external_ids', {'iface-id': '5db3ca27-ac70-4295-a4a2-f7ea7ed8fa99'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.476 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:44 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:44Z|00276|binding|INFO|Releasing lport 5db3ca27-ac70-4295-a4a2-f7ea7ed8fa99 from this chassis (sb_readonly=0)
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.490 226310 DEBUG nova.compute.manager [req-ed262e6f-dda7-45ce-a2c3-dc2d3f28f53e req-2f54ac6c-047b-42ad-8128-03ef3642e918 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Received event network-vif-plugged-7fa1d4da-9208-4220-ae2e-26aada9fc93b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.491 226310 DEBUG oslo_concurrency.lockutils [req-ed262e6f-dda7-45ce-a2c3-dc2d3f28f53e req-2f54ac6c-047b-42ad-8128-03ef3642e918 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.492 226310 DEBUG oslo_concurrency.lockutils [req-ed262e6f-dda7-45ce-a2c3-dc2d3f28f53e req-2f54ac6c-047b-42ad-8128-03ef3642e918 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.493 226310 DEBUG oslo_concurrency.lockutils [req-ed262e6f-dda7-45ce-a2c3-dc2d3f28f53e req-2f54ac6c-047b-42ad-8128-03ef3642e918 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.494 226310 DEBUG nova.compute.manager [req-ed262e6f-dda7-45ce-a2c3-dc2d3f28f53e req-2f54ac6c-047b-42ad-8128-03ef3642e918 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Processing event network-vif-plugged-7fa1d4da-9208-4220-ae2e-26aada9fc93b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.494 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f96ca160-f806-4467-a92a-7669548852b0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f96ca160-f806-4467-a92a-7669548852b0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.495 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.495 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[28caa03d-c51e-4389-abab-273daaf3932a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.497 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-f96ca160-f806-4467-a92a-7669548852b0
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/f96ca160-f806-4467-a92a-7669548852b0.pid.haproxy
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID f96ca160-f806-4467-a92a-7669548852b0
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:07:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:44.499 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f96ca160-f806-4467-a92a-7669548852b0', 'env', 'PROCESS_TAG=haproxy-f96ca160-f806-4467-a92a-7669548852b0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f96ca160-f806-4467-a92a-7669548852b0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.580 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403664.5800686, 14addc5e-27d3-46d3-a93f-b22b3f400873 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.581 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] VM Started (Lifecycle Event)#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.584 226310 DEBUG nova.compute.manager [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.589 226310 DEBUG nova.virt.libvirt.driver [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.592 226310 INFO nova.virt.libvirt.driver [-] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Instance spawned successfully.#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.592 226310 DEBUG nova.virt.libvirt.driver [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.668 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403649.6672733, 13ac8506-e387-462d-a5ea-7f44a5e08994 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.669 226310 INFO nova.compute.manager [-] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.680 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.684 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:07:44 np0005539564 podman[257646]: 2025-11-29 08:07:44.908230437 +0000 UTC m=+0.052615211 container create 587ae86851a04c35eb8d638624e32d40d1ce5ff6528caa02febe590342180d36 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f96ca160-f806-4467-a92a-7669548852b0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.924 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.925 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403664.5814757, 14addc5e-27d3-46d3-a93f-b22b3f400873 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.927 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.929 226310 DEBUG nova.compute.manager [None req-fc1acac2-609e-4449-9150-eccdd1755970 - - - - - -] [instance: 13ac8506-e387-462d-a5ea-7f44a5e08994] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.932 226310 DEBUG nova.virt.libvirt.driver [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.933 226310 DEBUG nova.virt.libvirt.driver [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.934 226310 DEBUG nova.virt.libvirt.driver [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.935 226310 DEBUG nova.virt.libvirt.driver [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.936 226310 DEBUG nova.virt.libvirt.driver [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.937 226310 DEBUG nova.virt.libvirt.driver [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:44 np0005539564 podman[257646]: 2025-11-29 08:07:44.879087851 +0000 UTC m=+0.023472595 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.977 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:44 np0005539564 systemd[1]: Started libpod-conmon-587ae86851a04c35eb8d638624e32d40d1ce5ff6528caa02febe590342180d36.scope.
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.982 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403664.5877097, 14addc5e-27d3-46d3-a93f-b22b3f400873 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:44 np0005539564 nova_compute[226295]: 2025-11-29 08:07:44.982 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:07:45 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:07:45 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf68dccafd282e93f80c6dde8afbfb569572c87599849358d60b6d613bbf979/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:45 np0005539564 nova_compute[226295]: 2025-11-29 08:07:45.033 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:45 np0005539564 nova_compute[226295]: 2025-11-29 08:07:45.039 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:07:45 np0005539564 podman[257646]: 2025-11-29 08:07:45.046404543 +0000 UTC m=+0.190789277 container init 587ae86851a04c35eb8d638624e32d40d1ce5ff6528caa02febe590342180d36 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f96ca160-f806-4467-a92a-7669548852b0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:07:45 np0005539564 podman[257646]: 2025-11-29 08:07:45.057067371 +0000 UTC m=+0.201452105 container start 587ae86851a04c35eb8d638624e32d40d1ce5ff6528caa02febe590342180d36 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f96ca160-f806-4467-a92a-7669548852b0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:07:45 np0005539564 nova_compute[226295]: 2025-11-29 08:07:45.060 226310 INFO nova.compute.manager [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Took 9.06 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:07:45 np0005539564 nova_compute[226295]: 2025-11-29 08:07:45.061 226310 DEBUG nova.compute.manager [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:45 np0005539564 nova_compute[226295]: 2025-11-29 08:07:45.083 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:07:45 np0005539564 neutron-haproxy-ovnmeta-f96ca160-f806-4467-a92a-7669548852b0[257662]: [NOTICE]   (257666) : New worker (257668) forked
Nov 29 03:07:45 np0005539564 neutron-haproxy-ovnmeta-f96ca160-f806-4467-a92a-7669548852b0[257662]: [NOTICE]   (257666) : Loading success.
Nov 29 03:07:45 np0005539564 nova_compute[226295]: 2025-11-29 08:07:45.222 226310 INFO nova.compute.manager [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Took 10.86 seconds to build instance.#033[00m
Nov 29 03:07:45 np0005539564 nova_compute[226295]: 2025-11-29 08:07:45.251 226310 DEBUG oslo_concurrency.lockutils [None req-3028f276-02ef-4761-887f-c210158fd2c2 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:45 np0005539564 nova_compute[226295]: 2025-11-29 08:07:45.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:45 np0005539564 nova_compute[226295]: 2025-11-29 08:07:45.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:45.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:45.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:46 np0005539564 nova_compute[226295]: 2025-11-29 08:07:46.632 226310 DEBUG nova.compute.manager [req-accf04a4-5fde-47f5-b2d8-28bb92551d2f req-2a2ffb5e-4a82-45fd-9a3a-b2c897114be0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Received event network-vif-plugged-7fa1d4da-9208-4220-ae2e-26aada9fc93b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:46 np0005539564 nova_compute[226295]: 2025-11-29 08:07:46.634 226310 DEBUG oslo_concurrency.lockutils [req-accf04a4-5fde-47f5-b2d8-28bb92551d2f req-2a2ffb5e-4a82-45fd-9a3a-b2c897114be0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:46 np0005539564 nova_compute[226295]: 2025-11-29 08:07:46.634 226310 DEBUG oslo_concurrency.lockutils [req-accf04a4-5fde-47f5-b2d8-28bb92551d2f req-2a2ffb5e-4a82-45fd-9a3a-b2c897114be0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:46 np0005539564 nova_compute[226295]: 2025-11-29 08:07:46.635 226310 DEBUG oslo_concurrency.lockutils [req-accf04a4-5fde-47f5-b2d8-28bb92551d2f req-2a2ffb5e-4a82-45fd-9a3a-b2c897114be0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:46 np0005539564 nova_compute[226295]: 2025-11-29 08:07:46.636 226310 DEBUG nova.compute.manager [req-accf04a4-5fde-47f5-b2d8-28bb92551d2f req-2a2ffb5e-4a82-45fd-9a3a-b2c897114be0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] No waiting events found dispatching network-vif-plugged-7fa1d4da-9208-4220-ae2e-26aada9fc93b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:46 np0005539564 nova_compute[226295]: 2025-11-29 08:07:46.637 226310 WARNING nova.compute.manager [req-accf04a4-5fde-47f5-b2d8-28bb92551d2f req-2a2ffb5e-4a82-45fd-9a3a-b2c897114be0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Received unexpected event network-vif-plugged-7fa1d4da-9208-4220-ae2e-26aada9fc93b for instance with vm_state active and task_state None.#033[00m
Nov 29 03:07:46 np0005539564 nova_compute[226295]: 2025-11-29 08:07:46.727 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:46 np0005539564 nova_compute[226295]: 2025-11-29 08:07:46.735 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:07:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:47.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:07:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:47.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:47 np0005539564 systemd[1]: Stopping User Manager for UID 42436...
Nov 29 03:07:47 np0005539564 systemd[257314]: Activating special unit Exit the Session...
Nov 29 03:07:47 np0005539564 systemd[257314]: Stopped target Main User Target.
Nov 29 03:07:47 np0005539564 systemd[257314]: Stopped target Basic System.
Nov 29 03:07:47 np0005539564 systemd[257314]: Stopped target Paths.
Nov 29 03:07:47 np0005539564 systemd[257314]: Stopped target Sockets.
Nov 29 03:07:47 np0005539564 systemd[257314]: Stopped target Timers.
Nov 29 03:07:47 np0005539564 systemd[257314]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:07:47 np0005539564 systemd[257314]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 03:07:47 np0005539564 systemd[257314]: Closed D-Bus User Message Bus Socket.
Nov 29 03:07:47 np0005539564 systemd[257314]: Stopped Create User's Volatile Files and Directories.
Nov 29 03:07:47 np0005539564 systemd[257314]: Removed slice User Application Slice.
Nov 29 03:07:47 np0005539564 systemd[257314]: Reached target Shutdown.
Nov 29 03:07:47 np0005539564 systemd[257314]: Finished Exit the Session.
Nov 29 03:07:47 np0005539564 systemd[257314]: Reached target Exit the Session.
Nov 29 03:07:47 np0005539564 systemd[1]: user@42436.service: Deactivated successfully.
Nov 29 03:07:47 np0005539564 systemd[1]: Stopped User Manager for UID 42436.
Nov 29 03:07:47 np0005539564 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 29 03:07:47 np0005539564 systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 29 03:07:47 np0005539564 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 29 03:07:47 np0005539564 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 29 03:07:47 np0005539564 systemd[1]: Removed slice User Slice of UID 42436.
Nov 29 03:07:48 np0005539564 nova_compute[226295]: 2025-11-29 08:07:48.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:48 np0005539564 nova_compute[226295]: 2025-11-29 08:07:48.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:48 np0005539564 nova_compute[226295]: 2025-11-29 08:07:48.373 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:48 np0005539564 nova_compute[226295]: 2025-11-29 08:07:48.373 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:48 np0005539564 nova_compute[226295]: 2025-11-29 08:07:48.374 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:48 np0005539564 nova_compute[226295]: 2025-11-29 08:07:48.374 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:07:48 np0005539564 nova_compute[226295]: 2025-11-29 08:07:48.374 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:48 np0005539564 NetworkManager[48997]: <info>  [1764403668.7902] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/145)
Nov 29 03:07:48 np0005539564 nova_compute[226295]: 2025-11-29 08:07:48.789 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:48 np0005539564 NetworkManager[48997]: <info>  [1764403668.7911] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/146)
Nov 29 03:07:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:07:48 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2090791071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:07:48 np0005539564 nova_compute[226295]: 2025-11-29 08:07:48.843 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:48 np0005539564 nova_compute[226295]: 2025-11-29 08:07:48.921 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:07:48 np0005539564 nova_compute[226295]: 2025-11-29 08:07:48.922 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:07:49 np0005539564 nova_compute[226295]: 2025-11-29 08:07:49.054 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:49 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:49Z|00277|binding|INFO|Releasing lport 5db3ca27-ac70-4295-a4a2-f7ea7ed8fa99 from this chassis (sb_readonly=0)
Nov 29 03:07:49 np0005539564 nova_compute[226295]: 2025-11-29 08:07:49.089 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:49 np0005539564 nova_compute[226295]: 2025-11-29 08:07:49.167 226310 DEBUG nova.compute.manager [req-f012c9f7-6a84-45eb-a023-b61c5662183c req-b0d5dd69-8ac8-4cbe-834b-81e0a6aed324 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Received event network-changed-7fa1d4da-9208-4220-ae2e-26aada9fc93b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:49 np0005539564 nova_compute[226295]: 2025-11-29 08:07:49.168 226310 DEBUG nova.compute.manager [req-f012c9f7-6a84-45eb-a023-b61c5662183c req-b0d5dd69-8ac8-4cbe-834b-81e0a6aed324 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Refreshing instance network info cache due to event network-changed-7fa1d4da-9208-4220-ae2e-26aada9fc93b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:07:49 np0005539564 nova_compute[226295]: 2025-11-29 08:07:49.169 226310 DEBUG oslo_concurrency.lockutils [req-f012c9f7-6a84-45eb-a023-b61c5662183c req-b0d5dd69-8ac8-4cbe-834b-81e0a6aed324 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-14addc5e-27d3-46d3-a93f-b22b3f400873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:07:49 np0005539564 nova_compute[226295]: 2025-11-29 08:07:49.169 226310 DEBUG oslo_concurrency.lockutils [req-f012c9f7-6a84-45eb-a023-b61c5662183c req-b0d5dd69-8ac8-4cbe-834b-81e0a6aed324 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-14addc5e-27d3-46d3-a93f-b22b3f400873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:07:49 np0005539564 nova_compute[226295]: 2025-11-29 08:07:49.170 226310 DEBUG nova.network.neutron [req-f012c9f7-6a84-45eb-a023-b61c5662183c req-b0d5dd69-8ac8-4cbe-834b-81e0a6aed324 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Refreshing network info cache for port 7fa1d4da-9208-4220-ae2e-26aada9fc93b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:07:49 np0005539564 nova_compute[226295]: 2025-11-29 08:07:49.426 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:07:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:49.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:49 np0005539564 nova_compute[226295]: 2025-11-29 08:07:49.427 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4414MB free_disk=20.921836853027344GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:07:49 np0005539564 nova_compute[226295]: 2025-11-29 08:07:49.427 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:49 np0005539564 nova_compute[226295]: 2025-11-29 08:07:49.428 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:49 np0005539564 nova_compute[226295]: 2025-11-29 08:07:49.466 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Migration for instance a28f7dd6-9c8c-46f4-9ce0-7d40194d9749 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Nov 29 03:07:49 np0005539564 nova_compute[226295]: 2025-11-29 08:07:49.484 226310 INFO nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Updating resource usage from migration 475e32d2-8c8f-4918-aa47-f1847e144c60#033[00m
Nov 29 03:07:49 np0005539564 nova_compute[226295]: 2025-11-29 08:07:49.484 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Starting to track incoming migration 475e32d2-8c8f-4918-aa47-f1847e144c60 with flavor a3833334-6e3e-4b1c-bf74-bdd1055a9e9b _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 03:07:49 np0005539564 nova_compute[226295]: 2025-11-29 08:07:49.526 226310 WARNING nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance a28f7dd6-9c8c-46f4-9ce0-7d40194d9749 has been moved to another host compute-0.ctlplane.example.com(compute-0.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}.#033[00m
Nov 29 03:07:49 np0005539564 nova_compute[226295]: 2025-11-29 08:07:49.526 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 14addc5e-27d3-46d3-a93f-b22b3f400873 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:07:49 np0005539564 nova_compute[226295]: 2025-11-29 08:07:49.527 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:07:49 np0005539564 nova_compute[226295]: 2025-11-29 08:07:49.527 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=832MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:07:49 np0005539564 nova_compute[226295]: 2025-11-29 08:07:49.541 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing inventories for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:07:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:49.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:49 np0005539564 nova_compute[226295]: 2025-11-29 08:07:49.637 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating ProviderTree inventory for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:07:49 np0005539564 nova_compute[226295]: 2025-11-29 08:07:49.637 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating inventory in ProviderTree for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:07:49 np0005539564 nova_compute[226295]: 2025-11-29 08:07:49.655 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing aggregate associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:07:49 np0005539564 nova_compute[226295]: 2025-11-29 08:07:49.688 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing trait associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:07:49 np0005539564 nova_compute[226295]: 2025-11-29 08:07:49.767 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:07:50 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2442679403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:07:50 np0005539564 nova_compute[226295]: 2025-11-29 08:07:50.220 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:50 np0005539564 nova_compute[226295]: 2025-11-29 08:07:50.228 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:07:50 np0005539564 nova_compute[226295]: 2025-11-29 08:07:50.257 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:07:50 np0005539564 nova_compute[226295]: 2025-11-29 08:07:50.279 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:07:50 np0005539564 nova_compute[226295]: 2025-11-29 08:07:50.280 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:50 np0005539564 nova_compute[226295]: 2025-11-29 08:07:50.661 226310 DEBUG nova.compute.manager [req-004d758e-507a-40f8-b6ee-96e5fdee8756 req-47ce9015-c8ac-49ff-a427-d6160a8f9790 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received event network-vif-unplugged-18497509-f640-42ef-b25c-ac9f121ce0db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:50 np0005539564 nova_compute[226295]: 2025-11-29 08:07:50.662 226310 DEBUG oslo_concurrency.lockutils [req-004d758e-507a-40f8-b6ee-96e5fdee8756 req-47ce9015-c8ac-49ff-a427-d6160a8f9790 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:50 np0005539564 nova_compute[226295]: 2025-11-29 08:07:50.662 226310 DEBUG oslo_concurrency.lockutils [req-004d758e-507a-40f8-b6ee-96e5fdee8756 req-47ce9015-c8ac-49ff-a427-d6160a8f9790 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:50 np0005539564 nova_compute[226295]: 2025-11-29 08:07:50.663 226310 DEBUG oslo_concurrency.lockutils [req-004d758e-507a-40f8-b6ee-96e5fdee8756 req-47ce9015-c8ac-49ff-a427-d6160a8f9790 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:50 np0005539564 nova_compute[226295]: 2025-11-29 08:07:50.663 226310 DEBUG nova.compute.manager [req-004d758e-507a-40f8-b6ee-96e5fdee8756 req-47ce9015-c8ac-49ff-a427-d6160a8f9790 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] No waiting events found dispatching network-vif-unplugged-18497509-f640-42ef-b25c-ac9f121ce0db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:50 np0005539564 nova_compute[226295]: 2025-11-29 08:07:50.664 226310 WARNING nova.compute.manager [req-004d758e-507a-40f8-b6ee-96e5fdee8756 req-47ce9015-c8ac-49ff-a427-d6160a8f9790 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received unexpected event network-vif-unplugged-18497509-f640-42ef-b25c-ac9f121ce0db for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:07:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:51.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:51.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:51 np0005539564 nova_compute[226295]: 2025-11-29 08:07:51.656 226310 INFO nova.network.neutron [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Updating port 18497509-f640-42ef-b25c-ac9f121ce0db with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 29 03:07:51 np0005539564 nova_compute[226295]: 2025-11-29 08:07:51.731 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:51 np0005539564 nova_compute[226295]: 2025-11-29 08:07:51.737 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:51 np0005539564 podman[257802]: 2025-11-29 08:07:51.744620641 +0000 UTC m=+0.068464058 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 03:07:51 np0005539564 podman[257801]: 2025-11-29 08:07:51.757864268 +0000 UTC m=+0.088121437 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true)
Nov 29 03:07:51 np0005539564 nova_compute[226295]: 2025-11-29 08:07:51.785 226310 DEBUG nova.network.neutron [req-f012c9f7-6a84-45eb-a023-b61c5662183c req-b0d5dd69-8ac8-4cbe-834b-81e0a6aed324 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Updated VIF entry in instance network info cache for port 7fa1d4da-9208-4220-ae2e-26aada9fc93b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:07:51 np0005539564 nova_compute[226295]: 2025-11-29 08:07:51.786 226310 DEBUG nova.network.neutron [req-f012c9f7-6a84-45eb-a023-b61c5662183c req-b0d5dd69-8ac8-4cbe-834b-81e0a6aed324 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Updating instance_info_cache with network_info: [{"id": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "address": "fa:16:3e:fe:f0:45", "network": {"id": "f96ca160-f806-4467-a92a-7669548852b0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1120864816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa1d4da-92", "ovs_interfaceid": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:07:51 np0005539564 podman[257800]: 2025-11-29 08:07:51.807498167 +0000 UTC m=+0.119152125 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:07:51 np0005539564 nova_compute[226295]: 2025-11-29 08:07:51.809 226310 DEBUG oslo_concurrency.lockutils [req-f012c9f7-6a84-45eb-a023-b61c5662183c req-b0d5dd69-8ac8-4cbe-834b-81e0a6aed324 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-14addc5e-27d3-46d3-a93f-b22b3f400873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.154 226310 DEBUG nova.compute.manager [req-b64e109d-7a4e-4ecf-9ff2-52830d8eacd9 req-d7080491-d13c-472c-9b0a-e91af6d3ac19 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received event network-vif-unplugged-18497509-f640-42ef-b25c-ac9f121ce0db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.156 226310 DEBUG oslo_concurrency.lockutils [req-b64e109d-7a4e-4ecf-9ff2-52830d8eacd9 req-d7080491-d13c-472c-9b0a-e91af6d3ac19 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.157 226310 DEBUG oslo_concurrency.lockutils [req-b64e109d-7a4e-4ecf-9ff2-52830d8eacd9 req-d7080491-d13c-472c-9b0a-e91af6d3ac19 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.157 226310 DEBUG oslo_concurrency.lockutils [req-b64e109d-7a4e-4ecf-9ff2-52830d8eacd9 req-d7080491-d13c-472c-9b0a-e91af6d3ac19 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.158 226310 DEBUG nova.compute.manager [req-b64e109d-7a4e-4ecf-9ff2-52830d8eacd9 req-d7080491-d13c-472c-9b0a-e91af6d3ac19 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] No waiting events found dispatching network-vif-unplugged-18497509-f640-42ef-b25c-ac9f121ce0db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.159 226310 WARNING nova.compute.manager [req-b64e109d-7a4e-4ecf-9ff2-52830d8eacd9 req-d7080491-d13c-472c-9b0a-e91af6d3ac19 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received unexpected event network-vif-unplugged-18497509-f640-42ef-b25c-ac9f121ce0db for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.326 226310 DEBUG oslo_concurrency.lockutils [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "refresh_cache-a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.328 226310 DEBUG oslo_concurrency.lockutils [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquired lock "refresh_cache-a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.328 226310 DEBUG nova.network.neutron [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.348 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.763 226310 DEBUG nova.compute.manager [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.764 226310 DEBUG oslo_concurrency.lockutils [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.765 226310 DEBUG oslo_concurrency.lockutils [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.765 226310 DEBUG oslo_concurrency.lockutils [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.765 226310 DEBUG nova.compute.manager [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] No waiting events found dispatching network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.766 226310 WARNING nova.compute.manager [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received unexpected event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.766 226310 DEBUG nova.compute.manager [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.766 226310 DEBUG oslo_concurrency.lockutils [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.767 226310 DEBUG oslo_concurrency.lockutils [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.767 226310 DEBUG oslo_concurrency.lockutils [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.767 226310 DEBUG nova.compute.manager [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] No waiting events found dispatching network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.768 226310 WARNING nova.compute.manager [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received unexpected event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.768 226310 DEBUG nova.compute.manager [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.768 226310 DEBUG oslo_concurrency.lockutils [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.769 226310 DEBUG oslo_concurrency.lockutils [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.769 226310 DEBUG oslo_concurrency.lockutils [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.769 226310 DEBUG nova.compute.manager [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] No waiting events found dispatching network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.770 226310 WARNING nova.compute.manager [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received unexpected event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.770 226310 DEBUG nova.compute.manager [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.770 226310 DEBUG oslo_concurrency.lockutils [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.771 226310 DEBUG oslo_concurrency.lockutils [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.771 226310 DEBUG oslo_concurrency.lockutils [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.771 226310 DEBUG nova.compute.manager [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] No waiting events found dispatching network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:52 np0005539564 nova_compute[226295]: 2025-11-29 08:07:52.771 226310 WARNING nova.compute.manager [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received unexpected event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:07:52 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:07:52 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:07:52 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:07:52 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:07:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:53.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:53 np0005539564 nova_compute[226295]: 2025-11-29 08:07:53.570 226310 DEBUG nova.network.neutron [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Updating instance_info_cache with network_info: [{"id": "18497509-f640-42ef-b25c-ac9f121ce0db", "address": "fa:16:3e:49:09:70", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18497509-f6", "ovs_interfaceid": "18497509-f640-42ef-b25c-ac9f121ce0db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:07:53 np0005539564 nova_compute[226295]: 2025-11-29 08:07:53.608 226310 DEBUG oslo_concurrency.lockutils [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Releasing lock "refresh_cache-a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:07:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:53.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:53 np0005539564 nova_compute[226295]: 2025-11-29 08:07:53.682 226310 DEBUG nova.virt.libvirt.driver [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Nov 29 03:07:53 np0005539564 nova_compute[226295]: 2025-11-29 08:07:53.685 226310 DEBUG nova.virt.libvirt.driver [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:07:53 np0005539564 nova_compute[226295]: 2025-11-29 08:07:53.687 226310 INFO nova.virt.libvirt.driver [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Creating image(s)#033[00m
Nov 29 03:07:53 np0005539564 nova_compute[226295]: 2025-11-29 08:07:53.734 226310 DEBUG nova.storage.rbd_utils [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] creating snapshot(nova-resize) on rbd image(a28f7dd6-9c8c-46f4-9ce0-7d40194d9749_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:07:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e260 e260: 3 total, 3 up, 3 in
Nov 29 03:07:53 np0005539564 nova_compute[226295]: 2025-11-29 08:07:53.876 226310 DEBUG nova.objects.instance [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid a28f7dd6-9c8c-46f4-9ce0-7d40194d9749 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.018 226310 DEBUG nova.virt.libvirt.driver [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.019 226310 DEBUG nova.virt.libvirt.driver [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Ensure instance console log exists: /var/lib/nova/instances/a28f7dd6-9c8c-46f4-9ce0-7d40194d9749/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.019 226310 DEBUG oslo_concurrency.lockutils [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.020 226310 DEBUG oslo_concurrency.lockutils [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.020 226310 DEBUG oslo_concurrency.lockutils [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.023 226310 DEBUG nova.virt.libvirt.driver [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Start _get_guest_xml network_info=[{"id": "18497509-f640-42ef-b25c-ac9f121ce0db", "address": "fa:16:3e:49:09:70", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "vif_mac": "fa:16:3e:49:09:70"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18497509-f6", "ovs_interfaceid": "18497509-f640-42ef-b25c-ac9f121ce0db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.030 226310 WARNING nova.virt.libvirt.driver [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.035 226310 DEBUG nova.virt.libvirt.host [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.037 226310 DEBUG nova.virt.libvirt.host [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.040 226310 DEBUG nova.virt.libvirt.host [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.041 226310 DEBUG nova.virt.libvirt.host [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.042 226310 DEBUG nova.virt.libvirt.driver [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.042 226310 DEBUG nova.virt.hardware [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a3833334-6e3e-4b1c-bf74-bdd1055a9e9b',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.043 226310 DEBUG nova.virt.hardware [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.043 226310 DEBUG nova.virt.hardware [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.043 226310 DEBUG nova.virt.hardware [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.043 226310 DEBUG nova.virt.hardware [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.044 226310 DEBUG nova.virt.hardware [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.044 226310 DEBUG nova.virt.hardware [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.044 226310 DEBUG nova.virt.hardware [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.045 226310 DEBUG nova.virt.hardware [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.045 226310 DEBUG nova.virt.hardware [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.045 226310 DEBUG nova.virt.hardware [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.045 226310 DEBUG nova.objects.instance [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a28f7dd6-9c8c-46f4-9ce0-7d40194d9749 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.063 226310 DEBUG oslo_concurrency.processutils [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.243 226310 DEBUG nova.compute.manager [req-0a4002b8-7483-45e1-8ba2-968d0fec6a6b req-9a03dea9-7d54-49d5-8c3d-d47fdfba6347 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received event network-changed-18497509-f640-42ef-b25c-ac9f121ce0db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.244 226310 DEBUG nova.compute.manager [req-0a4002b8-7483-45e1-8ba2-968d0fec6a6b req-9a03dea9-7d54-49d5-8c3d-d47fdfba6347 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Refreshing instance network info cache due to event network-changed-18497509-f640-42ef-b25c-ac9f121ce0db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.245 226310 DEBUG oslo_concurrency.lockutils [req-0a4002b8-7483-45e1-8ba2-968d0fec6a6b req-9a03dea9-7d54-49d5-8c3d-d47fdfba6347 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.245 226310 DEBUG oslo_concurrency.lockutils [req-0a4002b8-7483-45e1-8ba2-968d0fec6a6b req-9a03dea9-7d54-49d5-8c3d-d47fdfba6347 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.246 226310 DEBUG nova.network.neutron [req-0a4002b8-7483-45e1-8ba2-968d0fec6a6b req-9a03dea9-7d54-49d5-8c3d-d47fdfba6347 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Refreshing network info cache for port 18497509-f640-42ef-b25c-ac9f121ce0db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.576 226310 DEBUG oslo_concurrency.processutils [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:54 np0005539564 nova_compute[226295]: 2025-11-29 08:07:54.639 226310 DEBUG oslo_concurrency.processutils [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:07:55 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/243775416' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.122 226310 DEBUG oslo_concurrency.processutils [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.124 226310 DEBUG nova.virt.libvirt.vif [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:07:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1335756739',display_name='tempest-ServerDiskConfigTestJSON-server-1335756739',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1335756739',id=85,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-mr1u36f8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio
',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDiskConfigTestJSON-904422786-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:07:51Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=a28f7dd6-9c8c-46f4-9ce0-7d40194d9749,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18497509-f640-42ef-b25c-ac9f121ce0db", "address": "fa:16:3e:49:09:70", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "vif_mac": "fa:16:3e:49:09:70"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18497509-f6", "ovs_interfaceid": "18497509-f640-42ef-b25c-ac9f121ce0db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.125 226310 DEBUG nova.network.os_vif_util [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "18497509-f640-42ef-b25c-ac9f121ce0db", "address": "fa:16:3e:49:09:70", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "vif_mac": "fa:16:3e:49:09:70"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18497509-f6", "ovs_interfaceid": "18497509-f640-42ef-b25c-ac9f121ce0db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.125 226310 DEBUG nova.network.os_vif_util [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:09:70,bridge_name='br-int',has_traffic_filtering=True,id=18497509-f640-42ef-b25c-ac9f121ce0db,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18497509-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.129 226310 DEBUG nova.virt.libvirt.driver [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:07:55 np0005539564 nova_compute[226295]:  <uuid>a28f7dd6-9c8c-46f4-9ce0-7d40194d9749</uuid>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:  <name>instance-00000055</name>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:  <memory>196608</memory>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1335756739</nova:name>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:07:54</nova:creationTime>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.micro">
Nov 29 03:07:55 np0005539564 nova_compute[226295]:        <nova:memory>192</nova:memory>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:        <nova:user uuid="5a7b61623f854cf59636f192ab8af005">tempest-ServerDiskConfigTestJSON-904422786-project-member</nova:user>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:        <nova:project uuid="750bde86c9c7473fbf7f0a6a3b16cec1">tempest-ServerDiskConfigTestJSON-904422786</nova:project>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:        <nova:port uuid="18497509-f640-42ef-b25c-ac9f121ce0db">
Nov 29 03:07:55 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <entry name="serial">a28f7dd6-9c8c-46f4-9ce0-7d40194d9749</entry>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <entry name="uuid">a28f7dd6-9c8c-46f4-9ce0-7d40194d9749</entry>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/a28f7dd6-9c8c-46f4-9ce0-7d40194d9749_disk">
Nov 29 03:07:55 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:07:55 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/a28f7dd6-9c8c-46f4-9ce0-7d40194d9749_disk.config">
Nov 29 03:07:55 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:07:55 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:49:09:70"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <target dev="tap18497509-f6"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/a28f7dd6-9c8c-46f4-9ce0-7d40194d9749/console.log" append="off"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:07:55 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:07:55 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:07:55 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:07:55 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.131 226310 DEBUG nova.virt.libvirt.vif [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:07:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1335756739',display_name='tempest-ServerDiskConfigTestJSON-server-1335756739',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1335756739',id=85,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-mr1u36f8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio
',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDiskConfigTestJSON-904422786-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:07:51Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=a28f7dd6-9c8c-46f4-9ce0-7d40194d9749,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18497509-f640-42ef-b25c-ac9f121ce0db", "address": "fa:16:3e:49:09:70", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "vif_mac": "fa:16:3e:49:09:70"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18497509-f6", "ovs_interfaceid": "18497509-f640-42ef-b25c-ac9f121ce0db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.131 226310 DEBUG nova.network.os_vif_util [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "18497509-f640-42ef-b25c-ac9f121ce0db", "address": "fa:16:3e:49:09:70", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "vif_mac": "fa:16:3e:49:09:70"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18497509-f6", "ovs_interfaceid": "18497509-f640-42ef-b25c-ac9f121ce0db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.132 226310 DEBUG nova.network.os_vif_util [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:09:70,bridge_name='br-int',has_traffic_filtering=True,id=18497509-f640-42ef-b25c-ac9f121ce0db,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18497509-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.132 226310 DEBUG os_vif [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:09:70,bridge_name='br-int',has_traffic_filtering=True,id=18497509-f640-42ef-b25c-ac9f121ce0db,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18497509-f6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.133 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.133 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.134 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.140 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.140 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18497509-f6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.141 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap18497509-f6, col_values=(('external_ids', {'iface-id': '18497509-f640-42ef-b25c-ac9f121ce0db', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:09:70', 'vm-uuid': 'a28f7dd6-9c8c-46f4-9ce0-7d40194d9749'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:55 np0005539564 NetworkManager[48997]: <info>  [1764403675.1458] manager: (tap18497509-f6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.147 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.156 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.157 226310 INFO os_vif [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:09:70,bridge_name='br-int',has_traffic_filtering=True,id=18497509-f640-42ef-b25c-ac9f121ce0db,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18497509-f6')#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.211 226310 DEBUG nova.virt.libvirt.driver [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.211 226310 DEBUG nova.virt.libvirt.driver [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.211 226310 DEBUG nova.virt.libvirt.driver [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] No VIF found with MAC fa:16:3e:49:09:70, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.212 226310 INFO nova.virt.libvirt.driver [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Using config drive#033[00m
Nov 29 03:07:55 np0005539564 NetworkManager[48997]: <info>  [1764403675.3206] manager: (tap18497509-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/148)
Nov 29 03:07:55 np0005539564 kernel: tap18497509-f6: entered promiscuous mode
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.327 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:55 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:55Z|00278|binding|INFO|Claiming lport 18497509-f640-42ef-b25c-ac9f121ce0db for this chassis.
Nov 29 03:07:55 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:55Z|00279|binding|INFO|18497509-f640-42ef-b25c-ac9f121ce0db: Claiming fa:16:3e:49:09:70 10.100.0.7
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.340 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:09:70 10.100.0.7'], port_security=['fa:16:3e:49:09:70 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a28f7dd6-9c8c-46f4-9ce0-7d40194d9749', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8665acc6-1650-4878-8ffd-84f079f13741', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '750bde86c9c7473fbf7f0a6a3b16cec1', 'neutron:revision_number': '7', 'neutron:security_group_ids': '8b143d91-a9e2-433e-a887-8851c4d95ae6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14735bae-f089-4bfd-bad1-f5ab455915a0, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=18497509-f640-42ef-b25c-ac9f121ce0db) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.342 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 18497509-f640-42ef-b25c-ac9f121ce0db in datapath 8665acc6-1650-4878-8ffd-84f079f13741 bound to our chassis#033[00m
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.343 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8665acc6-1650-4878-8ffd-84f079f13741#033[00m
Nov 29 03:07:55 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:55Z|00280|binding|INFO|Setting lport 18497509-f640-42ef-b25c-ac9f121ce0db ovn-installed in OVS
Nov 29 03:07:55 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:55Z|00281|binding|INFO|Setting lport 18497509-f640-42ef-b25c-ac9f121ce0db up in Southbound
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.366 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.365 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[13a95458-7abb-4e0b-9ecc-64508eb68b71]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.366 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8665acc6-11 in ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.369 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.373 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8665acc6-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.373 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c41f213b-b40e-4bfa-806b-2af21897a70f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.375 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e642d694-6945-4e5a-aaf8-5b96d319053f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.393 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[25b6826b-52f4-462f-8d3c-ca9333fa88d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:55 np0005539564 systemd-udevd[258082]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:07:55 np0005539564 systemd-machined[190128]: New machine qemu-37-instance-00000055.
Nov 29 03:07:55 np0005539564 NetworkManager[48997]: <info>  [1764403675.4132] device (tap18497509-f6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:07:55 np0005539564 NetworkManager[48997]: <info>  [1764403675.4145] device (tap18497509-f6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:07:55 np0005539564 systemd[1]: Started Virtual Machine qemu-37-instance-00000055.
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.423 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5cfe36e4-4e20-44ec-b07f-0917e9e77851]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:55.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.462 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[04cdd656-6f08-42a3-a334-651205db0a6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:55 np0005539564 NetworkManager[48997]: <info>  [1764403675.4699] manager: (tap8665acc6-10): new Veth device (/org/freedesktop/NetworkManager/Devices/149)
Nov 29 03:07:55 np0005539564 systemd-udevd[258085]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.467 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b4c276f0-0cde-46c2-b3d4-092cacf5be49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.507 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[fefcd129-eee0-4f21-a99d-8c10bf57c641]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.511 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[a2cba897-36f9-4e77-8010-8747020b65ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:55 np0005539564 NetworkManager[48997]: <info>  [1764403675.5429] device (tap8665acc6-10): carrier: link connected
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.550 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[8cf0dcb9-9b25-4bf9-8704-a84d62effe0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.567 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8c3b7afa-f4cd-46b7-85f0-1cf3d736447c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8665acc6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:22:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665022, 'reachable_time': 33325, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258113, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.589 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a31866ef-d4a5-4101-a65f-abeaee402b73]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:2248'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665022, 'tstamp': 665022}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258114, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.605 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6f8f0385-45dc-4c55-a89c-11518984b524]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8665acc6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:22:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665022, 'reachable_time': 33325, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258115, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:55.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.654 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[21c7ead8-329a-4520-a3e9-dcf316fc044c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.731 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[10934877-0492-4155-a3be-dbac3a76109d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.733 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8665acc6-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.733 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.733 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8665acc6-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.735 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:55 np0005539564 NetworkManager[48997]: <info>  [1764403675.7362] manager: (tap8665acc6-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/150)
Nov 29 03:07:55 np0005539564 kernel: tap8665acc6-10: entered promiscuous mode
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.738 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.739 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8665acc6-10, col_values=(('external_ids', {'iface-id': 'e0f892e1-f1e8-4b29-8918-6cd036b9e8e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.740 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:55 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:55Z|00282|binding|INFO|Releasing lport e0f892e1-f1e8-4b29-8918-6cd036b9e8e0 from this chassis (sb_readonly=0)
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.766 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.767 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8665acc6-1650-4878-8ffd-84f079f13741.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8665acc6-1650-4878-8ffd-84f079f13741.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.769 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.773 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8e4d84ce-7e36-4cca-8c6e-39758ded33ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.774 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-8665acc6-1650-4878-8ffd-84f079f13741
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/8665acc6-1650-4878-8ffd-84f079f13741.pid.haproxy
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 8665acc6-1650-4878-8ffd-84f079f13741
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:07:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:07:55.774 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'env', 'PROCESS_TAG=haproxy-8665acc6-1650-4878-8ffd-84f079f13741', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8665acc6-1650-4878-8ffd-84f079f13741.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.807 226310 DEBUG nova.compute.manager [req-cdd73bf9-7256-4711-9f7d-e1bf90f6d88d req-fe46e03d-daa2-4a2d-ac46-f445913c0e39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.808 226310 DEBUG oslo_concurrency.lockutils [req-cdd73bf9-7256-4711-9f7d-e1bf90f6d88d req-fe46e03d-daa2-4a2d-ac46-f445913c0e39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.809 226310 DEBUG oslo_concurrency.lockutils [req-cdd73bf9-7256-4711-9f7d-e1bf90f6d88d req-fe46e03d-daa2-4a2d-ac46-f445913c0e39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.809 226310 DEBUG oslo_concurrency.lockutils [req-cdd73bf9-7256-4711-9f7d-e1bf90f6d88d req-fe46e03d-daa2-4a2d-ac46-f445913c0e39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.810 226310 DEBUG nova.compute.manager [req-cdd73bf9-7256-4711-9f7d-e1bf90f6d88d req-fe46e03d-daa2-4a2d-ac46-f445913c0e39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] No waiting events found dispatching network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.810 226310 WARNING nova.compute.manager [req-cdd73bf9-7256-4711-9f7d-e1bf90f6d88d req-fe46e03d-daa2-4a2d-ac46-f445913c0e39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received unexpected event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db for instance with vm_state active and task_state resize_finish.#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.891 226310 DEBUG nova.compute.manager [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.903 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403675.8922932, a28f7dd6-9c8c-46f4-9ce0-7d40194d9749 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.903 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.912 226310 INFO nova.virt.libvirt.driver [-] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Instance running successfully.#033[00m
Nov 29 03:07:55 np0005539564 virtqemud[225880]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.917 226310 DEBUG nova.network.neutron [req-0a4002b8-7483-45e1-8ba2-968d0fec6a6b req-9a03dea9-7d54-49d5-8c3d-d47fdfba6347 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Updated VIF entry in instance network info cache for port 18497509-f640-42ef-b25c-ac9f121ce0db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.918 226310 DEBUG nova.network.neutron [req-0a4002b8-7483-45e1-8ba2-968d0fec6a6b req-9a03dea9-7d54-49d5-8c3d-d47fdfba6347 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Updating instance_info_cache with network_info: [{"id": "18497509-f640-42ef-b25c-ac9f121ce0db", "address": "fa:16:3e:49:09:70", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18497509-f6", "ovs_interfaceid": "18497509-f640-42ef-b25c-ac9f121ce0db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.920 226310 DEBUG nova.virt.libvirt.guest [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.921 226310 DEBUG nova.virt.libvirt.driver [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.944 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.947 226310 DEBUG oslo_concurrency.lockutils [req-0a4002b8-7483-45e1-8ba2-968d0fec6a6b req-9a03dea9-7d54-49d5-8c3d-d47fdfba6347 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.950 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.998 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:07:55 np0005539564 nova_compute[226295]: 2025-11-29 08:07:55.999 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403675.9024384, a28f7dd6-9c8c-46f4-9ce0-7d40194d9749 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:56 np0005539564 nova_compute[226295]: 2025-11-29 08:07:56.000 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] VM Started (Lifecycle Event)#033[00m
Nov 29 03:07:56 np0005539564 nova_compute[226295]: 2025-11-29 08:07:56.027 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:56 np0005539564 nova_compute[226295]: 2025-11-29 08:07:56.031 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:07:56 np0005539564 podman[258189]: 2025-11-29 08:07:56.313369948 +0000 UTC m=+0.074257803 container create d871c41238e0e3975df5e1f5a30ead2ad33365ec5d6c9cfe0065050226adf218 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:07:56 np0005539564 podman[258189]: 2025-11-29 08:07:56.275867307 +0000 UTC m=+0.036755192 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:07:56 np0005539564 systemd[1]: Started libpod-conmon-d871c41238e0e3975df5e1f5a30ead2ad33365ec5d6c9cfe0065050226adf218.scope.
Nov 29 03:07:56 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:07:56 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d75ab3c743b48737f1bc55ad6e26a18ad8e180f44bcf7372c8c1a8d8a1ee4321/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:56 np0005539564 podman[258189]: 2025-11-29 08:07:56.409977494 +0000 UTC m=+0.170865399 container init d871c41238e0e3975df5e1f5a30ead2ad33365ec5d6c9cfe0065050226adf218 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:07:56 np0005539564 podman[258189]: 2025-11-29 08:07:56.416326565 +0000 UTC m=+0.177214430 container start d871c41238e0e3975df5e1f5a30ead2ad33365ec5d6c9cfe0065050226adf218 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:07:56 np0005539564 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[258205]: [NOTICE]   (258209) : New worker (258211) forked
Nov 29 03:07:56 np0005539564 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[258205]: [NOTICE]   (258209) : Loading success.
Nov 29 03:07:56 np0005539564 nova_compute[226295]: 2025-11-29 08:07:56.793 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:57.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:57.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:57 np0005539564 nova_compute[226295]: 2025-11-29 08:07:57.965 226310 DEBUG nova.compute.manager [req-05cee2c2-fbfa-4f7b-8210-fd3e2bf344e7 req-ac0b5972-2edf-4cf4-9323-c8becfb5cbf4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:57 np0005539564 nova_compute[226295]: 2025-11-29 08:07:57.965 226310 DEBUG oslo_concurrency.lockutils [req-05cee2c2-fbfa-4f7b-8210-fd3e2bf344e7 req-ac0b5972-2edf-4cf4-9323-c8becfb5cbf4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:57 np0005539564 nova_compute[226295]: 2025-11-29 08:07:57.966 226310 DEBUG oslo_concurrency.lockutils [req-05cee2c2-fbfa-4f7b-8210-fd3e2bf344e7 req-ac0b5972-2edf-4cf4-9323-c8becfb5cbf4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:57 np0005539564 nova_compute[226295]: 2025-11-29 08:07:57.966 226310 DEBUG oslo_concurrency.lockutils [req-05cee2c2-fbfa-4f7b-8210-fd3e2bf344e7 req-ac0b5972-2edf-4cf4-9323-c8becfb5cbf4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:57 np0005539564 nova_compute[226295]: 2025-11-29 08:07:57.966 226310 DEBUG nova.compute.manager [req-05cee2c2-fbfa-4f7b-8210-fd3e2bf344e7 req-ac0b5972-2edf-4cf4-9323-c8becfb5cbf4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] No waiting events found dispatching network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:57 np0005539564 nova_compute[226295]: 2025-11-29 08:07:57.967 226310 WARNING nova.compute.manager [req-05cee2c2-fbfa-4f7b-8210-fd3e2bf344e7 req-ac0b5972-2edf-4cf4-9323-c8becfb5cbf4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received unexpected event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:07:58 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:58Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fe:f0:45 10.100.0.6
Nov 29 03:07:58 np0005539564 ovn_controller[130591]: 2025-11-29T08:07:58Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fe:f0:45 10.100.0.6
Nov 29 03:07:58 np0005539564 nova_compute[226295]: 2025-11-29 08:07:58.928 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:59 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:07:59 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:07:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:59.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:07:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:59.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:00 np0005539564 nova_compute[226295]: 2025-11-29 08:08:00.145 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e261 e261: 3 total, 3 up, 3 in
Nov 29 03:08:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:01.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:01.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:01 np0005539564 nova_compute[226295]: 2025-11-29 08:08:01.796 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:01 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Nov 29 03:08:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:08:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:03.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:08:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:03.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:03.718 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:03.719 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:03.721 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:05 np0005539564 nova_compute[226295]: 2025-11-29 08:08:05.148 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:08:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:05.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:08:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:05.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:05 np0005539564 nova_compute[226295]: 2025-11-29 08:08:05.964 226310 DEBUG oslo_concurrency.lockutils [None req-8f2d3b0d-3adf-4c19-90bc-c6fa769ce2c7 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Acquiring lock "interface-14addc5e-27d3-46d3-a93f-b22b3f400873-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:05 np0005539564 nova_compute[226295]: 2025-11-29 08:08:05.964 226310 DEBUG oslo_concurrency.lockutils [None req-8f2d3b0d-3adf-4c19-90bc-c6fa769ce2c7 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lock "interface-14addc5e-27d3-46d3-a93f-b22b3f400873-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:05 np0005539564 nova_compute[226295]: 2025-11-29 08:08:05.965 226310 DEBUG nova.objects.instance [None req-8f2d3b0d-3adf-4c19-90bc-c6fa769ce2c7 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lazy-loading 'flavor' on Instance uuid 14addc5e-27d3-46d3-a93f-b22b3f400873 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e262 e262: 3 total, 3 up, 3 in
Nov 29 03:08:06 np0005539564 nova_compute[226295]: 2025-11-29 08:08:06.798 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:07 np0005539564 nova_compute[226295]: 2025-11-29 08:08:07.265 226310 DEBUG nova.objects.instance [None req-8f2d3b0d-3adf-4c19-90bc-c6fa769ce2c7 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lazy-loading 'pci_requests' on Instance uuid 14addc5e-27d3-46d3-a93f-b22b3f400873 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:07 np0005539564 nova_compute[226295]: 2025-11-29 08:08:07.296 226310 DEBUG nova.network.neutron [None req-8f2d3b0d-3adf-4c19-90bc-c6fa769ce2c7 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:08:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:07.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:07 np0005539564 nova_compute[226295]: 2025-11-29 08:08:07.560 226310 DEBUG nova.policy [None req-8f2d3b0d-3adf-4c19-90bc-c6fa769ce2c7 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '20d37020e7484e3ead9c61a89db491b1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a6104c57e0814f16958b14707debf843', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:08:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:07.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:08 np0005539564 nova_compute[226295]: 2025-11-29 08:08:08.434 226310 DEBUG nova.network.neutron [None req-8f2d3b0d-3adf-4c19-90bc-c6fa769ce2c7 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Successfully created port: 96f4ddc5-c950-4eab-9f01-08954642669a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:08:08 np0005539564 nova_compute[226295]: 2025-11-29 08:08:08.821 226310 DEBUG oslo_concurrency.lockutils [None req-a61ceffa-7266-4e23-b9af-3232c2ae839c 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:08 np0005539564 nova_compute[226295]: 2025-11-29 08:08:08.822 226310 DEBUG oslo_concurrency.lockutils [None req-a61ceffa-7266-4e23-b9af-3232c2ae839c 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:08 np0005539564 nova_compute[226295]: 2025-11-29 08:08:08.823 226310 DEBUG oslo_concurrency.lockutils [None req-a61ceffa-7266-4e23-b9af-3232c2ae839c 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:08 np0005539564 nova_compute[226295]: 2025-11-29 08:08:08.823 226310 DEBUG oslo_concurrency.lockutils [None req-a61ceffa-7266-4e23-b9af-3232c2ae839c 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:08 np0005539564 nova_compute[226295]: 2025-11-29 08:08:08.824 226310 DEBUG oslo_concurrency.lockutils [None req-a61ceffa-7266-4e23-b9af-3232c2ae839c 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:08 np0005539564 nova_compute[226295]: 2025-11-29 08:08:08.826 226310 INFO nova.compute.manager [None req-a61ceffa-7266-4e23-b9af-3232c2ae839c 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Terminating instance#033[00m
Nov 29 03:08:08 np0005539564 nova_compute[226295]: 2025-11-29 08:08:08.828 226310 DEBUG nova.compute.manager [None req-a61ceffa-7266-4e23-b9af-3232c2ae839c 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:08:08 np0005539564 kernel: tap18497509-f6 (unregistering): left promiscuous mode
Nov 29 03:08:08 np0005539564 NetworkManager[48997]: <info>  [1764403688.9428] device (tap18497509-f6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:08:08 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:08Z|00283|binding|INFO|Releasing lport 18497509-f640-42ef-b25c-ac9f121ce0db from this chassis (sb_readonly=0)
Nov 29 03:08:08 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:08Z|00284|binding|INFO|Setting lport 18497509-f640-42ef-b25c-ac9f121ce0db down in Southbound
Nov 29 03:08:08 np0005539564 nova_compute[226295]: 2025-11-29 08:08:08.956 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:08 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:08Z|00285|binding|INFO|Removing iface tap18497509-f6 ovn-installed in OVS
Nov 29 03:08:08 np0005539564 nova_compute[226295]: 2025-11-29 08:08:08.959 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:08 np0005539564 nova_compute[226295]: 2025-11-29 08:08:08.977 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:08.981 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:09:70 10.100.0.7'], port_security=['fa:16:3e:49:09:70 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a28f7dd6-9c8c-46f4-9ce0-7d40194d9749', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8665acc6-1650-4878-8ffd-84f079f13741', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '750bde86c9c7473fbf7f0a6a3b16cec1', 'neutron:revision_number': '9', 'neutron:security_group_ids': '8b143d91-a9e2-433e-a887-8851c4d95ae6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14735bae-f089-4bfd-bad1-f5ab455915a0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=18497509-f640-42ef-b25c-ac9f121ce0db) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:08:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:08.983 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 18497509-f640-42ef-b25c-ac9f121ce0db in datapath 8665acc6-1650-4878-8ffd-84f079f13741 unbound from our chassis#033[00m
Nov 29 03:08:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:08.985 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8665acc6-1650-4878-8ffd-84f079f13741, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:08:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:08.986 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[34b27b89-e986-4cf8-9560-0ce94256cc32]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:08.987 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 namespace which is not needed anymore#033[00m
Nov 29 03:08:09 np0005539564 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000055.scope: Deactivated successfully.
Nov 29 03:08:09 np0005539564 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000055.scope: Consumed 13.135s CPU time.
Nov 29 03:08:09 np0005539564 systemd-machined[190128]: Machine qemu-37-instance-00000055 terminated.
Nov 29 03:08:09 np0005539564 nova_compute[226295]: 2025-11-29 08:08:09.066 226310 INFO nova.virt.libvirt.driver [-] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Instance destroyed successfully.#033[00m
Nov 29 03:08:09 np0005539564 nova_compute[226295]: 2025-11-29 08:08:09.067 226310 DEBUG nova.objects.instance [None req-a61ceffa-7266-4e23-b9af-3232c2ae839c 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'resources' on Instance uuid a28f7dd6-9c8c-46f4-9ce0-7d40194d9749 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:09 np0005539564 nova_compute[226295]: 2025-11-29 08:08:09.088 226310 DEBUG nova.virt.libvirt.vif [None req-a61ceffa-7266-4e23-b9af-3232c2ae839c 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:07:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1335756739',display_name='tempest-ServerDiskConfigTestJSON-server-1335756739',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1335756739',id=85,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-mr1u36f8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDiskConfigTestJSON-904422786-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:08:01Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=a28f7dd6-9c8c-46f4-9ce0-7d40194d9749,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18497509-f640-42ef-b25c-ac9f121ce0db", "address": "fa:16:3e:49:09:70", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18497509-f6", "ovs_interfaceid": "18497509-f640-42ef-b25c-ac9f121ce0db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:08:09 np0005539564 nova_compute[226295]: 2025-11-29 08:08:09.089 226310 DEBUG nova.network.os_vif_util [None req-a61ceffa-7266-4e23-b9af-3232c2ae839c 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "18497509-f640-42ef-b25c-ac9f121ce0db", "address": "fa:16:3e:49:09:70", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18497509-f6", "ovs_interfaceid": "18497509-f640-42ef-b25c-ac9f121ce0db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:08:09 np0005539564 nova_compute[226295]: 2025-11-29 08:08:09.090 226310 DEBUG nova.network.os_vif_util [None req-a61ceffa-7266-4e23-b9af-3232c2ae839c 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:49:09:70,bridge_name='br-int',has_traffic_filtering=True,id=18497509-f640-42ef-b25c-ac9f121ce0db,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18497509-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:08:09 np0005539564 nova_compute[226295]: 2025-11-29 08:08:09.091 226310 DEBUG os_vif [None req-a61ceffa-7266-4e23-b9af-3232c2ae839c 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:09:70,bridge_name='br-int',has_traffic_filtering=True,id=18497509-f640-42ef-b25c-ac9f121ce0db,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18497509-f6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:08:09 np0005539564 nova_compute[226295]: 2025-11-29 08:08:09.093 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:09 np0005539564 nova_compute[226295]: 2025-11-29 08:08:09.093 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18497509-f6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:09 np0005539564 nova_compute[226295]: 2025-11-29 08:08:09.095 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:09 np0005539564 nova_compute[226295]: 2025-11-29 08:08:09.098 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:08:09 np0005539564 nova_compute[226295]: 2025-11-29 08:08:09.100 226310 INFO os_vif [None req-a61ceffa-7266-4e23-b9af-3232c2ae839c 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:09:70,bridge_name='br-int',has_traffic_filtering=True,id=18497509-f640-42ef-b25c-ac9f121ce0db,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18497509-f6')#033[00m
Nov 29 03:08:09 np0005539564 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[258205]: [NOTICE]   (258209) : haproxy version is 2.8.14-c23fe91
Nov 29 03:08:09 np0005539564 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[258205]: [NOTICE]   (258209) : path to executable is /usr/sbin/haproxy
Nov 29 03:08:09 np0005539564 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[258205]: [WARNING]  (258209) : Exiting Master process...
Nov 29 03:08:09 np0005539564 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[258205]: [WARNING]  (258209) : Exiting Master process...
Nov 29 03:08:09 np0005539564 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[258205]: [ALERT]    (258209) : Current worker (258211) exited with code 143 (Terminated)
Nov 29 03:08:09 np0005539564 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[258205]: [WARNING]  (258209) : All workers exited. Exiting... (0)
Nov 29 03:08:09 np0005539564 systemd[1]: libpod-d871c41238e0e3975df5e1f5a30ead2ad33365ec5d6c9cfe0065050226adf218.scope: Deactivated successfully.
Nov 29 03:08:09 np0005539564 podman[258302]: 2025-11-29 08:08:09.157224518 +0000 UTC m=+0.054723556 container died d871c41238e0e3975df5e1f5a30ead2ad33365ec5d6c9cfe0065050226adf218 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 03:08:09 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d871c41238e0e3975df5e1f5a30ead2ad33365ec5d6c9cfe0065050226adf218-userdata-shm.mount: Deactivated successfully.
Nov 29 03:08:09 np0005539564 systemd[1]: var-lib-containers-storage-overlay-d75ab3c743b48737f1bc55ad6e26a18ad8e180f44bcf7372c8c1a8d8a1ee4321-merged.mount: Deactivated successfully.
Nov 29 03:08:09 np0005539564 podman[258302]: 2025-11-29 08:08:09.195732017 +0000 UTC m=+0.093231065 container cleanup d871c41238e0e3975df5e1f5a30ead2ad33365ec5d6c9cfe0065050226adf218 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:08:09 np0005539564 systemd[1]: libpod-conmon-d871c41238e0e3975df5e1f5a30ead2ad33365ec5d6c9cfe0065050226adf218.scope: Deactivated successfully.
Nov 29 03:08:09 np0005539564 podman[258349]: 2025-11-29 08:08:09.292992911 +0000 UTC m=+0.063898684 container remove d871c41238e0e3975df5e1f5a30ead2ad33365ec5d6c9cfe0065050226adf218 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 03:08:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:09.302 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ea93f1e9-c2ab-4cce-99ea-56663f9bb7d5]: (4, ('Sat Nov 29 08:08:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 (d871c41238e0e3975df5e1f5a30ead2ad33365ec5d6c9cfe0065050226adf218)\nd871c41238e0e3975df5e1f5a30ead2ad33365ec5d6c9cfe0065050226adf218\nSat Nov 29 08:08:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 (d871c41238e0e3975df5e1f5a30ead2ad33365ec5d6c9cfe0065050226adf218)\nd871c41238e0e3975df5e1f5a30ead2ad33365ec5d6c9cfe0065050226adf218\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:09.305 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3ea5c593-4c61-4251-9251-e72030798403]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:09.306 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8665acc6-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:09 np0005539564 nova_compute[226295]: 2025-11-29 08:08:09.344 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:09 np0005539564 kernel: tap8665acc6-10: left promiscuous mode
Nov 29 03:08:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:09.349 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[82cbdeff-ffe5-4cf8-90c5-204ee031714e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:09 np0005539564 nova_compute[226295]: 2025-11-29 08:08:09.375 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:09.379 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9720c9fb-ee5f-488d-aff7-3a57f19f5243]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:09.380 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7fa5df85-190d-48bf-a3bd-492279d40b46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:09.406 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a7adc0c9-8fa0-4df3-afc9-ccd8c2706532]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665013, 'reachable_time': 35835, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258366, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:09.409 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:08:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:09.410 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[95ad9d2a-ef73-47cd-8c3d-e0ec80099eb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:09 np0005539564 systemd[1]: run-netns-ovnmeta\x2d8665acc6\x2d1650\x2d4878\x2d8ffd\x2d84f079f13741.mount: Deactivated successfully.
Nov 29 03:08:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:09.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:09.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:09 np0005539564 nova_compute[226295]: 2025-11-29 08:08:09.924 226310 INFO nova.virt.libvirt.driver [None req-a61ceffa-7266-4e23-b9af-3232c2ae839c 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Deleting instance files /var/lib/nova/instances/a28f7dd6-9c8c-46f4-9ce0-7d40194d9749_del#033[00m
Nov 29 03:08:09 np0005539564 nova_compute[226295]: 2025-11-29 08:08:09.925 226310 INFO nova.virt.libvirt.driver [None req-a61ceffa-7266-4e23-b9af-3232c2ae839c 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Deletion of /var/lib/nova/instances/a28f7dd6-9c8c-46f4-9ce0-7d40194d9749_del complete#033[00m
Nov 29 03:08:10 np0005539564 nova_compute[226295]: 2025-11-29 08:08:10.159 226310 INFO nova.compute.manager [None req-a61ceffa-7266-4e23-b9af-3232c2ae839c 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Took 1.33 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:08:10 np0005539564 nova_compute[226295]: 2025-11-29 08:08:10.160 226310 DEBUG oslo.service.loopingcall [None req-a61ceffa-7266-4e23-b9af-3232c2ae839c 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:08:10 np0005539564 nova_compute[226295]: 2025-11-29 08:08:10.160 226310 DEBUG nova.compute.manager [-] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:08:10 np0005539564 nova_compute[226295]: 2025-11-29 08:08:10.160 226310 DEBUG nova.network.neutron [-] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:08:10 np0005539564 nova_compute[226295]: 2025-11-29 08:08:10.494 226310 DEBUG nova.compute.manager [req-b26757d7-de85-4824-97c4-461b2dafb39b req-5bb11b77-638b-4a5a-93d4-1a213d6c2c11 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received event network-vif-unplugged-18497509-f640-42ef-b25c-ac9f121ce0db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:10 np0005539564 nova_compute[226295]: 2025-11-29 08:08:10.494 226310 DEBUG oslo_concurrency.lockutils [req-b26757d7-de85-4824-97c4-461b2dafb39b req-5bb11b77-638b-4a5a-93d4-1a213d6c2c11 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:10 np0005539564 nova_compute[226295]: 2025-11-29 08:08:10.495 226310 DEBUG oslo_concurrency.lockutils [req-b26757d7-de85-4824-97c4-461b2dafb39b req-5bb11b77-638b-4a5a-93d4-1a213d6c2c11 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:10 np0005539564 nova_compute[226295]: 2025-11-29 08:08:10.496 226310 DEBUG oslo_concurrency.lockutils [req-b26757d7-de85-4824-97c4-461b2dafb39b req-5bb11b77-638b-4a5a-93d4-1a213d6c2c11 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:10 np0005539564 nova_compute[226295]: 2025-11-29 08:08:10.496 226310 DEBUG nova.compute.manager [req-b26757d7-de85-4824-97c4-461b2dafb39b req-5bb11b77-638b-4a5a-93d4-1a213d6c2c11 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] No waiting events found dispatching network-vif-unplugged-18497509-f640-42ef-b25c-ac9f121ce0db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:10 np0005539564 nova_compute[226295]: 2025-11-29 08:08:10.496 226310 DEBUG nova.compute.manager [req-b26757d7-de85-4824-97c4-461b2dafb39b req-5bb11b77-638b-4a5a-93d4-1a213d6c2c11 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received event network-vif-unplugged-18497509-f640-42ef-b25c-ac9f121ce0db for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:08:10 np0005539564 nova_compute[226295]: 2025-11-29 08:08:10.511 226310 DEBUG nova.network.neutron [None req-8f2d3b0d-3adf-4c19-90bc-c6fa769ce2c7 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Successfully updated port: 96f4ddc5-c950-4eab-9f01-08954642669a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:08:10 np0005539564 nova_compute[226295]: 2025-11-29 08:08:10.604 226310 DEBUG oslo_concurrency.lockutils [None req-8f2d3b0d-3adf-4c19-90bc-c6fa769ce2c7 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Acquiring lock "refresh_cache-14addc5e-27d3-46d3-a93f-b22b3f400873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:08:10 np0005539564 nova_compute[226295]: 2025-11-29 08:08:10.605 226310 DEBUG oslo_concurrency.lockutils [None req-8f2d3b0d-3adf-4c19-90bc-c6fa769ce2c7 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Acquired lock "refresh_cache-14addc5e-27d3-46d3-a93f-b22b3f400873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:08:10 np0005539564 nova_compute[226295]: 2025-11-29 08:08:10.605 226310 DEBUG nova.network.neutron [None req-8f2d3b0d-3adf-4c19-90bc-c6fa769ce2c7 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:08:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:10 np0005539564 nova_compute[226295]: 2025-11-29 08:08:10.753 226310 DEBUG nova.compute.manager [req-8d1cc972-1199-4fc5-ac38-884ddf258573 req-d13e8ef2-559a-45b1-9477-69aa94a86799 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Received event network-changed-96f4ddc5-c950-4eab-9f01-08954642669a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:10 np0005539564 nova_compute[226295]: 2025-11-29 08:08:10.754 226310 DEBUG nova.compute.manager [req-8d1cc972-1199-4fc5-ac38-884ddf258573 req-d13e8ef2-559a-45b1-9477-69aa94a86799 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Refreshing instance network info cache due to event network-changed-96f4ddc5-c950-4eab-9f01-08954642669a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:08:10 np0005539564 nova_compute[226295]: 2025-11-29 08:08:10.754 226310 DEBUG oslo_concurrency.lockutils [req-8d1cc972-1199-4fc5-ac38-884ddf258573 req-d13e8ef2-559a-45b1-9477-69aa94a86799 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-14addc5e-27d3-46d3-a93f-b22b3f400873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:08:10 np0005539564 nova_compute[226295]: 2025-11-29 08:08:10.982 226310 DEBUG nova.network.neutron [-] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:08:11 np0005539564 nova_compute[226295]: 2025-11-29 08:08:11.061 226310 INFO nova.compute.manager [-] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Took 0.90 seconds to deallocate network for instance.#033[00m
Nov 29 03:08:11 np0005539564 nova_compute[226295]: 2025-11-29 08:08:11.140 226310 DEBUG nova.compute.manager [req-a9ea7509-ebac-4eb4-954f-476e9329facd req-07bb7e36-3e9d-4fe7-8504-e59537ed8515 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received event network-vif-deleted-18497509-f640-42ef-b25c-ac9f121ce0db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:11 np0005539564 nova_compute[226295]: 2025-11-29 08:08:11.166 226310 DEBUG oslo_concurrency.lockutils [None req-a61ceffa-7266-4e23-b9af-3232c2ae839c 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:11 np0005539564 nova_compute[226295]: 2025-11-29 08:08:11.167 226310 DEBUG oslo_concurrency.lockutils [None req-a61ceffa-7266-4e23-b9af-3232c2ae839c 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:11 np0005539564 nova_compute[226295]: 2025-11-29 08:08:11.174 226310 DEBUG oslo_concurrency.lockutils [None req-a61ceffa-7266-4e23-b9af-3232c2ae839c 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:11 np0005539564 nova_compute[226295]: 2025-11-29 08:08:11.268 226310 INFO nova.scheduler.client.report [None req-a61ceffa-7266-4e23-b9af-3232c2ae839c 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Deleted allocations for instance a28f7dd6-9c8c-46f4-9ce0-7d40194d9749#033[00m
Nov 29 03:08:11 np0005539564 nova_compute[226295]: 2025-11-29 08:08:11.431 226310 DEBUG oslo_concurrency.lockutils [None req-a61ceffa-7266-4e23-b9af-3232c2ae839c 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:11.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:11.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:11 np0005539564 nova_compute[226295]: 2025-11-29 08:08:11.801 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:12 np0005539564 nova_compute[226295]: 2025-11-29 08:08:12.226 226310 DEBUG oslo_concurrency.lockutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "9b9952a8-61d7-410f-9f29-081ff912c4cb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:12 np0005539564 nova_compute[226295]: 2025-11-29 08:08:12.227 226310 DEBUG oslo_concurrency.lockutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:12 np0005539564 nova_compute[226295]: 2025-11-29 08:08:12.246 226310 DEBUG nova.compute.manager [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:08:12 np0005539564 nova_compute[226295]: 2025-11-29 08:08:12.337 226310 DEBUG oslo_concurrency.lockutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:12 np0005539564 nova_compute[226295]: 2025-11-29 08:08:12.338 226310 DEBUG oslo_concurrency.lockutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:12 np0005539564 nova_compute[226295]: 2025-11-29 08:08:12.352 226310 DEBUG nova.virt.hardware [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:08:12 np0005539564 nova_compute[226295]: 2025-11-29 08:08:12.354 226310 INFO nova.compute.claims [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:08:12 np0005539564 nova_compute[226295]: 2025-11-29 08:08:12.537 226310 DEBUG oslo_concurrency.processutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:12 np0005539564 nova_compute[226295]: 2025-11-29 08:08:12.860 226310 DEBUG nova.compute.manager [req-fe87abd6-99b5-4990-94e0-6f03e9990760 req-9667a1a5-6fa0-4a16-bcd7-45221ef3387d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:12 np0005539564 nova_compute[226295]: 2025-11-29 08:08:12.860 226310 DEBUG oslo_concurrency.lockutils [req-fe87abd6-99b5-4990-94e0-6f03e9990760 req-9667a1a5-6fa0-4a16-bcd7-45221ef3387d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:12 np0005539564 nova_compute[226295]: 2025-11-29 08:08:12.861 226310 DEBUG oslo_concurrency.lockutils [req-fe87abd6-99b5-4990-94e0-6f03e9990760 req-9667a1a5-6fa0-4a16-bcd7-45221ef3387d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:12 np0005539564 nova_compute[226295]: 2025-11-29 08:08:12.861 226310 DEBUG oslo_concurrency.lockutils [req-fe87abd6-99b5-4990-94e0-6f03e9990760 req-9667a1a5-6fa0-4a16-bcd7-45221ef3387d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:12 np0005539564 nova_compute[226295]: 2025-11-29 08:08:12.861 226310 DEBUG nova.compute.manager [req-fe87abd6-99b5-4990-94e0-6f03e9990760 req-9667a1a5-6fa0-4a16-bcd7-45221ef3387d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] No waiting events found dispatching network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:12 np0005539564 nova_compute[226295]: 2025-11-29 08:08:12.862 226310 WARNING nova.compute.manager [req-fe87abd6-99b5-4990-94e0-6f03e9990760 req-9667a1a5-6fa0-4a16-bcd7-45221ef3387d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received unexpected event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:08:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:08:12 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4130764057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.003 226310 DEBUG oslo_concurrency.processutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.012 226310 DEBUG nova.compute.provider_tree [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.057 226310 DEBUG nova.scheduler.client.report [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.117 226310 DEBUG nova.network.neutron [None req-8f2d3b0d-3adf-4c19-90bc-c6fa769ce2c7 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Updating instance_info_cache with network_info: [{"id": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "address": "fa:16:3e:fe:f0:45", "network": {"id": "f96ca160-f806-4467-a92a-7669548852b0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1120864816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa1d4da-92", "ovs_interfaceid": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "96f4ddc5-c950-4eab-9f01-08954642669a", "address": "fa:16:3e:5e:7d:3d", "network": {"id": "ed4b55c0-489a-4302-a51c-6bb91006ca7e", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1159441313", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f4ddc5-c9", "ovs_interfaceid": "96f4ddc5-c950-4eab-9f01-08954642669a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.238 226310 DEBUG oslo_concurrency.lockutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.239 226310 DEBUG nova.compute.manager [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.244 226310 DEBUG oslo_concurrency.lockutils [None req-8f2d3b0d-3adf-4c19-90bc-c6fa769ce2c7 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Releasing lock "refresh_cache-14addc5e-27d3-46d3-a93f-b22b3f400873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.247 226310 DEBUG oslo_concurrency.lockutils [req-8d1cc972-1199-4fc5-ac38-884ddf258573 req-d13e8ef2-559a-45b1-9477-69aa94a86799 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-14addc5e-27d3-46d3-a93f-b22b3f400873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.248 226310 DEBUG nova.network.neutron [req-8d1cc972-1199-4fc5-ac38-884ddf258573 req-d13e8ef2-559a-45b1-9477-69aa94a86799 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Refreshing network info cache for port 96f4ddc5-c950-4eab-9f01-08954642669a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.253 226310 DEBUG nova.virt.libvirt.vif [None req-8f2d3b0d-3adf-4c19-90bc-c6fa769ce2c7 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:07:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1793423903',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1793423903',id=86,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKbPhmRzTkQm2LL7llHwXmde9U0uI+rBMNbooDLe9TK8uFGu2MWDNNFlxdqcFGpEXOxxGpH+eZsXb5EWj4gaq3WgxVHUcO3ffL3yywL7QaK4fsxNbY1WerF6n4fC7HmUGg==',key_name='tempest-keypair-921015683',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:45Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a6104c57e0814f16958b14707debf843',ramdisk_id='',reservation_id='r-fjo7fgu9',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-392498407',owner_user_name='tempest-TaggedAttachmentsTest-392498407-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:07:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='20d37020e7484e3ead9c61a89db491b1',uuid=14addc5e-27d3-46d3-a93f-b22b3f400873,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "96f4ddc5-c950-4eab-9f01-08954642669a", "address": "fa:16:3e:5e:7d:3d", "network": {"id": "ed4b55c0-489a-4302-a51c-6bb91006ca7e", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1159441313", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f4ddc5-c9", "ovs_interfaceid": "96f4ddc5-c950-4eab-9f01-08954642669a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.254 226310 DEBUG nova.network.os_vif_util [None req-8f2d3b0d-3adf-4c19-90bc-c6fa769ce2c7 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Converting VIF {"id": "96f4ddc5-c950-4eab-9f01-08954642669a", "address": "fa:16:3e:5e:7d:3d", "network": {"id": "ed4b55c0-489a-4302-a51c-6bb91006ca7e", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1159441313", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f4ddc5-c9", "ovs_interfaceid": "96f4ddc5-c950-4eab-9f01-08954642669a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.256 226310 DEBUG nova.network.os_vif_util [None req-8f2d3b0d-3adf-4c19-90bc-c6fa769ce2c7 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:7d:3d,bridge_name='br-int',has_traffic_filtering=True,id=96f4ddc5-c950-4eab-9f01-08954642669a,network=Network(ed4b55c0-489a-4302-a51c-6bb91006ca7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96f4ddc5-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.256 226310 DEBUG os_vif [None req-8f2d3b0d-3adf-4c19-90bc-c6fa769ce2c7 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:7d:3d,bridge_name='br-int',has_traffic_filtering=True,id=96f4ddc5-c950-4eab-9f01-08954642669a,network=Network(ed4b55c0-489a-4302-a51c-6bb91006ca7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96f4ddc5-c9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.257 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.258 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.259 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.264 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.265 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap96f4ddc5-c9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.266 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap96f4ddc5-c9, col_values=(('external_ids', {'iface-id': '96f4ddc5-c950-4eab-9f01-08954642669a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:7d:3d', 'vm-uuid': '14addc5e-27d3-46d3-a93f-b22b3f400873'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.271 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:13 np0005539564 NetworkManager[48997]: <info>  [1764403693.2732] manager: (tap96f4ddc5-c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/151)
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.282 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.284 226310 INFO os_vif [None req-8f2d3b0d-3adf-4c19-90bc-c6fa769ce2c7 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:7d:3d,bridge_name='br-int',has_traffic_filtering=True,id=96f4ddc5-c950-4eab-9f01-08954642669a,network=Network(ed4b55c0-489a-4302-a51c-6bb91006ca7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96f4ddc5-c9')#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.286 226310 DEBUG nova.virt.libvirt.vif [None req-8f2d3b0d-3adf-4c19-90bc-c6fa769ce2c7 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:07:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1793423903',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1793423903',id=86,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKbPhmRzTkQm2LL7llHwXmde9U0uI+rBMNbooDLe9TK8uFGu2MWDNNFlxdqcFGpEXOxxGpH+eZsXb5EWj4gaq3WgxVHUcO3ffL3yywL7QaK4fsxNbY1WerF6n4fC7HmUGg==',key_name='tempest-keypair-921015683',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:45Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a6104c57e0814f16958b14707debf843',ramdisk_id='',reservation_id='r-fjo7fgu9',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-392498407',owner_user_name='tempest-TaggedAttachmentsTest-392498407-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:07:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='20d37020e7484e3ead9c61a89db491b1',uuid=14addc5e-27d3-46d3-a93f-b22b3f400873,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "96f4ddc5-c950-4eab-9f01-08954642669a", "address": "fa:16:3e:5e:7d:3d", "network": {"id": "ed4b55c0-489a-4302-a51c-6bb91006ca7e", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1159441313", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f4ddc5-c9", "ovs_interfaceid": "96f4ddc5-c950-4eab-9f01-08954642669a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.286 226310 DEBUG nova.network.os_vif_util [None req-8f2d3b0d-3adf-4c19-90bc-c6fa769ce2c7 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Converting VIF {"id": "96f4ddc5-c950-4eab-9f01-08954642669a", "address": "fa:16:3e:5e:7d:3d", "network": {"id": "ed4b55c0-489a-4302-a51c-6bb91006ca7e", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1159441313", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f4ddc5-c9", "ovs_interfaceid": "96f4ddc5-c950-4eab-9f01-08954642669a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.287 226310 DEBUG nova.network.os_vif_util [None req-8f2d3b0d-3adf-4c19-90bc-c6fa769ce2c7 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:7d:3d,bridge_name='br-int',has_traffic_filtering=True,id=96f4ddc5-c950-4eab-9f01-08954642669a,network=Network(ed4b55c0-489a-4302-a51c-6bb91006ca7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96f4ddc5-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.293 226310 DEBUG nova.virt.libvirt.guest [None req-8f2d3b0d-3adf-4c19-90bc-c6fa769ce2c7 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] attach device xml: <interface type="ethernet">
Nov 29 03:08:13 np0005539564 nova_compute[226295]:  <mac address="fa:16:3e:5e:7d:3d"/>
Nov 29 03:08:13 np0005539564 nova_compute[226295]:  <model type="virtio"/>
Nov 29 03:08:13 np0005539564 nova_compute[226295]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:08:13 np0005539564 nova_compute[226295]:  <mtu size="1442"/>
Nov 29 03:08:13 np0005539564 nova_compute[226295]:  <target dev="tap96f4ddc5-c9"/>
Nov 29 03:08:13 np0005539564 nova_compute[226295]: </interface>
Nov 29 03:08:13 np0005539564 nova_compute[226295]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:08:13 np0005539564 kernel: tap96f4ddc5-c9: entered promiscuous mode
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.318 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:13 np0005539564 NetworkManager[48997]: <info>  [1764403693.3189] manager: (tap96f4ddc5-c9): new Tun device (/org/freedesktop/NetworkManager/Devices/152)
Nov 29 03:08:13 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:13Z|00286|binding|INFO|Claiming lport 96f4ddc5-c950-4eab-9f01-08954642669a for this chassis.
Nov 29 03:08:13 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:13Z|00287|binding|INFO|96f4ddc5-c950-4eab-9f01-08954642669a: Claiming fa:16:3e:5e:7d:3d 10.10.10.211
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.329 226310 DEBUG nova.compute.manager [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.329 226310 DEBUG nova.network.neutron [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.360 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:7d:3d 10.10.10.211'], port_security=['fa:16:3e:5e:7d:3d 10.10.10.211'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.10.10.211/24', 'neutron:device_id': '14addc5e-27d3-46d3-a93f-b22b3f400873', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed4b55c0-489a-4302-a51c-6bb91006ca7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6104c57e0814f16958b14707debf843', 'neutron:revision_number': '2', 'neutron:security_group_ids': '503a0486-5a06-4485-aa0f-cf204d1283d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5369234-207b-4b87-81b4-c4d521bafb17, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=96f4ddc5-c950-4eab-9f01-08954642669a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.363 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 96f4ddc5-c950-4eab-9f01-08954642669a in datapath ed4b55c0-489a-4302-a51c-6bb91006ca7e bound to our chassis#033[00m
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.366 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed4b55c0-489a-4302-a51c-6bb91006ca7e#033[00m
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.389 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[48fe530c-7c93-42d2-92c1-050c17c84e00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.390 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH taped4b55c0-41 in ovnmeta-ed4b55c0-489a-4302-a51c-6bb91006ca7e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:08:13 np0005539564 systemd-udevd[258397]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.393 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.393 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface taped4b55c0-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.393 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4c8b2624-d48c-4333-ad18-984c3bfc1498]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.396 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0fbd5069-4bd5-431b-b6e8-b2a8a60159a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:13 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:13Z|00288|binding|INFO|Setting lport 96f4ddc5-c950-4eab-9f01-08954642669a ovn-installed in OVS
Nov 29 03:08:13 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:13Z|00289|binding|INFO|Setting lport 96f4ddc5-c950-4eab-9f01-08954642669a up in Southbound
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.403 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:13 np0005539564 NetworkManager[48997]: <info>  [1764403693.4230] device (tap96f4ddc5-c9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:08:13 np0005539564 NetworkManager[48997]: <info>  [1764403693.4250] device (tap96f4ddc5-c9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.425 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[d987506e-2c40-4159-a6c6-a930a145436b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.451 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9b140099-8d2d-45e7-a99b-bd9dde4fd1e5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:08:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:13.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.484 226310 INFO nova.virt.libvirt.driver [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.498 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[7f29efb9-1ea2-48d7-8138-c038599a22eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.506 226310 DEBUG nova.compute.manager [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.506 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[373a2cc3-a739-4fda-ac04-2e9cbb981a57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:13 np0005539564 NetworkManager[48997]: <info>  [1764403693.5092] manager: (taped4b55c0-40): new Veth device (/org/freedesktop/NetworkManager/Devices/153)
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.539 226310 DEBUG nova.virt.libvirt.driver [None req-8f2d3b0d-3adf-4c19-90bc-c6fa769ce2c7 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.540 226310 DEBUG nova.virt.libvirt.driver [None req-8f2d3b0d-3adf-4c19-90bc-c6fa769ce2c7 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.540 226310 DEBUG nova.virt.libvirt.driver [None req-8f2d3b0d-3adf-4c19-90bc-c6fa769ce2c7 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] No VIF found with MAC fa:16:3e:fe:f0:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.558 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[a3963777-fe9f-4c1b-becf-d72ba51da8f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.561 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[72ae4849-7661-4282-b597-6003d01441ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.586 226310 DEBUG nova.virt.libvirt.guest [None req-8f2d3b0d-3adf-4c19-90bc-c6fa769ce2c7 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:08:13 np0005539564 nova_compute[226295]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:08:13 np0005539564 nova_compute[226295]:  <nova:name>tempest-device-tagging-server-1793423903</nova:name>
Nov 29 03:08:13 np0005539564 nova_compute[226295]:  <nova:creationTime>2025-11-29 08:08:13</nova:creationTime>
Nov 29 03:08:13 np0005539564 nova_compute[226295]:  <nova:flavor name="m1.nano">
Nov 29 03:08:13 np0005539564 nova_compute[226295]:    <nova:memory>128</nova:memory>
Nov 29 03:08:13 np0005539564 nova_compute[226295]:    <nova:disk>1</nova:disk>
Nov 29 03:08:13 np0005539564 nova_compute[226295]:    <nova:swap>0</nova:swap>
Nov 29 03:08:13 np0005539564 nova_compute[226295]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:08:13 np0005539564 nova_compute[226295]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:08:13 np0005539564 nova_compute[226295]:  </nova:flavor>
Nov 29 03:08:13 np0005539564 nova_compute[226295]:  <nova:owner>
Nov 29 03:08:13 np0005539564 nova_compute[226295]:    <nova:user uuid="20d37020e7484e3ead9c61a89db491b1">tempest-TaggedAttachmentsTest-392498407-project-member</nova:user>
Nov 29 03:08:13 np0005539564 nova_compute[226295]:    <nova:project uuid="a6104c57e0814f16958b14707debf843">tempest-TaggedAttachmentsTest-392498407</nova:project>
Nov 29 03:08:13 np0005539564 nova_compute[226295]:  </nova:owner>
Nov 29 03:08:13 np0005539564 nova_compute[226295]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:08:13 np0005539564 nova_compute[226295]:  <nova:ports>
Nov 29 03:08:13 np0005539564 nova_compute[226295]:    <nova:port uuid="7fa1d4da-9208-4220-ae2e-26aada9fc93b">
Nov 29 03:08:13 np0005539564 nova_compute[226295]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:08:13 np0005539564 nova_compute[226295]:    </nova:port>
Nov 29 03:08:13 np0005539564 nova_compute[226295]:    <nova:port uuid="96f4ddc5-c950-4eab-9f01-08954642669a">
Nov 29 03:08:13 np0005539564 nova_compute[226295]:      <nova:ip type="fixed" address="10.10.10.211" ipVersion="4"/>
Nov 29 03:08:13 np0005539564 nova_compute[226295]:    </nova:port>
Nov 29 03:08:13 np0005539564 nova_compute[226295]:  </nova:ports>
Nov 29 03:08:13 np0005539564 nova_compute[226295]: </nova:instance>
Nov 29 03:08:13 np0005539564 nova_compute[226295]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:08:13 np0005539564 NetworkManager[48997]: <info>  [1764403693.6020] device (taped4b55c0-40): carrier: link connected
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.611 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[84678982-7f58-4dd3-aba1-d01f0e3f9e04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.619 226310 DEBUG oslo_concurrency.lockutils [None req-8f2d3b0d-3adf-4c19-90bc-c6fa769ce2c7 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lock "interface-14addc5e-27d3-46d3-a93f-b22b3f400873-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.635 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[25ed2d6c-795d-4367-8574-09db622b37d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped4b55c0-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:d5:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666828, 'reachable_time': 22789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258423, 'error': None, 'target': 'ovnmeta-ed4b55c0-489a-4302-a51c-6bb91006ca7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.663 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[cd58e7eb-76a9-43f3-a78a-1c2498c5f70f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe23:d509'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 666828, 'tstamp': 666828}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258424, 'error': None, 'target': 'ovnmeta-ed4b55c0-489a-4302-a51c-6bb91006ca7e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:13.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.691 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[52318d61-8438-47f1-b2c8-e8350f438ad7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped4b55c0-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:d5:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666828, 'reachable_time': 22789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258425, 'error': None, 'target': 'ovnmeta-ed4b55c0-489a-4302-a51c-6bb91006ca7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.733 226310 DEBUG nova.compute.manager [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.736 226310 DEBUG nova.virt.libvirt.driver [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.737 226310 INFO nova.virt.libvirt.driver [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Creating image(s)#033[00m
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.739 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ebc39605-ff01-4a16-8c84-637976d7658b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.779 226310 DEBUG nova.storage.rbd_utils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 9b9952a8-61d7-410f-9f29-081ff912c4cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.825 226310 DEBUG nova.storage.rbd_utils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 9b9952a8-61d7-410f-9f29-081ff912c4cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.826 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ac38a8a6-67ce-4abc-a9bf-69f4076b108e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.827 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped4b55c0-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.827 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.828 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped4b55c0-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:13 np0005539564 NetworkManager[48997]: <info>  [1764403693.8309] manager: (taped4b55c0-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/154)
Nov 29 03:08:13 np0005539564 kernel: taped4b55c0-40: entered promiscuous mode
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.833 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped4b55c0-40, col_values=(('external_ids', {'iface-id': '78338d29-ec19-44d7-9ef3-b6d08be68c1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:13 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:13Z|00290|binding|INFO|Releasing lport 78338d29-ec19-44d7-9ef3-b6d08be68c1b from this chassis (sb_readonly=0)
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.855 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ed4b55c0-489a-4302-a51c-6bb91006ca7e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ed4b55c0-489a-4302-a51c-6bb91006ca7e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.856 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c386b301-e415-4f09-88e6-b2c4c8fa1b3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.857 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-ed4b55c0-489a-4302-a51c-6bb91006ca7e
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/ed4b55c0-489a-4302-a51c-6bb91006ca7e.pid.haproxy
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID ed4b55c0-489a-4302-a51c-6bb91006ca7e
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:08:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:13.857 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ed4b55c0-489a-4302-a51c-6bb91006ca7e', 'env', 'PROCESS_TAG=haproxy-ed4b55c0-489a-4302-a51c-6bb91006ca7e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ed4b55c0-489a-4302-a51c-6bb91006ca7e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.859 226310 DEBUG nova.storage.rbd_utils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 9b9952a8-61d7-410f-9f29-081ff912c4cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.864 226310 DEBUG oslo_concurrency.processutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.895 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.907 226310 DEBUG nova.policy [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5a7b61623f854cf59636f192ab8af005', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '750bde86c9c7473fbf7f0a6a3b16cec1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.950 226310 DEBUG oslo_concurrency.processutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.964 226310 DEBUG oslo_concurrency.lockutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.965 226310 DEBUG oslo_concurrency.lockutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:13 np0005539564 nova_compute[226295]: 2025-11-29 08:08:13.965 226310 DEBUG oslo_concurrency.lockutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:14 np0005539564 nova_compute[226295]: 2025-11-29 08:08:14.005 226310 DEBUG nova.storage.rbd_utils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 9b9952a8-61d7-410f-9f29-081ff912c4cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:08:14 np0005539564 nova_compute[226295]: 2025-11-29 08:08:14.010 226310 DEBUG oslo_concurrency.processutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 9b9952a8-61d7-410f-9f29-081ff912c4cb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:14 np0005539564 nova_compute[226295]: 2025-11-29 08:08:14.062 226310 DEBUG nova.compute.manager [req-4393d197-2a0a-4721-9437-62e00c952361 req-3d87849b-ba91-4283-a510-e3d9979eb1a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Received event network-vif-plugged-96f4ddc5-c950-4eab-9f01-08954642669a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:14 np0005539564 nova_compute[226295]: 2025-11-29 08:08:14.063 226310 DEBUG oslo_concurrency.lockutils [req-4393d197-2a0a-4721-9437-62e00c952361 req-3d87849b-ba91-4283-a510-e3d9979eb1a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:14 np0005539564 nova_compute[226295]: 2025-11-29 08:08:14.064 226310 DEBUG oslo_concurrency.lockutils [req-4393d197-2a0a-4721-9437-62e00c952361 req-3d87849b-ba91-4283-a510-e3d9979eb1a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:14 np0005539564 nova_compute[226295]: 2025-11-29 08:08:14.064 226310 DEBUG oslo_concurrency.lockutils [req-4393d197-2a0a-4721-9437-62e00c952361 req-3d87849b-ba91-4283-a510-e3d9979eb1a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:14 np0005539564 nova_compute[226295]: 2025-11-29 08:08:14.064 226310 DEBUG nova.compute.manager [req-4393d197-2a0a-4721-9437-62e00c952361 req-3d87849b-ba91-4283-a510-e3d9979eb1a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] No waiting events found dispatching network-vif-plugged-96f4ddc5-c950-4eab-9f01-08954642669a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:14 np0005539564 nova_compute[226295]: 2025-11-29 08:08:14.065 226310 WARNING nova.compute.manager [req-4393d197-2a0a-4721-9437-62e00c952361 req-3d87849b-ba91-4283-a510-e3d9979eb1a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Received unexpected event network-vif-plugged-96f4ddc5-c950-4eab-9f01-08954642669a for instance with vm_state active and task_state None.#033[00m
Nov 29 03:08:14 np0005539564 podman[258550]: 2025-11-29 08:08:14.297064482 +0000 UTC m=+0.095551148 container create 277953052b1ec48872047d988e9f65b8b68dc455f8d2941b97d859ed3d68189a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed4b55c0-489a-4302-a51c-6bb91006ca7e, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:08:14 np0005539564 podman[258550]: 2025-11-29 08:08:14.23102226 +0000 UTC m=+0.029508926 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:08:14 np0005539564 systemd[1]: Started libpod-conmon-277953052b1ec48872047d988e9f65b8b68dc455f8d2941b97d859ed3d68189a.scope.
Nov 29 03:08:14 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:08:14 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f4162dc9b87e90569e1be548b98e27dca22c2b7649521e69deb79ac838a3c50/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:08:14 np0005539564 podman[258550]: 2025-11-29 08:08:14.430870992 +0000 UTC m=+0.229357658 container init 277953052b1ec48872047d988e9f65b8b68dc455f8d2941b97d859ed3d68189a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed4b55c0-489a-4302-a51c-6bb91006ca7e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 03:08:14 np0005539564 podman[258550]: 2025-11-29 08:08:14.442042243 +0000 UTC m=+0.240528899 container start 277953052b1ec48872047d988e9f65b8b68dc455f8d2941b97d859ed3d68189a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed4b55c0-489a-4302-a51c-6bb91006ca7e, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:08:14 np0005539564 neutron-haproxy-ovnmeta-ed4b55c0-489a-4302-a51c-6bb91006ca7e[258566]: [NOTICE]   (258570) : New worker (258572) forked
Nov 29 03:08:14 np0005539564 neutron-haproxy-ovnmeta-ed4b55c0-489a-4302-a51c-6bb91006ca7e[258566]: [NOTICE]   (258570) : Loading success.
Nov 29 03:08:14 np0005539564 nova_compute[226295]: 2025-11-29 08:08:14.630 226310 DEBUG oslo_concurrency.processutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 9b9952a8-61d7-410f-9f29-081ff912c4cb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:14 np0005539564 nova_compute[226295]: 2025-11-29 08:08:14.751 226310 DEBUG nova.storage.rbd_utils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] resizing rbd image 9b9952a8-61d7-410f-9f29-081ff912c4cb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:08:14 np0005539564 nova_compute[226295]: 2025-11-29 08:08:14.895 226310 DEBUG nova.objects.instance [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b9952a8-61d7-410f-9f29-081ff912c4cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:14 np0005539564 nova_compute[226295]: 2025-11-29 08:08:14.957 226310 DEBUG nova.virt.libvirt.driver [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:08:14 np0005539564 nova_compute[226295]: 2025-11-29 08:08:14.958 226310 DEBUG nova.virt.libvirt.driver [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Ensure instance console log exists: /var/lib/nova/instances/9b9952a8-61d7-410f-9f29-081ff912c4cb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:08:14 np0005539564 nova_compute[226295]: 2025-11-29 08:08:14.959 226310 DEBUG oslo_concurrency.lockutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:14 np0005539564 nova_compute[226295]: 2025-11-29 08:08:14.959 226310 DEBUG oslo_concurrency.lockutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:14 np0005539564 nova_compute[226295]: 2025-11-29 08:08:14.959 226310 DEBUG oslo_concurrency.lockutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:15 np0005539564 nova_compute[226295]: 2025-11-29 08:08:15.298 226310 DEBUG oslo_concurrency.lockutils [None req-304e71b3-7e11-4978-8fdd-fb203e4066a4 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Acquiring lock "14addc5e-27d3-46d3-a93f-b22b3f400873" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:15 np0005539564 nova_compute[226295]: 2025-11-29 08:08:15.298 226310 DEBUG oslo_concurrency.lockutils [None req-304e71b3-7e11-4978-8fdd-fb203e4066a4 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:15 np0005539564 nova_compute[226295]: 2025-11-29 08:08:15.320 226310 DEBUG nova.objects.instance [None req-304e71b3-7e11-4978-8fdd-fb203e4066a4 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lazy-loading 'flavor' on Instance uuid 14addc5e-27d3-46d3-a93f-b22b3f400873 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:15.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:15 np0005539564 nova_compute[226295]: 2025-11-29 08:08:15.549 226310 DEBUG nova.network.neutron [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Successfully created port: ccf625f8-471d-4406-9844-a3872b34137c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:08:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:15.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:15 np0005539564 nova_compute[226295]: 2025-11-29 08:08:15.701 226310 DEBUG oslo_concurrency.lockutils [None req-304e71b3-7e11-4978-8fdd-fb203e4066a4 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:15 np0005539564 nova_compute[226295]: 2025-11-29 08:08:15.808 226310 DEBUG nova.network.neutron [req-8d1cc972-1199-4fc5-ac38-884ddf258573 req-d13e8ef2-559a-45b1-9477-69aa94a86799 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Updated VIF entry in instance network info cache for port 96f4ddc5-c950-4eab-9f01-08954642669a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:08:15 np0005539564 nova_compute[226295]: 2025-11-29 08:08:15.809 226310 DEBUG nova.network.neutron [req-8d1cc972-1199-4fc5-ac38-884ddf258573 req-d13e8ef2-559a-45b1-9477-69aa94a86799 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Updating instance_info_cache with network_info: [{"id": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "address": "fa:16:3e:fe:f0:45", "network": {"id": "f96ca160-f806-4467-a92a-7669548852b0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1120864816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa1d4da-92", "ovs_interfaceid": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "96f4ddc5-c950-4eab-9f01-08954642669a", "address": "fa:16:3e:5e:7d:3d", "network": {"id": "ed4b55c0-489a-4302-a51c-6bb91006ca7e", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1159441313", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f4ddc5-c9", "ovs_interfaceid": "96f4ddc5-c950-4eab-9f01-08954642669a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:08:15 np0005539564 nova_compute[226295]: 2025-11-29 08:08:15.890 226310 DEBUG oslo_concurrency.lockutils [req-8d1cc972-1199-4fc5-ac38-884ddf258573 req-d13e8ef2-559a-45b1-9477-69aa94a86799 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-14addc5e-27d3-46d3-a93f-b22b3f400873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.153 226310 DEBUG nova.compute.manager [req-9b3a4647-66bd-4ff7-87fd-69edb4af5f34 req-6acd9dee-dcee-4d5c-b2c9-d37fd0d0aeeb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Received event network-vif-plugged-96f4ddc5-c950-4eab-9f01-08954642669a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.154 226310 DEBUG oslo_concurrency.lockutils [req-9b3a4647-66bd-4ff7-87fd-69edb4af5f34 req-6acd9dee-dcee-4d5c-b2c9-d37fd0d0aeeb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.155 226310 DEBUG oslo_concurrency.lockutils [req-9b3a4647-66bd-4ff7-87fd-69edb4af5f34 req-6acd9dee-dcee-4d5c-b2c9-d37fd0d0aeeb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.155 226310 DEBUG oslo_concurrency.lockutils [req-9b3a4647-66bd-4ff7-87fd-69edb4af5f34 req-6acd9dee-dcee-4d5c-b2c9-d37fd0d0aeeb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.155 226310 DEBUG nova.compute.manager [req-9b3a4647-66bd-4ff7-87fd-69edb4af5f34 req-6acd9dee-dcee-4d5c-b2c9-d37fd0d0aeeb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] No waiting events found dispatching network-vif-plugged-96f4ddc5-c950-4eab-9f01-08954642669a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.156 226310 WARNING nova.compute.manager [req-9b3a4647-66bd-4ff7-87fd-69edb4af5f34 req-6acd9dee-dcee-4d5c-b2c9-d37fd0d0aeeb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Received unexpected event network-vif-plugged-96f4ddc5-c950-4eab-9f01-08954642669a for instance with vm_state active and task_state None.#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.158 226310 DEBUG oslo_concurrency.lockutils [None req-304e71b3-7e11-4978-8fdd-fb203e4066a4 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Acquiring lock "14addc5e-27d3-46d3-a93f-b22b3f400873" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.158 226310 DEBUG oslo_concurrency.lockutils [None req-304e71b3-7e11-4978-8fdd-fb203e4066a4 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.159 226310 INFO nova.compute.manager [None req-304e71b3-7e11-4978-8fdd-fb203e4066a4 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Attaching volume b58f9fa1-0337-49a9-a381-b6da4150ee36 to /dev/vdb#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.343 226310 DEBUG os_brick.utils [None req-304e71b3-7e11-4978-8fdd-fb203e4066a4 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.345 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.362 231810 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.363 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[ae1396b2-0b8a-48c0-b44e-1d4d2b96b8df]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.364 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.376 231810 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.377 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[998a61ea-ac09-4466-8dc0-1e84f0f2f58c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d3b4384eec44', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.378 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.391 231810 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.391 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[5872efe1-d2fa-4b14-8171-631517a3e1a7]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.393 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[cfee3bab-6a05-4e64-9b8d-e90fda3a5683]: (4, '2e858761-3292-4a17-b38f-a169c3064289') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.393 226310 DEBUG oslo_concurrency.processutils [None req-304e71b3-7e11-4978-8fdd-fb203e4066a4 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:16 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:16Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5e:7d:3d 10.10.10.211
Nov 29 03:08:16 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:16Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5e:7d:3d 10.10.10.211
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.440 226310 DEBUG oslo_concurrency.processutils [None req-304e71b3-7e11-4978-8fdd-fb203e4066a4 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] CMD "nvme version" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.443 226310 DEBUG os_brick.initiator.connectors.lightos [None req-304e71b3-7e11-4978-8fdd-fb203e4066a4 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.444 226310 DEBUG os_brick.initiator.connectors.lightos [None req-304e71b3-7e11-4978-8fdd-fb203e4066a4 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.444 226310 DEBUG os_brick.initiator.connectors.lightos [None req-304e71b3-7e11-4978-8fdd-fb203e4066a4 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.445 226310 DEBUG os_brick.utils [None req-304e71b3-7e11-4978-8fdd-fb203e4066a4 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] <== get_connector_properties: return (101ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d3b4384eec44', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2e858761-3292-4a17-b38f-a169c3064289', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.445 226310 DEBUG nova.virt.block_device [None req-304e71b3-7e11-4978-8fdd-fb203e4066a4 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Updating existing volume attachment record: 622644a8-9365-4e62-b9e4-af2519635d95 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.805 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:16 np0005539564 nova_compute[226295]: 2025-11-29 08:08:16.952 226310 DEBUG nova.network.neutron [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Successfully updated port: ccf625f8-471d-4406-9844-a3872b34137c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:08:17 np0005539564 nova_compute[226295]: 2025-11-29 08:08:17.000 226310 DEBUG oslo_concurrency.lockutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "refresh_cache-9b9952a8-61d7-410f-9f29-081ff912c4cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:08:17 np0005539564 nova_compute[226295]: 2025-11-29 08:08:17.000 226310 DEBUG oslo_concurrency.lockutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquired lock "refresh_cache-9b9952a8-61d7-410f-9f29-081ff912c4cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:08:17 np0005539564 nova_compute[226295]: 2025-11-29 08:08:17.001 226310 DEBUG nova.network.neutron [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:08:17 np0005539564 nova_compute[226295]: 2025-11-29 08:08:17.335 226310 DEBUG nova.network.neutron [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:08:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:08:17 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1936587442' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:08:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:17.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:17 np0005539564 nova_compute[226295]: 2025-11-29 08:08:17.569 226310 DEBUG nova.objects.instance [None req-304e71b3-7e11-4978-8fdd-fb203e4066a4 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lazy-loading 'flavor' on Instance uuid 14addc5e-27d3-46d3-a93f-b22b3f400873 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:17 np0005539564 nova_compute[226295]: 2025-11-29 08:08:17.639 226310 DEBUG nova.virt.libvirt.driver [None req-304e71b3-7e11-4978-8fdd-fb203e4066a4 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Attempting to attach volume b58f9fa1-0337-49a9-a381-b6da4150ee36 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:08:17 np0005539564 nova_compute[226295]: 2025-11-29 08:08:17.643 226310 DEBUG nova.virt.libvirt.guest [None req-304e71b3-7e11-4978-8fdd-fb203e4066a4 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:08:17 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:08:17 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-b58f9fa1-0337-49a9-a381-b6da4150ee36">
Nov 29 03:08:17 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:08:17 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:08:17 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:08:17 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:08:17 np0005539564 nova_compute[226295]:  <auth username="openstack">
Nov 29 03:08:17 np0005539564 nova_compute[226295]:    <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:08:17 np0005539564 nova_compute[226295]:  </auth>
Nov 29 03:08:17 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:08:17 np0005539564 nova_compute[226295]:  <serial>b58f9fa1-0337-49a9-a381-b6da4150ee36</serial>
Nov 29 03:08:17 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:08:17 np0005539564 nova_compute[226295]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:08:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:17.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:17 np0005539564 nova_compute[226295]: 2025-11-29 08:08:17.971 226310 DEBUG nova.virt.libvirt.driver [None req-304e71b3-7e11-4978-8fdd-fb203e4066a4 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:08:17 np0005539564 nova_compute[226295]: 2025-11-29 08:08:17.972 226310 DEBUG nova.virt.libvirt.driver [None req-304e71b3-7e11-4978-8fdd-fb203e4066a4 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:08:17 np0005539564 nova_compute[226295]: 2025-11-29 08:08:17.973 226310 DEBUG nova.virt.libvirt.driver [None req-304e71b3-7e11-4978-8fdd-fb203e4066a4 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] No VIF found with MAC fa:16:3e:fe:f0:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.024 226310 DEBUG nova.compute.manager [req-b1848c47-42ed-43bd-98d7-1c64c0c74563 req-fceb25a5-2753-421a-a730-82402ed6f596 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Received event network-changed-ccf625f8-471d-4406-9844-a3872b34137c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.025 226310 DEBUG nova.compute.manager [req-b1848c47-42ed-43bd-98d7-1c64c0c74563 req-fceb25a5-2753-421a-a730-82402ed6f596 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Refreshing instance network info cache due to event network-changed-ccf625f8-471d-4406-9844-a3872b34137c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.026 226310 DEBUG oslo_concurrency.lockutils [req-b1848c47-42ed-43bd-98d7-1c64c0c74563 req-fceb25a5-2753-421a-a730-82402ed6f596 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-9b9952a8-61d7-410f-9f29-081ff912c4cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.307 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.435 226310 DEBUG oslo_concurrency.lockutils [None req-304e71b3-7e11-4978-8fdd-fb203e4066a4 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.277s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:18.679 139890 DEBUG eventlet.wsgi.server [-] (139890) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Nov 29 03:08:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:18.681 139890 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0#015
Nov 29 03:08:18 np0005539564 ovn_metadata_agent[139775]: Accept: */*#015
Nov 29 03:08:18 np0005539564 ovn_metadata_agent[139775]: Connection: close#015
Nov 29 03:08:18 np0005539564 ovn_metadata_agent[139775]: Content-Type: text/plain#015
Nov 29 03:08:18 np0005539564 ovn_metadata_agent[139775]: Host: 169.254.169.254#015
Nov 29 03:08:18 np0005539564 ovn_metadata_agent[139775]: User-Agent: curl/7.84.0#015
Nov 29 03:08:18 np0005539564 ovn_metadata_agent[139775]: X-Forwarded-For: 10.100.0.6#015
Nov 29 03:08:18 np0005539564 ovn_metadata_agent[139775]: X-Ovn-Network-Id: f96ca160-f806-4467-a92a-7669548852b0 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.810 226310 DEBUG nova.network.neutron [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Updating instance_info_cache with network_info: [{"id": "ccf625f8-471d-4406-9844-a3872b34137c", "address": "fa:16:3e:89:02:a4", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf625f8-47", "ovs_interfaceid": "ccf625f8-471d-4406-9844-a3872b34137c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.842 226310 DEBUG oslo_concurrency.lockutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Releasing lock "refresh_cache-9b9952a8-61d7-410f-9f29-081ff912c4cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.843 226310 DEBUG nova.compute.manager [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Instance network_info: |[{"id": "ccf625f8-471d-4406-9844-a3872b34137c", "address": "fa:16:3e:89:02:a4", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf625f8-47", "ovs_interfaceid": "ccf625f8-471d-4406-9844-a3872b34137c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.844 226310 DEBUG oslo_concurrency.lockutils [req-b1848c47-42ed-43bd-98d7-1c64c0c74563 req-fceb25a5-2753-421a-a730-82402ed6f596 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-9b9952a8-61d7-410f-9f29-081ff912c4cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.844 226310 DEBUG nova.network.neutron [req-b1848c47-42ed-43bd-98d7-1c64c0c74563 req-fceb25a5-2753-421a-a730-82402ed6f596 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Refreshing network info cache for port ccf625f8-471d-4406-9844-a3872b34137c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.848 226310 DEBUG nova.virt.libvirt.driver [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Start _get_guest_xml network_info=[{"id": "ccf625f8-471d-4406-9844-a3872b34137c", "address": "fa:16:3e:89:02:a4", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf625f8-47", "ovs_interfaceid": "ccf625f8-471d-4406-9844-a3872b34137c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.854 226310 WARNING nova.virt.libvirt.driver [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.859 226310 DEBUG nova.virt.libvirt.host [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.859 226310 DEBUG nova.virt.libvirt.host [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.863 226310 DEBUG nova.virt.libvirt.host [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.864 226310 DEBUG nova.virt.libvirt.host [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.867 226310 DEBUG nova.virt.libvirt.driver [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.867 226310 DEBUG nova.virt.hardware [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.869 226310 DEBUG nova.virt.hardware [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.869 226310 DEBUG nova.virt.hardware [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.870 226310 DEBUG nova.virt.hardware [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.871 226310 DEBUG nova.virt.hardware [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.871 226310 DEBUG nova.virt.hardware [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.872 226310 DEBUG nova.virt.hardware [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.873 226310 DEBUG nova.virt.hardware [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.873 226310 DEBUG nova.virt.hardware [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.874 226310 DEBUG nova.virt.hardware [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.874 226310 DEBUG nova.virt.hardware [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:08:18 np0005539564 nova_compute[226295]: 2025-11-29 08:08:18.880 226310 DEBUG oslo_concurrency.processutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:19.255 139890 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Nov 29 03:08:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:19.256 139890 INFO eventlet.wsgi.server [-] 10.100.0.6,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 1916 time: 0.5748489#033[00m
Nov 29 03:08:19 np0005539564 haproxy-metadata-proxy-f96ca160-f806-4467-a92a-7669548852b0[257668]: 10.100.0.6:54676 [29/Nov/2025:08:08:18.677] listener listener/metadata 0/0/0/578/578 200 1900 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Nov 29 03:08:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:08:19 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/499904687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.319 226310 DEBUG oslo_concurrency.processutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.366 226310 DEBUG nova.storage.rbd_utils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 9b9952a8-61d7-410f-9f29-081ff912c4cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.374 226310 DEBUG oslo_concurrency.processutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:08:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:19.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:08:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:19.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.740 226310 DEBUG oslo_concurrency.lockutils [None req-e485ad4d-6508-47ba-9206-d6ed6f93c43b 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Acquiring lock "14addc5e-27d3-46d3-a93f-b22b3f400873" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.741 226310 DEBUG oslo_concurrency.lockutils [None req-e485ad4d-6508-47ba-9206-d6ed6f93c43b 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.758 226310 INFO nova.compute.manager [None req-e485ad4d-6508-47ba-9206-d6ed6f93c43b 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Detaching volume b58f9fa1-0337-49a9-a381-b6da4150ee36#033[00m
Nov 29 03:08:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:08:19 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1066852043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.844 226310 DEBUG oslo_concurrency.processutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.846 226310 DEBUG nova.virt.libvirt.vif [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:08:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1617536379',display_name='tempest-ServerDiskConfigTestJSON-server-1617536379',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1617536379',id=88,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-indog4zv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDi
skConfigTestJSON-904422786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:08:13Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=9b9952a8-61d7-410f-9f29-081ff912c4cb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ccf625f8-471d-4406-9844-a3872b34137c", "address": "fa:16:3e:89:02:a4", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf625f8-47", "ovs_interfaceid": "ccf625f8-471d-4406-9844-a3872b34137c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.847 226310 DEBUG nova.network.os_vif_util [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "ccf625f8-471d-4406-9844-a3872b34137c", "address": "fa:16:3e:89:02:a4", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf625f8-47", "ovs_interfaceid": "ccf625f8-471d-4406-9844-a3872b34137c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.848 226310 DEBUG nova.network.os_vif_util [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:02:a4,bridge_name='br-int',has_traffic_filtering=True,id=ccf625f8-471d-4406-9844-a3872b34137c,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf625f8-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.850 226310 DEBUG nova.objects.instance [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b9952a8-61d7-410f-9f29-081ff912c4cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.866 226310 DEBUG nova.virt.libvirt.driver [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:08:19 np0005539564 nova_compute[226295]:  <uuid>9b9952a8-61d7-410f-9f29-081ff912c4cb</uuid>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:  <name>instance-00000058</name>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1617536379</nova:name>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:08:18</nova:creationTime>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:08:19 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:        <nova:user uuid="5a7b61623f854cf59636f192ab8af005">tempest-ServerDiskConfigTestJSON-904422786-project-member</nova:user>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:        <nova:project uuid="750bde86c9c7473fbf7f0a6a3b16cec1">tempest-ServerDiskConfigTestJSON-904422786</nova:project>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:        <nova:port uuid="ccf625f8-471d-4406-9844-a3872b34137c">
Nov 29 03:08:19 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <entry name="serial">9b9952a8-61d7-410f-9f29-081ff912c4cb</entry>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <entry name="uuid">9b9952a8-61d7-410f-9f29-081ff912c4cb</entry>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/9b9952a8-61d7-410f-9f29-081ff912c4cb_disk">
Nov 29 03:08:19 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:08:19 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/9b9952a8-61d7-410f-9f29-081ff912c4cb_disk.config">
Nov 29 03:08:19 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:08:19 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:89:02:a4"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <target dev="tapccf625f8-47"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/9b9952a8-61d7-410f-9f29-081ff912c4cb/console.log" append="off"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:08:19 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:08:19 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:08:19 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.868 226310 DEBUG nova.compute.manager [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Preparing to wait for external event network-vif-plugged-ccf625f8-471d-4406-9844-a3872b34137c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.868 226310 DEBUG oslo_concurrency.lockutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.869 226310 DEBUG oslo_concurrency.lockutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.869 226310 DEBUG oslo_concurrency.lockutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.870 226310 DEBUG nova.virt.libvirt.vif [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:08:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1617536379',display_name='tempest-ServerDiskConfigTestJSON-server-1617536379',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1617536379',id=88,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-indog4zv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDiskConfigTestJSON-904422786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:08:13Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=9b9952a8-61d7-410f-9f29-081ff912c4cb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ccf625f8-471d-4406-9844-a3872b34137c", "address": "fa:16:3e:89:02:a4", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf625f8-47", "ovs_interfaceid": "ccf625f8-471d-4406-9844-a3872b34137c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.871 226310 DEBUG nova.network.os_vif_util [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "ccf625f8-471d-4406-9844-a3872b34137c", "address": "fa:16:3e:89:02:a4", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf625f8-47", "ovs_interfaceid": "ccf625f8-471d-4406-9844-a3872b34137c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.872 226310 DEBUG nova.network.os_vif_util [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:02:a4,bridge_name='br-int',has_traffic_filtering=True,id=ccf625f8-471d-4406-9844-a3872b34137c,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf625f8-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.873 226310 DEBUG os_vif [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:02:a4,bridge_name='br-int',has_traffic_filtering=True,id=ccf625f8-471d-4406-9844-a3872b34137c,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf625f8-47') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.874 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.874 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.875 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.879 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.879 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapccf625f8-47, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:19 np0005539564 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.880 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapccf625f8-47, col_values=(('external_ids', {'iface-id': 'ccf625f8-471d-4406-9844-a3872b34137c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:02:a4', 'vm-uuid': '9b9952a8-61d7-410f-9f29-081ff912c4cb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.882 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:19 np0005539564 NetworkManager[48997]: <info>  [1764403699.8845] manager: (tapccf625f8-47): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/155)
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.886 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.894 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.896 226310 INFO os_vif [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:02:a4,bridge_name='br-int',has_traffic_filtering=True,id=ccf625f8-471d-4406-9844-a3872b34137c,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf625f8-47')#033[00m
Nov 29 03:08:19 np0005539564 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.962 226310 INFO nova.virt.block_device [None req-e485ad4d-6508-47ba-9206-d6ed6f93c43b 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Attempting to driver detach volume b58f9fa1-0337-49a9-a381-b6da4150ee36 from mountpoint /dev/vdb#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.978 226310 DEBUG nova.virt.libvirt.driver [None req-e485ad4d-6508-47ba-9206-d6ed6f93c43b 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Attempting to detach device vdb from instance 14addc5e-27d3-46d3-a93f-b22b3f400873 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.979 226310 DEBUG nova.virt.libvirt.guest [None req-e485ad4d-6508-47ba-9206-d6ed6f93c43b 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:08:19 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-b58f9fa1-0337-49a9-a381-b6da4150ee36">
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:  <serial>b58f9fa1-0337-49a9-a381-b6da4150ee36</serial>
Nov 29 03:08:19 np0005539564 nova_compute[226295]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Nov 29 03:08:19 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:08:19 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.981 226310 DEBUG nova.virt.libvirt.driver [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.981 226310 DEBUG nova.virt.libvirt.driver [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.982 226310 DEBUG nova.virt.libvirt.driver [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] No VIF found with MAC fa:16:3e:89:02:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:08:19 np0005539564 nova_compute[226295]: 2025-11-29 08:08:19.983 226310 INFO nova.virt.libvirt.driver [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Using config drive#033[00m
Nov 29 03:08:20 np0005539564 nova_compute[226295]: 2025-11-29 08:08:20.023 226310 DEBUG nova.storage.rbd_utils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 9b9952a8-61d7-410f-9f29-081ff912c4cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:08:20 np0005539564 nova_compute[226295]: 2025-11-29 08:08:20.034 226310 INFO nova.virt.libvirt.driver [None req-e485ad4d-6508-47ba-9206-d6ed6f93c43b 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Successfully detached device vdb from instance 14addc5e-27d3-46d3-a93f-b22b3f400873 from the persistent domain config.#033[00m
Nov 29 03:08:20 np0005539564 nova_compute[226295]: 2025-11-29 08:08:20.035 226310 DEBUG nova.virt.libvirt.driver [None req-e485ad4d-6508-47ba-9206-d6ed6f93c43b 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 14addc5e-27d3-46d3-a93f-b22b3f400873 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:08:20 np0005539564 nova_compute[226295]: 2025-11-29 08:08:20.035 226310 DEBUG nova.virt.libvirt.guest [None req-e485ad4d-6508-47ba-9206-d6ed6f93c43b 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:08:20 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:08:20 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-b58f9fa1-0337-49a9-a381-b6da4150ee36">
Nov 29 03:08:20 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:08:20 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:08:20 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:08:20 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:08:20 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:08:20 np0005539564 nova_compute[226295]:  <serial>b58f9fa1-0337-49a9-a381-b6da4150ee36</serial>
Nov 29 03:08:20 np0005539564 nova_compute[226295]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Nov 29 03:08:20 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:08:20 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:08:20 np0005539564 nova_compute[226295]: 2025-11-29 08:08:20.162 226310 DEBUG nova.virt.libvirt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Received event <DeviceRemovedEvent: 1764403700.1623912, 14addc5e-27d3-46d3-a93f-b22b3f400873 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:08:20 np0005539564 nova_compute[226295]: 2025-11-29 08:08:20.164 226310 DEBUG nova.virt.libvirt.driver [None req-e485ad4d-6508-47ba-9206-d6ed6f93c43b 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 14addc5e-27d3-46d3-a93f-b22b3f400873 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:08:20 np0005539564 nova_compute[226295]: 2025-11-29 08:08:20.167 226310 INFO nova.virt.libvirt.driver [None req-e485ad4d-6508-47ba-9206-d6ed6f93c43b 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Successfully detached device vdb from instance 14addc5e-27d3-46d3-a93f-b22b3f400873 from the live domain config.#033[00m
Nov 29 03:08:20 np0005539564 nova_compute[226295]: 2025-11-29 08:08:20.517 226310 DEBUG nova.objects.instance [None req-e485ad4d-6508-47ba-9206-d6ed6f93c43b 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lazy-loading 'flavor' on Instance uuid 14addc5e-27d3-46d3-a93f-b22b3f400873 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:20 np0005539564 nova_compute[226295]: 2025-11-29 08:08:20.603 226310 DEBUG oslo_concurrency.lockutils [None req-e485ad4d-6508-47ba-9206-d6ed6f93c43b 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:20 np0005539564 nova_compute[226295]: 2025-11-29 08:08:20.724 226310 INFO nova.virt.libvirt.driver [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Creating config drive at /var/lib/nova/instances/9b9952a8-61d7-410f-9f29-081ff912c4cb/disk.config#033[00m
Nov 29 03:08:20 np0005539564 nova_compute[226295]: 2025-11-29 08:08:20.733 226310 DEBUG oslo_concurrency.processutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b9952a8-61d7-410f-9f29-081ff912c4cb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdkca0b4x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:20 np0005539564 nova_compute[226295]: 2025-11-29 08:08:20.852 226310 DEBUG nova.network.neutron [req-b1848c47-42ed-43bd-98d7-1c64c0c74563 req-fceb25a5-2753-421a-a730-82402ed6f596 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Updated VIF entry in instance network info cache for port ccf625f8-471d-4406-9844-a3872b34137c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:08:20 np0005539564 nova_compute[226295]: 2025-11-29 08:08:20.853 226310 DEBUG nova.network.neutron [req-b1848c47-42ed-43bd-98d7-1c64c0c74563 req-fceb25a5-2753-421a-a730-82402ed6f596 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Updating instance_info_cache with network_info: [{"id": "ccf625f8-471d-4406-9844-a3872b34137c", "address": "fa:16:3e:89:02:a4", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf625f8-47", "ovs_interfaceid": "ccf625f8-471d-4406-9844-a3872b34137c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:08:20 np0005539564 nova_compute[226295]: 2025-11-29 08:08:20.875 226310 DEBUG oslo_concurrency.lockutils [req-b1848c47-42ed-43bd-98d7-1c64c0c74563 req-fceb25a5-2753-421a-a730-82402ed6f596 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-9b9952a8-61d7-410f-9f29-081ff912c4cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:08:20 np0005539564 nova_compute[226295]: 2025-11-29 08:08:20.884 226310 DEBUG oslo_concurrency.processutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b9952a8-61d7-410f-9f29-081ff912c4cb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdkca0b4x" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:20 np0005539564 nova_compute[226295]: 2025-11-29 08:08:20.919 226310 DEBUG nova.storage.rbd_utils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 9b9952a8-61d7-410f-9f29-081ff912c4cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:08:20 np0005539564 nova_compute[226295]: 2025-11-29 08:08:20.924 226310 DEBUG oslo_concurrency.processutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b9952a8-61d7-410f-9f29-081ff912c4cb/disk.config 9b9952a8-61d7-410f-9f29-081ff912c4cb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.118 226310 DEBUG oslo_concurrency.processutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b9952a8-61d7-410f-9f29-081ff912c4cb/disk.config 9b9952a8-61d7-410f-9f29-081ff912c4cb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.119 226310 INFO nova.virt.libvirt.driver [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Deleting local config drive /var/lib/nova/instances/9b9952a8-61d7-410f-9f29-081ff912c4cb/disk.config because it was imported into RBD.#033[00m
Nov 29 03:08:21 np0005539564 virtqemud[225880]: End of file while reading data: Input/output error
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.159 226310 DEBUG oslo_concurrency.lockutils [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Acquiring lock "interface-14addc5e-27d3-46d3-a93f-b22b3f400873-96f4ddc5-c950-4eab-9f01-08954642669a" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.159 226310 DEBUG oslo_concurrency.lockutils [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lock "interface-14addc5e-27d3-46d3-a93f-b22b3f400873-96f4ddc5-c950-4eab-9f01-08954642669a" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.179 226310 DEBUG nova.objects.instance [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lazy-loading 'flavor' on Instance uuid 14addc5e-27d3-46d3-a93f-b22b3f400873 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.200 226310 DEBUG nova.virt.libvirt.vif [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:07:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=InstanceDeviceMetadata,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1793423903',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1793423903',id=86,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKbPhmRzTkQm2LL7llHwXmde9U0uI+rBMNbooDLe9TK8uFGu2MWDNNFlxdqcFGpEXOxxGpH+eZsXb5EWj4gaq3WgxVHUcO3ffL3yywL7QaK4fsxNbY1WerF6n4fC7HmUGg==',key_name='tempest-keypair-921015683',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:45Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a6104c57e0814f16958b14707debf843',ramdisk_id='',reservation_id='r-fjo7fgu9',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-392498407',owner_user_name='tempest-TaggedAttachmentsTest-392498407-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:07:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='20d37020e7484e3ead9c61a89db491b1',uuid=14addc5e-27d3-46d3-a93f-b22b3f400873,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "96f4ddc5-c950-4eab-9f01-08954642669a", "address": "fa:16:3e:5e:7d:3d", "network": {"id": "ed4b55c0-489a-4302-a51c-6bb91006ca7e", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1159441313", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": 
{"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f4ddc5-c9", "ovs_interfaceid": "96f4ddc5-c950-4eab-9f01-08954642669a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.200 226310 DEBUG nova.network.os_vif_util [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Converting VIF {"id": "96f4ddc5-c950-4eab-9f01-08954642669a", "address": "fa:16:3e:5e:7d:3d", "network": {"id": "ed4b55c0-489a-4302-a51c-6bb91006ca7e", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1159441313", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f4ddc5-c9", "ovs_interfaceid": "96f4ddc5-c950-4eab-9f01-08954642669a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.201 226310 DEBUG nova.network.os_vif_util [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:7d:3d,bridge_name='br-int',has_traffic_filtering=True,id=96f4ddc5-c950-4eab-9f01-08954642669a,network=Network(ed4b55c0-489a-4302-a51c-6bb91006ca7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96f4ddc5-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:08:21 np0005539564 kernel: tapccf625f8-47: entered promiscuous mode
Nov 29 03:08:21 np0005539564 NetworkManager[48997]: <info>  [1764403701.2077] manager: (tapccf625f8-47): new Tun device (/org/freedesktop/NetworkManager/Devices/156)
Nov 29 03:08:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:21Z|00291|binding|INFO|Claiming lport ccf625f8-471d-4406-9844-a3872b34137c for this chassis.
Nov 29 03:08:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:21Z|00292|binding|INFO|ccf625f8-471d-4406-9844-a3872b34137c: Claiming fa:16:3e:89:02:a4 10.100.0.9
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.210 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.215 226310 DEBUG nova.virt.libvirt.guest [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:5e:7d:3d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap96f4ddc5-c9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.219 226310 DEBUG nova.virt.libvirt.guest [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:5e:7d:3d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap96f4ddc5-c9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.223 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:02:a4 10.100.0.9'], port_security=['fa:16:3e:89:02:a4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9b9952a8-61d7-410f-9f29-081ff912c4cb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8665acc6-1650-4878-8ffd-84f079f13741', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '750bde86c9c7473fbf7f0a6a3b16cec1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8b143d91-a9e2-433e-a887-8851c4d95ae6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14735bae-f089-4bfd-bad1-f5ab455915a0, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=ccf625f8-471d-4406-9844-a3872b34137c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.223 226310 DEBUG nova.virt.libvirt.driver [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Attempting to detach device tap96f4ddc5-c9 from instance 14addc5e-27d3-46d3-a93f-b22b3f400873 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.224 226310 DEBUG nova.virt.libvirt.guest [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] detach device xml: <interface type="ethernet">
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <mac address="fa:16:3e:5e:7d:3d"/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <model type="virtio"/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <mtu size="1442"/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <target dev="tap96f4ddc5-c9"/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]: </interface>
Nov 29 03:08:21 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.226 139780 INFO neutron.agent.ovn.metadata.agent [-] Port ccf625f8-471d-4406-9844-a3872b34137c in datapath 8665acc6-1650-4878-8ffd-84f079f13741 bound to our chassis#033[00m
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.229 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8665acc6-1650-4878-8ffd-84f079f13741#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.237 226310 DEBUG nova.virt.libvirt.guest [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:5e:7d:3d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap96f4ddc5-c9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:08:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:21Z|00293|binding|INFO|Setting lport ccf625f8-471d-4406-9844-a3872b34137c ovn-installed in OVS
Nov 29 03:08:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:21Z|00294|binding|INFO|Setting lport ccf625f8-471d-4406-9844-a3872b34137c up in Southbound
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.243 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.244 226310 DEBUG nova.virt.libvirt.guest [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:5e:7d:3d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap96f4ddc5-c9"/></interface>not found in domain: <domain type='kvm' id='36'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <name>instance-00000056</name>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <uuid>14addc5e-27d3-46d3-a93f-b22b3f400873</uuid>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <nova:name>tempest-device-tagging-server-1793423903</nova:name>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <nova:creationTime>2025-11-29 08:08:13</nova:creationTime>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <nova:flavor name="m1.nano">
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:memory>128</nova:memory>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:disk>1</nova:disk>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:swap>0</nova:swap>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </nova:flavor>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <nova:owner>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:user uuid="20d37020e7484e3ead9c61a89db491b1">tempest-TaggedAttachmentsTest-392498407-project-member</nova:user>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:project uuid="a6104c57e0814f16958b14707debf843">tempest-TaggedAttachmentsTest-392498407</nova:project>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </nova:owner>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <nova:ports>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:port uuid="7fa1d4da-9208-4220-ae2e-26aada9fc93b">
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </nova:port>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:port uuid="96f4ddc5-c950-4eab-9f01-08954642669a">
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <nova:ip type="fixed" address="10.10.10.211" ipVersion="4"/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </nova:port>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </nova:ports>
Nov 29 03:08:21 np0005539564 nova_compute[226295]: </nova:instance>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <memory unit='KiB'>131072</memory>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <resource>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <partition>/machine</partition>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </resource>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <sysinfo type='smbios'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <entry name='serial'>14addc5e-27d3-46d3-a93f-b22b3f400873</entry>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <entry name='uuid'>14addc5e-27d3-46d3-a93f-b22b3f400873</entry>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <boot dev='hd'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <smbios mode='sysinfo'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <vmcoreinfo state='on'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <feature policy='require' name='x2apic'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <feature policy='require' name='vme'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <clock offset='utc'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <timer name='hpet' present='no'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <on_reboot>restart</on_reboot>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <on_crash>destroy</on_crash>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <disk type='network' device='disk'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <auth username='openstack'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <source protocol='rbd' name='vms/14addc5e-27d3-46d3-a93f-b22b3f400873_disk' index='2'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target dev='vda' bus='virtio'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='virtio-disk0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <disk type='network' device='cdrom'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <auth username='openstack'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <source protocol='rbd' name='vms/14addc5e-27d3-46d3-a93f-b22b3f400873_disk.config' index='1'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target dev='sda' bus='sata'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <readonly/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='sata0-0-0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pcie.0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='1' port='0x10'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.1'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='2' port='0x11'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.2'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='3' port='0x12'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.3'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='4' port='0x13'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.4'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='5' port='0x14'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.5'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='6' port='0x15'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.6'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='7' port='0x16'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.7'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='8' port='0x17'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.8'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='9' port='0x18'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.9'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='10' port='0x19'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.10'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='11' port='0x1a'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.11'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='12' port='0x1b'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.12'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='13' port='0x1c'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.13'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='14' port='0x1d'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.14'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='15' port='0x1e'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.15'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='16' port='0x1f'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.16'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='17' port='0x20'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.17'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='18' port='0x21'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.18'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='19' port='0x22'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.19'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='20' port='0x23'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.20'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='21' port='0x24'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.21'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='22' port='0x25'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.22'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='23' port='0x26'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.23'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='24' port='0x27'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.24'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='25' port='0x28'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.25'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-pci-bridge'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.26'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='usb'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='sata' index='0'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='ide'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <interface type='ethernet'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <mac address='fa:16:3e:fe:f0:45'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target dev='tap7fa1d4da-92'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model type='virtio'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <mtu size='1442'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='net0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <interface type='ethernet'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <mac address='fa:16:3e:5e:7d:3d'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target dev='tap96f4ddc5-c9'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model type='virtio'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <mtu size='1442'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='net1'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <serial type='pty'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <source path='/dev/pts/0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <log file='/var/lib/nova/instances/14addc5e-27d3-46d3-a93f-b22b3f400873/console.log' append='off'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target type='isa-serial' port='0'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:        <model name='isa-serial'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      </target>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='serial0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <console type='pty' tty='/dev/pts/0'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <source path='/dev/pts/0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <log file='/var/lib/nova/instances/14addc5e-27d3-46d3-a93f-b22b3f400873/console.log' append='off'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target type='serial' port='0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='serial0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </console>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <input type='tablet' bus='usb'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='input0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </input>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <input type='mouse' bus='ps2'>
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.244 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9aaf7777-adf7-4d0b-aa95-70924512eef5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='input1'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </input>
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.245 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8665acc6-11 in ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <input type='keyboard' bus='ps2'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='input2'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </input>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <listen type='address' address='::0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </graphics>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <audio id='1' type='none'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='video0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <watchdog model='itco' action='reset'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='watchdog0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </watchdog>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <memballoon model='virtio'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <stats period='10'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='balloon0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <rng model='virtio'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='rng0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <label>system_u:system_r:svirt_t:s0:c109,c159</label>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c109,c159</imagelabel>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </seclabel>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <label>+107:+107</label>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </seclabel>
Nov 29 03:08:21 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:08:21 np0005539564 nova_compute[226295]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.245 226310 INFO nova.virt.libvirt.driver [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Successfully detached device tap96f4ddc5-c9 from instance 14addc5e-27d3-46d3-a93f-b22b3f400873 from the persistent domain config.#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.245 226310 DEBUG nova.virt.libvirt.driver [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] (1/8): Attempting to detach device tap96f4ddc5-c9 with device alias net1 from instance 14addc5e-27d3-46d3-a93f-b22b3f400873 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.245 226310 DEBUG nova.virt.libvirt.guest [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] detach device xml: <interface type="ethernet">
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <mac address="fa:16:3e:5e:7d:3d"/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <model type="virtio"/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <mtu size="1442"/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <target dev="tap96f4ddc5-c9"/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]: </interface>
Nov 29 03:08:21 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.248 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:21 np0005539564 systemd-udevd[258818]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.251 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8665acc6-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.252 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[eff3936d-fffc-4f9b-8ffc-d3bff19d5cbc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.253 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2e1a6f10-dd3f-43ef-98d1-9ed482ae9da1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:21 np0005539564 systemd-machined[190128]: New machine qemu-38-instance-00000058.
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.271 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[b3169f82-c79e-4672-8de8-3aa50877694b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:21 np0005539564 NetworkManager[48997]: <info>  [1764403701.2740] device (tapccf625f8-47): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:08:21 np0005539564 NetworkManager[48997]: <info>  [1764403701.2763] device (tapccf625f8-47): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:08:21 np0005539564 systemd[1]: Started Virtual Machine qemu-38-instance-00000058.
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.290 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1a9f69d1-d1d5-4c15-a695-ae7eb6fb821e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:21 np0005539564 kernel: tap96f4ddc5-c9 (unregistering): left promiscuous mode
Nov 29 03:08:21 np0005539564 NetworkManager[48997]: <info>  [1764403701.3131] device (tap96f4ddc5-c9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:08:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:21Z|00295|binding|INFO|Releasing lport 96f4ddc5-c950-4eab-9f01-08954642669a from this chassis (sb_readonly=0)
Nov 29 03:08:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:21Z|00296|binding|INFO|Setting lport 96f4ddc5-c950-4eab-9f01-08954642669a down in Southbound
Nov 29 03:08:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:21Z|00297|binding|INFO|Removing iface tap96f4ddc5-c9 ovn-installed in OVS
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.323 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.325 226310 DEBUG nova.virt.libvirt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Received event <DeviceRemovedEvent: 1764403701.3255043, 14addc5e-27d3-46d3-a93f-b22b3f400873 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.330 226310 DEBUG nova.virt.libvirt.driver [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Start waiting for the detach event from libvirt for device tap96f4ddc5-c9 with device alias net1 for instance 14addc5e-27d3-46d3-a93f-b22b3f400873 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.331 226310 DEBUG nova.virt.libvirt.guest [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:5e:7d:3d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap96f4ddc5-c9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.330 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[10be2a83-6194-4181-bb7b-b76bdf8547d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.334 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:7d:3d 10.10.10.211'], port_security=['fa:16:3e:5e:7d:3d 10.10.10.211'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.10.10.211/24', 'neutron:device_id': '14addc5e-27d3-46d3-a93f-b22b3f400873', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed4b55c0-489a-4302-a51c-6bb91006ca7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6104c57e0814f16958b14707debf843', 'neutron:revision_number': '4', 'neutron:security_group_ids': '503a0486-5a06-4485-aa0f-cf204d1283d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5369234-207b-4b87-81b4-c4d521bafb17, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=96f4ddc5-c950-4eab-9f01-08954642669a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.336 226310 DEBUG nova.virt.libvirt.guest [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:5e:7d:3d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap96f4ddc5-c9"/></interface>not found in domain: <domain type='kvm' id='36'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <name>instance-00000056</name>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <uuid>14addc5e-27d3-46d3-a93f-b22b3f400873</uuid>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <nova:name>tempest-device-tagging-server-1793423903</nova:name>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <nova:creationTime>2025-11-29 08:08:13</nova:creationTime>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <nova:flavor name="m1.nano">
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:memory>128</nova:memory>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:disk>1</nova:disk>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:swap>0</nova:swap>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </nova:flavor>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <nova:owner>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:user uuid="20d37020e7484e3ead9c61a89db491b1">tempest-TaggedAttachmentsTest-392498407-project-member</nova:user>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:project uuid="a6104c57e0814f16958b14707debf843">tempest-TaggedAttachmentsTest-392498407</nova:project>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </nova:owner>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <nova:ports>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:port uuid="7fa1d4da-9208-4220-ae2e-26aada9fc93b">
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </nova:port>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:port uuid="96f4ddc5-c950-4eab-9f01-08954642669a">
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <nova:ip type="fixed" address="10.10.10.211" ipVersion="4"/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </nova:port>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </nova:ports>
Nov 29 03:08:21 np0005539564 nova_compute[226295]: </nova:instance>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <memory unit='KiB'>131072</memory>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <resource>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <partition>/machine</partition>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </resource>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <sysinfo type='smbios'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <entry name='serial'>14addc5e-27d3-46d3-a93f-b22b3f400873</entry>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <entry name='uuid'>14addc5e-27d3-46d3-a93f-b22b3f400873</entry>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <boot dev='hd'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <smbios mode='sysinfo'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <vmcoreinfo state='on'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <feature policy='require' name='x2apic'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <feature policy='require' name='vme'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <clock offset='utc'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <timer name='hpet' present='no'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <on_reboot>restart</on_reboot>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <on_crash>destroy</on_crash>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <disk type='network' device='disk'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <auth username='openstack'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <source protocol='rbd' name='vms/14addc5e-27d3-46d3-a93f-b22b3f400873_disk' index='2'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target dev='vda' bus='virtio'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='virtio-disk0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <disk type='network' device='cdrom'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <auth username='openstack'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <source protocol='rbd' name='vms/14addc5e-27d3-46d3-a93f-b22b3f400873_disk.config' index='1'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target dev='sda' bus='sata'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <readonly/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='sata0-0-0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pcie.0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='1' port='0x10'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.1'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='2' port='0x11'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.2'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='3' port='0x12'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.3'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='4' port='0x13'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.4'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='5' port='0x14'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.5'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='6' port='0x15'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.6'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='7' port='0x16'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.7'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='8' port='0x17'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.8'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='9' port='0x18'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.9'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='10' port='0x19'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.10'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='11' port='0x1a'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.11'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='12' port='0x1b'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.12'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='13' port='0x1c'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.13'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='14' port='0x1d'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.14'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='15' port='0x1e'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.15'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='16' port='0x1f'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.16'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='17' port='0x20'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.17'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='18' port='0x21'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.18'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='19' port='0x22'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.19'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='20' port='0x23'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.20'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='21' port='0x24'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.21'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='22' port='0x25'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.22'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='23' port='0x26'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.23'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='24' port='0x27'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.24'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target chassis='25' port='0x28'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.25'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model name='pcie-pci-bridge'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='pci.26'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='usb'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <controller type='sata' index='0'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='ide'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <interface type='ethernet'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <mac address='fa:16:3e:fe:f0:45'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target dev='tap7fa1d4da-92'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model type='virtio'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <mtu size='1442'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='net0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <serial type='pty'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <source path='/dev/pts/0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <log file='/var/lib/nova/instances/14addc5e-27d3-46d3-a93f-b22b3f400873/console.log' append='off'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target type='isa-serial' port='0'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:        <model name='isa-serial'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      </target>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='serial0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <console type='pty' tty='/dev/pts/0'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <source path='/dev/pts/0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <log file='/var/lib/nova/instances/14addc5e-27d3-46d3-a93f-b22b3f400873/console.log' append='off'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <target type='serial' port='0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='serial0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </console>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <input type='tablet' bus='usb'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='input0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </input>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <input type='mouse' bus='ps2'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='input1'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </input>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <input type='keyboard' bus='ps2'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='input2'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </input>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <listen type='address' address='::0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </graphics>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <audio id='1' type='none'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='video0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <watchdog model='itco' action='reset'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='watchdog0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </watchdog>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <memballoon model='virtio'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <stats period='10'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='balloon0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <rng model='virtio'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <alias name='rng0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <label>system_u:system_r:svirt_t:s0:c109,c159</label>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c109,c159</imagelabel>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </seclabel>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <label>+107:+107</label>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </seclabel>
Nov 29 03:08:21 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:08:21 np0005539564 nova_compute[226295]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.337 226310 INFO nova.virt.libvirt.driver [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Successfully detached device tap96f4ddc5-c9 from instance 14addc5e-27d3-46d3-a93f-b22b3f400873 from the live domain config.#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.337 226310 DEBUG nova.virt.libvirt.vif [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:07:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=InstanceDeviceMetadata,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1793423903',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1793423903',id=86,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKbPhmRzTkQm2LL7llHwXmde9U0uI+rBMNbooDLe9TK8uFGu2MWDNNFlxdqcFGpEXOxxGpH+eZsXb5EWj4gaq3WgxVHUcO3ffL3yywL7QaK4fsxNbY1WerF6n4fC7HmUGg==',key_name='tempest-keypair-921015683',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:45Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a6104c57e0814f16958b14707debf843',ramdisk_id='',reservation_id='r-fjo7fgu9',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-392498407',owner_user_name='tempest-TaggedAttachmentsTest-392498407-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:07:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='20d37020e7484e3ead9c61a89db491b1',uuid=14addc5e-27d3-46d3-a93f-b22b3f400873,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "96f4ddc5-c950-4eab-9f01-08954642669a", "address": "fa:16:3e:5e:7d:3d", "network": {"id": "ed4b55c0-489a-4302-a51c-6bb91006ca7e", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1159441313", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": 
{"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f4ddc5-c9", "ovs_interfaceid": "96f4ddc5-c950-4eab-9f01-08954642669a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.338 226310 DEBUG nova.network.os_vif_util [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Converting VIF {"id": "96f4ddc5-c950-4eab-9f01-08954642669a", "address": "fa:16:3e:5e:7d:3d", "network": {"id": "ed4b55c0-489a-4302-a51c-6bb91006ca7e", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1159441313", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f4ddc5-c9", "ovs_interfaceid": "96f4ddc5-c950-4eab-9f01-08954642669a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.339 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[000de4d1-9009-4020-88ba-eecfc425daa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.338 226310 DEBUG nova.network.os_vif_util [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:7d:3d,bridge_name='br-int',has_traffic_filtering=True,id=96f4ddc5-c950-4eab-9f01-08954642669a,network=Network(ed4b55c0-489a-4302-a51c-6bb91006ca7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96f4ddc5-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.338 226310 DEBUG os_vif [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:7d:3d,bridge_name='br-int',has_traffic_filtering=True,id=96f4ddc5-c950-4eab-9f01-08954642669a,network=Network(ed4b55c0-489a-4302-a51c-6bb91006ca7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96f4ddc5-c9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:08:21 np0005539564 NetworkManager[48997]: <info>  [1764403701.3417] manager: (tap8665acc6-10): new Veth device (/org/freedesktop/NetworkManager/Devices/157)
Nov 29 03:08:21 np0005539564 systemd-udevd[258822]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.340 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.340 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96f4ddc5-c9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.343 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.344 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.351 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.353 226310 INFO os_vif [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:7d:3d,bridge_name='br-int',has_traffic_filtering=True,id=96f4ddc5-c950-4eab-9f01-08954642669a,network=Network(ed4b55c0-489a-4302-a51c-6bb91006ca7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96f4ddc5-c9')#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.354 226310 DEBUG nova.virt.libvirt.guest [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <nova:name>tempest-device-tagging-server-1793423903</nova:name>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <nova:creationTime>2025-11-29 08:08:21</nova:creationTime>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <nova:flavor name="m1.nano">
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:memory>128</nova:memory>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:disk>1</nova:disk>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:swap>0</nova:swap>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </nova:flavor>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <nova:owner>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:user uuid="20d37020e7484e3ead9c61a89db491b1">tempest-TaggedAttachmentsTest-392498407-project-member</nova:user>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:project uuid="a6104c57e0814f16958b14707debf843">tempest-TaggedAttachmentsTest-392498407</nova:project>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </nova:owner>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  <nova:ports>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    <nova:port uuid="7fa1d4da-9208-4220-ae2e-26aada9fc93b">
Nov 29 03:08:21 np0005539564 nova_compute[226295]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:    </nova:port>
Nov 29 03:08:21 np0005539564 nova_compute[226295]:  </nova:ports>
Nov 29 03:08:21 np0005539564 nova_compute[226295]: </nova:instance>
Nov 29 03:08:21 np0005539564 nova_compute[226295]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.378 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[8f8605fb-6b04-4438-a394-1c6a40b4fcd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.381 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[a4958d34-e26f-4867-ad6b-7c04fe38d94b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:21 np0005539564 NetworkManager[48997]: <info>  [1764403701.4056] device (tap8665acc6-10): carrier: link connected
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.412 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[3c3d22c4-a51d-4b41-9d33-7319a681baa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.435 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[67044803-5fb1-42b5-acf7-496bb6939303]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8665acc6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:22:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667608, 'reachable_time': 18998, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258855, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.460 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[89e77858-507f-42a4-83aa-c7aaad11fc1a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:2248'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667608, 'tstamp': 667608}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258856, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:21.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.479 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f24e6aa6-95a7-4c85-afe3-75158967f291]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8665acc6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:22:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667608, 'reachable_time': 18998, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258857, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.517 226310 DEBUG nova.compute.manager [req-51c84c79-1732-4a43-be09-280ea971c01a req-a1a1fbfe-4eca-4c20-a8ec-31edaf7d6ed1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Received event network-vif-plugged-ccf625f8-471d-4406-9844-a3872b34137c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.518 226310 DEBUG oslo_concurrency.lockutils [req-51c84c79-1732-4a43-be09-280ea971c01a req-a1a1fbfe-4eca-4c20-a8ec-31edaf7d6ed1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.519 226310 DEBUG oslo_concurrency.lockutils [req-51c84c79-1732-4a43-be09-280ea971c01a req-a1a1fbfe-4eca-4c20-a8ec-31edaf7d6ed1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.519 226310 DEBUG oslo_concurrency.lockutils [req-51c84c79-1732-4a43-be09-280ea971c01a req-a1a1fbfe-4eca-4c20-a8ec-31edaf7d6ed1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.520 226310 DEBUG nova.compute.manager [req-51c84c79-1732-4a43-be09-280ea971c01a req-a1a1fbfe-4eca-4c20-a8ec-31edaf7d6ed1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Processing event network-vif-plugged-ccf625f8-471d-4406-9844-a3872b34137c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.519 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b2169cf0-ea77-489e-bd82-0d0c55cbaf62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.584 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[714a67c9-2ee3-44f0-8022-00af2df2973d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.586 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8665acc6-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.586 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.587 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8665acc6-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:21 np0005539564 kernel: tap8665acc6-10: entered promiscuous mode
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.588 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:21 np0005539564 NetworkManager[48997]: <info>  [1764403701.5893] manager: (tap8665acc6-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/158)
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.591 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.592 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8665acc6-10, col_values=(('external_ids', {'iface-id': 'e0f892e1-f1e8-4b29-8918-6cd036b9e8e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:21Z|00298|binding|INFO|Releasing lport e0f892e1-f1e8-4b29-8918-6cd036b9e8e0 from this chassis (sb_readonly=0)
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.593 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.624 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.625 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8665acc6-1650-4878-8ffd-84f079f13741.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8665acc6-1650-4878-8ffd-84f079f13741.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.629 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c4579b6f-8f06-4db5-b983-68170258b2a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.630 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-8665acc6-1650-4878-8ffd-84f079f13741
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/8665acc6-1650-4878-8ffd-84f079f13741.pid.haproxy
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 8665acc6-1650-4878-8ffd-84f079f13741
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:08:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:21.631 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'env', 'PROCESS_TAG=haproxy-8665acc6-1650-4878-8ffd-84f079f13741', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8665acc6-1650-4878-8ffd-84f079f13741.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:08:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:21.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.807 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.872 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403701.8719838, 9b9952a8-61d7-410f-9f29-081ff912c4cb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.873 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] VM Started (Lifecycle Event)#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.875 226310 DEBUG nova.compute.manager [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.880 226310 DEBUG nova.virt.libvirt.driver [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.884 226310 INFO nova.virt.libvirt.driver [-] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Instance spawned successfully.#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.885 226310 DEBUG nova.virt.libvirt.driver [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.915 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.920 226310 DEBUG nova.virt.libvirt.driver [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.921 226310 DEBUG nova.virt.libvirt.driver [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.922 226310 DEBUG nova.virt.libvirt.driver [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.922 226310 DEBUG nova.virt.libvirt.driver [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.923 226310 DEBUG nova.virt.libvirt.driver [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.923 226310 DEBUG nova.virt.libvirt.driver [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.929 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.971 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.972 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403701.872101, 9b9952a8-61d7-410f-9f29-081ff912c4cb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.972 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:08:21 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.996 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:22 np0005539564 nova_compute[226295]: 2025-11-29 08:08:21.999 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403701.8789716, 9b9952a8-61d7-410f-9f29-081ff912c4cb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:08:22 np0005539564 nova_compute[226295]: 2025-11-29 08:08:22.000 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:08:22 np0005539564 nova_compute[226295]: 2025-11-29 08:08:22.005 226310 INFO nova.compute.manager [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Took 8.27 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:08:22 np0005539564 nova_compute[226295]: 2025-11-29 08:08:22.005 226310 DEBUG nova.compute.manager [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:22 np0005539564 nova_compute[226295]: 2025-11-29 08:08:22.016 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:22 np0005539564 nova_compute[226295]: 2025-11-29 08:08:22.019 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:08:22 np0005539564 nova_compute[226295]: 2025-11-29 08:08:22.038 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:08:22 np0005539564 podman[258933]: 2025-11-29 08:08:22.048655533 +0000 UTC m=+0.084245453 container create 9af572a93cac2565d69a97f766c3de7757c9d1ce28811d4f4cc78413a872b9cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:08:22 np0005539564 nova_compute[226295]: 2025-11-29 08:08:22.082 226310 INFO nova.compute.manager [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Took 9.76 seconds to build instance.#033[00m
Nov 29 03:08:22 np0005539564 podman[258933]: 2025-11-29 08:08:22.009839476 +0000 UTC m=+0.045429486 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:08:22 np0005539564 nova_compute[226295]: 2025-11-29 08:08:22.101 226310 DEBUG oslo_concurrency.lockutils [None req-cec781f4-02db-4f04-9229-8aab1bc7e764 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:22 np0005539564 systemd[1]: Started libpod-conmon-9af572a93cac2565d69a97f766c3de7757c9d1ce28811d4f4cc78413a872b9cf.scope.
Nov 29 03:08:22 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:08:22 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5ce517ba7faf9db1e0ac3cbee680b91c1e0a6bd95b800259702075bdef630e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:08:22 np0005539564 podman[258933]: 2025-11-29 08:08:22.189304527 +0000 UTC m=+0.224894467 container init 9af572a93cac2565d69a97f766c3de7757c9d1ce28811d4f4cc78413a872b9cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 03:08:22 np0005539564 podman[258933]: 2025-11-29 08:08:22.195440772 +0000 UTC m=+0.231030682 container start 9af572a93cac2565d69a97f766c3de7757c9d1ce28811d4f4cc78413a872b9cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:08:22 np0005539564 podman[258950]: 2025-11-29 08:08:22.209256745 +0000 UTC m=+0.075767985 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:08:22 np0005539564 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[258959]: [NOTICE]   (259003) : New worker (259013) forked
Nov 29 03:08:22 np0005539564 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[258959]: [NOTICE]   (259003) : Loading success.
Nov 29 03:08:22 np0005539564 podman[258949]: 2025-11-29 08:08:22.228748761 +0000 UTC m=+0.091765217 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:08:22 np0005539564 podman[258948]: 2025-11-29 08:08:22.253032796 +0000 UTC m=+0.125720082 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:08:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:22.269 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 96f4ddc5-c950-4eab-9f01-08954642669a in datapath ed4b55c0-489a-4302-a51c-6bb91006ca7e unbound from our chassis#033[00m
Nov 29 03:08:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:22.272 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ed4b55c0-489a-4302-a51c-6bb91006ca7e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:08:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:22.273 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[74f74eb9-e239-44ae-ac42-aca4721f110b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:22.274 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ed4b55c0-489a-4302-a51c-6bb91006ca7e namespace which is not needed anymore#033[00m
Nov 29 03:08:22 np0005539564 neutron-haproxy-ovnmeta-ed4b55c0-489a-4302-a51c-6bb91006ca7e[258566]: [NOTICE]   (258570) : haproxy version is 2.8.14-c23fe91
Nov 29 03:08:22 np0005539564 neutron-haproxy-ovnmeta-ed4b55c0-489a-4302-a51c-6bb91006ca7e[258566]: [NOTICE]   (258570) : path to executable is /usr/sbin/haproxy
Nov 29 03:08:22 np0005539564 neutron-haproxy-ovnmeta-ed4b55c0-489a-4302-a51c-6bb91006ca7e[258566]: [WARNING]  (258570) : Exiting Master process...
Nov 29 03:08:22 np0005539564 neutron-haproxy-ovnmeta-ed4b55c0-489a-4302-a51c-6bb91006ca7e[258566]: [WARNING]  (258570) : Exiting Master process...
Nov 29 03:08:22 np0005539564 neutron-haproxy-ovnmeta-ed4b55c0-489a-4302-a51c-6bb91006ca7e[258566]: [ALERT]    (258570) : Current worker (258572) exited with code 143 (Terminated)
Nov 29 03:08:22 np0005539564 neutron-haproxy-ovnmeta-ed4b55c0-489a-4302-a51c-6bb91006ca7e[258566]: [WARNING]  (258570) : All workers exited. Exiting... (0)
Nov 29 03:08:22 np0005539564 systemd[1]: libpod-277953052b1ec48872047d988e9f65b8b68dc455f8d2941b97d859ed3d68189a.scope: Deactivated successfully.
Nov 29 03:08:22 np0005539564 podman[259044]: 2025-11-29 08:08:22.428667324 +0000 UTC m=+0.056446804 container died 277953052b1ec48872047d988e9f65b8b68dc455f8d2941b97d859ed3d68189a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed4b55c0-489a-4302-a51c-6bb91006ca7e, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:08:22 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-277953052b1ec48872047d988e9f65b8b68dc455f8d2941b97d859ed3d68189a-userdata-shm.mount: Deactivated successfully.
Nov 29 03:08:22 np0005539564 systemd[1]: var-lib-containers-storage-overlay-7f4162dc9b87e90569e1be548b98e27dca22c2b7649521e69deb79ac838a3c50-merged.mount: Deactivated successfully.
Nov 29 03:08:22 np0005539564 podman[259044]: 2025-11-29 08:08:22.471313634 +0000 UTC m=+0.099093074 container cleanup 277953052b1ec48872047d988e9f65b8b68dc455f8d2941b97d859ed3d68189a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed4b55c0-489a-4302-a51c-6bb91006ca7e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:08:22 np0005539564 systemd[1]: libpod-conmon-277953052b1ec48872047d988e9f65b8b68dc455f8d2941b97d859ed3d68189a.scope: Deactivated successfully.
Nov 29 03:08:22 np0005539564 podman[259073]: 2025-11-29 08:08:22.532825694 +0000 UTC m=+0.040894195 container remove 277953052b1ec48872047d988e9f65b8b68dc455f8d2941b97d859ed3d68189a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed4b55c0-489a-4302-a51c-6bb91006ca7e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 03:08:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:22.538 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[241f9bca-4868-4255-9055-5d6671e4de56]: (4, ('Sat Nov 29 08:08:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ed4b55c0-489a-4302-a51c-6bb91006ca7e (277953052b1ec48872047d988e9f65b8b68dc455f8d2941b97d859ed3d68189a)\n277953052b1ec48872047d988e9f65b8b68dc455f8d2941b97d859ed3d68189a\nSat Nov 29 08:08:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ed4b55c0-489a-4302-a51c-6bb91006ca7e (277953052b1ec48872047d988e9f65b8b68dc455f8d2941b97d859ed3d68189a)\n277953052b1ec48872047d988e9f65b8b68dc455f8d2941b97d859ed3d68189a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:22.539 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b5c82b1f-78b5-4b81-8218-063943d2926b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:22.540 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped4b55c0-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:22 np0005539564 nova_compute[226295]: 2025-11-29 08:08:22.542 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:22 np0005539564 kernel: taped4b55c0-40: left promiscuous mode
Nov 29 03:08:22 np0005539564 nova_compute[226295]: 2025-11-29 08:08:22.557 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:22.561 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b3d0540d-8d02-4449-97e5-9306c9495fc4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:22.585 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[40793499-4ad4-42f8-a545-e8a78d686413]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:22.587 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9265980b-d158-41b7-b88e-868fd01b0912]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:22.604 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b87c5fc4-3854-4dd6-a312-8ef93a4e5c28]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666816, 'reachable_time': 18403, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259088, 'error': None, 'target': 'ovnmeta-ed4b55c0-489a-4302-a51c-6bb91006ca7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:22 np0005539564 systemd[1]: run-netns-ovnmeta\x2ded4b55c0\x2d489a\x2d4302\x2da51c\x2d6bb91006ca7e.mount: Deactivated successfully.
Nov 29 03:08:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:22.609 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ed4b55c0-489a-4302-a51c-6bb91006ca7e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:08:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:22.610 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[c3c4190d-c864-43b7-839b-7cf517a63888]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.123 226310 DEBUG nova.compute.manager [req-7191ae1c-1733-4ab3-8c36-19de9f8726a7 req-478d6c4a-01f3-4e46-a533-6b551cfdaccd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Received event network-vif-unplugged-96f4ddc5-c950-4eab-9f01-08954642669a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.124 226310 DEBUG oslo_concurrency.lockutils [req-7191ae1c-1733-4ab3-8c36-19de9f8726a7 req-478d6c4a-01f3-4e46-a533-6b551cfdaccd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.124 226310 DEBUG oslo_concurrency.lockutils [req-7191ae1c-1733-4ab3-8c36-19de9f8726a7 req-478d6c4a-01f3-4e46-a533-6b551cfdaccd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.124 226310 DEBUG oslo_concurrency.lockutils [req-7191ae1c-1733-4ab3-8c36-19de9f8726a7 req-478d6c4a-01f3-4e46-a533-6b551cfdaccd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.124 226310 DEBUG nova.compute.manager [req-7191ae1c-1733-4ab3-8c36-19de9f8726a7 req-478d6c4a-01f3-4e46-a533-6b551cfdaccd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] No waiting events found dispatching network-vif-unplugged-96f4ddc5-c950-4eab-9f01-08954642669a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.125 226310 WARNING nova.compute.manager [req-7191ae1c-1733-4ab3-8c36-19de9f8726a7 req-478d6c4a-01f3-4e46-a533-6b551cfdaccd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Received unexpected event network-vif-unplugged-96f4ddc5-c950-4eab-9f01-08954642669a for instance with vm_state active and task_state None.#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.125 226310 DEBUG nova.compute.manager [req-7191ae1c-1733-4ab3-8c36-19de9f8726a7 req-478d6c4a-01f3-4e46-a533-6b551cfdaccd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Received event network-vif-plugged-96f4ddc5-c950-4eab-9f01-08954642669a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.125 226310 DEBUG oslo_concurrency.lockutils [req-7191ae1c-1733-4ab3-8c36-19de9f8726a7 req-478d6c4a-01f3-4e46-a533-6b551cfdaccd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.126 226310 DEBUG oslo_concurrency.lockutils [req-7191ae1c-1733-4ab3-8c36-19de9f8726a7 req-478d6c4a-01f3-4e46-a533-6b551cfdaccd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.126 226310 DEBUG oslo_concurrency.lockutils [req-7191ae1c-1733-4ab3-8c36-19de9f8726a7 req-478d6c4a-01f3-4e46-a533-6b551cfdaccd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.126 226310 DEBUG nova.compute.manager [req-7191ae1c-1733-4ab3-8c36-19de9f8726a7 req-478d6c4a-01f3-4e46-a533-6b551cfdaccd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] No waiting events found dispatching network-vif-plugged-96f4ddc5-c950-4eab-9f01-08954642669a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.126 226310 WARNING nova.compute.manager [req-7191ae1c-1733-4ab3-8c36-19de9f8726a7 req-478d6c4a-01f3-4e46-a533-6b551cfdaccd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Received unexpected event network-vif-plugged-96f4ddc5-c950-4eab-9f01-08954642669a for instance with vm_state active and task_state None.#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.252 226310 DEBUG oslo_concurrency.lockutils [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Acquiring lock "refresh_cache-14addc5e-27d3-46d3-a93f-b22b3f400873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.253 226310 DEBUG oslo_concurrency.lockutils [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Acquired lock "refresh_cache-14addc5e-27d3-46d3-a93f-b22b3f400873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.253 226310 DEBUG nova.network.neutron [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:08:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:23.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.582 226310 DEBUG nova.compute.manager [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Received event network-vif-deleted-96f4ddc5-c950-4eab-9f01-08954642669a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.583 226310 INFO nova.compute.manager [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Neutron deleted interface 96f4ddc5-c950-4eab-9f01-08954642669a; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.583 226310 DEBUG nova.network.neutron [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Updating instance_info_cache with network_info: [{"id": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "address": "fa:16:3e:fe:f0:45", "network": {"id": "f96ca160-f806-4467-a92a-7669548852b0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1120864816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa1d4da-92", "ovs_interfaceid": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.614 226310 DEBUG nova.compute.manager [req-dfd5ea3d-d9ec-4ba0-9f41-2872f6ac3be3 req-a0b7225d-2a58-4d5e-bc88-687e9f86e62f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Received event network-vif-plugged-ccf625f8-471d-4406-9844-a3872b34137c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.614 226310 DEBUG oslo_concurrency.lockutils [req-dfd5ea3d-d9ec-4ba0-9f41-2872f6ac3be3 req-a0b7225d-2a58-4d5e-bc88-687e9f86e62f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.615 226310 DEBUG oslo_concurrency.lockutils [req-dfd5ea3d-d9ec-4ba0-9f41-2872f6ac3be3 req-a0b7225d-2a58-4d5e-bc88-687e9f86e62f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.616 226310 DEBUG oslo_concurrency.lockutils [req-dfd5ea3d-d9ec-4ba0-9f41-2872f6ac3be3 req-a0b7225d-2a58-4d5e-bc88-687e9f86e62f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.616 226310 DEBUG nova.compute.manager [req-dfd5ea3d-d9ec-4ba0-9f41-2872f6ac3be3 req-a0b7225d-2a58-4d5e-bc88-687e9f86e62f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] No waiting events found dispatching network-vif-plugged-ccf625f8-471d-4406-9844-a3872b34137c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.617 226310 WARNING nova.compute.manager [req-dfd5ea3d-d9ec-4ba0-9f41-2872f6ac3be3 req-a0b7225d-2a58-4d5e-bc88-687e9f86e62f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Received unexpected event network-vif-plugged-ccf625f8-471d-4406-9844-a3872b34137c for instance with vm_state active and task_state None.#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.629 226310 DEBUG nova.objects.instance [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lazy-loading 'system_metadata' on Instance uuid 14addc5e-27d3-46d3-a93f-b22b3f400873 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.669 226310 DEBUG nova.objects.instance [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lazy-loading 'flavor' on Instance uuid 14addc5e-27d3-46d3-a93f-b22b3f400873 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.695 226310 DEBUG nova.virt.libvirt.vif [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:07:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1793423903',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1793423903',id=86,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKbPhmRzTkQm2LL7llHwXmde9U0uI+rBMNbooDLe9TK8uFGu2MWDNNFlxdqcFGpEXOxxGpH+eZsXb5EWj4gaq3WgxVHUcO3ffL3yywL7QaK4fsxNbY1WerF6n4fC7HmUGg==',key_name='tempest-keypair-921015683',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:45Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a6104c57e0814f16958b14707debf843',ramdisk_id='',reservation_id='r-fjo7fgu9',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-392498407',owner_user_name='tempest-TaggedAttachmentsTest-392498407-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:07:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='20d37020e7484e3ead9c61a89db491b1',uuid=14addc5e-27d3-46d3-a93f-b22b3f400873,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "96f4ddc5-c950-4eab-9f01-08954642669a", "address": "fa:16:3e:5e:7d:3d", "network": {"id": "ed4b55c0-489a-4302-a51c-6bb91006ca7e", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1159441313", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": 
"10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f4ddc5-c9", "ovs_interfaceid": "96f4ddc5-c950-4eab-9f01-08954642669a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.697 226310 DEBUG nova.network.os_vif_util [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Converting VIF {"id": "96f4ddc5-c950-4eab-9f01-08954642669a", "address": "fa:16:3e:5e:7d:3d", "network": {"id": "ed4b55c0-489a-4302-a51c-6bb91006ca7e", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1159441313", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f4ddc5-c9", "ovs_interfaceid": "96f4ddc5-c950-4eab-9f01-08954642669a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.698 226310 DEBUG nova.network.os_vif_util [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:7d:3d,bridge_name='br-int',has_traffic_filtering=True,id=96f4ddc5-c950-4eab-9f01-08954642669a,network=Network(ed4b55c0-489a-4302-a51c-6bb91006ca7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96f4ddc5-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.702 226310 DEBUG nova.virt.libvirt.guest [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:5e:7d:3d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap96f4ddc5-c9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:08:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:23.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.707 226310 DEBUG nova.virt.libvirt.guest [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:5e:7d:3d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap96f4ddc5-c9"/></interface>not found in domain: <domain type='kvm' id='36'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <name>instance-00000056</name>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <uuid>14addc5e-27d3-46d3-a93f-b22b3f400873</uuid>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <nova:name>tempest-device-tagging-server-1793423903</nova:name>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <nova:creationTime>2025-11-29 08:08:21</nova:creationTime>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <nova:flavor name="m1.nano">
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <nova:memory>128</nova:memory>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <nova:disk>1</nova:disk>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <nova:swap>0</nova:swap>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </nova:flavor>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <nova:owner>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <nova:user uuid="20d37020e7484e3ead9c61a89db491b1">tempest-TaggedAttachmentsTest-392498407-project-member</nova:user>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <nova:project uuid="a6104c57e0814f16958b14707debf843">tempest-TaggedAttachmentsTest-392498407</nova:project>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </nova:owner>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <nova:ports>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <nova:port uuid="7fa1d4da-9208-4220-ae2e-26aada9fc93b">
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </nova:port>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </nova:ports>
Nov 29 03:08:23 np0005539564 nova_compute[226295]: </nova:instance>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <memory unit='KiB'>131072</memory>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <resource>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <partition>/machine</partition>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </resource>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <sysinfo type='smbios'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <entry name='serial'>14addc5e-27d3-46d3-a93f-b22b3f400873</entry>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <entry name='uuid'>14addc5e-27d3-46d3-a93f-b22b3f400873</entry>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <boot dev='hd'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <smbios mode='sysinfo'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <vmcoreinfo state='on'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <feature policy='require' name='x2apic'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <feature policy='require' name='vme'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <clock offset='utc'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <timer name='hpet' present='no'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <on_reboot>restart</on_reboot>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <on_crash>destroy</on_crash>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <disk type='network' device='disk'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <auth username='openstack'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <source protocol='rbd' name='vms/14addc5e-27d3-46d3-a93f-b22b3f400873_disk' index='2'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target dev='vda' bus='virtio'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='virtio-disk0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <disk type='network' device='cdrom'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <auth username='openstack'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <source protocol='rbd' name='vms/14addc5e-27d3-46d3-a93f-b22b3f400873_disk.config' index='1'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target dev='sda' bus='sata'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <readonly/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='sata0-0-0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pcie.0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='1' port='0x10'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.1'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='2' port='0x11'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.2'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='3' port='0x12'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.3'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='4' port='0x13'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.4'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='5' port='0x14'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.5'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='6' port='0x15'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.6'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='7' port='0x16'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.7'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='8' port='0x17'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.8'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='9' port='0x18'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.9'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='10' port='0x19'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.10'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='11' port='0x1a'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.11'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='12' port='0x1b'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.12'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='13' port='0x1c'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.13'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='14' port='0x1d'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.14'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='15' port='0x1e'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.15'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='16' port='0x1f'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.16'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='17' port='0x20'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.17'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='18' port='0x21'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.18'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='19' port='0x22'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.19'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='20' port='0x23'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.20'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='21' port='0x24'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.21'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='22' port='0x25'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.22'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='23' port='0x26'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.23'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='24' port='0x27'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.24'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='25' port='0x28'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.25'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-pci-bridge'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.26'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='usb'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='sata' index='0'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='ide'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <interface type='ethernet'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <mac address='fa:16:3e:fe:f0:45'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target dev='tap7fa1d4da-92'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model type='virtio'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <mtu size='1442'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='net0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <serial type='pty'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <source path='/dev/pts/0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <log file='/var/lib/nova/instances/14addc5e-27d3-46d3-a93f-b22b3f400873/console.log' append='off'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target type='isa-serial' port='0'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:        <model name='isa-serial'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      </target>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='serial0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <console type='pty' tty='/dev/pts/0'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <source path='/dev/pts/0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <log file='/var/lib/nova/instances/14addc5e-27d3-46d3-a93f-b22b3f400873/console.log' append='off'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target type='serial' port='0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='serial0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </console>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <input type='tablet' bus='usb'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='input0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </input>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <input type='mouse' bus='ps2'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='input1'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </input>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <input type='keyboard' bus='ps2'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='input2'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </input>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <listen type='address' address='::0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </graphics>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <audio id='1' type='none'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='video0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <watchdog model='itco' action='reset'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='watchdog0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </watchdog>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <memballoon model='virtio'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <stats period='10'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='balloon0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <rng model='virtio'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='rng0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <label>system_u:system_r:svirt_t:s0:c109,c159</label>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c109,c159</imagelabel>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </seclabel>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <label>+107:+107</label>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </seclabel>
Nov 29 03:08:23 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:08:23 np0005539564 nova_compute[226295]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.711 226310 DEBUG nova.virt.libvirt.guest [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:5e:7d:3d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap96f4ddc5-c9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.716 226310 DEBUG nova.virt.libvirt.guest [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:5e:7d:3d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap96f4ddc5-c9"/></interface>not found in domain: <domain type='kvm' id='36'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <name>instance-00000056</name>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <uuid>14addc5e-27d3-46d3-a93f-b22b3f400873</uuid>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <nova:name>tempest-device-tagging-server-1793423903</nova:name>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <nova:creationTime>2025-11-29 08:08:21</nova:creationTime>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <nova:flavor name="m1.nano">
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <nova:memory>128</nova:memory>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <nova:disk>1</nova:disk>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <nova:swap>0</nova:swap>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </nova:flavor>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <nova:owner>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <nova:user uuid="20d37020e7484e3ead9c61a89db491b1">tempest-TaggedAttachmentsTest-392498407-project-member</nova:user>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <nova:project uuid="a6104c57e0814f16958b14707debf843">tempest-TaggedAttachmentsTest-392498407</nova:project>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </nova:owner>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <nova:ports>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <nova:port uuid="7fa1d4da-9208-4220-ae2e-26aada9fc93b">
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </nova:port>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </nova:ports>
Nov 29 03:08:23 np0005539564 nova_compute[226295]: </nova:instance>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <memory unit='KiB'>131072</memory>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <resource>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <partition>/machine</partition>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </resource>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <sysinfo type='smbios'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <entry name='serial'>14addc5e-27d3-46d3-a93f-b22b3f400873</entry>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <entry name='uuid'>14addc5e-27d3-46d3-a93f-b22b3f400873</entry>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <boot dev='hd'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <smbios mode='sysinfo'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <vmcoreinfo state='on'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <feature policy='require' name='x2apic'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <feature policy='require' name='vme'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <clock offset='utc'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <timer name='hpet' present='no'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <on_reboot>restart</on_reboot>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <on_crash>destroy</on_crash>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <disk type='network' device='disk'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <auth username='openstack'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <source protocol='rbd' name='vms/14addc5e-27d3-46d3-a93f-b22b3f400873_disk' index='2'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target dev='vda' bus='virtio'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='virtio-disk0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <disk type='network' device='cdrom'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <auth username='openstack'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <source protocol='rbd' name='vms/14addc5e-27d3-46d3-a93f-b22b3f400873_disk.config' index='1'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target dev='sda' bus='sata'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <readonly/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='sata0-0-0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pcie.0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='1' port='0x10'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.1'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='2' port='0x11'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.2'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='3' port='0x12'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.3'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='4' port='0x13'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.4'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='5' port='0x14'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.5'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='6' port='0x15'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.6'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='7' port='0x16'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.7'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='8' port='0x17'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.8'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='9' port='0x18'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.9'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='10' port='0x19'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.10'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='11' port='0x1a'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.11'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='12' port='0x1b'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.12'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='13' port='0x1c'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.13'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='14' port='0x1d'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.14'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='15' port='0x1e'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.15'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='16' port='0x1f'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.16'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='17' port='0x20'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.17'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='18' port='0x21'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.18'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='19' port='0x22'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.19'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='20' port='0x23'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.20'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='21' port='0x24'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.21'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='22' port='0x25'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.22'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='23' port='0x26'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.23'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='24' port='0x27'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.24'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-root-port'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target chassis='25' port='0x28'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.25'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model name='pcie-pci-bridge'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='pci.26'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='usb'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <controller type='sata' index='0'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='ide'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </controller>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <interface type='ethernet'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <mac address='fa:16:3e:fe:f0:45'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target dev='tap7fa1d4da-92'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model type='virtio'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <mtu size='1442'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='net0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <serial type='pty'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <source path='/dev/pts/0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <log file='/var/lib/nova/instances/14addc5e-27d3-46d3-a93f-b22b3f400873/console.log' append='off'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target type='isa-serial' port='0'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:        <model name='isa-serial'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      </target>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='serial0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <console type='pty' tty='/dev/pts/0'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <source path='/dev/pts/0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <log file='/var/lib/nova/instances/14addc5e-27d3-46d3-a93f-b22b3f400873/console.log' append='off'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <target type='serial' port='0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='serial0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </console>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <input type='tablet' bus='usb'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='input0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </input>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <input type='mouse' bus='ps2'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='input1'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </input>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <input type='keyboard' bus='ps2'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='input2'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </input>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <listen type='address' address='::0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </graphics>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <audio id='1' type='none'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='video0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <watchdog model='itco' action='reset'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='watchdog0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </watchdog>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <memballoon model='virtio'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <stats period='10'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='balloon0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <rng model='virtio'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <alias name='rng0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <label>system_u:system_r:svirt_t:s0:c109,c159</label>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c109,c159</imagelabel>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </seclabel>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <label>+107:+107</label>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </seclabel>
Nov 29 03:08:23 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:08:23 np0005539564 nova_compute[226295]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.721 226310 WARNING nova.virt.libvirt.driver [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Detaching interface fa:16:3e:5e:7d:3d failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap96f4ddc5-c9' not found.#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.723 226310 DEBUG nova.virt.libvirt.vif [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:07:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1793423903',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1793423903',id=86,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKbPhmRzTkQm2LL7llHwXmde9U0uI+rBMNbooDLe9TK8uFGu2MWDNNFlxdqcFGpEXOxxGpH+eZsXb5EWj4gaq3WgxVHUcO3ffL3yywL7QaK4fsxNbY1WerF6n4fC7HmUGg==',key_name='tempest-keypair-921015683',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:45Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a6104c57e0814f16958b14707debf843',ramdisk_id='',reservation_id='r-fjo7fgu9',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-392498407',owner_user_name='tempest-TaggedAttachmentsTest-392498407-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:07:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='20d37020e7484e3ead9c61a89db491b1',uuid=14addc5e-27d3-46d3-a93f-b22b3f400873,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "96f4ddc5-c950-4eab-9f01-08954642669a", "address": "fa:16:3e:5e:7d:3d", "network": {"id": "ed4b55c0-489a-4302-a51c-6bb91006ca7e", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1159441313", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": 
"10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f4ddc5-c9", "ovs_interfaceid": "96f4ddc5-c950-4eab-9f01-08954642669a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.723 226310 DEBUG nova.network.os_vif_util [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Converting VIF {"id": "96f4ddc5-c950-4eab-9f01-08954642669a", "address": "fa:16:3e:5e:7d:3d", "network": {"id": "ed4b55c0-489a-4302-a51c-6bb91006ca7e", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1159441313", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f4ddc5-c9", "ovs_interfaceid": "96f4ddc5-c950-4eab-9f01-08954642669a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.724 226310 DEBUG nova.network.os_vif_util [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:7d:3d,bridge_name='br-int',has_traffic_filtering=True,id=96f4ddc5-c950-4eab-9f01-08954642669a,network=Network(ed4b55c0-489a-4302-a51c-6bb91006ca7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96f4ddc5-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.725 226310 DEBUG os_vif [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:7d:3d,bridge_name='br-int',has_traffic_filtering=True,id=96f4ddc5-c950-4eab-9f01-08954642669a,network=Network(ed4b55c0-489a-4302-a51c-6bb91006ca7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96f4ddc5-c9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.727 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.727 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96f4ddc5-c9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.728 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.730 226310 INFO os_vif [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:7d:3d,bridge_name='br-int',has_traffic_filtering=True,id=96f4ddc5-c950-4eab-9f01-08954642669a,network=Network(ed4b55c0-489a-4302-a51c-6bb91006ca7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96f4ddc5-c9')#033[00m
Nov 29 03:08:23 np0005539564 nova_compute[226295]: 2025-11-29 08:08:23.731 226310 DEBUG nova.virt.libvirt.guest [req-0ab4d9af-74fa-4228-8309-3810cde16247 req-7af830eb-dd7f-4686-b29d-e7f9da7f9b56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <nova:name>tempest-device-tagging-server-1793423903</nova:name>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <nova:creationTime>2025-11-29 08:08:23</nova:creationTime>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <nova:flavor name="m1.nano">
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <nova:memory>128</nova:memory>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <nova:disk>1</nova:disk>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <nova:swap>0</nova:swap>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </nova:flavor>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <nova:owner>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <nova:user uuid="20d37020e7484e3ead9c61a89db491b1">tempest-TaggedAttachmentsTest-392498407-project-member</nova:user>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <nova:project uuid="a6104c57e0814f16958b14707debf843">tempest-TaggedAttachmentsTest-392498407</nova:project>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </nova:owner>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  <nova:ports>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    <nova:port uuid="7fa1d4da-9208-4220-ae2e-26aada9fc93b">
Nov 29 03:08:23 np0005539564 nova_compute[226295]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:    </nova:port>
Nov 29 03:08:23 np0005539564 nova_compute[226295]:  </nova:ports>
Nov 29 03:08:23 np0005539564 nova_compute[226295]: </nova:instance>
Nov 29 03:08:23 np0005539564 nova_compute[226295]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:08:24 np0005539564 nova_compute[226295]: 2025-11-29 08:08:24.065 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403689.0639405, a28f7dd6-9c8c-46f4-9ce0-7d40194d9749 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:08:24 np0005539564 nova_compute[226295]: 2025-11-29 08:08:24.065 226310 INFO nova.compute.manager [-] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:08:24 np0005539564 nova_compute[226295]: 2025-11-29 08:08:24.091 226310 DEBUG nova.compute.manager [None req-15b43560-571c-4f0c-b125-cf0f899c8056 - - - - - -] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:24 np0005539564 nova_compute[226295]: 2025-11-29 08:08:24.882 226310 INFO nova.network.neutron [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Port 96f4ddc5-c950-4eab-9f01-08954642669a from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 29 03:08:24 np0005539564 nova_compute[226295]: 2025-11-29 08:08:24.883 226310 DEBUG nova.network.neutron [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Updating instance_info_cache with network_info: [{"id": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "address": "fa:16:3e:fe:f0:45", "network": {"id": "f96ca160-f806-4467-a92a-7669548852b0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1120864816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa1d4da-92", "ovs_interfaceid": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:08:24 np0005539564 nova_compute[226295]: 2025-11-29 08:08:24.921 226310 DEBUG oslo_concurrency.lockutils [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Releasing lock "refresh_cache-14addc5e-27d3-46d3-a93f-b22b3f400873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:08:24 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:24Z|00299|binding|INFO|Releasing lport 5db3ca27-ac70-4295-a4a2-f7ea7ed8fa99 from this chassis (sb_readonly=0)
Nov 29 03:08:24 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:24Z|00300|binding|INFO|Releasing lport e0f892e1-f1e8-4b29-8918-6cd036b9e8e0 from this chassis (sb_readonly=0)
Nov 29 03:08:24 np0005539564 nova_compute[226295]: 2025-11-29 08:08:24.946 226310 DEBUG oslo_concurrency.lockutils [None req-338b5d07-508b-44ed-9446-628880148f50 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lock "interface-14addc5e-27d3-46d3-a93f-b22b3f400873-96f4ddc5-c950-4eab-9f01-08954642669a" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:25 np0005539564 nova_compute[226295]: 2025-11-29 08:08:25.046 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:08:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:25.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:08:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:25.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.382 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.520 226310 DEBUG oslo_concurrency.lockutils [None req-3b43eedc-21d8-4614-bd41-524d1f6758b3 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Acquiring lock "14addc5e-27d3-46d3-a93f-b22b3f400873" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.521 226310 DEBUG oslo_concurrency.lockutils [None req-3b43eedc-21d8-4614-bd41-524d1f6758b3 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.522 226310 DEBUG oslo_concurrency.lockutils [None req-3b43eedc-21d8-4614-bd41-524d1f6758b3 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Acquiring lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.523 226310 DEBUG oslo_concurrency.lockutils [None req-3b43eedc-21d8-4614-bd41-524d1f6758b3 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.523 226310 DEBUG oslo_concurrency.lockutils [None req-3b43eedc-21d8-4614-bd41-524d1f6758b3 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.525 226310 INFO nova.compute.manager [None req-3b43eedc-21d8-4614-bd41-524d1f6758b3 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Terminating instance#033[00m
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.526 226310 DEBUG nova.compute.manager [None req-3b43eedc-21d8-4614-bd41-524d1f6758b3 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:08:26 np0005539564 kernel: tap7fa1d4da-92 (unregistering): left promiscuous mode
Nov 29 03:08:26 np0005539564 NetworkManager[48997]: <info>  [1764403706.5975] device (tap7fa1d4da-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:08:26 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:26Z|00301|binding|INFO|Releasing lport 7fa1d4da-9208-4220-ae2e-26aada9fc93b from this chassis (sb_readonly=0)
Nov 29 03:08:26 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:26Z|00302|binding|INFO|Setting lport 7fa1d4da-9208-4220-ae2e-26aada9fc93b down in Southbound
Nov 29 03:08:26 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:26Z|00303|binding|INFO|Removing iface tap7fa1d4da-92 ovn-installed in OVS
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.610 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.651 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:26.658 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:f0:45 10.100.0.6'], port_security=['fa:16:3e:fe:f0:45 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '14addc5e-27d3-46d3-a93f-b22b3f400873', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f96ca160-f806-4467-a92a-7669548852b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6104c57e0814f16958b14707debf843', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bb58df77-ecf1-472b-b584-68079923e549', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26b0970c-a33b-47cb-8349-4ba02d2b286b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=7fa1d4da-9208-4220-ae2e-26aada9fc93b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:08:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:26.660 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 7fa1d4da-9208-4220-ae2e-26aada9fc93b in datapath f96ca160-f806-4467-a92a-7669548852b0 unbound from our chassis#033[00m
Nov 29 03:08:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:26.664 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f96ca160-f806-4467-a92a-7669548852b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:08:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:26.665 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[02caae6a-ca01-4b94-9f81-d07cfdb5b351]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:26.666 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f96ca160-f806-4467-a92a-7669548852b0 namespace which is not needed anymore#033[00m
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.670 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:26 np0005539564 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000056.scope: Deactivated successfully.
Nov 29 03:08:26 np0005539564 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000056.scope: Consumed 16.386s CPU time.
Nov 29 03:08:26 np0005539564 systemd-machined[190128]: Machine qemu-36-instance-00000056 terminated.
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.769 226310 INFO nova.virt.libvirt.driver [-] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Instance destroyed successfully.#033[00m
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.770 226310 DEBUG nova.objects.instance [None req-3b43eedc-21d8-4614-bd41-524d1f6758b3 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lazy-loading 'resources' on Instance uuid 14addc5e-27d3-46d3-a93f-b22b3f400873 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.789 226310 DEBUG nova.virt.libvirt.vif [None req-3b43eedc-21d8-4614-bd41-524d1f6758b3 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:07:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1793423903',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1793423903',id=86,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKbPhmRzTkQm2LL7llHwXmde9U0uI+rBMNbooDLe9TK8uFGu2MWDNNFlxdqcFGpEXOxxGpH+eZsXb5EWj4gaq3WgxVHUcO3ffL3yywL7QaK4fsxNbY1WerF6n4fC7HmUGg==',key_name='tempest-keypair-921015683',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:45Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a6104c57e0814f16958b14707debf843',ramdisk_id='',reservation_id='r-fjo7fgu9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input
_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-392498407',owner_user_name='tempest-TaggedAttachmentsTest-392498407-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:07:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='20d37020e7484e3ead9c61a89db491b1',uuid=14addc5e-27d3-46d3-a93f-b22b3f400873,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "address": "fa:16:3e:fe:f0:45", "network": {"id": "f96ca160-f806-4467-a92a-7669548852b0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1120864816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa1d4da-92", "ovs_interfaceid": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.790 226310 DEBUG nova.network.os_vif_util [None req-3b43eedc-21d8-4614-bd41-524d1f6758b3 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Converting VIF {"id": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "address": "fa:16:3e:fe:f0:45", "network": {"id": "f96ca160-f806-4467-a92a-7669548852b0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1120864816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6104c57e0814f16958b14707debf843", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa1d4da-92", "ovs_interfaceid": "7fa1d4da-9208-4220-ae2e-26aada9fc93b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.791 226310 DEBUG nova.network.os_vif_util [None req-3b43eedc-21d8-4614-bd41-524d1f6758b3 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fe:f0:45,bridge_name='br-int',has_traffic_filtering=True,id=7fa1d4da-9208-4220-ae2e-26aada9fc93b,network=Network(f96ca160-f806-4467-a92a-7669548852b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa1d4da-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.791 226310 DEBUG os_vif [None req-3b43eedc-21d8-4614-bd41-524d1f6758b3 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:f0:45,bridge_name='br-int',has_traffic_filtering=True,id=7fa1d4da-9208-4220-ae2e-26aada9fc93b,network=Network(f96ca160-f806-4467-a92a-7669548852b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa1d4da-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.793 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.793 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fa1d4da-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.795 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.798 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.806 226310 INFO os_vif [None req-3b43eedc-21d8-4614-bd41-524d1f6758b3 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:f0:45,bridge_name='br-int',has_traffic_filtering=True,id=7fa1d4da-9208-4220-ae2e-26aada9fc93b,network=Network(f96ca160-f806-4467-a92a-7669548852b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa1d4da-92')#033[00m
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.834 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:26 np0005539564 neutron-haproxy-ovnmeta-f96ca160-f806-4467-a92a-7669548852b0[257662]: [NOTICE]   (257666) : haproxy version is 2.8.14-c23fe91
Nov 29 03:08:26 np0005539564 neutron-haproxy-ovnmeta-f96ca160-f806-4467-a92a-7669548852b0[257662]: [NOTICE]   (257666) : path to executable is /usr/sbin/haproxy
Nov 29 03:08:26 np0005539564 neutron-haproxy-ovnmeta-f96ca160-f806-4467-a92a-7669548852b0[257662]: [WARNING]  (257666) : Exiting Master process...
Nov 29 03:08:26 np0005539564 neutron-haproxy-ovnmeta-f96ca160-f806-4467-a92a-7669548852b0[257662]: [ALERT]    (257666) : Current worker (257668) exited with code 143 (Terminated)
Nov 29 03:08:26 np0005539564 neutron-haproxy-ovnmeta-f96ca160-f806-4467-a92a-7669548852b0[257662]: [WARNING]  (257666) : All workers exited. Exiting... (0)
Nov 29 03:08:26 np0005539564 systemd[1]: libpod-587ae86851a04c35eb8d638624e32d40d1ce5ff6528caa02febe590342180d36.scope: Deactivated successfully.
Nov 29 03:08:26 np0005539564 podman[259122]: 2025-11-29 08:08:26.867896177 +0000 UTC m=+0.048145759 container died 587ae86851a04c35eb8d638624e32d40d1ce5ff6528caa02febe590342180d36 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f96ca160-f806-4467-a92a-7669548852b0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:08:26 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-587ae86851a04c35eb8d638624e32d40d1ce5ff6528caa02febe590342180d36-userdata-shm.mount: Deactivated successfully.
Nov 29 03:08:26 np0005539564 systemd[1]: var-lib-containers-storage-overlay-7cf68dccafd282e93f80c6dde8afbfb569572c87599849358d60b6d613bbf979-merged.mount: Deactivated successfully.
Nov 29 03:08:26 np0005539564 podman[259122]: 2025-11-29 08:08:26.919465909 +0000 UTC m=+0.099715521 container cleanup 587ae86851a04c35eb8d638624e32d40d1ce5ff6528caa02febe590342180d36 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f96ca160-f806-4467-a92a-7669548852b0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:08:26 np0005539564 systemd[1]: libpod-conmon-587ae86851a04c35eb8d638624e32d40d1ce5ff6528caa02febe590342180d36.scope: Deactivated successfully.
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.977 226310 DEBUG nova.compute.manager [req-be154d86-2ab0-41d7-912b-7f6c4404d6e5 req-d226b5ef-db25-4681-89e2-a6afdcd3c898 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Received event network-vif-unplugged-7fa1d4da-9208-4220-ae2e-26aada9fc93b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.978 226310 DEBUG oslo_concurrency.lockutils [req-be154d86-2ab0-41d7-912b-7f6c4404d6e5 req-d226b5ef-db25-4681-89e2-a6afdcd3c898 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.978 226310 DEBUG oslo_concurrency.lockutils [req-be154d86-2ab0-41d7-912b-7f6c4404d6e5 req-d226b5ef-db25-4681-89e2-a6afdcd3c898 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.978 226310 DEBUG oslo_concurrency.lockutils [req-be154d86-2ab0-41d7-912b-7f6c4404d6e5 req-d226b5ef-db25-4681-89e2-a6afdcd3c898 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.978 226310 DEBUG nova.compute.manager [req-be154d86-2ab0-41d7-912b-7f6c4404d6e5 req-d226b5ef-db25-4681-89e2-a6afdcd3c898 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] No waiting events found dispatching network-vif-unplugged-7fa1d4da-9208-4220-ae2e-26aada9fc93b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:26 np0005539564 nova_compute[226295]: 2025-11-29 08:08:26.978 226310 DEBUG nova.compute.manager [req-be154d86-2ab0-41d7-912b-7f6c4404d6e5 req-d226b5ef-db25-4681-89e2-a6afdcd3c898 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Received event network-vif-unplugged-7fa1d4da-9208-4220-ae2e-26aada9fc93b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:08:27 np0005539564 podman[259168]: 2025-11-29 08:08:27.020107884 +0000 UTC m=+0.066517166 container remove 587ae86851a04c35eb8d638624e32d40d1ce5ff6528caa02febe590342180d36 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f96ca160-f806-4467-a92a-7669548852b0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:08:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:27.049 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d2106abb-054e-4e4c-8105-2182a2751ad3]: (4, ('Sat Nov 29 08:08:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f96ca160-f806-4467-a92a-7669548852b0 (587ae86851a04c35eb8d638624e32d40d1ce5ff6528caa02febe590342180d36)\n587ae86851a04c35eb8d638624e32d40d1ce5ff6528caa02febe590342180d36\nSat Nov 29 08:08:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f96ca160-f806-4467-a92a-7669548852b0 (587ae86851a04c35eb8d638624e32d40d1ce5ff6528caa02febe590342180d36)\n587ae86851a04c35eb8d638624e32d40d1ce5ff6528caa02febe590342180d36\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:27.052 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8a96acff-3989-402d-b874-560692a73490]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:27.054 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf96ca160-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:27 np0005539564 nova_compute[226295]: 2025-11-29 08:08:27.057 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:27 np0005539564 kernel: tapf96ca160-f0: left promiscuous mode
Nov 29 03:08:27 np0005539564 nova_compute[226295]: 2025-11-29 08:08:27.078 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:27 np0005539564 nova_compute[226295]: 2025-11-29 08:08:27.079 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:27.086 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bc6eabab-b7bd-4427-a798-6881ece40196]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:27.104 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[01d7f82c-e156-43b0-bff3-e73018d5d429]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:27.106 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0864415f-d208-4dad-83d3-0adb26d069bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:27.135 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[98086fb3-1bc7-47a2-a789-d17e6ec07b29]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663881, 'reachable_time': 17617, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259184, 'error': None, 'target': 'ovnmeta-f96ca160-f806-4467-a92a-7669548852b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:27.140 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f96ca160-f806-4467-a92a-7669548852b0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:08:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:27.140 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[ed6c5307-4061-4309-a83d-a8713e8452ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:27 np0005539564 systemd[1]: run-netns-ovnmeta\x2df96ca160\x2df806\x2d4467\x2da92a\x2d7669548852b0.mount: Deactivated successfully.
Nov 29 03:08:27 np0005539564 nova_compute[226295]: 2025-11-29 08:08:27.338 226310 INFO nova.virt.libvirt.driver [None req-3b43eedc-21d8-4614-bd41-524d1f6758b3 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Deleting instance files /var/lib/nova/instances/14addc5e-27d3-46d3-a93f-b22b3f400873_del#033[00m
Nov 29 03:08:27 np0005539564 nova_compute[226295]: 2025-11-29 08:08:27.340 226310 INFO nova.virt.libvirt.driver [None req-3b43eedc-21d8-4614-bd41-524d1f6758b3 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Deletion of /var/lib/nova/instances/14addc5e-27d3-46d3-a93f-b22b3f400873_del complete#033[00m
Nov 29 03:08:27 np0005539564 nova_compute[226295]: 2025-11-29 08:08:27.444 226310 INFO nova.compute.manager [None req-3b43eedc-21d8-4614-bd41-524d1f6758b3 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Took 0.92 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:08:27 np0005539564 nova_compute[226295]: 2025-11-29 08:08:27.445 226310 DEBUG oslo.service.loopingcall [None req-3b43eedc-21d8-4614-bd41-524d1f6758b3 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:08:27 np0005539564 nova_compute[226295]: 2025-11-29 08:08:27.446 226310 DEBUG nova.compute.manager [-] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:08:27 np0005539564 nova_compute[226295]: 2025-11-29 08:08:27.446 226310 DEBUG nova.network.neutron [-] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:08:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:27.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:27.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:28.839 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:08:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:28.841 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:08:28 np0005539564 nova_compute[226295]: 2025-11-29 08:08:28.841 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:29 np0005539564 nova_compute[226295]: 2025-11-29 08:08:29.458 226310 DEBUG nova.network.neutron [-] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:08:29 np0005539564 nova_compute[226295]: 2025-11-29 08:08:29.485 226310 INFO nova.compute.manager [-] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Took 2.04 seconds to deallocate network for instance.#033[00m
Nov 29 03:08:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:29.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:29 np0005539564 nova_compute[226295]: 2025-11-29 08:08:29.580 226310 DEBUG oslo_concurrency.lockutils [None req-3b43eedc-21d8-4614-bd41-524d1f6758b3 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:29 np0005539564 nova_compute[226295]: 2025-11-29 08:08:29.581 226310 DEBUG oslo_concurrency.lockutils [None req-3b43eedc-21d8-4614-bd41-524d1f6758b3 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:29 np0005539564 nova_compute[226295]: 2025-11-29 08:08:29.637 226310 DEBUG nova.compute.manager [req-3d8a5d3e-3776-4adc-b5bd-759cf43e738d req-a4b52b2a-e832-440f-a1be-6bdf1c0b3285 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Received event network-vif-deleted-7fa1d4da-9208-4220-ae2e-26aada9fc93b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:29 np0005539564 nova_compute[226295]: 2025-11-29 08:08:29.639 226310 DEBUG nova.compute.manager [req-8f3dad73-06b9-4652-a9fe-888e9bac2123 req-6fcd0d96-4275-4bc9-8a40-d989bad6e08a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Received event network-vif-plugged-7fa1d4da-9208-4220-ae2e-26aada9fc93b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:29 np0005539564 nova_compute[226295]: 2025-11-29 08:08:29.639 226310 DEBUG oslo_concurrency.lockutils [req-8f3dad73-06b9-4652-a9fe-888e9bac2123 req-6fcd0d96-4275-4bc9-8a40-d989bad6e08a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:29 np0005539564 nova_compute[226295]: 2025-11-29 08:08:29.640 226310 DEBUG oslo_concurrency.lockutils [req-8f3dad73-06b9-4652-a9fe-888e9bac2123 req-6fcd0d96-4275-4bc9-8a40-d989bad6e08a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:29 np0005539564 nova_compute[226295]: 2025-11-29 08:08:29.640 226310 DEBUG oslo_concurrency.lockutils [req-8f3dad73-06b9-4652-a9fe-888e9bac2123 req-6fcd0d96-4275-4bc9-8a40-d989bad6e08a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:29 np0005539564 nova_compute[226295]: 2025-11-29 08:08:29.641 226310 DEBUG nova.compute.manager [req-8f3dad73-06b9-4652-a9fe-888e9bac2123 req-6fcd0d96-4275-4bc9-8a40-d989bad6e08a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] No waiting events found dispatching network-vif-plugged-7fa1d4da-9208-4220-ae2e-26aada9fc93b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:29 np0005539564 nova_compute[226295]: 2025-11-29 08:08:29.641 226310 WARNING nova.compute.manager [req-8f3dad73-06b9-4652-a9fe-888e9bac2123 req-6fcd0d96-4275-4bc9-8a40-d989bad6e08a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Received unexpected event network-vif-plugged-7fa1d4da-9208-4220-ae2e-26aada9fc93b for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:08:29 np0005539564 nova_compute[226295]: 2025-11-29 08:08:29.675 226310 DEBUG oslo_concurrency.processutils [None req-3b43eedc-21d8-4614-bd41-524d1f6758b3 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:29.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:08:30 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3379344817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:08:30 np0005539564 nova_compute[226295]: 2025-11-29 08:08:30.211 226310 DEBUG oslo_concurrency.processutils [None req-3b43eedc-21d8-4614-bd41-524d1f6758b3 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:30 np0005539564 nova_compute[226295]: 2025-11-29 08:08:30.220 226310 DEBUG nova.compute.provider_tree [None req-3b43eedc-21d8-4614-bd41-524d1f6758b3 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:08:30 np0005539564 nova_compute[226295]: 2025-11-29 08:08:30.239 226310 DEBUG nova.scheduler.client.report [None req-3b43eedc-21d8-4614-bd41-524d1f6758b3 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:08:30 np0005539564 nova_compute[226295]: 2025-11-29 08:08:30.268 226310 DEBUG oslo_concurrency.lockutils [None req-3b43eedc-21d8-4614-bd41-524d1f6758b3 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:30 np0005539564 nova_compute[226295]: 2025-11-29 08:08:30.309 226310 INFO nova.scheduler.client.report [None req-3b43eedc-21d8-4614-bd41-524d1f6758b3 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Deleted allocations for instance 14addc5e-27d3-46d3-a93f-b22b3f400873#033[00m
Nov 29 03:08:30 np0005539564 nova_compute[226295]: 2025-11-29 08:08:30.419 226310 DEBUG oslo_concurrency.lockutils [None req-3b43eedc-21d8-4614-bd41-524d1f6758b3 20d37020e7484e3ead9c61a89db491b1 a6104c57e0814f16958b14707debf843 - - default default] Lock "14addc5e-27d3-46d3-a93f-b22b3f400873" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:30 np0005539564 nova_compute[226295]: 2025-11-29 08:08:30.592 226310 DEBUG oslo_concurrency.lockutils [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "refresh_cache-9b9952a8-61d7-410f-9f29-081ff912c4cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:08:30 np0005539564 nova_compute[226295]: 2025-11-29 08:08:30.593 226310 DEBUG oslo_concurrency.lockutils [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquired lock "refresh_cache-9b9952a8-61d7-410f-9f29-081ff912c4cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:08:30 np0005539564 nova_compute[226295]: 2025-11-29 08:08:30.594 226310 DEBUG nova.network.neutron [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:08:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:31.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:31.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:31 np0005539564 nova_compute[226295]: 2025-11-29 08:08:31.797 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:31 np0005539564 nova_compute[226295]: 2025-11-29 08:08:31.811 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:32 np0005539564 nova_compute[226295]: 2025-11-29 08:08:32.053 226310 DEBUG nova.network.neutron [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Updating instance_info_cache with network_info: [{"id": "ccf625f8-471d-4406-9844-a3872b34137c", "address": "fa:16:3e:89:02:a4", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf625f8-47", "ovs_interfaceid": "ccf625f8-471d-4406-9844-a3872b34137c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:08:32 np0005539564 nova_compute[226295]: 2025-11-29 08:08:32.078 226310 DEBUG oslo_concurrency.lockutils [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Releasing lock "refresh_cache-9b9952a8-61d7-410f-9f29-081ff912c4cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:08:32 np0005539564 nova_compute[226295]: 2025-11-29 08:08:32.216 226310 DEBUG nova.virt.libvirt.driver [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Nov 29 03:08:32 np0005539564 nova_compute[226295]: 2025-11-29 08:08:32.217 226310 DEBUG nova.virt.libvirt.volume.remotefs [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Creating file /var/lib/nova/instances/9b9952a8-61d7-410f-9f29-081ff912c4cb/dd354d409e7f4d4a8147a205c97c8c89.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Nov 29 03:08:32 np0005539564 nova_compute[226295]: 2025-11-29 08:08:32.217 226310 DEBUG oslo_concurrency.processutils [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/9b9952a8-61d7-410f-9f29-081ff912c4cb/dd354d409e7f4d4a8147a205c97c8c89.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:32 np0005539564 nova_compute[226295]: 2025-11-29 08:08:32.851 226310 DEBUG oslo_concurrency.processutils [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/9b9952a8-61d7-410f-9f29-081ff912c4cb/dd354d409e7f4d4a8147a205c97c8c89.tmp" returned: 1 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:32 np0005539564 nova_compute[226295]: 2025-11-29 08:08:32.852 226310 DEBUG oslo_concurrency.processutils [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/9b9952a8-61d7-410f-9f29-081ff912c4cb/dd354d409e7f4d4a8147a205c97c8c89.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 03:08:32 np0005539564 nova_compute[226295]: 2025-11-29 08:08:32.853 226310 DEBUG nova.virt.libvirt.volume.remotefs [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Creating directory /var/lib/nova/instances/9b9952a8-61d7-410f-9f29-081ff912c4cb on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Nov 29 03:08:32 np0005539564 nova_compute[226295]: 2025-11-29 08:08:32.854 226310 DEBUG oslo_concurrency.processutils [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/9b9952a8-61d7-410f-9f29-081ff912c4cb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:33 np0005539564 nova_compute[226295]: 2025-11-29 08:08:33.131 226310 DEBUG oslo_concurrency.processutils [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/9b9952a8-61d7-410f-9f29-081ff912c4cb" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:33 np0005539564 nova_compute[226295]: 2025-11-29 08:08:33.141 226310 DEBUG nova.virt.libvirt.driver [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:08:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:33.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:33.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:34 np0005539564 nova_compute[226295]: 2025-11-29 08:08:34.281 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:35.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:35.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:35.844 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:36 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:36Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:89:02:a4 10.100.0.9
Nov 29 03:08:36 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:36Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:89:02:a4 10.100.0.9
Nov 29 03:08:36 np0005539564 nova_compute[226295]: 2025-11-29 08:08:36.802 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:36 np0005539564 nova_compute[226295]: 2025-11-29 08:08:36.814 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:37.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:37.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:38 np0005539564 nova_compute[226295]: 2025-11-29 08:08:38.079 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:39 np0005539564 nova_compute[226295]: 2025-11-29 08:08:39.273 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:08:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:39.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:08:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:39.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:40 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:40Z|00304|binding|INFO|Releasing lport e0f892e1-f1e8-4b29-8918-6cd036b9e8e0 from this chassis (sb_readonly=0)
Nov 29 03:08:40 np0005539564 nova_compute[226295]: 2025-11-29 08:08:40.794 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:41 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:41Z|00305|binding|INFO|Releasing lport e0f892e1-f1e8-4b29-8918-6cd036b9e8e0 from this chassis (sb_readonly=0)
Nov 29 03:08:41 np0005539564 nova_compute[226295]: 2025-11-29 08:08:41.115 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:41 np0005539564 nova_compute[226295]: 2025-11-29 08:08:41.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:41.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:41.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:41 np0005539564 nova_compute[226295]: 2025-11-29 08:08:41.768 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403706.7665825, 14addc5e-27d3-46d3-a93f-b22b3f400873 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:08:41 np0005539564 nova_compute[226295]: 2025-11-29 08:08:41.768 226310 INFO nova.compute.manager [-] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:08:41 np0005539564 nova_compute[226295]: 2025-11-29 08:08:41.799 226310 DEBUG nova.compute.manager [None req-b94207a6-13f2-4d40-bc96-495b33210ad9 - - - - - -] [instance: 14addc5e-27d3-46d3-a93f-b22b3f400873] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:41 np0005539564 nova_compute[226295]: 2025-11-29 08:08:41.804 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:41 np0005539564 nova_compute[226295]: 2025-11-29 08:08:41.816 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:42 np0005539564 nova_compute[226295]: 2025-11-29 08:08:42.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:42 np0005539564 nova_compute[226295]: 2025-11-29 08:08:42.342 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:08:42 np0005539564 nova_compute[226295]: 2025-11-29 08:08:42.342 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:08:42 np0005539564 nova_compute[226295]: 2025-11-29 08:08:42.364 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-9b9952a8-61d7-410f-9f29-081ff912c4cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:08:42 np0005539564 nova_compute[226295]: 2025-11-29 08:08:42.365 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-9b9952a8-61d7-410f-9f29-081ff912c4cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:08:42 np0005539564 nova_compute[226295]: 2025-11-29 08:08:42.365 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:08:42 np0005539564 nova_compute[226295]: 2025-11-29 08:08:42.366 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9b9952a8-61d7-410f-9f29-081ff912c4cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:43 np0005539564 nova_compute[226295]: 2025-11-29 08:08:43.197 226310 DEBUG nova.virt.libvirt.driver [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:08:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:43.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:43.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:44 np0005539564 nova_compute[226295]: 2025-11-29 08:08:44.017 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Updating instance_info_cache with network_info: [{"id": "ccf625f8-471d-4406-9844-a3872b34137c", "address": "fa:16:3e:89:02:a4", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf625f8-47", "ovs_interfaceid": "ccf625f8-471d-4406-9844-a3872b34137c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:08:44 np0005539564 nova_compute[226295]: 2025-11-29 08:08:44.040 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-9b9952a8-61d7-410f-9f29-081ff912c4cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:08:44 np0005539564 nova_compute[226295]: 2025-11-29 08:08:44.040 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:08:44 np0005539564 nova_compute[226295]: 2025-11-29 08:08:44.040 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:44 np0005539564 nova_compute[226295]: 2025-11-29 08:08:44.041 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:44 np0005539564 nova_compute[226295]: 2025-11-29 08:08:44.041 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:08:45 np0005539564 nova_compute[226295]: 2025-11-29 08:08:45.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:08:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:45.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:08:45 np0005539564 kernel: tapccf625f8-47 (unregistering): left promiscuous mode
Nov 29 03:08:45 np0005539564 NetworkManager[48997]: <info>  [1764403725.5428] device (tapccf625f8-47): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:08:45 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:45Z|00306|binding|INFO|Releasing lport ccf625f8-471d-4406-9844-a3872b34137c from this chassis (sb_readonly=0)
Nov 29 03:08:45 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:45Z|00307|binding|INFO|Setting lport ccf625f8-471d-4406-9844-a3872b34137c down in Southbound
Nov 29 03:08:45 np0005539564 nova_compute[226295]: 2025-11-29 08:08:45.553 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:45 np0005539564 ovn_controller[130591]: 2025-11-29T08:08:45Z|00308|binding|INFO|Removing iface tapccf625f8-47 ovn-installed in OVS
Nov 29 03:08:45 np0005539564 nova_compute[226295]: 2025-11-29 08:08:45.556 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:45.568 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:02:a4 10.100.0.9'], port_security=['fa:16:3e:89:02:a4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9b9952a8-61d7-410f-9f29-081ff912c4cb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8665acc6-1650-4878-8ffd-84f079f13741', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '750bde86c9c7473fbf7f0a6a3b16cec1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8b143d91-a9e2-433e-a887-8851c4d95ae6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14735bae-f089-4bfd-bad1-f5ab455915a0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=ccf625f8-471d-4406-9844-a3872b34137c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:08:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:45.571 139780 INFO neutron.agent.ovn.metadata.agent [-] Port ccf625f8-471d-4406-9844-a3872b34137c in datapath 8665acc6-1650-4878-8ffd-84f079f13741 unbound from our chassis#033[00m
Nov 29 03:08:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:45.575 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8665acc6-1650-4878-8ffd-84f079f13741, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:08:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:45.576 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9ffed635-e8b6-45f2-9ce1-dd0906637e34]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:45 np0005539564 nova_compute[226295]: 2025-11-29 08:08:45.576 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:45.578 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 namespace which is not needed anymore#033[00m
Nov 29 03:08:45 np0005539564 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000058.scope: Deactivated successfully.
Nov 29 03:08:45 np0005539564 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000058.scope: Consumed 14.704s CPU time.
Nov 29 03:08:45 np0005539564 systemd-machined[190128]: Machine qemu-38-instance-00000058 terminated.
Nov 29 03:08:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:45 np0005539564 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[258959]: [NOTICE]   (259003) : haproxy version is 2.8.14-c23fe91
Nov 29 03:08:45 np0005539564 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[258959]: [NOTICE]   (259003) : path to executable is /usr/sbin/haproxy
Nov 29 03:08:45 np0005539564 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[258959]: [WARNING]  (259003) : Exiting Master process...
Nov 29 03:08:45 np0005539564 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[258959]: [ALERT]    (259003) : Current worker (259013) exited with code 143 (Terminated)
Nov 29 03:08:45 np0005539564 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[258959]: [WARNING]  (259003) : All workers exited. Exiting... (0)
Nov 29 03:08:45 np0005539564 systemd[1]: libpod-9af572a93cac2565d69a97f766c3de7757c9d1ce28811d4f4cc78413a872b9cf.scope: Deactivated successfully.
Nov 29 03:08:45 np0005539564 podman[259239]: 2025-11-29 08:08:45.738150375 +0000 UTC m=+0.054980193 container died 9af572a93cac2565d69a97f766c3de7757c9d1ce28811d4f4cc78413a872b9cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:08:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:08:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:45.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:08:45 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9af572a93cac2565d69a97f766c3de7757c9d1ce28811d4f4cc78413a872b9cf-userdata-shm.mount: Deactivated successfully.
Nov 29 03:08:45 np0005539564 systemd[1]: var-lib-containers-storage-overlay-7b5ce517ba7faf9db1e0ac3cbee680b91c1e0a6bd95b800259702075bdef630e-merged.mount: Deactivated successfully.
Nov 29 03:08:45 np0005539564 podman[259239]: 2025-11-29 08:08:45.848098422 +0000 UTC m=+0.164928250 container cleanup 9af572a93cac2565d69a97f766c3de7757c9d1ce28811d4f4cc78413a872b9cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:08:45 np0005539564 systemd[1]: libpod-conmon-9af572a93cac2565d69a97f766c3de7757c9d1ce28811d4f4cc78413a872b9cf.scope: Deactivated successfully.
Nov 29 03:08:45 np0005539564 podman[259278]: 2025-11-29 08:08:45.936596358 +0000 UTC m=+0.062009213 container remove 9af572a93cac2565d69a97f766c3de7757c9d1ce28811d4f4cc78413a872b9cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:08:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:45.945 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[413817ab-6414-413f-b763-c5d72504c559]: (4, ('Sat Nov 29 08:08:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 (9af572a93cac2565d69a97f766c3de7757c9d1ce28811d4f4cc78413a872b9cf)\n9af572a93cac2565d69a97f766c3de7757c9d1ce28811d4f4cc78413a872b9cf\nSat Nov 29 08:08:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 (9af572a93cac2565d69a97f766c3de7757c9d1ce28811d4f4cc78413a872b9cf)\n9af572a93cac2565d69a97f766c3de7757c9d1ce28811d4f4cc78413a872b9cf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:45.947 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6f5e5b76-09fc-4263-963a-4b7b1dd944e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:45.948 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8665acc6-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:45 np0005539564 nova_compute[226295]: 2025-11-29 08:08:45.949 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:45 np0005539564 kernel: tap8665acc6-10: left promiscuous mode
Nov 29 03:08:45 np0005539564 nova_compute[226295]: 2025-11-29 08:08:45.963 226310 DEBUG nova.compute.manager [req-fe7dc949-2972-446c-a29b-fa9b0df04d8c req-f816d4f0-fc2e-419e-8dae-ffefbbe6206d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Received event network-vif-unplugged-ccf625f8-471d-4406-9844-a3872b34137c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:45 np0005539564 nova_compute[226295]: 2025-11-29 08:08:45.964 226310 DEBUG oslo_concurrency.lockutils [req-fe7dc949-2972-446c-a29b-fa9b0df04d8c req-f816d4f0-fc2e-419e-8dae-ffefbbe6206d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:45 np0005539564 nova_compute[226295]: 2025-11-29 08:08:45.964 226310 DEBUG oslo_concurrency.lockutils [req-fe7dc949-2972-446c-a29b-fa9b0df04d8c req-f816d4f0-fc2e-419e-8dae-ffefbbe6206d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:45 np0005539564 nova_compute[226295]: 2025-11-29 08:08:45.964 226310 DEBUG oslo_concurrency.lockutils [req-fe7dc949-2972-446c-a29b-fa9b0df04d8c req-f816d4f0-fc2e-419e-8dae-ffefbbe6206d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:45 np0005539564 nova_compute[226295]: 2025-11-29 08:08:45.965 226310 DEBUG nova.compute.manager [req-fe7dc949-2972-446c-a29b-fa9b0df04d8c req-f816d4f0-fc2e-419e-8dae-ffefbbe6206d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] No waiting events found dispatching network-vif-unplugged-ccf625f8-471d-4406-9844-a3872b34137c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:45 np0005539564 nova_compute[226295]: 2025-11-29 08:08:45.965 226310 WARNING nova.compute.manager [req-fe7dc949-2972-446c-a29b-fa9b0df04d8c req-f816d4f0-fc2e-419e-8dae-ffefbbe6206d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Received unexpected event network-vif-unplugged-ccf625f8-471d-4406-9844-a3872b34137c for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:08:45 np0005539564 nova_compute[226295]: 2025-11-29 08:08:45.974 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:45.976 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[254247e5-6dd2-442c-9fa1-e42f1bf53869]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:45.992 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[690a7fd9-ae68-4c4e-a13e-d7d7a28b74b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:45.997 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[09ad7ab1-ead3-41f8-986c-da933c64267b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:46.011 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c840d10d-8a78-49e7-9e05-f13e997251d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667600, 'reachable_time': 44688, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259294, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:46.015 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:08:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:08:46.015 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[ea2f6d55-acc8-434f-804f-6960ac97d8af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:46 np0005539564 systemd[1]: run-netns-ovnmeta\x2d8665acc6\x2d1650\x2d4878\x2d8ffd\x2d84f079f13741.mount: Deactivated successfully.
Nov 29 03:08:46 np0005539564 nova_compute[226295]: 2025-11-29 08:08:46.215 226310 INFO nova.virt.libvirt.driver [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Instance shutdown successfully after 13 seconds.#033[00m
Nov 29 03:08:46 np0005539564 nova_compute[226295]: 2025-11-29 08:08:46.223 226310 INFO nova.virt.libvirt.driver [-] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Instance destroyed successfully.#033[00m
Nov 29 03:08:46 np0005539564 nova_compute[226295]: 2025-11-29 08:08:46.225 226310 DEBUG nova.virt.libvirt.vif [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:08:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1617536379',display_name='tempest-ServerDiskConfigTestJSON-server-1617536379',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1617536379',id=88,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:08:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-indog4zv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDiskConfigTestJSON-904422786-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:08:29Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=9b9952a8-61d7-410f-9f29-081ff912c4cb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ccf625f8-471d-4406-9844-a3872b34137c", "address": "fa:16:3e:89:02:a4", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "vif_mac": "fa:16:3e:89:02:a4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf625f8-47", "ovs_interfaceid": "ccf625f8-471d-4406-9844-a3872b34137c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:08:46 np0005539564 nova_compute[226295]: 2025-11-29 08:08:46.225 226310 DEBUG nova.network.os_vif_util [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "ccf625f8-471d-4406-9844-a3872b34137c", "address": "fa:16:3e:89:02:a4", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "vif_mac": "fa:16:3e:89:02:a4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf625f8-47", "ovs_interfaceid": "ccf625f8-471d-4406-9844-a3872b34137c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:08:46 np0005539564 nova_compute[226295]: 2025-11-29 08:08:46.227 226310 DEBUG nova.network.os_vif_util [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:89:02:a4,bridge_name='br-int',has_traffic_filtering=True,id=ccf625f8-471d-4406-9844-a3872b34137c,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf625f8-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:08:46 np0005539564 nova_compute[226295]: 2025-11-29 08:08:46.228 226310 DEBUG os_vif [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:02:a4,bridge_name='br-int',has_traffic_filtering=True,id=ccf625f8-471d-4406-9844-a3872b34137c,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf625f8-47') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:08:46 np0005539564 nova_compute[226295]: 2025-11-29 08:08:46.231 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:46 np0005539564 nova_compute[226295]: 2025-11-29 08:08:46.232 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapccf625f8-47, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:46 np0005539564 nova_compute[226295]: 2025-11-29 08:08:46.234 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:46 np0005539564 nova_compute[226295]: 2025-11-29 08:08:46.237 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:08:46 np0005539564 nova_compute[226295]: 2025-11-29 08:08:46.240 226310 INFO os_vif [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:02:a4,bridge_name='br-int',has_traffic_filtering=True,id=ccf625f8-471d-4406-9844-a3872b34137c,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf625f8-47')#033[00m
Nov 29 03:08:46 np0005539564 nova_compute[226295]: 2025-11-29 08:08:46.247 226310 DEBUG nova.virt.libvirt.driver [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:08:46 np0005539564 nova_compute[226295]: 2025-11-29 08:08:46.247 226310 DEBUG nova.virt.libvirt.driver [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:08:46 np0005539564 nova_compute[226295]: 2025-11-29 08:08:46.544 226310 DEBUG neutronclient.v2_0.client [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port ccf625f8-471d-4406-9844-a3872b34137c for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:08:46 np0005539564 nova_compute[226295]: 2025-11-29 08:08:46.699 226310 DEBUG oslo_concurrency.lockutils [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:46 np0005539564 nova_compute[226295]: 2025-11-29 08:08:46.700 226310 DEBUG oslo_concurrency.lockutils [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:46 np0005539564 nova_compute[226295]: 2025-11-29 08:08:46.700 226310 DEBUG oslo_concurrency.lockutils [None req-172ab6d9-7f94-4fcf-b64a-19dbbdd4d515 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:46 np0005539564 nova_compute[226295]: 2025-11-29 08:08:46.819 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:47 np0005539564 nova_compute[226295]: 2025-11-29 08:08:47.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:47.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:08:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:47.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:08:48 np0005539564 nova_compute[226295]: 2025-11-29 08:08:48.130 226310 DEBUG nova.compute.manager [req-51e91a04-ac98-4d28-a53a-b83ec7465de7 req-cbf97e41-cc94-4707-9eb3-e9515bcfe86c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Received event network-vif-plugged-ccf625f8-471d-4406-9844-a3872b34137c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:48 np0005539564 nova_compute[226295]: 2025-11-29 08:08:48.131 226310 DEBUG oslo_concurrency.lockutils [req-51e91a04-ac98-4d28-a53a-b83ec7465de7 req-cbf97e41-cc94-4707-9eb3-e9515bcfe86c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:48 np0005539564 nova_compute[226295]: 2025-11-29 08:08:48.131 226310 DEBUG oslo_concurrency.lockutils [req-51e91a04-ac98-4d28-a53a-b83ec7465de7 req-cbf97e41-cc94-4707-9eb3-e9515bcfe86c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:48 np0005539564 nova_compute[226295]: 2025-11-29 08:08:48.132 226310 DEBUG oslo_concurrency.lockutils [req-51e91a04-ac98-4d28-a53a-b83ec7465de7 req-cbf97e41-cc94-4707-9eb3-e9515bcfe86c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:48 np0005539564 nova_compute[226295]: 2025-11-29 08:08:48.132 226310 DEBUG nova.compute.manager [req-51e91a04-ac98-4d28-a53a-b83ec7465de7 req-cbf97e41-cc94-4707-9eb3-e9515bcfe86c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] No waiting events found dispatching network-vif-plugged-ccf625f8-471d-4406-9844-a3872b34137c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:48 np0005539564 nova_compute[226295]: 2025-11-29 08:08:48.132 226310 WARNING nova.compute.manager [req-51e91a04-ac98-4d28-a53a-b83ec7465de7 req-cbf97e41-cc94-4707-9eb3-e9515bcfe86c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Received unexpected event network-vif-plugged-ccf625f8-471d-4406-9844-a3872b34137c for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:08:49 np0005539564 nova_compute[226295]: 2025-11-29 08:08:49.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:49.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:49 np0005539564 nova_compute[226295]: 2025-11-29 08:08:49.527 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:49 np0005539564 nova_compute[226295]: 2025-11-29 08:08:49.528 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:49 np0005539564 nova_compute[226295]: 2025-11-29 08:08:49.528 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:49 np0005539564 nova_compute[226295]: 2025-11-29 08:08:49.528 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:08:49 np0005539564 nova_compute[226295]: 2025-11-29 08:08:49.528 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:49.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:08:49 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3934048991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:08:49 np0005539564 nova_compute[226295]: 2025-11-29 08:08:49.993 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:50 np0005539564 nova_compute[226295]: 2025-11-29 08:08:50.079 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:08:50 np0005539564 nova_compute[226295]: 2025-11-29 08:08:50.079 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:08:50 np0005539564 nova_compute[226295]: 2025-11-29 08:08:50.319 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:08:50 np0005539564 nova_compute[226295]: 2025-11-29 08:08:50.321 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4603MB free_disk=20.910842895507812GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:08:50 np0005539564 nova_compute[226295]: 2025-11-29 08:08:50.321 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:50 np0005539564 nova_compute[226295]: 2025-11-29 08:08:50.321 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:50 np0005539564 nova_compute[226295]: 2025-11-29 08:08:50.368 226310 DEBUG nova.compute.manager [req-da88255c-3799-4601-8ee2-17259d07da28 req-d85a836d-c49c-4dab-8978-cbee59acdff1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Received event network-changed-ccf625f8-471d-4406-9844-a3872b34137c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:50 np0005539564 nova_compute[226295]: 2025-11-29 08:08:50.368 226310 DEBUG nova.compute.manager [req-da88255c-3799-4601-8ee2-17259d07da28 req-d85a836d-c49c-4dab-8978-cbee59acdff1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Refreshing instance network info cache due to event network-changed-ccf625f8-471d-4406-9844-a3872b34137c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:08:50 np0005539564 nova_compute[226295]: 2025-11-29 08:08:50.369 226310 DEBUG oslo_concurrency.lockutils [req-da88255c-3799-4601-8ee2-17259d07da28 req-d85a836d-c49c-4dab-8978-cbee59acdff1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-9b9952a8-61d7-410f-9f29-081ff912c4cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:08:50 np0005539564 nova_compute[226295]: 2025-11-29 08:08:50.369 226310 DEBUG oslo_concurrency.lockutils [req-da88255c-3799-4601-8ee2-17259d07da28 req-d85a836d-c49c-4dab-8978-cbee59acdff1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-9b9952a8-61d7-410f-9f29-081ff912c4cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:08:50 np0005539564 nova_compute[226295]: 2025-11-29 08:08:50.369 226310 DEBUG nova.network.neutron [req-da88255c-3799-4601-8ee2-17259d07da28 req-d85a836d-c49c-4dab-8978-cbee59acdff1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Refreshing network info cache for port ccf625f8-471d-4406-9844-a3872b34137c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:08:50 np0005539564 nova_compute[226295]: 2025-11-29 08:08:50.377 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Migration for instance 9b9952a8-61d7-410f-9f29-081ff912c4cb refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Nov 29 03:08:50 np0005539564 nova_compute[226295]: 2025-11-29 08:08:50.417 226310 INFO nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Updating resource usage from migration 58f403b9-08e9-4f12-8ffa-9a463890ee42#033[00m
Nov 29 03:08:50 np0005539564 nova_compute[226295]: 2025-11-29 08:08:50.418 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Starting to track outgoing migration 58f403b9-08e9-4f12-8ffa-9a463890ee42 with flavor b3f6a6d1-4abb-4332-8391-2e39c8fa168a _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444#033[00m
Nov 29 03:08:50 np0005539564 nova_compute[226295]: 2025-11-29 08:08:50.454 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Migration 58f403b9-08e9-4f12-8ffa-9a463890ee42 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Nov 29 03:08:50 np0005539564 nova_compute[226295]: 2025-11-29 08:08:50.455 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:08:50 np0005539564 nova_compute[226295]: 2025-11-29 08:08:50.455 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:08:50 np0005539564 nova_compute[226295]: 2025-11-29 08:08:50.497 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:08:50 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3342535025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:08:50 np0005539564 nova_compute[226295]: 2025-11-29 08:08:50.980 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:50 np0005539564 nova_compute[226295]: 2025-11-29 08:08:50.988 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:08:51 np0005539564 nova_compute[226295]: 2025-11-29 08:08:51.010 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:08:51 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e263 e263: 3 total, 3 up, 3 in
Nov 29 03:08:51 np0005539564 nova_compute[226295]: 2025-11-29 08:08:51.047 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:08:51 np0005539564 nova_compute[226295]: 2025-11-29 08:08:51.048 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:51 np0005539564 nova_compute[226295]: 2025-11-29 08:08:51.235 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:51.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:08:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:51.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:08:51 np0005539564 nova_compute[226295]: 2025-11-29 08:08:51.822 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:52 np0005539564 nova_compute[226295]: 2025-11-29 08:08:52.049 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:52 np0005539564 podman[259341]: 2025-11-29 08:08:52.512365105 +0000 UTC m=+0.070048671 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 03:08:52 np0005539564 podman[259342]: 2025-11-29 08:08:52.528548302 +0000 UTC m=+0.074253564 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:08:52 np0005539564 nova_compute[226295]: 2025-11-29 08:08:52.545 226310 DEBUG nova.network.neutron [req-da88255c-3799-4601-8ee2-17259d07da28 req-d85a836d-c49c-4dab-8978-cbee59acdff1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Updated VIF entry in instance network info cache for port ccf625f8-471d-4406-9844-a3872b34137c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:08:52 np0005539564 nova_compute[226295]: 2025-11-29 08:08:52.545 226310 DEBUG nova.network.neutron [req-da88255c-3799-4601-8ee2-17259d07da28 req-d85a836d-c49c-4dab-8978-cbee59acdff1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Updating instance_info_cache with network_info: [{"id": "ccf625f8-471d-4406-9844-a3872b34137c", "address": "fa:16:3e:89:02:a4", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf625f8-47", "ovs_interfaceid": "ccf625f8-471d-4406-9844-a3872b34137c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:08:52 np0005539564 nova_compute[226295]: 2025-11-29 08:08:52.571 226310 DEBUG oslo_concurrency.lockutils [req-da88255c-3799-4601-8ee2-17259d07da28 req-d85a836d-c49c-4dab-8978-cbee59acdff1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-9b9952a8-61d7-410f-9f29-081ff912c4cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:08:52 np0005539564 podman[259340]: 2025-11-29 08:08:52.582524588 +0000 UTC m=+0.134986183 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:08:53 np0005539564 nova_compute[226295]: 2025-11-29 08:08:53.037 226310 DEBUG nova.compute.manager [req-91272926-3806-4e88-8f77-4286641b5b50 req-79fb9ce2-91fb-4073-afa8-5723c0fba76d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Received event network-vif-plugged-ccf625f8-471d-4406-9844-a3872b34137c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:53 np0005539564 nova_compute[226295]: 2025-11-29 08:08:53.037 226310 DEBUG oslo_concurrency.lockutils [req-91272926-3806-4e88-8f77-4286641b5b50 req-79fb9ce2-91fb-4073-afa8-5723c0fba76d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:53 np0005539564 nova_compute[226295]: 2025-11-29 08:08:53.038 226310 DEBUG oslo_concurrency.lockutils [req-91272926-3806-4e88-8f77-4286641b5b50 req-79fb9ce2-91fb-4073-afa8-5723c0fba76d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:53 np0005539564 nova_compute[226295]: 2025-11-29 08:08:53.038 226310 DEBUG oslo_concurrency.lockutils [req-91272926-3806-4e88-8f77-4286641b5b50 req-79fb9ce2-91fb-4073-afa8-5723c0fba76d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:53 np0005539564 nova_compute[226295]: 2025-11-29 08:08:53.038 226310 DEBUG nova.compute.manager [req-91272926-3806-4e88-8f77-4286641b5b50 req-79fb9ce2-91fb-4073-afa8-5723c0fba76d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] No waiting events found dispatching network-vif-plugged-ccf625f8-471d-4406-9844-a3872b34137c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:53 np0005539564 nova_compute[226295]: 2025-11-29 08:08:53.039 226310 WARNING nova.compute.manager [req-91272926-3806-4e88-8f77-4286641b5b50 req-79fb9ce2-91fb-4073-afa8-5723c0fba76d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Received unexpected event network-vif-plugged-ccf625f8-471d-4406-9844-a3872b34137c for instance with vm_state active and task_state resize_finish.#033[00m
Nov 29 03:08:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:53.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:53.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:54 np0005539564 nova_compute[226295]: 2025-11-29 08:08:54.666 226310 DEBUG oslo_concurrency.lockutils [None req-24d2e8ed-21da-4986-9f0d-1b4f789c0a21 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "9b9952a8-61d7-410f-9f29-081ff912c4cb" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:54 np0005539564 nova_compute[226295]: 2025-11-29 08:08:54.667 226310 DEBUG oslo_concurrency.lockutils [None req-24d2e8ed-21da-4986-9f0d-1b4f789c0a21 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:54 np0005539564 nova_compute[226295]: 2025-11-29 08:08:54.667 226310 DEBUG nova.compute.manager [None req-24d2e8ed-21da-4986-9f0d-1b4f789c0a21 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Going to confirm migration 13 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Nov 29 03:08:55 np0005539564 nova_compute[226295]: 2025-11-29 08:08:55.275 226310 DEBUG neutronclient.v2_0.client [None req-24d2e8ed-21da-4986-9f0d-1b4f789c0a21 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port ccf625f8-471d-4406-9844-a3872b34137c for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:08:55 np0005539564 nova_compute[226295]: 2025-11-29 08:08:55.276 226310 DEBUG oslo_concurrency.lockutils [None req-24d2e8ed-21da-4986-9f0d-1b4f789c0a21 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "refresh_cache-9b9952a8-61d7-410f-9f29-081ff912c4cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:08:55 np0005539564 nova_compute[226295]: 2025-11-29 08:08:55.277 226310 DEBUG oslo_concurrency.lockutils [None req-24d2e8ed-21da-4986-9f0d-1b4f789c0a21 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquired lock "refresh_cache-9b9952a8-61d7-410f-9f29-081ff912c4cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:08:55 np0005539564 nova_compute[226295]: 2025-11-29 08:08:55.277 226310 DEBUG nova.network.neutron [None req-24d2e8ed-21da-4986-9f0d-1b4f789c0a21 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:08:55 np0005539564 nova_compute[226295]: 2025-11-29 08:08:55.277 226310 DEBUG nova.objects.instance [None req-24d2e8ed-21da-4986-9f0d-1b4f789c0a21 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'info_cache' on Instance uuid 9b9952a8-61d7-410f-9f29-081ff912c4cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:55 np0005539564 nova_compute[226295]: 2025-11-29 08:08:55.283 226310 DEBUG nova.compute.manager [req-a5709866-5004-4858-b819-57540d62ac76 req-ebc26bb5-01cd-453b-ad1f-2dac408bd281 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Received event network-vif-plugged-ccf625f8-471d-4406-9844-a3872b34137c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:55 np0005539564 nova_compute[226295]: 2025-11-29 08:08:55.283 226310 DEBUG oslo_concurrency.lockutils [req-a5709866-5004-4858-b819-57540d62ac76 req-ebc26bb5-01cd-453b-ad1f-2dac408bd281 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:55 np0005539564 nova_compute[226295]: 2025-11-29 08:08:55.284 226310 DEBUG oslo_concurrency.lockutils [req-a5709866-5004-4858-b819-57540d62ac76 req-ebc26bb5-01cd-453b-ad1f-2dac408bd281 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:55 np0005539564 nova_compute[226295]: 2025-11-29 08:08:55.284 226310 DEBUG oslo_concurrency.lockutils [req-a5709866-5004-4858-b819-57540d62ac76 req-ebc26bb5-01cd-453b-ad1f-2dac408bd281 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:55 np0005539564 nova_compute[226295]: 2025-11-29 08:08:55.285 226310 DEBUG nova.compute.manager [req-a5709866-5004-4858-b819-57540d62ac76 req-ebc26bb5-01cd-453b-ad1f-2dac408bd281 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] No waiting events found dispatching network-vif-plugged-ccf625f8-471d-4406-9844-a3872b34137c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:55 np0005539564 nova_compute[226295]: 2025-11-29 08:08:55.285 226310 WARNING nova.compute.manager [req-a5709866-5004-4858-b819-57540d62ac76 req-ebc26bb5-01cd-453b-ad1f-2dac408bd281 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Received unexpected event network-vif-plugged-ccf625f8-471d-4406-9844-a3872b34137c for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:08:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:55.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:55.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:56 np0005539564 nova_compute[226295]: 2025-11-29 08:08:56.238 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:56 np0005539564 nova_compute[226295]: 2025-11-29 08:08:56.877 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:57 np0005539564 nova_compute[226295]: 2025-11-29 08:08:57.142 226310 DEBUG nova.network.neutron [None req-24d2e8ed-21da-4986-9f0d-1b4f789c0a21 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Updating instance_info_cache with network_info: [{"id": "ccf625f8-471d-4406-9844-a3872b34137c", "address": "fa:16:3e:89:02:a4", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf625f8-47", "ovs_interfaceid": "ccf625f8-471d-4406-9844-a3872b34137c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:08:57 np0005539564 nova_compute[226295]: 2025-11-29 08:08:57.178 226310 DEBUG oslo_concurrency.lockutils [None req-24d2e8ed-21da-4986-9f0d-1b4f789c0a21 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Releasing lock "refresh_cache-9b9952a8-61d7-410f-9f29-081ff912c4cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:08:57 np0005539564 nova_compute[226295]: 2025-11-29 08:08:57.178 226310 DEBUG nova.objects.instance [None req-24d2e8ed-21da-4986-9f0d-1b4f789c0a21 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b9952a8-61d7-410f-9f29-081ff912c4cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:57 np0005539564 nova_compute[226295]: 2025-11-29 08:08:57.304 226310 DEBUG nova.storage.rbd_utils [None req-24d2e8ed-21da-4986-9f0d-1b4f789c0a21 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] removing snapshot(nova-resize) on rbd image(9b9952a8-61d7-410f-9f29-081ff912c4cb_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:08:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:57.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:57.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e264 e264: 3 total, 3 up, 3 in
Nov 29 03:08:58 np0005539564 nova_compute[226295]: 2025-11-29 08:08:58.218 226310 DEBUG nova.virt.libvirt.vif [None req-24d2e8ed-21da-4986-9f0d-1b4f789c0a21 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:08:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1617536379',display_name='tempest-ServerDiskConfigTestJSON-server-1617536379',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1617536379',id=88,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:08:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-indog4zv',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDiskConfigTestJSON-904422786-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:08:53Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=9b9952a8-61d7-410f-9f29-081ff912c4cb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "ccf625f8-471d-4406-9844-a3872b34137c", "address": "fa:16:3e:89:02:a4", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf625f8-47", "ovs_interfaceid": "ccf625f8-471d-4406-9844-a3872b34137c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:08:58 np0005539564 nova_compute[226295]: 2025-11-29 08:08:58.220 226310 DEBUG nova.network.os_vif_util [None req-24d2e8ed-21da-4986-9f0d-1b4f789c0a21 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "ccf625f8-471d-4406-9844-a3872b34137c", "address": "fa:16:3e:89:02:a4", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf625f8-47", "ovs_interfaceid": "ccf625f8-471d-4406-9844-a3872b34137c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:08:58 np0005539564 nova_compute[226295]: 2025-11-29 08:08:58.221 226310 DEBUG nova.network.os_vif_util [None req-24d2e8ed-21da-4986-9f0d-1b4f789c0a21 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:89:02:a4,bridge_name='br-int',has_traffic_filtering=True,id=ccf625f8-471d-4406-9844-a3872b34137c,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf625f8-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:08:58 np0005539564 nova_compute[226295]: 2025-11-29 08:08:58.222 226310 DEBUG os_vif [None req-24d2e8ed-21da-4986-9f0d-1b4f789c0a21 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:02:a4,bridge_name='br-int',has_traffic_filtering=True,id=ccf625f8-471d-4406-9844-a3872b34137c,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf625f8-47') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:08:58 np0005539564 nova_compute[226295]: 2025-11-29 08:08:58.225 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:58 np0005539564 nova_compute[226295]: 2025-11-29 08:08:58.226 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapccf625f8-47, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:58 np0005539564 nova_compute[226295]: 2025-11-29 08:08:58.227 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:08:58 np0005539564 nova_compute[226295]: 2025-11-29 08:08:58.232 226310 INFO os_vif [None req-24d2e8ed-21da-4986-9f0d-1b4f789c0a21 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:02:a4,bridge_name='br-int',has_traffic_filtering=True,id=ccf625f8-471d-4406-9844-a3872b34137c,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf625f8-47')#033[00m
Nov 29 03:08:58 np0005539564 nova_compute[226295]: 2025-11-29 08:08:58.234 226310 DEBUG oslo_concurrency.lockutils [None req-24d2e8ed-21da-4986-9f0d-1b4f789c0a21 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:58 np0005539564 nova_compute[226295]: 2025-11-29 08:08:58.234 226310 DEBUG oslo_concurrency.lockutils [None req-24d2e8ed-21da-4986-9f0d-1b4f789c0a21 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:58 np0005539564 nova_compute[226295]: 2025-11-29 08:08:58.349 226310 DEBUG oslo_concurrency.processutils [None req-24d2e8ed-21da-4986-9f0d-1b4f789c0a21 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:08:58 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2440657159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:08:58 np0005539564 nova_compute[226295]: 2025-11-29 08:08:58.848 226310 DEBUG oslo_concurrency.processutils [None req-24d2e8ed-21da-4986-9f0d-1b4f789c0a21 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:58 np0005539564 nova_compute[226295]: 2025-11-29 08:08:58.858 226310 DEBUG nova.compute.provider_tree [None req-24d2e8ed-21da-4986-9f0d-1b4f789c0a21 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:08:58 np0005539564 nova_compute[226295]: 2025-11-29 08:08:58.887 226310 DEBUG nova.scheduler.client.report [None req-24d2e8ed-21da-4986-9f0d-1b4f789c0a21 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:08:58 np0005539564 nova_compute[226295]: 2025-11-29 08:08:58.956 226310 DEBUG oslo_concurrency.lockutils [None req-24d2e8ed-21da-4986-9f0d-1b4f789c0a21 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:59 np0005539564 nova_compute[226295]: 2025-11-29 08:08:59.135 226310 INFO nova.scheduler.client.report [None req-24d2e8ed-21da-4986-9f0d-1b4f789c0a21 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Deleted allocation for migration 58f403b9-08e9-4f12-8ffa-9a463890ee42#033[00m
Nov 29 03:08:59 np0005539564 nova_compute[226295]: 2025-11-29 08:08:59.212 226310 DEBUG oslo_concurrency.lockutils [None req-24d2e8ed-21da-4986-9f0d-1b4f789c0a21 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "9b9952a8-61d7-410f-9f29-081ff912c4cb" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 4.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:59.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:08:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:59.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:00 np0005539564 nova_compute[226295]: 2025-11-29 08:09:00.794 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403725.7928493, 9b9952a8-61d7-410f-9f29-081ff912c4cb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:09:00 np0005539564 nova_compute[226295]: 2025-11-29 08:09:00.795 226310 INFO nova.compute.manager [-] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] VM Stopped (Lifecycle Event)
Nov 29 03:09:00 np0005539564 nova_compute[226295]: 2025-11-29 08:09:00.852 226310 DEBUG nova.compute.manager [None req-b93f721f-51ad-468e-8396-177d875b209e - - - - - -] [instance: 9b9952a8-61d7-410f-9f29-081ff912c4cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:09:01 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:09:01 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:09:01 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:09:01 np0005539564 nova_compute[226295]: 2025-11-29 08:09:01.241 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:09:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:01.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:09:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:01.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:01 np0005539564 nova_compute[226295]: 2025-11-29 08:09:01.876 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:02 np0005539564 nova_compute[226295]: 2025-11-29 08:09:02.206 226310 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "16462a8e-2c63-4420-bac0-80b125611501" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:02 np0005539564 nova_compute[226295]: 2025-11-29 08:09:02.207 226310 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "16462a8e-2c63-4420-bac0-80b125611501" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:02 np0005539564 nova_compute[226295]: 2025-11-29 08:09:02.255 226310 DEBUG nova.compute.manager [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:09:02 np0005539564 nova_compute[226295]: 2025-11-29 08:09:02.374 226310 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:02 np0005539564 nova_compute[226295]: 2025-11-29 08:09:02.375 226310 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:02 np0005539564 nova_compute[226295]: 2025-11-29 08:09:02.383 226310 DEBUG nova.virt.hardware [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:09:02 np0005539564 nova_compute[226295]: 2025-11-29 08:09:02.384 226310 INFO nova.compute.claims [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Claim successful on node compute-1.ctlplane.example.com
Nov 29 03:09:02 np0005539564 nova_compute[226295]: 2025-11-29 08:09:02.514 226310 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:09:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:02 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2716884571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:02 np0005539564 nova_compute[226295]: 2025-11-29 08:09:02.964 226310 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:09:02 np0005539564 nova_compute[226295]: 2025-11-29 08:09:02.972 226310 DEBUG nova.compute.provider_tree [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:09:02 np0005539564 nova_compute[226295]: 2025-11-29 08:09:02.991 226310 DEBUG nova.scheduler.client.report [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:09:03 np0005539564 nova_compute[226295]: 2025-11-29 08:09:03.024 226310 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:03 np0005539564 nova_compute[226295]: 2025-11-29 08:09:03.025 226310 DEBUG nova.compute.manager [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:09:03 np0005539564 nova_compute[226295]: 2025-11-29 08:09:03.085 226310 DEBUG nova.compute.manager [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:09:03 np0005539564 nova_compute[226295]: 2025-11-29 08:09:03.086 226310 DEBUG nova.network.neutron [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:09:03 np0005539564 nova_compute[226295]: 2025-11-29 08:09:03.113 226310 INFO nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:09:03 np0005539564 nova_compute[226295]: 2025-11-29 08:09:03.136 226310 DEBUG nova.compute.manager [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:09:03 np0005539564 nova_compute[226295]: 2025-11-29 08:09:03.264 226310 DEBUG nova.compute.manager [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:09:03 np0005539564 nova_compute[226295]: 2025-11-29 08:09:03.266 226310 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:09:03 np0005539564 nova_compute[226295]: 2025-11-29 08:09:03.267 226310 INFO nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Creating image(s)
Nov 29 03:09:03 np0005539564 nova_compute[226295]: 2025-11-29 08:09:03.309 226310 DEBUG nova.storage.rbd_utils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image 16462a8e-2c63-4420-bac0-80b125611501_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:09:03 np0005539564 nova_compute[226295]: 2025-11-29 08:09:03.353 226310 DEBUG nova.storage.rbd_utils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image 16462a8e-2c63-4420-bac0-80b125611501_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:09:03 np0005539564 nova_compute[226295]: 2025-11-29 08:09:03.395 226310 DEBUG nova.storage.rbd_utils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image 16462a8e-2c63-4420-bac0-80b125611501_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:09:03 np0005539564 nova_compute[226295]: 2025-11-29 08:09:03.400 226310 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:09:03 np0005539564 nova_compute[226295]: 2025-11-29 08:09:03.440 226310 DEBUG nova.policy [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8306d30b5b844909866bec7b9c8242d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8e860226190f4eb8971376b16032da1b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:09:03 np0005539564 nova_compute[226295]: 2025-11-29 08:09:03.492 226310 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:09:03 np0005539564 nova_compute[226295]: 2025-11-29 08:09:03.493 226310 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:03 np0005539564 nova_compute[226295]: 2025-11-29 08:09:03.495 226310 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:03 np0005539564 nova_compute[226295]: 2025-11-29 08:09:03.495 226310 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:03.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:03 np0005539564 nova_compute[226295]: 2025-11-29 08:09:03.538 226310 DEBUG nova.storage.rbd_utils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image 16462a8e-2c63-4420-bac0-80b125611501_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:09:03 np0005539564 nova_compute[226295]: 2025-11-29 08:09:03.544 226310 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 16462a8e-2c63-4420-bac0-80b125611501_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:09:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:03.718 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:03.719 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:03.719 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:03.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:03 np0005539564 nova_compute[226295]: 2025-11-29 08:09:03.991 226310 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 16462a8e-2c63-4420-bac0-80b125611501_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:09:04 np0005539564 nova_compute[226295]: 2025-11-29 08:09:04.086 226310 DEBUG nova.storage.rbd_utils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] resizing rbd image 16462a8e-2c63-4420-bac0-80b125611501_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:09:04 np0005539564 nova_compute[226295]: 2025-11-29 08:09:04.200 226310 DEBUG nova.objects.instance [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lazy-loading 'migration_context' on Instance uuid 16462a8e-2c63-4420-bac0-80b125611501 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:09:04 np0005539564 nova_compute[226295]: 2025-11-29 08:09:04.464 226310 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:09:04 np0005539564 nova_compute[226295]: 2025-11-29 08:09:04.465 226310 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Ensure instance console log exists: /var/lib/nova/instances/16462a8e-2c63-4420-bac0-80b125611501/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:09:04 np0005539564 nova_compute[226295]: 2025-11-29 08:09:04.465 226310 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:04 np0005539564 nova_compute[226295]: 2025-11-29 08:09:04.466 226310 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:04 np0005539564 nova_compute[226295]: 2025-11-29 08:09:04.466 226310 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:04 np0005539564 nova_compute[226295]: 2025-11-29 08:09:04.577 226310 DEBUG nova.network.neutron [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Successfully created port: 1b3f8757-2022-4d12-8fc1-e4109095d0e1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:09:05 np0005539564 nova_compute[226295]: 2025-11-29 08:09:05.336 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:09:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:05.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:05 np0005539564 nova_compute[226295]: 2025-11-29 08:09:05.748 226310 DEBUG nova.network.neutron [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Successfully updated port: 1b3f8757-2022-4d12-8fc1-e4109095d0e1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:09:05 np0005539564 nova_compute[226295]: 2025-11-29 08:09:05.764 226310 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "refresh_cache-16462a8e-2c63-4420-bac0-80b125611501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:09:05 np0005539564 nova_compute[226295]: 2025-11-29 08:09:05.764 226310 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquired lock "refresh_cache-16462a8e-2c63-4420-bac0-80b125611501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:09:05 np0005539564 nova_compute[226295]: 2025-11-29 08:09:05.764 226310 DEBUG nova.network.neutron [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:09:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:05.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:05 np0005539564 nova_compute[226295]: 2025-11-29 08:09:05.998 226310 DEBUG nova.network.neutron [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:09:06 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:09:06 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:09:06 np0005539564 nova_compute[226295]: 2025-11-29 08:09:06.243 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e265 e265: 3 total, 3 up, 3 in
Nov 29 03:09:06 np0005539564 nova_compute[226295]: 2025-11-29 08:09:06.804 226310 DEBUG nova.compute.manager [req-f194ec7d-8f10-451d-9737-e413590987c5 req-95565d7d-d78f-45f6-8c64-e211458b51e7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Received event network-changed-1b3f8757-2022-4d12-8fc1-e4109095d0e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:09:06 np0005539564 nova_compute[226295]: 2025-11-29 08:09:06.805 226310 DEBUG nova.compute.manager [req-f194ec7d-8f10-451d-9737-e413590987c5 req-95565d7d-d78f-45f6-8c64-e211458b51e7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Refreshing instance network info cache due to event network-changed-1b3f8757-2022-4d12-8fc1-e4109095d0e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:09:06 np0005539564 nova_compute[226295]: 2025-11-29 08:09:06.805 226310 DEBUG oslo_concurrency.lockutils [req-f194ec7d-8f10-451d-9737-e413590987c5 req-95565d7d-d78f-45f6-8c64-e211458b51e7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-16462a8e-2c63-4420-bac0-80b125611501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:09:06 np0005539564 nova_compute[226295]: 2025-11-29 08:09:06.880 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:07 np0005539564 nova_compute[226295]: 2025-11-29 08:09:07.079 226310 DEBUG nova.network.neutron [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Updating instance_info_cache with network_info: [{"id": "1b3f8757-2022-4d12-8fc1-e4109095d0e1", "address": "fa:16:3e:1b:b3:79", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b3f8757-20", "ovs_interfaceid": "1b3f8757-2022-4d12-8fc1-e4109095d0e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:09:07 np0005539564 nova_compute[226295]: 2025-11-29 08:09:07.107 226310 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Releasing lock "refresh_cache-16462a8e-2c63-4420-bac0-80b125611501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:09:07 np0005539564 nova_compute[226295]: 2025-11-29 08:09:07.107 226310 DEBUG nova.compute.manager [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Instance network_info: |[{"id": "1b3f8757-2022-4d12-8fc1-e4109095d0e1", "address": "fa:16:3e:1b:b3:79", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b3f8757-20", "ovs_interfaceid": "1b3f8757-2022-4d12-8fc1-e4109095d0e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:09:07 np0005539564 nova_compute[226295]: 2025-11-29 08:09:07.108 226310 DEBUG oslo_concurrency.lockutils [req-f194ec7d-8f10-451d-9737-e413590987c5 req-95565d7d-d78f-45f6-8c64-e211458b51e7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-16462a8e-2c63-4420-bac0-80b125611501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:07 np0005539564 nova_compute[226295]: 2025-11-29 08:09:07.109 226310 DEBUG nova.network.neutron [req-f194ec7d-8f10-451d-9737-e413590987c5 req-95565d7d-d78f-45f6-8c64-e211458b51e7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Refreshing network info cache for port 1b3f8757-2022-4d12-8fc1-e4109095d0e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:09:07 np0005539564 nova_compute[226295]: 2025-11-29 08:09:07.113 226310 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Start _get_guest_xml network_info=[{"id": "1b3f8757-2022-4d12-8fc1-e4109095d0e1", "address": "fa:16:3e:1b:b3:79", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b3f8757-20", "ovs_interfaceid": "1b3f8757-2022-4d12-8fc1-e4109095d0e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:09:07 np0005539564 nova_compute[226295]: 2025-11-29 08:09:07.120 226310 WARNING nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:09:07 np0005539564 nova_compute[226295]: 2025-11-29 08:09:07.125 226310 DEBUG nova.virt.libvirt.host [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:09:07 np0005539564 nova_compute[226295]: 2025-11-29 08:09:07.127 226310 DEBUG nova.virt.libvirt.host [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:09:07 np0005539564 nova_compute[226295]: 2025-11-29 08:09:07.156 226310 DEBUG nova.virt.libvirt.host [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:09:07 np0005539564 nova_compute[226295]: 2025-11-29 08:09:07.157 226310 DEBUG nova.virt.libvirt.host [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:09:07 np0005539564 nova_compute[226295]: 2025-11-29 08:09:07.158 226310 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:09:07 np0005539564 nova_compute[226295]: 2025-11-29 08:09:07.159 226310 DEBUG nova.virt.hardware [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:09:07 np0005539564 nova_compute[226295]: 2025-11-29 08:09:07.160 226310 DEBUG nova.virt.hardware [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:09:07 np0005539564 nova_compute[226295]: 2025-11-29 08:09:07.160 226310 DEBUG nova.virt.hardware [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:09:07 np0005539564 nova_compute[226295]: 2025-11-29 08:09:07.161 226310 DEBUG nova.virt.hardware [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:09:07 np0005539564 nova_compute[226295]: 2025-11-29 08:09:07.161 226310 DEBUG nova.virt.hardware [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:09:07 np0005539564 nova_compute[226295]: 2025-11-29 08:09:07.161 226310 DEBUG nova.virt.hardware [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:09:07 np0005539564 nova_compute[226295]: 2025-11-29 08:09:07.162 226310 DEBUG nova.virt.hardware [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:09:07 np0005539564 nova_compute[226295]: 2025-11-29 08:09:07.162 226310 DEBUG nova.virt.hardware [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:09:07 np0005539564 nova_compute[226295]: 2025-11-29 08:09:07.163 226310 DEBUG nova.virt.hardware [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:09:07 np0005539564 nova_compute[226295]: 2025-11-29 08:09:07.163 226310 DEBUG nova.virt.hardware [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:09:07 np0005539564 nova_compute[226295]: 2025-11-29 08:09:07.164 226310 DEBUG nova.virt.hardware [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:09:07 np0005539564 nova_compute[226295]: 2025-11-29 08:09:07.168 226310 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:07.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:07 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1804108564' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:07 np0005539564 nova_compute[226295]: 2025-11-29 08:09:07.685 226310 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:07 np0005539564 nova_compute[226295]: 2025-11-29 08:09:07.719 226310 DEBUG nova.storage.rbd_utils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image 16462a8e-2c63-4420-bac0-80b125611501_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:07 np0005539564 nova_compute[226295]: 2025-11-29 08:09:07.725 226310 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:07.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:08 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4049388428' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.189 226310 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.191 226310 DEBUG nova.virt.libvirt.vif [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1728114398',display_name='tempest-MultipleCreateTestJSON-server-1728114398-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1728114398-2',id=92,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8e860226190f4eb8971376b16032da1b',ramdisk_id='',reservation_id='r-cjh5wutt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-36900569',owner_user_name='tempest-MultipleCreateTestJSON-36900569-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:03Z,user_data=None,user_id='f8306d30b5b844909866bec7b9c8242d',uuid=16462a8e-2c63-4420-bac0-80b125611501,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b3f8757-2022-4d12-8fc1-e4109095d0e1", "address": "fa:16:3e:1b:b3:79", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b3f8757-20", "ovs_interfaceid": "1b3f8757-2022-4d12-8fc1-e4109095d0e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.191 226310 DEBUG nova.network.os_vif_util [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Converting VIF {"id": "1b3f8757-2022-4d12-8fc1-e4109095d0e1", "address": "fa:16:3e:1b:b3:79", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b3f8757-20", "ovs_interfaceid": "1b3f8757-2022-4d12-8fc1-e4109095d0e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.192 226310 DEBUG nova.network.os_vif_util [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:b3:79,bridge_name='br-int',has_traffic_filtering=True,id=1b3f8757-2022-4d12-8fc1-e4109095d0e1,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b3f8757-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.193 226310 DEBUG nova.objects.instance [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lazy-loading 'pci_devices' on Instance uuid 16462a8e-2c63-4420-bac0-80b125611501 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.227 226310 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:09:08 np0005539564 nova_compute[226295]:  <uuid>16462a8e-2c63-4420-bac0-80b125611501</uuid>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:  <name>instance-0000005c</name>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <nova:name>tempest-MultipleCreateTestJSON-server-1728114398-2</nova:name>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:09:07</nova:creationTime>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:09:08 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:        <nova:user uuid="f8306d30b5b844909866bec7b9c8242d">tempest-MultipleCreateTestJSON-36900569-project-member</nova:user>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:        <nova:project uuid="8e860226190f4eb8971376b16032da1b">tempest-MultipleCreateTestJSON-36900569</nova:project>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:        <nova:port uuid="1b3f8757-2022-4d12-8fc1-e4109095d0e1">
Nov 29 03:09:08 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <entry name="serial">16462a8e-2c63-4420-bac0-80b125611501</entry>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <entry name="uuid">16462a8e-2c63-4420-bac0-80b125611501</entry>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/16462a8e-2c63-4420-bac0-80b125611501_disk">
Nov 29 03:09:08 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:09:08 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/16462a8e-2c63-4420-bac0-80b125611501_disk.config">
Nov 29 03:09:08 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:09:08 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:1b:b3:79"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <target dev="tap1b3f8757-20"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/16462a8e-2c63-4420-bac0-80b125611501/console.log" append="off"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:09:08 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:09:08 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:09:08 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:09:08 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.230 226310 DEBUG nova.compute.manager [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Preparing to wait for external event network-vif-plugged-1b3f8757-2022-4d12-8fc1-e4109095d0e1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.232 226310 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "16462a8e-2c63-4420-bac0-80b125611501-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.232 226310 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "16462a8e-2c63-4420-bac0-80b125611501-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.233 226310 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "16462a8e-2c63-4420-bac0-80b125611501-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.234 226310 DEBUG nova.virt.libvirt.vif [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1728114398',display_name='tempest-MultipleCreateTestJSON-server-1728114398-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1728114398-2',id=92,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8e860226190f4eb8971376b16032da1b',ramdisk_id='',reservation_id='r-cjh5wutt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-36900569',owner_user_name='tempest-Mul
tipleCreateTestJSON-36900569-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:03Z,user_data=None,user_id='f8306d30b5b844909866bec7b9c8242d',uuid=16462a8e-2c63-4420-bac0-80b125611501,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b3f8757-2022-4d12-8fc1-e4109095d0e1", "address": "fa:16:3e:1b:b3:79", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b3f8757-20", "ovs_interfaceid": "1b3f8757-2022-4d12-8fc1-e4109095d0e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.235 226310 DEBUG nova.network.os_vif_util [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Converting VIF {"id": "1b3f8757-2022-4d12-8fc1-e4109095d0e1", "address": "fa:16:3e:1b:b3:79", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b3f8757-20", "ovs_interfaceid": "1b3f8757-2022-4d12-8fc1-e4109095d0e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.236 226310 DEBUG nova.network.os_vif_util [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:b3:79,bridge_name='br-int',has_traffic_filtering=True,id=1b3f8757-2022-4d12-8fc1-e4109095d0e1,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b3f8757-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.237 226310 DEBUG os_vif [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:b3:79,bridge_name='br-int',has_traffic_filtering=True,id=1b3f8757-2022-4d12-8fc1-e4109095d0e1,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b3f8757-20') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.245 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.245 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.246 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.251 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.252 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1b3f8757-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.253 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1b3f8757-20, col_values=(('external_ids', {'iface-id': '1b3f8757-2022-4d12-8fc1-e4109095d0e1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1b:b3:79', 'vm-uuid': '16462a8e-2c63-4420-bac0-80b125611501'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.304 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:08 np0005539564 NetworkManager[48997]: <info>  [1764403748.3054] manager: (tap1b3f8757-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/159)
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.307 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.313 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.314 226310 INFO os_vif [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:b3:79,bridge_name='br-int',has_traffic_filtering=True,id=1b3f8757-2022-4d12-8fc1-e4109095d0e1,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b3f8757-20')#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.428 226310 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.429 226310 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.429 226310 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] No VIF found with MAC fa:16:3e:1b:b3:79, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.430 226310 INFO nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Using config drive#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.465 226310 DEBUG nova.storage.rbd_utils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image 16462a8e-2c63-4420-bac0-80b125611501_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.679 226310 DEBUG nova.network.neutron [req-f194ec7d-8f10-451d-9737-e413590987c5 req-95565d7d-d78f-45f6-8c64-e211458b51e7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Updated VIF entry in instance network info cache for port 1b3f8757-2022-4d12-8fc1-e4109095d0e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.681 226310 DEBUG nova.network.neutron [req-f194ec7d-8f10-451d-9737-e413590987c5 req-95565d7d-d78f-45f6-8c64-e211458b51e7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Updating instance_info_cache with network_info: [{"id": "1b3f8757-2022-4d12-8fc1-e4109095d0e1", "address": "fa:16:3e:1b:b3:79", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b3f8757-20", "ovs_interfaceid": "1b3f8757-2022-4d12-8fc1-e4109095d0e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:08 np0005539564 nova_compute[226295]: 2025-11-29 08:09:08.712 226310 DEBUG oslo_concurrency.lockutils [req-f194ec7d-8f10-451d-9737-e413590987c5 req-95565d7d-d78f-45f6-8c64-e211458b51e7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-16462a8e-2c63-4420-bac0-80b125611501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:09 np0005539564 nova_compute[226295]: 2025-11-29 08:09:09.017 226310 INFO nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Creating config drive at /var/lib/nova/instances/16462a8e-2c63-4420-bac0-80b125611501/disk.config#033[00m
Nov 29 03:09:09 np0005539564 nova_compute[226295]: 2025-11-29 08:09:09.024 226310 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/16462a8e-2c63-4420-bac0-80b125611501/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmdfouhz5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:09 np0005539564 nova_compute[226295]: 2025-11-29 08:09:09.160 226310 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/16462a8e-2c63-4420-bac0-80b125611501/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmdfouhz5" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:09 np0005539564 nova_compute[226295]: 2025-11-29 08:09:09.206 226310 DEBUG nova.storage.rbd_utils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image 16462a8e-2c63-4420-bac0-80b125611501_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:09 np0005539564 nova_compute[226295]: 2025-11-29 08:09:09.212 226310 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/16462a8e-2c63-4420-bac0-80b125611501/disk.config 16462a8e-2c63-4420-bac0-80b125611501_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:09 np0005539564 nova_compute[226295]: 2025-11-29 08:09:09.444 226310 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/16462a8e-2c63-4420-bac0-80b125611501/disk.config 16462a8e-2c63-4420-bac0-80b125611501_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:09 np0005539564 nova_compute[226295]: 2025-11-29 08:09:09.446 226310 INFO nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Deleting local config drive /var/lib/nova/instances/16462a8e-2c63-4420-bac0-80b125611501/disk.config because it was imported into RBD.#033[00m
Nov 29 03:09:09 np0005539564 kernel: tap1b3f8757-20: entered promiscuous mode
Nov 29 03:09:09 np0005539564 ovn_controller[130591]: 2025-11-29T08:09:09Z|00309|binding|INFO|Claiming lport 1b3f8757-2022-4d12-8fc1-e4109095d0e1 for this chassis.
Nov 29 03:09:09 np0005539564 ovn_controller[130591]: 2025-11-29T08:09:09Z|00310|binding|INFO|1b3f8757-2022-4d12-8fc1-e4109095d0e1: Claiming fa:16:3e:1b:b3:79 10.100.0.6
Nov 29 03:09:09 np0005539564 nova_compute[226295]: 2025-11-29 08:09:09.532 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:09 np0005539564 NetworkManager[48997]: <info>  [1764403749.5335] manager: (tap1b3f8757-20): new Tun device (/org/freedesktop/NetworkManager/Devices/160)
Nov 29 03:09:09 np0005539564 nova_compute[226295]: 2025-11-29 08:09:09.536 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:09.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:09 np0005539564 systemd-udevd[259962]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:09:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:09.608 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:b3:79 10.100.0.6'], port_security=['fa:16:3e:1b:b3:79 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '16462a8e-2c63-4420-bac0-80b125611501', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8e860226190f4eb8971376b16032da1b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd8dc1d4-70a8-4fbe-bcb1-1a2eb3ad39c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aee2888b-87dd-4143-b028-b945f3d151f3, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=1b3f8757-2022-4d12-8fc1-e4109095d0e1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:09:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:09.610 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 1b3f8757-2022-4d12-8fc1-e4109095d0e1 in datapath 6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 bound to our chassis#033[00m
Nov 29 03:09:09 np0005539564 nova_compute[226295]: 2025-11-29 08:09:09.611 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:09.611 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a4a6f7c-9da4-4d0a-b32b-578ab4776e05#033[00m
Nov 29 03:09:09 np0005539564 ovn_controller[130591]: 2025-11-29T08:09:09Z|00311|binding|INFO|Setting lport 1b3f8757-2022-4d12-8fc1-e4109095d0e1 ovn-installed in OVS
Nov 29 03:09:09 np0005539564 ovn_controller[130591]: 2025-11-29T08:09:09Z|00312|binding|INFO|Setting lport 1b3f8757-2022-4d12-8fc1-e4109095d0e1 up in Southbound
Nov 29 03:09:09 np0005539564 nova_compute[226295]: 2025-11-29 08:09:09.619 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:09 np0005539564 NetworkManager[48997]: <info>  [1764403749.6250] device (tap1b3f8757-20): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:09:09 np0005539564 NetworkManager[48997]: <info>  [1764403749.6265] device (tap1b3f8757-20): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:09:09 np0005539564 systemd-machined[190128]: New machine qemu-39-instance-0000005c.
Nov 29 03:09:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:09.636 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[82c0672e-af1c-457b-b487-3b93e552ba27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:09.637 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6a4a6f7c-91 in ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:09:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:09.641 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6a4a6f7c-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:09:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:09.641 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1758be4b-668d-4396-bdaf-22d1ff540440]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:09.643 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[190d627f-959b-4c4e-86a5-ec56a9db6260]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:09 np0005539564 systemd[1]: Started Virtual Machine qemu-39-instance-0000005c.
Nov 29 03:09:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:09.663 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c30330-4b39-4767-b6b8-05eff4d83791]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:09.696 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ed397c62-5951-44d5-bca0-06d2fb5868ed]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:09.735 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[4fb66080-1513-42ed-8e6e-39a992d7b625]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:09 np0005539564 NetworkManager[48997]: <info>  [1764403749.7437] manager: (tap6a4a6f7c-90): new Veth device (/org/freedesktop/NetworkManager/Devices/161)
Nov 29 03:09:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:09.743 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[99902cd8-916b-4fed-8055-39a61596eac8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:09.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:09.792 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[bc3db7a8-9329-4b64-a851-4d478be5bf9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:09.797 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[5d64fd63-a916-4702-8b53-1e3e234c4b79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:09 np0005539564 NetworkManager[48997]: <info>  [1764403749.8305] device (tap6a4a6f7c-90): carrier: link connected
Nov 29 03:09:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:09.839 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[0e990e0f-92bf-4fa3-9776-146c91b274f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:09 np0005539564 nova_compute[226295]: 2025-11-29 08:09:09.844 226310 DEBUG nova.compute.manager [req-c21c27f5-3c92-4b73-b9f1-9d3c7c0fc535 req-d5149398-6780-4605-b74c-fcb3f19e804c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Received event network-vif-plugged-1b3f8757-2022-4d12-8fc1-e4109095d0e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:09 np0005539564 nova_compute[226295]: 2025-11-29 08:09:09.845 226310 DEBUG oslo_concurrency.lockutils [req-c21c27f5-3c92-4b73-b9f1-9d3c7c0fc535 req-d5149398-6780-4605-b74c-fcb3f19e804c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "16462a8e-2c63-4420-bac0-80b125611501-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:09 np0005539564 nova_compute[226295]: 2025-11-29 08:09:09.845 226310 DEBUG oslo_concurrency.lockutils [req-c21c27f5-3c92-4b73-b9f1-9d3c7c0fc535 req-d5149398-6780-4605-b74c-fcb3f19e804c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16462a8e-2c63-4420-bac0-80b125611501-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:09 np0005539564 nova_compute[226295]: 2025-11-29 08:09:09.846 226310 DEBUG oslo_concurrency.lockutils [req-c21c27f5-3c92-4b73-b9f1-9d3c7c0fc535 req-d5149398-6780-4605-b74c-fcb3f19e804c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16462a8e-2c63-4420-bac0-80b125611501-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:09 np0005539564 nova_compute[226295]: 2025-11-29 08:09:09.846 226310 DEBUG nova.compute.manager [req-c21c27f5-3c92-4b73-b9f1-9d3c7c0fc535 req-d5149398-6780-4605-b74c-fcb3f19e804c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Processing event network-vif-plugged-1b3f8757-2022-4d12-8fc1-e4109095d0e1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:09:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:09.876 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e65e00a8-9d28-4545-8572-852a41cfcb89]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a4a6f7c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ed:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672451, 'reachable_time': 27105, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260002, 'error': None, 'target': 'ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:09.901 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[36a4e47d-7f57-4d56-9711-ea0293707774]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:ede0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672451, 'tstamp': 672451}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260017, 'error': None, 'target': 'ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:09.928 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8c7c2533-2298-428f-9687-a96884b9e3c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a4a6f7c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ed:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672451, 'reachable_time': 27105, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260029, 'error': None, 'target': 'ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:09.964 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[94a8de19-c58f-4db9-a77e-c48b6b8b7db5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:10.032 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[00bf939f-9e3e-4b0f-b6e1-9d9897b802cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:10.033 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a4a6f7c-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:10.033 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:10.034 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a4a6f7c-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.035 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:10 np0005539564 NetworkManager[48997]: <info>  [1764403750.0368] manager: (tap6a4a6f7c-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/162)
Nov 29 03:09:10 np0005539564 kernel: tap6a4a6f7c-90: entered promiscuous mode
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:10.039 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a4a6f7c-90, col_values=(('external_ids', {'iface-id': 'b10f5520-b53f-45d0-9de3-4af0dc481ad3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.038 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.040 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:10 np0005539564 ovn_controller[130591]: 2025-11-29T08:09:10Z|00313|binding|INFO|Releasing lport b10f5520-b53f-45d0-9de3-4af0dc481ad3 from this chassis (sb_readonly=0)
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.060 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:10.061 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6a4a6f7c-9da4-4d0a-b32b-578ab4776e05.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6a4a6f7c-9da4-4d0a-b32b-578ab4776e05.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:10.061 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b11d675b-8e07-44fe-9f2b-aa23c7382168]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:10.062 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/6a4a6f7c-9da4-4d0a-b32b-578ab4776e05.pid.haproxy
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 6a4a6f7c-9da4-4d0a-b32b-578ab4776e05
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:09:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:10.063 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'env', 'PROCESS_TAG=haproxy-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6a4a6f7c-9da4-4d0a-b32b-578ab4776e05.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.089 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403750.0884686, 16462a8e-2c63-4420-bac0-80b125611501 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.090 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 16462a8e-2c63-4420-bac0-80b125611501] VM Started (Lifecycle Event)#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.093 226310 DEBUG nova.compute.manager [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.097 226310 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.100 226310 INFO nova.virt.libvirt.driver [-] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Instance spawned successfully.#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.100 226310 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.113 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.116 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.128 226310 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.128 226310 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.129 226310 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.129 226310 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.129 226310 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.130 226310 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.136 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 16462a8e-2c63-4420-bac0-80b125611501] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.136 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403750.0898862, 16462a8e-2c63-4420-bac0-80b125611501 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.136 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 16462a8e-2c63-4420-bac0-80b125611501] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.164 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.168 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403750.0962508, 16462a8e-2c63-4420-bac0-80b125611501 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.168 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 16462a8e-2c63-4420-bac0-80b125611501] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.195 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.201 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.218 226310 INFO nova.compute.manager [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Took 6.95 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.218 226310 DEBUG nova.compute.manager [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.222 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 16462a8e-2c63-4420-bac0-80b125611501] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.278 226310 INFO nova.compute.manager [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Took 7.93 seconds to build instance.#033[00m
Nov 29 03:09:10 np0005539564 nova_compute[226295]: 2025-11-29 08:09:10.307 226310 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "16462a8e-2c63-4420-bac0-80b125611501" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:10 np0005539564 podman[260072]: 2025-11-29 08:09:10.458716812 +0000 UTC m=+0.065514808 container create 34bcf602b6dfd8512d6197814657a36957851c38752fd0e7a10c74db98244f6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:09:10 np0005539564 podman[260072]: 2025-11-29 08:09:10.420547482 +0000 UTC m=+0.027345518 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:09:10 np0005539564 systemd[1]: Started libpod-conmon-34bcf602b6dfd8512d6197814657a36957851c38752fd0e7a10c74db98244f6e.scope.
Nov 29 03:09:10 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:09:10 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c24e94e1cc834ff2ab5c79a36754fa22ca74cdba0aef6b6e62e2d43f89e299ba/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:10 np0005539564 podman[260072]: 2025-11-29 08:09:10.607136036 +0000 UTC m=+0.213934042 container init 34bcf602b6dfd8512d6197814657a36957851c38752fd0e7a10c74db98244f6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 03:09:10 np0005539564 podman[260072]: 2025-11-29 08:09:10.616899559 +0000 UTC m=+0.223697555 container start 34bcf602b6dfd8512d6197814657a36957851c38752fd0e7a10c74db98244f6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:09:10 np0005539564 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[260088]: [NOTICE]   (260092) : New worker (260094) forked
Nov 29 03:09:10 np0005539564 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[260088]: [NOTICE]   (260092) : Loading success.
Nov 29 03:09:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:11 np0005539564 nova_compute[226295]: 2025-11-29 08:09:11.010 226310 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquiring lock "6753610d-4170-4449-9727-e7162083a6cd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:11 np0005539564 nova_compute[226295]: 2025-11-29 08:09:11.011 226310 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "6753610d-4170-4449-9727-e7162083a6cd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:11 np0005539564 nova_compute[226295]: 2025-11-29 08:09:11.036 226310 DEBUG nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:09:11 np0005539564 nova_compute[226295]: 2025-11-29 08:09:11.140 226310 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:11 np0005539564 nova_compute[226295]: 2025-11-29 08:09:11.142 226310 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:11 np0005539564 nova_compute[226295]: 2025-11-29 08:09:11.152 226310 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:09:11 np0005539564 nova_compute[226295]: 2025-11-29 08:09:11.153 226310 INFO nova.compute.claims [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Claim successful on node compute-1.ctlplane.example.com
Nov 29 03:09:11 np0005539564 nova_compute[226295]: 2025-11-29 08:09:11.350 226310 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:09:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:11.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:11.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:11 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3311797751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:11 np0005539564 nova_compute[226295]: 2025-11-29 08:09:11.824 226310 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:09:11 np0005539564 nova_compute[226295]: 2025-11-29 08:09:11.833 226310 DEBUG nova.compute.provider_tree [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:09:11 np0005539564 nova_compute[226295]: 2025-11-29 08:09:11.851 226310 DEBUG nova.scheduler.client.report [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:09:11 np0005539564 nova_compute[226295]: 2025-11-29 08:09:11.881 226310 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:11 np0005539564 nova_compute[226295]: 2025-11-29 08:09:11.882 226310 DEBUG nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:09:11 np0005539564 nova_compute[226295]: 2025-11-29 08:09:11.886 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:11 np0005539564 nova_compute[226295]: 2025-11-29 08:09:11.967 226310 DEBUG nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:09:11 np0005539564 nova_compute[226295]: 2025-11-29 08:09:11.968 226310 DEBUG nova.network.neutron [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:09:11 np0005539564 nova_compute[226295]: 2025-11-29 08:09:11.999 226310 INFO nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:09:12 np0005539564 nova_compute[226295]: 2025-11-29 08:09:12.018 226310 DEBUG nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:09:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:12 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3692096887' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:12 np0005539564 nova_compute[226295]: 2025-11-29 08:09:12.316 226310 DEBUG nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:09:12 np0005539564 nova_compute[226295]: 2025-11-29 08:09:12.319 226310 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:09:12 np0005539564 nova_compute[226295]: 2025-11-29 08:09:12.319 226310 INFO nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Creating image(s)
Nov 29 03:09:12 np0005539564 nova_compute[226295]: 2025-11-29 08:09:12.366 226310 DEBUG nova.storage.rbd_utils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] rbd image 6753610d-4170-4449-9727-e7162083a6cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:09:12 np0005539564 nova_compute[226295]: 2025-11-29 08:09:12.405 226310 DEBUG nova.storage.rbd_utils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] rbd image 6753610d-4170-4449-9727-e7162083a6cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:09:12 np0005539564 nova_compute[226295]: 2025-11-29 08:09:12.436 226310 DEBUG nova.storage.rbd_utils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] rbd image 6753610d-4170-4449-9727-e7162083a6cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:09:12 np0005539564 nova_compute[226295]: 2025-11-29 08:09:12.440 226310 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:09:12 np0005539564 nova_compute[226295]: 2025-11-29 08:09:12.527 226310 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:09:12 np0005539564 nova_compute[226295]: 2025-11-29 08:09:12.528 226310 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:12 np0005539564 nova_compute[226295]: 2025-11-29 08:09:12.529 226310 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:12 np0005539564 nova_compute[226295]: 2025-11-29 08:09:12.530 226310 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:12 np0005539564 nova_compute[226295]: 2025-11-29 08:09:12.564 226310 DEBUG nova.storage.rbd_utils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] rbd image 6753610d-4170-4449-9727-e7162083a6cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:09:12 np0005539564 nova_compute[226295]: 2025-11-29 08:09:12.569 226310 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 6753610d-4170-4449-9727-e7162083a6cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:09:12 np0005539564 nova_compute[226295]: 2025-11-29 08:09:12.868 226310 DEBUG nova.policy [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '95361d3a276f4d7f81e9f9a4bcafd2ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e3e18973b82a4071bdc187ede8c1afb8', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:09:13 np0005539564 nova_compute[226295]: 2025-11-29 08:09:13.293 226310 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 6753610d-4170-4449-9727-e7162083a6cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.724s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:09:13 np0005539564 nova_compute[226295]: 2025-11-29 08:09:13.341 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:13 np0005539564 nova_compute[226295]: 2025-11-29 08:09:13.410 226310 DEBUG nova.storage.rbd_utils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] resizing rbd image 6753610d-4170-4449-9727-e7162083a6cd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:09:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:13.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:13 np0005539564 nova_compute[226295]: 2025-11-29 08:09:13.568 226310 DEBUG nova.objects.instance [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lazy-loading 'migration_context' on Instance uuid 6753610d-4170-4449-9727-e7162083a6cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:09:13 np0005539564 nova_compute[226295]: 2025-11-29 08:09:13.594 226310 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:09:13 np0005539564 nova_compute[226295]: 2025-11-29 08:09:13.595 226310 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Ensure instance console log exists: /var/lib/nova/instances/6753610d-4170-4449-9727-e7162083a6cd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:09:13 np0005539564 nova_compute[226295]: 2025-11-29 08:09:13.596 226310 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:13 np0005539564 nova_compute[226295]: 2025-11-29 08:09:13.596 226310 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:13 np0005539564 nova_compute[226295]: 2025-11-29 08:09:13.597 226310 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:13.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:13 np0005539564 nova_compute[226295]: 2025-11-29 08:09:13.950 226310 DEBUG nova.compute.manager [req-9e3cf005-0b7c-4f15-a898-f037b40e8b78 req-123748b8-5826-4586-a325-73e5eba3f1db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Received event network-vif-plugged-1b3f8757-2022-4d12-8fc1-e4109095d0e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:09:13 np0005539564 nova_compute[226295]: 2025-11-29 08:09:13.950 226310 DEBUG oslo_concurrency.lockutils [req-9e3cf005-0b7c-4f15-a898-f037b40e8b78 req-123748b8-5826-4586-a325-73e5eba3f1db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "16462a8e-2c63-4420-bac0-80b125611501-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:13 np0005539564 nova_compute[226295]: 2025-11-29 08:09:13.951 226310 DEBUG oslo_concurrency.lockutils [req-9e3cf005-0b7c-4f15-a898-f037b40e8b78 req-123748b8-5826-4586-a325-73e5eba3f1db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16462a8e-2c63-4420-bac0-80b125611501-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:13 np0005539564 nova_compute[226295]: 2025-11-29 08:09:13.951 226310 DEBUG oslo_concurrency.lockutils [req-9e3cf005-0b7c-4f15-a898-f037b40e8b78 req-123748b8-5826-4586-a325-73e5eba3f1db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16462a8e-2c63-4420-bac0-80b125611501-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:13 np0005539564 nova_compute[226295]: 2025-11-29 08:09:13.951 226310 DEBUG nova.compute.manager [req-9e3cf005-0b7c-4f15-a898-f037b40e8b78 req-123748b8-5826-4586-a325-73e5eba3f1db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] No waiting events found dispatching network-vif-plugged-1b3f8757-2022-4d12-8fc1-e4109095d0e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:09:13 np0005539564 nova_compute[226295]: 2025-11-29 08:09:13.952 226310 WARNING nova.compute.manager [req-9e3cf005-0b7c-4f15-a898-f037b40e8b78 req-123748b8-5826-4586-a325-73e5eba3f1db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Received unexpected event network-vif-plugged-1b3f8757-2022-4d12-8fc1-e4109095d0e1 for instance with vm_state active and task_state None.
Nov 29 03:09:15 np0005539564 nova_compute[226295]: 2025-11-29 08:09:15.323 226310 DEBUG nova.network.neutron [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Successfully created port: d9f42b2a-6b46-481c-8afe-61daef359711 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:09:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:15.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:15.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.398 226310 DEBUG oslo_concurrency.lockutils [None req-b1b344b9-d5f7-4fb1-83b4-13a4284801d0 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "16462a8e-2c63-4420-bac0-80b125611501" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.399 226310 DEBUG oslo_concurrency.lockutils [None req-b1b344b9-d5f7-4fb1-83b4-13a4284801d0 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "16462a8e-2c63-4420-bac0-80b125611501" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.399 226310 DEBUG oslo_concurrency.lockutils [None req-b1b344b9-d5f7-4fb1-83b4-13a4284801d0 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "16462a8e-2c63-4420-bac0-80b125611501-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.399 226310 DEBUG oslo_concurrency.lockutils [None req-b1b344b9-d5f7-4fb1-83b4-13a4284801d0 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "16462a8e-2c63-4420-bac0-80b125611501-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.399 226310 DEBUG oslo_concurrency.lockutils [None req-b1b344b9-d5f7-4fb1-83b4-13a4284801d0 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "16462a8e-2c63-4420-bac0-80b125611501-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.401 226310 INFO nova.compute.manager [None req-b1b344b9-d5f7-4fb1-83b4-13a4284801d0 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Terminating instance
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.402 226310 DEBUG nova.compute.manager [None req-b1b344b9-d5f7-4fb1-83b4-13a4284801d0 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:09:16 np0005539564 kernel: tap1b3f8757-20 (unregistering): left promiscuous mode
Nov 29 03:09:16 np0005539564 NetworkManager[48997]: <info>  [1764403756.4523] device (tap1b3f8757-20): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:09:16 np0005539564 ovn_controller[130591]: 2025-11-29T08:09:16Z|00314|binding|INFO|Releasing lport 1b3f8757-2022-4d12-8fc1-e4109095d0e1 from this chassis (sb_readonly=0)
Nov 29 03:09:16 np0005539564 ovn_controller[130591]: 2025-11-29T08:09:16Z|00315|binding|INFO|Setting lport 1b3f8757-2022-4d12-8fc1-e4109095d0e1 down in Southbound
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.456 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:16 np0005539564 ovn_controller[130591]: 2025-11-29T08:09:16Z|00316|binding|INFO|Removing iface tap1b3f8757-20 ovn-installed in OVS
Nov 29 03:09:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:16.466 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:b3:79 10.100.0.6'], port_security=['fa:16:3e:1b:b3:79 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '16462a8e-2c63-4420-bac0-80b125611501', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8e860226190f4eb8971376b16032da1b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd8dc1d4-70a8-4fbe-bcb1-1a2eb3ad39c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aee2888b-87dd-4143-b028-b945f3d151f3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=1b3f8757-2022-4d12-8fc1-e4109095d0e1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:09:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:16.469 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 1b3f8757-2022-4d12-8fc1-e4109095d0e1 in datapath 6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 unbound from our chassis
Nov 29 03:09:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:16.472 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:09:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:16.474 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5aeb4cf1-60f4-434e-9e05-0f8da2da1948]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:09:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:16.475 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 namespace which is not needed anymore
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.482 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:16 np0005539564 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Nov 29 03:09:16 np0005539564 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d0000005c.scope: Consumed 6.946s CPU time.
Nov 29 03:09:16 np0005539564 systemd-machined[190128]: Machine qemu-39-instance-0000005c terminated.
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.517 226310 DEBUG nova.network.neutron [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Successfully updated port: d9f42b2a-6b46-481c-8afe-61daef359711 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.541 226310 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquiring lock "refresh_cache-6753610d-4170-4449-9727-e7162083a6cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.541 226310 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquired lock "refresh_cache-6753610d-4170-4449-9727-e7162083a6cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.542 226310 DEBUG nova.network.neutron [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:09:16 np0005539564 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[260088]: [NOTICE]   (260092) : haproxy version is 2.8.14-c23fe91
Nov 29 03:09:16 np0005539564 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[260088]: [NOTICE]   (260092) : path to executable is /usr/sbin/haproxy
Nov 29 03:09:16 np0005539564 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[260088]: [WARNING]  (260092) : Exiting Master process...
Nov 29 03:09:16 np0005539564 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[260088]: [WARNING]  (260092) : Exiting Master process...
Nov 29 03:09:16 np0005539564 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[260088]: [ALERT]    (260092) : Current worker (260094) exited with code 143 (Terminated)
Nov 29 03:09:16 np0005539564 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[260088]: [WARNING]  (260092) : All workers exited. Exiting... (0)
Nov 29 03:09:16 np0005539564 systemd[1]: libpod-34bcf602b6dfd8512d6197814657a36957851c38752fd0e7a10c74db98244f6e.scope: Deactivated successfully.
Nov 29 03:09:16 np0005539564 podman[260314]: 2025-11-29 08:09:16.629176044 +0000 UTC m=+0.052692242 container died 34bcf602b6dfd8512d6197814657a36957851c38752fd0e7a10c74db98244f6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.644 226310 INFO nova.virt.libvirt.driver [-] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Instance destroyed successfully.#033[00m
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.645 226310 DEBUG nova.objects.instance [None req-b1b344b9-d5f7-4fb1-83b4-13a4284801d0 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lazy-loading 'resources' on Instance uuid 16462a8e-2c63-4420-bac0-80b125611501 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.675 226310 DEBUG nova.virt.libvirt.vif [None req-b1b344b9-d5f7-4fb1-83b4-13a4284801d0 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1728114398',display_name='tempest-MultipleCreateTestJSON-server-1728114398-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1728114398-2',id=92,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-11-29T08:09:10Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8e860226190f4eb8971376b16032da1b',ramdisk_id='',reservation_id='r-cjh5wutt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-36900569',owner_user_name='tempest-MultipleCreateTestJSON-36900569-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:09:10Z,user_data=None,user_id='f8306d30b5b844909866bec7b9c8242d',uuid=16462a8e-2c63-4420-bac0-80b125611501,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b3f8757-2022-4d12-8fc1-e4109095d0e1", "address": "fa:16:3e:1b:b3:79", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b3f8757-20", "ovs_interfaceid": "1b3f8757-2022-4d12-8fc1-e4109095d0e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.675 226310 DEBUG nova.network.os_vif_util [None req-b1b344b9-d5f7-4fb1-83b4-13a4284801d0 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Converting VIF {"id": "1b3f8757-2022-4d12-8fc1-e4109095d0e1", "address": "fa:16:3e:1b:b3:79", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b3f8757-20", "ovs_interfaceid": "1b3f8757-2022-4d12-8fc1-e4109095d0e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.677 226310 DEBUG nova.network.os_vif_util [None req-b1b344b9-d5f7-4fb1-83b4-13a4284801d0 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:b3:79,bridge_name='br-int',has_traffic_filtering=True,id=1b3f8757-2022-4d12-8fc1-e4109095d0e1,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b3f8757-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.677 226310 DEBUG os_vif [None req-b1b344b9-d5f7-4fb1-83b4-13a4284801d0 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:b3:79,bridge_name='br-int',has_traffic_filtering=True,id=1b3f8757-2022-4d12-8fc1-e4109095d0e1,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b3f8757-20') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.679 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:16 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-34bcf602b6dfd8512d6197814657a36957851c38752fd0e7a10c74db98244f6e-userdata-shm.mount: Deactivated successfully.
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.680 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b3f8757-20, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.682 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:16 np0005539564 systemd[1]: var-lib-containers-storage-overlay-c24e94e1cc834ff2ab5c79a36754fa22ca74cdba0aef6b6e62e2d43f89e299ba-merged.mount: Deactivated successfully.
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.685 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.688 226310 INFO os_vif [None req-b1b344b9-d5f7-4fb1-83b4-13a4284801d0 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:b3:79,bridge_name='br-int',has_traffic_filtering=True,id=1b3f8757-2022-4d12-8fc1-e4109095d0e1,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b3f8757-20')#033[00m
Nov 29 03:09:16 np0005539564 podman[260314]: 2025-11-29 08:09:16.697289912 +0000 UTC m=+0.120806110 container cleanup 34bcf602b6dfd8512d6197814657a36957851c38752fd0e7a10c74db98244f6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:09:16 np0005539564 systemd[1]: libpod-conmon-34bcf602b6dfd8512d6197814657a36957851c38752fd0e7a10c74db98244f6e.scope: Deactivated successfully.
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.802 226310 DEBUG nova.network.neutron [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:09:16 np0005539564 podman[260367]: 2025-11-29 08:09:16.818372337 +0000 UTC m=+0.092956798 container remove 34bcf602b6dfd8512d6197814657a36957851c38752fd0e7a10c74db98244f6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:09:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:16.827 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c6c1e52d-7d69-4073-b848-84a8679ec797]: (4, ('Sat Nov 29 08:09:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 (34bcf602b6dfd8512d6197814657a36957851c38752fd0e7a10c74db98244f6e)\n34bcf602b6dfd8512d6197814657a36957851c38752fd0e7a10c74db98244f6e\nSat Nov 29 08:09:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 (34bcf602b6dfd8512d6197814657a36957851c38752fd0e7a10c74db98244f6e)\n34bcf602b6dfd8512d6197814657a36957851c38752fd0e7a10c74db98244f6e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:16.830 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2ef032fc-c498-40ee-ae00-851f53310e06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:16.831 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a4a6f7c-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.834 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:16 np0005539564 kernel: tap6a4a6f7c-90: left promiscuous mode
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.850 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:16.855 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e1d83614-d213-4f93-8c7b-6be960b9b1ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:16.869 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[be06a376-d0f7-4861-b3ad-60abd5c73f58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:16.871 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[70a7735e-8519-4edf-8eec-0878c39b0c8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:16 np0005539564 nova_compute[226295]: 2025-11-29 08:09:16.883 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:16.898 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e6401cc1-daa5-49f2-8492-28e90085944c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672440, 'reachable_time': 25747, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260389, 'error': None, 'target': 'ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:16 np0005539564 systemd[1]: run-netns-ovnmeta\x2d6a4a6f7c\x2d9da4\x2d4d0a\x2db32b\x2d578ab4776e05.mount: Deactivated successfully.
Nov 29 03:09:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:16.903 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:09:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:16.903 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[62627301-2afd-4b6e-b847-af81c5d39f57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:17 np0005539564 nova_compute[226295]: 2025-11-29 08:09:17.390 226310 INFO nova.virt.libvirt.driver [None req-b1b344b9-d5f7-4fb1-83b4-13a4284801d0 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Deleting instance files /var/lib/nova/instances/16462a8e-2c63-4420-bac0-80b125611501_del#033[00m
Nov 29 03:09:17 np0005539564 nova_compute[226295]: 2025-11-29 08:09:17.392 226310 INFO nova.virt.libvirt.driver [None req-b1b344b9-d5f7-4fb1-83b4-13a4284801d0 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Deletion of /var/lib/nova/instances/16462a8e-2c63-4420-bac0-80b125611501_del complete#033[00m
Nov 29 03:09:17 np0005539564 nova_compute[226295]: 2025-11-29 08:09:17.461 226310 INFO nova.compute.manager [None req-b1b344b9-d5f7-4fb1-83b4-13a4284801d0 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Took 1.06 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:09:17 np0005539564 nova_compute[226295]: 2025-11-29 08:09:17.462 226310 DEBUG oslo.service.loopingcall [None req-b1b344b9-d5f7-4fb1-83b4-13a4284801d0 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:09:17 np0005539564 nova_compute[226295]: 2025-11-29 08:09:17.462 226310 DEBUG nova.compute.manager [-] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:09:17 np0005539564 nova_compute[226295]: 2025-11-29 08:09:17.463 226310 DEBUG nova.network.neutron [-] [instance: 16462a8e-2c63-4420-bac0-80b125611501] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:09:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:17.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:17.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.321 226310 DEBUG nova.compute.manager [req-2f41b9e1-9be7-421c-9424-962e9745bf40 req-1aceb9a8-cf8e-4776-abd1-210d7de0e93b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Received event network-vif-unplugged-1b3f8757-2022-4d12-8fc1-e4109095d0e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.321 226310 DEBUG oslo_concurrency.lockutils [req-2f41b9e1-9be7-421c-9424-962e9745bf40 req-1aceb9a8-cf8e-4776-abd1-210d7de0e93b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "16462a8e-2c63-4420-bac0-80b125611501-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.322 226310 DEBUG oslo_concurrency.lockutils [req-2f41b9e1-9be7-421c-9424-962e9745bf40 req-1aceb9a8-cf8e-4776-abd1-210d7de0e93b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16462a8e-2c63-4420-bac0-80b125611501-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.322 226310 DEBUG oslo_concurrency.lockutils [req-2f41b9e1-9be7-421c-9424-962e9745bf40 req-1aceb9a8-cf8e-4776-abd1-210d7de0e93b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16462a8e-2c63-4420-bac0-80b125611501-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.322 226310 DEBUG nova.compute.manager [req-2f41b9e1-9be7-421c-9424-962e9745bf40 req-1aceb9a8-cf8e-4776-abd1-210d7de0e93b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] No waiting events found dispatching network-vif-unplugged-1b3f8757-2022-4d12-8fc1-e4109095d0e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.322 226310 DEBUG nova.compute.manager [req-2f41b9e1-9be7-421c-9424-962e9745bf40 req-1aceb9a8-cf8e-4776-abd1-210d7de0e93b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Received event network-vif-unplugged-1b3f8757-2022-4d12-8fc1-e4109095d0e1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.323 226310 DEBUG nova.compute.manager [req-2f41b9e1-9be7-421c-9424-962e9745bf40 req-1aceb9a8-cf8e-4776-abd1-210d7de0e93b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Received event network-vif-plugged-1b3f8757-2022-4d12-8fc1-e4109095d0e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.323 226310 DEBUG oslo_concurrency.lockutils [req-2f41b9e1-9be7-421c-9424-962e9745bf40 req-1aceb9a8-cf8e-4776-abd1-210d7de0e93b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "16462a8e-2c63-4420-bac0-80b125611501-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.323 226310 DEBUG oslo_concurrency.lockutils [req-2f41b9e1-9be7-421c-9424-962e9745bf40 req-1aceb9a8-cf8e-4776-abd1-210d7de0e93b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16462a8e-2c63-4420-bac0-80b125611501-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.323 226310 DEBUG oslo_concurrency.lockutils [req-2f41b9e1-9be7-421c-9424-962e9745bf40 req-1aceb9a8-cf8e-4776-abd1-210d7de0e93b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16462a8e-2c63-4420-bac0-80b125611501-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.324 226310 DEBUG nova.compute.manager [req-2f41b9e1-9be7-421c-9424-962e9745bf40 req-1aceb9a8-cf8e-4776-abd1-210d7de0e93b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] No waiting events found dispatching network-vif-plugged-1b3f8757-2022-4d12-8fc1-e4109095d0e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.324 226310 WARNING nova.compute.manager [req-2f41b9e1-9be7-421c-9424-962e9745bf40 req-1aceb9a8-cf8e-4776-abd1-210d7de0e93b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Received unexpected event network-vif-plugged-1b3f8757-2022-4d12-8fc1-e4109095d0e1 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.445 226310 DEBUG nova.compute.manager [req-8d260a4b-72aa-4838-b518-28421b1a79e6 req-045a1e6a-7cd8-4cf6-a68e-edbd02852dc7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Received event network-changed-d9f42b2a-6b46-481c-8afe-61daef359711 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.445 226310 DEBUG nova.compute.manager [req-8d260a4b-72aa-4838-b518-28421b1a79e6 req-045a1e6a-7cd8-4cf6-a68e-edbd02852dc7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Refreshing instance network info cache due to event network-changed-d9f42b2a-6b46-481c-8afe-61daef359711. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.446 226310 DEBUG oslo_concurrency.lockutils [req-8d260a4b-72aa-4838-b518-28421b1a79e6 req-045a1e6a-7cd8-4cf6-a68e-edbd02852dc7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-6753610d-4170-4449-9727-e7162083a6cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.931 226310 DEBUG nova.network.neutron [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Updating instance_info_cache with network_info: [{"id": "d9f42b2a-6b46-481c-8afe-61daef359711", "address": "fa:16:3e:06:42:22", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9f42b2a-6b", "ovs_interfaceid": "d9f42b2a-6b46-481c-8afe-61daef359711", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:18 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/621329738' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.966 226310 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Releasing lock "refresh_cache-6753610d-4170-4449-9727-e7162083a6cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.966 226310 DEBUG nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Instance network_info: |[{"id": "d9f42b2a-6b46-481c-8afe-61daef359711", "address": "fa:16:3e:06:42:22", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9f42b2a-6b", "ovs_interfaceid": "d9f42b2a-6b46-481c-8afe-61daef359711", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.967 226310 DEBUG oslo_concurrency.lockutils [req-8d260a4b-72aa-4838-b518-28421b1a79e6 req-045a1e6a-7cd8-4cf6-a68e-edbd02852dc7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-6753610d-4170-4449-9727-e7162083a6cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.967 226310 DEBUG nova.network.neutron [req-8d260a4b-72aa-4838-b518-28421b1a79e6 req-045a1e6a-7cd8-4cf6-a68e-edbd02852dc7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Refreshing network info cache for port d9f42b2a-6b46-481c-8afe-61daef359711 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.971 226310 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Start _get_guest_xml network_info=[{"id": "d9f42b2a-6b46-481c-8afe-61daef359711", "address": "fa:16:3e:06:42:22", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9f42b2a-6b", "ovs_interfaceid": "d9f42b2a-6b46-481c-8afe-61daef359711", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.978 226310 WARNING nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.985 226310 DEBUG nova.virt.libvirt.host [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.986 226310 DEBUG nova.virt.libvirt.host [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.994 226310 DEBUG nova.virt.libvirt.host [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.995 226310 DEBUG nova.virt.libvirt.host [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.996 226310 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.997 226310 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.997 226310 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.997 226310 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.998 226310 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.998 226310 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:09:18 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.998 226310 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:09:19 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.998 226310 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:09:19 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.999 226310 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:09:19 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.999 226310 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:09:19 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.999 226310 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:09:19 np0005539564 nova_compute[226295]: 2025-11-29 08:09:18.999 226310 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:09:19 np0005539564 nova_compute[226295]: 2025-11-29 08:09:19.002 226310 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:19 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4044653041' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:19 np0005539564 nova_compute[226295]: 2025-11-29 08:09:19.459 226310 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:19 np0005539564 nova_compute[226295]: 2025-11-29 08:09:19.499 226310 DEBUG nova.storage.rbd_utils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] rbd image 6753610d-4170-4449-9727-e7162083a6cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:19 np0005539564 nova_compute[226295]: 2025-11-29 08:09:19.505 226310 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:19.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:09:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:19.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:09:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:19 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2347223908' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:19 np0005539564 nova_compute[226295]: 2025-11-29 08:09:19.995 226310 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:19 np0005539564 nova_compute[226295]: 2025-11-29 08:09:19.997 226310 DEBUG nova.virt.libvirt.vif [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-313770805',display_name='tempest-ListServersNegativeTestJSON-server-313770805-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-313770805-2',id=96,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3e18973b82a4071bdc187ede8c1afb8',ramdisk_id='',reservation_id='r-enaoqmph',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1935238201',owner_user_name='te
mpest-ListServersNegativeTestJSON-1935238201-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:12Z,user_data=None,user_id='95361d3a276f4d7f81e9f9a4bcafd2ea',uuid=6753610d-4170-4449-9727-e7162083a6cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d9f42b2a-6b46-481c-8afe-61daef359711", "address": "fa:16:3e:06:42:22", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9f42b2a-6b", "ovs_interfaceid": "d9f42b2a-6b46-481c-8afe-61daef359711", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:09:19 np0005539564 nova_compute[226295]: 2025-11-29 08:09:19.998 226310 DEBUG nova.network.os_vif_util [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Converting VIF {"id": "d9f42b2a-6b46-481c-8afe-61daef359711", "address": "fa:16:3e:06:42:22", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9f42b2a-6b", "ovs_interfaceid": "d9f42b2a-6b46-481c-8afe-61daef359711", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:19 np0005539564 nova_compute[226295]: 2025-11-29 08:09:19.999 226310 DEBUG nova.network.os_vif_util [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:42:22,bridge_name='br-int',has_traffic_filtering=True,id=d9f42b2a-6b46-481c-8afe-61daef359711,network=Network(4c0a06e3-8d77-4f81-85b4-47e57dafff04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9f42b2a-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.000 226310 DEBUG nova.objects.instance [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6753610d-4170-4449-9727-e7162083a6cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.033 226310 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:09:20 np0005539564 nova_compute[226295]:  <uuid>6753610d-4170-4449-9727-e7162083a6cd</uuid>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:  <name>instance-00000060</name>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <nova:name>tempest-ListServersNegativeTestJSON-server-313770805-2</nova:name>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:09:18</nova:creationTime>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:09:20 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:        <nova:user uuid="95361d3a276f4d7f81e9f9a4bcafd2ea">tempest-ListServersNegativeTestJSON-1935238201-project-member</nova:user>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:        <nova:project uuid="e3e18973b82a4071bdc187ede8c1afb8">tempest-ListServersNegativeTestJSON-1935238201</nova:project>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:        <nova:port uuid="d9f42b2a-6b46-481c-8afe-61daef359711">
Nov 29 03:09:20 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <entry name="serial">6753610d-4170-4449-9727-e7162083a6cd</entry>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <entry name="uuid">6753610d-4170-4449-9727-e7162083a6cd</entry>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/6753610d-4170-4449-9727-e7162083a6cd_disk">
Nov 29 03:09:20 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:09:20 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/6753610d-4170-4449-9727-e7162083a6cd_disk.config">
Nov 29 03:09:20 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:09:20 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:06:42:22"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <target dev="tapd9f42b2a-6b"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/6753610d-4170-4449-9727-e7162083a6cd/console.log" append="off"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:09:20 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:09:20 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:09:20 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:09:20 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.036 226310 DEBUG nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Preparing to wait for external event network-vif-plugged-d9f42b2a-6b46-481c-8afe-61daef359711 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.037 226310 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquiring lock "6753610d-4170-4449-9727-e7162083a6cd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.037 226310 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "6753610d-4170-4449-9727-e7162083a6cd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.038 226310 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "6753610d-4170-4449-9727-e7162083a6cd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.039 226310 DEBUG nova.virt.libvirt.vif [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-313770805',display_name='tempest-ListServersNegativeTestJSON-server-313770805-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-313770805-2',id=96,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3e18973b82a4071bdc187ede8c1afb8',ramdisk_id='',reservation_id='r-enaoqmph',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1935238201',owner_user_name='tempest-ListServersNegativeTestJSON-1935238201-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:12Z,user_data=None,user_id='95361d3a276f4d7f81e9f9a4bcafd2ea',uuid=6753610d-4170-4449-9727-e7162083a6cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d9f42b2a-6b46-481c-8afe-61daef359711", "address": "fa:16:3e:06:42:22", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9f42b2a-6b", "ovs_interfaceid": "d9f42b2a-6b46-481c-8afe-61daef359711", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.039 226310 DEBUG nova.network.os_vif_util [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Converting VIF {"id": "d9f42b2a-6b46-481c-8afe-61daef359711", "address": "fa:16:3e:06:42:22", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9f42b2a-6b", "ovs_interfaceid": "d9f42b2a-6b46-481c-8afe-61daef359711", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.040 226310 DEBUG nova.network.os_vif_util [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:42:22,bridge_name='br-int',has_traffic_filtering=True,id=d9f42b2a-6b46-481c-8afe-61daef359711,network=Network(4c0a06e3-8d77-4f81-85b4-47e57dafff04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9f42b2a-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.040 226310 DEBUG os_vif [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:42:22,bridge_name='br-int',has_traffic_filtering=True,id=d9f42b2a-6b46-481c-8afe-61daef359711,network=Network(4c0a06e3-8d77-4f81-85b4-47e57dafff04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9f42b2a-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.041 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.041 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.042 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.045 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.045 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9f42b2a-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.046 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd9f42b2a-6b, col_values=(('external_ids', {'iface-id': 'd9f42b2a-6b46-481c-8afe-61daef359711', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:06:42:22', 'vm-uuid': '6753610d-4170-4449-9727-e7162083a6cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.047 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:20 np0005539564 NetworkManager[48997]: <info>  [1764403760.0483] manager: (tapd9f42b2a-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/163)
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.050 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.053 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.054 226310 INFO os_vif [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:42:22,bridge_name='br-int',has_traffic_filtering=True,id=d9f42b2a-6b46-481c-8afe-61daef359711,network=Network(4c0a06e3-8d77-4f81-85b4-47e57dafff04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9f42b2a-6b')#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.126 226310 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.126 226310 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.127 226310 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] No VIF found with MAC fa:16:3e:06:42:22, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.128 226310 INFO nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Using config drive#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.172 226310 DEBUG nova.storage.rbd_utils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] rbd image 6753610d-4170-4449-9727-e7162083a6cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.212 226310 DEBUG nova.network.neutron [-] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.231 226310 INFO nova.compute.manager [-] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Took 2.77 seconds to deallocate network for instance.#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.312 226310 DEBUG oslo_concurrency.lockutils [None req-b1b344b9-d5f7-4fb1-83b4-13a4284801d0 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.313 226310 DEBUG oslo_concurrency.lockutils [None req-b1b344b9-d5f7-4fb1-83b4-13a4284801d0 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.465 226310 DEBUG nova.compute.manager [req-3c15442c-95cd-4b36-8629-578cc129784a req-41c70deb-c96d-41c0-a51e-54a919054670 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Received event network-vif-deleted-1b3f8757-2022-4d12-8fc1-e4109095d0e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.497 226310 DEBUG oslo_concurrency.processutils [None req-b1b344b9-d5f7-4fb1-83b4-13a4284801d0 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.653 226310 INFO nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Creating config drive at /var/lib/nova/instances/6753610d-4170-4449-9727-e7162083a6cd/disk.config#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.658 226310 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6753610d-4170-4449-9727-e7162083a6cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp02uobrl2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.807 226310 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6753610d-4170-4449-9727-e7162083a6cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp02uobrl2" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.850 226310 DEBUG nova.storage.rbd_utils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] rbd image 6753610d-4170-4449-9727-e7162083a6cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.854 226310 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6753610d-4170-4449-9727-e7162083a6cd/disk.config 6753610d-4170-4449-9727-e7162083a6cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:20 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3798714948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.959 226310 DEBUG oslo_concurrency.processutils [None req-b1b344b9-d5f7-4fb1-83b4-13a4284801d0 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.968 226310 DEBUG nova.compute.provider_tree [None req-b1b344b9-d5f7-4fb1-83b4-13a4284801d0 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:09:20 np0005539564 nova_compute[226295]: 2025-11-29 08:09:20.998 226310 DEBUG nova.scheduler.client.report [None req-b1b344b9-d5f7-4fb1-83b4-13a4284801d0 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:09:21 np0005539564 nova_compute[226295]: 2025-11-29 08:09:21.039 226310 DEBUG oslo_concurrency.lockutils [None req-b1b344b9-d5f7-4fb1-83b4-13a4284801d0 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:21 np0005539564 nova_compute[226295]: 2025-11-29 08:09:21.068 226310 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6753610d-4170-4449-9727-e7162083a6cd/disk.config 6753610d-4170-4449-9727-e7162083a6cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:21 np0005539564 nova_compute[226295]: 2025-11-29 08:09:21.069 226310 INFO nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Deleting local config drive /var/lib/nova/instances/6753610d-4170-4449-9727-e7162083a6cd/disk.config because it was imported into RBD.#033[00m
Nov 29 03:09:21 np0005539564 nova_compute[226295]: 2025-11-29 08:09:21.083 226310 INFO nova.scheduler.client.report [None req-b1b344b9-d5f7-4fb1-83b4-13a4284801d0 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Deleted allocations for instance 16462a8e-2c63-4420-bac0-80b125611501#033[00m
Nov 29 03:09:21 np0005539564 kernel: tapd9f42b2a-6b: entered promiscuous mode
Nov 29 03:09:21 np0005539564 NetworkManager[48997]: <info>  [1764403761.1428] manager: (tapd9f42b2a-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/164)
Nov 29 03:09:21 np0005539564 nova_compute[226295]: 2025-11-29 08:09:21.143 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:09:21Z|00317|binding|INFO|Claiming lport d9f42b2a-6b46-481c-8afe-61daef359711 for this chassis.
Nov 29 03:09:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:09:21Z|00318|binding|INFO|d9f42b2a-6b46-481c-8afe-61daef359711: Claiming fa:16:3e:06:42:22 10.100.0.5
Nov 29 03:09:21 np0005539564 nova_compute[226295]: 2025-11-29 08:09:21.150 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.159 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:42:22 10.100.0.5'], port_security=['fa:16:3e:06:42:22 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '6753610d-4170-4449-9727-e7162083a6cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c0a06e3-8d77-4f81-85b4-47e57dafff04', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3e18973b82a4071bdc187ede8c1afb8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1da66fc3-7f9f-49ea-a35d-351f9e777793', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8ed36bb-bd1a-404c-bed2-6bc7af2884c4, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=d9f42b2a-6b46-481c-8afe-61daef359711) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.161 139780 INFO neutron.agent.ovn.metadata.agent [-] Port d9f42b2a-6b46-481c-8afe-61daef359711 in datapath 4c0a06e3-8d77-4f81-85b4-47e57dafff04 bound to our chassis#033[00m
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.163 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4c0a06e3-8d77-4f81-85b4-47e57dafff04#033[00m
Nov 29 03:09:21 np0005539564 nova_compute[226295]: 2025-11-29 08:09:21.171 226310 DEBUG oslo_concurrency.lockutils [None req-b1b344b9-d5f7-4fb1-83b4-13a4284801d0 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "16462a8e-2c63-4420-bac0-80b125611501" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.178 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[07272fc5-88a0-4e91-a7ff-e38e2460af9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.180 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4c0a06e3-81 in ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.182 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4c0a06e3-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.183 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d96d242d-a082-4c21-9a8f-740a60b7e5b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.183 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6f2875e9-1d1e-4a58-a80e-bc0ba3c6c9c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:21 np0005539564 systemd-machined[190128]: New machine qemu-40-instance-00000060.
Nov 29 03:09:21 np0005539564 systemd-udevd[260549]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:09:21 np0005539564 nova_compute[226295]: 2025-11-29 08:09:21.193 226310 DEBUG nova.network.neutron [req-8d260a4b-72aa-4838-b518-28421b1a79e6 req-045a1e6a-7cd8-4cf6-a68e-edbd02852dc7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Updated VIF entry in instance network info cache for port d9f42b2a-6b46-481c-8afe-61daef359711. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:09:21 np0005539564 nova_compute[226295]: 2025-11-29 08:09:21.193 226310 DEBUG nova.network.neutron [req-8d260a4b-72aa-4838-b518-28421b1a79e6 req-045a1e6a-7cd8-4cf6-a68e-edbd02852dc7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Updating instance_info_cache with network_info: [{"id": "d9f42b2a-6b46-481c-8afe-61daef359711", "address": "fa:16:3e:06:42:22", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9f42b2a-6b", "ovs_interfaceid": "d9f42b2a-6b46-481c-8afe-61daef359711", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.200 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[87948c9b-d558-41ae-a1fc-2d40240c09c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:21 np0005539564 NetworkManager[48997]: <info>  [1764403761.2081] device (tapd9f42b2a-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:09:21 np0005539564 NetworkManager[48997]: <info>  [1764403761.2096] device (tapd9f42b2a-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:09:21 np0005539564 nova_compute[226295]: 2025-11-29 08:09:21.222 226310 DEBUG oslo_concurrency.lockutils [req-8d260a4b-72aa-4838-b518-28421b1a79e6 req-045a1e6a-7cd8-4cf6-a68e-edbd02852dc7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-6753610d-4170-4449-9727-e7162083a6cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:21 np0005539564 systemd[1]: Started Virtual Machine qemu-40-instance-00000060.
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.228 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7c257f60-a77a-4055-9527-9acd5014c778]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:21 np0005539564 nova_compute[226295]: 2025-11-29 08:09:21.248 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:09:21Z|00319|binding|INFO|Setting lport d9f42b2a-6b46-481c-8afe-61daef359711 ovn-installed in OVS
Nov 29 03:09:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:09:21Z|00320|binding|INFO|Setting lport d9f42b2a-6b46-481c-8afe-61daef359711 up in Southbound
Nov 29 03:09:21 np0005539564 nova_compute[226295]: 2025-11-29 08:09:21.255 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.267 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[611cfffc-f11a-4c5c-8ab7-0800fc6311ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:21 np0005539564 NetworkManager[48997]: <info>  [1764403761.2745] manager: (tap4c0a06e3-80): new Veth device (/org/freedesktop/NetworkManager/Devices/165)
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.275 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[19a81e7e-c471-4140-8f71-0f4f39bf466d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.316 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff0b3a5-032a-42d6-b7ff-3a0c4c40e2c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.321 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[84f45052-d532-43b8-b481-ca7ab0e9d55e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:21 np0005539564 NetworkManager[48997]: <info>  [1764403761.3640] device (tap4c0a06e3-80): carrier: link connected
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.375 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[7217b402-020b-49c6-9b23-b4ad7e926227]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.401 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d5e96810-3be1-4154-a1c6-aa76a3c1eb97]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c0a06e3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:42:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 101], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673604, 'reachable_time': 36104, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260582, 'error': None, 'target': 'ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.425 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bf931cc6-112d-442e-9080-892f4bef7cac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe99:42d5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 673604, 'tstamp': 673604}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260583, 'error': None, 'target': 'ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.449 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b790bd73-8d55-4986-8dad-7554c4bac815]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c0a06e3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:42:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 101], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673604, 'reachable_time': 36104, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260584, 'error': None, 'target': 'ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.509 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f58715dd-572b-4954-89c8-7d6490ed679b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:21.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.584 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8cfc7d71-96c1-4133-bf7d-62b61719bbbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.586 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c0a06e3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.586 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.587 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c0a06e3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:21 np0005539564 nova_compute[226295]: 2025-11-29 08:09:21.589 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:21 np0005539564 kernel: tap4c0a06e3-80: entered promiscuous mode
Nov 29 03:09:21 np0005539564 NetworkManager[48997]: <info>  [1764403761.5900] manager: (tap4c0a06e3-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/166)
Nov 29 03:09:21 np0005539564 nova_compute[226295]: 2025-11-29 08:09:21.591 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.592 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4c0a06e3-80, col_values=(('external_ids', {'iface-id': '25db3838-7764-409c-8606-f0c90f681664'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:21 np0005539564 nova_compute[226295]: 2025-11-29 08:09:21.593 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:09:21Z|00321|binding|INFO|Releasing lport 25db3838-7764-409c-8606-f0c90f681664 from this chassis (sb_readonly=0)
Nov 29 03:09:21 np0005539564 nova_compute[226295]: 2025-11-29 08:09:21.594 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.595 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4c0a06e3-8d77-4f81-85b4-47e57dafff04.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4c0a06e3-8d77-4f81-85b4-47e57dafff04.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.596 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[43b328f6-17a9-4a7d-b109-adc11838295a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.597 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-4c0a06e3-8d77-4f81-85b4-47e57dafff04
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/4c0a06e3-8d77-4f81-85b4-47e57dafff04.pid.haproxy
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 4c0a06e3-8d77-4f81-85b4-47e57dafff04
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:09:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:21.599 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04', 'env', 'PROCESS_TAG=haproxy-4c0a06e3-8d77-4f81-85b4-47e57dafff04', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4c0a06e3-8d77-4f81-85b4-47e57dafff04.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:09:21 np0005539564 nova_compute[226295]: 2025-11-29 08:09:21.609 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:21 np0005539564 nova_compute[226295]: 2025-11-29 08:09:21.727 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403761.7271805, 6753610d-4170-4449-9727-e7162083a6cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:21 np0005539564 nova_compute[226295]: 2025-11-29 08:09:21.728 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6753610d-4170-4449-9727-e7162083a6cd] VM Started (Lifecycle Event)#033[00m
Nov 29 03:09:21 np0005539564 nova_compute[226295]: 2025-11-29 08:09:21.761 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:21 np0005539564 nova_compute[226295]: 2025-11-29 08:09:21.767 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403761.7284772, 6753610d-4170-4449-9727-e7162083a6cd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:21 np0005539564 nova_compute[226295]: 2025-11-29 08:09:21.767 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6753610d-4170-4449-9727-e7162083a6cd] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:09:21 np0005539564 nova_compute[226295]: 2025-11-29 08:09:21.795 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:21.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:21 np0005539564 nova_compute[226295]: 2025-11-29 08:09:21.799 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:09:21 np0005539564 nova_compute[226295]: 2025-11-29 08:09:21.827 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6753610d-4170-4449-9727-e7162083a6cd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:09:21 np0005539564 nova_compute[226295]: 2025-11-29 08:09:21.885 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:22 np0005539564 podman[260658]: 2025-11-29 08:09:22.009143073 +0000 UTC m=+0.049594839 container create b17e8e6fbf0f59a5181b7e04d7e1e4e2a1b8cd4c7ed39dc750609214e4a6411d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:09:22 np0005539564 systemd[1]: Started libpod-conmon-b17e8e6fbf0f59a5181b7e04d7e1e4e2a1b8cd4c7ed39dc750609214e4a6411d.scope.
Nov 29 03:09:22 np0005539564 podman[260658]: 2025-11-29 08:09:21.984212671 +0000 UTC m=+0.024664457 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:09:22 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:09:22 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a3172d326dfff56b6ab5405e1f64700af75646fa2a994696c71b5800023a92/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:22 np0005539564 podman[260658]: 2025-11-29 08:09:22.125721267 +0000 UTC m=+0.166173053 container init b17e8e6fbf0f59a5181b7e04d7e1e4e2a1b8cd4c7ed39dc750609214e4a6411d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:09:22 np0005539564 podman[260658]: 2025-11-29 08:09:22.130662241 +0000 UTC m=+0.171114007 container start b17e8e6fbf0f59a5181b7e04d7e1e4e2a1b8cd4c7ed39dc750609214e4a6411d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:09:22 np0005539564 neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04[260673]: [NOTICE]   (260677) : New worker (260679) forked
Nov 29 03:09:22 np0005539564 neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04[260673]: [NOTICE]   (260677) : Loading success.
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.203 226310 DEBUG nova.compute.manager [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Received event network-vif-plugged-d9f42b2a-6b46-481c-8afe-61daef359711 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.204 226310 DEBUG oslo_concurrency.lockutils [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6753610d-4170-4449-9727-e7162083a6cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.204 226310 DEBUG oslo_concurrency.lockutils [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6753610d-4170-4449-9727-e7162083a6cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.205 226310 DEBUG oslo_concurrency.lockutils [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6753610d-4170-4449-9727-e7162083a6cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.205 226310 DEBUG nova.compute.manager [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Processing event network-vif-plugged-d9f42b2a-6b46-481c-8afe-61daef359711 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.206 226310 DEBUG nova.compute.manager [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Received event network-vif-plugged-d9f42b2a-6b46-481c-8afe-61daef359711 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.206 226310 DEBUG oslo_concurrency.lockutils [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6753610d-4170-4449-9727-e7162083a6cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.206 226310 DEBUG oslo_concurrency.lockutils [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6753610d-4170-4449-9727-e7162083a6cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.207 226310 DEBUG oslo_concurrency.lockutils [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6753610d-4170-4449-9727-e7162083a6cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.207 226310 DEBUG nova.compute.manager [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] No waiting events found dispatching network-vif-plugged-d9f42b2a-6b46-481c-8afe-61daef359711 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.207 226310 WARNING nova.compute.manager [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Received unexpected event network-vif-plugged-d9f42b2a-6b46-481c-8afe-61daef359711 for instance with vm_state building and task_state spawning.
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.209 226310 DEBUG nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.215 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403763.215263, 6753610d-4170-4449-9727-e7162083a6cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.216 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6753610d-4170-4449-9727-e7162083a6cd] VM Resumed (Lifecycle Event)
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.219 226310 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.223 226310 INFO nova.virt.libvirt.driver [-] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Instance spawned successfully.
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.223 226310 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.241 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.247 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.251 226310 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.251 226310 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.252 226310 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.252 226310 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.252 226310 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.253 226310 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.284 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6753610d-4170-4449-9727-e7162083a6cd] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.325 226310 INFO nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Took 11.01 seconds to spawn the instance on the hypervisor.
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.325 226310 DEBUG nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.448 226310 INFO nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Took 12.34 seconds to build instance.
Nov 29 03:09:23 np0005539564 nova_compute[226295]: 2025-11-29 08:09:23.467 226310 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "6753610d-4170-4449-9727-e7162083a6cd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.456s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:23.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:23 np0005539564 podman[260690]: 2025-11-29 08:09:23.568646179 +0000 UTC m=+0.098298764 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 03:09:23 np0005539564 podman[260689]: 2025-11-29 08:09:23.57612601 +0000 UTC m=+0.120716597 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:09:23 np0005539564 podman[260688]: 2025-11-29 08:09:23.618894833 +0000 UTC m=+0.160337585 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:09:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:23.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:25 np0005539564 nova_compute[226295]: 2025-11-29 08:09:25.082 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:25.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:25 np0005539564 ovn_controller[130591]: 2025-11-29T08:09:25Z|00322|binding|INFO|Releasing lport 25db3838-7764-409c-8606-f0c90f681664 from this chassis (sb_readonly=0)
Nov 29 03:09:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:25.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:25 np0005539564 nova_compute[226295]: 2025-11-29 08:09:25.867 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:26 np0005539564 nova_compute[226295]: 2025-11-29 08:09:26.887 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:27.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:27.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:29.321 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:09:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:29.322 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:09:29 np0005539564 nova_compute[226295]: 2025-11-29 08:09:29.324 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:29.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:29.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:30 np0005539564 nova_compute[226295]: 2025-11-29 08:09:30.085 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:31.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:31 np0005539564 nova_compute[226295]: 2025-11-29 08:09:31.642 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403756.6413736, 16462a8e-2c63-4420-bac0-80b125611501 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:09:31 np0005539564 nova_compute[226295]: 2025-11-29 08:09:31.643 226310 INFO nova.compute.manager [-] [instance: 16462a8e-2c63-4420-bac0-80b125611501] VM Stopped (Lifecycle Event)
Nov 29 03:09:31 np0005539564 nova_compute[226295]: 2025-11-29 08:09:31.675 226310 DEBUG nova.compute.manager [None req-573124ef-06e1-4daf-b998-e0ee9fd47936 - - - - - -] [instance: 16462a8e-2c63-4420-bac0-80b125611501] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:09:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:09:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:31.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:09:31 np0005539564 nova_compute[226295]: 2025-11-29 08:09:31.834 226310 DEBUG oslo_concurrency.lockutils [None req-f7284528-c6f8-45c4-a7c3-5d47fa8868dd 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquiring lock "6753610d-4170-4449-9727-e7162083a6cd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:31 np0005539564 nova_compute[226295]: 2025-11-29 08:09:31.835 226310 DEBUG oslo_concurrency.lockutils [None req-f7284528-c6f8-45c4-a7c3-5d47fa8868dd 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "6753610d-4170-4449-9727-e7162083a6cd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:31 np0005539564 nova_compute[226295]: 2025-11-29 08:09:31.835 226310 DEBUG oslo_concurrency.lockutils [None req-f7284528-c6f8-45c4-a7c3-5d47fa8868dd 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquiring lock "6753610d-4170-4449-9727-e7162083a6cd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:31 np0005539564 nova_compute[226295]: 2025-11-29 08:09:31.836 226310 DEBUG oslo_concurrency.lockutils [None req-f7284528-c6f8-45c4-a7c3-5d47fa8868dd 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "6753610d-4170-4449-9727-e7162083a6cd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:31 np0005539564 nova_compute[226295]: 2025-11-29 08:09:31.836 226310 DEBUG oslo_concurrency.lockutils [None req-f7284528-c6f8-45c4-a7c3-5d47fa8868dd 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "6753610d-4170-4449-9727-e7162083a6cd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:31 np0005539564 nova_compute[226295]: 2025-11-29 08:09:31.838 226310 INFO nova.compute.manager [None req-f7284528-c6f8-45c4-a7c3-5d47fa8868dd 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Terminating instance
Nov 29 03:09:31 np0005539564 nova_compute[226295]: 2025-11-29 08:09:31.839 226310 DEBUG nova.compute.manager [None req-f7284528-c6f8-45c4-a7c3-5d47fa8868dd 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:09:31 np0005539564 kernel: tapd9f42b2a-6b (unregistering): left promiscuous mode
Nov 29 03:09:31 np0005539564 NetworkManager[48997]: <info>  [1764403771.8916] device (tapd9f42b2a-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:09:31 np0005539564 nova_compute[226295]: 2025-11-29 08:09:31.926 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:31 np0005539564 ovn_controller[130591]: 2025-11-29T08:09:31Z|00323|binding|INFO|Releasing lport d9f42b2a-6b46-481c-8afe-61daef359711 from this chassis (sb_readonly=0)
Nov 29 03:09:31 np0005539564 ovn_controller[130591]: 2025-11-29T08:09:31Z|00324|binding|INFO|Setting lport d9f42b2a-6b46-481c-8afe-61daef359711 down in Southbound
Nov 29 03:09:31 np0005539564 ovn_controller[130591]: 2025-11-29T08:09:31Z|00325|binding|INFO|Removing iface tapd9f42b2a-6b ovn-installed in OVS
Nov 29 03:09:31 np0005539564 nova_compute[226295]: 2025-11-29 08:09:31.929 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:31.934 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:42:22 10.100.0.5'], port_security=['fa:16:3e:06:42:22 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '6753610d-4170-4449-9727-e7162083a6cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c0a06e3-8d77-4f81-85b4-47e57dafff04', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3e18973b82a4071bdc187ede8c1afb8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1da66fc3-7f9f-49ea-a35d-351f9e777793', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8ed36bb-bd1a-404c-bed2-6bc7af2884c4, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=d9f42b2a-6b46-481c-8afe-61daef359711) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:09:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:31.936 139780 INFO neutron.agent.ovn.metadata.agent [-] Port d9f42b2a-6b46-481c-8afe-61daef359711 in datapath 4c0a06e3-8d77-4f81-85b4-47e57dafff04 unbound from our chassis
Nov 29 03:09:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:31.937 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4c0a06e3-8d77-4f81-85b4-47e57dafff04, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:09:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:31.939 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[029818c9-7852-4cd9-8d49-4a01c7ff0f63]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:09:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:31.940 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04 namespace which is not needed anymore
Nov 29 03:09:31 np0005539564 nova_compute[226295]: 2025-11-29 08:09:31.959 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:31 np0005539564 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000060.scope: Deactivated successfully.
Nov 29 03:09:31 np0005539564 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000060.scope: Consumed 9.508s CPU time.
Nov 29 03:09:31 np0005539564 systemd-machined[190128]: Machine qemu-40-instance-00000060 terminated.
Nov 29 03:09:32 np0005539564 kernel: tapd9f42b2a-6b: entered promiscuous mode
Nov 29 03:09:32 np0005539564 NetworkManager[48997]: <info>  [1764403772.0694] manager: (tapd9f42b2a-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/167)
Nov 29 03:09:32 np0005539564 systemd-udevd[260758]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:09:32 np0005539564 kernel: tapd9f42b2a-6b (unregistering): left promiscuous mode
Nov 29 03:09:32 np0005539564 ovn_controller[130591]: 2025-11-29T08:09:32Z|00326|binding|INFO|Claiming lport d9f42b2a-6b46-481c-8afe-61daef359711 for this chassis.
Nov 29 03:09:32 np0005539564 ovn_controller[130591]: 2025-11-29T08:09:32Z|00327|binding|INFO|d9f42b2a-6b46-481c-8afe-61daef359711: Claiming fa:16:3e:06:42:22 10.100.0.5
Nov 29 03:09:32 np0005539564 nova_compute[226295]: 2025-11-29 08:09:32.069 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:32.080 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:42:22 10.100.0.5'], port_security=['fa:16:3e:06:42:22 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '6753610d-4170-4449-9727-e7162083a6cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c0a06e3-8d77-4f81-85b4-47e57dafff04', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3e18973b82a4071bdc187ede8c1afb8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1da66fc3-7f9f-49ea-a35d-351f9e777793', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8ed36bb-bd1a-404c-bed2-6bc7af2884c4, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=d9f42b2a-6b46-481c-8afe-61daef359711) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:09:32 np0005539564 neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04[260673]: [NOTICE]   (260677) : haproxy version is 2.8.14-c23fe91
Nov 29 03:09:32 np0005539564 neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04[260673]: [NOTICE]   (260677) : path to executable is /usr/sbin/haproxy
Nov 29 03:09:32 np0005539564 neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04[260673]: [WARNING]  (260677) : Exiting Master process...
Nov 29 03:09:32 np0005539564 neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04[260673]: [ALERT]    (260677) : Current worker (260679) exited with code 143 (Terminated)
Nov 29 03:09:32 np0005539564 neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04[260673]: [WARNING]  (260677) : All workers exited. Exiting... (0)
Nov 29 03:09:32 np0005539564 systemd[1]: libpod-b17e8e6fbf0f59a5181b7e04d7e1e4e2a1b8cd4c7ed39dc750609214e4a6411d.scope: Deactivated successfully.
Nov 29 03:09:32 np0005539564 ovn_controller[130591]: 2025-11-29T08:09:32Z|00328|binding|INFO|Releasing lport d9f42b2a-6b46-481c-8afe-61daef359711 from this chassis (sb_readonly=0)
Nov 29 03:09:32 np0005539564 nova_compute[226295]: 2025-11-29 08:09:32.102 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:32 np0005539564 podman[260774]: 2025-11-29 08:09:32.108281676 +0000 UTC m=+0.060081142 container died b17e8e6fbf0f59a5181b7e04d7e1e4e2a1b8cd4c7ed39dc750609214e4a6411d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:09:32 np0005539564 nova_compute[226295]: 2025-11-29 08:09:32.111 226310 INFO nova.virt.libvirt.driver [-] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Instance destroyed successfully.#033[00m
Nov 29 03:09:32 np0005539564 nova_compute[226295]: 2025-11-29 08:09:32.112 226310 DEBUG nova.objects.instance [None req-f7284528-c6f8-45c4-a7c3-5d47fa8868dd 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lazy-loading 'resources' on Instance uuid 6753610d-4170-4449-9727-e7162083a6cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:32.116 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:42:22 10.100.0.5'], port_security=['fa:16:3e:06:42:22 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '6753610d-4170-4449-9727-e7162083a6cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c0a06e3-8d77-4f81-85b4-47e57dafff04', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3e18973b82a4071bdc187ede8c1afb8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1da66fc3-7f9f-49ea-a35d-351f9e777793', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8ed36bb-bd1a-404c-bed2-6bc7af2884c4, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=d9f42b2a-6b46-481c-8afe-61daef359711) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:09:32 np0005539564 nova_compute[226295]: 2025-11-29 08:09:32.134 226310 DEBUG nova.virt.libvirt.vif [None req-f7284528-c6f8-45c4-a7c3-5d47fa8868dd 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-313770805',display_name='tempest-ListServersNegativeTestJSON-server-313770805-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-313770805-2',id=96,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-11-29T08:09:23Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3e18973b82a4071bdc187ede8c1afb8',ramdisk_id='',reservation_id='r-enaoqmph',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-1935238201',owner_user_name='tempest-ListServersNegativeTestJSON-1935238201-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:09:23Z,user_data=None,user_id='95361d3a276f4d7f81e9f9a4bcafd2ea',uuid=6753610d-4170-4449-9727-e7162083a6cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d9f42b2a-6b46-481c-8afe-61daef359711", "address": "fa:16:3e:06:42:22", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9f42b2a-6b", "ovs_interfaceid": "d9f42b2a-6b46-481c-8afe-61daef359711", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:09:32 np0005539564 nova_compute[226295]: 2025-11-29 08:09:32.135 226310 DEBUG nova.network.os_vif_util [None req-f7284528-c6f8-45c4-a7c3-5d47fa8868dd 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Converting VIF {"id": "d9f42b2a-6b46-481c-8afe-61daef359711", "address": "fa:16:3e:06:42:22", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9f42b2a-6b", "ovs_interfaceid": "d9f42b2a-6b46-481c-8afe-61daef359711", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:32 np0005539564 nova_compute[226295]: 2025-11-29 08:09:32.135 226310 DEBUG nova.network.os_vif_util [None req-f7284528-c6f8-45c4-a7c3-5d47fa8868dd 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:42:22,bridge_name='br-int',has_traffic_filtering=True,id=d9f42b2a-6b46-481c-8afe-61daef359711,network=Network(4c0a06e3-8d77-4f81-85b4-47e57dafff04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9f42b2a-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:32 np0005539564 nova_compute[226295]: 2025-11-29 08:09:32.136 226310 DEBUG os_vif [None req-f7284528-c6f8-45c4-a7c3-5d47fa8868dd 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:42:22,bridge_name='br-int',has_traffic_filtering=True,id=d9f42b2a-6b46-481c-8afe-61daef359711,network=Network(4c0a06e3-8d77-4f81-85b4-47e57dafff04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9f42b2a-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:09:32 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b17e8e6fbf0f59a5181b7e04d7e1e4e2a1b8cd4c7ed39dc750609214e4a6411d-userdata-shm.mount: Deactivated successfully.
Nov 29 03:09:32 np0005539564 nova_compute[226295]: 2025-11-29 08:09:32.138 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:32 np0005539564 nova_compute[226295]: 2025-11-29 08:09:32.138 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9f42b2a-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:32 np0005539564 nova_compute[226295]: 2025-11-29 08:09:32.140 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:32 np0005539564 systemd[1]: var-lib-containers-storage-overlay-d4a3172d326dfff56b6ab5405e1f64700af75646fa2a994696c71b5800023a92-merged.mount: Deactivated successfully.
Nov 29 03:09:32 np0005539564 nova_compute[226295]: 2025-11-29 08:09:32.142 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:09:32 np0005539564 nova_compute[226295]: 2025-11-29 08:09:32.144 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:32 np0005539564 nova_compute[226295]: 2025-11-29 08:09:32.146 226310 INFO os_vif [None req-f7284528-c6f8-45c4-a7c3-5d47fa8868dd 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:42:22,bridge_name='br-int',has_traffic_filtering=True,id=d9f42b2a-6b46-481c-8afe-61daef359711,network=Network(4c0a06e3-8d77-4f81-85b4-47e57dafff04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9f42b2a-6b')#033[00m
Nov 29 03:09:32 np0005539564 podman[260774]: 2025-11-29 08:09:32.157026581 +0000 UTC m=+0.108826047 container cleanup b17e8e6fbf0f59a5181b7e04d7e1e4e2a1b8cd4c7ed39dc750609214e4a6411d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:09:32 np0005539564 systemd[1]: libpod-conmon-b17e8e6fbf0f59a5181b7e04d7e1e4e2a1b8cd4c7ed39dc750609214e4a6411d.scope: Deactivated successfully.
Nov 29 03:09:32 np0005539564 nova_compute[226295]: 2025-11-29 08:09:32.352 226310 DEBUG nova.compute.manager [req-82177741-45da-4b46-8139-70bd70a89151 req-c4042753-045e-4202-8c53-efb9947baff6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Received event network-vif-unplugged-d9f42b2a-6b46-481c-8afe-61daef359711 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:32 np0005539564 nova_compute[226295]: 2025-11-29 08:09:32.353 226310 DEBUG oslo_concurrency.lockutils [req-82177741-45da-4b46-8139-70bd70a89151 req-c4042753-045e-4202-8c53-efb9947baff6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6753610d-4170-4449-9727-e7162083a6cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:32 np0005539564 nova_compute[226295]: 2025-11-29 08:09:32.353 226310 DEBUG oslo_concurrency.lockutils [req-82177741-45da-4b46-8139-70bd70a89151 req-c4042753-045e-4202-8c53-efb9947baff6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6753610d-4170-4449-9727-e7162083a6cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:32 np0005539564 nova_compute[226295]: 2025-11-29 08:09:32.353 226310 DEBUG oslo_concurrency.lockutils [req-82177741-45da-4b46-8139-70bd70a89151 req-c4042753-045e-4202-8c53-efb9947baff6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6753610d-4170-4449-9727-e7162083a6cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:32 np0005539564 nova_compute[226295]: 2025-11-29 08:09:32.354 226310 DEBUG nova.compute.manager [req-82177741-45da-4b46-8139-70bd70a89151 req-c4042753-045e-4202-8c53-efb9947baff6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] No waiting events found dispatching network-vif-unplugged-d9f42b2a-6b46-481c-8afe-61daef359711 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:32 np0005539564 nova_compute[226295]: 2025-11-29 08:09:32.354 226310 DEBUG nova.compute.manager [req-82177741-45da-4b46-8139-70bd70a89151 req-c4042753-045e-4202-8c53-efb9947baff6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Received event network-vif-unplugged-d9f42b2a-6b46-481c-8afe-61daef359711 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:09:32 np0005539564 podman[260826]: 2025-11-29 08:09:32.778155715 +0000 UTC m=+0.600672063 container remove b17e8e6fbf0f59a5181b7e04d7e1e4e2a1b8cd4c7ed39dc750609214e4a6411d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 03:09:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:32.788 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9a3864a7-070f-4c1c-a107-727c611bad85]: (4, ('Sat Nov 29 08:09:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04 (b17e8e6fbf0f59a5181b7e04d7e1e4e2a1b8cd4c7ed39dc750609214e4a6411d)\nb17e8e6fbf0f59a5181b7e04d7e1e4e2a1b8cd4c7ed39dc750609214e4a6411d\nSat Nov 29 08:09:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04 (b17e8e6fbf0f59a5181b7e04d7e1e4e2a1b8cd4c7ed39dc750609214e4a6411d)\nb17e8e6fbf0f59a5181b7e04d7e1e4e2a1b8cd4c7ed39dc750609214e4a6411d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:32.791 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ddcc62bb-757e-4bb6-966c-c2370dee4e2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:32.793 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c0a06e3-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:32 np0005539564 nova_compute[226295]: 2025-11-29 08:09:32.796 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:32 np0005539564 kernel: tap4c0a06e3-80: left promiscuous mode
Nov 29 03:09:32 np0005539564 nova_compute[226295]: 2025-11-29 08:09:32.828 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:32.832 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[fe6c08b7-9719-482b-82de-acb7fa8a0934]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:32.859 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[fdac93ae-1f45-4431-a7ea-9760217e55b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:32.860 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[dd927337-6c26-402d-b7f0-378d97f002c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:32.884 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1966a627-53f4-42bd-9b29-a0d541314283]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673594, 'reachable_time': 28143, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260844, 'error': None, 'target': 'ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:32.888 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:09:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:32.888 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[89e74eb3-19b5-4620-9f37-bd39eb94849d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:32.889 139780 INFO neutron.agent.ovn.metadata.agent [-] Port d9f42b2a-6b46-481c-8afe-61daef359711 in datapath 4c0a06e3-8d77-4f81-85b4-47e57dafff04 unbound from our chassis#033[00m
Nov 29 03:09:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:32.890 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4c0a06e3-8d77-4f81-85b4-47e57dafff04, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:09:32 np0005539564 systemd[1]: run-netns-ovnmeta\x2d4c0a06e3\x2d8d77\x2d4f81\x2d85b4\x2d47e57dafff04.mount: Deactivated successfully.
Nov 29 03:09:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:32.891 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[70dd42ee-20e0-4a0d-a128-e5f8def90077]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:32.891 139780 INFO neutron.agent.ovn.metadata.agent [-] Port d9f42b2a-6b46-481c-8afe-61daef359711 in datapath 4c0a06e3-8d77-4f81-85b4-47e57dafff04 unbound from our chassis#033[00m
Nov 29 03:09:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:32.892 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4c0a06e3-8d77-4f81-85b4-47e57dafff04, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:09:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:32.893 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f5d1dcc9-9f0d-4bac-adaa-0a4cd78c90d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:33.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:33.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:33 np0005539564 nova_compute[226295]: 2025-11-29 08:09:33.951 226310 INFO nova.virt.libvirt.driver [None req-f7284528-c6f8-45c4-a7c3-5d47fa8868dd 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Deleting instance files /var/lib/nova/instances/6753610d-4170-4449-9727-e7162083a6cd_del#033[00m
Nov 29 03:09:33 np0005539564 nova_compute[226295]: 2025-11-29 08:09:33.952 226310 INFO nova.virt.libvirt.driver [None req-f7284528-c6f8-45c4-a7c3-5d47fa8868dd 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Deletion of /var/lib/nova/instances/6753610d-4170-4449-9727-e7162083a6cd_del complete#033[00m
Nov 29 03:09:34 np0005539564 nova_compute[226295]: 2025-11-29 08:09:34.038 226310 INFO nova.compute.manager [None req-f7284528-c6f8-45c4-a7c3-5d47fa8868dd 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Took 2.20 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:09:34 np0005539564 nova_compute[226295]: 2025-11-29 08:09:34.038 226310 DEBUG oslo.service.loopingcall [None req-f7284528-c6f8-45c4-a7c3-5d47fa8868dd 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:09:34 np0005539564 nova_compute[226295]: 2025-11-29 08:09:34.039 226310 DEBUG nova.compute.manager [-] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:09:34 np0005539564 nova_compute[226295]: 2025-11-29 08:09:34.039 226310 DEBUG nova.network.neutron [-] [instance: 6753610d-4170-4449-9727-e7162083a6cd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:09:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:34.325 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:34 np0005539564 nova_compute[226295]: 2025-11-29 08:09:34.524 226310 DEBUG nova.compute.manager [req-aa09d20d-f82e-4572-add1-3541448faa68 req-68e70658-1ea3-45b6-8256-a1e784d82ee9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Received event network-vif-plugged-d9f42b2a-6b46-481c-8afe-61daef359711 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:34 np0005539564 nova_compute[226295]: 2025-11-29 08:09:34.524 226310 DEBUG oslo_concurrency.lockutils [req-aa09d20d-f82e-4572-add1-3541448faa68 req-68e70658-1ea3-45b6-8256-a1e784d82ee9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6753610d-4170-4449-9727-e7162083a6cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:34 np0005539564 nova_compute[226295]: 2025-11-29 08:09:34.525 226310 DEBUG oslo_concurrency.lockutils [req-aa09d20d-f82e-4572-add1-3541448faa68 req-68e70658-1ea3-45b6-8256-a1e784d82ee9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6753610d-4170-4449-9727-e7162083a6cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:34 np0005539564 nova_compute[226295]: 2025-11-29 08:09:34.526 226310 DEBUG oslo_concurrency.lockutils [req-aa09d20d-f82e-4572-add1-3541448faa68 req-68e70658-1ea3-45b6-8256-a1e784d82ee9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6753610d-4170-4449-9727-e7162083a6cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:34 np0005539564 nova_compute[226295]: 2025-11-29 08:09:34.526 226310 DEBUG nova.compute.manager [req-aa09d20d-f82e-4572-add1-3541448faa68 req-68e70658-1ea3-45b6-8256-a1e784d82ee9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] No waiting events found dispatching network-vif-plugged-d9f42b2a-6b46-481c-8afe-61daef359711 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:34 np0005539564 nova_compute[226295]: 2025-11-29 08:09:34.527 226310 WARNING nova.compute.manager [req-aa09d20d-f82e-4572-add1-3541448faa68 req-68e70658-1ea3-45b6-8256-a1e784d82ee9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Received unexpected event network-vif-plugged-d9f42b2a-6b46-481c-8afe-61daef359711 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:09:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:09:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:35.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:09:35 np0005539564 nova_compute[226295]: 2025-11-29 08:09:35.596 226310 DEBUG nova.network.neutron [-] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:35 np0005539564 nova_compute[226295]: 2025-11-29 08:09:35.631 226310 INFO nova.compute.manager [-] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Took 1.59 seconds to deallocate network for instance.#033[00m
Nov 29 03:09:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:35 np0005539564 nova_compute[226295]: 2025-11-29 08:09:35.699 226310 DEBUG oslo_concurrency.lockutils [None req-f7284528-c6f8-45c4-a7c3-5d47fa8868dd 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:35 np0005539564 nova_compute[226295]: 2025-11-29 08:09:35.700 226310 DEBUG oslo_concurrency.lockutils [None req-f7284528-c6f8-45c4-a7c3-5d47fa8868dd 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:35 np0005539564 nova_compute[226295]: 2025-11-29 08:09:35.776 226310 DEBUG oslo_concurrency.processutils [None req-f7284528-c6f8-45c4-a7c3-5d47fa8868dd 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:35.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:35 np0005539564 nova_compute[226295]: 2025-11-29 08:09:35.831 226310 DEBUG nova.compute.manager [req-350bc3e4-c7d0-4a8b-97ba-c559dc8a58e8 req-677ad7d3-66c1-4d66-8b97-e6b48f6385dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Received event network-vif-deleted-d9f42b2a-6b46-481c-8afe-61daef359711 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:36 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3874596961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:36 np0005539564 nova_compute[226295]: 2025-11-29 08:09:36.234 226310 DEBUG oslo_concurrency.processutils [None req-f7284528-c6f8-45c4-a7c3-5d47fa8868dd 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:36 np0005539564 nova_compute[226295]: 2025-11-29 08:09:36.244 226310 DEBUG nova.compute.provider_tree [None req-f7284528-c6f8-45c4-a7c3-5d47fa8868dd 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:09:36 np0005539564 nova_compute[226295]: 2025-11-29 08:09:36.279 226310 DEBUG nova.scheduler.client.report [None req-f7284528-c6f8-45c4-a7c3-5d47fa8868dd 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:09:36 np0005539564 nova_compute[226295]: 2025-11-29 08:09:36.308 226310 DEBUG oslo_concurrency.lockutils [None req-f7284528-c6f8-45c4-a7c3-5d47fa8868dd 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:36 np0005539564 nova_compute[226295]: 2025-11-29 08:09:36.343 226310 INFO nova.scheduler.client.report [None req-f7284528-c6f8-45c4-a7c3-5d47fa8868dd 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Deleted allocations for instance 6753610d-4170-4449-9727-e7162083a6cd#033[00m
Nov 29 03:09:36 np0005539564 nova_compute[226295]: 2025-11-29 08:09:36.451 226310 DEBUG oslo_concurrency.lockutils [None req-f7284528-c6f8-45c4-a7c3-5d47fa8868dd 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "6753610d-4170-4449-9727-e7162083a6cd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:36 np0005539564 nova_compute[226295]: 2025-11-29 08:09:36.931 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:37 np0005539564 nova_compute[226295]: 2025-11-29 08:09:37.140 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:37 np0005539564 nova_compute[226295]: 2025-11-29 08:09:37.372 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:37.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:37.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:39.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:39.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:41 np0005539564 nova_compute[226295]: 2025-11-29 08:09:41.053 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:41.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:41.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:41 np0005539564 nova_compute[226295]: 2025-11-29 08:09:41.933 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:42 np0005539564 nova_compute[226295]: 2025-11-29 08:09:42.161 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:42 np0005539564 nova_compute[226295]: 2025-11-29 08:09:42.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:42 np0005539564 nova_compute[226295]: 2025-11-29 08:09:42.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:42 np0005539564 nova_compute[226295]: 2025-11-29 08:09:42.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:09:43 np0005539564 nova_compute[226295]: 2025-11-29 08:09:43.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:43.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:43.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:44 np0005539564 nova_compute[226295]: 2025-11-29 08:09:44.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:44 np0005539564 nova_compute[226295]: 2025-11-29 08:09:44.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:09:44 np0005539564 nova_compute[226295]: 2025-11-29 08:09:44.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:09:44 np0005539564 nova_compute[226295]: 2025-11-29 08:09:44.381 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:09:45 np0005539564 nova_compute[226295]: 2025-11-29 08:09:45.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:45.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:45.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:46 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0.
Nov 29 03:09:46 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:46.681279) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:09:46 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76
Nov 29 03:09:46 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403786681367, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2356, "num_deletes": 254, "total_data_size": 5238860, "memory_usage": 5320496, "flush_reason": "Manual Compaction"}
Nov 29 03:09:46 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started
Nov 29 03:09:46 np0005539564 nova_compute[226295]: 2025-11-29 08:09:46.733 226310 DEBUG oslo_concurrency.lockutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "ce907ab6-8db5-48f6-9380-13c236bae1ce" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:46 np0005539564 nova_compute[226295]: 2025-11-29 08:09:46.734 226310 DEBUG oslo_concurrency.lockutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "ce907ab6-8db5-48f6-9380-13c236bae1ce" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:46 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403786777226, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 2072046, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40000, "largest_seqno": 42351, "table_properties": {"data_size": 2064997, "index_size": 3675, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 19650, "raw_average_key_size": 21, "raw_value_size": 2048914, "raw_average_value_size": 2234, "num_data_blocks": 163, "num_entries": 917, "num_filter_entries": 917, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403598, "oldest_key_time": 1764403598, "file_creation_time": 1764403786, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:09:46 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 96002 microseconds, and 7307 cpu microseconds.
Nov 29 03:09:46 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:09:46 np0005539564 nova_compute[226295]: 2025-11-29 08:09:46.789 226310 DEBUG nova.compute.manager [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:09:46 np0005539564 nova_compute[226295]: 2025-11-29 08:09:46.881 226310 DEBUG oslo_concurrency.lockutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:46 np0005539564 nova_compute[226295]: 2025-11-29 08:09:46.882 226310 DEBUG oslo_concurrency.lockutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:46 np0005539564 nova_compute[226295]: 2025-11-29 08:09:46.890 226310 DEBUG nova.virt.hardware [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:09:46 np0005539564 nova_compute[226295]: 2025-11-29 08:09:46.890 226310 INFO nova.compute.claims [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:09:46 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:46.777294) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 2072046 bytes OK
Nov 29 03:09:46 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:46.777323) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started
Nov 29 03:09:46 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:46.916577) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done
Nov 29 03:09:46 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:46.916642) EVENT_LOG_v1 {"time_micros": 1764403786916626, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:09:46 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:46.916676) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:09:46 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 5228318, prev total WAL file size 5228318, number of live WAL files 2.
Nov 29 03:09:46 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:09:46 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:46.919277) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323630' seq:72057594037927935, type:22 .. '6D6772737461740031353132' seq:0, type:0; will stop at (end)
Nov 29 03:09:46 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:09:46 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(2023KB)], [75(10MB)]
Nov 29 03:09:46 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403786919391, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 13093392, "oldest_snapshot_seqno": -1}
Nov 29 03:09:46 np0005539564 nova_compute[226295]: 2025-11-29 08:09:46.936 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:47 np0005539564 nova_compute[226295]: 2025-11-29 08:09:47.061 226310 DEBUG oslo_concurrency.processutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:47 np0005539564 nova_compute[226295]: 2025-11-29 08:09:47.113 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403772.1115477, 6753610d-4170-4449-9727-e7162083a6cd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:47 np0005539564 nova_compute[226295]: 2025-11-29 08:09:47.114 226310 INFO nova.compute.manager [-] [instance: 6753610d-4170-4449-9727-e7162083a6cd] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:09:47 np0005539564 nova_compute[226295]: 2025-11-29 08:09:47.142 226310 DEBUG nova.compute.manager [None req-725744c9-37b0-4099-aba9-6e90dc4fc507 - - - - - -] [instance: 6753610d-4170-4449-9727-e7162083a6cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:47 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 7283 keys, 10601099 bytes, temperature: kUnknown
Nov 29 03:09:47 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403787158399, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 10601099, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10553436, "index_size": 28351, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18245, "raw_key_size": 187393, "raw_average_key_size": 25, "raw_value_size": 10424308, "raw_average_value_size": 1431, "num_data_blocks": 1125, "num_entries": 7283, "num_filter_entries": 7283, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764403786, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:09:47 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:09:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:47.158774) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 10601099 bytes
Nov 29 03:09:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:47.160956) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 54.8 rd, 44.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 10.5 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(11.4) write-amplify(5.1) OK, records in: 7718, records dropped: 435 output_compression: NoCompression
Nov 29 03:09:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:47.160998) EVENT_LOG_v1 {"time_micros": 1764403787160979, "job": 46, "event": "compaction_finished", "compaction_time_micros": 239103, "compaction_time_cpu_micros": 39058, "output_level": 6, "num_output_files": 1, "total_output_size": 10601099, "num_input_records": 7718, "num_output_records": 7283, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:09:47 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:09:47 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403787161733, "job": 46, "event": "table_file_deletion", "file_number": 77}
Nov 29 03:09:47 np0005539564 nova_compute[226295]: 2025-11-29 08:09:47.162 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:47 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:09:47 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403787165126, "job": 46, "event": "table_file_deletion", "file_number": 75}
Nov 29 03:09:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:46.919090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:09:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:47.165366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:09:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:47.165374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:09:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:47.165376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:09:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:47.165378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:09:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:47.165380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:09:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:09:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:47.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:09:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:47 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2493137400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:47 np0005539564 nova_compute[226295]: 2025-11-29 08:09:47.653 226310 DEBUG oslo_concurrency.processutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:47 np0005539564 nova_compute[226295]: 2025-11-29 08:09:47.662 226310 DEBUG nova.compute.provider_tree [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:09:47 np0005539564 nova_compute[226295]: 2025-11-29 08:09:47.688 226310 DEBUG nova.scheduler.client.report [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:09:47 np0005539564 nova_compute[226295]: 2025-11-29 08:09:47.718 226310 DEBUG oslo_concurrency.lockutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:47 np0005539564 nova_compute[226295]: 2025-11-29 08:09:47.719 226310 DEBUG nova.compute.manager [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:09:47 np0005539564 nova_compute[226295]: 2025-11-29 08:09:47.777 226310 DEBUG nova.compute.manager [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:09:47 np0005539564 nova_compute[226295]: 2025-11-29 08:09:47.777 226310 DEBUG nova.network.neutron [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:09:47 np0005539564 nova_compute[226295]: 2025-11-29 08:09:47.819 226310 INFO nova.virt.libvirt.driver [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:09:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:47.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:47 np0005539564 nova_compute[226295]: 2025-11-29 08:09:47.856 226310 DEBUG nova.compute.manager [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:09:47 np0005539564 nova_compute[226295]: 2025-11-29 08:09:47.972 226310 DEBUG nova.compute.manager [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:09:47 np0005539564 nova_compute[226295]: 2025-11-29 08:09:47.974 226310 DEBUG nova.virt.libvirt.driver [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:09:47 np0005539564 nova_compute[226295]: 2025-11-29 08:09:47.975 226310 INFO nova.virt.libvirt.driver [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Creating image(s)#033[00m
Nov 29 03:09:48 np0005539564 nova_compute[226295]: 2025-11-29 08:09:48.017 226310 DEBUG nova.storage.rbd_utils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] rbd image ce907ab6-8db5-48f6-9380-13c236bae1ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:48 np0005539564 nova_compute[226295]: 2025-11-29 08:09:48.050 226310 DEBUG nova.storage.rbd_utils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] rbd image ce907ab6-8db5-48f6-9380-13c236bae1ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:48 np0005539564 nova_compute[226295]: 2025-11-29 08:09:48.086 226310 DEBUG nova.storage.rbd_utils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] rbd image ce907ab6-8db5-48f6-9380-13c236bae1ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:48 np0005539564 nova_compute[226295]: 2025-11-29 08:09:48.091 226310 DEBUG oslo_concurrency.processutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:48 np0005539564 nova_compute[226295]: 2025-11-29 08:09:48.126 226310 DEBUG nova.policy [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7c90fe1780904a6098015abc66b38d9d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'baca94adaa5145a6b9cef930bff28fa4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:09:48 np0005539564 nova_compute[226295]: 2025-11-29 08:09:48.171 226310 DEBUG oslo_concurrency.processutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:48 np0005539564 nova_compute[226295]: 2025-11-29 08:09:48.173 226310 DEBUG oslo_concurrency.lockutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:48 np0005539564 nova_compute[226295]: 2025-11-29 08:09:48.174 226310 DEBUG oslo_concurrency.lockutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:48 np0005539564 nova_compute[226295]: 2025-11-29 08:09:48.174 226310 DEBUG oslo_concurrency.lockutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:48 np0005539564 nova_compute[226295]: 2025-11-29 08:09:48.209 226310 DEBUG nova.storage.rbd_utils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] rbd image ce907ab6-8db5-48f6-9380-13c236bae1ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:48 np0005539564 nova_compute[226295]: 2025-11-29 08:09:48.214 226310 DEBUG oslo_concurrency.processutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 ce907ab6-8db5-48f6-9380-13c236bae1ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:48 np0005539564 nova_compute[226295]: 2025-11-29 08:09:48.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:48 np0005539564 nova_compute[226295]: 2025-11-29 08:09:48.498 226310 DEBUG oslo_concurrency.processutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 ce907ab6-8db5-48f6-9380-13c236bae1ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:48 np0005539564 nova_compute[226295]: 2025-11-29 08:09:48.586 226310 DEBUG nova.storage.rbd_utils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] resizing rbd image ce907ab6-8db5-48f6-9380-13c236bae1ce_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:09:48 np0005539564 nova_compute[226295]: 2025-11-29 08:09:48.704 226310 DEBUG nova.objects.instance [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lazy-loading 'migration_context' on Instance uuid ce907ab6-8db5-48f6-9380-13c236bae1ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:48 np0005539564 nova_compute[226295]: 2025-11-29 08:09:48.724 226310 DEBUG nova.virt.libvirt.driver [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:09:48 np0005539564 nova_compute[226295]: 2025-11-29 08:09:48.724 226310 DEBUG nova.virt.libvirt.driver [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Ensure instance console log exists: /var/lib/nova/instances/ce907ab6-8db5-48f6-9380-13c236bae1ce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:09:48 np0005539564 nova_compute[226295]: 2025-11-29 08:09:48.725 226310 DEBUG oslo_concurrency.lockutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:48 np0005539564 nova_compute[226295]: 2025-11-29 08:09:48.725 226310 DEBUG oslo_concurrency.lockutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:48 np0005539564 nova_compute[226295]: 2025-11-29 08:09:48.725 226310 DEBUG oslo_concurrency.lockutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:49 np0005539564 nova_compute[226295]: 2025-11-29 08:09:49.605 226310 DEBUG nova.network.neutron [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Successfully created port: 90ea25d6-c92d-43c5-ac73-129da2340c50 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:09:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:49.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:49.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:50 np0005539564 nova_compute[226295]: 2025-11-29 08:09:50.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:50 np0005539564 nova_compute[226295]: 2025-11-29 08:09:50.381 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:50 np0005539564 nova_compute[226295]: 2025-11-29 08:09:50.381 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:50 np0005539564 nova_compute[226295]: 2025-11-29 08:09:50.381 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:50 np0005539564 nova_compute[226295]: 2025-11-29 08:09:50.382 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:09:50 np0005539564 nova_compute[226295]: 2025-11-29 08:09:50.382 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:50 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4061477850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:50 np0005539564 nova_compute[226295]: 2025-11-29 08:09:50.837 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:51 np0005539564 nova_compute[226295]: 2025-11-29 08:09:51.081 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:09:51 np0005539564 nova_compute[226295]: 2025-11-29 08:09:51.083 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4569MB free_disk=20.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:09:51 np0005539564 nova_compute[226295]: 2025-11-29 08:09:51.084 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:51 np0005539564 nova_compute[226295]: 2025-11-29 08:09:51.084 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:51 np0005539564 nova_compute[226295]: 2025-11-29 08:09:51.152 226310 DEBUG nova.network.neutron [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Successfully updated port: 90ea25d6-c92d-43c5-ac73-129da2340c50 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:09:51 np0005539564 nova_compute[226295]: 2025-11-29 08:09:51.188 226310 DEBUG oslo_concurrency.lockutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "refresh_cache-ce907ab6-8db5-48f6-9380-13c236bae1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:51 np0005539564 nova_compute[226295]: 2025-11-29 08:09:51.188 226310 DEBUG oslo_concurrency.lockutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquired lock "refresh_cache-ce907ab6-8db5-48f6-9380-13c236bae1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:51 np0005539564 nova_compute[226295]: 2025-11-29 08:09:51.188 226310 DEBUG nova.network.neutron [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:09:51 np0005539564 nova_compute[226295]: 2025-11-29 08:09:51.198 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance ce907ab6-8db5-48f6-9380-13c236bae1ce actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:09:51 np0005539564 nova_compute[226295]: 2025-11-29 08:09:51.198 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:09:51 np0005539564 nova_compute[226295]: 2025-11-29 08:09:51.199 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:09:51 np0005539564 nova_compute[226295]: 2025-11-29 08:09:51.258 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:51 np0005539564 nova_compute[226295]: 2025-11-29 08:09:51.314 226310 DEBUG nova.compute.manager [req-537b22ab-2e1f-4774-b01d-19188c14071e req-9b9ecc91-cc1f-4e7e-b8f6-be25e8057a87 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Received event network-changed-90ea25d6-c92d-43c5-ac73-129da2340c50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:51 np0005539564 nova_compute[226295]: 2025-11-29 08:09:51.315 226310 DEBUG nova.compute.manager [req-537b22ab-2e1f-4774-b01d-19188c14071e req-9b9ecc91-cc1f-4e7e-b8f6-be25e8057a87 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Refreshing instance network info cache due to event network-changed-90ea25d6-c92d-43c5-ac73-129da2340c50. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:09:51 np0005539564 nova_compute[226295]: 2025-11-29 08:09:51.315 226310 DEBUG oslo_concurrency.lockutils [req-537b22ab-2e1f-4774-b01d-19188c14071e req-9b9ecc91-cc1f-4e7e-b8f6-be25e8057a87 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-ce907ab6-8db5-48f6-9380-13c236bae1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:51.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:51 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:51 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4091256513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:51 np0005539564 nova_compute[226295]: 2025-11-29 08:09:51.780 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:51 np0005539564 nova_compute[226295]: 2025-11-29 08:09:51.783 226310 DEBUG nova.network.neutron [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:09:51 np0005539564 nova_compute[226295]: 2025-11-29 08:09:51.790 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:09:51 np0005539564 nova_compute[226295]: 2025-11-29 08:09:51.811 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:09:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:51.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:51 np0005539564 nova_compute[226295]: 2025-11-29 08:09:51.842 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:09:51 np0005539564 nova_compute[226295]: 2025-11-29 08:09:51.843 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:51 np0005539564 nova_compute[226295]: 2025-11-29 08:09:51.939 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:52 np0005539564 nova_compute[226295]: 2025-11-29 08:09:52.164 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.144 226310 DEBUG nova.network.neutron [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Updating instance_info_cache with network_info: [{"id": "90ea25d6-c92d-43c5-ac73-129da2340c50", "address": "fa:16:3e:26:b2:54", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90ea25d6-c9", "ovs_interfaceid": "90ea25d6-c92d-43c5-ac73-129da2340c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.169 226310 DEBUG oslo_concurrency.lockutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Releasing lock "refresh_cache-ce907ab6-8db5-48f6-9380-13c236bae1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.170 226310 DEBUG nova.compute.manager [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Instance network_info: |[{"id": "90ea25d6-c92d-43c5-ac73-129da2340c50", "address": "fa:16:3e:26:b2:54", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90ea25d6-c9", "ovs_interfaceid": "90ea25d6-c92d-43c5-ac73-129da2340c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.170 226310 DEBUG oslo_concurrency.lockutils [req-537b22ab-2e1f-4774-b01d-19188c14071e req-9b9ecc91-cc1f-4e7e-b8f6-be25e8057a87 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-ce907ab6-8db5-48f6-9380-13c236bae1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.171 226310 DEBUG nova.network.neutron [req-537b22ab-2e1f-4774-b01d-19188c14071e req-9b9ecc91-cc1f-4e7e-b8f6-be25e8057a87 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Refreshing network info cache for port 90ea25d6-c92d-43c5-ac73-129da2340c50 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.176 226310 DEBUG nova.virt.libvirt.driver [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Start _get_guest_xml network_info=[{"id": "90ea25d6-c92d-43c5-ac73-129da2340c50", "address": "fa:16:3e:26:b2:54", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90ea25d6-c9", "ovs_interfaceid": "90ea25d6-c92d-43c5-ac73-129da2340c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:40:38Z,direct_url=<?>,disk_format='qcow2',id=ed489666-5fa2-4ea4-8005-7a7505ac1b78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': 'ed489666-5fa2-4ea4-8005-7a7505ac1b78'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.183 226310 WARNING nova.virt.libvirt.driver [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.189 226310 DEBUG nova.virt.libvirt.host [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.190 226310 DEBUG nova.virt.libvirt.host [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.194 226310 DEBUG nova.virt.libvirt.host [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.195 226310 DEBUG nova.virt.libvirt.host [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.197 226310 DEBUG nova.virt.libvirt.driver [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.197 226310 DEBUG nova.virt.hardware [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:40:38Z,direct_url=<?>,disk_format='qcow2',id=ed489666-5fa2-4ea4-8005-7a7505ac1b78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.198 226310 DEBUG nova.virt.hardware [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.198 226310 DEBUG nova.virt.hardware [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.199 226310 DEBUG nova.virt.hardware [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.199 226310 DEBUG nova.virt.hardware [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.200 226310 DEBUG nova.virt.hardware [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.200 226310 DEBUG nova.virt.hardware [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.200 226310 DEBUG nova.virt.hardware [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.201 226310 DEBUG nova.virt.hardware [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.201 226310 DEBUG nova.virt.hardware [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.202 226310 DEBUG nova.virt.hardware [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.207 226310 DEBUG oslo_concurrency.processutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:53.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:53 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1886974198' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.718 226310 DEBUG oslo_concurrency.processutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.761 226310 DEBUG nova.storage.rbd_utils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] rbd image ce907ab6-8db5-48f6-9380-13c236bae1ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.767 226310 DEBUG oslo_concurrency.processutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:53.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:53 np0005539564 nova_compute[226295]: 2025-11-29 08:09:53.845 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1613395341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.229 226310 DEBUG oslo_concurrency.processutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.231 226310 DEBUG nova.virt.libvirt.vif [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-129702678',display_name='tempest-ListServerFiltersTestJSON-instance-129702678',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-129702678',id=99,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='baca94adaa5145a6b9cef930bff28fa4',ramdisk_id='',reservation_id='r-rcamux6u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-207904478',owner_user_name='tempest-L
istServerFiltersTestJSON-207904478-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:47Z,user_data=None,user_id='7c90fe1780904a6098015abc66b38d9d',uuid=ce907ab6-8db5-48f6-9380-13c236bae1ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "90ea25d6-c92d-43c5-ac73-129da2340c50", "address": "fa:16:3e:26:b2:54", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90ea25d6-c9", "ovs_interfaceid": "90ea25d6-c92d-43c5-ac73-129da2340c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.232 226310 DEBUG nova.network.os_vif_util [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Converting VIF {"id": "90ea25d6-c92d-43c5-ac73-129da2340c50", "address": "fa:16:3e:26:b2:54", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90ea25d6-c9", "ovs_interfaceid": "90ea25d6-c92d-43c5-ac73-129da2340c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.233 226310 DEBUG nova.network.os_vif_util [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:b2:54,bridge_name='br-int',has_traffic_filtering=True,id=90ea25d6-c92d-43c5-ac73-129da2340c50,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90ea25d6-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.236 226310 DEBUG nova.objects.instance [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lazy-loading 'pci_devices' on Instance uuid ce907ab6-8db5-48f6-9380-13c236bae1ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.263 226310 DEBUG nova.virt.libvirt.driver [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:09:54 np0005539564 nova_compute[226295]:  <uuid>ce907ab6-8db5-48f6-9380-13c236bae1ce</uuid>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:  <name>instance-00000063</name>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-129702678</nova:name>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:09:53</nova:creationTime>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:09:54 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:        <nova:user uuid="7c90fe1780904a6098015abc66b38d9d">tempest-ListServerFiltersTestJSON-207904478-project-member</nova:user>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:        <nova:project uuid="baca94adaa5145a6b9cef930bff28fa4">tempest-ListServerFiltersTestJSON-207904478</nova:project>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="ed489666-5fa2-4ea4-8005-7a7505ac1b78"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:        <nova:port uuid="90ea25d6-c92d-43c5-ac73-129da2340c50">
Nov 29 03:09:54 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <entry name="serial">ce907ab6-8db5-48f6-9380-13c236bae1ce</entry>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <entry name="uuid">ce907ab6-8db5-48f6-9380-13c236bae1ce</entry>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/ce907ab6-8db5-48f6-9380-13c236bae1ce_disk">
Nov 29 03:09:54 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:09:54 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/ce907ab6-8db5-48f6-9380-13c236bae1ce_disk.config">
Nov 29 03:09:54 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:09:54 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:26:b2:54"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <target dev="tap90ea25d6-c9"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/ce907ab6-8db5-48f6-9380-13c236bae1ce/console.log" append="off"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:09:54 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:09:54 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:09:54 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:09:54 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.264 226310 DEBUG nova.compute.manager [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Preparing to wait for external event network-vif-plugged-90ea25d6-c92d-43c5-ac73-129da2340c50 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.264 226310 DEBUG oslo_concurrency.lockutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "ce907ab6-8db5-48f6-9380-13c236bae1ce-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.265 226310 DEBUG oslo_concurrency.lockutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "ce907ab6-8db5-48f6-9380-13c236bae1ce-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.265 226310 DEBUG oslo_concurrency.lockutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "ce907ab6-8db5-48f6-9380-13c236bae1ce-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.266 226310 DEBUG nova.virt.libvirt.vif [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-129702678',display_name='tempest-ListServerFiltersTestJSON-instance-129702678',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-129702678',id=99,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='baca94adaa5145a6b9cef930bff28fa4',ramdisk_id='',reservation_id='r-rcamux6u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-207904478',owner_user_name=
'tempest-ListServerFiltersTestJSON-207904478-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:47Z,user_data=None,user_id='7c90fe1780904a6098015abc66b38d9d',uuid=ce907ab6-8db5-48f6-9380-13c236bae1ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "90ea25d6-c92d-43c5-ac73-129da2340c50", "address": "fa:16:3e:26:b2:54", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90ea25d6-c9", "ovs_interfaceid": "90ea25d6-c92d-43c5-ac73-129da2340c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.266 226310 DEBUG nova.network.os_vif_util [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Converting VIF {"id": "90ea25d6-c92d-43c5-ac73-129da2340c50", "address": "fa:16:3e:26:b2:54", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90ea25d6-c9", "ovs_interfaceid": "90ea25d6-c92d-43c5-ac73-129da2340c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.267 226310 DEBUG nova.network.os_vif_util [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:b2:54,bridge_name='br-int',has_traffic_filtering=True,id=90ea25d6-c92d-43c5-ac73-129da2340c50,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90ea25d6-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.267 226310 DEBUG os_vif [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:b2:54,bridge_name='br-int',has_traffic_filtering=True,id=90ea25d6-c92d-43c5-ac73-129da2340c50,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90ea25d6-c9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.268 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.268 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.268 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.270 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.271 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap90ea25d6-c9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.271 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap90ea25d6-c9, col_values=(('external_ids', {'iface-id': '90ea25d6-c92d-43c5-ac73-129da2340c50', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:b2:54', 'vm-uuid': 'ce907ab6-8db5-48f6-9380-13c236bae1ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.273 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:54 np0005539564 NetworkManager[48997]: <info>  [1764403794.2744] manager: (tap90ea25d6-c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/168)
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.275 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.279 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.280 226310 INFO os_vif [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:b2:54,bridge_name='br-int',has_traffic_filtering=True,id=90ea25d6-c92d-43c5-ac73-129da2340c50,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90ea25d6-c9')#033[00m
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.362 226310 DEBUG nova.virt.libvirt.driver [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.363 226310 DEBUG nova.virt.libvirt.driver [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.363 226310 DEBUG nova.virt.libvirt.driver [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] No VIF found with MAC fa:16:3e:26:b2:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.364 226310 INFO nova.virt.libvirt.driver [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Using config drive#033[00m
Nov 29 03:09:54 np0005539564 nova_compute[226295]: 2025-11-29 08:09:54.401 226310 DEBUG nova.storage.rbd_utils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] rbd image ce907ab6-8db5-48f6-9380-13c236bae1ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:54 np0005539564 podman[261190]: 2025-11-29 08:09:54.54188839 +0000 UTC m=+0.090119012 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:09:54 np0005539564 podman[261191]: 2025-11-29 08:09:54.566524785 +0000 UTC m=+0.110541384 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Nov 29 03:09:54 np0005539564 podman[261189]: 2025-11-29 08:09:54.585775264 +0000 UTC m=+0.133997936 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #79. Immutable memtables: 0.
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:54.629365) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 79
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403794629404, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 347, "num_deletes": 251, "total_data_size": 240797, "memory_usage": 248376, "flush_reason": "Manual Compaction"}
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #80: started
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403794633043, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 80, "file_size": 158259, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42357, "largest_seqno": 42698, "table_properties": {"data_size": 156127, "index_size": 296, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5619, "raw_average_key_size": 18, "raw_value_size": 151842, "raw_average_value_size": 509, "num_data_blocks": 13, "num_entries": 298, "num_filter_entries": 298, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403787, "oldest_key_time": 1764403787, "file_creation_time": 1764403794, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 3726 microseconds, and 1198 cpu microseconds.
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:54.633089) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #80: 158259 bytes OK
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:54.633109) [db/memtable_list.cc:519] [default] Level-0 commit table #80 started
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:54.634720) [db/memtable_list.cc:722] [default] Level-0 commit table #80: memtable #1 done
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:54.634742) EVENT_LOG_v1 {"time_micros": 1764403794634735, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:54.634762) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 238381, prev total WAL file size 238381, number of live WAL files 2.
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000076.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:54.635395) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [80(154KB)], [78(10MB)]
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403794635472, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [80], "files_L6": [78], "score": -1, "input_data_size": 10759358, "oldest_snapshot_seqno": -1}
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #81: 7071 keys, 8781276 bytes, temperature: kUnknown
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403794773331, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 81, "file_size": 8781276, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8736739, "index_size": 25737, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17733, "raw_key_size": 183758, "raw_average_key_size": 25, "raw_value_size": 8612906, "raw_average_value_size": 1218, "num_data_blocks": 1007, "num_entries": 7071, "num_filter_entries": 7071, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764403794, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 81, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:54.773869) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8781276 bytes
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:54.775668) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 78.0 rd, 63.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.1 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(123.5) write-amplify(55.5) OK, records in: 7581, records dropped: 510 output_compression: NoCompression
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:54.775698) EVENT_LOG_v1 {"time_micros": 1764403794775685, "job": 48, "event": "compaction_finished", "compaction_time_micros": 137970, "compaction_time_cpu_micros": 41461, "output_level": 6, "num_output_files": 1, "total_output_size": 8781276, "num_input_records": 7581, "num_output_records": 7071, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403794775932, "job": 48, "event": "table_file_deletion", "file_number": 80}
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000078.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403794779111, "job": 48, "event": "table_file_deletion", "file_number": 78}
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:54.635251) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:54.779268) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:54.779275) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:54.779279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:54.779282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:09:54 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:09:54.779284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:09:55 np0005539564 nova_compute[226295]: 2025-11-29 08:09:55.042 226310 INFO nova.virt.libvirt.driver [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Creating config drive at /var/lib/nova/instances/ce907ab6-8db5-48f6-9380-13c236bae1ce/disk.config#033[00m
Nov 29 03:09:55 np0005539564 nova_compute[226295]: 2025-11-29 08:09:55.048 226310 DEBUG oslo_concurrency.processutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ce907ab6-8db5-48f6-9380-13c236bae1ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7tttj6m6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:55 np0005539564 nova_compute[226295]: 2025-11-29 08:09:55.140 226310 DEBUG nova.network.neutron [req-537b22ab-2e1f-4774-b01d-19188c14071e req-9b9ecc91-cc1f-4e7e-b8f6-be25e8057a87 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Updated VIF entry in instance network info cache for port 90ea25d6-c92d-43c5-ac73-129da2340c50. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:09:55 np0005539564 nova_compute[226295]: 2025-11-29 08:09:55.141 226310 DEBUG nova.network.neutron [req-537b22ab-2e1f-4774-b01d-19188c14071e req-9b9ecc91-cc1f-4e7e-b8f6-be25e8057a87 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Updating instance_info_cache with network_info: [{"id": "90ea25d6-c92d-43c5-ac73-129da2340c50", "address": "fa:16:3e:26:b2:54", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90ea25d6-c9", "ovs_interfaceid": "90ea25d6-c92d-43c5-ac73-129da2340c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:55 np0005539564 nova_compute[226295]: 2025-11-29 08:09:55.164 226310 DEBUG oslo_concurrency.lockutils [req-537b22ab-2e1f-4774-b01d-19188c14071e req-9b9ecc91-cc1f-4e7e-b8f6-be25e8057a87 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-ce907ab6-8db5-48f6-9380-13c236bae1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:55 np0005539564 nova_compute[226295]: 2025-11-29 08:09:55.200 226310 DEBUG oslo_concurrency.processutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ce907ab6-8db5-48f6-9380-13c236bae1ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7tttj6m6" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:55 np0005539564 nova_compute[226295]: 2025-11-29 08:09:55.233 226310 DEBUG nova.storage.rbd_utils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] rbd image ce907ab6-8db5-48f6-9380-13c236bae1ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:55 np0005539564 nova_compute[226295]: 2025-11-29 08:09:55.239 226310 DEBUG oslo_concurrency.processutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ce907ab6-8db5-48f6-9380-13c236bae1ce/disk.config ce907ab6-8db5-48f6-9380-13c236bae1ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:55 np0005539564 nova_compute[226295]: 2025-11-29 08:09:55.410 226310 DEBUG oslo_concurrency.processutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ce907ab6-8db5-48f6-9380-13c236bae1ce/disk.config ce907ab6-8db5-48f6-9380-13c236bae1ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:55 np0005539564 nova_compute[226295]: 2025-11-29 08:09:55.411 226310 INFO nova.virt.libvirt.driver [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Deleting local config drive /var/lib/nova/instances/ce907ab6-8db5-48f6-9380-13c236bae1ce/disk.config because it was imported into RBD.#033[00m
Nov 29 03:09:55 np0005539564 kernel: tap90ea25d6-c9: entered promiscuous mode
Nov 29 03:09:55 np0005539564 NetworkManager[48997]: <info>  [1764403795.4783] manager: (tap90ea25d6-c9): new Tun device (/org/freedesktop/NetworkManager/Devices/169)
Nov 29 03:09:55 np0005539564 ovn_controller[130591]: 2025-11-29T08:09:55Z|00329|binding|INFO|Claiming lport 90ea25d6-c92d-43c5-ac73-129da2340c50 for this chassis.
Nov 29 03:09:55 np0005539564 ovn_controller[130591]: 2025-11-29T08:09:55Z|00330|binding|INFO|90ea25d6-c92d-43c5-ac73-129da2340c50: Claiming fa:16:3e:26:b2:54 10.100.0.3
Nov 29 03:09:55 np0005539564 nova_compute[226295]: 2025-11-29 08:09:55.480 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:55 np0005539564 nova_compute[226295]: 2025-11-29 08:09:55.486 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.499 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:b2:54 10.100.0.3'], port_security=['fa:16:3e:26:b2:54 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ce907ab6-8db5-48f6-9380-13c236bae1ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'baca94adaa5145a6b9cef930bff28fa4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3c333182-abc9-4e1c-9562-d9522d2eaaba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b69ef350-fb24-4945-9405-01b7ba3f6aca, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=90ea25d6-c92d-43c5-ac73-129da2340c50) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.500 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 90ea25d6-c92d-43c5-ac73-129da2340c50 in datapath 9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 bound to our chassis#033[00m
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.502 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9a0b70e3-1894-47e1-bc43-1721fdb1c9d6#033[00m
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.521 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[12d39f40-a85b-4920-bce9-79c332c611cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.522 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9a0b70e3-11 in ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:09:55 np0005539564 systemd-machined[190128]: New machine qemu-41-instance-00000063.
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.525 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9a0b70e3-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.525 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c57c00-3cca-4cd2-9df9-c1b6a0dca1b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.526 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b352b66e-5c41-4967-aef5-750720c0b343]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:55 np0005539564 systemd[1]: Started Virtual Machine qemu-41-instance-00000063.
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.546 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[2e507cf9-b663-43f2-a695-d0a3cd3000ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:55 np0005539564 nova_compute[226295]: 2025-11-29 08:09:55.554 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:55 np0005539564 ovn_controller[130591]: 2025-11-29T08:09:55Z|00331|binding|INFO|Setting lport 90ea25d6-c92d-43c5-ac73-129da2340c50 ovn-installed in OVS
Nov 29 03:09:55 np0005539564 ovn_controller[130591]: 2025-11-29T08:09:55Z|00332|binding|INFO|Setting lport 90ea25d6-c92d-43c5-ac73-129da2340c50 up in Southbound
Nov 29 03:09:55 np0005539564 nova_compute[226295]: 2025-11-29 08:09:55.561 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:55 np0005539564 systemd-udevd[261309]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.582 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[145f83dc-550f-4231-ae84-390910bca029]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:55 np0005539564 NetworkManager[48997]: <info>  [1764403795.5879] device (tap90ea25d6-c9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:09:55 np0005539564 NetworkManager[48997]: <info>  [1764403795.5888] device (tap90ea25d6-c9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:09:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:55.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.621 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[6abd5370-c074-4688-9eef-997465c56795]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:55 np0005539564 NetworkManager[48997]: <info>  [1764403795.6289] manager: (tap9a0b70e3-10): new Veth device (/org/freedesktop/NetworkManager/Devices/170)
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.628 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[02029fa5-07e1-4be6-80a6-649818bd41ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.663 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc2e0ff-b257-42a7-b5fc-917e49f9b4ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.667 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[711f1cc5-3686-46e3-b10e-94e491b84973]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:55 np0005539564 NetworkManager[48997]: <info>  [1764403795.6908] device (tap9a0b70e3-10): carrier: link connected
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.697 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[e4a695e3-f552-462c-96e0-73a98bc02b19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.720 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[83195b79-b620-4724-bb4c-1ce5126642c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9a0b70e3-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:e9:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677037, 'reachable_time': 39430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261339, 'error': None, 'target': 'ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.744 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[41734c88-362d-45c6-ac33-b83f3c87f467]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe09:e973'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677037, 'tstamp': 677037}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261340, 'error': None, 'target': 'ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.775 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[470aa7ff-3029-45f9-9a0c-02358129eadf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9a0b70e3-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:e9:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677037, 'reachable_time': 39430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261341, 'error': None, 'target': 'ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.809 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[39a41d83-e333-49a1-9724-c414a1cb8a2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:55.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.871 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5f0f808b-f839-4630-9ba4-dade65a7f735]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.873 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a0b70e3-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.873 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.874 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9a0b70e3-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:55 np0005539564 nova_compute[226295]: 2025-11-29 08:09:55.876 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:55 np0005539564 NetworkManager[48997]: <info>  [1764403795.8771] manager: (tap9a0b70e3-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/171)
Nov 29 03:09:55 np0005539564 kernel: tap9a0b70e3-10: entered promiscuous mode
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.886 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9a0b70e3-10, col_values=(('external_ids', {'iface-id': '564ded89-d5cd-4ed0-aa20-e32de45b6125'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:55 np0005539564 ovn_controller[130591]: 2025-11-29T08:09:55Z|00333|binding|INFO|Releasing lport 564ded89-d5cd-4ed0-aa20-e32de45b6125 from this chassis (sb_readonly=0)
Nov 29 03:09:55 np0005539564 nova_compute[226295]: 2025-11-29 08:09:55.887 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.891 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9a0b70e3-1894-47e1-bc43-1721fdb1c9d6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9a0b70e3-1894-47e1-bc43-1721fdb1c9d6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.892 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[48eb3deb-6ef7-4f5d-9bfc-8f3fea16352b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.893 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/9a0b70e3-1894-47e1-bc43-1721fdb1c9d6.pid.haproxy
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 9a0b70e3-1894-47e1-bc43-1721fdb1c9d6
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:09:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:09:55.895 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'env', 'PROCESS_TAG=haproxy-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9a0b70e3-1894-47e1-bc43-1721fdb1c9d6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:09:55 np0005539564 nova_compute[226295]: 2025-11-29 08:09:55.902 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.217 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403796.2173495, ce907ab6-8db5-48f6-9380-13c236bae1ce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.219 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] VM Started (Lifecycle Event)#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.246 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.252 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403796.2185209, ce907ab6-8db5-48f6-9380-13c236bae1ce => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.253 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.273 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.278 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:09:56 np0005539564 podman[261415]: 2025-11-29 08:09:56.281890775 +0000 UTC m=+0.072422225 container create 4cae81d45932160d671edfcb97d018d56e32213bd62778eb90449f8bad7a4191 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.298 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:09:56 np0005539564 podman[261415]: 2025-11-29 08:09:56.243261343 +0000 UTC m=+0.033792843 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:09:56 np0005539564 systemd[1]: Started libpod-conmon-4cae81d45932160d671edfcb97d018d56e32213bd62778eb90449f8bad7a4191.scope.
Nov 29 03:09:56 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:09:56 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6321cd4959a5e897d4a68006d4b00f70403f2776cad3c2ac2c0dcf028eef406c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:56 np0005539564 podman[261415]: 2025-11-29 08:09:56.394967465 +0000 UTC m=+0.185498905 container init 4cae81d45932160d671edfcb97d018d56e32213bd62778eb90449f8bad7a4191 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:09:56 np0005539564 podman[261415]: 2025-11-29 08:09:56.405974102 +0000 UTC m=+0.196505522 container start 4cae81d45932160d671edfcb97d018d56e32213bd62778eb90449f8bad7a4191 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:09:56 np0005539564 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[261431]: [NOTICE]   (261435) : New worker (261437) forked
Nov 29 03:09:56 np0005539564 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[261431]: [NOTICE]   (261435) : Loading success.
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.549 226310 DEBUG nova.compute.manager [req-c51f200a-1076-4072-8f3b-dab51798d199 req-9dbd7c8a-c7eb-46cf-a123-6ed64317f4bb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Received event network-vif-plugged-90ea25d6-c92d-43c5-ac73-129da2340c50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.550 226310 DEBUG oslo_concurrency.lockutils [req-c51f200a-1076-4072-8f3b-dab51798d199 req-9dbd7c8a-c7eb-46cf-a123-6ed64317f4bb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ce907ab6-8db5-48f6-9380-13c236bae1ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.550 226310 DEBUG oslo_concurrency.lockutils [req-c51f200a-1076-4072-8f3b-dab51798d199 req-9dbd7c8a-c7eb-46cf-a123-6ed64317f4bb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ce907ab6-8db5-48f6-9380-13c236bae1ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.551 226310 DEBUG oslo_concurrency.lockutils [req-c51f200a-1076-4072-8f3b-dab51798d199 req-9dbd7c8a-c7eb-46cf-a123-6ed64317f4bb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ce907ab6-8db5-48f6-9380-13c236bae1ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.551 226310 DEBUG nova.compute.manager [req-c51f200a-1076-4072-8f3b-dab51798d199 req-9dbd7c8a-c7eb-46cf-a123-6ed64317f4bb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Processing event network-vif-plugged-90ea25d6-c92d-43c5-ac73-129da2340c50 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.552 226310 DEBUG nova.compute.manager [req-c51f200a-1076-4072-8f3b-dab51798d199 req-9dbd7c8a-c7eb-46cf-a123-6ed64317f4bb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Received event network-vif-plugged-90ea25d6-c92d-43c5-ac73-129da2340c50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.553 226310 DEBUG oslo_concurrency.lockutils [req-c51f200a-1076-4072-8f3b-dab51798d199 req-9dbd7c8a-c7eb-46cf-a123-6ed64317f4bb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ce907ab6-8db5-48f6-9380-13c236bae1ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.553 226310 DEBUG oslo_concurrency.lockutils [req-c51f200a-1076-4072-8f3b-dab51798d199 req-9dbd7c8a-c7eb-46cf-a123-6ed64317f4bb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ce907ab6-8db5-48f6-9380-13c236bae1ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.553 226310 DEBUG oslo_concurrency.lockutils [req-c51f200a-1076-4072-8f3b-dab51798d199 req-9dbd7c8a-c7eb-46cf-a123-6ed64317f4bb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ce907ab6-8db5-48f6-9380-13c236bae1ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.554 226310 DEBUG nova.compute.manager [req-c51f200a-1076-4072-8f3b-dab51798d199 req-9dbd7c8a-c7eb-46cf-a123-6ed64317f4bb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] No waiting events found dispatching network-vif-plugged-90ea25d6-c92d-43c5-ac73-129da2340c50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.555 226310 WARNING nova.compute.manager [req-c51f200a-1076-4072-8f3b-dab51798d199 req-9dbd7c8a-c7eb-46cf-a123-6ed64317f4bb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Received unexpected event network-vif-plugged-90ea25d6-c92d-43c5-ac73-129da2340c50 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.556 226310 DEBUG nova.compute.manager [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.560 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403796.5603418, ce907ab6-8db5-48f6-9380-13c236bae1ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.561 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.562 226310 DEBUG nova.virt.libvirt.driver [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.567 226310 INFO nova.virt.libvirt.driver [-] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Instance spawned successfully.#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.568 226310 DEBUG nova.virt.libvirt.driver [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.594 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.604 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.611 226310 DEBUG nova.virt.libvirt.driver [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.612 226310 DEBUG nova.virt.libvirt.driver [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.613 226310 DEBUG nova.virt.libvirt.driver [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.614 226310 DEBUG nova.virt.libvirt.driver [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.614 226310 DEBUG nova.virt.libvirt.driver [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.615 226310 DEBUG nova.virt.libvirt.driver [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.707 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.877 226310 INFO nova.compute.manager [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Took 8.90 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.878 226310 DEBUG nova.compute.manager [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.941 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.980 226310 INFO nova.compute.manager [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Took 10.13 seconds to build instance.#033[00m
Nov 29 03:09:56 np0005539564 nova_compute[226295]: 2025-11-29 08:09:56.996 226310 DEBUG oslo_concurrency.lockutils [None req-dbf0877c-f5dd-4a2d-91c6-eb994e3d7edb 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "ce907ab6-8db5-48f6-9380-13c236bae1ce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:57.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:57.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:59 np0005539564 nova_compute[226295]: 2025-11-29 08:09:59.278 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:59.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:09:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:59.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:00 np0005539564 ceph-mon[81769]: overall HEALTH_OK
Nov 29 03:10:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:10:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:01.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:10:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:10:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:01.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:10:01 np0005539564 nova_compute[226295]: 2025-11-29 08:10:01.943 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:10:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:03.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:10:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:03.720 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:03.721 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:03.722 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:03.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:04 np0005539564 nova_compute[226295]: 2025-11-29 08:10:04.288 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539564 nova_compute[226295]: 2025-11-29 08:10:04.697 226310 DEBUG oslo_concurrency.lockutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:04 np0005539564 nova_compute[226295]: 2025-11-29 08:10:04.698 226310 DEBUG oslo_concurrency.lockutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:04 np0005539564 nova_compute[226295]: 2025-11-29 08:10:04.719 226310 DEBUG nova.compute.manager [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:10:04 np0005539564 nova_compute[226295]: 2025-11-29 08:10:04.806 226310 DEBUG oslo_concurrency.lockutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:04 np0005539564 nova_compute[226295]: 2025-11-29 08:10:04.807 226310 DEBUG oslo_concurrency.lockutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:04 np0005539564 nova_compute[226295]: 2025-11-29 08:10:04.816 226310 DEBUG nova.virt.hardware [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:10:04 np0005539564 nova_compute[226295]: 2025-11-29 08:10:04.817 226310 INFO nova.compute.claims [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:10:04 np0005539564 nova_compute[226295]: 2025-11-29 08:10:04.938 226310 DEBUG oslo_concurrency.processutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:10:05 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3628253542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:10:05 np0005539564 nova_compute[226295]: 2025-11-29 08:10:05.441 226310 DEBUG oslo_concurrency.processutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:05 np0005539564 nova_compute[226295]: 2025-11-29 08:10:05.447 226310 DEBUG nova.compute.provider_tree [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:10:05 np0005539564 nova_compute[226295]: 2025-11-29 08:10:05.461 226310 DEBUG nova.scheduler.client.report [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:10:05 np0005539564 nova_compute[226295]: 2025-11-29 08:10:05.486 226310 DEBUG oslo_concurrency.lockutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:05 np0005539564 nova_compute[226295]: 2025-11-29 08:10:05.487 226310 DEBUG nova.compute.manager [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:10:05 np0005539564 nova_compute[226295]: 2025-11-29 08:10:05.541 226310 DEBUG nova.compute.manager [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:10:05 np0005539564 nova_compute[226295]: 2025-11-29 08:10:05.541 226310 DEBUG nova.network.neutron [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:10:05 np0005539564 nova_compute[226295]: 2025-11-29 08:10:05.559 226310 INFO nova.virt.libvirt.driver [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:10:05 np0005539564 nova_compute[226295]: 2025-11-29 08:10:05.575 226310 DEBUG nova.compute.manager [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:10:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:05.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:05 np0005539564 nova_compute[226295]: 2025-11-29 08:10:05.687 226310 DEBUG nova.compute.manager [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:10:05 np0005539564 nova_compute[226295]: 2025-11-29 08:10:05.688 226310 DEBUG nova.virt.libvirt.driver [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:10:05 np0005539564 nova_compute[226295]: 2025-11-29 08:10:05.689 226310 INFO nova.virt.libvirt.driver [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Creating image(s)#033[00m
Nov 29 03:10:05 np0005539564 nova_compute[226295]: 2025-11-29 08:10:05.724 226310 DEBUG nova.storage.rbd_utils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:10:05 np0005539564 nova_compute[226295]: 2025-11-29 08:10:05.770 226310 DEBUG nova.storage.rbd_utils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:10:05 np0005539564 nova_compute[226295]: 2025-11-29 08:10:05.810 226310 DEBUG nova.storage.rbd_utils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:10:05 np0005539564 nova_compute[226295]: 2025-11-29 08:10:05.815 226310 DEBUG oslo_concurrency.processutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:05 np0005539564 nova_compute[226295]: 2025-11-29 08:10:05.859 226310 DEBUG nova.policy [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ca93c8e3eac142c0aa6b61807727dea2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ba867fac17034bb28fe2cdb0fff3af2b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:10:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:05.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:05 np0005539564 nova_compute[226295]: 2025-11-29 08:10:05.920 226310 DEBUG oslo_concurrency.processutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:05 np0005539564 nova_compute[226295]: 2025-11-29 08:10:05.921 226310 DEBUG oslo_concurrency.lockutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:05 np0005539564 nova_compute[226295]: 2025-11-29 08:10:05.922 226310 DEBUG oslo_concurrency.lockutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:05 np0005539564 nova_compute[226295]: 2025-11-29 08:10:05.922 226310 DEBUG oslo_concurrency.lockutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:05 np0005539564 nova_compute[226295]: 2025-11-29 08:10:05.963 226310 DEBUG nova.storage.rbd_utils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:10:05 np0005539564 nova_compute[226295]: 2025-11-29 08:10:05.968 226310 DEBUG oslo_concurrency.processutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:06 np0005539564 nova_compute[226295]: 2025-11-29 08:10:06.312 226310 DEBUG oslo_concurrency.processutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:06 np0005539564 nova_compute[226295]: 2025-11-29 08:10:06.388 226310 DEBUG nova.storage.rbd_utils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] resizing rbd image 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:10:06 np0005539564 nova_compute[226295]: 2025-11-29 08:10:06.502 226310 DEBUG nova.objects.instance [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'migration_context' on Instance uuid 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:06 np0005539564 nova_compute[226295]: 2025-11-29 08:10:06.526 226310 DEBUG nova.virt.libvirt.driver [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:10:06 np0005539564 nova_compute[226295]: 2025-11-29 08:10:06.527 226310 DEBUG nova.virt.libvirt.driver [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Ensure instance console log exists: /var/lib/nova/instances/8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:10:06 np0005539564 nova_compute[226295]: 2025-11-29 08:10:06.527 226310 DEBUG oslo_concurrency.lockutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:06 np0005539564 nova_compute[226295]: 2025-11-29 08:10:06.528 226310 DEBUG oslo_concurrency.lockutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:06 np0005539564 nova_compute[226295]: 2025-11-29 08:10:06.528 226310 DEBUG oslo_concurrency.lockutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:06 np0005539564 nova_compute[226295]: 2025-11-29 08:10:06.945 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:07.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:07.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:08 np0005539564 nova_compute[226295]: 2025-11-29 08:10:08.885 226310 DEBUG nova.network.neutron [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Successfully created port: b7329edb-beb8-414a-b8c3-33d223c32d22 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:10:09 np0005539564 nova_compute[226295]: 2025-11-29 08:10:09.290 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:09.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:09 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:10:09 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:10:09 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:10:09 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:10:09 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:10:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:09.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:10 np0005539564 nova_compute[226295]: 2025-11-29 08:10:10.410 226310 DEBUG nova.network.neutron [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Successfully updated port: b7329edb-beb8-414a-b8c3-33d223c32d22 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:10:10 np0005539564 nova_compute[226295]: 2025-11-29 08:10:10.439 226310 DEBUG oslo_concurrency.lockutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:10:10 np0005539564 nova_compute[226295]: 2025-11-29 08:10:10.440 226310 DEBUG oslo_concurrency.lockutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquired lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:10:10 np0005539564 nova_compute[226295]: 2025-11-29 08:10:10.440 226310 DEBUG nova.network.neutron [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:10:10 np0005539564 nova_compute[226295]: 2025-11-29 08:10:10.536 226310 DEBUG nova.compute.manager [req-054f1555-bd3e-4c6a-82b8-65c5a49414c7 req-f3fc864c-8969-415a-8c1a-0df21f5c68e7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Received event network-changed-b7329edb-beb8-414a-b8c3-33d223c32d22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:10 np0005539564 nova_compute[226295]: 2025-11-29 08:10:10.537 226310 DEBUG nova.compute.manager [req-054f1555-bd3e-4c6a-82b8-65c5a49414c7 req-f3fc864c-8969-415a-8c1a-0df21f5c68e7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Refreshing instance network info cache due to event network-changed-b7329edb-beb8-414a-b8c3-33d223c32d22. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:10:10 np0005539564 nova_compute[226295]: 2025-11-29 08:10:10.538 226310 DEBUG oslo_concurrency.lockutils [req-054f1555-bd3e-4c6a-82b8-65c5a49414c7 req-f3fc864c-8969-415a-8c1a-0df21f5c68e7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:10:10 np0005539564 nova_compute[226295]: 2025-11-29 08:10:10.632 226310 DEBUG nova.network.neutron [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:10:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:11.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:11 np0005539564 ovn_controller[130591]: 2025-11-29T08:10:11Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:26:b2:54 10.100.0.3
Nov 29 03:10:11 np0005539564 ovn_controller[130591]: 2025-11-29T08:10:11Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:b2:54 10.100.0.3
Nov 29 03:10:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:10:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:11.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:10:11 np0005539564 nova_compute[226295]: 2025-11-29 08:10:11.948 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.124 226310 DEBUG nova.network.neutron [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Updating instance_info_cache with network_info: [{"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.150 226310 DEBUG oslo_concurrency.lockutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Releasing lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.150 226310 DEBUG nova.compute.manager [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Instance network_info: |[{"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.151 226310 DEBUG oslo_concurrency.lockutils [req-054f1555-bd3e-4c6a-82b8-65c5a49414c7 req-f3fc864c-8969-415a-8c1a-0df21f5c68e7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.151 226310 DEBUG nova.network.neutron [req-054f1555-bd3e-4c6a-82b8-65c5a49414c7 req-f3fc864c-8969-415a-8c1a-0df21f5c68e7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Refreshing network info cache for port b7329edb-beb8-414a-b8c3-33d223c32d22 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.153 226310 DEBUG nova.virt.libvirt.driver [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Start _get_guest_xml network_info=[{"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.160 226310 WARNING nova.virt.libvirt.driver [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.166 226310 DEBUG nova.virt.libvirt.host [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.167 226310 DEBUG nova.virt.libvirt.host [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.172 226310 DEBUG nova.virt.libvirt.host [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.173 226310 DEBUG nova.virt.libvirt.host [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.174 226310 DEBUG nova.virt.libvirt.driver [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.174 226310 DEBUG nova.virt.hardware [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.175 226310 DEBUG nova.virt.hardware [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.175 226310 DEBUG nova.virt.hardware [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.175 226310 DEBUG nova.virt.hardware [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.176 226310 DEBUG nova.virt.hardware [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.176 226310 DEBUG nova.virt.hardware [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.176 226310 DEBUG nova.virt.hardware [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.176 226310 DEBUG nova.virt.hardware [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.177 226310 DEBUG nova.virt.hardware [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.177 226310 DEBUG nova.virt.hardware [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.177 226310 DEBUG nova.virt.hardware [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.179 226310 DEBUG oslo_concurrency.processutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.368 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:12.372 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:12.373 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:10:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:10:12 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3375144778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.646 226310 DEBUG oslo_concurrency.processutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.675 226310 DEBUG nova.storage.rbd_utils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:10:12 np0005539564 nova_compute[226295]: 2025-11-29 08:10:12.679 226310 DEBUG oslo_concurrency.processutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:10:13 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1406636072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.128 226310 DEBUG oslo_concurrency.processutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.130 226310 DEBUG nova.virt.libvirt.vif [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:10:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1179896728',display_name='tempest-ServerActionsTestOtherB-server-1179896728',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1179896728',id=103,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFR2A4rqHty1PxOihJGr6CLeieY2A6hQbQhWuRk7yYUwOYPvlgBFCeYpXPRg+EImok8PXcjU56J6yMvwfigxZeP4BreCe+MzD3uTdqP8PHZ6U4YNDwkQqqigObBB8nAoaw==',key_name='tempest-keypair-98169709',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-dcrww618',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:10:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.131 226310 DEBUG nova.network.os_vif_util [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.132 226310 DEBUG nova.network.os_vif_util [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:69:0e,bridge_name='br-int',has_traffic_filtering=True,id=b7329edb-beb8-414a-b8c3-33d223c32d22,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7329edb-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.134 226310 DEBUG nova.objects.instance [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'pci_devices' on Instance uuid 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.155 226310 DEBUG nova.virt.libvirt.driver [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:10:13 np0005539564 nova_compute[226295]:  <uuid>8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78</uuid>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:  <name>instance-00000067</name>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerActionsTestOtherB-server-1179896728</nova:name>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:10:12</nova:creationTime>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:10:13 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:        <nova:user uuid="ca93c8e3eac142c0aa6b61807727dea2">tempest-ServerActionsTestOtherB-325732369-project-member</nova:user>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:        <nova:project uuid="ba867fac17034bb28fe2cdb0fff3af2b">tempest-ServerActionsTestOtherB-325732369</nova:project>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:        <nova:port uuid="b7329edb-beb8-414a-b8c3-33d223c32d22">
Nov 29 03:10:13 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <entry name="serial">8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78</entry>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <entry name="uuid">8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78</entry>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk">
Nov 29 03:10:13 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:10:13 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk.config">
Nov 29 03:10:13 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:10:13 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:c7:69:0e"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <target dev="tapb7329edb-be"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78/console.log" append="off"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:10:13 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:10:13 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:10:13 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:10:13 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.157 226310 DEBUG nova.compute.manager [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Preparing to wait for external event network-vif-plugged-b7329edb-beb8-414a-b8c3-33d223c32d22 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.157 226310 DEBUG oslo_concurrency.lockutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.157 226310 DEBUG oslo_concurrency.lockutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.158 226310 DEBUG oslo_concurrency.lockutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.159 226310 DEBUG nova.virt.libvirt.vif [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:10:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1179896728',display_name='tempest-ServerActionsTestOtherB-server-1179896728',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1179896728',id=103,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFR2A4rqHty1PxOihJGr6CLeieY2A6hQbQhWuRk7yYUwOYPvlgBFCeYpXPRg+EImok8PXcjU56J6yMvwfigxZeP4BreCe+MzD3uTdqP8PHZ6U4YNDwkQqqigObBB8nAoaw==',key_name='tempest-keypair-98169709',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-dcrww618',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:10:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.159 226310 DEBUG nova.network.os_vif_util [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.160 226310 DEBUG nova.network.os_vif_util [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:69:0e,bridge_name='br-int',has_traffic_filtering=True,id=b7329edb-beb8-414a-b8c3-33d223c32d22,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7329edb-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.160 226310 DEBUG os_vif [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:69:0e,bridge_name='br-int',has_traffic_filtering=True,id=b7329edb-beb8-414a-b8c3-33d223c32d22,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7329edb-be') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.161 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.162 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.163 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.167 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.167 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7329edb-be, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.168 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb7329edb-be, col_values=(('external_ids', {'iface-id': 'b7329edb-beb8-414a-b8c3-33d223c32d22', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:69:0e', 'vm-uuid': '8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.170 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:13 np0005539564 NetworkManager[48997]: <info>  [1764403813.1722] manager: (tapb7329edb-be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/172)
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.173 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.180 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.181 226310 INFO os_vif [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:69:0e,bridge_name='br-int',has_traffic_filtering=True,id=b7329edb-beb8-414a-b8c3-33d223c32d22,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7329edb-be')#033[00m
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.257 226310 DEBUG nova.virt.libvirt.driver [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.257 226310 DEBUG nova.virt.libvirt.driver [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.258 226310 DEBUG nova.virt.libvirt.driver [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No VIF found with MAC fa:16:3e:c7:69:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.258 226310 INFO nova.virt.libvirt.driver [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Using config drive#033[00m
Nov 29 03:10:13 np0005539564 nova_compute[226295]: 2025-11-29 08:10:13.293 226310 DEBUG nova.storage.rbd_utils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:10:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:13.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:10:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:13.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:10:14 np0005539564 nova_compute[226295]: 2025-11-29 08:10:14.103 226310 INFO nova.virt.libvirt.driver [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Creating config drive at /var/lib/nova/instances/8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78/disk.config#033[00m
Nov 29 03:10:14 np0005539564 nova_compute[226295]: 2025-11-29 08:10:14.116 226310 DEBUG oslo_concurrency.processutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_j3gexjg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:14 np0005539564 nova_compute[226295]: 2025-11-29 08:10:14.278 226310 DEBUG oslo_concurrency.processutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_j3gexjg" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:14 np0005539564 nova_compute[226295]: 2025-11-29 08:10:14.319 226310 DEBUG nova.storage.rbd_utils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:10:14 np0005539564 nova_compute[226295]: 2025-11-29 08:10:14.325 226310 DEBUG oslo_concurrency.processutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78/disk.config 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:14.374 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:14 np0005539564 nova_compute[226295]: 2025-11-29 08:10:14.537 226310 DEBUG oslo_concurrency.processutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78/disk.config 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:14 np0005539564 nova_compute[226295]: 2025-11-29 08:10:14.539 226310 INFO nova.virt.libvirt.driver [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Deleting local config drive /var/lib/nova/instances/8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78/disk.config because it was imported into RBD.#033[00m
Nov 29 03:10:14 np0005539564 kernel: tapb7329edb-be: entered promiscuous mode
Nov 29 03:10:14 np0005539564 NetworkManager[48997]: <info>  [1764403814.6231] manager: (tapb7329edb-be): new Tun device (/org/freedesktop/NetworkManager/Devices/173)
Nov 29 03:10:14 np0005539564 ovn_controller[130591]: 2025-11-29T08:10:14Z|00334|binding|INFO|Claiming lport b7329edb-beb8-414a-b8c3-33d223c32d22 for this chassis.
Nov 29 03:10:14 np0005539564 ovn_controller[130591]: 2025-11-29T08:10:14Z|00335|binding|INFO|b7329edb-beb8-414a-b8c3-33d223c32d22: Claiming fa:16:3e:c7:69:0e 10.100.0.6
Nov 29 03:10:14 np0005539564 nova_compute[226295]: 2025-11-29 08:10:14.628 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:14.645 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:69:0e 10.100.0.6'], port_security=['fa:16:3e:c7:69:0e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba867fac17034bb28fe2cdb0fff3af2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a54db614-4504-4e8e-a3a5-27d3f60f6cdf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5e4b2f3-5e6e-48f8-b35a-ab61c62108a6, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=b7329edb-beb8-414a-b8c3-33d223c32d22) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:14.647 139780 INFO neutron.agent.ovn.metadata.agent [-] Port b7329edb-beb8-414a-b8c3-33d223c32d22 in datapath 4d5b8c11-b69e-4a74-846b-03943fb29a81 bound to our chassis#033[00m
Nov 29 03:10:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:14.650 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d5b8c11-b69e-4a74-846b-03943fb29a81#033[00m
Nov 29 03:10:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:14.667 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[02198ba1-7c75-48e4-8cfc-5f3da7575b79]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:14.668 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d5b8c11-b1 in ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:10:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:14.671 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d5b8c11-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:10:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:14.671 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[78b345ec-3642-4bfd-8ce2-88e0024ea1d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:14.672 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[232d8b94-cc5c-420a-923e-edc8ee43547c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:14 np0005539564 systemd-machined[190128]: New machine qemu-42-instance-00000067.
Nov 29 03:10:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:14.692 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[7422b703-f018-46c6-be0b-52ef07a60eb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:14 np0005539564 systemd[1]: Started Virtual Machine qemu-42-instance-00000067.
Nov 29 03:10:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:14.718 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[64754247-739b-4e2b-bead-771dba3010e7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:14 np0005539564 nova_compute[226295]: 2025-11-29 08:10:14.723 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:14 np0005539564 systemd-udevd[261905]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:10:14 np0005539564 ovn_controller[130591]: 2025-11-29T08:10:14Z|00336|binding|INFO|Setting lport b7329edb-beb8-414a-b8c3-33d223c32d22 ovn-installed in OVS
Nov 29 03:10:14 np0005539564 ovn_controller[130591]: 2025-11-29T08:10:14Z|00337|binding|INFO|Setting lport b7329edb-beb8-414a-b8c3-33d223c32d22 up in Southbound
Nov 29 03:10:14 np0005539564 nova_compute[226295]: 2025-11-29 08:10:14.732 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:14 np0005539564 NetworkManager[48997]: <info>  [1764403814.7428] device (tapb7329edb-be): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:10:14 np0005539564 NetworkManager[48997]: <info>  [1764403814.7438] device (tapb7329edb-be): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:10:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:14.760 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[4148a178-4906-4aa3-b3ba-2c3b4d96436d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:14 np0005539564 NetworkManager[48997]: <info>  [1764403814.7694] manager: (tap4d5b8c11-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/174)
Nov 29 03:10:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:14.769 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e68751b0-fc63-4157-95d1-45e3417fbc32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:14.808 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[12e95687-49a2-46e2-a9a3-45e576fd037b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:14.811 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[3f1c8603-b885-46e1-9d2f-21c370948ad4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:14 np0005539564 NetworkManager[48997]: <info>  [1764403814.8411] device (tap4d5b8c11-b0): carrier: link connected
Nov 29 03:10:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:14.847 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[9323de4c-39fd-464e-aba8-3131a44720f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:14 np0005539564 nova_compute[226295]: 2025-11-29 08:10:14.875 226310 DEBUG nova.network.neutron [req-054f1555-bd3e-4c6a-82b8-65c5a49414c7 req-f3fc864c-8969-415a-8c1a-0df21f5c68e7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Updated VIF entry in instance network info cache for port b7329edb-beb8-414a-b8c3-33d223c32d22. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:10:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:14.875 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[59bd1407-94f9-4951-be5c-1388df0049b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d5b8c11-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:06:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678952, 'reachable_time': 28182, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261935, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:14 np0005539564 nova_compute[226295]: 2025-11-29 08:10:14.876 226310 DEBUG nova.network.neutron [req-054f1555-bd3e-4c6a-82b8-65c5a49414c7 req-f3fc864c-8969-415a-8c1a-0df21f5c68e7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Updating instance_info_cache with network_info: [{"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:14.892 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2a0b8639-6413-483e-ba05-a37e2c4e98d7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:6d1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678952, 'tstamp': 678952}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261936, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:14 np0005539564 nova_compute[226295]: 2025-11-29 08:10:14.897 226310 DEBUG oslo_concurrency.lockutils [req-054f1555-bd3e-4c6a-82b8-65c5a49414c7 req-f3fc864c-8969-415a-8c1a-0df21f5c68e7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:10:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:14.920 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[cd8d993c-7fc2-41e7-b196-167a34c6198d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d5b8c11-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:06:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678952, 'reachable_time': 28182, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261937, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:14.954 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e87dfc0f-8805-4fec-9ba5-37d32acee7dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:15.050 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[78aed10b-4d81-4a82-b737-9c1522f3e786]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:15.052 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d5b8c11-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:15.053 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:15.053 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d5b8c11-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:15 np0005539564 kernel: tap4d5b8c11-b0: entered promiscuous mode
Nov 29 03:10:15 np0005539564 NetworkManager[48997]: <info>  [1764403815.0569] manager: (tap4d5b8c11-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/175)
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.055 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.059 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:15.060 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d5b8c11-b0, col_values=(('external_ids', {'iface-id': 'a2e47e7a-aef0-4c09-aeef-4a0d63960d7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.061 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:15 np0005539564 ovn_controller[130591]: 2025-11-29T08:10:15Z|00338|binding|INFO|Releasing lport a2e47e7a-aef0-4c09-aeef-4a0d63960d7b from this chassis (sb_readonly=0)
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.074 226310 DEBUG nova.compute.manager [req-c2d3a160-ca2c-45e8-a55d-a7a293c8891c req-65c8ac18-ad9e-4d63-9e32-2cd5be6111f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Received event network-vif-plugged-b7329edb-beb8-414a-b8c3-33d223c32d22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.075 226310 DEBUG oslo_concurrency.lockutils [req-c2d3a160-ca2c-45e8-a55d-a7a293c8891c req-65c8ac18-ad9e-4d63-9e32-2cd5be6111f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.075 226310 DEBUG oslo_concurrency.lockutils [req-c2d3a160-ca2c-45e8-a55d-a7a293c8891c req-65c8ac18-ad9e-4d63-9e32-2cd5be6111f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.075 226310 DEBUG oslo_concurrency.lockutils [req-c2d3a160-ca2c-45e8-a55d-a7a293c8891c req-65c8ac18-ad9e-4d63-9e32-2cd5be6111f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.075 226310 DEBUG nova.compute.manager [req-c2d3a160-ca2c-45e8-a55d-a7a293c8891c req-65c8ac18-ad9e-4d63-9e32-2cd5be6111f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Processing event network-vif-plugged-b7329edb-beb8-414a-b8c3-33d223c32d22 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.091 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.092 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:15.092 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d5b8c11-b69e-4a74-846b-03943fb29a81.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d5b8c11-b69e-4a74-846b-03943fb29a81.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:15.093 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[435c267b-beb2-40fa-9ad2-53cc9f3ed90e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:15.094 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-4d5b8c11-b69e-4a74-846b-03943fb29a81
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/4d5b8c11-b69e-4a74-846b-03943fb29a81.pid.haproxy
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 4d5b8c11-b69e-4a74-846b-03943fb29a81
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:10:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:15.095 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'env', 'PROCESS_TAG=haproxy-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d5b8c11-b69e-4a74-846b-03943fb29a81.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.296 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403815.2958891, 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.297 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] VM Started (Lifecycle Event)#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.301 226310 DEBUG nova.compute.manager [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.305 226310 DEBUG nova.virt.libvirt.driver [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.310 226310 INFO nova.virt.libvirt.driver [-] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Instance spawned successfully.#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.311 226310 DEBUG nova.virt.libvirt.driver [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.333 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.342 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.351 226310 DEBUG nova.virt.libvirt.driver [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.351 226310 DEBUG nova.virt.libvirt.driver [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.352 226310 DEBUG nova.virt.libvirt.driver [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.353 226310 DEBUG nova.virt.libvirt.driver [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.354 226310 DEBUG nova.virt.libvirt.driver [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.354 226310 DEBUG nova.virt.libvirt.driver [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.403 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.404 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403815.2972646, 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.404 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.448 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.453 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403815.3043773, 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.454 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.459 226310 INFO nova.compute.manager [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Took 9.77 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.460 226310 DEBUG nova.compute.manager [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.472 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.478 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.498 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.528 226310 INFO nova.compute.manager [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Took 10.76 seconds to build instance.#033[00m
Nov 29 03:10:15 np0005539564 podman[262011]: 2025-11-29 08:10:15.542024508 +0000 UTC m=+0.073384640 container create 5fc4084c5885982abfa5a050afbd4abc4c8fdab4ecad64b9d4fa3675842ea45d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:10:15 np0005539564 nova_compute[226295]: 2025-11-29 08:10:15.545 226310 DEBUG oslo_concurrency.lockutils [None req-9c553bec-ae44-42c4-812c-293c33432de5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:15 np0005539564 systemd[1]: Started libpod-conmon-5fc4084c5885982abfa5a050afbd4abc4c8fdab4ecad64b9d4fa3675842ea45d.scope.
Nov 29 03:10:15 np0005539564 podman[262011]: 2025-11-29 08:10:15.498881405 +0000 UTC m=+0.030241607 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:10:15 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:10:15 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1522e1d5571a6ba749318c467136c113554c1663d3bbd3a3c9ed647918af1f2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:15 np0005539564 podman[262011]: 2025-11-29 08:10:15.646240499 +0000 UTC m=+0.177600621 container init 5fc4084c5885982abfa5a050afbd4abc4c8fdab4ecad64b9d4fa3675842ea45d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:10:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:10:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:15.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:10:15 np0005539564 podman[262011]: 2025-11-29 08:10:15.657065711 +0000 UTC m=+0.188425823 container start 5fc4084c5885982abfa5a050afbd4abc4c8fdab4ecad64b9d4fa3675842ea45d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:10:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:15 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[262026]: [NOTICE]   (262030) : New worker (262032) forked
Nov 29 03:10:15 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[262026]: [NOTICE]   (262030) : Loading success.
Nov 29 03:10:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:15.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:16 np0005539564 nova_compute[226295]: 2025-11-29 08:10:16.951 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:10:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:10:17 np0005539564 nova_compute[226295]: 2025-11-29 08:10:17.257 226310 DEBUG nova.compute.manager [req-c0ef0a0a-f60c-497f-a152-1c2e84626822 req-c2997066-f1fb-4ee6-b547-f653cac19307 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Received event network-vif-plugged-b7329edb-beb8-414a-b8c3-33d223c32d22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:17 np0005539564 nova_compute[226295]: 2025-11-29 08:10:17.257 226310 DEBUG oslo_concurrency.lockutils [req-c0ef0a0a-f60c-497f-a152-1c2e84626822 req-c2997066-f1fb-4ee6-b547-f653cac19307 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:17 np0005539564 nova_compute[226295]: 2025-11-29 08:10:17.257 226310 DEBUG oslo_concurrency.lockutils [req-c0ef0a0a-f60c-497f-a152-1c2e84626822 req-c2997066-f1fb-4ee6-b547-f653cac19307 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:17 np0005539564 nova_compute[226295]: 2025-11-29 08:10:17.258 226310 DEBUG oslo_concurrency.lockutils [req-c0ef0a0a-f60c-497f-a152-1c2e84626822 req-c2997066-f1fb-4ee6-b547-f653cac19307 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:17 np0005539564 nova_compute[226295]: 2025-11-29 08:10:17.258 226310 DEBUG nova.compute.manager [req-c0ef0a0a-f60c-497f-a152-1c2e84626822 req-c2997066-f1fb-4ee6-b547-f653cac19307 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] No waiting events found dispatching network-vif-plugged-b7329edb-beb8-414a-b8c3-33d223c32d22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:17 np0005539564 nova_compute[226295]: 2025-11-29 08:10:17.258 226310 WARNING nova.compute.manager [req-c0ef0a0a-f60c-497f-a152-1c2e84626822 req-c2997066-f1fb-4ee6-b547-f653cac19307 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Received unexpected event network-vif-plugged-b7329edb-beb8-414a-b8c3-33d223c32d22 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:10:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:17.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:17.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:18 np0005539564 nova_compute[226295]: 2025-11-29 08:10:18.171 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:18 np0005539564 nova_compute[226295]: 2025-11-29 08:10:18.192 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:18 np0005539564 NetworkManager[48997]: <info>  [1764403818.1990] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/176)
Nov 29 03:10:18 np0005539564 NetworkManager[48997]: <info>  [1764403818.1997] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/177)
Nov 29 03:10:18 np0005539564 nova_compute[226295]: 2025-11-29 08:10:18.280 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:18 np0005539564 ovn_controller[130591]: 2025-11-29T08:10:18Z|00339|binding|INFO|Releasing lport 564ded89-d5cd-4ed0-aa20-e32de45b6125 from this chassis (sb_readonly=0)
Nov 29 03:10:18 np0005539564 ovn_controller[130591]: 2025-11-29T08:10:18Z|00340|binding|INFO|Releasing lport a2e47e7a-aef0-4c09-aeef-4a0d63960d7b from this chassis (sb_readonly=0)
Nov 29 03:10:18 np0005539564 nova_compute[226295]: 2025-11-29 08:10:18.293 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:18 np0005539564 nova_compute[226295]: 2025-11-29 08:10:18.434 226310 DEBUG nova.compute.manager [req-dc95b693-8c59-43f7-8185-d683820bf0e0 req-cb522f22-2f55-4693-b173-50112fd08f92 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Received event network-changed-b7329edb-beb8-414a-b8c3-33d223c32d22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:18 np0005539564 nova_compute[226295]: 2025-11-29 08:10:18.435 226310 DEBUG nova.compute.manager [req-dc95b693-8c59-43f7-8185-d683820bf0e0 req-cb522f22-2f55-4693-b173-50112fd08f92 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Refreshing instance network info cache due to event network-changed-b7329edb-beb8-414a-b8c3-33d223c32d22. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:10:18 np0005539564 nova_compute[226295]: 2025-11-29 08:10:18.436 226310 DEBUG oslo_concurrency.lockutils [req-dc95b693-8c59-43f7-8185-d683820bf0e0 req-cb522f22-2f55-4693-b173-50112fd08f92 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:10:18 np0005539564 nova_compute[226295]: 2025-11-29 08:10:18.437 226310 DEBUG oslo_concurrency.lockutils [req-dc95b693-8c59-43f7-8185-d683820bf0e0 req-cb522f22-2f55-4693-b173-50112fd08f92 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:10:18 np0005539564 nova_compute[226295]: 2025-11-29 08:10:18.438 226310 DEBUG nova.network.neutron [req-dc95b693-8c59-43f7-8185-d683820bf0e0 req-cb522f22-2f55-4693-b173-50112fd08f92 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Refreshing network info cache for port b7329edb-beb8-414a-b8c3-33d223c32d22 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:10:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:19.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:19.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:20 np0005539564 nova_compute[226295]: 2025-11-29 08:10:20.214 226310 DEBUG nova.network.neutron [req-dc95b693-8c59-43f7-8185-d683820bf0e0 req-cb522f22-2f55-4693-b173-50112fd08f92 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Updated VIF entry in instance network info cache for port b7329edb-beb8-414a-b8c3-33d223c32d22. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:10:20 np0005539564 nova_compute[226295]: 2025-11-29 08:10:20.215 226310 DEBUG nova.network.neutron [req-dc95b693-8c59-43f7-8185-d683820bf0e0 req-cb522f22-2f55-4693-b173-50112fd08f92 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Updating instance_info_cache with network_info: [{"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:20 np0005539564 nova_compute[226295]: 2025-11-29 08:10:20.241 226310 DEBUG oslo_concurrency.lockutils [req-dc95b693-8c59-43f7-8185-d683820bf0e0 req-cb522f22-2f55-4693-b173-50112fd08f92 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:10:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:21.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:10:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:21.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:10:21 np0005539564 nova_compute[226295]: 2025-11-29 08:10:21.955 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:23 np0005539564 nova_compute[226295]: 2025-11-29 08:10:23.174 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:23.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:10:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:23.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:10:25 np0005539564 podman[262095]: 2025-11-29 08:10:25.528704848 +0000 UTC m=+0.068001815 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:10:25 np0005539564 podman[262094]: 2025-11-29 08:10:25.541201756 +0000 UTC m=+0.076927577 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:10:25 np0005539564 podman[262093]: 2025-11-29 08:10:25.564851213 +0000 UTC m=+0.104022976 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:10:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:25.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:25.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:26 np0005539564 nova_compute[226295]: 2025-11-29 08:10:26.960 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:27 np0005539564 nova_compute[226295]: 2025-11-29 08:10:27.477 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:27.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:27.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:28 np0005539564 nova_compute[226295]: 2025-11-29 08:10:28.176 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:29 np0005539564 ovn_controller[130591]: 2025-11-29T08:10:29Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c7:69:0e 10.100.0.6
Nov 29 03:10:29 np0005539564 ovn_controller[130591]: 2025-11-29T08:10:29Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c7:69:0e 10.100.0.6
Nov 29 03:10:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:29.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:29.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:31 np0005539564 nova_compute[226295]: 2025-11-29 08:10:31.574 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:31.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:31.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:31 np0005539564 nova_compute[226295]: 2025-11-29 08:10:31.963 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:33 np0005539564 nova_compute[226295]: 2025-11-29 08:10:33.181 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:33.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:10:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:33.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.256 226310 DEBUG oslo_concurrency.lockutils [None req-c66761a2-137b-4200-ae86-0f52031ef82c 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "ce907ab6-8db5-48f6-9380-13c236bae1ce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.258 226310 DEBUG oslo_concurrency.lockutils [None req-c66761a2-137b-4200-ae86-0f52031ef82c 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "ce907ab6-8db5-48f6-9380-13c236bae1ce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.259 226310 DEBUG oslo_concurrency.lockutils [None req-c66761a2-137b-4200-ae86-0f52031ef82c 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "ce907ab6-8db5-48f6-9380-13c236bae1ce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.259 226310 DEBUG oslo_concurrency.lockutils [None req-c66761a2-137b-4200-ae86-0f52031ef82c 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "ce907ab6-8db5-48f6-9380-13c236bae1ce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.259 226310 DEBUG oslo_concurrency.lockutils [None req-c66761a2-137b-4200-ae86-0f52031ef82c 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "ce907ab6-8db5-48f6-9380-13c236bae1ce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.262 226310 INFO nova.compute.manager [None req-c66761a2-137b-4200-ae86-0f52031ef82c 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Terminating instance#033[00m
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.263 226310 DEBUG nova.compute.manager [None req-c66761a2-137b-4200-ae86-0f52031ef82c 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:10:34 np0005539564 kernel: tap90ea25d6-c9 (unregistering): left promiscuous mode
Nov 29 03:10:34 np0005539564 NetworkManager[48997]: <info>  [1764403834.3319] device (tap90ea25d6-c9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:10:34 np0005539564 ovn_controller[130591]: 2025-11-29T08:10:34Z|00341|binding|INFO|Releasing lport 90ea25d6-c92d-43c5-ac73-129da2340c50 from this chassis (sb_readonly=0)
Nov 29 03:10:34 np0005539564 ovn_controller[130591]: 2025-11-29T08:10:34Z|00342|binding|INFO|Setting lport 90ea25d6-c92d-43c5-ac73-129da2340c50 down in Southbound
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.340 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:34 np0005539564 ovn_controller[130591]: 2025-11-29T08:10:34Z|00343|binding|INFO|Removing iface tap90ea25d6-c9 ovn-installed in OVS
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.344 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:34.353 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:b2:54 10.100.0.3'], port_security=['fa:16:3e:26:b2:54 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ce907ab6-8db5-48f6-9380-13c236bae1ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'baca94adaa5145a6b9cef930bff28fa4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3c333182-abc9-4e1c-9562-d9522d2eaaba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b69ef350-fb24-4945-9405-01b7ba3f6aca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=90ea25d6-c92d-43c5-ac73-129da2340c50) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:34.355 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 90ea25d6-c92d-43c5-ac73-129da2340c50 in datapath 9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 unbound from our chassis#033[00m
Nov 29 03:10:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:34.356 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.358 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:34.358 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6a07333f-1914-400e-a91c-be1bb42ed8b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:34.359 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 namespace which is not needed anymore#033[00m
Nov 29 03:10:34 np0005539564 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000063.scope: Deactivated successfully.
Nov 29 03:10:34 np0005539564 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000063.scope: Consumed 17.040s CPU time.
Nov 29 03:10:34 np0005539564 systemd-machined[190128]: Machine qemu-41-instance-00000063 terminated.
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.509 226310 INFO nova.virt.libvirt.driver [-] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Instance destroyed successfully.#033[00m
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.510 226310 DEBUG nova.objects.instance [None req-c66761a2-137b-4200-ae86-0f52031ef82c 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lazy-loading 'resources' on Instance uuid ce907ab6-8db5-48f6-9380-13c236bae1ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.524 226310 DEBUG nova.virt.libvirt.vif [None req-c66761a2-137b-4200-ae86-0f52031ef82c 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-129702678',display_name='tempest-ListServerFiltersTestJSON-instance-129702678',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-129702678',id=99,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:56Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='baca94adaa5145a6b9cef930bff28fa4',ramdisk_id='',reservation_id='r-rcamux6u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-207904478',owner_user_name='tempest-ListServerFiltersTestJSON-207904478-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:09:56Z,user_data=None,user_id='7c90fe1780904a6098015abc66b38d9d',uuid=ce907ab6-8db5-48f6-9380-13c236bae1ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "90ea25d6-c92d-43c5-ac73-129da2340c50", "address": "fa:16:3e:26:b2:54", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90ea25d6-c9", "ovs_interfaceid": "90ea25d6-c92d-43c5-ac73-129da2340c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.524 226310 DEBUG nova.network.os_vif_util [None req-c66761a2-137b-4200-ae86-0f52031ef82c 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Converting VIF {"id": "90ea25d6-c92d-43c5-ac73-129da2340c50", "address": "fa:16:3e:26:b2:54", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90ea25d6-c9", "ovs_interfaceid": "90ea25d6-c92d-43c5-ac73-129da2340c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.525 226310 DEBUG nova.network.os_vif_util [None req-c66761a2-137b-4200-ae86-0f52031ef82c 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:b2:54,bridge_name='br-int',has_traffic_filtering=True,id=90ea25d6-c92d-43c5-ac73-129da2340c50,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90ea25d6-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.525 226310 DEBUG os_vif [None req-c66761a2-137b-4200-ae86-0f52031ef82c 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:b2:54,bridge_name='br-int',has_traffic_filtering=True,id=90ea25d6-c92d-43c5-ac73-129da2340c50,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90ea25d6-c9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.528 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.528 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap90ea25d6-c9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.530 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.533 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.538 226310 INFO os_vif [None req-c66761a2-137b-4200-ae86-0f52031ef82c 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:b2:54,bridge_name='br-int',has_traffic_filtering=True,id=90ea25d6-c92d-43c5-ac73-129da2340c50,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90ea25d6-c9')#033[00m
Nov 29 03:10:34 np0005539564 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[261431]: [NOTICE]   (261435) : haproxy version is 2.8.14-c23fe91
Nov 29 03:10:34 np0005539564 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[261431]: [NOTICE]   (261435) : path to executable is /usr/sbin/haproxy
Nov 29 03:10:34 np0005539564 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[261431]: [WARNING]  (261435) : Exiting Master process...
Nov 29 03:10:34 np0005539564 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[261431]: [ALERT]    (261435) : Current worker (261437) exited with code 143 (Terminated)
Nov 29 03:10:34 np0005539564 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[261431]: [WARNING]  (261435) : All workers exited. Exiting... (0)
Nov 29 03:10:34 np0005539564 systemd[1]: libpod-4cae81d45932160d671edfcb97d018d56e32213bd62778eb90449f8bad7a4191.scope: Deactivated successfully.
Nov 29 03:10:34 np0005539564 podman[262182]: 2025-11-29 08:10:34.555353054 +0000 UTC m=+0.053267138 container died 4cae81d45932160d671edfcb97d018d56e32213bd62778eb90449f8bad7a4191 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.580 226310 DEBUG nova.compute.manager [req-b1870275-e3c5-4fd5-9e1e-c366d30cc5f3 req-9ee936d3-c82e-470f-b91b-5a51edbd004d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Received event network-vif-unplugged-90ea25d6-c92d-43c5-ac73-129da2340c50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.580 226310 DEBUG oslo_concurrency.lockutils [req-b1870275-e3c5-4fd5-9e1e-c366d30cc5f3 req-9ee936d3-c82e-470f-b91b-5a51edbd004d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ce907ab6-8db5-48f6-9380-13c236bae1ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.581 226310 DEBUG oslo_concurrency.lockutils [req-b1870275-e3c5-4fd5-9e1e-c366d30cc5f3 req-9ee936d3-c82e-470f-b91b-5a51edbd004d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ce907ab6-8db5-48f6-9380-13c236bae1ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.581 226310 DEBUG oslo_concurrency.lockutils [req-b1870275-e3c5-4fd5-9e1e-c366d30cc5f3 req-9ee936d3-c82e-470f-b91b-5a51edbd004d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ce907ab6-8db5-48f6-9380-13c236bae1ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.581 226310 DEBUG nova.compute.manager [req-b1870275-e3c5-4fd5-9e1e-c366d30cc5f3 req-9ee936d3-c82e-470f-b91b-5a51edbd004d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] No waiting events found dispatching network-vif-unplugged-90ea25d6-c92d-43c5-ac73-129da2340c50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.582 226310 DEBUG nova.compute.manager [req-b1870275-e3c5-4fd5-9e1e-c366d30cc5f3 req-9ee936d3-c82e-470f-b91b-5a51edbd004d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Received event network-vif-unplugged-90ea25d6-c92d-43c5-ac73-129da2340c50 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:10:34 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4cae81d45932160d671edfcb97d018d56e32213bd62778eb90449f8bad7a4191-userdata-shm.mount: Deactivated successfully.
Nov 29 03:10:34 np0005539564 systemd[1]: var-lib-containers-storage-overlay-6321cd4959a5e897d4a68006d4b00f70403f2776cad3c2ac2c0dcf028eef406c-merged.mount: Deactivated successfully.
Nov 29 03:10:34 np0005539564 podman[262182]: 2025-11-29 08:10:34.603486103 +0000 UTC m=+0.101400187 container cleanup 4cae81d45932160d671edfcb97d018d56e32213bd62778eb90449f8bad7a4191 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:10:34 np0005539564 systemd[1]: libpod-conmon-4cae81d45932160d671edfcb97d018d56e32213bd62778eb90449f8bad7a4191.scope: Deactivated successfully.
Nov 29 03:10:34 np0005539564 podman[262237]: 2025-11-29 08:10:34.691409114 +0000 UTC m=+0.058616742 container remove 4cae81d45932160d671edfcb97d018d56e32213bd62778eb90449f8bad7a4191 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:10:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:34.697 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a85c06ca-be1d-4ae0-88d5-fe503c63370b]: (4, ('Sat Nov 29 08:10:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 (4cae81d45932160d671edfcb97d018d56e32213bd62778eb90449f8bad7a4191)\n4cae81d45932160d671edfcb97d018d56e32213bd62778eb90449f8bad7a4191\nSat Nov 29 08:10:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 (4cae81d45932160d671edfcb97d018d56e32213bd62778eb90449f8bad7a4191)\n4cae81d45932160d671edfcb97d018d56e32213bd62778eb90449f8bad7a4191\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:34.699 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[64c9f0b1-2f08-4f1c-b1c1-a5bb6458b9e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:34.701 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a0b70e3-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.704 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:34 np0005539564 kernel: tap9a0b70e3-10: left promiscuous mode
Nov 29 03:10:34 np0005539564 nova_compute[226295]: 2025-11-29 08:10:34.735 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:34.740 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8f86005c-a809-4d51-b4fc-999bd4604d55]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:34.766 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3192f30e-d4ae-43c4-9b32-390776d42437]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:34.768 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b4c8fd45-ed8e-4a4e-b00c-1f3c01463195]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:34.787 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c9b7385d-8227-47a9-bfb6-2b0a516b06a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677029, 'reachable_time': 17354, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262255, 'error': None, 'target': 'ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:34.794 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:10:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:10:34.794 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[d08d11b6-01a2-497b-8fab-027425478c0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:34 np0005539564 systemd[1]: run-netns-ovnmeta\x2d9a0b70e3\x2d1894\x2d47e1\x2dbc43\x2d1721fdb1c9d6.mount: Deactivated successfully.
Nov 29 03:10:35 np0005539564 nova_compute[226295]: 2025-11-29 08:10:35.012 226310 INFO nova.virt.libvirt.driver [None req-c66761a2-137b-4200-ae86-0f52031ef82c 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Deleting instance files /var/lib/nova/instances/ce907ab6-8db5-48f6-9380-13c236bae1ce_del#033[00m
Nov 29 03:10:35 np0005539564 nova_compute[226295]: 2025-11-29 08:10:35.014 226310 INFO nova.virt.libvirt.driver [None req-c66761a2-137b-4200-ae86-0f52031ef82c 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Deletion of /var/lib/nova/instances/ce907ab6-8db5-48f6-9380-13c236bae1ce_del complete#033[00m
Nov 29 03:10:35 np0005539564 nova_compute[226295]: 2025-11-29 08:10:35.088 226310 INFO nova.compute.manager [None req-c66761a2-137b-4200-ae86-0f52031ef82c 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:10:35 np0005539564 nova_compute[226295]: 2025-11-29 08:10:35.089 226310 DEBUG oslo.service.loopingcall [None req-c66761a2-137b-4200-ae86-0f52031ef82c 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:10:35 np0005539564 nova_compute[226295]: 2025-11-29 08:10:35.089 226310 DEBUG nova.compute.manager [-] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:10:35 np0005539564 nova_compute[226295]: 2025-11-29 08:10:35.089 226310 DEBUG nova.network.neutron [-] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:10:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:35.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:35.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:36 np0005539564 nova_compute[226295]: 2025-11-29 08:10:36.293 226310 DEBUG nova.network.neutron [-] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:36 np0005539564 nova_compute[226295]: 2025-11-29 08:10:36.326 226310 INFO nova.compute.manager [-] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Took 1.24 seconds to deallocate network for instance.#033[00m
Nov 29 03:10:36 np0005539564 nova_compute[226295]: 2025-11-29 08:10:36.386 226310 DEBUG oslo_concurrency.lockutils [None req-c66761a2-137b-4200-ae86-0f52031ef82c 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:36 np0005539564 nova_compute[226295]: 2025-11-29 08:10:36.387 226310 DEBUG oslo_concurrency.lockutils [None req-c66761a2-137b-4200-ae86-0f52031ef82c 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:36 np0005539564 nova_compute[226295]: 2025-11-29 08:10:36.396 226310 DEBUG nova.compute.manager [req-611217d7-367d-483f-aaf6-fd0b2a8cb977 req-0670a9eb-0332-4ec5-82a9-020f2f38c104 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Received event network-vif-deleted-90ea25d6-c92d-43c5-ac73-129da2340c50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:36 np0005539564 nova_compute[226295]: 2025-11-29 08:10:36.465 226310 DEBUG oslo_concurrency.processutils [None req-c66761a2-137b-4200-ae86-0f52031ef82c 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:36 np0005539564 nova_compute[226295]: 2025-11-29 08:10:36.764 226310 DEBUG nova.compute.manager [req-f1eaba87-adee-4666-aa3b-43f3c7fd7671 req-d1aa7696-a765-4ba1-bd59-cc67788714ed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Received event network-vif-plugged-90ea25d6-c92d-43c5-ac73-129da2340c50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:36 np0005539564 nova_compute[226295]: 2025-11-29 08:10:36.765 226310 DEBUG oslo_concurrency.lockutils [req-f1eaba87-adee-4666-aa3b-43f3c7fd7671 req-d1aa7696-a765-4ba1-bd59-cc67788714ed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ce907ab6-8db5-48f6-9380-13c236bae1ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:36 np0005539564 nova_compute[226295]: 2025-11-29 08:10:36.766 226310 DEBUG oslo_concurrency.lockutils [req-f1eaba87-adee-4666-aa3b-43f3c7fd7671 req-d1aa7696-a765-4ba1-bd59-cc67788714ed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ce907ab6-8db5-48f6-9380-13c236bae1ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:36 np0005539564 nova_compute[226295]: 2025-11-29 08:10:36.766 226310 DEBUG oslo_concurrency.lockutils [req-f1eaba87-adee-4666-aa3b-43f3c7fd7671 req-d1aa7696-a765-4ba1-bd59-cc67788714ed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ce907ab6-8db5-48f6-9380-13c236bae1ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:36 np0005539564 nova_compute[226295]: 2025-11-29 08:10:36.767 226310 DEBUG nova.compute.manager [req-f1eaba87-adee-4666-aa3b-43f3c7fd7671 req-d1aa7696-a765-4ba1-bd59-cc67788714ed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] No waiting events found dispatching network-vif-plugged-90ea25d6-c92d-43c5-ac73-129da2340c50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:36 np0005539564 nova_compute[226295]: 2025-11-29 08:10:36.767 226310 WARNING nova.compute.manager [req-f1eaba87-adee-4666-aa3b-43f3c7fd7671 req-d1aa7696-a765-4ba1-bd59-cc67788714ed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Received unexpected event network-vif-plugged-90ea25d6-c92d-43c5-ac73-129da2340c50 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:10:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:10:36 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1567484573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:10:36 np0005539564 nova_compute[226295]: 2025-11-29 08:10:36.966 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:36 np0005539564 nova_compute[226295]: 2025-11-29 08:10:36.983 226310 DEBUG oslo_concurrency.processutils [None req-c66761a2-137b-4200-ae86-0f52031ef82c 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:36 np0005539564 nova_compute[226295]: 2025-11-29 08:10:36.990 226310 DEBUG nova.compute.provider_tree [None req-c66761a2-137b-4200-ae86-0f52031ef82c 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:10:37 np0005539564 nova_compute[226295]: 2025-11-29 08:10:37.020 226310 DEBUG nova.scheduler.client.report [None req-c66761a2-137b-4200-ae86-0f52031ef82c 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:10:37 np0005539564 nova_compute[226295]: 2025-11-29 08:10:37.047 226310 DEBUG oslo_concurrency.lockutils [None req-c66761a2-137b-4200-ae86-0f52031ef82c 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:37 np0005539564 nova_compute[226295]: 2025-11-29 08:10:37.077 226310 INFO nova.scheduler.client.report [None req-c66761a2-137b-4200-ae86-0f52031ef82c 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Deleted allocations for instance ce907ab6-8db5-48f6-9380-13c236bae1ce#033[00m
Nov 29 03:10:37 np0005539564 nova_compute[226295]: 2025-11-29 08:10:37.180 226310 DEBUG oslo_concurrency.lockutils [None req-c66761a2-137b-4200-ae86-0f52031ef82c 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "ce907ab6-8db5-48f6-9380-13c236bae1ce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.922s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:37.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:37.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:39 np0005539564 nova_compute[226295]: 2025-11-29 08:10:39.337 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:39 np0005539564 nova_compute[226295]: 2025-11-29 08:10:39.531 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:39.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:39.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:41.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:10:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:41.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:10:41 np0005539564 nova_compute[226295]: 2025-11-29 08:10:41.969 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:42 np0005539564 nova_compute[226295]: 2025-11-29 08:10:42.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:43 np0005539564 ovn_controller[130591]: 2025-11-29T08:10:43Z|00344|binding|INFO|Releasing lport a2e47e7a-aef0-4c09-aeef-4a0d63960d7b from this chassis (sb_readonly=0)
Nov 29 03:10:43 np0005539564 nova_compute[226295]: 2025-11-29 08:10:43.325 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:43 np0005539564 nova_compute[226295]: 2025-11-29 08:10:43.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:43 np0005539564 nova_compute[226295]: 2025-11-29 08:10:43.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:43 np0005539564 nova_compute[226295]: 2025-11-29 08:10:43.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:10:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:43.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:43.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:44 np0005539564 nova_compute[226295]: 2025-11-29 08:10:44.534 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:45 np0005539564 nova_compute[226295]: 2025-11-29 08:10:45.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:45 np0005539564 nova_compute[226295]: 2025-11-29 08:10:45.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:10:45 np0005539564 nova_compute[226295]: 2025-11-29 08:10:45.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:10:45 np0005539564 nova_compute[226295]: 2025-11-29 08:10:45.684 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:10:45 np0005539564 nova_compute[226295]: 2025-11-29 08:10:45.686 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:10:45 np0005539564 nova_compute[226295]: 2025-11-29 08:10:45.687 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:10:45 np0005539564 nova_compute[226295]: 2025-11-29 08:10:45.687 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:45.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:45.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 32K writes, 130K keys, 32K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.04 MB/s#012Cumulative WAL: 32K writes, 10K syncs, 3.01 writes per sync, written: 0.13 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 9060 writes, 34K keys, 9060 commit groups, 1.0 writes per commit group, ingest: 35.75 MB, 0.06 MB/s#012Interval WAL: 9061 writes, 3577 syncs, 2.53 writes per sync, written: 0.03 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:10:46 np0005539564 nova_compute[226295]: 2025-11-29 08:10:46.971 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:47.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:10:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:47.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:10:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:10:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:49.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:10:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:49.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:50 np0005539564 nova_compute[226295]: 2025-11-29 08:10:49.999 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:50 np0005539564 nova_compute[226295]: 2025-11-29 08:10:50.002 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403834.506486, ce907ab6-8db5-48f6-9380-13c236bae1ce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:10:50 np0005539564 nova_compute[226295]: 2025-11-29 08:10:50.003 226310 INFO nova.compute.manager [-] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:10:50 np0005539564 nova_compute[226295]: 2025-11-29 08:10:50.035 226310 DEBUG nova.compute.manager [None req-a8d5290f-abcd-44bd-b8fa-127983bf2e2c - - - - - -] [instance: ce907ab6-8db5-48f6-9380-13c236bae1ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:51 np0005539564 nova_compute[226295]: 2025-11-29 08:10:51.332 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Updating instance_info_cache with network_info: [{"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:51 np0005539564 nova_compute[226295]: 2025-11-29 08:10:51.349 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:10:51 np0005539564 nova_compute[226295]: 2025-11-29 08:10:51.350 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:10:51 np0005539564 nova_compute[226295]: 2025-11-29 08:10:51.350 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:51 np0005539564 nova_compute[226295]: 2025-11-29 08:10:51.350 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:51 np0005539564 nova_compute[226295]: 2025-11-29 08:10:51.351 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:51 np0005539564 nova_compute[226295]: 2025-11-29 08:10:51.370 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:51 np0005539564 nova_compute[226295]: 2025-11-29 08:10:51.371 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:51 np0005539564 nova_compute[226295]: 2025-11-29 08:10:51.371 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:51 np0005539564 nova_compute[226295]: 2025-11-29 08:10:51.371 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:10:51 np0005539564 nova_compute[226295]: 2025-11-29 08:10:51.372 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:51.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:51 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:10:51 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3661516400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:10:51 np0005539564 nova_compute[226295]: 2025-11-29 08:10:51.824 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:51 np0005539564 nova_compute[226295]: 2025-11-29 08:10:51.914 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:10:51 np0005539564 nova_compute[226295]: 2025-11-29 08:10:51.914 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:10:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:51.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:51 np0005539564 nova_compute[226295]: 2025-11-29 08:10:51.974 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:52 np0005539564 nova_compute[226295]: 2025-11-29 08:10:52.137 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:10:52 np0005539564 nova_compute[226295]: 2025-11-29 08:10:52.140 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4356MB free_disk=20.87619400024414GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:10:52 np0005539564 nova_compute[226295]: 2025-11-29 08:10:52.140 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:52 np0005539564 nova_compute[226295]: 2025-11-29 08:10:52.141 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:52 np0005539564 nova_compute[226295]: 2025-11-29 08:10:52.252 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:10:52 np0005539564 nova_compute[226295]: 2025-11-29 08:10:52.253 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:10:52 np0005539564 nova_compute[226295]: 2025-11-29 08:10:52.253 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:10:52 np0005539564 nova_compute[226295]: 2025-11-29 08:10:52.324 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:10:52 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3378207188' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:10:52 np0005539564 nova_compute[226295]: 2025-11-29 08:10:52.812 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:52 np0005539564 nova_compute[226295]: 2025-11-29 08:10:52.819 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:10:52 np0005539564 nova_compute[226295]: 2025-11-29 08:10:52.836 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:10:52 np0005539564 nova_compute[226295]: 2025-11-29 08:10:52.863 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:10:52 np0005539564 nova_compute[226295]: 2025-11-29 08:10:52.864 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:53 np0005539564 nova_compute[226295]: 2025-11-29 08:10:53.153 226310 DEBUG nova.compute.manager [None req-91a8fb21-f14e-4430-bddf-44b3d4401096 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:53 np0005539564 nova_compute[226295]: 2025-11-29 08:10:53.191 226310 INFO nova.compute.manager [None req-91a8fb21-f14e-4430-bddf-44b3d4401096 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] instance snapshotting#033[00m
Nov 29 03:10:53 np0005539564 nova_compute[226295]: 2025-11-29 08:10:53.193 226310 DEBUG nova.objects.instance [None req-91a8fb21-f14e-4430-bddf-44b3d4401096 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'flavor' on Instance uuid 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:53 np0005539564 nova_compute[226295]: 2025-11-29 08:10:53.463 226310 INFO nova.virt.libvirt.driver [None req-91a8fb21-f14e-4430-bddf-44b3d4401096 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Beginning live snapshot process#033[00m
Nov 29 03:10:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:53.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:53 np0005539564 nova_compute[226295]: 2025-11-29 08:10:53.714 226310 DEBUG nova.virt.libvirt.imagebackend [None req-91a8fb21-f14e-4430-bddf-44b3d4401096 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:10:53 np0005539564 nova_compute[226295]: 2025-11-29 08:10:53.857 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:53.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:53 np0005539564 nova_compute[226295]: 2025-11-29 08:10:53.990 226310 DEBUG nova.storage.rbd_utils [None req-91a8fb21-f14e-4430-bddf-44b3d4401096 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] creating snapshot(220967990cf7491188cc401116821f11) on rbd image(8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:10:54 np0005539564 nova_compute[226295]: 2025-11-29 08:10:54.307 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e266 e266: 3 total, 3 up, 3 in
Nov 29 03:10:54 np0005539564 nova_compute[226295]: 2025-11-29 08:10:54.531 226310 DEBUG nova.storage.rbd_utils [None req-91a8fb21-f14e-4430-bddf-44b3d4401096 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] cloning vms/8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk@220967990cf7491188cc401116821f11 to images/19d37975-6e79-4fd9-8985-4a51a67c9db3 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:10:54 np0005539564 nova_compute[226295]: 2025-11-29 08:10:54.676 226310 DEBUG nova.storage.rbd_utils [None req-91a8fb21-f14e-4430-bddf-44b3d4401096 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] flattening images/19d37975-6e79-4fd9-8985-4a51a67c9db3 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:10:55 np0005539564 nova_compute[226295]: 2025-11-29 08:10:55.000 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:55 np0005539564 nova_compute[226295]: 2025-11-29 08:10:55.188 226310 DEBUG nova.storage.rbd_utils [None req-91a8fb21-f14e-4430-bddf-44b3d4401096 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] removing snapshot(220967990cf7491188cc401116821f11) on rbd image(8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:10:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:55.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e267 e267: 3 total, 3 up, 3 in
Nov 29 03:10:55 np0005539564 nova_compute[226295]: 2025-11-29 08:10:55.740 226310 DEBUG nova.storage.rbd_utils [None req-91a8fb21-f14e-4430-bddf-44b3d4401096 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] creating snapshot(snap) on rbd image(19d37975-6e79-4fd9-8985-4a51a67c9db3) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:10:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:55.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:56 np0005539564 podman[262471]: 2025-11-29 08:10:56.549733244 +0000 UTC m=+0.085864327 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 03:10:56 np0005539564 podman[262469]: 2025-11-29 08:10:56.636008621 +0000 UTC m=+0.184168828 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:10:56 np0005539564 podman[262470]: 2025-11-29 08:10:56.641368575 +0000 UTC m=+0.179371129 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 29 03:10:56 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e268 e268: 3 total, 3 up, 3 in
Nov 29 03:10:56 np0005539564 nova_compute[226295]: 2025-11-29 08:10:56.978 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:57.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:57 np0005539564 nova_compute[226295]: 2025-11-29 08:10:57.812 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:57.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:58 np0005539564 nova_compute[226295]: 2025-11-29 08:10:58.238 226310 INFO nova.virt.libvirt.driver [None req-91a8fb21-f14e-4430-bddf-44b3d4401096 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Snapshot image upload complete
Nov 29 03:10:58 np0005539564 nova_compute[226295]: 2025-11-29 08:10:58.239 226310 INFO nova.compute.manager [None req-91a8fb21-f14e-4430-bddf-44b3d4401096 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Took 5.01 seconds to snapshot the instance on the hypervisor.
Nov 29 03:10:58 np0005539564 nova_compute[226295]: 2025-11-29 08:10:58.620 226310 DEBUG nova.compute.manager [None req-91a8fb21-f14e-4430-bddf-44b3d4401096 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Nov 29 03:10:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:59.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:10:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:59.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:00 np0005539564 nova_compute[226295]: 2025-11-29 08:11:00.002 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:00 np0005539564 nova_compute[226295]: 2025-11-29 08:11:00.845 226310 DEBUG nova.compute.manager [None req-860f2471-20bb-455f-8fb9-300d002071bd ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:11:00 np0005539564 nova_compute[226295]: 2025-11-29 08:11:00.899 226310 INFO nova.compute.manager [None req-860f2471-20bb-455f-8fb9-300d002071bd ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] instance snapshotting
Nov 29 03:11:00 np0005539564 nova_compute[226295]: 2025-11-29 08:11:00.901 226310 DEBUG nova.objects.instance [None req-860f2471-20bb-455f-8fb9-300d002071bd ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'flavor' on Instance uuid 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:11:01 np0005539564 nova_compute[226295]: 2025-11-29 08:11:01.182 226310 INFO nova.virt.libvirt.driver [None req-860f2471-20bb-455f-8fb9-300d002071bd ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Beginning live snapshot process
Nov 29 03:11:01 np0005539564 nova_compute[226295]: 2025-11-29 08:11:01.355 226310 DEBUG nova.virt.libvirt.imagebackend [None req-860f2471-20bb-455f-8fb9-300d002071bd ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 03:11:01 np0005539564 nova_compute[226295]: 2025-11-29 08:11:01.577 226310 DEBUG nova.storage.rbd_utils [None req-860f2471-20bb-455f-8fb9-300d002071bd ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] creating snapshot(3d1d9ec6bafa48fda0cae2de63b4219d) on rbd image(8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:11:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:01.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e269 e269: 3 total, 3 up, 3 in
Nov 29 03:11:01 np0005539564 nova_compute[226295]: 2025-11-29 08:11:01.834 226310 DEBUG nova.storage.rbd_utils [None req-860f2471-20bb-455f-8fb9-300d002071bd ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] cloning vms/8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk@3d1d9ec6bafa48fda0cae2de63b4219d to images/0136f9fb-9ab3-4552-8505-f62bd960c3d5 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 03:11:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:01.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:01 np0005539564 nova_compute[226295]: 2025-11-29 08:11:01.981 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:01 np0005539564 nova_compute[226295]: 2025-11-29 08:11:01.994 226310 DEBUG nova.storage.rbd_utils [None req-860f2471-20bb-455f-8fb9-300d002071bd ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] flattening images/0136f9fb-9ab3-4552-8505-f62bd960c3d5 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 03:11:02 np0005539564 nova_compute[226295]: 2025-11-29 08:11:02.501 226310 DEBUG nova.storage.rbd_utils [None req-860f2471-20bb-455f-8fb9-300d002071bd ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] removing snapshot(3d1d9ec6bafa48fda0cae2de63b4219d) on rbd image(8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 03:11:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e270 e270: 3 total, 3 up, 3 in
Nov 29 03:11:02 np0005539564 nova_compute[226295]: 2025-11-29 08:11:02.857 226310 DEBUG nova.storage.rbd_utils [None req-860f2471-20bb-455f-8fb9-300d002071bd ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] creating snapshot(snap) on rbd image(0136f9fb-9ab3-4552-8505-f62bd960c3d5) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:11:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:03.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:11:03.722 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:11:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:11:03.723 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:11:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:11:03.723 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:11:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e271 e271: 3 total, 3 up, 3 in
Nov 29 03:11:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:03.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:11:04.997 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:11:04 np0005539564 nova_compute[226295]: 2025-11-29 08:11:04.997 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:11:04.999 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:11:05 np0005539564 nova_compute[226295]: 2025-11-29 08:11:05.003 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:05 np0005539564 nova_compute[226295]: 2025-11-29 08:11:05.603 226310 INFO nova.virt.libvirt.driver [None req-860f2471-20bb-455f-8fb9-300d002071bd ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Snapshot image upload complete
Nov 29 03:11:05 np0005539564 nova_compute[226295]: 2025-11-29 08:11:05.604 226310 INFO nova.compute.manager [None req-860f2471-20bb-455f-8fb9-300d002071bd ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Took 4.67 seconds to snapshot the instance on the hypervisor.
Nov 29 03:11:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:05.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:05.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:05 np0005539564 nova_compute[226295]: 2025-11-29 08:11:05.972 226310 DEBUG nova.compute.manager [None req-860f2471-20bb-455f-8fb9-300d002071bd ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Nov 29 03:11:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e272 e272: 3 total, 3 up, 3 in
Nov 29 03:11:06 np0005539564 nova_compute[226295]: 2025-11-29 08:11:06.984 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:07.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:07 np0005539564 nova_compute[226295]: 2025-11-29 08:11:07.806 226310 DEBUG nova.compute.manager [None req-71f49612-7247-4387-b1cb-f4d4cca6f3a4 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:11:07 np0005539564 nova_compute[226295]: 2025-11-29 08:11:07.851 226310 INFO nova.compute.manager [None req-71f49612-7247-4387-b1cb-f4d4cca6f3a4 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] instance snapshotting
Nov 29 03:11:07 np0005539564 nova_compute[226295]: 2025-11-29 08:11:07.852 226310 DEBUG nova.objects.instance [None req-71f49612-7247-4387-b1cb-f4d4cca6f3a4 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'flavor' on Instance uuid 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:11:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:07.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:08 np0005539564 nova_compute[226295]: 2025-11-29 08:11:08.130 226310 INFO nova.virt.libvirt.driver [None req-71f49612-7247-4387-b1cb-f4d4cca6f3a4 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Beginning live snapshot process
Nov 29 03:11:08 np0005539564 nova_compute[226295]: 2025-11-29 08:11:08.313 226310 DEBUG nova.virt.libvirt.imagebackend [None req-71f49612-7247-4387-b1cb-f4d4cca6f3a4 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 03:11:08 np0005539564 nova_compute[226295]: 2025-11-29 08:11:08.713 226310 DEBUG nova.storage.rbd_utils [None req-71f49612-7247-4387-b1cb-f4d4cca6f3a4 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] creating snapshot(484bd45ada88405f9c19e291f9f3aac9) on rbd image(8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:11:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:09.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:11:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:09.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:11:10 np0005539564 nova_compute[226295]: 2025-11-29 08:11:10.004 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e273 e273: 3 total, 3 up, 3 in
Nov 29 03:11:10 np0005539564 nova_compute[226295]: 2025-11-29 08:11:10.107 226310 DEBUG nova.storage.rbd_utils [None req-71f49612-7247-4387-b1cb-f4d4cca6f3a4 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] cloning vms/8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk@484bd45ada88405f9c19e291f9f3aac9 to images/c02352c0-4a84-408b-a38c-1ebe85e285cf clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 03:11:10 np0005539564 nova_compute[226295]: 2025-11-29 08:11:10.276 226310 DEBUG nova.storage.rbd_utils [None req-71f49612-7247-4387-b1cb-f4d4cca6f3a4 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] flattening images/c02352c0-4a84-408b-a38c-1ebe85e285cf flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 03:11:10 np0005539564 nova_compute[226295]: 2025-11-29 08:11:10.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:11:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:10 np0005539564 nova_compute[226295]: 2025-11-29 08:11:10.755 226310 DEBUG nova.storage.rbd_utils [None req-71f49612-7247-4387-b1cb-f4d4cca6f3a4 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] removing snapshot(484bd45ada88405f9c19e291f9f3aac9) on rbd image(8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 03:11:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e274 e274: 3 total, 3 up, 3 in
Nov 29 03:11:11 np0005539564 nova_compute[226295]: 2025-11-29 08:11:11.118 226310 DEBUG nova.storage.rbd_utils [None req-71f49612-7247-4387-b1cb-f4d4cca6f3a4 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] creating snapshot(snap) on rbd image(c02352c0-4a84-408b-a38c-1ebe85e285cf) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:11:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:11.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e275 e275: 3 total, 3 up, 3 in
Nov 29 03:11:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:11.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:11 np0005539564 nova_compute[226295]: 2025-11-29 08:11:11.987 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:11:12.001 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:11:13 np0005539564 nova_compute[226295]: 2025-11-29 08:11:13.376 226310 INFO nova.virt.libvirt.driver [None req-71f49612-7247-4387-b1cb-f4d4cca6f3a4 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Snapshot image upload complete
Nov 29 03:11:13 np0005539564 nova_compute[226295]: 2025-11-29 08:11:13.377 226310 INFO nova.compute.manager [None req-71f49612-7247-4387-b1cb-f4d4cca6f3a4 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Took 5.50 seconds to snapshot the instance on the hypervisor.
Nov 29 03:11:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:11:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:13.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:11:13 np0005539564 nova_compute[226295]: 2025-11-29 08:11:13.737 226310 DEBUG nova.compute.manager [None req-71f49612-7247-4387-b1cb-f4d4cca6f3a4 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Nov 29 03:11:13 np0005539564 nova_compute[226295]: 2025-11-29 08:11:13.738 226310 DEBUG nova.compute.manager [None req-71f49612-7247-4387-b1cb-f4d4cca6f3a4 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458
Nov 29 03:11:13 np0005539564 nova_compute[226295]: 2025-11-29 08:11:13.738 226310 DEBUG nova.compute.manager [None req-71f49612-7247-4387-b1cb-f4d4cca6f3a4 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Deleting image 19d37975-6e79-4fd9-8985-4a51a67c9db3 _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463
Nov 29 03:11:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:13.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e276 e276: 3 total, 3 up, 3 in
Nov 29 03:11:15 np0005539564 nova_compute[226295]: 2025-11-29 08:11:15.006 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:15.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:15.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e277 e277: 3 total, 3 up, 3 in
Nov 29 03:11:16 np0005539564 nova_compute[226295]: 2025-11-29 08:11:16.989 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:17 np0005539564 podman[262989]: 2025-11-29 08:11:17.576496363 +0000 UTC m=+0.066749360 container exec 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 03:11:17 np0005539564 podman[262989]: 2025-11-29 08:11:17.694181998 +0000 UTC m=+0.184434975 container exec_died 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 03:11:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:17.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e278 e278: 3 total, 3 up, 3 in
Nov 29 03:11:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:17.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:18 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:11:18 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:11:18 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:11:18 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:11:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e279 e279: 3 total, 3 up, 3 in
Nov 29 03:11:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:19.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:19.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:11:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:11:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:11:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:11:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:11:20 np0005539564 nova_compute[226295]: 2025-11-29 08:11:20.009 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:11:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:21.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:11:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e280 e280: 3 total, 3 up, 3 in
Nov 29 03:11:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:21.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:21 np0005539564 nova_compute[226295]: 2025-11-29 08:11:21.992 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:22 np0005539564 ovn_controller[130591]: 2025-11-29T08:11:22Z|00345|binding|INFO|Releasing lport a2e47e7a-aef0-4c09-aeef-4a0d63960d7b from this chassis (sb_readonly=0)
Nov 29 03:11:22 np0005539564 nova_compute[226295]: 2025-11-29 08:11:22.435 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:23.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:23.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:25 np0005539564 nova_compute[226295]: 2025-11-29 08:11:25.010 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:25.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:25.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e281 e281: 3 total, 3 up, 3 in
Nov 29 03:11:27 np0005539564 nova_compute[226295]: 2025-11-29 08:11:27.021 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:27 np0005539564 podman[263390]: 2025-11-29 08:11:27.414063573 +0000 UTC m=+0.086039382 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 03:11:27 np0005539564 podman[263389]: 2025-11-29 08:11:27.417300471 +0000 UTC m=+0.089715241 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:11:27 np0005539564 podman[263388]: 2025-11-29 08:11:27.447702631 +0000 UTC m=+0.120450900 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 29 03:11:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:27.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:27 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:11:27 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:11:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:27.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:11:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:29.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:11:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:29.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:30 np0005539564 nova_compute[226295]: 2025-11-29 08:11:30.012 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:31.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:31.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:32 np0005539564 nova_compute[226295]: 2025-11-29 08:11:32.022 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:33 np0005539564 nova_compute[226295]: 2025-11-29 08:11:33.156 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:33.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:33.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:11:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.0 total, 600.0 interval#012Cumulative writes: 8474 writes, 44K keys, 8474 commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s#012Cumulative WAL: 8474 writes, 8474 syncs, 1.00 writes per sync, written: 0.09 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1675 writes, 8056 keys, 1675 commit groups, 1.0 writes per commit group, ingest: 16.41 MB, 0.03 MB/s#012Interval WAL: 1675 writes, 1675 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     15.3      3.39              0.19        24    0.141       0      0       0.0       0.0#012  L6      1/0    8.37 MB   0.0      0.2     0.1      0.2       0.2      0.0       0.0   4.0     36.6     30.3      6.93              0.72        23    0.301    135K    13K       0.0       0.0#012 Sum      1/0    8.37 MB   0.0      0.2     0.1      0.2       0.3      0.1       0.0   5.0     24.6     25.4     10.33              0.91        47    0.220    135K    13K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   7.6     57.4     56.8      1.02              0.22        10    0.102     37K   2521       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.1      0.2       0.2      0.0       0.0   0.0     36.6     30.3      6.93              0.72        23    0.301    135K    13K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     15.3      3.39              0.19        23    0.147       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3600.0 total, 600.0 interval#012Flush(GB): cumulative 0.051, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.26 GB write, 0.07 MB/s write, 0.25 GB read, 0.07 MB/s read, 10.3 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558dc73191f0#2 capacity: 304.00 MB usage: 30.52 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000308 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1675,29.49 MB,9.70202%) FilterBlock(47,393.80 KB,0.126502%) IndexBlock(47,657.22 KB,0.211123%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 03:11:35 np0005539564 nova_compute[226295]: 2025-11-29 08:11:35.013 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:11:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:35.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:11:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:35.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:37 np0005539564 nova_compute[226295]: 2025-11-29 08:11:37.025 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:37 np0005539564 nova_compute[226295]: 2025-11-29 08:11:37.286 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:37.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:37.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:39.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:39.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:40 np0005539564 nova_compute[226295]: 2025-11-29 08:11:40.016 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:40 np0005539564 nova_compute[226295]: 2025-11-29 08:11:40.385 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:41.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:42.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:42 np0005539564 nova_compute[226295]: 2025-11-29 08:11:42.029 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:43 np0005539564 nova_compute[226295]: 2025-11-29 08:11:43.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:43.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:44.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:44 np0005539564 nova_compute[226295]: 2025-11-29 08:11:44.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:44 np0005539564 nova_compute[226295]: 2025-11-29 08:11:44.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:44 np0005539564 nova_compute[226295]: 2025-11-29 08:11:44.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:11:45 np0005539564 nova_compute[226295]: 2025-11-29 08:11:45.017 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:45.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:46.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:46 np0005539564 nova_compute[226295]: 2025-11-29 08:11:46.315 226310 DEBUG nova.compute.manager [None req-c3d6f4d3-1621-4da2-b176-659fcc379feb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Getting vnc console get_vnc_console /usr/lib/python3.9/site-packages/nova/compute/manager.py:7196#033[00m
Nov 29 03:11:46 np0005539564 nova_compute[226295]: 2025-11-29 08:11:46.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:46 np0005539564 nova_compute[226295]: 2025-11-29 08:11:46.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:11:46 np0005539564 nova_compute[226295]: 2025-11-29 08:11:46.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:11:46 np0005539564 nova_compute[226295]: 2025-11-29 08:11:46.889 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:11:46 np0005539564 nova_compute[226295]: 2025-11-29 08:11:46.890 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:11:46 np0005539564 nova_compute[226295]: 2025-11-29 08:11:46.890 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:11:46 np0005539564 nova_compute[226295]: 2025-11-29 08:11:46.890 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:47 np0005539564 nova_compute[226295]: 2025-11-29 08:11:47.030 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:47.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:48.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:48 np0005539564 nova_compute[226295]: 2025-11-29 08:11:48.337 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Updating instance_info_cache with network_info: [{"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:48 np0005539564 nova_compute[226295]: 2025-11-29 08:11:48.363 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:11:48 np0005539564 nova_compute[226295]: 2025-11-29 08:11:48.364 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:11:48 np0005539564 nova_compute[226295]: 2025-11-29 08:11:48.365 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:48 np0005539564 nova_compute[226295]: 2025-11-29 08:11:48.366 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:48 np0005539564 nova_compute[226295]: 2025-11-29 08:11:48.366 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:11:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:11:49 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3995624476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:11:49 np0005539564 nova_compute[226295]: 2025-11-29 08:11:49.360 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:49.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:49 np0005539564 nova_compute[226295]: 2025-11-29 08:11:49.931 226310 DEBUG oslo_concurrency.lockutils [None req-1114d86b-5451-40da-b592-10593c3f9905 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:49 np0005539564 nova_compute[226295]: 2025-11-29 08:11:49.932 226310 DEBUG oslo_concurrency.lockutils [None req-1114d86b-5451-40da-b592-10593c3f9905 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:49 np0005539564 nova_compute[226295]: 2025-11-29 08:11:49.933 226310 DEBUG nova.compute.manager [None req-1114d86b-5451-40da-b592-10593c3f9905 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:11:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:50.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:50 np0005539564 nova_compute[226295]: 2025-11-29 08:11:50.019 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:50 np0005539564 nova_compute[226295]: 2025-11-29 08:11:50.131 226310 DEBUG nova.compute.manager [None req-1114d86b-5451-40da-b592-10593c3f9905 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 29 03:11:50 np0005539564 nova_compute[226295]: 2025-11-29 08:11:50.132 226310 DEBUG nova.objects.instance [None req-1114d86b-5451-40da-b592-10593c3f9905 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'flavor' on Instance uuid 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:50 np0005539564 nova_compute[226295]: 2025-11-29 08:11:50.193 226310 DEBUG nova.virt.libvirt.driver [None req-1114d86b-5451-40da-b592-10593c3f9905 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:11:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:51.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:52.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:52 np0005539564 nova_compute[226295]: 2025-11-29 08:11:52.032 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:52 np0005539564 nova_compute[226295]: 2025-11-29 08:11:52.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:52 np0005539564 nova_compute[226295]: 2025-11-29 08:11:52.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:52 np0005539564 nova_compute[226295]: 2025-11-29 08:11:52.565 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:52 np0005539564 nova_compute[226295]: 2025-11-29 08:11:52.566 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:52 np0005539564 nova_compute[226295]: 2025-11-29 08:11:52.566 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:52 np0005539564 nova_compute[226295]: 2025-11-29 08:11:52.567 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:11:52 np0005539564 nova_compute[226295]: 2025-11-29 08:11:52.567 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:52 np0005539564 kernel: tapb7329edb-be (unregistering): left promiscuous mode
Nov 29 03:11:52 np0005539564 NetworkManager[48997]: <info>  [1764403912.7735] device (tapb7329edb-be): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:11:52 np0005539564 nova_compute[226295]: 2025-11-29 08:11:52.785 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:52 np0005539564 ovn_controller[130591]: 2025-11-29T08:11:52Z|00346|binding|INFO|Releasing lport b7329edb-beb8-414a-b8c3-33d223c32d22 from this chassis (sb_readonly=0)
Nov 29 03:11:52 np0005539564 ovn_controller[130591]: 2025-11-29T08:11:52Z|00347|binding|INFO|Setting lport b7329edb-beb8-414a-b8c3-33d223c32d22 down in Southbound
Nov 29 03:11:52 np0005539564 ovn_controller[130591]: 2025-11-29T08:11:52Z|00348|binding|INFO|Removing iface tapb7329edb-be ovn-installed in OVS
Nov 29 03:11:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:11:52.802 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:69:0e 10.100.0.6'], port_security=['fa:16:3e:c7:69:0e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba867fac17034bb28fe2cdb0fff3af2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a54db614-4504-4e8e-a3a5-27d3f60f6cdf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5e4b2f3-5e6e-48f8-b35a-ab61c62108a6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=b7329edb-beb8-414a-b8c3-33d223c32d22) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:11:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:11:52.809 139780 INFO neutron.agent.ovn.metadata.agent [-] Port b7329edb-beb8-414a-b8c3-33d223c32d22 in datapath 4d5b8c11-b69e-4a74-846b-03943fb29a81 unbound from our chassis#033[00m
Nov 29 03:11:52 np0005539564 nova_compute[226295]: 2025-11-29 08:11:52.812 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:11:52.813 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d5b8c11-b69e-4a74-846b-03943fb29a81, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:11:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:11:52.817 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[741c9355-b87f-47d0-b022-6217db570704]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:11:52.818 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 namespace which is not needed anymore#033[00m
Nov 29 03:11:52 np0005539564 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000067.scope: Deactivated successfully.
Nov 29 03:11:52 np0005539564 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000067.scope: Consumed 19.147s CPU time.
Nov 29 03:11:52 np0005539564 systemd-machined[190128]: Machine qemu-42-instance-00000067 terminated.
Nov 29 03:11:53 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[262026]: [NOTICE]   (262030) : haproxy version is 2.8.14-c23fe91
Nov 29 03:11:53 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[262026]: [NOTICE]   (262030) : path to executable is /usr/sbin/haproxy
Nov 29 03:11:53 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[262026]: [WARNING]  (262030) : Exiting Master process...
Nov 29 03:11:53 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[262026]: [WARNING]  (262030) : Exiting Master process...
Nov 29 03:11:53 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[262026]: [ALERT]    (262030) : Current worker (262032) exited with code 143 (Terminated)
Nov 29 03:11:53 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[262026]: [WARNING]  (262030) : All workers exited. Exiting... (0)
Nov 29 03:11:53 np0005539564 systemd[1]: libpod-5fc4084c5885982abfa5a050afbd4abc4c8fdab4ecad64b9d4fa3675842ea45d.scope: Deactivated successfully.
Nov 29 03:11:53 np0005539564 podman[263526]: 2025-11-29 08:11:53.019494918 +0000 UTC m=+0.067922583 container died 5fc4084c5885982abfa5a050afbd4abc4c8fdab4ecad64b9d4fa3675842ea45d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:11:53 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5fc4084c5885982abfa5a050afbd4abc4c8fdab4ecad64b9d4fa3675842ea45d-userdata-shm.mount: Deactivated successfully.
Nov 29 03:11:53 np0005539564 systemd[1]: var-lib-containers-storage-overlay-b1522e1d5571a6ba749318c467136c113554c1663d3bbd3a3c9ed647918af1f2-merged.mount: Deactivated successfully.
Nov 29 03:11:53 np0005539564 podman[263526]: 2025-11-29 08:11:53.070496064 +0000 UTC m=+0.118923769 container cleanup 5fc4084c5885982abfa5a050afbd4abc4c8fdab4ecad64b9d4fa3675842ea45d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:11:53 np0005539564 systemd[1]: libpod-conmon-5fc4084c5885982abfa5a050afbd4abc4c8fdab4ecad64b9d4fa3675842ea45d.scope: Deactivated successfully.
Nov 29 03:11:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:11:53 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4282980456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:11:53 np0005539564 nova_compute[226295]: 2025-11-29 08:11:53.139 226310 DEBUG nova.compute.manager [req-2d6f9921-54ec-4add-a4c7-58f2cbd737e4 req-a11262b7-ba23-4c2e-a54a-d6c021dd13f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Received event network-vif-unplugged-b7329edb-beb8-414a-b8c3-33d223c32d22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:53 np0005539564 nova_compute[226295]: 2025-11-29 08:11:53.141 226310 DEBUG oslo_concurrency.lockutils [req-2d6f9921-54ec-4add-a4c7-58f2cbd737e4 req-a11262b7-ba23-4c2e-a54a-d6c021dd13f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:53 np0005539564 nova_compute[226295]: 2025-11-29 08:11:53.142 226310 DEBUG oslo_concurrency.lockutils [req-2d6f9921-54ec-4add-a4c7-58f2cbd737e4 req-a11262b7-ba23-4c2e-a54a-d6c021dd13f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:53 np0005539564 nova_compute[226295]: 2025-11-29 08:11:53.142 226310 DEBUG oslo_concurrency.lockutils [req-2d6f9921-54ec-4add-a4c7-58f2cbd737e4 req-a11262b7-ba23-4c2e-a54a-d6c021dd13f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:53 np0005539564 nova_compute[226295]: 2025-11-29 08:11:53.142 226310 DEBUG nova.compute.manager [req-2d6f9921-54ec-4add-a4c7-58f2cbd737e4 req-a11262b7-ba23-4c2e-a54a-d6c021dd13f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] No waiting events found dispatching network-vif-unplugged-b7329edb-beb8-414a-b8c3-33d223c32d22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:11:53 np0005539564 nova_compute[226295]: 2025-11-29 08:11:53.143 226310 WARNING nova.compute.manager [req-2d6f9921-54ec-4add-a4c7-58f2cbd737e4 req-a11262b7-ba23-4c2e-a54a-d6c021dd13f4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Received unexpected event network-vif-unplugged-b7329edb-beb8-414a-b8c3-33d223c32d22 for instance with vm_state active and task_state powering-off.#033[00m
Nov 29 03:11:53 np0005539564 nova_compute[226295]: 2025-11-29 08:11:53.144 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:53 np0005539564 podman[263567]: 2025-11-29 08:11:53.161073397 +0000 UTC m=+0.059844355 container remove 5fc4084c5885982abfa5a050afbd4abc4c8fdab4ecad64b9d4fa3675842ea45d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:11:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:11:53.171 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[72035a7c-1858-43ae-b152-42e2bd6d0c85]: (4, ('Sat Nov 29 08:11:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 (5fc4084c5885982abfa5a050afbd4abc4c8fdab4ecad64b9d4fa3675842ea45d)\n5fc4084c5885982abfa5a050afbd4abc4c8fdab4ecad64b9d4fa3675842ea45d\nSat Nov 29 08:11:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 (5fc4084c5885982abfa5a050afbd4abc4c8fdab4ecad64b9d4fa3675842ea45d)\n5fc4084c5885982abfa5a050afbd4abc4c8fdab4ecad64b9d4fa3675842ea45d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:11:53.172 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0db4ef1e-70dc-4250-a41c-090fdd2b663f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:11:53.173 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d5b8c11-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:53 np0005539564 kernel: tap4d5b8c11-b0: left promiscuous mode
Nov 29 03:11:53 np0005539564 nova_compute[226295]: 2025-11-29 08:11:53.177 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:53 np0005539564 nova_compute[226295]: 2025-11-29 08:11:53.193 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:53 np0005539564 nova_compute[226295]: 2025-11-29 08:11:53.195 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:11:53.196 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc6ee77-f525-42b2-b9e0-160dc8bad33d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:11:53.213 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e136551b-9e91-458c-ad85-08b432c9a5bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:11:53.216 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4ae0424f-efbc-4d0b-b2f0-a967ded39427]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:53 np0005539564 nova_compute[226295]: 2025-11-29 08:11:53.220 226310 INFO nova.virt.libvirt.driver [None req-1114d86b-5451-40da-b592-10593c3f9905 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:11:53 np0005539564 nova_compute[226295]: 2025-11-29 08:11:53.228 226310 INFO nova.virt.libvirt.driver [-] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Instance destroyed successfully.#033[00m
Nov 29 03:11:53 np0005539564 nova_compute[226295]: 2025-11-29 08:11:53.229 226310 DEBUG nova.objects.instance [None req-1114d86b-5451-40da-b592-10593c3f9905 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'numa_topology' on Instance uuid 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:11:53.235 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[724f7eed-bbdc-40d2-bed4-8a79e0d026e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678943, 'reachable_time': 18410, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263589, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:11:53.240 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:11:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:11:53.240 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[bfcd3a57-b9e3-45e3-8ded-838cfb9ab816]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:53 np0005539564 systemd[1]: run-netns-ovnmeta\x2d4d5b8c11\x2db69e\x2d4a74\x2d846b\x2d03943fb29a81.mount: Deactivated successfully.
Nov 29 03:11:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:11:53.263 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:11:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:11:53.265 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:11:53 np0005539564 nova_compute[226295]: 2025-11-29 08:11:53.266 226310 DEBUG nova.compute.manager [None req-1114d86b-5451-40da-b592-10593c3f9905 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:11:53 np0005539564 nova_compute[226295]: 2025-11-29 08:11:53.267 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:53 np0005539564 nova_compute[226295]: 2025-11-29 08:11:53.313 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:11:53 np0005539564 nova_compute[226295]: 2025-11-29 08:11:53.313 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:11:53 np0005539564 nova_compute[226295]: 2025-11-29 08:11:53.332 226310 DEBUG oslo_concurrency.lockutils [None req-1114d86b-5451-40da-b592-10593c3f9905 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.400s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:53 np0005539564 nova_compute[226295]: 2025-11-29 08:11:53.522 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:11:53 np0005539564 nova_compute[226295]: 2025-11-29 08:11:53.524 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4490MB free_disk=20.809612274169922GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:11:53 np0005539564 nova_compute[226295]: 2025-11-29 08:11:53.524 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:53 np0005539564 nova_compute[226295]: 2025-11-29 08:11:53.524 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:11:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:53.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:11:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:54.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:55 np0005539564 nova_compute[226295]: 2025-11-29 08:11:55.021 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:55.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:56.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:56 np0005539564 nova_compute[226295]: 2025-11-29 08:11:56.254 226310 DEBUG nova.compute.manager [req-2e36e7b8-4053-40bb-8aad-63d8c02d8b0c req-b14636d0-4ba0-4cab-b921-4ff7e4d92e07 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Received event network-vif-plugged-b7329edb-beb8-414a-b8c3-33d223c32d22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:56 np0005539564 nova_compute[226295]: 2025-11-29 08:11:56.255 226310 DEBUG oslo_concurrency.lockutils [req-2e36e7b8-4053-40bb-8aad-63d8c02d8b0c req-b14636d0-4ba0-4cab-b921-4ff7e4d92e07 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:56 np0005539564 nova_compute[226295]: 2025-11-29 08:11:56.256 226310 DEBUG oslo_concurrency.lockutils [req-2e36e7b8-4053-40bb-8aad-63d8c02d8b0c req-b14636d0-4ba0-4cab-b921-4ff7e4d92e07 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:56 np0005539564 nova_compute[226295]: 2025-11-29 08:11:56.256 226310 DEBUG oslo_concurrency.lockutils [req-2e36e7b8-4053-40bb-8aad-63d8c02d8b0c req-b14636d0-4ba0-4cab-b921-4ff7e4d92e07 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:56 np0005539564 nova_compute[226295]: 2025-11-29 08:11:56.256 226310 DEBUG nova.compute.manager [req-2e36e7b8-4053-40bb-8aad-63d8c02d8b0c req-b14636d0-4ba0-4cab-b921-4ff7e4d92e07 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] No waiting events found dispatching network-vif-plugged-b7329edb-beb8-414a-b8c3-33d223c32d22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:11:56 np0005539564 nova_compute[226295]: 2025-11-29 08:11:56.257 226310 WARNING nova.compute.manager [req-2e36e7b8-4053-40bb-8aad-63d8c02d8b0c req-b14636d0-4ba0-4cab-b921-4ff7e4d92e07 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Received unexpected event network-vif-plugged-b7329edb-beb8-414a-b8c3-33d223c32d22 for instance with vm_state stopped and task_state None.#033[00m
Nov 29 03:11:56 np0005539564 nova_compute[226295]: 2025-11-29 08:11:56.466 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:11:56 np0005539564 nova_compute[226295]: 2025-11-29 08:11:56.467 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:11:56 np0005539564 nova_compute[226295]: 2025-11-29 08:11:56.467 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:11:56 np0005539564 nova_compute[226295]: 2025-11-29 08:11:56.583 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:57 np0005539564 nova_compute[226295]: 2025-11-29 08:11:57.063 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:11:57 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1294256293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:11:57 np0005539564 nova_compute[226295]: 2025-11-29 08:11:57.120 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:57 np0005539564 nova_compute[226295]: 2025-11-29 08:11:57.128 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:11:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:57.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:58.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:58 np0005539564 podman[263615]: 2025-11-29 08:11:58.542557188 +0000 UTC m=+0.084851730 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:11:58 np0005539564 podman[263614]: 2025-11-29 08:11:58.548782156 +0000 UTC m=+0.091122999 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:11:58 np0005539564 podman[263613]: 2025-11-29 08:11:58.57597724 +0000 UTC m=+0.131251452 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 29 03:11:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:11:59.268 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:11:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:59.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:00 np0005539564 nova_compute[226295]: 2025-11-29 08:12:00.023 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:00.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:00 np0005539564 nova_compute[226295]: 2025-11-29 08:12:00.318 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:12:00 np0005539564 nova_compute[226295]: 2025-11-29 08:12:00.394 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:12:00 np0005539564 nova_compute[226295]: 2025-11-29 08:12:00.395 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 6.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:00 np0005539564 nova_compute[226295]: 2025-11-29 08:12:00.396 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:00 np0005539564 nova_compute[226295]: 2025-11-29 08:12:00.397 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:12:00 np0005539564 nova_compute[226295]: 2025-11-29 08:12:00.492 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:12:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:01.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:02.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:02 np0005539564 nova_compute[226295]: 2025-11-29 08:12:02.066 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:03.723 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:12:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:03.723 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:12:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:03.724 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:12:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:03.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:04.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:05 np0005539564 nova_compute[226295]: 2025-11-29 08:12:05.029 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:05.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:06.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:06 np0005539564 nova_compute[226295]: 2025-11-29 08:12:06.692 226310 DEBUG nova.compute.manager [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Stashing vm_state: stopped _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Nov 29 03:12:06 np0005539564 nova_compute[226295]: 2025-11-29 08:12:06.814 226310 DEBUG oslo_concurrency.lockutils [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:12:06 np0005539564 nova_compute[226295]: 2025-11-29 08:12:06.815 226310 DEBUG oslo_concurrency.lockutils [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:12:06 np0005539564 nova_compute[226295]: 2025-11-29 08:12:06.846 226310 DEBUG nova.objects.instance [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'pci_requests' on Instance uuid 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:12:06 np0005539564 nova_compute[226295]: 2025-11-29 08:12:06.872 226310 DEBUG nova.virt.hardware [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:12:06 np0005539564 nova_compute[226295]: 2025-11-29 08:12:06.873 226310 INFO nova.compute.claims [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Claim successful on node compute-1.ctlplane.example.com
Nov 29 03:12:06 np0005539564 nova_compute[226295]: 2025-11-29 08:12:06.874 226310 DEBUG nova.objects.instance [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'resources' on Instance uuid 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:12:06 np0005539564 nova_compute[226295]: 2025-11-29 08:12:06.900 226310 DEBUG nova.objects.instance [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'pci_devices' on Instance uuid 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:12:06 np0005539564 nova_compute[226295]: 2025-11-29 08:12:06.967 226310 INFO nova.compute.resource_tracker [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Updating resource usage from migration 437a1b20-a903-4d00-94b4-5fa3849d9d42
Nov 29 03:12:07 np0005539564 nova_compute[226295]: 2025-11-29 08:12:07.069 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:07 np0005539564 nova_compute[226295]: 2025-11-29 08:12:07.073 226310 DEBUG oslo_concurrency.processutils [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:12:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:12:07 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4285221341' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:12:07 np0005539564 nova_compute[226295]: 2025-11-29 08:12:07.487 226310 DEBUG oslo_concurrency.processutils [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:12:07 np0005539564 nova_compute[226295]: 2025-11-29 08:12:07.495 226310 DEBUG nova.compute.provider_tree [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:12:07 np0005539564 nova_compute[226295]: 2025-11-29 08:12:07.685 226310 DEBUG nova.scheduler.client.report [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:12:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:07.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:07 np0005539564 ceph-mgr[82125]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2945860420
Nov 29 03:12:08 np0005539564 nova_compute[226295]: 2025-11-29 08:12:08.031 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403913.0302563, 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:12:08 np0005539564 nova_compute[226295]: 2025-11-29 08:12:08.032 226310 INFO nova.compute.manager [-] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] VM Stopped (Lifecycle Event)
Nov 29 03:12:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:08.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:08 np0005539564 nova_compute[226295]: 2025-11-29 08:12:08.109 226310 DEBUG oslo_concurrency.lockutils [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.294s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:12:08 np0005539564 nova_compute[226295]: 2025-11-29 08:12:08.110 226310 INFO nova.compute.manager [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Migrating
Nov 29 03:12:09 np0005539564 nova_compute[226295]: 2025-11-29 08:12:09.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:12:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:09.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:10 np0005539564 nova_compute[226295]: 2025-11-29 08:12:10.033 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:10.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:11.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:11 np0005539564 nova_compute[226295]: 2025-11-29 08:12:11.937 226310 DEBUG nova.compute.manager [None req-d0a357ae-40f4-4554-b8df-1b6337959973 - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:12:11 np0005539564 nova_compute[226295]: 2025-11-29 08:12:11.942 226310 DEBUG nova.compute.manager [None req-d0a357ae-40f4-4554-b8df-1b6337959973 - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: resize_prep, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:12:11 np0005539564 nova_compute[226295]: 2025-11-29 08:12:11.955 226310 DEBUG oslo_concurrency.lockutils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Acquiring lock "5575532e-65f8-4b29-bab0-a0f8e60d032c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:12:11 np0005539564 nova_compute[226295]: 2025-11-29 08:12:11.956 226310 DEBUG oslo_concurrency.lockutils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:12:11 np0005539564 nova_compute[226295]: 2025-11-29 08:12:11.994 226310 INFO nova.compute.manager [None req-d0a357ae-40f4-4554-b8df-1b6337959973 - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] During sync_power_state the instance has a pending task (resize_prep). Skip.
Nov 29 03:12:12 np0005539564 nova_compute[226295]: 2025-11-29 08:12:12.014 226310 DEBUG nova.compute.manager [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:12:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:12.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:12 np0005539564 nova_compute[226295]: 2025-11-29 08:12:12.070 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:12 np0005539564 nova_compute[226295]: 2025-11-29 08:12:12.077 226310 DEBUG oslo_concurrency.lockutils [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:12:12 np0005539564 nova_compute[226295]: 2025-11-29 08:12:12.077 226310 DEBUG oslo_concurrency.lockutils [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquired lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:12:12 np0005539564 nova_compute[226295]: 2025-11-29 08:12:12.078 226310 DEBUG nova.network.neutron [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:12:12 np0005539564 nova_compute[226295]: 2025-11-29 08:12:12.222 226310 DEBUG oslo_concurrency.lockutils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:12:12 np0005539564 nova_compute[226295]: 2025-11-29 08:12:12.223 226310 DEBUG oslo_concurrency.lockutils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:12:12 np0005539564 nova_compute[226295]: 2025-11-29 08:12:12.231 226310 DEBUG nova.virt.hardware [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:12:12 np0005539564 nova_compute[226295]: 2025-11-29 08:12:12.231 226310 INFO nova.compute.claims [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Claim successful on node compute-1.ctlplane.example.com
Nov 29 03:12:12 np0005539564 nova_compute[226295]: 2025-11-29 08:12:12.571 226310 DEBUG oslo_concurrency.processutils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:12:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:12:13 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1886074226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:12:13 np0005539564 nova_compute[226295]: 2025-11-29 08:12:13.083 226310 DEBUG oslo_concurrency.processutils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:12:13 np0005539564 nova_compute[226295]: 2025-11-29 08:12:13.093 226310 DEBUG nova.compute.provider_tree [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:12:13 np0005539564 nova_compute[226295]: 2025-11-29 08:12:13.132 226310 DEBUG nova.scheduler.client.report [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:12:13 np0005539564 nova_compute[226295]: 2025-11-29 08:12:13.190 226310 DEBUG oslo_concurrency.lockutils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:12:13 np0005539564 nova_compute[226295]: 2025-11-29 08:12:13.191 226310 DEBUG nova.compute.manager [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:12:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:13.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:14.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:14 np0005539564 nova_compute[226295]: 2025-11-29 08:12:14.903 226310 DEBUG nova.compute.manager [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:12:14 np0005539564 nova_compute[226295]: 2025-11-29 08:12:14.903 226310 DEBUG nova.network.neutron [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:12:15 np0005539564 nova_compute[226295]: 2025-11-29 08:12:15.036 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:15 np0005539564 nova_compute[226295]: 2025-11-29 08:12:15.140 226310 INFO nova.virt.libvirt.driver [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:12:15 np0005539564 nova_compute[226295]: 2025-11-29 08:12:15.159 226310 DEBUG nova.compute.manager [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:12:15 np0005539564 nova_compute[226295]: 2025-11-29 08:12:15.239 226310 INFO nova.virt.block_device [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Booting with volume c5655890-57a1-4371-8ce4-c9179f1c49bb at /dev/vda
Nov 29 03:12:15 np0005539564 nova_compute[226295]: 2025-11-29 08:12:15.296 226310 DEBUG nova.policy [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9cb37d6d47ac46aaa19aebb2e5b21658', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '146c65131f5b423287d348b351399c4e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:12:15 np0005539564 nova_compute[226295]: 2025-11-29 08:12:15.582 226310 DEBUG os_brick.utils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 03:12:15 np0005539564 nova_compute[226295]: 2025-11-29 08:12:15.584 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:12:15 np0005539564 nova_compute[226295]: 2025-11-29 08:12:15.603 231810 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:12:15 np0005539564 nova_compute[226295]: 2025-11-29 08:12:15.603 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[3331259e-ecfc-4b80-a7af-9ae4e85c0d89]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:12:15 np0005539564 nova_compute[226295]: 2025-11-29 08:12:15.605 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:12:15 np0005539564 nova_compute[226295]: 2025-11-29 08:12:15.620 231810 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:12:15 np0005539564 nova_compute[226295]: 2025-11-29 08:12:15.620 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[0b603e75-caaa-4f99-8cef-2c2ba8f802b5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d3b4384eec44', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:12:15 np0005539564 nova_compute[226295]: 2025-11-29 08:12:15.623 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:12:15 np0005539564 nova_compute[226295]: 2025-11-29 08:12:15.640 231810 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:12:15 np0005539564 nova_compute[226295]: 2025-11-29 08:12:15.641 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[386a104d-1415-4578-88c7-85c86dc03896]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:12:15 np0005539564 nova_compute[226295]: 2025-11-29 08:12:15.643 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e8cc4d-fc87-49ce-abce-e84771b8988c]: (4, '2e858761-3292-4a17-b38f-a169c3064289') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:12:15 np0005539564 nova_compute[226295]: 2025-11-29 08:12:15.643 226310 DEBUG oslo_concurrency.processutils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:12:15 np0005539564 nova_compute[226295]: 2025-11-29 08:12:15.695 226310 DEBUG oslo_concurrency.processutils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] CMD "nvme version" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:12:15 np0005539564 nova_compute[226295]: 2025-11-29 08:12:15.698 226310 DEBUG os_brick.initiator.connectors.lightos [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 03:12:15 np0005539564 nova_compute[226295]: 2025-11-29 08:12:15.699 226310 DEBUG os_brick.initiator.connectors.lightos [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 03:12:15 np0005539564 nova_compute[226295]: 2025-11-29 08:12:15.699 226310 DEBUG os_brick.initiator.connectors.lightos [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 03:12:15 np0005539564 nova_compute[226295]: 2025-11-29 08:12:15.699 226310 DEBUG os_brick.utils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] <== get_connector_properties: return (116ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d3b4384eec44', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2e858761-3292-4a17-b38f-a169c3064289', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 03:12:15 np0005539564 nova_compute[226295]: 2025-11-29 08:12:15.700 226310 DEBUG nova.virt.block_device [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Updating existing volume attachment record: 81c4d29b-4c3b-43d9-b95f-1ef480e9bdc7 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 03:12:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:15.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:16.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:17 np0005539564 nova_compute[226295]: 2025-11-29 08:12:17.073 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:17.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:18.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:19 np0005539564 nova_compute[226295]: 2025-11-29 08:12:19.127 226310 DEBUG nova.network.neutron [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Updating instance_info_cache with network_info: [{"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:19 np0005539564 nova_compute[226295]: 2025-11-29 08:12:19.423 226310 DEBUG oslo_concurrency.lockutils [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Releasing lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:19 np0005539564 nova_compute[226295]: 2025-11-29 08:12:19.447 226310 DEBUG nova.compute.manager [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:12:19 np0005539564 nova_compute[226295]: 2025-11-29 08:12:19.449 226310 DEBUG nova.virt.libvirt.driver [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:12:19 np0005539564 nova_compute[226295]: 2025-11-29 08:12:19.450 226310 INFO nova.virt.libvirt.driver [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Creating image(s)#033[00m
Nov 29 03:12:19 np0005539564 nova_compute[226295]: 2025-11-29 08:12:19.451 226310 DEBUG nova.virt.libvirt.driver [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:12:19 np0005539564 nova_compute[226295]: 2025-11-29 08:12:19.451 226310 DEBUG nova.virt.libvirt.driver [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Ensure instance console log exists: /var/lib/nova/instances/5575532e-65f8-4b29-bab0-a0f8e60d032c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:12:19 np0005539564 nova_compute[226295]: 2025-11-29 08:12:19.452 226310 DEBUG oslo_concurrency.lockutils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:19 np0005539564 nova_compute[226295]: 2025-11-29 08:12:19.453 226310 DEBUG oslo_concurrency.lockutils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:19 np0005539564 nova_compute[226295]: 2025-11-29 08:12:19.453 226310 DEBUG oslo_concurrency.lockutils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:19 np0005539564 nova_compute[226295]: 2025-11-29 08:12:19.549 226310 DEBUG nova.virt.libvirt.driver [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Nov 29 03:12:19 np0005539564 nova_compute[226295]: 2025-11-29 08:12:19.556 226310 INFO nova.virt.libvirt.driver [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Instance already shutdown.#033[00m
Nov 29 03:12:19 np0005539564 nova_compute[226295]: 2025-11-29 08:12:19.563 226310 INFO nova.virt.libvirt.driver [-] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Instance destroyed successfully.#033[00m
Nov 29 03:12:19 np0005539564 nova_compute[226295]: 2025-11-29 08:12:19.564 226310 DEBUG nova.virt.libvirt.vif [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1179896728',display_name='tempest-ServerActionsTestOtherB-server-1179896728',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1179896728',id=103,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFR2A4rqHty1PxOihJGr6CLeieY2A6hQbQhWuRk7yYUwOYPvlgBFCeYpXPRg+EImok8PXcjU56J6yMvwfigxZeP4BreCe+MzD3uTdqP8PHZ6U4YNDwkQqqigObBB8nAoaw==',key_name='tempest-keypair-98169709',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:15Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-dcrww618',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:12:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-667031396-network", "vif_mac": "fa:16:3e:c7:69:0e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:12:19 np0005539564 nova_compute[226295]: 2025-11-29 08:12:19.564 226310 DEBUG nova.network.os_vif_util [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-667031396-network", "vif_mac": "fa:16:3e:c7:69:0e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:19 np0005539564 nova_compute[226295]: 2025-11-29 08:12:19.565 226310 DEBUG nova.network.os_vif_util [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:69:0e,bridge_name='br-int',has_traffic_filtering=True,id=b7329edb-beb8-414a-b8c3-33d223c32d22,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7329edb-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:19 np0005539564 nova_compute[226295]: 2025-11-29 08:12:19.565 226310 DEBUG os_vif [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:69:0e,bridge_name='br-int',has_traffic_filtering=True,id=b7329edb-beb8-414a-b8c3-33d223c32d22,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7329edb-be') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:12:19 np0005539564 nova_compute[226295]: 2025-11-29 08:12:19.568 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:19 np0005539564 nova_compute[226295]: 2025-11-29 08:12:19.569 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7329edb-be, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:19 np0005539564 nova_compute[226295]: 2025-11-29 08:12:19.570 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:19 np0005539564 nova_compute[226295]: 2025-11-29 08:12:19.571 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:19 np0005539564 nova_compute[226295]: 2025-11-29 08:12:19.574 226310 INFO os_vif [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:69:0e,bridge_name='br-int',has_traffic_filtering=True,id=b7329edb-beb8-414a-b8c3-33d223c32d22,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7329edb-be')#033[00m
Nov 29 03:12:19 np0005539564 nova_compute[226295]: 2025-11-29 08:12:19.580 226310 DEBUG nova.virt.libvirt.driver [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:12:19 np0005539564 nova_compute[226295]: 2025-11-29 08:12:19.581 226310 DEBUG nova.virt.libvirt.driver [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:12:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:19.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:20.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:21 np0005539564 nova_compute[226295]: 2025-11-29 08:12:21.174 226310 DEBUG nova.network.neutron [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Port b7329edb-beb8-414a-b8c3-33d223c32d22 binding to destination host compute-1.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171#033[00m
Nov 29 03:12:21 np0005539564 nova_compute[226295]: 2025-11-29 08:12:21.212 226310 DEBUG nova.network.neutron [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Successfully created port: 4fdcff4a-999b-4a95-bb5f-528102f9556f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:12:21 np0005539564 nova_compute[226295]: 2025-11-29 08:12:21.392 226310 DEBUG oslo_concurrency.lockutils [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:21 np0005539564 nova_compute[226295]: 2025-11-29 08:12:21.392 226310 DEBUG oslo_concurrency.lockutils [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:21 np0005539564 nova_compute[226295]: 2025-11-29 08:12:21.393 226310 DEBUG oslo_concurrency.lockutils [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:21.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:22.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:22 np0005539564 nova_compute[226295]: 2025-11-29 08:12:22.096 226310 DEBUG oslo_concurrency.lockutils [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:22 np0005539564 nova_compute[226295]: 2025-11-29 08:12:22.097 226310 DEBUG oslo_concurrency.lockutils [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquired lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:22 np0005539564 nova_compute[226295]: 2025-11-29 08:12:22.097 226310 DEBUG nova.network.neutron [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:12:22 np0005539564 nova_compute[226295]: 2025-11-29 08:12:22.126 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:23 np0005539564 nova_compute[226295]: 2025-11-29 08:12:23.506 226310 DEBUG nova.network.neutron [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Successfully updated port: 4fdcff4a-999b-4a95-bb5f-528102f9556f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:12:23 np0005539564 nova_compute[226295]: 2025-11-29 08:12:23.549 226310 DEBUG oslo_concurrency.lockutils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Acquiring lock "refresh_cache-5575532e-65f8-4b29-bab0-a0f8e60d032c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:23 np0005539564 nova_compute[226295]: 2025-11-29 08:12:23.550 226310 DEBUG oslo_concurrency.lockutils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Acquired lock "refresh_cache-5575532e-65f8-4b29-bab0-a0f8e60d032c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:23 np0005539564 nova_compute[226295]: 2025-11-29 08:12:23.550 226310 DEBUG nova.network.neutron [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:12:23 np0005539564 nova_compute[226295]: 2025-11-29 08:12:23.680 226310 DEBUG nova.compute.manager [req-609be627-bfe0-4eec-ad59-4adaf2fc65e7 req-77145f29-7eab-46ef-99a3-4423ad4a23bc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Received event network-changed-4fdcff4a-999b-4a95-bb5f-528102f9556f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:23 np0005539564 nova_compute[226295]: 2025-11-29 08:12:23.681 226310 DEBUG nova.compute.manager [req-609be627-bfe0-4eec-ad59-4adaf2fc65e7 req-77145f29-7eab-46ef-99a3-4423ad4a23bc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Refreshing instance network info cache due to event network-changed-4fdcff4a-999b-4a95-bb5f-528102f9556f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:12:23 np0005539564 nova_compute[226295]: 2025-11-29 08:12:23.681 226310 DEBUG oslo_concurrency.lockutils [req-609be627-bfe0-4eec-ad59-4adaf2fc65e7 req-77145f29-7eab-46ef-99a3-4423ad4a23bc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-5575532e-65f8-4b29-bab0-a0f8e60d032c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:23.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:24.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:24 np0005539564 nova_compute[226295]: 2025-11-29 08:12:24.126 226310 DEBUG nova.network.neutron [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:12:24 np0005539564 nova_compute[226295]: 2025-11-29 08:12:24.571 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:24 np0005539564 nova_compute[226295]: 2025-11-29 08:12:24.720 226310 DEBUG nova.network.neutron [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Updating instance_info_cache with network_info: [{"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:24 np0005539564 nova_compute[226295]: 2025-11-29 08:12:24.986 226310 DEBUG oslo_concurrency.lockutils [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Releasing lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:25 np0005539564 nova_compute[226295]: 2025-11-29 08:12:25.112 226310 DEBUG nova.virt.libvirt.driver [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Nov 29 03:12:25 np0005539564 nova_compute[226295]: 2025-11-29 08:12:25.113 226310 DEBUG nova.virt.libvirt.driver [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:12:25 np0005539564 nova_compute[226295]: 2025-11-29 08:12:25.114 226310 INFO nova.virt.libvirt.driver [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Creating image(s)#033[00m
Nov 29 03:12:25 np0005539564 nova_compute[226295]: 2025-11-29 08:12:25.138 226310 DEBUG nova.storage.rbd_utils [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] creating snapshot(nova-resize) on rbd image(8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:12:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:25.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e282 e282: 3 total, 3 up, 3 in
Nov 29 03:12:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:26.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.091 226310 DEBUG nova.objects.instance [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'trusted_certs' on Instance uuid 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.231 226310 DEBUG nova.virt.libvirt.driver [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.231 226310 DEBUG nova.virt.libvirt.driver [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Ensure instance console log exists: /var/lib/nova/instances/8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.232 226310 DEBUG oslo_concurrency.lockutils [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.232 226310 DEBUG oslo_concurrency.lockutils [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.232 226310 DEBUG oslo_concurrency.lockutils [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.235 226310 DEBUG nova.virt.libvirt.driver [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Start _get_guest_xml network_info=[{"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-667031396-network", "vif_mac": "fa:16:3e:c7:69:0e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.237 226310 DEBUG nova.network.neutron [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Updating instance_info_cache with network_info: [{"id": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "address": "fa:16:3e:5d:a7:ee", "network": {"id": "337f42f7-7833-4286-befc-f5fca120d50f", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1164364075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146c65131f5b423287d348b351399c4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fdcff4a-99", "ovs_interfaceid": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.242 226310 WARNING nova.virt.libvirt.driver [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.254 226310 DEBUG nova.virt.libvirt.host [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.255 226310 DEBUG nova.virt.libvirt.host [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.259 226310 DEBUG nova.virt.libvirt.host [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.259 226310 DEBUG nova.virt.libvirt.host [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.260 226310 DEBUG nova.virt.libvirt.driver [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.261 226310 DEBUG nova.virt.hardware [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a3833334-6e3e-4b1c-bf74-bdd1055a9e9b',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.261 226310 DEBUG nova.virt.hardware [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.262 226310 DEBUG nova.virt.hardware [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.262 226310 DEBUG nova.virt.hardware [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.262 226310 DEBUG nova.virt.hardware [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.262 226310 DEBUG nova.virt.hardware [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.263 226310 DEBUG nova.virt.hardware [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.263 226310 DEBUG nova.virt.hardware [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.263 226310 DEBUG nova.virt.hardware [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.263 226310 DEBUG nova.virt.hardware [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.264 226310 DEBUG nova.virt.hardware [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.264 226310 DEBUG nova.objects.instance [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.269 226310 DEBUG oslo_concurrency.lockutils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Releasing lock "refresh_cache-5575532e-65f8-4b29-bab0-a0f8e60d032c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.269 226310 DEBUG nova.compute.manager [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Instance network_info: |[{"id": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "address": "fa:16:3e:5d:a7:ee", "network": {"id": "337f42f7-7833-4286-befc-f5fca120d50f", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1164364075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146c65131f5b423287d348b351399c4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fdcff4a-99", "ovs_interfaceid": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.269 226310 DEBUG oslo_concurrency.lockutils [req-609be627-bfe0-4eec-ad59-4adaf2fc65e7 req-77145f29-7eab-46ef-99a3-4423ad4a23bc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-5575532e-65f8-4b29-bab0-a0f8e60d032c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.270 226310 DEBUG nova.network.neutron [req-609be627-bfe0-4eec-ad59-4adaf2fc65e7 req-77145f29-7eab-46ef-99a3-4423ad4a23bc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Refreshing network info cache for port 4fdcff4a-999b-4a95-bb5f-528102f9556f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.273 226310 DEBUG nova.virt.libvirt.driver [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Start _get_guest_xml network_info=[{"id": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "address": "fa:16:3e:5d:a7:ee", "network": {"id": "337f42f7-7833-4286-befc-f5fca120d50f", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1164364075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146c65131f5b423287d348b351399c4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fdcff4a-99", "ovs_interfaceid": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-c5655890-57a1-4371-8ce4-c9179f1c49bb', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'c5655890-57a1-4371-8ce4-c9179f1c49bb', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '5575532e-65f8-4b29-bab0-a0f8e60d032c', 'attached_at': '', 'detached_at': '', 'volume_id': 'c5655890-57a1-4371-8ce4-c9179f1c49bb', 'serial': 'c5655890-57a1-4371-8ce4-c9179f1c49bb'}, 'guest_format': None, 'delete_on_termination': True, 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': '81c4d29b-4c3b-43d9-b95f-1ef480e9bdc7', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.278 226310 WARNING nova.virt.libvirt.driver [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.282 226310 DEBUG nova.virt.libvirt.host [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.283 226310 DEBUG nova.virt.libvirt.host [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.286 226310 DEBUG nova.virt.libvirt.host [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.286 226310 DEBUG nova.virt.libvirt.host [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.287 226310 DEBUG nova.virt.libvirt.driver [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.287 226310 DEBUG nova.virt.hardware [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.287 226310 DEBUG nova.virt.hardware [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.288 226310 DEBUG nova.virt.hardware [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.288 226310 DEBUG nova.virt.hardware [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.288 226310 DEBUG nova.virt.hardware [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.288 226310 DEBUG nova.virt.hardware [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.288 226310 DEBUG nova.virt.hardware [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.289 226310 DEBUG nova.virt.hardware [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.289 226310 DEBUG nova.virt.hardware [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.289 226310 DEBUG nova.virt.hardware [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.289 226310 DEBUG nova.virt.hardware [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.309 226310 DEBUG nova.storage.rbd_utils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] rbd image 5575532e-65f8-4b29-bab0-a0f8e60d032c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.313 226310 DEBUG oslo_concurrency.processutils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.348 226310 DEBUG oslo_concurrency.processutils [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:12:26 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1652710344' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.783 226310 DEBUG oslo_concurrency.processutils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:12:26 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1662404296' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.810 226310 DEBUG oslo_concurrency.processutils [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.853 226310 DEBUG nova.virt.libvirt.vif [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:12:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-1111293699',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1111293699',id=108,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBjacxFQsKdMM4UDJE5tqVty/GRJtgDwjO80+Cb748HjTaOefbBqANvkqMMVhv8OZRL0vzsrbYfDHv6t2rc90ONK+EYpM6HO7fjiT30tNIEWFgoPJhxB+XUGt8iA5muhkg==',key_name='tempest-keypair-1424948510',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='146c65131f5b423287d348b351399c4e',ramdisk_id='',reservation_id='r-ivfz3olx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_projec
t_name='tempest-ServerActionsV293TestJSON-1695306825',owner_user_name='tempest-ServerActionsV293TestJSON-1695306825-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9cb37d6d47ac46aaa19aebb2e5b21658',uuid=5575532e-65f8-4b29-bab0-a0f8e60d032c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "address": "fa:16:3e:5d:a7:ee", "network": {"id": "337f42f7-7833-4286-befc-f5fca120d50f", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1164364075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146c65131f5b423287d348b351399c4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fdcff4a-99", "ovs_interfaceid": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.854 226310 DEBUG nova.network.os_vif_util [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Converting VIF {"id": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "address": "fa:16:3e:5d:a7:ee", "network": {"id": "337f42f7-7833-4286-befc-f5fca120d50f", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1164364075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146c65131f5b423287d348b351399c4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fdcff4a-99", "ovs_interfaceid": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.855 226310 DEBUG nova.network.os_vif_util [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:a7:ee,bridge_name='br-int',has_traffic_filtering=True,id=4fdcff4a-999b-4a95-bb5f-528102f9556f,network=Network(337f42f7-7833-4286-befc-f5fca120d50f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fdcff4a-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.856 226310 DEBUG nova.objects.instance [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lazy-loading 'pci_devices' on Instance uuid 5575532e-65f8-4b29-bab0-a0f8e60d032c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.862 226310 DEBUG oslo_concurrency.processutils [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.945 226310 DEBUG nova.virt.libvirt.driver [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:12:26 np0005539564 nova_compute[226295]:  <uuid>5575532e-65f8-4b29-bab0-a0f8e60d032c</uuid>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:  <name>instance-0000006c</name>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerActionsV293TestJSON-server-1111293699</nova:name>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:12:26</nova:creationTime>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:12:26 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:        <nova:user uuid="9cb37d6d47ac46aaa19aebb2e5b21658">tempest-ServerActionsV293TestJSON-1695306825-project-member</nova:user>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:        <nova:project uuid="146c65131f5b423287d348b351399c4e">tempest-ServerActionsV293TestJSON-1695306825</nova:project>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:        <nova:port uuid="4fdcff4a-999b-4a95-bb5f-528102f9556f">
Nov 29 03:12:26 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <entry name="serial">5575532e-65f8-4b29-bab0-a0f8e60d032c</entry>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <entry name="uuid">5575532e-65f8-4b29-bab0-a0f8e60d032c</entry>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/5575532e-65f8-4b29-bab0-a0f8e60d032c_disk.config">
Nov 29 03:12:26 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:12:26 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="volumes/volume-c5655890-57a1-4371-8ce4-c9179f1c49bb">
Nov 29 03:12:26 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:12:26 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <serial>c5655890-57a1-4371-8ce4-c9179f1c49bb</serial>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:5d:a7:ee"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <target dev="tap4fdcff4a-99"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/5575532e-65f8-4b29-bab0-a0f8e60d032c/console.log" append="off"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:12:26 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:12:26 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:12:26 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:12:26 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.947 226310 DEBUG nova.compute.manager [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Preparing to wait for external event network-vif-plugged-4fdcff4a-999b-4a95-bb5f-528102f9556f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.947 226310 DEBUG oslo_concurrency.lockutils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Acquiring lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.947 226310 DEBUG oslo_concurrency.lockutils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.948 226310 DEBUG oslo_concurrency.lockutils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.949 226310 DEBUG nova.virt.libvirt.vif [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:12:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-1111293699',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1111293699',id=108,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBjacxFQsKdMM4UDJE5tqVty/GRJtgDwjO80+Cb748HjTaOefbBqANvkqMMVhv8OZRL0vzsrbYfDHv6t2rc90ONK+EYpM6HO7fjiT30tNIEWFgoPJhxB+XUGt8iA5muhkg==',key_name='tempest-keypair-1424948510',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='146c65131f5b423287d348b351399c4e',ramdisk_id='',reservation_id='r-ivfz3olx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',ow
ner_project_name='tempest-ServerActionsV293TestJSON-1695306825',owner_user_name='tempest-ServerActionsV293TestJSON-1695306825-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9cb37d6d47ac46aaa19aebb2e5b21658',uuid=5575532e-65f8-4b29-bab0-a0f8e60d032c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "address": "fa:16:3e:5d:a7:ee", "network": {"id": "337f42f7-7833-4286-befc-f5fca120d50f", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1164364075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146c65131f5b423287d348b351399c4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fdcff4a-99", "ovs_interfaceid": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.949 226310 DEBUG nova.network.os_vif_util [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Converting VIF {"id": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "address": "fa:16:3e:5d:a7:ee", "network": {"id": "337f42f7-7833-4286-befc-f5fca120d50f", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1164364075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146c65131f5b423287d348b351399c4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fdcff4a-99", "ovs_interfaceid": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.950 226310 DEBUG nova.network.os_vif_util [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:a7:ee,bridge_name='br-int',has_traffic_filtering=True,id=4fdcff4a-999b-4a95-bb5f-528102f9556f,network=Network(337f42f7-7833-4286-befc-f5fca120d50f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fdcff4a-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.950 226310 DEBUG os_vif [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:a7:ee,bridge_name='br-int',has_traffic_filtering=True,id=4fdcff4a-999b-4a95-bb5f-528102f9556f,network=Network(337f42f7-7833-4286-befc-f5fca120d50f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fdcff4a-99') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.951 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.951 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.952 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.955 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.955 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fdcff4a-99, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.956 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4fdcff4a-99, col_values=(('external_ids', {'iface-id': '4fdcff4a-999b-4a95-bb5f-528102f9556f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:a7:ee', 'vm-uuid': '5575532e-65f8-4b29-bab0-a0f8e60d032c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.958 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:26 np0005539564 NetworkManager[48997]: <info>  [1764403946.9597] manager: (tap4fdcff4a-99): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/178)
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.962 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.966 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:26 np0005539564 nova_compute[226295]: 2025-11-29 08:12:26.969 226310 INFO os_vif [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:a7:ee,bridge_name='br-int',has_traffic_filtering=True,id=4fdcff4a-999b-4a95-bb5f-528102f9556f,network=Network(337f42f7-7833-4286-befc-f5fca120d50f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fdcff4a-99')#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.054 226310 DEBUG nova.virt.libvirt.driver [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.055 226310 DEBUG nova.virt.libvirt.driver [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.056 226310 DEBUG nova.virt.libvirt.driver [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] No VIF found with MAC fa:16:3e:5d:a7:ee, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.057 226310 INFO nova.virt.libvirt.driver [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Using config drive#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.094 226310 DEBUG nova.storage.rbd_utils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] rbd image 5575532e-65f8-4b29-bab0-a0f8e60d032c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.131 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:12:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3458217817' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.361 226310 DEBUG oslo_concurrency.processutils [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.363 226310 DEBUG nova.virt.libvirt.vif [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1179896728',display_name='tempest-ServerActionsTestOtherB-server-1179896728',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1179896728',id=103,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFR2A4rqHty1PxOihJGr6CLeieY2A6hQbQhWuRk7yYUwOYPvlgBFCeYpXPRg+EImok8PXcjU56J6yMvwfigxZeP4BreCe+MzD3uTdqP8PHZ6U4YNDwkQqqigObBB8nAoaw==',key_name='tempest-keypair-98169709',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:15Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-dcrww618',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-667031396-network", "vif_mac": "fa:16:3e:c7:69:0e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.363 226310 DEBUG nova.network.os_vif_util [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-667031396-network", "vif_mac": "fa:16:3e:c7:69:0e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.364 226310 DEBUG nova.network.os_vif_util [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:69:0e,bridge_name='br-int',has_traffic_filtering=True,id=b7329edb-beb8-414a-b8c3-33d223c32d22,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7329edb-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.367 226310 DEBUG nova.virt.libvirt.driver [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:12:27 np0005539564 nova_compute[226295]:  <uuid>8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78</uuid>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:  <name>instance-00000067</name>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:  <memory>196608</memory>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerActionsTestOtherB-server-1179896728</nova:name>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:12:26</nova:creationTime>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.micro">
Nov 29 03:12:27 np0005539564 nova_compute[226295]:        <nova:memory>192</nova:memory>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:        <nova:user uuid="ca93c8e3eac142c0aa6b61807727dea2">tempest-ServerActionsTestOtherB-325732369-project-member</nova:user>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:        <nova:project uuid="ba867fac17034bb28fe2cdb0fff3af2b">tempest-ServerActionsTestOtherB-325732369</nova:project>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:        <nova:port uuid="b7329edb-beb8-414a-b8c3-33d223c32d22">
Nov 29 03:12:27 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <entry name="serial">8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78</entry>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <entry name="uuid">8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78</entry>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk">
Nov 29 03:12:27 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:12:27 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk.config">
Nov 29 03:12:27 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:12:27 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:c7:69:0e"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <target dev="tapb7329edb-be"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78/console.log" append="off"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:12:27 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:12:27 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:12:27 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:12:27 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.368 226310 DEBUG nova.virt.libvirt.vif [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1179896728',display_name='tempest-ServerActionsTestOtherB-server-1179896728',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1179896728',id=103,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFR2A4rqHty1PxOihJGr6CLeieY2A6hQbQhWuRk7yYUwOYPvlgBFCeYpXPRg+EImok8PXcjU56J6yMvwfigxZeP4BreCe+MzD3uTdqP8PHZ6U4YNDwkQqqigObBB8nAoaw==',key_name='tempest-keypair-98169709',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:15Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-dcrww618',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-667031396-network", "vif_mac": "fa:16:3e:c7:69:0e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.368 226310 DEBUG nova.network.os_vif_util [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-667031396-network", "vif_mac": "fa:16:3e:c7:69:0e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.368 226310 DEBUG nova.network.os_vif_util [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:69:0e,bridge_name='br-int',has_traffic_filtering=True,id=b7329edb-beb8-414a-b8c3-33d223c32d22,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7329edb-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.369 226310 DEBUG os_vif [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:69:0e,bridge_name='br-int',has_traffic_filtering=True,id=b7329edb-beb8-414a-b8c3-33d223c32d22,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7329edb-be') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.369 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.370 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.370 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.373 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.373 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7329edb-be, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.374 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb7329edb-be, col_values=(('external_ids', {'iface-id': 'b7329edb-beb8-414a-b8c3-33d223c32d22', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:69:0e', 'vm-uuid': '8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.376 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:27 np0005539564 NetworkManager[48997]: <info>  [1764403947.3776] manager: (tapb7329edb-be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/179)
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.379 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.386 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.387 226310 INFO os_vif [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:69:0e,bridge_name='br-int',has_traffic_filtering=True,id=b7329edb-beb8-414a-b8c3-33d223c32d22,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7329edb-be')#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.481 226310 DEBUG nova.virt.libvirt.driver [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.482 226310 DEBUG nova.virt.libvirt.driver [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.482 226310 DEBUG nova.virt.libvirt.driver [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No VIF found with MAC fa:16:3e:c7:69:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.483 226310 INFO nova.virt.libvirt.driver [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Using config drive#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.539 226310 DEBUG nova.compute.manager [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.539 226310 DEBUG nova.virt.libvirt.driver [None req-19d74d52-6883-417e-8bb5-112f54515c64 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.828 226310 INFO nova.virt.libvirt.driver [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Creating config drive at /var/lib/nova/instances/5575532e-65f8-4b29-bab0-a0f8e60d032c/disk.config#033[00m
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.837 226310 DEBUG oslo_concurrency.processutils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5575532e-65f8-4b29-bab0-a0f8e60d032c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphcbtqsri execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:27.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:27 np0005539564 nova_compute[226295]: 2025-11-29 08:12:27.977 226310 DEBUG oslo_concurrency.processutils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5575532e-65f8-4b29-bab0-a0f8e60d032c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphcbtqsri" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:28 np0005539564 nova_compute[226295]: 2025-11-29 08:12:28.011 226310 DEBUG nova.storage.rbd_utils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] rbd image 5575532e-65f8-4b29-bab0-a0f8e60d032c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:28 np0005539564 nova_compute[226295]: 2025-11-29 08:12:28.015 226310 DEBUG oslo_concurrency.processutils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5575532e-65f8-4b29-bab0-a0f8e60d032c/disk.config 5575532e-65f8-4b29-bab0-a0f8e60d032c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:28.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:28 np0005539564 nova_compute[226295]: 2025-11-29 08:12:28.237 226310 DEBUG oslo_concurrency.processutils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5575532e-65f8-4b29-bab0-a0f8e60d032c/disk.config 5575532e-65f8-4b29-bab0-a0f8e60d032c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:28 np0005539564 nova_compute[226295]: 2025-11-29 08:12:28.239 226310 INFO nova.virt.libvirt.driver [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Deleting local config drive /var/lib/nova/instances/5575532e-65f8-4b29-bab0-a0f8e60d032c/disk.config because it was imported into RBD.#033[00m
Nov 29 03:12:28 np0005539564 NetworkManager[48997]: <info>  [1764403948.3213] manager: (tap4fdcff4a-99): new Tun device (/org/freedesktop/NetworkManager/Devices/180)
Nov 29 03:12:28 np0005539564 kernel: tap4fdcff4a-99: entered promiscuous mode
Nov 29 03:12:28 np0005539564 nova_compute[226295]: 2025-11-29 08:12:28.328 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:28 np0005539564 ovn_controller[130591]: 2025-11-29T08:12:28Z|00349|binding|INFO|Claiming lport 4fdcff4a-999b-4a95-bb5f-528102f9556f for this chassis.
Nov 29 03:12:28 np0005539564 ovn_controller[130591]: 2025-11-29T08:12:28Z|00350|binding|INFO|4fdcff4a-999b-4a95-bb5f-528102f9556f: Claiming fa:16:3e:5d:a7:ee 10.100.0.7
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.340 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:a7:ee 10.100.0.7'], port_security=['fa:16:3e:5d:a7:ee 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5575532e-65f8-4b29-bab0-a0f8e60d032c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-337f42f7-7833-4286-befc-f5fca120d50f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '146c65131f5b423287d348b351399c4e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e83feb7e-e837-4a27-87d5-f25ed404c193', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f225a6f7-d086-4f61-9a6c-4cc9bd46d793, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=4fdcff4a-999b-4a95-bb5f-528102f9556f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.341 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 4fdcff4a-999b-4a95-bb5f-528102f9556f in datapath 337f42f7-7833-4286-befc-f5fca120d50f bound to our chassis#033[00m
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.342 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 337f42f7-7833-4286-befc-f5fca120d50f#033[00m
Nov 29 03:12:28 np0005539564 ovn_controller[130591]: 2025-11-29T08:12:28Z|00351|binding|INFO|Setting lport 4fdcff4a-999b-4a95-bb5f-528102f9556f ovn-installed in OVS
Nov 29 03:12:28 np0005539564 ovn_controller[130591]: 2025-11-29T08:12:28Z|00352|binding|INFO|Setting lport 4fdcff4a-999b-4a95-bb5f-528102f9556f up in Southbound
Nov 29 03:12:28 np0005539564 nova_compute[226295]: 2025-11-29 08:12:28.349 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:28 np0005539564 nova_compute[226295]: 2025-11-29 08:12:28.353 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.355 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7b578717-8c4f-4a5b-93da-4dcb36b1ab0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.355 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap337f42f7-71 in ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.357 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap337f42f7-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.358 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b3fd37e9-cdfb-4f74-9506-e580d6bbf78a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.358 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7bdf03d9-119b-4a9f-afa1-2ab13fb8b85b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:28 np0005539564 systemd-machined[190128]: New machine qemu-43-instance-0000006c.
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.371 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[bd7871c0-fe7c-43c0-8414-1109b569879f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:28 np0005539564 systemd[1]: Started Virtual Machine qemu-43-instance-0000006c.
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.396 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[32118def-ba48-4b01-a598-c4323faafba9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:28 np0005539564 systemd-udevd[264128]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:12:28 np0005539564 NetworkManager[48997]: <info>  [1764403948.4220] device (tap4fdcff4a-99): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:12:28 np0005539564 NetworkManager[48997]: <info>  [1764403948.4229] device (tap4fdcff4a-99): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.438 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[cbed4051-f153-4f20-b144-b7fbc3acf112]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:28 np0005539564 systemd-udevd[264135]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:12:28 np0005539564 NetworkManager[48997]: <info>  [1764403948.4462] manager: (tap337f42f7-70): new Veth device (/org/freedesktop/NetworkManager/Devices/181)
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.445 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[967ee581-702b-41df-9444-62a4be91d7f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.482 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[213029b2-1e64-490d-a8e4-0b2b9f02da9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.486 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[fd7470fc-a54b-48ff-9374-b74d25993e9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:28 np0005539564 NetworkManager[48997]: <info>  [1764403948.5193] device (tap337f42f7-70): carrier: link connected
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.529 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[f1ce4d78-2047-469e-b8c3-60378a0fd820]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.545 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[48d710c1-6671-42f8-8b44-14bb88646c0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap337f42f7-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:dd:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 110], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692320, 'reachable_time': 31869, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264165, 'error': None, 'target': 'ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.559 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0ecc75a5-119d-4b53-8f7f-f4cea639c3d6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:dd97'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 692320, 'tstamp': 692320}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264166, 'error': None, 'target': 'ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.572 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f76bb1e6-8969-4b2d-ae7e-14c6876304ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap337f42f7-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:dd:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 110], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692320, 'reachable_time': 31869, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264167, 'error': None, 'target': 'ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.610 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f20e12a8-beab-4422-8891-e3255c1ad3f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.684 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bf563dff-97f1-4e02-bc61-de7de7b27142]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.686 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap337f42f7-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.687 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.687 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap337f42f7-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:28 np0005539564 nova_compute[226295]: 2025-11-29 08:12:28.730 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:28 np0005539564 NetworkManager[48997]: <info>  [1764403948.7325] manager: (tap337f42f7-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/182)
Nov 29 03:12:28 np0005539564 kernel: tap337f42f7-70: entered promiscuous mode
Nov 29 03:12:28 np0005539564 nova_compute[226295]: 2025-11-29 08:12:28.737 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.738 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap337f42f7-70, col_values=(('external_ids', {'iface-id': 'a440551b-4851-4b07-bfbe-398f9cdcd887'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:28 np0005539564 nova_compute[226295]: 2025-11-29 08:12:28.740 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:28 np0005539564 ovn_controller[130591]: 2025-11-29T08:12:28Z|00353|binding|INFO|Releasing lport a440551b-4851-4b07-bfbe-398f9cdcd887 from this chassis (sb_readonly=0)
Nov 29 03:12:28 np0005539564 nova_compute[226295]: 2025-11-29 08:12:28.758 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:28 np0005539564 nova_compute[226295]: 2025-11-29 08:12:28.768 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.769 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/337f42f7-7833-4286-befc-f5fca120d50f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/337f42f7-7833-4286-befc-f5fca120d50f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.770 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a06b4639-7483-478e-8f1b-cf6e25dca4f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.771 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-337f42f7-7833-4286-befc-f5fca120d50f
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/337f42f7-7833-4286-befc-f5fca120d50f.pid.haproxy
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 337f42f7-7833-4286-befc-f5fca120d50f
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:12:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:28.773 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f', 'env', 'PROCESS_TAG=haproxy-337f42f7-7833-4286-befc-f5fca120d50f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/337f42f7-7833-4286-befc-f5fca120d50f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:12:28 np0005539564 nova_compute[226295]: 2025-11-29 08:12:28.828 226310 DEBUG nova.network.neutron [req-609be627-bfe0-4eec-ad59-4adaf2fc65e7 req-77145f29-7eab-46ef-99a3-4423ad4a23bc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Updated VIF entry in instance network info cache for port 4fdcff4a-999b-4a95-bb5f-528102f9556f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:12:28 np0005539564 nova_compute[226295]: 2025-11-29 08:12:28.828 226310 DEBUG nova.network.neutron [req-609be627-bfe0-4eec-ad59-4adaf2fc65e7 req-77145f29-7eab-46ef-99a3-4423ad4a23bc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Updating instance_info_cache with network_info: [{"id": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "address": "fa:16:3e:5d:a7:ee", "network": {"id": "337f42f7-7833-4286-befc-f5fca120d50f", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1164364075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146c65131f5b423287d348b351399c4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fdcff4a-99", "ovs_interfaceid": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:28 np0005539564 nova_compute[226295]: 2025-11-29 08:12:28.865 226310 DEBUG oslo_concurrency.lockutils [req-609be627-bfe0-4eec-ad59-4adaf2fc65e7 req-77145f29-7eab-46ef-99a3-4423ad4a23bc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-5575532e-65f8-4b29-bab0-a0f8e60d032c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:28 np0005539564 nova_compute[226295]: 2025-11-29 08:12:28.891 226310 DEBUG nova.compute.manager [req-29828145-8c5d-4444-b415-1d997dbb6f47 req-6b61f806-4dfd-4092-adb2-2b700188ec05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Received event network-vif-plugged-4fdcff4a-999b-4a95-bb5f-528102f9556f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:28 np0005539564 nova_compute[226295]: 2025-11-29 08:12:28.892 226310 DEBUG oslo_concurrency.lockutils [req-29828145-8c5d-4444-b415-1d997dbb6f47 req-6b61f806-4dfd-4092-adb2-2b700188ec05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:28 np0005539564 nova_compute[226295]: 2025-11-29 08:12:28.892 226310 DEBUG oslo_concurrency.lockutils [req-29828145-8c5d-4444-b415-1d997dbb6f47 req-6b61f806-4dfd-4092-adb2-2b700188ec05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:28 np0005539564 nova_compute[226295]: 2025-11-29 08:12:28.892 226310 DEBUG oslo_concurrency.lockutils [req-29828145-8c5d-4444-b415-1d997dbb6f47 req-6b61f806-4dfd-4092-adb2-2b700188ec05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:28 np0005539564 nova_compute[226295]: 2025-11-29 08:12:28.893 226310 DEBUG nova.compute.manager [req-29828145-8c5d-4444-b415-1d997dbb6f47 req-6b61f806-4dfd-4092-adb2-2b700188ec05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Processing event network-vif-plugged-4fdcff4a-999b-4a95-bb5f-528102f9556f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:12:28 np0005539564 nova_compute[226295]: 2025-11-29 08:12:28.989 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403948.9886699, 5575532e-65f8-4b29-bab0-a0f8e60d032c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:28 np0005539564 nova_compute[226295]: 2025-11-29 08:12:28.990 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] VM Started (Lifecycle Event)#033[00m
Nov 29 03:12:28 np0005539564 nova_compute[226295]: 2025-11-29 08:12:28.994 226310 DEBUG nova.compute.manager [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:12:29 np0005539564 nova_compute[226295]: 2025-11-29 08:12:29.000 226310 DEBUG nova.virt.libvirt.driver [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:12:29 np0005539564 nova_compute[226295]: 2025-11-29 08:12:29.005 226310 INFO nova.virt.libvirt.driver [-] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Instance spawned successfully.#033[00m
Nov 29 03:12:29 np0005539564 nova_compute[226295]: 2025-11-29 08:12:29.006 226310 DEBUG nova.virt.libvirt.driver [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:12:29 np0005539564 nova_compute[226295]: 2025-11-29 08:12:29.010 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:29 np0005539564 nova_compute[226295]: 2025-11-29 08:12:29.016 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:12:29 np0005539564 nova_compute[226295]: 2025-11-29 08:12:29.052 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:12:29 np0005539564 nova_compute[226295]: 2025-11-29 08:12:29.053 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403948.9888628, 5575532e-65f8-4b29-bab0-a0f8e60d032c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:29 np0005539564 nova_compute[226295]: 2025-11-29 08:12:29.055 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:12:29 np0005539564 nova_compute[226295]: 2025-11-29 08:12:29.061 226310 DEBUG nova.virt.libvirt.driver [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:29 np0005539564 nova_compute[226295]: 2025-11-29 08:12:29.062 226310 DEBUG nova.virt.libvirt.driver [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:29 np0005539564 nova_compute[226295]: 2025-11-29 08:12:29.063 226310 DEBUG nova.virt.libvirt.driver [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:29 np0005539564 nova_compute[226295]: 2025-11-29 08:12:29.064 226310 DEBUG nova.virt.libvirt.driver [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:29 np0005539564 nova_compute[226295]: 2025-11-29 08:12:29.065 226310 DEBUG nova.virt.libvirt.driver [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:29 np0005539564 nova_compute[226295]: 2025-11-29 08:12:29.066 226310 DEBUG nova.virt.libvirt.driver [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:29 np0005539564 nova_compute[226295]: 2025-11-29 08:12:29.079 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:29 np0005539564 nova_compute[226295]: 2025-11-29 08:12:29.086 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403948.99881, 5575532e-65f8-4b29-bab0-a0f8e60d032c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:29 np0005539564 nova_compute[226295]: 2025-11-29 08:12:29.086 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:12:29 np0005539564 nova_compute[226295]: 2025-11-29 08:12:29.119 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:29 np0005539564 nova_compute[226295]: 2025-11-29 08:12:29.126 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:12:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:12:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:12:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:12:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:12:29 np0005539564 nova_compute[226295]: 2025-11-29 08:12:29.140 226310 INFO nova.compute.manager [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Took 9.69 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:12:29 np0005539564 nova_compute[226295]: 2025-11-29 08:12:29.140 226310 DEBUG nova.compute.manager [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:29 np0005539564 nova_compute[226295]: 2025-11-29 08:12:29.182 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:12:29 np0005539564 nova_compute[226295]: 2025-11-29 08:12:29.225 226310 INFO nova.compute.manager [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Took 17.03 seconds to build instance.#033[00m
Nov 29 03:12:29 np0005539564 nova_compute[226295]: 2025-11-29 08:12:29.259 226310 DEBUG oslo_concurrency.lockutils [None req-2b6f9a78-6a18-49dd-8193-d8be9b51fc20 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.303s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:29 np0005539564 podman[264242]: 2025-11-29 08:12:29.283207438 +0000 UTC m=+0.093889403 container create a3d4bc5b8847f3bac657dbd6a5cae8db3c3010e9ba1ca924dbb8b3e10f12d079 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:12:29 np0005539564 systemd[1]: Started libpod-conmon-a3d4bc5b8847f3bac657dbd6a5cae8db3c3010e9ba1ca924dbb8b3e10f12d079.scope.
Nov 29 03:12:29 np0005539564 podman[264242]: 2025-11-29 08:12:29.238121753 +0000 UTC m=+0.048803768 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:12:29 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:12:29 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88352b610d413f9809aceca517544159ffc1275440e9bbb169f97d82c7a751a2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:29 np0005539564 podman[264242]: 2025-11-29 08:12:29.377120012 +0000 UTC m=+0.187801967 container init a3d4bc5b8847f3bac657dbd6a5cae8db3c3010e9ba1ca924dbb8b3e10f12d079 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:12:29 np0005539564 podman[264242]: 2025-11-29 08:12:29.383965536 +0000 UTC m=+0.194647461 container start a3d4bc5b8847f3bac657dbd6a5cae8db3c3010e9ba1ca924dbb8b3e10f12d079 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:12:29 np0005539564 neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f[264265]: [NOTICE]   (264301) : New worker (264313) forked
Nov 29 03:12:29 np0005539564 neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f[264265]: [NOTICE]   (264301) : Loading success.
Nov 29 03:12:29 np0005539564 podman[264257]: 2025-11-29 08:12:29.406904895 +0000 UTC m=+0.076101554 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:12:29 np0005539564 podman[264256]: 2025-11-29 08:12:29.423734049 +0000 UTC m=+0.091059257 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 03:12:29 np0005539564 podman[264255]: 2025-11-29 08:12:29.43783378 +0000 UTC m=+0.116088203 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:12:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:29.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:29 np0005539564 nova_compute[226295]: 2025-11-29 08:12:29.896 226310 DEBUG oslo_concurrency.lockutils [None req-31e5aed1-6a48-4c78-8368-3dccea772efc ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:29 np0005539564 nova_compute[226295]: 2025-11-29 08:12:29.897 226310 DEBUG oslo_concurrency.lockutils [None req-31e5aed1-6a48-4c78-8368-3dccea772efc ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:29 np0005539564 nova_compute[226295]: 2025-11-29 08:12:29.897 226310 DEBUG nova.compute.manager [None req-31e5aed1-6a48-4c78-8368-3dccea772efc ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Going to confirm migration 14 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Nov 29 03:12:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:30.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:30 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:12:30 np0005539564 nova_compute[226295]: 2025-11-29 08:12:30.481 226310 DEBUG oslo_concurrency.lockutils [None req-31e5aed1-6a48-4c78-8368-3dccea772efc ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:30 np0005539564 nova_compute[226295]: 2025-11-29 08:12:30.481 226310 DEBUG oslo_concurrency.lockutils [None req-31e5aed1-6a48-4c78-8368-3dccea772efc ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquired lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:30 np0005539564 nova_compute[226295]: 2025-11-29 08:12:30.482 226310 DEBUG nova.network.neutron [None req-31e5aed1-6a48-4c78-8368-3dccea772efc ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:12:30 np0005539564 nova_compute[226295]: 2025-11-29 08:12:30.482 226310 DEBUG nova.objects.instance [None req-31e5aed1-6a48-4c78-8368-3dccea772efc ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'info_cache' on Instance uuid 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:31 np0005539564 nova_compute[226295]: 2025-11-29 08:12:31.126 226310 DEBUG nova.compute.manager [req-aea504dd-01d5-4cad-8253-47927c12f23a req-7231ab58-0e1f-4e59-a627-0a317351b444 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Received event network-vif-plugged-4fdcff4a-999b-4a95-bb5f-528102f9556f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:31 np0005539564 nova_compute[226295]: 2025-11-29 08:12:31.127 226310 DEBUG oslo_concurrency.lockutils [req-aea504dd-01d5-4cad-8253-47927c12f23a req-7231ab58-0e1f-4e59-a627-0a317351b444 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:31 np0005539564 nova_compute[226295]: 2025-11-29 08:12:31.127 226310 DEBUG oslo_concurrency.lockutils [req-aea504dd-01d5-4cad-8253-47927c12f23a req-7231ab58-0e1f-4e59-a627-0a317351b444 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:31 np0005539564 nova_compute[226295]: 2025-11-29 08:12:31.128 226310 DEBUG oslo_concurrency.lockutils [req-aea504dd-01d5-4cad-8253-47927c12f23a req-7231ab58-0e1f-4e59-a627-0a317351b444 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:31 np0005539564 nova_compute[226295]: 2025-11-29 08:12:31.128 226310 DEBUG nova.compute.manager [req-aea504dd-01d5-4cad-8253-47927c12f23a req-7231ab58-0e1f-4e59-a627-0a317351b444 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] No waiting events found dispatching network-vif-plugged-4fdcff4a-999b-4a95-bb5f-528102f9556f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:31 np0005539564 nova_compute[226295]: 2025-11-29 08:12:31.129 226310 WARNING nova.compute.manager [req-aea504dd-01d5-4cad-8253-47927c12f23a req-7231ab58-0e1f-4e59-a627-0a317351b444 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Received unexpected event network-vif-plugged-4fdcff4a-999b-4a95-bb5f-528102f9556f for instance with vm_state active and task_state None.#033[00m
Nov 29 03:12:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:31.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:31.961 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:12:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:31.962 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:12:31 np0005539564 nova_compute[226295]: 2025-11-29 08:12:31.966 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:32.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:32 np0005539564 nova_compute[226295]: 2025-11-29 08:12:32.132 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:32 np0005539564 nova_compute[226295]: 2025-11-29 08:12:32.376 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:32 np0005539564 nova_compute[226295]: 2025-11-29 08:12:32.917 226310 DEBUG nova.network.neutron [None req-31e5aed1-6a48-4c78-8368-3dccea772efc ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Updating instance_info_cache with network_info: [{"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:32 np0005539564 nova_compute[226295]: 2025-11-29 08:12:32.938 226310 DEBUG oslo_concurrency.lockutils [None req-31e5aed1-6a48-4c78-8368-3dccea772efc ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Releasing lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:32 np0005539564 nova_compute[226295]: 2025-11-29 08:12:32.938 226310 DEBUG nova.objects.instance [None req-31e5aed1-6a48-4c78-8368-3dccea772efc ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'migration_context' on Instance uuid 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:33 np0005539564 nova_compute[226295]: 2025-11-29 08:12:33.035 226310 DEBUG nova.storage.rbd_utils [None req-31e5aed1-6a48-4c78-8368-3dccea772efc ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] removing snapshot(nova-resize) on rbd image(8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:12:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e283 e283: 3 total, 3 up, 3 in
Nov 29 03:12:33 np0005539564 nova_compute[226295]: 2025-11-29 08:12:33.271 226310 DEBUG oslo_concurrency.lockutils [None req-31e5aed1-6a48-4c78-8368-3dccea772efc ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:33 np0005539564 nova_compute[226295]: 2025-11-29 08:12:33.273 226310 DEBUG oslo_concurrency.lockutils [None req-31e5aed1-6a48-4c78-8368-3dccea772efc ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:33 np0005539564 nova_compute[226295]: 2025-11-29 08:12:33.304 226310 DEBUG nova.compute.manager [req-7213003d-190e-4bb6-a9db-aec77556c324 req-f65e20e2-20bc-4fd3-a89a-f3f491ec0499 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Received event network-changed-4fdcff4a-999b-4a95-bb5f-528102f9556f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:33 np0005539564 nova_compute[226295]: 2025-11-29 08:12:33.308 226310 DEBUG nova.compute.manager [req-7213003d-190e-4bb6-a9db-aec77556c324 req-f65e20e2-20bc-4fd3-a89a-f3f491ec0499 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Refreshing instance network info cache due to event network-changed-4fdcff4a-999b-4a95-bb5f-528102f9556f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:12:33 np0005539564 nova_compute[226295]: 2025-11-29 08:12:33.308 226310 DEBUG oslo_concurrency.lockutils [req-7213003d-190e-4bb6-a9db-aec77556c324 req-f65e20e2-20bc-4fd3-a89a-f3f491ec0499 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-5575532e-65f8-4b29-bab0-a0f8e60d032c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:33 np0005539564 nova_compute[226295]: 2025-11-29 08:12:33.309 226310 DEBUG oslo_concurrency.lockutils [req-7213003d-190e-4bb6-a9db-aec77556c324 req-f65e20e2-20bc-4fd3-a89a-f3f491ec0499 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-5575532e-65f8-4b29-bab0-a0f8e60d032c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:33 np0005539564 nova_compute[226295]: 2025-11-29 08:12:33.309 226310 DEBUG nova.network.neutron [req-7213003d-190e-4bb6-a9db-aec77556c324 req-f65e20e2-20bc-4fd3-a89a-f3f491ec0499 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Refreshing network info cache for port 4fdcff4a-999b-4a95-bb5f-528102f9556f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:12:33 np0005539564 nova_compute[226295]: 2025-11-29 08:12:33.538 226310 DEBUG oslo_concurrency.processutils [None req-31e5aed1-6a48-4c78-8368-3dccea772efc ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:33.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:12:34 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1722645418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:12:34 np0005539564 nova_compute[226295]: 2025-11-29 08:12:34.039 226310 DEBUG oslo_concurrency.processutils [None req-31e5aed1-6a48-4c78-8368-3dccea772efc ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:34 np0005539564 nova_compute[226295]: 2025-11-29 08:12:34.046 226310 DEBUG nova.compute.provider_tree [None req-31e5aed1-6a48-4c78-8368-3dccea772efc ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:12:34 np0005539564 nova_compute[226295]: 2025-11-29 08:12:34.074 226310 DEBUG nova.scheduler.client.report [None req-31e5aed1-6a48-4c78-8368-3dccea772efc ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:12:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:34.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:34 np0005539564 nova_compute[226295]: 2025-11-29 08:12:34.168 226310 DEBUG oslo_concurrency.lockutils [None req-31e5aed1-6a48-4c78-8368-3dccea772efc ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:34 np0005539564 nova_compute[226295]: 2025-11-29 08:12:34.168 226310 DEBUG nova.compute.manager [None req-31e5aed1-6a48-4c78-8368-3dccea772efc ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Resized/migrated instance is powered off. Setting vm_state to 'stopped'. _confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4805#033[00m
Nov 29 03:12:34 np0005539564 nova_compute[226295]: 2025-11-29 08:12:34.493 226310 INFO nova.scheduler.client.report [None req-31e5aed1-6a48-4c78-8368-3dccea772efc ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Deleted allocation for migration 437a1b20-a903-4d00-94b4-5fa3849d9d42#033[00m
Nov 29 03:12:34 np0005539564 nova_compute[226295]: 2025-11-29 08:12:34.559 226310 DEBUG oslo_concurrency.lockutils [None req-31e5aed1-6a48-4c78-8368-3dccea772efc ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 4.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:35 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:12:35 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:12:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:35.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:35.964 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:36.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #82. Immutable memtables: 0.
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:36.794375) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 82
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403956794407, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 2247, "num_deletes": 262, "total_data_size": 5076190, "memory_usage": 5148912, "flush_reason": "Manual Compaction"}
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #83: started
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403956824056, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 83, "file_size": 3293722, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42703, "largest_seqno": 44945, "table_properties": {"data_size": 3284429, "index_size": 5787, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19770, "raw_average_key_size": 20, "raw_value_size": 3265629, "raw_average_value_size": 3415, "num_data_blocks": 250, "num_entries": 956, "num_filter_entries": 956, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403795, "oldest_key_time": 1764403795, "file_creation_time": 1764403956, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 29768 microseconds, and 14050 cpu microseconds.
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:36.824131) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #83: 3293722 bytes OK
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:36.824165) [db/memtable_list.cc:519] [default] Level-0 commit table #83 started
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:36.837847) [db/memtable_list.cc:722] [default] Level-0 commit table #83: memtable #1 done
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:36.837865) EVENT_LOG_v1 {"time_micros": 1764403956837860, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:36.837887) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 5066079, prev total WAL file size 5066079, number of live WAL files 2.
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000079.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:36.839350) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323633' seq:72057594037927935, type:22 .. '6C6F676D0031353134' seq:0, type:0; will stop at (end)
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [83(3216KB)], [81(8575KB)]
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403956839394, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [83], "files_L6": [81], "score": -1, "input_data_size": 12074998, "oldest_snapshot_seqno": -1}
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #84: 7487 keys, 11910402 bytes, temperature: kUnknown
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403956943277, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 84, "file_size": 11910402, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11859691, "index_size": 30887, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18757, "raw_key_size": 193636, "raw_average_key_size": 25, "raw_value_size": 11725257, "raw_average_value_size": 1566, "num_data_blocks": 1223, "num_entries": 7487, "num_filter_entries": 7487, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764403956, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 84, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:36.943742) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 11910402 bytes
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:36.946251) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 116.0 rd, 114.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.4 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(7.3) write-amplify(3.6) OK, records in: 8027, records dropped: 540 output_compression: NoCompression
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:36.946284) EVENT_LOG_v1 {"time_micros": 1764403956946270, "job": 50, "event": "compaction_finished", "compaction_time_micros": 104077, "compaction_time_cpu_micros": 56166, "output_level": 6, "num_output_files": 1, "total_output_size": 11910402, "num_input_records": 8027, "num_output_records": 7487, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403956947673, "job": 50, "event": "table_file_deletion", "file_number": 83}
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000081.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403956950796, "job": 50, "event": "table_file_deletion", "file_number": 81}
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:36.839247) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:36.950987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:36.950995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:36.950999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:36.951003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:12:36 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:36.951007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:12:37 np0005539564 nova_compute[226295]: 2025-11-29 08:12:37.135 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:37 np0005539564 nova_compute[226295]: 2025-11-29 08:12:37.212 226310 DEBUG nova.network.neutron [req-7213003d-190e-4bb6-a9db-aec77556c324 req-f65e20e2-20bc-4fd3-a89a-f3f491ec0499 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Updated VIF entry in instance network info cache for port 4fdcff4a-999b-4a95-bb5f-528102f9556f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:12:37 np0005539564 nova_compute[226295]: 2025-11-29 08:12:37.213 226310 DEBUG nova.network.neutron [req-7213003d-190e-4bb6-a9db-aec77556c324 req-f65e20e2-20bc-4fd3-a89a-f3f491ec0499 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Updating instance_info_cache with network_info: [{"id": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "address": "fa:16:3e:5d:a7:ee", "network": {"id": "337f42f7-7833-4286-befc-f5fca120d50f", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1164364075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146c65131f5b423287d348b351399c4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fdcff4a-99", "ovs_interfaceid": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:37 np0005539564 nova_compute[226295]: 2025-11-29 08:12:37.257 226310 DEBUG oslo_concurrency.lockutils [req-7213003d-190e-4bb6-a9db-aec77556c324 req-f65e20e2-20bc-4fd3-a89a-f3f491ec0499 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-5575532e-65f8-4b29-bab0-a0f8e60d032c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:37 np0005539564 nova_compute[226295]: 2025-11-29 08:12:37.378 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:37.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:38.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:39.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:39 np0005539564 nova_compute[226295]: 2025-11-29 08:12:39.992 226310 DEBUG nova.objects.instance [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'flavor' on Instance uuid 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:40.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:40 np0005539564 nova_compute[226295]: 2025-11-29 08:12:40.112 226310 DEBUG oslo_concurrency.lockutils [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:40 np0005539564 nova_compute[226295]: 2025-11-29 08:12:40.114 226310 DEBUG oslo_concurrency.lockutils [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquired lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:40 np0005539564 nova_compute[226295]: 2025-11-29 08:12:40.115 226310 DEBUG nova.network.neutron [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:12:40 np0005539564 nova_compute[226295]: 2025-11-29 08:12:40.115 226310 DEBUG nova.objects.instance [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'info_cache' on Instance uuid 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e284 e284: 3 total, 3 up, 3 in
Nov 29 03:12:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:41.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:42.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:42 np0005539564 nova_compute[226295]: 2025-11-29 08:12:42.137 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:42 np0005539564 nova_compute[226295]: 2025-11-29 08:12:42.380 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:43.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:44.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:44 np0005539564 ovn_controller[130591]: 2025-11-29T08:12:44Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5d:a7:ee 10.100.0.7
Nov 29 03:12:44 np0005539564 ovn_controller[130591]: 2025-11-29T08:12:44Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:a7:ee 10.100.0.7
Nov 29 03:12:44 np0005539564 nova_compute[226295]: 2025-11-29 08:12:44.892 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:44 np0005539564 nova_compute[226295]: 2025-11-29 08:12:44.893 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:45 np0005539564 nova_compute[226295]: 2025-11-29 08:12:45.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:45.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:46.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:46 np0005539564 nova_compute[226295]: 2025-11-29 08:12:46.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:46 np0005539564 nova_compute[226295]: 2025-11-29 08:12:46.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.141 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.288 226310 DEBUG nova.network.neutron [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Updating instance_info_cache with network_info: [{"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.321 226310 DEBUG oslo_concurrency.lockutils [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Releasing lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.377 226310 INFO nova.virt.libvirt.driver [-] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Instance destroyed successfully.#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.377 226310 DEBUG nova.objects.instance [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'numa_topology' on Instance uuid 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.383 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.392 226310 DEBUG nova.objects.instance [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'resources' on Instance uuid 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.420 226310 DEBUG nova.virt.libvirt.vif [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1179896728',display_name='tempest-ServerActionsTestOtherB-server-1179896728',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1179896728',id=103,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFR2A4rqHty1PxOihJGr6CLeieY2A6hQbQhWuRk7yYUwOYPvlgBFCeYpXPRg+EImok8PXcjU56J6yMvwfigxZeP4BreCe+MzD3uTdqP8PHZ6U4YNDwkQqqigObBB8nAoaw==',key_name='tempest-keypair-98169709',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-dcrww618',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:12:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.420 226310 DEBUG nova.network.os_vif_util [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.421 226310 DEBUG nova.network.os_vif_util [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:69:0e,bridge_name='br-int',has_traffic_filtering=True,id=b7329edb-beb8-414a-b8c3-33d223c32d22,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7329edb-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.422 226310 DEBUG os_vif [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:69:0e,bridge_name='br-int',has_traffic_filtering=True,id=b7329edb-beb8-414a-b8c3-33d223c32d22,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7329edb-be') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.424 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.424 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7329edb-be, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.426 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.428 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.430 226310 INFO os_vif [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:69:0e,bridge_name='br-int',has_traffic_filtering=True,id=b7329edb-beb8-414a-b8c3-33d223c32d22,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7329edb-be')#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.441 226310 DEBUG nova.virt.libvirt.driver [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Start _get_guest_xml network_info=[{"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.448 226310 WARNING nova.virt.libvirt.driver [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.456 226310 DEBUG nova.virt.libvirt.host [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.458 226310 DEBUG nova.virt.libvirt.host [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.462 226310 DEBUG nova.virt.libvirt.host [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.463 226310 DEBUG nova.virt.libvirt.host [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.465 226310 DEBUG nova.virt.libvirt.driver [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.465 226310 DEBUG nova.virt.hardware [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a3833334-6e3e-4b1c-bf74-bdd1055a9e9b',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.466 226310 DEBUG nova.virt.hardware [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.467 226310 DEBUG nova.virt.hardware [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.467 226310 DEBUG nova.virt.hardware [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.468 226310 DEBUG nova.virt.hardware [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.468 226310 DEBUG nova.virt.hardware [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.468 226310 DEBUG nova.virt.hardware [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.469 226310 DEBUG nova.virt.hardware [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.469 226310 DEBUG nova.virt.hardware [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.470 226310 DEBUG nova.virt.hardware [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.470 226310 DEBUG nova.virt.hardware [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.471 226310 DEBUG nova.objects.instance [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:47 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.506 226310 DEBUG oslo_concurrency.processutils [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:47.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:12:47 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/362817229' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:47.999 226310 DEBUG oslo_concurrency.processutils [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.040 226310 DEBUG oslo_concurrency.processutils [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:48.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.374 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.375 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.375 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.375 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:12:48 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4217449745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.509 226310 DEBUG oslo_concurrency.processutils [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.511 226310 DEBUG nova.virt.libvirt.vif [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1179896728',display_name='tempest-ServerActionsTestOtherB-server-1179896728',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1179896728',id=103,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFR2A4rqHty1PxOihJGr6CLeieY2A6hQbQhWuRk7yYUwOYPvlgBFCeYpXPRg+EImok8PXcjU56J6yMvwfigxZeP4BreCe+MzD3uTdqP8PHZ6U4YNDwkQqqigObBB8nAoaw==',key_name='tempest-keypair-98169709',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-dcrww618',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:12:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.512 226310 DEBUG nova.network.os_vif_util [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.513 226310 DEBUG nova.network.os_vif_util [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:69:0e,bridge_name='br-int',has_traffic_filtering=True,id=b7329edb-beb8-414a-b8c3-33d223c32d22,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7329edb-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.515 226310 DEBUG nova.objects.instance [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'pci_devices' on Instance uuid 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.542 226310 DEBUG nova.virt.libvirt.driver [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:12:48 np0005539564 nova_compute[226295]:  <uuid>8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78</uuid>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:  <name>instance-00000067</name>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:  <memory>196608</memory>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerActionsTestOtherB-server-1179896728</nova:name>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:12:47</nova:creationTime>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.micro">
Nov 29 03:12:48 np0005539564 nova_compute[226295]:        <nova:memory>192</nova:memory>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:        <nova:user uuid="ca93c8e3eac142c0aa6b61807727dea2">tempest-ServerActionsTestOtherB-325732369-project-member</nova:user>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:        <nova:project uuid="ba867fac17034bb28fe2cdb0fff3af2b">tempest-ServerActionsTestOtherB-325732369</nova:project>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:        <nova:port uuid="b7329edb-beb8-414a-b8c3-33d223c32d22">
Nov 29 03:12:48 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <entry name="serial">8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78</entry>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <entry name="uuid">8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78</entry>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk">
Nov 29 03:12:48 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:12:48 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_disk.config">
Nov 29 03:12:48 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:12:48 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:c7:69:0e"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <target dev="tapb7329edb-be"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78/console.log" append="off"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <input type="keyboard" bus="usb"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:12:48 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:12:48 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:12:48 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:12:48 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.545 226310 DEBUG nova.virt.libvirt.driver [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.546 226310 DEBUG nova.virt.libvirt.driver [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.547 226310 DEBUG nova.virt.libvirt.vif [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1179896728',display_name='tempest-ServerActionsTestOtherB-server-1179896728',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1179896728',id=103,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFR2A4rqHty1PxOihJGr6CLeieY2A6hQbQhWuRk7yYUwOYPvlgBFCeYpXPRg+EImok8PXcjU56J6yMvwfigxZeP4BreCe+MzD3uTdqP8PHZ6U4YNDwkQqqigObBB8nAoaw==',key_name='tempest-keypair-98169709',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-dcrww618',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:12:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.548 226310 DEBUG nova.network.os_vif_util [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.549 226310 DEBUG nova.network.os_vif_util [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:69:0e,bridge_name='br-int',has_traffic_filtering=True,id=b7329edb-beb8-414a-b8c3-33d223c32d22,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7329edb-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.550 226310 DEBUG os_vif [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:69:0e,bridge_name='br-int',has_traffic_filtering=True,id=b7329edb-beb8-414a-b8c3-33d223c32d22,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7329edb-be') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.551 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.552 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.553 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.559 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.560 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7329edb-be, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.562 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb7329edb-be, col_values=(('external_ids', {'iface-id': 'b7329edb-beb8-414a-b8c3-33d223c32d22', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:69:0e', 'vm-uuid': '8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.564 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:48 np0005539564 NetworkManager[48997]: <info>  [1764403968.5658] manager: (tapb7329edb-be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/183)
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.569 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.573 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.575 226310 INFO os_vif [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:69:0e,bridge_name='br-int',has_traffic_filtering=True,id=b7329edb-beb8-414a-b8c3-33d223c32d22,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7329edb-be')#033[00m
Nov 29 03:12:48 np0005539564 NetworkManager[48997]: <info>  [1764403968.6860] manager: (tapb7329edb-be): new Tun device (/org/freedesktop/NetworkManager/Devices/184)
Nov 29 03:12:48 np0005539564 kernel: tapb7329edb-be: entered promiscuous mode
Nov 29 03:12:48 np0005539564 ovn_controller[130591]: 2025-11-29T08:12:48Z|00354|binding|INFO|Claiming lport b7329edb-beb8-414a-b8c3-33d223c32d22 for this chassis.
Nov 29 03:12:48 np0005539564 ovn_controller[130591]: 2025-11-29T08:12:48Z|00355|binding|INFO|b7329edb-beb8-414a-b8c3-33d223c32d22: Claiming fa:16:3e:c7:69:0e 10.100.0.6
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.688 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:48.696 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:69:0e 10.100.0.6'], port_security=['fa:16:3e:c7:69:0e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba867fac17034bb28fe2cdb0fff3af2b', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a54db614-4504-4e8e-a3a5-27d3f60f6cdf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5e4b2f3-5e6e-48f8-b35a-ab61c62108a6, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=b7329edb-beb8-414a-b8c3-33d223c32d22) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:12:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:48.698 139780 INFO neutron.agent.ovn.metadata.agent [-] Port b7329edb-beb8-414a-b8c3-33d223c32d22 in datapath 4d5b8c11-b69e-4a74-846b-03943fb29a81 bound to our chassis#033[00m
Nov 29 03:12:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:48.701 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d5b8c11-b69e-4a74-846b-03943fb29a81#033[00m
Nov 29 03:12:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:48.719 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[aeaaef2b-b520-4844-93db-1d87cbb39b8d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:48.720 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d5b8c11-b1 in ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:12:48 np0005539564 ovn_controller[130591]: 2025-11-29T08:12:48Z|00356|binding|INFO|Setting lport b7329edb-beb8-414a-b8c3-33d223c32d22 ovn-installed in OVS
Nov 29 03:12:48 np0005539564 ovn_controller[130591]: 2025-11-29T08:12:48Z|00357|binding|INFO|Setting lport b7329edb-beb8-414a-b8c3-33d223c32d22 up in Southbound
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.722 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:48.722 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d5b8c11-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:12:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:48.723 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[45f20cfb-843a-4bfb-a225-3c3a10cdb71b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:48.724 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5aa61ff4-1e61-4efb-989a-7fc16a2b90c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539564 nova_compute[226295]: 2025-11-29 08:12:48.725 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:48 np0005539564 systemd-machined[190128]: New machine qemu-44-instance-00000067.
Nov 29 03:12:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:48.742 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[d89fcde5-2b3d-4d03-8f6e-8e79d3ce36a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:48.768 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[dc69f44d-bef1-4204-bbeb-5a938451f55f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539564 systemd[1]: Started Virtual Machine qemu-44-instance-00000067.
Nov 29 03:12:48 np0005539564 systemd-udevd[264523]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:12:48 np0005539564 NetworkManager[48997]: <info>  [1764403968.7934] device (tapb7329edb-be): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:12:48 np0005539564 NetworkManager[48997]: <info>  [1764403968.7945] device (tapb7329edb-be): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:12:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:48.812 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[e3de6dab-52b8-4b60-8d65-4ce15d95c481]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:48.820 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ece660-f404-4da9-9356-8cab0f2ecba9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539564 NetworkManager[48997]: <info>  [1764403968.8220] manager: (tap4d5b8c11-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/185)
Nov 29 03:12:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:48.861 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[8526b56d-570d-48cc-958f-722fbb362a7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:48.865 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[cc72e419-af87-4b67-846e-2d019abfcae8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539564 NetworkManager[48997]: <info>  [1764403968.9047] device (tap4d5b8c11-b0): carrier: link connected
Nov 29 03:12:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:48.914 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[28abb497-8906-450f-ba5e-7101d8134610]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:48.941 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f52b71-6cd6-496b-8aab-62b524efe0ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d5b8c11-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:06:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 112], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694358, 'reachable_time': 38623, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264553, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:48.962 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c55fc809-2246-48cb-a382-a0395a4655a0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:6d1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694358, 'tstamp': 694358}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264554, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:48.989 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8d69a40f-8ed0-4054-95c2-4468b96deb7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d5b8c11-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:06:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 112], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694358, 'reachable_time': 38623, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264555, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:49.037 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[06327347-d3e5-4547-93e4-ea37b595db19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:49.129 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2edf73bb-a6c5-4229-be6d-dc4e7931d44c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:49.130 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d5b8c11-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:49.131 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:49.131 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d5b8c11-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:49 np0005539564 nova_compute[226295]: 2025-11-29 08:12:49.133 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:49 np0005539564 NetworkManager[48997]: <info>  [1764403969.1342] manager: (tap4d5b8c11-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/186)
Nov 29 03:12:49 np0005539564 kernel: tap4d5b8c11-b0: entered promiscuous mode
Nov 29 03:12:49 np0005539564 nova_compute[226295]: 2025-11-29 08:12:49.136 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:49.137 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d5b8c11-b0, col_values=(('external_ids', {'iface-id': 'a2e47e7a-aef0-4c09-aeef-4a0d63960d7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:49 np0005539564 nova_compute[226295]: 2025-11-29 08:12:49.138 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:49 np0005539564 ovn_controller[130591]: 2025-11-29T08:12:49Z|00358|binding|INFO|Releasing lport a2e47e7a-aef0-4c09-aeef-4a0d63960d7b from this chassis (sb_readonly=0)
Nov 29 03:12:49 np0005539564 nova_compute[226295]: 2025-11-29 08:12:49.155 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:49 np0005539564 nova_compute[226295]: 2025-11-29 08:12:49.156 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:49.155 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d5b8c11-b69e-4a74-846b-03943fb29a81.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d5b8c11-b69e-4a74-846b-03943fb29a81.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:49.157 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6e37f4-c49f-4b5e-a6fd-15f4b9235557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:49.157 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-4d5b8c11-b69e-4a74-846b-03943fb29a81
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/4d5b8c11-b69e-4a74-846b-03943fb29a81.pid.haproxy
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 4d5b8c11-b69e-4a74-846b-03943fb29a81
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:12:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:49.159 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'env', 'PROCESS_TAG=haproxy-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d5b8c11-b69e-4a74-846b-03943fb29a81.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:12:49 np0005539564 nova_compute[226295]: 2025-11-29 08:12:49.261 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403969.2594836, 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:49 np0005539564 nova_compute[226295]: 2025-11-29 08:12:49.261 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:12:49 np0005539564 nova_compute[226295]: 2025-11-29 08:12:49.263 226310 DEBUG nova.compute.manager [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:12:49 np0005539564 nova_compute[226295]: 2025-11-29 08:12:49.267 226310 INFO nova.virt.libvirt.driver [-] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Instance rebooted successfully.#033[00m
Nov 29 03:12:49 np0005539564 nova_compute[226295]: 2025-11-29 08:12:49.267 226310 DEBUG nova.compute.manager [None req-509e6051-7ff5-47e3-a035-3d0d328d9569 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:49 np0005539564 nova_compute[226295]: 2025-11-29 08:12:49.350 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:49 np0005539564 nova_compute[226295]: 2025-11-29 08:12:49.354 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:12:49 np0005539564 nova_compute[226295]: 2025-11-29 08:12:49.389 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Nov 29 03:12:49 np0005539564 nova_compute[226295]: 2025-11-29 08:12:49.390 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764403969.260025, 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:49 np0005539564 nova_compute[226295]: 2025-11-29 08:12:49.390 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] VM Started (Lifecycle Event)#033[00m
Nov 29 03:12:49 np0005539564 nova_compute[226295]: 2025-11-29 08:12:49.429 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:49 np0005539564 nova_compute[226295]: 2025-11-29 08:12:49.434 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:12:49 np0005539564 podman[264630]: 2025-11-29 08:12:49.596415187 +0000 UTC m=+0.064363196 container create 99433eba17973cfbcfc8aadb181b0b0acbda5f69cbe59a674106d3804e352f49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:12:49 np0005539564 podman[264630]: 2025-11-29 08:12:49.565292078 +0000 UTC m=+0.033240057 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:12:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:49.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:49 np0005539564 nova_compute[226295]: 2025-11-29 08:12:49.794 226310 DEBUG nova.compute.manager [req-7e646c75-9d0d-46b5-b746-264ecf42da20 req-f6c1e99d-4a44-45dc-b93f-abb4f058244d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Received event network-vif-plugged-b7329edb-beb8-414a-b8c3-33d223c32d22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:49 np0005539564 nova_compute[226295]: 2025-11-29 08:12:49.794 226310 DEBUG oslo_concurrency.lockutils [req-7e646c75-9d0d-46b5-b746-264ecf42da20 req-f6c1e99d-4a44-45dc-b93f-abb4f058244d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:49 np0005539564 nova_compute[226295]: 2025-11-29 08:12:49.794 226310 DEBUG oslo_concurrency.lockutils [req-7e646c75-9d0d-46b5-b746-264ecf42da20 req-f6c1e99d-4a44-45dc-b93f-abb4f058244d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:49 np0005539564 nova_compute[226295]: 2025-11-29 08:12:49.794 226310 DEBUG oslo_concurrency.lockutils [req-7e646c75-9d0d-46b5-b746-264ecf42da20 req-f6c1e99d-4a44-45dc-b93f-abb4f058244d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:49 np0005539564 nova_compute[226295]: 2025-11-29 08:12:49.794 226310 DEBUG nova.compute.manager [req-7e646c75-9d0d-46b5-b746-264ecf42da20 req-f6c1e99d-4a44-45dc-b93f-abb4f058244d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] No waiting events found dispatching network-vif-plugged-b7329edb-beb8-414a-b8c3-33d223c32d22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:49 np0005539564 nova_compute[226295]: 2025-11-29 08:12:49.795 226310 WARNING nova.compute.manager [req-7e646c75-9d0d-46b5-b746-264ecf42da20 req-f6c1e99d-4a44-45dc-b93f-abb4f058244d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Received unexpected event network-vif-plugged-b7329edb-beb8-414a-b8c3-33d223c32d22 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:12:49 np0005539564 systemd[1]: Started libpod-conmon-99433eba17973cfbcfc8aadb181b0b0acbda5f69cbe59a674106d3804e352f49.scope.
Nov 29 03:12:49 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:12:49 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a5a15a7741481feefb34dd4319d76356b3e087de78c0abfa30a19bae7633e4b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:50 np0005539564 podman[264630]: 2025-11-29 08:12:50.01429256 +0000 UTC m=+0.482240559 container init 99433eba17973cfbcfc8aadb181b0b0acbda5f69cbe59a674106d3804e352f49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:12:50 np0005539564 podman[264630]: 2025-11-29 08:12:50.024505805 +0000 UTC m=+0.492453774 container start 99433eba17973cfbcfc8aadb181b0b0acbda5f69cbe59a674106d3804e352f49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:12:50 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[264645]: [NOTICE]   (264649) : New worker (264651) forked
Nov 29 03:12:50 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[264645]: [NOTICE]   (264649) : Loading success.
Nov 29 03:12:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:50.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:51 np0005539564 nova_compute[226295]: 2025-11-29 08:12:51.606 226310 DEBUG oslo_concurrency.lockutils [None req-fdbbb5c8-8185-46e5-bd5d-2a6d030d87ee ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:51 np0005539564 nova_compute[226295]: 2025-11-29 08:12:51.607 226310 DEBUG oslo_concurrency.lockutils [None req-fdbbb5c8-8185-46e5-bd5d-2a6d030d87ee ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:51 np0005539564 nova_compute[226295]: 2025-11-29 08:12:51.608 226310 DEBUG oslo_concurrency.lockutils [None req-fdbbb5c8-8185-46e5-bd5d-2a6d030d87ee ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:51 np0005539564 nova_compute[226295]: 2025-11-29 08:12:51.609 226310 DEBUG oslo_concurrency.lockutils [None req-fdbbb5c8-8185-46e5-bd5d-2a6d030d87ee ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:51 np0005539564 nova_compute[226295]: 2025-11-29 08:12:51.609 226310 DEBUG oslo_concurrency.lockutils [None req-fdbbb5c8-8185-46e5-bd5d-2a6d030d87ee ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:51 np0005539564 nova_compute[226295]: 2025-11-29 08:12:51.611 226310 INFO nova.compute.manager [None req-fdbbb5c8-8185-46e5-bd5d-2a6d030d87ee ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Terminating instance#033[00m
Nov 29 03:12:51 np0005539564 nova_compute[226295]: 2025-11-29 08:12:51.613 226310 DEBUG nova.compute.manager [None req-fdbbb5c8-8185-46e5-bd5d-2a6d030d87ee ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:12:51 np0005539564 kernel: tapb7329edb-be (unregistering): left promiscuous mode
Nov 29 03:12:51 np0005539564 NetworkManager[48997]: <info>  [1764403971.6563] device (tapb7329edb-be): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:12:51 np0005539564 nova_compute[226295]: 2025-11-29 08:12:51.661 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:51 np0005539564 ovn_controller[130591]: 2025-11-29T08:12:51Z|00359|binding|INFO|Releasing lport b7329edb-beb8-414a-b8c3-33d223c32d22 from this chassis (sb_readonly=0)
Nov 29 03:12:51 np0005539564 ovn_controller[130591]: 2025-11-29T08:12:51Z|00360|binding|INFO|Setting lport b7329edb-beb8-414a-b8c3-33d223c32d22 down in Southbound
Nov 29 03:12:51 np0005539564 ovn_controller[130591]: 2025-11-29T08:12:51Z|00361|binding|INFO|Removing iface tapb7329edb-be ovn-installed in OVS
Nov 29 03:12:51 np0005539564 nova_compute[226295]: 2025-11-29 08:12:51.663 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:51.677 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:69:0e 10.100.0.6'], port_security=['fa:16:3e:c7:69:0e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba867fac17034bb28fe2cdb0fff3af2b', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a54db614-4504-4e8e-a3a5-27d3f60f6cdf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.229', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5e4b2f3-5e6e-48f8-b35a-ab61c62108a6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=b7329edb-beb8-414a-b8c3-33d223c32d22) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:12:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:51.679 139780 INFO neutron.agent.ovn.metadata.agent [-] Port b7329edb-beb8-414a-b8c3-33d223c32d22 in datapath 4d5b8c11-b69e-4a74-846b-03943fb29a81 unbound from our chassis#033[00m
Nov 29 03:12:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:51.680 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d5b8c11-b69e-4a74-846b-03943fb29a81, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:12:51 np0005539564 nova_compute[226295]: 2025-11-29 08:12:51.681 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:51.682 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[dba4f470-e5d0-4a2f-bd41-74271c3d8f79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:51.683 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 namespace which is not needed anymore#033[00m
Nov 29 03:12:51 np0005539564 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000067.scope: Deactivated successfully.
Nov 29 03:12:51 np0005539564 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000067.scope: Consumed 2.756s CPU time.
Nov 29 03:12:51 np0005539564 systemd-machined[190128]: Machine qemu-44-instance-00000067 terminated.
Nov 29 03:12:51 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[264645]: [NOTICE]   (264649) : haproxy version is 2.8.14-c23fe91
Nov 29 03:12:51 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[264645]: [NOTICE]   (264649) : path to executable is /usr/sbin/haproxy
Nov 29 03:12:51 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[264645]: [WARNING]  (264649) : Exiting Master process...
Nov 29 03:12:51 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[264645]: [WARNING]  (264649) : Exiting Master process...
Nov 29 03:12:51 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[264645]: [ALERT]    (264649) : Current worker (264651) exited with code 143 (Terminated)
Nov 29 03:12:51 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[264645]: [WARNING]  (264649) : All workers exited. Exiting... (0)
Nov 29 03:12:51 np0005539564 systemd[1]: libpod-99433eba17973cfbcfc8aadb181b0b0acbda5f69cbe59a674106d3804e352f49.scope: Deactivated successfully.
Nov 29 03:12:51 np0005539564 nova_compute[226295]: 2025-11-29 08:12:51.835 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:51 np0005539564 podman[264685]: 2025-11-29 08:12:51.83643743 +0000 UTC m=+0.051108960 container died 99433eba17973cfbcfc8aadb181b0b0acbda5f69cbe59a674106d3804e352f49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 03:12:51 np0005539564 nova_compute[226295]: 2025-11-29 08:12:51.845 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:51 np0005539564 nova_compute[226295]: 2025-11-29 08:12:51.849 226310 INFO nova.virt.libvirt.driver [-] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Instance destroyed successfully.#033[00m
Nov 29 03:12:51 np0005539564 nova_compute[226295]: 2025-11-29 08:12:51.850 226310 DEBUG nova.objects.instance [None req-fdbbb5c8-8185-46e5-bd5d-2a6d030d87ee ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'resources' on Instance uuid 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:51 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-99433eba17973cfbcfc8aadb181b0b0acbda5f69cbe59a674106d3804e352f49-userdata-shm.mount: Deactivated successfully.
Nov 29 03:12:51 np0005539564 systemd[1]: var-lib-containers-storage-overlay-4a5a15a7741481feefb34dd4319d76356b3e087de78c0abfa30a19bae7633e4b-merged.mount: Deactivated successfully.
Nov 29 03:12:51 np0005539564 nova_compute[226295]: 2025-11-29 08:12:51.874 226310 DEBUG nova.virt.libvirt.vif [None req-fdbbb5c8-8185-46e5-bd5d-2a6d030d87ee ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1179896728',display_name='tempest-ServerActionsTestOtherB-server-1179896728',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1179896728',id=103,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFR2A4rqHty1PxOihJGr6CLeieY2A6hQbQhWuRk7yYUwOYPvlgBFCeYpXPRg+EImok8PXcjU56J6yMvwfigxZeP4BreCe+MzD3uTdqP8PHZ6U4YNDwkQqqigObBB8nAoaw==',key_name='tempest-keypair-98169709',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-dcrww618',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:12:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:12:51 np0005539564 nova_compute[226295]: 2025-11-29 08:12:51.875 226310 DEBUG nova.network.os_vif_util [None req-fdbbb5c8-8185-46e5-bd5d-2a6d030d87ee ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:51 np0005539564 nova_compute[226295]: 2025-11-29 08:12:51.877 226310 DEBUG nova.network.os_vif_util [None req-fdbbb5c8-8185-46e5-bd5d-2a6d030d87ee ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:69:0e,bridge_name='br-int',has_traffic_filtering=True,id=b7329edb-beb8-414a-b8c3-33d223c32d22,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7329edb-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:51 np0005539564 nova_compute[226295]: 2025-11-29 08:12:51.877 226310 DEBUG os_vif [None req-fdbbb5c8-8185-46e5-bd5d-2a6d030d87ee ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:69:0e,bridge_name='br-int',has_traffic_filtering=True,id=b7329edb-beb8-414a-b8c3-33d223c32d22,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7329edb-be') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:12:51 np0005539564 nova_compute[226295]: 2025-11-29 08:12:51.880 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:51 np0005539564 nova_compute[226295]: 2025-11-29 08:12:51.880 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7329edb-be, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:51.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:51 np0005539564 podman[264685]: 2025-11-29 08:12:51.885795361 +0000 UTC m=+0.100466911 container cleanup 99433eba17973cfbcfc8aadb181b0b0acbda5f69cbe59a674106d3804e352f49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:12:51 np0005539564 nova_compute[226295]: 2025-11-29 08:12:51.886 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:51 np0005539564 nova_compute[226295]: 2025-11-29 08:12:51.888 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:12:51 np0005539564 nova_compute[226295]: 2025-11-29 08:12:51.894 226310 INFO os_vif [None req-fdbbb5c8-8185-46e5-bd5d-2a6d030d87ee ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:69:0e,bridge_name='br-int',has_traffic_filtering=True,id=b7329edb-beb8-414a-b8c3-33d223c32d22,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7329edb-be')#033[00m
Nov 29 03:12:51 np0005539564 systemd[1]: libpod-conmon-99433eba17973cfbcfc8aadb181b0b0acbda5f69cbe59a674106d3804e352f49.scope: Deactivated successfully.
Nov 29 03:12:51 np0005539564 podman[264728]: 2025-11-29 08:12:51.962288715 +0000 UTC m=+0.044263486 container remove 99433eba17973cfbcfc8aadb181b0b0acbda5f69cbe59a674106d3804e352f49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:12:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:51.979 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bae1f4c0-5df5-461c-8003-a94148ddf0a3]: (4, ('Sat Nov 29 08:12:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 (99433eba17973cfbcfc8aadb181b0b0acbda5f69cbe59a674106d3804e352f49)\n99433eba17973cfbcfc8aadb181b0b0acbda5f69cbe59a674106d3804e352f49\nSat Nov 29 08:12:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 (99433eba17973cfbcfc8aadb181b0b0acbda5f69cbe59a674106d3804e352f49)\n99433eba17973cfbcfc8aadb181b0b0acbda5f69cbe59a674106d3804e352f49\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:51.981 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7e153c18-3501-407b-ad00-25699508ba64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:51.983 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d5b8c11-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:51 np0005539564 nova_compute[226295]: 2025-11-29 08:12:51.984 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:51 np0005539564 kernel: tap4d5b8c11-b0: left promiscuous mode
Nov 29 03:12:51 np0005539564 nova_compute[226295]: 2025-11-29 08:12:51.998 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:52.001 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[219853f9-2169-4245-aba6-2b6d4d20d1ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:52.019 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d663dc99-3483-4562-9ab2-aba61fffd692]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:52.020 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5faa4bc0-e788-43ce-a16a-aa6ec03dd545]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:52.034 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8a7c4a86-cd89-4c37-9c25-197fe4da2f3d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694348, 'reachable_time': 42331, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264761, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:52.037 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:12:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:52.037 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[910ea425-0fe1-4593-9813-05ee9375c4cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:52 np0005539564 systemd[1]: run-netns-ovnmeta\x2d4d5b8c11\x2db69e\x2d4a74\x2d846b\x2d03943fb29a81.mount: Deactivated successfully.
Nov 29 03:12:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:52.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Nov 29 03:12:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 29 03:12:52 np0005539564 nova_compute[226295]: 2025-11-29 08:12:52.144 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Nov 29 03:12:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 29 03:12:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Nov 29 03:12:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Nov 29 03:12:52 np0005539564 nova_compute[226295]: 2025-11-29 08:12:52.569 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Updating instance_info_cache with network_info: [{"id": "b7329edb-beb8-414a-b8c3-33d223c32d22", "address": "fa:16:3e:c7:69:0e", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7329edb-be", "ovs_interfaceid": "b7329edb-beb8-414a-b8c3-33d223c32d22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:52 np0005539564 nova_compute[226295]: 2025-11-29 08:12:52.613 226310 INFO nova.virt.libvirt.driver [None req-fdbbb5c8-8185-46e5-bd5d-2a6d030d87ee ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Deleting instance files /var/lib/nova/instances/8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_del#033[00m
Nov 29 03:12:52 np0005539564 nova_compute[226295]: 2025-11-29 08:12:52.614 226310 INFO nova.virt.libvirt.driver [None req-fdbbb5c8-8185-46e5-bd5d-2a6d030d87ee ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Deletion of /var/lib/nova/instances/8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78_del complete#033[00m
Nov 29 03:12:52 np0005539564 nova_compute[226295]: 2025-11-29 08:12:52.651 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:52 np0005539564 nova_compute[226295]: 2025-11-29 08:12:52.652 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:12:52 np0005539564 nova_compute[226295]: 2025-11-29 08:12:52.653 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:52 np0005539564 nova_compute[226295]: 2025-11-29 08:12:52.725 226310 INFO nova.compute.manager [None req-fdbbb5c8-8185-46e5-bd5d-2a6d030d87ee ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Took 1.11 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:12:52 np0005539564 nova_compute[226295]: 2025-11-29 08:12:52.725 226310 DEBUG oslo.service.loopingcall [None req-fdbbb5c8-8185-46e5-bd5d-2a6d030d87ee ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:12:52 np0005539564 nova_compute[226295]: 2025-11-29 08:12:52.726 226310 DEBUG nova.compute.manager [-] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:12:52 np0005539564 nova_compute[226295]: 2025-11-29 08:12:52.726 226310 DEBUG nova.network.neutron [-] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:12:52 np0005539564 nova_compute[226295]: 2025-11-29 08:12:52.747 226310 DEBUG nova.compute.manager [req-5979ca79-0f2b-4d16-8ca5-3f961a3e1b5d req-5b2c92db-cf4b-4923-b3da-35cf70d144c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Received event network-vif-plugged-b7329edb-beb8-414a-b8c3-33d223c32d22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:52 np0005539564 nova_compute[226295]: 2025-11-29 08:12:52.747 226310 DEBUG oslo_concurrency.lockutils [req-5979ca79-0f2b-4d16-8ca5-3f961a3e1b5d req-5b2c92db-cf4b-4923-b3da-35cf70d144c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:52 np0005539564 nova_compute[226295]: 2025-11-29 08:12:52.748 226310 DEBUG oslo_concurrency.lockutils [req-5979ca79-0f2b-4d16-8ca5-3f961a3e1b5d req-5b2c92db-cf4b-4923-b3da-35cf70d144c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:52 np0005539564 nova_compute[226295]: 2025-11-29 08:12:52.748 226310 DEBUG oslo_concurrency.lockutils [req-5979ca79-0f2b-4d16-8ca5-3f961a3e1b5d req-5b2c92db-cf4b-4923-b3da-35cf70d144c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:52 np0005539564 nova_compute[226295]: 2025-11-29 08:12:52.748 226310 DEBUG nova.compute.manager [req-5979ca79-0f2b-4d16-8ca5-3f961a3e1b5d req-5b2c92db-cf4b-4923-b3da-35cf70d144c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] No waiting events found dispatching network-vif-plugged-b7329edb-beb8-414a-b8c3-33d223c32d22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:52 np0005539564 nova_compute[226295]: 2025-11-29 08:12:52.748 226310 WARNING nova.compute.manager [req-5979ca79-0f2b-4d16-8ca5-3f961a3e1b5d req-5b2c92db-cf4b-4923-b3da-35cf70d144c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Received unexpected event network-vif-plugged-b7329edb-beb8-414a-b8c3-33d223c32d22 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:12:53 np0005539564 nova_compute[226295]: 2025-11-29 08:12:53.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:53 np0005539564 nova_compute[226295]: 2025-11-29 08:12:53.377 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:53 np0005539564 nova_compute[226295]: 2025-11-29 08:12:53.378 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:53 np0005539564 nova_compute[226295]: 2025-11-29 08:12:53.378 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:53 np0005539564 nova_compute[226295]: 2025-11-29 08:12:53.378 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:12:53 np0005539564 nova_compute[226295]: 2025-11-29 08:12:53.379 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:53 np0005539564 nova_compute[226295]: 2025-11-29 08:12:53.502 226310 INFO nova.compute.manager [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Rebuilding instance#033[00m
Nov 29 03:12:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:53.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:12:53 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1327195148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:12:53 np0005539564 nova_compute[226295]: 2025-11-29 08:12:53.908 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:53 np0005539564 nova_compute[226295]: 2025-11-29 08:12:53.986 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:12:53 np0005539564 nova_compute[226295]: 2025-11-29 08:12:53.986 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:12:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:54.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.213 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.214 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4307MB free_disk=20.784767150878906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.214 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.215 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.407 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 192, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.408 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 5575532e-65f8-4b29-bab0-a0f8e60d032c actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.408 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.409 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=832MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.458 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing inventories for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.476 226310 DEBUG nova.objects.instance [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5575532e-65f8-4b29-bab0-a0f8e60d032c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.507 226310 DEBUG nova.compute.manager [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.517 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating ProviderTree inventory for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.518 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating inventory in ProviderTree for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.580 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing aggregate associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.594 226310 DEBUG nova.objects.instance [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lazy-loading 'pci_requests' on Instance uuid 5575532e-65f8-4b29-bab0-a0f8e60d032c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.614 226310 DEBUG nova.objects.instance [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lazy-loading 'pci_devices' on Instance uuid 5575532e-65f8-4b29-bab0-a0f8e60d032c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.618 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing trait associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.637 226310 DEBUG nova.objects.instance [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lazy-loading 'resources' on Instance uuid 5575532e-65f8-4b29-bab0-a0f8e60d032c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.650 226310 DEBUG nova.objects.instance [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lazy-loading 'migration_context' on Instance uuid 5575532e-65f8-4b29-bab0-a0f8e60d032c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.673 226310 DEBUG nova.objects.instance [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.679 226310 DEBUG nova.virt.libvirt.driver [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.776 226310 DEBUG nova.network.neutron [-] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.794 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.839 226310 INFO nova.compute.manager [-] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Took 2.11 seconds to deallocate network for instance.#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.965 226310 DEBUG nova.compute.manager [req-8933c0e6-aac4-49f0-99e4-218b5080057d req-660e1e59-479c-4b3b-8983-45b5479547c4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Received event network-vif-unplugged-b7329edb-beb8-414a-b8c3-33d223c32d22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.966 226310 DEBUG oslo_concurrency.lockutils [req-8933c0e6-aac4-49f0-99e4-218b5080057d req-660e1e59-479c-4b3b-8983-45b5479547c4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.966 226310 DEBUG oslo_concurrency.lockutils [req-8933c0e6-aac4-49f0-99e4-218b5080057d req-660e1e59-479c-4b3b-8983-45b5479547c4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.967 226310 DEBUG oslo_concurrency.lockutils [req-8933c0e6-aac4-49f0-99e4-218b5080057d req-660e1e59-479c-4b3b-8983-45b5479547c4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.967 226310 DEBUG nova.compute.manager [req-8933c0e6-aac4-49f0-99e4-218b5080057d req-660e1e59-479c-4b3b-8983-45b5479547c4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] No waiting events found dispatching network-vif-unplugged-b7329edb-beb8-414a-b8c3-33d223c32d22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.968 226310 WARNING nova.compute.manager [req-8933c0e6-aac4-49f0-99e4-218b5080057d req-660e1e59-479c-4b3b-8983-45b5479547c4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Received unexpected event network-vif-unplugged-b7329edb-beb8-414a-b8c3-33d223c32d22 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.968 226310 DEBUG nova.compute.manager [req-8933c0e6-aac4-49f0-99e4-218b5080057d req-660e1e59-479c-4b3b-8983-45b5479547c4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Received event network-vif-plugged-b7329edb-beb8-414a-b8c3-33d223c32d22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.969 226310 DEBUG oslo_concurrency.lockutils [req-8933c0e6-aac4-49f0-99e4-218b5080057d req-660e1e59-479c-4b3b-8983-45b5479547c4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.969 226310 DEBUG oslo_concurrency.lockutils [req-8933c0e6-aac4-49f0-99e4-218b5080057d req-660e1e59-479c-4b3b-8983-45b5479547c4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.970 226310 DEBUG oslo_concurrency.lockutils [req-8933c0e6-aac4-49f0-99e4-218b5080057d req-660e1e59-479c-4b3b-8983-45b5479547c4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.970 226310 DEBUG nova.compute.manager [req-8933c0e6-aac4-49f0-99e4-218b5080057d req-660e1e59-479c-4b3b-8983-45b5479547c4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] No waiting events found dispatching network-vif-plugged-b7329edb-beb8-414a-b8c3-33d223c32d22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.971 226310 WARNING nova.compute.manager [req-8933c0e6-aac4-49f0-99e4-218b5080057d req-660e1e59-479c-4b3b-8983-45b5479547c4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Received unexpected event network-vif-plugged-b7329edb-beb8-414a-b8c3-33d223c32d22 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:12:54 np0005539564 nova_compute[226295]: 2025-11-29 08:12:54.973 226310 DEBUG oslo_concurrency.lockutils [None req-fdbbb5c8-8185-46e5-bd5d-2a6d030d87ee ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:55 np0005539564 nova_compute[226295]: 2025-11-29 08:12:55.138 226310 DEBUG nova.compute.manager [req-7e11362a-ca66-479f-8c99-6e7e611e034c req-225e4a83-c229-4933-a618-b6f7c787d67e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Received event network-vif-deleted-b7329edb-beb8-414a-b8c3-33d223c32d22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:55 np0005539564 nova_compute[226295]: 2025-11-29 08:12:55.309 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:55 np0005539564 nova_compute[226295]: 2025-11-29 08:12:55.317 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:12:55 np0005539564 nova_compute[226295]: 2025-11-29 08:12:55.371 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:12:55 np0005539564 nova_compute[226295]: 2025-11-29 08:12:55.403 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:12:55 np0005539564 nova_compute[226295]: 2025-11-29 08:12:55.404 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.189s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:55 np0005539564 nova_compute[226295]: 2025-11-29 08:12:55.405 226310 DEBUG oslo_concurrency.lockutils [None req-fdbbb5c8-8185-46e5-bd5d-2a6d030d87ee ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:55 np0005539564 nova_compute[226295]: 2025-11-29 08:12:55.514 226310 DEBUG oslo_concurrency.processutils [None req-fdbbb5c8-8185-46e5-bd5d-2a6d030d87ee ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:55.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1171770234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:12:56 np0005539564 nova_compute[226295]: 2025-11-29 08:12:56.041 226310 DEBUG oslo_concurrency.processutils [None req-fdbbb5c8-8185-46e5-bd5d-2a6d030d87ee ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:56 np0005539564 nova_compute[226295]: 2025-11-29 08:12:56.050 226310 DEBUG nova.compute.provider_tree [None req-fdbbb5c8-8185-46e5-bd5d-2a6d030d87ee ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:12:56 np0005539564 nova_compute[226295]: 2025-11-29 08:12:56.096 226310 DEBUG nova.scheduler.client.report [None req-fdbbb5c8-8185-46e5-bd5d-2a6d030d87ee ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:12:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:56.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:56 np0005539564 nova_compute[226295]: 2025-11-29 08:12:56.156 226310 DEBUG oslo_concurrency.lockutils [None req-fdbbb5c8-8185-46e5-bd5d-2a6d030d87ee ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:56 np0005539564 nova_compute[226295]: 2025-11-29 08:12:56.280 226310 INFO nova.scheduler.client.report [None req-fdbbb5c8-8185-46e5-bd5d-2a6d030d87ee ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Deleted allocations for instance 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78#033[00m
Nov 29 03:12:56 np0005539564 nova_compute[226295]: 2025-11-29 08:12:56.404 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:56 np0005539564 nova_compute[226295]: 2025-11-29 08:12:56.448 226310 DEBUG oslo_concurrency.lockutils [None req-fdbbb5c8-8185-46e5-bd5d-2a6d030d87ee ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #85. Immutable memtables: 0.
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:56.628378) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 85
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403976628419, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 482, "num_deletes": 252, "total_data_size": 556591, "memory_usage": 565256, "flush_reason": "Manual Compaction"}
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #86: started
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403976633673, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 86, "file_size": 366532, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44950, "largest_seqno": 45427, "table_properties": {"data_size": 363959, "index_size": 609, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6564, "raw_average_key_size": 19, "raw_value_size": 358675, "raw_average_value_size": 1048, "num_data_blocks": 27, "num_entries": 342, "num_filter_entries": 342, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403957, "oldest_key_time": 1764403957, "file_creation_time": 1764403976, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 5335 microseconds, and 1956 cpu microseconds.
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:56.633711) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #86: 366532 bytes OK
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:56.633728) [db/memtable_list.cc:519] [default] Level-0 commit table #86 started
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:56.636340) [db/memtable_list.cc:722] [default] Level-0 commit table #86: memtable #1 done
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:56.636357) EVENT_LOG_v1 {"time_micros": 1764403976636351, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:56.636376) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 553661, prev total WAL file size 553661, number of live WAL files 2.
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000082.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:56.636823) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [86(357KB)], [84(11MB)]
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403976636877, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [86], "files_L6": [84], "score": -1, "input_data_size": 12276934, "oldest_snapshot_seqno": -1}
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #87: 7312 keys, 10415462 bytes, temperature: kUnknown
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403976723283, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 87, "file_size": 10415462, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10367240, "index_size": 28846, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18309, "raw_key_size": 190697, "raw_average_key_size": 26, "raw_value_size": 10237173, "raw_average_value_size": 1400, "num_data_blocks": 1130, "num_entries": 7312, "num_filter_entries": 7312, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764403976, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 87, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:56.723605) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 10415462 bytes
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:56.725046) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.9 rd, 120.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.4 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(61.9) write-amplify(28.4) OK, records in: 7829, records dropped: 517 output_compression: NoCompression
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:56.725078) EVENT_LOG_v1 {"time_micros": 1764403976725063, "job": 52, "event": "compaction_finished", "compaction_time_micros": 86501, "compaction_time_cpu_micros": 28559, "output_level": 6, "num_output_files": 1, "total_output_size": 10415462, "num_input_records": 7829, "num_output_records": 7312, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403976725366, "job": 52, "event": "table_file_deletion", "file_number": 86}
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000084.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403976729582, "job": 52, "event": "table_file_deletion", "file_number": 84}
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:56.636766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:56.729709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:56.729715) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:56.729717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:56.729719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:12:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:12:56.729720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:12:56 np0005539564 nova_compute[226295]: 2025-11-29 08:12:56.918 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:56 np0005539564 kernel: tap4fdcff4a-99 (unregistering): left promiscuous mode
Nov 29 03:12:56 np0005539564 NetworkManager[48997]: <info>  [1764403976.9794] device (tap4fdcff4a-99): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:12:56 np0005539564 nova_compute[226295]: 2025-11-29 08:12:56.986 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:56 np0005539564 ovn_controller[130591]: 2025-11-29T08:12:56Z|00362|binding|INFO|Releasing lport 4fdcff4a-999b-4a95-bb5f-528102f9556f from this chassis (sb_readonly=0)
Nov 29 03:12:56 np0005539564 ovn_controller[130591]: 2025-11-29T08:12:56Z|00363|binding|INFO|Setting lport 4fdcff4a-999b-4a95-bb5f-528102f9556f down in Southbound
Nov 29 03:12:56 np0005539564 ovn_controller[130591]: 2025-11-29T08:12:56Z|00364|binding|INFO|Removing iface tap4fdcff4a-99 ovn-installed in OVS
Nov 29 03:12:56 np0005539564 nova_compute[226295]: 2025-11-29 08:12:56.990 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:57 np0005539564 nova_compute[226295]: 2025-11-29 08:12:57.014 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:57.019 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:a7:ee 10.100.0.7'], port_security=['fa:16:3e:5d:a7:ee 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5575532e-65f8-4b29-bab0-a0f8e60d032c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-337f42f7-7833-4286-befc-f5fca120d50f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '146c65131f5b423287d348b351399c4e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e83feb7e-e837-4a27-87d5-f25ed404c193', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f225a6f7-d086-4f61-9a6c-4cc9bd46d793, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=4fdcff4a-999b-4a95-bb5f-528102f9556f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:12:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:57.021 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 4fdcff4a-999b-4a95-bb5f-528102f9556f in datapath 337f42f7-7833-4286-befc-f5fca120d50f unbound from our chassis#033[00m
Nov 29 03:12:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:57.023 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 337f42f7-7833-4286-befc-f5fca120d50f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:12:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:57.025 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[602e4134-ab1e-4dd0-8f6f-f9e7efd4ff7f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:57.025 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f namespace which is not needed anymore#033[00m
Nov 29 03:12:57 np0005539564 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Nov 29 03:12:57 np0005539564 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d0000006c.scope: Consumed 15.577s CPU time.
Nov 29 03:12:57 np0005539564 systemd-machined[190128]: Machine qemu-43-instance-0000006c terminated.
Nov 29 03:12:57 np0005539564 nova_compute[226295]: 2025-11-29 08:12:57.146 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:57 np0005539564 neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f[264265]: [NOTICE]   (264301) : haproxy version is 2.8.14-c23fe91
Nov 29 03:12:57 np0005539564 neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f[264265]: [NOTICE]   (264301) : path to executable is /usr/sbin/haproxy
Nov 29 03:12:57 np0005539564 neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f[264265]: [WARNING]  (264301) : Exiting Master process...
Nov 29 03:12:57 np0005539564 neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f[264265]: [ALERT]    (264301) : Current worker (264313) exited with code 143 (Terminated)
Nov 29 03:12:57 np0005539564 neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f[264265]: [WARNING]  (264301) : All workers exited. Exiting... (0)
Nov 29 03:12:57 np0005539564 systemd[1]: libpod-a3d4bc5b8847f3bac657dbd6a5cae8db3c3010e9ba1ca924dbb8b3e10f12d079.scope: Deactivated successfully.
Nov 29 03:12:57 np0005539564 conmon[264265]: conmon a3d4bc5b8847f3bac657 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a3d4bc5b8847f3bac657dbd6a5cae8db3c3010e9ba1ca924dbb8b3e10f12d079.scope/container/memory.events
Nov 29 03:12:57 np0005539564 podman[264854]: 2025-11-29 08:12:57.166118302 +0000 UTC m=+0.049154347 container died a3d4bc5b8847f3bac657dbd6a5cae8db3c3010e9ba1ca924dbb8b3e10f12d079 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:12:57 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a3d4bc5b8847f3bac657dbd6a5cae8db3c3010e9ba1ca924dbb8b3e10f12d079-userdata-shm.mount: Deactivated successfully.
Nov 29 03:12:57 np0005539564 systemd[1]: var-lib-containers-storage-overlay-88352b610d413f9809aceca517544159ffc1275440e9bbb169f97d82c7a751a2-merged.mount: Deactivated successfully.
Nov 29 03:12:57 np0005539564 podman[264854]: 2025-11-29 08:12:57.197418296 +0000 UTC m=+0.080454341 container cleanup a3d4bc5b8847f3bac657dbd6a5cae8db3c3010e9ba1ca924dbb8b3e10f12d079 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:12:57 np0005539564 systemd[1]: libpod-conmon-a3d4bc5b8847f3bac657dbd6a5cae8db3c3010e9ba1ca924dbb8b3e10f12d079.scope: Deactivated successfully.
Nov 29 03:12:57 np0005539564 podman[264884]: 2025-11-29 08:12:57.271448473 +0000 UTC m=+0.047036900 container remove a3d4bc5b8847f3bac657dbd6a5cae8db3c3010e9ba1ca924dbb8b3e10f12d079 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:12:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:57.280 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2b50e616-eca3-4620-b6a2-cf1301f0ce9e]: (4, ('Sat Nov 29 08:12:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f (a3d4bc5b8847f3bac657dbd6a5cae8db3c3010e9ba1ca924dbb8b3e10f12d079)\na3d4bc5b8847f3bac657dbd6a5cae8db3c3010e9ba1ca924dbb8b3e10f12d079\nSat Nov 29 08:12:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f (a3d4bc5b8847f3bac657dbd6a5cae8db3c3010e9ba1ca924dbb8b3e10f12d079)\na3d4bc5b8847f3bac657dbd6a5cae8db3c3010e9ba1ca924dbb8b3e10f12d079\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:57.282 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[21b9d63c-863c-40fc-a1e9-bac2c2f445e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:57.283 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap337f42f7-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:57 np0005539564 nova_compute[226295]: 2025-11-29 08:12:57.284 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:57 np0005539564 kernel: tap337f42f7-70: left promiscuous mode
Nov 29 03:12:57 np0005539564 nova_compute[226295]: 2025-11-29 08:12:57.305 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:57.308 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[40084fad-ecb9-4713-aff0-c0bb62e96562]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:57.327 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a422befa-21d3-465f-90a3-b6fb75dda01d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:57.328 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[799ef0c8-776c-4c74-afb2-f32c2691ec54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:57.344 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[537ff76d-1a83-4a3d-98a7-8c75061ad89e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692311, 'reachable_time': 26595, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264911, 'error': None, 'target': 'ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:57.347 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:12:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:12:57.348 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[e47ae037-c685-4ac1-8ba8-566c3b3099c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:57 np0005539564 systemd[1]: run-netns-ovnmeta\x2d337f42f7\x2d7833\x2d4286\x2dbefc\x2df5fca120d50f.mount: Deactivated successfully.
Nov 29 03:12:57 np0005539564 nova_compute[226295]: 2025-11-29 08:12:57.626 226310 DEBUG nova.compute.manager [req-e3b8bd32-904c-4a14-82d3-b87e825ab615 req-3a3c62b1-78ce-4c79-99d6-b7e9ef96178c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Received event network-vif-unplugged-4fdcff4a-999b-4a95-bb5f-528102f9556f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:57 np0005539564 nova_compute[226295]: 2025-11-29 08:12:57.627 226310 DEBUG oslo_concurrency.lockutils [req-e3b8bd32-904c-4a14-82d3-b87e825ab615 req-3a3c62b1-78ce-4c79-99d6-b7e9ef96178c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:57 np0005539564 nova_compute[226295]: 2025-11-29 08:12:57.628 226310 DEBUG oslo_concurrency.lockutils [req-e3b8bd32-904c-4a14-82d3-b87e825ab615 req-3a3c62b1-78ce-4c79-99d6-b7e9ef96178c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:57 np0005539564 nova_compute[226295]: 2025-11-29 08:12:57.629 226310 DEBUG oslo_concurrency.lockutils [req-e3b8bd32-904c-4a14-82d3-b87e825ab615 req-3a3c62b1-78ce-4c79-99d6-b7e9ef96178c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:57 np0005539564 nova_compute[226295]: 2025-11-29 08:12:57.629 226310 DEBUG nova.compute.manager [req-e3b8bd32-904c-4a14-82d3-b87e825ab615 req-3a3c62b1-78ce-4c79-99d6-b7e9ef96178c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] No waiting events found dispatching network-vif-unplugged-4fdcff4a-999b-4a95-bb5f-528102f9556f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:57 np0005539564 nova_compute[226295]: 2025-11-29 08:12:57.630 226310 WARNING nova.compute.manager [req-e3b8bd32-904c-4a14-82d3-b87e825ab615 req-3a3c62b1-78ce-4c79-99d6-b7e9ef96178c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Received unexpected event network-vif-unplugged-4fdcff4a-999b-4a95-bb5f-528102f9556f for instance with vm_state active and task_state rebuilding.#033[00m
Nov 29 03:12:57 np0005539564 nova_compute[226295]: 2025-11-29 08:12:57.701 226310 INFO nova.virt.libvirt.driver [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:12:57 np0005539564 nova_compute[226295]: 2025-11-29 08:12:57.709 226310 INFO nova.virt.libvirt.driver [-] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Instance destroyed successfully.#033[00m
Nov 29 03:12:57 np0005539564 nova_compute[226295]: 2025-11-29 08:12:57.717 226310 INFO nova.virt.libvirt.driver [-] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Instance destroyed successfully.#033[00m
Nov 29 03:12:57 np0005539564 nova_compute[226295]: 2025-11-29 08:12:57.719 226310 DEBUG nova.virt.libvirt.vif [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:12:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-817085192',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1111293699',id=108,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBjacxFQsKdMM4UDJE5tqVty/GRJtgDwjO80+Cb748HjTaOefbBqANvkqMMVhv8OZRL0vzsrbYfDHv6t2rc90ONK+EYpM6HO7fjiT30tNIEWFgoPJhxB+XUGt8iA5muhkg==',key_name='tempest-keypair-1424948510',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:29Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='146c65131f5b423287d348b351399c4e',ramdisk_id='',reservation_id='r-ivfz3olx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-1695306825',owner_user_name='tempest-ServerActionsV293TestJSON-1695306825-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9cb37d6d47ac46aaa19aebb2e5b21658',uuid=5575532e-65f8-4b29-bab0-a0f8e60d032c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "address": "fa:16:3e:5d:a7:ee", "network": {"id": "337f42f7-7833-4286-befc-f5fca120d50f", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1164364075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146c65131f5b423287d348b351399c4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fdcff4a-99", "ovs_interfaceid": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:12:57 np0005539564 nova_compute[226295]: 2025-11-29 08:12:57.720 226310 DEBUG nova.network.os_vif_util [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Converting VIF {"id": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "address": "fa:16:3e:5d:a7:ee", "network": {"id": "337f42f7-7833-4286-befc-f5fca120d50f", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1164364075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146c65131f5b423287d348b351399c4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fdcff4a-99", "ovs_interfaceid": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:57 np0005539564 nova_compute[226295]: 2025-11-29 08:12:57.721 226310 DEBUG nova.network.os_vif_util [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5d:a7:ee,bridge_name='br-int',has_traffic_filtering=True,id=4fdcff4a-999b-4a95-bb5f-528102f9556f,network=Network(337f42f7-7833-4286-befc-f5fca120d50f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fdcff4a-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:57 np0005539564 nova_compute[226295]: 2025-11-29 08:12:57.721 226310 DEBUG os_vif [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:a7:ee,bridge_name='br-int',has_traffic_filtering=True,id=4fdcff4a-999b-4a95-bb5f-528102f9556f,network=Network(337f42f7-7833-4286-befc-f5fca120d50f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fdcff4a-99') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:12:57 np0005539564 nova_compute[226295]: 2025-11-29 08:12:57.724 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:57 np0005539564 nova_compute[226295]: 2025-11-29 08:12:57.724 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fdcff4a-99, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:57 np0005539564 nova_compute[226295]: 2025-11-29 08:12:57.726 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:57 np0005539564 nova_compute[226295]: 2025-11-29 08:12:57.729 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:57 np0005539564 nova_compute[226295]: 2025-11-29 08:12:57.731 226310 INFO os_vif [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:a7:ee,bridge_name='br-int',has_traffic_filtering=True,id=4fdcff4a-999b-4a95-bb5f-528102f9556f,network=Network(337f42f7-7833-4286-befc-f5fca120d50f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fdcff4a-99')#033[00m
Nov 29 03:12:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:57.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:58.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:58 np0005539564 nova_compute[226295]: 2025-11-29 08:12:58.557 226310 INFO nova.virt.libvirt.driver [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Deleting instance files /var/lib/nova/instances/5575532e-65f8-4b29-bab0-a0f8e60d032c_del#033[00m
Nov 29 03:12:58 np0005539564 nova_compute[226295]: 2025-11-29 08:12:58.559 226310 INFO nova.virt.libvirt.driver [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Deletion of /var/lib/nova/instances/5575532e-65f8-4b29-bab0-a0f8e60d032c_del complete#033[00m
Nov 29 03:12:59 np0005539564 nova_compute[226295]: 2025-11-29 08:12:59.191 226310 WARNING nova.virt.libvirt.driver [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] During detach_volume, instance disappeared.: nova.exception.InstanceNotFound: Instance 5575532e-65f8-4b29-bab0-a0f8e60d032c could not be found.#033[00m
Nov 29 03:12:59 np0005539564 nova_compute[226295]: 2025-11-29 08:12:59.753 226310 DEBUG nova.compute.manager [req-81e70d06-f72d-4c7c-92c2-cc39176eff9a req-6f9a1ce2-5677-4c0d-baaa-7d38e43d2060 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Received event network-vif-plugged-4fdcff4a-999b-4a95-bb5f-528102f9556f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:59 np0005539564 nova_compute[226295]: 2025-11-29 08:12:59.754 226310 DEBUG oslo_concurrency.lockutils [req-81e70d06-f72d-4c7c-92c2-cc39176eff9a req-6f9a1ce2-5677-4c0d-baaa-7d38e43d2060 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:59 np0005539564 nova_compute[226295]: 2025-11-29 08:12:59.754 226310 DEBUG oslo_concurrency.lockutils [req-81e70d06-f72d-4c7c-92c2-cc39176eff9a req-6f9a1ce2-5677-4c0d-baaa-7d38e43d2060 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:59 np0005539564 nova_compute[226295]: 2025-11-29 08:12:59.755 226310 DEBUG oslo_concurrency.lockutils [req-81e70d06-f72d-4c7c-92c2-cc39176eff9a req-6f9a1ce2-5677-4c0d-baaa-7d38e43d2060 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:59 np0005539564 nova_compute[226295]: 2025-11-29 08:12:59.755 226310 DEBUG nova.compute.manager [req-81e70d06-f72d-4c7c-92c2-cc39176eff9a req-6f9a1ce2-5677-4c0d-baaa-7d38e43d2060 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] No waiting events found dispatching network-vif-plugged-4fdcff4a-999b-4a95-bb5f-528102f9556f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:59 np0005539564 nova_compute[226295]: 2025-11-29 08:12:59.756 226310 WARNING nova.compute.manager [req-81e70d06-f72d-4c7c-92c2-cc39176eff9a req-6f9a1ce2-5677-4c0d-baaa-7d38e43d2060 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Received unexpected event network-vif-plugged-4fdcff4a-999b-4a95-bb5f-528102f9556f for instance with vm_state active and task_state rebuilding.#033[00m
Nov 29 03:12:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:12:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.003000080s ======
Nov 29 03:12:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:59.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Nov 29 03:13:00 np0005539564 nova_compute[226295]: 2025-11-29 08:13:00.072 226310 DEBUG nova.compute.manager [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Preparing to wait for external event volume-reimaged-c5655890-57a1-4371-8ce4-c9179f1c49bb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:13:00 np0005539564 nova_compute[226295]: 2025-11-29 08:13:00.073 226310 DEBUG oslo_concurrency.lockutils [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Acquiring lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:00 np0005539564 nova_compute[226295]: 2025-11-29 08:13:00.073 226310 DEBUG oslo_concurrency.lockutils [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:00 np0005539564 nova_compute[226295]: 2025-11-29 08:13:00.074 226310 DEBUG oslo_concurrency.lockutils [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:00.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:00 np0005539564 podman[264933]: 2025-11-29 08:13:00.527840431 +0000 UTC m=+0.074194543 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:13:00 np0005539564 podman[264932]: 2025-11-29 08:13:00.545737884 +0000 UTC m=+0.097705717 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:13:00 np0005539564 podman[264931]: 2025-11-29 08:13:00.587599953 +0000 UTC m=+0.141793516 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 29 03:13:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:01.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:02.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:02 np0005539564 nova_compute[226295]: 2025-11-29 08:13:02.149 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:02 np0005539564 nova_compute[226295]: 2025-11-29 08:13:02.300 226310 DEBUG oslo_concurrency.lockutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:13:02 np0005539564 nova_compute[226295]: 2025-11-29 08:13:02.300 226310 DEBUG oslo_concurrency.lockutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:13:02 np0005539564 nova_compute[226295]: 2025-11-29 08:13:02.323 226310 DEBUG nova.compute.manager [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:13:02 np0005539564 nova_compute[226295]: 2025-11-29 08:13:02.442 226310 DEBUG oslo_concurrency.lockutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:13:02 np0005539564 nova_compute[226295]: 2025-11-29 08:13:02.443 226310 DEBUG oslo_concurrency.lockutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:13:02 np0005539564 nova_compute[226295]: 2025-11-29 08:13:02.451 226310 DEBUG nova.virt.hardware [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:13:02 np0005539564 nova_compute[226295]: 2025-11-29 08:13:02.451 226310 INFO nova.compute.claims [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Claim successful on node compute-1.ctlplane.example.com
Nov 29 03:13:02 np0005539564 nova_compute[226295]: 2025-11-29 08:13:02.727 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:02 np0005539564 nova_compute[226295]: 2025-11-29 08:13:02.779 226310 DEBUG oslo_concurrency.processutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:13:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:13:03 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3453105964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:13:03 np0005539564 nova_compute[226295]: 2025-11-29 08:13:03.228 226310 DEBUG oslo_concurrency.processutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:13:03 np0005539564 nova_compute[226295]: 2025-11-29 08:13:03.238 226310 DEBUG nova.compute.provider_tree [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:13:03 np0005539564 nova_compute[226295]: 2025-11-29 08:13:03.280 226310 DEBUG nova.scheduler.client.report [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:13:03 np0005539564 nova_compute[226295]: 2025-11-29 08:13:03.321 226310 DEBUG oslo_concurrency.lockutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:13:03 np0005539564 nova_compute[226295]: 2025-11-29 08:13:03.322 226310 DEBUG nova.compute.manager [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:13:03 np0005539564 nova_compute[226295]: 2025-11-29 08:13:03.572 226310 DEBUG nova.compute.manager [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:13:03 np0005539564 nova_compute[226295]: 2025-11-29 08:13:03.573 226310 DEBUG nova.network.neutron [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:13:03 np0005539564 nova_compute[226295]: 2025-11-29 08:13:03.610 226310 INFO nova.virt.libvirt.driver [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:13:03 np0005539564 nova_compute[226295]: 2025-11-29 08:13:03.660 226310 DEBUG nova.compute.manager [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:13:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:03.724 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:13:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:03.724 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:13:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:03.725 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:13:03 np0005539564 nova_compute[226295]: 2025-11-29 08:13:03.880 226310 DEBUG nova.compute.manager [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:13:03 np0005539564 nova_compute[226295]: 2025-11-29 08:13:03.882 226310 DEBUG nova.virt.libvirt.driver [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:13:03 np0005539564 nova_compute[226295]: 2025-11-29 08:13:03.883 226310 INFO nova.virt.libvirt.driver [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Creating image(s)
Nov 29 03:13:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:03.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:03 np0005539564 nova_compute[226295]: 2025-11-29 08:13:03.922 226310 DEBUG nova.storage.rbd_utils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 48a6ffaa-4f03-4048-bd19-c50aea2863cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:13:03 np0005539564 nova_compute[226295]: 2025-11-29 08:13:03.962 226310 DEBUG nova.storage.rbd_utils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 48a6ffaa-4f03-4048-bd19-c50aea2863cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:13:04 np0005539564 nova_compute[226295]: 2025-11-29 08:13:04.000 226310 DEBUG nova.storage.rbd_utils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 48a6ffaa-4f03-4048-bd19-c50aea2863cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:13:04 np0005539564 nova_compute[226295]: 2025-11-29 08:13:04.006 226310 DEBUG oslo_concurrency.processutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:13:04 np0005539564 nova_compute[226295]: 2025-11-29 08:13:04.070 226310 DEBUG oslo_concurrency.processutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:13:04 np0005539564 nova_compute[226295]: 2025-11-29 08:13:04.072 226310 DEBUG oslo_concurrency.lockutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:13:04 np0005539564 nova_compute[226295]: 2025-11-29 08:13:04.073 226310 DEBUG oslo_concurrency.lockutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:13:04 np0005539564 nova_compute[226295]: 2025-11-29 08:13:04.074 226310 DEBUG oslo_concurrency.lockutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:13:04 np0005539564 nova_compute[226295]: 2025-11-29 08:13:04.116 226310 DEBUG nova.storage.rbd_utils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 48a6ffaa-4f03-4048-bd19-c50aea2863cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:13:04 np0005539564 nova_compute[226295]: 2025-11-29 08:13:04.122 226310 DEBUG oslo_concurrency.processutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 48a6ffaa-4f03-4048-bd19-c50aea2863cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:13:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:04.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:04 np0005539564 nova_compute[226295]: 2025-11-29 08:13:04.447 226310 DEBUG nova.policy [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ca93c8e3eac142c0aa6b61807727dea2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ba867fac17034bb28fe2cdb0fff3af2b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:13:04 np0005539564 nova_compute[226295]: 2025-11-29 08:13:04.495 226310 DEBUG oslo_concurrency.processutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 48a6ffaa-4f03-4048-bd19-c50aea2863cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.373s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:13:04 np0005539564 nova_compute[226295]: 2025-11-29 08:13:04.618 226310 DEBUG nova.storage.rbd_utils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] resizing rbd image 48a6ffaa-4f03-4048-bd19-c50aea2863cc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:13:04 np0005539564 nova_compute[226295]: 2025-11-29 08:13:04.734 226310 DEBUG nova.objects.instance [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'migration_context' on Instance uuid 48a6ffaa-4f03-4048-bd19-c50aea2863cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:13:04 np0005539564 nova_compute[226295]: 2025-11-29 08:13:04.830 226310 DEBUG nova.virt.libvirt.driver [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:13:04 np0005539564 nova_compute[226295]: 2025-11-29 08:13:04.831 226310 DEBUG nova.virt.libvirt.driver [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Ensure instance console log exists: /var/lib/nova/instances/48a6ffaa-4f03-4048-bd19-c50aea2863cc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:13:04 np0005539564 nova_compute[226295]: 2025-11-29 08:13:04.832 226310 DEBUG oslo_concurrency.lockutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:13:04 np0005539564 nova_compute[226295]: 2025-11-29 08:13:04.832 226310 DEBUG oslo_concurrency.lockutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:13:04 np0005539564 nova_compute[226295]: 2025-11-29 08:13:04.832 226310 DEBUG oslo_concurrency.lockutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:13:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:05.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:06.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:06 np0005539564 nova_compute[226295]: 2025-11-29 08:13:06.846 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403971.84505, 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:13:06 np0005539564 nova_compute[226295]: 2025-11-29 08:13:06.847 226310 INFO nova.compute.manager [-] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] VM Stopped (Lifecycle Event)
Nov 29 03:13:06 np0005539564 nova_compute[226295]: 2025-11-29 08:13:06.872 226310 DEBUG nova.compute.manager [None req-a490efc2-c187-4964-a00d-14f577c7ddb1 - - - - - -] [instance: 8ea05c4f-adc6-470b-b8cd-2ee92dbfdf78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:13:07 np0005539564 nova_compute[226295]: 2025-11-29 08:13:07.153 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:07 np0005539564 nova_compute[226295]: 2025-11-29 08:13:07.344 226310 DEBUG nova.network.neutron [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Successfully created port: 18df9eaa-1422-4e4b-ac00-67cdb84e329f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:13:07 np0005539564 nova_compute[226295]: 2025-11-29 08:13:07.729 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:07.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:08.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:09.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:10.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:10 np0005539564 nova_compute[226295]: 2025-11-29 08:13:10.336 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:13:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:11.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:12.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:12 np0005539564 nova_compute[226295]: 2025-11-29 08:13:12.155 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:12 np0005539564 nova_compute[226295]: 2025-11-29 08:13:12.243 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403977.2426045, 5575532e-65f8-4b29-bab0-a0f8e60d032c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:13:12 np0005539564 nova_compute[226295]: 2025-11-29 08:13:12.244 226310 INFO nova.compute.manager [-] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] VM Stopped (Lifecycle Event)
Nov 29 03:13:12 np0005539564 nova_compute[226295]: 2025-11-29 08:13:12.287 226310 DEBUG nova.compute.manager [None req-35b1b757-eb46-48da-9e9d-aa1fdd9dcbff - - - - - -] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:13:12 np0005539564 nova_compute[226295]: 2025-11-29 08:13:12.609 226310 DEBUG nova.network.neutron [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Successfully updated port: 18df9eaa-1422-4e4b-ac00-67cdb84e329f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:13:12 np0005539564 nova_compute[226295]: 2025-11-29 08:13:12.692 226310 DEBUG oslo_concurrency.lockutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:13:12 np0005539564 nova_compute[226295]: 2025-11-29 08:13:12.693 226310 DEBUG oslo_concurrency.lockutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquired lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:13:12 np0005539564 nova_compute[226295]: 2025-11-29 08:13:12.693 226310 DEBUG nova.network.neutron [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:13:12 np0005539564 nova_compute[226295]: 2025-11-29 08:13:12.733 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:12 np0005539564 nova_compute[226295]: 2025-11-29 08:13:12.990 226310 DEBUG nova.compute.manager [req-dbff6a0e-79b4-411e-8b22-8ae0d2d7662c req-f584c401-3c8c-428e-859c-aa3d7b6badfa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received event network-changed-18df9eaa-1422-4e4b-ac00-67cdb84e329f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:13:12 np0005539564 nova_compute[226295]: 2025-11-29 08:13:12.991 226310 DEBUG nova.compute.manager [req-dbff6a0e-79b4-411e-8b22-8ae0d2d7662c req-f584c401-3c8c-428e-859c-aa3d7b6badfa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Refreshing instance network info cache due to event network-changed-18df9eaa-1422-4e4b-ac00-67cdb84e329f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:13:12 np0005539564 nova_compute[226295]: 2025-11-29 08:13:12.991 226310 DEBUG oslo_concurrency.lockutils [req-dbff6a0e-79b4-411e-8b22-8ae0d2d7662c req-f584c401-3c8c-428e-859c-aa3d7b6badfa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:13:13 np0005539564 nova_compute[226295]: 2025-11-29 08:13:13.186 226310 DEBUG nova.network.neutron [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:13:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:13.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:14.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:15.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:15 np0005539564 nova_compute[226295]: 2025-11-29 08:13:15.974 226310 DEBUG nova.compute.manager [req-0fc22380-efef-442f-84f5-0b006cca073a req-735ca8f4-ec04-45d6-93da-daedded36a47 dfacc734c4c542069d1326226ba1a96f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Received event volume-reimaged-c5655890-57a1-4371-8ce4-c9179f1c49bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:13:15 np0005539564 nova_compute[226295]: 2025-11-29 08:13:15.974 226310 DEBUG oslo_concurrency.lockutils [req-0fc22380-efef-442f-84f5-0b006cca073a req-735ca8f4-ec04-45d6-93da-daedded36a47 dfacc734c4c542069d1326226ba1a96f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:13:15 np0005539564 nova_compute[226295]: 2025-11-29 08:13:15.975 226310 DEBUG oslo_concurrency.lockutils [req-0fc22380-efef-442f-84f5-0b006cca073a req-735ca8f4-ec04-45d6-93da-daedded36a47 dfacc734c4c542069d1326226ba1a96f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:13:15 np0005539564 nova_compute[226295]: 2025-11-29 08:13:15.975 226310 DEBUG oslo_concurrency.lockutils [req-0fc22380-efef-442f-84f5-0b006cca073a req-735ca8f4-ec04-45d6-93da-daedded36a47 dfacc734c4c542069d1326226ba1a96f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:13:15 np0005539564 nova_compute[226295]: 2025-11-29 08:13:15.975 226310 DEBUG nova.compute.manager [req-0fc22380-efef-442f-84f5-0b006cca073a req-735ca8f4-ec04-45d6-93da-daedded36a47 dfacc734c4c542069d1326226ba1a96f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Processing event volume-reimaged-c5655890-57a1-4371-8ce4-c9179f1c49bb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:13:15 np0005539564 nova_compute[226295]: 2025-11-29 08:13:15.976 226310 DEBUG nova.compute.manager [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Instance event wait completed in 14 seconds for volume-reimaged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:13:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:16.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:16 np0005539564 nova_compute[226295]: 2025-11-29 08:13:16.968 226310 INFO nova.virt.block_device [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Booting with volume c5655890-57a1-4371-8ce4-c9179f1c49bb at /dev/vda#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.156 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.475 226310 DEBUG nova.network.neutron [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Updating instance_info_cache with network_info: [{"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.537 226310 DEBUG oslo_concurrency.lockutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Releasing lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.538 226310 DEBUG nova.compute.manager [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Instance network_info: |[{"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.538 226310 DEBUG oslo_concurrency.lockutils [req-dbff6a0e-79b4-411e-8b22-8ae0d2d7662c req-f584c401-3c8c-428e-859c-aa3d7b6badfa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.539 226310 DEBUG nova.network.neutron [req-dbff6a0e-79b4-411e-8b22-8ae0d2d7662c req-f584c401-3c8c-428e-859c-aa3d7b6badfa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Refreshing network info cache for port 18df9eaa-1422-4e4b-ac00-67cdb84e329f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.545 226310 DEBUG nova.virt.libvirt.driver [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Start _get_guest_xml network_info=[{"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.548 226310 DEBUG os_brick.utils [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.551 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.556 226310 WARNING nova.virt.libvirt.driver [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.569 231810 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.570 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[f202c9a0-4368-47a1-a965-e119fd1936ae]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.572 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.585 231810 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.585 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[1d541622-7156-4002-bf3e-121c09e3ebe6]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d3b4384eec44', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.587 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.600 231810 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.601 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[9d08659f-8f9a-4de2-a997-0716878b0079]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.603 226310 DEBUG nova.virt.libvirt.host [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.604 226310 DEBUG nova.virt.libvirt.host [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.603 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[6a4627bb-971f-447d-83d5-6162b0c91b16]: (4, '2e858761-3292-4a17-b38f-a169c3064289') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.605 226310 DEBUG oslo_concurrency.processutils [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.656 226310 DEBUG oslo_concurrency.processutils [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] CMD "nvme version" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.661 226310 DEBUG os_brick.initiator.connectors.lightos [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.661 226310 DEBUG os_brick.initiator.connectors.lightos [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.662 226310 DEBUG os_brick.initiator.connectors.lightos [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.663 226310 DEBUG os_brick.utils [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] <== get_connector_properties: return (114ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d3b4384eec44', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2e858761-3292-4a17-b38f-a169c3064289', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.664 226310 DEBUG nova.virt.block_device [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Updating existing volume attachment record: eeb95054-dd7b-498a-83f2-fc2054b7cc15 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.672 226310 DEBUG nova.virt.libvirt.host [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.672 226310 DEBUG nova.virt.libvirt.host [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.675 226310 DEBUG nova.virt.libvirt.driver [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.675 226310 DEBUG nova.virt.hardware [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.676 226310 DEBUG nova.virt.hardware [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.677 226310 DEBUG nova.virt.hardware [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.677 226310 DEBUG nova.virt.hardware [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.678 226310 DEBUG nova.virt.hardware [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.678 226310 DEBUG nova.virt.hardware [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.678 226310 DEBUG nova.virt.hardware [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.679 226310 DEBUG nova.virt.hardware [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.679 226310 DEBUG nova.virt.hardware [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.680 226310 DEBUG nova.virt.hardware [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.680 226310 DEBUG nova.virt.hardware [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.687 226310 DEBUG oslo_concurrency.processutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:17 np0005539564 nova_compute[226295]: 2025-11-29 08:13:17.735 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:17.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:18.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:13:18 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2610712328' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:13:18 np0005539564 nova_compute[226295]: 2025-11-29 08:13:18.211 226310 DEBUG oslo_concurrency.processutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:18 np0005539564 nova_compute[226295]: 2025-11-29 08:13:18.255 226310 DEBUG nova.storage.rbd_utils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 48a6ffaa-4f03-4048-bd19-c50aea2863cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:18 np0005539564 nova_compute[226295]: 2025-11-29 08:13:18.261 226310 DEBUG oslo_concurrency.processutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:13:18 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3422734767' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:13:18 np0005539564 nova_compute[226295]: 2025-11-29 08:13:18.785 226310 DEBUG oslo_concurrency.processutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:18 np0005539564 nova_compute[226295]: 2025-11-29 08:13:18.789 226310 DEBUG nova.virt.libvirt.vif [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:13:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-127920487',display_name='tempest-ServerActionsTestOtherB-server-127920487',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-127920487',id=111,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFR2A4rqHty1PxOihJGr6CLeieY2A6hQbQhWuRk7yYUwOYPvlgBFCeYpXPRg+EImok8PXcjU56J6yMvwfigxZeP4BreCe+MzD3uTdqP8PHZ6U4YNDwkQqqigObBB8nAoaw==',key_name='tempest-keypair-98169709',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-9qzmh09j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=48a6ffaa-4f03-4048-bd19-c50aea2863cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:13:18 np0005539564 nova_compute[226295]: 2025-11-29 08:13:18.790 226310 DEBUG nova.network.os_vif_util [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:13:18 np0005539564 nova_compute[226295]: 2025-11-29 08:13:18.792 226310 DEBUG nova.network.os_vif_util [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:57:d0,bridge_name='br-int',has_traffic_filtering=True,id=18df9eaa-1422-4e4b-ac00-67cdb84e329f,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df9eaa-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:13:18 np0005539564 nova_compute[226295]: 2025-11-29 08:13:18.794 226310 DEBUG nova.objects.instance [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'pci_devices' on Instance uuid 48a6ffaa-4f03-4048-bd19-c50aea2863cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.487 226310 DEBUG nova.virt.libvirt.driver [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:13:19 np0005539564 nova_compute[226295]:  <uuid>48a6ffaa-4f03-4048-bd19-c50aea2863cc</uuid>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:  <name>instance-0000006f</name>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerActionsTestOtherB-server-127920487</nova:name>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:13:17</nova:creationTime>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:13:19 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:        <nova:user uuid="ca93c8e3eac142c0aa6b61807727dea2">tempest-ServerActionsTestOtherB-325732369-project-member</nova:user>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:        <nova:project uuid="ba867fac17034bb28fe2cdb0fff3af2b">tempest-ServerActionsTestOtherB-325732369</nova:project>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:        <nova:port uuid="18df9eaa-1422-4e4b-ac00-67cdb84e329f">
Nov 29 03:13:19 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <entry name="serial">48a6ffaa-4f03-4048-bd19-c50aea2863cc</entry>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <entry name="uuid">48a6ffaa-4f03-4048-bd19-c50aea2863cc</entry>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/48a6ffaa-4f03-4048-bd19-c50aea2863cc_disk">
Nov 29 03:13:19 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:13:19 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/48a6ffaa-4f03-4048-bd19-c50aea2863cc_disk.config">
Nov 29 03:13:19 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:13:19 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:16:57:d0"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <target dev="tap18df9eaa-14"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/48a6ffaa-4f03-4048-bd19-c50aea2863cc/console.log" append="off"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:13:19 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:13:19 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:13:19 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:13:19 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.488 226310 DEBUG nova.compute.manager [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Preparing to wait for external event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.489 226310 DEBUG oslo_concurrency.lockutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.489 226310 DEBUG oslo_concurrency.lockutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.489 226310 DEBUG oslo_concurrency.lockutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.490 226310 DEBUG nova.virt.libvirt.vif [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:13:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-127920487',display_name='tempest-ServerActionsTestOtherB-server-127920487',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-127920487',id=111,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFR2A4rqHty1PxOihJGr6CLeieY2A6hQbQhWuRk7yYUwOYPvlgBFCeYpXPRg+EImok8PXcjU56J6yMvwfigxZeP4BreCe+MzD3uTdqP8PHZ6U4YNDwkQqqigObBB8nAoaw==',key_name='tempest-keypair-98169709',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-9qzmh09j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=48a6ffaa-4f03-4048-bd19-c50aea2863cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.490 226310 DEBUG nova.network.os_vif_util [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.491 226310 DEBUG nova.network.os_vif_util [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:57:d0,bridge_name='br-int',has_traffic_filtering=True,id=18df9eaa-1422-4e4b-ac00-67cdb84e329f,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df9eaa-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.491 226310 DEBUG os_vif [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:57:d0,bridge_name='br-int',has_traffic_filtering=True,id=18df9eaa-1422-4e4b-ac00-67cdb84e329f,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df9eaa-14') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.492 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.492 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.492 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.498 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.498 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18df9eaa-14, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.499 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap18df9eaa-14, col_values=(('external_ids', {'iface-id': '18df9eaa-1422-4e4b-ac00-67cdb84e329f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:57:d0', 'vm-uuid': '48a6ffaa-4f03-4048-bd19-c50aea2863cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.501 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:19 np0005539564 NetworkManager[48997]: <info>  [1764403999.5024] manager: (tap18df9eaa-14): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/187)
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.503 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.514 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.516 226310 INFO os_vif [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:57:d0,bridge_name='br-int',has_traffic_filtering=True,id=18df9eaa-1422-4e4b-ac00-67cdb84e329f,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df9eaa-14')#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.607 226310 DEBUG nova.virt.libvirt.driver [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.607 226310 DEBUG nova.virt.libvirt.driver [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.608 226310 DEBUG nova.virt.libvirt.driver [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No VIF found with MAC fa:16:3e:16:57:d0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.608 226310 INFO nova.virt.libvirt.driver [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Using config drive#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.639 226310 DEBUG nova.storage.rbd_utils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 48a6ffaa-4f03-4048-bd19-c50aea2863cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:19.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.947 226310 DEBUG nova.virt.libvirt.driver [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.948 226310 INFO nova.virt.libvirt.driver [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Creating image(s)#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.949 226310 DEBUG nova.virt.libvirt.driver [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.949 226310 DEBUG nova.virt.libvirt.driver [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Ensure instance console log exists: /var/lib/nova/instances/5575532e-65f8-4b29-bab0-a0f8e60d032c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.949 226310 DEBUG oslo_concurrency.lockutils [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.949 226310 DEBUG oslo_concurrency.lockutils [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.950 226310 DEBUG oslo_concurrency.lockutils [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.952 226310 DEBUG nova.virt.libvirt.driver [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Start _get_guest_xml network_info=[{"id": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "address": "fa:16:3e:5d:a7:ee", "network": {"id": "337f42f7-7833-4286-befc-f5fca120d50f", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1164364075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146c65131f5b423287d348b351399c4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fdcff4a-99", "ovs_interfaceid": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:40:38Z,direct_url=<?>,disk_format='qcow2',id=ed489666-5fa2-4ea4-8005-7a7505ac1b78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-c5655890-57a1-4371-8ce4-c9179f1c49bb', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'c5655890-57a1-4371-8ce4-c9179f1c49bb', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '5575532e-65f8-4b29-bab0-a0f8e60d032c', 'attached_at': '', 'detached_at': '', 'volume_id': 'c5655890-57a1-4371-8ce4-c9179f1c49bb', 'serial': 'c5655890-57a1-4371-8ce4-c9179f1c49bb'}, 'guest_format': None, 'delete_on_termination': True, 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': 'eeb95054-dd7b-498a-83f2-fc2054b7cc15', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.959 226310 WARNING nova.virt.libvirt.driver [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.969 226310 DEBUG nova.virt.libvirt.host [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.970 226310 DEBUG nova.virt.libvirt.host [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.983 226310 DEBUG nova.virt.libvirt.host [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.985 226310 DEBUG nova.virt.libvirt.host [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.987 226310 DEBUG nova.virt.libvirt.driver [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.987 226310 DEBUG nova.virt.hardware [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:40:38Z,direct_url=<?>,disk_format='qcow2',id=ed489666-5fa2-4ea4-8005-7a7505ac1b78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.988 226310 DEBUG nova.virt.hardware [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.989 226310 DEBUG nova.virt.hardware [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.989 226310 DEBUG nova.virt.hardware [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.989 226310 DEBUG nova.virt.hardware [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.990 226310 DEBUG nova.virt.hardware [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.990 226310 DEBUG nova.virt.hardware [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.991 226310 DEBUG nova.virt.hardware [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.991 226310 DEBUG nova.virt.hardware [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.992 226310 DEBUG nova.virt.hardware [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.992 226310 DEBUG nova.virt.hardware [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:13:19 np0005539564 nova_compute[226295]: 2025-11-29 08:13:19.993 226310 DEBUG nova.objects.instance [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5575532e-65f8-4b29-bab0-a0f8e60d032c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.069 226310 DEBUG nova.storage.rbd_utils [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] rbd image 5575532e-65f8-4b29-bab0-a0f8e60d032c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.075 226310 DEBUG oslo_concurrency.processutils [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:20.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:13:20 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/827675621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.553 226310 DEBUG oslo_concurrency.processutils [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.563 226310 INFO nova.virt.libvirt.driver [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Creating config drive at /var/lib/nova/instances/48a6ffaa-4f03-4048-bd19-c50aea2863cc/disk.config#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.569 226310 DEBUG oslo_concurrency.processutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/48a6ffaa-4f03-4048-bd19-c50aea2863cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyc444bda execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.666 226310 DEBUG nova.virt.libvirt.vif [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:12:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-817085192',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1111293699',id=108,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBjacxFQsKdMM4UDJE5tqVty/GRJtgDwjO80+Cb748HjTaOefbBqANvkqMMVhv8OZRL0vzsrbYfDHv6t2rc90ONK+EYpM6HO7fjiT30tNIEWFgoPJhxB+XUGt8iA5muhkg==',key_name='tempest-keypair-1424948510',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:29Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='146c65131f5b423287d348b351399c4e',ramdisk_id='',reservation_id='r-ivfz3olx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_ty
pe='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-1695306825',owner_user_name='tempest-ServerActionsV293TestJSON-1695306825-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9cb37d6d47ac46aaa19aebb2e5b21658',uuid=5575532e-65f8-4b29-bab0-a0f8e60d032c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "address": "fa:16:3e:5d:a7:ee", "network": {"id": "337f42f7-7833-4286-befc-f5fca120d50f", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1164364075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146c65131f5b423287d348b351399c4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fdcff4a-99", "ovs_interfaceid": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.666 226310 DEBUG nova.network.os_vif_util [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Converting VIF {"id": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "address": "fa:16:3e:5d:a7:ee", "network": {"id": "337f42f7-7833-4286-befc-f5fca120d50f", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1164364075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146c65131f5b423287d348b351399c4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fdcff4a-99", "ovs_interfaceid": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.667 226310 DEBUG nova.network.os_vif_util [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5d:a7:ee,bridge_name='br-int',has_traffic_filtering=True,id=4fdcff4a-999b-4a95-bb5f-528102f9556f,network=Network(337f42f7-7833-4286-befc-f5fca120d50f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fdcff4a-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.670 226310 DEBUG nova.virt.libvirt.driver [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:13:20 np0005539564 nova_compute[226295]:  <uuid>5575532e-65f8-4b29-bab0-a0f8e60d032c</uuid>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:  <name>instance-0000006c</name>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerActionsV293TestJSON-server-817085192</nova:name>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:13:19</nova:creationTime>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:13:20 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:        <nova:user uuid="9cb37d6d47ac46aaa19aebb2e5b21658">tempest-ServerActionsV293TestJSON-1695306825-project-member</nova:user>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:        <nova:project uuid="146c65131f5b423287d348b351399c4e">tempest-ServerActionsV293TestJSON-1695306825</nova:project>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:        <nova:port uuid="4fdcff4a-999b-4a95-bb5f-528102f9556f">
Nov 29 03:13:20 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <entry name="serial">5575532e-65f8-4b29-bab0-a0f8e60d032c</entry>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <entry name="uuid">5575532e-65f8-4b29-bab0-a0f8e60d032c</entry>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/5575532e-65f8-4b29-bab0-a0f8e60d032c_disk.config">
Nov 29 03:13:20 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:13:20 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="volumes/volume-c5655890-57a1-4371-8ce4-c9179f1c49bb">
Nov 29 03:13:20 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:13:20 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <serial>c5655890-57a1-4371-8ce4-c9179f1c49bb</serial>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:5d:a7:ee"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <target dev="tap4fdcff4a-99"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/5575532e-65f8-4b29-bab0-a0f8e60d032c/console.log" append="off"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:13:20 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:13:20 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:13:20 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:13:20 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.671 226310 DEBUG nova.virt.libvirt.vif [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:12:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-817085192',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1111293699',id=108,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBjacxFQsKdMM4UDJE5tqVty/GRJtgDwjO80+Cb748HjTaOefbBqANvkqMMVhv8OZRL0vzsrbYfDHv6t2rc90ONK+EYpM6HO7fjiT30tNIEWFgoPJhxB+XUGt8iA5muhkg==',key_name='tempest-keypair-1424948510',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:29Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='146c65131f5b423287d348b351399c4e',ramdisk_id='',reservation_id='r-ivfz3olx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_ty
pe='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-1695306825',owner_user_name='tempest-ServerActionsV293TestJSON-1695306825-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9cb37d6d47ac46aaa19aebb2e5b21658',uuid=5575532e-65f8-4b29-bab0-a0f8e60d032c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "address": "fa:16:3e:5d:a7:ee", "network": {"id": "337f42f7-7833-4286-befc-f5fca120d50f", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1164364075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146c65131f5b423287d348b351399c4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fdcff4a-99", "ovs_interfaceid": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.671 226310 DEBUG nova.network.os_vif_util [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Converting VIF {"id": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "address": "fa:16:3e:5d:a7:ee", "network": {"id": "337f42f7-7833-4286-befc-f5fca120d50f", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1164364075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146c65131f5b423287d348b351399c4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fdcff4a-99", "ovs_interfaceid": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.671 226310 DEBUG nova.network.os_vif_util [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5d:a7:ee,bridge_name='br-int',has_traffic_filtering=True,id=4fdcff4a-999b-4a95-bb5f-528102f9556f,network=Network(337f42f7-7833-4286-befc-f5fca120d50f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fdcff4a-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.672 226310 DEBUG os_vif [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:a7:ee,bridge_name='br-int',has_traffic_filtering=True,id=4fdcff4a-999b-4a95-bb5f-528102f9556f,network=Network(337f42f7-7833-4286-befc-f5fca120d50f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fdcff4a-99') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.676 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.676 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.677 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.681 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.682 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fdcff4a-99, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.683 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4fdcff4a-99, col_values=(('external_ids', {'iface-id': '4fdcff4a-999b-4a95-bb5f-528102f9556f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:a7:ee', 'vm-uuid': '5575532e-65f8-4b29-bab0-a0f8e60d032c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.686 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:20 np0005539564 NetworkManager[48997]: <info>  [1764404000.6873] manager: (tap4fdcff4a-99): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/188)
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.689 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.695 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.697 226310 INFO os_vif [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:a7:ee,bridge_name='br-int',has_traffic_filtering=True,id=4fdcff4a-999b-4a95-bb5f-528102f9556f,network=Network(337f42f7-7833-4286-befc-f5fca120d50f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fdcff4a-99')#033[00m
Nov 29 03:13:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.732 226310 DEBUG oslo_concurrency.processutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/48a6ffaa-4f03-4048-bd19-c50aea2863cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyc444bda" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.779 226310 DEBUG nova.storage.rbd_utils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 48a6ffaa-4f03-4048-bd19-c50aea2863cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.786 226310 DEBUG oslo_concurrency.processutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/48a6ffaa-4f03-4048-bd19-c50aea2863cc/disk.config 48a6ffaa-4f03-4048-bd19-c50aea2863cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.886 226310 DEBUG nova.virt.libvirt.driver [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.887 226310 DEBUG nova.virt.libvirt.driver [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.888 226310 DEBUG nova.virt.libvirt.driver [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] No VIF found with MAC fa:16:3e:5d:a7:ee, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.890 226310 INFO nova.virt.libvirt.driver [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Using config drive#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.933 226310 DEBUG nova.storage.rbd_utils [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] rbd image 5575532e-65f8-4b29-bab0-a0f8e60d032c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:20 np0005539564 nova_compute[226295]: 2025-11-29 08:13:20.985 226310 DEBUG nova.objects.instance [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5575532e-65f8-4b29-bab0-a0f8e60d032c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:13:21 np0005539564 nova_compute[226295]: 2025-11-29 08:13:21.014 226310 DEBUG oslo_concurrency.processutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/48a6ffaa-4f03-4048-bd19-c50aea2863cc/disk.config 48a6ffaa-4f03-4048-bd19-c50aea2863cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.227s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:21 np0005539564 nova_compute[226295]: 2025-11-29 08:13:21.014 226310 INFO nova.virt.libvirt.driver [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Deleting local config drive /var/lib/nova/instances/48a6ffaa-4f03-4048-bd19-c50aea2863cc/disk.config because it was imported into RBD.#033[00m
Nov 29 03:13:21 np0005539564 nova_compute[226295]: 2025-11-29 08:13:21.074 226310 DEBUG nova.objects.instance [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lazy-loading 'keypairs' on Instance uuid 5575532e-65f8-4b29-bab0-a0f8e60d032c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:13:21 np0005539564 kernel: tap18df9eaa-14: entered promiscuous mode
Nov 29 03:13:21 np0005539564 NetworkManager[48997]: <info>  [1764404001.0923] manager: (tap18df9eaa-14): new Tun device (/org/freedesktop/NetworkManager/Devices/189)
Nov 29 03:13:21 np0005539564 nova_compute[226295]: 2025-11-29 08:13:21.098 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:13:21Z|00365|binding|INFO|Claiming lport 18df9eaa-1422-4e4b-ac00-67cdb84e329f for this chassis.
Nov 29 03:13:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:13:21Z|00366|binding|INFO|18df9eaa-1422-4e4b-ac00-67cdb84e329f: Claiming fa:16:3e:16:57:d0 10.100.0.13
Nov 29 03:13:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:13:21Z|00367|binding|INFO|Setting lport 18df9eaa-1422-4e4b-ac00-67cdb84e329f ovn-installed in OVS
Nov 29 03:13:21 np0005539564 systemd-udevd[265385]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:13:21 np0005539564 nova_compute[226295]: 2025-11-29 08:13:21.147 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:21 np0005539564 systemd-machined[190128]: New machine qemu-45-instance-0000006f.
Nov 29 03:13:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:13:21Z|00368|binding|INFO|Setting lport 18df9eaa-1422-4e4b-ac00-67cdb84e329f up in Southbound
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.162 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:57:d0 10.100.0.13'], port_security=['fa:16:3e:16:57:d0 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '48a6ffaa-4f03-4048-bd19-c50aea2863cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba867fac17034bb28fe2cdb0fff3af2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a54db614-4504-4e8e-a3a5-27d3f60f6cdf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5e4b2f3-5e6e-48f8-b35a-ab61c62108a6, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=18df9eaa-1422-4e4b-ac00-67cdb84e329f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:13:21 np0005539564 NetworkManager[48997]: <info>  [1764404001.1635] device (tap18df9eaa-14): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.163 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 18df9eaa-1422-4e4b-ac00-67cdb84e329f in datapath 4d5b8c11-b69e-4a74-846b-03943fb29a81 bound to our chassis#033[00m
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.165 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d5b8c11-b69e-4a74-846b-03943fb29a81#033[00m
Nov 29 03:13:21 np0005539564 NetworkManager[48997]: <info>  [1764404001.1660] device (tap18df9eaa-14): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:13:21 np0005539564 systemd[1]: Started Virtual Machine qemu-45-instance-0000006f.
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.182 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[45f67960-99eb-48cc-809d-32bea8a91606]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.183 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d5b8c11-b1 in ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.185 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d5b8c11-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.185 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f4679cc4-a55d-45c7-9434-6b0b1fa4df31]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.186 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[df93675d-8b71-4e1e-8b88-b59b4c0cf006]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.204 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[54082e5d-bd47-46ee-b73d-20c8f7aae660]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.233 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1832af6d-61d7-4fa0-902d-e797d63a9ed4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.276 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[63a34c25-54e5-4578-b685-7e6e9743ede6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:21 np0005539564 NetworkManager[48997]: <info>  [1764404001.2885] manager: (tap4d5b8c11-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/190)
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.286 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9756f411-2446-44d5-bce1-f3f814ecb761]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.331 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[0bec4132-2b2b-4ea2-8792-1b0dfd1cb59a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.336 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[d21daa87-8058-4641-b51e-315a5988b4c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:21 np0005539564 NetworkManager[48997]: <info>  [1764404001.3748] device (tap4d5b8c11-b0): carrier: link connected
Nov 29 03:13:21 np0005539564 nova_compute[226295]: 2025-11-29 08:13:21.380 226310 DEBUG nova.network.neutron [req-dbff6a0e-79b4-411e-8b22-8ae0d2d7662c req-f584c401-3c8c-428e-859c-aa3d7b6badfa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Updated VIF entry in instance network info cache for port 18df9eaa-1422-4e4b-ac00-67cdb84e329f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:13:21 np0005539564 nova_compute[226295]: 2025-11-29 08:13:21.381 226310 DEBUG nova.network.neutron [req-dbff6a0e-79b4-411e-8b22-8ae0d2d7662c req-f584c401-3c8c-428e-859c-aa3d7b6badfa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Updating instance_info_cache with network_info: [{"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.382 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[072d4afc-3ddb-4ee7-b3b6-dde0d8c9f79d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.408 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ccbd88c1-3cb1-4fb5-b8b0-80d308295933]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d5b8c11-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:06:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 116], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697605, 'reachable_time': 24366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265425, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:21 np0005539564 nova_compute[226295]: 2025-11-29 08:13:21.433 226310 DEBUG oslo_concurrency.lockutils [req-dbff6a0e-79b4-411e-8b22-8ae0d2d7662c req-f584c401-3c8c-428e-859c-aa3d7b6badfa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.435 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[46278a89-4d08-401a-9d10-ef504e00506f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:6d1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697605, 'tstamp': 697605}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265426, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.463 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[360320eb-8797-42b0-91b9-74d84f22933a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d5b8c11-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:06:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 116], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697605, 'reachable_time': 24366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265427, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.514 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8b6d588b-6fca-4c65-8841-fd94097e4832]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.604 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e3b5f9-95a9-4789-9553-7c1188e277f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.606 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d5b8c11-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.607 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.608 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d5b8c11-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:21 np0005539564 nova_compute[226295]: 2025-11-29 08:13:21.610 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:21 np0005539564 NetworkManager[48997]: <info>  [1764404001.6118] manager: (tap4d5b8c11-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/191)
Nov 29 03:13:21 np0005539564 kernel: tap4d5b8c11-b0: entered promiscuous mode
Nov 29 03:13:21 np0005539564 nova_compute[226295]: 2025-11-29 08:13:21.616 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.617 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d5b8c11-b0, col_values=(('external_ids', {'iface-id': 'a2e47e7a-aef0-4c09-aeef-4a0d63960d7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:13:21Z|00369|binding|INFO|Releasing lport a2e47e7a-aef0-4c09-aeef-4a0d63960d7b from this chassis (sb_readonly=0)
Nov 29 03:13:21 np0005539564 nova_compute[226295]: 2025-11-29 08:13:21.619 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:21 np0005539564 nova_compute[226295]: 2025-11-29 08:13:21.650 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:21 np0005539564 nova_compute[226295]: 2025-11-29 08:13:21.655 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.656 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d5b8c11-b69e-4a74-846b-03943fb29a81.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d5b8c11-b69e-4a74-846b-03943fb29a81.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.657 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c6d0f5a3-f1ff-4be1-899a-f85e1526732d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.658 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-4d5b8c11-b69e-4a74-846b-03943fb29a81
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/4d5b8c11-b69e-4a74-846b-03943fb29a81.pid.haproxy
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 4d5b8c11-b69e-4a74-846b-03943fb29a81
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:13:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:21.659 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'env', 'PROCESS_TAG=haproxy-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d5b8c11-b69e-4a74-846b-03943fb29a81.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:13:21 np0005539564 nova_compute[226295]: 2025-11-29 08:13:21.795 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404001.794866, 48a6ffaa-4f03-4048-bd19-c50aea2863cc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:13:21 np0005539564 nova_compute[226295]: 2025-11-29 08:13:21.795 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] VM Started (Lifecycle Event)#033[00m
Nov 29 03:13:21 np0005539564 nova_compute[226295]: 2025-11-29 08:13:21.819 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:21 np0005539564 nova_compute[226295]: 2025-11-29 08:13:21.825 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404001.7950675, 48a6ffaa-4f03-4048-bd19-c50aea2863cc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:13:21 np0005539564 nova_compute[226295]: 2025-11-29 08:13:21.826 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:13:21 np0005539564 nova_compute[226295]: 2025-11-29 08:13:21.853 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:21 np0005539564 nova_compute[226295]: 2025-11-29 08:13:21.858 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:13:21 np0005539564 nova_compute[226295]: 2025-11-29 08:13:21.885 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:13:21 np0005539564 nova_compute[226295]: 2025-11-29 08:13:21.917 226310 INFO nova.virt.libvirt.driver [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Creating config drive at /var/lib/nova/instances/5575532e-65f8-4b29-bab0-a0f8e60d032c/disk.config#033[00m
Nov 29 03:13:21 np0005539564 nova_compute[226295]: 2025-11-29 08:13:21.925 226310 DEBUG oslo_concurrency.processutils [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5575532e-65f8-4b29-bab0-a0f8e60d032c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpya18tpps execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:21.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.024 226310 DEBUG nova.compute.manager [req-24794333-7545-4bf8-8e63-176e1fbd0f5c req-6c148644-badc-4817-8366-8b6f3f4bd3a0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.025 226310 DEBUG oslo_concurrency.lockutils [req-24794333-7545-4bf8-8e63-176e1fbd0f5c req-6c148644-badc-4817-8366-8b6f3f4bd3a0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.026 226310 DEBUG oslo_concurrency.lockutils [req-24794333-7545-4bf8-8e63-176e1fbd0f5c req-6c148644-badc-4817-8366-8b6f3f4bd3a0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.026 226310 DEBUG oslo_concurrency.lockutils [req-24794333-7545-4bf8-8e63-176e1fbd0f5c req-6c148644-badc-4817-8366-8b6f3f4bd3a0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.027 226310 DEBUG nova.compute.manager [req-24794333-7545-4bf8-8e63-176e1fbd0f5c req-6c148644-badc-4817-8366-8b6f3f4bd3a0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Processing event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.028 226310 DEBUG nova.compute.manager [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.035 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404002.0348504, 48a6ffaa-4f03-4048-bd19-c50aea2863cc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.036 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.040 226310 DEBUG nova.virt.libvirt.driver [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.045 226310 INFO nova.virt.libvirt.driver [-] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Instance spawned successfully.#033[00m
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.045 226310 DEBUG nova.virt.libvirt.driver [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.070 226310 DEBUG oslo_concurrency.processutils [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5575532e-65f8-4b29-bab0-a0f8e60d032c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpya18tpps" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.102 226310 DEBUG nova.storage.rbd_utils [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] rbd image 5575532e-65f8-4b29-bab0-a0f8e60d032c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.107 226310 DEBUG oslo_concurrency.processutils [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5575532e-65f8-4b29-bab0-a0f8e60d032c/disk.config 5575532e-65f8-4b29-bab0-a0f8e60d032c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:22 np0005539564 podman[265508]: 2025-11-29 08:13:22.125684292 +0000 UTC m=+0.056479824 container create bb0ad30917e8299976005c262903ad77c385d1133ee40082ead14dbd8d94d6f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.156 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:22.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.159 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.166 226310 DEBUG nova.virt.libvirt.driver [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.167 226310 DEBUG nova.virt.libvirt.driver [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.167 226310 DEBUG nova.virt.libvirt.driver [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.167 226310 DEBUG nova.virt.libvirt.driver [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.168 226310 DEBUG nova.virt.libvirt.driver [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.168 226310 DEBUG nova.virt.libvirt.driver [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.173 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:13:22 np0005539564 systemd[1]: Started libpod-conmon-bb0ad30917e8299976005c262903ad77c385d1133ee40082ead14dbd8d94d6f5.scope.
Nov 29 03:13:22 np0005539564 podman[265508]: 2025-11-29 08:13:22.096725591 +0000 UTC m=+0.027521143 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:13:22 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.212 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:13:22 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7945ee8c7dbf1c178094b6e694e404139d661a362af422cc25834ddfdae7f86/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:22 np0005539564 podman[265508]: 2025-11-29 08:13:22.234383224 +0000 UTC m=+0.165178796 container init bb0ad30917e8299976005c262903ad77c385d1133ee40082ead14dbd8d94d6f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:13:22 np0005539564 podman[265508]: 2025-11-29 08:13:22.242035441 +0000 UTC m=+0.172830993 container start bb0ad30917e8299976005c262903ad77c385d1133ee40082ead14dbd8d94d6f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.252 226310 INFO nova.compute.manager [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Took 18.37 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.253 226310 DEBUG nova.compute.manager [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:22 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[265542]: [NOTICE]   (265561) : New worker (265566) forked
Nov 29 03:13:22 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[265542]: [NOTICE]   (265561) : Loading success.
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.327 226310 DEBUG oslo_concurrency.processutils [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5575532e-65f8-4b29-bab0-a0f8e60d032c/disk.config 5575532e-65f8-4b29-bab0-a0f8e60d032c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.220s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.328 226310 INFO nova.virt.libvirt.driver [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Deleting local config drive /var/lib/nova/instances/5575532e-65f8-4b29-bab0-a0f8e60d032c/disk.config because it was imported into RBD.#033[00m
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.362 226310 INFO nova.compute.manager [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Took 19.97 seconds to build instance.#033[00m
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.385 226310 DEBUG oslo_concurrency.lockutils [None req-25415328-736b-4a55-8a2f-5791dcfdec18 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:22 np0005539564 kernel: tap4fdcff4a-99: entered promiscuous mode
Nov 29 03:13:22 np0005539564 NetworkManager[48997]: <info>  [1764404002.4287] manager: (tap4fdcff4a-99): new Tun device (/org/freedesktop/NetworkManager/Devices/192)
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.430 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:22 np0005539564 ovn_controller[130591]: 2025-11-29T08:13:22Z|00370|binding|INFO|Claiming lport 4fdcff4a-999b-4a95-bb5f-528102f9556f for this chassis.
Nov 29 03:13:22 np0005539564 ovn_controller[130591]: 2025-11-29T08:13:22Z|00371|binding|INFO|4fdcff4a-999b-4a95-bb5f-528102f9556f: Claiming fa:16:3e:5d:a7:ee 10.100.0.7
Nov 29 03:13:22 np0005539564 systemd-udevd[265413]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:13:22 np0005539564 ovn_controller[130591]: 2025-11-29T08:13:22Z|00372|binding|INFO|Setting lport 4fdcff4a-999b-4a95-bb5f-528102f9556f ovn-installed in OVS
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.456 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:22 np0005539564 ovn_controller[130591]: 2025-11-29T08:13:22Z|00373|binding|INFO|Setting lport 4fdcff4a-999b-4a95-bb5f-528102f9556f up in Southbound
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.460 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:a7:ee 10.100.0.7'], port_security=['fa:16:3e:5d:a7:ee 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5575532e-65f8-4b29-bab0-a0f8e60d032c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-337f42f7-7833-4286-befc-f5fca120d50f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '146c65131f5b423287d348b351399c4e', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e83feb7e-e837-4a27-87d5-f25ed404c193', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f225a6f7-d086-4f61-9a6c-4cc9bd46d793, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=4fdcff4a-999b-4a95-bb5f-528102f9556f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:13:22 np0005539564 NetworkManager[48997]: <info>  [1764404002.4657] device (tap4fdcff4a-99): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.468 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 4fdcff4a-999b-4a95-bb5f-528102f9556f in datapath 337f42f7-7833-4286-befc-f5fca120d50f bound to our chassis#033[00m
Nov 29 03:13:22 np0005539564 NetworkManager[48997]: <info>  [1764404002.4713] device (tap4fdcff4a-99): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.472 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 337f42f7-7833-4286-befc-f5fca120d50f#033[00m
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.488 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[733bd48e-bcd3-4144-b460-5d2cb344cf98]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.490 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap337f42f7-71 in ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.492 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap337f42f7-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.493 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ef7b69df-7554-48af-9cd2-763cc7c8f612]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.493 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[cc1371a3-4eb6-4d60-b9db-a600c73f66ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:22 np0005539564 systemd-machined[190128]: New machine qemu-46-instance-0000006c.
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.507 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[1df911ad-fa4c-4ac3-9a88-7b503dab0b96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:22 np0005539564 systemd[1]: Started Virtual Machine qemu-46-instance-0000006c.
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.530 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[83f1ed4a-2511-4ac9-84ad-77e8c0394f82]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.558 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[19fd6fd5-cf4f-4698-9e50-4d72847d9d79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:22 np0005539564 NetworkManager[48997]: <info>  [1764404002.5688] manager: (tap337f42f7-70): new Veth device (/org/freedesktop/NetworkManager/Devices/193)
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.573 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[09fea65b-58c5-494e-b52c-59d798eb020f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.628 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[14c78219-3d6d-45b4-93ba-baa3da86bb5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.636 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[07d59c9e-833e-40af-9702-0fcc7ae7e06d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:22 np0005539564 NetworkManager[48997]: <info>  [1764404002.6813] device (tap337f42f7-70): carrier: link connected
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.695 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[4e4b82bc-6573-4589-a55e-336709877d62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.718 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[392c8da0-578a-4aa3-ad27-e398617aef14]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap337f42f7-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:dd:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697736, 'reachable_time': 42291, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265607, 'error': None, 'target': 'ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.740 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[652c2e1c-e678-4c8e-bfdb-6cd6de51add3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:dd97'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697736, 'tstamp': 697736}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265608, 'error': None, 'target': 'ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.760 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7645213b-915d-48eb-b39e-87bf9d1d5414]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap337f42f7-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:dd:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697736, 'reachable_time': 42291, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265609, 'error': None, 'target': 'ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.802 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e8d31715-71a5-4787-9fa2-3265ae3ae21d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.883 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[56e1f6be-5989-4292-a182-4a3e1bce784e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.886 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap337f42f7-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.887 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.889 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap337f42f7-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.891 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:22 np0005539564 kernel: tap337f42f7-70: entered promiscuous mode
Nov 29 03:13:22 np0005539564 NetworkManager[48997]: <info>  [1764404002.8923] manager: (tap337f42f7-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/194)
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.895 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.904 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap337f42f7-70, col_values=(('external_ids', {'iface-id': 'a440551b-4851-4b07-bfbe-398f9cdcd887'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.906 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:22 np0005539564 ovn_controller[130591]: 2025-11-29T08:13:22Z|00374|binding|INFO|Releasing lport a440551b-4851-4b07-bfbe-398f9cdcd887 from this chassis (sb_readonly=0)
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.907 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.909 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/337f42f7-7833-4286-befc-f5fca120d50f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/337f42f7-7833-4286-befc-f5fca120d50f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:13:22 np0005539564 nova_compute[226295]: 2025-11-29 08:13:22.925 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.924 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bf612f09-6faf-4b0e-b0d8-60e55b85c012]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.928 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-337f42f7-7833-4286-befc-f5fca120d50f
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/337f42f7-7833-4286-befc-f5fca120d50f.pid.haproxy
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 337f42f7-7833-4286-befc-f5fca120d50f
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:13:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:22.930 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f', 'env', 'PROCESS_TAG=haproxy-337f42f7-7833-4286-befc-f5fca120d50f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/337f42f7-7833-4286-befc-f5fca120d50f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:13:23 np0005539564 nova_compute[226295]: 2025-11-29 08:13:23.223 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404003.2226954, 5575532e-65f8-4b29-bab0-a0f8e60d032c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:13:23 np0005539564 nova_compute[226295]: 2025-11-29 08:13:23.224 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:13:23 np0005539564 nova_compute[226295]: 2025-11-29 08:13:23.226 226310 DEBUG nova.compute.manager [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:13:23 np0005539564 nova_compute[226295]: 2025-11-29 08:13:23.227 226310 DEBUG nova.virt.libvirt.driver [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:13:23 np0005539564 nova_compute[226295]: 2025-11-29 08:13:23.231 226310 INFO nova.virt.libvirt.driver [-] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Instance spawned successfully.#033[00m
Nov 29 03:13:23 np0005539564 nova_compute[226295]: 2025-11-29 08:13:23.232 226310 DEBUG nova.virt.libvirt.driver [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:13:23 np0005539564 nova_compute[226295]: 2025-11-29 08:13:23.293 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:23 np0005539564 nova_compute[226295]: 2025-11-29 08:13:23.303 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:13:23 np0005539564 nova_compute[226295]: 2025-11-29 08:13:23.307 226310 DEBUG nova.virt.libvirt.driver [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:23 np0005539564 nova_compute[226295]: 2025-11-29 08:13:23.308 226310 DEBUG nova.virt.libvirt.driver [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:23 np0005539564 nova_compute[226295]: 2025-11-29 08:13:23.313 226310 DEBUG nova.virt.libvirt.driver [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:23 np0005539564 nova_compute[226295]: 2025-11-29 08:13:23.314 226310 DEBUG nova.virt.libvirt.driver [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:23 np0005539564 nova_compute[226295]: 2025-11-29 08:13:23.314 226310 DEBUG nova.virt.libvirt.driver [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:23 np0005539564 nova_compute[226295]: 2025-11-29 08:13:23.316 226310 DEBUG nova.virt.libvirt.driver [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:23 np0005539564 nova_compute[226295]: 2025-11-29 08:13:23.393 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 03:13:23 np0005539564 nova_compute[226295]: 2025-11-29 08:13:23.394 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404003.2234833, 5575532e-65f8-4b29-bab0-a0f8e60d032c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:13:23 np0005539564 nova_compute[226295]: 2025-11-29 08:13:23.394 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] VM Started (Lifecycle Event)#033[00m
Nov 29 03:13:23 np0005539564 podman[265680]: 2025-11-29 08:13:23.422574625 +0000 UTC m=+0.070305718 container create f466550ce3169cd5a2c585c3ed37d172b82063aedf87c191b620b751f34b8be5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 03:13:23 np0005539564 nova_compute[226295]: 2025-11-29 08:13:23.434 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:23 np0005539564 nova_compute[226295]: 2025-11-29 08:13:23.440 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:13:23 np0005539564 nova_compute[226295]: 2025-11-29 08:13:23.470 226310 DEBUG nova.compute.manager [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:23 np0005539564 systemd[1]: Started libpod-conmon-f466550ce3169cd5a2c585c3ed37d172b82063aedf87c191b620b751f34b8be5.scope.
Nov 29 03:13:23 np0005539564 podman[265680]: 2025-11-29 08:13:23.392510244 +0000 UTC m=+0.040241407 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:13:23 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:13:23 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f6efc1d32a4d1c9a85707f27218706f474badd62bb3e0608a74146eff0e69f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:23 np0005539564 nova_compute[226295]: 2025-11-29 08:13:23.543 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 03:13:23 np0005539564 podman[265680]: 2025-11-29 08:13:23.545148431 +0000 UTC m=+0.192879504 container init f466550ce3169cd5a2c585c3ed37d172b82063aedf87c191b620b751f34b8be5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:13:23 np0005539564 podman[265680]: 2025-11-29 08:13:23.553052374 +0000 UTC m=+0.200783447 container start f466550ce3169cd5a2c585c3ed37d172b82063aedf87c191b620b751f34b8be5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 03:13:23 np0005539564 nova_compute[226295]: 2025-11-29 08:13:23.576 226310 DEBUG oslo_concurrency.lockutils [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:23 np0005539564 nova_compute[226295]: 2025-11-29 08:13:23.576 226310 DEBUG oslo_concurrency.lockutils [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:23 np0005539564 nova_compute[226295]: 2025-11-29 08:13:23.577 226310 DEBUG nova.objects.instance [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:13:23 np0005539564 neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f[265696]: [NOTICE]   (265700) : New worker (265702) forked
Nov 29 03:13:23 np0005539564 neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f[265696]: [NOTICE]   (265700) : Loading success.
Nov 29 03:13:23 np0005539564 nova_compute[226295]: 2025-11-29 08:13:23.672 226310 DEBUG oslo_concurrency.lockutils [None req-0fc22380-efef-442f-84f5-0b006cca073a 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:23.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:24.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:25 np0005539564 nova_compute[226295]: 2025-11-29 08:13:25.379 226310 DEBUG nova.compute.manager [req-363a6d88-62e8-4453-863c-be41f75e6d75 req-47d7e78c-34b5-4f6f-9ca8-ec9c5404272a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:25 np0005539564 nova_compute[226295]: 2025-11-29 08:13:25.380 226310 DEBUG oslo_concurrency.lockutils [req-363a6d88-62e8-4453-863c-be41f75e6d75 req-47d7e78c-34b5-4f6f-9ca8-ec9c5404272a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:25 np0005539564 nova_compute[226295]: 2025-11-29 08:13:25.381 226310 DEBUG oslo_concurrency.lockutils [req-363a6d88-62e8-4453-863c-be41f75e6d75 req-47d7e78c-34b5-4f6f-9ca8-ec9c5404272a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:25 np0005539564 nova_compute[226295]: 2025-11-29 08:13:25.382 226310 DEBUG oslo_concurrency.lockutils [req-363a6d88-62e8-4453-863c-be41f75e6d75 req-47d7e78c-34b5-4f6f-9ca8-ec9c5404272a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:25 np0005539564 nova_compute[226295]: 2025-11-29 08:13:25.383 226310 DEBUG nova.compute.manager [req-363a6d88-62e8-4453-863c-be41f75e6d75 req-47d7e78c-34b5-4f6f-9ca8-ec9c5404272a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] No waiting events found dispatching network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:13:25 np0005539564 nova_compute[226295]: 2025-11-29 08:13:25.383 226310 WARNING nova.compute.manager [req-363a6d88-62e8-4453-863c-be41f75e6d75 req-47d7e78c-34b5-4f6f-9ca8-ec9c5404272a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received unexpected event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f for instance with vm_state active and task_state None.#033[00m
Nov 29 03:13:25 np0005539564 nova_compute[226295]: 2025-11-29 08:13:25.688 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:25.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:26.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:26 np0005539564 nova_compute[226295]: 2025-11-29 08:13:26.462 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:26.463 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:13:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:26.466 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:13:27 np0005539564 nova_compute[226295]: 2025-11-29 08:13:27.162 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:27.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:28.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:28 np0005539564 nova_compute[226295]: 2025-11-29 08:13:28.608 226310 DEBUG nova.compute.manager [req-7141b8a9-bd5a-40fb-857f-dddcddbea6c3 req-c1e0c59e-17b7-465c-bda6-357516a9e324 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Received event network-vif-plugged-4fdcff4a-999b-4a95-bb5f-528102f9556f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:28 np0005539564 nova_compute[226295]: 2025-11-29 08:13:28.609 226310 DEBUG oslo_concurrency.lockutils [req-7141b8a9-bd5a-40fb-857f-dddcddbea6c3 req-c1e0c59e-17b7-465c-bda6-357516a9e324 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:28 np0005539564 nova_compute[226295]: 2025-11-29 08:13:28.610 226310 DEBUG oslo_concurrency.lockutils [req-7141b8a9-bd5a-40fb-857f-dddcddbea6c3 req-c1e0c59e-17b7-465c-bda6-357516a9e324 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:28 np0005539564 nova_compute[226295]: 2025-11-29 08:13:28.610 226310 DEBUG oslo_concurrency.lockutils [req-7141b8a9-bd5a-40fb-857f-dddcddbea6c3 req-c1e0c59e-17b7-465c-bda6-357516a9e324 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:28 np0005539564 nova_compute[226295]: 2025-11-29 08:13:28.610 226310 DEBUG nova.compute.manager [req-7141b8a9-bd5a-40fb-857f-dddcddbea6c3 req-c1e0c59e-17b7-465c-bda6-357516a9e324 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] No waiting events found dispatching network-vif-plugged-4fdcff4a-999b-4a95-bb5f-528102f9556f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:13:28 np0005539564 nova_compute[226295]: 2025-11-29 08:13:28.611 226310 WARNING nova.compute.manager [req-7141b8a9-bd5a-40fb-857f-dddcddbea6c3 req-c1e0c59e-17b7-465c-bda6-357516a9e324 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Received unexpected event network-vif-plugged-4fdcff4a-999b-4a95-bb5f-528102f9556f for instance with vm_state active and task_state None.#033[00m
Nov 29 03:13:28 np0005539564 nova_compute[226295]: 2025-11-29 08:13:28.612 226310 DEBUG nova.compute.manager [req-7141b8a9-bd5a-40fb-857f-dddcddbea6c3 req-c1e0c59e-17b7-465c-bda6-357516a9e324 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Received event network-vif-plugged-4fdcff4a-999b-4a95-bb5f-528102f9556f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:28 np0005539564 nova_compute[226295]: 2025-11-29 08:13:28.613 226310 DEBUG oslo_concurrency.lockutils [req-7141b8a9-bd5a-40fb-857f-dddcddbea6c3 req-c1e0c59e-17b7-465c-bda6-357516a9e324 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:28 np0005539564 nova_compute[226295]: 2025-11-29 08:13:28.613 226310 DEBUG oslo_concurrency.lockutils [req-7141b8a9-bd5a-40fb-857f-dddcddbea6c3 req-c1e0c59e-17b7-465c-bda6-357516a9e324 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:28 np0005539564 nova_compute[226295]: 2025-11-29 08:13:28.614 226310 DEBUG oslo_concurrency.lockutils [req-7141b8a9-bd5a-40fb-857f-dddcddbea6c3 req-c1e0c59e-17b7-465c-bda6-357516a9e324 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:28 np0005539564 nova_compute[226295]: 2025-11-29 08:13:28.614 226310 DEBUG nova.compute.manager [req-7141b8a9-bd5a-40fb-857f-dddcddbea6c3 req-c1e0c59e-17b7-465c-bda6-357516a9e324 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] No waiting events found dispatching network-vif-plugged-4fdcff4a-999b-4a95-bb5f-528102f9556f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:13:28 np0005539564 nova_compute[226295]: 2025-11-29 08:13:28.614 226310 WARNING nova.compute.manager [req-7141b8a9-bd5a-40fb-857f-dddcddbea6c3 req-c1e0c59e-17b7-465c-bda6-357516a9e324 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Received unexpected event network-vif-plugged-4fdcff4a-999b-4a95-bb5f-528102f9556f for instance with vm_state active and task_state None.#033[00m
Nov 29 03:13:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:29.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:30.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:30 np0005539564 nova_compute[226295]: 2025-11-29 08:13:30.692 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:31 np0005539564 nova_compute[226295]: 2025-11-29 08:13:31.301 226310 DEBUG nova.compute.manager [req-e8fa219e-10e1-43b2-a3e9-4b5c8570bf71 req-7b1923f1-aa72-49d0-9e24-c328e55c6e7b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received event network-changed-18df9eaa-1422-4e4b-ac00-67cdb84e329f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:31 np0005539564 nova_compute[226295]: 2025-11-29 08:13:31.302 226310 DEBUG nova.compute.manager [req-e8fa219e-10e1-43b2-a3e9-4b5c8570bf71 req-7b1923f1-aa72-49d0-9e24-c328e55c6e7b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Refreshing instance network info cache due to event network-changed-18df9eaa-1422-4e4b-ac00-67cdb84e329f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:13:31 np0005539564 nova_compute[226295]: 2025-11-29 08:13:31.303 226310 DEBUG oslo_concurrency.lockutils [req-e8fa219e-10e1-43b2-a3e9-4b5c8570bf71 req-7b1923f1-aa72-49d0-9e24-c328e55c6e7b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:13:31 np0005539564 nova_compute[226295]: 2025-11-29 08:13:31.303 226310 DEBUG oslo_concurrency.lockutils [req-e8fa219e-10e1-43b2-a3e9-4b5c8570bf71 req-7b1923f1-aa72-49d0-9e24-c328e55c6e7b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:13:31 np0005539564 nova_compute[226295]: 2025-11-29 08:13:31.304 226310 DEBUG nova.network.neutron [req-e8fa219e-10e1-43b2-a3e9-4b5c8570bf71 req-7b1923f1-aa72-49d0-9e24-c328e55c6e7b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Refreshing network info cache for port 18df9eaa-1422-4e4b-ac00-67cdb84e329f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:13:31 np0005539564 systemd[1]: Starting dnf makecache...
Nov 29 03:13:31 np0005539564 podman[265712]: 2025-11-29 08:13:31.546028016 +0000 UTC m=+0.092987749 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:13:31 np0005539564 podman[265713]: 2025-11-29 08:13:31.554838274 +0000 UTC m=+0.093351309 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:13:31 np0005539564 podman[265711]: 2025-11-29 08:13:31.598895212 +0000 UTC m=+0.145681290 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:13:31 np0005539564 dnf[265714]: Metadata cache refreshed recently.
Nov 29 03:13:31 np0005539564 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 29 03:13:31 np0005539564 systemd[1]: Finished dnf makecache.
Nov 29 03:13:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:31.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:32 np0005539564 nova_compute[226295]: 2025-11-29 08:13:32.167 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:32.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:33.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:34.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:34 np0005539564 nova_compute[226295]: 2025-11-29 08:13:34.338 226310 DEBUG nova.network.neutron [req-e8fa219e-10e1-43b2-a3e9-4b5c8570bf71 req-7b1923f1-aa72-49d0-9e24-c328e55c6e7b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Updated VIF entry in instance network info cache for port 18df9eaa-1422-4e4b-ac00-67cdb84e329f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:13:34 np0005539564 nova_compute[226295]: 2025-11-29 08:13:34.339 226310 DEBUG nova.network.neutron [req-e8fa219e-10e1-43b2-a3e9-4b5c8570bf71 req-7b1923f1-aa72-49d0-9e24-c328e55c6e7b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Updating instance_info_cache with network_info: [{"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:13:34 np0005539564 nova_compute[226295]: 2025-11-29 08:13:34.380 226310 DEBUG oslo_concurrency.lockutils [req-e8fa219e-10e1-43b2-a3e9-4b5c8570bf71 req-7b1923f1-aa72-49d0-9e24-c328e55c6e7b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:13:35 np0005539564 nova_compute[226295]: 2025-11-29 08:13:35.695 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:35 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Nov 29 03:13:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:35.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:36.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:36.468 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:13:36 np0005539564 ovn_controller[130591]: 2025-11-29T08:13:36Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:16:57:d0 10.100.0.13
Nov 29 03:13:36 np0005539564 ovn_controller[130591]: 2025-11-29T08:13:36Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:16:57:d0 10.100.0.13
Nov 29 03:13:37 np0005539564 nova_compute[226295]: 2025-11-29 08:13:37.169 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:37 np0005539564 podman[266163]: 2025-11-29 08:13:37.828636283 +0000 UTC m=+0.038632783 container create 789c91f60f256c8240a76e8d551a2bf4392f3886c9ddab4dd6d6e3326be91846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:13:37 np0005539564 systemd[1]: Started libpod-conmon-789c91f60f256c8240a76e8d551a2bf4392f3886c9ddab4dd6d6e3326be91846.scope.
Nov 29 03:13:37 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:13:37 np0005539564 podman[266163]: 2025-11-29 08:13:37.810288718 +0000 UTC m=+0.020285248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:13:37 np0005539564 podman[266163]: 2025-11-29 08:13:37.924960171 +0000 UTC m=+0.134956701 container init 789c91f60f256c8240a76e8d551a2bf4392f3886c9ddab4dd6d6e3326be91846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:13:37 np0005539564 podman[266163]: 2025-11-29 08:13:37.93900523 +0000 UTC m=+0.149001760 container start 789c91f60f256c8240a76e8d551a2bf4392f3886c9ddab4dd6d6e3326be91846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_visvesvaraya, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:13:37 np0005539564 podman[266163]: 2025-11-29 08:13:37.943265995 +0000 UTC m=+0.153262515 container attach 789c91f60f256c8240a76e8d551a2bf4392f3886c9ddab4dd6d6e3326be91846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_visvesvaraya, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 03:13:37 np0005539564 gifted_visvesvaraya[266180]: 167 167
Nov 29 03:13:37 np0005539564 systemd[1]: libpod-789c91f60f256c8240a76e8d551a2bf4392f3886c9ddab4dd6d6e3326be91846.scope: Deactivated successfully.
Nov 29 03:13:37 np0005539564 conmon[266180]: conmon 789c91f60f256c8240a7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-789c91f60f256c8240a76e8d551a2bf4392f3886c9ddab4dd6d6e3326be91846.scope/container/memory.events
Nov 29 03:13:37 np0005539564 podman[266163]: 2025-11-29 08:13:37.948097155 +0000 UTC m=+0.158093655 container died 789c91f60f256c8240a76e8d551a2bf4392f3886c9ddab4dd6d6e3326be91846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:13:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:37.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:37 np0005539564 systemd[1]: var-lib-containers-storage-overlay-16e483193a9cccdfa9269fc921a736bb0ed773c51990947d0e23fa422c28b277-merged.mount: Deactivated successfully.
Nov 29 03:13:38 np0005539564 podman[266163]: 2025-11-29 08:13:38.000075457 +0000 UTC m=+0.210071967 container remove 789c91f60f256c8240a76e8d551a2bf4392f3886c9ddab4dd6d6e3326be91846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_visvesvaraya, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 03:13:38 np0005539564 systemd[1]: libpod-conmon-789c91f60f256c8240a76e8d551a2bf4392f3886c9ddab4dd6d6e3326be91846.scope: Deactivated successfully.
Nov 29 03:13:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:38.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:38 np0005539564 podman[266202]: 2025-11-29 08:13:38.245876638 +0000 UTC m=+0.050065822 container create 14b3d3933b0c7aaafd94c7d16ea29922eafcb3bf035529e13d9cfbe102058e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:13:38 np0005539564 systemd[1]: Started libpod-conmon-14b3d3933b0c7aaafd94c7d16ea29922eafcb3bf035529e13d9cfbe102058e0c.scope.
Nov 29 03:13:38 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:13:38 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e78e3ab78c25e2bc3ddf8aa6022e582a86a73fd4fe6ff9ef224554d54bb1a982/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:38 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e78e3ab78c25e2bc3ddf8aa6022e582a86a73fd4fe6ff9ef224554d54bb1a982/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:38 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e78e3ab78c25e2bc3ddf8aa6022e582a86a73fd4fe6ff9ef224554d54bb1a982/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:38 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e78e3ab78c25e2bc3ddf8aa6022e582a86a73fd4fe6ff9ef224554d54bb1a982/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:38 np0005539564 podman[266202]: 2025-11-29 08:13:38.22778884 +0000 UTC m=+0.031978024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:13:38 np0005539564 podman[266202]: 2025-11-29 08:13:38.337474398 +0000 UTC m=+0.141663582 container init 14b3d3933b0c7aaafd94c7d16ea29922eafcb3bf035529e13d9cfbe102058e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_visvesvaraya, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:13:38 np0005539564 podman[266202]: 2025-11-29 08:13:38.346616575 +0000 UTC m=+0.150805729 container start 14b3d3933b0c7aaafd94c7d16ea29922eafcb3bf035529e13d9cfbe102058e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_visvesvaraya, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:13:38 np0005539564 podman[266202]: 2025-11-29 08:13:38.350851859 +0000 UTC m=+0.155041043 container attach 14b3d3933b0c7aaafd94c7d16ea29922eafcb3bf035529e13d9cfbe102058e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_visvesvaraya, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:13:38 np0005539564 ovn_controller[130591]: 2025-11-29T08:13:38Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5d:a7:ee 10.100.0.7
Nov 29 03:13:38 np0005539564 ovn_controller[130591]: 2025-11-29T08:13:38Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:a7:ee 10.100.0.7
Nov 29 03:13:39 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:39 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]: [
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:    {
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:        "available": false,
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:        "ceph_device": false,
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:        "lsm_data": {},
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:        "lvs": [],
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:        "path": "/dev/sr0",
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:        "rejected_reasons": [
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:            "Insufficient space (<5GB)",
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:            "Has a FileSystem"
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:        ],
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:        "sys_api": {
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:            "actuators": null,
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:            "device_nodes": "sr0",
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:            "devname": "sr0",
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:            "human_readable_size": "482.00 KB",
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:            "id_bus": "ata",
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:            "model": "QEMU DVD-ROM",
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:            "nr_requests": "2",
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:            "parent": "/dev/sr0",
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:            "partitions": {},
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:            "path": "/dev/sr0",
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:            "removable": "1",
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:            "rev": "2.5+",
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:            "ro": "0",
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:            "rotational": "1",
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:            "sas_address": "",
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:            "sas_device_handle": "",
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:            "scheduler_mode": "mq-deadline",
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:            "sectors": 0,
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:            "sectorsize": "2048",
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:            "size": 493568.0,
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:            "support_discard": "2048",
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:            "type": "disk",
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:            "vendor": "QEMU"
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:        }
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]:    }
Nov 29 03:13:39 np0005539564 lucid_visvesvaraya[266219]: ]
Nov 29 03:13:39 np0005539564 systemd[1]: libpod-14b3d3933b0c7aaafd94c7d16ea29922eafcb3bf035529e13d9cfbe102058e0c.scope: Deactivated successfully.
Nov 29 03:13:39 np0005539564 podman[266202]: 2025-11-29 08:13:39.849212436 +0000 UTC m=+1.653401630 container died 14b3d3933b0c7aaafd94c7d16ea29922eafcb3bf035529e13d9cfbe102058e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:13:39 np0005539564 systemd[1]: libpod-14b3d3933b0c7aaafd94c7d16ea29922eafcb3bf035529e13d9cfbe102058e0c.scope: Consumed 1.539s CPU time.
Nov 29 03:13:39 np0005539564 systemd[1]: var-lib-containers-storage-overlay-e78e3ab78c25e2bc3ddf8aa6022e582a86a73fd4fe6ff9ef224554d54bb1a982-merged.mount: Deactivated successfully.
Nov 29 03:13:39 np0005539564 podman[266202]: 2025-11-29 08:13:39.91831863 +0000 UTC m=+1.722507794 container remove 14b3d3933b0c7aaafd94c7d16ea29922eafcb3bf035529e13d9cfbe102058e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 03:13:39 np0005539564 systemd[1]: libpod-conmon-14b3d3933b0c7aaafd94c7d16ea29922eafcb3bf035529e13d9cfbe102058e0c.scope: Deactivated successfully.
Nov 29 03:13:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:39.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:40.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:40 np0005539564 nova_compute[226295]: 2025-11-29 08:13:40.699 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:41.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:42 np0005539564 nova_compute[226295]: 2025-11-29 08:13:42.173 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:42.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:43 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:43 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:43 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:13:43 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:43 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:13:43 np0005539564 nova_compute[226295]: 2025-11-29 08:13:43.930 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:13:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:43.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:44.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:44 np0005539564 nova_compute[226295]: 2025-11-29 08:13:44.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:13:45 np0005539564 nova_compute[226295]: 2025-11-29 08:13:45.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:13:45 np0005539564 nova_compute[226295]: 2025-11-29 08:13:45.703 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:45.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:46.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:47 np0005539564 nova_compute[226295]: 2025-11-29 08:13:47.177 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:47.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:48 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:48 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:48.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:48 np0005539564 nova_compute[226295]: 2025-11-29 08:13:48.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:13:48 np0005539564 nova_compute[226295]: 2025-11-29 08:13:48.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:13:49 np0005539564 nova_compute[226295]: 2025-11-29 08:13:49.301 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-5575532e-65f8-4b29-bab0-a0f8e60d032c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:13:49 np0005539564 nova_compute[226295]: 2025-11-29 08:13:49.301 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-5575532e-65f8-4b29-bab0-a0f8e60d032c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:13:49 np0005539564 nova_compute[226295]: 2025-11-29 08:13:49.302 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:13:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:49.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:50.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:50 np0005539564 nova_compute[226295]: 2025-11-29 08:13:50.706 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:51 np0005539564 nova_compute[226295]: 2025-11-29 08:13:51.848 226310 DEBUG oslo_concurrency.lockutils [None req-451c0474-860f-41aa-9c6f-d2f981dea287 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:13:51 np0005539564 nova_compute[226295]: 2025-11-29 08:13:51.848 226310 DEBUG oslo_concurrency.lockutils [None req-451c0474-860f-41aa-9c6f-d2f981dea287 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:13:51 np0005539564 nova_compute[226295]: 2025-11-29 08:13:51.864 226310 DEBUG nova.objects.instance [None req-451c0474-860f-41aa-9c6f-d2f981dea287 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'flavor' on Instance uuid 48a6ffaa-4f03-4048-bd19-c50aea2863cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:13:51 np0005539564 nova_compute[226295]: 2025-11-29 08:13:51.915 226310 DEBUG oslo_concurrency.lockutils [None req-451c0474-860f-41aa-9c6f-d2f981dea287 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:13:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:51.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:52 np0005539564 nova_compute[226295]: 2025-11-29 08:13:52.179 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:52.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:52 np0005539564 nova_compute[226295]: 2025-11-29 08:13:52.256 226310 DEBUG oslo_concurrency.lockutils [None req-451c0474-860f-41aa-9c6f-d2f981dea287 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:52 np0005539564 nova_compute[226295]: 2025-11-29 08:13:52.257 226310 DEBUG oslo_concurrency.lockutils [None req-451c0474-860f-41aa-9c6f-d2f981dea287 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:52 np0005539564 nova_compute[226295]: 2025-11-29 08:13:52.258 226310 INFO nova.compute.manager [None req-451c0474-860f-41aa-9c6f-d2f981dea287 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Attaching volume e19fd9ae-371b-4152-b2b2-910bd950e653 to /dev/vdb#033[00m
Nov 29 03:13:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e285 e285: 3 total, 3 up, 3 in
Nov 29 03:13:52 np0005539564 nova_compute[226295]: 2025-11-29 08:13:52.644 226310 DEBUG os_brick.utils [None req-451c0474-860f-41aa-9c6f-d2f981dea287 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:13:52 np0005539564 nova_compute[226295]: 2025-11-29 08:13:52.646 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:52 np0005539564 nova_compute[226295]: 2025-11-29 08:13:52.660 231810 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:52 np0005539564 nova_compute[226295]: 2025-11-29 08:13:52.661 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[02385505-9351-4082-832a-f9aca18a70af]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:52 np0005539564 nova_compute[226295]: 2025-11-29 08:13:52.662 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:52 np0005539564 nova_compute[226295]: 2025-11-29 08:13:52.672 231810 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:52 np0005539564 nova_compute[226295]: 2025-11-29 08:13:52.672 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[0295a8ac-093c-455c-b459-c45b295a0d57]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d3b4384eec44', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:52 np0005539564 nova_compute[226295]: 2025-11-29 08:13:52.674 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:52 np0005539564 nova_compute[226295]: 2025-11-29 08:13:52.683 231810 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:52 np0005539564 nova_compute[226295]: 2025-11-29 08:13:52.684 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[e724023b-72b0-461e-b45c-a9d981a1c812]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:52 np0005539564 nova_compute[226295]: 2025-11-29 08:13:52.686 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[1b6fb138-f708-48ea-b382-5a6544270f9d]: (4, '2e858761-3292-4a17-b38f-a169c3064289') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:52 np0005539564 nova_compute[226295]: 2025-11-29 08:13:52.687 226310 DEBUG oslo_concurrency.processutils [None req-451c0474-860f-41aa-9c6f-d2f981dea287 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:52 np0005539564 nova_compute[226295]: 2025-11-29 08:13:52.735 226310 DEBUG oslo_concurrency.processutils [None req-451c0474-860f-41aa-9c6f-d2f981dea287 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "nvme version" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:52 np0005539564 nova_compute[226295]: 2025-11-29 08:13:52.739 226310 DEBUG os_brick.initiator.connectors.lightos [None req-451c0474-860f-41aa-9c6f-d2f981dea287 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:13:52 np0005539564 nova_compute[226295]: 2025-11-29 08:13:52.740 226310 DEBUG os_brick.initiator.connectors.lightos [None req-451c0474-860f-41aa-9c6f-d2f981dea287 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:13:52 np0005539564 nova_compute[226295]: 2025-11-29 08:13:52.740 226310 DEBUG os_brick.initiator.connectors.lightos [None req-451c0474-860f-41aa-9c6f-d2f981dea287 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:13:52 np0005539564 nova_compute[226295]: 2025-11-29 08:13:52.741 226310 DEBUG os_brick.utils [None req-451c0474-860f-41aa-9c6f-d2f981dea287 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] <== get_connector_properties: return (96ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d3b4384eec44', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2e858761-3292-4a17-b38f-a169c3064289', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:13:52 np0005539564 nova_compute[226295]: 2025-11-29 08:13:52.742 226310 DEBUG nova.virt.block_device [None req-451c0474-860f-41aa-9c6f-d2f981dea287 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Updating existing volume attachment record: a84ca83b-facb-4a2c-8046-34d303905b5e _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:13:53 np0005539564 nova_compute[226295]: 2025-11-29 08:13:53.625 226310 DEBUG nova.objects.instance [None req-451c0474-860f-41aa-9c6f-d2f981dea287 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'flavor' on Instance uuid 48a6ffaa-4f03-4048-bd19-c50aea2863cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:13:53 np0005539564 nova_compute[226295]: 2025-11-29 08:13:53.707 226310 DEBUG nova.virt.libvirt.driver [None req-451c0474-860f-41aa-9c6f-d2f981dea287 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Attempting to attach volume e19fd9ae-371b-4152-b2b2-910bd950e653 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:13:53 np0005539564 nova_compute[226295]: 2025-11-29 08:13:53.710 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Updating instance_info_cache with network_info: [{"id": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "address": "fa:16:3e:5d:a7:ee", "network": {"id": "337f42f7-7833-4286-befc-f5fca120d50f", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1164364075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146c65131f5b423287d348b351399c4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fdcff4a-99", "ovs_interfaceid": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:13:53 np0005539564 nova_compute[226295]: 2025-11-29 08:13:53.713 226310 DEBUG nova.virt.libvirt.guest [None req-451c0474-860f-41aa-9c6f-d2f981dea287 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:13:53 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:13:53 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-e19fd9ae-371b-4152-b2b2-910bd950e653">
Nov 29 03:13:53 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:53 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:53 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:53 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:13:53 np0005539564 nova_compute[226295]:  <auth username="openstack">
Nov 29 03:13:53 np0005539564 nova_compute[226295]:    <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:13:53 np0005539564 nova_compute[226295]:  </auth>
Nov 29 03:13:53 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:13:53 np0005539564 nova_compute[226295]:  <serial>e19fd9ae-371b-4152-b2b2-910bd950e653</serial>
Nov 29 03:13:53 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:13:53 np0005539564 nova_compute[226295]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:13:53 np0005539564 nova_compute[226295]: 2025-11-29 08:13:53.736 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-5575532e-65f8-4b29-bab0-a0f8e60d032c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:13:53 np0005539564 nova_compute[226295]: 2025-11-29 08:13:53.737 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:13:53 np0005539564 nova_compute[226295]: 2025-11-29 08:13:53.737 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:53 np0005539564 nova_compute[226295]: 2025-11-29 08:13:53.738 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:53 np0005539564 nova_compute[226295]: 2025-11-29 08:13:53.739 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:53 np0005539564 nova_compute[226295]: 2025-11-29 08:13:53.739 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:13:53 np0005539564 nova_compute[226295]: 2025-11-29 08:13:53.856 226310 DEBUG nova.virt.libvirt.driver [None req-451c0474-860f-41aa-9c6f-d2f981dea287 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:13:53 np0005539564 nova_compute[226295]: 2025-11-29 08:13:53.856 226310 DEBUG nova.virt.libvirt.driver [None req-451c0474-860f-41aa-9c6f-d2f981dea287 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:13:53 np0005539564 nova_compute[226295]: 2025-11-29 08:13:53.857 226310 DEBUG nova.virt.libvirt.driver [None req-451c0474-860f-41aa-9c6f-d2f981dea287 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:13:53 np0005539564 nova_compute[226295]: 2025-11-29 08:13:53.857 226310 DEBUG nova.virt.libvirt.driver [None req-451c0474-860f-41aa-9c6f-d2f981dea287 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No VIF found with MAC fa:16:3e:16:57:d0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:13:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:53.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:54.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:54 np0005539564 nova_compute[226295]: 2025-11-29 08:13:54.393 226310 DEBUG oslo_concurrency.lockutils [None req-451c0474-860f-41aa-9c6f-d2f981dea287 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:55 np0005539564 nova_compute[226295]: 2025-11-29 08:13:55.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:55 np0005539564 nova_compute[226295]: 2025-11-29 08:13:55.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:55 np0005539564 nova_compute[226295]: 2025-11-29 08:13:55.399 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:55 np0005539564 nova_compute[226295]: 2025-11-29 08:13:55.399 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:55 np0005539564 nova_compute[226295]: 2025-11-29 08:13:55.400 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:55 np0005539564 nova_compute[226295]: 2025-11-29 08:13:55.400 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:13:55 np0005539564 nova_compute[226295]: 2025-11-29 08:13:55.401 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:55 np0005539564 nova_compute[226295]: 2025-11-29 08:13:55.709 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:13:55 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2413399167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:13:55 np0005539564 nova_compute[226295]: 2025-11-29 08:13:55.941 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:55.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:56 np0005539564 nova_compute[226295]: 2025-11-29 08:13:56.053 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:13:56 np0005539564 nova_compute[226295]: 2025-11-29 08:13:56.054 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:13:56 np0005539564 nova_compute[226295]: 2025-11-29 08:13:56.054 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:13:56 np0005539564 nova_compute[226295]: 2025-11-29 08:13:56.059 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:13:56 np0005539564 nova_compute[226295]: 2025-11-29 08:13:56.059 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:13:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:56.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:56 np0005539564 nova_compute[226295]: 2025-11-29 08:13:56.341 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:13:56 np0005539564 nova_compute[226295]: 2025-11-29 08:13:56.342 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4104MB free_disk=20.78482437133789GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:13:56 np0005539564 nova_compute[226295]: 2025-11-29 08:13:56.342 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:56 np0005539564 nova_compute[226295]: 2025-11-29 08:13:56.343 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:56 np0005539564 nova_compute[226295]: 2025-11-29 08:13:56.437 226310 INFO nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Updating resource usage from migration 2909cc6b-3d8b-4f0b-bc0b-f0caf4f98d5f#033[00m
Nov 29 03:13:56 np0005539564 nova_compute[226295]: 2025-11-29 08:13:56.502 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 5575532e-65f8-4b29-bab0-a0f8e60d032c actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:13:56 np0005539564 nova_compute[226295]: 2025-11-29 08:13:56.503 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 48a6ffaa-4f03-4048-bd19-c50aea2863cc actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:13:56 np0005539564 nova_compute[226295]: 2025-11-29 08:13:56.503 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:13:56 np0005539564 nova_compute[226295]: 2025-11-29 08:13:56.503 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:13:56 np0005539564 nova_compute[226295]: 2025-11-29 08:13:56.574 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:13:57 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4279235389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:13:57 np0005539564 nova_compute[226295]: 2025-11-29 08:13:57.071 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:57 np0005539564 nova_compute[226295]: 2025-11-29 08:13:57.081 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:13:57 np0005539564 nova_compute[226295]: 2025-11-29 08:13:57.099 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:13:57 np0005539564 nova_compute[226295]: 2025-11-29 08:13:57.162 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:13:57 np0005539564 nova_compute[226295]: 2025-11-29 08:13:57.163 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:57 np0005539564 nova_compute[226295]: 2025-11-29 08:13:57.181 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:57.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:13:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:58.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:13:59 np0005539564 nova_compute[226295]: 2025-11-29 08:13:59.828 226310 DEBUG oslo_concurrency.lockutils [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Acquiring lock "5575532e-65f8-4b29-bab0-a0f8e60d032c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:59 np0005539564 nova_compute[226295]: 2025-11-29 08:13:59.829 226310 DEBUG oslo_concurrency.lockutils [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:59 np0005539564 nova_compute[226295]: 2025-11-29 08:13:59.829 226310 DEBUG oslo_concurrency.lockutils [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Acquiring lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:59 np0005539564 nova_compute[226295]: 2025-11-29 08:13:59.829 226310 DEBUG oslo_concurrency.lockutils [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:59 np0005539564 nova_compute[226295]: 2025-11-29 08:13:59.830 226310 DEBUG oslo_concurrency.lockutils [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:59 np0005539564 nova_compute[226295]: 2025-11-29 08:13:59.831 226310 INFO nova.compute.manager [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Terminating instance#033[00m
Nov 29 03:13:59 np0005539564 nova_compute[226295]: 2025-11-29 08:13:59.833 226310 DEBUG nova.compute.manager [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:13:59 np0005539564 nova_compute[226295]: 2025-11-29 08:13:59.863 226310 DEBUG oslo_concurrency.lockutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:13:59 np0005539564 nova_compute[226295]: 2025-11-29 08:13:59.863 226310 DEBUG oslo_concurrency.lockutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquired lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:13:59 np0005539564 nova_compute[226295]: 2025-11-29 08:13:59.864 226310 DEBUG nova.network.neutron [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:13:59 np0005539564 kernel: tap4fdcff4a-99 (unregistering): left promiscuous mode
Nov 29 03:13:59 np0005539564 NetworkManager[48997]: <info>  [1764404039.9133] device (tap4fdcff4a-99): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:13:59 np0005539564 ovn_controller[130591]: 2025-11-29T08:13:59Z|00375|binding|INFO|Releasing lport 4fdcff4a-999b-4a95-bb5f-528102f9556f from this chassis (sb_readonly=0)
Nov 29 03:13:59 np0005539564 nova_compute[226295]: 2025-11-29 08:13:59.927 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:59 np0005539564 ovn_controller[130591]: 2025-11-29T08:13:59Z|00376|binding|INFO|Setting lport 4fdcff4a-999b-4a95-bb5f-528102f9556f down in Southbound
Nov 29 03:13:59 np0005539564 ovn_controller[130591]: 2025-11-29T08:13:59Z|00377|binding|INFO|Removing iface tap4fdcff4a-99 ovn-installed in OVS
Nov 29 03:13:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:59.938 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:a7:ee 10.100.0.7'], port_security=['fa:16:3e:5d:a7:ee 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5575532e-65f8-4b29-bab0-a0f8e60d032c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-337f42f7-7833-4286-befc-f5fca120d50f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '146c65131f5b423287d348b351399c4e', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'e83feb7e-e837-4a27-87d5-f25ed404c193', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f225a6f7-d086-4f61-9a6c-4cc9bd46d793, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=4fdcff4a-999b-4a95-bb5f-528102f9556f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:13:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:59.940 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 4fdcff4a-999b-4a95-bb5f-528102f9556f in datapath 337f42f7-7833-4286-befc-f5fca120d50f unbound from our chassis#033[00m
Nov 29 03:13:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:59.943 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 337f42f7-7833-4286-befc-f5fca120d50f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:13:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:59.945 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ea6e7f99-17af-48cb-9272-12217969134d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:13:59.946 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f namespace which is not needed anymore#033[00m
Nov 29 03:13:59 np0005539564 nova_compute[226295]: 2025-11-29 08:13:59.956 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:13:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:59.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:59 np0005539564 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Nov 29 03:13:59 np0005539564 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000006c.scope: Consumed 15.958s CPU time.
Nov 29 03:13:59 np0005539564 systemd-machined[190128]: Machine qemu-46-instance-0000006c terminated.
Nov 29 03:14:00 np0005539564 kernel: tap4fdcff4a-99: entered promiscuous mode
Nov 29 03:14:00 np0005539564 NetworkManager[48997]: <info>  [1764404040.0687] manager: (tap4fdcff4a-99): new Tun device (/org/freedesktop/NetworkManager/Devices/195)
Nov 29 03:14:00 np0005539564 systemd-udevd[267743]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.070 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:00 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:00Z|00378|binding|INFO|Claiming lport 4fdcff4a-999b-4a95-bb5f-528102f9556f for this chassis.
Nov 29 03:14:00 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:00Z|00379|binding|INFO|4fdcff4a-999b-4a95-bb5f-528102f9556f: Claiming fa:16:3e:5d:a7:ee 10.100.0.7
Nov 29 03:14:00 np0005539564 kernel: tap4fdcff4a-99 (unregistering): left promiscuous mode
Nov 29 03:14:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:00.085 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:a7:ee 10.100.0.7'], port_security=['fa:16:3e:5d:a7:ee 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5575532e-65f8-4b29-bab0-a0f8e60d032c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-337f42f7-7833-4286-befc-f5fca120d50f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '146c65131f5b423287d348b351399c4e', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'e83feb7e-e837-4a27-87d5-f25ed404c193', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f225a6f7-d086-4f61-9a6c-4cc9bd46d793, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=4fdcff4a-999b-4a95-bb5f-528102f9556f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:00 np0005539564 neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f[265696]: [NOTICE]   (265700) : haproxy version is 2.8.14-c23fe91
Nov 29 03:14:00 np0005539564 neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f[265696]: [NOTICE]   (265700) : path to executable is /usr/sbin/haproxy
Nov 29 03:14:00 np0005539564 neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f[265696]: [WARNING]  (265700) : Exiting Master process...
Nov 29 03:14:00 np0005539564 neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f[265696]: [WARNING]  (265700) : Exiting Master process...
Nov 29 03:14:00 np0005539564 neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f[265696]: [ALERT]    (265700) : Current worker (265702) exited with code 143 (Terminated)
Nov 29 03:14:00 np0005539564 neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f[265696]: [WARNING]  (265700) : All workers exited. Exiting... (0)
Nov 29 03:14:00 np0005539564 systemd[1]: libpod-f466550ce3169cd5a2c585c3ed37d172b82063aedf87c191b620b751f34b8be5.scope: Deactivated successfully.
Nov 29 03:14:00 np0005539564 podman[267761]: 2025-11-29 08:14:00.106846851 +0000 UTC m=+0.054171242 container died f466550ce3169cd5a2c585c3ed37d172b82063aedf87c191b620b751f34b8be5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:14:00 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:00Z|00380|binding|INFO|Setting lport 4fdcff4a-999b-4a95-bb5f-528102f9556f ovn-installed in OVS
Nov 29 03:14:00 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:00Z|00381|binding|INFO|Setting lport 4fdcff4a-999b-4a95-bb5f-528102f9556f up in Southbound
Nov 29 03:14:00 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:00Z|00382|binding|INFO|Releasing lport 4fdcff4a-999b-4a95-bb5f-528102f9556f from this chassis (sb_readonly=1)
Nov 29 03:14:00 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:00Z|00383|if_status|INFO|Dropped 1 log messages in last 772 seconds (most recently, 772 seconds ago) due to excessive rate
Nov 29 03:14:00 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:00Z|00384|if_status|INFO|Not setting lport 4fdcff4a-999b-4a95-bb5f-528102f9556f down as sb is readonly
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.117 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:00 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:00Z|00385|binding|INFO|Removing iface tap4fdcff4a-99 ovn-installed in OVS
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.119 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:00 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:00Z|00386|binding|INFO|Releasing lport 4fdcff4a-999b-4a95-bb5f-528102f9556f from this chassis (sb_readonly=0)
Nov 29 03:14:00 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:00Z|00387|binding|INFO|Setting lport 4fdcff4a-999b-4a95-bb5f-528102f9556f down in Southbound
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.122 226310 INFO nova.virt.libvirt.driver [-] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Instance destroyed successfully.#033[00m
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.122 226310 DEBUG nova.objects.instance [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lazy-loading 'resources' on Instance uuid 5575532e-65f8-4b29-bab0-a0f8e60d032c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.132 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:00.134 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:a7:ee 10.100.0.7'], port_security=['fa:16:3e:5d:a7:ee 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5575532e-65f8-4b29-bab0-a0f8e60d032c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-337f42f7-7833-4286-befc-f5fca120d50f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '146c65131f5b423287d348b351399c4e', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'e83feb7e-e837-4a27-87d5-f25ed404c193', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f225a6f7-d086-4f61-9a6c-4cc9bd46d793, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=4fdcff4a-999b-4a95-bb5f-528102f9556f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.141 226310 DEBUG nova.virt.libvirt.vif [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:12:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-817085192',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1111293699',id=108,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBjacxFQsKdMM4UDJE5tqVty/GRJtgDwjO80+Cb748HjTaOefbBqANvkqMMVhv8OZRL0vzsrbYfDHv6t2rc90ONK+EYpM6HO7fjiT30tNIEWFgoPJhxB+XUGt8iA5muhkg==',key_name='tempest-keypair-1424948510',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:13:23Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='146c65131f5b423287d348b351399c4e',ramdisk_id='',reservation_id='r-ivfz3olx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-1695306825',owner_user_name='tempest-ServerActionsV293TestJSON-1695306825-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:13:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9cb37d6d47ac46aaa19aebb2e5b21658',uuid=5575532e-65f8-4b29-bab0-a0f8e60d032c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "address": "fa:16:3e:5d:a7:ee", "network": {"id": "337f42f7-7833-4286-befc-f5fca120d50f", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1164364075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146c65131f5b423287d348b351399c4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fdcff4a-99", "ovs_interfaceid": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.141 226310 DEBUG nova.network.os_vif_util [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Converting VIF {"id": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "address": "fa:16:3e:5d:a7:ee", "network": {"id": "337f42f7-7833-4286-befc-f5fca120d50f", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1164364075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146c65131f5b423287d348b351399c4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fdcff4a-99", "ovs_interfaceid": "4fdcff4a-999b-4a95-bb5f-528102f9556f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.142 226310 DEBUG nova.network.os_vif_util [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5d:a7:ee,bridge_name='br-int',has_traffic_filtering=True,id=4fdcff4a-999b-4a95-bb5f-528102f9556f,network=Network(337f42f7-7833-4286-befc-f5fca120d50f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fdcff4a-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.142 226310 DEBUG os_vif [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:a7:ee,bridge_name='br-int',has_traffic_filtering=True,id=4fdcff4a-999b-4a95-bb5f-528102f9556f,network=Network(337f42f7-7833-4286-befc-f5fca120d50f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fdcff4a-99') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.144 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.144 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fdcff4a-99, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.146 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.147 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.149 226310 INFO os_vif [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:a7:ee,bridge_name='br-int',has_traffic_filtering=True,id=4fdcff4a-999b-4a95-bb5f-528102f9556f,network=Network(337f42f7-7833-4286-befc-f5fca120d50f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fdcff4a-99')#033[00m
Nov 29 03:14:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:00.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:00 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f466550ce3169cd5a2c585c3ed37d172b82063aedf87c191b620b751f34b8be5-userdata-shm.mount: Deactivated successfully.
Nov 29 03:14:00 np0005539564 systemd[1]: var-lib-containers-storage-overlay-34f6efc1d32a4d1c9a85707f27218706f474badd62bb3e0608a74146eff0e69f-merged.mount: Deactivated successfully.
Nov 29 03:14:00 np0005539564 podman[267761]: 2025-11-29 08:14:00.356762203 +0000 UTC m=+0.304086574 container cleanup f466550ce3169cd5a2c585c3ed37d172b82063aedf87c191b620b751f34b8be5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:14:00 np0005539564 systemd[1]: libpod-conmon-f466550ce3169cd5a2c585c3ed37d172b82063aedf87c191b620b751f34b8be5.scope: Deactivated successfully.
Nov 29 03:14:00 np0005539564 podman[267813]: 2025-11-29 08:14:00.520831998 +0000 UTC m=+0.136440012 container remove f466550ce3169cd5a2c585c3ed37d172b82063aedf87c191b620b751f34b8be5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:14:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:00.528 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[48e2f364-bc96-444a-a736-b7d74e50246a]: (4, ('Sat Nov 29 08:14:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f (f466550ce3169cd5a2c585c3ed37d172b82063aedf87c191b620b751f34b8be5)\nf466550ce3169cd5a2c585c3ed37d172b82063aedf87c191b620b751f34b8be5\nSat Nov 29 08:14:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f (f466550ce3169cd5a2c585c3ed37d172b82063aedf87c191b620b751f34b8be5)\nf466550ce3169cd5a2c585c3ed37d172b82063aedf87c191b620b751f34b8be5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:00.530 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b5e3674a-0332-4de1-af04-0f7edf89b0be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:00.530 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap337f42f7-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:00 np0005539564 kernel: tap337f42f7-70: left promiscuous mode
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.578 226310 INFO nova.virt.libvirt.driver [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Deleting instance files /var/lib/nova/instances/5575532e-65f8-4b29-bab0-a0f8e60d032c_del#033[00m
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.579 226310 INFO nova.virt.libvirt.driver [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Deletion of /var/lib/nova/instances/5575532e-65f8-4b29-bab0-a0f8e60d032c_del complete#033[00m
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.582 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e286 e286: 3 total, 3 up, 3 in
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.593 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:00.597 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[976937fd-ff06-4112-b6da-8ba63760ce93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:00 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:00Z|00388|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 03:14:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:00.623 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6934e5b4-f9f5-4a7c-a133-77bd73966c04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:00.625 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2a2c247d-750a-489b-ac81-00e853cfd789]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:00.640 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[67022e6a-d304-45d6-804c-73a19cbbd403]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697723, 'reachable_time': 38160, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267830, 'error': None, 'target': 'ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:00.644 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-337f42f7-7833-4286-befc-f5fca120d50f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:14:00 np0005539564 systemd[1]: run-netns-ovnmeta\x2d337f42f7\x2d7833\x2d4286\x2dbefc\x2df5fca120d50f.mount: Deactivated successfully.
Nov 29 03:14:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:00.644 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[34bbaeeb-8024-43ef-80bf-2b04a0f43d90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:00.645 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 4fdcff4a-999b-4a95-bb5f-528102f9556f in datapath 337f42f7-7833-4286-befc-f5fca120d50f unbound from our chassis#033[00m
Nov 29 03:14:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:00.647 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 337f42f7-7833-4286-befc-f5fca120d50f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:14:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:00.648 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2933c29e-146d-4adb-91da-19cda1e08582]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:00.648 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 4fdcff4a-999b-4a95-bb5f-528102f9556f in datapath 337f42f7-7833-4286-befc-f5fca120d50f unbound from our chassis#033[00m
Nov 29 03:14:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:00.649 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 337f42f7-7833-4286-befc-f5fca120d50f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:14:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:00.650 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1b6a16d1-429d-4280-930e-8959e6c0766c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.660 226310 INFO nova.compute.manager [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.660 226310 DEBUG oslo.service.loopingcall [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.661 226310 DEBUG nova.compute.manager [-] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.661 226310 DEBUG nova.network.neutron [-] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:14:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.852 226310 DEBUG nova.compute.manager [req-39a9da9a-a19b-4765-913c-3d2e398d8644 req-222dcf93-ba11-47dc-aa24-d5b84b3e5437 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Received event network-vif-unplugged-4fdcff4a-999b-4a95-bb5f-528102f9556f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.856 226310 DEBUG oslo_concurrency.lockutils [req-39a9da9a-a19b-4765-913c-3d2e398d8644 req-222dcf93-ba11-47dc-aa24-d5b84b3e5437 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.856 226310 DEBUG oslo_concurrency.lockutils [req-39a9da9a-a19b-4765-913c-3d2e398d8644 req-222dcf93-ba11-47dc-aa24-d5b84b3e5437 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.857 226310 DEBUG oslo_concurrency.lockutils [req-39a9da9a-a19b-4765-913c-3d2e398d8644 req-222dcf93-ba11-47dc-aa24-d5b84b3e5437 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.857 226310 DEBUG nova.compute.manager [req-39a9da9a-a19b-4765-913c-3d2e398d8644 req-222dcf93-ba11-47dc-aa24-d5b84b3e5437 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] No waiting events found dispatching network-vif-unplugged-4fdcff4a-999b-4a95-bb5f-528102f9556f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:00 np0005539564 nova_compute[226295]: 2025-11-29 08:14:00.857 226310 DEBUG nova.compute.manager [req-39a9da9a-a19b-4765-913c-3d2e398d8644 req-222dcf93-ba11-47dc-aa24-d5b84b3e5437 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Received event network-vif-unplugged-4fdcff4a-999b-4a95-bb5f-528102f9556f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:14:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:01.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:02 np0005539564 nova_compute[226295]: 2025-11-29 08:14:02.184 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:02.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:02 np0005539564 nova_compute[226295]: 2025-11-29 08:14:02.323 226310 DEBUG nova.network.neutron [-] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:02 np0005539564 nova_compute[226295]: 2025-11-29 08:14:02.352 226310 DEBUG nova.network.neutron [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Updating instance_info_cache with network_info: [{"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:02 np0005539564 nova_compute[226295]: 2025-11-29 08:14:02.405 226310 INFO nova.compute.manager [-] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Took 1.74 seconds to deallocate network for instance.#033[00m
Nov 29 03:14:02 np0005539564 nova_compute[226295]: 2025-11-29 08:14:02.409 226310 DEBUG oslo_concurrency.lockutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Releasing lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:02 np0005539564 nova_compute[226295]: 2025-11-29 08:14:02.449 226310 DEBUG nova.compute.manager [req-85210bbb-a499-4504-8d26-f05bd6de2c57 req-09b29d0b-4ac7-453a-8c80-e226b2e82126 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Received event network-vif-deleted-4fdcff4a-999b-4a95-bb5f-528102f9556f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:02 np0005539564 nova_compute[226295]: 2025-11-29 08:14:02.450 226310 INFO nova.compute.manager [req-85210bbb-a499-4504-8d26-f05bd6de2c57 req-09b29d0b-4ac7-453a-8c80-e226b2e82126 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Neutron deleted interface 4fdcff4a-999b-4a95-bb5f-528102f9556f; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:14:02 np0005539564 nova_compute[226295]: 2025-11-29 08:14:02.450 226310 DEBUG nova.network.neutron [req-85210bbb-a499-4504-8d26-f05bd6de2c57 req-09b29d0b-4ac7-453a-8c80-e226b2e82126 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:02 np0005539564 nova_compute[226295]: 2025-11-29 08:14:02.535 226310 DEBUG nova.compute.manager [req-85210bbb-a499-4504-8d26-f05bd6de2c57 req-09b29d0b-4ac7-453a-8c80-e226b2e82126 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Detach interface failed, port_id=4fdcff4a-999b-4a95-bb5f-528102f9556f, reason: Instance 5575532e-65f8-4b29-bab0-a0f8e60d032c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:14:02 np0005539564 podman[267833]: 2025-11-29 08:14:02.560193768 +0000 UTC m=+0.090970524 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:14:02 np0005539564 podman[267832]: 2025-11-29 08:14:02.570023044 +0000 UTC m=+0.104262694 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:14:02 np0005539564 nova_compute[226295]: 2025-11-29 08:14:02.616 226310 DEBUG nova.virt.libvirt.driver [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Nov 29 03:14:02 np0005539564 nova_compute[226295]: 2025-11-29 08:14:02.616 226310 DEBUG nova.virt.libvirt.volume.remotefs [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Creating file /var/lib/nova/instances/48a6ffaa-4f03-4048-bd19-c50aea2863cc/debd363a4ff54b75881bfdef2a77bb6c.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Nov 29 03:14:02 np0005539564 nova_compute[226295]: 2025-11-29 08:14:02.616 226310 DEBUG oslo_concurrency.processutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/48a6ffaa-4f03-4048-bd19-c50aea2863cc/debd363a4ff54b75881bfdef2a77bb6c.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:02 np0005539564 podman[267831]: 2025-11-29 08:14:02.625614883 +0000 UTC m=+0.163139502 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:14:02 np0005539564 nova_compute[226295]: 2025-11-29 08:14:02.715 226310 INFO nova.compute.manager [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Took 0.31 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:14:02 np0005539564 nova_compute[226295]: 2025-11-29 08:14:02.718 226310 DEBUG nova.compute.manager [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Deleting volume: c5655890-57a1-4371-8ce4-c9179f1c49bb _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.028 226310 DEBUG nova.compute.manager [req-67b898b5-fe6e-4083-8220-4dbc4854c6fe req-cab10bd2-a142-4493-a489-3274d078c0a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Received event network-vif-plugged-4fdcff4a-999b-4a95-bb5f-528102f9556f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.029 226310 DEBUG oslo_concurrency.lockutils [req-67b898b5-fe6e-4083-8220-4dbc4854c6fe req-cab10bd2-a142-4493-a489-3274d078c0a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.030 226310 DEBUG oslo_concurrency.lockutils [req-67b898b5-fe6e-4083-8220-4dbc4854c6fe req-cab10bd2-a142-4493-a489-3274d078c0a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.030 226310 DEBUG oslo_concurrency.lockutils [req-67b898b5-fe6e-4083-8220-4dbc4854c6fe req-cab10bd2-a142-4493-a489-3274d078c0a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.030 226310 DEBUG nova.compute.manager [req-67b898b5-fe6e-4083-8220-4dbc4854c6fe req-cab10bd2-a142-4493-a489-3274d078c0a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] No waiting events found dispatching network-vif-plugged-4fdcff4a-999b-4a95-bb5f-528102f9556f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.030 226310 WARNING nova.compute.manager [req-67b898b5-fe6e-4083-8220-4dbc4854c6fe req-cab10bd2-a142-4493-a489-3274d078c0a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Received unexpected event network-vif-plugged-4fdcff4a-999b-4a95-bb5f-528102f9556f for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.031 226310 DEBUG nova.compute.manager [req-67b898b5-fe6e-4083-8220-4dbc4854c6fe req-cab10bd2-a142-4493-a489-3274d078c0a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Received event network-vif-plugged-4fdcff4a-999b-4a95-bb5f-528102f9556f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.031 226310 DEBUG oslo_concurrency.lockutils [req-67b898b5-fe6e-4083-8220-4dbc4854c6fe req-cab10bd2-a142-4493-a489-3274d078c0a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.031 226310 DEBUG oslo_concurrency.lockutils [req-67b898b5-fe6e-4083-8220-4dbc4854c6fe req-cab10bd2-a142-4493-a489-3274d078c0a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.031 226310 DEBUG oslo_concurrency.lockutils [req-67b898b5-fe6e-4083-8220-4dbc4854c6fe req-cab10bd2-a142-4493-a489-3274d078c0a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.032 226310 DEBUG nova.compute.manager [req-67b898b5-fe6e-4083-8220-4dbc4854c6fe req-cab10bd2-a142-4493-a489-3274d078c0a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] No waiting events found dispatching network-vif-plugged-4fdcff4a-999b-4a95-bb5f-528102f9556f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.032 226310 WARNING nova.compute.manager [req-67b898b5-fe6e-4083-8220-4dbc4854c6fe req-cab10bd2-a142-4493-a489-3274d078c0a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Received unexpected event network-vif-plugged-4fdcff4a-999b-4a95-bb5f-528102f9556f for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.032 226310 DEBUG nova.compute.manager [req-67b898b5-fe6e-4083-8220-4dbc4854c6fe req-cab10bd2-a142-4493-a489-3274d078c0a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Received event network-vif-plugged-4fdcff4a-999b-4a95-bb5f-528102f9556f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.032 226310 DEBUG oslo_concurrency.lockutils [req-67b898b5-fe6e-4083-8220-4dbc4854c6fe req-cab10bd2-a142-4493-a489-3274d078c0a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.033 226310 DEBUG oslo_concurrency.lockutils [req-67b898b5-fe6e-4083-8220-4dbc4854c6fe req-cab10bd2-a142-4493-a489-3274d078c0a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.033 226310 DEBUG oslo_concurrency.lockutils [req-67b898b5-fe6e-4083-8220-4dbc4854c6fe req-cab10bd2-a142-4493-a489-3274d078c0a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.033 226310 DEBUG nova.compute.manager [req-67b898b5-fe6e-4083-8220-4dbc4854c6fe req-cab10bd2-a142-4493-a489-3274d078c0a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] No waiting events found dispatching network-vif-plugged-4fdcff4a-999b-4a95-bb5f-528102f9556f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.033 226310 WARNING nova.compute.manager [req-67b898b5-fe6e-4083-8220-4dbc4854c6fe req-cab10bd2-a142-4493-a489-3274d078c0a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Received unexpected event network-vif-plugged-4fdcff4a-999b-4a95-bb5f-528102f9556f for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.034 226310 DEBUG nova.compute.manager [req-67b898b5-fe6e-4083-8220-4dbc4854c6fe req-cab10bd2-a142-4493-a489-3274d078c0a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Received event network-vif-plugged-4fdcff4a-999b-4a95-bb5f-528102f9556f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.034 226310 DEBUG oslo_concurrency.lockutils [req-67b898b5-fe6e-4083-8220-4dbc4854c6fe req-cab10bd2-a142-4493-a489-3274d078c0a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.034 226310 DEBUG oslo_concurrency.lockutils [req-67b898b5-fe6e-4083-8220-4dbc4854c6fe req-cab10bd2-a142-4493-a489-3274d078c0a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.034 226310 DEBUG oslo_concurrency.lockutils [req-67b898b5-fe6e-4083-8220-4dbc4854c6fe req-cab10bd2-a142-4493-a489-3274d078c0a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.035 226310 DEBUG nova.compute.manager [req-67b898b5-fe6e-4083-8220-4dbc4854c6fe req-cab10bd2-a142-4493-a489-3274d078c0a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] No waiting events found dispatching network-vif-plugged-4fdcff4a-999b-4a95-bb5f-528102f9556f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.035 226310 WARNING nova.compute.manager [req-67b898b5-fe6e-4083-8220-4dbc4854c6fe req-cab10bd2-a142-4493-a489-3274d078c0a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Received unexpected event network-vif-plugged-4fdcff4a-999b-4a95-bb5f-528102f9556f for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.054 226310 DEBUG oslo_concurrency.lockutils [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.055 226310 DEBUG oslo_concurrency.lockutils [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.066 226310 DEBUG oslo_concurrency.processutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/48a6ffaa-4f03-4048-bd19-c50aea2863cc/debd363a4ff54b75881bfdef2a77bb6c.tmp" returned: 1 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.067 226310 DEBUG oslo_concurrency.processutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/48a6ffaa-4f03-4048-bd19-c50aea2863cc/debd363a4ff54b75881bfdef2a77bb6c.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.071 226310 DEBUG nova.virt.libvirt.volume.remotefs [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Creating directory /var/lib/nova/instances/48a6ffaa-4f03-4048-bd19-c50aea2863cc on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.072 226310 DEBUG oslo_concurrency.processutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/48a6ffaa-4f03-4048-bd19-c50aea2863cc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.184 226310 DEBUG oslo_concurrency.processutils [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.309 226310 DEBUG oslo_concurrency.processutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/48a6ffaa-4f03-4048-bd19-c50aea2863cc" returned: 0 in 0.237s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.319 226310 DEBUG nova.virt.libvirt.driver [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:14:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:03 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4082016165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.638 226310 DEBUG oslo_concurrency.processutils [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.647 226310 DEBUG nova.compute.provider_tree [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.694 226310 DEBUG nova.scheduler.client.report [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:14:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:03.725 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:03.725 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:03.726 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.749 226310 DEBUG oslo_concurrency.lockutils [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.856 226310 INFO nova.scheduler.client.report [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Deleted allocations for instance 5575532e-65f8-4b29-bab0-a0f8e60d032c#033[00m
Nov 29 03:14:03 np0005539564 nova_compute[226295]: 2025-11-29 08:14:03.973 226310 DEBUG oslo_concurrency.lockutils [None req-e43738e8-0ada-4ebd-81dd-585ca092fc93 9cb37d6d47ac46aaa19aebb2e5b21658 146c65131f5b423287d348b351399c4e - - default default] Lock "5575532e-65f8-4b29-bab0-a0f8e60d032c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:03.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:04.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:05 np0005539564 nova_compute[226295]: 2025-11-29 08:14:05.146 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:05.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:06.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:06 np0005539564 nova_compute[226295]: 2025-11-29 08:14:06.344 226310 INFO nova.virt.libvirt.driver [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:14:07 np0005539564 nova_compute[226295]: 2025-11-29 08:14:07.189 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:07 np0005539564 kernel: tap18df9eaa-14 (unregistering): left promiscuous mode
Nov 29 03:14:07 np0005539564 NetworkManager[48997]: <info>  [1764404047.5674] device (tap18df9eaa-14): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:14:07 np0005539564 nova_compute[226295]: 2025-11-29 08:14:07.575 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:07 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:07.574 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:07 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:07.575 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:14:07 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:07Z|00389|binding|INFO|Releasing lport 18df9eaa-1422-4e4b-ac00-67cdb84e329f from this chassis (sb_readonly=1)
Nov 29 03:14:07 np0005539564 nova_compute[226295]: 2025-11-29 08:14:07.582 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:07 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:07Z|00390|binding|INFO|Removing iface tap18df9eaa-14 ovn-installed in OVS
Nov 29 03:14:07 np0005539564 nova_compute[226295]: 2025-11-29 08:14:07.585 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:07 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:07Z|00391|binding|INFO|Setting lport 18df9eaa-1422-4e4b-ac00-67cdb84e329f down in Southbound
Nov 29 03:14:07 np0005539564 nova_compute[226295]: 2025-11-29 08:14:07.620 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:07 np0005539564 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Nov 29 03:14:07 np0005539564 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d0000006f.scope: Consumed 16.775s CPU time.
Nov 29 03:14:07 np0005539564 systemd-machined[190128]: Machine qemu-45-instance-0000006f terminated.
Nov 29 03:14:07 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:07.666 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:57:d0 10.100.0.13'], port_security=['fa:16:3e:16:57:d0 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '48a6ffaa-4f03-4048-bd19-c50aea2863cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba867fac17034bb28fe2cdb0fff3af2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a54db614-4504-4e8e-a3a5-27d3f60f6cdf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5e4b2f3-5e6e-48f8-b35a-ab61c62108a6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=18df9eaa-1422-4e4b-ac00-67cdb84e329f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:07 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:07.668 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 18df9eaa-1422-4e4b-ac00-67cdb84e329f in datapath 4d5b8c11-b69e-4a74-846b-03943fb29a81 unbound from our chassis#033[00m
Nov 29 03:14:07 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:07.669 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d5b8c11-b69e-4a74-846b-03943fb29a81, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:14:07 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:07.670 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9e139b94-d5f3-4714-8ba9-0a55d095d264]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:07 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:07.671 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 namespace which is not needed anymore#033[00m
Nov 29 03:14:07 np0005539564 nova_compute[226295]: 2025-11-29 08:14:07.798 226310 INFO nova.virt.libvirt.driver [-] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Instance destroyed successfully.#033[00m
Nov 29 03:14:07 np0005539564 nova_compute[226295]: 2025-11-29 08:14:07.800 226310 DEBUG nova.virt.libvirt.vif [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:13:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-127920487',display_name='tempest-ServerActionsTestOtherB-server-127920487',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-127920487',id=111,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFR2A4rqHty1PxOihJGr6CLeieY2A6hQbQhWuRk7yYUwOYPvlgBFCeYpXPRg+EImok8PXcjU56J6yMvwfigxZeP4BreCe+MzD3uTdqP8PHZ6U4YNDwkQqqigObBB8nAoaw==',key_name='tempest-keypair-98169709',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:13:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-9qzmh09j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:13:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=48a6ffaa-4f03-4048-bd19-c50aea2863cc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-667031396-network", "vif_mac": "fa:16:3e:16:57:d0"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:14:07 np0005539564 nova_compute[226295]: 2025-11-29 08:14:07.801 226310 DEBUG nova.network.os_vif_util [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-667031396-network", "vif_mac": "fa:16:3e:16:57:d0"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:07 np0005539564 nova_compute[226295]: 2025-11-29 08:14:07.801 226310 DEBUG nova.network.os_vif_util [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:16:57:d0,bridge_name='br-int',has_traffic_filtering=True,id=18df9eaa-1422-4e4b-ac00-67cdb84e329f,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df9eaa-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:07 np0005539564 nova_compute[226295]: 2025-11-29 08:14:07.801 226310 DEBUG os_vif [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:57:d0,bridge_name='br-int',has_traffic_filtering=True,id=18df9eaa-1422-4e4b-ac00-67cdb84e329f,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df9eaa-14') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:14:07 np0005539564 nova_compute[226295]: 2025-11-29 08:14:07.804 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:07 np0005539564 nova_compute[226295]: 2025-11-29 08:14:07.804 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18df9eaa-14, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:07 np0005539564 nova_compute[226295]: 2025-11-29 08:14:07.856 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:07 np0005539564 nova_compute[226295]: 2025-11-29 08:14:07.859 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:14:07 np0005539564 nova_compute[226295]: 2025-11-29 08:14:07.861 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:07 np0005539564 nova_compute[226295]: 2025-11-29 08:14:07.863 226310 INFO os_vif [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:57:d0,bridge_name='br-int',has_traffic_filtering=True,id=18df9eaa-1422-4e4b-ac00-67cdb84e329f,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df9eaa-14')#033[00m
Nov 29 03:14:07 np0005539564 nova_compute[226295]: 2025-11-29 08:14:07.870 226310 DEBUG nova.virt.libvirt.driver [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:14:07 np0005539564 nova_compute[226295]: 2025-11-29 08:14:07.870 226310 DEBUG nova.virt.libvirt.driver [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:14:07 np0005539564 nova_compute[226295]: 2025-11-29 08:14:07.870 226310 DEBUG nova.virt.libvirt.driver [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:14:07 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[265542]: [NOTICE]   (265561) : haproxy version is 2.8.14-c23fe91
Nov 29 03:14:07 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[265542]: [NOTICE]   (265561) : path to executable is /usr/sbin/haproxy
Nov 29 03:14:07 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[265542]: [WARNING]  (265561) : Exiting Master process...
Nov 29 03:14:07 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[265542]: [ALERT]    (265561) : Current worker (265566) exited with code 143 (Terminated)
Nov 29 03:14:07 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[265542]: [WARNING]  (265561) : All workers exited. Exiting... (0)
Nov 29 03:14:07 np0005539564 systemd[1]: libpod-bb0ad30917e8299976005c262903ad77c385d1133ee40082ead14dbd8d94d6f5.scope: Deactivated successfully.
Nov 29 03:14:07 np0005539564 podman[267951]: 2025-11-29 08:14:07.990466966 +0000 UTC m=+0.109849695 container died bb0ad30917e8299976005c262903ad77c385d1133ee40082ead14dbd8d94d6f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:14:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:07.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:08 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bb0ad30917e8299976005c262903ad77c385d1133ee40082ead14dbd8d94d6f5-userdata-shm.mount: Deactivated successfully.
Nov 29 03:14:08 np0005539564 systemd[1]: var-lib-containers-storage-overlay-f7945ee8c7dbf1c178094b6e694e404139d661a362af422cc25834ddfdae7f86-merged.mount: Deactivated successfully.
Nov 29 03:14:08 np0005539564 podman[267951]: 2025-11-29 08:14:08.149104784 +0000 UTC m=+0.268487503 container cleanup bb0ad30917e8299976005c262903ad77c385d1133ee40082ead14dbd8d94d6f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:14:08 np0005539564 systemd[1]: libpod-conmon-bb0ad30917e8299976005c262903ad77c385d1133ee40082ead14dbd8d94d6f5.scope: Deactivated successfully.
Nov 29 03:14:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:08.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:08 np0005539564 podman[267980]: 2025-11-29 08:14:08.334371441 +0000 UTC m=+0.151191399 container remove bb0ad30917e8299976005c262903ad77c385d1133ee40082ead14dbd8d94d6f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:14:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:08.342 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[24560616-8412-4b8e-8214-9c01d717f4ca]: (4, ('Sat Nov 29 08:14:07 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 (bb0ad30917e8299976005c262903ad77c385d1133ee40082ead14dbd8d94d6f5)\nbb0ad30917e8299976005c262903ad77c385d1133ee40082ead14dbd8d94d6f5\nSat Nov 29 08:14:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 (bb0ad30917e8299976005c262903ad77c385d1133ee40082ead14dbd8d94d6f5)\nbb0ad30917e8299976005c262903ad77c385d1133ee40082ead14dbd8d94d6f5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:08.345 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3c96f9de-edac-4661-932e-6c413792d49b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:08.347 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d5b8c11-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:08 np0005539564 nova_compute[226295]: 2025-11-29 08:14:08.349 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:08 np0005539564 kernel: tap4d5b8c11-b0: left promiscuous mode
Nov 29 03:14:08 np0005539564 nova_compute[226295]: 2025-11-29 08:14:08.364 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:08 np0005539564 nova_compute[226295]: 2025-11-29 08:14:08.366 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:08.368 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d62596fc-4aee-4660-8e13-d992349e57be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:08.388 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[59bbcdd1-2247-43ee-bc9c-1d9441f51ace]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:08.390 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[050638c0-f57a-4d10-bf22-24fce4403805]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:08.417 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[834a9fdb-0f4a-47b6-995f-e72f43b210f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697595, 'reachable_time': 44344, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267995, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:08.419 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:14:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:08.420 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[930586f8-16dd-4f64-9f83-3a1ddebea2ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:08 np0005539564 systemd[1]: run-netns-ovnmeta\x2d4d5b8c11\x2db69e\x2d4a74\x2d846b\x2d03943fb29a81.mount: Deactivated successfully.
Nov 29 03:14:08 np0005539564 nova_compute[226295]: 2025-11-29 08:14:08.440 226310 DEBUG nova.compute.manager [req-b388760e-fd46-443a-a76d-92400f12a92c req-bd1cc2f2-d0bd-4fc6-8b05-f078520dabe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received event network-vif-unplugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:08 np0005539564 nova_compute[226295]: 2025-11-29 08:14:08.440 226310 DEBUG oslo_concurrency.lockutils [req-b388760e-fd46-443a-a76d-92400f12a92c req-bd1cc2f2-d0bd-4fc6-8b05-f078520dabe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:08 np0005539564 nova_compute[226295]: 2025-11-29 08:14:08.441 226310 DEBUG oslo_concurrency.lockutils [req-b388760e-fd46-443a-a76d-92400f12a92c req-bd1cc2f2-d0bd-4fc6-8b05-f078520dabe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:08 np0005539564 nova_compute[226295]: 2025-11-29 08:14:08.441 226310 DEBUG oslo_concurrency.lockutils [req-b388760e-fd46-443a-a76d-92400f12a92c req-bd1cc2f2-d0bd-4fc6-8b05-f078520dabe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:08 np0005539564 nova_compute[226295]: 2025-11-29 08:14:08.442 226310 DEBUG nova.compute.manager [req-b388760e-fd46-443a-a76d-92400f12a92c req-bd1cc2f2-d0bd-4fc6-8b05-f078520dabe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] No waiting events found dispatching network-vif-unplugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:08 np0005539564 nova_compute[226295]: 2025-11-29 08:14:08.442 226310 WARNING nova.compute.manager [req-b388760e-fd46-443a-a76d-92400f12a92c req-bd1cc2f2-d0bd-4fc6-8b05-f078520dabe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received unexpected event network-vif-unplugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:14:08 np0005539564 nova_compute[226295]: 2025-11-29 08:14:08.443 226310 DEBUG nova.compute.manager [req-b388760e-fd46-443a-a76d-92400f12a92c req-bd1cc2f2-d0bd-4fc6-8b05-f078520dabe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:08 np0005539564 nova_compute[226295]: 2025-11-29 08:14:08.443 226310 DEBUG oslo_concurrency.lockutils [req-b388760e-fd46-443a-a76d-92400f12a92c req-bd1cc2f2-d0bd-4fc6-8b05-f078520dabe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:08 np0005539564 nova_compute[226295]: 2025-11-29 08:14:08.444 226310 DEBUG oslo_concurrency.lockutils [req-b388760e-fd46-443a-a76d-92400f12a92c req-bd1cc2f2-d0bd-4fc6-8b05-f078520dabe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:08 np0005539564 nova_compute[226295]: 2025-11-29 08:14:08.444 226310 DEBUG oslo_concurrency.lockutils [req-b388760e-fd46-443a-a76d-92400f12a92c req-bd1cc2f2-d0bd-4fc6-8b05-f078520dabe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:08 np0005539564 nova_compute[226295]: 2025-11-29 08:14:08.445 226310 DEBUG nova.compute.manager [req-b388760e-fd46-443a-a76d-92400f12a92c req-bd1cc2f2-d0bd-4fc6-8b05-f078520dabe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] No waiting events found dispatching network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:08 np0005539564 nova_compute[226295]: 2025-11-29 08:14:08.445 226310 WARNING nova.compute.manager [req-b388760e-fd46-443a-a76d-92400f12a92c req-bd1cc2f2-d0bd-4fc6-8b05-f078520dabe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received unexpected event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:14:09 np0005539564 nova_compute[226295]: 2025-11-29 08:14:09.033 226310 DEBUG neutronclient.v2_0.client [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 18df9eaa-1422-4e4b-ac00-67cdb84e329f for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:14:09 np0005539564 nova_compute[226295]: 2025-11-29 08:14:09.307 226310 DEBUG oslo_concurrency.lockutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:09 np0005539564 nova_compute[226295]: 2025-11-29 08:14:09.308 226310 DEBUG oslo_concurrency.lockutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:09 np0005539564 nova_compute[226295]: 2025-11-29 08:14:09.309 226310 DEBUG oslo_concurrency.lockutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:09.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:10.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:11 np0005539564 nova_compute[226295]: 2025-11-29 08:14:11.767 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:11.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:12 np0005539564 nova_compute[226295]: 2025-11-29 08:14:12.017 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:12 np0005539564 nova_compute[226295]: 2025-11-29 08:14:12.192 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:12.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e287 e287: 3 total, 3 up, 3 in
Nov 29 03:14:12 np0005539564 nova_compute[226295]: 2025-11-29 08:14:12.899 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:12 np0005539564 nova_compute[226295]: 2025-11-29 08:14:12.959 226310 DEBUG nova.compute.manager [req-b312d114-2a4a-433b-b866-35b321c4b6e6 req-963b0db2-96a3-49cf-97b2-b5544ff6cab9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received event network-changed-18df9eaa-1422-4e4b-ac00-67cdb84e329f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:12 np0005539564 nova_compute[226295]: 2025-11-29 08:14:12.959 226310 DEBUG nova.compute.manager [req-b312d114-2a4a-433b-b866-35b321c4b6e6 req-963b0db2-96a3-49cf-97b2-b5544ff6cab9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Refreshing instance network info cache due to event network-changed-18df9eaa-1422-4e4b-ac00-67cdb84e329f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:14:12 np0005539564 nova_compute[226295]: 2025-11-29 08:14:12.959 226310 DEBUG oslo_concurrency.lockutils [req-b312d114-2a4a-433b-b866-35b321c4b6e6 req-963b0db2-96a3-49cf-97b2-b5544ff6cab9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:12 np0005539564 nova_compute[226295]: 2025-11-29 08:14:12.960 226310 DEBUG oslo_concurrency.lockutils [req-b312d114-2a4a-433b-b866-35b321c4b6e6 req-963b0db2-96a3-49cf-97b2-b5544ff6cab9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:12 np0005539564 nova_compute[226295]: 2025-11-29 08:14:12.960 226310 DEBUG nova.network.neutron [req-b312d114-2a4a-433b-b866-35b321c4b6e6 req-963b0db2-96a3-49cf-97b2-b5544ff6cab9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Refreshing network info cache for port 18df9eaa-1422-4e4b-ac00-67cdb84e329f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:14:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:13.577 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:14.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:14.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:15 np0005539564 nova_compute[226295]: 2025-11-29 08:14:15.116 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404040.1154196, 5575532e-65f8-4b29-bab0-a0f8e60d032c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:15 np0005539564 nova_compute[226295]: 2025-11-29 08:14:15.117 226310 INFO nova.compute.manager [-] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:14:15 np0005539564 nova_compute[226295]: 2025-11-29 08:14:15.174 226310 DEBUG nova.compute.manager [None req-cca7c5fb-0949-491b-827f-f6aac476ed6e - - - - - -] [instance: 5575532e-65f8-4b29-bab0-a0f8e60d032c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:15 np0005539564 nova_compute[226295]: 2025-11-29 08:14:15.639 226310 DEBUG nova.network.neutron [req-b312d114-2a4a-433b-b866-35b321c4b6e6 req-963b0db2-96a3-49cf-97b2-b5544ff6cab9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Updated VIF entry in instance network info cache for port 18df9eaa-1422-4e4b-ac00-67cdb84e329f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:14:15 np0005539564 nova_compute[226295]: 2025-11-29 08:14:15.640 226310 DEBUG nova.network.neutron [req-b312d114-2a4a-433b-b866-35b321c4b6e6 req-963b0db2-96a3-49cf-97b2-b5544ff6cab9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Updating instance_info_cache with network_info: [{"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:14:15 np0005539564 nova_compute[226295]: 2025-11-29 08:14:15.682 226310 DEBUG oslo_concurrency.lockutils [req-b312d114-2a4a-433b-b866-35b321c4b6e6 req-963b0db2-96a3-49cf-97b2-b5544ff6cab9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:14:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e288 e288: 3 total, 3 up, 3 in
Nov 29 03:14:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:16.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:16.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:17 np0005539564 nova_compute[226295]: 2025-11-29 08:14:17.194 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:17 np0005539564 nova_compute[226295]: 2025-11-29 08:14:17.903 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:18.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:18.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:19 np0005539564 nova_compute[226295]: 2025-11-29 08:14:19.006 226310 DEBUG nova.compute.manager [req-60dc4ad2-f2e6-47f1-be83-f0c37ab80318 req-99d7feb3-c4ec-49c7-987f-eb881f98ddfa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:14:19 np0005539564 nova_compute[226295]: 2025-11-29 08:14:19.007 226310 DEBUG oslo_concurrency.lockutils [req-60dc4ad2-f2e6-47f1-be83-f0c37ab80318 req-99d7feb3-c4ec-49c7-987f-eb881f98ddfa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:14:19 np0005539564 nova_compute[226295]: 2025-11-29 08:14:19.007 226310 DEBUG oslo_concurrency.lockutils [req-60dc4ad2-f2e6-47f1-be83-f0c37ab80318 req-99d7feb3-c4ec-49c7-987f-eb881f98ddfa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:14:19 np0005539564 nova_compute[226295]: 2025-11-29 08:14:19.008 226310 DEBUG oslo_concurrency.lockutils [req-60dc4ad2-f2e6-47f1-be83-f0c37ab80318 req-99d7feb3-c4ec-49c7-987f-eb881f98ddfa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:19 np0005539564 nova_compute[226295]: 2025-11-29 08:14:19.008 226310 DEBUG nova.compute.manager [req-60dc4ad2-f2e6-47f1-be83-f0c37ab80318 req-99d7feb3-c4ec-49c7-987f-eb881f98ddfa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] No waiting events found dispatching network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:14:19 np0005539564 nova_compute[226295]: 2025-11-29 08:14:19.008 226310 WARNING nova.compute.manager [req-60dc4ad2-f2e6-47f1-be83-f0c37ab80318 req-99d7feb3-c4ec-49c7-987f-eb881f98ddfa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received unexpected event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f for instance with vm_state active and task_state resize_finish.
Nov 29 03:14:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:20.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:20.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:21 np0005539564 nova_compute[226295]: 2025-11-29 08:14:21.241 226310 DEBUG nova.compute.manager [req-7ce4fded-7106-4f3d-9702-4ceecf3c42a6 req-cf659121-52d9-4539-a2f8-5c8268560caf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:14:21 np0005539564 nova_compute[226295]: 2025-11-29 08:14:21.242 226310 DEBUG oslo_concurrency.lockutils [req-7ce4fded-7106-4f3d-9702-4ceecf3c42a6 req-cf659121-52d9-4539-a2f8-5c8268560caf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:14:21 np0005539564 nova_compute[226295]: 2025-11-29 08:14:21.242 226310 DEBUG oslo_concurrency.lockutils [req-7ce4fded-7106-4f3d-9702-4ceecf3c42a6 req-cf659121-52d9-4539-a2f8-5c8268560caf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:14:21 np0005539564 nova_compute[226295]: 2025-11-29 08:14:21.243 226310 DEBUG oslo_concurrency.lockutils [req-7ce4fded-7106-4f3d-9702-4ceecf3c42a6 req-cf659121-52d9-4539-a2f8-5c8268560caf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:21 np0005539564 nova_compute[226295]: 2025-11-29 08:14:21.243 226310 DEBUG nova.compute.manager [req-7ce4fded-7106-4f3d-9702-4ceecf3c42a6 req-cf659121-52d9-4539-a2f8-5c8268560caf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] No waiting events found dispatching network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:14:21 np0005539564 nova_compute[226295]: 2025-11-29 08:14:21.244 226310 WARNING nova.compute.manager [req-7ce4fded-7106-4f3d-9702-4ceecf3c42a6 req-cf659121-52d9-4539-a2f8-5c8268560caf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received unexpected event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f for instance with vm_state resized and task_state None.
Nov 29 03:14:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:22.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:22 np0005539564 nova_compute[226295]: 2025-11-29 08:14:22.196 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:22.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:22 np0005539564 nova_compute[226295]: 2025-11-29 08:14:22.797 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404047.795846, 48a6ffaa-4f03-4048-bd19-c50aea2863cc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:14:22 np0005539564 nova_compute[226295]: 2025-11-29 08:14:22.798 226310 INFO nova.compute.manager [-] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] VM Stopped (Lifecycle Event)
Nov 29 03:14:22 np0005539564 nova_compute[226295]: 2025-11-29 08:14:22.829 226310 DEBUG nova.compute.manager [None req-328ad1c5-d757-4b5c-b96c-59a3b9ad93eb - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:14:22 np0005539564 nova_compute[226295]: 2025-11-29 08:14:22.834 226310 DEBUG nova.compute.manager [None req-328ad1c5-d757-4b5c-b96c-59a3b9ad93eb - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:14:22 np0005539564 nova_compute[226295]: 2025-11-29 08:14:22.891 226310 INFO nova.compute.manager [None req-328ad1c5-d757-4b5c-b96c-59a3b9ad93eb - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com
Nov 29 03:14:22 np0005539564 nova_compute[226295]: 2025-11-29 08:14:22.941 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:24.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:14:24 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3153978743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:14:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:24.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:25 np0005539564 nova_compute[226295]: 2025-11-29 08:14:25.618 226310 DEBUG nova.compute.manager [req-b843238a-29cb-48e6-aa9a-3267c079ad26 req-8238db15-9da6-4fc7-91c6-9a6d2899a352 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received event network-vif-unplugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:14:25 np0005539564 nova_compute[226295]: 2025-11-29 08:14:25.619 226310 DEBUG oslo_concurrency.lockutils [req-b843238a-29cb-48e6-aa9a-3267c079ad26 req-8238db15-9da6-4fc7-91c6-9a6d2899a352 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:14:25 np0005539564 nova_compute[226295]: 2025-11-29 08:14:25.620 226310 DEBUG oslo_concurrency.lockutils [req-b843238a-29cb-48e6-aa9a-3267c079ad26 req-8238db15-9da6-4fc7-91c6-9a6d2899a352 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:14:25 np0005539564 nova_compute[226295]: 2025-11-29 08:14:25.620 226310 DEBUG oslo_concurrency.lockutils [req-b843238a-29cb-48e6-aa9a-3267c079ad26 req-8238db15-9da6-4fc7-91c6-9a6d2899a352 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:25 np0005539564 nova_compute[226295]: 2025-11-29 08:14:25.620 226310 DEBUG nova.compute.manager [req-b843238a-29cb-48e6-aa9a-3267c079ad26 req-8238db15-9da6-4fc7-91c6-9a6d2899a352 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] No waiting events found dispatching network-vif-unplugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:14:25 np0005539564 nova_compute[226295]: 2025-11-29 08:14:25.621 226310 WARNING nova.compute.manager [req-b843238a-29cb-48e6-aa9a-3267c079ad26 req-8238db15-9da6-4fc7-91c6-9a6d2899a352 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received unexpected event network-vif-unplugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f for instance with vm_state resized and task_state resize_reverting.
Nov 29 03:14:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:26.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:26.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:26 np0005539564 nova_compute[226295]: 2025-11-29 08:14:26.769 226310 INFO nova.compute.manager [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Swapping old allocation on dict_keys(['ea190a43-1246-44b8-8f8b-a61b155a1d3b']) held by migration 2909cc6b-3d8b-4f0b-bc0b-f0caf4f98d5f for instance
Nov 29 03:14:26 np0005539564 nova_compute[226295]: 2025-11-29 08:14:26.821 226310 DEBUG nova.scheduler.client.report [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Overwriting current allocation {'allocations': {'190eff98-dce8-46c0-8a7d-870d6fa5cbbd': {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}, 'generation': 57}}, 'project_id': 'ba867fac17034bb28fe2cdb0fff3af2b', 'user_id': 'ca93c8e3eac142c0aa6b61807727dea2', 'consumer_generation': 1} on consumer 48a6ffaa-4f03-4048-bd19-c50aea2863cc move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Nov 29 03:14:27 np0005539564 nova_compute[226295]: 2025-11-29 08:14:27.086 226310 INFO nova.network.neutron [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Updating port 18df9eaa-1422-4e4b-ac00-67cdb84e329f with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}
Nov 29 03:14:27 np0005539564 nova_compute[226295]: 2025-11-29 08:14:27.198 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:27 np0005539564 nova_compute[226295]: 2025-11-29 08:14:27.776 226310 DEBUG nova.compute.manager [req-af607f22-b0aa-41ac-9370-9fd47fa91d0c req-e4c39253-8e40-460d-8a9f-01585002b661 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:14:27 np0005539564 nova_compute[226295]: 2025-11-29 08:14:27.777 226310 DEBUG oslo_concurrency.lockutils [req-af607f22-b0aa-41ac-9370-9fd47fa91d0c req-e4c39253-8e40-460d-8a9f-01585002b661 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:14:27 np0005539564 nova_compute[226295]: 2025-11-29 08:14:27.777 226310 DEBUG oslo_concurrency.lockutils [req-af607f22-b0aa-41ac-9370-9fd47fa91d0c req-e4c39253-8e40-460d-8a9f-01585002b661 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:14:27 np0005539564 nova_compute[226295]: 2025-11-29 08:14:27.777 226310 DEBUG oslo_concurrency.lockutils [req-af607f22-b0aa-41ac-9370-9fd47fa91d0c req-e4c39253-8e40-460d-8a9f-01585002b661 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:27 np0005539564 nova_compute[226295]: 2025-11-29 08:14:27.777 226310 DEBUG nova.compute.manager [req-af607f22-b0aa-41ac-9370-9fd47fa91d0c req-e4c39253-8e40-460d-8a9f-01585002b661 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] No waiting events found dispatching network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:14:27 np0005539564 nova_compute[226295]: 2025-11-29 08:14:27.778 226310 WARNING nova.compute.manager [req-af607f22-b0aa-41ac-9370-9fd47fa91d0c req-e4c39253-8e40-460d-8a9f-01585002b661 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received unexpected event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f for instance with vm_state resized and task_state resize_reverting.
Nov 29 03:14:27 np0005539564 nova_compute[226295]: 2025-11-29 08:14:27.944 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:28.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:28.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:28 np0005539564 nova_compute[226295]: 2025-11-29 08:14:28.372 226310 DEBUG oslo_concurrency.lockutils [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:14:28 np0005539564 nova_compute[226295]: 2025-11-29 08:14:28.373 226310 DEBUG oslo_concurrency.lockutils [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquired lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:14:28 np0005539564 nova_compute[226295]: 2025-11-29 08:14:28.373 226310 DEBUG nova.network.neutron [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:14:29 np0005539564 nova_compute[226295]: 2025-11-29 08:14:29.054 226310 DEBUG nova.compute.manager [req-197122ea-755e-4234-9e91-7dd72db722fa req-1db99104-e2e3-4e29-9d81-dfda502b7bf0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received event network-changed-18df9eaa-1422-4e4b-ac00-67cdb84e329f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:14:29 np0005539564 nova_compute[226295]: 2025-11-29 08:14:29.054 226310 DEBUG nova.compute.manager [req-197122ea-755e-4234-9e91-7dd72db722fa req-1db99104-e2e3-4e29-9d81-dfda502b7bf0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Refreshing instance network info cache due to event network-changed-18df9eaa-1422-4e4b-ac00-67cdb84e329f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:14:29 np0005539564 nova_compute[226295]: 2025-11-29 08:14:29.055 226310 DEBUG oslo_concurrency.lockutils [req-197122ea-755e-4234-9e91-7dd72db722fa req-1db99104-e2e3-4e29-9d81-dfda502b7bf0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:14:29 np0005539564 nova_compute[226295]: 2025-11-29 08:14:29.887 226310 DEBUG nova.network.neutron [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Updating instance_info_cache with network_info: [{"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:14:29 np0005539564 nova_compute[226295]: 2025-11-29 08:14:29.937 226310 DEBUG oslo_concurrency.lockutils [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Releasing lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:14:29 np0005539564 nova_compute[226295]: 2025-11-29 08:14:29.938 226310 DEBUG os_brick.utils [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 03:14:29 np0005539564 nova_compute[226295]: 2025-11-29 08:14:29.939 226310 DEBUG oslo_concurrency.lockutils [req-197122ea-755e-4234-9e91-7dd72db722fa req-1db99104-e2e3-4e29-9d81-dfda502b7bf0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:14:29 np0005539564 nova_compute[226295]: 2025-11-29 08:14:29.939 226310 DEBUG nova.network.neutron [req-197122ea-755e-4234-9e91-7dd72db722fa req-1db99104-e2e3-4e29-9d81-dfda502b7bf0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Refreshing network info cache for port 18df9eaa-1422-4e4b-ac00-67cdb84e329f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:14:29 np0005539564 nova_compute[226295]: 2025-11-29 08:14:29.939 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:14:29 np0005539564 nova_compute[226295]: 2025-11-29 08:14:29.963 231810 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:14:29 np0005539564 nova_compute[226295]: 2025-11-29 08:14:29.963 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[25d95e46-97e7-4c10-8ce7-13ea30f4c51c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:29 np0005539564 nova_compute[226295]: 2025-11-29 08:14:29.964 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:14:29 np0005539564 nova_compute[226295]: 2025-11-29 08:14:29.970 231810 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:14:29 np0005539564 nova_compute[226295]: 2025-11-29 08:14:29.971 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[5eacbf3e-5009-41e2-a160-c55a67f7d1f7]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d3b4384eec44', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:29 np0005539564 nova_compute[226295]: 2025-11-29 08:14:29.972 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:14:29 np0005539564 nova_compute[226295]: 2025-11-29 08:14:29.980 231810 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:14:29 np0005539564 nova_compute[226295]: 2025-11-29 08:14:29.980 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[894585ad-6587-431f-96da-a1b1f2f3d354]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:29 np0005539564 nova_compute[226295]: 2025-11-29 08:14:29.982 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[f1b453dc-2817-47b5-9a6d-edc3cdcbbe43]: (4, '2e858761-3292-4a17-b38f-a169c3064289') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:29 np0005539564 nova_compute[226295]: 2025-11-29 08:14:29.983 226310 DEBUG oslo_concurrency.processutils [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:14:30 np0005539564 nova_compute[226295]: 2025-11-29 08:14:30.016 226310 DEBUG oslo_concurrency.processutils [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:14:30 np0005539564 nova_compute[226295]: 2025-11-29 08:14:30.019 226310 DEBUG os_brick.initiator.connectors.lightos [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 03:14:30 np0005539564 nova_compute[226295]: 2025-11-29 08:14:30.019 226310 DEBUG os_brick.initiator.connectors.lightos [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 03:14:30 np0005539564 nova_compute[226295]: 2025-11-29 08:14:30.020 226310 DEBUG os_brick.initiator.connectors.lightos [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:14:30 np0005539564 nova_compute[226295]: 2025-11-29 08:14:30.020 226310 DEBUG os_brick.utils [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] <== get_connector_properties: return (81ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d3b4384eec44', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2e858761-3292-4a17-b38f-a169c3064289', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:14:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:30.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:30.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:14:30 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1557549871' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:14:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:31 np0005539564 nova_compute[226295]: 2025-11-29 08:14:31.204 226310 DEBUG nova.virt.libvirt.driver [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843#033[00m
Nov 29 03:14:31 np0005539564 nova_compute[226295]: 2025-11-29 08:14:31.296 226310 DEBUG nova.storage.rbd_utils [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rolling back rbd image(48a6ffaa-4f03-4048-bd19-c50aea2863cc_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505#033[00m
Nov 29 03:14:31 np0005539564 nova_compute[226295]: 2025-11-29 08:14:31.408 226310 DEBUG nova.storage.rbd_utils [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] removing snapshot(nova-resize) on rbd image(48a6ffaa-4f03-4048-bd19-c50aea2863cc_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:14:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e289 e289: 3 total, 3 up, 3 in
Nov 29 03:14:31 np0005539564 nova_compute[226295]: 2025-11-29 08:14:31.911 226310 DEBUG nova.network.neutron [req-197122ea-755e-4234-9e91-7dd72db722fa req-1db99104-e2e3-4e29-9d81-dfda502b7bf0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Updated VIF entry in instance network info cache for port 18df9eaa-1422-4e4b-ac00-67cdb84e329f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:14:31 np0005539564 nova_compute[226295]: 2025-11-29 08:14:31.911 226310 DEBUG nova.network.neutron [req-197122ea-755e-4234-9e91-7dd72db722fa req-1db99104-e2e3-4e29-9d81-dfda502b7bf0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Updating instance_info_cache with network_info: [{"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:31 np0005539564 nova_compute[226295]: 2025-11-29 08:14:31.920 226310 DEBUG nova.virt.libvirt.driver [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Start _get_guest_xml network_info=[{"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vdb', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-e19fd9ae-371b-4152-b2b2-910bd950e653', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'e19fd9ae-371b-4152-b2b2-910bd950e653', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': '48a6ffaa-4f03-4048-bd19-c50aea2863cc', 'attached_at': '2025-11-29T08:14:30.000000', 'detached_at': '', 'volume_id': 'e19fd9ae-371b-4152-b2b2-910bd950e653', 'serial': 'e19fd9ae-371b-4152-b2b2-910bd950e653'}, 'guest_format': None, 'delete_on_termination': False, 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': '626278eb-5fb9-4f71-963a-03aa253eabf0', 'boot_index': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:14:31 np0005539564 nova_compute[226295]: 2025-11-29 08:14:31.924 226310 WARNING nova.virt.libvirt.driver [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:14:31 np0005539564 nova_compute[226295]: 2025-11-29 08:14:31.929 226310 DEBUG oslo_concurrency.lockutils [req-197122ea-755e-4234-9e91-7dd72db722fa req-1db99104-e2e3-4e29-9d81-dfda502b7bf0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:31 np0005539564 nova_compute[226295]: 2025-11-29 08:14:31.929 226310 DEBUG nova.virt.libvirt.host [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:14:31 np0005539564 nova_compute[226295]: 2025-11-29 08:14:31.930 226310 DEBUG nova.virt.libvirt.host [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:14:31 np0005539564 nova_compute[226295]: 2025-11-29 08:14:31.934 226310 DEBUG nova.virt.libvirt.host [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:14:31 np0005539564 nova_compute[226295]: 2025-11-29 08:14:31.934 226310 DEBUG nova.virt.libvirt.host [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:14:31 np0005539564 nova_compute[226295]: 2025-11-29 08:14:31.935 226310 DEBUG nova.virt.libvirt.driver [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:14:31 np0005539564 nova_compute[226295]: 2025-11-29 08:14:31.936 226310 DEBUG nova.virt.hardware [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:14:31 np0005539564 nova_compute[226295]: 2025-11-29 08:14:31.936 226310 DEBUG nova.virt.hardware [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:14:31 np0005539564 nova_compute[226295]: 2025-11-29 08:14:31.937 226310 DEBUG nova.virt.hardware [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:14:31 np0005539564 nova_compute[226295]: 2025-11-29 08:14:31.937 226310 DEBUG nova.virt.hardware [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:14:31 np0005539564 nova_compute[226295]: 2025-11-29 08:14:31.937 226310 DEBUG nova.virt.hardware [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:14:31 np0005539564 nova_compute[226295]: 2025-11-29 08:14:31.938 226310 DEBUG nova.virt.hardware [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:14:31 np0005539564 nova_compute[226295]: 2025-11-29 08:14:31.938 226310 DEBUG nova.virt.hardware [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:14:31 np0005539564 nova_compute[226295]: 2025-11-29 08:14:31.938 226310 DEBUG nova.virt.hardware [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:14:31 np0005539564 nova_compute[226295]: 2025-11-29 08:14:31.939 226310 DEBUG nova.virt.hardware [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:14:31 np0005539564 nova_compute[226295]: 2025-11-29 08:14:31.939 226310 DEBUG nova.virt.hardware [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:14:31 np0005539564 nova_compute[226295]: 2025-11-29 08:14:31.939 226310 DEBUG nova.virt.hardware [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:14:31 np0005539564 nova_compute[226295]: 2025-11-29 08:14:31.939 226310 DEBUG nova.objects.instance [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'vcpu_model' on Instance uuid 48a6ffaa-4f03-4048-bd19-c50aea2863cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:31 np0005539564 nova_compute[226295]: 2025-11-29 08:14:31.959 226310 DEBUG oslo_concurrency.processutils [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:32.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:32 np0005539564 nova_compute[226295]: 2025-11-29 08:14:32.177 226310 DEBUG oslo_concurrency.lockutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Acquiring lock "98ef0160-f50f-4264-a93b-31e6e8909b19" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:32 np0005539564 nova_compute[226295]: 2025-11-29 08:14:32.177 226310 DEBUG oslo_concurrency.lockutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Lock "98ef0160-f50f-4264-a93b-31e6e8909b19" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:32 np0005539564 nova_compute[226295]: 2025-11-29 08:14:32.195 226310 DEBUG nova.compute.manager [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:14:32 np0005539564 nova_compute[226295]: 2025-11-29 08:14:32.200 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:32.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:32 np0005539564 nova_compute[226295]: 2025-11-29 08:14:32.327 226310 DEBUG oslo_concurrency.lockutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:32 np0005539564 nova_compute[226295]: 2025-11-29 08:14:32.328 226310 DEBUG oslo_concurrency.lockutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:32 np0005539564 nova_compute[226295]: 2025-11-29 08:14:32.334 226310 DEBUG nova.virt.hardware [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:14:32 np0005539564 nova_compute[226295]: 2025-11-29 08:14:32.335 226310 INFO nova.compute.claims [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:14:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:14:32 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1211497857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:14:32 np0005539564 nova_compute[226295]: 2025-11-29 08:14:32.430 226310 DEBUG oslo_concurrency.processutils [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:32 np0005539564 nova_compute[226295]: 2025-11-29 08:14:32.495 226310 DEBUG oslo_concurrency.processutils [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:32 np0005539564 nova_compute[226295]: 2025-11-29 08:14:32.590 226310 DEBUG oslo_concurrency.processutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:32 np0005539564 nova_compute[226295]: 2025-11-29 08:14:32.946 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:14:32 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/587050811' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:14:32 np0005539564 nova_compute[226295]: 2025-11-29 08:14:32.972 226310 DEBUG oslo_concurrency.processutils [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:32.998 226310 DEBUG nova.virt.libvirt.vif [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:13:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-127920487',display_name='tempest-ServerActionsTestOtherB-server-127920487',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-127920487',id=111,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFR2A4rqHty1PxOihJGr6CLeieY2A6hQbQhWuRk7yYUwOYPvlgBFCeYpXPRg+EImok8PXcjU56J6yMvwfigxZeP4BreCe+MzD3uTdqP8PHZ6U4YNDwkQqqigObBB8nAoaw==',key_name='tempest-keypair-98169709',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:14:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-9qzmh09j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:14:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=48a6ffaa-4f03-4048-bd19-c50aea2863cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:32.999 226310 DEBUG nova.network.os_vif_util [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.001 226310 DEBUG nova.network.os_vif_util [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:57:d0,bridge_name='br-int',has_traffic_filtering=True,id=18df9eaa-1422-4e4b-ac00-67cdb84e329f,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df9eaa-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.007 226310 DEBUG nova.virt.libvirt.driver [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:14:33 np0005539564 nova_compute[226295]:  <uuid>48a6ffaa-4f03-4048-bd19-c50aea2863cc</uuid>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:  <name>instance-0000006f</name>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerActionsTestOtherB-server-127920487</nova:name>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:14:31</nova:creationTime>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:14:33 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:        <nova:user uuid="ca93c8e3eac142c0aa6b61807727dea2">tempest-ServerActionsTestOtherB-325732369-project-member</nova:user>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:        <nova:project uuid="ba867fac17034bb28fe2cdb0fff3af2b">tempest-ServerActionsTestOtherB-325732369</nova:project>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:        <nova:port uuid="18df9eaa-1422-4e4b-ac00-67cdb84e329f">
Nov 29 03:14:33 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <entry name="serial">48a6ffaa-4f03-4048-bd19-c50aea2863cc</entry>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <entry name="uuid">48a6ffaa-4f03-4048-bd19-c50aea2863cc</entry>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/48a6ffaa-4f03-4048-bd19-c50aea2863cc_disk">
Nov 29 03:14:33 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:14:33 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/48a6ffaa-4f03-4048-bd19-c50aea2863cc_disk.config">
Nov 29 03:14:33 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:14:33 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="volumes/volume-e19fd9ae-371b-4152-b2b2-910bd950e653">
Nov 29 03:14:33 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:14:33 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <serial>e19fd9ae-371b-4152-b2b2-910bd950e653</serial>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:16:57:d0"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <target dev="tap18df9eaa-14"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/48a6ffaa-4f03-4048-bd19-c50aea2863cc/console.log" append="off"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <input type="keyboard" bus="usb"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:14:33 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:14:33 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:14:33 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:14:33 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.007 226310 DEBUG nova.compute.manager [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Preparing to wait for external event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.008 226310 DEBUG oslo_concurrency.lockutils [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.008 226310 DEBUG oslo_concurrency.lockutils [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.009 226310 DEBUG oslo_concurrency.lockutils [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.010 226310 DEBUG nova.virt.libvirt.vif [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:13:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-127920487',display_name='tempest-ServerActionsTestOtherB-server-127920487',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-127920487',id=111,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFR2A4rqHty1PxOihJGr6CLeieY2A6hQbQhWuRk7yYUwOYPvlgBFCeYpXPRg+EImok8PXcjU56J6yMvwfigxZeP4BreCe+MzD3uTdqP8PHZ6U4YNDwkQqqigObBB8nAoaw==',key_name='tempest-keypair-98169709',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:14:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-9qzmh09j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:14:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=48a6ffaa-4f03-4048-bd19-c50aea2863cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.011 226310 DEBUG nova.network.os_vif_util [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.012 226310 DEBUG nova.network.os_vif_util [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:57:d0,bridge_name='br-int',has_traffic_filtering=True,id=18df9eaa-1422-4e4b-ac00-67cdb84e329f,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df9eaa-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.013 226310 DEBUG os_vif [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:57:d0,bridge_name='br-int',has_traffic_filtering=True,id=18df9eaa-1422-4e4b-ac00-67cdb84e329f,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df9eaa-14') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.014 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.015 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.016 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.018 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.019 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18df9eaa-14, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.019 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap18df9eaa-14, col_values=(('external_ids', {'iface-id': '18df9eaa-1422-4e4b-ac00-67cdb84e329f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:57:d0', 'vm-uuid': '48a6ffaa-4f03-4048-bd19-c50aea2863cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.020 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:33 np0005539564 NetworkManager[48997]: <info>  [1764404073.0216] manager: (tap18df9eaa-14): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/196)
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.023 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.026 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.027 226310 INFO os_vif [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:57:d0,bridge_name='br-int',has_traffic_filtering=True,id=18df9eaa-1422-4e4b-ac00-67cdb84e329f,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df9eaa-14')#033[00m
Nov 29 03:14:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:33 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1971257752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.069 226310 DEBUG oslo_concurrency.processutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.075 226310 DEBUG nova.compute.provider_tree [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.095 226310 DEBUG nova.scheduler.client.report [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:14:33 np0005539564 podman[268144]: 2025-11-29 08:14:33.11970775 +0000 UTC m=+0.059458345 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Nov 29 03:14:33 np0005539564 kernel: tap18df9eaa-14: entered promiscuous mode
Nov 29 03:14:33 np0005539564 podman[268146]: 2025-11-29 08:14:33.126869022 +0000 UTC m=+0.058829458 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 29 03:14:33 np0005539564 NetworkManager[48997]: <info>  [1764404073.1278] manager: (tap18df9eaa-14): new Tun device (/org/freedesktop/NetworkManager/Devices/197)
Nov 29 03:14:33 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:33Z|00392|binding|INFO|Claiming lport 18df9eaa-1422-4e4b-ac00-67cdb84e329f for this chassis.
Nov 29 03:14:33 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:33Z|00393|binding|INFO|18df9eaa-1422-4e4b-ac00-67cdb84e329f: Claiming fa:16:3e:16:57:d0 10.100.0.13
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.128 226310 DEBUG oslo_concurrency.lockutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.128 226310 DEBUG nova.compute.manager [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.130 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.133 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.138 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.143 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:33 np0005539564 NetworkManager[48997]: <info>  [1764404073.1501] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/198)
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.149 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:33 np0005539564 NetworkManager[48997]: <info>  [1764404073.1506] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/199)
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.157 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:57:d0 10.100.0.13'], port_security=['fa:16:3e:16:57:d0 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '48a6ffaa-4f03-4048-bd19-c50aea2863cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba867fac17034bb28fe2cdb0fff3af2b', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'a54db614-4504-4e8e-a3a5-27d3f60f6cdf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5e4b2f3-5e6e-48f8-b35a-ab61c62108a6, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=18df9eaa-1422-4e4b-ac00-67cdb84e329f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.159 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 18df9eaa-1422-4e4b-ac00-67cdb84e329f in datapath 4d5b8c11-b69e-4a74-846b-03943fb29a81 bound to our chassis#033[00m
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.160 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d5b8c11-b69e-4a74-846b-03943fb29a81#033[00m
Nov 29 03:14:33 np0005539564 systemd-udevd[268216]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:14:33 np0005539564 systemd-machined[190128]: New machine qemu-47-instance-0000006f.
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.171 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ca6604a5-4638-4c3a-909e-28542d947bd7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.173 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d5b8c11-b1 in ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.175 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d5b8c11-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.175 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ccb3430f-76a9-4926-a84b-440ad014975d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:33 np0005539564 NetworkManager[48997]: <info>  [1764404073.1774] device (tap18df9eaa-14): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:14:33 np0005539564 NetworkManager[48997]: <info>  [1764404073.1780] device (tap18df9eaa-14): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.177 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3f6a22b3-4dea-40f2-947a-5fcbf21a5bb5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:33 np0005539564 systemd[1]: Started Virtual Machine qemu-47-instance-0000006f.
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.188 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[5a636a2e-9cdc-4fcd-9949-1080cfbbc808]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.209 226310 DEBUG nova.compute.manager [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.210 226310 DEBUG nova.network.neutron [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.211 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[711bcdc8-7276-41dc-944b-c3f4faa4ee19]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.231 226310 INFO nova.virt.libvirt.driver [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.235 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[fddc7447-e0e7-40ee-9336-128b1d405b73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:33 np0005539564 NetworkManager[48997]: <info>  [1764404073.2445] manager: (tap4d5b8c11-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/200)
Nov 29 03:14:33 np0005539564 systemd-udevd[268223]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.243 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[49ca2607-72fa-49e9-9513-d0da928be088]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.251 226310 DEBUG nova.compute.manager [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.278 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[5893f407-71e6-4a7c-b0f2-ea97a27a943b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.281 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[39f10251-0e3e-4c67-9c9f-0f7919388ed9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:33 np0005539564 NetworkManager[48997]: <info>  [1764404073.3041] device (tap4d5b8c11-b0): carrier: link connected
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.309 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[60cb01fc-bfd5-462a-8d1e-ab4a54cc9527]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.328 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b35b46ca-49b5-40a3-9190-0b4bb169a56e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d5b8c11-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:06:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704798, 'reachable_time': 25748, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268253, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.357 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[26953737-a8da-4e68-989a-74cb0b7459c7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:6d1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 704798, 'tstamp': 704798}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268254, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:33 np0005539564 podman[268143]: 2025-11-29 08:14:33.359817357 +0000 UTC m=+0.298045552 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.376 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[43323fdb-e0e1-4365-9985-28f677757311]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d5b8c11-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:06:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704798, 'reachable_time': 25748, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268256, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:33 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:33Z|00394|binding|INFO|Setting lport 18df9eaa-1422-4e4b-ac00-67cdb84e329f up in Southbound
Nov 29 03:14:33 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:33Z|00395|binding|INFO|Setting lport 18df9eaa-1422-4e4b-ac00-67cdb84e329f ovn-installed in OVS
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.435 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0981620a-39da-42a1-a802-4cddc08f2444]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.460 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.464 226310 DEBUG nova.compute.manager [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.465 226310 DEBUG nova.virt.libvirt.driver [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.465 226310 INFO nova.virt.libvirt.driver [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Creating image(s)#033[00m
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.491 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7e871ce2-532d-43b9-aeba-e10f1c952a2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.492 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d5b8c11-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.493 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.493 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d5b8c11-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:33 np0005539564 kernel: tap4d5b8c11-b0: entered promiscuous mode
Nov 29 03:14:33 np0005539564 NetworkManager[48997]: <info>  [1764404073.4972] manager: (tap4d5b8c11-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/201)
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.499 226310 DEBUG nova.storage.rbd_utils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] rbd image 98ef0160-f50f-4264-a93b-31e6e8909b19_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.501 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d5b8c11-b0, col_values=(('external_ids', {'iface-id': 'a2e47e7a-aef0-4c09-aeef-4a0d63960d7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:33 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:33Z|00396|binding|INFO|Releasing lport a2e47e7a-aef0-4c09-aeef-4a0d63960d7b from this chassis (sb_readonly=0)
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.531 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d5b8c11-b69e-4a74-846b-03943fb29a81.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d5b8c11-b69e-4a74-846b-03943fb29a81.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.532 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[08dc7c6d-8d41-4078-a246-b3ba16297001]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.533 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-4d5b8c11-b69e-4a74-846b-03943fb29a81
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/4d5b8c11-b69e-4a74-846b-03943fb29a81.pid.haproxy
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 4d5b8c11-b69e-4a74-846b-03943fb29a81
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:14:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:33.533 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'env', 'PROCESS_TAG=haproxy-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d5b8c11-b69e-4a74-846b-03943fb29a81.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.539 226310 DEBUG nova.storage.rbd_utils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] rbd image 98ef0160-f50f-4264-a93b-31e6e8909b19_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.573 226310 DEBUG nova.storage.rbd_utils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] rbd image 98ef0160-f50f-4264-a93b-31e6e8909b19_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.578 226310 DEBUG oslo_concurrency.processutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.609 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.659 226310 DEBUG oslo_concurrency.processutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.660 226310 DEBUG oslo_concurrency.lockutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.661 226310 DEBUG oslo_concurrency.lockutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.661 226310 DEBUG oslo_concurrency.lockutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.690 226310 DEBUG nova.storage.rbd_utils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] rbd image 98ef0160-f50f-4264-a93b-31e6e8909b19_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.701 226310 DEBUG oslo_concurrency.processutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 98ef0160-f50f-4264-a93b-31e6e8909b19_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.883 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404073.8825912, 48a6ffaa-4f03-4048-bd19-c50aea2863cc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.884 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] VM Started (Lifecycle Event)#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.916 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.921 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404073.8829768, 48a6ffaa-4f03-4048-bd19-c50aea2863cc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.922 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.948 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.953 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:14:33 np0005539564 podman[268441]: 2025-11-29 08:14:33.963246113 +0000 UTC m=+0.101385325 container create b389f9b165a61b1488b5f71f84a1b6bf95b10abd03b289c59f04bc4ef526ccab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.975 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] During sync_power_state the instance has a pending task (resize_reverting). Skip.#033[00m
Nov 29 03:14:33 np0005539564 nova_compute[226295]: 2025-11-29 08:14:33.987 226310 DEBUG oslo_concurrency.processutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 98ef0160-f50f-4264-a93b-31e6e8909b19_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:33 np0005539564 podman[268441]: 2025-11-29 08:14:33.897778238 +0000 UTC m=+0.035917500 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:14:34 np0005539564 systemd[1]: Started libpod-conmon-b389f9b165a61b1488b5f71f84a1b6bf95b10abd03b289c59f04bc4ef526ccab.scope.
Nov 29 03:14:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:34.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:34 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:14:34 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0593163c9d3bb2a0cc854bf7d3d5400e43d6894644a44051a442416141cf6b7e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:34 np0005539564 podman[268441]: 2025-11-29 08:14:34.057473465 +0000 UTC m=+0.195612707 container init b389f9b165a61b1488b5f71f84a1b6bf95b10abd03b289c59f04bc4ef526ccab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:14:34 np0005539564 podman[268441]: 2025-11-29 08:14:34.064879234 +0000 UTC m=+0.203018476 container start b389f9b165a61b1488b5f71f84a1b6bf95b10abd03b289c59f04bc4ef526ccab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:14:34 np0005539564 nova_compute[226295]: 2025-11-29 08:14:34.078 226310 DEBUG nova.storage.rbd_utils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] resizing rbd image 98ef0160-f50f-4264-a93b-31e6e8909b19_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:14:34 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[268468]: [NOTICE]   (268495) : New worker (268513) forked
Nov 29 03:14:34 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[268468]: [NOTICE]   (268495) : Loading success.
Nov 29 03:14:34 np0005539564 nova_compute[226295]: 2025-11-29 08:14:34.201 226310 DEBUG nova.objects.instance [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Lazy-loading 'migration_context' on Instance uuid 98ef0160-f50f-4264-a93b-31e6e8909b19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:34 np0005539564 nova_compute[226295]: 2025-11-29 08:14:34.216 226310 DEBUG nova.virt.libvirt.driver [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:14:34 np0005539564 nova_compute[226295]: 2025-11-29 08:14:34.217 226310 DEBUG nova.virt.libvirt.driver [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Ensure instance console log exists: /var/lib/nova/instances/98ef0160-f50f-4264-a93b-31e6e8909b19/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:14:34 np0005539564 nova_compute[226295]: 2025-11-29 08:14:34.217 226310 DEBUG oslo_concurrency.lockutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:34 np0005539564 nova_compute[226295]: 2025-11-29 08:14:34.218 226310 DEBUG oslo_concurrency.lockutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:34 np0005539564 nova_compute[226295]: 2025-11-29 08:14:34.218 226310 DEBUG oslo_concurrency.lockutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:34.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:34 np0005539564 nova_compute[226295]: 2025-11-29 08:14:34.449 226310 DEBUG nova.policy [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '462999b573374bfcb81584d2238b4b10', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '97aaebdc8caa4b86b7ec1a55d18e557d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:14:34 np0005539564 nova_compute[226295]: 2025-11-29 08:14:34.849 226310 DEBUG nova.compute.manager [req-18eec59b-e727-4cf4-b8ee-7e4b02871d76 req-186d6fc2-12f6-4090-a051-68385924d30c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:34 np0005539564 nova_compute[226295]: 2025-11-29 08:14:34.850 226310 DEBUG oslo_concurrency.lockutils [req-18eec59b-e727-4cf4-b8ee-7e4b02871d76 req-186d6fc2-12f6-4090-a051-68385924d30c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:34 np0005539564 nova_compute[226295]: 2025-11-29 08:14:34.850 226310 DEBUG oslo_concurrency.lockutils [req-18eec59b-e727-4cf4-b8ee-7e4b02871d76 req-186d6fc2-12f6-4090-a051-68385924d30c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:34 np0005539564 nova_compute[226295]: 2025-11-29 08:14:34.850 226310 DEBUG oslo_concurrency.lockutils [req-18eec59b-e727-4cf4-b8ee-7e4b02871d76 req-186d6fc2-12f6-4090-a051-68385924d30c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:34 np0005539564 nova_compute[226295]: 2025-11-29 08:14:34.850 226310 DEBUG nova.compute.manager [req-18eec59b-e727-4cf4-b8ee-7e4b02871d76 req-186d6fc2-12f6-4090-a051-68385924d30c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Processing event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:14:34 np0005539564 nova_compute[226295]: 2025-11-29 08:14:34.851 226310 DEBUG nova.compute.manager [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:14:34 np0005539564 nova_compute[226295]: 2025-11-29 08:14:34.856 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404074.856424, 48a6ffaa-4f03-4048-bd19-c50aea2863cc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:34 np0005539564 nova_compute[226295]: 2025-11-29 08:14:34.857 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:14:34 np0005539564 nova_compute[226295]: 2025-11-29 08:14:34.862 226310 INFO nova.virt.libvirt.driver [-] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Instance running successfully.#033[00m
Nov 29 03:14:34 np0005539564 nova_compute[226295]: 2025-11-29 08:14:34.862 226310 DEBUG nova.virt.libvirt.driver [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887#033[00m
Nov 29 03:14:34 np0005539564 nova_compute[226295]: 2025-11-29 08:14:34.885 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:34 np0005539564 nova_compute[226295]: 2025-11-29 08:14:34.888 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:14:34 np0005539564 nova_compute[226295]: 2025-11-29 08:14:34.925 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] During sync_power_state the instance has a pending task (resize_reverting). Skip.#033[00m
Nov 29 03:14:35 np0005539564 nova_compute[226295]: 2025-11-29 08:14:35.128 226310 INFO nova.compute.manager [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Updating instance to original state: 'active'#033[00m
Nov 29 03:14:35 np0005539564 nova_compute[226295]: 2025-11-29 08:14:35.231 226310 DEBUG nova.network.neutron [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Successfully created port: 9da2a7e5-fdef-478c-b459-29c95af9eeb6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:14:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:36.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:36.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:36 np0005539564 nova_compute[226295]: 2025-11-29 08:14:36.995 226310 DEBUG nova.compute.manager [req-6cb99c31-c24e-4315-be12-51f70495f65d req-4c7e206e-b14e-43c6-93f1-eadc8fa15310 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:36 np0005539564 nova_compute[226295]: 2025-11-29 08:14:36.996 226310 DEBUG oslo_concurrency.lockutils [req-6cb99c31-c24e-4315-be12-51f70495f65d req-4c7e206e-b14e-43c6-93f1-eadc8fa15310 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:36 np0005539564 nova_compute[226295]: 2025-11-29 08:14:36.996 226310 DEBUG oslo_concurrency.lockutils [req-6cb99c31-c24e-4315-be12-51f70495f65d req-4c7e206e-b14e-43c6-93f1-eadc8fa15310 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:36 np0005539564 nova_compute[226295]: 2025-11-29 08:14:36.996 226310 DEBUG oslo_concurrency.lockutils [req-6cb99c31-c24e-4315-be12-51f70495f65d req-4c7e206e-b14e-43c6-93f1-eadc8fa15310 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:36 np0005539564 nova_compute[226295]: 2025-11-29 08:14:36.996 226310 DEBUG nova.compute.manager [req-6cb99c31-c24e-4315-be12-51f70495f65d req-4c7e206e-b14e-43c6-93f1-eadc8fa15310 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] No waiting events found dispatching network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:36 np0005539564 nova_compute[226295]: 2025-11-29 08:14:36.996 226310 WARNING nova.compute.manager [req-6cb99c31-c24e-4315-be12-51f70495f65d req-4c7e206e-b14e-43c6-93f1-eadc8fa15310 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received unexpected event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f for instance with vm_state active and task_state None.#033[00m
Nov 29 03:14:37 np0005539564 nova_compute[226295]: 2025-11-29 08:14:37.164 226310 DEBUG nova.network.neutron [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Successfully updated port: 9da2a7e5-fdef-478c-b459-29c95af9eeb6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:14:37 np0005539564 nova_compute[226295]: 2025-11-29 08:14:37.185 226310 DEBUG oslo_concurrency.lockutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Acquiring lock "refresh_cache-98ef0160-f50f-4264-a93b-31e6e8909b19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:37 np0005539564 nova_compute[226295]: 2025-11-29 08:14:37.185 226310 DEBUG oslo_concurrency.lockutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Acquired lock "refresh_cache-98ef0160-f50f-4264-a93b-31e6e8909b19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:37 np0005539564 nova_compute[226295]: 2025-11-29 08:14:37.186 226310 DEBUG nova.network.neutron [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:14:37 np0005539564 nova_compute[226295]: 2025-11-29 08:14:37.203 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e290 e290: 3 total, 3 up, 3 in
Nov 29 03:14:37 np0005539564 nova_compute[226295]: 2025-11-29 08:14:37.499 226310 DEBUG nova.compute.manager [req-0a5585ce-c3fe-421c-a012-ed0bb5535ddf req-d50bf30f-3fd2-4b3d-8968-3b41c86e4e30 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Received event network-changed-9da2a7e5-fdef-478c-b459-29c95af9eeb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:37 np0005539564 nova_compute[226295]: 2025-11-29 08:14:37.499 226310 DEBUG nova.compute.manager [req-0a5585ce-c3fe-421c-a012-ed0bb5535ddf req-d50bf30f-3fd2-4b3d-8968-3b41c86e4e30 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Refreshing instance network info cache due to event network-changed-9da2a7e5-fdef-478c-b459-29c95af9eeb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:14:37 np0005539564 nova_compute[226295]: 2025-11-29 08:14:37.500 226310 DEBUG oslo_concurrency.lockutils [req-0a5585ce-c3fe-421c-a012-ed0bb5535ddf req-d50bf30f-3fd2-4b3d-8968-3b41c86e4e30 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-98ef0160-f50f-4264-a93b-31e6e8909b19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:37 np0005539564 nova_compute[226295]: 2025-11-29 08:14:37.619 226310 DEBUG nova.network.neutron [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:14:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:38.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:38 np0005539564 nova_compute[226295]: 2025-11-29 08:14:38.058 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:38.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:38 np0005539564 nova_compute[226295]: 2025-11-29 08:14:38.441 226310 DEBUG oslo_concurrency.lockutils [None req-9af8d287-9dfd-4701-8d3c-9060b0199d4b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:38 np0005539564 nova_compute[226295]: 2025-11-29 08:14:38.442 226310 DEBUG oslo_concurrency.lockutils [None req-9af8d287-9dfd-4701-8d3c-9060b0199d4b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:38 np0005539564 nova_compute[226295]: 2025-11-29 08:14:38.442 226310 DEBUG oslo_concurrency.lockutils [None req-9af8d287-9dfd-4701-8d3c-9060b0199d4b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:38 np0005539564 nova_compute[226295]: 2025-11-29 08:14:38.442 226310 DEBUG oslo_concurrency.lockutils [None req-9af8d287-9dfd-4701-8d3c-9060b0199d4b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:38 np0005539564 nova_compute[226295]: 2025-11-29 08:14:38.443 226310 DEBUG oslo_concurrency.lockutils [None req-9af8d287-9dfd-4701-8d3c-9060b0199d4b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:38 np0005539564 nova_compute[226295]: 2025-11-29 08:14:38.444 226310 INFO nova.compute.manager [None req-9af8d287-9dfd-4701-8d3c-9060b0199d4b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Terminating instance#033[00m
Nov 29 03:14:38 np0005539564 nova_compute[226295]: 2025-11-29 08:14:38.445 226310 DEBUG nova.compute.manager [None req-9af8d287-9dfd-4701-8d3c-9060b0199d4b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:14:38 np0005539564 kernel: tap18df9eaa-14 (unregistering): left promiscuous mode
Nov 29 03:14:38 np0005539564 NetworkManager[48997]: <info>  [1764404078.5043] device (tap18df9eaa-14): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:14:38 np0005539564 nova_compute[226295]: 2025-11-29 08:14:38.516 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:38 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:38Z|00397|binding|INFO|Releasing lport 18df9eaa-1422-4e4b-ac00-67cdb84e329f from this chassis (sb_readonly=0)
Nov 29 03:14:38 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:38Z|00398|binding|INFO|Setting lport 18df9eaa-1422-4e4b-ac00-67cdb84e329f down in Southbound
Nov 29 03:14:38 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:38Z|00399|binding|INFO|Removing iface tap18df9eaa-14 ovn-installed in OVS
Nov 29 03:14:38 np0005539564 nova_compute[226295]: 2025-11-29 08:14:38.520 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:38.526 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:57:d0 10.100.0.13'], port_security=['fa:16:3e:16:57:d0 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '48a6ffaa-4f03-4048-bd19-c50aea2863cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba867fac17034bb28fe2cdb0fff3af2b', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'a54db614-4504-4e8e-a3a5-27d3f60f6cdf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.229', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5e4b2f3-5e6e-48f8-b35a-ab61c62108a6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=18df9eaa-1422-4e4b-ac00-67cdb84e329f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:38.529 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 18df9eaa-1422-4e4b-ac00-67cdb84e329f in datapath 4d5b8c11-b69e-4a74-846b-03943fb29a81 unbound from our chassis#033[00m
Nov 29 03:14:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:38.533 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d5b8c11-b69e-4a74-846b-03943fb29a81, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:14:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:38.535 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9ecb74f6-47db-47a1-b133-afe5b1029264]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:38.536 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 namespace which is not needed anymore#033[00m
Nov 29 03:14:38 np0005539564 nova_compute[226295]: 2025-11-29 08:14:38.561 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:38 np0005539564 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Nov 29 03:14:38 np0005539564 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000006f.scope: Consumed 4.527s CPU time.
Nov 29 03:14:38 np0005539564 systemd-machined[190128]: Machine qemu-47-instance-0000006f terminated.
Nov 29 03:14:38 np0005539564 nova_compute[226295]: 2025-11-29 08:14:38.687 226310 INFO nova.virt.libvirt.driver [-] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Instance destroyed successfully.#033[00m
Nov 29 03:14:38 np0005539564 nova_compute[226295]: 2025-11-29 08:14:38.687 226310 DEBUG nova.objects.instance [None req-9af8d287-9dfd-4701-8d3c-9060b0199d4b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'resources' on Instance uuid 48a6ffaa-4f03-4048-bd19-c50aea2863cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:38 np0005539564 nova_compute[226295]: 2025-11-29 08:14:38.710 226310 DEBUG nova.virt.libvirt.vif [None req-9af8d287-9dfd-4701-8d3c-9060b0199d4b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:13:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-127920487',display_name='tempest-ServerActionsTestOtherB-server-127920487',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-127920487',id=111,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFR2A4rqHty1PxOihJGr6CLeieY2A6hQbQhWuRk7yYUwOYPvlgBFCeYpXPRg+EImok8PXcjU56J6yMvwfigxZeP4BreCe+MzD3uTdqP8PHZ6U4YNDwkQqqigObBB8nAoaw==',key_name='tempest-keypair-98169709',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:14:34Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-9qzmh09j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:14:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=48a6ffaa-4f03-4048-bd19-c50aea2863cc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:14:38 np0005539564 nova_compute[226295]: 2025-11-29 08:14:38.711 226310 DEBUG nova.network.os_vif_util [None req-9af8d287-9dfd-4701-8d3c-9060b0199d4b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:38 np0005539564 nova_compute[226295]: 2025-11-29 08:14:38.712 226310 DEBUG nova.network.os_vif_util [None req-9af8d287-9dfd-4701-8d3c-9060b0199d4b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:57:d0,bridge_name='br-int',has_traffic_filtering=True,id=18df9eaa-1422-4e4b-ac00-67cdb84e329f,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df9eaa-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:38 np0005539564 nova_compute[226295]: 2025-11-29 08:14:38.712 226310 DEBUG os_vif [None req-9af8d287-9dfd-4701-8d3c-9060b0199d4b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:57:d0,bridge_name='br-int',has_traffic_filtering=True,id=18df9eaa-1422-4e4b-ac00-67cdb84e329f,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df9eaa-14') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:14:38 np0005539564 nova_compute[226295]: 2025-11-29 08:14:38.715 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:38 np0005539564 nova_compute[226295]: 2025-11-29 08:14:38.716 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18df9eaa-14, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:38 np0005539564 nova_compute[226295]: 2025-11-29 08:14:38.718 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:38 np0005539564 nova_compute[226295]: 2025-11-29 08:14:38.721 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:14:38 np0005539564 nova_compute[226295]: 2025-11-29 08:14:38.723 226310 INFO os_vif [None req-9af8d287-9dfd-4701-8d3c-9060b0199d4b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:57:d0,bridge_name='br-int',has_traffic_filtering=True,id=18df9eaa-1422-4e4b-ac00-67cdb84e329f,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df9eaa-14')#033[00m
Nov 29 03:14:38 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[268468]: [NOTICE]   (268495) : haproxy version is 2.8.14-c23fe91
Nov 29 03:14:38 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[268468]: [NOTICE]   (268495) : path to executable is /usr/sbin/haproxy
Nov 29 03:14:38 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[268468]: [WARNING]  (268495) : Exiting Master process...
Nov 29 03:14:38 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[268468]: [ALERT]    (268495) : Current worker (268513) exited with code 143 (Terminated)
Nov 29 03:14:38 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[268468]: [WARNING]  (268495) : All workers exited. Exiting... (0)
Nov 29 03:14:38 np0005539564 systemd[1]: libpod-b389f9b165a61b1488b5f71f84a1b6bf95b10abd03b289c59f04bc4ef526ccab.scope: Deactivated successfully.
Nov 29 03:14:38 np0005539564 podman[268569]: 2025-11-29 08:14:38.750557386 +0000 UTC m=+0.075885428 container died b389f9b165a61b1488b5f71f84a1b6bf95b10abd03b289c59f04bc4ef526ccab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:14:38 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b389f9b165a61b1488b5f71f84a1b6bf95b10abd03b289c59f04bc4ef526ccab-userdata-shm.mount: Deactivated successfully.
Nov 29 03:14:38 np0005539564 systemd[1]: var-lib-containers-storage-overlay-0593163c9d3bb2a0cc854bf7d3d5400e43d6894644a44051a442416141cf6b7e-merged.mount: Deactivated successfully.
Nov 29 03:14:38 np0005539564 podman[268569]: 2025-11-29 08:14:38.796652309 +0000 UTC m=+0.121980321 container cleanup b389f9b165a61b1488b5f71f84a1b6bf95b10abd03b289c59f04bc4ef526ccab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:14:38 np0005539564 systemd[1]: libpod-conmon-b389f9b165a61b1488b5f71f84a1b6bf95b10abd03b289c59f04bc4ef526ccab.scope: Deactivated successfully.
Nov 29 03:14:38 np0005539564 podman[268627]: 2025-11-29 08:14:38.870371418 +0000 UTC m=+0.049002053 container remove b389f9b165a61b1488b5f71f84a1b6bf95b10abd03b289c59f04bc4ef526ccab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 03:14:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:38.876 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1de77351-0bb3-4797-b64b-54c3ca4e440b]: (4, ('Sat Nov 29 08:14:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 (b389f9b165a61b1488b5f71f84a1b6bf95b10abd03b289c59f04bc4ef526ccab)\nb389f9b165a61b1488b5f71f84a1b6bf95b10abd03b289c59f04bc4ef526ccab\nSat Nov 29 08:14:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 (b389f9b165a61b1488b5f71f84a1b6bf95b10abd03b289c59f04bc4ef526ccab)\nb389f9b165a61b1488b5f71f84a1b6bf95b10abd03b289c59f04bc4ef526ccab\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:38.878 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[eb7592e2-e2d7-4661-bff3-af71139e3cc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:38.879 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d5b8c11-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:38 np0005539564 nova_compute[226295]: 2025-11-29 08:14:38.881 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:38 np0005539564 kernel: tap4d5b8c11-b0: left promiscuous mode
Nov 29 03:14:38 np0005539564 nova_compute[226295]: 2025-11-29 08:14:38.895 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:38.898 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e11da993-87c9-4532-9a55-e2ca912f8486]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:38.914 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8e0cd535-7afd-404b-8adc-c8c3085dd10d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:38.915 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a50ab9a6-88f6-4f5b-87ad-510c2f289537]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:38.932 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9a5b4930-a911-4765-a841-3791c164037c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704791, 'reachable_time': 27313, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268642, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:38.934 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:14:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:38.934 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[d747be6c-d412-4ca6-8ace-deadf7638462]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:38 np0005539564 systemd[1]: run-netns-ovnmeta\x2d4d5b8c11\x2db69e\x2d4a74\x2d846b\x2d03943fb29a81.mount: Deactivated successfully.
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.120 226310 INFO nova.virt.libvirt.driver [None req-9af8d287-9dfd-4701-8d3c-9060b0199d4b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Deleting instance files /var/lib/nova/instances/48a6ffaa-4f03-4048-bd19-c50aea2863cc_del#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.121 226310 INFO nova.virt.libvirt.driver [None req-9af8d287-9dfd-4701-8d3c-9060b0199d4b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Deletion of /var/lib/nova/instances/48a6ffaa-4f03-4048-bd19-c50aea2863cc_del complete#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.200 226310 INFO nova.compute.manager [None req-9af8d287-9dfd-4701-8d3c-9060b0199d4b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.201 226310 DEBUG oslo.service.loopingcall [None req-9af8d287-9dfd-4701-8d3c-9060b0199d4b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.201 226310 DEBUG nova.compute.manager [-] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.201 226310 DEBUG nova.network.neutron [-] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.431 226310 DEBUG nova.network.neutron [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Updating instance_info_cache with network_info: [{"id": "9da2a7e5-fdef-478c-b459-29c95af9eeb6", "address": "fa:16:3e:c4:d2:e1", "network": {"id": "6481d98a-5ea4-43df-9201-fd592376b789", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1662741541-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97aaebdc8caa4b86b7ec1a55d18e557d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9da2a7e5-fd", "ovs_interfaceid": "9da2a7e5-fdef-478c-b459-29c95af9eeb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.457 226310 DEBUG oslo_concurrency.lockutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Releasing lock "refresh_cache-98ef0160-f50f-4264-a93b-31e6e8909b19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.457 226310 DEBUG nova.compute.manager [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Instance network_info: |[{"id": "9da2a7e5-fdef-478c-b459-29c95af9eeb6", "address": "fa:16:3e:c4:d2:e1", "network": {"id": "6481d98a-5ea4-43df-9201-fd592376b789", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1662741541-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97aaebdc8caa4b86b7ec1a55d18e557d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9da2a7e5-fd", "ovs_interfaceid": "9da2a7e5-fdef-478c-b459-29c95af9eeb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.457 226310 DEBUG oslo_concurrency.lockutils [req-0a5585ce-c3fe-421c-a012-ed0bb5535ddf req-d50bf30f-3fd2-4b3d-8968-3b41c86e4e30 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-98ef0160-f50f-4264-a93b-31e6e8909b19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.458 226310 DEBUG nova.network.neutron [req-0a5585ce-c3fe-421c-a012-ed0bb5535ddf req-d50bf30f-3fd2-4b3d-8968-3b41c86e4e30 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Refreshing network info cache for port 9da2a7e5-fdef-478c-b459-29c95af9eeb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.460 226310 DEBUG nova.virt.libvirt.driver [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Start _get_guest_xml network_info=[{"id": "9da2a7e5-fdef-478c-b459-29c95af9eeb6", "address": "fa:16:3e:c4:d2:e1", "network": {"id": "6481d98a-5ea4-43df-9201-fd592376b789", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1662741541-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97aaebdc8caa4b86b7ec1a55d18e557d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9da2a7e5-fd", "ovs_interfaceid": "9da2a7e5-fdef-478c-b459-29c95af9eeb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.465 226310 WARNING nova.virt.libvirt.driver [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.471 226310 DEBUG nova.virt.libvirt.host [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.472 226310 DEBUG nova.virt.libvirt.host [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.477 226310 DEBUG nova.virt.libvirt.host [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.477 226310 DEBUG nova.virt.libvirt.host [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.479 226310 DEBUG nova.virt.libvirt.driver [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.479 226310 DEBUG nova.virt.hardware [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.480 226310 DEBUG nova.virt.hardware [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.481 226310 DEBUG nova.virt.hardware [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.481 226310 DEBUG nova.virt.hardware [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.481 226310 DEBUG nova.virt.hardware [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.482 226310 DEBUG nova.virt.hardware [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.482 226310 DEBUG nova.virt.hardware [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.482 226310 DEBUG nova.virt.hardware [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.483 226310 DEBUG nova.virt.hardware [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.483 226310 DEBUG nova.virt.hardware [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.484 226310 DEBUG nova.virt.hardware [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.488 226310 DEBUG oslo_concurrency.processutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.616 226310 DEBUG nova.compute.manager [req-bcd77925-519d-4cbd-8e84-cc9b68f3699b req-4f3edf57-0076-40c2-a833-6b64deaeb689 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received event network-vif-unplugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.617 226310 DEBUG oslo_concurrency.lockutils [req-bcd77925-519d-4cbd-8e84-cc9b68f3699b req-4f3edf57-0076-40c2-a833-6b64deaeb689 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.617 226310 DEBUG oslo_concurrency.lockutils [req-bcd77925-519d-4cbd-8e84-cc9b68f3699b req-4f3edf57-0076-40c2-a833-6b64deaeb689 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.618 226310 DEBUG oslo_concurrency.lockutils [req-bcd77925-519d-4cbd-8e84-cc9b68f3699b req-4f3edf57-0076-40c2-a833-6b64deaeb689 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.618 226310 DEBUG nova.compute.manager [req-bcd77925-519d-4cbd-8e84-cc9b68f3699b req-4f3edf57-0076-40c2-a833-6b64deaeb689 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] No waiting events found dispatching network-vif-unplugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.618 226310 DEBUG nova.compute.manager [req-bcd77925-519d-4cbd-8e84-cc9b68f3699b req-4f3edf57-0076-40c2-a833-6b64deaeb689 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received event network-vif-unplugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:14:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:14:39 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3784317977' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:14:39 np0005539564 nova_compute[226295]: 2025-11-29 08:14:39.987 226310 DEBUG oslo_concurrency.processutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.027 226310 DEBUG nova.storage.rbd_utils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] rbd image 98ef0160-f50f-4264-a93b-31e6e8909b19_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.032 226310 DEBUG oslo_concurrency.processutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:40.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:40.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:14:40 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1395427114' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.585 226310 DEBUG oslo_concurrency.processutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.588 226310 DEBUG nova.virt.libvirt.vif [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:14:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-157958247',display_name='tempest-ServerAddressesNegativeTestJSON-server-157958247',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-157958247',id=114,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='97aaebdc8caa4b86b7ec1a55d18e557d',ramdisk_id='',reservation_id='r-0g35qhvi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-746806271',owner_u
ser_name='tempest-ServerAddressesNegativeTestJSON-746806271-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:14:33Z,user_data=None,user_id='462999b573374bfcb81584d2238b4b10',uuid=98ef0160-f50f-4264-a93b-31e6e8909b19,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9da2a7e5-fdef-478c-b459-29c95af9eeb6", "address": "fa:16:3e:c4:d2:e1", "network": {"id": "6481d98a-5ea4-43df-9201-fd592376b789", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1662741541-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97aaebdc8caa4b86b7ec1a55d18e557d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9da2a7e5-fd", "ovs_interfaceid": "9da2a7e5-fdef-478c-b459-29c95af9eeb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.589 226310 DEBUG nova.network.os_vif_util [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Converting VIF {"id": "9da2a7e5-fdef-478c-b459-29c95af9eeb6", "address": "fa:16:3e:c4:d2:e1", "network": {"id": "6481d98a-5ea4-43df-9201-fd592376b789", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1662741541-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97aaebdc8caa4b86b7ec1a55d18e557d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9da2a7e5-fd", "ovs_interfaceid": "9da2a7e5-fdef-478c-b459-29c95af9eeb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.590 226310 DEBUG nova.network.os_vif_util [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:d2:e1,bridge_name='br-int',has_traffic_filtering=True,id=9da2a7e5-fdef-478c-b459-29c95af9eeb6,network=Network(6481d98a-5ea4-43df-9201-fd592376b789),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9da2a7e5-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.592 226310 DEBUG nova.objects.instance [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Lazy-loading 'pci_devices' on Instance uuid 98ef0160-f50f-4264-a93b-31e6e8909b19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.701 226310 DEBUG nova.virt.libvirt.driver [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:14:40 np0005539564 nova_compute[226295]:  <uuid>98ef0160-f50f-4264-a93b-31e6e8909b19</uuid>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:  <name>instance-00000072</name>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerAddressesNegativeTestJSON-server-157958247</nova:name>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:14:39</nova:creationTime>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:14:40 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:        <nova:user uuid="462999b573374bfcb81584d2238b4b10">tempest-ServerAddressesNegativeTestJSON-746806271-project-member</nova:user>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:        <nova:project uuid="97aaebdc8caa4b86b7ec1a55d18e557d">tempest-ServerAddressesNegativeTestJSON-746806271</nova:project>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:        <nova:port uuid="9da2a7e5-fdef-478c-b459-29c95af9eeb6">
Nov 29 03:14:40 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <entry name="serial">98ef0160-f50f-4264-a93b-31e6e8909b19</entry>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <entry name="uuid">98ef0160-f50f-4264-a93b-31e6e8909b19</entry>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/98ef0160-f50f-4264-a93b-31e6e8909b19_disk">
Nov 29 03:14:40 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:14:40 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/98ef0160-f50f-4264-a93b-31e6e8909b19_disk.config">
Nov 29 03:14:40 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:14:40 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:c4:d2:e1"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <target dev="tap9da2a7e5-fd"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/98ef0160-f50f-4264-a93b-31e6e8909b19/console.log" append="off"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:14:40 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:14:40 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:14:40 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:14:40 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.703 226310 DEBUG nova.compute.manager [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Preparing to wait for external event network-vif-plugged-9da2a7e5-fdef-478c-b459-29c95af9eeb6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.704 226310 DEBUG oslo_concurrency.lockutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Acquiring lock "98ef0160-f50f-4264-a93b-31e6e8909b19-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.704 226310 DEBUG oslo_concurrency.lockutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Lock "98ef0160-f50f-4264-a93b-31e6e8909b19-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.705 226310 DEBUG oslo_concurrency.lockutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Lock "98ef0160-f50f-4264-a93b-31e6e8909b19-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.706 226310 DEBUG nova.virt.libvirt.vif [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:14:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-157958247',display_name='tempest-ServerAddressesNegativeTestJSON-server-157958247',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-157958247',id=114,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='97aaebdc8caa4b86b7ec1a55d18e557d',ramdisk_id='',reservation_id='r-0g35qhvi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-746806271',owner_user_name='tempest-ServerAddressesNegativeTestJSON-746806271-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:14:33Z,user_data=None,user_id='462999b573374bfcb81584d2238b4b10',uuid=98ef0160-f50f-4264-a93b-31e6e8909b19,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9da2a7e5-fdef-478c-b459-29c95af9eeb6", "address": "fa:16:3e:c4:d2:e1", "network": {"id": "6481d98a-5ea4-43df-9201-fd592376b789", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1662741541-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97aaebdc8caa4b86b7ec1a55d18e557d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9da2a7e5-fd", "ovs_interfaceid": "9da2a7e5-fdef-478c-b459-29c95af9eeb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.707 226310 DEBUG nova.network.os_vif_util [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Converting VIF {"id": "9da2a7e5-fdef-478c-b459-29c95af9eeb6", "address": "fa:16:3e:c4:d2:e1", "network": {"id": "6481d98a-5ea4-43df-9201-fd592376b789", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1662741541-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97aaebdc8caa4b86b7ec1a55d18e557d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9da2a7e5-fd", "ovs_interfaceid": "9da2a7e5-fdef-478c-b459-29c95af9eeb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.708 226310 DEBUG nova.network.os_vif_util [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:d2:e1,bridge_name='br-int',has_traffic_filtering=True,id=9da2a7e5-fdef-478c-b459-29c95af9eeb6,network=Network(6481d98a-5ea4-43df-9201-fd592376b789),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9da2a7e5-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.709 226310 DEBUG os_vif [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:d2:e1,bridge_name='br-int',has_traffic_filtering=True,id=9da2a7e5-fdef-478c-b459-29c95af9eeb6,network=Network(6481d98a-5ea4-43df-9201-fd592376b789),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9da2a7e5-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.710 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.712 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.712 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.717 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.718 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9da2a7e5-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.719 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9da2a7e5-fd, col_values=(('external_ids', {'iface-id': '9da2a7e5-fdef-478c-b459-29c95af9eeb6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:d2:e1', 'vm-uuid': '98ef0160-f50f-4264-a93b-31e6e8909b19'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:40 np0005539564 NetworkManager[48997]: <info>  [1764404080.7242] manager: (tap9da2a7e5-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/202)
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.724 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.729 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.732 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.733 226310 INFO os_vif [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:d2:e1,bridge_name='br-int',has_traffic_filtering=True,id=9da2a7e5-fdef-478c-b459-29c95af9eeb6,network=Network(6481d98a-5ea4-43df-9201-fd592376b789),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9da2a7e5-fd')#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.813 226310 DEBUG nova.virt.libvirt.driver [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.813 226310 DEBUG nova.virt.libvirt.driver [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.814 226310 DEBUG nova.virt.libvirt.driver [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] No VIF found with MAC fa:16:3e:c4:d2:e1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.814 226310 INFO nova.virt.libvirt.driver [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Using config drive#033[00m
Nov 29 03:14:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:40 np0005539564 nova_compute[226295]: 2025-11-29 08:14:40.850 226310 DEBUG nova.storage.rbd_utils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] rbd image 98ef0160-f50f-4264-a93b-31e6e8909b19_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:41 np0005539564 nova_compute[226295]: 2025-11-29 08:14:41.074 226310 DEBUG nova.network.neutron [-] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:41 np0005539564 nova_compute[226295]: 2025-11-29 08:14:41.125 226310 INFO nova.compute.manager [-] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Took 1.92 seconds to deallocate network for instance.#033[00m
Nov 29 03:14:41 np0005539564 nova_compute[226295]: 2025-11-29 08:14:41.267 226310 DEBUG nova.compute.manager [req-e2fdbbd8-b56c-43b8-9dc5-bf5f6b5c3cde req-3c187b2c-6d1a-4f66-ba70-e0ef817a0794 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received event network-vif-deleted-18df9eaa-1422-4e4b-ac00-67cdb84e329f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:41 np0005539564 nova_compute[226295]: 2025-11-29 08:14:41.423 226310 INFO nova.compute.manager [None req-9af8d287-9dfd-4701-8d3c-9060b0199d4b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Took 0.30 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:14:41 np0005539564 nova_compute[226295]: 2025-11-29 08:14:41.429 226310 INFO nova.virt.libvirt.driver [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Creating config drive at /var/lib/nova/instances/98ef0160-f50f-4264-a93b-31e6e8909b19/disk.config#033[00m
Nov 29 03:14:41 np0005539564 nova_compute[226295]: 2025-11-29 08:14:41.435 226310 DEBUG oslo_concurrency.processutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98ef0160-f50f-4264-a93b-31e6e8909b19/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3k_x_0r7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:41 np0005539564 nova_compute[226295]: 2025-11-29 08:14:41.526 226310 DEBUG oslo_concurrency.lockutils [None req-9af8d287-9dfd-4701-8d3c-9060b0199d4b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:41 np0005539564 nova_compute[226295]: 2025-11-29 08:14:41.526 226310 DEBUG oslo_concurrency.lockutils [None req-9af8d287-9dfd-4701-8d3c-9060b0199d4b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:41 np0005539564 nova_compute[226295]: 2025-11-29 08:14:41.582 226310 DEBUG oslo_concurrency.processutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98ef0160-f50f-4264-a93b-31e6e8909b19/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3k_x_0r7" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:41 np0005539564 nova_compute[226295]: 2025-11-29 08:14:41.621 226310 DEBUG nova.storage.rbd_utils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] rbd image 98ef0160-f50f-4264-a93b-31e6e8909b19_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:41 np0005539564 nova_compute[226295]: 2025-11-29 08:14:41.625 226310 DEBUG oslo_concurrency.processutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98ef0160-f50f-4264-a93b-31e6e8909b19/disk.config 98ef0160-f50f-4264-a93b-31e6e8909b19_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:41 np0005539564 nova_compute[226295]: 2025-11-29 08:14:41.692 226310 DEBUG oslo_concurrency.processutils [None req-9af8d287-9dfd-4701-8d3c-9060b0199d4b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:41 np0005539564 nova_compute[226295]: 2025-11-29 08:14:41.743 226310 DEBUG nova.compute.manager [req-3136be22-37e7-42fc-907b-624bd66b16fb req-3b774589-e76c-4a20-bb1d-9bede105947b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:41 np0005539564 nova_compute[226295]: 2025-11-29 08:14:41.744 226310 DEBUG oslo_concurrency.lockutils [req-3136be22-37e7-42fc-907b-624bd66b16fb req-3b774589-e76c-4a20-bb1d-9bede105947b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:41 np0005539564 nova_compute[226295]: 2025-11-29 08:14:41.744 226310 DEBUG oslo_concurrency.lockutils [req-3136be22-37e7-42fc-907b-624bd66b16fb req-3b774589-e76c-4a20-bb1d-9bede105947b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:41 np0005539564 nova_compute[226295]: 2025-11-29 08:14:41.744 226310 DEBUG oslo_concurrency.lockutils [req-3136be22-37e7-42fc-907b-624bd66b16fb req-3b774589-e76c-4a20-bb1d-9bede105947b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:41 np0005539564 nova_compute[226295]: 2025-11-29 08:14:41.744 226310 DEBUG nova.compute.manager [req-3136be22-37e7-42fc-907b-624bd66b16fb req-3b774589-e76c-4a20-bb1d-9bede105947b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] No waiting events found dispatching network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:41 np0005539564 nova_compute[226295]: 2025-11-29 08:14:41.745 226310 WARNING nova.compute.manager [req-3136be22-37e7-42fc-907b-624bd66b16fb req-3b774589-e76c-4a20-bb1d-9bede105947b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received unexpected event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:14:41 np0005539564 nova_compute[226295]: 2025-11-29 08:14:41.815 226310 DEBUG oslo_concurrency.processutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98ef0160-f50f-4264-a93b-31e6e8909b19/disk.config 98ef0160-f50f-4264-a93b-31e6e8909b19_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:41 np0005539564 nova_compute[226295]: 2025-11-29 08:14:41.816 226310 INFO nova.virt.libvirt.driver [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Deleting local config drive /var/lib/nova/instances/98ef0160-f50f-4264-a93b-31e6e8909b19/disk.config because it was imported into RBD.#033[00m
Nov 29 03:14:41 np0005539564 kernel: tap9da2a7e5-fd: entered promiscuous mode
Nov 29 03:14:41 np0005539564 NetworkManager[48997]: <info>  [1764404081.8891] manager: (tap9da2a7e5-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/203)
Nov 29 03:14:41 np0005539564 nova_compute[226295]: 2025-11-29 08:14:41.890 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:41 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:41Z|00400|binding|INFO|Claiming lport 9da2a7e5-fdef-478c-b459-29c95af9eeb6 for this chassis.
Nov 29 03:14:41 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:41Z|00401|binding|INFO|9da2a7e5-fdef-478c-b459-29c95af9eeb6: Claiming fa:16:3e:c4:d2:e1 10.100.0.3
Nov 29 03:14:41 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:41Z|00402|binding|INFO|Setting lport 9da2a7e5-fdef-478c-b459-29c95af9eeb6 ovn-installed in OVS
Nov 29 03:14:41 np0005539564 systemd-udevd[268797]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:14:41 np0005539564 nova_compute[226295]: 2025-11-29 08:14:41.924 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:41 np0005539564 nova_compute[226295]: 2025-11-29 08:14:41.926 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:41 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:41Z|00403|binding|INFO|Setting lport 9da2a7e5-fdef-478c-b459-29c95af9eeb6 up in Southbound
Nov 29 03:14:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:41.929 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:d2:e1 10.100.0.3'], port_security=['fa:16:3e:c4:d2:e1 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '98ef0160-f50f-4264-a93b-31e6e8909b19', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6481d98a-5ea4-43df-9201-fd592376b789', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '97aaebdc8caa4b86b7ec1a55d18e557d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '97410aaf-988a-43a0-8396-7354ca2cded8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ade529b5-c2ca-4ce5-926d-6122417e8a83, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=9da2a7e5-fdef-478c-b459-29c95af9eeb6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:41.930 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 9da2a7e5-fdef-478c-b459-29c95af9eeb6 in datapath 6481d98a-5ea4-43df-9201-fd592376b789 bound to our chassis#033[00m
Nov 29 03:14:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:41.932 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6481d98a-5ea4-43df-9201-fd592376b789#033[00m
Nov 29 03:14:41 np0005539564 NetworkManager[48997]: <info>  [1764404081.9452] device (tap9da2a7e5-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:14:41 np0005539564 NetworkManager[48997]: <info>  [1764404081.9465] device (tap9da2a7e5-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:14:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:41.946 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4a80504f-ff05-4323-8525-dc459b7de7da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:41.947 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6481d98a-51 in ovnmeta-6481d98a-5ea4-43df-9201-fd592376b789 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:14:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:41.949 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6481d98a-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:14:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:41.949 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[92ef9348-4ce9-4490-8be0-5c64a7c45e68]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:41.950 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[cc72d643-a903-413a-ab65-ec64274f6112]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:41 np0005539564 systemd-machined[190128]: New machine qemu-48-instance-00000072.
Nov 29 03:14:41 np0005539564 systemd[1]: Started Virtual Machine qemu-48-instance-00000072.
Nov 29 03:14:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:41.966 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[93218376-5ee2-4765-9d67-52307b15d28b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:41.993 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[71e76a94-dc6b-4ce6-a961-41d838d4a212]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:42.030 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[7190fe9b-f234-41b6-9cc7-ad4615088f97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:42.036 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e5b35787-93bf-4da5-8b4d-c66cb7d34e7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:42 np0005539564 NetworkManager[48997]: <info>  [1764404082.0386] manager: (tap6481d98a-50): new Veth device (/org/freedesktop/NetworkManager/Devices/204)
Nov 29 03:14:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:42.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:42.080 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[e34d9235-6aed-4b2d-8869-b7d55633e41f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:42.084 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[f8896b93-a0a2-4471-95b0-28c255aa4c09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:42 np0005539564 NetworkManager[48997]: <info>  [1764404082.1215] device (tap6481d98a-50): carrier: link connected
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:42.135 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[effc8584-be4d-4e66-9398-af0a64d8b11f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:42 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2928880347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:42.158 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ce80b033-a1a0-4156-a8df-953cef319c25]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6481d98a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:f1:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705680, 'reachable_time': 23979, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268833, 'error': None, 'target': 'ovnmeta-6481d98a-5ea4-43df-9201-fd592376b789', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:42 np0005539564 nova_compute[226295]: 2025-11-29 08:14:42.174 226310 DEBUG oslo_concurrency.processutils [None req-9af8d287-9dfd-4701-8d3c-9060b0199d4b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:42.179 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4c3e470b-15a1-464f-a3a8-8cda824f9935]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe91:f12b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 705680, 'tstamp': 705680}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268836, 'error': None, 'target': 'ovnmeta-6481d98a-5ea4-43df-9201-fd592376b789', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:42 np0005539564 nova_compute[226295]: 2025-11-29 08:14:42.182 226310 DEBUG nova.compute.provider_tree [None req-9af8d287-9dfd-4701-8d3c-9060b0199d4b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:14:42 np0005539564 nova_compute[226295]: 2025-11-29 08:14:42.197 226310 DEBUG nova.network.neutron [req-0a5585ce-c3fe-421c-a012-ed0bb5535ddf req-d50bf30f-3fd2-4b3d-8968-3b41c86e4e30 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Updated VIF entry in instance network info cache for port 9da2a7e5-fdef-478c-b459-29c95af9eeb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:14:42 np0005539564 nova_compute[226295]: 2025-11-29 08:14:42.197 226310 DEBUG nova.network.neutron [req-0a5585ce-c3fe-421c-a012-ed0bb5535ddf req-d50bf30f-3fd2-4b3d-8968-3b41c86e4e30 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Updating instance_info_cache with network_info: [{"id": "9da2a7e5-fdef-478c-b459-29c95af9eeb6", "address": "fa:16:3e:c4:d2:e1", "network": {"id": "6481d98a-5ea4-43df-9201-fd592376b789", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1662741541-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97aaebdc8caa4b86b7ec1a55d18e557d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9da2a7e5-fd", "ovs_interfaceid": "9da2a7e5-fdef-478c-b459-29c95af9eeb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:42 np0005539564 nova_compute[226295]: 2025-11-29 08:14:42.202 226310 DEBUG nova.scheduler.client.report [None req-9af8d287-9dfd-4701-8d3c-9060b0199d4b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:42.203 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[09c8759b-f4f6-4a13-b1cd-8e74fe431897]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6481d98a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:f1:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705680, 'reachable_time': 23979, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268837, 'error': None, 'target': 'ovnmeta-6481d98a-5ea4-43df-9201-fd592376b789', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:42 np0005539564 nova_compute[226295]: 2025-11-29 08:14:42.208 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:42 np0005539564 nova_compute[226295]: 2025-11-29 08:14:42.221 226310 DEBUG oslo_concurrency.lockutils [req-0a5585ce-c3fe-421c-a012-ed0bb5535ddf req-d50bf30f-3fd2-4b3d-8968-3b41c86e4e30 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-98ef0160-f50f-4264-a93b-31e6e8909b19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:42.253 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7ebe7237-5abf-4f86-b4ea-c1b11eaf48f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:42 np0005539564 nova_compute[226295]: 2025-11-29 08:14:42.260 226310 DEBUG oslo_concurrency.lockutils [None req-9af8d287-9dfd-4701-8d3c-9060b0199d4b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:42 np0005539564 nova_compute[226295]: 2025-11-29 08:14:42.287 226310 INFO nova.scheduler.client.report [None req-9af8d287-9dfd-4701-8d3c-9060b0199d4b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Deleted allocations for instance 48a6ffaa-4f03-4048-bd19-c50aea2863cc#033[00m
Nov 29 03:14:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:42.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:42.336 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4040975f-db7b-4911-a39e-af9f287af413]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:42.338 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6481d98a-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:42.338 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:42.339 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6481d98a-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:42 np0005539564 nova_compute[226295]: 2025-11-29 08:14:42.341 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:42 np0005539564 kernel: tap6481d98a-50: entered promiscuous mode
Nov 29 03:14:42 np0005539564 NetworkManager[48997]: <info>  [1764404082.3426] manager: (tap6481d98a-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Nov 29 03:14:42 np0005539564 nova_compute[226295]: 2025-11-29 08:14:42.343 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:42.346 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6481d98a-50, col_values=(('external_ids', {'iface-id': '45682a4f-c17b-4695-ba08-c228ee0da757'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:42 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:42Z|00404|binding|INFO|Releasing lport 45682a4f-c17b-4695-ba08-c228ee0da757 from this chassis (sb_readonly=0)
Nov 29 03:14:42 np0005539564 nova_compute[226295]: 2025-11-29 08:14:42.347 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:42 np0005539564 nova_compute[226295]: 2025-11-29 08:14:42.367 226310 DEBUG oslo_concurrency.lockutils [None req-9af8d287-9dfd-4701-8d3c-9060b0199d4b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:42.367 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6481d98a-5ea4-43df-9201-fd592376b789.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6481d98a-5ea4-43df-9201-fd592376b789.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:14:42 np0005539564 nova_compute[226295]: 2025-11-29 08:14:42.368 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:42.368 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc17bb8-7195-45d5-8873-51d562415d0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:42.369 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-6481d98a-5ea4-43df-9201-fd592376b789
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/6481d98a-5ea4-43df-9201-fd592376b789.pid.haproxy
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 6481d98a-5ea4-43df-9201-fd592376b789
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:14:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:42.370 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6481d98a-5ea4-43df-9201-fd592376b789', 'env', 'PROCESS_TAG=haproxy-6481d98a-5ea4-43df-9201-fd592376b789', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6481d98a-5ea4-43df-9201-fd592376b789.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:14:42 np0005539564 nova_compute[226295]: 2025-11-29 08:14:42.722 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404082.7216225, 98ef0160-f50f-4264-a93b-31e6e8909b19 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:42 np0005539564 nova_compute[226295]: 2025-11-29 08:14:42.723 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] VM Started (Lifecycle Event)#033[00m
Nov 29 03:14:42 np0005539564 nova_compute[226295]: 2025-11-29 08:14:42.747 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:42 np0005539564 nova_compute[226295]: 2025-11-29 08:14:42.752 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404082.7222035, 98ef0160-f50f-4264-a93b-31e6e8909b19 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:42 np0005539564 nova_compute[226295]: 2025-11-29 08:14:42.753 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:14:42 np0005539564 nova_compute[226295]: 2025-11-29 08:14:42.773 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:42 np0005539564 nova_compute[226295]: 2025-11-29 08:14:42.780 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:14:42 np0005539564 nova_compute[226295]: 2025-11-29 08:14:42.804 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:14:42 np0005539564 podman[268909]: 2025-11-29 08:14:42.822259115 +0000 UTC m=+0.079158956 container create 0bac5bdc7706adc2780b10e5ab9795096c62ac2e54bffa97ac770b6a3e26d268 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6481d98a-5ea4-43df-9201-fd592376b789, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:14:42 np0005539564 systemd[1]: Started libpod-conmon-0bac5bdc7706adc2780b10e5ab9795096c62ac2e54bffa97ac770b6a3e26d268.scope.
Nov 29 03:14:42 np0005539564 podman[268909]: 2025-11-29 08:14:42.786879352 +0000 UTC m=+0.043779203 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:14:42 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:14:42 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10886a2b65f980b25494989e90a22f958420812b2dd7006598b127de3898b9a4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:42 np0005539564 podman[268909]: 2025-11-29 08:14:42.948365697 +0000 UTC m=+0.205265568 container init 0bac5bdc7706adc2780b10e5ab9795096c62ac2e54bffa97ac770b6a3e26d268 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6481d98a-5ea4-43df-9201-fd592376b789, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:14:42 np0005539564 podman[268909]: 2025-11-29 08:14:42.958367687 +0000 UTC m=+0.215267528 container start 0bac5bdc7706adc2780b10e5ab9795096c62ac2e54bffa97ac770b6a3e26d268 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6481d98a-5ea4-43df-9201-fd592376b789, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:14:43 np0005539564 neutron-haproxy-ovnmeta-6481d98a-5ea4-43df-9201-fd592376b789[268924]: [NOTICE]   (268928) : New worker (268930) forked
Nov 29 03:14:43 np0005539564 neutron-haproxy-ovnmeta-6481d98a-5ea4-43df-9201-fd592376b789[268924]: [NOTICE]   (268928) : Loading success.
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.412 226310 DEBUG nova.compute.manager [req-d3e0ac24-2954-4b43-86e3-ffbc994bdfca req-66701ff0-bd50-4d06-bcc3-e0ebfbb12dba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Received event network-vif-plugged-9da2a7e5-fdef-478c-b459-29c95af9eeb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.412 226310 DEBUG oslo_concurrency.lockutils [req-d3e0ac24-2954-4b43-86e3-ffbc994bdfca req-66701ff0-bd50-4d06-bcc3-e0ebfbb12dba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "98ef0160-f50f-4264-a93b-31e6e8909b19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.413 226310 DEBUG oslo_concurrency.lockutils [req-d3e0ac24-2954-4b43-86e3-ffbc994bdfca req-66701ff0-bd50-4d06-bcc3-e0ebfbb12dba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "98ef0160-f50f-4264-a93b-31e6e8909b19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.413 226310 DEBUG oslo_concurrency.lockutils [req-d3e0ac24-2954-4b43-86e3-ffbc994bdfca req-66701ff0-bd50-4d06-bcc3-e0ebfbb12dba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "98ef0160-f50f-4264-a93b-31e6e8909b19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.413 226310 DEBUG nova.compute.manager [req-d3e0ac24-2954-4b43-86e3-ffbc994bdfca req-66701ff0-bd50-4d06-bcc3-e0ebfbb12dba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Processing event network-vif-plugged-9da2a7e5-fdef-478c-b459-29c95af9eeb6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.413 226310 DEBUG nova.compute.manager [req-d3e0ac24-2954-4b43-86e3-ffbc994bdfca req-66701ff0-bd50-4d06-bcc3-e0ebfbb12dba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Received event network-vif-plugged-9da2a7e5-fdef-478c-b459-29c95af9eeb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.414 226310 DEBUG oslo_concurrency.lockutils [req-d3e0ac24-2954-4b43-86e3-ffbc994bdfca req-66701ff0-bd50-4d06-bcc3-e0ebfbb12dba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "98ef0160-f50f-4264-a93b-31e6e8909b19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.414 226310 DEBUG oslo_concurrency.lockutils [req-d3e0ac24-2954-4b43-86e3-ffbc994bdfca req-66701ff0-bd50-4d06-bcc3-e0ebfbb12dba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "98ef0160-f50f-4264-a93b-31e6e8909b19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.414 226310 DEBUG oslo_concurrency.lockutils [req-d3e0ac24-2954-4b43-86e3-ffbc994bdfca req-66701ff0-bd50-4d06-bcc3-e0ebfbb12dba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "98ef0160-f50f-4264-a93b-31e6e8909b19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.414 226310 DEBUG nova.compute.manager [req-d3e0ac24-2954-4b43-86e3-ffbc994bdfca req-66701ff0-bd50-4d06-bcc3-e0ebfbb12dba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] No waiting events found dispatching network-vif-plugged-9da2a7e5-fdef-478c-b459-29c95af9eeb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.415 226310 WARNING nova.compute.manager [req-d3e0ac24-2954-4b43-86e3-ffbc994bdfca req-66701ff0-bd50-4d06-bcc3-e0ebfbb12dba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Received unexpected event network-vif-plugged-9da2a7e5-fdef-478c-b459-29c95af9eeb6 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.415 226310 DEBUG nova.compute.manager [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.420 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404083.4197233, 98ef0160-f50f-4264-a93b-31e6e8909b19 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.420 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.425 226310 DEBUG nova.virt.libvirt.driver [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.431 226310 INFO nova.virt.libvirt.driver [-] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Instance spawned successfully.#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.432 226310 DEBUG nova.virt.libvirt.driver [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.453 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.460 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.464 226310 DEBUG nova.virt.libvirt.driver [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.465 226310 DEBUG nova.virt.libvirt.driver [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.465 226310 DEBUG nova.virt.libvirt.driver [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.466 226310 DEBUG nova.virt.libvirt.driver [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.466 226310 DEBUG nova.virt.libvirt.driver [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.467 226310 DEBUG nova.virt.libvirt.driver [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.496 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.535 226310 INFO nova.compute.manager [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Took 10.07 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.536 226310 DEBUG nova.compute.manager [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.636 226310 INFO nova.compute.manager [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Took 11.37 seconds to build instance.#033[00m
Nov 29 03:14:43 np0005539564 nova_compute[226295]: 2025-11-29 08:14:43.654 226310 DEBUG oslo_concurrency.lockutils [None req-97e009d2-8e10-4220-ae22-5a65f3619381 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Lock "98ef0160-f50f-4264-a93b-31e6e8909b19" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.476s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:44.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:44.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:45 np0005539564 nova_compute[226295]: 2025-11-29 08:14:45.722 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:46.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:46 np0005539564 nova_compute[226295]: 2025-11-29 08:14:46.157 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:46 np0005539564 nova_compute[226295]: 2025-11-29 08:14:46.157 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:46.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:46 np0005539564 nova_compute[226295]: 2025-11-29 08:14:46.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:46 np0005539564 nova_compute[226295]: 2025-11-29 08:14:46.778 226310 DEBUG oslo_concurrency.lockutils [None req-ead319cc-8c16-4157-919c-6ab2afe7fce9 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Acquiring lock "98ef0160-f50f-4264-a93b-31e6e8909b19" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:46 np0005539564 nova_compute[226295]: 2025-11-29 08:14:46.779 226310 DEBUG oslo_concurrency.lockutils [None req-ead319cc-8c16-4157-919c-6ab2afe7fce9 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Lock "98ef0160-f50f-4264-a93b-31e6e8909b19" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:46 np0005539564 nova_compute[226295]: 2025-11-29 08:14:46.780 226310 DEBUG oslo_concurrency.lockutils [None req-ead319cc-8c16-4157-919c-6ab2afe7fce9 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Acquiring lock "98ef0160-f50f-4264-a93b-31e6e8909b19-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:46 np0005539564 nova_compute[226295]: 2025-11-29 08:14:46.780 226310 DEBUG oslo_concurrency.lockutils [None req-ead319cc-8c16-4157-919c-6ab2afe7fce9 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Lock "98ef0160-f50f-4264-a93b-31e6e8909b19-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:46 np0005539564 nova_compute[226295]: 2025-11-29 08:14:46.780 226310 DEBUG oslo_concurrency.lockutils [None req-ead319cc-8c16-4157-919c-6ab2afe7fce9 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Lock "98ef0160-f50f-4264-a93b-31e6e8909b19-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:46 np0005539564 nova_compute[226295]: 2025-11-29 08:14:46.782 226310 INFO nova.compute.manager [None req-ead319cc-8c16-4157-919c-6ab2afe7fce9 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Terminating instance#033[00m
Nov 29 03:14:46 np0005539564 nova_compute[226295]: 2025-11-29 08:14:46.783 226310 DEBUG nova.compute.manager [None req-ead319cc-8c16-4157-919c-6ab2afe7fce9 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:14:46 np0005539564 kernel: tap9da2a7e5-fd (unregistering): left promiscuous mode
Nov 29 03:14:46 np0005539564 NetworkManager[48997]: <info>  [1764404086.8238] device (tap9da2a7e5-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:14:46 np0005539564 nova_compute[226295]: 2025-11-29 08:14:46.830 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:46 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:46Z|00405|binding|INFO|Releasing lport 9da2a7e5-fdef-478c-b459-29c95af9eeb6 from this chassis (sb_readonly=0)
Nov 29 03:14:46 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:46Z|00406|binding|INFO|Setting lport 9da2a7e5-fdef-478c-b459-29c95af9eeb6 down in Southbound
Nov 29 03:14:46 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:46Z|00407|binding|INFO|Removing iface tap9da2a7e5-fd ovn-installed in OVS
Nov 29 03:14:46 np0005539564 nova_compute[226295]: 2025-11-29 08:14:46.832 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:46.848 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:d2:e1 10.100.0.3'], port_security=['fa:16:3e:c4:d2:e1 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '98ef0160-f50f-4264-a93b-31e6e8909b19', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6481d98a-5ea4-43df-9201-fd592376b789', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '97aaebdc8caa4b86b7ec1a55d18e557d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '97410aaf-988a-43a0-8396-7354ca2cded8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ade529b5-c2ca-4ce5-926d-6122417e8a83, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=9da2a7e5-fdef-478c-b459-29c95af9eeb6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:46.851 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 9da2a7e5-fdef-478c-b459-29c95af9eeb6 in datapath 6481d98a-5ea4-43df-9201-fd592376b789 unbound from our chassis#033[00m
Nov 29 03:14:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:46.853 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6481d98a-5ea4-43df-9201-fd592376b789, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:14:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:46.855 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5c66d55d-88e2-43c9-9539-ea50be616972]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:46.856 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6481d98a-5ea4-43df-9201-fd592376b789 namespace which is not needed anymore#033[00m
Nov 29 03:14:46 np0005539564 nova_compute[226295]: 2025-11-29 08:14:46.870 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:46 np0005539564 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000072.scope: Deactivated successfully.
Nov 29 03:14:46 np0005539564 nova_compute[226295]: 2025-11-29 08:14:46.878 226310 DEBUG oslo_concurrency.lockutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "6033af89-d6d3-45c5-bf88-b2f17800f12e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:46 np0005539564 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000072.scope: Consumed 4.256s CPU time.
Nov 29 03:14:46 np0005539564 nova_compute[226295]: 2025-11-29 08:14:46.879 226310 DEBUG oslo_concurrency.lockutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:46 np0005539564 systemd-machined[190128]: Machine qemu-48-instance-00000072 terminated.
Nov 29 03:14:46 np0005539564 nova_compute[226295]: 2025-11-29 08:14:46.912 226310 DEBUG oslo_concurrency.lockutils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "bba0127d-6332-44fa-8fc1-c3d7321260fa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:46 np0005539564 nova_compute[226295]: 2025-11-29 08:14:46.913 226310 DEBUG oslo_concurrency.lockutils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:46 np0005539564 nova_compute[226295]: 2025-11-29 08:14:46.915 226310 DEBUG nova.compute.manager [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:14:46 np0005539564 nova_compute[226295]: 2025-11-29 08:14:46.948 226310 DEBUG nova.compute.manager [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.002 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.005 226310 DEBUG oslo_concurrency.lockutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.006 226310 DEBUG oslo_concurrency.lockutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.007 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:47 np0005539564 neutron-haproxy-ovnmeta-6481d98a-5ea4-43df-9201-fd592376b789[268924]: [NOTICE]   (268928) : haproxy version is 2.8.14-c23fe91
Nov 29 03:14:47 np0005539564 neutron-haproxy-ovnmeta-6481d98a-5ea4-43df-9201-fd592376b789[268924]: [NOTICE]   (268928) : path to executable is /usr/sbin/haproxy
Nov 29 03:14:47 np0005539564 neutron-haproxy-ovnmeta-6481d98a-5ea4-43df-9201-fd592376b789[268924]: [WARNING]  (268928) : Exiting Master process...
Nov 29 03:14:47 np0005539564 neutron-haproxy-ovnmeta-6481d98a-5ea4-43df-9201-fd592376b789[268924]: [ALERT]    (268928) : Current worker (268930) exited with code 143 (Terminated)
Nov 29 03:14:47 np0005539564 neutron-haproxy-ovnmeta-6481d98a-5ea4-43df-9201-fd592376b789[268924]: [WARNING]  (268928) : All workers exited. Exiting... (0)
Nov 29 03:14:47 np0005539564 systemd[1]: libpod-0bac5bdc7706adc2780b10e5ab9795096c62ac2e54bffa97ac770b6a3e26d268.scope: Deactivated successfully.
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.015 226310 DEBUG nova.virt.hardware [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.017 226310 INFO nova.compute.claims [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:14:47 np0005539564 podman[268964]: 2025-11-29 08:14:47.01869858 +0000 UTC m=+0.052350943 container died 0bac5bdc7706adc2780b10e5ab9795096c62ac2e54bffa97ac770b6a3e26d268 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6481d98a-5ea4-43df-9201-fd592376b789, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.025 226310 INFO nova.virt.libvirt.driver [-] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Instance destroyed successfully.#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.026 226310 DEBUG nova.objects.instance [None req-ead319cc-8c16-4157-919c-6ab2afe7fce9 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Lazy-loading 'resources' on Instance uuid 98ef0160-f50f-4264-a93b-31e6e8909b19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:47 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0bac5bdc7706adc2780b10e5ab9795096c62ac2e54bffa97ac770b6a3e26d268-userdata-shm.mount: Deactivated successfully.
Nov 29 03:14:47 np0005539564 systemd[1]: var-lib-containers-storage-overlay-10886a2b65f980b25494989e90a22f958420812b2dd7006598b127de3898b9a4-merged.mount: Deactivated successfully.
Nov 29 03:14:47 np0005539564 podman[268964]: 2025-11-29 08:14:47.056783228 +0000 UTC m=+0.090435591 container cleanup 0bac5bdc7706adc2780b10e5ab9795096c62ac2e54bffa97ac770b6a3e26d268 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6481d98a-5ea4-43df-9201-fd592376b789, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.064 226310 DEBUG nova.virt.libvirt.vif [None req-ead319cc-8c16-4157-919c-6ab2afe7fce9 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:14:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-157958247',display_name='tempest-ServerAddressesNegativeTestJSON-server-157958247',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-157958247',id=114,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:14:43Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='97aaebdc8caa4b86b7ec1a55d18e557d',ramdisk_id='',reservation_id='r-0g35qhvi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_
hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesNegativeTestJSON-746806271',owner_user_name='tempest-ServerAddressesNegativeTestJSON-746806271-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:14:43Z,user_data=None,user_id='462999b573374bfcb81584d2238b4b10',uuid=98ef0160-f50f-4264-a93b-31e6e8909b19,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9da2a7e5-fdef-478c-b459-29c95af9eeb6", "address": "fa:16:3e:c4:d2:e1", "network": {"id": "6481d98a-5ea4-43df-9201-fd592376b789", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1662741541-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97aaebdc8caa4b86b7ec1a55d18e557d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9da2a7e5-fd", "ovs_interfaceid": "9da2a7e5-fdef-478c-b459-29c95af9eeb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.064 226310 DEBUG nova.network.os_vif_util [None req-ead319cc-8c16-4157-919c-6ab2afe7fce9 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Converting VIF {"id": "9da2a7e5-fdef-478c-b459-29c95af9eeb6", "address": "fa:16:3e:c4:d2:e1", "network": {"id": "6481d98a-5ea4-43df-9201-fd592376b789", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1662741541-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "97aaebdc8caa4b86b7ec1a55d18e557d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9da2a7e5-fd", "ovs_interfaceid": "9da2a7e5-fdef-478c-b459-29c95af9eeb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.065 226310 DEBUG nova.network.os_vif_util [None req-ead319cc-8c16-4157-919c-6ab2afe7fce9 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:d2:e1,bridge_name='br-int',has_traffic_filtering=True,id=9da2a7e5-fdef-478c-b459-29c95af9eeb6,network=Network(6481d98a-5ea4-43df-9201-fd592376b789),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9da2a7e5-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.066 226310 DEBUG os_vif [None req-ead319cc-8c16-4157-919c-6ab2afe7fce9 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:d2:e1,bridge_name='br-int',has_traffic_filtering=True,id=9da2a7e5-fdef-478c-b459-29c95af9eeb6,network=Network(6481d98a-5ea4-43df-9201-fd592376b789),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9da2a7e5-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.068 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.068 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9da2a7e5-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.070 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:47 np0005539564 systemd[1]: libpod-conmon-0bac5bdc7706adc2780b10e5ab9795096c62ac2e54bffa97ac770b6a3e26d268.scope: Deactivated successfully.
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.074 226310 DEBUG oslo_concurrency.lockutils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.074 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.076 226310 INFO os_vif [None req-ead319cc-8c16-4157-919c-6ab2afe7fce9 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:d2:e1,bridge_name='br-int',has_traffic_filtering=True,id=9da2a7e5-fdef-478c-b459-29c95af9eeb6,network=Network(6481d98a-5ea4-43df-9201-fd592376b789),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9da2a7e5-fd')#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.122 226310 DEBUG nova.compute.manager [req-0e1c9a8f-0427-4ced-8f17-7868bfcbd75f req-35d833f6-b307-4490-9586-015686a8ddc2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Received event network-vif-unplugged-9da2a7e5-fdef-478c-b459-29c95af9eeb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.122 226310 DEBUG oslo_concurrency.lockutils [req-0e1c9a8f-0427-4ced-8f17-7868bfcbd75f req-35d833f6-b307-4490-9586-015686a8ddc2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "98ef0160-f50f-4264-a93b-31e6e8909b19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.122 226310 DEBUG oslo_concurrency.lockutils [req-0e1c9a8f-0427-4ced-8f17-7868bfcbd75f req-35d833f6-b307-4490-9586-015686a8ddc2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "98ef0160-f50f-4264-a93b-31e6e8909b19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.122 226310 DEBUG oslo_concurrency.lockutils [req-0e1c9a8f-0427-4ced-8f17-7868bfcbd75f req-35d833f6-b307-4490-9586-015686a8ddc2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "98ef0160-f50f-4264-a93b-31e6e8909b19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.123 226310 DEBUG nova.compute.manager [req-0e1c9a8f-0427-4ced-8f17-7868bfcbd75f req-35d833f6-b307-4490-9586-015686a8ddc2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] No waiting events found dispatching network-vif-unplugged-9da2a7e5-fdef-478c-b459-29c95af9eeb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.123 226310 DEBUG nova.compute.manager [req-0e1c9a8f-0427-4ced-8f17-7868bfcbd75f req-35d833f6-b307-4490-9586-015686a8ddc2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Received event network-vif-unplugged-9da2a7e5-fdef-478c-b459-29c95af9eeb6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:14:47 np0005539564 podman[268999]: 2025-11-29 08:14:47.136714963 +0000 UTC m=+0.053156745 container remove 0bac5bdc7706adc2780b10e5ab9795096c62ac2e54bffa97ac770b6a3e26d268 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6481d98a-5ea4-43df-9201-fd592376b789, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 03:14:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:47.142 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5c9806df-9efc-496a-886d-d8fd2127da29]: (4, ('Sat Nov 29 08:14:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6481d98a-5ea4-43df-9201-fd592376b789 (0bac5bdc7706adc2780b10e5ab9795096c62ac2e54bffa97ac770b6a3e26d268)\n0bac5bdc7706adc2780b10e5ab9795096c62ac2e54bffa97ac770b6a3e26d268\nSat Nov 29 08:14:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6481d98a-5ea4-43df-9201-fd592376b789 (0bac5bdc7706adc2780b10e5ab9795096c62ac2e54bffa97ac770b6a3e26d268)\n0bac5bdc7706adc2780b10e5ab9795096c62ac2e54bffa97ac770b6a3e26d268\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:47.143 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5bbd0e11-0ee8-435e-a302-32fb358030c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:47.144 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6481d98a-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:47 np0005539564 kernel: tap6481d98a-50: left promiscuous mode
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.146 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.159 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:47.161 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[97c994e6-b111-402b-8185-67fb5fb21719]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:47.178 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c4ee13ec-3c6d-49da-921b-f714cc387ad3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:47.179 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e413512f-2d6b-42dd-b426-64b28e284053]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:47.192 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e4b2b9c3-2cb6-4e16-b196-f7481338b702]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705670, 'reachable_time': 17846, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269030, 'error': None, 'target': 'ovnmeta-6481d98a-5ea4-43df-9201-fd592376b789', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:47.194 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6481d98a-5ea4-43df-9201-fd592376b789 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:14:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:47.194 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[7f252e86-f6c0-47b5-8234-3e116df6992c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:47 np0005539564 systemd[1]: run-netns-ovnmeta\x2d6481d98a\x2d5ea4\x2d43df\x2d9201\x2dfd592376b789.mount: Deactivated successfully.
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.205 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.457 226310 DEBUG oslo_concurrency.processutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.503 226310 INFO nova.virt.libvirt.driver [None req-ead319cc-8c16-4157-919c-6ab2afe7fce9 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Deleting instance files /var/lib/nova/instances/98ef0160-f50f-4264-a93b-31e6e8909b19_del#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.504 226310 INFO nova.virt.libvirt.driver [None req-ead319cc-8c16-4157-919c-6ab2afe7fce9 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Deletion of /var/lib/nova/instances/98ef0160-f50f-4264-a93b-31e6e8909b19_del complete#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.549 226310 INFO nova.compute.manager [None req-ead319cc-8c16-4157-919c-6ab2afe7fce9 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.549 226310 DEBUG oslo.service.loopingcall [None req-ead319cc-8c16-4157-919c-6ab2afe7fce9 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.550 226310 DEBUG nova.compute.manager [-] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.550 226310 DEBUG nova.network.neutron [-] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:14:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:47 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2785337149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.930 226310 DEBUG oslo_concurrency.processutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.938 226310 DEBUG nova.compute.provider_tree [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.964 226310 DEBUG nova.scheduler.client.report [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.990 226310 DEBUG oslo_concurrency.lockutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.985s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.991 226310 DEBUG nova.compute.manager [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:14:47 np0005539564 nova_compute[226295]: 2025-11-29 08:14:47.994 226310 DEBUG oslo_concurrency.lockutils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.002 226310 DEBUG nova.virt.hardware [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.003 226310 INFO nova.compute.claims [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:14:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:48.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.062 226310 DEBUG nova.compute.manager [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.063 226310 DEBUG nova.network.neutron [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.082 226310 INFO nova.virt.libvirt.driver [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.151 226310 DEBUG nova.compute.manager [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.186 226310 DEBUG oslo_concurrency.processutils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.266 226310 DEBUG nova.compute.manager [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.267 226310 DEBUG nova.virt.libvirt.driver [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.268 226310 INFO nova.virt.libvirt.driver [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Creating image(s)
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.295 226310 DEBUG nova.storage.rbd_utils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 6033af89-d6d3-45c5-bf88-b2f17800f12e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.332 226310 DEBUG nova.storage.rbd_utils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 6033af89-d6d3-45c5-bf88-b2f17800f12e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:14:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:48.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.373 226310 DEBUG nova.storage.rbd_utils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 6033af89-d6d3-45c5-bf88-b2f17800f12e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.377 226310 DEBUG oslo_concurrency.processutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.444 226310 DEBUG nova.network.neutron [-] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.452 226310 DEBUG oslo_concurrency.processutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.453 226310 DEBUG oslo_concurrency.lockutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.453 226310 DEBUG oslo_concurrency.lockutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.454 226310 DEBUG oslo_concurrency.lockutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.484 226310 DEBUG nova.storage.rbd_utils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 6033af89-d6d3-45c5-bf88-b2f17800f12e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.487 226310 DEBUG oslo_concurrency.processutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 6033af89-d6d3-45c5-bf88-b2f17800f12e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.512 226310 DEBUG nova.policy [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ca93c8e3eac142c0aa6b61807727dea2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ba867fac17034bb28fe2cdb0fff3af2b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.517 226310 INFO nova.compute.manager [-] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Took 0.97 seconds to deallocate network for instance.
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.560 226310 DEBUG oslo_concurrency.lockutils [None req-ead319cc-8c16-4157-919c-6ab2afe7fce9 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:14:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:48 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1897348982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.650 226310 DEBUG oslo_concurrency.processutils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.659 226310 DEBUG nova.compute.provider_tree [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.684 226310 DEBUG nova.scheduler.client.report [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.719 226310 DEBUG oslo_concurrency.lockutils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.720 226310 DEBUG nova.compute.manager [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.722 226310 DEBUG oslo_concurrency.lockutils [None req-ead319cc-8c16-4157-919c-6ab2afe7fce9 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.776 226310 DEBUG nova.compute.manager [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.776 226310 DEBUG nova.network.neutron [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.807 226310 INFO nova.virt.libvirt.driver [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.821 226310 DEBUG oslo_concurrency.processutils [None req-ead319cc-8c16-4157-919c-6ab2afe7fce9 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.851 226310 DEBUG nova.compute.manager [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.919 226310 INFO nova.virt.block_device [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Booting with volume ad74cad8-db25-41c6-a50a-ce4cad08c1ea at /dev/vda
Nov 29 03:14:48 np0005539564 nova_compute[226295]: 2025-11-29 08:14:48.970 226310 DEBUG nova.policy [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '58625e4c2b5d43a1abbab05b98853a65', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '250671461f27498d9f6b4476c7b69533', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.102 226310 DEBUG os_brick.utils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.104 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.116 231810 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.117 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[900fa11d-d989-4f88-a91a-c4ae5ce20db6]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.118 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.130 231810 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.130 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[aec0395e-b4a1-40cd-9f8c-fd76cc8709e6]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d3b4384eec44', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.132 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.145 231810 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.145 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[5222d70c-4272-449f-b239-19e4fa575245]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.147 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[09b93a70-af5e-4142-9dc5-7b94a1399620]: (4, '2e858761-3292-4a17-b38f-a169c3064289') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.148 226310 DEBUG oslo_concurrency.processutils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.199 226310 DEBUG nova.compute.manager [req-f36abb14-d603-4a1a-b76d-772cf33947fa req-0c1534c0-2b84-4caf-b0b1-03cb28b31eb2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Received event network-vif-plugged-9da2a7e5-fdef-478c-b459-29c95af9eeb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.200 226310 DEBUG oslo_concurrency.lockutils [req-f36abb14-d603-4a1a-b76d-772cf33947fa req-0c1534c0-2b84-4caf-b0b1-03cb28b31eb2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "98ef0160-f50f-4264-a93b-31e6e8909b19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.201 226310 DEBUG oslo_concurrency.lockutils [req-f36abb14-d603-4a1a-b76d-772cf33947fa req-0c1534c0-2b84-4caf-b0b1-03cb28b31eb2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "98ef0160-f50f-4264-a93b-31e6e8909b19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.201 226310 DEBUG oslo_concurrency.lockutils [req-f36abb14-d603-4a1a-b76d-772cf33947fa req-0c1534c0-2b84-4caf-b0b1-03cb28b31eb2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "98ef0160-f50f-4264-a93b-31e6e8909b19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.202 226310 DEBUG nova.compute.manager [req-f36abb14-d603-4a1a-b76d-772cf33947fa req-0c1534c0-2b84-4caf-b0b1-03cb28b31eb2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] No waiting events found dispatching network-vif-plugged-9da2a7e5-fdef-478c-b459-29c95af9eeb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.202 226310 WARNING nova.compute.manager [req-f36abb14-d603-4a1a-b76d-772cf33947fa req-0c1534c0-2b84-4caf-b0b1-03cb28b31eb2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Received unexpected event network-vif-plugged-9da2a7e5-fdef-478c-b459-29c95af9eeb6 for instance with vm_state deleted and task_state None.
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.203 226310 DEBUG nova.compute.manager [req-f36abb14-d603-4a1a-b76d-772cf33947fa req-0c1534c0-2b84-4caf-b0b1-03cb28b31eb2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Received event network-vif-deleted-9da2a7e5-fdef-478c-b459-29c95af9eeb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.205 226310 DEBUG oslo_concurrency.processutils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "nvme version" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.209 226310 DEBUG os_brick.initiator.connectors.lightos [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.210 226310 DEBUG os_brick.initiator.connectors.lightos [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.210 226310 DEBUG os_brick.initiator.connectors.lightos [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.211 226310 DEBUG os_brick.utils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] <== get_connector_properties: return (107ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d3b4384eec44', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2e858761-3292-4a17-b38f-a169c3064289', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.212 226310 DEBUG nova.virt.block_device [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Updating existing volume attachment record: f431964f-554d-4d01-8f7b-7d775175bff1 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 03:14:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:49 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/525671678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.286 226310 DEBUG oslo_concurrency.processutils [None req-ead319cc-8c16-4157-919c-6ab2afe7fce9 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.291 226310 DEBUG nova.compute.provider_tree [None req-ead319cc-8c16-4157-919c-6ab2afe7fce9 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.299 226310 DEBUG oslo_concurrency.processutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 6033af89-d6d3-45c5-bf88-b2f17800f12e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.811s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.352 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.353 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.353 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.355 226310 DEBUG nova.scheduler.client.report [None req-ead319cc-8c16-4157-919c-6ab2afe7fce9 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.365 226310 DEBUG nova.storage.rbd_utils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] resizing rbd image 6033af89-d6d3-45c5-bf88-b2f17800f12e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.772 226310 DEBUG nova.network.neutron [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Successfully created port: a221d286-cb0f-41fd-9997-b1687a875e0e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.777 226310 DEBUG oslo_concurrency.lockutils [None req-ead319cc-8c16-4157-919c-6ab2afe7fce9 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.781 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.782 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.852 226310 INFO nova.scheduler.client.report [None req-ead319cc-8c16-4157-919c-6ab2afe7fce9 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Deleted allocations for instance 98ef0160-f50f-4264-a93b-31e6e8909b19
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.860 226310 DEBUG nova.objects.instance [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'migration_context' on Instance uuid 6033af89-d6d3-45c5-bf88-b2f17800f12e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.870 226310 DEBUG nova.network.neutron [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Successfully created port: cee20a9b-a551-4e60-9b01-151c48dc45fe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.874 226310 DEBUG nova.virt.libvirt.driver [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.874 226310 DEBUG nova.virt.libvirt.driver [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Ensure instance console log exists: /var/lib/nova/instances/6033af89-d6d3-45c5-bf88-b2f17800f12e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.875 226310 DEBUG oslo_concurrency.lockutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.875 226310 DEBUG oslo_concurrency.lockutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.875 226310 DEBUG oslo_concurrency.lockutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:49 np0005539564 nova_compute[226295]: 2025-11-29 08:14:49.976 226310 DEBUG oslo_concurrency.lockutils [None req-ead319cc-8c16-4157-919c-6ab2afe7fce9 462999b573374bfcb81584d2238b4b10 97aaebdc8caa4b86b7ec1a55d18e557d - - default default] Lock "98ef0160-f50f-4264-a93b-31e6e8909b19" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:50.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.064 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-98ef0160-f50f-4264-a93b-31e6e8909b19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.065 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-98ef0160-f50f-4264-a93b-31e6e8909b19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.065 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.065 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 98ef0160-f50f-4264-a93b-31e6e8909b19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.109 226310 DEBUG nova.compute.utils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Can not refresh info_cache because instance was not found refresh_info_cache_for_instance /usr/lib/python3.9/site-packages/nova/compute/utils.py:1010#033[00m
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.270 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.289 226310 DEBUG nova.compute.manager [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.292 226310 DEBUG nova.virt.libvirt.driver [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.293 226310 INFO nova.virt.libvirt.driver [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Creating image(s)#033[00m
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.294 226310 DEBUG nova.virt.libvirt.driver [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.294 226310 DEBUG nova.virt.libvirt.driver [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Ensure instance console log exists: /var/lib/nova/instances/bba0127d-6332-44fa-8fc1-c3d7321260fa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.295 226310 DEBUG oslo_concurrency.lockutils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.296 226310 DEBUG oslo_concurrency.lockutils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.296 226310 DEBUG oslo_concurrency.lockutils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:50.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.752 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.777 226310 DEBUG nova.network.neutron [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Successfully updated port: a221d286-cb0f-41fd-9997-b1687a875e0e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.781 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-98ef0160-f50f-4264-a93b-31e6e8909b19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.782 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:14:50 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:14:50 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:14:50 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.784 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.786 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.786 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.790 226310 DEBUG oslo_concurrency.lockutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "refresh_cache-6033af89-d6d3-45c5-bf88-b2f17800f12e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.791 226310 DEBUG oslo_concurrency.lockutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquired lock "refresh_cache-6033af89-d6d3-45c5-bf88-b2f17800f12e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.791 226310 DEBUG nova.network.neutron [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:14:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.912 226310 DEBUG nova.compute.manager [req-f83008e8-ba12-4849-a48b-6bdfa7fe9163 req-a4f7b231-8072-4601-94d9-db735742301d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Received event network-changed-a221d286-cb0f-41fd-9997-b1687a875e0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.913 226310 DEBUG nova.compute.manager [req-f83008e8-ba12-4849-a48b-6bdfa7fe9163 req-a4f7b231-8072-4601-94d9-db735742301d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Refreshing instance network info cache due to event network-changed-a221d286-cb0f-41fd-9997-b1687a875e0e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.913 226310 DEBUG oslo_concurrency.lockutils [req-f83008e8-ba12-4849-a48b-6bdfa7fe9163 req-a4f7b231-8072-4601-94d9-db735742301d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-6033af89-d6d3-45c5-bf88-b2f17800f12e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.954 226310 DEBUG nova.network.neutron [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:14:50 np0005539564 nova_compute[226295]: 2025-11-29 08:14:50.997 226310 DEBUG nova.network.neutron [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Successfully updated port: cee20a9b-a551-4e60-9b01-151c48dc45fe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:14:51 np0005539564 nova_compute[226295]: 2025-11-29 08:14:51.016 226310 DEBUG oslo_concurrency.lockutils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "refresh_cache-bba0127d-6332-44fa-8fc1-c3d7321260fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:51 np0005539564 nova_compute[226295]: 2025-11-29 08:14:51.016 226310 DEBUG oslo_concurrency.lockutils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquired lock "refresh_cache-bba0127d-6332-44fa-8fc1-c3d7321260fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:51 np0005539564 nova_compute[226295]: 2025-11-29 08:14:51.016 226310 DEBUG nova.network.neutron [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:14:51 np0005539564 nova_compute[226295]: 2025-11-29 08:14:51.310 226310 DEBUG nova.compute.manager [req-d88105a8-312e-4862-b67d-a0c456a51f02 req-c6934a49-70ea-4603-82c5-573de5449aac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Received event network-changed-cee20a9b-a551-4e60-9b01-151c48dc45fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:51 np0005539564 nova_compute[226295]: 2025-11-29 08:14:51.310 226310 DEBUG nova.compute.manager [req-d88105a8-312e-4862-b67d-a0c456a51f02 req-c6934a49-70ea-4603-82c5-573de5449aac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Refreshing instance network info cache due to event network-changed-cee20a9b-a551-4e60-9b01-151c48dc45fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:14:51 np0005539564 nova_compute[226295]: 2025-11-29 08:14:51.311 226310 DEBUG oslo_concurrency.lockutils [req-d88105a8-312e-4862-b67d-a0c456a51f02 req-c6934a49-70ea-4603-82c5-573de5449aac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-bba0127d-6332-44fa-8fc1-c3d7321260fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:51 np0005539564 nova_compute[226295]: 2025-11-29 08:14:51.509 226310 DEBUG nova.network.neutron [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:14:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:51.972 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:51 np0005539564 nova_compute[226295]: 2025-11-29 08:14:51.972 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:51.974 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:14:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:52.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.069 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.073 226310 DEBUG nova.network.neutron [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Updating instance_info_cache with network_info: [{"id": "a221d286-cb0f-41fd-9997-b1687a875e0e", "address": "fa:16:3e:5e:cf:8d", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa221d286-cb", "ovs_interfaceid": "a221d286-cb0f-41fd-9997-b1687a875e0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.101 226310 DEBUG oslo_concurrency.lockutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Releasing lock "refresh_cache-6033af89-d6d3-45c5-bf88-b2f17800f12e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.102 226310 DEBUG nova.compute.manager [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Instance network_info: |[{"id": "a221d286-cb0f-41fd-9997-b1687a875e0e", "address": "fa:16:3e:5e:cf:8d", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa221d286-cb", "ovs_interfaceid": "a221d286-cb0f-41fd-9997-b1687a875e0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.102 226310 DEBUG oslo_concurrency.lockutils [req-f83008e8-ba12-4849-a48b-6bdfa7fe9163 req-a4f7b231-8072-4601-94d9-db735742301d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-6033af89-d6d3-45c5-bf88-b2f17800f12e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.103 226310 DEBUG nova.network.neutron [req-f83008e8-ba12-4849-a48b-6bdfa7fe9163 req-a4f7b231-8072-4601-94d9-db735742301d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Refreshing network info cache for port a221d286-cb0f-41fd-9997-b1687a875e0e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.108 226310 DEBUG nova.virt.libvirt.driver [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Start _get_guest_xml network_info=[{"id": "a221d286-cb0f-41fd-9997-b1687a875e0e", "address": "fa:16:3e:5e:cf:8d", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa221d286-cb", "ovs_interfaceid": "a221d286-cb0f-41fd-9997-b1687a875e0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.115 226310 WARNING nova.virt.libvirt.driver [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.120 226310 DEBUG nova.virt.libvirt.host [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.121 226310 DEBUG nova.virt.libvirt.host [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.132 226310 DEBUG nova.virt.libvirt.host [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.132 226310 DEBUG nova.virt.libvirt.host [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.134 226310 DEBUG nova.virt.libvirt.driver [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.134 226310 DEBUG nova.virt.hardware [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.135 226310 DEBUG nova.virt.hardware [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.136 226310 DEBUG nova.virt.hardware [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.136 226310 DEBUG nova.virt.hardware [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.137 226310 DEBUG nova.virt.hardware [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.137 226310 DEBUG nova.virt.hardware [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.137 226310 DEBUG nova.virt.hardware [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.138 226310 DEBUG nova.virt.hardware [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.138 226310 DEBUG nova.virt.hardware [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.139 226310 DEBUG nova.virt.hardware [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.139 226310 DEBUG nova.virt.hardware [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.144 226310 DEBUG oslo_concurrency.processutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.208 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:52.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:14:52 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2406103316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.566 226310 DEBUG oslo_concurrency.processutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.608 226310 DEBUG nova.storage.rbd_utils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 6033af89-d6d3-45c5-bf88-b2f17800f12e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.615 226310 DEBUG oslo_concurrency.processutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.649 226310 DEBUG nova.network.neutron [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Updating instance_info_cache with network_info: [{"id": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "address": "fa:16:3e:a2:7d:a4", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcee20a9b-a5", "ovs_interfaceid": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.678 226310 DEBUG oslo_concurrency.lockutils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Releasing lock "refresh_cache-bba0127d-6332-44fa-8fc1-c3d7321260fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.678 226310 DEBUG nova.compute.manager [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Instance network_info: |[{"id": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "address": "fa:16:3e:a2:7d:a4", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcee20a9b-a5", "ovs_interfaceid": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.680 226310 DEBUG oslo_concurrency.lockutils [req-d88105a8-312e-4862-b67d-a0c456a51f02 req-c6934a49-70ea-4603-82c5-573de5449aac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-bba0127d-6332-44fa-8fc1-c3d7321260fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.680 226310 DEBUG nova.network.neutron [req-d88105a8-312e-4862-b67d-a0c456a51f02 req-c6934a49-70ea-4603-82c5-573de5449aac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Refreshing network info cache for port cee20a9b-a551-4e60-9b01-151c48dc45fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.686 226310 DEBUG nova.virt.libvirt.driver [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Start _get_guest_xml network_info=[{"id": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "address": "fa:16:3e:a2:7d:a4", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcee20a9b-a5", "ovs_interfaceid": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-ad74cad8-db25-41c6-a50a-ce4cad08c1ea', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'ad74cad8-db25-41c6-a50a-ce4cad08c1ea', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'bba0127d-6332-44fa-8fc1-c3d7321260fa', 'attached_at': '', 'detached_at': '', 'volume_id': 'ad74cad8-db25-41c6-a50a-ce4cad08c1ea', 'serial': 'ad74cad8-db25-41c6-a50a-ce4cad08c1ea'}, 'guest_format': None, 'delete_on_termination': True, 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': 'f431964f-554d-4d01-8f7b-7d775175bff1', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.692 226310 WARNING nova.virt.libvirt.driver [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.700 226310 DEBUG nova.virt.libvirt.host [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.701 226310 DEBUG nova.virt.libvirt.host [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.705 226310 DEBUG nova.virt.libvirt.host [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.706 226310 DEBUG nova.virt.libvirt.host [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.708 226310 DEBUG nova.virt.libvirt.driver [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.708 226310 DEBUG nova.virt.hardware [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.709 226310 DEBUG nova.virt.hardware [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.710 226310 DEBUG nova.virt.hardware [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.710 226310 DEBUG nova.virt.hardware [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.710 226310 DEBUG nova.virt.hardware [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.711 226310 DEBUG nova.virt.hardware [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.711 226310 DEBUG nova.virt.hardware [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.712 226310 DEBUG nova.virt.hardware [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.712 226310 DEBUG nova.virt.hardware [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.713 226310 DEBUG nova.virt.hardware [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.714 226310 DEBUG nova.virt.hardware [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.759 226310 DEBUG nova.storage.rbd_utils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image bba0127d-6332-44fa-8fc1-c3d7321260fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:52 np0005539564 nova_compute[226295]: 2025-11-29 08:14:52.764 226310 DEBUG oslo_concurrency.processutils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:14:53 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/185540646' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.098 226310 DEBUG oslo_concurrency.processutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.100 226310 DEBUG nova.virt.libvirt.vif [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:14:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-694037075',display_name='tempest-ServerActionsTestOtherB-server-694037075',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-694037075',id=115,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFR2A4rqHty1PxOihJGr6CLeieY2A6hQbQhWuRk7yYUwOYPvlgBFCeYpXPRg+EImok8PXcjU56J6yMvwfigxZeP4BreCe+MzD3uTdqP8PHZ6U4YNDwkQqqigObBB8nAoaw==',key_name='tempest-keypair-98169709',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-bqwpho60',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:14:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=6033af89-d6d3-45c5-bf88-b2f17800f12e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a221d286-cb0f-41fd-9997-b1687a875e0e", "address": "fa:16:3e:5e:cf:8d", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa221d286-cb", "ovs_interfaceid": "a221d286-cb0f-41fd-9997-b1687a875e0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.101 226310 DEBUG nova.network.os_vif_util [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "a221d286-cb0f-41fd-9997-b1687a875e0e", "address": "fa:16:3e:5e:cf:8d", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa221d286-cb", "ovs_interfaceid": "a221d286-cb0f-41fd-9997-b1687a875e0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.101 226310 DEBUG nova.network.os_vif_util [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:cf:8d,bridge_name='br-int',has_traffic_filtering=True,id=a221d286-cb0f-41fd-9997-b1687a875e0e,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa221d286-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.103 226310 DEBUG nova.objects.instance [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'pci_devices' on Instance uuid 6033af89-d6d3-45c5-bf88-b2f17800f12e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.121 226310 DEBUG nova.virt.libvirt.driver [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  <uuid>6033af89-d6d3-45c5-bf88-b2f17800f12e</uuid>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  <name>instance-00000073</name>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerActionsTestOtherB-server-694037075</nova:name>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:14:52</nova:creationTime>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <nova:user uuid="ca93c8e3eac142c0aa6b61807727dea2">tempest-ServerActionsTestOtherB-325732369-project-member</nova:user>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <nova:project uuid="ba867fac17034bb28fe2cdb0fff3af2b">tempest-ServerActionsTestOtherB-325732369</nova:project>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <nova:port uuid="a221d286-cb0f-41fd-9997-b1687a875e0e">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <entry name="serial">6033af89-d6d3-45c5-bf88-b2f17800f12e</entry>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <entry name="uuid">6033af89-d6d3-45c5-bf88-b2f17800f12e</entry>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/6033af89-d6d3-45c5-bf88-b2f17800f12e_disk">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/6033af89-d6d3-45c5-bf88-b2f17800f12e_disk.config">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:5e:cf:8d"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <target dev="tapa221d286-cb"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/6033af89-d6d3-45c5-bf88-b2f17800f12e/console.log" append="off"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:14:53 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:14:53 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.122 226310 DEBUG nova.compute.manager [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Preparing to wait for external event network-vif-plugged-a221d286-cb0f-41fd-9997-b1687a875e0e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.123 226310 DEBUG oslo_concurrency.lockutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.123 226310 DEBUG oslo_concurrency.lockutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.123 226310 DEBUG oslo_concurrency.lockutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.124 226310 DEBUG nova.virt.libvirt.vif [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:14:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-694037075',display_name='tempest-ServerActionsTestOtherB-server-694037075',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-694037075',id=115,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFR2A4rqHty1PxOihJGr6CLeieY2A6hQbQhWuRk7yYUwOYPvlgBFCeYpXPRg+EImok8PXcjU56J6yMvwfigxZeP4BreCe+MzD3uTdqP8PHZ6U4YNDwkQqqigObBB8nAoaw==',key_name='tempest-keypair-98169709',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-bqwpho60',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:14:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=6033af89-d6d3-45c5-bf88-b2f17800f12e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a221d286-cb0f-41fd-9997-b1687a875e0e", "address": "fa:16:3e:5e:cf:8d", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa221d286-cb", "ovs_interfaceid": "a221d286-cb0f-41fd-9997-b1687a875e0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.124 226310 DEBUG nova.network.os_vif_util [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "a221d286-cb0f-41fd-9997-b1687a875e0e", "address": "fa:16:3e:5e:cf:8d", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa221d286-cb", "ovs_interfaceid": "a221d286-cb0f-41fd-9997-b1687a875e0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.125 226310 DEBUG nova.network.os_vif_util [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:cf:8d,bridge_name='br-int',has_traffic_filtering=True,id=a221d286-cb0f-41fd-9997-b1687a875e0e,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa221d286-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.125 226310 DEBUG os_vif [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:cf:8d,bridge_name='br-int',has_traffic_filtering=True,id=a221d286-cb0f-41fd-9997-b1687a875e0e,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa221d286-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.126 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.126 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.126 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.129 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.130 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa221d286-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.130 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa221d286-cb, col_values=(('external_ids', {'iface-id': 'a221d286-cb0f-41fd-9997-b1687a875e0e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:cf:8d', 'vm-uuid': '6033af89-d6d3-45c5-bf88-b2f17800f12e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.170 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:53 np0005539564 NetworkManager[48997]: <info>  [1764404093.1725] manager: (tapa221d286-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/206)
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.176 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:14:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:14:53 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2143360591' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.181 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.183 226310 INFO os_vif [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:cf:8d,bridge_name='br-int',has_traffic_filtering=True,id=a221d286-cb0f-41fd-9997-b1687a875e0e,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa221d286-cb')#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.199 226310 DEBUG oslo_concurrency.processutils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.252 226310 DEBUG nova.virt.libvirt.vif [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:14:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-699492332',display_name='tempest-ServerActionsTestOtherA-server-699492332',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-699492332',id=116,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCtkVNAdyfTZxevPypDM4BSYE1hEin7kURjxMHJymyPY9Csd3IhKS5yulx5aFvPHDU4xFm7qnx5crpT14GkSPm/EI9TigbJvl3D9u9RL82NR4qlOqRKDsJAXZt9pXEPkRw==',key_name='tempest-keypair-809017258',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='250671461f27498d9f6b4476c7b69533',ramdisk_id='',reservation_id='r-8ajfbgd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-552273978',owner_user_name='tempest-ServerActionsTestOtherA-552273978-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:14:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='58625e4c2b5d43a1abbab05b98853a65',uuid=bba0127d-6332-44fa-8fc1-c3d7321260fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "address": "fa:16:3e:a2:7d:a4", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcee20a9b-a5", "ovs_interfaceid": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.253 226310 DEBUG nova.network.os_vif_util [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converting VIF {"id": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "address": "fa:16:3e:a2:7d:a4", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcee20a9b-a5", "ovs_interfaceid": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.253 226310 DEBUG nova.network.os_vif_util [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:7d:a4,bridge_name='br-int',has_traffic_filtering=True,id=cee20a9b-a551-4e60-9b01-151c48dc45fe,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcee20a9b-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.254 226310 DEBUG nova.objects.instance [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'pci_devices' on Instance uuid bba0127d-6332-44fa-8fc1-c3d7321260fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.265 226310 DEBUG nova.virt.libvirt.driver [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.265 226310 DEBUG nova.virt.libvirt.driver [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.266 226310 DEBUG nova.virt.libvirt.driver [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No VIF found with MAC fa:16:3e:5e:cf:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.267 226310 INFO nova.virt.libvirt.driver [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Using config drive#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.300 226310 DEBUG nova.storage.rbd_utils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 6033af89-d6d3-45c5-bf88-b2f17800f12e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.308 226310 DEBUG nova.virt.libvirt.driver [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  <uuid>bba0127d-6332-44fa-8fc1-c3d7321260fa</uuid>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  <name>instance-00000074</name>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerActionsTestOtherA-server-699492332</nova:name>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:14:52</nova:creationTime>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <nova:user uuid="58625e4c2b5d43a1abbab05b98853a65">tempest-ServerActionsTestOtherA-552273978-project-member</nova:user>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <nova:project uuid="250671461f27498d9f6b4476c7b69533">tempest-ServerActionsTestOtherA-552273978</nova:project>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <nova:port uuid="cee20a9b-a551-4e60-9b01-151c48dc45fe">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <entry name="serial">bba0127d-6332-44fa-8fc1-c3d7321260fa</entry>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <entry name="uuid">bba0127d-6332-44fa-8fc1-c3d7321260fa</entry>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/bba0127d-6332-44fa-8fc1-c3d7321260fa_disk.config">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="volumes/volume-ad74cad8-db25-41c6-a50a-ce4cad08c1ea">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <serial>ad74cad8-db25-41c6-a50a-ce4cad08c1ea</serial>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:a2:7d:a4"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <target dev="tapcee20a9b-a5"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/bba0127d-6332-44fa-8fc1-c3d7321260fa/console.log" append="off"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:14:53 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:14:53 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:14:53 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:14:53 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.309 226310 DEBUG nova.compute.manager [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Preparing to wait for external event network-vif-plugged-cee20a9b-a551-4e60-9b01-151c48dc45fe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.310 226310 DEBUG oslo_concurrency.lockutils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.310 226310 DEBUG oslo_concurrency.lockutils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.310 226310 DEBUG oslo_concurrency.lockutils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.311 226310 DEBUG nova.virt.libvirt.vif [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:14:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-699492332',display_name='tempest-ServerActionsTestOtherA-server-699492332',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-699492332',id=116,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCtkVNAdyfTZxevPypDM4BSYE1hEin7kURjxMHJymyPY9Csd3IhKS5yulx5aFvPHDU4xFm7qnx5crpT14GkSPm/EI9TigbJvl3D9u9RL82NR4qlOqRKDsJAXZt9pXEPkRw==',key_name='tempest-keypair-809017258',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='250671461f27498d9f6b4476c7b69533',ramdisk_id='',reservation_id='r-8ajfbgd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-552273978',owner_user_name='tempest-ServerActionsTestOtherA-552273978-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:14:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='58625e4c2b5d43a1abbab05b98853a65',uuid=bba0127d-6332-44fa-8fc1-c3d7321260fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "address": "fa:16:3e:a2:7d:a4", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcee20a9b-a5", "ovs_interfaceid": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.311 226310 DEBUG nova.network.os_vif_util [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converting VIF {"id": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "address": "fa:16:3e:a2:7d:a4", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcee20a9b-a5", "ovs_interfaceid": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.311 226310 DEBUG nova.network.os_vif_util [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:7d:a4,bridge_name='br-int',has_traffic_filtering=True,id=cee20a9b-a551-4e60-9b01-151c48dc45fe,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcee20a9b-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.312 226310 DEBUG os_vif [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:7d:a4,bridge_name='br-int',has_traffic_filtering=True,id=cee20a9b-a551-4e60-9b01-151c48dc45fe,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcee20a9b-a5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.312 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.313 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.313 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.315 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.316 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcee20a9b-a5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.316 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcee20a9b-a5, col_values=(('external_ids', {'iface-id': 'cee20a9b-a551-4e60-9b01-151c48dc45fe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a2:7d:a4', 'vm-uuid': 'bba0127d-6332-44fa-8fc1-c3d7321260fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.318 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:53 np0005539564 NetworkManager[48997]: <info>  [1764404093.3189] manager: (tapcee20a9b-a5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/207)
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.324 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.326 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.326 226310 INFO os_vif [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:7d:a4,bridge_name='br-int',has_traffic_filtering=True,id=cee20a9b-a551-4e60-9b01-151c48dc45fe,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcee20a9b-a5')#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.381 226310 DEBUG nova.virt.libvirt.driver [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.381 226310 DEBUG nova.virt.libvirt.driver [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.382 226310 DEBUG nova.virt.libvirt.driver [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No VIF found with MAC fa:16:3e:a2:7d:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.382 226310 INFO nova.virt.libvirt.driver [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Using config drive#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.415 226310 DEBUG nova.storage.rbd_utils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image bba0127d-6332-44fa-8fc1-c3d7321260fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.685 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404078.6833808, 48a6ffaa-4f03-4048-bd19-c50aea2863cc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.685 226310 INFO nova.compute.manager [-] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.710 226310 DEBUG nova.compute.manager [None req-35dbb015-9403-41c0-a7d9-fb88bb37845f - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.953 226310 INFO nova.virt.libvirt.driver [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Creating config drive at /var/lib/nova/instances/bba0127d-6332-44fa-8fc1-c3d7321260fa/disk.config#033[00m
Nov 29 03:14:53 np0005539564 nova_compute[226295]: 2025-11-29 08:14:53.961 226310 DEBUG oslo_concurrency.processutils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bba0127d-6332-44fa-8fc1-c3d7321260fa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7tc8v73i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:54.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.086 226310 INFO nova.virt.libvirt.driver [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Creating config drive at /var/lib/nova/instances/6033af89-d6d3-45c5-bf88-b2f17800f12e/disk.config#033[00m
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.099 226310 DEBUG oslo_concurrency.processutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6033af89-d6d3-45c5-bf88-b2f17800f12e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp13slsq78 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.134 226310 DEBUG oslo_concurrency.processutils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bba0127d-6332-44fa-8fc1-c3d7321260fa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7tc8v73i" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.138 226310 DEBUG nova.network.neutron [req-f83008e8-ba12-4849-a48b-6bdfa7fe9163 req-a4f7b231-8072-4601-94d9-db735742301d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Updated VIF entry in instance network info cache for port a221d286-cb0f-41fd-9997-b1687a875e0e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.139 226310 DEBUG nova.network.neutron [req-f83008e8-ba12-4849-a48b-6bdfa7fe9163 req-a4f7b231-8072-4601-94d9-db735742301d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Updating instance_info_cache with network_info: [{"id": "a221d286-cb0f-41fd-9997-b1687a875e0e", "address": "fa:16:3e:5e:cf:8d", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa221d286-cb", "ovs_interfaceid": "a221d286-cb0f-41fd-9997-b1687a875e0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.181 226310 DEBUG nova.storage.rbd_utils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image bba0127d-6332-44fa-8fc1-c3d7321260fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.186 226310 DEBUG oslo_concurrency.processutils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bba0127d-6332-44fa-8fc1-c3d7321260fa/disk.config bba0127d-6332-44fa-8fc1-c3d7321260fa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.232 226310 DEBUG oslo_concurrency.lockutils [req-f83008e8-ba12-4849-a48b-6bdfa7fe9163 req-a4f7b231-8072-4601-94d9-db735742301d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-6033af89-d6d3-45c5-bf88-b2f17800f12e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.243 226310 DEBUG oslo_concurrency.processutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6033af89-d6d3-45c5-bf88-b2f17800f12e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp13slsq78" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.277 226310 DEBUG nova.storage.rbd_utils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 6033af89-d6d3-45c5-bf88-b2f17800f12e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.284 226310 DEBUG oslo_concurrency.processutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6033af89-d6d3-45c5-bf88-b2f17800f12e/disk.config 6033af89-d6d3-45c5-bf88-b2f17800f12e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:54.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.369 226310 DEBUG oslo_concurrency.processutils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bba0127d-6332-44fa-8fc1-c3d7321260fa/disk.config bba0127d-6332-44fa-8fc1-c3d7321260fa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.371 226310 INFO nova.virt.libvirt.driver [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Deleting local config drive /var/lib/nova/instances/bba0127d-6332-44fa-8fc1-c3d7321260fa/disk.config because it was imported into RBD.#033[00m
Nov 29 03:14:54 np0005539564 NetworkManager[48997]: <info>  [1764404094.4336] manager: (tapcee20a9b-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/208)
Nov 29 03:14:54 np0005539564 kernel: tapcee20a9b-a5: entered promiscuous mode
Nov 29 03:14:54 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:54Z|00408|binding|INFO|Claiming lport cee20a9b-a551-4e60-9b01-151c48dc45fe for this chassis.
Nov 29 03:14:54 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:54Z|00409|binding|INFO|cee20a9b-a551-4e60-9b01-151c48dc45fe: Claiming fa:16:3e:a2:7d:a4 10.100.0.4
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.438 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.449 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:7d:a4 10.100.0.4'], port_security=['fa:16:3e:a2:7d:a4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'bba0127d-6332-44fa-8fc1-c3d7321260fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '250671461f27498d9f6b4476c7b69533', 'neutron:revision_number': '2', 'neutron:security_group_ids': '80090c82-90f6-4c43-a017-5be03974adfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a03133c-20d7-4b83-a65b-3860eafc9833, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=cee20a9b-a551-4e60-9b01-151c48dc45fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.451 139780 INFO neutron.agent.ovn.metadata.agent [-] Port cee20a9b-a551-4e60-9b01-151c48dc45fe in datapath 10a9b8d1-2de6-4e47-8e44-16b661da8624 bound to our chassis#033[00m
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.453 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 10a9b8d1-2de6-4e47-8e44-16b661da8624#033[00m
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.468 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ab9111-25b5-4f68-9168-edcb635f4154]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.470 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap10a9b8d1-21 in ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:14:54 np0005539564 systemd-udevd[269641]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.472 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap10a9b8d1-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.472 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2ae1e6bd-2341-45d5-832d-b960543f6d11]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.473 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f5f5fc-9b49-48ea-94b6-22c1231abf93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:54 np0005539564 systemd-machined[190128]: New machine qemu-49-instance-00000074.
Nov 29 03:14:54 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:54Z|00410|binding|INFO|Setting lport cee20a9b-a551-4e60-9b01-151c48dc45fe ovn-installed in OVS
Nov 29 03:14:54 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:54Z|00411|binding|INFO|Setting lport cee20a9b-a551-4e60-9b01-151c48dc45fe up in Southbound
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.484 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.486 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.486 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[501a00f6-a568-4f12-b0c0-4d14af65464d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:54 np0005539564 NetworkManager[48997]: <info>  [1764404094.4933] device (tapcee20a9b-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:14:54 np0005539564 systemd[1]: Started Virtual Machine qemu-49-instance-00000074.
Nov 29 03:14:54 np0005539564 NetworkManager[48997]: <info>  [1764404094.4940] device (tapcee20a9b-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.501 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c59a2f20-1eed-40ba-99d2-943ae2d9298c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.517 226310 DEBUG oslo_concurrency.processutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6033af89-d6d3-45c5-bf88-b2f17800f12e/disk.config 6033af89-d6d3-45c5-bf88-b2f17800f12e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.517 226310 INFO nova.virt.libvirt.driver [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Deleting local config drive /var/lib/nova/instances/6033af89-d6d3-45c5-bf88-b2f17800f12e/disk.config because it was imported into RBD.#033[00m
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.537 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[c9b94c63-76ad-4406-84ec-32c4dcd85416]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:54 np0005539564 NetworkManager[48997]: <info>  [1764404094.5479] manager: (tap10a9b8d1-20): new Veth device (/org/freedesktop/NetworkManager/Devices/209)
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.548 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ecbc5245-8899-4a36-b7a0-dbb5ff1ff908]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.586 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[88803d04-f5c3-40d3-aa7b-448d7ba054cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.589 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[bda1eb45-8720-4878-804d-2e6774ed9446]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:54 np0005539564 kernel: tapa221d286-cb: entered promiscuous mode
Nov 29 03:14:54 np0005539564 NetworkManager[48997]: <info>  [1764404094.6021] manager: (tapa221d286-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/210)
Nov 29 03:14:54 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:54Z|00412|binding|INFO|Claiming lport a221d286-cb0f-41fd-9997-b1687a875e0e for this chassis.
Nov 29 03:14:54 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:54Z|00413|binding|INFO|a221d286-cb0f-41fd-9997-b1687a875e0e: Claiming fa:16:3e:5e:cf:8d 10.100.0.10
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.603 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:54 np0005539564 systemd-udevd[269677]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.618 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:cf:8d 10.100.0.10'], port_security=['fa:16:3e:5e:cf:8d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6033af89-d6d3-45c5-bf88-b2f17800f12e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba867fac17034bb28fe2cdb0fff3af2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a54db614-4504-4e8e-a3a5-27d3f60f6cdf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5e4b2f3-5e6e-48f8-b35a-ab61c62108a6, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=a221d286-cb0f-41fd-9997-b1687a875e0e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:54 np0005539564 NetworkManager[48997]: <info>  [1764404094.6215] device (tapa221d286-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:14:54 np0005539564 NetworkManager[48997]: <info>  [1764404094.6233] device (tap10a9b8d1-20): carrier: link connected
Nov 29 03:14:54 np0005539564 NetworkManager[48997]: <info>  [1764404094.6236] device (tapa221d286-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:14:54 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:54Z|00414|binding|INFO|Setting lport a221d286-cb0f-41fd-9997-b1687a875e0e ovn-installed in OVS
Nov 29 03:14:54 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:54Z|00415|binding|INFO|Setting lport a221d286-cb0f-41fd-9997-b1687a875e0e up in Southbound
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.625 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.630 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:54 np0005539564 systemd-machined[190128]: New machine qemu-50-instance-00000073.
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.642 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[a562cdca-47b8-4559-91d7-2832f18f746a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:54 np0005539564 systemd[1]: Started Virtual Machine qemu-50-instance-00000073.
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.660 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9ebe5ef2-55ac-4ab9-9ac3-6ea915001a44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap10a9b8d1-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:06:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706930, 'reachable_time': 31221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269691, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.679 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1e0a972d-e379-4dbb-af1d-ec8f7f28078e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe50:676'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706930, 'tstamp': 706930}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269692, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.702 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c7b0007c-a479-4457-9906-4d5a9c577556]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap10a9b8d1-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:06:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706930, 'reachable_time': 31221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269709, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.739 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7f1ddc89-289b-45fa-9278-f12c69ae2183]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.798 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[05d7d8a3-c213-4335-87ba-859474b5006f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.800 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10a9b8d1-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.800 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.800 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10a9b8d1-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.802 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:54 np0005539564 NetworkManager[48997]: <info>  [1764404094.8032] manager: (tap10a9b8d1-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/211)
Nov 29 03:14:54 np0005539564 kernel: tap10a9b8d1-20: entered promiscuous mode
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.806 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.807 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap10a9b8d1-20, col_values=(('external_ids', {'iface-id': '56facbc8-1a3f-4008-8f77-23eeac832994'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.808 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:54 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:54Z|00416|binding|INFO|Releasing lport 56facbc8-1a3f-4008-8f77-23eeac832994 from this chassis (sb_readonly=0)
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.809 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.810 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/10a9b8d1-2de6-4e47-8e44-16b661da8624.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/10a9b8d1-2de6-4e47-8e44-16b661da8624.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.811 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[af2d8bf8-3cb4-49d4-817a-6938137c1c95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.812 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-10a9b8d1-2de6-4e47-8e44-16b661da8624
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/10a9b8d1-2de6-4e47-8e44-16b661da8624.pid.haproxy
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 10a9b8d1-2de6-4e47-8e44-16b661da8624
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:14:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:54.813 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'env', 'PROCESS_TAG=haproxy-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/10a9b8d1-2de6-4e47-8e44-16b661da8624.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:14:54 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:54Z|00417|binding|INFO|Releasing lport 56facbc8-1a3f-4008-8f77-23eeac832994 from this chassis (sb_readonly=0)
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.836 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.894 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.913 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404094.91326, bba0127d-6332-44fa-8fc1-c3d7321260fa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.914 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] VM Started (Lifecycle Event)#033[00m
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.932 226310 DEBUG nova.compute.manager [req-05f32cdb-fa24-4996-a0ad-3d5b0718e72a req-9eca2e8e-8a4b-4576-aea7-077f4fff59b3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Received event network-vif-plugged-cee20a9b-a551-4e60-9b01-151c48dc45fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.932 226310 DEBUG oslo_concurrency.lockutils [req-05f32cdb-fa24-4996-a0ad-3d5b0718e72a req-9eca2e8e-8a4b-4576-aea7-077f4fff59b3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.933 226310 DEBUG oslo_concurrency.lockutils [req-05f32cdb-fa24-4996-a0ad-3d5b0718e72a req-9eca2e8e-8a4b-4576-aea7-077f4fff59b3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.933 226310 DEBUG oslo_concurrency.lockutils [req-05f32cdb-fa24-4996-a0ad-3d5b0718e72a req-9eca2e8e-8a4b-4576-aea7-077f4fff59b3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.933 226310 DEBUG nova.compute.manager [req-05f32cdb-fa24-4996-a0ad-3d5b0718e72a req-9eca2e8e-8a4b-4576-aea7-077f4fff59b3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Processing event network-vif-plugged-cee20a9b-a551-4e60-9b01-151c48dc45fe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.934 226310 DEBUG nova.compute.manager [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.941 226310 DEBUG nova.virt.libvirt.driver [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.944 226310 INFO nova.virt.libvirt.driver [-] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Instance spawned successfully.
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.944 226310 DEBUG nova.virt.libvirt.driver [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.948 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.951 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.970 226310 DEBUG nova.virt.libvirt.driver [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.971 226310 DEBUG nova.virt.libvirt.driver [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.971 226310 DEBUG nova.virt.libvirt.driver [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.971 226310 DEBUG nova.virt.libvirt.driver [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.972 226310 DEBUG nova.virt.libvirt.driver [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.972 226310 DEBUG nova.virt.libvirt.driver [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.977 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.977 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404094.914838, bba0127d-6332-44fa-8fc1-c3d7321260fa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:14:54 np0005539564 nova_compute[226295]: 2025-11-29 08:14:54.977 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] VM Paused (Lifecycle Event)
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.005 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.015 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404094.9368446, bba0127d-6332-44fa-8fc1-c3d7321260fa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.015 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] VM Resumed (Lifecycle Event)
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.034 226310 INFO nova.compute.manager [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Took 4.74 seconds to spawn the instance on the hypervisor.
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.035 226310 DEBUG nova.compute.manager [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.036 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.041 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.085 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.104 226310 INFO nova.compute.manager [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Took 8.06 seconds to build instance.
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.107 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404095.1066835, 6033af89-d6d3-45c5-bf88-b2f17800f12e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.107 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] VM Started (Lifecycle Event)
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.126 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.127 226310 DEBUG oslo_concurrency.lockutils [None req-3f7d8ce4-47a9-4138-af75-96cda764a63b 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.214s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.130 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404095.1071382, 6033af89-d6d3-45c5-bf88-b2f17800f12e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.130 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] VM Paused (Lifecycle Event)
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.157 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.160 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.178 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.202 226310 DEBUG nova.compute.manager [req-f91985b5-02f1-44c5-88c4-091dbd3012f3 req-f50bc192-b894-4a20-9285-9f63505f6d82 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Received event network-vif-plugged-a221d286-cb0f-41fd-9997-b1687a875e0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.203 226310 DEBUG oslo_concurrency.lockutils [req-f91985b5-02f1-44c5-88c4-091dbd3012f3 req-f50bc192-b894-4a20-9285-9f63505f6d82 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.203 226310 DEBUG oslo_concurrency.lockutils [req-f91985b5-02f1-44c5-88c4-091dbd3012f3 req-f50bc192-b894-4a20-9285-9f63505f6d82 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.203 226310 DEBUG oslo_concurrency.lockutils [req-f91985b5-02f1-44c5-88c4-091dbd3012f3 req-f50bc192-b894-4a20-9285-9f63505f6d82 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.204 226310 DEBUG nova.compute.manager [req-f91985b5-02f1-44c5-88c4-091dbd3012f3 req-f50bc192-b894-4a20-9285-9f63505f6d82 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Processing event network-vif-plugged-a221d286-cb0f-41fd-9997-b1687a875e0e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.204 226310 DEBUG nova.compute.manager [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.213 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404095.2130504, 6033af89-d6d3-45c5-bf88-b2f17800f12e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.214 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] VM Resumed (Lifecycle Event)
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.215 226310 DEBUG nova.virt.libvirt.driver [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.219 226310 INFO nova.virt.libvirt.driver [-] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Instance spawned successfully.
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.219 226310 DEBUG nova.virt.libvirt.driver [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.233 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.238 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.243 226310 DEBUG nova.virt.libvirt.driver [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.243 226310 DEBUG nova.virt.libvirt.driver [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.244 226310 DEBUG nova.virt.libvirt.driver [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.244 226310 DEBUG nova.virt.libvirt.driver [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.245 226310 DEBUG nova.virt.libvirt.driver [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.245 226310 DEBUG nova.virt.libvirt.driver [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.266 226310 DEBUG nova.network.neutron [req-d88105a8-312e-4862-b67d-a0c456a51f02 req-c6934a49-70ea-4603-82c5-573de5449aac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Updated VIF entry in instance network info cache for port cee20a9b-a551-4e60-9b01-151c48dc45fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.267 226310 DEBUG nova.network.neutron [req-d88105a8-312e-4862-b67d-a0c456a51f02 req-c6934a49-70ea-4603-82c5-573de5449aac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Updating instance_info_cache with network_info: [{"id": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "address": "fa:16:3e:a2:7d:a4", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcee20a9b-a5", "ovs_interfaceid": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.282 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.304 226310 DEBUG oslo_concurrency.lockutils [req-d88105a8-312e-4862-b67d-a0c456a51f02 req-c6934a49-70ea-4603-82c5-573de5449aac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-bba0127d-6332-44fa-8fc1-c3d7321260fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:14:55 np0005539564 podman[269813]: 2025-11-29 08:14:55.215176372 +0000 UTC m=+0.025631832 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.348 226310 INFO nova.compute.manager [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Took 7.08 seconds to spawn the instance on the hypervisor.
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.348 226310 DEBUG nova.compute.manager [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.398 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.399 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.399 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.399 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.400 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:14:55 np0005539564 podman[269813]: 2025-11-29 08:14:55.448798644 +0000 UTC m=+0.259254084 container create 29610eb514d3db951b4816c009299e4e42032c18342740efdc39ae9f4276fb6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.474 226310 INFO nova.compute.manager [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Took 8.49 seconds to build instance.
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.495 226310 DEBUG oslo_concurrency.lockutils [None req-8ccc27d9-1484-4c74-ae0a-f740b9ce722b ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:55 np0005539564 systemd[1]: Started libpod-conmon-29610eb514d3db951b4816c009299e4e42032c18342740efdc39ae9f4276fb6c.scope.
Nov 29 03:14:55 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:14:55 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5df43302f2b868452dd8c04de6b6123ea671a0685bb5697c6fe00dd57a62bd11/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:55 np0005539564 podman[269813]: 2025-11-29 08:14:55.734454409 +0000 UTC m=+0.544909879 container init 29610eb514d3db951b4816c009299e4e42032c18342740efdc39ae9f4276fb6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 03:14:55 np0005539564 podman[269813]: 2025-11-29 08:14:55.747170752 +0000 UTC m=+0.557626192 container start 29610eb514d3db951b4816c009299e4e42032c18342740efdc39ae9f4276fb6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:14:55 np0005539564 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[269827]: [NOTICE]   (269850) : New worker (269852) forked
Nov 29 03:14:55 np0005539564 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[269827]: [NOTICE]   (269850) : Loading success.
Nov 29 03:14:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:55.835 139780 INFO neutron.agent.ovn.metadata.agent [-] Port a221d286-cb0f-41fd-9997-b1687a875e0e in datapath 4d5b8c11-b69e-4a74-846b-03943fb29a81 unbound from our chassis
Nov 29 03:14:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:55.838 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d5b8c11-b69e-4a74-846b-03943fb29a81
Nov 29 03:14:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:55.850 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[47725cc4-c431-4cb2-9351-17f53e2c3ac0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:55.851 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d5b8c11-b1 in ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 03:14:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:55.852 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d5b8c11-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 03:14:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:55.852 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[890b0e46-6904-4a55-82a3-0effcd66dd0a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:55.854 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3ce500e2-c4c6-41be-a048-32884a0ff22f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:55.874 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[20e43b10-a15a-42e7-aa0b-32d3f144f818]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:55.897 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ebb01e32-f549-4352-9a5c-f3d68e0764f3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:55 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2679506722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:55.928 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[4789dcec-0aed-4724-93ec-bd32017e396a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:55 np0005539564 nova_compute[226295]: 2025-11-29 08:14:55.930 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:55.939 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad7efb8-3da9-4f7b-b7a1-23e335e7d212]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:55 np0005539564 NetworkManager[48997]: <info>  [1764404095.9407] manager: (tap4d5b8c11-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/212)
Nov 29 03:14:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:55.972 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[dd2fbfbc-2792-4e51-9e1b-1fd969b40667]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:55.978 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[91fb498c-cb37-4af2-967c-106f4a5bca5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:56 np0005539564 NetworkManager[48997]: <info>  [1764404095.9998] device (tap4d5b8c11-b0): carrier: link connected
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:56.006 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[f63d8d65-9f9d-4af5-b6b2-e9f5215cc269]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:56.026 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[db6cf8de-ce29-4e7e-b0d5-304b2509f559]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d5b8c11-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:06:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707068, 'reachable_time': 37879, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269876, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:56 np0005539564 nova_compute[226295]: 2025-11-29 08:14:56.039 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:14:56 np0005539564 nova_compute[226295]: 2025-11-29 08:14:56.039 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:14:56 np0005539564 nova_compute[226295]: 2025-11-29 08:14:56.042 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:14:56 np0005539564 nova_compute[226295]: 2025-11-29 08:14:56.042 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:56.048 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[38d3ecc4-ea62-4919-a8d4-a9cb1a58f217]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:6d1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707068, 'tstamp': 707068}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269877, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:56.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:56.067 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0a45421e-6b21-4d5b-adef-36099d127e25]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d5b8c11-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:06:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707068, 'reachable_time': 37879, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269878, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:56.111 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1b688062-fb9e-4c1f-b3bf-a2bb3f3af093]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:56.196 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9ab2d7dd-2397-4704-8466-962c949dda6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:56.198 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d5b8c11-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:56.198 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:56.199 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d5b8c11-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:56 np0005539564 nova_compute[226295]: 2025-11-29 08:14:56.200 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:56 np0005539564 NetworkManager[48997]: <info>  [1764404096.2014] manager: (tap4d5b8c11-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/213)
Nov 29 03:14:56 np0005539564 kernel: tap4d5b8c11-b0: entered promiscuous mode
Nov 29 03:14:56 np0005539564 nova_compute[226295]: 2025-11-29 08:14:56.222 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:56.226 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d5b8c11-b0, col_values=(('external_ids', {'iface-id': 'a2e47e7a-aef0-4c09-aeef-4a0d63960d7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:56 np0005539564 nova_compute[226295]: 2025-11-29 08:14:56.227 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:56 np0005539564 ovn_controller[130591]: 2025-11-29T08:14:56Z|00418|binding|INFO|Releasing lport a2e47e7a-aef0-4c09-aeef-4a0d63960d7b from this chassis (sb_readonly=0)
Nov 29 03:14:56 np0005539564 nova_compute[226295]: 2025-11-29 08:14:56.229 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:56.230 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d5b8c11-b69e-4a74-846b-03943fb29a81.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d5b8c11-b69e-4a74-846b-03943fb29a81.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:56.231 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0ab02540-e8d4-4f45-90f7-e361ba2096c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:56.232 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-4d5b8c11-b69e-4a74-846b-03943fb29a81
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/4d5b8c11-b69e-4a74-846b-03943fb29a81.pid.haproxy
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 4d5b8c11-b69e-4a74-846b-03943fb29a81
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:14:56 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:14:56.233 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'env', 'PROCESS_TAG=haproxy-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d5b8c11-b69e-4a74-846b-03943fb29a81.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:14:56 np0005539564 nova_compute[226295]: 2025-11-29 08:14:56.246 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:56 np0005539564 nova_compute[226295]: 2025-11-29 08:14:56.302 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:14:56 np0005539564 nova_compute[226295]: 2025-11-29 08:14:56.304 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4272MB free_disk=20.785400390625GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:14:56 np0005539564 nova_compute[226295]: 2025-11-29 08:14:56.304 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:56 np0005539564 nova_compute[226295]: 2025-11-29 08:14:56.305 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:56.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:56 np0005539564 nova_compute[226295]: 2025-11-29 08:14:56.367 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 6033af89-d6d3-45c5-bf88-b2f17800f12e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:14:56 np0005539564 nova_compute[226295]: 2025-11-29 08:14:56.367 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance bba0127d-6332-44fa-8fc1-c3d7321260fa actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:14:56 np0005539564 nova_compute[226295]: 2025-11-29 08:14:56.368 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:14:56 np0005539564 nova_compute[226295]: 2025-11-29 08:14:56.368 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:14:56 np0005539564 nova_compute[226295]: 2025-11-29 08:14:56.419 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:56 np0005539564 podman[269911]: 2025-11-29 08:14:56.58160474 +0000 UTC m=+0.028293664 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:14:56 np0005539564 podman[269911]: 2025-11-29 08:14:56.702320216 +0000 UTC m=+0.149009120 container create f9096fad7fb57e350c5f0822ff3ec45f53e862f3284aa8902555ce9501027b46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:14:56 np0005539564 systemd[1]: Started libpod-conmon-f9096fad7fb57e350c5f0822ff3ec45f53e862f3284aa8902555ce9501027b46.scope.
Nov 29 03:14:56 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:14:56 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e5caaefd0d22b976f00d791ff78363709c734276e168660fdfae53c0f5bb55f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:56 np0005539564 podman[269911]: 2025-11-29 08:14:56.784123542 +0000 UTC m=+0.230812466 container init f9096fad7fb57e350c5f0822ff3ec45f53e862f3284aa8902555ce9501027b46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:14:56 np0005539564 podman[269911]: 2025-11-29 08:14:56.79068723 +0000 UTC m=+0.237376164 container start f9096fad7fb57e350c5f0822ff3ec45f53e862f3284aa8902555ce9501027b46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:14:56 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[269945]: [NOTICE]   (269949) : New worker (269951) forked
Nov 29 03:14:56 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[269945]: [NOTICE]   (269949) : Loading success.
Nov 29 03:14:56 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:56 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3638998199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:56 np0005539564 nova_compute[226295]: 2025-11-29 08:14:56.926 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:56 np0005539564 nova_compute[226295]: 2025-11-29 08:14:56.931 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:14:56 np0005539564 nova_compute[226295]: 2025-11-29 08:14:56.958 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:14:56 np0005539564 nova_compute[226295]: 2025-11-29 08:14:56.985 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:14:56 np0005539564 nova_compute[226295]: 2025-11-29 08:14:56.986 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:57 np0005539564 nova_compute[226295]: 2025-11-29 08:14:57.099 226310 DEBUG nova.compute.manager [req-d6abe347-e962-439b-8206-b007845c512b req-f57a548b-6ac9-458c-8b55-1c02216242a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Received event network-vif-plugged-cee20a9b-a551-4e60-9b01-151c48dc45fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:57 np0005539564 nova_compute[226295]: 2025-11-29 08:14:57.101 226310 DEBUG oslo_concurrency.lockutils [req-d6abe347-e962-439b-8206-b007845c512b req-f57a548b-6ac9-458c-8b55-1c02216242a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:57 np0005539564 nova_compute[226295]: 2025-11-29 08:14:57.101 226310 DEBUG oslo_concurrency.lockutils [req-d6abe347-e962-439b-8206-b007845c512b req-f57a548b-6ac9-458c-8b55-1c02216242a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:57 np0005539564 nova_compute[226295]: 2025-11-29 08:14:57.102 226310 DEBUG oslo_concurrency.lockutils [req-d6abe347-e962-439b-8206-b007845c512b req-f57a548b-6ac9-458c-8b55-1c02216242a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:57 np0005539564 nova_compute[226295]: 2025-11-29 08:14:57.102 226310 DEBUG nova.compute.manager [req-d6abe347-e962-439b-8206-b007845c512b req-f57a548b-6ac9-458c-8b55-1c02216242a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] No waiting events found dispatching network-vif-plugged-cee20a9b-a551-4e60-9b01-151c48dc45fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:57 np0005539564 nova_compute[226295]: 2025-11-29 08:14:57.103 226310 WARNING nova.compute.manager [req-d6abe347-e962-439b-8206-b007845c512b req-f57a548b-6ac9-458c-8b55-1c02216242a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Received unexpected event network-vif-plugged-cee20a9b-a551-4e60-9b01-151c48dc45fe for instance with vm_state active and task_state None.#033[00m
Nov 29 03:14:57 np0005539564 nova_compute[226295]: 2025-11-29 08:14:57.211 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:57 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:14:57 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:14:57 np0005539564 nova_compute[226295]: 2025-11-29 08:14:57.298 226310 DEBUG nova.compute.manager [req-d5752781-c463-4892-abf5-1c054b494625 req-24fd062d-04a3-4396-97bc-50654afe559d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Received event network-vif-plugged-a221d286-cb0f-41fd-9997-b1687a875e0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:57 np0005539564 nova_compute[226295]: 2025-11-29 08:14:57.298 226310 DEBUG oslo_concurrency.lockutils [req-d5752781-c463-4892-abf5-1c054b494625 req-24fd062d-04a3-4396-97bc-50654afe559d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:57 np0005539564 nova_compute[226295]: 2025-11-29 08:14:57.299 226310 DEBUG oslo_concurrency.lockutils [req-d5752781-c463-4892-abf5-1c054b494625 req-24fd062d-04a3-4396-97bc-50654afe559d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:57 np0005539564 nova_compute[226295]: 2025-11-29 08:14:57.299 226310 DEBUG oslo_concurrency.lockutils [req-d5752781-c463-4892-abf5-1c054b494625 req-24fd062d-04a3-4396-97bc-50654afe559d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:57 np0005539564 nova_compute[226295]: 2025-11-29 08:14:57.299 226310 DEBUG nova.compute.manager [req-d5752781-c463-4892-abf5-1c054b494625 req-24fd062d-04a3-4396-97bc-50654afe559d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] No waiting events found dispatching network-vif-plugged-a221d286-cb0f-41fd-9997-b1687a875e0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:57 np0005539564 nova_compute[226295]: 2025-11-29 08:14:57.299 226310 WARNING nova.compute.manager [req-d5752781-c463-4892-abf5-1c054b494625 req-24fd062d-04a3-4396-97bc-50654afe559d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Received unexpected event network-vif-plugged-a221d286-cb0f-41fd-9997-b1687a875e0e for instance with vm_state active and task_state None.#033[00m
Nov 29 03:14:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:58.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:58 np0005539564 nova_compute[226295]: 2025-11-29 08:14:58.319 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:14:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:58.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:59 np0005539564 nova_compute[226295]: 2025-11-29 08:14:59.275 226310 DEBUG nova.compute.manager [req-ab2f6fc8-0055-48af-9e59-e3153f2be634 req-c829a8ba-7dda-447f-9624-3aab19b5a6fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Received event network-changed-cee20a9b-a551-4e60-9b01-151c48dc45fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:59 np0005539564 nova_compute[226295]: 2025-11-29 08:14:59.275 226310 DEBUG nova.compute.manager [req-ab2f6fc8-0055-48af-9e59-e3153f2be634 req-c829a8ba-7dda-447f-9624-3aab19b5a6fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Refreshing instance network info cache due to event network-changed-cee20a9b-a551-4e60-9b01-151c48dc45fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:14:59 np0005539564 nova_compute[226295]: 2025-11-29 08:14:59.275 226310 DEBUG oslo_concurrency.lockutils [req-ab2f6fc8-0055-48af-9e59-e3153f2be634 req-c829a8ba-7dda-447f-9624-3aab19b5a6fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-bba0127d-6332-44fa-8fc1-c3d7321260fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:59 np0005539564 nova_compute[226295]: 2025-11-29 08:14:59.276 226310 DEBUG oslo_concurrency.lockutils [req-ab2f6fc8-0055-48af-9e59-e3153f2be634 req-c829a8ba-7dda-447f-9624-3aab19b5a6fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-bba0127d-6332-44fa-8fc1-c3d7321260fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:59 np0005539564 nova_compute[226295]: 2025-11-29 08:14:59.276 226310 DEBUG nova.network.neutron [req-ab2f6fc8-0055-48af-9e59-e3153f2be634 req-c829a8ba-7dda-447f-9624-3aab19b5a6fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Refreshing network info cache for port cee20a9b-a551-4e60-9b01-151c48dc45fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:15:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:00.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:00.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:01 np0005539564 nova_compute[226295]: 2025-11-29 08:15:01.803 226310 DEBUG nova.compute.manager [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Nov 29 03:15:01 np0005539564 nova_compute[226295]: 2025-11-29 08:15:01.928 226310 DEBUG oslo_concurrency.lockutils [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:01 np0005539564 nova_compute[226295]: 2025-11-29 08:15:01.929 226310 DEBUG oslo_concurrency.lockutils [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:01 np0005539564 nova_compute[226295]: 2025-11-29 08:15:01.960 226310 DEBUG nova.objects.instance [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'pci_requests' on Instance uuid bba0127d-6332-44fa-8fc1-c3d7321260fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:01.977 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:02 np0005539564 nova_compute[226295]: 2025-11-29 08:15:02.000 226310 DEBUG nova.virt.hardware [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:15:02 np0005539564 nova_compute[226295]: 2025-11-29 08:15:02.001 226310 INFO nova.compute.claims [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:15:02 np0005539564 nova_compute[226295]: 2025-11-29 08:15:02.002 226310 DEBUG nova.objects.instance [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'resources' on Instance uuid bba0127d-6332-44fa-8fc1-c3d7321260fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:02 np0005539564 nova_compute[226295]: 2025-11-29 08:15:02.022 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404087.0176682, 98ef0160-f50f-4264-a93b-31e6e8909b19 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:15:02 np0005539564 nova_compute[226295]: 2025-11-29 08:15:02.023 226310 INFO nova.compute.manager [-] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:15:02 np0005539564 nova_compute[226295]: 2025-11-29 08:15:02.032 226310 DEBUG nova.objects.instance [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'pci_devices' on Instance uuid bba0127d-6332-44fa-8fc1-c3d7321260fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:02 np0005539564 nova_compute[226295]: 2025-11-29 08:15:02.057 226310 DEBUG nova.compute.manager [None req-d11d55b8-883d-4b42-bb93-8f5dd45b0597 - - - - - -] [instance: 98ef0160-f50f-4264-a93b-31e6e8909b19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:02.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:02 np0005539564 nova_compute[226295]: 2025-11-29 08:15:02.150 226310 INFO nova.compute.resource_tracker [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Updating resource usage from migration eb7b9f1a-e015-45b9-993b-21e134c7d768#033[00m
Nov 29 03:15:02 np0005539564 nova_compute[226295]: 2025-11-29 08:15:02.214 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:02 np0005539564 nova_compute[226295]: 2025-11-29 08:15:02.266 226310 DEBUG oslo_concurrency.processutils [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:02.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:02 np0005539564 nova_compute[226295]: 2025-11-29 08:15:02.523 226310 DEBUG nova.network.neutron [req-ab2f6fc8-0055-48af-9e59-e3153f2be634 req-c829a8ba-7dda-447f-9624-3aab19b5a6fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Updated VIF entry in instance network info cache for port cee20a9b-a551-4e60-9b01-151c48dc45fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:15:02 np0005539564 nova_compute[226295]: 2025-11-29 08:15:02.524 226310 DEBUG nova.network.neutron [req-ab2f6fc8-0055-48af-9e59-e3153f2be634 req-c829a8ba-7dda-447f-9624-3aab19b5a6fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Updating instance_info_cache with network_info: [{"id": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "address": "fa:16:3e:a2:7d:a4", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcee20a9b-a5", "ovs_interfaceid": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:15:02 np0005539564 nova_compute[226295]: 2025-11-29 08:15:02.541 226310 DEBUG oslo_concurrency.lockutils [req-ab2f6fc8-0055-48af-9e59-e3153f2be634 req-c829a8ba-7dda-447f-9624-3aab19b5a6fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-bba0127d-6332-44fa-8fc1-c3d7321260fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:15:02 np0005539564 nova_compute[226295]: 2025-11-29 08:15:02.541 226310 DEBUG nova.compute.manager [req-ab2f6fc8-0055-48af-9e59-e3153f2be634 req-c829a8ba-7dda-447f-9624-3aab19b5a6fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Received event network-changed-a221d286-cb0f-41fd-9997-b1687a875e0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:02 np0005539564 nova_compute[226295]: 2025-11-29 08:15:02.542 226310 DEBUG nova.compute.manager [req-ab2f6fc8-0055-48af-9e59-e3153f2be634 req-c829a8ba-7dda-447f-9624-3aab19b5a6fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Refreshing instance network info cache due to event network-changed-a221d286-cb0f-41fd-9997-b1687a875e0e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:15:02 np0005539564 nova_compute[226295]: 2025-11-29 08:15:02.542 226310 DEBUG oslo_concurrency.lockutils [req-ab2f6fc8-0055-48af-9e59-e3153f2be634 req-c829a8ba-7dda-447f-9624-3aab19b5a6fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-6033af89-d6d3-45c5-bf88-b2f17800f12e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:15:02 np0005539564 nova_compute[226295]: 2025-11-29 08:15:02.542 226310 DEBUG oslo_concurrency.lockutils [req-ab2f6fc8-0055-48af-9e59-e3153f2be634 req-c829a8ba-7dda-447f-9624-3aab19b5a6fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-6033af89-d6d3-45c5-bf88-b2f17800f12e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:15:02 np0005539564 nova_compute[226295]: 2025-11-29 08:15:02.543 226310 DEBUG nova.network.neutron [req-ab2f6fc8-0055-48af-9e59-e3153f2be634 req-c829a8ba-7dda-447f-9624-3aab19b5a6fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Refreshing network info cache for port a221d286-cb0f-41fd-9997-b1687a875e0e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:15:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:15:02 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/404797161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:15:02 np0005539564 nova_compute[226295]: 2025-11-29 08:15:02.770 226310 DEBUG oslo_concurrency.processutils [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:02 np0005539564 nova_compute[226295]: 2025-11-29 08:15:02.777 226310 DEBUG nova.compute.provider_tree [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:15:02 np0005539564 nova_compute[226295]: 2025-11-29 08:15:02.799 226310 DEBUG nova.scheduler.client.report [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:15:02 np0005539564 nova_compute[226295]: 2025-11-29 08:15:02.850 226310 DEBUG oslo_concurrency.lockutils [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:02 np0005539564 nova_compute[226295]: 2025-11-29 08:15:02.851 226310 INFO nova.compute.manager [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Migrating#033[00m
Nov 29 03:15:02 np0005539564 nova_compute[226295]: 2025-11-29 08:15:02.904 226310 DEBUG oslo_concurrency.lockutils [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "refresh_cache-bba0127d-6332-44fa-8fc1-c3d7321260fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:15:02 np0005539564 nova_compute[226295]: 2025-11-29 08:15:02.905 226310 DEBUG oslo_concurrency.lockutils [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquired lock "refresh_cache-bba0127d-6332-44fa-8fc1-c3d7321260fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:15:02 np0005539564 nova_compute[226295]: 2025-11-29 08:15:02.906 226310 DEBUG nova.network.neutron [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:15:03 np0005539564 nova_compute[226295]: 2025-11-29 08:15:03.322 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:03 np0005539564 podman[270036]: 2025-11-29 08:15:03.50494854 +0000 UTC m=+0.058448727 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:15:03 np0005539564 podman[270034]: 2025-11-29 08:15:03.528680641 +0000 UTC m=+0.089312130 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:15:03 np0005539564 podman[270035]: 2025-11-29 08:15:03.535434573 +0000 UTC m=+0.085091696 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:15:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:03.726 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:03.728 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:03.729 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:04.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:04.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:05 np0005539564 nova_compute[226295]: 2025-11-29 08:15:05.516 226310 DEBUG nova.network.neutron [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Updating instance_info_cache with network_info: [{"id": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "address": "fa:16:3e:a2:7d:a4", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcee20a9b-a5", "ovs_interfaceid": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:15:05 np0005539564 nova_compute[226295]: 2025-11-29 08:15:05.527 226310 DEBUG nova.network.neutron [req-ab2f6fc8-0055-48af-9e59-e3153f2be634 req-c829a8ba-7dda-447f-9624-3aab19b5a6fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Updated VIF entry in instance network info cache for port a221d286-cb0f-41fd-9997-b1687a875e0e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:15:05 np0005539564 nova_compute[226295]: 2025-11-29 08:15:05.528 226310 DEBUG nova.network.neutron [req-ab2f6fc8-0055-48af-9e59-e3153f2be634 req-c829a8ba-7dda-447f-9624-3aab19b5a6fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Updating instance_info_cache with network_info: [{"id": "a221d286-cb0f-41fd-9997-b1687a875e0e", "address": "fa:16:3e:5e:cf:8d", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa221d286-cb", "ovs_interfaceid": "a221d286-cb0f-41fd-9997-b1687a875e0e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:15:05 np0005539564 nova_compute[226295]: 2025-11-29 08:15:05.544 226310 DEBUG oslo_concurrency.lockutils [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Releasing lock "refresh_cache-bba0127d-6332-44fa-8fc1-c3d7321260fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:15:05 np0005539564 nova_compute[226295]: 2025-11-29 08:15:05.556 226310 DEBUG oslo_concurrency.lockutils [req-ab2f6fc8-0055-48af-9e59-e3153f2be634 req-c829a8ba-7dda-447f-9624-3aab19b5a6fe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-6033af89-d6d3-45c5-bf88-b2f17800f12e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:15:05 np0005539564 nova_compute[226295]: 2025-11-29 08:15:05.684 226310 DEBUG nova.virt.libvirt.driver [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Nov 29 03:15:05 np0005539564 nova_compute[226295]: 2025-11-29 08:15:05.691 226310 DEBUG nova.virt.libvirt.driver [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:15:05 np0005539564 nova_compute[226295]: 2025-11-29 08:15:05.913 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:06.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:06.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:07 np0005539564 nova_compute[226295]: 2025-11-29 08:15:07.215 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:08.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:08 np0005539564 nova_compute[226295]: 2025-11-29 08:15:08.326 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:08.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:08 np0005539564 ovn_controller[130591]: 2025-11-29T08:15:08Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a2:7d:a4 10.100.0.4
Nov 29 03:15:08 np0005539564 ovn_controller[130591]: 2025-11-29T08:15:08Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a2:7d:a4 10.100.0.4
Nov 29 03:15:09 np0005539564 ovn_controller[130591]: 2025-11-29T08:15:09Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5e:cf:8d 10.100.0.10
Nov 29 03:15:09 np0005539564 ovn_controller[130591]: 2025-11-29T08:15:09Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5e:cf:8d 10.100.0.10
Nov 29 03:15:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:10.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:10.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:12.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:12 np0005539564 nova_compute[226295]: 2025-11-29 08:15:12.217 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:12.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:13 np0005539564 nova_compute[226295]: 2025-11-29 08:15:13.357 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:13 np0005539564 nova_compute[226295]: 2025-11-29 08:15:13.842 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:14.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:14.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e291 e291: 3 total, 3 up, 3 in
Nov 29 03:15:15 np0005539564 nova_compute[226295]: 2025-11-29 08:15:15.774 226310 DEBUG nova.virt.libvirt.driver [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:15:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:16.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:16.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:16 np0005539564 nova_compute[226295]: 2025-11-29 08:15:16.982 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:17 np0005539564 nova_compute[226295]: 2025-11-29 08:15:17.221 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:18.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:18 np0005539564 nova_compute[226295]: 2025-11-29 08:15:18.359 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:18 np0005539564 kernel: tapcee20a9b-a5 (unregistering): left promiscuous mode
Nov 29 03:15:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:18.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:18 np0005539564 NetworkManager[48997]: <info>  [1764404118.3961] device (tapcee20a9b-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:15:18 np0005539564 nova_compute[226295]: 2025-11-29 08:15:18.409 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:18 np0005539564 ovn_controller[130591]: 2025-11-29T08:15:18Z|00419|binding|INFO|Releasing lport cee20a9b-a551-4e60-9b01-151c48dc45fe from this chassis (sb_readonly=0)
Nov 29 03:15:18 np0005539564 ovn_controller[130591]: 2025-11-29T08:15:18Z|00420|binding|INFO|Setting lport cee20a9b-a551-4e60-9b01-151c48dc45fe down in Southbound
Nov 29 03:15:18 np0005539564 ovn_controller[130591]: 2025-11-29T08:15:18Z|00421|binding|INFO|Removing iface tapcee20a9b-a5 ovn-installed in OVS
Nov 29 03:15:18 np0005539564 nova_compute[226295]: 2025-11-29 08:15:18.413 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:18.422 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:7d:a4 10.100.0.4'], port_security=['fa:16:3e:a2:7d:a4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'bba0127d-6332-44fa-8fc1-c3d7321260fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '250671461f27498d9f6b4476c7b69533', 'neutron:revision_number': '4', 'neutron:security_group_ids': '80090c82-90f6-4c43-a017-5be03974adfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a03133c-20d7-4b83-a65b-3860eafc9833, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=cee20a9b-a551-4e60-9b01-151c48dc45fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:15:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:18.424 139780 INFO neutron.agent.ovn.metadata.agent [-] Port cee20a9b-a551-4e60-9b01-151c48dc45fe in datapath 10a9b8d1-2de6-4e47-8e44-16b661da8624 unbound from our chassis#033[00m
Nov 29 03:15:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:18.426 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 10a9b8d1-2de6-4e47-8e44-16b661da8624, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:15:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:18.428 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7f0e476e-225f-4ca3-9d1b-32a882452ff5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:18.429 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 namespace which is not needed anymore#033[00m
Nov 29 03:15:18 np0005539564 nova_compute[226295]: 2025-11-29 08:15:18.448 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:18 np0005539564 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000074.scope: Deactivated successfully.
Nov 29 03:15:18 np0005539564 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000074.scope: Consumed 14.655s CPU time.
Nov 29 03:15:18 np0005539564 systemd-machined[190128]: Machine qemu-49-instance-00000074 terminated.
Nov 29 03:15:18 np0005539564 nova_compute[226295]: 2025-11-29 08:15:18.657 226310 DEBUG nova.compute.manager [req-7c91e95f-1e58-4d38-966e-d5d1336da5d4 req-06ca93a2-db05-4b7d-9455-53e7945d22eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Received event network-vif-unplugged-cee20a9b-a551-4e60-9b01-151c48dc45fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:18 np0005539564 nova_compute[226295]: 2025-11-29 08:15:18.657 226310 DEBUG oslo_concurrency.lockutils [req-7c91e95f-1e58-4d38-966e-d5d1336da5d4 req-06ca93a2-db05-4b7d-9455-53e7945d22eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:18 np0005539564 nova_compute[226295]: 2025-11-29 08:15:18.658 226310 DEBUG oslo_concurrency.lockutils [req-7c91e95f-1e58-4d38-966e-d5d1336da5d4 req-06ca93a2-db05-4b7d-9455-53e7945d22eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:18 np0005539564 nova_compute[226295]: 2025-11-29 08:15:18.658 226310 DEBUG oslo_concurrency.lockutils [req-7c91e95f-1e58-4d38-966e-d5d1336da5d4 req-06ca93a2-db05-4b7d-9455-53e7945d22eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:18 np0005539564 nova_compute[226295]: 2025-11-29 08:15:18.659 226310 DEBUG nova.compute.manager [req-7c91e95f-1e58-4d38-966e-d5d1336da5d4 req-06ca93a2-db05-4b7d-9455-53e7945d22eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] No waiting events found dispatching network-vif-unplugged-cee20a9b-a551-4e60-9b01-151c48dc45fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:18 np0005539564 nova_compute[226295]: 2025-11-29 08:15:18.659 226310 WARNING nova.compute.manager [req-7c91e95f-1e58-4d38-966e-d5d1336da5d4 req-06ca93a2-db05-4b7d-9455-53e7945d22eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Received unexpected event network-vif-unplugged-cee20a9b-a551-4e60-9b01-151c48dc45fe for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:15:18 np0005539564 nova_compute[226295]: 2025-11-29 08:15:18.790 226310 INFO nova.virt.libvirt.driver [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Instance shutdown successfully after 13 seconds.#033[00m
Nov 29 03:15:18 np0005539564 nova_compute[226295]: 2025-11-29 08:15:18.797 226310 INFO nova.virt.libvirt.driver [-] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Instance destroyed successfully.#033[00m
Nov 29 03:15:18 np0005539564 nova_compute[226295]: 2025-11-29 08:15:18.798 226310 DEBUG nova.virt.libvirt.vif [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:14:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-699492332',display_name='tempest-ServerActionsTestOtherA-server-699492332',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-699492332',id=116,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCtkVNAdyfTZxevPypDM4BSYE1hEin7kURjxMHJymyPY9Csd3IhKS5yulx5aFvPHDU4xFm7qnx5crpT14GkSPm/EI9TigbJvl3D9u9RL82NR4qlOqRKDsJAXZt9pXEPkRw==',key_name='tempest-keypair-809017258',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:14:55Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='250671461f27498d9f6b4476c7b69533',ramdisk_id='',reservation_id='r-8ajfbgd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-552273978',owner_user_name='tempest-ServerActionsTestOtherA-552273978-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:15:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='58625e4c2b5d43a1abbab05b98853a65',uuid=bba0127d-6332-44fa-8fc1-c3d7321260fa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "address": "fa:16:3e:a2:7d:a4", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-656101484-network", "vif_mac": "fa:16:3e:a2:7d:a4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcee20a9b-a5", "ovs_interfaceid": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:15:18 np0005539564 nova_compute[226295]: 2025-11-29 08:15:18.798 226310 DEBUG nova.network.os_vif_util [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converting VIF {"id": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "address": "fa:16:3e:a2:7d:a4", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-656101484-network", "vif_mac": "fa:16:3e:a2:7d:a4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcee20a9b-a5", "ovs_interfaceid": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:15:18 np0005539564 nova_compute[226295]: 2025-11-29 08:15:18.799 226310 DEBUG nova.network.os_vif_util [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a2:7d:a4,bridge_name='br-int',has_traffic_filtering=True,id=cee20a9b-a551-4e60-9b01-151c48dc45fe,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcee20a9b-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:15:18 np0005539564 nova_compute[226295]: 2025-11-29 08:15:18.799 226310 DEBUG os_vif [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a2:7d:a4,bridge_name='br-int',has_traffic_filtering=True,id=cee20a9b-a551-4e60-9b01-151c48dc45fe,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcee20a9b-a5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:15:18 np0005539564 nova_compute[226295]: 2025-11-29 08:15:18.801 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:18 np0005539564 nova_compute[226295]: 2025-11-29 08:15:18.802 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcee20a9b-a5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:18 np0005539564 nova_compute[226295]: 2025-11-29 08:15:18.803 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:18 np0005539564 nova_compute[226295]: 2025-11-29 08:15:18.805 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:18 np0005539564 nova_compute[226295]: 2025-11-29 08:15:18.809 226310 INFO os_vif [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a2:7d:a4,bridge_name='br-int',has_traffic_filtering=True,id=cee20a9b-a551-4e60-9b01-151c48dc45fe,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcee20a9b-a5')#033[00m
Nov 29 03:15:18 np0005539564 nova_compute[226295]: 2025-11-29 08:15:18.815 226310 DEBUG nova.virt.libvirt.driver [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:15:18 np0005539564 nova_compute[226295]: 2025-11-29 08:15:18.816 226310 DEBUG nova.virt.libvirt.driver [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:15:19 np0005539564 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[269827]: [NOTICE]   (269850) : haproxy version is 2.8.14-c23fe91
Nov 29 03:15:19 np0005539564 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[269827]: [NOTICE]   (269850) : path to executable is /usr/sbin/haproxy
Nov 29 03:15:19 np0005539564 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[269827]: [WARNING]  (269850) : Exiting Master process...
Nov 29 03:15:19 np0005539564 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[269827]: [WARNING]  (269850) : Exiting Master process...
Nov 29 03:15:19 np0005539564 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[269827]: [ALERT]    (269850) : Current worker (269852) exited with code 143 (Terminated)
Nov 29 03:15:19 np0005539564 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[269827]: [WARNING]  (269850) : All workers exited. Exiting... (0)
Nov 29 03:15:19 np0005539564 systemd[1]: libpod-29610eb514d3db951b4816c009299e4e42032c18342740efdc39ae9f4276fb6c.scope: Deactivated successfully.
Nov 29 03:15:19 np0005539564 podman[270119]: 2025-11-29 08:15:19.021216586 +0000 UTC m=+0.465090266 container died 29610eb514d3db951b4816c009299e4e42032c18342740efdc39ae9f4276fb6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:15:19 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-29610eb514d3db951b4816c009299e4e42032c18342740efdc39ae9f4276fb6c-userdata-shm.mount: Deactivated successfully.
Nov 29 03:15:19 np0005539564 systemd[1]: var-lib-containers-storage-overlay-5df43302f2b868452dd8c04de6b6123ea671a0685bb5697c6fe00dd57a62bd11-merged.mount: Deactivated successfully.
Nov 29 03:15:19 np0005539564 podman[270119]: 2025-11-29 08:15:19.10181209 +0000 UTC m=+0.545685770 container cleanup 29610eb514d3db951b4816c009299e4e42032c18342740efdc39ae9f4276fb6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:15:19 np0005539564 systemd[1]: libpod-conmon-29610eb514d3db951b4816c009299e4e42032c18342740efdc39ae9f4276fb6c.scope: Deactivated successfully.
Nov 29 03:15:19 np0005539564 podman[270160]: 2025-11-29 08:15:19.28680721 +0000 UTC m=+0.146155183 container remove 29610eb514d3db951b4816c009299e4e42032c18342740efdc39ae9f4276fb6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:15:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:19.296 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[65a73f62-2562-44e1-a682-2b951d7d8c73]: (4, ('Sat Nov 29 08:15:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 (29610eb514d3db951b4816c009299e4e42032c18342740efdc39ae9f4276fb6c)\n29610eb514d3db951b4816c009299e4e42032c18342740efdc39ae9f4276fb6c\nSat Nov 29 08:15:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 (29610eb514d3db951b4816c009299e4e42032c18342740efdc39ae9f4276fb6c)\n29610eb514d3db951b4816c009299e4e42032c18342740efdc39ae9f4276fb6c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:19.298 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[49bd8ab4-6c5e-435a-8a9f-6f4fe357de68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:19.299 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10a9b8d1-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:19 np0005539564 nova_compute[226295]: 2025-11-29 08:15:19.301 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:19 np0005539564 kernel: tap10a9b8d1-20: left promiscuous mode
Nov 29 03:15:19 np0005539564 nova_compute[226295]: 2025-11-29 08:15:19.330 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:19.334 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e05d598b-dd20-4e00-8518-5722786ec5c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:19.354 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4ca6167c-5be0-4dd2-80f7-2fa17d1bf1c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:19.355 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[dad72361-42e0-492a-98ab-21aaceff42e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:19.382 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c1a7e0c6-44bb-4307-9231-8282c715eb69]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706921, 'reachable_time': 18584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270178, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:19.387 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:15:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:19.387 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[66edeef7-dc91-40fc-a174-86c252246fbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:19 np0005539564 systemd[1]: run-netns-ovnmeta\x2d10a9b8d1\x2d2de6\x2d4e47\x2d8e44\x2d16b661da8624.mount: Deactivated successfully.
Nov 29 03:15:19 np0005539564 nova_compute[226295]: 2025-11-29 08:15:19.549 226310 DEBUG nova.network.neutron [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Port cee20a9b-a551-4e60-9b01-151c48dc45fe binding to destination host compute-1.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171#033[00m
Nov 29 03:15:19 np0005539564 nova_compute[226295]: 2025-11-29 08:15:19.655 226310 DEBUG oslo_concurrency.lockutils [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:19 np0005539564 nova_compute[226295]: 2025-11-29 08:15:19.656 226310 DEBUG oslo_concurrency.lockutils [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:19 np0005539564 nova_compute[226295]: 2025-11-29 08:15:19.656 226310 DEBUG oslo_concurrency.lockutils [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:19 np0005539564 nova_compute[226295]: 2025-11-29 08:15:19.903 226310 DEBUG oslo_concurrency.lockutils [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "refresh_cache-bba0127d-6332-44fa-8fc1-c3d7321260fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:15:19 np0005539564 nova_compute[226295]: 2025-11-29 08:15:19.904 226310 DEBUG oslo_concurrency.lockutils [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquired lock "refresh_cache-bba0127d-6332-44fa-8fc1-c3d7321260fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:15:19 np0005539564 nova_compute[226295]: 2025-11-29 08:15:19.904 226310 DEBUG nova.network.neutron [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:15:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:20.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:20.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:20 np0005539564 nova_compute[226295]: 2025-11-29 08:15:20.796 226310 DEBUG nova.compute.manager [req-88396ad1-ab56-4a4d-bbee-5c1936f9cb88 req-1773b862-bcce-488d-9943-9ef28978c6b5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Received event network-vif-plugged-cee20a9b-a551-4e60-9b01-151c48dc45fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:20 np0005539564 nova_compute[226295]: 2025-11-29 08:15:20.797 226310 DEBUG oslo_concurrency.lockutils [req-88396ad1-ab56-4a4d-bbee-5c1936f9cb88 req-1773b862-bcce-488d-9943-9ef28978c6b5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:20 np0005539564 nova_compute[226295]: 2025-11-29 08:15:20.798 226310 DEBUG oslo_concurrency.lockutils [req-88396ad1-ab56-4a4d-bbee-5c1936f9cb88 req-1773b862-bcce-488d-9943-9ef28978c6b5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:20 np0005539564 nova_compute[226295]: 2025-11-29 08:15:20.798 226310 DEBUG oslo_concurrency.lockutils [req-88396ad1-ab56-4a4d-bbee-5c1936f9cb88 req-1773b862-bcce-488d-9943-9ef28978c6b5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:20 np0005539564 nova_compute[226295]: 2025-11-29 08:15:20.799 226310 DEBUG nova.compute.manager [req-88396ad1-ab56-4a4d-bbee-5c1936f9cb88 req-1773b862-bcce-488d-9943-9ef28978c6b5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] No waiting events found dispatching network-vif-plugged-cee20a9b-a551-4e60-9b01-151c48dc45fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:20 np0005539564 nova_compute[226295]: 2025-11-29 08:15:20.799 226310 WARNING nova.compute.manager [req-88396ad1-ab56-4a4d-bbee-5c1936f9cb88 req-1773b862-bcce-488d-9943-9ef28978c6b5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Received unexpected event network-vif-plugged-cee20a9b-a551-4e60-9b01-151c48dc45fe for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:15:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:21 np0005539564 nova_compute[226295]: 2025-11-29 08:15:21.285 226310 DEBUG nova.network.neutron [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Updating instance_info_cache with network_info: [{"id": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "address": "fa:16:3e:a2:7d:a4", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcee20a9b-a5", "ovs_interfaceid": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:15:21 np0005539564 nova_compute[226295]: 2025-11-29 08:15:21.314 226310 DEBUG oslo_concurrency.lockutils [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Releasing lock "refresh_cache-bba0127d-6332-44fa-8fc1-c3d7321260fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:15:21 np0005539564 nova_compute[226295]: 2025-11-29 08:15:21.432 226310 DEBUG os_brick.utils [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:15:21 np0005539564 nova_compute[226295]: 2025-11-29 08:15:21.444 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:21 np0005539564 nova_compute[226295]: 2025-11-29 08:15:21.464 231810 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:21 np0005539564 nova_compute[226295]: 2025-11-29 08:15:21.464 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[dc085c9f-e370-416e-ac30-f86a6cf866b5]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:21 np0005539564 nova_compute[226295]: 2025-11-29 08:15:21.466 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:21 np0005539564 nova_compute[226295]: 2025-11-29 08:15:21.479 231810 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:21 np0005539564 nova_compute[226295]: 2025-11-29 08:15:21.480 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[2d5840dd-960a-4e6c-852c-4404c20ce430]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d3b4384eec44', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:21 np0005539564 nova_compute[226295]: 2025-11-29 08:15:21.482 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:21 np0005539564 nova_compute[226295]: 2025-11-29 08:15:21.495 231810 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:21 np0005539564 nova_compute[226295]: 2025-11-29 08:15:21.495 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[b2a2ecb7-b041-4f6c-b4db-04fff510e612]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:21 np0005539564 nova_compute[226295]: 2025-11-29 08:15:21.497 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[d486995b-bff9-4580-8e02-3353b970abe5]: (4, '2e858761-3292-4a17-b38f-a169c3064289') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:21 np0005539564 nova_compute[226295]: 2025-11-29 08:15:21.497 226310 DEBUG oslo_concurrency.processutils [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:21 np0005539564 nova_compute[226295]: 2025-11-29 08:15:21.546 226310 DEBUG oslo_concurrency.processutils [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "nvme version" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:21 np0005539564 nova_compute[226295]: 2025-11-29 08:15:21.550 226310 DEBUG os_brick.initiator.connectors.lightos [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:15:21 np0005539564 nova_compute[226295]: 2025-11-29 08:15:21.551 226310 DEBUG os_brick.initiator.connectors.lightos [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:15:21 np0005539564 nova_compute[226295]: 2025-11-29 08:15:21.551 226310 DEBUG os_brick.initiator.connectors.lightos [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:15:21 np0005539564 nova_compute[226295]: 2025-11-29 08:15:21.552 226310 DEBUG os_brick.utils [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] <== get_connector_properties: return (118ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d3b4384eec44', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2e858761-3292-4a17-b38f-a169c3064289', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:15:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:22.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.224 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:15:22 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3990925621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:15:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:22.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.596 226310 DEBUG nova.virt.libvirt.driver [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.599 226310 DEBUG nova.virt.libvirt.driver [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.600 226310 INFO nova.virt.libvirt.driver [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Creating image(s)#033[00m
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.601 226310 DEBUG nova.virt.libvirt.driver [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.601 226310 DEBUG nova.virt.libvirt.driver [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Ensure instance console log exists: /var/lib/nova/instances/bba0127d-6332-44fa-8fc1-c3d7321260fa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.602 226310 DEBUG oslo_concurrency.lockutils [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.603 226310 DEBUG oslo_concurrency.lockutils [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.603 226310 DEBUG oslo_concurrency.lockutils [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.608 226310 DEBUG nova.virt.libvirt.driver [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Start _get_guest_xml network_info=[{"id": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "address": "fa:16:3e:a2:7d:a4", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-656101484-network", "vif_mac": "fa:16:3e:a2:7d:a4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcee20a9b-a5", "ovs_interfaceid": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-ad74cad8-db25-41c6-a50a-ce4cad08c1ea', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'ad74cad8-db25-41c6-a50a-ce4cad08c1ea', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': 'bba0127d-6332-44fa-8fc1-c3d7321260fa', 'attached_at': '2025-11-29T08:15:22.000000', 'detached_at': '', 'volume_id': 'ad74cad8-db25-41c6-a50a-ce4cad08c1ea', 'serial': 'ad74cad8-db25-41c6-a50a-ce4cad08c1ea'}, 'guest_format': None, 'delete_on_termination': True, 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': '7fe3516b-e843-4c3e-9990-524cf03c1787', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.616 226310 WARNING nova.virt.libvirt.driver [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.623 226310 DEBUG nova.virt.libvirt.host [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.624 226310 DEBUG nova.virt.libvirt.host [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.628 226310 DEBUG nova.virt.libvirt.host [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.629 226310 DEBUG nova.virt.libvirt.host [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.632 226310 DEBUG nova.virt.libvirt.driver [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.632 226310 DEBUG nova.virt.hardware [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a3833334-6e3e-4b1c-bf74-bdd1055a9e9b',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.633 226310 DEBUG nova.virt.hardware [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.634 226310 DEBUG nova.virt.hardware [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.635 226310 DEBUG nova.virt.hardware [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.635 226310 DEBUG nova.virt.hardware [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.636 226310 DEBUG nova.virt.hardware [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.636 226310 DEBUG nova.virt.hardware [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.637 226310 DEBUG nova.virt.hardware [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.637 226310 DEBUG nova.virt.hardware [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.638 226310 DEBUG nova.virt.hardware [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.638 226310 DEBUG nova.virt.hardware [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.639 226310 DEBUG nova.objects.instance [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'vcpu_model' on Instance uuid bba0127d-6332-44fa-8fc1-c3d7321260fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:22 np0005539564 nova_compute[226295]: 2025-11-29 08:15:22.773 226310 DEBUG oslo_concurrency.processutils [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:15:23 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1172514175' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:15:23 np0005539564 nova_compute[226295]: 2025-11-29 08:15:23.234 226310 DEBUG oslo_concurrency.processutils [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:23 np0005539564 nova_compute[226295]: 2025-11-29 08:15:23.265 226310 DEBUG nova.virt.libvirt.vif [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:14:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-699492332',display_name='tempest-ServerActionsTestOtherA-server-699492332',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-699492332',id=116,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCtkVNAdyfTZxevPypDM4BSYE1hEin7kURjxMHJymyPY9Csd3IhKS5yulx5aFvPHDU4xFm7qnx5crpT14GkSPm/EI9TigbJvl3D9u9RL82NR4qlOqRKDsJAXZt9pXEPkRw==',key_name='tempest-keypair-809017258',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:14:55Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='250671461f27498d9f6b4476c7b69533',ramdisk_id='',reservation_id='r-8ajfbgd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-552273978',owner_user_name='tempest-ServerActionsTestOtherA-552273978-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:15:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='58625e4c2b5d43a1abbab05b98853a65',uuid=bba0127d-6332-44fa-8fc1-c3d7321260fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "address": "fa:16:3e:a2:7d:a4", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-656101484-network", "vif_mac": "fa:16:3e:a2:7d:a4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcee20a9b-a5", "ovs_interfaceid": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:15:23 np0005539564 nova_compute[226295]: 2025-11-29 08:15:23.265 226310 DEBUG nova.network.os_vif_util [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converting VIF {"id": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "address": "fa:16:3e:a2:7d:a4", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-656101484-network", "vif_mac": "fa:16:3e:a2:7d:a4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcee20a9b-a5", "ovs_interfaceid": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:15:23 np0005539564 nova_compute[226295]: 2025-11-29 08:15:23.266 226310 DEBUG nova.network.os_vif_util [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:7d:a4,bridge_name='br-int',has_traffic_filtering=True,id=cee20a9b-a551-4e60-9b01-151c48dc45fe,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcee20a9b-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:15:23 np0005539564 nova_compute[226295]: 2025-11-29 08:15:23.269 226310 DEBUG nova.virt.libvirt.driver [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:15:23 np0005539564 nova_compute[226295]:  <uuid>bba0127d-6332-44fa-8fc1-c3d7321260fa</uuid>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:  <name>instance-00000074</name>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:  <memory>196608</memory>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerActionsTestOtherA-server-699492332</nova:name>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:15:22</nova:creationTime>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.micro">
Nov 29 03:15:23 np0005539564 nova_compute[226295]:        <nova:memory>192</nova:memory>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:        <nova:user uuid="58625e4c2b5d43a1abbab05b98853a65">tempest-ServerActionsTestOtherA-552273978-project-member</nova:user>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:        <nova:project uuid="250671461f27498d9f6b4476c7b69533">tempest-ServerActionsTestOtherA-552273978</nova:project>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:        <nova:port uuid="cee20a9b-a551-4e60-9b01-151c48dc45fe">
Nov 29 03:15:23 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <entry name="serial">bba0127d-6332-44fa-8fc1-c3d7321260fa</entry>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <entry name="uuid">bba0127d-6332-44fa-8fc1-c3d7321260fa</entry>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/bba0127d-6332-44fa-8fc1-c3d7321260fa_disk.config">
Nov 29 03:15:23 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:15:23 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="volumes/volume-ad74cad8-db25-41c6-a50a-ce4cad08c1ea">
Nov 29 03:15:23 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:15:23 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <serial>ad74cad8-db25-41c6-a50a-ce4cad08c1ea</serial>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:a2:7d:a4"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <target dev="tapcee20a9b-a5"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/bba0127d-6332-44fa-8fc1-c3d7321260fa/console.log" append="off"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:15:23 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:15:23 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:15:23 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:15:23 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:15:23 np0005539564 nova_compute[226295]: 2025-11-29 08:15:23.271 226310 DEBUG nova.virt.libvirt.vif [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:14:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-699492332',display_name='tempest-ServerActionsTestOtherA-server-699492332',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-699492332',id=116,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCtkVNAdyfTZxevPypDM4BSYE1hEin7kURjxMHJymyPY9Csd3IhKS5yulx5aFvPHDU4xFm7qnx5crpT14GkSPm/EI9TigbJvl3D9u9RL82NR4qlOqRKDsJAXZt9pXEPkRw==',key_name='tempest-keypair-809017258',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:14:55Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='250671461f27498d9f6b4476c7b69533',ramdisk_id='',reservation_id='r-8ajfbgd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-552273978',owner_user_name='tempest-ServerActionsTestOtherA-552273978-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:15:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='58625e4c2b5d43a1abbab05b98853a65',uuid=bba0127d-6332-44fa-8fc1-c3d7321260fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "address": "fa:16:3e:a2:7d:a4", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-656101484-network", "vif_mac": "fa:16:3e:a2:7d:a4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcee20a9b-a5", "ovs_interfaceid": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:15:23 np0005539564 nova_compute[226295]: 2025-11-29 08:15:23.271 226310 DEBUG nova.network.os_vif_util [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converting VIF {"id": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "address": "fa:16:3e:a2:7d:a4", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-656101484-network", "vif_mac": "fa:16:3e:a2:7d:a4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcee20a9b-a5", "ovs_interfaceid": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:15:23 np0005539564 nova_compute[226295]: 2025-11-29 08:15:23.272 226310 DEBUG nova.network.os_vif_util [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:7d:a4,bridge_name='br-int',has_traffic_filtering=True,id=cee20a9b-a551-4e60-9b01-151c48dc45fe,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcee20a9b-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:15:23 np0005539564 nova_compute[226295]: 2025-11-29 08:15:23.272 226310 DEBUG os_vif [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:7d:a4,bridge_name='br-int',has_traffic_filtering=True,id=cee20a9b-a551-4e60-9b01-151c48dc45fe,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcee20a9b-a5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:15:23 np0005539564 nova_compute[226295]: 2025-11-29 08:15:23.273 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:23 np0005539564 nova_compute[226295]: 2025-11-29 08:15:23.274 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:23 np0005539564 nova_compute[226295]: 2025-11-29 08:15:23.274 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:15:23 np0005539564 nova_compute[226295]: 2025-11-29 08:15:23.277 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:23 np0005539564 nova_compute[226295]: 2025-11-29 08:15:23.278 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcee20a9b-a5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:23 np0005539564 nova_compute[226295]: 2025-11-29 08:15:23.278 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcee20a9b-a5, col_values=(('external_ids', {'iface-id': 'cee20a9b-a551-4e60-9b01-151c48dc45fe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a2:7d:a4', 'vm-uuid': 'bba0127d-6332-44fa-8fc1-c3d7321260fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:23 np0005539564 nova_compute[226295]: 2025-11-29 08:15:23.281 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:23 np0005539564 NetworkManager[48997]: <info>  [1764404123.2819] manager: (tapcee20a9b-a5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/214)
Nov 29 03:15:23 np0005539564 nova_compute[226295]: 2025-11-29 08:15:23.283 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:15:23 np0005539564 nova_compute[226295]: 2025-11-29 08:15:23.286 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:23 np0005539564 nova_compute[226295]: 2025-11-29 08:15:23.287 226310 INFO os_vif [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:7d:a4,bridge_name='br-int',has_traffic_filtering=True,id=cee20a9b-a551-4e60-9b01-151c48dc45fe,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcee20a9b-a5')#033[00m
Nov 29 03:15:23 np0005539564 nova_compute[226295]: 2025-11-29 08:15:23.522 226310 DEBUG nova.virt.libvirt.driver [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:15:23 np0005539564 nova_compute[226295]: 2025-11-29 08:15:23.523 226310 DEBUG nova.virt.libvirt.driver [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:15:23 np0005539564 nova_compute[226295]: 2025-11-29 08:15:23.523 226310 DEBUG nova.virt.libvirt.driver [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No VIF found with MAC fa:16:3e:a2:7d:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:15:23 np0005539564 nova_compute[226295]: 2025-11-29 08:15:23.524 226310 INFO nova.virt.libvirt.driver [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Using config drive#033[00m
Nov 29 03:15:23 np0005539564 kernel: tapcee20a9b-a5: entered promiscuous mode
Nov 29 03:15:23 np0005539564 NetworkManager[48997]: <info>  [1764404123.6452] manager: (tapcee20a9b-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/215)
Nov 29 03:15:23 np0005539564 nova_compute[226295]: 2025-11-29 08:15:23.645 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:23 np0005539564 ovn_controller[130591]: 2025-11-29T08:15:23Z|00422|binding|INFO|Claiming lport cee20a9b-a551-4e60-9b01-151c48dc45fe for this chassis.
Nov 29 03:15:23 np0005539564 ovn_controller[130591]: 2025-11-29T08:15:23Z|00423|binding|INFO|cee20a9b-a551-4e60-9b01-151c48dc45fe: Claiming fa:16:3e:a2:7d:a4 10.100.0.4
Nov 29 03:15:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:23.656 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:7d:a4 10.100.0.4'], port_security=['fa:16:3e:a2:7d:a4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'bba0127d-6332-44fa-8fc1-c3d7321260fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '250671461f27498d9f6b4476c7b69533', 'neutron:revision_number': '5', 'neutron:security_group_ids': '80090c82-90f6-4c43-a017-5be03974adfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a03133c-20d7-4b83-a65b-3860eafc9833, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=cee20a9b-a551-4e60-9b01-151c48dc45fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:15:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:23.658 139780 INFO neutron.agent.ovn.metadata.agent [-] Port cee20a9b-a551-4e60-9b01-151c48dc45fe in datapath 10a9b8d1-2de6-4e47-8e44-16b661da8624 bound to our chassis#033[00m
Nov 29 03:15:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:23.659 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 10a9b8d1-2de6-4e47-8e44-16b661da8624#033[00m
Nov 29 03:15:23 np0005539564 ovn_controller[130591]: 2025-11-29T08:15:23Z|00424|binding|INFO|Setting lport cee20a9b-a551-4e60-9b01-151c48dc45fe ovn-installed in OVS
Nov 29 03:15:23 np0005539564 ovn_controller[130591]: 2025-11-29T08:15:23Z|00425|binding|INFO|Setting lport cee20a9b-a551-4e60-9b01-151c48dc45fe up in Southbound
Nov 29 03:15:23 np0005539564 nova_compute[226295]: 2025-11-29 08:15:23.668 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:23 np0005539564 nova_compute[226295]: 2025-11-29 08:15:23.671 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:23.675 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[80213900-c380-4b6d-a1db-b276d37e9452]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:23.676 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap10a9b8d1-21 in ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:15:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:23.678 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap10a9b8d1-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:15:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:23.678 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[365813fd-6cbd-41ed-8e80-ee546feb8215]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:23.679 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[41e94a8a-1357-4718-b5bc-e0df21258bf9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:23 np0005539564 systemd-machined[190128]: New machine qemu-51-instance-00000074.
Nov 29 03:15:23 np0005539564 systemd-udevd[270261]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:15:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:23.695 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[7f11ae60-f8c5-4e57-a0de-55becf66543c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:23 np0005539564 systemd[1]: Started Virtual Machine qemu-51-instance-00000074.
Nov 29 03:15:23 np0005539564 NetworkManager[48997]: <info>  [1764404123.7014] device (tapcee20a9b-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:15:23 np0005539564 NetworkManager[48997]: <info>  [1764404123.7024] device (tapcee20a9b-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:15:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:23.712 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2ac475e2-322d-4a2d-ab42-0e72b8e790e9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:23.745 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[38d85ab9-bf8f-4d4a-8693-3111ad54c348]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:23 np0005539564 systemd-udevd[270264]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:15:23 np0005539564 NetworkManager[48997]: <info>  [1764404123.7517] manager: (tap10a9b8d1-20): new Veth device (/org/freedesktop/NetworkManager/Devices/216)
Nov 29 03:15:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:23.750 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[78f79906-d4fc-4924-a44b-9879d98479a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:23.777 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f33857-4bc0-49aa-a6a4-e973c7f95d27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:23.781 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[10951384-21d6-419a-a841-06e27a39d0cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:23 np0005539564 NetworkManager[48997]: <info>  [1764404123.8010] device (tap10a9b8d1-20): carrier: link connected
Nov 29 03:15:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:23.804 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[83c7c101-f7a1-428b-bc7f-89d01001e089]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:23.825 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a1e91756-40ed-4728-aba2-c765a6358acc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap10a9b8d1-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:06:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709848, 'reachable_time': 38060, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270293, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:23.844 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f5eb7522-5092-44d7-9727-26a012ad8be7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe50:676'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 709848, 'tstamp': 709848}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270294, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:23.864 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c4d86f3b-bed4-4eda-8c24-f77ba443da41]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap10a9b8d1-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:06:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709848, 'reachable_time': 38060, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270295, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:23.913 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[22fcebca-681a-475e-808d-9764d3f5add5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:24.003 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f640702e-84a3-4f10-8761-66d9b5e5f8a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:24.005 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10a9b8d1-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:24.006 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:24.007 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10a9b8d1-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:24 np0005539564 nova_compute[226295]: 2025-11-29 08:15:24.010 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:24 np0005539564 NetworkManager[48997]: <info>  [1764404124.0114] manager: (tap10a9b8d1-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Nov 29 03:15:24 np0005539564 kernel: tap10a9b8d1-20: entered promiscuous mode
Nov 29 03:15:24 np0005539564 nova_compute[226295]: 2025-11-29 08:15:24.038 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:24.040 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap10a9b8d1-20, col_values=(('external_ids', {'iface-id': '56facbc8-1a3f-4008-8f77-23eeac832994'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:24 np0005539564 ovn_controller[130591]: 2025-11-29T08:15:24Z|00426|binding|INFO|Releasing lport 56facbc8-1a3f-4008-8f77-23eeac832994 from this chassis (sb_readonly=0)
Nov 29 03:15:24 np0005539564 nova_compute[226295]: 2025-11-29 08:15:24.045 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:24.047 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/10a9b8d1-2de6-4e47-8e44-16b661da8624.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/10a9b8d1-2de6-4e47-8e44-16b661da8624.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:24.048 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ffe39b6c-f16f-43c5-bffa-93fe9111b113]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:24.050 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-10a9b8d1-2de6-4e47-8e44-16b661da8624
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/10a9b8d1-2de6-4e47-8e44-16b661da8624.pid.haproxy
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 10a9b8d1-2de6-4e47-8e44-16b661da8624
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:15:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:24.052 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'env', 'PROCESS_TAG=haproxy-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/10a9b8d1-2de6-4e47-8e44-16b661da8624.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:15:24 np0005539564 nova_compute[226295]: 2025-11-29 08:15:24.075 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:24.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:24 np0005539564 nova_compute[226295]: 2025-11-29 08:15:24.241 226310 DEBUG nova.compute.manager [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:15:24 np0005539564 nova_compute[226295]: 2025-11-29 08:15:24.242 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Removed pending event for bba0127d-6332-44fa-8fc1-c3d7321260fa due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:15:24 np0005539564 nova_compute[226295]: 2025-11-29 08:15:24.242 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404124.2405405, bba0127d-6332-44fa-8fc1-c3d7321260fa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:15:24 np0005539564 nova_compute[226295]: 2025-11-29 08:15:24.242 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:15:24 np0005539564 nova_compute[226295]: 2025-11-29 08:15:24.247 226310 INFO nova.virt.libvirt.driver [-] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Instance running successfully.#033[00m
Nov 29 03:15:24 np0005539564 virtqemud[225880]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:15:24 np0005539564 nova_compute[226295]: 2025-11-29 08:15:24.250 226310 DEBUG nova.virt.libvirt.guest [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 29 03:15:24 np0005539564 nova_compute[226295]: 2025-11-29 08:15:24.251 226310 DEBUG nova.virt.libvirt.driver [None req-80c138c6-1629-455e-9671-4f973f8057f0 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Nov 29 03:15:24 np0005539564 nova_compute[226295]: 2025-11-29 08:15:24.269 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:24 np0005539564 nova_compute[226295]: 2025-11-29 08:15:24.273 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:15:24 np0005539564 nova_compute[226295]: 2025-11-29 08:15:24.295 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:15:24 np0005539564 nova_compute[226295]: 2025-11-29 08:15:24.295 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404124.241518, bba0127d-6332-44fa-8fc1-c3d7321260fa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:15:24 np0005539564 nova_compute[226295]: 2025-11-29 08:15:24.295 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] VM Started (Lifecycle Event)#033[00m
Nov 29 03:15:24 np0005539564 nova_compute[226295]: 2025-11-29 08:15:24.316 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:24 np0005539564 nova_compute[226295]: 2025-11-29 08:15:24.320 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:15:24 np0005539564 nova_compute[226295]: 2025-11-29 08:15:24.349 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:15:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:24.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:24 np0005539564 podman[270369]: 2025-11-29 08:15:24.503632719 +0000 UTC m=+0.055967731 container create fa9c84608cd7468c7471f3b6efcc3f1f136ed177ad07ebc8fd2fc0a9de082776 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:15:24 np0005539564 systemd[1]: Started libpod-conmon-fa9c84608cd7468c7471f3b6efcc3f1f136ed177ad07ebc8fd2fc0a9de082776.scope.
Nov 29 03:15:24 np0005539564 podman[270369]: 2025-11-29 08:15:24.475014817 +0000 UTC m=+0.027349859 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:15:24 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:15:24 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4e3e43573729113cb443588865bdf907f5ec0eb82e9f5f9532ee147c6f717b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:15:24 np0005539564 podman[270369]: 2025-11-29 08:15:24.863986899 +0000 UTC m=+0.416321951 container init fa9c84608cd7468c7471f3b6efcc3f1f136ed177ad07ebc8fd2fc0a9de082776 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:15:24 np0005539564 podman[270369]: 2025-11-29 08:15:24.876453725 +0000 UTC m=+0.428788737 container start fa9c84608cd7468c7471f3b6efcc3f1f136ed177ad07ebc8fd2fc0a9de082776 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:15:24 np0005539564 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[270384]: [NOTICE]   (270388) : New worker (270390) forked
Nov 29 03:15:24 np0005539564 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[270384]: [NOTICE]   (270388) : Loading success.
Nov 29 03:15:25 np0005539564 nova_compute[226295]: 2025-11-29 08:15:25.340 226310 DEBUG nova.compute.manager [req-8026e528-eb6d-4f26-8d6f-63cfeff3b524 req-8c62412d-3d6a-4c69-a3fb-5b9a9730fad1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Received event network-vif-plugged-cee20a9b-a551-4e60-9b01-151c48dc45fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:25 np0005539564 nova_compute[226295]: 2025-11-29 08:15:25.341 226310 DEBUG oslo_concurrency.lockutils [req-8026e528-eb6d-4f26-8d6f-63cfeff3b524 req-8c62412d-3d6a-4c69-a3fb-5b9a9730fad1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:25 np0005539564 nova_compute[226295]: 2025-11-29 08:15:25.341 226310 DEBUG oslo_concurrency.lockutils [req-8026e528-eb6d-4f26-8d6f-63cfeff3b524 req-8c62412d-3d6a-4c69-a3fb-5b9a9730fad1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:25 np0005539564 nova_compute[226295]: 2025-11-29 08:15:25.342 226310 DEBUG oslo_concurrency.lockutils [req-8026e528-eb6d-4f26-8d6f-63cfeff3b524 req-8c62412d-3d6a-4c69-a3fb-5b9a9730fad1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:25 np0005539564 nova_compute[226295]: 2025-11-29 08:15:25.342 226310 DEBUG nova.compute.manager [req-8026e528-eb6d-4f26-8d6f-63cfeff3b524 req-8c62412d-3d6a-4c69-a3fb-5b9a9730fad1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] No waiting events found dispatching network-vif-plugged-cee20a9b-a551-4e60-9b01-151c48dc45fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:25 np0005539564 nova_compute[226295]: 2025-11-29 08:15:25.342 226310 WARNING nova.compute.manager [req-8026e528-eb6d-4f26-8d6f-63cfeff3b524 req-8c62412d-3d6a-4c69-a3fb-5b9a9730fad1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Received unexpected event network-vif-plugged-cee20a9b-a551-4e60-9b01-151c48dc45fe for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:15:25 np0005539564 nova_compute[226295]: 2025-11-29 08:15:25.573 226310 DEBUG oslo_concurrency.lockutils [None req-73cb13d5-93b6-4e2f-ab8e-085dccecad76 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "bba0127d-6332-44fa-8fc1-c3d7321260fa" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:25 np0005539564 nova_compute[226295]: 2025-11-29 08:15:25.573 226310 DEBUG oslo_concurrency.lockutils [None req-73cb13d5-93b6-4e2f-ab8e-085dccecad76 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:25 np0005539564 nova_compute[226295]: 2025-11-29 08:15:25.573 226310 DEBUG nova.compute.manager [None req-73cb13d5-93b6-4e2f-ab8e-085dccecad76 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Going to confirm migration 17 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Nov 29 03:15:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:26.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:26.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:26 np0005539564 nova_compute[226295]: 2025-11-29 08:15:26.700 226310 DEBUG oslo_concurrency.lockutils [None req-73cb13d5-93b6-4e2f-ab8e-085dccecad76 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "refresh_cache-bba0127d-6332-44fa-8fc1-c3d7321260fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:15:26 np0005539564 nova_compute[226295]: 2025-11-29 08:15:26.701 226310 DEBUG oslo_concurrency.lockutils [None req-73cb13d5-93b6-4e2f-ab8e-085dccecad76 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquired lock "refresh_cache-bba0127d-6332-44fa-8fc1-c3d7321260fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:15:26 np0005539564 nova_compute[226295]: 2025-11-29 08:15:26.701 226310 DEBUG nova.network.neutron [None req-73cb13d5-93b6-4e2f-ab8e-085dccecad76 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:15:26 np0005539564 nova_compute[226295]: 2025-11-29 08:15:26.702 226310 DEBUG nova.objects.instance [None req-73cb13d5-93b6-4e2f-ab8e-085dccecad76 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'info_cache' on Instance uuid bba0127d-6332-44fa-8fc1-c3d7321260fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:27 np0005539564 nova_compute[226295]: 2025-11-29 08:15:27.230 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:27 np0005539564 nova_compute[226295]: 2025-11-29 08:15:27.519 226310 DEBUG nova.compute.manager [req-60c3a892-e22d-4354-b8c6-abec53bdec57 req-51bb9085-50f7-41ec-be83-75c0f4848b44 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Received event network-vif-plugged-cee20a9b-a551-4e60-9b01-151c48dc45fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:27 np0005539564 nova_compute[226295]: 2025-11-29 08:15:27.521 226310 DEBUG oslo_concurrency.lockutils [req-60c3a892-e22d-4354-b8c6-abec53bdec57 req-51bb9085-50f7-41ec-be83-75c0f4848b44 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:27 np0005539564 nova_compute[226295]: 2025-11-29 08:15:27.521 226310 DEBUG oslo_concurrency.lockutils [req-60c3a892-e22d-4354-b8c6-abec53bdec57 req-51bb9085-50f7-41ec-be83-75c0f4848b44 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:27 np0005539564 nova_compute[226295]: 2025-11-29 08:15:27.522 226310 DEBUG oslo_concurrency.lockutils [req-60c3a892-e22d-4354-b8c6-abec53bdec57 req-51bb9085-50f7-41ec-be83-75c0f4848b44 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:27 np0005539564 nova_compute[226295]: 2025-11-29 08:15:27.523 226310 DEBUG nova.compute.manager [req-60c3a892-e22d-4354-b8c6-abec53bdec57 req-51bb9085-50f7-41ec-be83-75c0f4848b44 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] No waiting events found dispatching network-vif-plugged-cee20a9b-a551-4e60-9b01-151c48dc45fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:27 np0005539564 nova_compute[226295]: 2025-11-29 08:15:27.523 226310 WARNING nova.compute.manager [req-60c3a892-e22d-4354-b8c6-abec53bdec57 req-51bb9085-50f7-41ec-be83-75c0f4848b44 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Received unexpected event network-vif-plugged-cee20a9b-a551-4e60-9b01-151c48dc45fe for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:15:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:28.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:28 np0005539564 nova_compute[226295]: 2025-11-29 08:15:28.281 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:28.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e292 e292: 3 total, 3 up, 3 in
Nov 29 03:15:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:30.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:30.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:30 np0005539564 nova_compute[226295]: 2025-11-29 08:15:30.834 226310 DEBUG nova.network.neutron [None req-73cb13d5-93b6-4e2f-ab8e-085dccecad76 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Updating instance_info_cache with network_info: [{"id": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "address": "fa:16:3e:a2:7d:a4", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcee20a9b-a5", "ovs_interfaceid": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:15:30 np0005539564 nova_compute[226295]: 2025-11-29 08:15:30.892 226310 DEBUG oslo_concurrency.lockutils [None req-73cb13d5-93b6-4e2f-ab8e-085dccecad76 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Releasing lock "refresh_cache-bba0127d-6332-44fa-8fc1-c3d7321260fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:15:30 np0005539564 nova_compute[226295]: 2025-11-29 08:15:30.893 226310 DEBUG nova.objects.instance [None req-73cb13d5-93b6-4e2f-ab8e-085dccecad76 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'migration_context' on Instance uuid bba0127d-6332-44fa-8fc1-c3d7321260fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:15:30 np0005539564 nova_compute[226295]: 2025-11-29 08:15:30.979 226310 DEBUG nova.storage.rbd_utils [None req-73cb13d5-93b6-4e2f-ab8e-085dccecad76 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image bba0127d-6332-44fa-8fc1-c3d7321260fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:15:30 np0005539564 nova_compute[226295]: 2025-11-29 08:15:30.983 226310 DEBUG oslo_concurrency.lockutils [None req-73cb13d5-93b6-4e2f-ab8e-085dccecad76 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:30 np0005539564 nova_compute[226295]: 2025-11-29 08:15:30.983 226310 DEBUG oslo_concurrency.lockutils [None req-73cb13d5-93b6-4e2f-ab8e-085dccecad76 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:31 np0005539564 nova_compute[226295]: 2025-11-29 08:15:31.162 226310 DEBUG oslo_concurrency.processutils [None req-73cb13d5-93b6-4e2f-ab8e-085dccecad76 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:15:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:15:31 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/760031319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:15:31 np0005539564 nova_compute[226295]: 2025-11-29 08:15:31.651 226310 DEBUG oslo_concurrency.processutils [None req-73cb13d5-93b6-4e2f-ab8e-085dccecad76 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:15:31 np0005539564 nova_compute[226295]: 2025-11-29 08:15:31.662 226310 DEBUG nova.compute.provider_tree [None req-73cb13d5-93b6-4e2f-ab8e-085dccecad76 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:15:31 np0005539564 nova_compute[226295]: 2025-11-29 08:15:31.684 226310 DEBUG nova.scheduler.client.report [None req-73cb13d5-93b6-4e2f-ab8e-085dccecad76 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:15:31 np0005539564 nova_compute[226295]: 2025-11-29 08:15:31.745 226310 DEBUG oslo_concurrency.lockutils [None req-73cb13d5-93b6-4e2f-ab8e-085dccecad76 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:31 np0005539564 nova_compute[226295]: 2025-11-29 08:15:31.885 226310 INFO nova.scheduler.client.report [None req-73cb13d5-93b6-4e2f-ab8e-085dccecad76 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Deleted allocation for migration eb7b9f1a-e015-45b9-993b-21e134c7d768
Nov 29 03:15:31 np0005539564 nova_compute[226295]: 2025-11-29 08:15:31.950 226310 DEBUG oslo_concurrency.lockutils [None req-73cb13d5-93b6-4e2f-ab8e-085dccecad76 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 6.376s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:32.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:32 np0005539564 nova_compute[226295]: 2025-11-29 08:15:32.232 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:32.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:33 np0005539564 nova_compute[226295]: 2025-11-29 08:15:33.323 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:34.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:34.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:34 np0005539564 podman[270440]: 2025-11-29 08:15:34.534013108 +0000 UTC m=+0.082740303 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:15:34 np0005539564 podman[270441]: 2025-11-29 08:15:34.553849423 +0000 UTC m=+0.085293002 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:15:34 np0005539564 podman[270439]: 2025-11-29 08:15:34.595168137 +0000 UTC m=+0.140331566 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:15:35 np0005539564 nova_compute[226295]: 2025-11-29 08:15:35.033 226310 INFO nova.compute.manager [None req-433ddaf8-0e43-4c56-818c-525c98678dbf 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Get console output
Nov 29 03:15:35 np0005539564 nova_compute[226295]: 2025-11-29 08:15:35.040 226310 INFO oslo.privsep.daemon [None req-433ddaf8-0e43-4c56-818c-525c98678dbf 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpnk6ett8v/privsep.sock']
Nov 29 03:15:35 np0005539564 nova_compute[226295]: 2025-11-29 08:15:35.977 226310 INFO oslo.privsep.daemon [None req-433ddaf8-0e43-4c56-818c-525c98678dbf 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Spawned new privsep daemon via rootwrap
Nov 29 03:15:35 np0005539564 nova_compute[226295]: 2025-11-29 08:15:35.810 270504 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 03:15:35 np0005539564 nova_compute[226295]: 2025-11-29 08:15:35.815 270504 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 03:15:35 np0005539564 nova_compute[226295]: 2025-11-29 08:15:35.818 270504 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 29 03:15:35 np0005539564 nova_compute[226295]: 2025-11-29 08:15:35.818 270504 INFO oslo.privsep.daemon [-] privsep daemon running as pid 270504
Nov 29 03:15:36 np0005539564 nova_compute[226295]: 2025-11-29 08:15:36.065 270504 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 29 03:15:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:36.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:36.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:37 np0005539564 nova_compute[226295]: 2025-11-29 08:15:37.235 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e293 e293: 3 total, 3 up, 3 in
Nov 29 03:15:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e294 e294: 3 total, 3 up, 3 in
Nov 29 03:15:37 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:37.694 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:15:37 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:37.696 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:15:37 np0005539564 nova_compute[226295]: 2025-11-29 08:15:37.707 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:38.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e295 e295: 3 total, 3 up, 3 in
Nov 29 03:15:38 np0005539564 nova_compute[226295]: 2025-11-29 08:15:38.324 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:38.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:38 np0005539564 ovn_controller[130591]: 2025-11-29T08:15:38Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a2:7d:a4 10.100.0.4
Nov 29 03:15:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e296 e296: 3 total, 3 up, 3 in
Nov 29 03:15:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:40.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:40.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:41.699 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:15:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:42.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:42 np0005539564 nova_compute[226295]: 2025-11-29 08:15:42.238 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:42.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:43 np0005539564 nova_compute[226295]: 2025-11-29 08:15:43.327 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:44.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:44.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:44 np0005539564 ovn_controller[130591]: 2025-11-29T08:15:44Z|00427|binding|INFO|Releasing lport 56facbc8-1a3f-4008-8f77-23eeac832994 from this chassis (sb_readonly=0)
Nov 29 03:15:44 np0005539564 ovn_controller[130591]: 2025-11-29T08:15:44Z|00428|binding|INFO|Releasing lport a2e47e7a-aef0-4c09-aeef-4a0d63960d7b from this chassis (sb_readonly=0)
Nov 29 03:15:44 np0005539564 nova_compute[226295]: 2025-11-29 08:15:44.982 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:45 np0005539564 nova_compute[226295]: 2025-11-29 08:15:45.373 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:15:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:46.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:46 np0005539564 nova_compute[226295]: 2025-11-29 08:15:46.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:15:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:46.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:47 np0005539564 nova_compute[226295]: 2025-11-29 08:15:47.242 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e297 e297: 3 total, 3 up, 3 in
Nov 29 03:15:47 np0005539564 nova_compute[226295]: 2025-11-29 08:15:47.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:15:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:48.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:48 np0005539564 nova_compute[226295]: 2025-11-29 08:15:48.330 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:48.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:50.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:50 np0005539564 nova_compute[226295]: 2025-11-29 08:15:50.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:15:50 np0005539564 nova_compute[226295]: 2025-11-29 08:15:50.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:15:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:50.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:50 np0005539564 nova_compute[226295]: 2025-11-29 08:15:50.869 226310 DEBUG oslo_concurrency.lockutils [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "6033af89-d6d3-45c5-bf88-b2f17800f12e" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:50 np0005539564 nova_compute[226295]: 2025-11-29 08:15:50.870 226310 DEBUG oslo_concurrency.lockutils [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:50 np0005539564 nova_compute[226295]: 2025-11-29 08:15:50.870 226310 INFO nova.compute.manager [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Shelving
Nov 29 03:15:50 np0005539564 nova_compute[226295]: 2025-11-29 08:15:50.898 226310 DEBUG nova.virt.libvirt.driver [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 03:15:51 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:51 np0005539564 nova_compute[226295]: 2025-11-29 08:15:51.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:15:51 np0005539564 nova_compute[226295]: 2025-11-29 08:15:51.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:15:51 np0005539564 nova_compute[226295]: 2025-11-29 08:15:51.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:15:51 np0005539564 nova_compute[226295]: 2025-11-29 08:15:51.380 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-6033af89-d6d3-45c5-bf88-b2f17800f12e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:15:51 np0005539564 nova_compute[226295]: 2025-11-29 08:15:51.380 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-6033af89-d6d3-45c5-bf88-b2f17800f12e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:15:51 np0005539564 nova_compute[226295]: 2025-11-29 08:15:51.380 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:15:51 np0005539564 nova_compute[226295]: 2025-11-29 08:15:51.380 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6033af89-d6d3-45c5-bf88-b2f17800f12e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:15:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:52.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.245 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.444 226310 DEBUG oslo_concurrency.lockutils [None req-a9052df1-298f-4fbc-9b80-d1e7c046d0f1 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "bba0127d-6332-44fa-8fc1-c3d7321260fa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.444 226310 DEBUG oslo_concurrency.lockutils [None req-a9052df1-298f-4fbc-9b80-d1e7c046d0f1 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.444 226310 DEBUG oslo_concurrency.lockutils [None req-a9052df1-298f-4fbc-9b80-d1e7c046d0f1 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.445 226310 DEBUG oslo_concurrency.lockutils [None req-a9052df1-298f-4fbc-9b80-d1e7c046d0f1 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.445 226310 DEBUG oslo_concurrency.lockutils [None req-a9052df1-298f-4fbc-9b80-d1e7c046d0f1 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.446 226310 INFO nova.compute.manager [None req-a9052df1-298f-4fbc-9b80-d1e7c046d0f1 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Terminating instance#033[00m
Nov 29 03:15:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:52.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.447 226310 DEBUG nova.compute.manager [None req-a9052df1-298f-4fbc-9b80-d1e7c046d0f1 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:15:52 np0005539564 kernel: tapcee20a9b-a5 (unregistering): left promiscuous mode
Nov 29 03:15:52 np0005539564 NetworkManager[48997]: <info>  [1764404152.4940] device (tapcee20a9b-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.503 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:52 np0005539564 ovn_controller[130591]: 2025-11-29T08:15:52Z|00429|binding|INFO|Releasing lport cee20a9b-a551-4e60-9b01-151c48dc45fe from this chassis (sb_readonly=0)
Nov 29 03:15:52 np0005539564 ovn_controller[130591]: 2025-11-29T08:15:52Z|00430|binding|INFO|Setting lport cee20a9b-a551-4e60-9b01-151c48dc45fe down in Southbound
Nov 29 03:15:52 np0005539564 ovn_controller[130591]: 2025-11-29T08:15:52Z|00431|binding|INFO|Removing iface tapcee20a9b-a5 ovn-installed in OVS
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.507 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:52.513 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:7d:a4 10.100.0.4'], port_security=['fa:16:3e:a2:7d:a4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'bba0127d-6332-44fa-8fc1-c3d7321260fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '250671461f27498d9f6b4476c7b69533', 'neutron:revision_number': '6', 'neutron:security_group_ids': '80090c82-90f6-4c43-a017-5be03974adfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.201', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a03133c-20d7-4b83-a65b-3860eafc9833, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=cee20a9b-a551-4e60-9b01-151c48dc45fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:15:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:52.515 139780 INFO neutron.agent.ovn.metadata.agent [-] Port cee20a9b-a551-4e60-9b01-151c48dc45fe in datapath 10a9b8d1-2de6-4e47-8e44-16b661da8624 unbound from our chassis#033[00m
Nov 29 03:15:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:52.518 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 10a9b8d1-2de6-4e47-8e44-16b661da8624, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:15:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:52.519 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0514499b-b99d-4e62-b6c2-5fa7f0d65a2f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:52.520 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 namespace which is not needed anymore#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.538 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:52 np0005539564 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d00000074.scope: Deactivated successfully.
Nov 29 03:15:52 np0005539564 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d00000074.scope: Consumed 14.643s CPU time.
Nov 29 03:15:52 np0005539564 systemd-machined[190128]: Machine qemu-51-instance-00000074 terminated.
Nov 29 03:15:52 np0005539564 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[270384]: [NOTICE]   (270388) : haproxy version is 2.8.14-c23fe91
Nov 29 03:15:52 np0005539564 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[270384]: [NOTICE]   (270388) : path to executable is /usr/sbin/haproxy
Nov 29 03:15:52 np0005539564 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[270384]: [WARNING]  (270388) : Exiting Master process...
Nov 29 03:15:52 np0005539564 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[270384]: [WARNING]  (270388) : Exiting Master process...
Nov 29 03:15:52 np0005539564 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[270384]: [ALERT]    (270388) : Current worker (270390) exited with code 143 (Terminated)
Nov 29 03:15:52 np0005539564 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[270384]: [WARNING]  (270388) : All workers exited. Exiting... (0)
Nov 29 03:15:52 np0005539564 systemd[1]: libpod-fa9c84608cd7468c7471f3b6efcc3f1f136ed177ad07ebc8fd2fc0a9de082776.scope: Deactivated successfully.
Nov 29 03:15:52 np0005539564 podman[270529]: 2025-11-29 08:15:52.684867408 +0000 UTC m=+0.053282318 container died fa9c84608cd7468c7471f3b6efcc3f1f136ed177ad07ebc8fd2fc0a9de082776 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.691 226310 INFO nova.virt.libvirt.driver [-] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Instance destroyed successfully.#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.691 226310 DEBUG nova.objects.instance [None req-a9052df1-298f-4fbc-9b80-d1e7c046d0f1 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'resources' on Instance uuid bba0127d-6332-44fa-8fc1-c3d7321260fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.709 226310 DEBUG nova.virt.libvirt.vif [None req-a9052df1-298f-4fbc-9b80-d1e7c046d0f1 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:14:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-699492332',display_name='tempest-ServerActionsTestOtherA-server-699492332',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-699492332',id=116,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCtkVNAdyfTZxevPypDM4BSYE1hEin7kURjxMHJymyPY9Csd3IhKS5yulx5aFvPHDU4xFm7qnx5crpT14GkSPm/EI9TigbJvl3D9u9RL82NR4qlOqRKDsJAXZt9pXEPkRw==',key_name='tempest-keypair-809017258',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:15:24Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='250671461f27498d9f6b4476c7b69533',ramdisk_id='',reservation_id='r-8ajfbgd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-ServerActionsTestOtherA-552273978',owner_user_name='tempest-ServerActionsTestOtherA-552273978-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:15:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='58625e4c2b5d43a1abbab05b98853a65',uuid=bba0127d-6332-44fa-8fc1-c3d7321260fa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "address": "fa:16:3e:a2:7d:a4", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", 
"version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcee20a9b-a5", "ovs_interfaceid": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.712 226310 DEBUG nova.network.os_vif_util [None req-a9052df1-298f-4fbc-9b80-d1e7c046d0f1 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converting VIF {"id": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "address": "fa:16:3e:a2:7d:a4", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcee20a9b-a5", "ovs_interfaceid": "cee20a9b-a551-4e60-9b01-151c48dc45fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.714 226310 DEBUG nova.network.os_vif_util [None req-a9052df1-298f-4fbc-9b80-d1e7c046d0f1 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a2:7d:a4,bridge_name='br-int',has_traffic_filtering=True,id=cee20a9b-a551-4e60-9b01-151c48dc45fe,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcee20a9b-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.714 226310 DEBUG os_vif [None req-a9052df1-298f-4fbc-9b80-d1e7c046d0f1 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a2:7d:a4,bridge_name='br-int',has_traffic_filtering=True,id=cee20a9b-a551-4e60-9b01-151c48dc45fe,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcee20a9b-a5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.716 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.716 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcee20a9b-a5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.719 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.720 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.723 226310 INFO os_vif [None req-a9052df1-298f-4fbc-9b80-d1e7c046d0f1 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a2:7d:a4,bridge_name='br-int',has_traffic_filtering=True,id=cee20a9b-a551-4e60-9b01-151c48dc45fe,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcee20a9b-a5')#033[00m
Nov 29 03:15:52 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fa9c84608cd7468c7471f3b6efcc3f1f136ed177ad07ebc8fd2fc0a9de082776-userdata-shm.mount: Deactivated successfully.
Nov 29 03:15:52 np0005539564 systemd[1]: var-lib-containers-storage-overlay-bd4e3e43573729113cb443588865bdf907f5ec0eb82e9f5f9532ee147c6f717b-merged.mount: Deactivated successfully.
Nov 29 03:15:52 np0005539564 podman[270529]: 2025-11-29 08:15:52.742741949 +0000 UTC m=+0.111156819 container cleanup fa9c84608cd7468c7471f3b6efcc3f1f136ed177ad07ebc8fd2fc0a9de082776 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:15:52 np0005539564 systemd[1]: libpod-conmon-fa9c84608cd7468c7471f3b6efcc3f1f136ed177ad07ebc8fd2fc0a9de082776.scope: Deactivated successfully.
Nov 29 03:15:52 np0005539564 podman[270585]: 2025-11-29 08:15:52.809489939 +0000 UTC m=+0.042791565 container remove fa9c84608cd7468c7471f3b6efcc3f1f136ed177ad07ebc8fd2fc0a9de082776 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:15:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:52.814 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec89285-cbd3-4a02-9516-2502bee584ff]: (4, ('Sat Nov 29 08:15:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 (fa9c84608cd7468c7471f3b6efcc3f1f136ed177ad07ebc8fd2fc0a9de082776)\nfa9c84608cd7468c7471f3b6efcc3f1f136ed177ad07ebc8fd2fc0a9de082776\nSat Nov 29 08:15:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 (fa9c84608cd7468c7471f3b6efcc3f1f136ed177ad07ebc8fd2fc0a9de082776)\nfa9c84608cd7468c7471f3b6efcc3f1f136ed177ad07ebc8fd2fc0a9de082776\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:52.816 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[445f9cd4-bd77-4070-bdd9-b60c83b823f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:52.817 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10a9b8d1-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:52 np0005539564 kernel: tap10a9b8d1-20: left promiscuous mode
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.818 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:52.824 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a8be6475-14b5-41c2-bbc5-7f2219a05bf0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.834 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:52.842 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a0225c2b-fd06-44f0-8b88-563a2f3fc4f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:52.844 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[eff2b3e0-adfb-42be-9599-fb66f66ca03e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:52.859 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[19b5fdba-1857-4c00-a7a8-ad0dda751105]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709841, 'reachable_time': 19959, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270604, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:52.862 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:15:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:52.862 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[f23417f6-604b-46d8-982b-2ce83ca49a0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:52 np0005539564 systemd[1]: run-netns-ovnmeta\x2d10a9b8d1\x2d2de6\x2d4e47\x2d8e44\x2d16b661da8624.mount: Deactivated successfully.
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.872 226310 DEBUG nova.compute.manager [req-065ece4a-6e0a-409d-bc41-230892d6e313 req-97205715-1454-4c8f-baab-d18c3637e3bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Received event network-vif-unplugged-cee20a9b-a551-4e60-9b01-151c48dc45fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.873 226310 DEBUG oslo_concurrency.lockutils [req-065ece4a-6e0a-409d-bc41-230892d6e313 req-97205715-1454-4c8f-baab-d18c3637e3bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.873 226310 DEBUG oslo_concurrency.lockutils [req-065ece4a-6e0a-409d-bc41-230892d6e313 req-97205715-1454-4c8f-baab-d18c3637e3bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.874 226310 DEBUG oslo_concurrency.lockutils [req-065ece4a-6e0a-409d-bc41-230892d6e313 req-97205715-1454-4c8f-baab-d18c3637e3bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.874 226310 DEBUG nova.compute.manager [req-065ece4a-6e0a-409d-bc41-230892d6e313 req-97205715-1454-4c8f-baab-d18c3637e3bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] No waiting events found dispatching network-vif-unplugged-cee20a9b-a551-4e60-9b01-151c48dc45fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.874 226310 DEBUG nova.compute.manager [req-065ece4a-6e0a-409d-bc41-230892d6e313 req-97205715-1454-4c8f-baab-d18c3637e3bd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Received event network-vif-unplugged-cee20a9b-a551-4e60-9b01-151c48dc45fe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.915 226310 INFO nova.virt.libvirt.driver [None req-a9052df1-298f-4fbc-9b80-d1e7c046d0f1 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Deleting instance files /var/lib/nova/instances/bba0127d-6332-44fa-8fc1-c3d7321260fa_del#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.916 226310 INFO nova.virt.libvirt.driver [None req-a9052df1-298f-4fbc-9b80-d1e7c046d0f1 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Deletion of /var/lib/nova/instances/bba0127d-6332-44fa-8fc1-c3d7321260fa_del complete#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.985 226310 INFO nova.compute.manager [None req-a9052df1-298f-4fbc-9b80-d1e7c046d0f1 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Took 0.54 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.986 226310 DEBUG oslo.service.loopingcall [None req-a9052df1-298f-4fbc-9b80-d1e7c046d0f1 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.986 226310 DEBUG nova.compute.manager [-] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:15:52 np0005539564 nova_compute[226295]: 2025-11-29 08:15:52.986 226310 DEBUG nova.network.neutron [-] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:15:53 np0005539564 nova_compute[226295]: 2025-11-29 08:15:53.018 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:53 np0005539564 kernel: tapa221d286-cb (unregistering): left promiscuous mode
Nov 29 03:15:53 np0005539564 NetworkManager[48997]: <info>  [1764404153.2131] device (tapa221d286-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:15:53 np0005539564 ovn_controller[130591]: 2025-11-29T08:15:53Z|00432|binding|INFO|Releasing lport a221d286-cb0f-41fd-9997-b1687a875e0e from this chassis (sb_readonly=0)
Nov 29 03:15:53 np0005539564 nova_compute[226295]: 2025-11-29 08:15:53.214 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:53 np0005539564 ovn_controller[130591]: 2025-11-29T08:15:53Z|00433|binding|INFO|Setting lport a221d286-cb0f-41fd-9997-b1687a875e0e down in Southbound
Nov 29 03:15:53 np0005539564 ovn_controller[130591]: 2025-11-29T08:15:53Z|00434|binding|INFO|Removing iface tapa221d286-cb ovn-installed in OVS
Nov 29 03:15:53 np0005539564 nova_compute[226295]: 2025-11-29 08:15:53.217 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:53.224 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:cf:8d 10.100.0.10'], port_security=['fa:16:3e:5e:cf:8d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6033af89-d6d3-45c5-bf88-b2f17800f12e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba867fac17034bb28fe2cdb0fff3af2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a54db614-4504-4e8e-a3a5-27d3f60f6cdf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5e4b2f3-5e6e-48f8-b35a-ab61c62108a6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=a221d286-cb0f-41fd-9997-b1687a875e0e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:15:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:53.226 139780 INFO neutron.agent.ovn.metadata.agent [-] Port a221d286-cb0f-41fd-9997-b1687a875e0e in datapath 4d5b8c11-b69e-4a74-846b-03943fb29a81 unbound from our chassis#033[00m
Nov 29 03:15:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:53.229 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d5b8c11-b69e-4a74-846b-03943fb29a81, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:15:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:53.230 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[99402b9e-90fa-4955-b85a-a14b552d5b37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:53.230 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 namespace which is not needed anymore#033[00m
Nov 29 03:15:53 np0005539564 nova_compute[226295]: 2025-11-29 08:15:53.250 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:53 np0005539564 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000073.scope: Deactivated successfully.
Nov 29 03:15:53 np0005539564 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000073.scope: Consumed 16.199s CPU time.
Nov 29 03:15:53 np0005539564 systemd-machined[190128]: Machine qemu-50-instance-00000073 terminated.
Nov 29 03:15:53 np0005539564 nova_compute[226295]: 2025-11-29 08:15:53.337 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Updating instance_info_cache with network_info: [{"id": "a221d286-cb0f-41fd-9997-b1687a875e0e", "address": "fa:16:3e:5e:cf:8d", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa221d286-cb", "ovs_interfaceid": "a221d286-cb0f-41fd-9997-b1687a875e0e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:15:53 np0005539564 nova_compute[226295]: 2025-11-29 08:15:53.358 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-6033af89-d6d3-45c5-bf88-b2f17800f12e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:15:53 np0005539564 nova_compute[226295]: 2025-11-29 08:15:53.358 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:15:53 np0005539564 nova_compute[226295]: 2025-11-29 08:15:53.359 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:53 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[269945]: [NOTICE]   (269949) : haproxy version is 2.8.14-c23fe91
Nov 29 03:15:53 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[269945]: [NOTICE]   (269949) : path to executable is /usr/sbin/haproxy
Nov 29 03:15:53 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[269945]: [WARNING]  (269949) : Exiting Master process...
Nov 29 03:15:53 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[269945]: [ALERT]    (269949) : Current worker (269951) exited with code 143 (Terminated)
Nov 29 03:15:53 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[269945]: [WARNING]  (269949) : All workers exited. Exiting... (0)
Nov 29 03:15:53 np0005539564 systemd[1]: libpod-f9096fad7fb57e350c5f0822ff3ec45f53e862f3284aa8902555ce9501027b46.scope: Deactivated successfully.
Nov 29 03:15:53 np0005539564 podman[270627]: 2025-11-29 08:15:53.380533792 +0000 UTC m=+0.055749104 container died f9096fad7fb57e350c5f0822ff3ec45f53e862f3284aa8902555ce9501027b46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 03:15:53 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f9096fad7fb57e350c5f0822ff3ec45f53e862f3284aa8902555ce9501027b46-userdata-shm.mount: Deactivated successfully.
Nov 29 03:15:53 np0005539564 systemd[1]: var-lib-containers-storage-overlay-0e5caaefd0d22b976f00d791ff78363709c734276e168660fdfae53c0f5bb55f-merged.mount: Deactivated successfully.
Nov 29 03:15:53 np0005539564 podman[270627]: 2025-11-29 08:15:53.426624856 +0000 UTC m=+0.101840168 container cleanup f9096fad7fb57e350c5f0822ff3ec45f53e862f3284aa8902555ce9501027b46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:15:53 np0005539564 systemd[1]: libpod-conmon-f9096fad7fb57e350c5f0822ff3ec45f53e862f3284aa8902555ce9501027b46.scope: Deactivated successfully.
Nov 29 03:15:53 np0005539564 NetworkManager[48997]: <info>  [1764404153.4442] manager: (tapa221d286-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/218)
Nov 29 03:15:53 np0005539564 nova_compute[226295]: 2025-11-29 08:15:53.489 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:53 np0005539564 podman[270658]: 2025-11-29 08:15:53.516305625 +0000 UTC m=+0.059764243 container remove f9096fad7fb57e350c5f0822ff3ec45f53e862f3284aa8902555ce9501027b46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:15:53 np0005539564 nova_compute[226295]: 2025-11-29 08:15:53.518 226310 DEBUG nova.compute.manager [req-c1e0ea86-460b-44ac-a7fa-398758d9775a req-0dd97609-a605-4425-8e72-311b81b0d860 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Received event network-vif-unplugged-a221d286-cb0f-41fd-9997-b1687a875e0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:53 np0005539564 nova_compute[226295]: 2025-11-29 08:15:53.518 226310 DEBUG oslo_concurrency.lockutils [req-c1e0ea86-460b-44ac-a7fa-398758d9775a req-0dd97609-a605-4425-8e72-311b81b0d860 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:53 np0005539564 nova_compute[226295]: 2025-11-29 08:15:53.518 226310 DEBUG oslo_concurrency.lockutils [req-c1e0ea86-460b-44ac-a7fa-398758d9775a req-0dd97609-a605-4425-8e72-311b81b0d860 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:53 np0005539564 nova_compute[226295]: 2025-11-29 08:15:53.518 226310 DEBUG oslo_concurrency.lockutils [req-c1e0ea86-460b-44ac-a7fa-398758d9775a req-0dd97609-a605-4425-8e72-311b81b0d860 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:53 np0005539564 nova_compute[226295]: 2025-11-29 08:15:53.519 226310 DEBUG nova.compute.manager [req-c1e0ea86-460b-44ac-a7fa-398758d9775a req-0dd97609-a605-4425-8e72-311b81b0d860 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] No waiting events found dispatching network-vif-unplugged-a221d286-cb0f-41fd-9997-b1687a875e0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:53 np0005539564 nova_compute[226295]: 2025-11-29 08:15:53.519 226310 WARNING nova.compute.manager [req-c1e0ea86-460b-44ac-a7fa-398758d9775a req-0dd97609-a605-4425-8e72-311b81b0d860 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Received unexpected event network-vif-unplugged-a221d286-cb0f-41fd-9997-b1687a875e0e for instance with vm_state active and task_state shelving.#033[00m
Nov 29 03:15:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:53.525 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[26fa0091-5ecd-4163-8e5e-8ea329f9178d]: (4, ('Sat Nov 29 08:15:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 (f9096fad7fb57e350c5f0822ff3ec45f53e862f3284aa8902555ce9501027b46)\nf9096fad7fb57e350c5f0822ff3ec45f53e862f3284aa8902555ce9501027b46\nSat Nov 29 08:15:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 (f9096fad7fb57e350c5f0822ff3ec45f53e862f3284aa8902555ce9501027b46)\nf9096fad7fb57e350c5f0822ff3ec45f53e862f3284aa8902555ce9501027b46\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:53.527 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4e426063-31ff-4b36-a25b-8904678a4e6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:53.529 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d5b8c11-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:53 np0005539564 nova_compute[226295]: 2025-11-29 08:15:53.530 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:53 np0005539564 kernel: tap4d5b8c11-b0: left promiscuous mode
Nov 29 03:15:53 np0005539564 nova_compute[226295]: 2025-11-29 08:15:53.561 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:53.565 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[24d750dc-3201-41e7-a2e2-2b5725105508]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:53.583 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b72d3933-e23e-4fbc-9fc9-c67c42742791]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:53.584 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[63531ae7-b02f-416d-86b9-9ef715fe51e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:53.604 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4b56289f-e9db-4dce-af00-f13ff1a3de1f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707060, 'reachable_time': 31671, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270683, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:53.606 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:15:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:15:53.606 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[e62c0fc7-b365-41f8-af6a-3e1e0779a4f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:53 np0005539564 systemd[1]: run-netns-ovnmeta\x2d4d5b8c11\x2db69e\x2d4a74\x2d846b\x2d03943fb29a81.mount: Deactivated successfully.
Nov 29 03:15:53 np0005539564 nova_compute[226295]: 2025-11-29 08:15:53.927 226310 INFO nova.virt.libvirt.driver [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:15:53 np0005539564 nova_compute[226295]: 2025-11-29 08:15:53.936 226310 INFO nova.virt.libvirt.driver [-] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Instance destroyed successfully.#033[00m
Nov 29 03:15:53 np0005539564 nova_compute[226295]: 2025-11-29 08:15:53.937 226310 DEBUG nova.objects.instance [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'numa_topology' on Instance uuid 6033af89-d6d3-45c5-bf88-b2f17800f12e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:54.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:54 np0005539564 nova_compute[226295]: 2025-11-29 08:15:54.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:54 np0005539564 nova_compute[226295]: 2025-11-29 08:15:54.370 226310 DEBUG nova.network.neutron [-] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:15:54 np0005539564 nova_compute[226295]: 2025-11-29 08:15:54.416 226310 INFO nova.compute.manager [-] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Took 1.43 seconds to deallocate network for instance.#033[00m
Nov 29 03:15:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:54.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:54 np0005539564 nova_compute[226295]: 2025-11-29 08:15:54.495 226310 DEBUG nova.compute.manager [req-692081cc-d35b-44c3-95d3-50d19c3cdc4f req-ea0ac10f-fe01-4770-aa38-d2cfb1f7080d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Received event network-vif-deleted-cee20a9b-a551-4e60-9b01-151c48dc45fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:54 np0005539564 nova_compute[226295]: 2025-11-29 08:15:54.578 226310 INFO nova.virt.libvirt.driver [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Beginning cold snapshot process#033[00m
Nov 29 03:15:54 np0005539564 nova_compute[226295]: 2025-11-29 08:15:54.763 226310 INFO nova.compute.manager [None req-a9052df1-298f-4fbc-9b80-d1e7c046d0f1 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Took 0.35 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:15:54 np0005539564 nova_compute[226295]: 2025-11-29 08:15:54.765 226310 DEBUG nova.compute.manager [None req-a9052df1-298f-4fbc-9b80-d1e7c046d0f1 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Deleting volume: ad74cad8-db25-41c6-a50a-ce4cad08c1ea _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Nov 29 03:15:54 np0005539564 nova_compute[226295]: 2025-11-29 08:15:54.864 226310 DEBUG nova.virt.libvirt.imagebackend [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:15:54 np0005539564 nova_compute[226295]: 2025-11-29 08:15:54.993 226310 DEBUG oslo_concurrency.lockutils [None req-a9052df1-298f-4fbc-9b80-d1e7c046d0f1 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:54 np0005539564 nova_compute[226295]: 2025-11-29 08:15:54.994 226310 DEBUG oslo_concurrency.lockutils [None req-a9052df1-298f-4fbc-9b80-d1e7c046d0f1 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:55 np0005539564 nova_compute[226295]: 2025-11-29 08:15:55.000 226310 DEBUG oslo_concurrency.lockutils [None req-a9052df1-298f-4fbc-9b80-d1e7c046d0f1 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:55 np0005539564 nova_compute[226295]: 2025-11-29 08:15:55.039 226310 INFO nova.scheduler.client.report [None req-a9052df1-298f-4fbc-9b80-d1e7c046d0f1 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Deleted allocations for instance bba0127d-6332-44fa-8fc1-c3d7321260fa#033[00m
Nov 29 03:15:55 np0005539564 nova_compute[226295]: 2025-11-29 08:15:55.045 226310 DEBUG nova.compute.manager [req-3c13cdd6-ce8c-4e5f-8278-82c63b79bc95 req-b3977a8f-dedc-4e15-87c5-4ace3f66ed9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Received event network-vif-plugged-cee20a9b-a551-4e60-9b01-151c48dc45fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:55 np0005539564 nova_compute[226295]: 2025-11-29 08:15:55.046 226310 DEBUG oslo_concurrency.lockutils [req-3c13cdd6-ce8c-4e5f-8278-82c63b79bc95 req-b3977a8f-dedc-4e15-87c5-4ace3f66ed9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:55 np0005539564 nova_compute[226295]: 2025-11-29 08:15:55.047 226310 DEBUG oslo_concurrency.lockutils [req-3c13cdd6-ce8c-4e5f-8278-82c63b79bc95 req-b3977a8f-dedc-4e15-87c5-4ace3f66ed9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:55 np0005539564 nova_compute[226295]: 2025-11-29 08:15:55.047 226310 DEBUG oslo_concurrency.lockutils [req-3c13cdd6-ce8c-4e5f-8278-82c63b79bc95 req-b3977a8f-dedc-4e15-87c5-4ace3f66ed9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:55 np0005539564 nova_compute[226295]: 2025-11-29 08:15:55.048 226310 DEBUG nova.compute.manager [req-3c13cdd6-ce8c-4e5f-8278-82c63b79bc95 req-b3977a8f-dedc-4e15-87c5-4ace3f66ed9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] No waiting events found dispatching network-vif-plugged-cee20a9b-a551-4e60-9b01-151c48dc45fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:55 np0005539564 nova_compute[226295]: 2025-11-29 08:15:55.049 226310 WARNING nova.compute.manager [req-3c13cdd6-ce8c-4e5f-8278-82c63b79bc95 req-b3977a8f-dedc-4e15-87c5-4ace3f66ed9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Received unexpected event network-vif-plugged-cee20a9b-a551-4e60-9b01-151c48dc45fe for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:15:55 np0005539564 nova_compute[226295]: 2025-11-29 08:15:55.116 226310 DEBUG oslo_concurrency.lockutils [None req-a9052df1-298f-4fbc-9b80-d1e7c046d0f1 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "bba0127d-6332-44fa-8fc1-c3d7321260fa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:55 np0005539564 nova_compute[226295]: 2025-11-29 08:15:55.175 226310 DEBUG nova.storage.rbd_utils [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] creating snapshot(1472ef1d379544eabb45f724b242ec09) on rbd image(6033af89-d6d3-45c5-bf88-b2f17800f12e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:15:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e298 e298: 3 total, 3 up, 3 in
Nov 29 03:15:55 np0005539564 nova_compute[226295]: 2025-11-29 08:15:55.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:55 np0005539564 nova_compute[226295]: 2025-11-29 08:15:55.380 226310 DEBUG nova.storage.rbd_utils [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] cloning vms/6033af89-d6d3-45c5-bf88-b2f17800f12e_disk@1472ef1d379544eabb45f724b242ec09 to images/a83442a1-fb28-462f-8936-713084bd46ef clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:15:55 np0005539564 nova_compute[226295]: 2025-11-29 08:15:55.538 226310 DEBUG nova.storage.rbd_utils [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] flattening images/a83442a1-fb28-462f-8936-713084bd46ef flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:15:55 np0005539564 nova_compute[226295]: 2025-11-29 08:15:55.624 226310 DEBUG nova.compute.manager [req-a7de1474-502e-4a56-8fac-dc18a2f2f816 req-d80d1ee6-ba61-4a89-aec3-994894ff25bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Received event network-vif-plugged-a221d286-cb0f-41fd-9997-b1687a875e0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:55 np0005539564 nova_compute[226295]: 2025-11-29 08:15:55.625 226310 DEBUG oslo_concurrency.lockutils [req-a7de1474-502e-4a56-8fac-dc18a2f2f816 req-d80d1ee6-ba61-4a89-aec3-994894ff25bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:55 np0005539564 nova_compute[226295]: 2025-11-29 08:15:55.627 226310 DEBUG oslo_concurrency.lockutils [req-a7de1474-502e-4a56-8fac-dc18a2f2f816 req-d80d1ee6-ba61-4a89-aec3-994894ff25bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:55 np0005539564 nova_compute[226295]: 2025-11-29 08:15:55.628 226310 DEBUG oslo_concurrency.lockutils [req-a7de1474-502e-4a56-8fac-dc18a2f2f816 req-d80d1ee6-ba61-4a89-aec3-994894ff25bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:55 np0005539564 nova_compute[226295]: 2025-11-29 08:15:55.629 226310 DEBUG nova.compute.manager [req-a7de1474-502e-4a56-8fac-dc18a2f2f816 req-d80d1ee6-ba61-4a89-aec3-994894ff25bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] No waiting events found dispatching network-vif-plugged-a221d286-cb0f-41fd-9997-b1687a875e0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:55 np0005539564 nova_compute[226295]: 2025-11-29 08:15:55.630 226310 WARNING nova.compute.manager [req-a7de1474-502e-4a56-8fac-dc18a2f2f816 req-d80d1ee6-ba61-4a89-aec3-994894ff25bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Received unexpected event network-vif-plugged-a221d286-cb0f-41fd-9997-b1687a875e0e for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Nov 29 03:15:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:15:55 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2076297525' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:15:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:15:55 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2076297525' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:56 np0005539564 nova_compute[226295]: 2025-11-29 08:15:56.136 226310 DEBUG nova.storage.rbd_utils [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] removing snapshot(1472ef1d379544eabb45f724b242ec09) on rbd image(6033af89-d6d3-45c5-bf88-b2f17800f12e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:15:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:56.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #88. Immutable memtables: 0.
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:15:56.375906) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 88
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404156376010, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 2451, "num_deletes": 255, "total_data_size": 5487630, "memory_usage": 5558504, "flush_reason": "Manual Compaction"}
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #89: started
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404156425305, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 89, "file_size": 3564009, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45432, "largest_seqno": 47878, "table_properties": {"data_size": 3554189, "index_size": 6122, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21375, "raw_average_key_size": 20, "raw_value_size": 3534194, "raw_average_value_size": 3441, "num_data_blocks": 264, "num_entries": 1027, "num_filter_entries": 1027, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403977, "oldest_key_time": 1764403977, "file_creation_time": 1764404156, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 49442 microseconds, and 15922 cpu microseconds.
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:15:56.425388) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #89: 3564009 bytes OK
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:15:56.425425) [db/memtable_list.cc:519] [default] Level-0 commit table #89 started
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:15:56.428335) [db/memtable_list.cc:722] [default] Level-0 commit table #89: memtable #1 done
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:15:56.428359) EVENT_LOG_v1 {"time_micros": 1764404156428352, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:15:56.428383) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 5476734, prev total WAL file size 5476734, number of live WAL files 2.
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000085.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:15:56.430650) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [89(3480KB)], [87(10171KB)]
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404156430697, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [89], "files_L6": [87], "score": -1, "input_data_size": 13979471, "oldest_snapshot_seqno": -1}
Nov 29 03:15:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:56.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #90: 7811 keys, 12113701 bytes, temperature: kUnknown
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404156601632, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 90, "file_size": 12113701, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12060715, "index_size": 32330, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19589, "raw_key_size": 202051, "raw_average_key_size": 25, "raw_value_size": 11920544, "raw_average_value_size": 1526, "num_data_blocks": 1272, "num_entries": 7811, "num_filter_entries": 7811, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764404156, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 90, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:15:56.602340) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 12113701 bytes
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:15:56.604986) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 81.7 rd, 70.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 9.9 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(7.3) write-amplify(3.4) OK, records in: 8339, records dropped: 528 output_compression: NoCompression
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:15:56.605011) EVENT_LOG_v1 {"time_micros": 1764404156604999, "job": 54, "event": "compaction_finished", "compaction_time_micros": 171066, "compaction_time_cpu_micros": 50842, "output_level": 6, "num_output_files": 1, "total_output_size": 12113701, "num_input_records": 8339, "num_output_records": 7811, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404156606031, "job": 54, "event": "table_file_deletion", "file_number": 89}
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000087.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404156608267, "job": 54, "event": "table_file_deletion", "file_number": 87}
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:15:56.430510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:15:56.608375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:15:56.608385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:15:56.608387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:15:56.608390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:15:56.608393) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:57 np0005539564 nova_compute[226295]: 2025-11-29 08:15:57.141 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:57 np0005539564 nova_compute[226295]: 2025-11-29 08:15:57.248 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e299 e299: 3 total, 3 up, 3 in
Nov 29 03:15:57 np0005539564 nova_compute[226295]: 2025-11-29 08:15:57.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:57 np0005539564 nova_compute[226295]: 2025-11-29 08:15:57.372 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:57 np0005539564 nova_compute[226295]: 2025-11-29 08:15:57.372 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:57 np0005539564 nova_compute[226295]: 2025-11-29 08:15:57.372 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:57 np0005539564 nova_compute[226295]: 2025-11-29 08:15:57.373 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:15:57 np0005539564 nova_compute[226295]: 2025-11-29 08:15:57.373 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:57 np0005539564 nova_compute[226295]: 2025-11-29 08:15:57.418 226310 DEBUG nova.storage.rbd_utils [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] creating snapshot(snap) on rbd image(a83442a1-fb28-462f-8936-713084bd46ef) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:15:57 np0005539564 nova_compute[226295]: 2025-11-29 08:15:57.719 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:15:57 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3437211877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:15:57 np0005539564 nova_compute[226295]: 2025-11-29 08:15:57.897 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:57 np0005539564 nova_compute[226295]: 2025-11-29 08:15:57.982 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:15:57 np0005539564 nova_compute[226295]: 2025-11-29 08:15:57.982 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:15:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:58.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:58 np0005539564 nova_compute[226295]: 2025-11-29 08:15:58.141 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:15:58 np0005539564 nova_compute[226295]: 2025-11-29 08:15:58.142 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4411MB free_disk=20.78512954711914GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:15:58 np0005539564 nova_compute[226295]: 2025-11-29 08:15:58.142 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:58 np0005539564 nova_compute[226295]: 2025-11-29 08:15:58.142 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:58 np0005539564 nova_compute[226295]: 2025-11-29 08:15:58.217 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 6033af89-d6d3-45c5-bf88-b2f17800f12e actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:15:58 np0005539564 nova_compute[226295]: 2025-11-29 08:15:58.217 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:15:58 np0005539564 nova_compute[226295]: 2025-11-29 08:15:58.217 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:15:58 np0005539564 nova_compute[226295]: 2025-11-29 08:15:58.253 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:58 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:15:58 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:15:58 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:15:58 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:15:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e300 e300: 3 total, 3 up, 3 in
Nov 29 03:15:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:15:58 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3880472690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:15:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:15:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:58.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:15:58 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2667284630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:15:58 np0005539564 nova_compute[226295]: 2025-11-29 08:15:58.669 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:58 np0005539564 nova_compute[226295]: 2025-11-29 08:15:58.674 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:15:58 np0005539564 nova_compute[226295]: 2025-11-29 08:15:58.745 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:15:58 np0005539564 nova_compute[226295]: 2025-11-29 08:15:58.774 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:15:58 np0005539564 nova_compute[226295]: 2025-11-29 08:15:58.775 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:59 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:15:59 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:15:59 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:16:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:16:00 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/315555466' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:16:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:16:00 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/315555466' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:16:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:00.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:00 np0005539564 nova_compute[226295]: 2025-11-29 08:16:00.204 226310 INFO nova.virt.libvirt.driver [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Snapshot image upload complete#033[00m
Nov 29 03:16:00 np0005539564 nova_compute[226295]: 2025-11-29 08:16:00.206 226310 DEBUG nova.compute.manager [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:00 np0005539564 nova_compute[226295]: 2025-11-29 08:16:00.268 226310 INFO nova.compute.manager [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Shelve offloading#033[00m
Nov 29 03:16:00 np0005539564 nova_compute[226295]: 2025-11-29 08:16:00.277 226310 INFO nova.virt.libvirt.driver [-] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Instance destroyed successfully.#033[00m
Nov 29 03:16:00 np0005539564 nova_compute[226295]: 2025-11-29 08:16:00.278 226310 DEBUG nova.compute.manager [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:00 np0005539564 nova_compute[226295]: 2025-11-29 08:16:00.281 226310 DEBUG oslo_concurrency.lockutils [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "refresh_cache-6033af89-d6d3-45c5-bf88-b2f17800f12e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:16:00 np0005539564 nova_compute[226295]: 2025-11-29 08:16:00.281 226310 DEBUG oslo_concurrency.lockutils [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquired lock "refresh_cache-6033af89-d6d3-45c5-bf88-b2f17800f12e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:16:00 np0005539564 nova_compute[226295]: 2025-11-29 08:16:00.282 226310 DEBUG nova.network.neutron [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:16:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:00.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:01 np0005539564 nova_compute[226295]: 2025-11-29 08:16:01.991 226310 DEBUG nova.network.neutron [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Updating instance_info_cache with network_info: [{"id": "a221d286-cb0f-41fd-9997-b1687a875e0e", "address": "fa:16:3e:5e:cf:8d", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa221d286-cb", "ovs_interfaceid": "a221d286-cb0f-41fd-9997-b1687a875e0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:02 np0005539564 nova_compute[226295]: 2025-11-29 08:16:02.012 226310 DEBUG oslo_concurrency.lockutils [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Releasing lock "refresh_cache-6033af89-d6d3-45c5-bf88-b2f17800f12e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:16:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:02.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:02 np0005539564 nova_compute[226295]: 2025-11-29 08:16:02.250 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e301 e301: 3 total, 3 up, 3 in
Nov 29 03:16:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:02.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:02 np0005539564 nova_compute[226295]: 2025-11-29 08:16:02.769 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:02 np0005539564 nova_compute[226295]: 2025-11-29 08:16:02.924 226310 INFO nova.virt.libvirt.driver [-] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Instance destroyed successfully.#033[00m
Nov 29 03:16:02 np0005539564 nova_compute[226295]: 2025-11-29 08:16:02.924 226310 DEBUG nova.objects.instance [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'resources' on Instance uuid 6033af89-d6d3-45c5-bf88-b2f17800f12e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:02 np0005539564 nova_compute[226295]: 2025-11-29 08:16:02.945 226310 DEBUG nova.virt.libvirt.vif [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:14:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-694037075',display_name='tempest-ServerActionsTestOtherB-server-694037075',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-694037075',id=115,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFR2A4rqHty1PxOihJGr6CLeieY2A6hQbQhWuRk7yYUwOYPvlgBFCeYpXPRg+EImok8PXcjU56J6yMvwfigxZeP4BreCe+MzD3uTdqP8PHZ6U4YNDwkQqqigObBB8nAoaw==',key_name='tempest-keypair-98169709',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:14:55Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-bqwpho60',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member',shelved_at='2025-11-29T08:16:00.206370',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='a83442a1-fb28-462f-8936-713084bd46ef'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:15:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=6033af89-d6d3-45c5-bf88-b2f17800f12e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "a221d286-cb0f-41fd-9997-b1687a875e0e", "address": "fa:16:3e:5e:cf:8d", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa221d286-cb", "ovs_interfaceid": "a221d286-cb0f-41fd-9997-b1687a875e0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:16:02 np0005539564 nova_compute[226295]: 2025-11-29 08:16:02.946 226310 DEBUG nova.network.os_vif_util [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "a221d286-cb0f-41fd-9997-b1687a875e0e", "address": "fa:16:3e:5e:cf:8d", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa221d286-cb", "ovs_interfaceid": "a221d286-cb0f-41fd-9997-b1687a875e0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:16:02 np0005539564 nova_compute[226295]: 2025-11-29 08:16:02.948 226310 DEBUG nova.network.os_vif_util [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:cf:8d,bridge_name='br-int',has_traffic_filtering=True,id=a221d286-cb0f-41fd-9997-b1687a875e0e,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa221d286-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:16:02 np0005539564 nova_compute[226295]: 2025-11-29 08:16:02.949 226310 DEBUG os_vif [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:cf:8d,bridge_name='br-int',has_traffic_filtering=True,id=a221d286-cb0f-41fd-9997-b1687a875e0e,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa221d286-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:16:02 np0005539564 nova_compute[226295]: 2025-11-29 08:16:02.952 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:02 np0005539564 nova_compute[226295]: 2025-11-29 08:16:02.953 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa221d286-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:02 np0005539564 nova_compute[226295]: 2025-11-29 08:16:02.956 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:02 np0005539564 nova_compute[226295]: 2025-11-29 08:16:02.958 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:02 np0005539564 nova_compute[226295]: 2025-11-29 08:16:02.962 226310 INFO os_vif [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:cf:8d,bridge_name='br-int',has_traffic_filtering=True,id=a221d286-cb0f-41fd-9997-b1687a875e0e,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa221d286-cb')#033[00m
Nov 29 03:16:03 np0005539564 nova_compute[226295]: 2025-11-29 08:16:03.042 226310 DEBUG nova.compute.manager [req-6b27a407-7d1b-4aee-a362-8f7adbdd30ed req-1d289fae-c0ce-4e05-b8a7-b1b0b119666e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Received event network-changed-a221d286-cb0f-41fd-9997-b1687a875e0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:03 np0005539564 nova_compute[226295]: 2025-11-29 08:16:03.042 226310 DEBUG nova.compute.manager [req-6b27a407-7d1b-4aee-a362-8f7adbdd30ed req-1d289fae-c0ce-4e05-b8a7-b1b0b119666e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Refreshing instance network info cache due to event network-changed-a221d286-cb0f-41fd-9997-b1687a875e0e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:16:03 np0005539564 nova_compute[226295]: 2025-11-29 08:16:03.043 226310 DEBUG oslo_concurrency.lockutils [req-6b27a407-7d1b-4aee-a362-8f7adbdd30ed req-1d289fae-c0ce-4e05-b8a7-b1b0b119666e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-6033af89-d6d3-45c5-bf88-b2f17800f12e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:16:03 np0005539564 nova_compute[226295]: 2025-11-29 08:16:03.043 226310 DEBUG oslo_concurrency.lockutils [req-6b27a407-7d1b-4aee-a362-8f7adbdd30ed req-1d289fae-c0ce-4e05-b8a7-b1b0b119666e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-6033af89-d6d3-45c5-bf88-b2f17800f12e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:16:03 np0005539564 nova_compute[226295]: 2025-11-29 08:16:03.044 226310 DEBUG nova.network.neutron [req-6b27a407-7d1b-4aee-a362-8f7adbdd30ed req-1d289fae-c0ce-4e05-b8a7-b1b0b119666e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Refreshing network info cache for port a221d286-cb0f-41fd-9997-b1687a875e0e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:16:03 np0005539564 nova_compute[226295]: 2025-11-29 08:16:03.424 226310 INFO nova.virt.libvirt.driver [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Deleting instance files /var/lib/nova/instances/6033af89-d6d3-45c5-bf88-b2f17800f12e_del#033[00m
Nov 29 03:16:03 np0005539564 nova_compute[226295]: 2025-11-29 08:16:03.426 226310 INFO nova.virt.libvirt.driver [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Deletion of /var/lib/nova/instances/6033af89-d6d3-45c5-bf88-b2f17800f12e_del complete#033[00m
Nov 29 03:16:03 np0005539564 nova_compute[226295]: 2025-11-29 08:16:03.719 226310 INFO nova.scheduler.client.report [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Deleted allocations for instance 6033af89-d6d3-45c5-bf88-b2f17800f12e#033[00m
Nov 29 03:16:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:03.728 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:03.729 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:03.729 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:03 np0005539564 nova_compute[226295]: 2025-11-29 08:16:03.790 226310 DEBUG oslo_concurrency.lockutils [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:03 np0005539564 nova_compute[226295]: 2025-11-29 08:16:03.791 226310 DEBUG oslo_concurrency.lockutils [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:03 np0005539564 nova_compute[226295]: 2025-11-29 08:16:03.810 226310 DEBUG oslo_concurrency.processutils [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:04.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:04 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/54634430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:04 np0005539564 nova_compute[226295]: 2025-11-29 08:16:04.250 226310 DEBUG oslo_concurrency.processutils [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:04 np0005539564 nova_compute[226295]: 2025-11-29 08:16:04.256 226310 DEBUG nova.compute.provider_tree [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:16:04 np0005539564 nova_compute[226295]: 2025-11-29 08:16:04.277 226310 DEBUG nova.scheduler.client.report [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:16:04 np0005539564 nova_compute[226295]: 2025-11-29 08:16:04.309 226310 DEBUG oslo_concurrency.lockutils [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:04 np0005539564 nova_compute[226295]: 2025-11-29 08:16:04.407 226310 DEBUG oslo_concurrency.lockutils [None req-68092ab9-ee15-47e1-ab00-fcd82e74decb ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 13.538s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:04.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:04 np0005539564 nova_compute[226295]: 2025-11-29 08:16:04.593 226310 DEBUG nova.network.neutron [req-6b27a407-7d1b-4aee-a362-8f7adbdd30ed req-1d289fae-c0ce-4e05-b8a7-b1b0b119666e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Updated VIF entry in instance network info cache for port a221d286-cb0f-41fd-9997-b1687a875e0e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:16:04 np0005539564 nova_compute[226295]: 2025-11-29 08:16:04.594 226310 DEBUG nova.network.neutron [req-6b27a407-7d1b-4aee-a362-8f7adbdd30ed req-1d289fae-c0ce-4e05-b8a7-b1b0b119666e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Updating instance_info_cache with network_info: [{"id": "a221d286-cb0f-41fd-9997-b1687a875e0e", "address": "fa:16:3e:5e:cf:8d", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": null, "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapa221d286-cb", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:04 np0005539564 nova_compute[226295]: 2025-11-29 08:16:04.627 226310 DEBUG oslo_concurrency.lockutils [req-6b27a407-7d1b-4aee-a362-8f7adbdd30ed req-1d289fae-c0ce-4e05-b8a7-b1b0b119666e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-6033af89-d6d3-45c5-bf88-b2f17800f12e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:16:05 np0005539564 podman[271047]: 2025-11-29 08:16:05.525578233 +0000 UTC m=+0.077143282 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd)
Nov 29 03:16:05 np0005539564 podman[271046]: 2025-11-29 08:16:05.553811765 +0000 UTC m=+0.106555306 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:16:05 np0005539564 podman[271048]: 2025-11-29 08:16:05.553805914 +0000 UTC m=+0.102374613 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:16:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:06.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:06 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:16:06 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:16:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:06.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:07 np0005539564 nova_compute[226295]: 2025-11-29 08:16:07.252 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:07 np0005539564 nova_compute[226295]: 2025-11-29 08:16:07.688 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404152.6862817, bba0127d-6332-44fa-8fc1-c3d7321260fa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:07 np0005539564 nova_compute[226295]: 2025-11-29 08:16:07.689 226310 INFO nova.compute.manager [-] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:16:07 np0005539564 nova_compute[226295]: 2025-11-29 08:16:07.721 226310 DEBUG nova.compute.manager [None req-a79e4d27-7e7c-476c-bf14-74abf2bba920 - - - - - -] [instance: bba0127d-6332-44fa-8fc1-c3d7321260fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:07 np0005539564 nova_compute[226295]: 2025-11-29 08:16:07.988 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:08.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:08.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:08 np0005539564 nova_compute[226295]: 2025-11-29 08:16:08.505 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404153.5032992, 6033af89-d6d3-45c5-bf88-b2f17800f12e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:08 np0005539564 nova_compute[226295]: 2025-11-29 08:16:08.505 226310 INFO nova.compute.manager [-] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:16:08 np0005539564 nova_compute[226295]: 2025-11-29 08:16:08.532 226310 DEBUG nova.compute.manager [None req-527838d0-9b20-4bd1-b085-4cb43a1418b4 - - - - - -] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:10 np0005539564 nova_compute[226295]: 2025-11-29 08:16:10.059 226310 DEBUG oslo_concurrency.lockutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "6033af89-d6d3-45c5-bf88-b2f17800f12e" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:10 np0005539564 nova_compute[226295]: 2025-11-29 08:16:10.059 226310 DEBUG oslo_concurrency.lockutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:10 np0005539564 nova_compute[226295]: 2025-11-29 08:16:10.059 226310 INFO nova.compute.manager [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Unshelving#033[00m
Nov 29 03:16:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:10.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:10 np0005539564 nova_compute[226295]: 2025-11-29 08:16:10.157 226310 DEBUG oslo_concurrency.lockutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:10 np0005539564 nova_compute[226295]: 2025-11-29 08:16:10.158 226310 DEBUG oslo_concurrency.lockutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:10 np0005539564 nova_compute[226295]: 2025-11-29 08:16:10.163 226310 DEBUG nova.objects.instance [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'pci_requests' on Instance uuid 6033af89-d6d3-45c5-bf88-b2f17800f12e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:10 np0005539564 nova_compute[226295]: 2025-11-29 08:16:10.179 226310 DEBUG nova.objects.instance [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'numa_topology' on Instance uuid 6033af89-d6d3-45c5-bf88-b2f17800f12e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:10 np0005539564 nova_compute[226295]: 2025-11-29 08:16:10.192 226310 DEBUG nova.virt.hardware [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:16:10 np0005539564 nova_compute[226295]: 2025-11-29 08:16:10.192 226310 INFO nova.compute.claims [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:16:10 np0005539564 nova_compute[226295]: 2025-11-29 08:16:10.300 226310 DEBUG oslo_concurrency.processutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:10.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:10 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3233351715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:10 np0005539564 nova_compute[226295]: 2025-11-29 08:16:10.784 226310 DEBUG oslo_concurrency.processutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:10 np0005539564 nova_compute[226295]: 2025-11-29 08:16:10.794 226310 DEBUG nova.compute.provider_tree [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:16:10 np0005539564 nova_compute[226295]: 2025-11-29 08:16:10.816 226310 DEBUG nova.scheduler.client.report [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:16:10 np0005539564 nova_compute[226295]: 2025-11-29 08:16:10.853 226310 DEBUG oslo_concurrency.lockutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:11 np0005539564 nova_compute[226295]: 2025-11-29 08:16:11.071 226310 INFO nova.network.neutron [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Updating port a221d286-cb0f-41fd-9997-b1687a875e0e with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 29 03:16:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:11 np0005539564 nova_compute[226295]: 2025-11-29 08:16:11.378 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:11 np0005539564 nova_compute[226295]: 2025-11-29 08:16:11.668 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:11 np0005539564 nova_compute[226295]: 2025-11-29 08:16:11.720 226310 DEBUG oslo_concurrency.lockutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "refresh_cache-6033af89-d6d3-45c5-bf88-b2f17800f12e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:16:11 np0005539564 nova_compute[226295]: 2025-11-29 08:16:11.721 226310 DEBUG oslo_concurrency.lockutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquired lock "refresh_cache-6033af89-d6d3-45c5-bf88-b2f17800f12e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:16:11 np0005539564 nova_compute[226295]: 2025-11-29 08:16:11.721 226310 DEBUG nova.network.neutron [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:16:11 np0005539564 nova_compute[226295]: 2025-11-29 08:16:11.852 226310 DEBUG nova.compute.manager [req-441b5f41-2642-4708-9e5e-ae9abfed0b43 req-06e5cdfe-5e04-4fba-ad89-043f0856d027 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Received event network-changed-a221d286-cb0f-41fd-9997-b1687a875e0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:11 np0005539564 nova_compute[226295]: 2025-11-29 08:16:11.852 226310 DEBUG nova.compute.manager [req-441b5f41-2642-4708-9e5e-ae9abfed0b43 req-06e5cdfe-5e04-4fba-ad89-043f0856d027 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Refreshing instance network info cache due to event network-changed-a221d286-cb0f-41fd-9997-b1687a875e0e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:16:11 np0005539564 nova_compute[226295]: 2025-11-29 08:16:11.853 226310 DEBUG oslo_concurrency.lockutils [req-441b5f41-2642-4708-9e5e-ae9abfed0b43 req-06e5cdfe-5e04-4fba-ad89-043f0856d027 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-6033af89-d6d3-45c5-bf88-b2f17800f12e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:16:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:12.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:12 np0005539564 nova_compute[226295]: 2025-11-29 08:16:12.253 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:12.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:12 np0005539564 nova_compute[226295]: 2025-11-29 08:16:12.990 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:14.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:14 np0005539564 nova_compute[226295]: 2025-11-29 08:16:14.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:14 np0005539564 nova_compute[226295]: 2025-11-29 08:16:14.344 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:14 np0005539564 nova_compute[226295]: 2025-11-29 08:16:14.345 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:14 np0005539564 nova_compute[226295]: 2025-11-29 08:16:14.346 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:14 np0005539564 nova_compute[226295]: 2025-11-29 08:16:14.346 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:14 np0005539564 nova_compute[226295]: 2025-11-29 08:16:14.347 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:14 np0005539564 nova_compute[226295]: 2025-11-29 08:16:14.347 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:14 np0005539564 nova_compute[226295]: 2025-11-29 08:16:14.392 226310 DEBUG nova.virt.libvirt.imagecache [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Nov 29 03:16:14 np0005539564 nova_compute[226295]: 2025-11-29 08:16:14.392 226310 DEBUG nova.virt.libvirt.imagecache [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Image id a83442a1-fb28-462f-8936-713084bd46ef yields fingerprint d48227f8ec4e25929f75bb163c584e928f7abe6d _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Nov 29 03:16:14 np0005539564 nova_compute[226295]: 2025-11-29 08:16:14.393 226310 DEBUG nova.virt.libvirt.imagecache [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Nov 29 03:16:14 np0005539564 nova_compute[226295]: 2025-11-29 08:16:14.394 226310 WARNING nova.virt.libvirt.imagecache [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf#033[00m
Nov 29 03:16:14 np0005539564 nova_compute[226295]: 2025-11-29 08:16:14.395 226310 WARNING nova.virt.libvirt.imagecache [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242#033[00m
Nov 29 03:16:14 np0005539564 nova_compute[226295]: 2025-11-29 08:16:14.395 226310 WARNING nova.virt.libvirt.imagecache [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/fd30d21c6e0d4e83bf9e777a8ff91c9ac77f58c9#033[00m
Nov 29 03:16:14 np0005539564 nova_compute[226295]: 2025-11-29 08:16:14.395 226310 INFO nova.virt.libvirt.imagecache [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Removable base files: /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 /var/lib/nova/instances/_base/fd30d21c6e0d4e83bf9e777a8ff91c9ac77f58c9#033[00m
Nov 29 03:16:14 np0005539564 nova_compute[226295]: 2025-11-29 08:16:14.396 226310 INFO nova.virt.libvirt.imagecache [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf#033[00m
Nov 29 03:16:14 np0005539564 nova_compute[226295]: 2025-11-29 08:16:14.397 226310 INFO nova.virt.libvirt.imagecache [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242#033[00m
Nov 29 03:16:14 np0005539564 nova_compute[226295]: 2025-11-29 08:16:14.397 226310 INFO nova.virt.libvirt.imagecache [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/fd30d21c6e0d4e83bf9e777a8ff91c9ac77f58c9#033[00m
Nov 29 03:16:14 np0005539564 nova_compute[226295]: 2025-11-29 08:16:14.398 226310 DEBUG nova.virt.libvirt.imagecache [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Nov 29 03:16:14 np0005539564 nova_compute[226295]: 2025-11-29 08:16:14.398 226310 DEBUG nova.virt.libvirt.imagecache [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Nov 29 03:16:14 np0005539564 nova_compute[226295]: 2025-11-29 08:16:14.398 226310 DEBUG nova.virt.libvirt.imagecache [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Nov 29 03:16:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:14.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:14 np0005539564 nova_compute[226295]: 2025-11-29 08:16:14.991 226310 DEBUG nova.network.neutron [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Updating instance_info_cache with network_info: [{"id": "a221d286-cb0f-41fd-9997-b1687a875e0e", "address": "fa:16:3e:5e:cf:8d", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa221d286-cb", "ovs_interfaceid": "a221d286-cb0f-41fd-9997-b1687a875e0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:15 np0005539564 nova_compute[226295]: 2025-11-29 08:16:15.016 226310 DEBUG oslo_concurrency.lockutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Releasing lock "refresh_cache-6033af89-d6d3-45c5-bf88-b2f17800f12e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:16:15 np0005539564 nova_compute[226295]: 2025-11-29 08:16:15.018 226310 DEBUG nova.virt.libvirt.driver [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:16:15 np0005539564 nova_compute[226295]: 2025-11-29 08:16:15.019 226310 INFO nova.virt.libvirt.driver [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Creating image(s)#033[00m
Nov 29 03:16:15 np0005539564 nova_compute[226295]: 2025-11-29 08:16:15.063 226310 DEBUG nova.storage.rbd_utils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 6033af89-d6d3-45c5-bf88-b2f17800f12e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:15 np0005539564 nova_compute[226295]: 2025-11-29 08:16:15.068 226310 DEBUG nova.objects.instance [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'trusted_certs' on Instance uuid 6033af89-d6d3-45c5-bf88-b2f17800f12e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:15 np0005539564 nova_compute[226295]: 2025-11-29 08:16:15.070 226310 DEBUG oslo_concurrency.lockutils [req-441b5f41-2642-4708-9e5e-ae9abfed0b43 req-06e5cdfe-5e04-4fba-ad89-043f0856d027 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-6033af89-d6d3-45c5-bf88-b2f17800f12e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:16:15 np0005539564 nova_compute[226295]: 2025-11-29 08:16:15.070 226310 DEBUG nova.network.neutron [req-441b5f41-2642-4708-9e5e-ae9abfed0b43 req-06e5cdfe-5e04-4fba-ad89-043f0856d027 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Refreshing network info cache for port a221d286-cb0f-41fd-9997-b1687a875e0e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:16:15 np0005539564 nova_compute[226295]: 2025-11-29 08:16:15.137 226310 DEBUG nova.storage.rbd_utils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 6033af89-d6d3-45c5-bf88-b2f17800f12e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:15 np0005539564 nova_compute[226295]: 2025-11-29 08:16:15.181 226310 DEBUG nova.storage.rbd_utils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 6033af89-d6d3-45c5-bf88-b2f17800f12e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:15 np0005539564 nova_compute[226295]: 2025-11-29 08:16:15.188 226310 DEBUG oslo_concurrency.lockutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "d48227f8ec4e25929f75bb163c584e928f7abe6d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:15 np0005539564 nova_compute[226295]: 2025-11-29 08:16:15.190 226310 DEBUG oslo_concurrency.lockutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "d48227f8ec4e25929f75bb163c584e928f7abe6d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:15 np0005539564 nova_compute[226295]: 2025-11-29 08:16:15.646 226310 DEBUG nova.virt.libvirt.imagebackend [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Image locations are: [{'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/a83442a1-fb28-462f-8936-713084bd46ef/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/a83442a1-fb28-462f-8936-713084bd46ef/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:16:15 np0005539564 nova_compute[226295]: 2025-11-29 08:16:15.748 226310 DEBUG nova.virt.libvirt.imagebackend [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Selected location: {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/a83442a1-fb28-462f-8936-713084bd46ef/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 29 03:16:15 np0005539564 nova_compute[226295]: 2025-11-29 08:16:15.749 226310 DEBUG nova.storage.rbd_utils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] cloning images/a83442a1-fb28-462f-8936-713084bd46ef@snap to None/6033af89-d6d3-45c5-bf88-b2f17800f12e_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:16:15 np0005539564 nova_compute[226295]: 2025-11-29 08:16:15.937 226310 DEBUG oslo_concurrency.lockutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "d48227f8ec4e25929f75bb163c584e928f7abe6d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.139 226310 DEBUG nova.objects.instance [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'migration_context' on Instance uuid 6033af89-d6d3-45c5-bf88-b2f17800f12e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:16.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.229 226310 DEBUG nova.storage.rbd_utils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] flattening vms/6033af89-d6d3-45c5-bf88-b2f17800f12e_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:16:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:16.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.638 226310 DEBUG nova.virt.libvirt.driver [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Image rbd:vms/6033af89-d6d3-45c5-bf88-b2f17800f12e_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.639 226310 DEBUG nova.virt.libvirt.driver [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.639 226310 DEBUG nova.virt.libvirt.driver [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Ensure instance console log exists: /var/lib/nova/instances/6033af89-d6d3-45c5-bf88-b2f17800f12e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.640 226310 DEBUG oslo_concurrency.lockutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.640 226310 DEBUG oslo_concurrency.lockutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.641 226310 DEBUG oslo_concurrency.lockutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.644 226310 DEBUG nova.virt.libvirt.driver [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Start _get_guest_xml network_info=[{"id": "a221d286-cb0f-41fd-9997-b1687a875e0e", "address": "fa:16:3e:5e:cf:8d", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa221d286-cb", "ovs_interfaceid": "a221d286-cb0f-41fd-9997-b1687a875e0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:15:50Z,direct_url=<?>,disk_format='raw',id=a83442a1-fb28-462f-8936-713084bd46ef,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-694037075-shelved',owner='ba867fac17034bb28fe2cdb0fff3af2b',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:15:59Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.650 226310 WARNING nova.virt.libvirt.driver [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.655 226310 DEBUG nova.virt.libvirt.host [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.656 226310 DEBUG nova.virt.libvirt.host [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.659 226310 DEBUG nova.virt.libvirt.host [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.660 226310 DEBUG nova.virt.libvirt.host [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.662 226310 DEBUG nova.virt.libvirt.driver [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.662 226310 DEBUG nova.virt.hardware [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:15:50Z,direct_url=<?>,disk_format='raw',id=a83442a1-fb28-462f-8936-713084bd46ef,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-694037075-shelved',owner='ba867fac17034bb28fe2cdb0fff3af2b',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:15:59Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.663 226310 DEBUG nova.virt.hardware [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.664 226310 DEBUG nova.virt.hardware [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.664 226310 DEBUG nova.virt.hardware [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.664 226310 DEBUG nova.virt.hardware [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.665 226310 DEBUG nova.virt.hardware [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.665 226310 DEBUG nova.virt.hardware [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.666 226310 DEBUG nova.virt.hardware [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.666 226310 DEBUG nova.virt.hardware [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.667 226310 DEBUG nova.virt.hardware [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.668 226310 DEBUG nova.virt.hardware [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.668 226310 DEBUG nova.objects.instance [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6033af89-d6d3-45c5-bf88-b2f17800f12e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.685 226310 DEBUG oslo_concurrency.processutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.986 226310 DEBUG nova.network.neutron [req-441b5f41-2642-4708-9e5e-ae9abfed0b43 req-06e5cdfe-5e04-4fba-ad89-043f0856d027 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Updated VIF entry in instance network info cache for port a221d286-cb0f-41fd-9997-b1687a875e0e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:16:16 np0005539564 nova_compute[226295]: 2025-11-29 08:16:16.987 226310 DEBUG nova.network.neutron [req-441b5f41-2642-4708-9e5e-ae9abfed0b43 req-06e5cdfe-5e04-4fba-ad89-043f0856d027 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Updating instance_info_cache with network_info: [{"id": "a221d286-cb0f-41fd-9997-b1687a875e0e", "address": "fa:16:3e:5e:cf:8d", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa221d286-cb", "ovs_interfaceid": "a221d286-cb0f-41fd-9997-b1687a875e0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.003 226310 DEBUG oslo_concurrency.lockutils [req-441b5f41-2642-4708-9e5e-ae9abfed0b43 req-06e5cdfe-5e04-4fba-ad89-043f0856d027 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-6033af89-d6d3-45c5-bf88-b2f17800f12e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:16:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:17 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/982415657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.185 226310 DEBUG oslo_concurrency.processutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.225 226310 DEBUG nova.storage.rbd_utils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 6033af89-d6d3-45c5-bf88-b2f17800f12e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.230 226310 DEBUG oslo_concurrency.processutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.256 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:17 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2378779951' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.664 226310 DEBUG oslo_concurrency.processutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.666 226310 DEBUG nova.virt.libvirt.vif [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:14:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-694037075',display_name='tempest-ServerActionsTestOtherB-server-694037075',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-694037075',id=115,image_ref='a83442a1-fb28-462f-8936-713084bd46ef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-98169709',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:14:55Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-bqwpho60',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',ima
ge_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member',shelved_at='2025-11-29T08:16:00.206370',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='a83442a1-fb28-462f-8936-713084bd46ef'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=6033af89-d6d3-45c5-bf88-b2f17800f12e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "a221d286-cb0f-41fd-9997-b1687a875e0e", "address": "fa:16:3e:5e:cf:8d", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa221d286-cb", "ovs_interfaceid": "a221d286-cb0f-41fd-9997-b1687a875e0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.666 226310 DEBUG nova.network.os_vif_util [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "a221d286-cb0f-41fd-9997-b1687a875e0e", "address": "fa:16:3e:5e:cf:8d", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa221d286-cb", "ovs_interfaceid": "a221d286-cb0f-41fd-9997-b1687a875e0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.667 226310 DEBUG nova.network.os_vif_util [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:cf:8d,bridge_name='br-int',has_traffic_filtering=True,id=a221d286-cb0f-41fd-9997-b1687a875e0e,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa221d286-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.670 226310 DEBUG nova.objects.instance [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'pci_devices' on Instance uuid 6033af89-d6d3-45c5-bf88-b2f17800f12e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.696 226310 DEBUG nova.virt.libvirt.driver [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:16:17 np0005539564 nova_compute[226295]:  <uuid>6033af89-d6d3-45c5-bf88-b2f17800f12e</uuid>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:  <name>instance-00000073</name>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerActionsTestOtherB-server-694037075</nova:name>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:16:16</nova:creationTime>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:16:17 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:        <nova:user uuid="ca93c8e3eac142c0aa6b61807727dea2">tempest-ServerActionsTestOtherB-325732369-project-member</nova:user>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:        <nova:project uuid="ba867fac17034bb28fe2cdb0fff3af2b">tempest-ServerActionsTestOtherB-325732369</nova:project>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="a83442a1-fb28-462f-8936-713084bd46ef"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:        <nova:port uuid="a221d286-cb0f-41fd-9997-b1687a875e0e">
Nov 29 03:16:17 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <entry name="serial">6033af89-d6d3-45c5-bf88-b2f17800f12e</entry>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <entry name="uuid">6033af89-d6d3-45c5-bf88-b2f17800f12e</entry>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/6033af89-d6d3-45c5-bf88-b2f17800f12e_disk">
Nov 29 03:16:17 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:16:17 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/6033af89-d6d3-45c5-bf88-b2f17800f12e_disk.config">
Nov 29 03:16:17 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:16:17 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:5e:cf:8d"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <target dev="tapa221d286-cb"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/6033af89-d6d3-45c5-bf88-b2f17800f12e/console.log" append="off"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <input type="keyboard" bus="usb"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:16:17 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:16:17 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:16:17 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:16:17 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.697 226310 DEBUG nova.compute.manager [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Preparing to wait for external event network-vif-plugged-a221d286-cb0f-41fd-9997-b1687a875e0e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.698 226310 DEBUG oslo_concurrency.lockutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.699 226310 DEBUG oslo_concurrency.lockutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.699 226310 DEBUG oslo_concurrency.lockutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.701 226310 DEBUG nova.virt.libvirt.vif [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:14:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-694037075',display_name='tempest-ServerActionsTestOtherB-server-694037075',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-694037075',id=115,image_ref='a83442a1-fb28-462f-8936-713084bd46ef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-98169709',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:14:55Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-bqwpho60',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='v
irtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member',shelved_at='2025-11-29T08:16:00.206370',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='a83442a1-fb28-462f-8936-713084bd46ef'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=6033af89-d6d3-45c5-bf88-b2f17800f12e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "a221d286-cb0f-41fd-9997-b1687a875e0e", "address": "fa:16:3e:5e:cf:8d", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa221d286-cb", "ovs_interfaceid": "a221d286-cb0f-41fd-9997-b1687a875e0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.701 226310 DEBUG nova.network.os_vif_util [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "a221d286-cb0f-41fd-9997-b1687a875e0e", "address": "fa:16:3e:5e:cf:8d", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa221d286-cb", "ovs_interfaceid": "a221d286-cb0f-41fd-9997-b1687a875e0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.702 226310 DEBUG nova.network.os_vif_util [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:cf:8d,bridge_name='br-int',has_traffic_filtering=True,id=a221d286-cb0f-41fd-9997-b1687a875e0e,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa221d286-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.703 226310 DEBUG os_vif [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:cf:8d,bridge_name='br-int',has_traffic_filtering=True,id=a221d286-cb0f-41fd-9997-b1687a875e0e,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa221d286-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.704 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.705 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.706 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.711 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.712 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa221d286-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.713 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa221d286-cb, col_values=(('external_ids', {'iface-id': 'a221d286-cb0f-41fd-9997-b1687a875e0e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:cf:8d', 'vm-uuid': '6033af89-d6d3-45c5-bf88-b2f17800f12e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.715 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:17 np0005539564 NetworkManager[48997]: <info>  [1764404177.7166] manager: (tapa221d286-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/219)
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.718 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.725 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.728 226310 INFO os_vif [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:cf:8d,bridge_name='br-int',has_traffic_filtering=True,id=a221d286-cb0f-41fd-9997-b1687a875e0e,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa221d286-cb')#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.797 226310 DEBUG nova.virt.libvirt.driver [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.798 226310 DEBUG nova.virt.libvirt.driver [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.798 226310 DEBUG nova.virt.libvirt.driver [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No VIF found with MAC fa:16:3e:5e:cf:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.799 226310 INFO nova.virt.libvirt.driver [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Using config drive#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.837 226310 DEBUG nova.storage.rbd_utils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 6033af89-d6d3-45c5-bf88-b2f17800f12e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.862 226310 DEBUG nova.objects.instance [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'ec2_ids' on Instance uuid 6033af89-d6d3-45c5-bf88-b2f17800f12e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:17 np0005539564 nova_compute[226295]: 2025-11-29 08:16:17.932 226310 DEBUG nova.objects.instance [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'keypairs' on Instance uuid 6033af89-d6d3-45c5-bf88-b2f17800f12e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:18.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:18.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:18 np0005539564 nova_compute[226295]: 2025-11-29 08:16:18.985 226310 INFO nova.virt.libvirt.driver [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Creating config drive at /var/lib/nova/instances/6033af89-d6d3-45c5-bf88-b2f17800f12e/disk.config#033[00m
Nov 29 03:16:18 np0005539564 nova_compute[226295]: 2025-11-29 08:16:18.997 226310 DEBUG oslo_concurrency.processutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6033af89-d6d3-45c5-bf88-b2f17800f12e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1cqmj4_f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:19 np0005539564 nova_compute[226295]: 2025-11-29 08:16:19.170 226310 DEBUG oslo_concurrency.processutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6033af89-d6d3-45c5-bf88-b2f17800f12e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1cqmj4_f" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:19 np0005539564 nova_compute[226295]: 2025-11-29 08:16:19.208 226310 DEBUG nova.storage.rbd_utils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 6033af89-d6d3-45c5-bf88-b2f17800f12e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:19 np0005539564 nova_compute[226295]: 2025-11-29 08:16:19.213 226310 DEBUG oslo_concurrency.processutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6033af89-d6d3-45c5-bf88-b2f17800f12e/disk.config 6033af89-d6d3-45c5-bf88-b2f17800f12e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:19 np0005539564 nova_compute[226295]: 2025-11-29 08:16:19.427 226310 DEBUG oslo_concurrency.processutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6033af89-d6d3-45c5-bf88-b2f17800f12e/disk.config 6033af89-d6d3-45c5-bf88-b2f17800f12e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:19 np0005539564 nova_compute[226295]: 2025-11-29 08:16:19.428 226310 INFO nova.virt.libvirt.driver [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Deleting local config drive /var/lib/nova/instances/6033af89-d6d3-45c5-bf88-b2f17800f12e/disk.config because it was imported into RBD.#033[00m
Nov 29 03:16:19 np0005539564 kernel: tapa221d286-cb: entered promiscuous mode
Nov 29 03:16:19 np0005539564 NetworkManager[48997]: <info>  [1764404179.5104] manager: (tapa221d286-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/220)
Nov 29 03:16:19 np0005539564 ovn_controller[130591]: 2025-11-29T08:16:19Z|00435|binding|INFO|Claiming lport a221d286-cb0f-41fd-9997-b1687a875e0e for this chassis.
Nov 29 03:16:19 np0005539564 ovn_controller[130591]: 2025-11-29T08:16:19Z|00436|binding|INFO|a221d286-cb0f-41fd-9997-b1687a875e0e: Claiming fa:16:3e:5e:cf:8d 10.100.0.10
Nov 29 03:16:19 np0005539564 nova_compute[226295]: 2025-11-29 08:16:19.511 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:19 np0005539564 nova_compute[226295]: 2025-11-29 08:16:19.520 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:19 np0005539564 nova_compute[226295]: 2025-11-29 08:16:19.523 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:19 np0005539564 nova_compute[226295]: 2025-11-29 08:16:19.531 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:19 np0005539564 NetworkManager[48997]: <info>  [1764404179.5331] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Nov 29 03:16:19 np0005539564 NetworkManager[48997]: <info>  [1764404179.5364] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/222)
Nov 29 03:16:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:19.537 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:cf:8d 10.100.0.10'], port_security=['fa:16:3e:5e:cf:8d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6033af89-d6d3-45c5-bf88-b2f17800f12e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba867fac17034bb28fe2cdb0fff3af2b', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'a54db614-4504-4e8e-a3a5-27d3f60f6cdf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5e4b2f3-5e6e-48f8-b35a-ab61c62108a6, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=a221d286-cb0f-41fd-9997-b1687a875e0e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:16:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:19.539 139780 INFO neutron.agent.ovn.metadata.agent [-] Port a221d286-cb0f-41fd-9997-b1687a875e0e in datapath 4d5b8c11-b69e-4a74-846b-03943fb29a81 bound to our chassis#033[00m
Nov 29 03:16:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:19.542 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d5b8c11-b69e-4a74-846b-03943fb29a81#033[00m
Nov 29 03:16:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:19.557 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[84f9d335-ff84-4d37-ab66-84d54c4c00c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:19.559 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d5b8c11-b1 in ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:16:19 np0005539564 systemd-machined[190128]: New machine qemu-52-instance-00000073.
Nov 29 03:16:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:19.562 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d5b8c11-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:16:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:19.562 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4edff85a-dd99-4084-88a7-1706de02c24b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:19.563 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f4905fb3-2898-405e-8fc6-adb8a4b56055]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:19.579 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[37b48ef4-be2e-4490-98f4-09a1444bf7fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:19 np0005539564 systemd[1]: Started Virtual Machine qemu-52-instance-00000073.
Nov 29 03:16:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:19.606 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[924f6a5d-c184-435d-b5d2-246db9aafbd1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:19 np0005539564 systemd-udevd[271531]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:16:19 np0005539564 NetworkManager[48997]: <info>  [1764404179.6432] device (tapa221d286-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:16:19 np0005539564 NetworkManager[48997]: <info>  [1764404179.6444] device (tapa221d286-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:16:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:19.647 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[96c9f14a-c8ed-4f67-8f88-81a3eca66e48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:19 np0005539564 NetworkManager[48997]: <info>  [1764404179.6577] manager: (tap4d5b8c11-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/223)
Nov 29 03:16:19 np0005539564 systemd-udevd[271537]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:16:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:19.657 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[55adf5c9-05b4-4b78-a7ed-a8f1391bdfcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:19.706 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[c1ddbc29-6e59-4236-89ea-d3262f57c117]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:19.709 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[0acfa99e-e9c7-427f-8c3c-d0d633de4264]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:19 np0005539564 NetworkManager[48997]: <info>  [1764404179.7358] device (tap4d5b8c11-b0): carrier: link connected
Nov 29 03:16:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:19.739 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[302c8112-dddf-4781-955c-46800e7da8be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:19 np0005539564 nova_compute[226295]: 2025-11-29 08:16:19.748 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:19 np0005539564 nova_compute[226295]: 2025-11-29 08:16:19.768 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:19.770 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[745f4068-9972-42ff-9c08-9de232ad59aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d5b8c11-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:06:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 137], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 715441, 'reachable_time': 37383, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271560, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:19 np0005539564 ovn_controller[130591]: 2025-11-29T08:16:19Z|00437|binding|INFO|Setting lport a221d286-cb0f-41fd-9997-b1687a875e0e ovn-installed in OVS
Nov 29 03:16:19 np0005539564 ovn_controller[130591]: 2025-11-29T08:16:19Z|00438|binding|INFO|Setting lport a221d286-cb0f-41fd-9997-b1687a875e0e up in Southbound
Nov 29 03:16:19 np0005539564 nova_compute[226295]: 2025-11-29 08:16:19.781 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:19.790 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e8b32429-ba45-4005-8ef8-f4ba80221e6a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:6d1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 715441, 'tstamp': 715441}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271561, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:19.812 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[04e43245-0d7a-4d60-a17f-42071da3b4f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d5b8c11-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:06:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 137], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 715441, 'reachable_time': 37383, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271562, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:19.861 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[734f5502-c690-42a2-98ed-793c145944c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:19.954 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[96fe15e2-2389-4a87-8a17-c7e984add171]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:19.956 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d5b8c11-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:19.956 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:16:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:19.956 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d5b8c11-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:19 np0005539564 kernel: tap4d5b8c11-b0: entered promiscuous mode
Nov 29 03:16:19 np0005539564 nova_compute[226295]: 2025-11-29 08:16:19.959 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:19 np0005539564 NetworkManager[48997]: <info>  [1764404179.9601] manager: (tap4d5b8c11-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Nov 29 03:16:19 np0005539564 nova_compute[226295]: 2025-11-29 08:16:19.962 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:19.965 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d5b8c11-b0, col_values=(('external_ids', {'iface-id': 'a2e47e7a-aef0-4c09-aeef-4a0d63960d7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:19 np0005539564 ovn_controller[130591]: 2025-11-29T08:16:19Z|00439|binding|INFO|Releasing lport a2e47e7a-aef0-4c09-aeef-4a0d63960d7b from this chassis (sb_readonly=0)
Nov 29 03:16:19 np0005539564 nova_compute[226295]: 2025-11-29 08:16:19.966 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:19 np0005539564 nova_compute[226295]: 2025-11-29 08:16:19.997 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:19.999 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d5b8c11-b69e-4a74-846b-03943fb29a81.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d5b8c11-b69e-4a74-846b-03943fb29a81.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:20.001 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ae16791c-43d6-491a-b06c-bfdaca3257ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:20.002 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-4d5b8c11-b69e-4a74-846b-03943fb29a81
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/4d5b8c11-b69e-4a74-846b-03943fb29a81.pid.haproxy
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 4d5b8c11-b69e-4a74-846b-03943fb29a81
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:16:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:20.003 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'env', 'PROCESS_TAG=haproxy-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d5b8c11-b69e-4a74-846b-03943fb29a81.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:16:20 np0005539564 nova_compute[226295]: 2025-11-29 08:16:20.113 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404180.1126776, 6033af89-d6d3-45c5-bf88-b2f17800f12e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:20 np0005539564 nova_compute[226295]: 2025-11-29 08:16:20.114 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] VM Started (Lifecycle Event)#033[00m
Nov 29 03:16:20 np0005539564 nova_compute[226295]: 2025-11-29 08:16:20.135 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:20 np0005539564 nova_compute[226295]: 2025-11-29 08:16:20.139 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404180.1130295, 6033af89-d6d3-45c5-bf88-b2f17800f12e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:20 np0005539564 nova_compute[226295]: 2025-11-29 08:16:20.139 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:16:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:20.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:20 np0005539564 nova_compute[226295]: 2025-11-29 08:16:20.170 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:20 np0005539564 nova_compute[226295]: 2025-11-29 08:16:20.174 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:16:20 np0005539564 nova_compute[226295]: 2025-11-29 08:16:20.201 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:16:20 np0005539564 podman[271636]: 2025-11-29 08:16:20.42534894 +0000 UTC m=+0.062292481 container create 4d6a8c60751241a5ec7602faefdff4ad0551842e18ae00c14e3cd2cf40fab0ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:16:20 np0005539564 systemd[1]: Started libpod-conmon-4d6a8c60751241a5ec7602faefdff4ad0551842e18ae00c14e3cd2cf40fab0ac.scope.
Nov 29 03:16:20 np0005539564 podman[271636]: 2025-11-29 08:16:20.392076992 +0000 UTC m=+0.029020523 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:16:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:20.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:20 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:16:20 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/165bc8892401317b947542177cd57b1dd10fae54cf9b315c4b719d0b9e909bde/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:20 np0005539564 podman[271636]: 2025-11-29 08:16:20.518631177 +0000 UTC m=+0.155574678 container init 4d6a8c60751241a5ec7602faefdff4ad0551842e18ae00c14e3cd2cf40fab0ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:16:20 np0005539564 podman[271636]: 2025-11-29 08:16:20.524144746 +0000 UTC m=+0.161088247 container start 4d6a8c60751241a5ec7602faefdff4ad0551842e18ae00c14e3cd2cf40fab0ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:16:20 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[271651]: [NOTICE]   (271655) : New worker (271657) forked
Nov 29 03:16:20 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[271651]: [NOTICE]   (271655) : Loading success.
Nov 29 03:16:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:21 np0005539564 nova_compute[226295]: 2025-11-29 08:16:21.936 226310 DEBUG nova.compute.manager [req-3a217c93-2cd9-439a-9eae-8307c749dc54 req-e37639ca-f494-4b68-8e5c-e7808b990e25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Received event network-vif-plugged-a221d286-cb0f-41fd-9997-b1687a875e0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:21 np0005539564 nova_compute[226295]: 2025-11-29 08:16:21.937 226310 DEBUG oslo_concurrency.lockutils [req-3a217c93-2cd9-439a-9eae-8307c749dc54 req-e37639ca-f494-4b68-8e5c-e7808b990e25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:21 np0005539564 nova_compute[226295]: 2025-11-29 08:16:21.937 226310 DEBUG oslo_concurrency.lockutils [req-3a217c93-2cd9-439a-9eae-8307c749dc54 req-e37639ca-f494-4b68-8e5c-e7808b990e25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:21 np0005539564 nova_compute[226295]: 2025-11-29 08:16:21.938 226310 DEBUG oslo_concurrency.lockutils [req-3a217c93-2cd9-439a-9eae-8307c749dc54 req-e37639ca-f494-4b68-8e5c-e7808b990e25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:21 np0005539564 nova_compute[226295]: 2025-11-29 08:16:21.938 226310 DEBUG nova.compute.manager [req-3a217c93-2cd9-439a-9eae-8307c749dc54 req-e37639ca-f494-4b68-8e5c-e7808b990e25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Processing event network-vif-plugged-a221d286-cb0f-41fd-9997-b1687a875e0e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:16:21 np0005539564 nova_compute[226295]: 2025-11-29 08:16:21.939 226310 DEBUG nova.compute.manager [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:16:21 np0005539564 nova_compute[226295]: 2025-11-29 08:16:21.945 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404181.9453785, 6033af89-d6d3-45c5-bf88-b2f17800f12e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:21 np0005539564 nova_compute[226295]: 2025-11-29 08:16:21.946 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:16:21 np0005539564 nova_compute[226295]: 2025-11-29 08:16:21.950 226310 DEBUG nova.virt.libvirt.driver [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:16:21 np0005539564 nova_compute[226295]: 2025-11-29 08:16:21.954 226310 INFO nova.virt.libvirt.driver [-] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Instance spawned successfully.#033[00m
Nov 29 03:16:21 np0005539564 nova_compute[226295]: 2025-11-29 08:16:21.976 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:21 np0005539564 nova_compute[226295]: 2025-11-29 08:16:21.981 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:16:22 np0005539564 nova_compute[226295]: 2025-11-29 08:16:22.016 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:16:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:22.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:22 np0005539564 nova_compute[226295]: 2025-11-29 08:16:22.259 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:22.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:22 np0005539564 nova_compute[226295]: 2025-11-29 08:16:22.754 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e302 e302: 3 total, 3 up, 3 in
Nov 29 03:16:23 np0005539564 nova_compute[226295]: 2025-11-29 08:16:23.377 226310 DEBUG nova.compute.manager [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:23 np0005539564 nova_compute[226295]: 2025-11-29 08:16:23.450 226310 DEBUG oslo_concurrency.lockutils [None req-c2c96530-425f-41bc-b75f-76866b1c7a40 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 13.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:24 np0005539564 nova_compute[226295]: 2025-11-29 08:16:24.080 226310 DEBUG nova.compute.manager [req-a8394c5e-a56e-48f7-be6e-dacfd652e057 req-e6eaa235-6495-4f83-8625-be713be07e18 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Received event network-vif-plugged-a221d286-cb0f-41fd-9997-b1687a875e0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:24 np0005539564 nova_compute[226295]: 2025-11-29 08:16:24.080 226310 DEBUG oslo_concurrency.lockutils [req-a8394c5e-a56e-48f7-be6e-dacfd652e057 req-e6eaa235-6495-4f83-8625-be713be07e18 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:24 np0005539564 nova_compute[226295]: 2025-11-29 08:16:24.081 226310 DEBUG oslo_concurrency.lockutils [req-a8394c5e-a56e-48f7-be6e-dacfd652e057 req-e6eaa235-6495-4f83-8625-be713be07e18 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:24 np0005539564 nova_compute[226295]: 2025-11-29 08:16:24.081 226310 DEBUG oslo_concurrency.lockutils [req-a8394c5e-a56e-48f7-be6e-dacfd652e057 req-e6eaa235-6495-4f83-8625-be713be07e18 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:24 np0005539564 nova_compute[226295]: 2025-11-29 08:16:24.082 226310 DEBUG nova.compute.manager [req-a8394c5e-a56e-48f7-be6e-dacfd652e057 req-e6eaa235-6495-4f83-8625-be713be07e18 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] No waiting events found dispatching network-vif-plugged-a221d286-cb0f-41fd-9997-b1687a875e0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:24 np0005539564 nova_compute[226295]: 2025-11-29 08:16:24.082 226310 WARNING nova.compute.manager [req-a8394c5e-a56e-48f7-be6e-dacfd652e057 req-e6eaa235-6495-4f83-8625-be713be07e18 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Received unexpected event network-vif-plugged-a221d286-cb0f-41fd-9997-b1687a875e0e for instance with vm_state active and task_state None.#033[00m
Nov 29 03:16:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:24.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:24.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:26.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:26.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e303 e303: 3 total, 3 up, 3 in
Nov 29 03:16:27 np0005539564 nova_compute[226295]: 2025-11-29 08:16:27.262 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:27 np0005539564 nova_compute[226295]: 2025-11-29 08:16:27.758 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:28.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:28.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:29 np0005539564 nova_compute[226295]: 2025-11-29 08:16:29.483 226310 DEBUG oslo_concurrency.lockutils [None req-026fcac8-fe35-4692-8247-080d3c97c3c6 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "6033af89-d6d3-45c5-bf88-b2f17800f12e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:29 np0005539564 nova_compute[226295]: 2025-11-29 08:16:29.483 226310 DEBUG oslo_concurrency.lockutils [None req-026fcac8-fe35-4692-8247-080d3c97c3c6 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:29 np0005539564 nova_compute[226295]: 2025-11-29 08:16:29.484 226310 DEBUG oslo_concurrency.lockutils [None req-026fcac8-fe35-4692-8247-080d3c97c3c6 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:29 np0005539564 nova_compute[226295]: 2025-11-29 08:16:29.484 226310 DEBUG oslo_concurrency.lockutils [None req-026fcac8-fe35-4692-8247-080d3c97c3c6 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:29 np0005539564 nova_compute[226295]: 2025-11-29 08:16:29.484 226310 DEBUG oslo_concurrency.lockutils [None req-026fcac8-fe35-4692-8247-080d3c97c3c6 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:29 np0005539564 nova_compute[226295]: 2025-11-29 08:16:29.485 226310 INFO nova.compute.manager [None req-026fcac8-fe35-4692-8247-080d3c97c3c6 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Terminating instance#033[00m
Nov 29 03:16:29 np0005539564 nova_compute[226295]: 2025-11-29 08:16:29.486 226310 DEBUG nova.compute.manager [None req-026fcac8-fe35-4692-8247-080d3c97c3c6 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:16:29 np0005539564 kernel: tapa221d286-cb (unregistering): left promiscuous mode
Nov 29 03:16:29 np0005539564 NetworkManager[48997]: <info>  [1764404189.5295] device (tapa221d286-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:16:29 np0005539564 nova_compute[226295]: 2025-11-29 08:16:29.541 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:29 np0005539564 ovn_controller[130591]: 2025-11-29T08:16:29Z|00440|binding|INFO|Releasing lport a221d286-cb0f-41fd-9997-b1687a875e0e from this chassis (sb_readonly=0)
Nov 29 03:16:29 np0005539564 ovn_controller[130591]: 2025-11-29T08:16:29Z|00441|binding|INFO|Setting lport a221d286-cb0f-41fd-9997-b1687a875e0e down in Southbound
Nov 29 03:16:29 np0005539564 ovn_controller[130591]: 2025-11-29T08:16:29Z|00442|binding|INFO|Removing iface tapa221d286-cb ovn-installed in OVS
Nov 29 03:16:29 np0005539564 nova_compute[226295]: 2025-11-29 08:16:29.545 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:29.554 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:cf:8d 10.100.0.10'], port_security=['fa:16:3e:5e:cf:8d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6033af89-d6d3-45c5-bf88-b2f17800f12e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba867fac17034bb28fe2cdb0fff3af2b', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'a54db614-4504-4e8e-a3a5-27d3f60f6cdf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.229', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5e4b2f3-5e6e-48f8-b35a-ab61c62108a6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=a221d286-cb0f-41fd-9997-b1687a875e0e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:16:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:29.557 139780 INFO neutron.agent.ovn.metadata.agent [-] Port a221d286-cb0f-41fd-9997-b1687a875e0e in datapath 4d5b8c11-b69e-4a74-846b-03943fb29a81 unbound from our chassis#033[00m
Nov 29 03:16:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:29.558 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d5b8c11-b69e-4a74-846b-03943fb29a81, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:16:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:29.559 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3aabb0e8-e6f5-4eaf-8d2f-e2f52a79a4f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:29.560 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 namespace which is not needed anymore#033[00m
Nov 29 03:16:29 np0005539564 nova_compute[226295]: 2025-11-29 08:16:29.562 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:29 np0005539564 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000073.scope: Deactivated successfully.
Nov 29 03:16:29 np0005539564 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000073.scope: Consumed 8.667s CPU time.
Nov 29 03:16:29 np0005539564 systemd-machined[190128]: Machine qemu-52-instance-00000073 terminated.
Nov 29 03:16:29 np0005539564 nova_compute[226295]: 2025-11-29 08:16:29.725 226310 INFO nova.virt.libvirt.driver [-] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Instance destroyed successfully.#033[00m
Nov 29 03:16:29 np0005539564 nova_compute[226295]: 2025-11-29 08:16:29.726 226310 DEBUG nova.objects.instance [None req-026fcac8-fe35-4692-8247-080d3c97c3c6 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'resources' on Instance uuid 6033af89-d6d3-45c5-bf88-b2f17800f12e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:29 np0005539564 nova_compute[226295]: 2025-11-29 08:16:29.742 226310 DEBUG nova.virt.libvirt.vif [None req-026fcac8-fe35-4692-8247-080d3c97c3c6 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:14:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-694037075',display_name='tempest-ServerActionsTestOtherB-server-694037075',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-694037075',id=115,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFR2A4rqHty1PxOihJGr6CLeieY2A6hQbQhWuRk7yYUwOYPvlgBFCeYpXPRg+EImok8PXcjU56J6yMvwfigxZeP4BreCe+MzD3uTdqP8PHZ6U4YNDwkQqqigObBB8nAoaw==',key_name='tempest-keypair-98169709',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:16:23Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-bqwpho60',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:16:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=6033af89-d6d3-45c5-bf88-b2f17800f12e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a221d286-cb0f-41fd-9997-b1687a875e0e", "address": "fa:16:3e:5e:cf:8d", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa221d286-cb", "ovs_interfaceid": "a221d286-cb0f-41fd-9997-b1687a875e0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:16:29 np0005539564 nova_compute[226295]: 2025-11-29 08:16:29.743 226310 DEBUG nova.network.os_vif_util [None req-026fcac8-fe35-4692-8247-080d3c97c3c6 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "a221d286-cb0f-41fd-9997-b1687a875e0e", "address": "fa:16:3e:5e:cf:8d", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa221d286-cb", "ovs_interfaceid": "a221d286-cb0f-41fd-9997-b1687a875e0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:16:29 np0005539564 nova_compute[226295]: 2025-11-29 08:16:29.744 226310 DEBUG nova.network.os_vif_util [None req-026fcac8-fe35-4692-8247-080d3c97c3c6 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:cf:8d,bridge_name='br-int',has_traffic_filtering=True,id=a221d286-cb0f-41fd-9997-b1687a875e0e,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa221d286-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:16:29 np0005539564 nova_compute[226295]: 2025-11-29 08:16:29.748 226310 DEBUG os_vif [None req-026fcac8-fe35-4692-8247-080d3c97c3c6 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:cf:8d,bridge_name='br-int',has_traffic_filtering=True,id=a221d286-cb0f-41fd-9997-b1687a875e0e,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa221d286-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:16:29 np0005539564 nova_compute[226295]: 2025-11-29 08:16:29.751 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:29 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[271651]: [NOTICE]   (271655) : haproxy version is 2.8.14-c23fe91
Nov 29 03:16:29 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[271651]: [NOTICE]   (271655) : path to executable is /usr/sbin/haproxy
Nov 29 03:16:29 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[271651]: [WARNING]  (271655) : Exiting Master process...
Nov 29 03:16:29 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[271651]: [WARNING]  (271655) : Exiting Master process...
Nov 29 03:16:29 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[271651]: [ALERT]    (271655) : Current worker (271657) exited with code 143 (Terminated)
Nov 29 03:16:29 np0005539564 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[271651]: [WARNING]  (271655) : All workers exited. Exiting... (0)
Nov 29 03:16:29 np0005539564 nova_compute[226295]: 2025-11-29 08:16:29.752 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa221d286-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:29 np0005539564 nova_compute[226295]: 2025-11-29 08:16:29.754 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:29 np0005539564 nova_compute[226295]: 2025-11-29 08:16:29.755 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:29 np0005539564 systemd[1]: libpod-4d6a8c60751241a5ec7602faefdff4ad0551842e18ae00c14e3cd2cf40fab0ac.scope: Deactivated successfully.
Nov 29 03:16:29 np0005539564 nova_compute[226295]: 2025-11-29 08:16:29.758 226310 INFO os_vif [None req-026fcac8-fe35-4692-8247-080d3c97c3c6 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:cf:8d,bridge_name='br-int',has_traffic_filtering=True,id=a221d286-cb0f-41fd-9997-b1687a875e0e,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa221d286-cb')#033[00m
Nov 29 03:16:29 np0005539564 podman[271692]: 2025-11-29 08:16:29.763419075 +0000 UTC m=+0.060895794 container died 4d6a8c60751241a5ec7602faefdff4ad0551842e18ae00c14e3cd2cf40fab0ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 03:16:29 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4d6a8c60751241a5ec7602faefdff4ad0551842e18ae00c14e3cd2cf40fab0ac-userdata-shm.mount: Deactivated successfully.
Nov 29 03:16:29 np0005539564 systemd[1]: var-lib-containers-storage-overlay-165bc8892401317b947542177cd57b1dd10fae54cf9b315c4b719d0b9e909bde-merged.mount: Deactivated successfully.
Nov 29 03:16:29 np0005539564 podman[271692]: 2025-11-29 08:16:29.808313546 +0000 UTC m=+0.105790245 container cleanup 4d6a8c60751241a5ec7602faefdff4ad0551842e18ae00c14e3cd2cf40fab0ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:16:29 np0005539564 systemd[1]: libpod-conmon-4d6a8c60751241a5ec7602faefdff4ad0551842e18ae00c14e3cd2cf40fab0ac.scope: Deactivated successfully.
Nov 29 03:16:29 np0005539564 podman[271751]: 2025-11-29 08:16:29.876263609 +0000 UTC m=+0.044301157 container remove 4d6a8c60751241a5ec7602faefdff4ad0551842e18ae00c14e3cd2cf40fab0ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:16:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:29.882 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ffcce64e-8895-4306-b14c-71918f5fb862]: (4, ('Sat Nov 29 08:16:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 (4d6a8c60751241a5ec7602faefdff4ad0551842e18ae00c14e3cd2cf40fab0ac)\n4d6a8c60751241a5ec7602faefdff4ad0551842e18ae00c14e3cd2cf40fab0ac\nSat Nov 29 08:16:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 (4d6a8c60751241a5ec7602faefdff4ad0551842e18ae00c14e3cd2cf40fab0ac)\n4d6a8c60751241a5ec7602faefdff4ad0551842e18ae00c14e3cd2cf40fab0ac\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:29.884 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a35440-1cef-416c-abe2-fb3e71c10c78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:29.885 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d5b8c11-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:29 np0005539564 nova_compute[226295]: 2025-11-29 08:16:29.887 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:29 np0005539564 kernel: tap4d5b8c11-b0: left promiscuous mode
Nov 29 03:16:29 np0005539564 nova_compute[226295]: 2025-11-29 08:16:29.909 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:29.913 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a6d47521-374a-402a-8b3c-ab33b1b59923]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:29.927 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[067f6c09-78ec-47e7-a27e-d28d18ba1ba6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:29.928 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3e296e50-10d8-4592-9b5e-d3c561447112]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:29.953 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2769afd6-f722-4f37-9f8f-6f135813ac73]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 715432, 'reachable_time': 31043, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271765, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:29 np0005539564 systemd[1]: run-netns-ovnmeta\x2d4d5b8c11\x2db69e\x2d4a74\x2d846b\x2d03943fb29a81.mount: Deactivated successfully.
Nov 29 03:16:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:29.956 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:16:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:29.956 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[d29d3897-7b2b-4271-802f-5331cf2e5d34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:30.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:30 np0005539564 nova_compute[226295]: 2025-11-29 08:16:30.248 226310 INFO nova.virt.libvirt.driver [None req-026fcac8-fe35-4692-8247-080d3c97c3c6 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Deleting instance files /var/lib/nova/instances/6033af89-d6d3-45c5-bf88-b2f17800f12e_del#033[00m
Nov 29 03:16:30 np0005539564 nova_compute[226295]: 2025-11-29 08:16:30.249 226310 INFO nova.virt.libvirt.driver [None req-026fcac8-fe35-4692-8247-080d3c97c3c6 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Deletion of /var/lib/nova/instances/6033af89-d6d3-45c5-bf88-b2f17800f12e_del complete#033[00m
Nov 29 03:16:30 np0005539564 nova_compute[226295]: 2025-11-29 08:16:30.418 226310 INFO nova.compute.manager [None req-026fcac8-fe35-4692-8247-080d3c97c3c6 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:16:30 np0005539564 nova_compute[226295]: 2025-11-29 08:16:30.418 226310 DEBUG oslo.service.loopingcall [None req-026fcac8-fe35-4692-8247-080d3c97c3c6 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:16:30 np0005539564 nova_compute[226295]: 2025-11-29 08:16:30.420 226310 DEBUG nova.compute.manager [-] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:16:30 np0005539564 nova_compute[226295]: 2025-11-29 08:16:30.420 226310 DEBUG nova.network.neutron [-] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:16:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:30.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:31 np0005539564 nova_compute[226295]: 2025-11-29 08:16:31.046 226310 DEBUG nova.compute.manager [req-4860ace6-188a-4acb-9df8-59248af723c4 req-7caf99d5-5d46-4873-9c29-66414d684e97 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Received event network-vif-unplugged-a221d286-cb0f-41fd-9997-b1687a875e0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:31 np0005539564 nova_compute[226295]: 2025-11-29 08:16:31.047 226310 DEBUG oslo_concurrency.lockutils [req-4860ace6-188a-4acb-9df8-59248af723c4 req-7caf99d5-5d46-4873-9c29-66414d684e97 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:31 np0005539564 nova_compute[226295]: 2025-11-29 08:16:31.048 226310 DEBUG oslo_concurrency.lockutils [req-4860ace6-188a-4acb-9df8-59248af723c4 req-7caf99d5-5d46-4873-9c29-66414d684e97 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:31 np0005539564 nova_compute[226295]: 2025-11-29 08:16:31.048 226310 DEBUG oslo_concurrency.lockutils [req-4860ace6-188a-4acb-9df8-59248af723c4 req-7caf99d5-5d46-4873-9c29-66414d684e97 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:31 np0005539564 nova_compute[226295]: 2025-11-29 08:16:31.049 226310 DEBUG nova.compute.manager [req-4860ace6-188a-4acb-9df8-59248af723c4 req-7caf99d5-5d46-4873-9c29-66414d684e97 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] No waiting events found dispatching network-vif-unplugged-a221d286-cb0f-41fd-9997-b1687a875e0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:31 np0005539564 nova_compute[226295]: 2025-11-29 08:16:31.049 226310 DEBUG nova.compute.manager [req-4860ace6-188a-4acb-9df8-59248af723c4 req-7caf99d5-5d46-4873-9c29-66414d684e97 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Received event network-vif-unplugged-a221d286-cb0f-41fd-9997-b1687a875e0e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:16:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:31 np0005539564 nova_compute[226295]: 2025-11-29 08:16:31.837 226310 DEBUG nova.network.neutron [-] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:31 np0005539564 nova_compute[226295]: 2025-11-29 08:16:31.868 226310 INFO nova.compute.manager [-] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Took 1.45 seconds to deallocate network for instance.#033[00m
Nov 29 03:16:31 np0005539564 nova_compute[226295]: 2025-11-29 08:16:31.927 226310 DEBUG oslo_concurrency.lockutils [None req-026fcac8-fe35-4692-8247-080d3c97c3c6 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:31 np0005539564 nova_compute[226295]: 2025-11-29 08:16:31.928 226310 DEBUG oslo_concurrency.lockutils [None req-026fcac8-fe35-4692-8247-080d3c97c3c6 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:32 np0005539564 nova_compute[226295]: 2025-11-29 08:16:32.014 226310 DEBUG oslo_concurrency.processutils [None req-026fcac8-fe35-4692-8247-080d3c97c3c6 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:32.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:32 np0005539564 nova_compute[226295]: 2025-11-29 08:16:32.266 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e304 e304: 3 total, 3 up, 3 in
Nov 29 03:16:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:32 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3846744555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:32 np0005539564 nova_compute[226295]: 2025-11-29 08:16:32.505 226310 DEBUG oslo_concurrency.processutils [None req-026fcac8-fe35-4692-8247-080d3c97c3c6 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:32.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:32 np0005539564 nova_compute[226295]: 2025-11-29 08:16:32.513 226310 DEBUG nova.compute.provider_tree [None req-026fcac8-fe35-4692-8247-080d3c97c3c6 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:16:32 np0005539564 nova_compute[226295]: 2025-11-29 08:16:32.533 226310 DEBUG nova.scheduler.client.report [None req-026fcac8-fe35-4692-8247-080d3c97c3c6 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:16:32 np0005539564 nova_compute[226295]: 2025-11-29 08:16:32.564 226310 DEBUG oslo_concurrency.lockutils [None req-026fcac8-fe35-4692-8247-080d3c97c3c6 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:32 np0005539564 nova_compute[226295]: 2025-11-29 08:16:32.604 226310 INFO nova.scheduler.client.report [None req-026fcac8-fe35-4692-8247-080d3c97c3c6 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Deleted allocations for instance 6033af89-d6d3-45c5-bf88-b2f17800f12e#033[00m
Nov 29 03:16:32 np0005539564 nova_compute[226295]: 2025-11-29 08:16:32.689 226310 DEBUG oslo_concurrency.lockutils [None req-026fcac8-fe35-4692-8247-080d3c97c3c6 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:33 np0005539564 nova_compute[226295]: 2025-11-29 08:16:33.167 226310 DEBUG nova.compute.manager [req-10fc3cee-bf12-471b-b85f-4ad88a08ef80 req-65bf7d36-61a7-4c5a-a0f7-d931c6e5a03b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Received event network-vif-plugged-a221d286-cb0f-41fd-9997-b1687a875e0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:33 np0005539564 nova_compute[226295]: 2025-11-29 08:16:33.168 226310 DEBUG oslo_concurrency.lockutils [req-10fc3cee-bf12-471b-b85f-4ad88a08ef80 req-65bf7d36-61a7-4c5a-a0f7-d931c6e5a03b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:33 np0005539564 nova_compute[226295]: 2025-11-29 08:16:33.168 226310 DEBUG oslo_concurrency.lockutils [req-10fc3cee-bf12-471b-b85f-4ad88a08ef80 req-65bf7d36-61a7-4c5a-a0f7-d931c6e5a03b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:33 np0005539564 nova_compute[226295]: 2025-11-29 08:16:33.169 226310 DEBUG oslo_concurrency.lockutils [req-10fc3cee-bf12-471b-b85f-4ad88a08ef80 req-65bf7d36-61a7-4c5a-a0f7-d931c6e5a03b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6033af89-d6d3-45c5-bf88-b2f17800f12e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:33 np0005539564 nova_compute[226295]: 2025-11-29 08:16:33.169 226310 DEBUG nova.compute.manager [req-10fc3cee-bf12-471b-b85f-4ad88a08ef80 req-65bf7d36-61a7-4c5a-a0f7-d931c6e5a03b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] No waiting events found dispatching network-vif-plugged-a221d286-cb0f-41fd-9997-b1687a875e0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:33 np0005539564 nova_compute[226295]: 2025-11-29 08:16:33.170 226310 WARNING nova.compute.manager [req-10fc3cee-bf12-471b-b85f-4ad88a08ef80 req-65bf7d36-61a7-4c5a-a0f7-d931c6e5a03b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Received unexpected event network-vif-plugged-a221d286-cb0f-41fd-9997-b1687a875e0e for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:16:33 np0005539564 nova_compute[226295]: 2025-11-29 08:16:33.682 226310 DEBUG nova.compute.manager [req-7ccaac78-46b2-473f-9591-4c3583060677 req-ce2948a7-91af-4e7a-a22e-90d69791a749 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Received event network-vif-deleted-a221d286-cb0f-41fd-9997-b1687a875e0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:34.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:34.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:34 np0005539564 nova_compute[226295]: 2025-11-29 08:16:34.757 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:35 np0005539564 nova_compute[226295]: 2025-11-29 08:16:35.771 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:36.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:36.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:36 np0005539564 podman[271793]: 2025-11-29 08:16:36.56748879 +0000 UTC m=+0.096461133 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible)
Nov 29 03:16:36 np0005539564 podman[271794]: 2025-11-29 08:16:36.574308084 +0000 UTC m=+0.094647395 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:16:36 np0005539564 podman[271792]: 2025-11-29 08:16:36.605340251 +0000 UTC m=+0.143547013 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 03:16:37 np0005539564 nova_compute[226295]: 2025-11-29 08:16:37.171 226310 DEBUG oslo_concurrency.lockutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquiring lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:37 np0005539564 nova_compute[226295]: 2025-11-29 08:16:37.171 226310 DEBUG oslo_concurrency.lockutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:37 np0005539564 nova_compute[226295]: 2025-11-29 08:16:37.216 226310 DEBUG nova.compute.manager [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:16:37 np0005539564 nova_compute[226295]: 2025-11-29 08:16:37.268 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:37 np0005539564 nova_compute[226295]: 2025-11-29 08:16:37.303 226310 DEBUG oslo_concurrency.lockutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:37 np0005539564 nova_compute[226295]: 2025-11-29 08:16:37.304 226310 DEBUG oslo_concurrency.lockutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:37 np0005539564 nova_compute[226295]: 2025-11-29 08:16:37.314 226310 DEBUG nova.virt.hardware [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:16:37 np0005539564 nova_compute[226295]: 2025-11-29 08:16:37.315 226310 INFO nova.compute.claims [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:16:37 np0005539564 nova_compute[226295]: 2025-11-29 08:16:37.482 226310 DEBUG oslo_concurrency.processutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:37 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:37.917 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:16:37 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:37.918 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:16:37 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:37.919 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:37 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4158594648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:37 np0005539564 nova_compute[226295]: 2025-11-29 08:16:37.967 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:37 np0005539564 nova_compute[226295]: 2025-11-29 08:16:37.978 226310 DEBUG oslo_concurrency.processutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:37 np0005539564 nova_compute[226295]: 2025-11-29 08:16:37.987 226310 DEBUG nova.compute.provider_tree [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:16:38 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.006 226310 DEBUG nova.scheduler.client.report [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:16:38 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.026 226310 DEBUG oslo_concurrency.lockutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:38 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.027 226310 DEBUG nova.compute.manager [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:16:38 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.074 226310 DEBUG nova.compute.manager [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:16:38 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.075 226310 DEBUG nova.network.neutron [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:16:38 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.096 226310 INFO nova.virt.libvirt.driver [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:16:38 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.111 226310 DEBUG nova.compute.manager [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:16:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:38.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:38 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.235 226310 DEBUG nova.compute.manager [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:16:38 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.236 226310 DEBUG nova.virt.libvirt.driver [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:16:38 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.236 226310 INFO nova.virt.libvirt.driver [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Creating image(s)#033[00m
Nov 29 03:16:38 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.265 226310 DEBUG nova.storage.rbd_utils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rbd image e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:38 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.297 226310 DEBUG nova.storage.rbd_utils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rbd image e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:38 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.331 226310 DEBUG nova.storage.rbd_utils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rbd image e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:38 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.336 226310 DEBUG oslo_concurrency.processutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:38 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.374 226310 DEBUG nova.policy [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7bb4a89eea4e4166a7a1c5e3135cb182', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6025758b69854406b221c47d9ef59dea', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:16:38 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.414 226310 DEBUG oslo_concurrency.processutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:38 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.415 226310 DEBUG oslo_concurrency.lockutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:38 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.415 226310 DEBUG oslo_concurrency.lockutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:38 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.416 226310 DEBUG oslo_concurrency.lockutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:38 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.448 226310 DEBUG nova.storage.rbd_utils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rbd image e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:38 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.452 226310 DEBUG oslo_concurrency.processutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:38.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:38 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.785 226310 DEBUG oslo_concurrency.processutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:38 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.860 226310 DEBUG nova.storage.rbd_utils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] resizing rbd image e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:16:38 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.982 226310 DEBUG nova.objects.instance [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lazy-loading 'migration_context' on Instance uuid e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:38 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.997 226310 DEBUG nova.virt.libvirt.driver [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:16:38 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.998 226310 DEBUG nova.virt.libvirt.driver [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Ensure instance console log exists: /var/lib/nova/instances/e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:16:38 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.998 226310 DEBUG oslo_concurrency.lockutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:39 np0005539564 nova_compute[226295]: 2025-11-29 08:16:38.999 226310 DEBUG oslo_concurrency.lockutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:39 np0005539564 nova_compute[226295]: 2025-11-29 08:16:39.000 226310 DEBUG oslo_concurrency.lockutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:39 np0005539564 nova_compute[226295]: 2025-11-29 08:16:39.706 226310 DEBUG nova.network.neutron [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Successfully created port: 2ff764b3-2d67-48de-9969-8f1890b429c4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:16:39 np0005539564 nova_compute[226295]: 2025-11-29 08:16:39.762 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:40.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:40.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:41 np0005539564 nova_compute[226295]: 2025-11-29 08:16:41.042 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:41 np0005539564 nova_compute[226295]: 2025-11-29 08:16:41.152 226310 DEBUG nova.network.neutron [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Successfully updated port: 2ff764b3-2d67-48de-9969-8f1890b429c4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:16:41 np0005539564 nova_compute[226295]: 2025-11-29 08:16:41.176 226310 DEBUG oslo_concurrency.lockutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquiring lock "refresh_cache-e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:16:41 np0005539564 nova_compute[226295]: 2025-11-29 08:16:41.177 226310 DEBUG oslo_concurrency.lockutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquired lock "refresh_cache-e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:16:41 np0005539564 nova_compute[226295]: 2025-11-29 08:16:41.177 226310 DEBUG nova.network.neutron [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:16:41 np0005539564 nova_compute[226295]: 2025-11-29 08:16:41.243 226310 DEBUG nova.compute.manager [req-de68ee1b-2c91-4f6e-bce8-3421e4f6ba12 req-109a7860-05a9-49e3-9638-6c6af28b7d74 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Received event network-changed-2ff764b3-2d67-48de-9969-8f1890b429c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:41 np0005539564 nova_compute[226295]: 2025-11-29 08:16:41.244 226310 DEBUG nova.compute.manager [req-de68ee1b-2c91-4f6e-bce8-3421e4f6ba12 req-109a7860-05a9-49e3-9638-6c6af28b7d74 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Refreshing instance network info cache due to event network-changed-2ff764b3-2d67-48de-9969-8f1890b429c4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:16:41 np0005539564 nova_compute[226295]: 2025-11-29 08:16:41.244 226310 DEBUG oslo_concurrency.lockutils [req-de68ee1b-2c91-4f6e-bce8-3421e4f6ba12 req-109a7860-05a9-49e3-9638-6c6af28b7d74 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:16:41 np0005539564 nova_compute[226295]: 2025-11-29 08:16:41.348 226310 DEBUG nova.network.neutron [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:16:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:42.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.234 226310 DEBUG nova.network.neutron [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Updating instance_info_cache with network_info: [{"id": "2ff764b3-2d67-48de-9969-8f1890b429c4", "address": "fa:16:3e:65:f9:6a", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ff764b3-2d", "ovs_interfaceid": "2ff764b3-2d67-48de-9969-8f1890b429c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.259 226310 DEBUG oslo_concurrency.lockutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Releasing lock "refresh_cache-e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.260 226310 DEBUG nova.compute.manager [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Instance network_info: |[{"id": "2ff764b3-2d67-48de-9969-8f1890b429c4", "address": "fa:16:3e:65:f9:6a", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ff764b3-2d", "ovs_interfaceid": "2ff764b3-2d67-48de-9969-8f1890b429c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.261 226310 DEBUG oslo_concurrency.lockutils [req-de68ee1b-2c91-4f6e-bce8-3421e4f6ba12 req-109a7860-05a9-49e3-9638-6c6af28b7d74 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.261 226310 DEBUG nova.network.neutron [req-de68ee1b-2c91-4f6e-bce8-3421e4f6ba12 req-109a7860-05a9-49e3-9638-6c6af28b7d74 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Refreshing network info cache for port 2ff764b3-2d67-48de-9969-8f1890b429c4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.266 226310 DEBUG nova.virt.libvirt.driver [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Start _get_guest_xml network_info=[{"id": "2ff764b3-2d67-48de-9969-8f1890b429c4", "address": "fa:16:3e:65:f9:6a", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ff764b3-2d", "ovs_interfaceid": "2ff764b3-2d67-48de-9969-8f1890b429c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.271 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.284 226310 WARNING nova.virt.libvirt.driver [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.292 226310 DEBUG nova.virt.libvirt.host [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.293 226310 DEBUG nova.virt.libvirt.host [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.301 226310 DEBUG nova.virt.libvirt.host [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.301 226310 DEBUG nova.virt.libvirt.host [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.303 226310 DEBUG nova.virt.libvirt.driver [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.303 226310 DEBUG nova.virt.hardware [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.304 226310 DEBUG nova.virt.hardware [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.304 226310 DEBUG nova.virt.hardware [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.304 226310 DEBUG nova.virt.hardware [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.304 226310 DEBUG nova.virt.hardware [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.305 226310 DEBUG nova.virt.hardware [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.305 226310 DEBUG nova.virt.hardware [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.305 226310 DEBUG nova.virt.hardware [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.305 226310 DEBUG nova.virt.hardware [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.306 226310 DEBUG nova.virt.hardware [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.306 226310 DEBUG nova.virt.hardware [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.310 226310 DEBUG oslo_concurrency.processutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:42.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:42 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/725989067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.804 226310 DEBUG oslo_concurrency.processutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.846 226310 DEBUG nova.storage.rbd_utils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rbd image e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:42 np0005539564 nova_compute[226295]: 2025-11-29 08:16:42.852 226310 DEBUG oslo_concurrency.processutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.201 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:43 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1791800583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.352 226310 DEBUG oslo_concurrency.processutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.354 226310 DEBUG nova.virt.libvirt.vif [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:16:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1303601308',display_name='tempest-ServerRescueTestJSON-server-1303601308',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1303601308',id=122,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6025758b69854406b221c47d9ef59dea',ramdisk_id='',reservation_id='r-pgr069bs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-597299129',owner_user_name='tempest-ServerRescueTestJSON-59
7299129-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:38Z,user_data=None,user_id='7bb4a89eea4e4166a7a1c5e3135cb182',uuid=e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ff764b3-2d67-48de-9969-8f1890b429c4", "address": "fa:16:3e:65:f9:6a", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ff764b3-2d", "ovs_interfaceid": "2ff764b3-2d67-48de-9969-8f1890b429c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.354 226310 DEBUG nova.network.os_vif_util [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Converting VIF {"id": "2ff764b3-2d67-48de-9969-8f1890b429c4", "address": "fa:16:3e:65:f9:6a", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ff764b3-2d", "ovs_interfaceid": "2ff764b3-2d67-48de-9969-8f1890b429c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.355 226310 DEBUG nova.network.os_vif_util [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:f9:6a,bridge_name='br-int',has_traffic_filtering=True,id=2ff764b3-2d67-48de-9969-8f1890b429c4,network=Network(2a48f340-3ab0-428a-8b80-75fcf0f9f3f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ff764b3-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.357 226310 DEBUG nova.objects.instance [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lazy-loading 'pci_devices' on Instance uuid e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.392 226310 DEBUG nova.virt.libvirt.driver [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:16:43 np0005539564 nova_compute[226295]:  <uuid>e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3</uuid>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:  <name>instance-0000007a</name>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerRescueTestJSON-server-1303601308</nova:name>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:16:42</nova:creationTime>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:16:43 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:        <nova:user uuid="7bb4a89eea4e4166a7a1c5e3135cb182">tempest-ServerRescueTestJSON-597299129-project-member</nova:user>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:        <nova:project uuid="6025758b69854406b221c47d9ef59dea">tempest-ServerRescueTestJSON-597299129</nova:project>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:        <nova:port uuid="2ff764b3-2d67-48de-9969-8f1890b429c4">
Nov 29 03:16:43 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <entry name="serial">e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3</entry>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <entry name="uuid">e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3</entry>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_disk">
Nov 29 03:16:43 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:16:43 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_disk.config">
Nov 29 03:16:43 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:16:43 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:65:f9:6a"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <target dev="tap2ff764b3-2d"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3/console.log" append="off"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:16:43 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:16:43 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:16:43 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:16:43 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.394 226310 DEBUG nova.compute.manager [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Preparing to wait for external event network-vif-plugged-2ff764b3-2d67-48de-9969-8f1890b429c4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.394 226310 DEBUG oslo_concurrency.lockutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquiring lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.394 226310 DEBUG oslo_concurrency.lockutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.395 226310 DEBUG oslo_concurrency.lockutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.396 226310 DEBUG nova.virt.libvirt.vif [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:16:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1303601308',display_name='tempest-ServerRescueTestJSON-server-1303601308',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1303601308',id=122,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6025758b69854406b221c47d9ef59dea',ramdisk_id='',reservation_id='r-pgr069bs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-597299129',owner_user_name='tempest-ServerRescueTestJSON-597299129-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:38Z,user_data=None,user_id='7bb4a89eea4e4166a7a1c5e3135cb182',uuid=e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ff764b3-2d67-48de-9969-8f1890b429c4", "address": "fa:16:3e:65:f9:6a", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ff764b3-2d", "ovs_interfaceid": "2ff764b3-2d67-48de-9969-8f1890b429c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.396 226310 DEBUG nova.network.os_vif_util [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Converting VIF {"id": "2ff764b3-2d67-48de-9969-8f1890b429c4", "address": "fa:16:3e:65:f9:6a", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ff764b3-2d", "ovs_interfaceid": "2ff764b3-2d67-48de-9969-8f1890b429c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.397 226310 DEBUG nova.network.os_vif_util [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:f9:6a,bridge_name='br-int',has_traffic_filtering=True,id=2ff764b3-2d67-48de-9969-8f1890b429c4,network=Network(2a48f340-3ab0-428a-8b80-75fcf0f9f3f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ff764b3-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.397 226310 DEBUG os_vif [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:f9:6a,bridge_name='br-int',has_traffic_filtering=True,id=2ff764b3-2d67-48de-9969-8f1890b429c4,network=Network(2a48f340-3ab0-428a-8b80-75fcf0f9f3f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ff764b3-2d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.398 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.398 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.399 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.403 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.403 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ff764b3-2d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.404 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2ff764b3-2d, col_values=(('external_ids', {'iface-id': '2ff764b3-2d67-48de-9969-8f1890b429c4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:f9:6a', 'vm-uuid': 'e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.406 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:43 np0005539564 NetworkManager[48997]: <info>  [1764404203.4079] manager: (tap2ff764b3-2d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.410 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.441 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.442 226310 INFO os_vif [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:f9:6a,bridge_name='br-int',has_traffic_filtering=True,id=2ff764b3-2d67-48de-9969-8f1890b429c4,network=Network(2a48f340-3ab0-428a-8b80-75fcf0f9f3f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ff764b3-2d')#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.452 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.505 226310 DEBUG nova.virt.libvirt.driver [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.506 226310 DEBUG nova.virt.libvirt.driver [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.506 226310 DEBUG nova.virt.libvirt.driver [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] No VIF found with MAC fa:16:3e:65:f9:6a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.507 226310 INFO nova.virt.libvirt.driver [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Using config drive#033[00m
Nov 29 03:16:43 np0005539564 nova_compute[226295]: 2025-11-29 08:16:43.544 226310 DEBUG nova.storage.rbd_utils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rbd image e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:44 np0005539564 nova_compute[226295]: 2025-11-29 08:16:44.049 226310 INFO nova.virt.libvirt.driver [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Creating config drive at /var/lib/nova/instances/e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3/disk.config#033[00m
Nov 29 03:16:44 np0005539564 nova_compute[226295]: 2025-11-29 08:16:44.059 226310 DEBUG oslo_concurrency.processutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr3z9j3f_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:44 np0005539564 nova_compute[226295]: 2025-11-29 08:16:44.125 226310 DEBUG nova.network.neutron [req-de68ee1b-2c91-4f6e-bce8-3421e4f6ba12 req-109a7860-05a9-49e3-9638-6c6af28b7d74 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Updated VIF entry in instance network info cache for port 2ff764b3-2d67-48de-9969-8f1890b429c4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:16:44 np0005539564 nova_compute[226295]: 2025-11-29 08:16:44.127 226310 DEBUG nova.network.neutron [req-de68ee1b-2c91-4f6e-bce8-3421e4f6ba12 req-109a7860-05a9-49e3-9638-6c6af28b7d74 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Updating instance_info_cache with network_info: [{"id": "2ff764b3-2d67-48de-9969-8f1890b429c4", "address": "fa:16:3e:65:f9:6a", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ff764b3-2d", "ovs_interfaceid": "2ff764b3-2d67-48de-9969-8f1890b429c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:44 np0005539564 nova_compute[226295]: 2025-11-29 08:16:44.150 226310 DEBUG oslo_concurrency.lockutils [req-de68ee1b-2c91-4f6e-bce8-3421e4f6ba12 req-109a7860-05a9-49e3-9638-6c6af28b7d74 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:16:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:44.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:44 np0005539564 nova_compute[226295]: 2025-11-29 08:16:44.202 226310 DEBUG oslo_concurrency.processutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr3z9j3f_" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:44 np0005539564 nova_compute[226295]: 2025-11-29 08:16:44.237 226310 DEBUG nova.storage.rbd_utils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rbd image e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:44 np0005539564 nova_compute[226295]: 2025-11-29 08:16:44.241 226310 DEBUG oslo_concurrency.processutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3/disk.config e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:44 np0005539564 nova_compute[226295]: 2025-11-29 08:16:44.413 226310 DEBUG oslo_concurrency.processutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3/disk.config e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:44 np0005539564 nova_compute[226295]: 2025-11-29 08:16:44.414 226310 INFO nova.virt.libvirt.driver [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Deleting local config drive /var/lib/nova/instances/e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3/disk.config because it was imported into RBD.#033[00m
Nov 29 03:16:44 np0005539564 kernel: tap2ff764b3-2d: entered promiscuous mode
Nov 29 03:16:44 np0005539564 NetworkManager[48997]: <info>  [1764404204.4666] manager: (tap2ff764b3-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/226)
Nov 29 03:16:44 np0005539564 ovn_controller[130591]: 2025-11-29T08:16:44Z|00443|binding|INFO|Claiming lport 2ff764b3-2d67-48de-9969-8f1890b429c4 for this chassis.
Nov 29 03:16:44 np0005539564 ovn_controller[130591]: 2025-11-29T08:16:44Z|00444|binding|INFO|2ff764b3-2d67-48de-9969-8f1890b429c4: Claiming fa:16:3e:65:f9:6a 10.100.0.14
Nov 29 03:16:44 np0005539564 nova_compute[226295]: 2025-11-29 08:16:44.469 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:44.479 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:f9:6a 10.100.0.14'], port_security=['fa:16:3e:65:f9:6a 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2a48f340-3ab0-428a-8b80-75fcf0f9f3f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6025758b69854406b221c47d9ef59dea', 'neutron:revision_number': '2', 'neutron:security_group_ids': '75bc8da6-dde6-455d-b531-bf85392bb032', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=344a2810-fa48-40b2-8837-e84899a18cc0, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=2ff764b3-2d67-48de-9969-8f1890b429c4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:16:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:44.481 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 2ff764b3-2d67-48de-9969-8f1890b429c4 in datapath 2a48f340-3ab0-428a-8b80-75fcf0f9f3f2 bound to our chassis#033[00m
Nov 29 03:16:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:44.482 139780 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2a48f340-3ab0-428a-8b80-75fcf0f9f3f2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:16:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:16:44.483 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[771647c8-c204-4460-ae00-449487b2329d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:44 np0005539564 systemd-udevd[272179]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:16:44 np0005539564 systemd-machined[190128]: New machine qemu-53-instance-0000007a.
Nov 29 03:16:44 np0005539564 NetworkManager[48997]: <info>  [1764404204.5163] device (tap2ff764b3-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:16:44 np0005539564 NetworkManager[48997]: <info>  [1764404204.5170] device (tap2ff764b3-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:16:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:44.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:44 np0005539564 systemd[1]: Started Virtual Machine qemu-53-instance-0000007a.
Nov 29 03:16:44 np0005539564 nova_compute[226295]: 2025-11-29 08:16:44.542 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:44 np0005539564 ovn_controller[130591]: 2025-11-29T08:16:44Z|00445|binding|INFO|Setting lport 2ff764b3-2d67-48de-9969-8f1890b429c4 ovn-installed in OVS
Nov 29 03:16:44 np0005539564 ovn_controller[130591]: 2025-11-29T08:16:44Z|00446|binding|INFO|Setting lport 2ff764b3-2d67-48de-9969-8f1890b429c4 up in Southbound
Nov 29 03:16:44 np0005539564 nova_compute[226295]: 2025-11-29 08:16:44.549 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:44 np0005539564 nova_compute[226295]: 2025-11-29 08:16:44.723 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404189.7224624, 6033af89-d6d3-45c5-bf88-b2f17800f12e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:44 np0005539564 nova_compute[226295]: 2025-11-29 08:16:44.724 226310 INFO nova.compute.manager [-] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:16:44 np0005539564 nova_compute[226295]: 2025-11-29 08:16:44.745 226310 DEBUG nova.compute.manager [None req-659392f6-1dc0-45d5-9db5-1c0916521266 - - - - - -] [instance: 6033af89-d6d3-45c5-bf88-b2f17800f12e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:44 np0005539564 nova_compute[226295]: 2025-11-29 08:16:44.777 226310 DEBUG nova.compute.manager [req-90b03b8c-edb9-4bb7-8ebf-7da2be20fb16 req-ec7d82a6-2798-460b-8750-80d751bfa72a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Received event network-vif-plugged-2ff764b3-2d67-48de-9969-8f1890b429c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:44 np0005539564 nova_compute[226295]: 2025-11-29 08:16:44.778 226310 DEBUG oslo_concurrency.lockutils [req-90b03b8c-edb9-4bb7-8ebf-7da2be20fb16 req-ec7d82a6-2798-460b-8750-80d751bfa72a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:44 np0005539564 nova_compute[226295]: 2025-11-29 08:16:44.779 226310 DEBUG oslo_concurrency.lockutils [req-90b03b8c-edb9-4bb7-8ebf-7da2be20fb16 req-ec7d82a6-2798-460b-8750-80d751bfa72a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:44 np0005539564 nova_compute[226295]: 2025-11-29 08:16:44.779 226310 DEBUG oslo_concurrency.lockutils [req-90b03b8c-edb9-4bb7-8ebf-7da2be20fb16 req-ec7d82a6-2798-460b-8750-80d751bfa72a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:44 np0005539564 nova_compute[226295]: 2025-11-29 08:16:44.779 226310 DEBUG nova.compute.manager [req-90b03b8c-edb9-4bb7-8ebf-7da2be20fb16 req-ec7d82a6-2798-460b-8750-80d751bfa72a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Processing event network-vif-plugged-2ff764b3-2d67-48de-9969-8f1890b429c4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.060 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404205.0598154, e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.061 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] VM Started (Lifecycle Event)#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.064 226310 DEBUG nova.compute.manager [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.069 226310 DEBUG nova.virt.libvirt.driver [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.073 226310 INFO nova.virt.libvirt.driver [-] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Instance spawned successfully.#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.073 226310 DEBUG nova.virt.libvirt.driver [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.105 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.111 226310 DEBUG nova.virt.libvirt.driver [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.111 226310 DEBUG nova.virt.libvirt.driver [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.112 226310 DEBUG nova.virt.libvirt.driver [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.112 226310 DEBUG nova.virt.libvirt.driver [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.113 226310 DEBUG nova.virt.libvirt.driver [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.113 226310 DEBUG nova.virt.libvirt.driver [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.118 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.157 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.158 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404205.060299, e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.158 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.180 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.184 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404205.0677829, e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.185 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.187 226310 INFO nova.compute.manager [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Took 6.95 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.188 226310 DEBUG nova.compute.manager [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.201 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.205 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.235 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.246 226310 INFO nova.compute.manager [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Took 7.96 seconds to build instance.#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.266 226310 DEBUG oslo_concurrency.lockutils [None req-4b45bf81-78f4-498c-b0fb-11ca5bdc038d 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:45 np0005539564 nova_compute[226295]: 2025-11-29 08:16:45.391 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:46.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:46 np0005539564 nova_compute[226295]: 2025-11-29 08:16:46.271 226310 INFO nova.compute.manager [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Rescuing#033[00m
Nov 29 03:16:46 np0005539564 nova_compute[226295]: 2025-11-29 08:16:46.272 226310 DEBUG oslo_concurrency.lockutils [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquiring lock "refresh_cache-e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:16:46 np0005539564 nova_compute[226295]: 2025-11-29 08:16:46.272 226310 DEBUG oslo_concurrency.lockutils [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquired lock "refresh_cache-e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:16:46 np0005539564 nova_compute[226295]: 2025-11-29 08:16:46.273 226310 DEBUG nova.network.neutron [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:16:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:46.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:46 np0005539564 nova_compute[226295]: 2025-11-29 08:16:46.912 226310 DEBUG nova.compute.manager [req-4c61b485-c4a1-4afc-8e7c-b4d2a2f9784f req-8a3d0d91-f857-430c-bd8e-ba8b418922bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Received event network-vif-plugged-2ff764b3-2d67-48de-9969-8f1890b429c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:46 np0005539564 nova_compute[226295]: 2025-11-29 08:16:46.913 226310 DEBUG oslo_concurrency.lockutils [req-4c61b485-c4a1-4afc-8e7c-b4d2a2f9784f req-8a3d0d91-f857-430c-bd8e-ba8b418922bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:46 np0005539564 nova_compute[226295]: 2025-11-29 08:16:46.913 226310 DEBUG oslo_concurrency.lockutils [req-4c61b485-c4a1-4afc-8e7c-b4d2a2f9784f req-8a3d0d91-f857-430c-bd8e-ba8b418922bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:46 np0005539564 nova_compute[226295]: 2025-11-29 08:16:46.914 226310 DEBUG oslo_concurrency.lockutils [req-4c61b485-c4a1-4afc-8e7c-b4d2a2f9784f req-8a3d0d91-f857-430c-bd8e-ba8b418922bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:46 np0005539564 nova_compute[226295]: 2025-11-29 08:16:46.914 226310 DEBUG nova.compute.manager [req-4c61b485-c4a1-4afc-8e7c-b4d2a2f9784f req-8a3d0d91-f857-430c-bd8e-ba8b418922bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] No waiting events found dispatching network-vif-plugged-2ff764b3-2d67-48de-9969-8f1890b429c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:46 np0005539564 nova_compute[226295]: 2025-11-29 08:16:46.914 226310 WARNING nova.compute.manager [req-4c61b485-c4a1-4afc-8e7c-b4d2a2f9784f req-8a3d0d91-f857-430c-bd8e-ba8b418922bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Received unexpected event network-vif-plugged-2ff764b3-2d67-48de-9969-8f1890b429c4 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:16:47 np0005539564 nova_compute[226295]: 2025-11-29 08:16:47.274 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:47 np0005539564 nova_compute[226295]: 2025-11-29 08:16:47.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:48 np0005539564 nova_compute[226295]: 2025-11-29 08:16:48.034 226310 DEBUG nova.network.neutron [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Updating instance_info_cache with network_info: [{"id": "2ff764b3-2d67-48de-9969-8f1890b429c4", "address": "fa:16:3e:65:f9:6a", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ff764b3-2d", "ovs_interfaceid": "2ff764b3-2d67-48de-9969-8f1890b429c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:48 np0005539564 nova_compute[226295]: 2025-11-29 08:16:48.058 226310 DEBUG oslo_concurrency.lockutils [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Releasing lock "refresh_cache-e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:16:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:48.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:48 np0005539564 nova_compute[226295]: 2025-11-29 08:16:48.407 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:48.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:48 np0005539564 nova_compute[226295]: 2025-11-29 08:16:48.679 226310 DEBUG nova.virt.libvirt.driver [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:16:49 np0005539564 nova_compute[226295]: 2025-11-29 08:16:49.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:50.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:50 np0005539564 nova_compute[226295]: 2025-11-29 08:16:50.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:50 np0005539564 nova_compute[226295]: 2025-11-29 08:16:50.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:16:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:50.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:51 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:52.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:52 np0005539564 nova_compute[226295]: 2025-11-29 08:16:52.276 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:52 np0005539564 nova_compute[226295]: 2025-11-29 08:16:52.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:52 np0005539564 nova_compute[226295]: 2025-11-29 08:16:52.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:16:52 np0005539564 nova_compute[226295]: 2025-11-29 08:16:52.379 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:16:52 np0005539564 nova_compute[226295]: 2025-11-29 08:16:52.380 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:52 np0005539564 nova_compute[226295]: 2025-11-29 08:16:52.380 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:16:52 np0005539564 nova_compute[226295]: 2025-11-29 08:16:52.400 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:16:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:52.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:53 np0005539564 nova_compute[226295]: 2025-11-29 08:16:53.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:53 np0005539564 nova_compute[226295]: 2025-11-29 08:16:53.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:53 np0005539564 nova_compute[226295]: 2025-11-29 08:16:53.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:16:53 np0005539564 nova_compute[226295]: 2025-11-29 08:16:53.410 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:54.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:54.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:55 np0005539564 nova_compute[226295]: 2025-11-29 08:16:55.364 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:55 np0005539564 nova_compute[226295]: 2025-11-29 08:16:55.365 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:56 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:56.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:56.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:57 np0005539564 nova_compute[226295]: 2025-11-29 08:16:57.307 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:58.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:58 np0005539564 nova_compute[226295]: 2025-11-29 08:16:58.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:58 np0005539564 nova_compute[226295]: 2025-11-29 08:16:58.387 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:58 np0005539564 nova_compute[226295]: 2025-11-29 08:16:58.388 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:58 np0005539564 nova_compute[226295]: 2025-11-29 08:16:58.388 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:58 np0005539564 nova_compute[226295]: 2025-11-29 08:16:58.389 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:16:58 np0005539564 nova_compute[226295]: 2025-11-29 08:16:58.389 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:58 np0005539564 nova_compute[226295]: 2025-11-29 08:16:58.469 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:16:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:58.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:58 np0005539564 nova_compute[226295]: 2025-11-29 08:16:58.736 226310 DEBUG nova.virt.libvirt.driver [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:16:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:58 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2087445897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:58 np0005539564 nova_compute[226295]: 2025-11-29 08:16:58.919 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:59 np0005539564 nova_compute[226295]: 2025-11-29 08:16:59.012 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:16:59 np0005539564 nova_compute[226295]: 2025-11-29 08:16:59.012 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:16:59 np0005539564 nova_compute[226295]: 2025-11-29 08:16:59.237 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:16:59 np0005539564 nova_compute[226295]: 2025-11-29 08:16:59.238 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4237MB free_disk=20.891677856445312GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:16:59 np0005539564 nova_compute[226295]: 2025-11-29 08:16:59.238 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:59 np0005539564 nova_compute[226295]: 2025-11-29 08:16:59.238 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:59 np0005539564 nova_compute[226295]: 2025-11-29 08:16:59.422 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:16:59 np0005539564 nova_compute[226295]: 2025-11-29 08:16:59.422 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:16:59 np0005539564 nova_compute[226295]: 2025-11-29 08:16:59.422 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:16:59 np0005539564 nova_compute[226295]: 2025-11-29 08:16:59.506 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:59 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2946922595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:59 np0005539564 nova_compute[226295]: 2025-11-29 08:16:59.987 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:59 np0005539564 nova_compute[226295]: 2025-11-29 08:16:59.994 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:17:00 np0005539564 nova_compute[226295]: 2025-11-29 08:17:00.013 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:17:00 np0005539564 nova_compute[226295]: 2025-11-29 08:17:00.041 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:17:00 np0005539564 nova_compute[226295]: 2025-11-29 08:17:00.042 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:00.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:00.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:01 np0005539564 nova_compute[226295]: 2025-11-29 08:17:01.754 226310 INFO nova.virt.libvirt.driver [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Instance shutdown successfully after 13 seconds.#033[00m
Nov 29 03:17:01 np0005539564 kernel: tap2ff764b3-2d (unregistering): left promiscuous mode
Nov 29 03:17:01 np0005539564 NetworkManager[48997]: <info>  [1764404221.8635] device (tap2ff764b3-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:17:01 np0005539564 ovn_controller[130591]: 2025-11-29T08:17:01Z|00447|binding|INFO|Releasing lport 2ff764b3-2d67-48de-9969-8f1890b429c4 from this chassis (sb_readonly=0)
Nov 29 03:17:01 np0005539564 ovn_controller[130591]: 2025-11-29T08:17:01Z|00448|binding|INFO|Setting lport 2ff764b3-2d67-48de-9969-8f1890b429c4 down in Southbound
Nov 29 03:17:01 np0005539564 ovn_controller[130591]: 2025-11-29T08:17:01Z|00449|binding|INFO|Removing iface tap2ff764b3-2d ovn-installed in OVS
Nov 29 03:17:01 np0005539564 nova_compute[226295]: 2025-11-29 08:17:01.872 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:01.887 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:f9:6a 10.100.0.14'], port_security=['fa:16:3e:65:f9:6a 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2a48f340-3ab0-428a-8b80-75fcf0f9f3f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6025758b69854406b221c47d9ef59dea', 'neutron:revision_number': '4', 'neutron:security_group_ids': '75bc8da6-dde6-455d-b531-bf85392bb032', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=344a2810-fa48-40b2-8837-e84899a18cc0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=2ff764b3-2d67-48de-9969-8f1890b429c4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:01.888 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 2ff764b3-2d67-48de-9969-8f1890b429c4 in datapath 2a48f340-3ab0-428a-8b80-75fcf0f9f3f2 unbound from our chassis#033[00m
Nov 29 03:17:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:01.889 139780 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2a48f340-3ab0-428a-8b80-75fcf0f9f3f2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:17:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:01.890 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[05c6de9a-397d-4a32-9ae1-223185960123]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:01 np0005539564 nova_compute[226295]: 2025-11-29 08:17:01.906 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:01 np0005539564 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Nov 29 03:17:01 np0005539564 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000007a.scope: Consumed 14.656s CPU time.
Nov 29 03:17:01 np0005539564 systemd-machined[190128]: Machine qemu-53-instance-0000007a terminated.
Nov 29 03:17:01 np0005539564 nova_compute[226295]: 2025-11-29 08:17:01.986 226310 INFO nova.virt.libvirt.driver [-] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Instance destroyed successfully.#033[00m
Nov 29 03:17:01 np0005539564 nova_compute[226295]: 2025-11-29 08:17:01.987 226310 DEBUG nova.objects.instance [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lazy-loading 'numa_topology' on Instance uuid e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.009 226310 INFO nova.virt.libvirt.driver [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Attempting rescue#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.010 226310 DEBUG nova.virt.libvirt.driver [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.016 226310 DEBUG nova.virt.libvirt.driver [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.016 226310 INFO nova.virt.libvirt.driver [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Creating image(s)#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.041 226310 DEBUG nova.storage.rbd_utils [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rbd image e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.044 226310 DEBUG nova.objects.instance [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lazy-loading 'trusted_certs' on Instance uuid e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.089 226310 DEBUG nova.storage.rbd_utils [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rbd image e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.129 226310 DEBUG nova.storage.rbd_utils [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rbd image e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.134 226310 DEBUG oslo_concurrency.processutils [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.175 226310 DEBUG nova.compute.manager [req-52a13d20-e448-4735-a070-3892c221467b req-657abf93-0cf2-40ad-aeb7-3b730e829d95 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Received event network-vif-unplugged-2ff764b3-2d67-48de-9969-8f1890b429c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.176 226310 DEBUG oslo_concurrency.lockutils [req-52a13d20-e448-4735-a070-3892c221467b req-657abf93-0cf2-40ad-aeb7-3b730e829d95 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.177 226310 DEBUG oslo_concurrency.lockutils [req-52a13d20-e448-4735-a070-3892c221467b req-657abf93-0cf2-40ad-aeb7-3b730e829d95 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.177 226310 DEBUG oslo_concurrency.lockutils [req-52a13d20-e448-4735-a070-3892c221467b req-657abf93-0cf2-40ad-aeb7-3b730e829d95 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.178 226310 DEBUG nova.compute.manager [req-52a13d20-e448-4735-a070-3892c221467b req-657abf93-0cf2-40ad-aeb7-3b730e829d95 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] No waiting events found dispatching network-vif-unplugged-2ff764b3-2d67-48de-9969-8f1890b429c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.178 226310 WARNING nova.compute.manager [req-52a13d20-e448-4735-a070-3892c221467b req-657abf93-0cf2-40ad-aeb7-3b730e829d95 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Received unexpected event network-vif-unplugged-2ff764b3-2d67-48de-9969-8f1890b429c4 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.210 226310 DEBUG oslo_concurrency.processutils [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.210 226310 DEBUG oslo_concurrency.lockutils [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.211 226310 DEBUG oslo_concurrency.lockutils [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.212 226310 DEBUG oslo_concurrency.lockutils [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:02.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.253 226310 DEBUG nova.storage.rbd_utils [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rbd image e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.258 226310 DEBUG oslo_concurrency.processutils [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.348 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:02.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.577 226310 DEBUG oslo_concurrency.processutils [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.578 226310 DEBUG nova.objects.instance [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lazy-loading 'migration_context' on Instance uuid e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.593 226310 DEBUG nova.virt.libvirt.driver [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.594 226310 DEBUG nova.virt.libvirt.driver [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Start _get_guest_xml network_info=[{"id": "2ff764b3-2d67-48de-9969-8f1890b429c4", "address": "fa:16:3e:65:f9:6a", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1392053617-network", "vif_mac": "fa:16:3e:65:f9:6a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ff764b3-2d", "ovs_interfaceid": "2ff764b3-2d67-48de-9969-8f1890b429c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.595 226310 DEBUG nova.objects.instance [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lazy-loading 'resources' on Instance uuid e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.613 226310 WARNING nova.virt.libvirt.driver [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.619 226310 DEBUG nova.virt.libvirt.host [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.620 226310 DEBUG nova.virt.libvirt.host [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.624 226310 DEBUG nova.virt.libvirt.host [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.625 226310 DEBUG nova.virt.libvirt.host [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.627 226310 DEBUG nova.virt.libvirt.driver [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.627 226310 DEBUG nova.virt.hardware [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.627 226310 DEBUG nova.virt.hardware [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.628 226310 DEBUG nova.virt.hardware [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.628 226310 DEBUG nova.virt.hardware [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.628 226310 DEBUG nova.virt.hardware [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.628 226310 DEBUG nova.virt.hardware [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.629 226310 DEBUG nova.virt.hardware [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.629 226310 DEBUG nova.virt.hardware [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.629 226310 DEBUG nova.virt.hardware [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.629 226310 DEBUG nova.virt.hardware [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.629 226310 DEBUG nova.virt.hardware [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.630 226310 DEBUG nova.objects.instance [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lazy-loading 'vcpu_model' on Instance uuid e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:02 np0005539564 nova_compute[226295]: 2025-11-29 08:17:02.670 226310 DEBUG oslo_concurrency.processutils [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:17:03 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/223840700' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:17:03 np0005539564 nova_compute[226295]: 2025-11-29 08:17:03.204 226310 DEBUG oslo_concurrency.processutils [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:03 np0005539564 nova_compute[226295]: 2025-11-29 08:17:03.206 226310 DEBUG oslo_concurrency.processutils [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:03 np0005539564 nova_compute[226295]: 2025-11-29 08:17:03.511 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:17:03 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/859391372' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:17:03 np0005539564 nova_compute[226295]: 2025-11-29 08:17:03.708 226310 DEBUG oslo_concurrency.processutils [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:03 np0005539564 nova_compute[226295]: 2025-11-29 08:17:03.710 226310 DEBUG oslo_concurrency.processutils [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:03.731 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:03.731 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:03.731 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:17:04 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/319407602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:17:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:04.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:04 np0005539564 nova_compute[226295]: 2025-11-29 08:17:04.242 226310 DEBUG oslo_concurrency.processutils [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:04 np0005539564 nova_compute[226295]: 2025-11-29 08:17:04.244 226310 DEBUG nova.virt.libvirt.vif [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:16:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1303601308',display_name='tempest-ServerRescueTestJSON-server-1303601308',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1303601308',id=122,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:16:45Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6025758b69854406b221c47d9ef59dea',ramdisk_id='',reservation_id='r-pgr069bs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_m
in_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-597299129',owner_user_name='tempest-ServerRescueTestJSON-597299129-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:45Z,user_data=None,user_id='7bb4a89eea4e4166a7a1c5e3135cb182',uuid=e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ff764b3-2d67-48de-9969-8f1890b429c4", "address": "fa:16:3e:65:f9:6a", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1392053617-network", "vif_mac": "fa:16:3e:65:f9:6a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ff764b3-2d", "ovs_interfaceid": "2ff764b3-2d67-48de-9969-8f1890b429c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:17:04 np0005539564 nova_compute[226295]: 2025-11-29 08:17:04.245 226310 DEBUG nova.network.os_vif_util [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Converting VIF {"id": "2ff764b3-2d67-48de-9969-8f1890b429c4", "address": "fa:16:3e:65:f9:6a", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1392053617-network", "vif_mac": "fa:16:3e:65:f9:6a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ff764b3-2d", "ovs_interfaceid": "2ff764b3-2d67-48de-9969-8f1890b429c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:04 np0005539564 nova_compute[226295]: 2025-11-29 08:17:04.246 226310 DEBUG nova.network.os_vif_util [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:f9:6a,bridge_name='br-int',has_traffic_filtering=True,id=2ff764b3-2d67-48de-9969-8f1890b429c4,network=Network(2a48f340-3ab0-428a-8b80-75fcf0f9f3f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ff764b3-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:04 np0005539564 nova_compute[226295]: 2025-11-29 08:17:04.247 226310 DEBUG nova.objects.instance [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lazy-loading 'pci_devices' on Instance uuid e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:04 np0005539564 nova_compute[226295]: 2025-11-29 08:17:04.300 226310 DEBUG nova.virt.libvirt.driver [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:17:04 np0005539564 nova_compute[226295]:  <uuid>e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3</uuid>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:  <name>instance-0000007a</name>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerRescueTestJSON-server-1303601308</nova:name>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:17:02</nova:creationTime>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:17:04 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:        <nova:user uuid="7bb4a89eea4e4166a7a1c5e3135cb182">tempest-ServerRescueTestJSON-597299129-project-member</nova:user>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:        <nova:project uuid="6025758b69854406b221c47d9ef59dea">tempest-ServerRescueTestJSON-597299129</nova:project>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:        <nova:port uuid="2ff764b3-2d67-48de-9969-8f1890b429c4">
Nov 29 03:17:04 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <entry name="serial">e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3</entry>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <entry name="uuid">e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3</entry>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_disk.rescue">
Nov 29 03:17:04 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:17:04 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_disk">
Nov 29 03:17:04 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:17:04 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_disk.config.rescue">
Nov 29 03:17:04 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:17:04 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:65:f9:6a"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <target dev="tap2ff764b3-2d"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3/console.log" append="off"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:17:04 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:17:04 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:17:04 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:17:04 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:17:04 np0005539564 nova_compute[226295]: 2025-11-29 08:17:04.309 226310 INFO nova.virt.libvirt.driver [-] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Instance destroyed successfully.#033[00m
Nov 29 03:17:04 np0005539564 nova_compute[226295]: 2025-11-29 08:17:04.336 226310 DEBUG nova.compute.manager [req-20449445-ba55-4005-bc7d-e022920e2ad6 req-79495578-f69e-4ce4-bba7-3635e1ea014c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Received event network-vif-plugged-2ff764b3-2d67-48de-9969-8f1890b429c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:04 np0005539564 nova_compute[226295]: 2025-11-29 08:17:04.336 226310 DEBUG oslo_concurrency.lockutils [req-20449445-ba55-4005-bc7d-e022920e2ad6 req-79495578-f69e-4ce4-bba7-3635e1ea014c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:04 np0005539564 nova_compute[226295]: 2025-11-29 08:17:04.337 226310 DEBUG oslo_concurrency.lockutils [req-20449445-ba55-4005-bc7d-e022920e2ad6 req-79495578-f69e-4ce4-bba7-3635e1ea014c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:04 np0005539564 nova_compute[226295]: 2025-11-29 08:17:04.337 226310 DEBUG oslo_concurrency.lockutils [req-20449445-ba55-4005-bc7d-e022920e2ad6 req-79495578-f69e-4ce4-bba7-3635e1ea014c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:04 np0005539564 nova_compute[226295]: 2025-11-29 08:17:04.338 226310 DEBUG nova.compute.manager [req-20449445-ba55-4005-bc7d-e022920e2ad6 req-79495578-f69e-4ce4-bba7-3635e1ea014c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] No waiting events found dispatching network-vif-plugged-2ff764b3-2d67-48de-9969-8f1890b429c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:04 np0005539564 nova_compute[226295]: 2025-11-29 08:17:04.338 226310 WARNING nova.compute.manager [req-20449445-ba55-4005-bc7d-e022920e2ad6 req-79495578-f69e-4ce4-bba7-3635e1ea014c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Received unexpected event network-vif-plugged-2ff764b3-2d67-48de-9969-8f1890b429c4 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:17:04 np0005539564 nova_compute[226295]: 2025-11-29 08:17:04.377 226310 DEBUG nova.virt.libvirt.driver [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:17:04 np0005539564 nova_compute[226295]: 2025-11-29 08:17:04.378 226310 DEBUG nova.virt.libvirt.driver [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:17:04 np0005539564 nova_compute[226295]: 2025-11-29 08:17:04.378 226310 DEBUG nova.virt.libvirt.driver [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:17:04 np0005539564 nova_compute[226295]: 2025-11-29 08:17:04.378 226310 DEBUG nova.virt.libvirt.driver [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] No VIF found with MAC fa:16:3e:65:f9:6a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:17:04 np0005539564 nova_compute[226295]: 2025-11-29 08:17:04.379 226310 INFO nova.virt.libvirt.driver [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Using config drive#033[00m
Nov 29 03:17:04 np0005539564 nova_compute[226295]: 2025-11-29 08:17:04.409 226310 DEBUG nova.storage.rbd_utils [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rbd image e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:04 np0005539564 nova_compute[226295]: 2025-11-29 08:17:04.441 226310 DEBUG nova.objects.instance [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lazy-loading 'ec2_ids' on Instance uuid e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:04 np0005539564 nova_compute[226295]: 2025-11-29 08:17:04.484 226310 DEBUG nova.objects.instance [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lazy-loading 'keypairs' on Instance uuid e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:04.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:04 np0005539564 nova_compute[226295]: 2025-11-29 08:17:04.862 226310 INFO nova.virt.libvirt.driver [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Creating config drive at /var/lib/nova/instances/e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3/disk.config.rescue#033[00m
Nov 29 03:17:04 np0005539564 nova_compute[226295]: 2025-11-29 08:17:04.873 226310 DEBUG oslo_concurrency.processutils [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq399fi7k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:05 np0005539564 nova_compute[226295]: 2025-11-29 08:17:05.023 226310 DEBUG oslo_concurrency.processutils [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq399fi7k" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:05 np0005539564 nova_compute[226295]: 2025-11-29 08:17:05.062 226310 DEBUG nova.storage.rbd_utils [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rbd image e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:05 np0005539564 nova_compute[226295]: 2025-11-29 08:17:05.069 226310 DEBUG oslo_concurrency.processutils [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3/disk.config.rescue e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:05 np0005539564 nova_compute[226295]: 2025-11-29 08:17:05.267 226310 DEBUG oslo_concurrency.processutils [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3/disk.config.rescue e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:05 np0005539564 nova_compute[226295]: 2025-11-29 08:17:05.272 226310 INFO nova.virt.libvirt.driver [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Deleting local config drive /var/lib/nova/instances/e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3/disk.config.rescue because it was imported into RBD.#033[00m
Nov 29 03:17:05 np0005539564 kernel: tap2ff764b3-2d: entered promiscuous mode
Nov 29 03:17:05 np0005539564 NetworkManager[48997]: <info>  [1764404225.3369] manager: (tap2ff764b3-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/227)
Nov 29 03:17:05 np0005539564 ovn_controller[130591]: 2025-11-29T08:17:05Z|00450|binding|INFO|Claiming lport 2ff764b3-2d67-48de-9969-8f1890b429c4 for this chassis.
Nov 29 03:17:05 np0005539564 ovn_controller[130591]: 2025-11-29T08:17:05Z|00451|binding|INFO|2ff764b3-2d67-48de-9969-8f1890b429c4: Claiming fa:16:3e:65:f9:6a 10.100.0.14
Nov 29 03:17:05 np0005539564 nova_compute[226295]: 2025-11-29 08:17:05.342 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:05.355 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:f9:6a 10.100.0.14'], port_security=['fa:16:3e:65:f9:6a 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2a48f340-3ab0-428a-8b80-75fcf0f9f3f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6025758b69854406b221c47d9ef59dea', 'neutron:revision_number': '5', 'neutron:security_group_ids': '75bc8da6-dde6-455d-b531-bf85392bb032', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=344a2810-fa48-40b2-8837-e84899a18cc0, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=2ff764b3-2d67-48de-9969-8f1890b429c4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:05.357 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 2ff764b3-2d67-48de-9969-8f1890b429c4 in datapath 2a48f340-3ab0-428a-8b80-75fcf0f9f3f2 bound to our chassis#033[00m
Nov 29 03:17:05 np0005539564 ovn_controller[130591]: 2025-11-29T08:17:05Z|00452|binding|INFO|Setting lport 2ff764b3-2d67-48de-9969-8f1890b429c4 up in Southbound
Nov 29 03:17:05 np0005539564 nova_compute[226295]: 2025-11-29 08:17:05.357 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:05 np0005539564 ovn_controller[130591]: 2025-11-29T08:17:05Z|00453|binding|INFO|Setting lport 2ff764b3-2d67-48de-9969-8f1890b429c4 ovn-installed in OVS
Nov 29 03:17:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:05.358 139780 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2a48f340-3ab0-428a-8b80-75fcf0f9f3f2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:17:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:05.358 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a9c3253a-e0fd-4026-ad07-4e693c662487]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:05 np0005539564 nova_compute[226295]: 2025-11-29 08:17:05.361 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:05 np0005539564 systemd-udevd[272528]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:17:05 np0005539564 systemd-machined[190128]: New machine qemu-54-instance-0000007a.
Nov 29 03:17:05 np0005539564 NetworkManager[48997]: <info>  [1764404225.3872] device (tap2ff764b3-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:17:05 np0005539564 NetworkManager[48997]: <info>  [1764404225.3882] device (tap2ff764b3-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:17:05 np0005539564 systemd[1]: Started Virtual Machine qemu-54-instance-0000007a.
Nov 29 03:17:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:06.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:06 np0005539564 nova_compute[226295]: 2025-11-29 08:17:06.464 226310 DEBUG nova.compute.manager [req-29787a0a-0dd1-458c-93ce-7f481293c293 req-d7514b8b-80d1-4f0d-9d90-17c1b8175fe4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Received event network-vif-plugged-2ff764b3-2d67-48de-9969-8f1890b429c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:06 np0005539564 nova_compute[226295]: 2025-11-29 08:17:06.465 226310 DEBUG oslo_concurrency.lockutils [req-29787a0a-0dd1-458c-93ce-7f481293c293 req-d7514b8b-80d1-4f0d-9d90-17c1b8175fe4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:06 np0005539564 nova_compute[226295]: 2025-11-29 08:17:06.466 226310 DEBUG oslo_concurrency.lockutils [req-29787a0a-0dd1-458c-93ce-7f481293c293 req-d7514b8b-80d1-4f0d-9d90-17c1b8175fe4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:06 np0005539564 nova_compute[226295]: 2025-11-29 08:17:06.466 226310 DEBUG oslo_concurrency.lockutils [req-29787a0a-0dd1-458c-93ce-7f481293c293 req-d7514b8b-80d1-4f0d-9d90-17c1b8175fe4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:06 np0005539564 nova_compute[226295]: 2025-11-29 08:17:06.466 226310 DEBUG nova.compute.manager [req-29787a0a-0dd1-458c-93ce-7f481293c293 req-d7514b8b-80d1-4f0d-9d90-17c1b8175fe4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] No waiting events found dispatching network-vif-plugged-2ff764b3-2d67-48de-9969-8f1890b429c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:06 np0005539564 nova_compute[226295]: 2025-11-29 08:17:06.466 226310 WARNING nova.compute.manager [req-29787a0a-0dd1-458c-93ce-7f481293c293 req-d7514b8b-80d1-4f0d-9d90-17c1b8175fe4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Received unexpected event network-vif-plugged-2ff764b3-2d67-48de-9969-8f1890b429c4 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:17:06 np0005539564 nova_compute[226295]: 2025-11-29 08:17:06.467 226310 DEBUG nova.compute.manager [req-29787a0a-0dd1-458c-93ce-7f481293c293 req-d7514b8b-80d1-4f0d-9d90-17c1b8175fe4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Received event network-vif-plugged-2ff764b3-2d67-48de-9969-8f1890b429c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:06 np0005539564 nova_compute[226295]: 2025-11-29 08:17:06.467 226310 DEBUG oslo_concurrency.lockutils [req-29787a0a-0dd1-458c-93ce-7f481293c293 req-d7514b8b-80d1-4f0d-9d90-17c1b8175fe4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:06 np0005539564 nova_compute[226295]: 2025-11-29 08:17:06.467 226310 DEBUG oslo_concurrency.lockutils [req-29787a0a-0dd1-458c-93ce-7f481293c293 req-d7514b8b-80d1-4f0d-9d90-17c1b8175fe4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:06 np0005539564 nova_compute[226295]: 2025-11-29 08:17:06.467 226310 DEBUG oslo_concurrency.lockutils [req-29787a0a-0dd1-458c-93ce-7f481293c293 req-d7514b8b-80d1-4f0d-9d90-17c1b8175fe4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:06 np0005539564 nova_compute[226295]: 2025-11-29 08:17:06.467 226310 DEBUG nova.compute.manager [req-29787a0a-0dd1-458c-93ce-7f481293c293 req-d7514b8b-80d1-4f0d-9d90-17c1b8175fe4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] No waiting events found dispatching network-vif-plugged-2ff764b3-2d67-48de-9969-8f1890b429c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:06 np0005539564 nova_compute[226295]: 2025-11-29 08:17:06.468 226310 WARNING nova.compute.manager [req-29787a0a-0dd1-458c-93ce-7f481293c293 req-d7514b8b-80d1-4f0d-9d90-17c1b8175fe4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Received unexpected event network-vif-plugged-2ff764b3-2d67-48de-9969-8f1890b429c4 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:17:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:06.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:06 np0005539564 podman[272615]: 2025-11-29 08:17:06.73804159 +0000 UTC m=+0.065523599 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:17:06 np0005539564 podman[272598]: 2025-11-29 08:17:06.770820574 +0000 UTC m=+0.098740254 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:17:06 np0005539564 podman[272613]: 2025-11-29 08:17:06.775172692 +0000 UTC m=+0.103597086 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:17:06 np0005539564 nova_compute[226295]: 2025-11-29 08:17:06.804 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Removed pending event for e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:17:06 np0005539564 nova_compute[226295]: 2025-11-29 08:17:06.805 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404226.8042421, e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:06 np0005539564 nova_compute[226295]: 2025-11-29 08:17:06.805 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:17:06 np0005539564 nova_compute[226295]: 2025-11-29 08:17:06.809 226310 DEBUG nova.compute.manager [None req-ad36cb7a-a833-42fc-9449-e3e55b0305e8 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:06 np0005539564 nova_compute[226295]: 2025-11-29 08:17:06.848 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:06 np0005539564 nova_compute[226295]: 2025-11-29 08:17:06.852 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:17:06 np0005539564 nova_compute[226295]: 2025-11-29 08:17:06.874 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Nov 29 03:17:06 np0005539564 nova_compute[226295]: 2025-11-29 08:17:06.874 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404226.8045056, e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:06 np0005539564 nova_compute[226295]: 2025-11-29 08:17:06.875 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] VM Started (Lifecycle Event)#033[00m
Nov 29 03:17:06 np0005539564 nova_compute[226295]: 2025-11-29 08:17:06.898 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:06 np0005539564 nova_compute[226295]: 2025-11-29 08:17:06.902 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:17:07 np0005539564 nova_compute[226295]: 2025-11-29 08:17:07.350 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:08 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:17:08 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:17:08 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:17:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:08.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:08 np0005539564 nova_compute[226295]: 2025-11-29 08:17:08.273 226310 INFO nova.compute.manager [None req-a8f238c2-7e6d-4f8b-84ad-d10997344863 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Unrescuing#033[00m
Nov 29 03:17:08 np0005539564 nova_compute[226295]: 2025-11-29 08:17:08.274 226310 DEBUG oslo_concurrency.lockutils [None req-a8f238c2-7e6d-4f8b-84ad-d10997344863 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquiring lock "refresh_cache-e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:17:08 np0005539564 nova_compute[226295]: 2025-11-29 08:17:08.274 226310 DEBUG oslo_concurrency.lockutils [None req-a8f238c2-7e6d-4f8b-84ad-d10997344863 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquired lock "refresh_cache-e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:17:08 np0005539564 nova_compute[226295]: 2025-11-29 08:17:08.275 226310 DEBUG nova.network.neutron [None req-a8f238c2-7e6d-4f8b-84ad-d10997344863 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:17:08 np0005539564 nova_compute[226295]: 2025-11-29 08:17:08.514 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:08.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:10 np0005539564 nova_compute[226295]: 2025-11-29 08:17:10.086 226310 DEBUG nova.network.neutron [None req-a8f238c2-7e6d-4f8b-84ad-d10997344863 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Updating instance_info_cache with network_info: [{"id": "2ff764b3-2d67-48de-9969-8f1890b429c4", "address": "fa:16:3e:65:f9:6a", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ff764b3-2d", "ovs_interfaceid": "2ff764b3-2d67-48de-9969-8f1890b429c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:10 np0005539564 nova_compute[226295]: 2025-11-29 08:17:10.102 226310 DEBUG oslo_concurrency.lockutils [None req-a8f238c2-7e6d-4f8b-84ad-d10997344863 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Releasing lock "refresh_cache-e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:17:10 np0005539564 nova_compute[226295]: 2025-11-29 08:17:10.103 226310 DEBUG nova.objects.instance [None req-a8f238c2-7e6d-4f8b-84ad-d10997344863 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lazy-loading 'flavor' on Instance uuid e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:10 np0005539564 kernel: tap2ff764b3-2d (unregistering): left promiscuous mode
Nov 29 03:17:10 np0005539564 NetworkManager[48997]: <info>  [1764404230.1808] device (tap2ff764b3-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:17:10 np0005539564 nova_compute[226295]: 2025-11-29 08:17:10.191 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:10 np0005539564 ovn_controller[130591]: 2025-11-29T08:17:10Z|00454|binding|INFO|Releasing lport 2ff764b3-2d67-48de-9969-8f1890b429c4 from this chassis (sb_readonly=0)
Nov 29 03:17:10 np0005539564 ovn_controller[130591]: 2025-11-29T08:17:10Z|00455|binding|INFO|Setting lport 2ff764b3-2d67-48de-9969-8f1890b429c4 down in Southbound
Nov 29 03:17:10 np0005539564 ovn_controller[130591]: 2025-11-29T08:17:10Z|00456|binding|INFO|Removing iface tap2ff764b3-2d ovn-installed in OVS
Nov 29 03:17:10 np0005539564 nova_compute[226295]: 2025-11-29 08:17:10.194 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:10.199 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:f9:6a 10.100.0.14'], port_security=['fa:16:3e:65:f9:6a 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2a48f340-3ab0-428a-8b80-75fcf0f9f3f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6025758b69854406b221c47d9ef59dea', 'neutron:revision_number': '6', 'neutron:security_group_ids': '75bc8da6-dde6-455d-b531-bf85392bb032', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=344a2810-fa48-40b2-8837-e84899a18cc0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=2ff764b3-2d67-48de-9969-8f1890b429c4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:10.201 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 2ff764b3-2d67-48de-9969-8f1890b429c4 in datapath 2a48f340-3ab0-428a-8b80-75fcf0f9f3f2 unbound from our chassis#033[00m
Nov 29 03:17:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:10.203 139780 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2a48f340-3ab0-428a-8b80-75fcf0f9f3f2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:17:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:10.205 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b0d2052a-2fb6-4ed6-83d2-e7d156b55673]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:10 np0005539564 nova_compute[226295]: 2025-11-29 08:17:10.225 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:10.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:10 np0005539564 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Nov 29 03:17:10 np0005539564 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000007a.scope: Consumed 4.799s CPU time.
Nov 29 03:17:10 np0005539564 systemd-machined[190128]: Machine qemu-54-instance-0000007a terminated.
Nov 29 03:17:10 np0005539564 nova_compute[226295]: 2025-11-29 08:17:10.371 226310 INFO nova.virt.libvirt.driver [-] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Instance destroyed successfully.#033[00m
Nov 29 03:17:10 np0005539564 nova_compute[226295]: 2025-11-29 08:17:10.372 226310 DEBUG nova.objects.instance [None req-a8f238c2-7e6d-4f8b-84ad-d10997344863 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lazy-loading 'numa_topology' on Instance uuid e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:10 np0005539564 kernel: tap2ff764b3-2d: entered promiscuous mode
Nov 29 03:17:10 np0005539564 systemd-udevd[272794]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:17:10 np0005539564 NetworkManager[48997]: <info>  [1764404230.4873] manager: (tap2ff764b3-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/228)
Nov 29 03:17:10 np0005539564 nova_compute[226295]: 2025-11-29 08:17:10.489 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:10 np0005539564 ovn_controller[130591]: 2025-11-29T08:17:10Z|00457|binding|INFO|Claiming lport 2ff764b3-2d67-48de-9969-8f1890b429c4 for this chassis.
Nov 29 03:17:10 np0005539564 ovn_controller[130591]: 2025-11-29T08:17:10Z|00458|binding|INFO|2ff764b3-2d67-48de-9969-8f1890b429c4: Claiming fa:16:3e:65:f9:6a 10.100.0.14
Nov 29 03:17:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:10.496 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:f9:6a 10.100.0.14'], port_security=['fa:16:3e:65:f9:6a 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2a48f340-3ab0-428a-8b80-75fcf0f9f3f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6025758b69854406b221c47d9ef59dea', 'neutron:revision_number': '6', 'neutron:security_group_ids': '75bc8da6-dde6-455d-b531-bf85392bb032', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=344a2810-fa48-40b2-8837-e84899a18cc0, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=2ff764b3-2d67-48de-9969-8f1890b429c4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:10 np0005539564 NetworkManager[48997]: <info>  [1764404230.5000] device (tap2ff764b3-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:17:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:10.499 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 2ff764b3-2d67-48de-9969-8f1890b429c4 in datapath 2a48f340-3ab0-428a-8b80-75fcf0f9f3f2 bound to our chassis#033[00m
Nov 29 03:17:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:10.500 139780 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2a48f340-3ab0-428a-8b80-75fcf0f9f3f2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:17:10 np0005539564 NetworkManager[48997]: <info>  [1764404230.5014] device (tap2ff764b3-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:17:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:10.501 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd797b8-8330-4865-8dac-377a348c87af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:10 np0005539564 ovn_controller[130591]: 2025-11-29T08:17:10Z|00459|binding|INFO|Setting lport 2ff764b3-2d67-48de-9969-8f1890b429c4 up in Southbound
Nov 29 03:17:10 np0005539564 ovn_controller[130591]: 2025-11-29T08:17:10Z|00460|binding|INFO|Setting lport 2ff764b3-2d67-48de-9969-8f1890b429c4 ovn-installed in OVS
Nov 29 03:17:10 np0005539564 nova_compute[226295]: 2025-11-29 08:17:10.523 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:10 np0005539564 nova_compute[226295]: 2025-11-29 08:17:10.525 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:10 np0005539564 nova_compute[226295]: 2025-11-29 08:17:10.527 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:10 np0005539564 systemd-machined[190128]: New machine qemu-55-instance-0000007a.
Nov 29 03:17:10 np0005539564 systemd[1]: Started Virtual Machine qemu-55-instance-0000007a.
Nov 29 03:17:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:10.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:10 np0005539564 nova_compute[226295]: 2025-11-29 08:17:10.790 226310 DEBUG nova.compute.manager [req-8d671274-53f7-480f-a7f0-81abc73b3136 req-f9c5635f-8b7e-4415-9df6-c36327858dd0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Received event network-vif-unplugged-2ff764b3-2d67-48de-9969-8f1890b429c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:10 np0005539564 nova_compute[226295]: 2025-11-29 08:17:10.790 226310 DEBUG oslo_concurrency.lockutils [req-8d671274-53f7-480f-a7f0-81abc73b3136 req-f9c5635f-8b7e-4415-9df6-c36327858dd0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:10 np0005539564 nova_compute[226295]: 2025-11-29 08:17:10.791 226310 DEBUG oslo_concurrency.lockutils [req-8d671274-53f7-480f-a7f0-81abc73b3136 req-f9c5635f-8b7e-4415-9df6-c36327858dd0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:10 np0005539564 nova_compute[226295]: 2025-11-29 08:17:10.791 226310 DEBUG oslo_concurrency.lockutils [req-8d671274-53f7-480f-a7f0-81abc73b3136 req-f9c5635f-8b7e-4415-9df6-c36327858dd0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:10 np0005539564 nova_compute[226295]: 2025-11-29 08:17:10.791 226310 DEBUG nova.compute.manager [req-8d671274-53f7-480f-a7f0-81abc73b3136 req-f9c5635f-8b7e-4415-9df6-c36327858dd0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] No waiting events found dispatching network-vif-unplugged-2ff764b3-2d67-48de-9969-8f1890b429c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:10 np0005539564 nova_compute[226295]: 2025-11-29 08:17:10.791 226310 WARNING nova.compute.manager [req-8d671274-53f7-480f-a7f0-81abc73b3136 req-f9c5635f-8b7e-4415-9df6-c36327858dd0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Received unexpected event network-vif-unplugged-2ff764b3-2d67-48de-9969-8f1890b429c4 for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:17:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:12 np0005539564 nova_compute[226295]: 2025-11-29 08:17:12.040 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Removed pending event for e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:17:12 np0005539564 nova_compute[226295]: 2025-11-29 08:17:12.041 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404232.0403802, e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:12 np0005539564 nova_compute[226295]: 2025-11-29 08:17:12.042 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:17:12 np0005539564 nova_compute[226295]: 2025-11-29 08:17:12.086 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:12 np0005539564 nova_compute[226295]: 2025-11-29 08:17:12.090 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:17:12 np0005539564 nova_compute[226295]: 2025-11-29 08:17:12.123 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:17:12 np0005539564 nova_compute[226295]: 2025-11-29 08:17:12.123 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404232.0451303, e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:12 np0005539564 nova_compute[226295]: 2025-11-29 08:17:12.124 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] VM Started (Lifecycle Event)#033[00m
Nov 29 03:17:12 np0005539564 nova_compute[226295]: 2025-11-29 08:17:12.143 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:12 np0005539564 nova_compute[226295]: 2025-11-29 08:17:12.148 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:17:12 np0005539564 nova_compute[226295]: 2025-11-29 08:17:12.171 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:17:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:12.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:12 np0005539564 nova_compute[226295]: 2025-11-29 08:17:12.352 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:12 np0005539564 nova_compute[226295]: 2025-11-29 08:17:12.519 226310 DEBUG nova.compute.manager [None req-a8f238c2-7e6d-4f8b-84ad-d10997344863 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:12.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:13 np0005539564 nova_compute[226295]: 2025-11-29 08:17:13.107 226310 DEBUG nova.compute.manager [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Received event network-vif-plugged-2ff764b3-2d67-48de-9969-8f1890b429c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:13 np0005539564 nova_compute[226295]: 2025-11-29 08:17:13.107 226310 DEBUG oslo_concurrency.lockutils [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:13 np0005539564 nova_compute[226295]: 2025-11-29 08:17:13.108 226310 DEBUG oslo_concurrency.lockutils [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:13 np0005539564 nova_compute[226295]: 2025-11-29 08:17:13.108 226310 DEBUG oslo_concurrency.lockutils [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:13 np0005539564 nova_compute[226295]: 2025-11-29 08:17:13.108 226310 DEBUG nova.compute.manager [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] No waiting events found dispatching network-vif-plugged-2ff764b3-2d67-48de-9969-8f1890b429c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:13 np0005539564 nova_compute[226295]: 2025-11-29 08:17:13.108 226310 WARNING nova.compute.manager [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Received unexpected event network-vif-plugged-2ff764b3-2d67-48de-9969-8f1890b429c4 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:17:13 np0005539564 nova_compute[226295]: 2025-11-29 08:17:13.109 226310 DEBUG nova.compute.manager [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Received event network-vif-plugged-2ff764b3-2d67-48de-9969-8f1890b429c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:13 np0005539564 nova_compute[226295]: 2025-11-29 08:17:13.109 226310 DEBUG oslo_concurrency.lockutils [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:13 np0005539564 nova_compute[226295]: 2025-11-29 08:17:13.109 226310 DEBUG oslo_concurrency.lockutils [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:13 np0005539564 nova_compute[226295]: 2025-11-29 08:17:13.110 226310 DEBUG oslo_concurrency.lockutils [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:13 np0005539564 nova_compute[226295]: 2025-11-29 08:17:13.110 226310 DEBUG nova.compute.manager [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] No waiting events found dispatching network-vif-plugged-2ff764b3-2d67-48de-9969-8f1890b429c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:13 np0005539564 nova_compute[226295]: 2025-11-29 08:17:13.110 226310 WARNING nova.compute.manager [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Received unexpected event network-vif-plugged-2ff764b3-2d67-48de-9969-8f1890b429c4 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:17:13 np0005539564 nova_compute[226295]: 2025-11-29 08:17:13.110 226310 DEBUG nova.compute.manager [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Received event network-vif-plugged-2ff764b3-2d67-48de-9969-8f1890b429c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:13 np0005539564 nova_compute[226295]: 2025-11-29 08:17:13.110 226310 DEBUG oslo_concurrency.lockutils [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:13 np0005539564 nova_compute[226295]: 2025-11-29 08:17:13.111 226310 DEBUG oslo_concurrency.lockutils [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:13 np0005539564 nova_compute[226295]: 2025-11-29 08:17:13.111 226310 DEBUG oslo_concurrency.lockutils [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:13 np0005539564 nova_compute[226295]: 2025-11-29 08:17:13.111 226310 DEBUG nova.compute.manager [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] No waiting events found dispatching network-vif-plugged-2ff764b3-2d67-48de-9969-8f1890b429c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:13 np0005539564 nova_compute[226295]: 2025-11-29 08:17:13.112 226310 WARNING nova.compute.manager [req-22cdedbf-cf11-42ee-9978-5b35fc09a08c req-5608d872-e4ca-4a6a-a43e-57d40e444564 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Received unexpected event network-vif-plugged-2ff764b3-2d67-48de-9969-8f1890b429c4 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:17:13 np0005539564 nova_compute[226295]: 2025-11-29 08:17:13.516 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:14 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:17:14 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:17:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:14.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:14.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:14 np0005539564 nova_compute[226295]: 2025-11-29 08:17:14.592 226310 DEBUG oslo_concurrency.lockutils [None req-613a2d63-5a9f-4e33-a26e-91d8cc0ae829 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquiring lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:14 np0005539564 nova_compute[226295]: 2025-11-29 08:17:14.593 226310 DEBUG oslo_concurrency.lockutils [None req-613a2d63-5a9f-4e33-a26e-91d8cc0ae829 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:14 np0005539564 nova_compute[226295]: 2025-11-29 08:17:14.594 226310 DEBUG oslo_concurrency.lockutils [None req-613a2d63-5a9f-4e33-a26e-91d8cc0ae829 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquiring lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:14 np0005539564 nova_compute[226295]: 2025-11-29 08:17:14.595 226310 DEBUG oslo_concurrency.lockutils [None req-613a2d63-5a9f-4e33-a26e-91d8cc0ae829 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:14 np0005539564 nova_compute[226295]: 2025-11-29 08:17:14.595 226310 DEBUG oslo_concurrency.lockutils [None req-613a2d63-5a9f-4e33-a26e-91d8cc0ae829 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:14 np0005539564 nova_compute[226295]: 2025-11-29 08:17:14.597 226310 INFO nova.compute.manager [None req-613a2d63-5a9f-4e33-a26e-91d8cc0ae829 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Terminating instance#033[00m
Nov 29 03:17:14 np0005539564 nova_compute[226295]: 2025-11-29 08:17:14.598 226310 DEBUG nova.compute.manager [None req-613a2d63-5a9f-4e33-a26e-91d8cc0ae829 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:17:14 np0005539564 kernel: tap2ff764b3-2d (unregistering): left promiscuous mode
Nov 29 03:17:14 np0005539564 NetworkManager[48997]: <info>  [1764404234.6392] device (tap2ff764b3-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:17:14 np0005539564 nova_compute[226295]: 2025-11-29 08:17:14.645 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:14 np0005539564 ovn_controller[130591]: 2025-11-29T08:17:14Z|00461|binding|INFO|Releasing lport 2ff764b3-2d67-48de-9969-8f1890b429c4 from this chassis (sb_readonly=0)
Nov 29 03:17:14 np0005539564 ovn_controller[130591]: 2025-11-29T08:17:14Z|00462|binding|INFO|Setting lport 2ff764b3-2d67-48de-9969-8f1890b429c4 down in Southbound
Nov 29 03:17:14 np0005539564 ovn_controller[130591]: 2025-11-29T08:17:14Z|00463|binding|INFO|Removing iface tap2ff764b3-2d ovn-installed in OVS
Nov 29 03:17:14 np0005539564 nova_compute[226295]: 2025-11-29 08:17:14.648 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:14.657 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:f9:6a 10.100.0.14'], port_security=['fa:16:3e:65:f9:6a 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2a48f340-3ab0-428a-8b80-75fcf0f9f3f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6025758b69854406b221c47d9ef59dea', 'neutron:revision_number': '8', 'neutron:security_group_ids': '75bc8da6-dde6-455d-b531-bf85392bb032', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=344a2810-fa48-40b2-8837-e84899a18cc0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=2ff764b3-2d67-48de-9969-8f1890b429c4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:14.659 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 2ff764b3-2d67-48de-9969-8f1890b429c4 in datapath 2a48f340-3ab0-428a-8b80-75fcf0f9f3f2 unbound from our chassis#033[00m
Nov 29 03:17:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:14.660 139780 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2a48f340-3ab0-428a-8b80-75fcf0f9f3f2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:17:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:14.661 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[79bddc36-2ec8-4268-9702-218f28e6678e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:14 np0005539564 nova_compute[226295]: 2025-11-29 08:17:14.682 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:14 np0005539564 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Nov 29 03:17:14 np0005539564 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000007a.scope: Consumed 4.216s CPU time.
Nov 29 03:17:14 np0005539564 systemd-machined[190128]: Machine qemu-55-instance-0000007a terminated.
Nov 29 03:17:14 np0005539564 nova_compute[226295]: 2025-11-29 08:17:14.828 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:14 np0005539564 nova_compute[226295]: 2025-11-29 08:17:14.837 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:14 np0005539564 nova_compute[226295]: 2025-11-29 08:17:14.842 226310 INFO nova.virt.libvirt.driver [-] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Instance destroyed successfully.#033[00m
Nov 29 03:17:14 np0005539564 nova_compute[226295]: 2025-11-29 08:17:14.842 226310 DEBUG nova.objects.instance [None req-613a2d63-5a9f-4e33-a26e-91d8cc0ae829 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lazy-loading 'resources' on Instance uuid e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:14 np0005539564 nova_compute[226295]: 2025-11-29 08:17:14.860 226310 DEBUG nova.virt.libvirt.vif [None req-613a2d63-5a9f-4e33-a26e-91d8cc0ae829 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:16:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1303601308',display_name='tempest-ServerRescueTestJSON-server-1303601308',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1303601308',id=122,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:17:06Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6025758b69854406b221c47d9ef59dea',ramdisk_id='',reservation_id='r-pgr069bs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-597299129',owner_user_name='tempest-ServerRescueTestJSON-597299129-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:17:12Z,user_data=None,user_id='7bb4a89eea4e4166a7a1c5e3135cb182',uuid=e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ff764b3-2d67-48de-9969-8f1890b429c4", "address": "fa:16:3e:65:f9:6a", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ff764b3-2d", "ovs_interfaceid": "2ff764b3-2d67-48de-9969-8f1890b429c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:17:14 np0005539564 nova_compute[226295]: 2025-11-29 08:17:14.860 226310 DEBUG nova.network.os_vif_util [None req-613a2d63-5a9f-4e33-a26e-91d8cc0ae829 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Converting VIF {"id": "2ff764b3-2d67-48de-9969-8f1890b429c4", "address": "fa:16:3e:65:f9:6a", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ff764b3-2d", "ovs_interfaceid": "2ff764b3-2d67-48de-9969-8f1890b429c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:14 np0005539564 nova_compute[226295]: 2025-11-29 08:17:14.861 226310 DEBUG nova.network.os_vif_util [None req-613a2d63-5a9f-4e33-a26e-91d8cc0ae829 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:f9:6a,bridge_name='br-int',has_traffic_filtering=True,id=2ff764b3-2d67-48de-9969-8f1890b429c4,network=Network(2a48f340-3ab0-428a-8b80-75fcf0f9f3f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ff764b3-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:14 np0005539564 nova_compute[226295]: 2025-11-29 08:17:14.861 226310 DEBUG os_vif [None req-613a2d63-5a9f-4e33-a26e-91d8cc0ae829 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:f9:6a,bridge_name='br-int',has_traffic_filtering=True,id=2ff764b3-2d67-48de-9969-8f1890b429c4,network=Network(2a48f340-3ab0-428a-8b80-75fcf0f9f3f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ff764b3-2d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:17:14 np0005539564 nova_compute[226295]: 2025-11-29 08:17:14.864 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:14 np0005539564 nova_compute[226295]: 2025-11-29 08:17:14.864 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ff764b3-2d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:14 np0005539564 nova_compute[226295]: 2025-11-29 08:17:14.867 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:14 np0005539564 nova_compute[226295]: 2025-11-29 08:17:14.871 226310 INFO os_vif [None req-613a2d63-5a9f-4e33-a26e-91d8cc0ae829 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:f9:6a,bridge_name='br-int',has_traffic_filtering=True,id=2ff764b3-2d67-48de-9969-8f1890b429c4,network=Network(2a48f340-3ab0-428a-8b80-75fcf0f9f3f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ff764b3-2d')#033[00m
Nov 29 03:17:15 np0005539564 nova_compute[226295]: 2025-11-29 08:17:15.019 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:15.019 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:15.021 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:17:15 np0005539564 nova_compute[226295]: 2025-11-29 08:17:15.212 226310 DEBUG nova.compute.manager [req-1afa91af-641b-4567-abc4-7b35377da3cb req-95a49c19-08d2-475d-a539-45543d124cc0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Received event network-vif-unplugged-2ff764b3-2d67-48de-9969-8f1890b429c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:15 np0005539564 nova_compute[226295]: 2025-11-29 08:17:15.212 226310 DEBUG oslo_concurrency.lockutils [req-1afa91af-641b-4567-abc4-7b35377da3cb req-95a49c19-08d2-475d-a539-45543d124cc0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:15 np0005539564 nova_compute[226295]: 2025-11-29 08:17:15.213 226310 DEBUG oslo_concurrency.lockutils [req-1afa91af-641b-4567-abc4-7b35377da3cb req-95a49c19-08d2-475d-a539-45543d124cc0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:15 np0005539564 nova_compute[226295]: 2025-11-29 08:17:15.213 226310 DEBUG oslo_concurrency.lockutils [req-1afa91af-641b-4567-abc4-7b35377da3cb req-95a49c19-08d2-475d-a539-45543d124cc0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:15 np0005539564 nova_compute[226295]: 2025-11-29 08:17:15.214 226310 DEBUG nova.compute.manager [req-1afa91af-641b-4567-abc4-7b35377da3cb req-95a49c19-08d2-475d-a539-45543d124cc0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] No waiting events found dispatching network-vif-unplugged-2ff764b3-2d67-48de-9969-8f1890b429c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:15 np0005539564 nova_compute[226295]: 2025-11-29 08:17:15.215 226310 DEBUG nova.compute.manager [req-1afa91af-641b-4567-abc4-7b35377da3cb req-95a49c19-08d2-475d-a539-45543d124cc0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Received event network-vif-unplugged-2ff764b3-2d67-48de-9969-8f1890b429c4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:17:15 np0005539564 nova_compute[226295]: 2025-11-29 08:17:15.296 226310 INFO nova.virt.libvirt.driver [None req-613a2d63-5a9f-4e33-a26e-91d8cc0ae829 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Deleting instance files /var/lib/nova/instances/e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_del#033[00m
Nov 29 03:17:15 np0005539564 nova_compute[226295]: 2025-11-29 08:17:15.297 226310 INFO nova.virt.libvirt.driver [None req-613a2d63-5a9f-4e33-a26e-91d8cc0ae829 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Deletion of /var/lib/nova/instances/e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3_del complete#033[00m
Nov 29 03:17:15 np0005539564 nova_compute[226295]: 2025-11-29 08:17:15.363 226310 INFO nova.compute.manager [None req-613a2d63-5a9f-4e33-a26e-91d8cc0ae829 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:17:15 np0005539564 nova_compute[226295]: 2025-11-29 08:17:15.364 226310 DEBUG oslo.service.loopingcall [None req-613a2d63-5a9f-4e33-a26e-91d8cc0ae829 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:17:15 np0005539564 nova_compute[226295]: 2025-11-29 08:17:15.364 226310 DEBUG nova.compute.manager [-] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:17:15 np0005539564 nova_compute[226295]: 2025-11-29 08:17:15.364 226310 DEBUG nova.network.neutron [-] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:17:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:16.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:16 np0005539564 nova_compute[226295]: 2025-11-29 08:17:16.315 226310 DEBUG nova.network.neutron [-] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:16 np0005539564 nova_compute[226295]: 2025-11-29 08:17:16.341 226310 INFO nova.compute.manager [-] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Took 0.98 seconds to deallocate network for instance.#033[00m
Nov 29 03:17:16 np0005539564 nova_compute[226295]: 2025-11-29 08:17:16.348 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:16 np0005539564 nova_compute[226295]: 2025-11-29 08:17:16.383 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:16 np0005539564 nova_compute[226295]: 2025-11-29 08:17:16.409 226310 DEBUG oslo_concurrency.lockutils [None req-613a2d63-5a9f-4e33-a26e-91d8cc0ae829 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:16 np0005539564 nova_compute[226295]: 2025-11-29 08:17:16.410 226310 DEBUG oslo_concurrency.lockutils [None req-613a2d63-5a9f-4e33-a26e-91d8cc0ae829 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:16 np0005539564 nova_compute[226295]: 2025-11-29 08:17:16.488 226310 DEBUG oslo_concurrency.processutils [None req-613a2d63-5a9f-4e33-a26e-91d8cc0ae829 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:16 np0005539564 nova_compute[226295]: 2025-11-29 08:17:16.530 226310 DEBUG nova.compute.manager [req-1b1b17d0-e6b3-4c3e-9026-30a21a0c1c09 req-dc6ccc33-f981-4531-a9cb-ac4d020d16ce 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Received event network-vif-deleted-2ff764b3-2d67-48de-9969-8f1890b429c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:16.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:17:16 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3727278614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:17:16 np0005539564 nova_compute[226295]: 2025-11-29 08:17:16.929 226310 DEBUG oslo_concurrency.processutils [None req-613a2d63-5a9f-4e33-a26e-91d8cc0ae829 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:16 np0005539564 nova_compute[226295]: 2025-11-29 08:17:16.936 226310 DEBUG nova.compute.provider_tree [None req-613a2d63-5a9f-4e33-a26e-91d8cc0ae829 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:17:16 np0005539564 nova_compute[226295]: 2025-11-29 08:17:16.956 226310 DEBUG nova.scheduler.client.report [None req-613a2d63-5a9f-4e33-a26e-91d8cc0ae829 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:17:16 np0005539564 nova_compute[226295]: 2025-11-29 08:17:16.984 226310 DEBUG oslo_concurrency.lockutils [None req-613a2d63-5a9f-4e33-a26e-91d8cc0ae829 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:17 np0005539564 nova_compute[226295]: 2025-11-29 08:17:17.019 226310 INFO nova.scheduler.client.report [None req-613a2d63-5a9f-4e33-a26e-91d8cc0ae829 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Deleted allocations for instance e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3#033[00m
Nov 29 03:17:17 np0005539564 nova_compute[226295]: 2025-11-29 08:17:17.123 226310 DEBUG oslo_concurrency.lockutils [None req-613a2d63-5a9f-4e33-a26e-91d8cc0ae829 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:17 np0005539564 nova_compute[226295]: 2025-11-29 08:17:17.329 226310 DEBUG nova.compute.manager [req-05fae664-519a-4064-b074-445f5028504f req-ab8784d7-249c-4c22-adba-eb8432ee9c16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Received event network-vif-plugged-2ff764b3-2d67-48de-9969-8f1890b429c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:17 np0005539564 nova_compute[226295]: 2025-11-29 08:17:17.330 226310 DEBUG oslo_concurrency.lockutils [req-05fae664-519a-4064-b074-445f5028504f req-ab8784d7-249c-4c22-adba-eb8432ee9c16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:17 np0005539564 nova_compute[226295]: 2025-11-29 08:17:17.331 226310 DEBUG oslo_concurrency.lockutils [req-05fae664-519a-4064-b074-445f5028504f req-ab8784d7-249c-4c22-adba-eb8432ee9c16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:17 np0005539564 nova_compute[226295]: 2025-11-29 08:17:17.332 226310 DEBUG oslo_concurrency.lockutils [req-05fae664-519a-4064-b074-445f5028504f req-ab8784d7-249c-4c22-adba-eb8432ee9c16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:17 np0005539564 nova_compute[226295]: 2025-11-29 08:17:17.333 226310 DEBUG nova.compute.manager [req-05fae664-519a-4064-b074-445f5028504f req-ab8784d7-249c-4c22-adba-eb8432ee9c16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] No waiting events found dispatching network-vif-plugged-2ff764b3-2d67-48de-9969-8f1890b429c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:17 np0005539564 nova_compute[226295]: 2025-11-29 08:17:17.333 226310 WARNING nova.compute.manager [req-05fae664-519a-4064-b074-445f5028504f req-ab8784d7-249c-4c22-adba-eb8432ee9c16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Received unexpected event network-vif-plugged-2ff764b3-2d67-48de-9969-8f1890b429c4 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:17:17 np0005539564 nova_compute[226295]: 2025-11-29 08:17:17.356 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:18.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:18.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:19 np0005539564 nova_compute[226295]: 2025-11-29 08:17:19.868 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:20.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:20.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:21.024 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:22.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:22 np0005539564 nova_compute[226295]: 2025-11-29 08:17:22.383 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:22.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:22 np0005539564 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:17:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:24.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:24.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:24 np0005539564 nova_compute[226295]: 2025-11-29 08:17:24.873 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:26.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:26.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:26 np0005539564 nova_compute[226295]: 2025-11-29 08:17:26.654 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:27 np0005539564 nova_compute[226295]: 2025-11-29 08:17:27.390 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:17:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3702248800' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:17:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:17:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3702248800' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:17:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:28.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:28.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:29 np0005539564 nova_compute[226295]: 2025-11-29 08:17:29.840 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404234.8387399, e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:29 np0005539564 nova_compute[226295]: 2025-11-29 08:17:29.840 226310 INFO nova.compute.manager [-] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:17:29 np0005539564 nova_compute[226295]: 2025-11-29 08:17:29.879 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:29 np0005539564 nova_compute[226295]: 2025-11-29 08:17:29.891 226310 DEBUG nova.compute.manager [None req-3dd5193c-0969-4046-a669-cef8fdc09f0e - - - - - -] [instance: e273a2b5-83ad-4d83-9f3f-40ae3fc9d6e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:30.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:30.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:32.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:32 np0005539564 nova_compute[226295]: 2025-11-29 08:17:32.395 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:32.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:34.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:34.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:34 np0005539564 nova_compute[226295]: 2025-11-29 08:17:34.884 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:36.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:36.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:37 np0005539564 nova_compute[226295]: 2025-11-29 08:17:37.398 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:37 np0005539564 podman[273002]: 2025-11-29 08:17:37.549241291 +0000 UTC m=+0.094534971 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd)
Nov 29 03:17:37 np0005539564 podman[273001]: 2025-11-29 08:17:37.560639339 +0000 UTC m=+0.113850573 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:17:37 np0005539564 podman[273003]: 2025-11-29 08:17:37.580612457 +0000 UTC m=+0.115342832 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:17:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:38.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:38.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:39 np0005539564 nova_compute[226295]: 2025-11-29 08:17:39.231 226310 DEBUG oslo_concurrency.lockutils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Acquiring lock "af25c08f-ae92-4cf1-8fac-4374bcbc6614" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:39 np0005539564 nova_compute[226295]: 2025-11-29 08:17:39.231 226310 DEBUG oslo_concurrency.lockutils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lock "af25c08f-ae92-4cf1-8fac-4374bcbc6614" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:39 np0005539564 nova_compute[226295]: 2025-11-29 08:17:39.253 226310 DEBUG oslo_concurrency.lockutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "0b082cd2-d1d3-4577-be0a-30b9256a223e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:39 np0005539564 nova_compute[226295]: 2025-11-29 08:17:39.253 226310 DEBUG oslo_concurrency.lockutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "0b082cd2-d1d3-4577-be0a-30b9256a223e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:39 np0005539564 nova_compute[226295]: 2025-11-29 08:17:39.254 226310 DEBUG nova.compute.manager [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:17:39 np0005539564 nova_compute[226295]: 2025-11-29 08:17:39.287 226310 DEBUG nova.compute.manager [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:17:39 np0005539564 nova_compute[226295]: 2025-11-29 08:17:39.367 226310 DEBUG oslo_concurrency.lockutils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:39 np0005539564 nova_compute[226295]: 2025-11-29 08:17:39.367 226310 DEBUG oslo_concurrency.lockutils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:39 np0005539564 nova_compute[226295]: 2025-11-29 08:17:39.374 226310 DEBUG nova.virt.hardware [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:17:39 np0005539564 nova_compute[226295]: 2025-11-29 08:17:39.374 226310 INFO nova.compute.claims [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:17:39 np0005539564 nova_compute[226295]: 2025-11-29 08:17:39.378 226310 DEBUG oslo_concurrency.lockutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:39 np0005539564 nova_compute[226295]: 2025-11-29 08:17:39.527 226310 DEBUG oslo_concurrency.processutils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:39 np0005539564 nova_compute[226295]: 2025-11-29 08:17:39.888 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:17:39 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2132206677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.008 226310 DEBUG oslo_concurrency.processutils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.015 226310 DEBUG nova.compute.provider_tree [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.038 226310 DEBUG nova.scheduler.client.report [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.069 226310 DEBUG oslo_concurrency.lockutils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.070 226310 DEBUG nova.compute.manager [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.073 226310 DEBUG oslo_concurrency.lockutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.078 226310 DEBUG nova.virt.hardware [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.079 226310 INFO nova.compute.claims [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.149 226310 DEBUG nova.compute.manager [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.177 226310 INFO nova.virt.libvirt.driver [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.200 226310 DEBUG nova.compute.manager [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.245 226310 DEBUG oslo_concurrency.processutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:40.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.336 226310 DEBUG nova.compute.manager [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.340 226310 DEBUG nova.virt.libvirt.driver [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.340 226310 INFO nova.virt.libvirt.driver [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Creating image(s)
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.385 226310 DEBUG nova.storage.rbd_utils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] rbd image af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.420 226310 DEBUG nova.storage.rbd_utils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] rbd image af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.451 226310 DEBUG nova.storage.rbd_utils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] rbd image af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.455 226310 DEBUG oslo_concurrency.processutils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.534 226310 DEBUG oslo_concurrency.processutils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.536 226310 DEBUG oslo_concurrency.lockutils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.536 226310 DEBUG oslo_concurrency.lockutils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.537 226310 DEBUG oslo_concurrency.lockutils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.569 226310 DEBUG nova.storage.rbd_utils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] rbd image af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.572 226310 DEBUG oslo_concurrency.processutils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:17:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:40.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:17:40 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3468862175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.736 226310 DEBUG oslo_concurrency.processutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.747 226310 DEBUG nova.compute.provider_tree [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.776 226310 DEBUG nova.scheduler.client.report [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.816 226310 DEBUG oslo_concurrency.lockutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.817 226310 DEBUG nova.compute.manager [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.887 226310 DEBUG oslo_concurrency.processutils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:17:40 np0005539564 nova_compute[226295]: 2025-11-29 08:17:40.993 226310 DEBUG nova.storage.rbd_utils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] resizing rbd image af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.046 226310 DEBUG nova.compute.manager [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.046 226310 DEBUG nova.network.neutron [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.070 226310 INFO nova.virt.libvirt.driver [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.112 226310 DEBUG nova.compute.manager [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.118 226310 DEBUG nova.objects.instance [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lazy-loading 'migration_context' on Instance uuid af25c08f-ae92-4cf1-8fac-4374bcbc6614 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:17:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.133 226310 DEBUG nova.virt.libvirt.driver [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.134 226310 DEBUG nova.virt.libvirt.driver [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Ensure instance console log exists: /var/lib/nova/instances/af25c08f-ae92-4cf1-8fac-4374bcbc6614/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.134 226310 DEBUG oslo_concurrency.lockutils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.134 226310 DEBUG oslo_concurrency.lockutils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.135 226310 DEBUG oslo_concurrency.lockutils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.136 226310 DEBUG nova.virt.libvirt.driver [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.141 226310 WARNING nova.virt.libvirt.driver [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.146 226310 DEBUG nova.virt.libvirt.host [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.147 226310 DEBUG nova.virt.libvirt.host [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.150 226310 DEBUG nova.virt.libvirt.host [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.150 226310 DEBUG nova.virt.libvirt.host [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.151 226310 DEBUG nova.virt.libvirt.driver [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.151 226310 DEBUG nova.virt.hardware [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.152 226310 DEBUG nova.virt.hardware [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.152 226310 DEBUG nova.virt.hardware [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.152 226310 DEBUG nova.virt.hardware [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.153 226310 DEBUG nova.virt.hardware [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.153 226310 DEBUG nova.virt.hardware [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.153 226310 DEBUG nova.virt.hardware [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.153 226310 DEBUG nova.virt.hardware [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.154 226310 DEBUG nova.virt.hardware [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.154 226310 DEBUG nova.virt.hardware [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.154 226310 DEBUG nova.virt.hardware [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.157 226310 DEBUG oslo_concurrency.processutils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.236 226310 DEBUG nova.compute.manager [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.238 226310 DEBUG nova.virt.libvirt.driver [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.239 226310 INFO nova.virt.libvirt.driver [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Creating image(s)
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.266 226310 DEBUG nova.storage.rbd_utils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 0b082cd2-d1d3-4577-be0a-30b9256a223e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.303 226310 DEBUG nova.storage.rbd_utils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 0b082cd2-d1d3-4577-be0a-30b9256a223e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.336 226310 DEBUG nova.storage.rbd_utils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 0b082cd2-d1d3-4577-be0a-30b9256a223e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.340 226310 DEBUG oslo_concurrency.processutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.397 226310 DEBUG nova.policy [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '504bc6adabad4f7d8c17b0438c4d9be7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b9d4c81989d641678300c7a1c173a2c2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.422 226310 DEBUG oslo_concurrency.processutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.423 226310 DEBUG oslo_concurrency.lockutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.424 226310 DEBUG oslo_concurrency.lockutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.424 226310 DEBUG oslo_concurrency.lockutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.454 226310 DEBUG nova.storage.rbd_utils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 0b082cd2-d1d3-4577-be0a-30b9256a223e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.459 226310 DEBUG oslo_concurrency.processutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 0b082cd2-d1d3-4577-be0a-30b9256a223e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:17:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:17:41 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2859895107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.616 226310 DEBUG oslo_concurrency.processutils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.659 226310 DEBUG nova.storage.rbd_utils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] rbd image af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.665 226310 DEBUG oslo_concurrency.processutils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.820 226310 DEBUG oslo_concurrency.processutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 0b082cd2-d1d3-4577-be0a-30b9256a223e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:17:41 np0005539564 nova_compute[226295]: 2025-11-29 08:17:41.911 226310 DEBUG nova.storage.rbd_utils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] resizing rbd image 0b082cd2-d1d3-4577-be0a-30b9256a223e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:17:42 np0005539564 nova_compute[226295]: 2025-11-29 08:17:42.056 226310 DEBUG nova.objects.instance [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'migration_context' on Instance uuid 0b082cd2-d1d3-4577-be0a-30b9256a223e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:17:42 np0005539564 nova_compute[226295]: 2025-11-29 08:17:42.089 226310 DEBUG nova.virt.libvirt.driver [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:17:42 np0005539564 nova_compute[226295]: 2025-11-29 08:17:42.090 226310 DEBUG nova.virt.libvirt.driver [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Ensure instance console log exists: /var/lib/nova/instances/0b082cd2-d1d3-4577-be0a-30b9256a223e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:17:42 np0005539564 nova_compute[226295]: 2025-11-29 08:17:42.091 226310 DEBUG oslo_concurrency.lockutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:17:42 np0005539564 nova_compute[226295]: 2025-11-29 08:17:42.091 226310 DEBUG oslo_concurrency.lockutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:42 np0005539564 nova_compute[226295]: 2025-11-29 08:17:42.092 226310 DEBUG oslo_concurrency.lockutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:42 np0005539564 nova_compute[226295]: 2025-11-29 08:17:42.121 226310 DEBUG nova.network.neutron [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Successfully created port: 81ca5375-8e5a-46e3-9340-e5375e00d3e6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:17:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:17:42 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/712387107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:17:42 np0005539564 nova_compute[226295]: 2025-11-29 08:17:42.161 226310 DEBUG oslo_concurrency.processutils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:42 np0005539564 nova_compute[226295]: 2025-11-29 08:17:42.162 226310 DEBUG nova.objects.instance [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lazy-loading 'pci_devices' on Instance uuid af25c08f-ae92-4cf1-8fac-4374bcbc6614 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:42 np0005539564 nova_compute[226295]: 2025-11-29 08:17:42.177 226310 DEBUG nova.virt.libvirt.driver [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:17:42 np0005539564 nova_compute[226295]:  <uuid>af25c08f-ae92-4cf1-8fac-4374bcbc6614</uuid>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:  <name>instance-0000007f</name>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerShowV254Test-server-1944351406</nova:name>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:17:41</nova:creationTime>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:17:42 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:        <nova:user uuid="942fc293eca343c5968f74e14b91d515">tempest-ServerShowV254Test-611860325-project-member</nova:user>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:        <nova:project uuid="2f97cf99a149454eb333842c8e9713b0">tempest-ServerShowV254Test-611860325</nova:project>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      <nova:ports/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      <entry name="serial">af25c08f-ae92-4cf1-8fac-4374bcbc6614</entry>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      <entry name="uuid">af25c08f-ae92-4cf1-8fac-4374bcbc6614</entry>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk">
Nov 29 03:17:42 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:17:42 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk.config">
Nov 29 03:17:42 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:17:42 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/af25c08f-ae92-4cf1-8fac-4374bcbc6614/console.log" append="off"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:17:42 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:17:42 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:17:42 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:17:42 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:17:42 np0005539564 nova_compute[226295]: 2025-11-29 08:17:42.238 226310 DEBUG nova.virt.libvirt.driver [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:17:42 np0005539564 nova_compute[226295]: 2025-11-29 08:17:42.238 226310 DEBUG nova.virt.libvirt.driver [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:17:42 np0005539564 nova_compute[226295]: 2025-11-29 08:17:42.239 226310 INFO nova.virt.libvirt.driver [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Using config drive#033[00m
Nov 29 03:17:42 np0005539564 nova_compute[226295]: 2025-11-29 08:17:42.264 226310 DEBUG nova.storage.rbd_utils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] rbd image af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:42.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:42 np0005539564 nova_compute[226295]: 2025-11-29 08:17:42.399 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:42 np0005539564 nova_compute[226295]: 2025-11-29 08:17:42.503 226310 INFO nova.virt.libvirt.driver [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Creating config drive at /var/lib/nova/instances/af25c08f-ae92-4cf1-8fac-4374bcbc6614/disk.config#033[00m
Nov 29 03:17:42 np0005539564 nova_compute[226295]: 2025-11-29 08:17:42.511 226310 DEBUG oslo_concurrency.processutils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af25c08f-ae92-4cf1-8fac-4374bcbc6614/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgl3fu56s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:42 np0005539564 nova_compute[226295]: 2025-11-29 08:17:42.669 226310 DEBUG oslo_concurrency.processutils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af25c08f-ae92-4cf1-8fac-4374bcbc6614/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgl3fu56s" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:42.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:42 np0005539564 nova_compute[226295]: 2025-11-29 08:17:42.711 226310 DEBUG nova.storage.rbd_utils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] rbd image af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:42 np0005539564 nova_compute[226295]: 2025-11-29 08:17:42.715 226310 DEBUG oslo_concurrency.processutils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af25c08f-ae92-4cf1-8fac-4374bcbc6614/disk.config af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:42 np0005539564 nova_compute[226295]: 2025-11-29 08:17:42.914 226310 DEBUG oslo_concurrency.processutils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af25c08f-ae92-4cf1-8fac-4374bcbc6614/disk.config af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:42 np0005539564 nova_compute[226295]: 2025-11-29 08:17:42.915 226310 INFO nova.virt.libvirt.driver [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Deleting local config drive /var/lib/nova/instances/af25c08f-ae92-4cf1-8fac-4374bcbc6614/disk.config because it was imported into RBD.#033[00m
Nov 29 03:17:43 np0005539564 systemd-machined[190128]: New machine qemu-56-instance-0000007f.
Nov 29 03:17:43 np0005539564 systemd[1]: Started Virtual Machine qemu-56-instance-0000007f.
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.057 226310 DEBUG nova.network.neutron [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Successfully updated port: 81ca5375-8e5a-46e3-9340-e5375e00d3e6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.081 226310 DEBUG oslo_concurrency.lockutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "refresh_cache-0b082cd2-d1d3-4577-be0a-30b9256a223e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.082 226310 DEBUG oslo_concurrency.lockutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquired lock "refresh_cache-0b082cd2-d1d3-4577-be0a-30b9256a223e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.083 226310 DEBUG nova.network.neutron [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.277 226310 DEBUG nova.compute.manager [req-f1592e38-90cc-4336-9a99-b5e3b932cb74 req-228c2597-2df0-4df8-beb2-f1a3c8cf2626 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Received event network-changed-81ca5375-8e5a-46e3-9340-e5375e00d3e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.277 226310 DEBUG nova.compute.manager [req-f1592e38-90cc-4336-9a99-b5e3b932cb74 req-228c2597-2df0-4df8-beb2-f1a3c8cf2626 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Refreshing instance network info cache due to event network-changed-81ca5375-8e5a-46e3-9340-e5375e00d3e6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.277 226310 DEBUG oslo_concurrency.lockutils [req-f1592e38-90cc-4336-9a99-b5e3b932cb74 req-228c2597-2df0-4df8-beb2-f1a3c8cf2626 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-0b082cd2-d1d3-4577-be0a-30b9256a223e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.302 226310 DEBUG nova.network.neutron [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.676 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404263.6755154, af25c08f-ae92-4cf1-8fac-4374bcbc6614 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.677 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.681 226310 DEBUG nova.compute.manager [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.681 226310 DEBUG nova.virt.libvirt.driver [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.685 226310 INFO nova.virt.libvirt.driver [-] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Instance spawned successfully.#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.685 226310 DEBUG nova.virt.libvirt.driver [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.734 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.742 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.746 226310 DEBUG nova.virt.libvirt.driver [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.747 226310 DEBUG nova.virt.libvirt.driver [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.748 226310 DEBUG nova.virt.libvirt.driver [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.748 226310 DEBUG nova.virt.libvirt.driver [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.749 226310 DEBUG nova.virt.libvirt.driver [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.749 226310 DEBUG nova.virt.libvirt.driver [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.775 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.776 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404263.6776147, af25c08f-ae92-4cf1-8fac-4374bcbc6614 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.776 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] VM Started (Lifecycle Event)#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.952 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.957 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.996 226310 INFO nova.compute.manager [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Took 3.66 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:17:43 np0005539564 nova_compute[226295]: 2025-11-29 08:17:43.996 226310 DEBUG nova.compute.manager [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:44 np0005539564 nova_compute[226295]: 2025-11-29 08:17:44.005 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:17:44 np0005539564 nova_compute[226295]: 2025-11-29 08:17:44.157 226310 INFO nova.compute.manager [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Took 4.82 seconds to build instance.#033[00m
Nov 29 03:17:44 np0005539564 nova_compute[226295]: 2025-11-29 08:17:44.236 226310 DEBUG oslo_concurrency.lockutils [None req-b441c307-a1af-4c4a-8865-32675d37bb25 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lock "af25c08f-ae92-4cf1-8fac-4374bcbc6614" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:44.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:44.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:44 np0005539564 nova_compute[226295]: 2025-11-29 08:17:44.894 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:45 np0005539564 nova_compute[226295]: 2025-11-29 08:17:45.539 226310 DEBUG nova.network.neutron [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Updating instance_info_cache with network_info: [{"id": "81ca5375-8e5a-46e3-9340-e5375e00d3e6", "address": "fa:16:3e:a1:48:33", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ca5375-8e", "ovs_interfaceid": "81ca5375-8e5a-46e3-9340-e5375e00d3e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:45 np0005539564 nova_compute[226295]: 2025-11-29 08:17:45.621 226310 DEBUG oslo_concurrency.lockutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Releasing lock "refresh_cache-0b082cd2-d1d3-4577-be0a-30b9256a223e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:17:45 np0005539564 nova_compute[226295]: 2025-11-29 08:17:45.622 226310 DEBUG nova.compute.manager [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Instance network_info: |[{"id": "81ca5375-8e5a-46e3-9340-e5375e00d3e6", "address": "fa:16:3e:a1:48:33", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ca5375-8e", "ovs_interfaceid": "81ca5375-8e5a-46e3-9340-e5375e00d3e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:17:45 np0005539564 nova_compute[226295]: 2025-11-29 08:17:45.623 226310 DEBUG oslo_concurrency.lockutils [req-f1592e38-90cc-4336-9a99-b5e3b932cb74 req-228c2597-2df0-4df8-beb2-f1a3c8cf2626 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-0b082cd2-d1d3-4577-be0a-30b9256a223e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:17:45 np0005539564 nova_compute[226295]: 2025-11-29 08:17:45.624 226310 DEBUG nova.network.neutron [req-f1592e38-90cc-4336-9a99-b5e3b932cb74 req-228c2597-2df0-4df8-beb2-f1a3c8cf2626 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Refreshing network info cache for port 81ca5375-8e5a-46e3-9340-e5375e00d3e6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:17:45 np0005539564 nova_compute[226295]: 2025-11-29 08:17:45.628 226310 DEBUG nova.virt.libvirt.driver [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Start _get_guest_xml network_info=[{"id": "81ca5375-8e5a-46e3-9340-e5375e00d3e6", "address": "fa:16:3e:a1:48:33", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ca5375-8e", "ovs_interfaceid": "81ca5375-8e5a-46e3-9340-e5375e00d3e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:17:45 np0005539564 nova_compute[226295]: 2025-11-29 08:17:45.633 226310 WARNING nova.virt.libvirt.driver [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:17:45 np0005539564 nova_compute[226295]: 2025-11-29 08:17:45.638 226310 DEBUG nova.virt.libvirt.host [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:17:45 np0005539564 nova_compute[226295]: 2025-11-29 08:17:45.639 226310 DEBUG nova.virt.libvirt.host [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:17:45 np0005539564 nova_compute[226295]: 2025-11-29 08:17:45.642 226310 DEBUG nova.virt.libvirt.host [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:17:45 np0005539564 nova_compute[226295]: 2025-11-29 08:17:45.643 226310 DEBUG nova.virt.libvirt.host [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:17:45 np0005539564 nova_compute[226295]: 2025-11-29 08:17:45.644 226310 DEBUG nova.virt.libvirt.driver [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:17:45 np0005539564 nova_compute[226295]: 2025-11-29 08:17:45.645 226310 DEBUG nova.virt.hardware [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:17:45 np0005539564 nova_compute[226295]: 2025-11-29 08:17:45.646 226310 DEBUG nova.virt.hardware [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:17:45 np0005539564 nova_compute[226295]: 2025-11-29 08:17:45.646 226310 DEBUG nova.virt.hardware [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:17:45 np0005539564 nova_compute[226295]: 2025-11-29 08:17:45.647 226310 DEBUG nova.virt.hardware [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:17:45 np0005539564 nova_compute[226295]: 2025-11-29 08:17:45.647 226310 DEBUG nova.virt.hardware [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:17:45 np0005539564 nova_compute[226295]: 2025-11-29 08:17:45.647 226310 DEBUG nova.virt.hardware [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:17:45 np0005539564 nova_compute[226295]: 2025-11-29 08:17:45.648 226310 DEBUG nova.virt.hardware [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:17:45 np0005539564 nova_compute[226295]: 2025-11-29 08:17:45.648 226310 DEBUG nova.virt.hardware [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:17:45 np0005539564 nova_compute[226295]: 2025-11-29 08:17:45.649 226310 DEBUG nova.virt.hardware [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:17:45 np0005539564 nova_compute[226295]: 2025-11-29 08:17:45.649 226310 DEBUG nova.virt.hardware [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:17:45 np0005539564 nova_compute[226295]: 2025-11-29 08:17:45.650 226310 DEBUG nova.virt.hardware [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:17:45 np0005539564 nova_compute[226295]: 2025-11-29 08:17:45.653 226310 DEBUG oslo_concurrency.processutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:17:46 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3740720003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:17:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.126 226310 DEBUG oslo_concurrency.processutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.160 226310 DEBUG nova.storage.rbd_utils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 0b082cd2-d1d3-4577-be0a-30b9256a223e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.166 226310 DEBUG oslo_concurrency.processutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:46.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:17:46 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1151233783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.612 226310 DEBUG oslo_concurrency.processutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.616 226310 DEBUG nova.virt.libvirt.vif [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:17:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-2082719356',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-2082719356',id=128,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9d4c81989d641678300c7a1c173a2c2',ramdisk_id='',reservation_id='r-laevdyrq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1019923576',owner_user_name='tempest-ServerBootFromVolu
meStableRescueTest-1019923576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:17:41Z,user_data=None,user_id='504bc6adabad4f7d8c17b0438c4d9be7',uuid=0b082cd2-d1d3-4577-be0a-30b9256a223e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "81ca5375-8e5a-46e3-9340-e5375e00d3e6", "address": "fa:16:3e:a1:48:33", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ca5375-8e", "ovs_interfaceid": "81ca5375-8e5a-46e3-9340-e5375e00d3e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.617 226310 DEBUG nova.network.os_vif_util [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converting VIF {"id": "81ca5375-8e5a-46e3-9340-e5375e00d3e6", "address": "fa:16:3e:a1:48:33", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ca5375-8e", "ovs_interfaceid": "81ca5375-8e5a-46e3-9340-e5375e00d3e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.619 226310 DEBUG nova.network.os_vif_util [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:48:33,bridge_name='br-int',has_traffic_filtering=True,id=81ca5375-8e5a-46e3-9340-e5375e00d3e6,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81ca5375-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.621 226310 DEBUG nova.objects.instance [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0b082cd2-d1d3-4577-be0a-30b9256a223e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.650 226310 DEBUG nova.virt.libvirt.driver [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:17:46 np0005539564 nova_compute[226295]:  <uuid>0b082cd2-d1d3-4577-be0a-30b9256a223e</uuid>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:  <name>instance-00000080</name>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-2082719356</nova:name>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:17:45</nova:creationTime>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:17:46 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:        <nova:user uuid="504bc6adabad4f7d8c17b0438c4d9be7">tempest-ServerBootFromVolumeStableRescueTest-1019923576-project-member</nova:user>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:        <nova:project uuid="b9d4c81989d641678300c7a1c173a2c2">tempest-ServerBootFromVolumeStableRescueTest-1019923576</nova:project>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:        <nova:port uuid="81ca5375-8e5a-46e3-9340-e5375e00d3e6">
Nov 29 03:17:46 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <entry name="serial">0b082cd2-d1d3-4577-be0a-30b9256a223e</entry>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <entry name="uuid">0b082cd2-d1d3-4577-be0a-30b9256a223e</entry>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/0b082cd2-d1d3-4577-be0a-30b9256a223e_disk">
Nov 29 03:17:46 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:17:46 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/0b082cd2-d1d3-4577-be0a-30b9256a223e_disk.config">
Nov 29 03:17:46 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:17:46 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:a1:48:33"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <target dev="tap81ca5375-8e"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/0b082cd2-d1d3-4577-be0a-30b9256a223e/console.log" append="off"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:17:46 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:17:46 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:17:46 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:17:46 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.662 226310 DEBUG nova.compute.manager [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Preparing to wait for external event network-vif-plugged-81ca5375-8e5a-46e3-9340-e5375e00d3e6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.663 226310 DEBUG oslo_concurrency.lockutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "0b082cd2-d1d3-4577-be0a-30b9256a223e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.663 226310 DEBUG oslo_concurrency.lockutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "0b082cd2-d1d3-4577-be0a-30b9256a223e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.664 226310 DEBUG oslo_concurrency.lockutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "0b082cd2-d1d3-4577-be0a-30b9256a223e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.665 226310 DEBUG nova.virt.libvirt.vif [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:17:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-2082719356',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-2082719356',id=128,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9d4c81989d641678300c7a1c173a2c2',ramdisk_id='',reservation_id='r-laevdyrq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1019923576',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1019923576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:17:41Z,user_data=None,user_id='504bc6adabad4f7d8c17b0438c4d9be7',uuid=0b082cd2-d1d3-4577-be0a-30b9256a223e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "81ca5375-8e5a-46e3-9340-e5375e00d3e6", "address": "fa:16:3e:a1:48:33", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ca5375-8e", "ovs_interfaceid": "81ca5375-8e5a-46e3-9340-e5375e00d3e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.666 226310 DEBUG nova.network.os_vif_util [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converting VIF {"id": "81ca5375-8e5a-46e3-9340-e5375e00d3e6", "address": "fa:16:3e:a1:48:33", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ca5375-8e", "ovs_interfaceid": "81ca5375-8e5a-46e3-9340-e5375e00d3e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.667 226310 DEBUG nova.network.os_vif_util [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:48:33,bridge_name='br-int',has_traffic_filtering=True,id=81ca5375-8e5a-46e3-9340-e5375e00d3e6,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81ca5375-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.668 226310 DEBUG os_vif [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:48:33,bridge_name='br-int',has_traffic_filtering=True,id=81ca5375-8e5a-46e3-9340-e5375e00d3e6,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81ca5375-8e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.670 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.671 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.672 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.677 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.678 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81ca5375-8e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.679 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap81ca5375-8e, col_values=(('external_ids', {'iface-id': '81ca5375-8e5a-46e3-9340-e5375e00d3e6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a1:48:33', 'vm-uuid': '0b082cd2-d1d3-4577-be0a-30b9256a223e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.682 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:46 np0005539564 NetworkManager[48997]: <info>  [1764404266.6836] manager: (tap81ca5375-8e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/229)
Nov 29 03:17:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:46.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.687 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.689 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.692 226310 INFO os_vif [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:48:33,bridge_name='br-int',has_traffic_filtering=True,id=81ca5375-8e5a-46e3-9340-e5375e00d3e6,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81ca5375-8e')#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.776 226310 DEBUG nova.virt.libvirt.driver [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.777 226310 DEBUG nova.virt.libvirt.driver [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.778 226310 DEBUG nova.virt.libvirt.driver [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] No VIF found with MAC fa:16:3e:a1:48:33, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.778 226310 INFO nova.virt.libvirt.driver [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Using config drive#033[00m
Nov 29 03:17:46 np0005539564 nova_compute[226295]: 2025-11-29 08:17:46.803 226310 DEBUG nova.storage.rbd_utils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 0b082cd2-d1d3-4577-be0a-30b9256a223e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:47 np0005539564 nova_compute[226295]: 2025-11-29 08:17:47.309 226310 INFO nova.virt.libvirt.driver [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Creating config drive at /var/lib/nova/instances/0b082cd2-d1d3-4577-be0a-30b9256a223e/disk.config#033[00m
Nov 29 03:17:47 np0005539564 nova_compute[226295]: 2025-11-29 08:17:47.319 226310 DEBUG oslo_concurrency.processutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0b082cd2-d1d3-4577-be0a-30b9256a223e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5jcctgrx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:47 np0005539564 nova_compute[226295]: 2025-11-29 08:17:47.364 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:47 np0005539564 nova_compute[226295]: 2025-11-29 08:17:47.365 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:47 np0005539564 nova_compute[226295]: 2025-11-29 08:17:47.400 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:47 np0005539564 nova_compute[226295]: 2025-11-29 08:17:47.464 226310 DEBUG oslo_concurrency.processutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0b082cd2-d1d3-4577-be0a-30b9256a223e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5jcctgrx" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:47 np0005539564 nova_compute[226295]: 2025-11-29 08:17:47.501 226310 DEBUG nova.storage.rbd_utils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 0b082cd2-d1d3-4577-be0a-30b9256a223e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:47 np0005539564 nova_compute[226295]: 2025-11-29 08:17:47.505 226310 DEBUG oslo_concurrency.processutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0b082cd2-d1d3-4577-be0a-30b9256a223e/disk.config 0b082cd2-d1d3-4577-be0a-30b9256a223e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:47 np0005539564 nova_compute[226295]: 2025-11-29 08:17:47.706 226310 DEBUG oslo_concurrency.processutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0b082cd2-d1d3-4577-be0a-30b9256a223e/disk.config 0b082cd2-d1d3-4577-be0a-30b9256a223e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:47 np0005539564 nova_compute[226295]: 2025-11-29 08:17:47.707 226310 INFO nova.virt.libvirt.driver [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Deleting local config drive /var/lib/nova/instances/0b082cd2-d1d3-4577-be0a-30b9256a223e/disk.config because it was imported into RBD.#033[00m
Nov 29 03:17:47 np0005539564 kernel: tap81ca5375-8e: entered promiscuous mode
Nov 29 03:17:47 np0005539564 NetworkManager[48997]: <info>  [1764404267.7660] manager: (tap81ca5375-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/230)
Nov 29 03:17:47 np0005539564 ovn_controller[130591]: 2025-11-29T08:17:47Z|00464|binding|INFO|Claiming lport 81ca5375-8e5a-46e3-9340-e5375e00d3e6 for this chassis.
Nov 29 03:17:47 np0005539564 ovn_controller[130591]: 2025-11-29T08:17:47Z|00465|binding|INFO|81ca5375-8e5a-46e3-9340-e5375e00d3e6: Claiming fa:16:3e:a1:48:33 10.100.0.14
Nov 29 03:17:47 np0005539564 nova_compute[226295]: 2025-11-29 08:17:47.766 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:47 np0005539564 nova_compute[226295]: 2025-11-29 08:17:47.774 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:47.784 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:48:33 10.100.0.14'], port_security=['fa:16:3e:a1:48:33 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0b082cd2-d1d3-4577-be0a-30b9256a223e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9d4c81989d641678300c7a1c173a2c2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '96ed61db-551b-4509-9cdf-2499e8e15e48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9df4a573-88b3-436c-84aa-f335285d9a2a, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=81ca5375-8e5a-46e3-9340-e5375e00d3e6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:47.785 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 81ca5375-8e5a-46e3-9340-e5375e00d3e6 in datapath 3c25940b-e63b-4443-a94b-0216a35e8dc6 bound to our chassis#033[00m
Nov 29 03:17:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:47.786 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c25940b-e63b-4443-a94b-0216a35e8dc6#033[00m
Nov 29 03:17:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:47.799 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[68a44246-e46b-4ea7-a6a1-bdf177e853f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:47.800 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3c25940b-e1 in ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:17:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:47.802 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3c25940b-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:17:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:47.802 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f6dcbe25-ed60-4785-8e5a-0fabc9efa0ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:47.803 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[21c13eb0-6d59-410e-acf8-f64e9d35ff65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:47 np0005539564 systemd-udevd[273752]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:17:47 np0005539564 NetworkManager[48997]: <info>  [1764404267.8197] device (tap81ca5375-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:17:47 np0005539564 NetworkManager[48997]: <info>  [1764404267.8218] device (tap81ca5375-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:17:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:47.822 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[541055c4-b0dd-45b2-80c1-8a1d51965b13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:47 np0005539564 systemd-machined[190128]: New machine qemu-57-instance-00000080.
Nov 29 03:17:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:47.852 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[fe2ac6f8-e3b7-420d-b5ff-37eba5905436]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:47 np0005539564 systemd[1]: Started Virtual Machine qemu-57-instance-00000080.
Nov 29 03:17:47 np0005539564 nova_compute[226295]: 2025-11-29 08:17:47.865 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:47 np0005539564 ovn_controller[130591]: 2025-11-29T08:17:47Z|00466|binding|INFO|Setting lport 81ca5375-8e5a-46e3-9340-e5375e00d3e6 ovn-installed in OVS
Nov 29 03:17:47 np0005539564 ovn_controller[130591]: 2025-11-29T08:17:47Z|00467|binding|INFO|Setting lport 81ca5375-8e5a-46e3-9340-e5375e00d3e6 up in Southbound
Nov 29 03:17:47 np0005539564 nova_compute[226295]: 2025-11-29 08:17:47.872 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:47.891 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[10403342-7a8a-4b14-9f0a-d48ad7b3bd7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:47 np0005539564 NetworkManager[48997]: <info>  [1764404267.8966] manager: (tap3c25940b-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/231)
Nov 29 03:17:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:47.897 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9a3c8849-cc28-42c1-93d3-0c95339e4e2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:47.938 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[823a93ab-3ff3-4a04-aca3-dd3017e0cd28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:47 np0005539564 nova_compute[226295]: 2025-11-29 08:17:47.940 226310 DEBUG nova.network.neutron [req-f1592e38-90cc-4336-9a99-b5e3b932cb74 req-228c2597-2df0-4df8-beb2-f1a3c8cf2626 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Updated VIF entry in instance network info cache for port 81ca5375-8e5a-46e3-9340-e5375e00d3e6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:17:47 np0005539564 nova_compute[226295]: 2025-11-29 08:17:47.941 226310 DEBUG nova.network.neutron [req-f1592e38-90cc-4336-9a99-b5e3b932cb74 req-228c2597-2df0-4df8-beb2-f1a3c8cf2626 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Updating instance_info_cache with network_info: [{"id": "81ca5375-8e5a-46e3-9340-e5375e00d3e6", "address": "fa:16:3e:a1:48:33", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ca5375-8e", "ovs_interfaceid": "81ca5375-8e5a-46e3-9340-e5375e00d3e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:47.944 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[6aa76b1c-a0ee-4551-adda-7202d1f25d0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:47 np0005539564 nova_compute[226295]: 2025-11-29 08:17:47.960 226310 DEBUG oslo_concurrency.lockutils [req-f1592e38-90cc-4336-9a99-b5e3b932cb74 req-228c2597-2df0-4df8-beb2-f1a3c8cf2626 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-0b082cd2-d1d3-4577-be0a-30b9256a223e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:17:47 np0005539564 NetworkManager[48997]: <info>  [1764404267.9749] device (tap3c25940b-e0): carrier: link connected
Nov 29 03:17:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:47.980 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[729e9e47-e2c9-4524-9dda-d049845e023d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:47.998 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4fbcffc6-93e4-4f58-8035-7b1c993f981d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c25940b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:38:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724265, 'reachable_time': 18141, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273785, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:48.014 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[60ce8d88-8ce7-4b86-961e-9eb50ce2642b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe64:387b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 724265, 'tstamp': 724265}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273786, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.039 226310 INFO nova.compute.manager [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Rebuilding instance#033[00m
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:48.045 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[90e38eec-e82e-4117-bf97-564aaa47320b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c25940b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:38:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724265, 'reachable_time': 18141, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273787, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:48.083 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[cc196043-c7d5-404f-8474-a5daa2fed6bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:48.151 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bcb8a4b5-1abe-48e8-8ff5-cb64bba9123d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:48.153 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c25940b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:48.153 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:48.154 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c25940b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.155 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:48 np0005539564 NetworkManager[48997]: <info>  [1764404268.1565] manager: (tap3c25940b-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/232)
Nov 29 03:17:48 np0005539564 kernel: tap3c25940b-e0: entered promiscuous mode
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.158 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:48.159 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c25940b-e0, col_values=(('external_ids', {'iface-id': '9da51447-ee5a-4659-ba78-deb4b11b4098'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.160 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:48 np0005539564 ovn_controller[130591]: 2025-11-29T08:17:48Z|00468|binding|INFO|Releasing lport 9da51447-ee5a-4659-ba78-deb4b11b4098 from this chassis (sb_readonly=0)
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.190 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:48.191 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3c25940b-e63b-4443-a94b-0216a35e8dc6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3c25940b-e63b-4443-a94b-0216a35e8dc6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:48.192 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c9444155-296b-47ff-80ce-9b4cf35c4ba9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:48.192 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-3c25940b-e63b-4443-a94b-0216a35e8dc6
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/3c25940b-e63b-4443-a94b-0216a35e8dc6.pid.haproxy
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 3c25940b-e63b-4443-a94b-0216a35e8dc6
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:17:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:17:48.195 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'env', 'PROCESS_TAG=haproxy-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3c25940b-e63b-4443-a94b-0216a35e8dc6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:17:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:48.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.417 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404268.4165056, 0b082cd2-d1d3-4577-be0a-30b9256a223e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.417 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] VM Started (Lifecycle Event)#033[00m
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.430 226310 DEBUG nova.objects.instance [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lazy-loading 'trusted_certs' on Instance uuid af25c08f-ae92-4cf1-8fac-4374bcbc6614 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.439 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.454 226310 DEBUG nova.compute.manager [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.458 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404268.4167044, 0b082cd2-d1d3-4577-be0a-30b9256a223e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.459 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.485 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.490 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.506 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.514 226310 DEBUG nova.objects.instance [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lazy-loading 'pci_requests' on Instance uuid af25c08f-ae92-4cf1-8fac-4374bcbc6614 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.517 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.532 226310 DEBUG nova.objects.instance [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lazy-loading 'pci_devices' on Instance uuid af25c08f-ae92-4cf1-8fac-4374bcbc6614 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.537 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Triggering sync for uuid af25c08f-ae92-4cf1-8fac-4374bcbc6614 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.537 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Triggering sync for uuid 0b082cd2-d1d3-4577-be0a-30b9256a223e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.538 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "af25c08f-ae92-4cf1-8fac-4374bcbc6614" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.539 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "af25c08f-ae92-4cf1-8fac-4374bcbc6614" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.539 226310 INFO nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] During sync_power_state the instance has a pending task (rebuilding). Skip.#033[00m
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.540 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "af25c08f-ae92-4cf1-8fac-4374bcbc6614" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.540 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "0b082cd2-d1d3-4577-be0a-30b9256a223e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.548 226310 DEBUG nova.objects.instance [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lazy-loading 'resources' on Instance uuid af25c08f-ae92-4cf1-8fac-4374bcbc6614 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.560 226310 DEBUG nova.objects.instance [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lazy-loading 'migration_context' on Instance uuid af25c08f-ae92-4cf1-8fac-4374bcbc6614 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.572 226310 DEBUG nova.objects.instance [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:17:48 np0005539564 nova_compute[226295]: 2025-11-29 08:17:48.579 226310 DEBUG nova.virt.libvirt.driver [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:17:48 np0005539564 podman[273861]: 2025-11-29 08:17:48.622764869 +0000 UTC m=+0.050067722 container create 3bbb09d736272ac1a6ecb52b9f7994b18bfe860b813a27e100b657053e86af32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:17:48 np0005539564 systemd[1]: Started libpod-conmon-3bbb09d736272ac1a6ecb52b9f7994b18bfe860b813a27e100b657053e86af32.scope.
Nov 29 03:17:48 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:17:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:48.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:48 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5e078b9430d62b528b5fa637af142fe8d53d363e05b7dd4c479027f2eb3bcb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:48 np0005539564 podman[273861]: 2025-11-29 08:17:48.599718067 +0000 UTC m=+0.027020920 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:17:48 np0005539564 podman[273861]: 2025-11-29 08:17:48.705476669 +0000 UTC m=+0.132779572 container init 3bbb09d736272ac1a6ecb52b9f7994b18bfe860b813a27e100b657053e86af32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:17:48 np0005539564 podman[273861]: 2025-11-29 08:17:48.712475108 +0000 UTC m=+0.139777971 container start 3bbb09d736272ac1a6ecb52b9f7994b18bfe860b813a27e100b657053e86af32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:17:48 np0005539564 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[273876]: [NOTICE]   (273880) : New worker (273882) forked
Nov 29 03:17:48 np0005539564 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[273876]: [NOTICE]   (273880) : Loading success.
Nov 29 03:17:49 np0005539564 nova_compute[226295]: 2025-11-29 08:17:49.228 226310 DEBUG nova.compute.manager [req-9e0ca293-f8d5-4f39-8a87-f88e7d892cd0 req-b8c7393b-d956-4db0-afd7-5bf1fc53e204 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Received event network-vif-plugged-81ca5375-8e5a-46e3-9340-e5375e00d3e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:17:49 np0005539564 nova_compute[226295]: 2025-11-29 08:17:49.229 226310 DEBUG oslo_concurrency.lockutils [req-9e0ca293-f8d5-4f39-8a87-f88e7d892cd0 req-b8c7393b-d956-4db0-afd7-5bf1fc53e204 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0b082cd2-d1d3-4577-be0a-30b9256a223e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:17:49 np0005539564 nova_compute[226295]: 2025-11-29 08:17:49.230 226310 DEBUG oslo_concurrency.lockutils [req-9e0ca293-f8d5-4f39-8a87-f88e7d892cd0 req-b8c7393b-d956-4db0-afd7-5bf1fc53e204 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0b082cd2-d1d3-4577-be0a-30b9256a223e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:17:49 np0005539564 nova_compute[226295]: 2025-11-29 08:17:49.230 226310 DEBUG oslo_concurrency.lockutils [req-9e0ca293-f8d5-4f39-8a87-f88e7d892cd0 req-b8c7393b-d956-4db0-afd7-5bf1fc53e204 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0b082cd2-d1d3-4577-be0a-30b9256a223e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:17:49 np0005539564 nova_compute[226295]: 2025-11-29 08:17:49.231 226310 DEBUG nova.compute.manager [req-9e0ca293-f8d5-4f39-8a87-f88e7d892cd0 req-b8c7393b-d956-4db0-afd7-5bf1fc53e204 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Processing event network-vif-plugged-81ca5375-8e5a-46e3-9340-e5375e00d3e6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:17:49 np0005539564 nova_compute[226295]: 2025-11-29 08:17:49.232 226310 DEBUG nova.compute.manager [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:17:49 np0005539564 nova_compute[226295]: 2025-11-29 08:17:49.250 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404269.2401247, 0b082cd2-d1d3-4577-be0a-30b9256a223e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:17:49 np0005539564 nova_compute[226295]: 2025-11-29 08:17:49.250 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] VM Resumed (Lifecycle Event)
Nov 29 03:17:49 np0005539564 nova_compute[226295]: 2025-11-29 08:17:49.256 226310 DEBUG nova.virt.libvirt.driver [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:17:49 np0005539564 nova_compute[226295]: 2025-11-29 08:17:49.266 226310 INFO nova.virt.libvirt.driver [-] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Instance spawned successfully.
Nov 29 03:17:49 np0005539564 nova_compute[226295]: 2025-11-29 08:17:49.267 226310 DEBUG nova.virt.libvirt.driver [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:17:49 np0005539564 nova_compute[226295]: 2025-11-29 08:17:49.276 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:17:49 np0005539564 nova_compute[226295]: 2025-11-29 08:17:49.281 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:17:49 np0005539564 nova_compute[226295]: 2025-11-29 08:17:49.303 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:17:49 np0005539564 nova_compute[226295]: 2025-11-29 08:17:49.307 226310 DEBUG nova.virt.libvirt.driver [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:17:49 np0005539564 nova_compute[226295]: 2025-11-29 08:17:49.307 226310 DEBUG nova.virt.libvirt.driver [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:17:49 np0005539564 nova_compute[226295]: 2025-11-29 08:17:49.308 226310 DEBUG nova.virt.libvirt.driver [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:17:49 np0005539564 nova_compute[226295]: 2025-11-29 08:17:49.309 226310 DEBUG nova.virt.libvirt.driver [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:17:49 np0005539564 nova_compute[226295]: 2025-11-29 08:17:49.309 226310 DEBUG nova.virt.libvirt.driver [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:17:49 np0005539564 nova_compute[226295]: 2025-11-29 08:17:49.310 226310 DEBUG nova.virt.libvirt.driver [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:17:49 np0005539564 nova_compute[226295]: 2025-11-29 08:17:49.381 226310 INFO nova.compute.manager [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Took 8.14 seconds to spawn the instance on the hypervisor.
Nov 29 03:17:49 np0005539564 nova_compute[226295]: 2025-11-29 08:17:49.382 226310 DEBUG nova.compute.manager [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:17:49 np0005539564 nova_compute[226295]: 2025-11-29 08:17:49.472 226310 INFO nova.compute.manager [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Took 10.13 seconds to build instance.
Nov 29 03:17:49 np0005539564 nova_compute[226295]: 2025-11-29 08:17:49.492 226310 DEBUG oslo_concurrency.lockutils [None req-ec7fac5f-e14a-4181-9f92-b3e9e2803b6a 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "0b082cd2-d1d3-4577-be0a-30b9256a223e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.238s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:17:49 np0005539564 nova_compute[226295]: 2025-11-29 08:17:49.493 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "0b082cd2-d1d3-4577-be0a-30b9256a223e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:17:49 np0005539564 nova_compute[226295]: 2025-11-29 08:17:49.494 226310 INFO nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:17:49 np0005539564 nova_compute[226295]: 2025-11-29 08:17:49.497 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "0b082cd2-d1d3-4577-be0a-30b9256a223e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:17:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:50.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:50.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:51 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:51 np0005539564 nova_compute[226295]: 2025-11-29 08:17:51.326 226310 DEBUG nova.compute.manager [req-40211b69-2b6f-4907-ab91-d314a26503a5 req-13188326-2e82-43d1-9fef-c8a1b17c67ad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Received event network-vif-plugged-81ca5375-8e5a-46e3-9340-e5375e00d3e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:17:51 np0005539564 nova_compute[226295]: 2025-11-29 08:17:51.326 226310 DEBUG oslo_concurrency.lockutils [req-40211b69-2b6f-4907-ab91-d314a26503a5 req-13188326-2e82-43d1-9fef-c8a1b17c67ad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0b082cd2-d1d3-4577-be0a-30b9256a223e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:17:51 np0005539564 nova_compute[226295]: 2025-11-29 08:17:51.327 226310 DEBUG oslo_concurrency.lockutils [req-40211b69-2b6f-4907-ab91-d314a26503a5 req-13188326-2e82-43d1-9fef-c8a1b17c67ad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0b082cd2-d1d3-4577-be0a-30b9256a223e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:17:51 np0005539564 nova_compute[226295]: 2025-11-29 08:17:51.327 226310 DEBUG oslo_concurrency.lockutils [req-40211b69-2b6f-4907-ab91-d314a26503a5 req-13188326-2e82-43d1-9fef-c8a1b17c67ad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0b082cd2-d1d3-4577-be0a-30b9256a223e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:17:51 np0005539564 nova_compute[226295]: 2025-11-29 08:17:51.327 226310 DEBUG nova.compute.manager [req-40211b69-2b6f-4907-ab91-d314a26503a5 req-13188326-2e82-43d1-9fef-c8a1b17c67ad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] No waiting events found dispatching network-vif-plugged-81ca5375-8e5a-46e3-9340-e5375e00d3e6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:17:51 np0005539564 nova_compute[226295]: 2025-11-29 08:17:51.327 226310 WARNING nova.compute.manager [req-40211b69-2b6f-4907-ab91-d314a26503a5 req-13188326-2e82-43d1-9fef-c8a1b17c67ad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Received unexpected event network-vif-plugged-81ca5375-8e5a-46e3-9340-e5375e00d3e6 for instance with vm_state active and task_state None.
Nov 29 03:17:51 np0005539564 nova_compute[226295]: 2025-11-29 08:17:51.377 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:17:51 np0005539564 nova_compute[226295]: 2025-11-29 08:17:51.683 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:52.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:52 np0005539564 nova_compute[226295]: 2025-11-29 08:17:52.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:17:52 np0005539564 nova_compute[226295]: 2025-11-29 08:17:52.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:17:52 np0005539564 nova_compute[226295]: 2025-11-29 08:17:52.402 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:52.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:53 np0005539564 nova_compute[226295]: 2025-11-29 08:17:53.109 226310 DEBUG nova.compute.manager [None req-1e313c9a-e048-430c-a4a6-9c0c2aa3c75d 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:17:53 np0005539564 nova_compute[226295]: 2025-11-29 08:17:53.174 226310 INFO nova.compute.manager [None req-1e313c9a-e048-430c-a4a6-9c0c2aa3c75d 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] instance snapshotting
Nov 29 03:17:53 np0005539564 nova_compute[226295]: 2025-11-29 08:17:53.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:17:53 np0005539564 nova_compute[226295]: 2025-11-29 08:17:53.517 226310 INFO nova.virt.libvirt.driver [None req-1e313c9a-e048-430c-a4a6-9c0c2aa3c75d 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Beginning live snapshot process
Nov 29 03:17:53 np0005539564 nova_compute[226295]: 2025-11-29 08:17:53.664 226310 DEBUG nova.virt.libvirt.imagebackend [None req-1e313c9a-e048-430c-a4a6-9c0c2aa3c75d 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 03:17:54 np0005539564 nova_compute[226295]: 2025-11-29 08:17:54.009 226310 DEBUG nova.storage.rbd_utils [None req-1e313c9a-e048-430c-a4a6-9c0c2aa3c75d 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] creating snapshot(b2adca1188de45ae8ca2042294bd2b54) on rbd image(0b082cd2-d1d3-4577-be0a-30b9256a223e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:17:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:54.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:54 np0005539564 nova_compute[226295]: 2025-11-29 08:17:54.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:17:54 np0005539564 nova_compute[226295]: 2025-11-29 08:17:54.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:17:54 np0005539564 nova_compute[226295]: 2025-11-29 08:17:54.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:17:54 np0005539564 nova_compute[226295]: 2025-11-29 08:17:54.367 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-af25c08f-ae92-4cf1-8fac-4374bcbc6614" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:17:54 np0005539564 nova_compute[226295]: 2025-11-29 08:17:54.368 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-af25c08f-ae92-4cf1-8fac-4374bcbc6614" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:17:54 np0005539564 nova_compute[226295]: 2025-11-29 08:17:54.369 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:17:54 np0005539564 nova_compute[226295]: 2025-11-29 08:17:54.369 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid af25c08f-ae92-4cf1-8fac-4374bcbc6614 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:17:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e305 e305: 3 total, 3 up, 3 in
Nov 29 03:17:54 np0005539564 nova_compute[226295]: 2025-11-29 08:17:54.679 226310 DEBUG nova.storage.rbd_utils [None req-1e313c9a-e048-430c-a4a6-9c0c2aa3c75d 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] cloning vms/0b082cd2-d1d3-4577-be0a-30b9256a223e_disk@b2adca1188de45ae8ca2042294bd2b54 to images/995630c6-dc23-4abf-afc5-51778a6f1496 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 03:17:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:54.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:54 np0005539564 nova_compute[226295]: 2025-11-29 08:17:54.782 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:17:54 np0005539564 nova_compute[226295]: 2025-11-29 08:17:54.857 226310 DEBUG nova.storage.rbd_utils [None req-1e313c9a-e048-430c-a4a6-9c0c2aa3c75d 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] flattening images/995630c6-dc23-4abf-afc5-51778a6f1496 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 03:17:55 np0005539564 nova_compute[226295]: 2025-11-29 08:17:55.277 226310 DEBUG nova.storage.rbd_utils [None req-1e313c9a-e048-430c-a4a6-9c0c2aa3c75d 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] removing snapshot(b2adca1188de45ae8ca2042294bd2b54) on rbd image(0b082cd2-d1d3-4577-be0a-30b9256a223e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 03:17:55 np0005539564 nova_compute[226295]: 2025-11-29 08:17:55.387 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:17:55 np0005539564 nova_compute[226295]: 2025-11-29 08:17:55.431 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-af25c08f-ae92-4cf1-8fac-4374bcbc6614" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:17:55 np0005539564 nova_compute[226295]: 2025-11-29 08:17:55.432 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:17:55 np0005539564 nova_compute[226295]: 2025-11-29 08:17:55.433 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:17:55 np0005539564 nova_compute[226295]: 2025-11-29 08:17:55.434 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:17:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e306 e306: 3 total, 3 up, 3 in
Nov 29 03:17:55 np0005539564 nova_compute[226295]: 2025-11-29 08:17:55.671 226310 DEBUG nova.storage.rbd_utils [None req-1e313c9a-e048-430c-a4a6-9c0c2aa3c75d 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] creating snapshot(snap) on rbd image(995630c6-dc23-4abf-afc5-51778a6f1496) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:17:56 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:56.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:56 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e307 e307: 3 total, 3 up, 3 in
Nov 29 03:17:56 np0005539564 nova_compute[226295]: 2025-11-29 08:17:56.687 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:56.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:57 np0005539564 nova_compute[226295]: 2025-11-29 08:17:57.405 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:58.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:58 np0005539564 nova_compute[226295]: 2025-11-29 08:17:58.625 226310 DEBUG nova.virt.libvirt.driver [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 03:17:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:17:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:58.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:00.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:00 np0005539564 nova_compute[226295]: 2025-11-29 08:18:00.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:18:00 np0005539564 nova_compute[226295]: 2025-11-29 08:18:00.509 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:18:00 np0005539564 nova_compute[226295]: 2025-11-29 08:18:00.510 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:18:00 np0005539564 nova_compute[226295]: 2025-11-29 08:18:00.511 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:18:00 np0005539564 nova_compute[226295]: 2025-11-29 08:18:00.511 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:18:00 np0005539564 nova_compute[226295]: 2025-11-29 08:18:00.512 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:18:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:00.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:18:01 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1673223080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:18:01 np0005539564 nova_compute[226295]: 2025-11-29 08:18:01.062 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:18:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:01 np0005539564 nova_compute[226295]: 2025-11-29 08:18:01.153 226310 INFO nova.virt.libvirt.driver [None req-1e313c9a-e048-430c-a4a6-9c0c2aa3c75d 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Snapshot image upload complete
Nov 29 03:18:01 np0005539564 nova_compute[226295]: 2025-11-29 08:18:01.155 226310 INFO nova.compute.manager [None req-1e313c9a-e048-430c-a4a6-9c0c2aa3c75d 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Took 7.98 seconds to snapshot the instance on the hypervisor.
Nov 29 03:18:01 np0005539564 nova_compute[226295]: 2025-11-29 08:18:01.196 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:18:01 np0005539564 nova_compute[226295]: 2025-11-29 08:18:01.196 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:18:01 np0005539564 nova_compute[226295]: 2025-11-29 08:18:01.200 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:18:01 np0005539564 nova_compute[226295]: 2025-11-29 08:18:01.201 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:18:01 np0005539564 nova_compute[226295]: 2025-11-29 08:18:01.442 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:18:01 np0005539564 nova_compute[226295]: 2025-11-29 08:18:01.444 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3999MB free_disk=20.941387176513672GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:18:01 np0005539564 nova_compute[226295]: 2025-11-29 08:18:01.444 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:18:01 np0005539564 nova_compute[226295]: 2025-11-29 08:18:01.445 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:18:01 np0005539564 nova_compute[226295]: 2025-11-29 08:18:01.691 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:18:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:02.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:02 np0005539564 nova_compute[226295]: 2025-11-29 08:18:02.407 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:18:02 np0005539564 nova_compute[226295]: 2025-11-29 08:18:02.500 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance af25c08f-ae92-4cf1-8fac-4374bcbc6614 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:18:02 np0005539564 nova_compute[226295]: 2025-11-29 08:18:02.501 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 0b082cd2-d1d3-4577-be0a-30b9256a223e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:18:02 np0005539564 nova_compute[226295]: 2025-11-29 08:18:02.502 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:18:02 np0005539564 nova_compute[226295]: 2025-11-29 08:18:02.502 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:18:02 np0005539564 nova_compute[226295]: 2025-11-29 08:18:02.519 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing inventories for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 03:18:02 np0005539564 nova_compute[226295]: 2025-11-29 08:18:02.548 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating ProviderTree inventory for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 03:18:02 np0005539564 nova_compute[226295]: 2025-11-29 08:18:02.549 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating inventory in ProviderTree for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 03:18:02 np0005539564 nova_compute[226295]: 2025-11-29 08:18:02.572 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing aggregate associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 03:18:02 np0005539564 nova_compute[226295]: 2025-11-29 08:18:02.603 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing trait associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 03:18:02 np0005539564 nova_compute[226295]: 2025-11-29 08:18:02.679 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:18:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:02.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e308 e308: 3 total, 3 up, 3 in
Nov 29 03:18:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:18:03 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/503601174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:18:03 np0005539564 nova_compute[226295]: 2025-11-29 08:18:03.164 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:18:03 np0005539564 nova_compute[226295]: 2025-11-29 08:18:03.175 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:18:03 np0005539564 nova_compute[226295]: 2025-11-29 08:18:03.264 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:18:03 np0005539564 nova_compute[226295]: 2025-11-29 08:18:03.294 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:18:03 np0005539564 nova_compute[226295]: 2025-11-29 08:18:03.294 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:18:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:18:03.732 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:18:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:18:03.734 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:18:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:18:03.734 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:18:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:04.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:18:04.395 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:18:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:18:04.396 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:18:04 np0005539564 nova_compute[226295]: 2025-11-29 08:18:04.444 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:18:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:04.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:05 np0005539564 ovn_controller[130591]: 2025-11-29T08:18:05Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a1:48:33 10.100.0.14
Nov 29 03:18:05 np0005539564 ovn_controller[130591]: 2025-11-29T08:18:05Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a1:48:33 10.100.0.14
Nov 29 03:18:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:06.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:06 np0005539564 nova_compute[226295]: 2025-11-29 08:18:06.668 226310 INFO nova.virt.libvirt.driver [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Instance shutdown successfully after 18 seconds.
Nov 29 03:18:06 np0005539564 nova_compute[226295]: 2025-11-29 08:18:06.694 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:18:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:06.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:06 np0005539564 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Nov 29 03:18:06 np0005539564 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d0000007f.scope: Consumed 15.134s CPU time.
Nov 29 03:18:06 np0005539564 systemd-machined[190128]: Machine qemu-56-instance-0000007f terminated.
Nov 29 03:18:07 np0005539564 nova_compute[226295]: 2025-11-29 08:18:07.102 226310 INFO nova.virt.libvirt.driver [-] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Instance destroyed successfully.
Nov 29 03:18:07 np0005539564 nova_compute[226295]: 2025-11-29 08:18:07.108 226310 INFO nova.virt.libvirt.driver [-] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Instance destroyed successfully.
Nov 29 03:18:07 np0005539564 nova_compute[226295]: 2025-11-29 08:18:07.410 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:18:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:08.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:08 np0005539564 podman[274103]: 2025-11-29 08:18:08.563000316 +0000 UTC m=+0.098358534 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:18:08 np0005539564 podman[274102]: 2025-11-29 08:18:08.599685016 +0000 UTC m=+0.138344293 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:18:08 np0005539564 podman[274101]: 2025-11-29 08:18:08.615274546 +0000 UTC m=+0.156048040 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:18:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:08.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:09 np0005539564 nova_compute[226295]: 2025-11-29 08:18:09.498 226310 INFO nova.virt.libvirt.driver [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Deleting instance files /var/lib/nova/instances/af25c08f-ae92-4cf1-8fac-4374bcbc6614_del#033[00m
Nov 29 03:18:09 np0005539564 nova_compute[226295]: 2025-11-29 08:18:09.500 226310 INFO nova.virt.libvirt.driver [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Deletion of /var/lib/nova/instances/af25c08f-ae92-4cf1-8fac-4374bcbc6614_del complete#033[00m
Nov 29 03:18:09 np0005539564 nova_compute[226295]: 2025-11-29 08:18:09.668 226310 DEBUG nova.virt.libvirt.driver [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:18:09 np0005539564 nova_compute[226295]: 2025-11-29 08:18:09.669 226310 INFO nova.virt.libvirt.driver [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Creating image(s)#033[00m
Nov 29 03:18:09 np0005539564 nova_compute[226295]: 2025-11-29 08:18:09.702 226310 DEBUG nova.storage.rbd_utils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] rbd image af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:09 np0005539564 nova_compute[226295]: 2025-11-29 08:18:09.733 226310 DEBUG nova.storage.rbd_utils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] rbd image af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:09 np0005539564 nova_compute[226295]: 2025-11-29 08:18:09.760 226310 DEBUG nova.storage.rbd_utils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] rbd image af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:09 np0005539564 nova_compute[226295]: 2025-11-29 08:18:09.764 226310 DEBUG oslo_concurrency.processutils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:09 np0005539564 nova_compute[226295]: 2025-11-29 08:18:09.861 226310 DEBUG oslo_concurrency.processutils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:09 np0005539564 nova_compute[226295]: 2025-11-29 08:18:09.862 226310 DEBUG oslo_concurrency.lockutils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Acquiring lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:09 np0005539564 nova_compute[226295]: 2025-11-29 08:18:09.864 226310 DEBUG oslo_concurrency.lockutils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:09 np0005539564 nova_compute[226295]: 2025-11-29 08:18:09.864 226310 DEBUG oslo_concurrency.lockutils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:09 np0005539564 nova_compute[226295]: 2025-11-29 08:18:09.907 226310 DEBUG nova.storage.rbd_utils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] rbd image af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:09 np0005539564 nova_compute[226295]: 2025-11-29 08:18:09.911 226310 DEBUG oslo_concurrency.processutils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:10 np0005539564 nova_compute[226295]: 2025-11-29 08:18:10.273 226310 DEBUG oslo_concurrency.processutils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:10.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:10 np0005539564 nova_compute[226295]: 2025-11-29 08:18:10.371 226310 DEBUG nova.storage.rbd_utils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] resizing rbd image af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:18:10 np0005539564 nova_compute[226295]: 2025-11-29 08:18:10.521 226310 DEBUG nova.virt.libvirt.driver [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:18:10 np0005539564 nova_compute[226295]: 2025-11-29 08:18:10.522 226310 DEBUG nova.virt.libvirt.driver [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Ensure instance console log exists: /var/lib/nova/instances/af25c08f-ae92-4cf1-8fac-4374bcbc6614/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:18:10 np0005539564 nova_compute[226295]: 2025-11-29 08:18:10.522 226310 DEBUG oslo_concurrency.lockutils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:10 np0005539564 nova_compute[226295]: 2025-11-29 08:18:10.522 226310 DEBUG oslo_concurrency.lockutils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:10 np0005539564 nova_compute[226295]: 2025-11-29 08:18:10.523 226310 DEBUG oslo_concurrency.lockutils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:10 np0005539564 nova_compute[226295]: 2025-11-29 08:18:10.525 226310 DEBUG nova.virt.libvirt.driver [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:40:38Z,direct_url=<?>,disk_format='qcow2',id=ed489666-5fa2-4ea4-8005-7a7505ac1b78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:18:10 np0005539564 nova_compute[226295]: 2025-11-29 08:18:10.529 226310 WARNING nova.virt.libvirt.driver [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 29 03:18:10 np0005539564 nova_compute[226295]: 2025-11-29 08:18:10.537 226310 DEBUG nova.virt.libvirt.host [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:18:10 np0005539564 nova_compute[226295]: 2025-11-29 08:18:10.537 226310 DEBUG nova.virt.libvirt.host [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:18:10 np0005539564 nova_compute[226295]: 2025-11-29 08:18:10.542 226310 DEBUG nova.virt.libvirt.host [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:18:10 np0005539564 nova_compute[226295]: 2025-11-29 08:18:10.542 226310 DEBUG nova.virt.libvirt.host [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:18:10 np0005539564 nova_compute[226295]: 2025-11-29 08:18:10.544 226310 DEBUG nova.virt.libvirt.driver [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:18:10 np0005539564 nova_compute[226295]: 2025-11-29 08:18:10.544 226310 DEBUG nova.virt.hardware [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:40:38Z,direct_url=<?>,disk_format='qcow2',id=ed489666-5fa2-4ea4-8005-7a7505ac1b78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:18:10 np0005539564 nova_compute[226295]: 2025-11-29 08:18:10.545 226310 DEBUG nova.virt.hardware [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:18:10 np0005539564 nova_compute[226295]: 2025-11-29 08:18:10.545 226310 DEBUG nova.virt.hardware [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:18:10 np0005539564 nova_compute[226295]: 2025-11-29 08:18:10.545 226310 DEBUG nova.virt.hardware [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:18:10 np0005539564 nova_compute[226295]: 2025-11-29 08:18:10.545 226310 DEBUG nova.virt.hardware [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:18:10 np0005539564 nova_compute[226295]: 2025-11-29 08:18:10.546 226310 DEBUG nova.virt.hardware [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:18:10 np0005539564 nova_compute[226295]: 2025-11-29 08:18:10.546 226310 DEBUG nova.virt.hardware [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:18:10 np0005539564 nova_compute[226295]: 2025-11-29 08:18:10.546 226310 DEBUG nova.virt.hardware [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:18:10 np0005539564 nova_compute[226295]: 2025-11-29 08:18:10.546 226310 DEBUG nova.virt.hardware [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:18:10 np0005539564 nova_compute[226295]: 2025-11-29 08:18:10.547 226310 DEBUG nova.virt.hardware [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:18:10 np0005539564 nova_compute[226295]: 2025-11-29 08:18:10.547 226310 DEBUG nova.virt.hardware [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:18:10 np0005539564 nova_compute[226295]: 2025-11-29 08:18:10.547 226310 DEBUG nova.objects.instance [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lazy-loading 'vcpu_model' on Instance uuid af25c08f-ae92-4cf1-8fac-4374bcbc6614 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:10 np0005539564 nova_compute[226295]: 2025-11-29 08:18:10.564 226310 DEBUG oslo_concurrency.processutils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:10.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:18:11 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3279641367' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:18:11 np0005539564 nova_compute[226295]: 2025-11-29 08:18:11.047 226310 DEBUG oslo_concurrency.processutils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:11 np0005539564 nova_compute[226295]: 2025-11-29 08:18:11.081 226310 DEBUG nova.storage.rbd_utils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] rbd image af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:11 np0005539564 nova_compute[226295]: 2025-11-29 08:18:11.084 226310 DEBUG oslo_concurrency.processutils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:18:11 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/565305582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:18:11 np0005539564 nova_compute[226295]: 2025-11-29 08:18:11.613 226310 DEBUG oslo_concurrency.processutils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:11 np0005539564 nova_compute[226295]: 2025-11-29 08:18:11.618 226310 DEBUG nova.virt.libvirt.driver [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:18:11 np0005539564 nova_compute[226295]:  <uuid>af25c08f-ae92-4cf1-8fac-4374bcbc6614</uuid>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:  <name>instance-0000007f</name>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerShowV254Test-server-1944351406</nova:name>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:18:10</nova:creationTime>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:18:11 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:        <nova:user uuid="942fc293eca343c5968f74e14b91d515">tempest-ServerShowV254Test-611860325-project-member</nova:user>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:        <nova:project uuid="2f97cf99a149454eb333842c8e9713b0">tempest-ServerShowV254Test-611860325</nova:project>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="ed489666-5fa2-4ea4-8005-7a7505ac1b78"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      <nova:ports/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      <entry name="serial">af25c08f-ae92-4cf1-8fac-4374bcbc6614</entry>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      <entry name="uuid">af25c08f-ae92-4cf1-8fac-4374bcbc6614</entry>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk">
Nov 29 03:18:11 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:18:11 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk.config">
Nov 29 03:18:11 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:18:11 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/af25c08f-ae92-4cf1-8fac-4374bcbc6614/console.log" append="off"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:18:11 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:18:11 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:18:11 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:18:11 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 03:18:11 np0005539564 nova_compute[226295]: 2025-11-29 08:18:11.695 226310 DEBUG nova.virt.libvirt.driver [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:18:11 np0005539564 nova_compute[226295]: 2025-11-29 08:18:11.695 226310 DEBUG nova.virt.libvirt.driver [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:18:11 np0005539564 nova_compute[226295]: 2025-11-29 08:18:11.695 226310 INFO nova.virt.libvirt.driver [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Using config drive
Nov 29 03:18:11 np0005539564 nova_compute[226295]: 2025-11-29 08:18:11.724 226310 DEBUG nova.storage.rbd_utils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] rbd image af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:18:11 np0005539564 nova_compute[226295]: 2025-11-29 08:18:11.730 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:18:11 np0005539564 nova_compute[226295]: 2025-11-29 08:18:11.752 226310 DEBUG nova.objects.instance [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lazy-loading 'ec2_ids' on Instance uuid af25c08f-ae92-4cf1-8fac-4374bcbc6614 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:18:12 np0005539564 nova_compute[226295]: 2025-11-29 08:18:12.077 226310 INFO nova.virt.libvirt.driver [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Creating config drive at /var/lib/nova/instances/af25c08f-ae92-4cf1-8fac-4374bcbc6614/disk.config
Nov 29 03:18:12 np0005539564 nova_compute[226295]: 2025-11-29 08:18:12.083 226310 DEBUG oslo_concurrency.processutils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af25c08f-ae92-4cf1-8fac-4374bcbc6614/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp32pkfx2z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:18:12 np0005539564 nova_compute[226295]: 2025-11-29 08:18:12.225 226310 DEBUG oslo_concurrency.processutils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af25c08f-ae92-4cf1-8fac-4374bcbc6614/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp32pkfx2z" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:18:12 np0005539564 nova_compute[226295]: 2025-11-29 08:18:12.271 226310 DEBUG nova.storage.rbd_utils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] rbd image af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:18:12 np0005539564 nova_compute[226295]: 2025-11-29 08:18:12.276 226310 DEBUG oslo_concurrency.processutils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af25c08f-ae92-4cf1-8fac-4374bcbc6614/disk.config af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:18:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:12.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:12 np0005539564 nova_compute[226295]: 2025-11-29 08:18:12.457 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:18:12 np0005539564 nova_compute[226295]: 2025-11-29 08:18:12.515 226310 DEBUG oslo_concurrency.processutils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af25c08f-ae92-4cf1-8fac-4374bcbc6614/disk.config af25c08f-ae92-4cf1-8fac-4374bcbc6614_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.239s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:18:12 np0005539564 nova_compute[226295]: 2025-11-29 08:18:12.516 226310 INFO nova.virt.libvirt.driver [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Deleting local config drive /var/lib/nova/instances/af25c08f-ae92-4cf1-8fac-4374bcbc6614/disk.config because it was imported into RBD.
Nov 29 03:18:12 np0005539564 systemd-machined[190128]: New machine qemu-58-instance-0000007f.
Nov 29 03:18:12 np0005539564 systemd[1]: Started Virtual Machine qemu-58-instance-0000007f.
Nov 29 03:18:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:12.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:13 np0005539564 nova_compute[226295]: 2025-11-29 08:18:13.109 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Removed pending event for af25c08f-ae92-4cf1-8fac-4374bcbc6614 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 03:18:13 np0005539564 nova_compute[226295]: 2025-11-29 08:18:13.110 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404293.108701, af25c08f-ae92-4cf1-8fac-4374bcbc6614 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:18:13 np0005539564 nova_compute[226295]: 2025-11-29 08:18:13.110 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] VM Resumed (Lifecycle Event)
Nov 29 03:18:13 np0005539564 nova_compute[226295]: 2025-11-29 08:18:13.116 226310 DEBUG nova.compute.manager [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:18:13 np0005539564 nova_compute[226295]: 2025-11-29 08:18:13.116 226310 DEBUG nova.virt.libvirt.driver [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:18:13 np0005539564 nova_compute[226295]: 2025-11-29 08:18:13.124 226310 INFO nova.virt.libvirt.driver [-] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Instance spawned successfully.
Nov 29 03:18:13 np0005539564 nova_compute[226295]: 2025-11-29 08:18:13.125 226310 DEBUG nova.virt.libvirt.driver [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:18:13 np0005539564 nova_compute[226295]: 2025-11-29 08:18:13.148 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:18:13 np0005539564 nova_compute[226295]: 2025-11-29 08:18:13.159 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:18:13 np0005539564 nova_compute[226295]: 2025-11-29 08:18:13.166 226310 DEBUG nova.virt.libvirt.driver [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:18:13 np0005539564 nova_compute[226295]: 2025-11-29 08:18:13.167 226310 DEBUG nova.virt.libvirt.driver [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:18:13 np0005539564 nova_compute[226295]: 2025-11-29 08:18:13.167 226310 DEBUG nova.virt.libvirt.driver [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:18:13 np0005539564 nova_compute[226295]: 2025-11-29 08:18:13.168 226310 DEBUG nova.virt.libvirt.driver [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:18:13 np0005539564 nova_compute[226295]: 2025-11-29 08:18:13.169 226310 DEBUG nova.virt.libvirt.driver [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:18:13 np0005539564 nova_compute[226295]: 2025-11-29 08:18:13.170 226310 DEBUG nova.virt.libvirt.driver [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:18:13 np0005539564 nova_compute[226295]: 2025-11-29 08:18:13.214 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 29 03:18:13 np0005539564 nova_compute[226295]: 2025-11-29 08:18:13.215 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404293.1144152, af25c08f-ae92-4cf1-8fac-4374bcbc6614 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:18:13 np0005539564 nova_compute[226295]: 2025-11-29 08:18:13.215 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] VM Started (Lifecycle Event)
Nov 29 03:18:13 np0005539564 nova_compute[226295]: 2025-11-29 08:18:13.246 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:18:13 np0005539564 nova_compute[226295]: 2025-11-29 08:18:13.253 226310 DEBUG nova.compute.manager [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:18:13 np0005539564 nova_compute[226295]: 2025-11-29 08:18:13.255 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:18:13 np0005539564 nova_compute[226295]: 2025-11-29 08:18:13.315 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 29 03:18:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:18:13.398 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:18:13 np0005539564 nova_compute[226295]: 2025-11-29 08:18:13.505 226310 DEBUG oslo_concurrency.lockutils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:18:13 np0005539564 nova_compute[226295]: 2025-11-29 08:18:13.505 226310 DEBUG oslo_concurrency.lockutils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:18:13 np0005539564 nova_compute[226295]: 2025-11-29 08:18:13.506 226310 DEBUG nova.objects.instance [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #91. Immutable memtables: 0.
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:13.753408) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 91
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404293753507, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 1836, "num_deletes": 257, "total_data_size": 3945448, "memory_usage": 4004712, "flush_reason": "Manual Compaction"}
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #92: started
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404293769512, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 92, "file_size": 1642320, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47883, "largest_seqno": 49714, "table_properties": {"data_size": 1636270, "index_size": 3060, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 16587, "raw_average_key_size": 21, "raw_value_size": 1622724, "raw_average_value_size": 2112, "num_data_blocks": 134, "num_entries": 768, "num_filter_entries": 768, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404157, "oldest_key_time": 1764404157, "file_creation_time": 1764404293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 16140 microseconds, and 8793 cpu microseconds.
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:13.769564) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #92: 1642320 bytes OK
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:13.769587) [db/memtable_list.cc:519] [default] Level-0 commit table #92 started
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:13.771453) [db/memtable_list.cc:722] [default] Level-0 commit table #92: memtable #1 done
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:13.771469) EVENT_LOG_v1 {"time_micros": 1764404293771463, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:13.771488) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 3936971, prev total WAL file size 3936971, number of live WAL files 2.
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000088.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:13.772646) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353131' seq:72057594037927935, type:22 .. '6D6772737461740031373635' seq:0, type:0; will stop at (end)
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [92(1603KB)], [90(11MB)]
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404293772715, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [92], "files_L6": [90], "score": -1, "input_data_size": 13756021, "oldest_snapshot_seqno": -1}
Nov 29 03:18:13 np0005539564 nova_compute[226295]: 2025-11-29 08:18:13.884 226310 DEBUG oslo_concurrency.lockutils [None req-ec163fd9-382a-4a38-bf9d-ff0a589535a2 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.379s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #93: 8110 keys, 10792413 bytes, temperature: kUnknown
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404293891430, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 93, "file_size": 10792413, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10740173, "index_size": 30880, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20293, "raw_key_size": 208832, "raw_average_key_size": 25, "raw_value_size": 10597587, "raw_average_value_size": 1306, "num_data_blocks": 1213, "num_entries": 8110, "num_filter_entries": 8110, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764404293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 93, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:13.891815) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 10792413 bytes
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:13.893347) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 115.8 rd, 90.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 11.6 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(14.9) write-amplify(6.6) OK, records in: 8579, records dropped: 469 output_compression: NoCompression
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:13.893386) EVENT_LOG_v1 {"time_micros": 1764404293893365, "job": 56, "event": "compaction_finished", "compaction_time_micros": 118816, "compaction_time_cpu_micros": 45128, "output_level": 6, "num_output_files": 1, "total_output_size": 10792413, "num_input_records": 8579, "num_output_records": 8110, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404293894251, "job": 56, "event": "table_file_deletion", "file_number": 92}
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000090.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404293898798, "job": 56, "event": "table_file_deletion", "file_number": 90}
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:13.772480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:13.898870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:13.898880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:13.898883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:13.898886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:13.898895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:14.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:14.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:15 np0005539564 nova_compute[226295]: 2025-11-29 08:18:15.119 226310 DEBUG oslo_concurrency.lockutils [None req-19724815-2509-4637-9c83-9c8c37b57e07 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Acquiring lock "af25c08f-ae92-4cf1-8fac-4374bcbc6614" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:18:15 np0005539564 nova_compute[226295]: 2025-11-29 08:18:15.120 226310 DEBUG oslo_concurrency.lockutils [None req-19724815-2509-4637-9c83-9c8c37b57e07 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lock "af25c08f-ae92-4cf1-8fac-4374bcbc6614" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:18:15 np0005539564 nova_compute[226295]: 2025-11-29 08:18:15.120 226310 DEBUG oslo_concurrency.lockutils [None req-19724815-2509-4637-9c83-9c8c37b57e07 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Acquiring lock "af25c08f-ae92-4cf1-8fac-4374bcbc6614-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:18:15 np0005539564 nova_compute[226295]: 2025-11-29 08:18:15.120 226310 DEBUG oslo_concurrency.lockutils [None req-19724815-2509-4637-9c83-9c8c37b57e07 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lock "af25c08f-ae92-4cf1-8fac-4374bcbc6614-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:18:15 np0005539564 nova_compute[226295]: 2025-11-29 08:18:15.120 226310 DEBUG oslo_concurrency.lockutils [None req-19724815-2509-4637-9c83-9c8c37b57e07 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lock "af25c08f-ae92-4cf1-8fac-4374bcbc6614-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:18:15 np0005539564 nova_compute[226295]: 2025-11-29 08:18:15.122 226310 INFO nova.compute.manager [None req-19724815-2509-4637-9c83-9c8c37b57e07 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Terminating instance
Nov 29 03:18:15 np0005539564 nova_compute[226295]: 2025-11-29 08:18:15.122 226310 DEBUG oslo_concurrency.lockutils [None req-19724815-2509-4637-9c83-9c8c37b57e07 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Acquiring lock "refresh_cache-af25c08f-ae92-4cf1-8fac-4374bcbc6614" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:18:15 np0005539564 nova_compute[226295]: 2025-11-29 08:18:15.123 226310 DEBUG oslo_concurrency.lockutils [None req-19724815-2509-4637-9c83-9c8c37b57e07 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Acquired lock "refresh_cache-af25c08f-ae92-4cf1-8fac-4374bcbc6614" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:18:15 np0005539564 nova_compute[226295]: 2025-11-29 08:18:15.123 226310 DEBUG nova.network.neutron [None req-19724815-2509-4637-9c83-9c8c37b57e07 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:18:15 np0005539564 nova_compute[226295]: 2025-11-29 08:18:15.291 226310 DEBUG nova.network.neutron [None req-19724815-2509-4637-9c83-9c8c37b57e07 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:18:15 np0005539564 nova_compute[226295]: 2025-11-29 08:18:15.712 226310 DEBUG nova.network.neutron [None req-19724815-2509-4637-9c83-9c8c37b57e07 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:18:15 np0005539564 nova_compute[226295]: 2025-11-29 08:18:15.883 226310 DEBUG oslo_concurrency.lockutils [None req-19724815-2509-4637-9c83-9c8c37b57e07 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Releasing lock "refresh_cache-af25c08f-ae92-4cf1-8fac-4374bcbc6614" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:18:15 np0005539564 nova_compute[226295]: 2025-11-29 08:18:15.884 226310 DEBUG nova.compute.manager [None req-19724815-2509-4637-9c83-9c8c37b57e07 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:18:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:16.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:18:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:18:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:18:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:18:16 np0005539564 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Nov 29 03:18:16 np0005539564 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000007f.scope: Consumed 3.455s CPU time.
Nov 29 03:18:16 np0005539564 systemd-machined[190128]: Machine qemu-58-instance-0000007f terminated.
Nov 29 03:18:16 np0005539564 nova_compute[226295]: 2025-11-29 08:18:16.515 226310 INFO nova.virt.libvirt.driver [-] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Instance destroyed successfully.#033[00m
Nov 29 03:18:16 np0005539564 nova_compute[226295]: 2025-11-29 08:18:16.516 226310 DEBUG nova.objects.instance [None req-19724815-2509-4637-9c83-9c8c37b57e07 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lazy-loading 'resources' on Instance uuid af25c08f-ae92-4cf1-8fac-4374bcbc6614 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:16.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:16 np0005539564 nova_compute[226295]: 2025-11-29 08:18:16.733 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539564 nova_compute[226295]: 2025-11-29 08:18:17.090 226310 INFO nova.virt.libvirt.driver [None req-19724815-2509-4637-9c83-9c8c37b57e07 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Deleting instance files /var/lib/nova/instances/af25c08f-ae92-4cf1-8fac-4374bcbc6614_del#033[00m
Nov 29 03:18:17 np0005539564 nova_compute[226295]: 2025-11-29 08:18:17.092 226310 INFO nova.virt.libvirt.driver [None req-19724815-2509-4637-9c83-9c8c37b57e07 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Deletion of /var/lib/nova/instances/af25c08f-ae92-4cf1-8fac-4374bcbc6614_del complete#033[00m
Nov 29 03:18:17 np0005539564 nova_compute[226295]: 2025-11-29 08:18:17.136 226310 INFO nova.compute.manager [None req-19724815-2509-4637-9c83-9c8c37b57e07 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Took 1.25 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:18:17 np0005539564 nova_compute[226295]: 2025-11-29 08:18:17.136 226310 DEBUG oslo.service.loopingcall [None req-19724815-2509-4637-9c83-9c8c37b57e07 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:18:17 np0005539564 nova_compute[226295]: 2025-11-29 08:18:17.137 226310 DEBUG nova.compute.manager [-] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:18:17 np0005539564 nova_compute[226295]: 2025-11-29 08:18:17.137 226310 DEBUG nova.network.neutron [-] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:18:17 np0005539564 nova_compute[226295]: 2025-11-29 08:18:17.376 226310 DEBUG nova.network.neutron [-] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:18:17 np0005539564 nova_compute[226295]: 2025-11-29 08:18:17.404 226310 DEBUG nova.network.neutron [-] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:18:17 np0005539564 nova_compute[226295]: 2025-11-29 08:18:17.434 226310 INFO nova.compute.manager [-] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Took 0.30 seconds to deallocate network for instance.#033[00m
Nov 29 03:18:17 np0005539564 nova_compute[226295]: 2025-11-29 08:18:17.457 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539564 nova_compute[226295]: 2025-11-29 08:18:17.474 226310 DEBUG oslo_concurrency.lockutils [None req-19724815-2509-4637-9c83-9c8c37b57e07 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:17 np0005539564 nova_compute[226295]: 2025-11-29 08:18:17.475 226310 DEBUG oslo_concurrency.lockutils [None req-19724815-2509-4637-9c83-9c8c37b57e07 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:17 np0005539564 nova_compute[226295]: 2025-11-29 08:18:17.710 226310 DEBUG oslo_concurrency.processutils [None req-19724815-2509-4637-9c83-9c8c37b57e07 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:18:18 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2182033048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:18:18 np0005539564 nova_compute[226295]: 2025-11-29 08:18:18.226 226310 DEBUG oslo_concurrency.processutils [None req-19724815-2509-4637-9c83-9c8c37b57e07 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:18 np0005539564 nova_compute[226295]: 2025-11-29 08:18:18.236 226310 DEBUG nova.compute.provider_tree [None req-19724815-2509-4637-9c83-9c8c37b57e07 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:18:18 np0005539564 nova_compute[226295]: 2025-11-29 08:18:18.267 226310 DEBUG nova.scheduler.client.report [None req-19724815-2509-4637-9c83-9c8c37b57e07 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:18:18 np0005539564 nova_compute[226295]: 2025-11-29 08:18:18.314 226310 DEBUG oslo_concurrency.lockutils [None req-19724815-2509-4637-9c83-9c8c37b57e07 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:18.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:18 np0005539564 nova_compute[226295]: 2025-11-29 08:18:18.360 226310 INFO nova.scheduler.client.report [None req-19724815-2509-4637-9c83-9c8c37b57e07 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Deleted allocations for instance af25c08f-ae92-4cf1-8fac-4374bcbc6614#033[00m
Nov 29 03:18:18 np0005539564 nova_compute[226295]: 2025-11-29 08:18:18.454 226310 DEBUG oslo_concurrency.lockutils [None req-19724815-2509-4637-9c83-9c8c37b57e07 942fc293eca343c5968f74e14b91d515 2f97cf99a149454eb333842c8e9713b0 - - default default] Lock "af25c08f-ae92-4cf1-8fac-4374bcbc6614" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:18.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:20.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:20.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:21 np0005539564 nova_compute[226295]: 2025-11-29 08:18:21.736 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:22.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:22 np0005539564 nova_compute[226295]: 2025-11-29 08:18:22.460 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:22.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:23 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:18:23 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:18:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:24.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:24.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:26.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:26 np0005539564 nova_compute[226295]: 2025-11-29 08:18:26.741 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:26.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:27 np0005539564 nova_compute[226295]: 2025-11-29 08:18:27.463 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:18:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3454248546' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:18:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:18:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3454248546' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:18:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:28.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #94. Immutable memtables: 0.
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:28.669371) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 94
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404308669422, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 445, "num_deletes": 255, "total_data_size": 501889, "memory_usage": 510920, "flush_reason": "Manual Compaction"}
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #95: started
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404308675541, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 95, "file_size": 331057, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49719, "largest_seqno": 50159, "table_properties": {"data_size": 328498, "index_size": 595, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6166, "raw_average_key_size": 18, "raw_value_size": 323375, "raw_average_value_size": 971, "num_data_blocks": 25, "num_entries": 333, "num_filter_entries": 333, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404294, "oldest_key_time": 1764404294, "file_creation_time": 1764404308, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 6273 microseconds, and 2863 cpu microseconds.
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:28.675648) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #95: 331057 bytes OK
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:28.675675) [db/memtable_list.cc:519] [default] Level-0 commit table #95 started
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:28.677420) [db/memtable_list.cc:722] [default] Level-0 commit table #95: memtable #1 done
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:28.677456) EVENT_LOG_v1 {"time_micros": 1764404308677447, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:28.677483) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 499083, prev total WAL file size 499083, number of live WAL files 2.
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000091.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:28.678282) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353133' seq:72057594037927935, type:22 .. '6C6F676D0031373634' seq:0, type:0; will stop at (end)
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [95(323KB)], [93(10MB)]
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404308678544, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [95], "files_L6": [93], "score": -1, "input_data_size": 11123470, "oldest_snapshot_seqno": -1}
Nov 29 03:18:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:28.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #96: 7921 keys, 10984636 bytes, temperature: kUnknown
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404308827698, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 96, "file_size": 10984636, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10932948, "index_size": 30776, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19845, "raw_key_size": 205877, "raw_average_key_size": 25, "raw_value_size": 10792923, "raw_average_value_size": 1362, "num_data_blocks": 1206, "num_entries": 7921, "num_filter_entries": 7921, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764404308, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 96, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:28.828143) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 10984636 bytes
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:28.829688) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 74.6 rd, 73.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.3 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(66.8) write-amplify(33.2) OK, records in: 8443, records dropped: 522 output_compression: NoCompression
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:28.829718) EVENT_LOG_v1 {"time_micros": 1764404308829704, "job": 58, "event": "compaction_finished", "compaction_time_micros": 149208, "compaction_time_cpu_micros": 27956, "output_level": 6, "num_output_files": 1, "total_output_size": 10984636, "num_input_records": 8443, "num_output_records": 7921, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404308830070, "job": 58, "event": "table_file_deletion", "file_number": 95}
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000093.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404308833952, "job": 58, "event": "table_file_deletion", "file_number": 93}
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:28.678110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:28.834018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:28.834026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:28.834030) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:28.834034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:28 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:18:28.834038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:30.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:30.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:31 np0005539564 nova_compute[226295]: 2025-11-29 08:18:31.291 226310 DEBUG oslo_concurrency.lockutils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Acquiring lock "20483537-11e8-42ae-9d49-1e955b2cd34f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:31 np0005539564 nova_compute[226295]: 2025-11-29 08:18:31.292 226310 DEBUG oslo_concurrency.lockutils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "20483537-11e8-42ae-9d49-1e955b2cd34f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:31 np0005539564 nova_compute[226295]: 2025-11-29 08:18:31.311 226310 DEBUG nova.compute.manager [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:18:31 np0005539564 nova_compute[226295]: 2025-11-29 08:18:31.513 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404296.5120223, af25c08f-ae92-4cf1-8fac-4374bcbc6614 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:31 np0005539564 nova_compute[226295]: 2025-11-29 08:18:31.513 226310 INFO nova.compute.manager [-] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:18:31 np0005539564 nova_compute[226295]: 2025-11-29 08:18:31.546 226310 DEBUG nova.compute.manager [None req-98e0154c-2beb-48ed-ac30-edb048c45c70 - - - - - -] [instance: af25c08f-ae92-4cf1-8fac-4374bcbc6614] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:31 np0005539564 nova_compute[226295]: 2025-11-29 08:18:31.577 226310 DEBUG oslo_concurrency.lockutils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:31 np0005539564 nova_compute[226295]: 2025-11-29 08:18:31.578 226310 DEBUG oslo_concurrency.lockutils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:31 np0005539564 nova_compute[226295]: 2025-11-29 08:18:31.585 226310 DEBUG nova.virt.hardware [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:18:31 np0005539564 nova_compute[226295]: 2025-11-29 08:18:31.585 226310 INFO nova.compute.claims [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:18:31 np0005539564 nova_compute[226295]: 2025-11-29 08:18:31.745 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:32 np0005539564 nova_compute[226295]: 2025-11-29 08:18:32.045 226310 DEBUG oslo_concurrency.processutils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:32.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:32 np0005539564 nova_compute[226295]: 2025-11-29 08:18:32.468 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:18:32 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/998118513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:18:32 np0005539564 nova_compute[226295]: 2025-11-29 08:18:32.552 226310 DEBUG oslo_concurrency.processutils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:32 np0005539564 nova_compute[226295]: 2025-11-29 08:18:32.563 226310 DEBUG nova.compute.provider_tree [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:18:32 np0005539564 nova_compute[226295]: 2025-11-29 08:18:32.589 226310 DEBUG nova.scheduler.client.report [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:18:32 np0005539564 nova_compute[226295]: 2025-11-29 08:18:32.630 226310 DEBUG oslo_concurrency.lockutils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:32 np0005539564 nova_compute[226295]: 2025-11-29 08:18:32.632 226310 DEBUG nova.compute.manager [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:18:32 np0005539564 nova_compute[226295]: 2025-11-29 08:18:32.685 226310 DEBUG nova.compute.manager [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Nov 29 03:18:32 np0005539564 nova_compute[226295]: 2025-11-29 08:18:32.698 226310 INFO nova.virt.libvirt.driver [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:18:32 np0005539564 nova_compute[226295]: 2025-11-29 08:18:32.719 226310 DEBUG nova.compute.manager [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:18:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:32.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:32 np0005539564 nova_compute[226295]: 2025-11-29 08:18:32.810 226310 DEBUG nova.compute.manager [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:18:32 np0005539564 nova_compute[226295]: 2025-11-29 08:18:32.813 226310 DEBUG nova.virt.libvirt.driver [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:18:32 np0005539564 nova_compute[226295]: 2025-11-29 08:18:32.814 226310 INFO nova.virt.libvirt.driver [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Creating image(s)#033[00m
Nov 29 03:18:32 np0005539564 nova_compute[226295]: 2025-11-29 08:18:32.847 226310 DEBUG nova.storage.rbd_utils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] rbd image 20483537-11e8-42ae-9d49-1e955b2cd34f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:32 np0005539564 nova_compute[226295]: 2025-11-29 08:18:32.876 226310 DEBUG nova.storage.rbd_utils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] rbd image 20483537-11e8-42ae-9d49-1e955b2cd34f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:32 np0005539564 nova_compute[226295]: 2025-11-29 08:18:32.907 226310 DEBUG nova.storage.rbd_utils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] rbd image 20483537-11e8-42ae-9d49-1e955b2cd34f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:32 np0005539564 nova_compute[226295]: 2025-11-29 08:18:32.911 226310 DEBUG oslo_concurrency.processutils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.021 226310 DEBUG oslo_concurrency.processutils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.023 226310 DEBUG oslo_concurrency.lockutils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.024 226310 DEBUG oslo_concurrency.lockutils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.025 226310 DEBUG oslo_concurrency.lockutils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.059 226310 DEBUG nova.storage.rbd_utils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] rbd image 20483537-11e8-42ae-9d49-1e955b2cd34f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.064 226310 DEBUG oslo_concurrency.processutils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 20483537-11e8-42ae-9d49-1e955b2cd34f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.427 226310 DEBUG oslo_concurrency.processutils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 20483537-11e8-42ae-9d49-1e955b2cd34f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.507 226310 DEBUG nova.storage.rbd_utils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] resizing rbd image 20483537-11e8-42ae-9d49-1e955b2cd34f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.598 226310 DEBUG nova.objects.instance [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lazy-loading 'migration_context' on Instance uuid 20483537-11e8-42ae-9d49-1e955b2cd34f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.616 226310 DEBUG nova.virt.libvirt.driver [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.617 226310 DEBUG nova.virt.libvirt.driver [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Ensure instance console log exists: /var/lib/nova/instances/20483537-11e8-42ae-9d49-1e955b2cd34f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.617 226310 DEBUG oslo_concurrency.lockutils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.617 226310 DEBUG oslo_concurrency.lockutils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.618 226310 DEBUG oslo_concurrency.lockutils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.619 226310 DEBUG nova.virt.libvirt.driver [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.623 226310 WARNING nova.virt.libvirt.driver [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.627 226310 DEBUG nova.virt.libvirt.host [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.628 226310 DEBUG nova.virt.libvirt.host [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.631 226310 DEBUG nova.virt.libvirt.host [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.631 226310 DEBUG nova.virt.libvirt.host [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.632 226310 DEBUG nova.virt.libvirt.driver [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.633 226310 DEBUG nova.virt.hardware [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.633 226310 DEBUG nova.virt.hardware [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.633 226310 DEBUG nova.virt.hardware [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.634 226310 DEBUG nova.virt.hardware [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.634 226310 DEBUG nova.virt.hardware [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.634 226310 DEBUG nova.virt.hardware [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.634 226310 DEBUG nova.virt.hardware [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.635 226310 DEBUG nova.virt.hardware [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.635 226310 DEBUG nova.virt.hardware [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.635 226310 DEBUG nova.virt.hardware [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.635 226310 DEBUG nova.virt.hardware [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:18:33 np0005539564 nova_compute[226295]: 2025-11-29 08:18:33.638 226310 DEBUG oslo_concurrency.processutils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:18:34 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2922267082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:18:34 np0005539564 nova_compute[226295]: 2025-11-29 08:18:34.088 226310 DEBUG oslo_concurrency.processutils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:34 np0005539564 nova_compute[226295]: 2025-11-29 08:18:34.116 226310 DEBUG nova.storage.rbd_utils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] rbd image 20483537-11e8-42ae-9d49-1e955b2cd34f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:34 np0005539564 nova_compute[226295]: 2025-11-29 08:18:34.120 226310 DEBUG oslo_concurrency.processutils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:34.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:34 np0005539564 nova_compute[226295]: 2025-11-29 08:18:34.577 226310 DEBUG oslo_concurrency.processutils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:34 np0005539564 nova_compute[226295]: 2025-11-29 08:18:34.581 226310 DEBUG nova.objects.instance [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lazy-loading 'pci_devices' on Instance uuid 20483537-11e8-42ae-9d49-1e955b2cd34f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:34 np0005539564 nova_compute[226295]: 2025-11-29 08:18:34.604 226310 DEBUG nova.virt.libvirt.driver [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:18:34 np0005539564 nova_compute[226295]:  <uuid>20483537-11e8-42ae-9d49-1e955b2cd34f</uuid>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:  <name>instance-00000084</name>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerShowV247Test-server-535377440</nova:name>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:18:33</nova:creationTime>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:18:34 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:        <nova:user uuid="d043b72e9a1f4575835e938f1a090e3a">tempest-ServerShowV247Test-1340079126-project-member</nova:user>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:        <nova:project uuid="4b2d7c5689334b7eb116fab1fd5dedac">tempest-ServerShowV247Test-1340079126</nova:project>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      <nova:ports/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      <entry name="serial">20483537-11e8-42ae-9d49-1e955b2cd34f</entry>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      <entry name="uuid">20483537-11e8-42ae-9d49-1e955b2cd34f</entry>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/20483537-11e8-42ae-9d49-1e955b2cd34f_disk">
Nov 29 03:18:34 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:18:34 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/20483537-11e8-42ae-9d49-1e955b2cd34f_disk.config">
Nov 29 03:18:34 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:18:34 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/20483537-11e8-42ae-9d49-1e955b2cd34f/console.log" append="off"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:18:34 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:18:34 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:18:34 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:18:34 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:18:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:34.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:35 np0005539564 nova_compute[226295]: 2025-11-29 08:18:35.061 226310 DEBUG nova.virt.libvirt.driver [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:18:35 np0005539564 nova_compute[226295]: 2025-11-29 08:18:35.062 226310 DEBUG nova.virt.libvirt.driver [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:18:35 np0005539564 nova_compute[226295]: 2025-11-29 08:18:35.063 226310 INFO nova.virt.libvirt.driver [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Using config drive#033[00m
Nov 29 03:18:35 np0005539564 nova_compute[226295]: 2025-11-29 08:18:35.101 226310 DEBUG nova.storage.rbd_utils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] rbd image 20483537-11e8-42ae-9d49-1e955b2cd34f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:35 np0005539564 nova_compute[226295]: 2025-11-29 08:18:35.374 226310 INFO nova.virt.libvirt.driver [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Creating config drive at /var/lib/nova/instances/20483537-11e8-42ae-9d49-1e955b2cd34f/disk.config#033[00m
Nov 29 03:18:35 np0005539564 nova_compute[226295]: 2025-11-29 08:18:35.381 226310 DEBUG oslo_concurrency.processutils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/20483537-11e8-42ae-9d49-1e955b2cd34f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbqyyj77w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:35 np0005539564 nova_compute[226295]: 2025-11-29 08:18:35.526 226310 DEBUG oslo_concurrency.processutils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/20483537-11e8-42ae-9d49-1e955b2cd34f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbqyyj77w" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:35 np0005539564 nova_compute[226295]: 2025-11-29 08:18:35.562 226310 DEBUG nova.storage.rbd_utils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] rbd image 20483537-11e8-42ae-9d49-1e955b2cd34f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:35 np0005539564 nova_compute[226295]: 2025-11-29 08:18:35.568 226310 DEBUG oslo_concurrency.processutils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/20483537-11e8-42ae-9d49-1e955b2cd34f/disk.config 20483537-11e8-42ae-9d49-1e955b2cd34f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:35 np0005539564 nova_compute[226295]: 2025-11-29 08:18:35.785 226310 DEBUG oslo_concurrency.processutils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/20483537-11e8-42ae-9d49-1e955b2cd34f/disk.config 20483537-11e8-42ae-9d49-1e955b2cd34f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:35 np0005539564 nova_compute[226295]: 2025-11-29 08:18:35.786 226310 INFO nova.virt.libvirt.driver [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Deleting local config drive /var/lib/nova/instances/20483537-11e8-42ae-9d49-1e955b2cd34f/disk.config because it was imported into RBD.#033[00m
Nov 29 03:18:35 np0005539564 systemd-machined[190128]: New machine qemu-59-instance-00000084.
Nov 29 03:18:35 np0005539564 systemd[1]: Started Virtual Machine qemu-59-instance-00000084.
Nov 29 03:18:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:36.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:36 np0005539564 nova_compute[226295]: 2025-11-29 08:18:36.748 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:36.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:36 np0005539564 nova_compute[226295]: 2025-11-29 08:18:36.887 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404316.8871815, 20483537-11e8-42ae-9d49-1e955b2cd34f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:36 np0005539564 nova_compute[226295]: 2025-11-29 08:18:36.888 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:18:36 np0005539564 nova_compute[226295]: 2025-11-29 08:18:36.893 226310 DEBUG nova.compute.manager [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:18:36 np0005539564 nova_compute[226295]: 2025-11-29 08:18:36.893 226310 DEBUG nova.virt.libvirt.driver [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:18:36 np0005539564 nova_compute[226295]: 2025-11-29 08:18:36.899 226310 INFO nova.virt.libvirt.driver [-] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Instance spawned successfully.#033[00m
Nov 29 03:18:36 np0005539564 nova_compute[226295]: 2025-11-29 08:18:36.900 226310 DEBUG nova.virt.libvirt.driver [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:18:36 np0005539564 nova_compute[226295]: 2025-11-29 08:18:36.940 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:36 np0005539564 nova_compute[226295]: 2025-11-29 08:18:36.950 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:18:36 np0005539564 nova_compute[226295]: 2025-11-29 08:18:36.957 226310 DEBUG nova.virt.libvirt.driver [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:36 np0005539564 nova_compute[226295]: 2025-11-29 08:18:36.958 226310 DEBUG nova.virt.libvirt.driver [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:36 np0005539564 nova_compute[226295]: 2025-11-29 08:18:36.958 226310 DEBUG nova.virt.libvirt.driver [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:36 np0005539564 nova_compute[226295]: 2025-11-29 08:18:36.959 226310 DEBUG nova.virt.libvirt.driver [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:36 np0005539564 nova_compute[226295]: 2025-11-29 08:18:36.960 226310 DEBUG nova.virt.libvirt.driver [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:36 np0005539564 nova_compute[226295]: 2025-11-29 08:18:36.961 226310 DEBUG nova.virt.libvirt.driver [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:36 np0005539564 nova_compute[226295]: 2025-11-29 08:18:36.974 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:18:36 np0005539564 nova_compute[226295]: 2025-11-29 08:18:36.975 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404316.8916276, 20483537-11e8-42ae-9d49-1e955b2cd34f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:36 np0005539564 nova_compute[226295]: 2025-11-29 08:18:36.976 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] VM Started (Lifecycle Event)#033[00m
Nov 29 03:18:37 np0005539564 nova_compute[226295]: 2025-11-29 08:18:37.006 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:37 np0005539564 nova_compute[226295]: 2025-11-29 08:18:37.012 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:18:37 np0005539564 nova_compute[226295]: 2025-11-29 08:18:37.046 226310 INFO nova.compute.manager [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Took 4.23 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:18:37 np0005539564 nova_compute[226295]: 2025-11-29 08:18:37.046 226310 DEBUG nova.compute.manager [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:37 np0005539564 nova_compute[226295]: 2025-11-29 08:18:37.048 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:18:37 np0005539564 nova_compute[226295]: 2025-11-29 08:18:37.092 226310 INFO nova.compute.manager [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Took 5.54 seconds to build instance.#033[00m
Nov 29 03:18:37 np0005539564 nova_compute[226295]: 2025-11-29 08:18:37.109 226310 DEBUG oslo_concurrency.lockutils [None req-fd4460b6-cbac-4a4e-b536-df7aa1a05c7e d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "20483537-11e8-42ae-9d49-1e955b2cd34f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:37 np0005539564 nova_compute[226295]: 2025-11-29 08:18:37.471 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e309 e309: 3 total, 3 up, 3 in
Nov 29 03:18:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:38.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:38.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e310 e310: 3 total, 3 up, 3 in
Nov 29 03:18:39 np0005539564 podman[275100]: 2025-11-29 08:18:39.503470209 +0000 UTC m=+0.056247133 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:18:39 np0005539564 podman[275099]: 2025-11-29 08:18:39.516573343 +0000 UTC m=+0.070905668 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd)
Nov 29 03:18:39 np0005539564 podman[275098]: 2025-11-29 08:18:39.537630552 +0000 UTC m=+0.091206167 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:18:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e311 e311: 3 total, 3 up, 3 in
Nov 29 03:18:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:40.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:40.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:41 np0005539564 nova_compute[226295]: 2025-11-29 08:18:41.753 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:42.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:42 np0005539564 nova_compute[226295]: 2025-11-29 08:18:42.474 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:42.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:44.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:44.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:46.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:46 np0005539564 nova_compute[226295]: 2025-11-29 08:18:46.756 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:46.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:47 np0005539564 nova_compute[226295]: 2025-11-29 08:18:47.475 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:18:47.952 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:18:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:18:47.952 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:18:47 np0005539564 nova_compute[226295]: 2025-11-29 08:18:47.954 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:48.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e312 e312: 3 total, 3 up, 3 in
Nov 29 03:18:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:48.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:50.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:50.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:51 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:52 np0005539564 nova_compute[226295]: 2025-11-29 08:18:52.041 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:52 np0005539564 nova_compute[226295]: 2025-11-29 08:18:52.286 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:52 np0005539564 nova_compute[226295]: 2025-11-29 08:18:52.287 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:52 np0005539564 nova_compute[226295]: 2025-11-29 08:18:52.288 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:52.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:52 np0005539564 nova_compute[226295]: 2025-11-29 08:18:52.479 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:52.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:18:52.954 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:53 np0005539564 nova_compute[226295]: 2025-11-29 08:18:53.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:53 np0005539564 nova_compute[226295]: 2025-11-29 08:18:53.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:18:54 np0005539564 nova_compute[226295]: 2025-11-29 08:18:54.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:54 np0005539564 nova_compute[226295]: 2025-11-29 08:18:54.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:18:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:54.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:54 np0005539564 nova_compute[226295]: 2025-11-29 08:18:54.557 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-0b082cd2-d1d3-4577-be0a-30b9256a223e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:18:54 np0005539564 nova_compute[226295]: 2025-11-29 08:18:54.558 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-0b082cd2-d1d3-4577-be0a-30b9256a223e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:18:54 np0005539564 nova_compute[226295]: 2025-11-29 08:18:54.558 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:18:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:54.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:56 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:56 np0005539564 nova_compute[226295]: 2025-11-29 08:18:56.152 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Updating instance_info_cache with network_info: [{"id": "81ca5375-8e5a-46e3-9340-e5375e00d3e6", "address": "fa:16:3e:a1:48:33", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ca5375-8e", "ovs_interfaceid": "81ca5375-8e5a-46e3-9340-e5375e00d3e6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:18:56 np0005539564 nova_compute[226295]: 2025-11-29 08:18:56.179 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-0b082cd2-d1d3-4577-be0a-30b9256a223e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:18:56 np0005539564 nova_compute[226295]: 2025-11-29 08:18:56.180 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:18:56 np0005539564 nova_compute[226295]: 2025-11-29 08:18:56.181 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:56 np0005539564 nova_compute[226295]: 2025-11-29 08:18:56.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:56.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:56.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:57 np0005539564 nova_compute[226295]: 2025-11-29 08:18:57.125 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:57 np0005539564 nova_compute[226295]: 2025-11-29 08:18:57.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:57 np0005539564 nova_compute[226295]: 2025-11-29 08:18:57.483 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:58.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:18:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:58.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:00.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:00.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:01 np0005539564 nova_compute[226295]: 2025-11-29 08:19:01.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:01 np0005539564 nova_compute[226295]: 2025-11-29 08:19:01.375 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:01 np0005539564 nova_compute[226295]: 2025-11-29 08:19:01.376 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:01 np0005539564 nova_compute[226295]: 2025-11-29 08:19:01.376 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:01 np0005539564 nova_compute[226295]: 2025-11-29 08:19:01.376 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:19:01 np0005539564 nova_compute[226295]: 2025-11-29 08:19:01.377 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:19:01 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/585820072' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:19:01 np0005539564 nova_compute[226295]: 2025-11-29 08:19:01.884 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:02 np0005539564 nova_compute[226295]: 2025-11-29 08:19:02.004 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:19:02 np0005539564 nova_compute[226295]: 2025-11-29 08:19:02.005 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:19:02 np0005539564 nova_compute[226295]: 2025-11-29 08:19:02.011 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:19:02 np0005539564 nova_compute[226295]: 2025-11-29 08:19:02.011 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:19:02 np0005539564 nova_compute[226295]: 2025-11-29 08:19:02.128 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:02 np0005539564 nova_compute[226295]: 2025-11-29 08:19:02.271 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:19:02 np0005539564 nova_compute[226295]: 2025-11-29 08:19:02.273 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3989MB free_disk=20.785259246826172GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:19:02 np0005539564 nova_compute[226295]: 2025-11-29 08:19:02.274 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:02 np0005539564 nova_compute[226295]: 2025-11-29 08:19:02.274 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:02 np0005539564 nova_compute[226295]: 2025-11-29 08:19:02.348 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 0b082cd2-d1d3-4577-be0a-30b9256a223e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:19:02 np0005539564 nova_compute[226295]: 2025-11-29 08:19:02.349 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 20483537-11e8-42ae-9d49-1e955b2cd34f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:19:02 np0005539564 nova_compute[226295]: 2025-11-29 08:19:02.349 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:19:02 np0005539564 nova_compute[226295]: 2025-11-29 08:19:02.349 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:19:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:02.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:02 np0005539564 nova_compute[226295]: 2025-11-29 08:19:02.434 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:02 np0005539564 nova_compute[226295]: 2025-11-29 08:19:02.485 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:02.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:19:02 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3757451918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:19:02 np0005539564 nova_compute[226295]: 2025-11-29 08:19:02.983 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:02 np0005539564 nova_compute[226295]: 2025-11-29 08:19:02.991 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:19:03 np0005539564 nova_compute[226295]: 2025-11-29 08:19:03.014 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:19:03 np0005539564 nova_compute[226295]: 2025-11-29 08:19:03.042 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:19:03 np0005539564 nova_compute[226295]: 2025-11-29 08:19:03.043 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:03.733 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:03.734 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:03.736 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:04.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:04.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:06.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:06.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:07 np0005539564 nova_compute[226295]: 2025-11-29 08:19:07.131 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:07 np0005539564 nova_compute[226295]: 2025-11-29 08:19:07.521 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #97. Immutable memtables: 0.
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:19:08.182598) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 97
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404348182636, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 699, "num_deletes": 252, "total_data_size": 1077276, "memory_usage": 1094416, "flush_reason": "Manual Compaction"}
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #98: started
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404348191605, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 98, "file_size": 709396, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50164, "largest_seqno": 50858, "table_properties": {"data_size": 706014, "index_size": 1226, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8177, "raw_average_key_size": 19, "raw_value_size": 699105, "raw_average_value_size": 1676, "num_data_blocks": 54, "num_entries": 417, "num_filter_entries": 417, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404309, "oldest_key_time": 1764404309, "file_creation_time": 1764404348, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 9115 microseconds, and 2835 cpu microseconds.
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:19:08.191703) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #98: 709396 bytes OK
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:19:08.191749) [db/memtable_list.cc:519] [default] Level-0 commit table #98 started
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:19:08.193690) [db/memtable_list.cc:722] [default] Level-0 commit table #98: memtable #1 done
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:19:08.193720) EVENT_LOG_v1 {"time_micros": 1764404348193709, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:19:08.193757) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 1073475, prev total WAL file size 1073475, number of live WAL files 2.
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000094.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:19:08.194884) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [98(692KB)], [96(10MB)]
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404348194982, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [98], "files_L6": [96], "score": -1, "input_data_size": 11694032, "oldest_snapshot_seqno": -1}
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #99: 7822 keys, 9808480 bytes, temperature: kUnknown
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404348313243, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 99, "file_size": 9808480, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9758587, "index_size": 29237, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19589, "raw_key_size": 204598, "raw_average_key_size": 26, "raw_value_size": 9621241, "raw_average_value_size": 1230, "num_data_blocks": 1134, "num_entries": 7822, "num_filter_entries": 7822, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764404348, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 99, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:19:08.313667) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9808480 bytes
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:19:08.327865) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 98.8 rd, 82.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 10.5 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(30.3) write-amplify(13.8) OK, records in: 8338, records dropped: 516 output_compression: NoCompression
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:19:08.327962) EVENT_LOG_v1 {"time_micros": 1764404348327900, "job": 60, "event": "compaction_finished", "compaction_time_micros": 118403, "compaction_time_cpu_micros": 33946, "output_level": 6, "num_output_files": 1, "total_output_size": 9808480, "num_input_records": 8338, "num_output_records": 7822, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404348328459, "job": 60, "event": "table_file_deletion", "file_number": 98}
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000096.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404348333742, "job": 60, "event": "table_file_deletion", "file_number": 96}
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:19:08.194723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:19:08.333845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:19:08.333851) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:19:08.333853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:19:08.333854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:19:08 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:19:08.333855) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:19:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:08.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:19:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:08.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:19:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:10.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:10 np0005539564 podman[275207]: 2025-11-29 08:19:10.548073805 +0000 UTC m=+0.080404944 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:19:10 np0005539564 podman[275206]: 2025-11-29 08:19:10.572821214 +0000 UTC m=+0.104917797 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:19:10 np0005539564 podman[275205]: 2025-11-29 08:19:10.606043332 +0000 UTC m=+0.142486613 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:19:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:10.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:12 np0005539564 nova_compute[226295]: 2025-11-29 08:19:12.172 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:12.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:12 np0005539564 nova_compute[226295]: 2025-11-29 08:19:12.524 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:12.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:14.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:14.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:15 np0005539564 nova_compute[226295]: 2025-11-29 08:19:15.256 226310 DEBUG oslo_concurrency.lockutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "38bba84b-1fb0-460a-a6aa-707ef29970b2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:15 np0005539564 nova_compute[226295]: 2025-11-29 08:19:15.256 226310 DEBUG oslo_concurrency.lockutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:15 np0005539564 nova_compute[226295]: 2025-11-29 08:19:15.273 226310 DEBUG nova.compute.manager [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:19:15 np0005539564 nova_compute[226295]: 2025-11-29 08:19:15.370 226310 DEBUG oslo_concurrency.lockutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:15 np0005539564 nova_compute[226295]: 2025-11-29 08:19:15.371 226310 DEBUG oslo_concurrency.lockutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:15 np0005539564 nova_compute[226295]: 2025-11-29 08:19:15.384 226310 DEBUG nova.virt.hardware [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:19:15 np0005539564 nova_compute[226295]: 2025-11-29 08:19:15.384 226310 INFO nova.compute.claims [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:19:15 np0005539564 nova_compute[226295]: 2025-11-29 08:19:15.541 226310 DEBUG oslo_concurrency.processutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:19:16 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3646109557' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:19:16 np0005539564 nova_compute[226295]: 2025-11-29 08:19:16.044 226310 DEBUG oslo_concurrency.processutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:16 np0005539564 nova_compute[226295]: 2025-11-29 08:19:16.055 226310 DEBUG nova.compute.provider_tree [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:19:16 np0005539564 nova_compute[226295]: 2025-11-29 08:19:16.103 226310 DEBUG nova.scheduler.client.report [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:19:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:16 np0005539564 nova_compute[226295]: 2025-11-29 08:19:16.153 226310 DEBUG oslo_concurrency.lockutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:16 np0005539564 nova_compute[226295]: 2025-11-29 08:19:16.155 226310 DEBUG nova.compute.manager [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:19:16 np0005539564 nova_compute[226295]: 2025-11-29 08:19:16.230 226310 DEBUG nova.compute.manager [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:19:16 np0005539564 nova_compute[226295]: 2025-11-29 08:19:16.231 226310 DEBUG nova.network.neutron [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:19:16 np0005539564 nova_compute[226295]: 2025-11-29 08:19:16.261 226310 INFO nova.virt.libvirt.driver [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:19:16 np0005539564 nova_compute[226295]: 2025-11-29 08:19:16.282 226310 DEBUG nova.compute.manager [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:19:16 np0005539564 nova_compute[226295]: 2025-11-29 08:19:16.393 226310 DEBUG nova.compute.manager [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:19:16 np0005539564 nova_compute[226295]: 2025-11-29 08:19:16.395 226310 DEBUG nova.virt.libvirt.driver [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:19:16 np0005539564 nova_compute[226295]: 2025-11-29 08:19:16.396 226310 INFO nova.virt.libvirt.driver [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Creating image(s)#033[00m
Nov 29 03:19:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:16.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:16 np0005539564 nova_compute[226295]: 2025-11-29 08:19:16.444 226310 DEBUG nova.storage.rbd_utils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 38bba84b-1fb0-460a-a6aa-707ef29970b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:16 np0005539564 nova_compute[226295]: 2025-11-29 08:19:16.494 226310 DEBUG nova.storage.rbd_utils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 38bba84b-1fb0-460a-a6aa-707ef29970b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:16 np0005539564 nova_compute[226295]: 2025-11-29 08:19:16.543 226310 DEBUG nova.storage.rbd_utils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 38bba84b-1fb0-460a-a6aa-707ef29970b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:16 np0005539564 nova_compute[226295]: 2025-11-29 08:19:16.548 226310 DEBUG oslo_concurrency.processutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:16 np0005539564 nova_compute[226295]: 2025-11-29 08:19:16.655 226310 DEBUG oslo_concurrency.processutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:16 np0005539564 nova_compute[226295]: 2025-11-29 08:19:16.657 226310 DEBUG oslo_concurrency.lockutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:16 np0005539564 nova_compute[226295]: 2025-11-29 08:19:16.658 226310 DEBUG oslo_concurrency.lockutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:16 np0005539564 nova_compute[226295]: 2025-11-29 08:19:16.659 226310 DEBUG oslo_concurrency.lockutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:16 np0005539564 nova_compute[226295]: 2025-11-29 08:19:16.708 226310 DEBUG nova.storage.rbd_utils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 38bba84b-1fb0-460a-a6aa-707ef29970b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:16 np0005539564 nova_compute[226295]: 2025-11-29 08:19:16.713 226310 DEBUG oslo_concurrency.processutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 38bba84b-1fb0-460a-a6aa-707ef29970b2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:16.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:17 np0005539564 nova_compute[226295]: 2025-11-29 08:19:17.143 226310 DEBUG oslo_concurrency.processutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 38bba84b-1fb0-460a-a6aa-707ef29970b2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:17 np0005539564 nova_compute[226295]: 2025-11-29 08:19:17.183 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:17 np0005539564 nova_compute[226295]: 2025-11-29 08:19:17.245 226310 DEBUG nova.policy [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b52040d601a4a56abcaf3f046f1e349', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '358970eca7ad4b05b70f43e5507ac052', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:19:17 np0005539564 nova_compute[226295]: 2025-11-29 08:19:17.257 226310 DEBUG nova.storage.rbd_utils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] resizing rbd image 38bba84b-1fb0-460a-a6aa-707ef29970b2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:19:17 np0005539564 nova_compute[226295]: 2025-11-29 08:19:17.431 226310 DEBUG nova.objects.instance [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'migration_context' on Instance uuid 38bba84b-1fb0-460a-a6aa-707ef29970b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:17 np0005539564 nova_compute[226295]: 2025-11-29 08:19:17.456 226310 DEBUG nova.virt.libvirt.driver [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:19:17 np0005539564 nova_compute[226295]: 2025-11-29 08:19:17.456 226310 DEBUG nova.virt.libvirt.driver [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Ensure instance console log exists: /var/lib/nova/instances/38bba84b-1fb0-460a-a6aa-707ef29970b2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:19:17 np0005539564 nova_compute[226295]: 2025-11-29 08:19:17.457 226310 DEBUG oslo_concurrency.lockutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:17 np0005539564 nova_compute[226295]: 2025-11-29 08:19:17.457 226310 DEBUG oslo_concurrency.lockutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:17 np0005539564 nova_compute[226295]: 2025-11-29 08:19:17.458 226310 DEBUG oslo_concurrency.lockutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:17 np0005539564 nova_compute[226295]: 2025-11-29 08:19:17.527 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:18 np0005539564 nova_compute[226295]: 2025-11-29 08:19:18.000 226310 DEBUG nova.network.neutron [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Successfully created port: 0ac7b30d-dad2-4718-b060-add6421b1065 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:19:18 np0005539564 nova_compute[226295]: 2025-11-29 08:19:18.260 226310 DEBUG oslo_concurrency.lockutils [None req-4b1a3d47-d839-4b8a-ade2-2118de6ac9b2 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Acquiring lock "20483537-11e8-42ae-9d49-1e955b2cd34f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:18 np0005539564 nova_compute[226295]: 2025-11-29 08:19:18.262 226310 DEBUG oslo_concurrency.lockutils [None req-4b1a3d47-d839-4b8a-ade2-2118de6ac9b2 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "20483537-11e8-42ae-9d49-1e955b2cd34f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:18 np0005539564 nova_compute[226295]: 2025-11-29 08:19:18.262 226310 DEBUG oslo_concurrency.lockutils [None req-4b1a3d47-d839-4b8a-ade2-2118de6ac9b2 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Acquiring lock "20483537-11e8-42ae-9d49-1e955b2cd34f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:18 np0005539564 nova_compute[226295]: 2025-11-29 08:19:18.263 226310 DEBUG oslo_concurrency.lockutils [None req-4b1a3d47-d839-4b8a-ade2-2118de6ac9b2 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "20483537-11e8-42ae-9d49-1e955b2cd34f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:18 np0005539564 nova_compute[226295]: 2025-11-29 08:19:18.263 226310 DEBUG oslo_concurrency.lockutils [None req-4b1a3d47-d839-4b8a-ade2-2118de6ac9b2 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "20483537-11e8-42ae-9d49-1e955b2cd34f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:18 np0005539564 nova_compute[226295]: 2025-11-29 08:19:18.266 226310 INFO nova.compute.manager [None req-4b1a3d47-d839-4b8a-ade2-2118de6ac9b2 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Terminating instance#033[00m
Nov 29 03:19:18 np0005539564 nova_compute[226295]: 2025-11-29 08:19:18.268 226310 DEBUG oslo_concurrency.lockutils [None req-4b1a3d47-d839-4b8a-ade2-2118de6ac9b2 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Acquiring lock "refresh_cache-20483537-11e8-42ae-9d49-1e955b2cd34f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:19:18 np0005539564 nova_compute[226295]: 2025-11-29 08:19:18.268 226310 DEBUG oslo_concurrency.lockutils [None req-4b1a3d47-d839-4b8a-ade2-2118de6ac9b2 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Acquired lock "refresh_cache-20483537-11e8-42ae-9d49-1e955b2cd34f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:19:18 np0005539564 nova_compute[226295]: 2025-11-29 08:19:18.268 226310 DEBUG nova.network.neutron [None req-4b1a3d47-d839-4b8a-ade2-2118de6ac9b2 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:19:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:18.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:18 np0005539564 nova_compute[226295]: 2025-11-29 08:19:18.491 226310 DEBUG nova.network.neutron [None req-4b1a3d47-d839-4b8a-ade2-2118de6ac9b2 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:19:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:18.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:18 np0005539564 nova_compute[226295]: 2025-11-29 08:19:18.829 226310 DEBUG nova.network.neutron [None req-4b1a3d47-d839-4b8a-ade2-2118de6ac9b2 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:19:18 np0005539564 nova_compute[226295]: 2025-11-29 08:19:18.846 226310 DEBUG oslo_concurrency.lockutils [None req-4b1a3d47-d839-4b8a-ade2-2118de6ac9b2 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Releasing lock "refresh_cache-20483537-11e8-42ae-9d49-1e955b2cd34f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:19:18 np0005539564 nova_compute[226295]: 2025-11-29 08:19:18.847 226310 DEBUG nova.compute.manager [None req-4b1a3d47-d839-4b8a-ade2-2118de6ac9b2 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:19:18 np0005539564 nova_compute[226295]: 2025-11-29 08:19:18.900 226310 DEBUG nova.network.neutron [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Successfully updated port: 0ac7b30d-dad2-4718-b060-add6421b1065 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:19:18 np0005539564 nova_compute[226295]: 2025-11-29 08:19:18.923 226310 DEBUG oslo_concurrency.lockutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "refresh_cache-38bba84b-1fb0-460a-a6aa-707ef29970b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:19:18 np0005539564 nova_compute[226295]: 2025-11-29 08:19:18.924 226310 DEBUG oslo_concurrency.lockutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquired lock "refresh_cache-38bba84b-1fb0-460a-a6aa-707ef29970b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:19:18 np0005539564 nova_compute[226295]: 2025-11-29 08:19:18.925 226310 DEBUG nova.network.neutron [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:19:18 np0005539564 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000084.scope: Deactivated successfully.
Nov 29 03:19:18 np0005539564 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000084.scope: Consumed 15.782s CPU time.
Nov 29 03:19:18 np0005539564 systemd-machined[190128]: Machine qemu-59-instance-00000084 terminated.
Nov 29 03:19:19 np0005539564 nova_compute[226295]: 2025-11-29 08:19:19.047 226310 DEBUG nova.compute.manager [req-437717ac-24d6-4e90-85b6-e34c83898f32 req-15a3a1aa-da8c-4d2c-ad2a-253204f7d56f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Received event network-changed-0ac7b30d-dad2-4718-b060-add6421b1065 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:19 np0005539564 nova_compute[226295]: 2025-11-29 08:19:19.048 226310 DEBUG nova.compute.manager [req-437717ac-24d6-4e90-85b6-e34c83898f32 req-15a3a1aa-da8c-4d2c-ad2a-253204f7d56f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Refreshing instance network info cache due to event network-changed-0ac7b30d-dad2-4718-b060-add6421b1065. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:19:19 np0005539564 nova_compute[226295]: 2025-11-29 08:19:19.048 226310 DEBUG oslo_concurrency.lockutils [req-437717ac-24d6-4e90-85b6-e34c83898f32 req-15a3a1aa-da8c-4d2c-ad2a-253204f7d56f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-38bba84b-1fb0-460a-a6aa-707ef29970b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:19:19 np0005539564 nova_compute[226295]: 2025-11-29 08:19:19.080 226310 INFO nova.virt.libvirt.driver [-] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Instance destroyed successfully.#033[00m
Nov 29 03:19:19 np0005539564 nova_compute[226295]: 2025-11-29 08:19:19.081 226310 DEBUG nova.objects.instance [None req-4b1a3d47-d839-4b8a-ade2-2118de6ac9b2 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lazy-loading 'resources' on Instance uuid 20483537-11e8-42ae-9d49-1e955b2cd34f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:19 np0005539564 nova_compute[226295]: 2025-11-29 08:19:19.168 226310 DEBUG nova.network.neutron [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:19:19 np0005539564 nova_compute[226295]: 2025-11-29 08:19:19.565 226310 INFO nova.virt.libvirt.driver [None req-4b1a3d47-d839-4b8a-ade2-2118de6ac9b2 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Deleting instance files /var/lib/nova/instances/20483537-11e8-42ae-9d49-1e955b2cd34f_del#033[00m
Nov 29 03:19:19 np0005539564 nova_compute[226295]: 2025-11-29 08:19:19.565 226310 INFO nova.virt.libvirt.driver [None req-4b1a3d47-d839-4b8a-ade2-2118de6ac9b2 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Deletion of /var/lib/nova/instances/20483537-11e8-42ae-9d49-1e955b2cd34f_del complete#033[00m
Nov 29 03:19:19 np0005539564 nova_compute[226295]: 2025-11-29 08:19:19.636 226310 INFO nova.compute.manager [None req-4b1a3d47-d839-4b8a-ade2-2118de6ac9b2 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:19:19 np0005539564 nova_compute[226295]: 2025-11-29 08:19:19.637 226310 DEBUG oslo.service.loopingcall [None req-4b1a3d47-d839-4b8a-ade2-2118de6ac9b2 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:19:19 np0005539564 nova_compute[226295]: 2025-11-29 08:19:19.637 226310 DEBUG nova.compute.manager [-] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:19:19 np0005539564 nova_compute[226295]: 2025-11-29 08:19:19.637 226310 DEBUG nova.network.neutron [-] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:19:19 np0005539564 nova_compute[226295]: 2025-11-29 08:19:19.768 226310 DEBUG nova.network.neutron [-] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:19:19 np0005539564 nova_compute[226295]: 2025-11-29 08:19:19.794 226310 DEBUG nova.network.neutron [-] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:19:19 np0005539564 nova_compute[226295]: 2025-11-29 08:19:19.814 226310 INFO nova.compute.manager [-] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Took 0.18 seconds to deallocate network for instance.#033[00m
Nov 29 03:19:19 np0005539564 nova_compute[226295]: 2025-11-29 08:19:19.874 226310 DEBUG oslo_concurrency.lockutils [None req-4b1a3d47-d839-4b8a-ade2-2118de6ac9b2 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:19 np0005539564 nova_compute[226295]: 2025-11-29 08:19:19.875 226310 DEBUG oslo_concurrency.lockutils [None req-4b1a3d47-d839-4b8a-ade2-2118de6ac9b2 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:19 np0005539564 nova_compute[226295]: 2025-11-29 08:19:19.969 226310 DEBUG oslo_concurrency.processutils [None req-4b1a3d47-d839-4b8a-ade2-2118de6ac9b2 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.156 226310 DEBUG nova.network.neutron [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Updating instance_info_cache with network_info: [{"id": "0ac7b30d-dad2-4718-b060-add6421b1065", "address": "fa:16:3e:d4:9e:3a", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ac7b30d-da", "ovs_interfaceid": "0ac7b30d-dad2-4718-b060-add6421b1065", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.175 226310 DEBUG oslo_concurrency.lockutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Releasing lock "refresh_cache-38bba84b-1fb0-460a-a6aa-707ef29970b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.175 226310 DEBUG nova.compute.manager [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Instance network_info: |[{"id": "0ac7b30d-dad2-4718-b060-add6421b1065", "address": "fa:16:3e:d4:9e:3a", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ac7b30d-da", "ovs_interfaceid": "0ac7b30d-dad2-4718-b060-add6421b1065", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.177 226310 DEBUG oslo_concurrency.lockutils [req-437717ac-24d6-4e90-85b6-e34c83898f32 req-15a3a1aa-da8c-4d2c-ad2a-253204f7d56f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-38bba84b-1fb0-460a-a6aa-707ef29970b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.177 226310 DEBUG nova.network.neutron [req-437717ac-24d6-4e90-85b6-e34c83898f32 req-15a3a1aa-da8c-4d2c-ad2a-253204f7d56f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Refreshing network info cache for port 0ac7b30d-dad2-4718-b060-add6421b1065 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.183 226310 DEBUG nova.virt.libvirt.driver [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Start _get_guest_xml network_info=[{"id": "0ac7b30d-dad2-4718-b060-add6421b1065", "address": "fa:16:3e:d4:9e:3a", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ac7b30d-da", "ovs_interfaceid": "0ac7b30d-dad2-4718-b060-add6421b1065", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.191 226310 WARNING nova.virt.libvirt.driver [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.196 226310 DEBUG nova.virt.libvirt.host [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.197 226310 DEBUG nova.virt.libvirt.host [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.209 226310 DEBUG nova.virt.libvirt.host [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.211 226310 DEBUG nova.virt.libvirt.host [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.213 226310 DEBUG nova.virt.libvirt.driver [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.213 226310 DEBUG nova.virt.hardware [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.214 226310 DEBUG nova.virt.hardware [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.215 226310 DEBUG nova.virt.hardware [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.216 226310 DEBUG nova.virt.hardware [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.216 226310 DEBUG nova.virt.hardware [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.217 226310 DEBUG nova.virt.hardware [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.217 226310 DEBUG nova.virt.hardware [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.218 226310 DEBUG nova.virt.hardware [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.218 226310 DEBUG nova.virt.hardware [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.219 226310 DEBUG nova.virt.hardware [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.219 226310 DEBUG nova.virt.hardware [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.226 226310 DEBUG oslo_concurrency.processutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:20.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:19:20 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/610726783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.458 226310 DEBUG oslo_concurrency.processutils [None req-4b1a3d47-d839-4b8a-ade2-2118de6ac9b2 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.468 226310 DEBUG nova.compute.provider_tree [None req-4b1a3d47-d839-4b8a-ade2-2118de6ac9b2 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.489 226310 DEBUG nova.scheduler.client.report [None req-4b1a3d47-d839-4b8a-ade2-2118de6ac9b2 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.514 226310 DEBUG oslo_concurrency.lockutils [None req-4b1a3d47-d839-4b8a-ade2-2118de6ac9b2 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.544 226310 INFO nova.scheduler.client.report [None req-4b1a3d47-d839-4b8a-ade2-2118de6ac9b2 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Deleted allocations for instance 20483537-11e8-42ae-9d49-1e955b2cd34f#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.623 226310 DEBUG oslo_concurrency.lockutils [None req-4b1a3d47-d839-4b8a-ade2-2118de6ac9b2 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "20483537-11e8-42ae-9d49-1e955b2cd34f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.362s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:19:20 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/385554390' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.756 226310 DEBUG oslo_concurrency.processutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.786 226310 DEBUG nova.storage.rbd_utils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 38bba84b-1fb0-460a-a6aa-707ef29970b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:20 np0005539564 nova_compute[226295]: 2025-11-29 08:19:20.790 226310 DEBUG oslo_concurrency.processutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:20.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:19:21 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3711450792' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.250 226310 DEBUG oslo_concurrency.processutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.253 226310 DEBUG nova.virt.libvirt.vif [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:19:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1206692563',display_name='tempest-ServerStableDeviceRescueTest-server-1206692563',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1206692563',id=135,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='358970eca7ad4b05b70f43e5507ac052',ramdisk_id='',reservation_id='r-0h9yile4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-1105304301',owner_user_name
='tempest-ServerStableDeviceRescueTest-1105304301-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:19:16Z,user_data=None,user_id='3b52040d601a4a56abcaf3f046f1e349',uuid=38bba84b-1fb0-460a-a6aa-707ef29970b2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0ac7b30d-dad2-4718-b060-add6421b1065", "address": "fa:16:3e:d4:9e:3a", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ac7b30d-da", "ovs_interfaceid": "0ac7b30d-dad2-4718-b060-add6421b1065", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.253 226310 DEBUG nova.network.os_vif_util [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converting VIF {"id": "0ac7b30d-dad2-4718-b060-add6421b1065", "address": "fa:16:3e:d4:9e:3a", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ac7b30d-da", "ovs_interfaceid": "0ac7b30d-dad2-4718-b060-add6421b1065", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.255 226310 DEBUG nova.network.os_vif_util [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:9e:3a,bridge_name='br-int',has_traffic_filtering=True,id=0ac7b30d-dad2-4718-b060-add6421b1065,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ac7b30d-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.257 226310 DEBUG nova.objects.instance [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'pci_devices' on Instance uuid 38bba84b-1fb0-460a-a6aa-707ef29970b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.276 226310 DEBUG nova.virt.libvirt.driver [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:19:21 np0005539564 nova_compute[226295]:  <uuid>38bba84b-1fb0-460a-a6aa-707ef29970b2</uuid>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:  <name>instance-00000087</name>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-1206692563</nova:name>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:19:20</nova:creationTime>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:19:21 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:        <nova:user uuid="3b52040d601a4a56abcaf3f046f1e349">tempest-ServerStableDeviceRescueTest-1105304301-project-member</nova:user>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:        <nova:project uuid="358970eca7ad4b05b70f43e5507ac052">tempest-ServerStableDeviceRescueTest-1105304301</nova:project>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:        <nova:port uuid="0ac7b30d-dad2-4718-b060-add6421b1065">
Nov 29 03:19:21 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <entry name="serial">38bba84b-1fb0-460a-a6aa-707ef29970b2</entry>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <entry name="uuid">38bba84b-1fb0-460a-a6aa-707ef29970b2</entry>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/38bba84b-1fb0-460a-a6aa-707ef29970b2_disk">
Nov 29 03:19:21 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:19:21 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/38bba84b-1fb0-460a-a6aa-707ef29970b2_disk.config">
Nov 29 03:19:21 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:19:21 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:d4:9e:3a"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <target dev="tap0ac7b30d-da"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/38bba84b-1fb0-460a-a6aa-707ef29970b2/console.log" append="off"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:19:21 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:19:21 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:19:21 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:19:21 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.278 226310 DEBUG nova.compute.manager [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Preparing to wait for external event network-vif-plugged-0ac7b30d-dad2-4718-b060-add6421b1065 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.278 226310 DEBUG oslo_concurrency.lockutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.279 226310 DEBUG oslo_concurrency.lockutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.280 226310 DEBUG oslo_concurrency.lockutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.281 226310 DEBUG nova.virt.libvirt.vif [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:19:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1206692563',display_name='tempest-ServerStableDeviceRescueTest-server-1206692563',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1206692563',id=135,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='358970eca7ad4b05b70f43e5507ac052',ramdisk_id='',reservation_id='r-0h9yile4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-1105304301',owner
_user_name='tempest-ServerStableDeviceRescueTest-1105304301-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:19:16Z,user_data=None,user_id='3b52040d601a4a56abcaf3f046f1e349',uuid=38bba84b-1fb0-460a-a6aa-707ef29970b2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0ac7b30d-dad2-4718-b060-add6421b1065", "address": "fa:16:3e:d4:9e:3a", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ac7b30d-da", "ovs_interfaceid": "0ac7b30d-dad2-4718-b060-add6421b1065", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.281 226310 DEBUG nova.network.os_vif_util [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converting VIF {"id": "0ac7b30d-dad2-4718-b060-add6421b1065", "address": "fa:16:3e:d4:9e:3a", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ac7b30d-da", "ovs_interfaceid": "0ac7b30d-dad2-4718-b060-add6421b1065", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.282 226310 DEBUG nova.network.os_vif_util [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:9e:3a,bridge_name='br-int',has_traffic_filtering=True,id=0ac7b30d-dad2-4718-b060-add6421b1065,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ac7b30d-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.283 226310 DEBUG os_vif [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:9e:3a,bridge_name='br-int',has_traffic_filtering=True,id=0ac7b30d-dad2-4718-b060-add6421b1065,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ac7b30d-da') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.284 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.285 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.286 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.291 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.292 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ac7b30d-da, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.292 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0ac7b30d-da, col_values=(('external_ids', {'iface-id': '0ac7b30d-dad2-4718-b060-add6421b1065', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d4:9e:3a', 'vm-uuid': '38bba84b-1fb0-460a-a6aa-707ef29970b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.295 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:21 np0005539564 NetworkManager[48997]: <info>  [1764404361.2972] manager: (tap0ac7b30d-da): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/233)
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.300 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.304 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.306 226310 INFO os_vif [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:9e:3a,bridge_name='br-int',has_traffic_filtering=True,id=0ac7b30d-dad2-4718-b060-add6421b1065,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ac7b30d-da')#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.387 226310 DEBUG nova.virt.libvirt.driver [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.388 226310 DEBUG nova.virt.libvirt.driver [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.388 226310 DEBUG nova.virt.libvirt.driver [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No VIF found with MAC fa:16:3e:d4:9e:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.389 226310 INFO nova.virt.libvirt.driver [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Using config drive#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.435 226310 DEBUG nova.storage.rbd_utils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 38bba84b-1fb0-460a-a6aa-707ef29970b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.938 226310 DEBUG nova.network.neutron [req-437717ac-24d6-4e90-85b6-e34c83898f32 req-15a3a1aa-da8c-4d2c-ad2a-253204f7d56f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Updated VIF entry in instance network info cache for port 0ac7b30d-dad2-4718-b060-add6421b1065. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.939 226310 DEBUG nova.network.neutron [req-437717ac-24d6-4e90-85b6-e34c83898f32 req-15a3a1aa-da8c-4d2c-ad2a-253204f7d56f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Updating instance_info_cache with network_info: [{"id": "0ac7b30d-dad2-4718-b060-add6421b1065", "address": "fa:16:3e:d4:9e:3a", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ac7b30d-da", "ovs_interfaceid": "0ac7b30d-dad2-4718-b060-add6421b1065", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:19:21 np0005539564 nova_compute[226295]: 2025-11-29 08:19:21.958 226310 DEBUG oslo_concurrency.lockutils [req-437717ac-24d6-4e90-85b6-e34c83898f32 req-15a3a1aa-da8c-4d2c-ad2a-253204f7d56f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-38bba84b-1fb0-460a-a6aa-707ef29970b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:19:22 np0005539564 nova_compute[226295]: 2025-11-29 08:19:22.166 226310 INFO nova.virt.libvirt.driver [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Creating config drive at /var/lib/nova/instances/38bba84b-1fb0-460a-a6aa-707ef29970b2/disk.config#033[00m
Nov 29 03:19:22 np0005539564 nova_compute[226295]: 2025-11-29 08:19:22.175 226310 DEBUG oslo_concurrency.processutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/38bba84b-1fb0-460a-a6aa-707ef29970b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnu4vmijd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:22 np0005539564 nova_compute[226295]: 2025-11-29 08:19:22.338 226310 DEBUG oslo_concurrency.processutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/38bba84b-1fb0-460a-a6aa-707ef29970b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnu4vmijd" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:22 np0005539564 nova_compute[226295]: 2025-11-29 08:19:22.376 226310 DEBUG nova.storage.rbd_utils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 38bba84b-1fb0-460a-a6aa-707ef29970b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:22 np0005539564 nova_compute[226295]: 2025-11-29 08:19:22.381 226310 DEBUG oslo_concurrency.processutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/38bba84b-1fb0-460a-a6aa-707ef29970b2/disk.config 38bba84b-1fb0-460a-a6aa-707ef29970b2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:22.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:22 np0005539564 nova_compute[226295]: 2025-11-29 08:19:22.529 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:22 np0005539564 nova_compute[226295]: 2025-11-29 08:19:22.593 226310 DEBUG oslo_concurrency.processutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/38bba84b-1fb0-460a-a6aa-707ef29970b2/disk.config 38bba84b-1fb0-460a-a6aa-707ef29970b2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:22 np0005539564 nova_compute[226295]: 2025-11-29 08:19:22.594 226310 INFO nova.virt.libvirt.driver [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Deleting local config drive /var/lib/nova/instances/38bba84b-1fb0-460a-a6aa-707ef29970b2/disk.config because it was imported into RBD.#033[00m
Nov 29 03:19:22 np0005539564 kernel: tap0ac7b30d-da: entered promiscuous mode
Nov 29 03:19:22 np0005539564 NetworkManager[48997]: <info>  [1764404362.6525] manager: (tap0ac7b30d-da): new Tun device (/org/freedesktop/NetworkManager/Devices/234)
Nov 29 03:19:22 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:22Z|00469|binding|INFO|Claiming lport 0ac7b30d-dad2-4718-b060-add6421b1065 for this chassis.
Nov 29 03:19:22 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:22Z|00470|binding|INFO|0ac7b30d-dad2-4718-b060-add6421b1065: Claiming fa:16:3e:d4:9e:3a 10.100.0.10
Nov 29 03:19:22 np0005539564 nova_compute[226295]: 2025-11-29 08:19:22.654 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:22 np0005539564 nova_compute[226295]: 2025-11-29 08:19:22.657 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:22.665 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:9e:3a 10.100.0.10'], port_security=['fa:16:3e:d4:9e:3a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '38bba84b-1fb0-460a-a6aa-707ef29970b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32485b0e-177b-4dfd-a55a-0249528f32e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '358970eca7ad4b05b70f43e5507ac052', 'neutron:revision_number': '2', 'neutron:security_group_ids': '33616c4d-f137-4188-9923-071fd3df21bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83a2eb53-2a5d-447d-a36c-4b9c2b295f15, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=0ac7b30d-dad2-4718-b060-add6421b1065) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:22.668 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 0ac7b30d-dad2-4718-b060-add6421b1065 in datapath 32485b0e-177b-4dfd-a55a-0249528f32e1 bound to our chassis#033[00m
Nov 29 03:19:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:22.670 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 32485b0e-177b-4dfd-a55a-0249528f32e1#033[00m
Nov 29 03:19:22 np0005539564 systemd-machined[190128]: New machine qemu-60-instance-00000087.
Nov 29 03:19:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:22.683 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e6799fba-4d22-4c9a-8bcd-9201ef435722]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:22.685 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap32485b0e-11 in ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:19:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:22.688 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap32485b0e-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:19:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:22.688 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[24b88574-7dcd-492d-bcb5-5f364e58fbf2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:22.689 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[316162b9-b0ac-4a87-ba07-87b7f6d20ff8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:22 np0005539564 systemd[1]: Started Virtual Machine qemu-60-instance-00000087.
Nov 29 03:19:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:22.705 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[d1ceb58f-388d-4251-9c4d-bf79c16e7ea6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:22 np0005539564 systemd-udevd[275636]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:19:22 np0005539564 NetworkManager[48997]: <info>  [1764404362.7252] device (tap0ac7b30d-da): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:19:22 np0005539564 nova_compute[226295]: 2025-11-29 08:19:22.724 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:22 np0005539564 NetworkManager[48997]: <info>  [1764404362.7265] device (tap0ac7b30d-da): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:19:22 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:22Z|00471|binding|INFO|Releasing lport 9da51447-ee5a-4659-ba78-deb4b11b4098 from this chassis (sb_readonly=0)
Nov 29 03:19:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:22.727 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[00852a5a-c642-496f-b7c6-515e7da0c39e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:22 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:22Z|00472|binding|INFO|Setting lport 0ac7b30d-dad2-4718-b060-add6421b1065 up in Southbound
Nov 29 03:19:22 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:22Z|00473|binding|INFO|Setting lport 0ac7b30d-dad2-4718-b060-add6421b1065 ovn-installed in OVS
Nov 29 03:19:22 np0005539564 nova_compute[226295]: 2025-11-29 08:19:22.734 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:22.758 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[340c1228-5659-4ffe-be0f-c9f279f29317]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:22 np0005539564 NetworkManager[48997]: <info>  [1764404362.7637] manager: (tap32485b0e-10): new Veth device (/org/freedesktop/NetworkManager/Devices/235)
Nov 29 03:19:22 np0005539564 systemd-udevd[275640]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:19:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:22.762 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ec61d816-7cfa-47ee-8443-17452bcd2bb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:22.795 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[b9331e31-02fd-4e0c-8d2a-c2890bafa29f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:22.798 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[1fd30493-5288-4761-a58d-0a7e51188041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:22 np0005539564 NetworkManager[48997]: <info>  [1764404362.8208] device (tap32485b0e-10): carrier: link connected
Nov 29 03:19:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:22.827 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[e40bd35f-bb9a-48a5-b679-4b5d81aae126]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:22.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:22.851 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7858839e-61d8-45ea-aad8-3730db2b4c10]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32485b0e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:44:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733750, 'reachable_time': 31085, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275667, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:22.867 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6edd4855-1c4a-492a-be3f-032e814229f5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea4:4406'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733750, 'tstamp': 733750}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275668, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:22.892 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bc7aa747-ea20-4ab3-8b64-492354a15dbd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32485b0e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:44:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733750, 'reachable_time': 31085, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275669, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:22.938 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[32410a9a-5d21-4a5d-b919-5ba29e19e395]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:23.026 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bd12a10f-a8c4-43d7-9a90-b334df276ed7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:23.027 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32485b0e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:23.028 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:23.029 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32485b0e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:23 np0005539564 NetworkManager[48997]: <info>  [1764404363.0320] manager: (tap32485b0e-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/236)
Nov 29 03:19:23 np0005539564 kernel: tap32485b0e-10: entered promiscuous mode
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:23.037 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap32485b0e-10, col_values=(('external_ids', {'iface-id': '6711ba96-49f0-431a-a4d5-64f9cee27708'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:23 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:23Z|00474|binding|INFO|Releasing lport 6711ba96-49f0-431a-a4d5-64f9cee27708 from this chassis (sb_readonly=0)
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.053 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.068 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.071 226310 DEBUG nova.compute.manager [req-44b28fce-5f19-456d-bcb9-37332c883271 req-eb2574d7-3c7a-43e8-8f07-0c6cdb09d251 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Received event network-vif-plugged-0ac7b30d-dad2-4718-b060-add6421b1065 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.071 226310 DEBUG oslo_concurrency.lockutils [req-44b28fce-5f19-456d-bcb9-37332c883271 req-eb2574d7-3c7a-43e8-8f07-0c6cdb09d251 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.072 226310 DEBUG oslo_concurrency.lockutils [req-44b28fce-5f19-456d-bcb9-37332c883271 req-eb2574d7-3c7a-43e8-8f07-0c6cdb09d251 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.072 226310 DEBUG oslo_concurrency.lockutils [req-44b28fce-5f19-456d-bcb9-37332c883271 req-eb2574d7-3c7a-43e8-8f07-0c6cdb09d251 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.072 226310 DEBUG nova.compute.manager [req-44b28fce-5f19-456d-bcb9-37332c883271 req-eb2574d7-3c7a-43e8-8f07-0c6cdb09d251 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Processing event network-vif-plugged-0ac7b30d-dad2-4718-b060-add6421b1065 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.084 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:23.086 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/32485b0e-177b-4dfd-a55a-0249528f32e1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/32485b0e-177b-4dfd-a55a-0249528f32e1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:23.088 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[18250546-44eb-446d-80d8-73cc47d2ebe4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:23.089 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-32485b0e-177b-4dfd-a55a-0249528f32e1
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/32485b0e-177b-4dfd-a55a-0249528f32e1.pid.haproxy
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 32485b0e-177b-4dfd-a55a-0249528f32e1
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:19:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:23.091 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'env', 'PROCESS_TAG=haproxy-32485b0e-177b-4dfd-a55a-0249528f32e1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/32485b0e-177b-4dfd-a55a-0249528f32e1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:19:23 np0005539564 podman[275783]: 2025-11-29 08:19:23.540678179 +0000 UTC m=+0.058317777 container create 5ec0eb04ed76ddc3bdf164fad50e27cbb63b81d8210a56bbbb680464dbc70eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:19:23 np0005539564 systemd[1]: Started libpod-conmon-5ec0eb04ed76ddc3bdf164fad50e27cbb63b81d8210a56bbbb680464dbc70eab.scope.
Nov 29 03:19:23 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:19:23 np0005539564 podman[275783]: 2025-11-29 08:19:23.507560504 +0000 UTC m=+0.025200152 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:19:23 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ecf2a11ba77c8f68b1b788a57a41a95f44230b7d9910f33e089190c1f944743/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:23 np0005539564 podman[275783]: 2025-11-29 08:19:23.614237009 +0000 UTC m=+0.131876627 container init 5ec0eb04ed76ddc3bdf164fad50e27cbb63b81d8210a56bbbb680464dbc70eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:19:23 np0005539564 podman[275783]: 2025-11-29 08:19:23.622625246 +0000 UTC m=+0.140264864 container start 5ec0eb04ed76ddc3bdf164fad50e27cbb63b81d8210a56bbbb680464dbc70eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:19:23 np0005539564 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[275854]: [NOTICE]   (275861) : New worker (275864) forked
Nov 29 03:19:23 np0005539564 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[275854]: [NOTICE]   (275861) : Loading success.
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.684 226310 DEBUG nova.compute.manager [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.686 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404363.683953, 38bba84b-1fb0-460a-a6aa-707ef29970b2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.686 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] VM Started (Lifecycle Event)#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.694 226310 DEBUG nova.virt.libvirt.driver [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.698 226310 INFO nova.virt.libvirt.driver [-] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Instance spawned successfully.#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.698 226310 DEBUG nova.virt.libvirt.driver [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.716 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.724 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.732 226310 DEBUG nova.virt.libvirt.driver [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.732 226310 DEBUG nova.virt.libvirt.driver [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.733 226310 DEBUG nova.virt.libvirt.driver [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.733 226310 DEBUG nova.virt.libvirt.driver [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.734 226310 DEBUG nova.virt.libvirt.driver [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.734 226310 DEBUG nova.virt.libvirt.driver [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.761 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.761 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404363.6842303, 38bba84b-1fb0-460a-a6aa-707ef29970b2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.761 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.786 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.791 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404363.6915479, 38bba84b-1fb0-460a-a6aa-707ef29970b2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.791 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.799 226310 INFO nova.compute.manager [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Took 7.40 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.799 226310 DEBUG nova.compute.manager [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.806 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.815 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.845 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.856 226310 INFO nova.compute.manager [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Took 8.54 seconds to build instance.#033[00m
Nov 29 03:19:23 np0005539564 nova_compute[226295]: 2025-11-29 08:19:23.871 226310 DEBUG oslo_concurrency.lockutils [None req-81818c46-a1c6-4737-b815-755caa91e43b 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:24.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:24 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:19:24 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:19:24 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:19:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:24.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:25 np0005539564 nova_compute[226295]: 2025-11-29 08:19:25.137 226310 DEBUG nova.compute.manager [req-6f9a77ed-6e3d-47cc-840e-cde5246a3195 req-57bd2242-febd-4a7b-bb56-ed8b3d9ca4b9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Received event network-vif-plugged-0ac7b30d-dad2-4718-b060-add6421b1065 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:25 np0005539564 nova_compute[226295]: 2025-11-29 08:19:25.138 226310 DEBUG oslo_concurrency.lockutils [req-6f9a77ed-6e3d-47cc-840e-cde5246a3195 req-57bd2242-febd-4a7b-bb56-ed8b3d9ca4b9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:25 np0005539564 nova_compute[226295]: 2025-11-29 08:19:25.138 226310 DEBUG oslo_concurrency.lockutils [req-6f9a77ed-6e3d-47cc-840e-cde5246a3195 req-57bd2242-febd-4a7b-bb56-ed8b3d9ca4b9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:25 np0005539564 nova_compute[226295]: 2025-11-29 08:19:25.138 226310 DEBUG oslo_concurrency.lockutils [req-6f9a77ed-6e3d-47cc-840e-cde5246a3195 req-57bd2242-febd-4a7b-bb56-ed8b3d9ca4b9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:25 np0005539564 nova_compute[226295]: 2025-11-29 08:19:25.139 226310 DEBUG nova.compute.manager [req-6f9a77ed-6e3d-47cc-840e-cde5246a3195 req-57bd2242-febd-4a7b-bb56-ed8b3d9ca4b9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] No waiting events found dispatching network-vif-plugged-0ac7b30d-dad2-4718-b060-add6421b1065 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:25 np0005539564 nova_compute[226295]: 2025-11-29 08:19:25.139 226310 WARNING nova.compute.manager [req-6f9a77ed-6e3d-47cc-840e-cde5246a3195 req-57bd2242-febd-4a7b-bb56-ed8b3d9ca4b9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Received unexpected event network-vif-plugged-0ac7b30d-dad2-4718-b060-add6421b1065 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:19:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:26 np0005539564 nova_compute[226295]: 2025-11-29 08:19:26.296 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:26 np0005539564 nova_compute[226295]: 2025-11-29 08:19:26.314 226310 DEBUG nova.compute.manager [None req-20e04010-c881-4187-8e60-2e2fe4837c61 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:26 np0005539564 nova_compute[226295]: 2025-11-29 08:19:26.393 226310 INFO nova.compute.manager [None req-20e04010-c881-4187-8e60-2e2fe4837c61 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] instance snapshotting#033[00m
Nov 29 03:19:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:26.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:26 np0005539564 nova_compute[226295]: 2025-11-29 08:19:26.814 226310 INFO nova.virt.libvirt.driver [None req-20e04010-c881-4187-8e60-2e2fe4837c61 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Beginning live snapshot process#033[00m
Nov 29 03:19:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:26.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:26 np0005539564 nova_compute[226295]: 2025-11-29 08:19:26.987 226310 DEBUG nova.virt.libvirt.imagebackend [None req-20e04010-c881-4187-8e60-2e2fe4837c61 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:19:27 np0005539564 nova_compute[226295]: 2025-11-29 08:19:27.264 226310 DEBUG nova.storage.rbd_utils [None req-20e04010-c881-4187-8e60-2e2fe4837c61 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] creating snapshot(6644ecfa408f4ef59cfd9639ecf7da4c) on rbd image(38bba84b-1fb0-460a-a6aa-707ef29970b2_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:19:27 np0005539564 nova_compute[226295]: 2025-11-29 08:19:27.532 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e313 e313: 3 total, 3 up, 3 in
Nov 29 03:19:27 np0005539564 nova_compute[226295]: 2025-11-29 08:19:27.729 226310 DEBUG nova.storage.rbd_utils [None req-20e04010-c881-4187-8e60-2e2fe4837c61 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] cloning vms/38bba84b-1fb0-460a-a6aa-707ef29970b2_disk@6644ecfa408f4ef59cfd9639ecf7da4c to images/e372d671-0356-4b39-949b-13a58b062421 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:19:27 np0005539564 nova_compute[226295]: 2025-11-29 08:19:27.871 226310 DEBUG nova.storage.rbd_utils [None req-20e04010-c881-4187-8e60-2e2fe4837c61 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] flattening images/e372d671-0356-4b39-949b-13a58b062421 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:19:28 np0005539564 nova_compute[226295]: 2025-11-29 08:19:28.196 226310 DEBUG nova.storage.rbd_utils [None req-20e04010-c881-4187-8e60-2e2fe4837c61 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] removing snapshot(6644ecfa408f4ef59cfd9639ecf7da4c) on rbd image(38bba84b-1fb0-460a-a6aa-707ef29970b2_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:19:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:28.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e314 e314: 3 total, 3 up, 3 in
Nov 29 03:19:28 np0005539564 nova_compute[226295]: 2025-11-29 08:19:28.725 226310 DEBUG nova.storage.rbd_utils [None req-20e04010-c881-4187-8e60-2e2fe4837c61 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] creating snapshot(snap) on rbd image(e372d671-0356-4b39-949b-13a58b062421) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:19:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:28.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e315 e315: 3 total, 3 up, 3 in
Nov 29 03:19:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:30.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:30.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:31 np0005539564 nova_compute[226295]: 2025-11-29 08:19:31.299 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:31 np0005539564 nova_compute[226295]: 2025-11-29 08:19:31.406 226310 INFO nova.virt.libvirt.driver [None req-20e04010-c881-4187-8e60-2e2fe4837c61 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Snapshot image upload complete#033[00m
Nov 29 03:19:31 np0005539564 nova_compute[226295]: 2025-11-29 08:19:31.407 226310 INFO nova.compute.manager [None req-20e04010-c881-4187-8e60-2e2fe4837c61 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Took 5.01 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 29 03:19:31 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:19:31 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:19:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:32.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:32 np0005539564 nova_compute[226295]: 2025-11-29 08:19:32.534 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:32.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:34 np0005539564 nova_compute[226295]: 2025-11-29 08:19:34.086 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404359.0750644, 20483537-11e8-42ae-9d49-1e955b2cd34f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:34 np0005539564 nova_compute[226295]: 2025-11-29 08:19:34.087 226310 INFO nova.compute.manager [-] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:19:34 np0005539564 nova_compute[226295]: 2025-11-29 08:19:34.154 226310 DEBUG nova.compute.manager [None req-db152321-926b-469a-a130-33e34ba1a541 - - - - - -] [instance: 20483537-11e8-42ae-9d49-1e955b2cd34f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:34.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:34 np0005539564 nova_compute[226295]: 2025-11-29 08:19:34.823 226310 INFO nova.compute.manager [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Rescuing#033[00m
Nov 29 03:19:34 np0005539564 nova_compute[226295]: 2025-11-29 08:19:34.823 226310 DEBUG oslo_concurrency.lockutils [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "refresh_cache-38bba84b-1fb0-460a-a6aa-707ef29970b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:19:34 np0005539564 nova_compute[226295]: 2025-11-29 08:19:34.824 226310 DEBUG oslo_concurrency.lockutils [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquired lock "refresh_cache-38bba84b-1fb0-460a-a6aa-707ef29970b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:19:34 np0005539564 nova_compute[226295]: 2025-11-29 08:19:34.824 226310 DEBUG nova.network.neutron [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:19:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:34.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:36 np0005539564 nova_compute[226295]: 2025-11-29 08:19:36.302 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:36.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:36.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:36 np0005539564 nova_compute[226295]: 2025-11-29 08:19:36.901 226310 DEBUG nova.network.neutron [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Updating instance_info_cache with network_info: [{"id": "0ac7b30d-dad2-4718-b060-add6421b1065", "address": "fa:16:3e:d4:9e:3a", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ac7b30d-da", "ovs_interfaceid": "0ac7b30d-dad2-4718-b060-add6421b1065", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:19:36 np0005539564 nova_compute[226295]: 2025-11-29 08:19:36.939 226310 DEBUG oslo_concurrency.lockutils [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Releasing lock "refresh_cache-38bba84b-1fb0-460a-a6aa-707ef29970b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:19:37 np0005539564 nova_compute[226295]: 2025-11-29 08:19:37.227 226310 DEBUG nova.virt.libvirt.driver [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:19:37 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Nov 29 03:19:37 np0005539564 nova_compute[226295]: 2025-11-29 08:19:37.537 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:38.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e316 e316: 3 total, 3 up, 3 in
Nov 29 03:19:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:38.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:40 np0005539564 kernel: tap0ac7b30d-da (unregistering): left promiscuous mode
Nov 29 03:19:40 np0005539564 NetworkManager[48997]: <info>  [1764404380.3892] device (tap0ac7b30d-da): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:19:40 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:40Z|00475|binding|INFO|Releasing lport 0ac7b30d-dad2-4718-b060-add6421b1065 from this chassis (sb_readonly=0)
Nov 29 03:19:40 np0005539564 nova_compute[226295]: 2025-11-29 08:19:40.400 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:40 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:40Z|00476|binding|INFO|Setting lport 0ac7b30d-dad2-4718-b060-add6421b1065 down in Southbound
Nov 29 03:19:40 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:40Z|00477|binding|INFO|Removing iface tap0ac7b30d-da ovn-installed in OVS
Nov 29 03:19:40 np0005539564 nova_compute[226295]: 2025-11-29 08:19:40.404 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:40.417 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:9e:3a 10.100.0.10'], port_security=['fa:16:3e:d4:9e:3a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '38bba84b-1fb0-460a-a6aa-707ef29970b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32485b0e-177b-4dfd-a55a-0249528f32e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '358970eca7ad4b05b70f43e5507ac052', 'neutron:revision_number': '4', 'neutron:security_group_ids': '33616c4d-f137-4188-9923-071fd3df21bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83a2eb53-2a5d-447d-a36c-4b9c2b295f15, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=0ac7b30d-dad2-4718-b060-add6421b1065) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:40.420 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 0ac7b30d-dad2-4718-b060-add6421b1065 in datapath 32485b0e-177b-4dfd-a55a-0249528f32e1 unbound from our chassis#033[00m
Nov 29 03:19:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:40.422 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 32485b0e-177b-4dfd-a55a-0249528f32e1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:19:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:40.423 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f22bee-d0b5-4fab-b2c4-ea153be12b8e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:40.424 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 namespace which is not needed anymore#033[00m
Nov 29 03:19:40 np0005539564 nova_compute[226295]: 2025-11-29 08:19:40.425 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:40.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:40 np0005539564 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000087.scope: Deactivated successfully.
Nov 29 03:19:40 np0005539564 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000087.scope: Consumed 15.165s CPU time.
Nov 29 03:19:40 np0005539564 systemd-machined[190128]: Machine qemu-60-instance-00000087 terminated.
Nov 29 03:19:40 np0005539564 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[275854]: [NOTICE]   (275861) : haproxy version is 2.8.14-c23fe91
Nov 29 03:19:40 np0005539564 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[275854]: [NOTICE]   (275861) : path to executable is /usr/sbin/haproxy
Nov 29 03:19:40 np0005539564 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[275854]: [WARNING]  (275861) : Exiting Master process...
Nov 29 03:19:40 np0005539564 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[275854]: [WARNING]  (275861) : Exiting Master process...
Nov 29 03:19:40 np0005539564 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[275854]: [ALERT]    (275861) : Current worker (275864) exited with code 143 (Terminated)
Nov 29 03:19:40 np0005539564 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[275854]: [WARNING]  (275861) : All workers exited. Exiting... (0)
Nov 29 03:19:40 np0005539564 systemd[1]: libpod-5ec0eb04ed76ddc3bdf164fad50e27cbb63b81d8210a56bbbb680464dbc70eab.scope: Deactivated successfully.
Nov 29 03:19:40 np0005539564 podman[276120]: 2025-11-29 08:19:40.586198354 +0000 UTC m=+0.054048572 container died 5ec0eb04ed76ddc3bdf164fad50e27cbb63b81d8210a56bbbb680464dbc70eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:19:40 np0005539564 kernel: tap0ac7b30d-da: entered promiscuous mode
Nov 29 03:19:40 np0005539564 NetworkManager[48997]: <info>  [1764404380.6193] manager: (tap0ac7b30d-da): new Tun device (/org/freedesktop/NetworkManager/Devices/237)
Nov 29 03:19:40 np0005539564 kernel: tap0ac7b30d-da (unregistering): left promiscuous mode
Nov 29 03:19:40 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:40Z|00478|binding|INFO|Claiming lport 0ac7b30d-dad2-4718-b060-add6421b1065 for this chassis.
Nov 29 03:19:40 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:40Z|00479|binding|INFO|0ac7b30d-dad2-4718-b060-add6421b1065: Claiming fa:16:3e:d4:9e:3a 10.100.0.10
Nov 29 03:19:40 np0005539564 systemd[1]: var-lib-containers-storage-overlay-5ecf2a11ba77c8f68b1b788a57a41a95f44230b7d9910f33e089190c1f944743-merged.mount: Deactivated successfully.
Nov 29 03:19:40 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5ec0eb04ed76ddc3bdf164fad50e27cbb63b81d8210a56bbbb680464dbc70eab-userdata-shm.mount: Deactivated successfully.
Nov 29 03:19:40 np0005539564 nova_compute[226295]: 2025-11-29 08:19:40.670 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:40 np0005539564 podman[276120]: 2025-11-29 08:19:40.68037197 +0000 UTC m=+0.148222138 container cleanup 5ec0eb04ed76ddc3bdf164fad50e27cbb63b81d8210a56bbbb680464dbc70eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:19:40 np0005539564 systemd[1]: libpod-conmon-5ec0eb04ed76ddc3bdf164fad50e27cbb63b81d8210a56bbbb680464dbc70eab.scope: Deactivated successfully.
Nov 29 03:19:40 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:40Z|00480|binding|INFO|Setting lport 0ac7b30d-dad2-4718-b060-add6421b1065 ovn-installed in OVS
Nov 29 03:19:40 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:40Z|00481|if_status|INFO|Dropped 6 log messages in last 341 seconds (most recently, 334 seconds ago) due to excessive rate
Nov 29 03:19:40 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:40Z|00482|if_status|INFO|Not setting lport 0ac7b30d-dad2-4718-b060-add6421b1065 down as sb is readonly
Nov 29 03:19:40 np0005539564 nova_compute[226295]: 2025-11-29 08:19:40.699 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:40 np0005539564 podman[276144]: 2025-11-29 08:19:40.709673302 +0000 UTC m=+0.087855236 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:19:40 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:40Z|00483|binding|INFO|Releasing lport 0ac7b30d-dad2-4718-b060-add6421b1065 from this chassis (sb_readonly=0)
Nov 29 03:19:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:40.734 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:9e:3a 10.100.0.10'], port_security=['fa:16:3e:d4:9e:3a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '38bba84b-1fb0-460a-a6aa-707ef29970b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32485b0e-177b-4dfd-a55a-0249528f32e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '358970eca7ad4b05b70f43e5507ac052', 'neutron:revision_number': '4', 'neutron:security_group_ids': '33616c4d-f137-4188-9923-071fd3df21bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83a2eb53-2a5d-447d-a36c-4b9c2b295f15, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=0ac7b30d-dad2-4718-b060-add6421b1065) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:40.738 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:9e:3a 10.100.0.10'], port_security=['fa:16:3e:d4:9e:3a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '38bba84b-1fb0-460a-a6aa-707ef29970b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32485b0e-177b-4dfd-a55a-0249528f32e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '358970eca7ad4b05b70f43e5507ac052', 'neutron:revision_number': '4', 'neutron:security_group_ids': '33616c4d-f137-4188-9923-071fd3df21bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83a2eb53-2a5d-447d-a36c-4b9c2b295f15, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=0ac7b30d-dad2-4718-b060-add6421b1065) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:40 np0005539564 nova_compute[226295]: 2025-11-29 08:19:40.753 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:40 np0005539564 podman[276184]: 2025-11-29 08:19:40.758360948 +0000 UTC m=+0.046983781 container remove 5ec0eb04ed76ddc3bdf164fad50e27cbb63b81d8210a56bbbb680464dbc70eab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:19:40 np0005539564 podman[276136]: 2025-11-29 08:19:40.764682409 +0000 UTC m=+0.091397822 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:19:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:40.764 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d9ff78ba-7ad9-494f-a97d-38f56a5b0f4b]: (4, ('Sat Nov 29 08:19:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 (5ec0eb04ed76ddc3bdf164fad50e27cbb63b81d8210a56bbbb680464dbc70eab)\n5ec0eb04ed76ddc3bdf164fad50e27cbb63b81d8210a56bbbb680464dbc70eab\nSat Nov 29 08:19:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 (5ec0eb04ed76ddc3bdf164fad50e27cbb63b81d8210a56bbbb680464dbc70eab)\n5ec0eb04ed76ddc3bdf164fad50e27cbb63b81d8210a56bbbb680464dbc70eab\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:40.767 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[dd3aeca3-c2fe-4ca9-97e7-1a34c729e8a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:40.768 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32485b0e-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:40 np0005539564 nova_compute[226295]: 2025-11-29 08:19:40.770 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:40 np0005539564 kernel: tap32485b0e-10: left promiscuous mode
Nov 29 03:19:40 np0005539564 nova_compute[226295]: 2025-11-29 08:19:40.796 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:40.801 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3170ce35-8515-4b6e-86ec-94ddc7895314]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:40 np0005539564 podman[276165]: 2025-11-29 08:19:40.80321931 +0000 UTC m=+0.118113564 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:19:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:40.812 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6887787f-903d-4963-bc9b-322103654dbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:40.813 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[993f3d42-68d1-454f-99ed-41462028d58a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:40.830 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[19da0c46-6a93-44bf-8e35-68469771be0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733743, 'reachable_time': 31886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276235, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:40 np0005539564 systemd[1]: run-netns-ovnmeta\x2d32485b0e\x2d177b\x2d4dfd\x2da55a\x2d0249528f32e1.mount: Deactivated successfully.
Nov 29 03:19:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:40.834 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:19:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:40.834 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[8441c138-495b-4b59-a83c-198a97f59921]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:40.835 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 0ac7b30d-dad2-4718-b060-add6421b1065 in datapath 32485b0e-177b-4dfd-a55a-0249528f32e1 unbound from our chassis#033[00m
Nov 29 03:19:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:40.836 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 32485b0e-177b-4dfd-a55a-0249528f32e1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:19:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:40.837 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a83aa344-59e1-40d0-9fc4-be23524105f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:40.837 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 0ac7b30d-dad2-4718-b060-add6421b1065 in datapath 32485b0e-177b-4dfd-a55a-0249528f32e1 unbound from our chassis#033[00m
Nov 29 03:19:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:40.838 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 32485b0e-177b-4dfd-a55a-0249528f32e1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:19:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:40.839 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c9d2bc3c-3279-41dc-b2b8-cf57472a1bf2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:40.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:41 np0005539564 nova_compute[226295]: 2025-11-29 08:19:41.257 226310 INFO nova.virt.libvirt.driver [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Instance shutdown successfully after 4 seconds.#033[00m
Nov 29 03:19:41 np0005539564 nova_compute[226295]: 2025-11-29 08:19:41.265 226310 INFO nova.virt.libvirt.driver [-] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Instance destroyed successfully.#033[00m
Nov 29 03:19:41 np0005539564 nova_compute[226295]: 2025-11-29 08:19:41.266 226310 DEBUG nova.objects.instance [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'numa_topology' on Instance uuid 38bba84b-1fb0-460a-a6aa-707ef29970b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:41 np0005539564 nova_compute[226295]: 2025-11-29 08:19:41.285 226310 INFO nova.virt.libvirt.driver [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Attempting a stable device rescue#033[00m
Nov 29 03:19:41 np0005539564 nova_compute[226295]: 2025-11-29 08:19:41.305 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:41 np0005539564 nova_compute[226295]: 2025-11-29 08:19:41.719 226310 DEBUG nova.virt.libvirt.driver [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'usb', 'dev': 'sdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Nov 29 03:19:41 np0005539564 nova_compute[226295]: 2025-11-29 08:19:41.727 226310 DEBUG nova.virt.libvirt.driver [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:19:41 np0005539564 nova_compute[226295]: 2025-11-29 08:19:41.728 226310 INFO nova.virt.libvirt.driver [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Creating image(s)#033[00m
Nov 29 03:19:41 np0005539564 nova_compute[226295]: 2025-11-29 08:19:41.766 226310 DEBUG nova.storage.rbd_utils [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 38bba84b-1fb0-460a-a6aa-707ef29970b2_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:41 np0005539564 nova_compute[226295]: 2025-11-29 08:19:41.771 226310 DEBUG nova.objects.instance [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 38bba84b-1fb0-460a-a6aa-707ef29970b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:41 np0005539564 nova_compute[226295]: 2025-11-29 08:19:41.815 226310 DEBUG nova.storage.rbd_utils [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 38bba84b-1fb0-460a-a6aa-707ef29970b2_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:41 np0005539564 nova_compute[226295]: 2025-11-29 08:19:41.849 226310 DEBUG nova.storage.rbd_utils [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 38bba84b-1fb0-460a-a6aa-707ef29970b2_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:41 np0005539564 nova_compute[226295]: 2025-11-29 08:19:41.853 226310 DEBUG oslo_concurrency.lockutils [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "42d53693c4e3e00688550d4c86d560acb00cabdf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:41 np0005539564 nova_compute[226295]: 2025-11-29 08:19:41.854 226310 DEBUG oslo_concurrency.lockutils [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "42d53693c4e3e00688550d4c86d560acb00cabdf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.126 226310 DEBUG nova.virt.libvirt.imagebackend [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Image locations are: [{'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/e372d671-0356-4b39-949b-13a58b062421/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/e372d671-0356-4b39-949b-13a58b062421/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.185 226310 DEBUG nova.virt.libvirt.imagebackend [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Selected location: {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/e372d671-0356-4b39-949b-13a58b062421/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.185 226310 DEBUG nova.storage.rbd_utils [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] cloning images/e372d671-0356-4b39-949b-13a58b062421@snap to None/38bba84b-1fb0-460a-a6aa-707ef29970b2_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.311 226310 DEBUG oslo_concurrency.lockutils [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "42d53693c4e3e00688550d4c86d560acb00cabdf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.368 226310 DEBUG nova.objects.instance [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'migration_context' on Instance uuid 38bba84b-1fb0-460a-a6aa-707ef29970b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.408 226310 DEBUG nova.virt.libvirt.driver [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.410 226310 DEBUG nova.virt.libvirt.driver [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Start _get_guest_xml network_info=[{"id": "0ac7b30d-dad2-4718-b060-add6421b1065", "address": "fa:16:3e:d4:9e:3a", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "vif_mac": "fa:16:3e:d4:9e:3a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ac7b30d-da", "ovs_interfaceid": "0ac7b30d-dad2-4718-b060-add6421b1065", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'usb', 'dev': 'sdb', 'type': 'disk'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'e372d671-0356-4b39-949b-13a58b062421', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.410 226310 DEBUG nova.objects.instance [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'resources' on Instance uuid 38bba84b-1fb0-460a-a6aa-707ef29970b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.427 226310 WARNING nova.virt.libvirt.driver [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.434 226310 DEBUG nova.virt.libvirt.host [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.434 226310 DEBUG nova.virt.libvirt.host [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.438 226310 DEBUG nova.virt.libvirt.host [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.438 226310 DEBUG nova.virt.libvirt.host [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:19:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.439 226310 DEBUG nova.virt.libvirt.driver [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:19:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:42.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.439 226310 DEBUG nova.virt.hardware [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.440 226310 DEBUG nova.virt.hardware [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.440 226310 DEBUG nova.virt.hardware [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.440 226310 DEBUG nova.virt.hardware [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.440 226310 DEBUG nova.virt.hardware [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.440 226310 DEBUG nova.virt.hardware [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.441 226310 DEBUG nova.virt.hardware [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.441 226310 DEBUG nova.virt.hardware [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.441 226310 DEBUG nova.virt.hardware [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.441 226310 DEBUG nova.virt.hardware [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.441 226310 DEBUG nova.virt.hardware [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.441 226310 DEBUG nova.objects.instance [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 38bba84b-1fb0-460a-a6aa-707ef29970b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.459 226310 DEBUG oslo_concurrency.processutils [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.539 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:42.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.890 226310 DEBUG oslo_concurrency.lockutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "091988cc-8042-4aa1-b909-5ca1744ff259" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.891 226310 DEBUG oslo_concurrency.lockutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "091988cc-8042-4aa1-b909-5ca1744ff259" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:42 np0005539564 nova_compute[226295]: 2025-11-29 08:19:42.944 226310 DEBUG nova.compute.manager [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:19:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:19:42 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2289726521' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:19:43 np0005539564 nova_compute[226295]: 2025-11-29 08:19:43.004 226310 DEBUG oslo_concurrency.processutils [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:43 np0005539564 nova_compute[226295]: 2025-11-29 08:19:43.068 226310 DEBUG nova.compute.manager [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Received event network-vif-unplugged-0ac7b30d-dad2-4718-b060-add6421b1065 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:43 np0005539564 nova_compute[226295]: 2025-11-29 08:19:43.069 226310 DEBUG oslo_concurrency.lockutils [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:43 np0005539564 nova_compute[226295]: 2025-11-29 08:19:43.070 226310 DEBUG oslo_concurrency.lockutils [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:43 np0005539564 nova_compute[226295]: 2025-11-29 08:19:43.070 226310 DEBUG oslo_concurrency.lockutils [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:43 np0005539564 nova_compute[226295]: 2025-11-29 08:19:43.071 226310 DEBUG nova.compute.manager [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] No waiting events found dispatching network-vif-unplugged-0ac7b30d-dad2-4718-b060-add6421b1065 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:43 np0005539564 nova_compute[226295]: 2025-11-29 08:19:43.071 226310 WARNING nova.compute.manager [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Received unexpected event network-vif-unplugged-0ac7b30d-dad2-4718-b060-add6421b1065 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:19:43 np0005539564 nova_compute[226295]: 2025-11-29 08:19:43.072 226310 DEBUG nova.compute.manager [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Received event network-vif-plugged-0ac7b30d-dad2-4718-b060-add6421b1065 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:43 np0005539564 nova_compute[226295]: 2025-11-29 08:19:43.072 226310 DEBUG oslo_concurrency.lockutils [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:43 np0005539564 nova_compute[226295]: 2025-11-29 08:19:43.072 226310 DEBUG oslo_concurrency.lockutils [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:43 np0005539564 nova_compute[226295]: 2025-11-29 08:19:43.073 226310 DEBUG oslo_concurrency.lockutils [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:43 np0005539564 nova_compute[226295]: 2025-11-29 08:19:43.073 226310 DEBUG nova.compute.manager [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] No waiting events found dispatching network-vif-plugged-0ac7b30d-dad2-4718-b060-add6421b1065 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:43 np0005539564 nova_compute[226295]: 2025-11-29 08:19:43.073 226310 WARNING nova.compute.manager [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Received unexpected event network-vif-plugged-0ac7b30d-dad2-4718-b060-add6421b1065 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:19:43 np0005539564 nova_compute[226295]: 2025-11-29 08:19:43.079 226310 DEBUG oslo_concurrency.processutils [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:43 np0005539564 nova_compute[226295]: 2025-11-29 08:19:43.174 226310 DEBUG oslo_concurrency.lockutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:43 np0005539564 nova_compute[226295]: 2025-11-29 08:19:43.175 226310 DEBUG oslo_concurrency.lockutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:43 np0005539564 nova_compute[226295]: 2025-11-29 08:19:43.184 226310 DEBUG nova.virt.hardware [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:19:43 np0005539564 nova_compute[226295]: 2025-11-29 08:19:43.185 226310 INFO nova.compute.claims [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:19:43 np0005539564 nova_compute[226295]: 2025-11-29 08:19:43.401 226310 DEBUG oslo_concurrency.processutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:19:43 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/524578851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:19:43 np0005539564 nova_compute[226295]: 2025-11-29 08:19:43.563 226310 DEBUG oslo_concurrency.processutils [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:43 np0005539564 nova_compute[226295]: 2025-11-29 08:19:43.565 226310 DEBUG oslo_concurrency.processutils [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:19:43 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/703786362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:19:43 np0005539564 nova_compute[226295]: 2025-11-29 08:19:43.874 226310 DEBUG oslo_concurrency.processutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:43 np0005539564 nova_compute[226295]: 2025-11-29 08:19:43.885 226310 DEBUG nova.compute.provider_tree [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:19:43 np0005539564 nova_compute[226295]: 2025-11-29 08:19:43.904 226310 DEBUG nova.scheduler.client.report [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:19:43 np0005539564 nova_compute[226295]: 2025-11-29 08:19:43.938 226310 DEBUG oslo_concurrency.lockutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:43 np0005539564 nova_compute[226295]: 2025-11-29 08:19:43.939 226310 DEBUG nova.compute.manager [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.000 226310 DEBUG nova.compute.manager [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.001 226310 DEBUG nova.network.neutron [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:19:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:19:44 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2270226176' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.022 226310 DEBUG oslo_concurrency.processutils [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.026 226310 DEBUG nova.virt.libvirt.vif [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:19:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1206692563',display_name='tempest-ServerStableDeviceRescueTest-server-1206692563',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1206692563',id=135,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:19:23Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='358970eca7ad4b05b70f43e5507ac052',ramdisk_id='',reservation_id='r-0h9yile4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-1105304301',owner_user_name='tempest-ServerStableDeviceRescueTest-1105304301-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:19:31Z,user_data=None,user_id='3b52040d601a4a56abcaf3f046f1e349',uuid=38bba84b-1fb0-460a-a6aa-707ef29970b2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ac7b30d-dad2-4718-b060-add6421b1065", "address": "fa:16:3e:d4:9e:3a", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "vif_mac": "fa:16:3e:d4:9e:3a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ac7b30d-da", "ovs_interfaceid": "0ac7b30d-dad2-4718-b060-add6421b1065", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.026 226310 DEBUG nova.network.os_vif_util [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converting VIF {"id": "0ac7b30d-dad2-4718-b060-add6421b1065", "address": "fa:16:3e:d4:9e:3a", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "vif_mac": "fa:16:3e:d4:9e:3a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ac7b30d-da", "ovs_interfaceid": "0ac7b30d-dad2-4718-b060-add6421b1065", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.028 226310 DEBUG nova.network.os_vif_util [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d4:9e:3a,bridge_name='br-int',has_traffic_filtering=True,id=0ac7b30d-dad2-4718-b060-add6421b1065,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ac7b30d-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.030 226310 DEBUG nova.objects.instance [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'pci_devices' on Instance uuid 38bba84b-1fb0-460a-a6aa-707ef29970b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.033 226310 INFO nova.virt.libvirt.driver [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.067 226310 DEBUG nova.virt.libvirt.driver [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:19:44 np0005539564 nova_compute[226295]:  <uuid>38bba84b-1fb0-460a-a6aa-707ef29970b2</uuid>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:  <name>instance-00000087</name>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-1206692563</nova:name>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:19:42</nova:creationTime>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:19:44 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:        <nova:user uuid="3b52040d601a4a56abcaf3f046f1e349">tempest-ServerStableDeviceRescueTest-1105304301-project-member</nova:user>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:        <nova:project uuid="358970eca7ad4b05b70f43e5507ac052">tempest-ServerStableDeviceRescueTest-1105304301</nova:project>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:        <nova:port uuid="0ac7b30d-dad2-4718-b060-add6421b1065">
Nov 29 03:19:44 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <entry name="serial">38bba84b-1fb0-460a-a6aa-707ef29970b2</entry>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <entry name="uuid">38bba84b-1fb0-460a-a6aa-707ef29970b2</entry>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/38bba84b-1fb0-460a-a6aa-707ef29970b2_disk">
Nov 29 03:19:44 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:19:44 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/38bba84b-1fb0-460a-a6aa-707ef29970b2_disk.config">
Nov 29 03:19:44 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:19:44 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/38bba84b-1fb0-460a-a6aa-707ef29970b2_disk.rescue">
Nov 29 03:19:44 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:19:44 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <target dev="sdb" bus="usb"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <boot order="1"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:d4:9e:3a"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <target dev="tap0ac7b30d-da"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/38bba84b-1fb0-460a-a6aa-707ef29970b2/console.log" append="off"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:19:44 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:19:44 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:19:44 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:19:44 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.069 226310 DEBUG nova.compute.manager [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.081 226310 INFO nova.virt.libvirt.driver [-] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Instance destroyed successfully.#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.170 226310 DEBUG nova.virt.libvirt.driver [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.171 226310 DEBUG nova.virt.libvirt.driver [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.171 226310 DEBUG nova.virt.libvirt.driver [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No BDM found with device name sdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.172 226310 DEBUG nova.virt.libvirt.driver [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No VIF found with MAC fa:16:3e:d4:9e:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.174 226310 INFO nova.virt.libvirt.driver [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Using config drive#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.219 226310 DEBUG nova.storage.rbd_utils [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 38bba84b-1fb0-460a-a6aa-707ef29970b2_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.229 226310 DEBUG nova.compute.manager [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.230 226310 DEBUG nova.virt.libvirt.driver [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.230 226310 INFO nova.virt.libvirt.driver [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Creating image(s)#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.265 226310 DEBUG nova.storage.rbd_utils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 091988cc-8042-4aa1-b909-5ca1744ff259_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.306 226310 DEBUG nova.storage.rbd_utils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 091988cc-8042-4aa1-b909-5ca1744ff259_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.343 226310 DEBUG nova.storage.rbd_utils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 091988cc-8042-4aa1-b909-5ca1744ff259_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.349 226310 DEBUG oslo_concurrency.processutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.401 226310 DEBUG nova.policy [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '504bc6adabad4f7d8c17b0438c4d9be7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b9d4c81989d641678300c7a1c173a2c2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.407 226310 DEBUG nova.objects.instance [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 38bba84b-1fb0-460a-a6aa-707ef29970b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:44.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.447 226310 DEBUG nova.objects.instance [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'keypairs' on Instance uuid 38bba84b-1fb0-460a-a6aa-707ef29970b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.456 226310 DEBUG oslo_concurrency.processutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.458 226310 DEBUG oslo_concurrency.lockutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.459 226310 DEBUG oslo_concurrency.lockutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.459 226310 DEBUG oslo_concurrency.lockutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.496 226310 DEBUG nova.storage.rbd_utils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 091988cc-8042-4aa1-b909-5ca1744ff259_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.502 226310 DEBUG oslo_concurrency.processutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 091988cc-8042-4aa1-b909-5ca1744ff259_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:44.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.900 226310 DEBUG oslo_concurrency.processutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 091988cc-8042-4aa1-b909-5ca1744ff259_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.977 226310 DEBUG nova.network.neutron [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Successfully created port: 4b3bde4e-a900-44fb-96a9-a6f92c949f67 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:19:44 np0005539564 nova_compute[226295]: 2025-11-29 08:19:44.985 226310 DEBUG nova.storage.rbd_utils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] resizing rbd image 091988cc-8042-4aa1-b909-5ca1744ff259_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:19:45 np0005539564 nova_compute[226295]: 2025-11-29 08:19:45.088 226310 INFO nova.virt.libvirt.driver [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Creating config drive at /var/lib/nova/instances/38bba84b-1fb0-460a-a6aa-707ef29970b2/disk.config.rescue#033[00m
Nov 29 03:19:45 np0005539564 nova_compute[226295]: 2025-11-29 08:19:45.092 226310 DEBUG oslo_concurrency.processutils [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/38bba84b-1fb0-460a-a6aa-707ef29970b2/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8ftn61o6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:45 np0005539564 nova_compute[226295]: 2025-11-29 08:19:45.172 226310 DEBUG nova.objects.instance [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'migration_context' on Instance uuid 091988cc-8042-4aa1-b909-5ca1744ff259 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:19:45 np0005539564 nova_compute[226295]: 2025-11-29 08:19:45.238 226310 DEBUG oslo_concurrency.processutils [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/38bba84b-1fb0-460a-a6aa-707ef29970b2/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8ftn61o6" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:19:45 np0005539564 nova_compute[226295]: 2025-11-29 08:19:45.272 226310 DEBUG nova.storage.rbd_utils [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 38bba84b-1fb0-460a-a6aa-707ef29970b2_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:19:45 np0005539564 nova_compute[226295]: 2025-11-29 08:19:45.276 226310 DEBUG oslo_concurrency.processutils [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/38bba84b-1fb0-460a-a6aa-707ef29970b2/disk.config.rescue 38bba84b-1fb0-460a-a6aa-707ef29970b2_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:19:45 np0005539564 nova_compute[226295]: 2025-11-29 08:19:45.316 226310 DEBUG nova.virt.libvirt.driver [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:19:45 np0005539564 nova_compute[226295]: 2025-11-29 08:19:45.317 226310 DEBUG nova.virt.libvirt.driver [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Ensure instance console log exists: /var/lib/nova/instances/091988cc-8042-4aa1-b909-5ca1744ff259/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:19:45 np0005539564 nova_compute[226295]: 2025-11-29 08:19:45.318 226310 DEBUG oslo_concurrency.lockutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:45 np0005539564 nova_compute[226295]: 2025-11-29 08:19:45.318 226310 DEBUG oslo_concurrency.lockutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:45 np0005539564 nova_compute[226295]: 2025-11-29 08:19:45.318 226310 DEBUG oslo_concurrency.lockutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:19:45 np0005539564 nova_compute[226295]: 2025-11-29 08:19:45.498 226310 DEBUG oslo_concurrency.processutils [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/38bba84b-1fb0-460a-a6aa-707ef29970b2/disk.config.rescue 38bba84b-1fb0-460a-a6aa-707ef29970b2_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:19:45 np0005539564 nova_compute[226295]: 2025-11-29 08:19:45.499 226310 INFO nova.virt.libvirt.driver [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Deleting local config drive /var/lib/nova/instances/38bba84b-1fb0-460a-a6aa-707ef29970b2/disk.config.rescue because it was imported into RBD.
Nov 29 03:19:45 np0005539564 kernel: tap0ac7b30d-da: entered promiscuous mode
Nov 29 03:19:45 np0005539564 NetworkManager[48997]: <info>  [1764404385.5734] manager: (tap0ac7b30d-da): new Tun device (/org/freedesktop/NetworkManager/Devices/238)
Nov 29 03:19:45 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:45Z|00484|binding|INFO|Claiming lport 0ac7b30d-dad2-4718-b060-add6421b1065 for this chassis.
Nov 29 03:19:45 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:45Z|00485|binding|INFO|0ac7b30d-dad2-4718-b060-add6421b1065: Claiming fa:16:3e:d4:9e:3a 10.100.0.10
Nov 29 03:19:45 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:45Z|00486|binding|INFO|Removing lport 0ac7b30d-dad2-4718-b060-add6421b1065 ovn-installed in OVS
Nov 29 03:19:45 np0005539564 nova_compute[226295]: 2025-11-29 08:19:45.576 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:45.584 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:9e:3a 10.100.0.10'], port_security=['fa:16:3e:d4:9e:3a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '38bba84b-1fb0-460a-a6aa-707ef29970b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32485b0e-177b-4dfd-a55a-0249528f32e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '358970eca7ad4b05b70f43e5507ac052', 'neutron:revision_number': '5', 'neutron:security_group_ids': '33616c4d-f137-4188-9923-071fd3df21bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83a2eb53-2a5d-447d-a36c-4b9c2b295f15, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=0ac7b30d-dad2-4718-b060-add6421b1065) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:19:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:45.585 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 0ac7b30d-dad2-4718-b060-add6421b1065 in datapath 32485b0e-177b-4dfd-a55a-0249528f32e1 bound to our chassis
Nov 29 03:19:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:45.587 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 32485b0e-177b-4dfd-a55a-0249528f32e1
Nov 29 03:19:45 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:45Z|00487|binding|INFO|Setting lport 0ac7b30d-dad2-4718-b060-add6421b1065 ovn-installed in OVS
Nov 29 03:19:45 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:45Z|00488|binding|INFO|Setting lport 0ac7b30d-dad2-4718-b060-add6421b1065 up in Southbound
Nov 29 03:19:45 np0005539564 nova_compute[226295]: 2025-11-29 08:19:45.590 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:45 np0005539564 nova_compute[226295]: 2025-11-29 08:19:45.593 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:45.602 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6b64ca-0f77-4b0c-b74f-37b2e849a9a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:19:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:45.604 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap32485b0e-11 in ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 03:19:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:45.606 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap32485b0e-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 03:19:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:45.607 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[18e08cbc-8e11-4213-b9e6-06dec0575476]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:19:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:45.609 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b2497b01-aa06-4b78-b63d-a3c2ac762768]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:19:45 np0005539564 systemd-machined[190128]: New machine qemu-61-instance-00000087.
Nov 29 03:19:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:45.625 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[a410c753-6608-4ad4-aaeb-937ebe064ce8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:19:45 np0005539564 systemd[1]: Started Virtual Machine qemu-61-instance-00000087.
Nov 29 03:19:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:45.643 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6fa6af6c-5e5d-4029-9fbd-dea37bba3fa2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:19:45 np0005539564 systemd-udevd[276726]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:19:45 np0005539564 NetworkManager[48997]: <info>  [1764404385.6819] device (tap0ac7b30d-da): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:19:45 np0005539564 NetworkManager[48997]: <info>  [1764404385.6831] device (tap0ac7b30d-da): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:19:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:45.686 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[7e09ebb5-8658-4c3a-9423-a6686e6ef96d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:19:45 np0005539564 systemd-udevd[276731]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:19:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:45.692 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9e88f692-21e9-44bb-b8af-ac7ef66968fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:19:45 np0005539564 NetworkManager[48997]: <info>  [1764404385.6931] manager: (tap32485b0e-10): new Veth device (/org/freedesktop/NetworkManager/Devices/239)
Nov 29 03:19:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:45.733 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[e25e5715-c555-4423-875b-279384352339]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:19:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:45.736 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[e8f1e731-8b83-4701-85f3-2b6fc1f03682]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:19:45 np0005539564 NetworkManager[48997]: <info>  [1764404385.7710] device (tap32485b0e-10): carrier: link connected
Nov 29 03:19:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:45.782 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f4448c-322e-4380-9404-098cd348e163]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:19:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:45.811 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a6daa52e-e8cd-4c93-8f8f-57a19d2e7b13]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32485b0e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:44:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736045, 'reachable_time': 37663, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276755, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:19:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:45.833 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2feee920-917f-4539-9285-4b73ba39d173]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea4:4406'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736045, 'tstamp': 736045}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276756, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:19:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:45.862 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a6372161-c417-4642-9f09-ca25d37e5be9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32485b0e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:44:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736045, 'reachable_time': 37663, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276757, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:19:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:45.912 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bf4596b0-07cc-458f-94fd-a56acf03d288]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:19:45 np0005539564 nova_compute[226295]: 2025-11-29 08:19:45.943 226310 DEBUG nova.compute.manager [req-8d239cbb-a96a-4e0f-a86b-288b0315cd4c req-b5114e7e-298e-48c6-94dd-9afb572c7725 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Received event network-vif-plugged-0ac7b30d-dad2-4718-b060-add6421b1065 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:19:45 np0005539564 nova_compute[226295]: 2025-11-29 08:19:45.944 226310 DEBUG oslo_concurrency.lockutils [req-8d239cbb-a96a-4e0f-a86b-288b0315cd4c req-b5114e7e-298e-48c6-94dd-9afb572c7725 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:45 np0005539564 nova_compute[226295]: 2025-11-29 08:19:45.944 226310 DEBUG oslo_concurrency.lockutils [req-8d239cbb-a96a-4e0f-a86b-288b0315cd4c req-b5114e7e-298e-48c6-94dd-9afb572c7725 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:45 np0005539564 nova_compute[226295]: 2025-11-29 08:19:45.945 226310 DEBUG oslo_concurrency.lockutils [req-8d239cbb-a96a-4e0f-a86b-288b0315cd4c req-b5114e7e-298e-48c6-94dd-9afb572c7725 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:19:45 np0005539564 nova_compute[226295]: 2025-11-29 08:19:45.945 226310 DEBUG nova.compute.manager [req-8d239cbb-a96a-4e0f-a86b-288b0315cd4c req-b5114e7e-298e-48c6-94dd-9afb572c7725 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] No waiting events found dispatching network-vif-plugged-0ac7b30d-dad2-4718-b060-add6421b1065 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:19:45 np0005539564 nova_compute[226295]: 2025-11-29 08:19:45.946 226310 WARNING nova.compute.manager [req-8d239cbb-a96a-4e0f-a86b-288b0315cd4c req-b5114e7e-298e-48c6-94dd-9afb572c7725 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Received unexpected event network-vif-plugged-0ac7b30d-dad2-4718-b060-add6421b1065 for instance with vm_state active and task_state rescuing.
Nov 29 03:19:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:45.992 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[79ec7e83-fc4c-4ccc-bec4-2b454f7bc5fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:19:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:45.994 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32485b0e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:19:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:45.994 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:19:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:45.994 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32485b0e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:19:45 np0005539564 NetworkManager[48997]: <info>  [1764404385.9976] manager: (tap32485b0e-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/240)
Nov 29 03:19:45 np0005539564 nova_compute[226295]: 2025-11-29 08:19:45.997 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:45 np0005539564 kernel: tap32485b0e-10: entered promiscuous mode
Nov 29 03:19:46 np0005539564 nova_compute[226295]: 2025-11-29 08:19:46.000 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:46.001 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap32485b0e-10, col_values=(('external_ids', {'iface-id': '6711ba96-49f0-431a-a4d5-64f9cee27708'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:19:46 np0005539564 nova_compute[226295]: 2025-11-29 08:19:46.002 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:46 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:46Z|00489|binding|INFO|Releasing lport 6711ba96-49f0-431a-a4d5-64f9cee27708 from this chassis (sb_readonly=0)
Nov 29 03:19:46 np0005539564 nova_compute[226295]: 2025-11-29 08:19:46.026 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:46.026 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/32485b0e-177b-4dfd-a55a-0249528f32e1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/32485b0e-177b-4dfd-a55a-0249528f32e1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:46.027 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[02c44cd2-4ea5-4d24-8db2-3f0ac89b4e36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:46.028 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-32485b0e-177b-4dfd-a55a-0249528f32e1
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/32485b0e-177b-4dfd-a55a-0249528f32e1.pid.haproxy
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 32485b0e-177b-4dfd-a55a-0249528f32e1
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 03:19:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:46.030 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'env', 'PROCESS_TAG=haproxy-32485b0e-177b-4dfd-a55a-0249528f32e1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/32485b0e-177b-4dfd-a55a-0249528f32e1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 03:19:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:46 np0005539564 nova_compute[226295]: 2025-11-29 08:19:46.186 226310 DEBUG nova.network.neutron [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Successfully updated port: 4b3bde4e-a900-44fb-96a9-a6f92c949f67 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:19:46 np0005539564 nova_compute[226295]: 2025-11-29 08:19:46.204 226310 DEBUG oslo_concurrency.lockutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "refresh_cache-091988cc-8042-4aa1-b909-5ca1744ff259" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:19:46 np0005539564 nova_compute[226295]: 2025-11-29 08:19:46.204 226310 DEBUG oslo_concurrency.lockutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquired lock "refresh_cache-091988cc-8042-4aa1-b909-5ca1744ff259" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:19:46 np0005539564 nova_compute[226295]: 2025-11-29 08:19:46.205 226310 DEBUG nova.network.neutron [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:19:46 np0005539564 nova_compute[226295]: 2025-11-29 08:19:46.240 226310 DEBUG nova.compute.manager [None req-a0841746-515e-432d-9557-0032eeb3950c 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:46 np0005539564 nova_compute[226295]: 2025-11-29 08:19:46.242 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Removed pending event for 38bba84b-1fb0-460a-a6aa-707ef29970b2 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:19:46 np0005539564 nova_compute[226295]: 2025-11-29 08:19:46.242 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404386.2327964, 38bba84b-1fb0-460a-a6aa-707ef29970b2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:46 np0005539564 nova_compute[226295]: 2025-11-29 08:19:46.242 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:19:46 np0005539564 nova_compute[226295]: 2025-11-29 08:19:46.263 226310 DEBUG nova.compute.manager [req-8608e70b-5ee6-4330-ad59-9e96312c3125 req-6c1f3f0a-b1d6-4376-b3e0-357dbf9f3a7a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Received event network-changed-4b3bde4e-a900-44fb-96a9-a6f92c949f67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:46 np0005539564 nova_compute[226295]: 2025-11-29 08:19:46.263 226310 DEBUG nova.compute.manager [req-8608e70b-5ee6-4330-ad59-9e96312c3125 req-6c1f3f0a-b1d6-4376-b3e0-357dbf9f3a7a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Refreshing instance network info cache due to event network-changed-4b3bde4e-a900-44fb-96a9-a6f92c949f67. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:19:46 np0005539564 nova_compute[226295]: 2025-11-29 08:19:46.263 226310 DEBUG oslo_concurrency.lockutils [req-8608e70b-5ee6-4330-ad59-9e96312c3125 req-6c1f3f0a-b1d6-4376-b3e0-357dbf9f3a7a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-091988cc-8042-4aa1-b909-5ca1744ff259" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:19:46 np0005539564 nova_compute[226295]: 2025-11-29 08:19:46.270 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:46 np0005539564 nova_compute[226295]: 2025-11-29 08:19:46.274 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:19:46 np0005539564 nova_compute[226295]: 2025-11-29 08:19:46.295 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Nov 29 03:19:46 np0005539564 nova_compute[226295]: 2025-11-29 08:19:46.296 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404386.2337265, 38bba84b-1fb0-460a-a6aa-707ef29970b2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:46 np0005539564 nova_compute[226295]: 2025-11-29 08:19:46.296 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] VM Started (Lifecycle Event)#033[00m
Nov 29 03:19:46 np0005539564 nova_compute[226295]: 2025-11-29 08:19:46.307 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:46 np0005539564 nova_compute[226295]: 2025-11-29 08:19:46.316 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:46 np0005539564 nova_compute[226295]: 2025-11-29 08:19:46.320 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:19:46 np0005539564 nova_compute[226295]: 2025-11-29 08:19:46.347 226310 DEBUG nova.network.neutron [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:19:46 np0005539564 podman[276848]: 2025-11-29 08:19:46.433719801 +0000 UTC m=+0.077445214 container create f3fddb4f694e927daf3ac0641b7b6a9d36dfad6768ee0f85081583a38f4354bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:19:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:46.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:46 np0005539564 systemd[1]: Started libpod-conmon-f3fddb4f694e927daf3ac0641b7b6a9d36dfad6768ee0f85081583a38f4354bc.scope.
Nov 29 03:19:46 np0005539564 podman[276848]: 2025-11-29 08:19:46.390558264 +0000 UTC m=+0.034283777 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:19:46 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:19:46 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe68cb2ba6ccb5ed885b60846351189e4f43ecc52d86b085b1fca1a0317a9f16/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:46 np0005539564 podman[276848]: 2025-11-29 08:19:46.523020505 +0000 UTC m=+0.166745958 container init f3fddb4f694e927daf3ac0641b7b6a9d36dfad6768ee0f85081583a38f4354bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:19:46 np0005539564 podman[276848]: 2025-11-29 08:19:46.528281057 +0000 UTC m=+0.172006480 container start f3fddb4f694e927daf3ac0641b7b6a9d36dfad6768ee0f85081583a38f4354bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:19:46 np0005539564 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[276863]: [NOTICE]   (276867) : New worker (276869) forked
Nov 29 03:19:46 np0005539564 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[276863]: [NOTICE]   (276867) : Loading success.
Nov 29 03:19:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:46.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.218 226310 INFO nova.compute.manager [None req-b383cbd1-ddfe-4148-a1d6-2834eaeb3b16 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Unrescuing#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.219 226310 DEBUG oslo_concurrency.lockutils [None req-b383cbd1-ddfe-4148-a1d6-2834eaeb3b16 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "refresh_cache-38bba84b-1fb0-460a-a6aa-707ef29970b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.219 226310 DEBUG oslo_concurrency.lockutils [None req-b383cbd1-ddfe-4148-a1d6-2834eaeb3b16 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquired lock "refresh_cache-38bba84b-1fb0-460a-a6aa-707ef29970b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.219 226310 DEBUG nova.network.neutron [None req-b383cbd1-ddfe-4148-a1d6-2834eaeb3b16 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.239 226310 DEBUG nova.network.neutron [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Updating instance_info_cache with network_info: [{"id": "4b3bde4e-a900-44fb-96a9-a6f92c949f67", "address": "fa:16:3e:70:88:4b", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b3bde4e-a9", "ovs_interfaceid": "4b3bde4e-a900-44fb-96a9-a6f92c949f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.269 226310 DEBUG oslo_concurrency.lockutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Releasing lock "refresh_cache-091988cc-8042-4aa1-b909-5ca1744ff259" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.269 226310 DEBUG nova.compute.manager [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Instance network_info: |[{"id": "4b3bde4e-a900-44fb-96a9-a6f92c949f67", "address": "fa:16:3e:70:88:4b", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b3bde4e-a9", "ovs_interfaceid": "4b3bde4e-a900-44fb-96a9-a6f92c949f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.270 226310 DEBUG oslo_concurrency.lockutils [req-8608e70b-5ee6-4330-ad59-9e96312c3125 req-6c1f3f0a-b1d6-4376-b3e0-357dbf9f3a7a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-091988cc-8042-4aa1-b909-5ca1744ff259" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.270 226310 DEBUG nova.network.neutron [req-8608e70b-5ee6-4330-ad59-9e96312c3125 req-6c1f3f0a-b1d6-4376-b3e0-357dbf9f3a7a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Refreshing network info cache for port 4b3bde4e-a900-44fb-96a9-a6f92c949f67 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.274 226310 DEBUG nova.virt.libvirt.driver [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Start _get_guest_xml network_info=[{"id": "4b3bde4e-a900-44fb-96a9-a6f92c949f67", "address": "fa:16:3e:70:88:4b", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b3bde4e-a9", "ovs_interfaceid": "4b3bde4e-a900-44fb-96a9-a6f92c949f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.280 226310 WARNING nova.virt.libvirt.driver [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.290 226310 DEBUG nova.virt.libvirt.host [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.291 226310 DEBUG nova.virt.libvirt.host [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.295 226310 DEBUG nova.virt.libvirt.host [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.296 226310 DEBUG nova.virt.libvirt.host [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.297 226310 DEBUG nova.virt.libvirt.driver [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.297 226310 DEBUG nova.virt.hardware [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.298 226310 DEBUG nova.virt.hardware [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.298 226310 DEBUG nova.virt.hardware [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.298 226310 DEBUG nova.virt.hardware [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.299 226310 DEBUG nova.virt.hardware [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.299 226310 DEBUG nova.virt.hardware [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.299 226310 DEBUG nova.virt.hardware [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.300 226310 DEBUG nova.virt.hardware [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.300 226310 DEBUG nova.virt.hardware [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.300 226310 DEBUG nova.virt.hardware [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.300 226310 DEBUG nova.virt.hardware [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.305 226310 DEBUG oslo_concurrency.processutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.541 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:19:47 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/112214616' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.799 226310 DEBUG oslo_concurrency.processutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.835 226310 DEBUG nova.storage.rbd_utils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 091988cc-8042-4aa1-b909-5ca1744ff259_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:47 np0005539564 nova_compute[226295]: 2025-11-29 08:19:47.840 226310 DEBUG oslo_concurrency.processutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:19:48 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3714404165' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.303 226310 DEBUG oslo_concurrency.processutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.307 226310 DEBUG nova.virt.libvirt.vif [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:19:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1059282693',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1059282693',id=138,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9d4c81989d641678300c7a1c173a2c2',ramdisk_id='',reservation_id='r-2fb0gcdt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1019923576',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1019923576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:19:44Z,user_data=None,user_id='504bc6adabad4f7d8c17b0438c4d9be7',uuid=091988cc-8042-4aa1-b909-5ca1744ff259,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b3bde4e-a900-44fb-96a9-a6f92c949f67", "address": "fa:16:3e:70:88:4b", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b3bde4e-a9", "ovs_interfaceid": "4b3bde4e-a900-44fb-96a9-a6f92c949f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.308 226310 DEBUG nova.network.os_vif_util [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converting VIF {"id": "4b3bde4e-a900-44fb-96a9-a6f92c949f67", "address": "fa:16:3e:70:88:4b", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b3bde4e-a9", "ovs_interfaceid": "4b3bde4e-a900-44fb-96a9-a6f92c949f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.310 226310 DEBUG nova.network.os_vif_util [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:88:4b,bridge_name='br-int',has_traffic_filtering=True,id=4b3bde4e-a900-44fb-96a9-a6f92c949f67,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b3bde4e-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.312 226310 DEBUG nova.objects.instance [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 091988cc-8042-4aa1-b909-5ca1744ff259 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.333 226310 DEBUG nova.virt.libvirt.driver [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:19:48 np0005539564 nova_compute[226295]:  <uuid>091988cc-8042-4aa1-b909-5ca1744ff259</uuid>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:  <name>instance-0000008a</name>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-1059282693</nova:name>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:19:47</nova:creationTime>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:19:48 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:        <nova:user uuid="504bc6adabad4f7d8c17b0438c4d9be7">tempest-ServerBootFromVolumeStableRescueTest-1019923576-project-member</nova:user>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:        <nova:project uuid="b9d4c81989d641678300c7a1c173a2c2">tempest-ServerBootFromVolumeStableRescueTest-1019923576</nova:project>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:        <nova:port uuid="4b3bde4e-a900-44fb-96a9-a6f92c949f67">
Nov 29 03:19:48 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <entry name="serial">091988cc-8042-4aa1-b909-5ca1744ff259</entry>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <entry name="uuid">091988cc-8042-4aa1-b909-5ca1744ff259</entry>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/091988cc-8042-4aa1-b909-5ca1744ff259_disk">
Nov 29 03:19:48 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:19:48 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/091988cc-8042-4aa1-b909-5ca1744ff259_disk.config">
Nov 29 03:19:48 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:19:48 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:70:88:4b"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <target dev="tap4b3bde4e-a9"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/091988cc-8042-4aa1-b909-5ca1744ff259/console.log" append="off"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:19:48 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:19:48 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:19:48 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:19:48 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.334 226310 DEBUG nova.compute.manager [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Preparing to wait for external event network-vif-plugged-4b3bde4e-a900-44fb-96a9-a6f92c949f67 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.335 226310 DEBUG oslo_concurrency.lockutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "091988cc-8042-4aa1-b909-5ca1744ff259-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.335 226310 DEBUG oslo_concurrency.lockutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "091988cc-8042-4aa1-b909-5ca1744ff259-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.336 226310 DEBUG oslo_concurrency.lockutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "091988cc-8042-4aa1-b909-5ca1744ff259-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.337 226310 DEBUG nova.virt.libvirt.vif [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:19:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1059282693',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1059282693',id=138,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9d4c81989d641678300c7a1c173a2c2',ramdisk_id='',reservation_id='r-2fb0gcdt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1019923576',owner_user_name='tempest-ServerBo
otFromVolumeStableRescueTest-1019923576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:19:44Z,user_data=None,user_id='504bc6adabad4f7d8c17b0438c4d9be7',uuid=091988cc-8042-4aa1-b909-5ca1744ff259,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b3bde4e-a900-44fb-96a9-a6f92c949f67", "address": "fa:16:3e:70:88:4b", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b3bde4e-a9", "ovs_interfaceid": "4b3bde4e-a900-44fb-96a9-a6f92c949f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.337 226310 DEBUG nova.network.os_vif_util [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converting VIF {"id": "4b3bde4e-a900-44fb-96a9-a6f92c949f67", "address": "fa:16:3e:70:88:4b", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b3bde4e-a9", "ovs_interfaceid": "4b3bde4e-a900-44fb-96a9-a6f92c949f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.338 226310 DEBUG nova.network.os_vif_util [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:88:4b,bridge_name='br-int',has_traffic_filtering=True,id=4b3bde4e-a900-44fb-96a9-a6f92c949f67,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b3bde4e-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.339 226310 DEBUG os_vif [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:88:4b,bridge_name='br-int',has_traffic_filtering=True,id=4b3bde4e-a900-44fb-96a9-a6f92c949f67,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b3bde4e-a9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.339 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.340 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.341 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.346 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.347 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b3bde4e-a9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.348 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4b3bde4e-a9, col_values=(('external_ids', {'iface-id': '4b3bde4e-a900-44fb-96a9-a6f92c949f67', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:88:4b', 'vm-uuid': '091988cc-8042-4aa1-b909-5ca1744ff259'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:48 np0005539564 NetworkManager[48997]: <info>  [1764404388.3512] manager: (tap4b3bde4e-a9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/241)
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.350 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.353 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.357 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.359 226310 INFO os_vif [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:88:4b,bridge_name='br-int',has_traffic_filtering=True,id=4b3bde4e-a900-44fb-96a9-a6f92c949f67,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b3bde4e-a9')#033[00m
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.439 226310 DEBUG nova.virt.libvirt.driver [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.439 226310 DEBUG nova.virt.libvirt.driver [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.440 226310 DEBUG nova.virt.libvirt.driver [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] No VIF found with MAC fa:16:3e:70:88:4b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.440 226310 INFO nova.virt.libvirt.driver [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Using config drive#033[00m
Nov 29 03:19:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:48.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:48 np0005539564 nova_compute[226295]: 2025-11-29 08:19:48.479 226310 DEBUG nova.storage.rbd_utils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 091988cc-8042-4aa1-b909-5ca1744ff259_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:48.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:49 np0005539564 nova_compute[226295]: 2025-11-29 08:19:49.452 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:49 np0005539564 nova_compute[226295]: 2025-11-29 08:19:49.967 226310 DEBUG nova.compute.manager [req-2636bae9-8c78-4881-97e0-36610b882648 req-c30d4dd0-7b15-45e2-b7a7-9f7d48c2b464 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Received event network-vif-plugged-0ac7b30d-dad2-4718-b060-add6421b1065 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:49 np0005539564 nova_compute[226295]: 2025-11-29 08:19:49.968 226310 DEBUG oslo_concurrency.lockutils [req-2636bae9-8c78-4881-97e0-36610b882648 req-c30d4dd0-7b15-45e2-b7a7-9f7d48c2b464 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:49 np0005539564 nova_compute[226295]: 2025-11-29 08:19:49.968 226310 DEBUG oslo_concurrency.lockutils [req-2636bae9-8c78-4881-97e0-36610b882648 req-c30d4dd0-7b15-45e2-b7a7-9f7d48c2b464 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:49 np0005539564 nova_compute[226295]: 2025-11-29 08:19:49.968 226310 DEBUG oslo_concurrency.lockutils [req-2636bae9-8c78-4881-97e0-36610b882648 req-c30d4dd0-7b15-45e2-b7a7-9f7d48c2b464 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:49 np0005539564 nova_compute[226295]: 2025-11-29 08:19:49.969 226310 DEBUG nova.compute.manager [req-2636bae9-8c78-4881-97e0-36610b882648 req-c30d4dd0-7b15-45e2-b7a7-9f7d48c2b464 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] No waiting events found dispatching network-vif-plugged-0ac7b30d-dad2-4718-b060-add6421b1065 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:49 np0005539564 nova_compute[226295]: 2025-11-29 08:19:49.969 226310 WARNING nova.compute.manager [req-2636bae9-8c78-4881-97e0-36610b882648 req-c30d4dd0-7b15-45e2-b7a7-9f7d48c2b464 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Received unexpected event network-vif-plugged-0ac7b30d-dad2-4718-b060-add6421b1065 for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:19:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:50.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:50.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:50 np0005539564 nova_compute[226295]: 2025-11-29 08:19:50.904 226310 INFO nova.virt.libvirt.driver [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Creating config drive at /var/lib/nova/instances/091988cc-8042-4aa1-b909-5ca1744ff259/disk.config#033[00m
Nov 29 03:19:50 np0005539564 nova_compute[226295]: 2025-11-29 08:19:50.914 226310 DEBUG oslo_concurrency.processutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/091988cc-8042-4aa1-b909-5ca1744ff259/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpynlifsat execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:51 np0005539564 nova_compute[226295]: 2025-11-29 08:19:51.063 226310 DEBUG oslo_concurrency.processutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/091988cc-8042-4aa1-b909-5ca1744ff259/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpynlifsat" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:51 np0005539564 nova_compute[226295]: 2025-11-29 08:19:51.110 226310 DEBUG nova.storage.rbd_utils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 091988cc-8042-4aa1-b909-5ca1744ff259_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:51 np0005539564 nova_compute[226295]: 2025-11-29 08:19:51.116 226310 DEBUG oslo_concurrency.processutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/091988cc-8042-4aa1-b909-5ca1744ff259/disk.config 091988cc-8042-4aa1-b909-5ca1744ff259_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:51 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:51 np0005539564 nova_compute[226295]: 2025-11-29 08:19:51.321 226310 DEBUG oslo_concurrency.processutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/091988cc-8042-4aa1-b909-5ca1744ff259/disk.config 091988cc-8042-4aa1-b909-5ca1744ff259_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:51 np0005539564 nova_compute[226295]: 2025-11-29 08:19:51.323 226310 INFO nova.virt.libvirt.driver [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Deleting local config drive /var/lib/nova/instances/091988cc-8042-4aa1-b909-5ca1744ff259/disk.config because it was imported into RBD.#033[00m
Nov 29 03:19:51 np0005539564 nova_compute[226295]: 2025-11-29 08:19:51.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:51 np0005539564 kernel: tap4b3bde4e-a9: entered promiscuous mode
Nov 29 03:19:51 np0005539564 NetworkManager[48997]: <info>  [1764404391.3989] manager: (tap4b3bde4e-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/242)
Nov 29 03:19:51 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:51Z|00490|binding|INFO|Claiming lport 4b3bde4e-a900-44fb-96a9-a6f92c949f67 for this chassis.
Nov 29 03:19:51 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:51Z|00491|binding|INFO|4b3bde4e-a900-44fb-96a9-a6f92c949f67: Claiming fa:16:3e:70:88:4b 10.100.0.8
Nov 29 03:19:51 np0005539564 nova_compute[226295]: 2025-11-29 08:19:51.408 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:51.416 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:88:4b 10.100.0.8'], port_security=['fa:16:3e:70:88:4b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '091988cc-8042-4aa1-b909-5ca1744ff259', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9d4c81989d641678300c7a1c173a2c2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '96ed61db-551b-4509-9cdf-2499e8e15e48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9df4a573-88b3-436c-84aa-f335285d9a2a, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=4b3bde4e-a900-44fb-96a9-a6f92c949f67) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:51.418 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 4b3bde4e-a900-44fb-96a9-a6f92c949f67 in datapath 3c25940b-e63b-4443-a94b-0216a35e8dc6 bound to our chassis#033[00m
Nov 29 03:19:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:51.419 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c25940b-e63b-4443-a94b-0216a35e8dc6#033[00m
Nov 29 03:19:51 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:51Z|00492|binding|INFO|Setting lport 4b3bde4e-a900-44fb-96a9-a6f92c949f67 ovn-installed in OVS
Nov 29 03:19:51 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:51Z|00493|binding|INFO|Setting lport 4b3bde4e-a900-44fb-96a9-a6f92c949f67 up in Southbound
Nov 29 03:19:51 np0005539564 nova_compute[226295]: 2025-11-29 08:19:51.436 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:51 np0005539564 nova_compute[226295]: 2025-11-29 08:19:51.440 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:51.446 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[53d8ea9a-83da-4639-9e91-3597673f83c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:51 np0005539564 systemd-machined[190128]: New machine qemu-62-instance-0000008a.
Nov 29 03:19:51 np0005539564 systemd[1]: Started Virtual Machine qemu-62-instance-0000008a.
Nov 29 03:19:51 np0005539564 systemd-udevd[277017]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:19:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:51.491 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[d3325d09-fd67-4f6f-9510-ceb7d90078c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:51.496 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[80e3d7a5-996e-405d-b9e3-45232baba8f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:51 np0005539564 NetworkManager[48997]: <info>  [1764404391.4994] device (tap4b3bde4e-a9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:19:51 np0005539564 NetworkManager[48997]: <info>  [1764404391.5027] device (tap4b3bde4e-a9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:19:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:51.547 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[a8c4044f-cf99-4c88-8793-08e9985d270c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:51.581 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[306d7956-f397-4c5c-97c9-d30bf39337a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c25940b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:38:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724265, 'reachable_time': 18141, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277027, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:51.606 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4db79111-dc47-4846-9ffb-a36f6620fe0a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3c25940b-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 724279, 'tstamp': 724279}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277029, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3c25940b-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 724282, 'tstamp': 724282}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277029, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:51.609 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c25940b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:51 np0005539564 nova_compute[226295]: 2025-11-29 08:19:51.611 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:51 np0005539564 nova_compute[226295]: 2025-11-29 08:19:51.612 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:51.612 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c25940b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:51.613 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:19:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:51.613 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c25940b-e0, col_values=(('external_ids', {'iface-id': '9da51447-ee5a-4659-ba78-deb4b11b4098'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:51.614 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:19:51 np0005539564 nova_compute[226295]: 2025-11-29 08:19:51.939 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404391.9384248, 091988cc-8042-4aa1-b909-5ca1744ff259 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:51 np0005539564 nova_compute[226295]: 2025-11-29 08:19:51.939 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] VM Started (Lifecycle Event)#033[00m
Nov 29 03:19:51 np0005539564 nova_compute[226295]: 2025-11-29 08:19:51.972 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:51 np0005539564 nova_compute[226295]: 2025-11-29 08:19:51.975 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404391.9425259, 091988cc-8042-4aa1-b909-5ca1744ff259 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:51 np0005539564 nova_compute[226295]: 2025-11-29 08:19:51.975 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:19:51 np0005539564 nova_compute[226295]: 2025-11-29 08:19:51.994 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:51 np0005539564 nova_compute[226295]: 2025-11-29 08:19:51.997 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.018 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.093 226310 DEBUG nova.compute.manager [req-db5a827c-6ba1-48a5-928d-289eb5ac173c req-9919e2e6-33ae-4e53-bddb-c781004308df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Received event network-vif-plugged-4b3bde4e-a900-44fb-96a9-a6f92c949f67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.093 226310 DEBUG oslo_concurrency.lockutils [req-db5a827c-6ba1-48a5-928d-289eb5ac173c req-9919e2e6-33ae-4e53-bddb-c781004308df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "091988cc-8042-4aa1-b909-5ca1744ff259-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.093 226310 DEBUG oslo_concurrency.lockutils [req-db5a827c-6ba1-48a5-928d-289eb5ac173c req-9919e2e6-33ae-4e53-bddb-c781004308df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "091988cc-8042-4aa1-b909-5ca1744ff259-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.093 226310 DEBUG oslo_concurrency.lockutils [req-db5a827c-6ba1-48a5-928d-289eb5ac173c req-9919e2e6-33ae-4e53-bddb-c781004308df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "091988cc-8042-4aa1-b909-5ca1744ff259-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.094 226310 DEBUG nova.compute.manager [req-db5a827c-6ba1-48a5-928d-289eb5ac173c req-9919e2e6-33ae-4e53-bddb-c781004308df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Processing event network-vif-plugged-4b3bde4e-a900-44fb-96a9-a6f92c949f67 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.094 226310 DEBUG nova.compute.manager [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.098 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404392.098259, 091988cc-8042-4aa1-b909-5ca1744ff259 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.098 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.100 226310 DEBUG nova.virt.libvirt.driver [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.103 226310 INFO nova.virt.libvirt.driver [-] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Instance spawned successfully.#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.103 226310 DEBUG nova.virt.libvirt.driver [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.117 226310 DEBUG nova.network.neutron [None req-b383cbd1-ddfe-4148-a1d6-2834eaeb3b16 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Updating instance_info_cache with network_info: [{"id": "0ac7b30d-dad2-4718-b060-add6421b1065", "address": "fa:16:3e:d4:9e:3a", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ac7b30d-da", "ovs_interfaceid": "0ac7b30d-dad2-4718-b060-add6421b1065", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.128 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.131 226310 DEBUG nova.virt.libvirt.driver [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.131 226310 DEBUG nova.virt.libvirt.driver [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.131 226310 DEBUG nova.virt.libvirt.driver [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.132 226310 DEBUG nova.virt.libvirt.driver [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.132 226310 DEBUG nova.virt.libvirt.driver [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.133 226310 DEBUG nova.virt.libvirt.driver [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.136 226310 DEBUG oslo_concurrency.lockutils [None req-b383cbd1-ddfe-4148-a1d6-2834eaeb3b16 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Releasing lock "refresh_cache-38bba84b-1fb0-460a-a6aa-707ef29970b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.137 226310 DEBUG nova.objects.instance [None req-b383cbd1-ddfe-4148-a1d6-2834eaeb3b16 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'flavor' on Instance uuid 38bba84b-1fb0-460a-a6aa-707ef29970b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.140 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.183 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.206 226310 INFO nova.compute.manager [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Took 7.98 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.207 226310 DEBUG nova.compute.manager [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:52 np0005539564 kernel: tap0ac7b30d-da (unregistering): left promiscuous mode
Nov 29 03:19:52 np0005539564 NetworkManager[48997]: <info>  [1764404392.2173] device (tap0ac7b30d-da): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.221 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:52 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:52Z|00494|binding|INFO|Releasing lport 0ac7b30d-dad2-4718-b060-add6421b1065 from this chassis (sb_readonly=0)
Nov 29 03:19:52 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:52Z|00495|binding|INFO|Setting lport 0ac7b30d-dad2-4718-b060-add6421b1065 down in Southbound
Nov 29 03:19:52 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:52Z|00496|binding|INFO|Removing iface tap0ac7b30d-da ovn-installed in OVS
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.225 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.229 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:9e:3a 10.100.0.10'], port_security=['fa:16:3e:d4:9e:3a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '38bba84b-1fb0-460a-a6aa-707ef29970b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32485b0e-177b-4dfd-a55a-0249528f32e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '358970eca7ad4b05b70f43e5507ac052', 'neutron:revision_number': '6', 'neutron:security_group_ids': '33616c4d-f137-4188-9923-071fd3df21bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83a2eb53-2a5d-447d-a36c-4b9c2b295f15, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=0ac7b30d-dad2-4718-b060-add6421b1065) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.230 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 0ac7b30d-dad2-4718-b060-add6421b1065 in datapath 32485b0e-177b-4dfd-a55a-0249528f32e1 unbound from our chassis#033[00m
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.231 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 32485b0e-177b-4dfd-a55a-0249528f32e1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.232 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1d8dc961-5ab7-4527-9d15-4a8b21b94c15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.232 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 namespace which is not needed anymore#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.243 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:52 np0005539564 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000087.scope: Deactivated successfully.
Nov 29 03:19:52 np0005539564 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000087.scope: Consumed 6.576s CPU time.
Nov 29 03:19:52 np0005539564 systemd-machined[190128]: Machine qemu-61-instance-00000087 terminated.
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.290 226310 INFO nova.compute.manager [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Took 9.16 seconds to build instance.#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.312 226310 DEBUG oslo_concurrency.lockutils [None req-c513cc65-0d9e-4c46-b7db-45a6b2942b77 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "091988cc-8042-4aa1-b909-5ca1744ff259" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.421s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:52 np0005539564 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[276863]: [NOTICE]   (276867) : haproxy version is 2.8.14-c23fe91
Nov 29 03:19:52 np0005539564 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[276863]: [NOTICE]   (276867) : path to executable is /usr/sbin/haproxy
Nov 29 03:19:52 np0005539564 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[276863]: [WARNING]  (276867) : Exiting Master process...
Nov 29 03:19:52 np0005539564 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[276863]: [ALERT]    (276867) : Current worker (276869) exited with code 143 (Terminated)
Nov 29 03:19:52 np0005539564 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[276863]: [WARNING]  (276867) : All workers exited. Exiting... (0)
Nov 29 03:19:52 np0005539564 systemd[1]: libpod-f3fddb4f694e927daf3ac0641b7b6a9d36dfad6768ee0f85081583a38f4354bc.scope: Deactivated successfully.
Nov 29 03:19:52 np0005539564 podman[277093]: 2025-11-29 08:19:52.388057847 +0000 UTC m=+0.051225956 container died f3fddb4f694e927daf3ac0641b7b6a9d36dfad6768ee0f85081583a38f4354bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.402 226310 INFO nova.virt.libvirt.driver [-] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Instance destroyed successfully.#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.402 226310 DEBUG nova.objects.instance [None req-b383cbd1-ddfe-4148-a1d6-2834eaeb3b16 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'numa_topology' on Instance uuid 38bba84b-1fb0-460a-a6aa-707ef29970b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:52 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f3fddb4f694e927daf3ac0641b7b6a9d36dfad6768ee0f85081583a38f4354bc-userdata-shm.mount: Deactivated successfully.
Nov 29 03:19:52 np0005539564 systemd[1]: var-lib-containers-storage-overlay-fe68cb2ba6ccb5ed885b60846351189e4f43ecc52d86b085b1fca1a0317a9f16-merged.mount: Deactivated successfully.
Nov 29 03:19:52 np0005539564 podman[277093]: 2025-11-29 08:19:52.447997747 +0000 UTC m=+0.111165826 container cleanup f3fddb4f694e927daf3ac0641b7b6a9d36dfad6768ee0f85081583a38f4354bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:19:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:52.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:52 np0005539564 systemd[1]: libpod-conmon-f3fddb4f694e927daf3ac0641b7b6a9d36dfad6768ee0f85081583a38f4354bc.scope: Deactivated successfully.
Nov 29 03:19:52 np0005539564 kernel: tap0ac7b30d-da: entered promiscuous mode
Nov 29 03:19:52 np0005539564 NetworkManager[48997]: <info>  [1764404392.5125] manager: (tap0ac7b30d-da): new Tun device (/org/freedesktop/NetworkManager/Devices/243)
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.514 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:52 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:52Z|00497|binding|INFO|Claiming lport 0ac7b30d-dad2-4718-b060-add6421b1065 for this chassis.
Nov 29 03:19:52 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:52Z|00498|binding|INFO|0ac7b30d-dad2-4718-b060-add6421b1065: Claiming fa:16:3e:d4:9e:3a 10.100.0.10
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.521 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:9e:3a 10.100.0.10'], port_security=['fa:16:3e:d4:9e:3a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '38bba84b-1fb0-460a-a6aa-707ef29970b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32485b0e-177b-4dfd-a55a-0249528f32e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '358970eca7ad4b05b70f43e5507ac052', 'neutron:revision_number': '6', 'neutron:security_group_ids': '33616c4d-f137-4188-9923-071fd3df21bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83a2eb53-2a5d-447d-a36c-4b9c2b295f15, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=0ac7b30d-dad2-4718-b060-add6421b1065) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:52 np0005539564 NetworkManager[48997]: <info>  [1764404392.5284] device (tap0ac7b30d-da): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:19:52 np0005539564 NetworkManager[48997]: <info>  [1764404392.5290] device (tap0ac7b30d-da): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:19:52 np0005539564 podman[277131]: 2025-11-29 08:19:52.532329667 +0000 UTC m=+0.049613832 container remove f3fddb4f694e927daf3ac0641b7b6a9d36dfad6768ee0f85081583a38f4354bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:19:52 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:52Z|00499|binding|INFO|Setting lport 0ac7b30d-dad2-4718-b060-add6421b1065 ovn-installed in OVS
Nov 29 03:19:52 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:52Z|00500|binding|INFO|Setting lport 0ac7b30d-dad2-4718-b060-add6421b1065 up in Southbound
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.537 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.539 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.543 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[230c3533-e276-4bec-b750-2e5eaece5167]: (4, ('Sat Nov 29 08:19:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 (f3fddb4f694e927daf3ac0641b7b6a9d36dfad6768ee0f85081583a38f4354bc)\nf3fddb4f694e927daf3ac0641b7b6a9d36dfad6768ee0f85081583a38f4354bc\nSat Nov 29 08:19:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 (f3fddb4f694e927daf3ac0641b7b6a9d36dfad6768ee0f85081583a38f4354bc)\nf3fddb4f694e927daf3ac0641b7b6a9d36dfad6768ee0f85081583a38f4354bc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.547 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.547 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[742c2a7a-a1e8-4707-9aee-f8e6f42b8f76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.548 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32485b0e-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.549 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:52 np0005539564 kernel: tap32485b0e-10: left promiscuous mode
Nov 29 03:19:52 np0005539564 systemd-machined[190128]: New machine qemu-63-instance-00000087.
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.554 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.557 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7076b375-5b46-4baf-8f8d-58efcbd644fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:52 np0005539564 systemd[1]: Started Virtual Machine qemu-63-instance-00000087.
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.571 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0475eec5-1353-4dce-b6cc-8dec35fdb262]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.572 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.574 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9614bbfe-c2bd-4787-9c11-68e70c2bcf80]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.588 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[99fdb4ae-1cb1-4590-8ea5-129781c120ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736036, 'reachable_time': 42195, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277157, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.591 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.591 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[be8a688d-d0d5-435d-9246-4aa8b1f6e0d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.592 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 0ac7b30d-dad2-4718-b060-add6421b1065 in datapath 32485b0e-177b-4dfd-a55a-0249528f32e1 unbound from our chassis#033[00m
Nov 29 03:19:52 np0005539564 systemd[1]: run-netns-ovnmeta\x2d32485b0e\x2d177b\x2d4dfd\x2da55a\x2d0249528f32e1.mount: Deactivated successfully.
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.593 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 32485b0e-177b-4dfd-a55a-0249528f32e1#033[00m
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.610 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[49dd92a6-f115-4d1d-937a-b3a14bac3471]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.611 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap32485b0e-11 in ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.613 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap32485b0e-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.613 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9db35922-ac90-4933-ae01-34aef63b6602]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.614 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a026297e-0ee9-4255-8855-753e0dcb8637]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.628 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[359b3206-8b6b-47ad-8ccc-59a234ce07e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.667 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d3912526-0076-4f91-bd80-22fc579f0f00]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.714 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[bc21dfc6-e4c8-4151-9bbb-55f69a0c7bd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.723 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7c691547-5e0b-443e-83b8-a1532648c6cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:52 np0005539564 NetworkManager[48997]: <info>  [1764404392.7249] manager: (tap32485b0e-10): new Veth device (/org/freedesktop/NetworkManager/Devices/244)
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.786 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[edcfc637-470a-4be3-a434-87daeaf92ab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.791 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[28a8a29c-cada-4948-b8f9-723a62f0885f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:52 np0005539564 NetworkManager[48997]: <info>  [1764404392.8189] device (tap32485b0e-10): carrier: link connected
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.827 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[68e14b08-5a96-40ec-95a0-63ea5c35f0ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.847 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[136f8190-7e35-4ee4-a3ac-368801a084cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32485b0e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:44:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736750, 'reachable_time': 38892, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277188, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.871 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd3df61-5c67-405b-9867-b0809161b80f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea4:4406'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736750, 'tstamp': 736750}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277189, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.878 226310 DEBUG nova.network.neutron [req-8608e70b-5ee6-4330-ad59-9e96312c3125 req-6c1f3f0a-b1d6-4376-b3e0-357dbf9f3a7a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Updated VIF entry in instance network info cache for port 4b3bde4e-a900-44fb-96a9-a6f92c949f67. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.879 226310 DEBUG nova.network.neutron [req-8608e70b-5ee6-4330-ad59-9e96312c3125 req-6c1f3f0a-b1d6-4376-b3e0-357dbf9f3a7a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Updating instance_info_cache with network_info: [{"id": "4b3bde4e-a900-44fb-96a9-a6f92c949f67", "address": "fa:16:3e:70:88:4b", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b3bde4e-a9", "ovs_interfaceid": "4b3bde4e-a900-44fb-96a9-a6f92c949f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:19:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:52.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.897 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e8f7f31a-a65e-49f4-8b23-ce6117224053]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32485b0e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:44:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736750, 'reachable_time': 38892, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277190, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:52 np0005539564 nova_compute[226295]: 2025-11-29 08:19:52.907 226310 DEBUG oslo_concurrency.lockutils [req-8608e70b-5ee6-4330-ad59-9e96312c3125 req-6c1f3f0a-b1d6-4376-b3e0-357dbf9f3a7a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-091988cc-8042-4aa1-b909-5ca1744ff259" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:19:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:52.943 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[80203632-f569-4426-a033-fac74e76bec3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:53.035 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[27a200a9-8d2e-4ed9-b339-8dfb15db9ee2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:53.036 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32485b0e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:53.036 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:53.037 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32485b0e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:53 np0005539564 kernel: tap32485b0e-10: entered promiscuous mode
Nov 29 03:19:53 np0005539564 NetworkManager[48997]: <info>  [1764404393.0400] manager: (tap32485b0e-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/245)
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:53.042 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap32485b0e-10, col_values=(('external_ids', {'iface-id': '6711ba96-49f0-431a-a4d5-64f9cee27708'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:53 np0005539564 ovn_controller[130591]: 2025-11-29T08:19:53Z|00501|binding|INFO|Releasing lport 6711ba96-49f0-431a-a4d5-64f9cee27708 from this chassis (sb_readonly=0)
Nov 29 03:19:53 np0005539564 nova_compute[226295]: 2025-11-29 08:19:53.038 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:53 np0005539564 nova_compute[226295]: 2025-11-29 08:19:53.059 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:53.060 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/32485b0e-177b-4dfd-a55a-0249528f32e1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/32485b0e-177b-4dfd-a55a-0249528f32e1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:53.061 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[32ec6f4b-beaf-4cbf-9bed-483879abfecd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:53.062 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-32485b0e-177b-4dfd-a55a-0249528f32e1
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/32485b0e-177b-4dfd-a55a-0249528f32e1.pid.haproxy
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 32485b0e-177b-4dfd-a55a-0249528f32e1
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:19:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:19:53.062 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'env', 'PROCESS_TAG=haproxy-32485b0e-177b-4dfd-a55a-0249528f32e1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/32485b0e-177b-4dfd-a55a-0249528f32e1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:19:53 np0005539564 nova_compute[226295]: 2025-11-29 08:19:53.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:53 np0005539564 nova_compute[226295]: 2025-11-29 08:19:53.350 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:53 np0005539564 podman[277222]: 2025-11-29 08:19:53.565271673 +0000 UTC m=+0.073678773 container create 698099a571e9c7b1f9f37a546423a0c38a40c04fbfafea5af7ba156d8322bb01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:19:53 np0005539564 systemd[1]: Started libpod-conmon-698099a571e9c7b1f9f37a546423a0c38a40c04fbfafea5af7ba156d8322bb01.scope.
Nov 29 03:19:53 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:19:53 np0005539564 podman[277222]: 2025-11-29 08:19:53.53038515 +0000 UTC m=+0.038792360 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:19:53 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/accbc07cc061d345538754dd6f169fe186b2d125b8a88faf001ec8a2b791ca40/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:53 np0005539564 podman[277222]: 2025-11-29 08:19:53.651027561 +0000 UTC m=+0.159434701 container init 698099a571e9c7b1f9f37a546423a0c38a40c04fbfafea5af7ba156d8322bb01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 03:19:53 np0005539564 podman[277222]: 2025-11-29 08:19:53.66022704 +0000 UTC m=+0.168634140 container start 698099a571e9c7b1f9f37a546423a0c38a40c04fbfafea5af7ba156d8322bb01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:19:53 np0005539564 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[277237]: [NOTICE]   (277257) : New worker (277265) forked
Nov 29 03:19:53 np0005539564 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[277237]: [NOTICE]   (277257) : Loading success.
Nov 29 03:19:53 np0005539564 nova_compute[226295]: 2025-11-29 08:19:53.840 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Removed pending event for 38bba84b-1fb0-460a-a6aa-707ef29970b2 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:19:53 np0005539564 nova_compute[226295]: 2025-11-29 08:19:53.842 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404393.840238, 38bba84b-1fb0-460a-a6aa-707ef29970b2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:53 np0005539564 nova_compute[226295]: 2025-11-29 08:19:53.842 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:19:53 np0005539564 nova_compute[226295]: 2025-11-29 08:19:53.868 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:53 np0005539564 nova_compute[226295]: 2025-11-29 08:19:53.896 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:19:53 np0005539564 nova_compute[226295]: 2025-11-29 08:19:53.937 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:19:53 np0005539564 nova_compute[226295]: 2025-11-29 08:19:53.938 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404393.8417947, 38bba84b-1fb0-460a-a6aa-707ef29970b2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:53 np0005539564 nova_compute[226295]: 2025-11-29 08:19:53.938 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] VM Started (Lifecycle Event)#033[00m
Nov 29 03:19:53 np0005539564 nova_compute[226295]: 2025-11-29 08:19:53.963 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:53 np0005539564 nova_compute[226295]: 2025-11-29 08:19:53.967 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:19:53 np0005539564 nova_compute[226295]: 2025-11-29 08:19:53.982 226310 DEBUG nova.compute.manager [req-dca47e58-457a-4d7f-ab3c-37ae6013859e req-b2063879-7fd5-422b-a667-ae41b6bb2098 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Received event network-vif-unplugged-0ac7b30d-dad2-4718-b060-add6421b1065 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:53 np0005539564 nova_compute[226295]: 2025-11-29 08:19:53.982 226310 DEBUG oslo_concurrency.lockutils [req-dca47e58-457a-4d7f-ab3c-37ae6013859e req-b2063879-7fd5-422b-a667-ae41b6bb2098 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:53 np0005539564 nova_compute[226295]: 2025-11-29 08:19:53.983 226310 DEBUG oslo_concurrency.lockutils [req-dca47e58-457a-4d7f-ab3c-37ae6013859e req-b2063879-7fd5-422b-a667-ae41b6bb2098 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:53 np0005539564 nova_compute[226295]: 2025-11-29 08:19:53.983 226310 DEBUG oslo_concurrency.lockutils [req-dca47e58-457a-4d7f-ab3c-37ae6013859e req-b2063879-7fd5-422b-a667-ae41b6bb2098 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:53 np0005539564 nova_compute[226295]: 2025-11-29 08:19:53.984 226310 DEBUG nova.compute.manager [req-dca47e58-457a-4d7f-ab3c-37ae6013859e req-b2063879-7fd5-422b-a667-ae41b6bb2098 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] No waiting events found dispatching network-vif-unplugged-0ac7b30d-dad2-4718-b060-add6421b1065 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:53 np0005539564 nova_compute[226295]: 2025-11-29 08:19:53.984 226310 WARNING nova.compute.manager [req-dca47e58-457a-4d7f-ab3c-37ae6013859e req-b2063879-7fd5-422b-a667-ae41b6bb2098 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Received unexpected event network-vif-unplugged-0ac7b30d-dad2-4718-b060-add6421b1065 for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:19:53 np0005539564 nova_compute[226295]: 2025-11-29 08:19:53.990 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:19:54 np0005539564 nova_compute[226295]: 2025-11-29 08:19:54.172 226310 DEBUG nova.compute.manager [req-912a3917-675f-480a-b798-9ef6189fc550 req-a72051c1-8f1a-4e80-a617-dbb140596823 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Received event network-vif-plugged-4b3bde4e-a900-44fb-96a9-a6f92c949f67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:54 np0005539564 nova_compute[226295]: 2025-11-29 08:19:54.174 226310 DEBUG oslo_concurrency.lockutils [req-912a3917-675f-480a-b798-9ef6189fc550 req-a72051c1-8f1a-4e80-a617-dbb140596823 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "091988cc-8042-4aa1-b909-5ca1744ff259-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:54 np0005539564 nova_compute[226295]: 2025-11-29 08:19:54.174 226310 DEBUG oslo_concurrency.lockutils [req-912a3917-675f-480a-b798-9ef6189fc550 req-a72051c1-8f1a-4e80-a617-dbb140596823 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "091988cc-8042-4aa1-b909-5ca1744ff259-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:54 np0005539564 nova_compute[226295]: 2025-11-29 08:19:54.175 226310 DEBUG oslo_concurrency.lockutils [req-912a3917-675f-480a-b798-9ef6189fc550 req-a72051c1-8f1a-4e80-a617-dbb140596823 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "091988cc-8042-4aa1-b909-5ca1744ff259-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:54 np0005539564 nova_compute[226295]: 2025-11-29 08:19:54.176 226310 DEBUG nova.compute.manager [req-912a3917-675f-480a-b798-9ef6189fc550 req-a72051c1-8f1a-4e80-a617-dbb140596823 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] No waiting events found dispatching network-vif-plugged-4b3bde4e-a900-44fb-96a9-a6f92c949f67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:54 np0005539564 nova_compute[226295]: 2025-11-29 08:19:54.176 226310 WARNING nova.compute.manager [req-912a3917-675f-480a-b798-9ef6189fc550 req-a72051c1-8f1a-4e80-a617-dbb140596823 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Received unexpected event network-vif-plugged-4b3bde4e-a900-44fb-96a9-a6f92c949f67 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:19:54 np0005539564 nova_compute[226295]: 2025-11-29 08:19:54.288 226310 DEBUG nova.compute.manager [None req-b383cbd1-ddfe-4148-a1d6-2834eaeb3b16 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:54 np0005539564 nova_compute[226295]: 2025-11-29 08:19:54.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:54 np0005539564 nova_compute[226295]: 2025-11-29 08:19:54.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:54 np0005539564 nova_compute[226295]: 2025-11-29 08:19:54.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:19:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:54.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:54.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:55 np0005539564 nova_compute[226295]: 2025-11-29 08:19:55.538 226310 DEBUG nova.compute.manager [None req-ac776bf1-2f2d-4ce5-9060-57748b23ac6f 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:19:55 np0005539564 nova_compute[226295]: 2025-11-29 08:19:55.582 226310 INFO nova.compute.manager [None req-ac776bf1-2f2d-4ce5-9060-57748b23ac6f 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] instance snapshotting
Nov 29 03:19:55 np0005539564 nova_compute[226295]: 2025-11-29 08:19:55.976 226310 INFO nova.virt.libvirt.driver [None req-ac776bf1-2f2d-4ce5-9060-57748b23ac6f 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Beginning live snapshot process
Nov 29 03:19:56 np0005539564 nova_compute[226295]: 2025-11-29 08:19:56.056 226310 DEBUG nova.compute.manager [req-983e0b44-e07e-4cdc-b336-1dd39c092d10 req-b587183d-56b4-43a0-a5f0-ac5786bc95d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Received event network-vif-plugged-0ac7b30d-dad2-4718-b060-add6421b1065 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:19:56 np0005539564 nova_compute[226295]: 2025-11-29 08:19:56.056 226310 DEBUG oslo_concurrency.lockutils [req-983e0b44-e07e-4cdc-b336-1dd39c092d10 req-b587183d-56b4-43a0-a5f0-ac5786bc95d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:56 np0005539564 nova_compute[226295]: 2025-11-29 08:19:56.057 226310 DEBUG oslo_concurrency.lockutils [req-983e0b44-e07e-4cdc-b336-1dd39c092d10 req-b587183d-56b4-43a0-a5f0-ac5786bc95d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:56 np0005539564 nova_compute[226295]: 2025-11-29 08:19:56.058 226310 DEBUG oslo_concurrency.lockutils [req-983e0b44-e07e-4cdc-b336-1dd39c092d10 req-b587183d-56b4-43a0-a5f0-ac5786bc95d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:19:56 np0005539564 nova_compute[226295]: 2025-11-29 08:19:56.058 226310 DEBUG nova.compute.manager [req-983e0b44-e07e-4cdc-b336-1dd39c092d10 req-b587183d-56b4-43a0-a5f0-ac5786bc95d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] No waiting events found dispatching network-vif-plugged-0ac7b30d-dad2-4718-b060-add6421b1065 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:19:56 np0005539564 nova_compute[226295]: 2025-11-29 08:19:56.059 226310 WARNING nova.compute.manager [req-983e0b44-e07e-4cdc-b336-1dd39c092d10 req-b587183d-56b4-43a0-a5f0-ac5786bc95d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Received unexpected event network-vif-plugged-0ac7b30d-dad2-4718-b060-add6421b1065 for instance with vm_state active and task_state None.
Nov 29 03:19:56 np0005539564 nova_compute[226295]: 2025-11-29 08:19:56.060 226310 DEBUG nova.compute.manager [req-983e0b44-e07e-4cdc-b336-1dd39c092d10 req-b587183d-56b4-43a0-a5f0-ac5786bc95d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Received event network-vif-plugged-0ac7b30d-dad2-4718-b060-add6421b1065 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:19:56 np0005539564 nova_compute[226295]: 2025-11-29 08:19:56.060 226310 DEBUG oslo_concurrency.lockutils [req-983e0b44-e07e-4cdc-b336-1dd39c092d10 req-b587183d-56b4-43a0-a5f0-ac5786bc95d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:56 np0005539564 nova_compute[226295]: 2025-11-29 08:19:56.061 226310 DEBUG oslo_concurrency.lockutils [req-983e0b44-e07e-4cdc-b336-1dd39c092d10 req-b587183d-56b4-43a0-a5f0-ac5786bc95d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:56 np0005539564 nova_compute[226295]: 2025-11-29 08:19:56.061 226310 DEBUG oslo_concurrency.lockutils [req-983e0b44-e07e-4cdc-b336-1dd39c092d10 req-b587183d-56b4-43a0-a5f0-ac5786bc95d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:19:56 np0005539564 nova_compute[226295]: 2025-11-29 08:19:56.062 226310 DEBUG nova.compute.manager [req-983e0b44-e07e-4cdc-b336-1dd39c092d10 req-b587183d-56b4-43a0-a5f0-ac5786bc95d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] No waiting events found dispatching network-vif-plugged-0ac7b30d-dad2-4718-b060-add6421b1065 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:19:56 np0005539564 nova_compute[226295]: 2025-11-29 08:19:56.062 226310 WARNING nova.compute.manager [req-983e0b44-e07e-4cdc-b336-1dd39c092d10 req-b587183d-56b4-43a0-a5f0-ac5786bc95d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Received unexpected event network-vif-plugged-0ac7b30d-dad2-4718-b060-add6421b1065 for instance with vm_state active and task_state None.
Nov 29 03:19:56 np0005539564 nova_compute[226295]: 2025-11-29 08:19:56.065 226310 DEBUG nova.compute.manager [req-983e0b44-e07e-4cdc-b336-1dd39c092d10 req-b587183d-56b4-43a0-a5f0-ac5786bc95d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Received event network-vif-plugged-0ac7b30d-dad2-4718-b060-add6421b1065 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:19:56 np0005539564 nova_compute[226295]: 2025-11-29 08:19:56.066 226310 DEBUG oslo_concurrency.lockutils [req-983e0b44-e07e-4cdc-b336-1dd39c092d10 req-b587183d-56b4-43a0-a5f0-ac5786bc95d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:56 np0005539564 nova_compute[226295]: 2025-11-29 08:19:56.067 226310 DEBUG oslo_concurrency.lockutils [req-983e0b44-e07e-4cdc-b336-1dd39c092d10 req-b587183d-56b4-43a0-a5f0-ac5786bc95d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:56 np0005539564 nova_compute[226295]: 2025-11-29 08:19:56.068 226310 DEBUG oslo_concurrency.lockutils [req-983e0b44-e07e-4cdc-b336-1dd39c092d10 req-b587183d-56b4-43a0-a5f0-ac5786bc95d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:19:56 np0005539564 nova_compute[226295]: 2025-11-29 08:19:56.068 226310 DEBUG nova.compute.manager [req-983e0b44-e07e-4cdc-b336-1dd39c092d10 req-b587183d-56b4-43a0-a5f0-ac5786bc95d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] No waiting events found dispatching network-vif-plugged-0ac7b30d-dad2-4718-b060-add6421b1065 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:19:56 np0005539564 nova_compute[226295]: 2025-11-29 08:19:56.068 226310 WARNING nova.compute.manager [req-983e0b44-e07e-4cdc-b336-1dd39c092d10 req-b587183d-56b4-43a0-a5f0-ac5786bc95d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Received unexpected event network-vif-plugged-0ac7b30d-dad2-4718-b060-add6421b1065 for instance with vm_state active and task_state None.
Nov 29 03:19:56 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:56 np0005539564 nova_compute[226295]: 2025-11-29 08:19:56.220 226310 DEBUG nova.virt.libvirt.imagebackend [None req-ac776bf1-2f2d-4ce5-9060-57748b23ac6f 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 03:19:56 np0005539564 nova_compute[226295]: 2025-11-29 08:19:56.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:19:56 np0005539564 nova_compute[226295]: 2025-11-29 08:19:56.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:19:56 np0005539564 nova_compute[226295]: 2025-11-29 08:19:56.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:19:56 np0005539564 nova_compute[226295]: 2025-11-29 08:19:56.438 226310 DEBUG nova.storage.rbd_utils [None req-ac776bf1-2f2d-4ce5-9060-57748b23ac6f 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] creating snapshot(62e7205050a94f00a3aec69e998e18b8) on rbd image(091988cc-8042-4aa1-b909-5ca1744ff259_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:19:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:56.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:56 np0005539564 nova_compute[226295]: 2025-11-29 08:19:56.835 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-0b082cd2-d1d3-4577-be0a-30b9256a223e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:19:56 np0005539564 nova_compute[226295]: 2025-11-29 08:19:56.835 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-0b082cd2-d1d3-4577-be0a-30b9256a223e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:19:56 np0005539564 nova_compute[226295]: 2025-11-29 08:19:56.836 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:19:56 np0005539564 nova_compute[226295]: 2025-11-29 08:19:56.836 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0b082cd2-d1d3-4577-be0a-30b9256a223e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:19:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:56.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e317 e317: 3 total, 3 up, 3 in
Nov 29 03:19:57 np0005539564 nova_compute[226295]: 2025-11-29 08:19:57.077 226310 DEBUG nova.storage.rbd_utils [None req-ac776bf1-2f2d-4ce5-9060-57748b23ac6f 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] cloning vms/091988cc-8042-4aa1-b909-5ca1744ff259_disk@62e7205050a94f00a3aec69e998e18b8 to images/bf0aed8c-3418-4a5d-ab36-68410b6b956c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 03:19:57 np0005539564 nova_compute[226295]: 2025-11-29 08:19:57.246 226310 DEBUG nova.storage.rbd_utils [None req-ac776bf1-2f2d-4ce5-9060-57748b23ac6f 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] flattening images/bf0aed8c-3418-4a5d-ab36-68410b6b956c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 03:19:57 np0005539564 nova_compute[226295]: 2025-11-29 08:19:57.550 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:57 np0005539564 nova_compute[226295]: 2025-11-29 08:19:57.562 226310 DEBUG nova.storage.rbd_utils [None req-ac776bf1-2f2d-4ce5-9060-57748b23ac6f 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] removing snapshot(62e7205050a94f00a3aec69e998e18b8) on rbd image(091988cc-8042-4aa1-b909-5ca1744ff259_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 03:19:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e318 e318: 3 total, 3 up, 3 in
Nov 29 03:19:58 np0005539564 nova_compute[226295]: 2025-11-29 08:19:58.079 226310 DEBUG nova.storage.rbd_utils [None req-ac776bf1-2f2d-4ce5-9060-57748b23ac6f 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] creating snapshot(snap) on rbd image(bf0aed8c-3418-4a5d-ab36-68410b6b956c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:19:58 np0005539564 nova_compute[226295]: 2025-11-29 08:19:58.206 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Updating instance_info_cache with network_info: [{"id": "81ca5375-8e5a-46e3-9340-e5375e00d3e6", "address": "fa:16:3e:a1:48:33", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ca5375-8e", "ovs_interfaceid": "81ca5375-8e5a-46e3-9340-e5375e00d3e6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:19:58 np0005539564 nova_compute[226295]: 2025-11-29 08:19:58.223 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-0b082cd2-d1d3-4577-be0a-30b9256a223e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:19:58 np0005539564 nova_compute[226295]: 2025-11-29 08:19:58.223 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:19:58 np0005539564 nova_compute[226295]: 2025-11-29 08:19:58.223 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:19:58 np0005539564 nova_compute[226295]: 2025-11-29 08:19:58.223 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:19:58 np0005539564 nova_compute[226295]: 2025-11-29 08:19:58.352 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:58.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:19:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:58.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e319 e319: 3 total, 3 up, 3 in
Nov 29 03:20:00 np0005539564 ceph-mon[81769]: overall HEALTH_OK
Nov 29 03:20:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:00.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:20:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:00.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:20:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:01 np0005539564 nova_compute[226295]: 2025-11-29 08:20:01.853 226310 INFO nova.virt.libvirt.driver [None req-ac776bf1-2f2d-4ce5-9060-57748b23ac6f 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Snapshot image upload complete
Nov 29 03:20:01 np0005539564 nova_compute[226295]: 2025-11-29 08:20:01.854 226310 INFO nova.compute.manager [None req-ac776bf1-2f2d-4ce5-9060-57748b23ac6f 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Took 6.27 seconds to snapshot the instance on the hypervisor.
Nov 29 03:20:02 np0005539564 nova_compute[226295]: 2025-11-29 08:20:02.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:20:02 np0005539564 nova_compute[226295]: 2025-11-29 08:20:02.419 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:20:02 np0005539564 nova_compute[226295]: 2025-11-29 08:20:02.419 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:20:02 np0005539564 nova_compute[226295]: 2025-11-29 08:20:02.419 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:20:02 np0005539564 nova_compute[226295]: 2025-11-29 08:20:02.420 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:20:02 np0005539564 nova_compute[226295]: 2025-11-29 08:20:02.420 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:20:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:20:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:02.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:20:02 np0005539564 nova_compute[226295]: 2025-11-29 08:20:02.556 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:20:02 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3509580080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:20:02 np0005539564 nova_compute[226295]: 2025-11-29 08:20:02.885 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:20:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:02.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:02 np0005539564 nova_compute[226295]: 2025-11-29 08:20:02.984 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:20:02 np0005539564 nova_compute[226295]: 2025-11-29 08:20:02.985 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:20:02 np0005539564 nova_compute[226295]: 2025-11-29 08:20:02.989 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:20:02 np0005539564 nova_compute[226295]: 2025-11-29 08:20:02.990 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:20:02 np0005539564 nova_compute[226295]: 2025-11-29 08:20:02.995 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:20:02 np0005539564 nova_compute[226295]: 2025-11-29 08:20:02.995 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:20:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:03.258 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:20:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:03.259 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:20:03 np0005539564 nova_compute[226295]: 2025-11-29 08:20:03.289 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:03 np0005539564 nova_compute[226295]: 2025-11-29 08:20:03.347 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:20:03 np0005539564 nova_compute[226295]: 2025-11-29 08:20:03.349 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3812MB free_disk=20.764366149902344GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:20:03 np0005539564 nova_compute[226295]: 2025-11-29 08:20:03.349 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:20:03 np0005539564 nova_compute[226295]: 2025-11-29 08:20:03.349 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:20:03 np0005539564 nova_compute[226295]: 2025-11-29 08:20:03.354 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:03 np0005539564 nova_compute[226295]: 2025-11-29 08:20:03.441 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 0b082cd2-d1d3-4577-be0a-30b9256a223e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:20:03 np0005539564 nova_compute[226295]: 2025-11-29 08:20:03.442 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 38bba84b-1fb0-460a-a6aa-707ef29970b2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:20:03 np0005539564 nova_compute[226295]: 2025-11-29 08:20:03.442 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 091988cc-8042-4aa1-b909-5ca1744ff259 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:20:03 np0005539564 nova_compute[226295]: 2025-11-29 08:20:03.443 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:20:03 np0005539564 nova_compute[226295]: 2025-11-29 08:20:03.443 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:20:03 np0005539564 nova_compute[226295]: 2025-11-29 08:20:03.514 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:03.734 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:03.734 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:03.735 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:20:03 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1668561165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:20:04 np0005539564 nova_compute[226295]: 2025-11-29 08:20:04.005 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:04 np0005539564 nova_compute[226295]: 2025-11-29 08:20:04.015 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:20:04 np0005539564 nova_compute[226295]: 2025-11-29 08:20:04.032 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:20:04 np0005539564 nova_compute[226295]: 2025-11-29 08:20:04.069 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:20:04 np0005539564 nova_compute[226295]: 2025-11-29 08:20:04.071 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:04.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:04.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:05 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:05Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:70:88:4b 10.100.0.8
Nov 29 03:20:05 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:05Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:88:4b 10.100.0.8
Nov 29 03:20:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:06.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:06 np0005539564 nova_compute[226295]: 2025-11-29 08:20:06.769 226310 DEBUG oslo_concurrency.lockutils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:06 np0005539564 nova_compute[226295]: 2025-11-29 08:20:06.770 226310 DEBUG oslo_concurrency.lockutils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:06.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:06 np0005539564 nova_compute[226295]: 2025-11-29 08:20:06.974 226310 DEBUG nova.compute.manager [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:20:07 np0005539564 nova_compute[226295]: 2025-11-29 08:20:07.035 226310 DEBUG oslo_concurrency.lockutils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:07 np0005539564 nova_compute[226295]: 2025-11-29 08:20:07.035 226310 DEBUG oslo_concurrency.lockutils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:07 np0005539564 nova_compute[226295]: 2025-11-29 08:20:07.040 226310 DEBUG nova.virt.hardware [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:20:07 np0005539564 nova_compute[226295]: 2025-11-29 08:20:07.040 226310 INFO nova.compute.claims [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:20:07 np0005539564 nova_compute[226295]: 2025-11-29 08:20:07.202 226310 DEBUG oslo_concurrency.processutils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:07 np0005539564 nova_compute[226295]: 2025-11-29 08:20:07.559 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:20:07 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3781726136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:20:07 np0005539564 nova_compute[226295]: 2025-11-29 08:20:07.711 226310 DEBUG oslo_concurrency.processutils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:07 np0005539564 nova_compute[226295]: 2025-11-29 08:20:07.717 226310 DEBUG nova.compute.provider_tree [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:20:07 np0005539564 nova_compute[226295]: 2025-11-29 08:20:07.744 226310 DEBUG nova.scheduler.client.report [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:20:07 np0005539564 nova_compute[226295]: 2025-11-29 08:20:07.782 226310 DEBUG oslo_concurrency.lockutils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:07 np0005539564 nova_compute[226295]: 2025-11-29 08:20:07.783 226310 DEBUG nova.compute.manager [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:20:07 np0005539564 nova_compute[226295]: 2025-11-29 08:20:07.830 226310 DEBUG nova.compute.manager [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:20:07 np0005539564 nova_compute[226295]: 2025-11-29 08:20:07.831 226310 DEBUG nova.network.neutron [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:20:07 np0005539564 nova_compute[226295]: 2025-11-29 08:20:07.856 226310 INFO nova.virt.libvirt.driver [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:20:07 np0005539564 nova_compute[226295]: 2025-11-29 08:20:07.875 226310 DEBUG nova.compute.manager [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:20:07 np0005539564 nova_compute[226295]: 2025-11-29 08:20:07.931 226310 INFO nova.virt.block_device [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Booting with volume-backed-image 1be11678-cfa4-4dee-b54c-6c7e547e5a6a at /dev/vda#033[00m
Nov 29 03:20:08 np0005539564 nova_compute[226295]: 2025-11-29 08:20:08.080 226310 DEBUG nova.policy [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '504bc6adabad4f7d8c17b0438c4d9be7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b9d4c81989d641678300c7a1c173a2c2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:20:08 np0005539564 nova_compute[226295]: 2025-11-29 08:20:08.355 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:08.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:08 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:08Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d4:9e:3a 10.100.0.10
Nov 29 03:20:08 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:08Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d4:9e:3a 10.100.0.10
Nov 29 03:20:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e320 e320: 3 total, 3 up, 3 in
Nov 29 03:20:08 np0005539564 nova_compute[226295]: 2025-11-29 08:20:08.873 226310 DEBUG nova.network.neutron [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Successfully created port: 7816a3c3-3902-49f4-9c94-cf9bbfc6b25c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:20:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:08.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:09 np0005539564 nova_compute[226295]: 2025-11-29 08:20:09.695 226310 DEBUG nova.network.neutron [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Successfully updated port: 7816a3c3-3902-49f4-9c94-cf9bbfc6b25c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:20:09 np0005539564 nova_compute[226295]: 2025-11-29 08:20:09.711 226310 DEBUG oslo_concurrency.lockutils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "refresh_cache-837cb339-2f69-440f-b84e-6bcbd9bd81b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:09 np0005539564 nova_compute[226295]: 2025-11-29 08:20:09.711 226310 DEBUG oslo_concurrency.lockutils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquired lock "refresh_cache-837cb339-2f69-440f-b84e-6bcbd9bd81b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:09 np0005539564 nova_compute[226295]: 2025-11-29 08:20:09.712 226310 DEBUG nova.network.neutron [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:20:09 np0005539564 nova_compute[226295]: 2025-11-29 08:20:09.776 226310 DEBUG nova.compute.manager [req-e49f81d2-67f7-4428-8e41-2bb7faa680f1 req-703e7aea-f863-4814-801c-6de030a34c55 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Received event network-changed-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:09 np0005539564 nova_compute[226295]: 2025-11-29 08:20:09.777 226310 DEBUG nova.compute.manager [req-e49f81d2-67f7-4428-8e41-2bb7faa680f1 req-703e7aea-f863-4814-801c-6de030a34c55 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Refreshing instance network info cache due to event network-changed-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:20:09 np0005539564 nova_compute[226295]: 2025-11-29 08:20:09.778 226310 DEBUG oslo_concurrency.lockutils [req-e49f81d2-67f7-4428-8e41-2bb7faa680f1 req-703e7aea-f863-4814-801c-6de030a34c55 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-837cb339-2f69-440f-b84e-6bcbd9bd81b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:09 np0005539564 nova_compute[226295]: 2025-11-29 08:20:09.888 226310 DEBUG nova.network.neutron [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:20:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:10.261 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:10.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:10 np0005539564 nova_compute[226295]: 2025-11-29 08:20:10.747 226310 DEBUG nova.network.neutron [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Updating instance_info_cache with network_info: [{"id": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "address": "fa:16:3e:6c:63:ec", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7816a3c3-39", "ovs_interfaceid": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:10 np0005539564 nova_compute[226295]: 2025-11-29 08:20:10.906 226310 DEBUG oslo_concurrency.lockutils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Releasing lock "refresh_cache-837cb339-2f69-440f-b84e-6bcbd9bd81b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:20:10 np0005539564 nova_compute[226295]: 2025-11-29 08:20:10.907 226310 DEBUG nova.compute.manager [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Instance network_info: |[{"id": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "address": "fa:16:3e:6c:63:ec", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7816a3c3-39", "ovs_interfaceid": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:20:10 np0005539564 nova_compute[226295]: 2025-11-29 08:20:10.907 226310 DEBUG oslo_concurrency.lockutils [req-e49f81d2-67f7-4428-8e41-2bb7faa680f1 req-703e7aea-f863-4814-801c-6de030a34c55 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-837cb339-2f69-440f-b84e-6bcbd9bd81b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:10 np0005539564 nova_compute[226295]: 2025-11-29 08:20:10.908 226310 DEBUG nova.network.neutron [req-e49f81d2-67f7-4428-8e41-2bb7faa680f1 req-703e7aea-f863-4814-801c-6de030a34c55 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Refreshing network info cache for port 7816a3c3-3902-49f4-9c94-cf9bbfc6b25c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:20:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:10.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:11 np0005539564 podman[277522]: 2025-11-29 08:20:11.564486312 +0000 UTC m=+0.090472747 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 03:20:11 np0005539564 podman[277523]: 2025-11-29 08:20:11.565397187 +0000 UTC m=+0.085753100 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 29 03:20:11 np0005539564 podman[277521]: 2025-11-29 08:20:11.592491429 +0000 UTC m=+0.133911622 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:20:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:12.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:12 np0005539564 nova_compute[226295]: 2025-11-29 08:20:12.600 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:12 np0005539564 nova_compute[226295]: 2025-11-29 08:20:12.735 226310 DEBUG nova.network.neutron [req-e49f81d2-67f7-4428-8e41-2bb7faa680f1 req-703e7aea-f863-4814-801c-6de030a34c55 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Updated VIF entry in instance network info cache for port 7816a3c3-3902-49f4-9c94-cf9bbfc6b25c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:20:12 np0005539564 nova_compute[226295]: 2025-11-29 08:20:12.736 226310 DEBUG nova.network.neutron [req-e49f81d2-67f7-4428-8e41-2bb7faa680f1 req-703e7aea-f863-4814-801c-6de030a34c55 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Updating instance_info_cache with network_info: [{"id": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "address": "fa:16:3e:6c:63:ec", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7816a3c3-39", "ovs_interfaceid": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:12 np0005539564 nova_compute[226295]: 2025-11-29 08:20:12.764 226310 DEBUG oslo_concurrency.lockutils [req-e49f81d2-67f7-4428-8e41-2bb7faa680f1 req-703e7aea-f863-4814-801c-6de030a34c55 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-837cb339-2f69-440f-b84e-6bcbd9bd81b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:20:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:12.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:13 np0005539564 nova_compute[226295]: 2025-11-29 08:20:13.357 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:14.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:14.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e321 e321: 3 total, 3 up, 3 in
Nov 29 03:20:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e322 e322: 3 total, 3 up, 3 in
Nov 29 03:20:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:16.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:16.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e323 e323: 3 total, 3 up, 3 in
Nov 29 03:20:17 np0005539564 nova_compute[226295]: 2025-11-29 08:20:17.605 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:18 np0005539564 nova_compute[226295]: 2025-11-29 08:20:18.359 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:18.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:18.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:20 np0005539564 nova_compute[226295]: 2025-11-29 08:20:20.196 226310 DEBUG os_brick.utils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:20:20 np0005539564 nova_compute[226295]: 2025-11-29 08:20:20.197 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:20 np0005539564 nova_compute[226295]: 2025-11-29 08:20:20.219 231810 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:20 np0005539564 nova_compute[226295]: 2025-11-29 08:20:20.219 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[d080a172-d06e-4e9f-8655-80ce891acdc1]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:20 np0005539564 nova_compute[226295]: 2025-11-29 08:20:20.221 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:20 np0005539564 nova_compute[226295]: 2025-11-29 08:20:20.236 231810 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:20 np0005539564 nova_compute[226295]: 2025-11-29 08:20:20.237 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[c6e2802c-f10a-4a06-b982-ca420af74547]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d3b4384eec44', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:20 np0005539564 nova_compute[226295]: 2025-11-29 08:20:20.240 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:20 np0005539564 nova_compute[226295]: 2025-11-29 08:20:20.254 231810 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:20 np0005539564 nova_compute[226295]: 2025-11-29 08:20:20.255 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[85ae80dc-5ec9-41a9-83d9-e0296df52700]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:20 np0005539564 nova_compute[226295]: 2025-11-29 08:20:20.257 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[edce4ca0-ae82-4d0e-a6f6-a6d0bfa1787e]: (4, '2e858761-3292-4a17-b38f-a169c3064289') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:20 np0005539564 nova_compute[226295]: 2025-11-29 08:20:20.258 226310 DEBUG oslo_concurrency.processutils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:20 np0005539564 nova_compute[226295]: 2025-11-29 08:20:20.312 226310 DEBUG oslo_concurrency.processutils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "nvme version" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:20 np0005539564 nova_compute[226295]: 2025-11-29 08:20:20.316 226310 DEBUG os_brick.initiator.connectors.lightos [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:20:20 np0005539564 nova_compute[226295]: 2025-11-29 08:20:20.317 226310 DEBUG os_brick.initiator.connectors.lightos [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:20:20 np0005539564 nova_compute[226295]: 2025-11-29 08:20:20.317 226310 DEBUG os_brick.initiator.connectors.lightos [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:20:20 np0005539564 nova_compute[226295]: 2025-11-29 08:20:20.318 226310 DEBUG os_brick.utils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] <== get_connector_properties: return (121ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d3b4384eec44', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2e858761-3292-4a17-b38f-a169c3064289', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:20:20 np0005539564 nova_compute[226295]: 2025-11-29 08:20:20.318 226310 DEBUG nova.virt.block_device [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Updating existing volume attachment record: 172f1477-7658-4342-8952-216fa1d405a1 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:20:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:20.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:20.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.495 226310 DEBUG nova.compute.manager [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.498 226310 DEBUG nova.virt.libvirt.driver [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.499 226310 INFO nova.virt.libvirt.driver [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Creating image(s)#033[00m
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.499 226310 DEBUG nova.virt.libvirt.driver [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.500 226310 DEBUG nova.virt.libvirt.driver [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Ensure instance console log exists: /var/lib/nova/instances/837cb339-2f69-440f-b84e-6bcbd9bd81b9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.500 226310 DEBUG oslo_concurrency.lockutils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.501 226310 DEBUG oslo_concurrency.lockutils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.501 226310 DEBUG oslo_concurrency.lockutils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.504 226310 DEBUG nova.virt.libvirt.driver [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Start _get_guest_xml network_info=[{"id": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "address": "fa:16:3e:6c:63:ec", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7816a3c3-39", "ovs_interfaceid": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-6e321817-b498-4ce5-ba72-237a6641f417', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '6e321817-b498-4ce5-ba72-237a6641f417', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '837cb339-2f69-440f-b84e-6bcbd9bd81b9', 'attached_at': '', 'detached_at': '', 'volume_id': '6e321817-b498-4ce5-ba72-237a6641f417', 'serial': '6e321817-b498-4ce5-ba72-237a6641f417'}, 'guest_format': None, 'delete_on_termination': False, 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': '172f1477-7658-4342-8952-216fa1d405a1', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.511 226310 WARNING nova.virt.libvirt.driver [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.516 226310 DEBUG nova.virt.libvirt.host [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.517 226310 DEBUG nova.virt.libvirt.host [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.520 226310 DEBUG nova.virt.libvirt.host [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.521 226310 DEBUG nova.virt.libvirt.host [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.522 226310 DEBUG nova.virt.libvirt.driver [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.522 226310 DEBUG nova.virt.hardware [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.523 226310 DEBUG nova.virt.hardware [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.523 226310 DEBUG nova.virt.hardware [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.523 226310 DEBUG nova.virt.hardware [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.524 226310 DEBUG nova.virt.hardware [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.524 226310 DEBUG nova.virt.hardware [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.524 226310 DEBUG nova.virt.hardware [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.524 226310 DEBUG nova.virt.hardware [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.525 226310 DEBUG nova.virt.hardware [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.525 226310 DEBUG nova.virt.hardware [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.525 226310 DEBUG nova.virt.hardware [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.564 226310 DEBUG nova.storage.rbd_utils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 837cb339-2f69-440f-b84e-6bcbd9bd81b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:21 np0005539564 nova_compute[226295]: 2025-11-29 08:20:21.570 226310 DEBUG oslo_concurrency.processutils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:20:22 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2464548411' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.024 226310 DEBUG oslo_concurrency.processutils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.079 226310 DEBUG nova.virt.libvirt.vif [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:20:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-773780986',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-773780986',id=141,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9d4c81989d641678300c7a1c173a2c2',ramdisk_id='',reservation_id='r-luhjm0w0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1019923576',owner_user_name='tempest-ServerBootFromVolumeS
tableRescueTest-1019923576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:07Z,user_data=None,user_id='504bc6adabad4f7d8c17b0438c4d9be7',uuid=837cb339-2f69-440f-b84e-6bcbd9bd81b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "address": "fa:16:3e:6c:63:ec", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7816a3c3-39", "ovs_interfaceid": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.080 226310 DEBUG nova.network.os_vif_util [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converting VIF {"id": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "address": "fa:16:3e:6c:63:ec", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7816a3c3-39", "ovs_interfaceid": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.081 226310 DEBUG nova.network.os_vif_util [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:63:ec,bridge_name='br-int',has_traffic_filtering=True,id=7816a3c3-3902-49f4-9c94-cf9bbfc6b25c,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7816a3c3-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.082 226310 DEBUG nova.objects.instance [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 837cb339-2f69-440f-b84e-6bcbd9bd81b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.111 226310 DEBUG nova.virt.libvirt.driver [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:20:22 np0005539564 nova_compute[226295]:  <uuid>837cb339-2f69-440f-b84e-6bcbd9bd81b9</uuid>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:  <name>instance-0000008d</name>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-773780986</nova:name>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:20:21</nova:creationTime>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:20:22 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:        <nova:user uuid="504bc6adabad4f7d8c17b0438c4d9be7">tempest-ServerBootFromVolumeStableRescueTest-1019923576-project-member</nova:user>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:        <nova:project uuid="b9d4c81989d641678300c7a1c173a2c2">tempest-ServerBootFromVolumeStableRescueTest-1019923576</nova:project>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:        <nova:port uuid="7816a3c3-3902-49f4-9c94-cf9bbfc6b25c">
Nov 29 03:20:22 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <entry name="serial">837cb339-2f69-440f-b84e-6bcbd9bd81b9</entry>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <entry name="uuid">837cb339-2f69-440f-b84e-6bcbd9bd81b9</entry>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/837cb339-2f69-440f-b84e-6bcbd9bd81b9_disk.config">
Nov 29 03:20:22 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:20:22 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="volumes/volume-6e321817-b498-4ce5-ba72-237a6641f417">
Nov 29 03:20:22 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:20:22 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <serial>6e321817-b498-4ce5-ba72-237a6641f417</serial>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:6c:63:ec"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <target dev="tap7816a3c3-39"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/837cb339-2f69-440f-b84e-6bcbd9bd81b9/console.log" append="off"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:20:22 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:20:22 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:20:22 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:20:22 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.113 226310 DEBUG nova.compute.manager [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Preparing to wait for external event network-vif-plugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.113 226310 DEBUG oslo_concurrency.lockutils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.114 226310 DEBUG oslo_concurrency.lockutils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.114 226310 DEBUG oslo_concurrency.lockutils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.117 226310 DEBUG nova.virt.libvirt.vif [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:20:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-773780986',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-773780986',id=141,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9d4c81989d641678300c7a1c173a2c2',ramdisk_id='',reservation_id='r-luhjm0w0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1019923576',owner_user_name='tempest-ServerBootF
romVolumeStableRescueTest-1019923576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:07Z,user_data=None,user_id='504bc6adabad4f7d8c17b0438c4d9be7',uuid=837cb339-2f69-440f-b84e-6bcbd9bd81b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "address": "fa:16:3e:6c:63:ec", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7816a3c3-39", "ovs_interfaceid": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.117 226310 DEBUG nova.network.os_vif_util [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converting VIF {"id": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "address": "fa:16:3e:6c:63:ec", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7816a3c3-39", "ovs_interfaceid": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.118 226310 DEBUG nova.network.os_vif_util [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:63:ec,bridge_name='br-int',has_traffic_filtering=True,id=7816a3c3-3902-49f4-9c94-cf9bbfc6b25c,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7816a3c3-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.119 226310 DEBUG os_vif [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:63:ec,bridge_name='br-int',has_traffic_filtering=True,id=7816a3c3-3902-49f4-9c94-cf9bbfc6b25c,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7816a3c3-39') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.120 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.121 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.121 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.126 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.126 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7816a3c3-39, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.127 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7816a3c3-39, col_values=(('external_ids', {'iface-id': '7816a3c3-3902-49f4-9c94-cf9bbfc6b25c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6c:63:ec', 'vm-uuid': '837cb339-2f69-440f-b84e-6bcbd9bd81b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.128 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:22 np0005539564 NetworkManager[48997]: <info>  [1764404422.1300] manager: (tap7816a3c3-39): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/246)
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.133 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.138 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.139 226310 INFO os_vif [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:63:ec,bridge_name='br-int',has_traffic_filtering=True,id=7816a3c3-3902-49f4-9c94-cf9bbfc6b25c,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7816a3c3-39')#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.204 226310 DEBUG nova.virt.libvirt.driver [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.205 226310 DEBUG nova.virt.libvirt.driver [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.205 226310 DEBUG nova.virt.libvirt.driver [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] No VIF found with MAC fa:16:3e:6c:63:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.206 226310 INFO nova.virt.libvirt.driver [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Using config drive#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.242 226310 DEBUG nova.storage.rbd_utils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 837cb339-2f69-440f-b84e-6bcbd9bd81b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:22.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.608 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.796 226310 INFO nova.virt.libvirt.driver [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Creating config drive at /var/lib/nova/instances/837cb339-2f69-440f-b84e-6bcbd9bd81b9/disk.config#033[00m
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.806 226310 DEBUG oslo_concurrency.processutils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/837cb339-2f69-440f-b84e-6bcbd9bd81b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp001l699b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:22.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:22 np0005539564 nova_compute[226295]: 2025-11-29 08:20:22.968 226310 DEBUG oslo_concurrency.processutils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/837cb339-2f69-440f-b84e-6bcbd9bd81b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp001l699b" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:23 np0005539564 nova_compute[226295]: 2025-11-29 08:20:23.019 226310 DEBUG nova.storage.rbd_utils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 837cb339-2f69-440f-b84e-6bcbd9bd81b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:23 np0005539564 nova_compute[226295]: 2025-11-29 08:20:23.026 226310 DEBUG oslo_concurrency.processutils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/837cb339-2f69-440f-b84e-6bcbd9bd81b9/disk.config 837cb339-2f69-440f-b84e-6bcbd9bd81b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:23 np0005539564 nova_compute[226295]: 2025-11-29 08:20:23.252 226310 DEBUG oslo_concurrency.processutils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/837cb339-2f69-440f-b84e-6bcbd9bd81b9/disk.config 837cb339-2f69-440f-b84e-6bcbd9bd81b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.227s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:23 np0005539564 nova_compute[226295]: 2025-11-29 08:20:23.254 226310 INFO nova.virt.libvirt.driver [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Deleting local config drive /var/lib/nova/instances/837cb339-2f69-440f-b84e-6bcbd9bd81b9/disk.config because it was imported into RBD.#033[00m
Nov 29 03:20:23 np0005539564 kernel: tap7816a3c3-39: entered promiscuous mode
Nov 29 03:20:23 np0005539564 NetworkManager[48997]: <info>  [1764404423.3182] manager: (tap7816a3c3-39): new Tun device (/org/freedesktop/NetworkManager/Devices/247)
Nov 29 03:20:23 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:23Z|00502|binding|INFO|Claiming lport 7816a3c3-3902-49f4-9c94-cf9bbfc6b25c for this chassis.
Nov 29 03:20:23 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:23Z|00503|binding|INFO|7816a3c3-3902-49f4-9c94-cf9bbfc6b25c: Claiming fa:16:3e:6c:63:ec 10.100.0.12
Nov 29 03:20:23 np0005539564 nova_compute[226295]: 2025-11-29 08:20:23.321 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:23.353 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:63:ec 10.100.0.12'], port_security=['fa:16:3e:6c:63:ec 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '837cb339-2f69-440f-b84e-6bcbd9bd81b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9d4c81989d641678300c7a1c173a2c2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '96ed61db-551b-4509-9cdf-2499e8e15e48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9df4a573-88b3-436c-84aa-f335285d9a2a, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=7816a3c3-3902-49f4-9c94-cf9bbfc6b25c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:20:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:23.354 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 7816a3c3-3902-49f4-9c94-cf9bbfc6b25c in datapath 3c25940b-e63b-4443-a94b-0216a35e8dc6 bound to our chassis#033[00m
Nov 29 03:20:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:23.355 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c25940b-e63b-4443-a94b-0216a35e8dc6#033[00m
Nov 29 03:20:23 np0005539564 systemd-udevd[277699]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:20:23 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:23Z|00504|binding|INFO|Setting lport 7816a3c3-3902-49f4-9c94-cf9bbfc6b25c ovn-installed in OVS
Nov 29 03:20:23 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:23Z|00505|binding|INFO|Setting lport 7816a3c3-3902-49f4-9c94-cf9bbfc6b25c up in Southbound
Nov 29 03:20:23 np0005539564 nova_compute[226295]: 2025-11-29 08:20:23.365 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:23 np0005539564 nova_compute[226295]: 2025-11-29 08:20:23.369 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:23.373 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[cbde6682-71c5-4ed0-ae2b-7c5ef928bf41]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:23 np0005539564 systemd-machined[190128]: New machine qemu-64-instance-0000008d.
Nov 29 03:20:23 np0005539564 NetworkManager[48997]: <info>  [1764404423.3871] device (tap7816a3c3-39): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:20:23 np0005539564 NetworkManager[48997]: <info>  [1764404423.3883] device (tap7816a3c3-39): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:20:23 np0005539564 systemd[1]: Started Virtual Machine qemu-64-instance-0000008d.
Nov 29 03:20:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:23.414 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[a26f7216-ce81-40dc-81ee-87c011f2e677]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:23.417 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[f7a0914c-696a-453e-a7ee-277bdeff9c13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:23.460 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[7b4f2327-92a0-4625-a2ba-9bcf6811ec59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:23.481 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bc468055-7736-4f3b-af61-8af8daeadc2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c25940b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:38:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724265, 'reachable_time': 18141, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277715, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:23.508 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[fe20c787-5252-4829-a30b-367ba7d0f14b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3c25940b-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 724279, 'tstamp': 724279}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277717, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3c25940b-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 724282, 'tstamp': 724282}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277717, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:23.510 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c25940b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:23 np0005539564 nova_compute[226295]: 2025-11-29 08:20:23.512 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:23 np0005539564 nova_compute[226295]: 2025-11-29 08:20:23.513 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:23.514 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c25940b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:23.514 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:23.514 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c25940b-e0, col_values=(('external_ids', {'iface-id': '9da51447-ee5a-4659-ba78-deb4b11b4098'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:23.514 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e324 e324: 3 total, 3 up, 3 in
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.120 226310 DEBUG nova.compute.manager [req-f434bcfd-446a-4cc0-a8f9-fb9df23ef58f req-3066b867-2af9-41fa-8092-1910d2435a7d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Received event network-vif-plugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.121 226310 DEBUG oslo_concurrency.lockutils [req-f434bcfd-446a-4cc0-a8f9-fb9df23ef58f req-3066b867-2af9-41fa-8092-1910d2435a7d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.122 226310 DEBUG oslo_concurrency.lockutils [req-f434bcfd-446a-4cc0-a8f9-fb9df23ef58f req-3066b867-2af9-41fa-8092-1910d2435a7d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.122 226310 DEBUG oslo_concurrency.lockutils [req-f434bcfd-446a-4cc0-a8f9-fb9df23ef58f req-3066b867-2af9-41fa-8092-1910d2435a7d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.122 226310 DEBUG nova.compute.manager [req-f434bcfd-446a-4cc0-a8f9-fb9df23ef58f req-3066b867-2af9-41fa-8092-1910d2435a7d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Processing event network-vif-plugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.374 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404424.3743045, 837cb339-2f69-440f-b84e-6bcbd9bd81b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.375 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] VM Started (Lifecycle Event)#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.378 226310 DEBUG nova.compute.manager [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.382 226310 DEBUG nova.virt.libvirt.driver [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.386 226310 INFO nova.virt.libvirt.driver [-] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Instance spawned successfully.#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.387 226310 DEBUG nova.virt.libvirt.driver [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.414 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.421 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.426 226310 DEBUG nova.virt.libvirt.driver [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.427 226310 DEBUG nova.virt.libvirt.driver [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.428 226310 DEBUG nova.virt.libvirt.driver [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.429 226310 DEBUG nova.virt.libvirt.driver [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.429 226310 DEBUG nova.virt.libvirt.driver [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.430 226310 DEBUG nova.virt.libvirt.driver [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.454 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.455 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404424.3745337, 837cb339-2f69-440f-b84e-6bcbd9bd81b9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.455 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.477 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.480 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404424.3809583, 837cb339-2f69-440f-b84e-6bcbd9bd81b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.481 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.493 226310 INFO nova.compute.manager [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Took 3.00 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.494 226310 DEBUG nova.compute.manager [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:24.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.507 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.510 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.529 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.567 226310 INFO nova.compute.manager [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Took 17.55 seconds to build instance.#033[00m
Nov 29 03:20:24 np0005539564 nova_compute[226295]: 2025-11-29 08:20:24.588 226310 DEBUG oslo_concurrency.lockutils [None req-cdfcda43-f624-4f6a-bf1f-ae47f214296b 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:24.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:26 np0005539564 nova_compute[226295]: 2025-11-29 08:20:26.236 226310 DEBUG nova.compute.manager [req-1d6c4464-32c7-4c7a-9fed-860e071ae3e4 req-861cf6ca-df51-47ac-b5ac-70721e05f994 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Received event network-vif-plugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:20:26 np0005539564 nova_compute[226295]: 2025-11-29 08:20:26.237 226310 DEBUG oslo_concurrency.lockutils [req-1d6c4464-32c7-4c7a-9fed-860e071ae3e4 req-861cf6ca-df51-47ac-b5ac-70721e05f994 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:20:26 np0005539564 nova_compute[226295]: 2025-11-29 08:20:26.238 226310 DEBUG oslo_concurrency.lockutils [req-1d6c4464-32c7-4c7a-9fed-860e071ae3e4 req-861cf6ca-df51-47ac-b5ac-70721e05f994 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:20:26 np0005539564 nova_compute[226295]: 2025-11-29 08:20:26.238 226310 DEBUG oslo_concurrency.lockutils [req-1d6c4464-32c7-4c7a-9fed-860e071ae3e4 req-861cf6ca-df51-47ac-b5ac-70721e05f994 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:20:26 np0005539564 nova_compute[226295]: 2025-11-29 08:20:26.238 226310 DEBUG nova.compute.manager [req-1d6c4464-32c7-4c7a-9fed-860e071ae3e4 req-861cf6ca-df51-47ac-b5ac-70721e05f994 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] No waiting events found dispatching network-vif-plugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:20:26 np0005539564 nova_compute[226295]: 2025-11-29 08:20:26.239 226310 WARNING nova.compute.manager [req-1d6c4464-32c7-4c7a-9fed-860e071ae3e4 req-861cf6ca-df51-47ac-b5ac-70721e05f994 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Received unexpected event network-vif-plugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c for instance with vm_state active and task_state None.
Nov 29 03:20:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.003000080s ======
Nov 29 03:20:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:26.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Nov 29 03:20:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:26.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:27 np0005539564 nova_compute[226295]: 2025-11-29 08:20:27.131 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:27 np0005539564 nova_compute[226295]: 2025-11-29 08:20:27.610 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:27 np0005539564 nova_compute[226295]: 2025-11-29 08:20:27.803 226310 INFO nova.compute.manager [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Rescuing
Nov 29 03:20:27 np0005539564 nova_compute[226295]: 2025-11-29 08:20:27.803 226310 DEBUG oslo_concurrency.lockutils [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "refresh_cache-837cb339-2f69-440f-b84e-6bcbd9bd81b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:20:27 np0005539564 nova_compute[226295]: 2025-11-29 08:20:27.804 226310 DEBUG oslo_concurrency.lockutils [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquired lock "refresh_cache-837cb339-2f69-440f-b84e-6bcbd9bd81b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:20:27 np0005539564 nova_compute[226295]: 2025-11-29 08:20:27.804 226310 DEBUG nova.network.neutron [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:20:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:28.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:20:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:28.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:20:29 np0005539564 nova_compute[226295]: 2025-11-29 08:20:29.272 226310 DEBUG nova.network.neutron [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Updating instance_info_cache with network_info: [{"id": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "address": "fa:16:3e:6c:63:ec", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7816a3c3-39", "ovs_interfaceid": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:20:29 np0005539564 nova_compute[226295]: 2025-11-29 08:20:29.417 226310 DEBUG oslo_concurrency.lockutils [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Releasing lock "refresh_cache-837cb339-2f69-440f-b84e-6bcbd9bd81b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:20:29 np0005539564 nova_compute[226295]: 2025-11-29 08:20:29.700 226310 DEBUG nova.virt.libvirt.driver [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 03:20:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:30.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:30.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:32 np0005539564 nova_compute[226295]: 2025-11-29 08:20:32.134 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:32.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:32 np0005539564 nova_compute[226295]: 2025-11-29 08:20:32.613 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:32.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:20:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:20:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:20:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:20:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:20:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:34.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:34.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:36.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:36.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:37 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:37Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6c:63:ec 10.100.0.12
Nov 29 03:20:37 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:37Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6c:63:ec 10.100.0.12
Nov 29 03:20:37 np0005539564 nova_compute[226295]: 2025-11-29 08:20:37.188 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:37 np0005539564 nova_compute[226295]: 2025-11-29 08:20:37.615 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:38.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:38.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:39 np0005539564 nova_compute[226295]: 2025-11-29 08:20:39.747 226310 DEBUG nova.virt.libvirt.driver [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 03:20:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:20:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:20:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:40.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:40.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:42 np0005539564 kernel: tap7816a3c3-39 (unregistering): left promiscuous mode
Nov 29 03:20:42 np0005539564 NetworkManager[48997]: <info>  [1764404442.0724] device (tap7816a3c3-39): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:20:42 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:42Z|00506|binding|INFO|Releasing lport 7816a3c3-3902-49f4-9c94-cf9bbfc6b25c from this chassis (sb_readonly=0)
Nov 29 03:20:42 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:42Z|00507|binding|INFO|Setting lport 7816a3c3-3902-49f4-9c94-cf9bbfc6b25c down in Southbound
Nov 29 03:20:42 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:42Z|00508|binding|INFO|Removing iface tap7816a3c3-39 ovn-installed in OVS
Nov 29 03:20:42 np0005539564 nova_compute[226295]: 2025-11-29 08:20:42.084 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:42 np0005539564 nova_compute[226295]: 2025-11-29 08:20:42.087 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:42.097 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:63:ec 10.100.0.12'], port_security=['fa:16:3e:6c:63:ec 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '837cb339-2f69-440f-b84e-6bcbd9bd81b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9d4c81989d641678300c7a1c173a2c2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '96ed61db-551b-4509-9cdf-2499e8e15e48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9df4a573-88b3-436c-84aa-f335285d9a2a, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=7816a3c3-3902-49f4-9c94-cf9bbfc6b25c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:20:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:42.099 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 7816a3c3-3902-49f4-9c94-cf9bbfc6b25c in datapath 3c25940b-e63b-4443-a94b-0216a35e8dc6 unbound from our chassis
Nov 29 03:20:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:42.102 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c25940b-e63b-4443-a94b-0216a35e8dc6
Nov 29 03:20:42 np0005539564 nova_compute[226295]: 2025-11-29 08:20:42.112 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:42.134 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a844d639-c676-462c-a6b7-b44d1a75d41c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:20:42 np0005539564 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Nov 29 03:20:42 np0005539564 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d0000008d.scope: Consumed 14.890s CPU time.
Nov 29 03:20:42 np0005539564 systemd-machined[190128]: Machine qemu-64-instance-0000008d terminated.
Nov 29 03:20:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:42.176 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[2abb0096-603e-4934-b933-b74d2e3460ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:20:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:42.179 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[23246a3c-8532-4e6d-b13e-64439f4c0750]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:20:42 np0005539564 nova_compute[226295]: 2025-11-29 08:20:42.190 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:42 np0005539564 podman[277947]: 2025-11-29 08:20:42.199949157 +0000 UTC m=+0.075099772 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:20:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:42.220 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[ea3eddea-e96e-45ba-b6fb-6d3ba46e217f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:20:42 np0005539564 podman[277946]: 2025-11-29 08:20:42.23777183 +0000 UTC m=+0.120478949 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:20:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:42.242 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f5480a22-d413-4dc7-a100-cdb91afad90b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c25940b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:38:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724265, 'reachable_time': 18141, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278013, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:20:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:42.261 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e78ca5db-45a8-4ad4-b8bb-e9ef53f78d60]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3c25940b-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 724279, 'tstamp': 724279}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278015, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3c25940b-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 724282, 'tstamp': 724282}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278015, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:20:42 np0005539564 podman[277945]: 2025-11-29 08:20:42.261834569 +0000 UTC m=+0.146965323 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 29 03:20:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:42.263 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c25940b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:42 np0005539564 nova_compute[226295]: 2025-11-29 08:20:42.264 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:42 np0005539564 nova_compute[226295]: 2025-11-29 08:20:42.269 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:42.270 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c25940b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:42.270 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:42.270 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c25940b-e0, col_values=(('external_ids', {'iface-id': '9da51447-ee5a-4659-ba78-deb4b11b4098'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:42.271 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:42 np0005539564 nova_compute[226295]: 2025-11-29 08:20:42.317 226310 DEBUG nova.compute.manager [req-ee92f52d-a75d-4981-830d-672e659d385d req-5c43852b-db34-498b-b28f-8044924ef233 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Received event network-vif-unplugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:42 np0005539564 nova_compute[226295]: 2025-11-29 08:20:42.317 226310 DEBUG oslo_concurrency.lockutils [req-ee92f52d-a75d-4981-830d-672e659d385d req-5c43852b-db34-498b-b28f-8044924ef233 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:42 np0005539564 nova_compute[226295]: 2025-11-29 08:20:42.317 226310 DEBUG oslo_concurrency.lockutils [req-ee92f52d-a75d-4981-830d-672e659d385d req-5c43852b-db34-498b-b28f-8044924ef233 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:42 np0005539564 nova_compute[226295]: 2025-11-29 08:20:42.318 226310 DEBUG oslo_concurrency.lockutils [req-ee92f52d-a75d-4981-830d-672e659d385d req-5c43852b-db34-498b-b28f-8044924ef233 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:42 np0005539564 nova_compute[226295]: 2025-11-29 08:20:42.318 226310 DEBUG nova.compute.manager [req-ee92f52d-a75d-4981-830d-672e659d385d req-5c43852b-db34-498b-b28f-8044924ef233 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] No waiting events found dispatching network-vif-unplugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:42 np0005539564 nova_compute[226295]: 2025-11-29 08:20:42.318 226310 WARNING nova.compute.manager [req-ee92f52d-a75d-4981-830d-672e659d385d req-5c43852b-db34-498b-b28f-8044924ef233 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Received unexpected event network-vif-unplugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:20:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:20:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:42.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:20:42 np0005539564 nova_compute[226295]: 2025-11-29 08:20:42.643 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:42 np0005539564 nova_compute[226295]: 2025-11-29 08:20:42.764 226310 INFO nova.virt.libvirt.driver [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Instance shutdown successfully after 13 seconds.#033[00m
Nov 29 03:20:42 np0005539564 nova_compute[226295]: 2025-11-29 08:20:42.772 226310 INFO nova.virt.libvirt.driver [-] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Instance destroyed successfully.#033[00m
Nov 29 03:20:42 np0005539564 nova_compute[226295]: 2025-11-29 08:20:42.773 226310 DEBUG nova.objects.instance [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'numa_topology' on Instance uuid 837cb339-2f69-440f-b84e-6bcbd9bd81b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:42 np0005539564 nova_compute[226295]: 2025-11-29 08:20:42.793 226310 INFO nova.virt.libvirt.driver [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Attempting a stable device rescue#033[00m
Nov 29 03:20:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:42.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.131 226310 DEBUG nova.virt.libvirt.driver [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.137 226310 DEBUG nova.virt.libvirt.driver [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.138 226310 INFO nova.virt.libvirt.driver [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Creating image(s)#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.185 226310 DEBUG nova.storage.rbd_utils [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 837cb339-2f69-440f-b84e-6bcbd9bd81b9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.193 226310 DEBUG nova.objects.instance [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 837cb339-2f69-440f-b84e-6bcbd9bd81b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.246 226310 DEBUG nova.storage.rbd_utils [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 837cb339-2f69-440f-b84e-6bcbd9bd81b9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.296 226310 DEBUG nova.storage.rbd_utils [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 837cb339-2f69-440f-b84e-6bcbd9bd81b9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.301 226310 DEBUG oslo_concurrency.lockutils [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "275b146b47ad209f54707403f56a31adbd1e714f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.302 226310 DEBUG oslo_concurrency.lockutils [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "275b146b47ad209f54707403f56a31adbd1e714f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.559 226310 DEBUG nova.virt.libvirt.imagebackend [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Image locations are: [{'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/bf0aed8c-3418-4a5d-ab36-68410b6b956c/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/bf0aed8c-3418-4a5d-ab36-68410b6b956c/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.648 226310 DEBUG nova.virt.libvirt.imagebackend [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Selected location: {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/bf0aed8c-3418-4a5d-ab36-68410b6b956c/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.649 226310 DEBUG nova.storage.rbd_utils [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] cloning images/bf0aed8c-3418-4a5d-ab36-68410b6b956c@snap to None/837cb339-2f69-440f-b84e-6bcbd9bd81b9_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.813 226310 DEBUG oslo_concurrency.lockutils [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "275b146b47ad209f54707403f56a31adbd1e714f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.511s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.877 226310 DEBUG nova.objects.instance [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'migration_context' on Instance uuid 837cb339-2f69-440f-b84e-6bcbd9bd81b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.893 226310 DEBUG nova.virt.libvirt.driver [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.895 226310 DEBUG nova.virt.libvirt.driver [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Start _get_guest_xml network_info=[{"id": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "address": "fa:16:3e:6c:63:ec", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "vif_mac": "fa:16:3e:6c:63:ec"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7816a3c3-39", "ovs_interfaceid": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'bf0aed8c-3418-4a5d-ab36-68410b6b956c', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-6e321817-b498-4ce5-ba72-237a6641f417', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '6e321817-b498-4ce5-ba72-237a6641f417', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '837cb339-2f69-440f-b84e-6bcbd9bd81b9', 'attached_at': '', 'detached_at': '', 'volume_id': '6e321817-b498-4ce5-ba72-237a6641f417', 'serial': '6e321817-b498-4ce5-ba72-237a6641f417'}, 'guest_format': None, 'delete_on_termination': False, 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': '172f1477-7658-4342-8952-216fa1d405a1', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.895 226310 DEBUG nova.objects.instance [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'resources' on Instance uuid 837cb339-2f69-440f-b84e-6bcbd9bd81b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.913 226310 WARNING nova.virt.libvirt.driver [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.919 226310 DEBUG nova.virt.libvirt.host [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.920 226310 DEBUG nova.virt.libvirt.host [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.923 226310 DEBUG nova.virt.libvirt.host [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.923 226310 DEBUG nova.virt.libvirt.host [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.924 226310 DEBUG nova.virt.libvirt.driver [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.924 226310 DEBUG nova.virt.hardware [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.924 226310 DEBUG nova.virt.hardware [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.925 226310 DEBUG nova.virt.hardware [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.925 226310 DEBUG nova.virt.hardware [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.925 226310 DEBUG nova.virt.hardware [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.925 226310 DEBUG nova.virt.hardware [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.925 226310 DEBUG nova.virt.hardware [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.926 226310 DEBUG nova.virt.hardware [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.926 226310 DEBUG nova.virt.hardware [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.926 226310 DEBUG nova.virt.hardware [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.926 226310 DEBUG nova.virt.hardware [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.927 226310 DEBUG nova.objects.instance [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 837cb339-2f69-440f-b84e-6bcbd9bd81b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:43 np0005539564 nova_compute[226295]: 2025-11-29 08:20:43.988 226310 DEBUG oslo_concurrency.processutils [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:20:44 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/798225702' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:20:44 np0005539564 nova_compute[226295]: 2025-11-29 08:20:44.468 226310 DEBUG oslo_concurrency.processutils [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:44 np0005539564 nova_compute[226295]: 2025-11-29 08:20:44.492 226310 DEBUG oslo_concurrency.processutils [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:44.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:44 np0005539564 nova_compute[226295]: 2025-11-29 08:20:44.904 226310 DEBUG nova.compute.manager [req-aaf910fe-5e6d-4df8-b061-23af67a26094 req-5b261d67-678c-4ac0-bf67-4ee315e4b10d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Received event network-vif-plugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:44 np0005539564 nova_compute[226295]: 2025-11-29 08:20:44.905 226310 DEBUG oslo_concurrency.lockutils [req-aaf910fe-5e6d-4df8-b061-23af67a26094 req-5b261d67-678c-4ac0-bf67-4ee315e4b10d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:44 np0005539564 nova_compute[226295]: 2025-11-29 08:20:44.906 226310 DEBUG oslo_concurrency.lockutils [req-aaf910fe-5e6d-4df8-b061-23af67a26094 req-5b261d67-678c-4ac0-bf67-4ee315e4b10d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:44 np0005539564 nova_compute[226295]: 2025-11-29 08:20:44.906 226310 DEBUG oslo_concurrency.lockutils [req-aaf910fe-5e6d-4df8-b061-23af67a26094 req-5b261d67-678c-4ac0-bf67-4ee315e4b10d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:44 np0005539564 nova_compute[226295]: 2025-11-29 08:20:44.906 226310 DEBUG nova.compute.manager [req-aaf910fe-5e6d-4df8-b061-23af67a26094 req-5b261d67-678c-4ac0-bf67-4ee315e4b10d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] No waiting events found dispatching network-vif-plugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:44 np0005539564 nova_compute[226295]: 2025-11-29 08:20:44.907 226310 WARNING nova.compute.manager [req-aaf910fe-5e6d-4df8-b061-23af67a26094 req-5b261d67-678c-4ac0-bf67-4ee315e4b10d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Received unexpected event network-vif-plugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:20:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:44.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:20:44 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2194724653' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:20:44 np0005539564 nova_compute[226295]: 2025-11-29 08:20:44.992 226310 DEBUG oslo_concurrency.processutils [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:44 np0005539564 nova_compute[226295]: 2025-11-29 08:20:44.994 226310 DEBUG nova.virt.libvirt.vif [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:20:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-773780986',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-773780986',id=141,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:20:24Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b9d4c81989d641678300c7a1c173a2c2',ramdisk_id='',reservation_id='r-luhjm0w0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1019923576',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1019923576-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:24Z,user_data=None,user_id='504bc6adabad4f7d8c17b0438c4d9be7',uuid=837cb339-2f69-440f-b84e-6bcbd9bd81b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "address": "fa:16:3e:6c:63:ec", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "vif_mac": "fa:16:3e:6c:63:ec"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7816a3c3-39", "ovs_interfaceid": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:20:44 np0005539564 nova_compute[226295]: 2025-11-29 08:20:44.994 226310 DEBUG nova.network.os_vif_util [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converting VIF {"id": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "address": "fa:16:3e:6c:63:ec", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "vif_mac": "fa:16:3e:6c:63:ec"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7816a3c3-39", "ovs_interfaceid": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:44 np0005539564 nova_compute[226295]: 2025-11-29 08:20:44.995 226310 DEBUG nova.network.os_vif_util [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6c:63:ec,bridge_name='br-int',has_traffic_filtering=True,id=7816a3c3-3902-49f4-9c94-cf9bbfc6b25c,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7816a3c3-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:44 np0005539564 nova_compute[226295]: 2025-11-29 08:20:44.996 226310 DEBUG nova.objects.instance [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 837cb339-2f69-440f-b84e-6bcbd9bd81b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:45 np0005539564 nova_compute[226295]: 2025-11-29 08:20:45.140 226310 DEBUG nova.virt.libvirt.driver [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:20:45 np0005539564 nova_compute[226295]:  <uuid>837cb339-2f69-440f-b84e-6bcbd9bd81b9</uuid>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:  <name>instance-0000008d</name>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-773780986</nova:name>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:20:43</nova:creationTime>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:20:45 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:        <nova:user uuid="504bc6adabad4f7d8c17b0438c4d9be7">tempest-ServerBootFromVolumeStableRescueTest-1019923576-project-member</nova:user>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:        <nova:project uuid="b9d4c81989d641678300c7a1c173a2c2">tempest-ServerBootFromVolumeStableRescueTest-1019923576</nova:project>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:        <nova:port uuid="7816a3c3-3902-49f4-9c94-cf9bbfc6b25c">
Nov 29 03:20:45 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <entry name="serial">837cb339-2f69-440f-b84e-6bcbd9bd81b9</entry>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <entry name="uuid">837cb339-2f69-440f-b84e-6bcbd9bd81b9</entry>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/837cb339-2f69-440f-b84e-6bcbd9bd81b9_disk.config">
Nov 29 03:20:45 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:20:45 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="volumes/volume-6e321817-b498-4ce5-ba72-237a6641f417">
Nov 29 03:20:45 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:20:45 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <serial>6e321817-b498-4ce5-ba72-237a6641f417</serial>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/837cb339-2f69-440f-b84e-6bcbd9bd81b9_disk.rescue">
Nov 29 03:20:45 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:20:45 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <boot order="1"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:6c:63:ec"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <target dev="tap7816a3c3-39"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/837cb339-2f69-440f-b84e-6bcbd9bd81b9/console.log" append="off"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:20:45 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:20:45 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:20:45 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:20:45 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:20:45 np0005539564 nova_compute[226295]: 2025-11-29 08:20:45.151 226310 INFO nova.virt.libvirt.driver [-] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Instance destroyed successfully.#033[00m
Nov 29 03:20:45 np0005539564 nova_compute[226295]: 2025-11-29 08:20:45.210 226310 DEBUG nova.virt.libvirt.driver [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:20:45 np0005539564 nova_compute[226295]: 2025-11-29 08:20:45.211 226310 DEBUG nova.virt.libvirt.driver [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:20:45 np0005539564 nova_compute[226295]: 2025-11-29 08:20:45.211 226310 DEBUG nova.virt.libvirt.driver [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:20:45 np0005539564 nova_compute[226295]: 2025-11-29 08:20:45.211 226310 DEBUG nova.virt.libvirt.driver [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] No VIF found with MAC fa:16:3e:6c:63:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:20:45 np0005539564 nova_compute[226295]: 2025-11-29 08:20:45.212 226310 INFO nova.virt.libvirt.driver [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Using config drive#033[00m
Nov 29 03:20:45 np0005539564 nova_compute[226295]: 2025-11-29 08:20:45.240 226310 DEBUG nova.storage.rbd_utils [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 837cb339-2f69-440f-b84e-6bcbd9bd81b9_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:45 np0005539564 nova_compute[226295]: 2025-11-29 08:20:45.256 226310 DEBUG nova.objects.instance [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 837cb339-2f69-440f-b84e-6bcbd9bd81b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:45 np0005539564 nova_compute[226295]: 2025-11-29 08:20:45.276 226310 DEBUG nova.objects.instance [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'keypairs' on Instance uuid 837cb339-2f69-440f-b84e-6bcbd9bd81b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:45 np0005539564 nova_compute[226295]: 2025-11-29 08:20:45.651 226310 INFO nova.virt.libvirt.driver [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Creating config drive at /var/lib/nova/instances/837cb339-2f69-440f-b84e-6bcbd9bd81b9/disk.config.rescue#033[00m
Nov 29 03:20:45 np0005539564 nova_compute[226295]: 2025-11-29 08:20:45.657 226310 DEBUG oslo_concurrency.processutils [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/837cb339-2f69-440f-b84e-6bcbd9bd81b9/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplurui4k3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:45 np0005539564 nova_compute[226295]: 2025-11-29 08:20:45.811 226310 DEBUG oslo_concurrency.processutils [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/837cb339-2f69-440f-b84e-6bcbd9bd81b9/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplurui4k3" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:45 np0005539564 nova_compute[226295]: 2025-11-29 08:20:45.855 226310 DEBUG nova.storage.rbd_utils [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] rbd image 837cb339-2f69-440f-b84e-6bcbd9bd81b9_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:45 np0005539564 nova_compute[226295]: 2025-11-29 08:20:45.859 226310 DEBUG oslo_concurrency.processutils [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/837cb339-2f69-440f-b84e-6bcbd9bd81b9/disk.config.rescue 837cb339-2f69-440f-b84e-6bcbd9bd81b9_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:46 np0005539564 nova_compute[226295]: 2025-11-29 08:20:46.096 226310 DEBUG oslo_concurrency.processutils [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/837cb339-2f69-440f-b84e-6bcbd9bd81b9/disk.config.rescue 837cb339-2f69-440f-b84e-6bcbd9bd81b9_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.237s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:46 np0005539564 nova_compute[226295]: 2025-11-29 08:20:46.097 226310 INFO nova.virt.libvirt.driver [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Deleting local config drive /var/lib/nova/instances/837cb339-2f69-440f-b84e-6bcbd9bd81b9/disk.config.rescue because it was imported into RBD.#033[00m
Nov 29 03:20:46 np0005539564 kernel: tap7816a3c3-39: entered promiscuous mode
Nov 29 03:20:46 np0005539564 NetworkManager[48997]: <info>  [1764404446.1845] manager: (tap7816a3c3-39): new Tun device (/org/freedesktop/NetworkManager/Devices/248)
Nov 29 03:20:46 np0005539564 nova_compute[226295]: 2025-11-29 08:20:46.183 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:46 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:46Z|00509|binding|INFO|Claiming lport 7816a3c3-3902-49f4-9c94-cf9bbfc6b25c for this chassis.
Nov 29 03:20:46 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:46Z|00510|binding|INFO|7816a3c3-3902-49f4-9c94-cf9bbfc6b25c: Claiming fa:16:3e:6c:63:ec 10.100.0.12
Nov 29 03:20:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:46.190 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:63:ec 10.100.0.12'], port_security=['fa:16:3e:6c:63:ec 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '837cb339-2f69-440f-b84e-6bcbd9bd81b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9d4c81989d641678300c7a1c173a2c2', 'neutron:revision_number': '5', 'neutron:security_group_ids': '96ed61db-551b-4509-9cdf-2499e8e15e48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9df4a573-88b3-436c-84aa-f335285d9a2a, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=7816a3c3-3902-49f4-9c94-cf9bbfc6b25c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:20:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:46.192 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 7816a3c3-3902-49f4-9c94-cf9bbfc6b25c in datapath 3c25940b-e63b-4443-a94b-0216a35e8dc6 bound to our chassis#033[00m
Nov 29 03:20:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:46.195 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c25940b-e63b-4443-a94b-0216a35e8dc6#033[00m
Nov 29 03:20:46 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:46Z|00511|binding|INFO|Setting lport 7816a3c3-3902-49f4-9c94-cf9bbfc6b25c ovn-installed in OVS
Nov 29 03:20:46 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:46Z|00512|binding|INFO|Setting lport 7816a3c3-3902-49f4-9c94-cf9bbfc6b25c up in Southbound
Nov 29 03:20:46 np0005539564 nova_compute[226295]: 2025-11-29 08:20:46.207 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:46 np0005539564 nova_compute[226295]: 2025-11-29 08:20:46.210 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:46.212 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[35b1c36e-d97e-4df8-b47f-3f71a3a1ebb0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:46 np0005539564 systemd-udevd[278303]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:20:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:46 np0005539564 NetworkManager[48997]: <info>  [1764404446.2325] device (tap7816a3c3-39): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:20:46 np0005539564 systemd-machined[190128]: New machine qemu-65-instance-0000008d.
Nov 29 03:20:46 np0005539564 NetworkManager[48997]: <info>  [1764404446.2349] device (tap7816a3c3-39): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:20:46 np0005539564 systemd[1]: Started Virtual Machine qemu-65-instance-0000008d.
Nov 29 03:20:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:46.255 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[42ede095-bf7a-45fb-8cfd-37c4e3fdb272]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:46.261 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[2244c958-ce53-45f5-98de-f0d93ac458f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:46.300 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[590c918b-6ea6-4ece-81b4-5e2392265670]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:46.318 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ad4dc07a-3a96-4522-b8b6-4d77fceacbf4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c25940b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:38:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724265, 'reachable_time': 18141, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278316, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:46.336 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f3408295-4838-4ffb-a139-75b181deb4ea]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3c25940b-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 724279, 'tstamp': 724279}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278318, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3c25940b-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 724282, 'tstamp': 724282}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278318, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:46.338 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c25940b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:46 np0005539564 nova_compute[226295]: 2025-11-29 08:20:46.340 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:46 np0005539564 nova_compute[226295]: 2025-11-29 08:20:46.340 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:46.341 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c25940b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:46.341 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:46.342 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c25940b-e0, col_values=(('external_ids', {'iface-id': '9da51447-ee5a-4659-ba78-deb4b11b4098'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:46.342 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:20:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 43K writes, 170K keys, 43K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s#012Cumulative WAL: 43K writes, 15K syncs, 2.84 writes per sync, written: 0.17 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 40.73 MB, 0.07 MB/s#012Interval WAL: 10K writes, 4431 syncs, 2.43 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:20:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:46.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:46 np0005539564 nova_compute[226295]: 2025-11-29 08:20:46.735 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Removed pending event for 837cb339-2f69-440f-b84e-6bcbd9bd81b9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:20:46 np0005539564 nova_compute[226295]: 2025-11-29 08:20:46.736 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404446.7347796, 837cb339-2f69-440f-b84e-6bcbd9bd81b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:46 np0005539564 nova_compute[226295]: 2025-11-29 08:20:46.736 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:20:46 np0005539564 nova_compute[226295]: 2025-11-29 08:20:46.744 226310 DEBUG nova.compute.manager [None req-569391f7-1513-4aab-b596-12bfb52261f1 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:46 np0005539564 nova_compute[226295]: 2025-11-29 08:20:46.782 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:46 np0005539564 nova_compute[226295]: 2025-11-29 08:20:46.786 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:20:46 np0005539564 nova_compute[226295]: 2025-11-29 08:20:46.826 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Nov 29 03:20:46 np0005539564 nova_compute[226295]: 2025-11-29 08:20:46.827 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404446.7363954, 837cb339-2f69-440f-b84e-6bcbd9bd81b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:46 np0005539564 nova_compute[226295]: 2025-11-29 08:20:46.827 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] VM Started (Lifecycle Event)#033[00m
Nov 29 03:20:46 np0005539564 nova_compute[226295]: 2025-11-29 08:20:46.847 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:46 np0005539564 nova_compute[226295]: 2025-11-29 08:20:46.851 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:20:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:46.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:47 np0005539564 nova_compute[226295]: 2025-11-29 08:20:47.052 226310 DEBUG nova.compute.manager [req-eb1de766-3614-43b2-85ac-f8b6c0498010 req-66ac018e-3d46-41e5-a5b9-75e6e27b8bd5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Received event network-vif-plugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:47 np0005539564 nova_compute[226295]: 2025-11-29 08:20:47.052 226310 DEBUG oslo_concurrency.lockutils [req-eb1de766-3614-43b2-85ac-f8b6c0498010 req-66ac018e-3d46-41e5-a5b9-75e6e27b8bd5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:47 np0005539564 nova_compute[226295]: 2025-11-29 08:20:47.053 226310 DEBUG oslo_concurrency.lockutils [req-eb1de766-3614-43b2-85ac-f8b6c0498010 req-66ac018e-3d46-41e5-a5b9-75e6e27b8bd5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:47 np0005539564 nova_compute[226295]: 2025-11-29 08:20:47.053 226310 DEBUG oslo_concurrency.lockutils [req-eb1de766-3614-43b2-85ac-f8b6c0498010 req-66ac018e-3d46-41e5-a5b9-75e6e27b8bd5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:47 np0005539564 nova_compute[226295]: 2025-11-29 08:20:47.053 226310 DEBUG nova.compute.manager [req-eb1de766-3614-43b2-85ac-f8b6c0498010 req-66ac018e-3d46-41e5-a5b9-75e6e27b8bd5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] No waiting events found dispatching network-vif-plugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:47 np0005539564 nova_compute[226295]: 2025-11-29 08:20:47.053 226310 WARNING nova.compute.manager [req-eb1de766-3614-43b2-85ac-f8b6c0498010 req-66ac018e-3d46-41e5-a5b9-75e6e27b8bd5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Received unexpected event network-vif-plugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c for instance with vm_state rescued and task_state None.#033[00m
Nov 29 03:20:47 np0005539564 nova_compute[226295]: 2025-11-29 08:20:47.196 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:47 np0005539564 nova_compute[226295]: 2025-11-29 08:20:47.646 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:48.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:48.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:49 np0005539564 nova_compute[226295]: 2025-11-29 08:20:49.020 226310 INFO nova.compute.manager [None req-961f59b5-d00d-4955-9067-236fb16dae85 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Unrescuing#033[00m
Nov 29 03:20:49 np0005539564 nova_compute[226295]: 2025-11-29 08:20:49.021 226310 DEBUG oslo_concurrency.lockutils [None req-961f59b5-d00d-4955-9067-236fb16dae85 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "refresh_cache-837cb339-2f69-440f-b84e-6bcbd9bd81b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:49 np0005539564 nova_compute[226295]: 2025-11-29 08:20:49.021 226310 DEBUG oslo_concurrency.lockutils [None req-961f59b5-d00d-4955-9067-236fb16dae85 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquired lock "refresh_cache-837cb339-2f69-440f-b84e-6bcbd9bd81b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:49 np0005539564 nova_compute[226295]: 2025-11-29 08:20:49.021 226310 DEBUG nova.network.neutron [None req-961f59b5-d00d-4955-9067-236fb16dae85 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:20:49 np0005539564 nova_compute[226295]: 2025-11-29 08:20:49.165 226310 DEBUG nova.compute.manager [req-89e1836d-b70e-4e3f-a27d-2e52aef12dad req-f3959f1c-ed03-4064-bc14-cda668c07878 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Received event network-vif-plugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:49 np0005539564 nova_compute[226295]: 2025-11-29 08:20:49.166 226310 DEBUG oslo_concurrency.lockutils [req-89e1836d-b70e-4e3f-a27d-2e52aef12dad req-f3959f1c-ed03-4064-bc14-cda668c07878 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:49 np0005539564 nova_compute[226295]: 2025-11-29 08:20:49.166 226310 DEBUG oslo_concurrency.lockutils [req-89e1836d-b70e-4e3f-a27d-2e52aef12dad req-f3959f1c-ed03-4064-bc14-cda668c07878 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:49 np0005539564 nova_compute[226295]: 2025-11-29 08:20:49.167 226310 DEBUG oslo_concurrency.lockutils [req-89e1836d-b70e-4e3f-a27d-2e52aef12dad req-f3959f1c-ed03-4064-bc14-cda668c07878 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:49 np0005539564 nova_compute[226295]: 2025-11-29 08:20:49.167 226310 DEBUG nova.compute.manager [req-89e1836d-b70e-4e3f-a27d-2e52aef12dad req-f3959f1c-ed03-4064-bc14-cda668c07878 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] No waiting events found dispatching network-vif-plugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:49 np0005539564 nova_compute[226295]: 2025-11-29 08:20:49.168 226310 WARNING nova.compute.manager [req-89e1836d-b70e-4e3f-a27d-2e52aef12dad req-f3959f1c-ed03-4064-bc14-cda668c07878 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Received unexpected event network-vif-plugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:20:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:50.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:50 np0005539564 nova_compute[226295]: 2025-11-29 08:20:50.547 226310 DEBUG nova.network.neutron [None req-961f59b5-d00d-4955-9067-236fb16dae85 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Updating instance_info_cache with network_info: [{"id": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "address": "fa:16:3e:6c:63:ec", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7816a3c3-39", "ovs_interfaceid": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:50 np0005539564 nova_compute[226295]: 2025-11-29 08:20:50.571 226310 DEBUG oslo_concurrency.lockutils [None req-961f59b5-d00d-4955-9067-236fb16dae85 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Releasing lock "refresh_cache-837cb339-2f69-440f-b84e-6bcbd9bd81b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:20:50 np0005539564 nova_compute[226295]: 2025-11-29 08:20:50.572 226310 DEBUG nova.objects.instance [None req-961f59b5-d00d-4955-9067-236fb16dae85 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'flavor' on Instance uuid 837cb339-2f69-440f-b84e-6bcbd9bd81b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:50 np0005539564 kernel: tap7816a3c3-39 (unregistering): left promiscuous mode
Nov 29 03:20:50 np0005539564 NetworkManager[48997]: <info>  [1764404450.6460] device (tap7816a3c3-39): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:20:50 np0005539564 nova_compute[226295]: 2025-11-29 08:20:50.687 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:50 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:50Z|00513|binding|INFO|Releasing lport 7816a3c3-3902-49f4-9c94-cf9bbfc6b25c from this chassis (sb_readonly=0)
Nov 29 03:20:50 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:50Z|00514|binding|INFO|Setting lport 7816a3c3-3902-49f4-9c94-cf9bbfc6b25c down in Southbound
Nov 29 03:20:50 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:50Z|00515|binding|INFO|Removing iface tap7816a3c3-39 ovn-installed in OVS
Nov 29 03:20:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:50.699 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:63:ec 10.100.0.12'], port_security=['fa:16:3e:6c:63:ec 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '837cb339-2f69-440f-b84e-6bcbd9bd81b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9d4c81989d641678300c7a1c173a2c2', 'neutron:revision_number': '6', 'neutron:security_group_ids': '96ed61db-551b-4509-9cdf-2499e8e15e48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9df4a573-88b3-436c-84aa-f335285d9a2a, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=7816a3c3-3902-49f4-9c94-cf9bbfc6b25c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:20:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:50.700 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 7816a3c3-3902-49f4-9c94-cf9bbfc6b25c in datapath 3c25940b-e63b-4443-a94b-0216a35e8dc6 unbound from our chassis#033[00m
Nov 29 03:20:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:50.702 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c25940b-e63b-4443-a94b-0216a35e8dc6#033[00m
Nov 29 03:20:50 np0005539564 nova_compute[226295]: 2025-11-29 08:20:50.712 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:50.716 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[69aeb18b-7ba0-438f-b533-ba874a3428ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:50.745 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[da6e8283-1c2e-4e35-8a18-5bd81af18cbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:50.748 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[25853e4a-000f-4f48-9f4f-1a3bdc16e33b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:50 np0005539564 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Nov 29 03:20:50 np0005539564 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000008d.scope: Consumed 4.676s CPU time.
Nov 29 03:20:50 np0005539564 systemd-machined[190128]: Machine qemu-65-instance-0000008d terminated.
Nov 29 03:20:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:50.775 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[ae15c593-213e-4079-80ec-6d84b23f38ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:50.790 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3cfdc4e9-16d6-4a0b-aca6-9d3e65c7dbcd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c25940b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:38:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 14, 'rx_bytes': 700, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 14, 'rx_bytes': 700, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724265, 'reachable_time': 23996, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278391, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:50.804 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5bcb7549-49f0-481e-baa2-acc487a3e42e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3c25940b-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 724279, 'tstamp': 724279}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278392, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3c25940b-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 724282, 'tstamp': 724282}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278392, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:50.805 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c25940b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:50 np0005539564 nova_compute[226295]: 2025-11-29 08:20:50.807 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:50 np0005539564 nova_compute[226295]: 2025-11-29 08:20:50.814 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:50.815 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c25940b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:50.815 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:50.815 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c25940b-e0, col_values=(('external_ids', {'iface-id': '9da51447-ee5a-4659-ba78-deb4b11b4098'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:50.816 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:50 np0005539564 nova_compute[226295]: 2025-11-29 08:20:50.829 226310 INFO nova.virt.libvirt.driver [-] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Instance destroyed successfully.#033[00m
Nov 29 03:20:50 np0005539564 nova_compute[226295]: 2025-11-29 08:20:50.829 226310 DEBUG nova.objects.instance [None req-961f59b5-d00d-4955-9067-236fb16dae85 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'numa_topology' on Instance uuid 837cb339-2f69-440f-b84e-6bcbd9bd81b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:50 np0005539564 kernel: tap7816a3c3-39: entered promiscuous mode
Nov 29 03:20:50 np0005539564 systemd-udevd[278383]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:20:50 np0005539564 NetworkManager[48997]: <info>  [1764404450.9051] manager: (tap7816a3c3-39): new Tun device (/org/freedesktop/NetworkManager/Devices/249)
Nov 29 03:20:50 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:50Z|00516|binding|INFO|Claiming lport 7816a3c3-3902-49f4-9c94-cf9bbfc6b25c for this chassis.
Nov 29 03:20:50 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:50Z|00517|binding|INFO|7816a3c3-3902-49f4-9c94-cf9bbfc6b25c: Claiming fa:16:3e:6c:63:ec 10.100.0.12
Nov 29 03:20:50 np0005539564 nova_compute[226295]: 2025-11-29 08:20:50.906 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:50 np0005539564 NetworkManager[48997]: <info>  [1764404450.9170] device (tap7816a3c3-39): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:20:50 np0005539564 NetworkManager[48997]: <info>  [1764404450.9184] device (tap7816a3c3-39): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:20:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:50.918 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:63:ec 10.100.0.12'], port_security=['fa:16:3e:6c:63:ec 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '837cb339-2f69-440f-b84e-6bcbd9bd81b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9d4c81989d641678300c7a1c173a2c2', 'neutron:revision_number': '6', 'neutron:security_group_ids': '96ed61db-551b-4509-9cdf-2499e8e15e48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9df4a573-88b3-436c-84aa-f335285d9a2a, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=7816a3c3-3902-49f4-9c94-cf9bbfc6b25c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:20:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:50.920 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 7816a3c3-3902-49f4-9c94-cf9bbfc6b25c in datapath 3c25940b-e63b-4443-a94b-0216a35e8dc6 bound to our chassis#033[00m
Nov 29 03:20:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:50.924 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c25940b-e63b-4443-a94b-0216a35e8dc6#033[00m
Nov 29 03:20:50 np0005539564 systemd-machined[190128]: New machine qemu-66-instance-0000008d.
Nov 29 03:20:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:50.940 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[20d68190-2c66-4e29-9c76-6c1fddcba42b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:50 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:50Z|00518|binding|INFO|Setting lport 7816a3c3-3902-49f4-9c94-cf9bbfc6b25c ovn-installed in OVS
Nov 29 03:20:50 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:50Z|00519|binding|INFO|Setting lport 7816a3c3-3902-49f4-9c94-cf9bbfc6b25c up in Southbound
Nov 29 03:20:50 np0005539564 nova_compute[226295]: 2025-11-29 08:20:50.947 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:50 np0005539564 nova_compute[226295]: 2025-11-29 08:20:50.948 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:50 np0005539564 systemd[1]: Started Virtual Machine qemu-66-instance-0000008d.
Nov 29 03:20:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:20:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:50.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:20:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:50.982 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[92271b74-0fa1-40db-8bf8-301a9dabe55d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:50.985 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[0de7b1c2-b6db-4585-8c6f-37fa3df7346e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:51.013 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[951b7de9-d87f-4a40-a900-697786022725]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:51.032 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[644bdbe4-c75e-409a-a480-1c9ecd4fe87b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c25940b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:38:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 16, 'rx_bytes': 700, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 16, 'rx_bytes': 700, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724265, 'reachable_time': 23996, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278428, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:51.065 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c7d6b8c1-dc55-45c1-82c6-5a5f338700da]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3c25940b-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 724279, 'tstamp': 724279}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278430, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3c25940b-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 724282, 'tstamp': 724282}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278430, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:51.066 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c25940b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:51 np0005539564 nova_compute[226295]: 2025-11-29 08:20:51.067 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:51.069 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c25940b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:51.069 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:51.069 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c25940b-e0, col_values=(('external_ids', {'iface-id': '9da51447-ee5a-4659-ba78-deb4b11b4098'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:51.069 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:51 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:51 np0005539564 nova_compute[226295]: 2025-11-29 08:20:51.262 226310 DEBUG nova.compute.manager [req-ea5d6304-3d0f-4e8c-917f-87e8965204b9 req-a3a945b7-c71a-4cd8-b017-39a6ab25f345 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Received event network-vif-unplugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:51 np0005539564 nova_compute[226295]: 2025-11-29 08:20:51.262 226310 DEBUG oslo_concurrency.lockutils [req-ea5d6304-3d0f-4e8c-917f-87e8965204b9 req-a3a945b7-c71a-4cd8-b017-39a6ab25f345 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:51 np0005539564 nova_compute[226295]: 2025-11-29 08:20:51.263 226310 DEBUG oslo_concurrency.lockutils [req-ea5d6304-3d0f-4e8c-917f-87e8965204b9 req-a3a945b7-c71a-4cd8-b017-39a6ab25f345 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:51 np0005539564 nova_compute[226295]: 2025-11-29 08:20:51.264 226310 DEBUG oslo_concurrency.lockutils [req-ea5d6304-3d0f-4e8c-917f-87e8965204b9 req-a3a945b7-c71a-4cd8-b017-39a6ab25f345 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:51 np0005539564 nova_compute[226295]: 2025-11-29 08:20:51.264 226310 DEBUG nova.compute.manager [req-ea5d6304-3d0f-4e8c-917f-87e8965204b9 req-a3a945b7-c71a-4cd8-b017-39a6ab25f345 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] No waiting events found dispatching network-vif-unplugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:51 np0005539564 nova_compute[226295]: 2025-11-29 08:20:51.265 226310 WARNING nova.compute.manager [req-ea5d6304-3d0f-4e8c-917f-87e8965204b9 req-a3a945b7-c71a-4cd8-b017-39a6ab25f345 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Received unexpected event network-vif-unplugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:20:51 np0005539564 nova_compute[226295]: 2025-11-29 08:20:51.265 226310 DEBUG nova.compute.manager [req-ea5d6304-3d0f-4e8c-917f-87e8965204b9 req-a3a945b7-c71a-4cd8-b017-39a6ab25f345 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Received event network-vif-plugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:51 np0005539564 nova_compute[226295]: 2025-11-29 08:20:51.266 226310 DEBUG oslo_concurrency.lockutils [req-ea5d6304-3d0f-4e8c-917f-87e8965204b9 req-a3a945b7-c71a-4cd8-b017-39a6ab25f345 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:51 np0005539564 nova_compute[226295]: 2025-11-29 08:20:51.266 226310 DEBUG oslo_concurrency.lockutils [req-ea5d6304-3d0f-4e8c-917f-87e8965204b9 req-a3a945b7-c71a-4cd8-b017-39a6ab25f345 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:51 np0005539564 nova_compute[226295]: 2025-11-29 08:20:51.267 226310 DEBUG oslo_concurrency.lockutils [req-ea5d6304-3d0f-4e8c-917f-87e8965204b9 req-a3a945b7-c71a-4cd8-b017-39a6ab25f345 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:51 np0005539564 nova_compute[226295]: 2025-11-29 08:20:51.267 226310 DEBUG nova.compute.manager [req-ea5d6304-3d0f-4e8c-917f-87e8965204b9 req-a3a945b7-c71a-4cd8-b017-39a6ab25f345 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] No waiting events found dispatching network-vif-plugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:51 np0005539564 nova_compute[226295]: 2025-11-29 08:20:51.268 226310 WARNING nova.compute.manager [req-ea5d6304-3d0f-4e8c-917f-87e8965204b9 req-a3a945b7-c71a-4cd8-b017-39a6ab25f345 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Received unexpected event network-vif-plugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:20:51 np0005539564 nova_compute[226295]: 2025-11-29 08:20:51.360 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Removed pending event for 837cb339-2f69-440f-b84e-6bcbd9bd81b9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:20:51 np0005539564 nova_compute[226295]: 2025-11-29 08:20:51.360 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404451.3599958, 837cb339-2f69-440f-b84e-6bcbd9bd81b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:20:51 np0005539564 nova_compute[226295]: 2025-11-29 08:20:51.361 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] VM Resumed (Lifecycle Event)
Nov 29 03:20:51 np0005539564 nova_compute[226295]: 2025-11-29 08:20:51.405 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:20:51 np0005539564 nova_compute[226295]: 2025-11-29 08:20:51.409 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:20:51 np0005539564 nova_compute[226295]: 2025-11-29 08:20:51.434 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] During sync_power_state the instance has a pending task (unrescuing). Skip.
Nov 29 03:20:51 np0005539564 nova_compute[226295]: 2025-11-29 08:20:51.434 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404451.3622804, 837cb339-2f69-440f-b84e-6bcbd9bd81b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:20:51 np0005539564 nova_compute[226295]: 2025-11-29 08:20:51.435 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] VM Started (Lifecycle Event)
Nov 29 03:20:51 np0005539564 nova_compute[226295]: 2025-11-29 08:20:51.469 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:20:51 np0005539564 nova_compute[226295]: 2025-11-29 08:20:51.472 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:20:51 np0005539564 nova_compute[226295]: 2025-11-29 08:20:51.506 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] During sync_power_state the instance has a pending task (unrescuing). Skip.
Nov 29 03:20:51 np0005539564 nova_compute[226295]: 2025-11-29 08:20:51.793 226310 DEBUG nova.compute.manager [None req-961f59b5-d00d-4955-9067-236fb16dae85 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:20:52 np0005539564 nova_compute[226295]: 2025-11-29 08:20:52.066 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:20:52 np0005539564 nova_compute[226295]: 2025-11-29 08:20:52.200 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:52 np0005539564 nova_compute[226295]: 2025-11-29 08:20:52.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:20:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:52.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:52 np0005539564 nova_compute[226295]: 2025-11-29 08:20:52.648 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:52.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:53 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:53Z|00520|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.385 226310 DEBUG nova.compute.manager [req-bc9d67ed-241f-498a-ba52-3da867e12ef9 req-e861d415-307e-4725-b5b9-1fc8b4e46127 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Received event network-vif-plugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.385 226310 DEBUG oslo_concurrency.lockutils [req-bc9d67ed-241f-498a-ba52-3da867e12ef9 req-e861d415-307e-4725-b5b9-1fc8b4e46127 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.386 226310 DEBUG oslo_concurrency.lockutils [req-bc9d67ed-241f-498a-ba52-3da867e12ef9 req-e861d415-307e-4725-b5b9-1fc8b4e46127 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.386 226310 DEBUG oslo_concurrency.lockutils [req-bc9d67ed-241f-498a-ba52-3da867e12ef9 req-e861d415-307e-4725-b5b9-1fc8b4e46127 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.386 226310 DEBUG nova.compute.manager [req-bc9d67ed-241f-498a-ba52-3da867e12ef9 req-e861d415-307e-4725-b5b9-1fc8b4e46127 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] No waiting events found dispatching network-vif-plugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.387 226310 WARNING nova.compute.manager [req-bc9d67ed-241f-498a-ba52-3da867e12ef9 req-e861d415-307e-4725-b5b9-1fc8b4e46127 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Received unexpected event network-vif-plugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c for instance with vm_state active and task_state deleting.
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.388 226310 DEBUG nova.compute.manager [req-bc9d67ed-241f-498a-ba52-3da867e12ef9 req-e861d415-307e-4725-b5b9-1fc8b4e46127 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Received event network-vif-plugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.388 226310 DEBUG oslo_concurrency.lockutils [req-bc9d67ed-241f-498a-ba52-3da867e12ef9 req-e861d415-307e-4725-b5b9-1fc8b4e46127 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.388 226310 DEBUG oslo_concurrency.lockutils [req-bc9d67ed-241f-498a-ba52-3da867e12ef9 req-e861d415-307e-4725-b5b9-1fc8b4e46127 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.389 226310 DEBUG oslo_concurrency.lockutils [req-bc9d67ed-241f-498a-ba52-3da867e12ef9 req-e861d415-307e-4725-b5b9-1fc8b4e46127 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.389 226310 DEBUG nova.compute.manager [req-bc9d67ed-241f-498a-ba52-3da867e12ef9 req-e861d415-307e-4725-b5b9-1fc8b4e46127 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] No waiting events found dispatching network-vif-plugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.390 226310 WARNING nova.compute.manager [req-bc9d67ed-241f-498a-ba52-3da867e12ef9 req-e861d415-307e-4725-b5b9-1fc8b4e46127 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Received unexpected event network-vif-plugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c for instance with vm_state active and task_state deleting.
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.408 226310 DEBUG oslo_concurrency.lockutils [None req-d77ee7d5-49ca-4c87-80d0-6ee2f92601bb 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.409 226310 DEBUG oslo_concurrency.lockutils [None req-d77ee7d5-49ca-4c87-80d0-6ee2f92601bb 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.410 226310 DEBUG oslo_concurrency.lockutils [None req-d77ee7d5-49ca-4c87-80d0-6ee2f92601bb 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.410 226310 DEBUG oslo_concurrency.lockutils [None req-d77ee7d5-49ca-4c87-80d0-6ee2f92601bb 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.411 226310 DEBUG oslo_concurrency.lockutils [None req-d77ee7d5-49ca-4c87-80d0-6ee2f92601bb 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.412 226310 INFO nova.compute.manager [None req-d77ee7d5-49ca-4c87-80d0-6ee2f92601bb 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Terminating instance
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.415 226310 DEBUG nova.compute.manager [None req-d77ee7d5-49ca-4c87-80d0-6ee2f92601bb 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:20:53 np0005539564 kernel: tap7816a3c3-39 (unregistering): left promiscuous mode
Nov 29 03:20:53 np0005539564 NetworkManager[48997]: <info>  [1764404453.4712] device (tap7816a3c3-39): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:20:53 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:53Z|00521|binding|INFO|Releasing lport 7816a3c3-3902-49f4-9c94-cf9bbfc6b25c from this chassis (sb_readonly=0)
Nov 29 03:20:53 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:53Z|00522|binding|INFO|Setting lport 7816a3c3-3902-49f4-9c94-cf9bbfc6b25c down in Southbound
Nov 29 03:20:53 np0005539564 ovn_controller[130591]: 2025-11-29T08:20:53Z|00523|binding|INFO|Removing iface tap7816a3c3-39 ovn-installed in OVS
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.488 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:53.497 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:63:ec 10.100.0.12'], port_security=['fa:16:3e:6c:63:ec 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '837cb339-2f69-440f-b84e-6bcbd9bd81b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9d4c81989d641678300c7a1c173a2c2', 'neutron:revision_number': '8', 'neutron:security_group_ids': '96ed61db-551b-4509-9cdf-2499e8e15e48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9df4a573-88b3-436c-84aa-f335285d9a2a, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=7816a3c3-3902-49f4-9c94-cf9bbfc6b25c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:20:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:53.499 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 7816a3c3-3902-49f4-9c94-cf9bbfc6b25c in datapath 3c25940b-e63b-4443-a94b-0216a35e8dc6 unbound from our chassis
Nov 29 03:20:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:53.503 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c25940b-e63b-4443-a94b-0216a35e8dc6
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.512 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:53 np0005539564 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Nov 29 03:20:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:53.525 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7f30648a-e035-4305-8d3b-a900efa96d0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:20:53 np0005539564 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000008d.scope: Consumed 2.653s CPU time.
Nov 29 03:20:53 np0005539564 systemd-machined[190128]: Machine qemu-66-instance-0000008d terminated.
Nov 29 03:20:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:53.572 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[feee656e-773b-47f6-a2a3-72559b86ae50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:20:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:53.577 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[c3208462-f970-4b61-8c5d-436e47b4d387]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:20:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:53.617 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[d97f09a9-815c-4987-bcbf-5df3c8f9d910]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:20:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:53.641 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[54d49ec6-d756-4b20-a254-dc7daa458f15]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c25940b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:38:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 18, 'rx_bytes': 700, 'tx_bytes': 948, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 18, 'rx_bytes': 700, 'tx_bytes': 948, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724265, 'reachable_time': 23996, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278501, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.660 226310 INFO nova.virt.libvirt.driver [-] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Instance destroyed successfully.
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.661 226310 DEBUG nova.objects.instance [None req-d77ee7d5-49ca-4c87-80d0-6ee2f92601bb 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'resources' on Instance uuid 837cb339-2f69-440f-b84e-6bcbd9bd81b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:20:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:53.671 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6f1ed6a7-847e-4304-9797-92010f03b4bd]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3c25940b-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 724279, 'tstamp': 724279}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278509, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3c25940b-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 724282, 'tstamp': 724282}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278509, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:20:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:53.673 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c25940b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.675 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.680 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:53.682 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c25940b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:20:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:53.682 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:20:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:53.683 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c25940b-e0, col_values=(('external_ids', {'iface-id': '9da51447-ee5a-4659-ba78-deb4b11b4098'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:20:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:20:53.684 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.697 226310 DEBUG nova.virt.libvirt.vif [None req-d77ee7d5-49ca-4c87-80d0-6ee2f92601bb 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:20:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-773780986',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-773780986',id=141,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:20:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b9d4c81989d641678300c7a1c173a2c2',ramdisk_id='',reservation_id='r-luhjm0w0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',ima
ge_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1019923576',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1019923576-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:20:51Z,user_data=None,user_id='504bc6adabad4f7d8c17b0438c4d9be7',uuid=837cb339-2f69-440f-b84e-6bcbd9bd81b9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "address": "fa:16:3e:6c:63:ec", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7816a3c3-39", "ovs_interfaceid": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.697 226310 DEBUG nova.network.os_vif_util [None req-d77ee7d5-49ca-4c87-80d0-6ee2f92601bb 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converting VIF {"id": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "address": "fa:16:3e:6c:63:ec", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7816a3c3-39", "ovs_interfaceid": "7816a3c3-3902-49f4-9c94-cf9bbfc6b25c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.698 226310 DEBUG nova.network.os_vif_util [None req-d77ee7d5-49ca-4c87-80d0-6ee2f92601bb 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6c:63:ec,bridge_name='br-int',has_traffic_filtering=True,id=7816a3c3-3902-49f4-9c94-cf9bbfc6b25c,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7816a3c3-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.698 226310 DEBUG os_vif [None req-d77ee7d5-49ca-4c87-80d0-6ee2f92601bb 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:63:ec,bridge_name='br-int',has_traffic_filtering=True,id=7816a3c3-3902-49f4-9c94-cf9bbfc6b25c,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7816a3c3-39') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.700 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.700 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7816a3c3-39, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.701 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.703 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.706 226310 INFO os_vif [None req-d77ee7d5-49ca-4c87-80d0-6ee2f92601bb 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:63:ec,bridge_name='br-int',has_traffic_filtering=True,id=7816a3c3-3902-49f4-9c94-cf9bbfc6b25c,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7816a3c3-39')
Nov 29 03:20:53 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.945 226310 INFO nova.virt.libvirt.driver [None req-d77ee7d5-49ca-4c87-80d0-6ee2f92601bb 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Deleting instance files /var/lib/nova/instances/837cb339-2f69-440f-b84e-6bcbd9bd81b9_del#033[00m
Nov 29 03:20:53 np0005539564 nova_compute[226295]: 2025-11-29 08:20:53.946 226310 INFO nova.virt.libvirt.driver [None req-d77ee7d5-49ca-4c87-80d0-6ee2f92601bb 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Deletion of /var/lib/nova/instances/837cb339-2f69-440f-b84e-6bcbd9bd81b9_del complete#033[00m
Nov 29 03:20:54 np0005539564 nova_compute[226295]: 2025-11-29 08:20:54.001 226310 INFO nova.compute.manager [None req-d77ee7d5-49ca-4c87-80d0-6ee2f92601bb 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Took 0.59 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:20:54 np0005539564 nova_compute[226295]: 2025-11-29 08:20:54.002 226310 DEBUG oslo.service.loopingcall [None req-d77ee7d5-49ca-4c87-80d0-6ee2f92601bb 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:20:54 np0005539564 nova_compute[226295]: 2025-11-29 08:20:54.002 226310 DEBUG nova.compute.manager [-] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:20:54 np0005539564 nova_compute[226295]: 2025-11-29 08:20:54.002 226310 DEBUG nova.network.neutron [-] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:20:54 np0005539564 nova_compute[226295]: 2025-11-29 08:20:54.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:54 np0005539564 nova_compute[226295]: 2025-11-29 08:20:54.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:54 np0005539564 nova_compute[226295]: 2025-11-29 08:20:54.472 226310 DEBUG oslo_concurrency.lockutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:54 np0005539564 nova_compute[226295]: 2025-11-29 08:20:54.472 226310 DEBUG oslo_concurrency.lockutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:54 np0005539564 nova_compute[226295]: 2025-11-29 08:20:54.499 226310 DEBUG nova.compute.manager [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:20:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:54.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:54 np0005539564 nova_compute[226295]: 2025-11-29 08:20:54.591 226310 DEBUG oslo_concurrency.lockutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:54 np0005539564 nova_compute[226295]: 2025-11-29 08:20:54.592 226310 DEBUG oslo_concurrency.lockutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:54 np0005539564 nova_compute[226295]: 2025-11-29 08:20:54.603 226310 DEBUG nova.virt.hardware [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:20:54 np0005539564 nova_compute[226295]: 2025-11-29 08:20:54.604 226310 INFO nova.compute.claims [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:20:54 np0005539564 nova_compute[226295]: 2025-11-29 08:20:54.803 226310 DEBUG oslo_concurrency.processutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:54.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:20:55 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/181383023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:20:55 np0005539564 nova_compute[226295]: 2025-11-29 08:20:55.256 226310 DEBUG oslo_concurrency.processutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:55 np0005539564 nova_compute[226295]: 2025-11-29 08:20:55.265 226310 DEBUG nova.compute.provider_tree [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:20:55 np0005539564 nova_compute[226295]: 2025-11-29 08:20:55.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:55 np0005539564 nova_compute[226295]: 2025-11-29 08:20:55.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:20:55 np0005539564 nova_compute[226295]: 2025-11-29 08:20:55.731 226310 DEBUG nova.scheduler.client.report [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:20:55 np0005539564 nova_compute[226295]: 2025-11-29 08:20:55.786 226310 DEBUG nova.compute.manager [req-40ca0a72-b85a-41bc-90a5-f868a7654e18 req-b858c201-c062-40b8-b59d-40690560ca61 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Received event network-vif-unplugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:55 np0005539564 nova_compute[226295]: 2025-11-29 08:20:55.787 226310 DEBUG oslo_concurrency.lockutils [req-40ca0a72-b85a-41bc-90a5-f868a7654e18 req-b858c201-c062-40b8-b59d-40690560ca61 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:55 np0005539564 nova_compute[226295]: 2025-11-29 08:20:55.788 226310 DEBUG oslo_concurrency.lockutils [req-40ca0a72-b85a-41bc-90a5-f868a7654e18 req-b858c201-c062-40b8-b59d-40690560ca61 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:55 np0005539564 nova_compute[226295]: 2025-11-29 08:20:55.788 226310 DEBUG oslo_concurrency.lockutils [req-40ca0a72-b85a-41bc-90a5-f868a7654e18 req-b858c201-c062-40b8-b59d-40690560ca61 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:55 np0005539564 nova_compute[226295]: 2025-11-29 08:20:55.789 226310 DEBUG nova.compute.manager [req-40ca0a72-b85a-41bc-90a5-f868a7654e18 req-b858c201-c062-40b8-b59d-40690560ca61 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] No waiting events found dispatching network-vif-unplugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:55 np0005539564 nova_compute[226295]: 2025-11-29 08:20:55.789 226310 DEBUG nova.compute.manager [req-40ca0a72-b85a-41bc-90a5-f868a7654e18 req-b858c201-c062-40b8-b59d-40690560ca61 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Received event network-vif-unplugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:20:55 np0005539564 nova_compute[226295]: 2025-11-29 08:20:55.790 226310 DEBUG nova.compute.manager [req-40ca0a72-b85a-41bc-90a5-f868a7654e18 req-b858c201-c062-40b8-b59d-40690560ca61 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Received event network-vif-plugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:55 np0005539564 nova_compute[226295]: 2025-11-29 08:20:55.790 226310 DEBUG oslo_concurrency.lockutils [req-40ca0a72-b85a-41bc-90a5-f868a7654e18 req-b858c201-c062-40b8-b59d-40690560ca61 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:55 np0005539564 nova_compute[226295]: 2025-11-29 08:20:55.790 226310 DEBUG oslo_concurrency.lockutils [req-40ca0a72-b85a-41bc-90a5-f868a7654e18 req-b858c201-c062-40b8-b59d-40690560ca61 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:55 np0005539564 nova_compute[226295]: 2025-11-29 08:20:55.791 226310 DEBUG oslo_concurrency.lockutils [req-40ca0a72-b85a-41bc-90a5-f868a7654e18 req-b858c201-c062-40b8-b59d-40690560ca61 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:55 np0005539564 nova_compute[226295]: 2025-11-29 08:20:55.791 226310 DEBUG nova.compute.manager [req-40ca0a72-b85a-41bc-90a5-f868a7654e18 req-b858c201-c062-40b8-b59d-40690560ca61 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] No waiting events found dispatching network-vif-plugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:55 np0005539564 nova_compute[226295]: 2025-11-29 08:20:55.791 226310 WARNING nova.compute.manager [req-40ca0a72-b85a-41bc-90a5-f868a7654e18 req-b858c201-c062-40b8-b59d-40690560ca61 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Received unexpected event network-vif-plugged-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:20:55 np0005539564 nova_compute[226295]: 2025-11-29 08:20:55.818 226310 DEBUG nova.network.neutron [-] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:55 np0005539564 nova_compute[226295]: 2025-11-29 08:20:55.835 226310 DEBUG oslo_concurrency.lockutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.243s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:55 np0005539564 nova_compute[226295]: 2025-11-29 08:20:55.836 226310 DEBUG nova.compute.manager [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:20:55 np0005539564 nova_compute[226295]: 2025-11-29 08:20:55.928 226310 INFO nova.compute.manager [-] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Took 1.93 seconds to deallocate network for instance.#033[00m
Nov 29 03:20:55 np0005539564 nova_compute[226295]: 2025-11-29 08:20:55.964 226310 DEBUG nova.compute.manager [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:20:55 np0005539564 nova_compute[226295]: 2025-11-29 08:20:55.965 226310 DEBUG nova.network.neutron [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:20:55 np0005539564 nova_compute[226295]: 2025-11-29 08:20:55.988 226310 INFO nova.virt.libvirt.driver [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:20:56 np0005539564 nova_compute[226295]: 2025-11-29 08:20:56.007 226310 DEBUG nova.compute.manager [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:20:56 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:56 np0005539564 nova_compute[226295]: 2025-11-29 08:20:56.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:56 np0005539564 nova_compute[226295]: 2025-11-29 08:20:56.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:20:56 np0005539564 nova_compute[226295]: 2025-11-29 08:20:56.428 226310 DEBUG nova.compute.manager [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:20:56 np0005539564 nova_compute[226295]: 2025-11-29 08:20:56.430 226310 DEBUG nova.virt.libvirt.driver [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:20:56 np0005539564 nova_compute[226295]: 2025-11-29 08:20:56.431 226310 INFO nova.virt.libvirt.driver [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Creating image(s)#033[00m
Nov 29 03:20:56 np0005539564 nova_compute[226295]: 2025-11-29 08:20:56.472 226310 DEBUG nova.storage.rbd_utils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:56 np0005539564 nova_compute[226295]: 2025-11-29 08:20:56.515 226310 DEBUG nova.storage.rbd_utils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:56.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:56 np0005539564 nova_compute[226295]: 2025-11-29 08:20:56.560 226310 DEBUG nova.storage.rbd_utils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:56 np0005539564 nova_compute[226295]: 2025-11-29 08:20:56.565 226310 DEBUG oslo_concurrency.processutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:56 np0005539564 nova_compute[226295]: 2025-11-29 08:20:56.612 226310 INFO nova.compute.manager [None req-d77ee7d5-49ca-4c87-80d0-6ee2f92601bb 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Took 0.68 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:20:56 np0005539564 nova_compute[226295]: 2025-11-29 08:20:56.626 226310 DEBUG nova.policy [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b52040d601a4a56abcaf3f046f1e349', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '358970eca7ad4b05b70f43e5507ac052', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:20:56 np0005539564 nova_compute[226295]: 2025-11-29 08:20:56.670 226310 DEBUG oslo_concurrency.processutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:56 np0005539564 nova_compute[226295]: 2025-11-29 08:20:56.670 226310 DEBUG oslo_concurrency.lockutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:56 np0005539564 nova_compute[226295]: 2025-11-29 08:20:56.671 226310 DEBUG oslo_concurrency.lockutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:56 np0005539564 nova_compute[226295]: 2025-11-29 08:20:56.672 226310 DEBUG oslo_concurrency.lockutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:56 np0005539564 nova_compute[226295]: 2025-11-29 08:20:56.712 226310 DEBUG nova.storage.rbd_utils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:56 np0005539564 nova_compute[226295]: 2025-11-29 08:20:56.718 226310 DEBUG oslo_concurrency.processutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:56 np0005539564 nova_compute[226295]: 2025-11-29 08:20:56.806 226310 DEBUG oslo_concurrency.lockutils [None req-d77ee7d5-49ca-4c87-80d0-6ee2f92601bb 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:56 np0005539564 nova_compute[226295]: 2025-11-29 08:20:56.808 226310 DEBUG oslo_concurrency.lockutils [None req-d77ee7d5-49ca-4c87-80d0-6ee2f92601bb 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:56 np0005539564 nova_compute[226295]: 2025-11-29 08:20:56.941 226310 DEBUG oslo_concurrency.processutils [None req-d77ee7d5-49ca-4c87-80d0-6ee2f92601bb 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:56.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:20:57 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2316166036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:20:57 np0005539564 nova_compute[226295]: 2025-11-29 08:20:57.427 226310 DEBUG oslo_concurrency.processutils [None req-d77ee7d5-49ca-4c87-80d0-6ee2f92601bb 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:57 np0005539564 nova_compute[226295]: 2025-11-29 08:20:57.435 226310 DEBUG nova.compute.provider_tree [None req-d77ee7d5-49ca-4c87-80d0-6ee2f92601bb 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:20:57 np0005539564 nova_compute[226295]: 2025-11-29 08:20:57.652 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:57 np0005539564 nova_compute[226295]: 2025-11-29 08:20:57.836 226310 DEBUG oslo_concurrency.processutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:57 np0005539564 nova_compute[226295]: 2025-11-29 08:20:57.924 226310 DEBUG nova.storage.rbd_utils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] resizing rbd image 63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:20:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:58.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:58 np0005539564 nova_compute[226295]: 2025-11-29 08:20:58.567 226310 DEBUG nova.scheduler.client.report [None req-d77ee7d5-49ca-4c87-80d0-6ee2f92601bb 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:20:58 np0005539564 nova_compute[226295]: 2025-11-29 08:20:58.620 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-38bba84b-1fb0-460a-a6aa-707ef29970b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:58 np0005539564 nova_compute[226295]: 2025-11-29 08:20:58.621 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-38bba84b-1fb0-460a-a6aa-707ef29970b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:58 np0005539564 nova_compute[226295]: 2025-11-29 08:20:58.621 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:20:58 np0005539564 nova_compute[226295]: 2025-11-29 08:20:58.742 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:58 np0005539564 nova_compute[226295]: 2025-11-29 08:20:58.872 226310 DEBUG nova.compute.manager [req-76ee283b-eb3f-4a34-8ba2-d91d177206d3 req-870bcadb-70ea-45e7-8bbb-9f5d3d5dcfac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Received event network-vif-deleted-7816a3c3-3902-49f4-9c94-cf9bbfc6b25c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:58 np0005539564 nova_compute[226295]: 2025-11-29 08:20:58.875 226310 DEBUG oslo_concurrency.lockutils [None req-d77ee7d5-49ca-4c87-80d0-6ee2f92601bb 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:20:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:58.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:59 np0005539564 nova_compute[226295]: 2025-11-29 08:20:59.332 226310 DEBUG nova.objects.instance [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'migration_context' on Instance uuid 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:00 np0005539564 nova_compute[226295]: 2025-11-29 08:21:00.318 226310 INFO nova.scheduler.client.report [None req-d77ee7d5-49ca-4c87-80d0-6ee2f92601bb 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Deleted allocations for instance 837cb339-2f69-440f-b84e-6bcbd9bd81b9#033[00m
Nov 29 03:21:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:00.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:00 np0005539564 nova_compute[226295]: 2025-11-29 08:21:00.954 226310 DEBUG nova.virt.libvirt.driver [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:21:00 np0005539564 nova_compute[226295]: 2025-11-29 08:21:00.955 226310 DEBUG nova.virt.libvirt.driver [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Ensure instance console log exists: /var/lib/nova/instances/63dba7cc-b3af-4fb3-bbe8-58d5a087af19/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:21:00 np0005539564 nova_compute[226295]: 2025-11-29 08:21:00.956 226310 DEBUG oslo_concurrency.lockutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:00 np0005539564 nova_compute[226295]: 2025-11-29 08:21:00.956 226310 DEBUG oslo_concurrency.lockutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:00 np0005539564 nova_compute[226295]: 2025-11-29 08:21:00.956 226310 DEBUG oslo_concurrency.lockutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:00.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:01 np0005539564 nova_compute[226295]: 2025-11-29 08:21:01.126 226310 DEBUG oslo_concurrency.lockutils [None req-d77ee7d5-49ca-4c87-80d0-6ee2f92601bb 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "837cb339-2f69-440f-b84e-6bcbd9bd81b9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:02.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:02 np0005539564 nova_compute[226295]: 2025-11-29 08:21:02.654 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e325 e325: 3 total, 3 up, 3 in
Nov 29 03:21:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:02.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:03 np0005539564 nova_compute[226295]: 2025-11-29 08:21:03.507 226310 DEBUG nova.network.neutron [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Successfully created port: 2c29e276-97c4-4a7c-8428-caad5a74a94d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:21:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:03.735 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:03.735 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:03.736 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:03 np0005539564 nova_compute[226295]: 2025-11-29 08:21:03.745 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:03 np0005539564 nova_compute[226295]: 2025-11-29 08:21:03.749 226310 DEBUG oslo_concurrency.lockutils [None req-9e9e4fc8-6958-4851-b4e1-e95acdd3cfee 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "091988cc-8042-4aa1-b909-5ca1744ff259" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:03 np0005539564 nova_compute[226295]: 2025-11-29 08:21:03.750 226310 DEBUG oslo_concurrency.lockutils [None req-9e9e4fc8-6958-4851-b4e1-e95acdd3cfee 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "091988cc-8042-4aa1-b909-5ca1744ff259" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:03 np0005539564 nova_compute[226295]: 2025-11-29 08:21:03.750 226310 DEBUG oslo_concurrency.lockutils [None req-9e9e4fc8-6958-4851-b4e1-e95acdd3cfee 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "091988cc-8042-4aa1-b909-5ca1744ff259-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:03 np0005539564 nova_compute[226295]: 2025-11-29 08:21:03.751 226310 DEBUG oslo_concurrency.lockutils [None req-9e9e4fc8-6958-4851-b4e1-e95acdd3cfee 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "091988cc-8042-4aa1-b909-5ca1744ff259-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:03 np0005539564 nova_compute[226295]: 2025-11-29 08:21:03.751 226310 DEBUG oslo_concurrency.lockutils [None req-9e9e4fc8-6958-4851-b4e1-e95acdd3cfee 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "091988cc-8042-4aa1-b909-5ca1744ff259-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:03 np0005539564 nova_compute[226295]: 2025-11-29 08:21:03.754 226310 INFO nova.compute.manager [None req-9e9e4fc8-6958-4851-b4e1-e95acdd3cfee 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Terminating instance#033[00m
Nov 29 03:21:03 np0005539564 nova_compute[226295]: 2025-11-29 08:21:03.755 226310 DEBUG nova.compute.manager [None req-9e9e4fc8-6958-4851-b4e1-e95acdd3cfee 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:21:03 np0005539564 kernel: tap4b3bde4e-a9 (unregistering): left promiscuous mode
Nov 29 03:21:03 np0005539564 NetworkManager[48997]: <info>  [1764404463.8254] device (tap4b3bde4e-a9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:21:03 np0005539564 nova_compute[226295]: 2025-11-29 08:21:03.834 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:03 np0005539564 ovn_controller[130591]: 2025-11-29T08:21:03Z|00524|binding|INFO|Releasing lport 4b3bde4e-a900-44fb-96a9-a6f92c949f67 from this chassis (sb_readonly=0)
Nov 29 03:21:03 np0005539564 ovn_controller[130591]: 2025-11-29T08:21:03Z|00525|binding|INFO|Setting lport 4b3bde4e-a900-44fb-96a9-a6f92c949f67 down in Southbound
Nov 29 03:21:03 np0005539564 ovn_controller[130591]: 2025-11-29T08:21:03Z|00526|binding|INFO|Removing iface tap4b3bde4e-a9 ovn-installed in OVS
Nov 29 03:21:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:03.842 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:88:4b 10.100.0.8'], port_security=['fa:16:3e:70:88:4b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '091988cc-8042-4aa1-b909-5ca1744ff259', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9d4c81989d641678300c7a1c173a2c2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '96ed61db-551b-4509-9cdf-2499e8e15e48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9df4a573-88b3-436c-84aa-f335285d9a2a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=4b3bde4e-a900-44fb-96a9-a6f92c949f67) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:21:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:03.843 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 4b3bde4e-a900-44fb-96a9-a6f92c949f67 in datapath 3c25940b-e63b-4443-a94b-0216a35e8dc6 unbound from our chassis#033[00m
Nov 29 03:21:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:03.845 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c25940b-e63b-4443-a94b-0216a35e8dc6#033[00m
Nov 29 03:21:03 np0005539564 nova_compute[226295]: 2025-11-29 08:21:03.853 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:03.865 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[04b034a9-3ea4-4bff-8303-08db092cd4be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:03 np0005539564 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Nov 29 03:21:03 np0005539564 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d0000008a.scope: Consumed 17.357s CPU time.
Nov 29 03:21:03 np0005539564 systemd-machined[190128]: Machine qemu-62-instance-0000008a terminated.
Nov 29 03:21:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:03.893 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[7769de88-5301-4b46-8681-ee0959e22a57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:03.897 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[16a4ca1e-f992-47b4-8b8a-ddce0c3c9713]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:03.925 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[b8986adf-79d0-43e9-b875-51c80cc42b4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:03.945 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5e3ac0cf-c239-421b-bbbe-3f3add9cdb02]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c25940b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:38:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 20, 'rx_bytes': 700, 'tx_bytes': 1032, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 20, 'rx_bytes': 700, 'tx_bytes': 1032, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724265, 'reachable_time': 23996, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278759, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:03.960 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[66534cb7-1009-4ab1-a656-81c587779c05]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3c25940b-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 724279, 'tstamp': 724279}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278760, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3c25940b-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 724282, 'tstamp': 724282}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278760, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:03.963 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c25940b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:03 np0005539564 nova_compute[226295]: 2025-11-29 08:21:03.965 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:03 np0005539564 nova_compute[226295]: 2025-11-29 08:21:03.970 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:03.971 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c25940b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:03.971 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:21:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:03.971 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c25940b-e0, col_values=(('external_ids', {'iface-id': '9da51447-ee5a-4659-ba78-deb4b11b4098'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:03.972 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:21:03 np0005539564 nova_compute[226295]: 2025-11-29 08:21:03.996 226310 INFO nova.virt.libvirt.driver [-] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Instance destroyed successfully.#033[00m
Nov 29 03:21:03 np0005539564 nova_compute[226295]: 2025-11-29 08:21:03.996 226310 DEBUG nova.objects.instance [None req-9e9e4fc8-6958-4851-b4e1-e95acdd3cfee 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'resources' on Instance uuid 091988cc-8042-4aa1-b909-5ca1744ff259 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:04 np0005539564 nova_compute[226295]: 2025-11-29 08:21:04.025 226310 DEBUG nova.virt.libvirt.vif [None req-9e9e4fc8-6958-4851-b4e1-e95acdd3cfee 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:19:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1059282693',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1059282693',id=138,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:19:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b9d4c81989d641678300c7a1c173a2c2',ramdisk_id='',reservation_id='r-2fb0gcdt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',i
mage_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1019923576',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1019923576-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:20:01Z,user_data=None,user_id='504bc6adabad4f7d8c17b0438c4d9be7',uuid=091988cc-8042-4aa1-b909-5ca1744ff259,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4b3bde4e-a900-44fb-96a9-a6f92c949f67", "address": "fa:16:3e:70:88:4b", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b3bde4e-a9", "ovs_interfaceid": "4b3bde4e-a900-44fb-96a9-a6f92c949f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:21:04 np0005539564 nova_compute[226295]: 2025-11-29 08:21:04.025 226310 DEBUG nova.network.os_vif_util [None req-9e9e4fc8-6958-4851-b4e1-e95acdd3cfee 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converting VIF {"id": "4b3bde4e-a900-44fb-96a9-a6f92c949f67", "address": "fa:16:3e:70:88:4b", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b3bde4e-a9", "ovs_interfaceid": "4b3bde4e-a900-44fb-96a9-a6f92c949f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:21:04 np0005539564 nova_compute[226295]: 2025-11-29 08:21:04.026 226310 DEBUG nova.network.os_vif_util [None req-9e9e4fc8-6958-4851-b4e1-e95acdd3cfee 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:88:4b,bridge_name='br-int',has_traffic_filtering=True,id=4b3bde4e-a900-44fb-96a9-a6f92c949f67,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b3bde4e-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:21:04 np0005539564 nova_compute[226295]: 2025-11-29 08:21:04.026 226310 DEBUG os_vif [None req-9e9e4fc8-6958-4851-b4e1-e95acdd3cfee 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:88:4b,bridge_name='br-int',has_traffic_filtering=True,id=4b3bde4e-a900-44fb-96a9-a6f92c949f67,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b3bde4e-a9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:21:04 np0005539564 nova_compute[226295]: 2025-11-29 08:21:04.027 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:04 np0005539564 nova_compute[226295]: 2025-11-29 08:21:04.028 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b3bde4e-a9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:04 np0005539564 nova_compute[226295]: 2025-11-29 08:21:04.032 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:04 np0005539564 nova_compute[226295]: 2025-11-29 08:21:04.033 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:21:04 np0005539564 nova_compute[226295]: 2025-11-29 08:21:04.035 226310 INFO os_vif [None req-9e9e4fc8-6958-4851-b4e1-e95acdd3cfee 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:88:4b,bridge_name='br-int',has_traffic_filtering=True,id=4b3bde4e-a900-44fb-96a9-a6f92c949f67,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b3bde4e-a9')#033[00m
Nov 29 03:21:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:04.217 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:21:04 np0005539564 nova_compute[226295]: 2025-11-29 08:21:04.217 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:04.218 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:21:04 np0005539564 nova_compute[226295]: 2025-11-29 08:21:04.383 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Updating instance_info_cache with network_info: [{"id": "0ac7b30d-dad2-4718-b060-add6421b1065", "address": "fa:16:3e:d4:9e:3a", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ac7b30d-da", "ovs_interfaceid": "0ac7b30d-dad2-4718-b060-add6421b1065", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:04 np0005539564 nova_compute[226295]: 2025-11-29 08:21:04.456 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-38bba84b-1fb0-460a-a6aa-707ef29970b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:21:04 np0005539564 nova_compute[226295]: 2025-11-29 08:21:04.457 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:21:04 np0005539564 nova_compute[226295]: 2025-11-29 08:21:04.458 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:04 np0005539564 nova_compute[226295]: 2025-11-29 08:21:04.458 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:04 np0005539564 nova_compute[226295]: 2025-11-29 08:21:04.459 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:04 np0005539564 nova_compute[226295]: 2025-11-29 08:21:04.496 226310 INFO nova.virt.libvirt.driver [None req-9e9e4fc8-6958-4851-b4e1-e95acdd3cfee 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Deleting instance files /var/lib/nova/instances/091988cc-8042-4aa1-b909-5ca1744ff259_del#033[00m
Nov 29 03:21:04 np0005539564 nova_compute[226295]: 2025-11-29 08:21:04.498 226310 INFO nova.virt.libvirt.driver [None req-9e9e4fc8-6958-4851-b4e1-e95acdd3cfee 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Deletion of /var/lib/nova/instances/091988cc-8042-4aa1-b909-5ca1744ff259_del complete#033[00m
Nov 29 03:21:04 np0005539564 nova_compute[226295]: 2025-11-29 08:21:04.505 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:04 np0005539564 nova_compute[226295]: 2025-11-29 08:21:04.506 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:04 np0005539564 nova_compute[226295]: 2025-11-29 08:21:04.506 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:04 np0005539564 nova_compute[226295]: 2025-11-29 08:21:04.507 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:21:04 np0005539564 nova_compute[226295]: 2025-11-29 08:21:04.507 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:04.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:04 np0005539564 nova_compute[226295]: 2025-11-29 08:21:04.587 226310 INFO nova.compute.manager [None req-9e9e4fc8-6958-4851-b4e1-e95acdd3cfee 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:21:04 np0005539564 nova_compute[226295]: 2025-11-29 08:21:04.588 226310 DEBUG oslo.service.loopingcall [None req-9e9e4fc8-6958-4851-b4e1-e95acdd3cfee 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:21:04 np0005539564 nova_compute[226295]: 2025-11-29 08:21:04.589 226310 DEBUG nova.compute.manager [-] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:21:04 np0005539564 nova_compute[226295]: 2025-11-29 08:21:04.590 226310 DEBUG nova.network.neutron [-] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:21:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:21:04 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2440225623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:21:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:05.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:05 np0005539564 nova_compute[226295]: 2025-11-29 08:21:05.017 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:05 np0005539564 nova_compute[226295]: 2025-11-29 08:21:05.268 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:21:05 np0005539564 nova_compute[226295]: 2025-11-29 08:21:05.269 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:21:05 np0005539564 nova_compute[226295]: 2025-11-29 08:21:05.273 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:21:05 np0005539564 nova_compute[226295]: 2025-11-29 08:21:05.274 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:21:05 np0005539564 nova_compute[226295]: 2025-11-29 08:21:05.470 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:21:05 np0005539564 nova_compute[226295]: 2025-11-29 08:21:05.471 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3874MB free_disk=20.551525115966797GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:21:05 np0005539564 nova_compute[226295]: 2025-11-29 08:21:05.471 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:05 np0005539564 nova_compute[226295]: 2025-11-29 08:21:05.472 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:05 np0005539564 nova_compute[226295]: 2025-11-29 08:21:05.737 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 0b082cd2-d1d3-4577-be0a-30b9256a223e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:21:05 np0005539564 nova_compute[226295]: 2025-11-29 08:21:05.737 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 38bba84b-1fb0-460a-a6aa-707ef29970b2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:21:05 np0005539564 nova_compute[226295]: 2025-11-29 08:21:05.738 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 091988cc-8042-4aa1-b909-5ca1744ff259 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:21:05 np0005539564 nova_compute[226295]: 2025-11-29 08:21:05.738 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:21:05 np0005539564 nova_compute[226295]: 2025-11-29 08:21:05.739 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:21:05 np0005539564 nova_compute[226295]: 2025-11-29 08:21:05.739 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=20GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:21:05 np0005539564 nova_compute[226295]: 2025-11-29 08:21:05.994 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.462 226310 DEBUG nova.compute.manager [req-15e5e33d-69a7-4dd7-9d9b-2a3cef10b276 req-8ac2b9db-9eec-4415-98ae-a848e96a79a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Received event network-vif-unplugged-4b3bde4e-a900-44fb-96a9-a6f92c949f67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.464 226310 DEBUG oslo_concurrency.lockutils [req-15e5e33d-69a7-4dd7-9d9b-2a3cef10b276 req-8ac2b9db-9eec-4415-98ae-a848e96a79a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "091988cc-8042-4aa1-b909-5ca1744ff259-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.464 226310 DEBUG oslo_concurrency.lockutils [req-15e5e33d-69a7-4dd7-9d9b-2a3cef10b276 req-8ac2b9db-9eec-4415-98ae-a848e96a79a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "091988cc-8042-4aa1-b909-5ca1744ff259-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.465 226310 DEBUG oslo_concurrency.lockutils [req-15e5e33d-69a7-4dd7-9d9b-2a3cef10b276 req-8ac2b9db-9eec-4415-98ae-a848e96a79a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "091988cc-8042-4aa1-b909-5ca1744ff259-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.465 226310 DEBUG nova.compute.manager [req-15e5e33d-69a7-4dd7-9d9b-2a3cef10b276 req-8ac2b9db-9eec-4415-98ae-a848e96a79a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] No waiting events found dispatching network-vif-unplugged-4b3bde4e-a900-44fb-96a9-a6f92c949f67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.466 226310 DEBUG nova.compute.manager [req-15e5e33d-69a7-4dd7-9d9b-2a3cef10b276 req-8ac2b9db-9eec-4415-98ae-a848e96a79a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Received event network-vif-unplugged-4b3bde4e-a900-44fb-96a9-a6f92c949f67 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.466 226310 DEBUG nova.compute.manager [req-15e5e33d-69a7-4dd7-9d9b-2a3cef10b276 req-8ac2b9db-9eec-4415-98ae-a848e96a79a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Received event network-vif-plugged-4b3bde4e-a900-44fb-96a9-a6f92c949f67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.467 226310 DEBUG oslo_concurrency.lockutils [req-15e5e33d-69a7-4dd7-9d9b-2a3cef10b276 req-8ac2b9db-9eec-4415-98ae-a848e96a79a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "091988cc-8042-4aa1-b909-5ca1744ff259-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.467 226310 DEBUG oslo_concurrency.lockutils [req-15e5e33d-69a7-4dd7-9d9b-2a3cef10b276 req-8ac2b9db-9eec-4415-98ae-a848e96a79a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "091988cc-8042-4aa1-b909-5ca1744ff259-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.468 226310 DEBUG oslo_concurrency.lockutils [req-15e5e33d-69a7-4dd7-9d9b-2a3cef10b276 req-8ac2b9db-9eec-4415-98ae-a848e96a79a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "091988cc-8042-4aa1-b909-5ca1744ff259-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.468 226310 DEBUG nova.compute.manager [req-15e5e33d-69a7-4dd7-9d9b-2a3cef10b276 req-8ac2b9db-9eec-4415-98ae-a848e96a79a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] No waiting events found dispatching network-vif-plugged-4b3bde4e-a900-44fb-96a9-a6f92c949f67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.469 226310 WARNING nova.compute.manager [req-15e5e33d-69a7-4dd7-9d9b-2a3cef10b276 req-8ac2b9db-9eec-4415-98ae-a848e96a79a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Received unexpected event network-vif-plugged-4b3bde4e-a900-44fb-96a9-a6f92c949f67 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:21:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:21:06 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1345593698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.494 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.505 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:21:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:06.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.577 226310 DEBUG nova.network.neutron [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Successfully updated port: 2c29e276-97c4-4a7c-8428-caad5a74a94d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.643 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.718 226310 DEBUG nova.compute.manager [req-d8ccf337-3976-4281-8ade-9428e035ff14 req-4362c65b-455e-45ea-8c01-a4497a727eb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received event network-changed-2c29e276-97c4-4a7c-8428-caad5a74a94d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.719 226310 DEBUG nova.compute.manager [req-d8ccf337-3976-4281-8ade-9428e035ff14 req-4362c65b-455e-45ea-8c01-a4497a727eb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Refreshing instance network info cache due to event network-changed-2c29e276-97c4-4a7c-8428-caad5a74a94d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.719 226310 DEBUG oslo_concurrency.lockutils [req-d8ccf337-3976-4281-8ade-9428e035ff14 req-4362c65b-455e-45ea-8c01-a4497a727eb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-63dba7cc-b3af-4fb3-bbe8-58d5a087af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.719 226310 DEBUG oslo_concurrency.lockutils [req-d8ccf337-3976-4281-8ade-9428e035ff14 req-4362c65b-455e-45ea-8c01-a4497a727eb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-63dba7cc-b3af-4fb3-bbe8-58d5a087af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.719 226310 DEBUG nova.network.neutron [req-d8ccf337-3976-4281-8ade-9428e035ff14 req-4362c65b-455e-45ea-8c01-a4497a727eb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Refreshing network info cache for port 2c29e276-97c4-4a7c-8428-caad5a74a94d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.726 226310 DEBUG oslo_concurrency.lockutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "refresh_cache-63dba7cc-b3af-4fb3-bbe8-58d5a087af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.740 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.740 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.829 226310 DEBUG nova.network.neutron [-] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.855 226310 INFO nova.compute.manager [-] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Took 2.27 seconds to deallocate network for instance.#033[00m
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.922 226310 DEBUG oslo_concurrency.lockutils [None req-9e9e4fc8-6958-4851-b4e1-e95acdd3cfee 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.923 226310 DEBUG oslo_concurrency.lockutils [None req-9e9e4fc8-6958-4851-b4e1-e95acdd3cfee 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:06 np0005539564 nova_compute[226295]: 2025-11-29 08:21:06.947 226310 DEBUG nova.network.neutron [req-d8ccf337-3976-4281-8ade-9428e035ff14 req-4362c65b-455e-45ea-8c01-a4497a727eb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:21:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:07.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:07 np0005539564 nova_compute[226295]: 2025-11-29 08:21:07.035 226310 DEBUG oslo_concurrency.processutils [None req-9e9e4fc8-6958-4851-b4e1-e95acdd3cfee 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:21:07 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4212738034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:21:07 np0005539564 nova_compute[226295]: 2025-11-29 08:21:07.525 226310 DEBUG oslo_concurrency.processutils [None req-9e9e4fc8-6958-4851-b4e1-e95acdd3cfee 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:07 np0005539564 nova_compute[226295]: 2025-11-29 08:21:07.533 226310 DEBUG nova.compute.provider_tree [None req-9e9e4fc8-6958-4851-b4e1-e95acdd3cfee 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:21:07 np0005539564 nova_compute[226295]: 2025-11-29 08:21:07.540 226310 DEBUG nova.network.neutron [req-d8ccf337-3976-4281-8ade-9428e035ff14 req-4362c65b-455e-45ea-8c01-a4497a727eb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:07 np0005539564 nova_compute[226295]: 2025-11-29 08:21:07.557 226310 DEBUG nova.scheduler.client.report [None req-9e9e4fc8-6958-4851-b4e1-e95acdd3cfee 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:21:07 np0005539564 nova_compute[226295]: 2025-11-29 08:21:07.565 226310 DEBUG oslo_concurrency.lockutils [req-d8ccf337-3976-4281-8ade-9428e035ff14 req-4362c65b-455e-45ea-8c01-a4497a727eb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-63dba7cc-b3af-4fb3-bbe8-58d5a087af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:21:07 np0005539564 nova_compute[226295]: 2025-11-29 08:21:07.566 226310 DEBUG oslo_concurrency.lockutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquired lock "refresh_cache-63dba7cc-b3af-4fb3-bbe8-58d5a087af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:21:07 np0005539564 nova_compute[226295]: 2025-11-29 08:21:07.567 226310 DEBUG nova.network.neutron [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:21:07 np0005539564 nova_compute[226295]: 2025-11-29 08:21:07.585 226310 DEBUG oslo_concurrency.lockutils [None req-9e9e4fc8-6958-4851-b4e1-e95acdd3cfee 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:07 np0005539564 nova_compute[226295]: 2025-11-29 08:21:07.619 226310 INFO nova.scheduler.client.report [None req-9e9e4fc8-6958-4851-b4e1-e95acdd3cfee 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Deleted allocations for instance 091988cc-8042-4aa1-b909-5ca1744ff259#033[00m
Nov 29 03:21:07 np0005539564 nova_compute[226295]: 2025-11-29 08:21:07.657 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:07 np0005539564 nova_compute[226295]: 2025-11-29 08:21:07.692 226310 DEBUG oslo_concurrency.lockutils [None req-9e9e4fc8-6958-4851-b4e1-e95acdd3cfee 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "091988cc-8042-4aa1-b909-5ca1744ff259" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:08 np0005539564 nova_compute[226295]: 2025-11-29 08:21:08.003 226310 DEBUG nova.network.neutron [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:21:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:08.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:08 np0005539564 nova_compute[226295]: 2025-11-29 08:21:08.658 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404453.656848, 837cb339-2f69-440f-b84e-6bcbd9bd81b9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:08 np0005539564 nova_compute[226295]: 2025-11-29 08:21:08.659 226310 INFO nova.compute.manager [-] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:21:08 np0005539564 nova_compute[226295]: 2025-11-29 08:21:08.678 226310 DEBUG nova.compute.manager [None req-8d2a0d8d-693b-4ac7-bc23-5a9c26c828a7 - - - - - -] [instance: 837cb339-2f69-440f-b84e-6bcbd9bd81b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:08 np0005539564 nova_compute[226295]: 2025-11-29 08:21:08.842 226310 DEBUG nova.compute.manager [req-e4a9824d-141e-44b7-90a3-97af36e6b3f1 req-111e45a3-ae12-43a5-ac67-0dd95d9b526e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Received event network-vif-deleted-4b3bde4e-a900-44fb-96a9-a6f92c949f67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:09.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:09 np0005539564 nova_compute[226295]: 2025-11-29 08:21:09.030 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:09 np0005539564 nova_compute[226295]: 2025-11-29 08:21:09.109 226310 DEBUG nova.network.neutron [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Updating instance_info_cache with network_info: [{"id": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "address": "fa:16:3e:f6:de:29", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c29e276-97", "ovs_interfaceid": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:09 np0005539564 nova_compute[226295]: 2025-11-29 08:21:09.342 226310 DEBUG oslo_concurrency.lockutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Releasing lock "refresh_cache-63dba7cc-b3af-4fb3-bbe8-58d5a087af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:21:09 np0005539564 nova_compute[226295]: 2025-11-29 08:21:09.343 226310 DEBUG nova.compute.manager [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Instance network_info: |[{"id": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "address": "fa:16:3e:f6:de:29", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c29e276-97", "ovs_interfaceid": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:21:09 np0005539564 nova_compute[226295]: 2025-11-29 08:21:09.347 226310 DEBUG nova.virt.libvirt.driver [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Start _get_guest_xml network_info=[{"id": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "address": "fa:16:3e:f6:de:29", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c29e276-97", "ovs_interfaceid": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:21:09 np0005539564 nova_compute[226295]: 2025-11-29 08:21:09.352 226310 WARNING nova.virt.libvirt.driver [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:21:09 np0005539564 nova_compute[226295]: 2025-11-29 08:21:09.356 226310 DEBUG nova.virt.libvirt.host [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:21:09 np0005539564 nova_compute[226295]: 2025-11-29 08:21:09.357 226310 DEBUG nova.virt.libvirt.host [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:21:09 np0005539564 nova_compute[226295]: 2025-11-29 08:21:09.361 226310 DEBUG nova.virt.libvirt.host [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:21:09 np0005539564 nova_compute[226295]: 2025-11-29 08:21:09.362 226310 DEBUG nova.virt.libvirt.host [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:21:09 np0005539564 nova_compute[226295]: 2025-11-29 08:21:09.365 226310 DEBUG nova.virt.libvirt.driver [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:21:09 np0005539564 nova_compute[226295]: 2025-11-29 08:21:09.365 226310 DEBUG nova.virt.hardware [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:21:09 np0005539564 nova_compute[226295]: 2025-11-29 08:21:09.366 226310 DEBUG nova.virt.hardware [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:21:09 np0005539564 nova_compute[226295]: 2025-11-29 08:21:09.366 226310 DEBUG nova.virt.hardware [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:21:09 np0005539564 nova_compute[226295]: 2025-11-29 08:21:09.367 226310 DEBUG nova.virt.hardware [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:21:09 np0005539564 nova_compute[226295]: 2025-11-29 08:21:09.367 226310 DEBUG nova.virt.hardware [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:21:09 np0005539564 nova_compute[226295]: 2025-11-29 08:21:09.367 226310 DEBUG nova.virt.hardware [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:21:09 np0005539564 nova_compute[226295]: 2025-11-29 08:21:09.368 226310 DEBUG nova.virt.hardware [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:21:09 np0005539564 nova_compute[226295]: 2025-11-29 08:21:09.368 226310 DEBUG nova.virt.hardware [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:21:09 np0005539564 nova_compute[226295]: 2025-11-29 08:21:09.369 226310 DEBUG nova.virt.hardware [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:21:09 np0005539564 nova_compute[226295]: 2025-11-29 08:21:09.369 226310 DEBUG nova.virt.hardware [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:21:09 np0005539564 nova_compute[226295]: 2025-11-29 08:21:09.369 226310 DEBUG nova.virt.hardware [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:21:09 np0005539564 nova_compute[226295]: 2025-11-29 08:21:09.375 226310 DEBUG oslo_concurrency.processutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:21:09 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2189757330' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:21:09 np0005539564 nova_compute[226295]: 2025-11-29 08:21:09.927 226310 DEBUG oslo_concurrency.processutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:09 np0005539564 nova_compute[226295]: 2025-11-29 08:21:09.970 226310 DEBUG nova.storage.rbd_utils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:09 np0005539564 nova_compute[226295]: 2025-11-29 08:21:09.975 226310 DEBUG oslo_concurrency.processutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:10.221 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:21:10 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2937818125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.412 226310 DEBUG oslo_concurrency.processutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.414 226310 DEBUG nova.virt.libvirt.vif [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:20:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-926273439',display_name='tempest-ServerStableDeviceRescueTest-server-926273439',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-926273439',id=146,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM90uGuHFEZUG4e9c1+zQZPo7nyMBkO25w7+OeL6gyYWhhodtL4WK14dtSTsr1qPRwaxAcx3xf1h4uzlv3mmeGtCTv+RpFMHq6ymHA+bpygtGf2oytOmyzvB5m3+xPiJeg==',key_name='tempest-keypair-1800430740',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='358970eca7ad4b05b70f43e5507ac052',ramdisk_id='',reservation_id='r-3qu0pws6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-1105304301',owner_user_name='tempest-ServerStableDeviceRescueTest-1105304301-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3b52040d601a4a56abcaf3f046f1e349',uuid=63dba7cc-b3af-4fb3-bbe8-58d5a087af19,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "address": "fa:16:3e:f6:de:29", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c29e276-97", "ovs_interfaceid": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.415 226310 DEBUG nova.network.os_vif_util [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converting VIF {"id": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "address": "fa:16:3e:f6:de:29", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c29e276-97", "ovs_interfaceid": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.416 226310 DEBUG nova.network.os_vif_util [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:de:29,bridge_name='br-int',has_traffic_filtering=True,id=2c29e276-97c4-4a7c-8428-caad5a74a94d,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c29e276-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.418 226310 DEBUG nova.objects.instance [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'pci_devices' on Instance uuid 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.525 226310 DEBUG nova.virt.libvirt.driver [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:21:10 np0005539564 nova_compute[226295]:  <uuid>63dba7cc-b3af-4fb3-bbe8-58d5a087af19</uuid>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:  <name>instance-00000092</name>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-926273439</nova:name>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:21:09</nova:creationTime>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:21:10 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:        <nova:user uuid="3b52040d601a4a56abcaf3f046f1e349">tempest-ServerStableDeviceRescueTest-1105304301-project-member</nova:user>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:        <nova:project uuid="358970eca7ad4b05b70f43e5507ac052">tempest-ServerStableDeviceRescueTest-1105304301</nova:project>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:        <nova:port uuid="2c29e276-97c4-4a7c-8428-caad5a74a94d">
Nov 29 03:21:10 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <entry name="serial">63dba7cc-b3af-4fb3-bbe8-58d5a087af19</entry>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <entry name="uuid">63dba7cc-b3af-4fb3-bbe8-58d5a087af19</entry>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk">
Nov 29 03:21:10 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:21:10 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk.config">
Nov 29 03:21:10 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:21:10 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:f6:de:29"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <target dev="tap2c29e276-97"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/63dba7cc-b3af-4fb3-bbe8-58d5a087af19/console.log" append="off"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:21:10 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:21:10 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:21:10 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:21:10 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.527 226310 DEBUG nova.compute.manager [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Preparing to wait for external event network-vif-plugged-2c29e276-97c4-4a7c-8428-caad5a74a94d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.527 226310 DEBUG oslo_concurrency.lockutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.528 226310 DEBUG oslo_concurrency.lockutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.528 226310 DEBUG oslo_concurrency.lockutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.529 226310 DEBUG nova.virt.libvirt.vif [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:20:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-926273439',display_name='tempest-ServerStableDeviceRescueTest-server-926273439',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-926273439',id=146,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM90uGuHFEZUG4e9c1+zQZPo7nyMBkO25w7+OeL6gyYWhhodtL4WK14dtSTsr1qPRwaxAcx3xf1h4uzlv3mmeGtCTv+RpFMHq6ymHA+bpygtGf2oytOmyzvB5m3+xPiJeg==',key_name='tempest-keypair-1800430740',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='358970eca7ad4b05b70f43e5507ac052',ramdisk_id='',reservation_id='r-3qu0pws6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-1105304301',owner_user_name='tempest-ServerStableDeviceRescueTest-1105304301-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3b52040d601a4a56abcaf3f046f1e349',uuid=63dba7cc-b3af-4fb3-bbe8-58d5a087af19,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "address": "fa:16:3e:f6:de:29", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c29e276-97", "ovs_interfaceid": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.529 226310 DEBUG nova.network.os_vif_util [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converting VIF {"id": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "address": "fa:16:3e:f6:de:29", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c29e276-97", "ovs_interfaceid": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.530 226310 DEBUG nova.network.os_vif_util [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:de:29,bridge_name='br-int',has_traffic_filtering=True,id=2c29e276-97c4-4a7c-8428-caad5a74a94d,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c29e276-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.530 226310 DEBUG os_vif [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:de:29,bridge_name='br-int',has_traffic_filtering=True,id=2c29e276-97c4-4a7c-8428-caad5a74a94d,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c29e276-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.531 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.532 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.532 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.535 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.535 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c29e276-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.536 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2c29e276-97, col_values=(('external_ids', {'iface-id': '2c29e276-97c4-4a7c-8428-caad5a74a94d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:de:29', 'vm-uuid': '63dba7cc-b3af-4fb3-bbe8-58d5a087af19'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.537 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:10 np0005539564 NetworkManager[48997]: <info>  [1764404470.5385] manager: (tap2c29e276-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/250)
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.541 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.546 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.547 226310 INFO os_vif [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:de:29,bridge_name='br-int',has_traffic_filtering=True,id=2c29e276-97c4-4a7c-8428-caad5a74a94d,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c29e276-97')#033[00m
Nov 29 03:21:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:10.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.623 226310 DEBUG nova.virt.libvirt.driver [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.623 226310 DEBUG nova.virt.libvirt.driver [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.624 226310 DEBUG nova.virt.libvirt.driver [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No VIF found with MAC fa:16:3e:f6:de:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.624 226310 INFO nova.virt.libvirt.driver [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Using config drive#033[00m
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.653 226310 DEBUG nova.storage.rbd_utils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:10 np0005539564 nova_compute[226295]: 2025-11-29 08:21:10.992 226310 INFO nova.virt.libvirt.driver [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Creating config drive at /var/lib/nova/instances/63dba7cc-b3af-4fb3-bbe8-58d5a087af19/disk.config#033[00m
Nov 29 03:21:11 np0005539564 nova_compute[226295]: 2025-11-29 08:21:11.004 226310 DEBUG oslo_concurrency.processutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/63dba7cc-b3af-4fb3-bbe8-58d5a087af19/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppr6zijkd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:11.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:11 np0005539564 nova_compute[226295]: 2025-11-29 08:21:11.172 226310 DEBUG oslo_concurrency.processutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/63dba7cc-b3af-4fb3-bbe8-58d5a087af19/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppr6zijkd" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:11 np0005539564 nova_compute[226295]: 2025-11-29 08:21:11.481 226310 DEBUG nova.storage.rbd_utils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:11 np0005539564 nova_compute[226295]: 2025-11-29 08:21:11.486 226310 DEBUG oslo_concurrency.processutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/63dba7cc-b3af-4fb3-bbe8-58d5a087af19/disk.config 63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:11 np0005539564 nova_compute[226295]: 2025-11-29 08:21:11.743 226310 DEBUG oslo_concurrency.processutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/63dba7cc-b3af-4fb3-bbe8-58d5a087af19/disk.config 63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.256s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:11 np0005539564 nova_compute[226295]: 2025-11-29 08:21:11.743 226310 INFO nova.virt.libvirt.driver [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Deleting local config drive /var/lib/nova/instances/63dba7cc-b3af-4fb3-bbe8-58d5a087af19/disk.config because it was imported into RBD.#033[00m
Nov 29 03:21:11 np0005539564 kernel: tap2c29e276-97: entered promiscuous mode
Nov 29 03:21:11 np0005539564 NetworkManager[48997]: <info>  [1764404471.8106] manager: (tap2c29e276-97): new Tun device (/org/freedesktop/NetworkManager/Devices/251)
Nov 29 03:21:11 np0005539564 ovn_controller[130591]: 2025-11-29T08:21:11Z|00527|binding|INFO|Claiming lport 2c29e276-97c4-4a7c-8428-caad5a74a94d for this chassis.
Nov 29 03:21:11 np0005539564 ovn_controller[130591]: 2025-11-29T08:21:11Z|00528|binding|INFO|2c29e276-97c4-4a7c-8428-caad5a74a94d: Claiming fa:16:3e:f6:de:29 10.100.0.12
Nov 29 03:21:11 np0005539564 nova_compute[226295]: 2025-11-29 08:21:11.812 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:11.819 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:de:29 10.100.0.12'], port_security=['fa:16:3e:f6:de:29 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '63dba7cc-b3af-4fb3-bbe8-58d5a087af19', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32485b0e-177b-4dfd-a55a-0249528f32e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '358970eca7ad4b05b70f43e5507ac052', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7e478a14-56df-4311-a000-02a6d80fadda', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83a2eb53-2a5d-447d-a36c-4b9c2b295f15, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=2c29e276-97c4-4a7c-8428-caad5a74a94d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:21:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:11.821 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 2c29e276-97c4-4a7c-8428-caad5a74a94d in datapath 32485b0e-177b-4dfd-a55a-0249528f32e1 bound to our chassis#033[00m
Nov 29 03:21:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:11.824 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 32485b0e-177b-4dfd-a55a-0249528f32e1#033[00m
Nov 29 03:21:11 np0005539564 ovn_controller[130591]: 2025-11-29T08:21:11Z|00529|binding|INFO|Setting lport 2c29e276-97c4-4a7c-8428-caad5a74a94d ovn-installed in OVS
Nov 29 03:21:11 np0005539564 ovn_controller[130591]: 2025-11-29T08:21:11Z|00530|binding|INFO|Setting lport 2c29e276-97c4-4a7c-8428-caad5a74a94d up in Southbound
Nov 29 03:21:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:11.848 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d218d390-3751-4007-b7ce-e32d3bee09b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:11 np0005539564 nova_compute[226295]: 2025-11-29 08:21:11.850 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:11 np0005539564 nova_compute[226295]: 2025-11-29 08:21:11.855 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:11 np0005539564 systemd-machined[190128]: New machine qemu-67-instance-00000092.
Nov 29 03:21:11 np0005539564 systemd[1]: Started Virtual Machine qemu-67-instance-00000092.
Nov 29 03:21:11 np0005539564 systemd-udevd[278998]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:21:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:11.892 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[e18d9988-1b4a-4a98-84c4-702679ee08e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:11.898 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[7c4079fb-9ae1-4cdd-9ed5-425d9b10634e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:11 np0005539564 NetworkManager[48997]: <info>  [1764404471.9038] device (tap2c29e276-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:21:11 np0005539564 NetworkManager[48997]: <info>  [1764404471.9063] device (tap2c29e276-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:21:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:11.947 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[55a085b5-b560-41a4-8a51-99a8b7a9cc74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:11.971 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4f5fc442-95a6-4f51-b652-f8962636c3e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32485b0e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:44:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736750, 'reachable_time': 22429, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279006, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:11.996 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[eb19edff-514b-45ca-8b92-a257b5c94060]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap32485b0e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736766, 'tstamp': 736766}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279009, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap32485b0e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736770, 'tstamp': 736770}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279009, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:11.998 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32485b0e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.000 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.002 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:12.003 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32485b0e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:12.003 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:21:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:12.004 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap32485b0e-10, col_values=(('external_ids', {'iface-id': '6711ba96-49f0-431a-a4d5-64f9cee27708'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:12.004 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:21:12 np0005539564 podman[279049]: 2025-11-29 08:21:12.517119877 +0000 UTC m=+0.050631709 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:21:12 np0005539564 podman[279048]: 2025-11-29 08:21:12.517796826 +0000 UTC m=+0.065122072 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Nov 29 03:21:12 np0005539564 podman[279047]: 2025-11-29 08:21:12.541764644 +0000 UTC m=+0.089092390 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 29 03:21:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:12.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.660 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e326 e326: 3 total, 3 up, 3 in
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.702 226310 DEBUG nova.compute.manager [req-edf92d0e-135e-472a-8c8f-f894305dc80e req-e91481ff-fdf5-4aca-9a1c-2fa799bd7a5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received event network-vif-plugged-2c29e276-97c4-4a7c-8428-caad5a74a94d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.702 226310 DEBUG oslo_concurrency.lockutils [req-edf92d0e-135e-472a-8c8f-f894305dc80e req-e91481ff-fdf5-4aca-9a1c-2fa799bd7a5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.703 226310 DEBUG oslo_concurrency.lockutils [req-edf92d0e-135e-472a-8c8f-f894305dc80e req-e91481ff-fdf5-4aca-9a1c-2fa799bd7a5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.703 226310 DEBUG oslo_concurrency.lockutils [req-edf92d0e-135e-472a-8c8f-f894305dc80e req-e91481ff-fdf5-4aca-9a1c-2fa799bd7a5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.703 226310 DEBUG nova.compute.manager [req-edf92d0e-135e-472a-8c8f-f894305dc80e req-e91481ff-fdf5-4aca-9a1c-2fa799bd7a5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Processing event network-vif-plugged-2c29e276-97c4-4a7c-8428-caad5a74a94d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.703 226310 DEBUG nova.compute.manager [req-edf92d0e-135e-472a-8c8f-f894305dc80e req-e91481ff-fdf5-4aca-9a1c-2fa799bd7a5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received event network-vif-plugged-2c29e276-97c4-4a7c-8428-caad5a74a94d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.704 226310 DEBUG oslo_concurrency.lockutils [req-edf92d0e-135e-472a-8c8f-f894305dc80e req-e91481ff-fdf5-4aca-9a1c-2fa799bd7a5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.704 226310 DEBUG oslo_concurrency.lockutils [req-edf92d0e-135e-472a-8c8f-f894305dc80e req-e91481ff-fdf5-4aca-9a1c-2fa799bd7a5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.704 226310 DEBUG oslo_concurrency.lockutils [req-edf92d0e-135e-472a-8c8f-f894305dc80e req-e91481ff-fdf5-4aca-9a1c-2fa799bd7a5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.704 226310 DEBUG nova.compute.manager [req-edf92d0e-135e-472a-8c8f-f894305dc80e req-e91481ff-fdf5-4aca-9a1c-2fa799bd7a5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] No waiting events found dispatching network-vif-plugged-2c29e276-97c4-4a7c-8428-caad5a74a94d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.704 226310 WARNING nova.compute.manager [req-edf92d0e-135e-472a-8c8f-f894305dc80e req-e91481ff-fdf5-4aca-9a1c-2fa799bd7a5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received unexpected event network-vif-plugged-2c29e276-97c4-4a7c-8428-caad5a74a94d for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.823 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404472.8231587, 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.824 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] VM Started (Lifecycle Event)#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.828 226310 DEBUG nova.compute.manager [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.832 226310 DEBUG nova.virt.libvirt.driver [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.836 226310 INFO nova.virt.libvirt.driver [-] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Instance spawned successfully.#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.837 226310 DEBUG nova.virt.libvirt.driver [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.862 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.869 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.876 226310 DEBUG nova.virt.libvirt.driver [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.877 226310 DEBUG nova.virt.libvirt.driver [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.878 226310 DEBUG nova.virt.libvirt.driver [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.878 226310 DEBUG nova.virt.libvirt.driver [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.879 226310 DEBUG nova.virt.libvirt.driver [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.880 226310 DEBUG nova.virt.libvirt.driver [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.893 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.894 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404472.8280587, 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.894 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.947 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.952 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404472.8311346, 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.952 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.980 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.985 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.990 226310 INFO nova.compute.manager [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Took 16.56 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:21:12 np0005539564 nova_compute[226295]: 2025-11-29 08:21:12.991 226310 DEBUG nova.compute.manager [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:13 np0005539564 nova_compute[226295]: 2025-11-29 08:21:13.005 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:21:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:13.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:13 np0005539564 nova_compute[226295]: 2025-11-29 08:21:13.058 226310 INFO nova.compute.manager [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Took 18.50 seconds to build instance.#033[00m
Nov 29 03:21:13 np0005539564 nova_compute[226295]: 2025-11-29 08:21:13.085 226310 DEBUG oslo_concurrency.lockutils [None req-1ff8f07f-bb49-4d45-8a84-a1443ee4fccd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e327 e327: 3 total, 3 up, 3 in
Nov 29 03:21:13 np0005539564 nova_compute[226295]: 2025-11-29 08:21:13.808 226310 DEBUG oslo_concurrency.lockutils [None req-58635745-5856-4fb0-9148-e4fbe1bacac9 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "0b082cd2-d1d3-4577-be0a-30b9256a223e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:13 np0005539564 nova_compute[226295]: 2025-11-29 08:21:13.808 226310 DEBUG oslo_concurrency.lockutils [None req-58635745-5856-4fb0-9148-e4fbe1bacac9 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "0b082cd2-d1d3-4577-be0a-30b9256a223e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:13 np0005539564 nova_compute[226295]: 2025-11-29 08:21:13.809 226310 DEBUG oslo_concurrency.lockutils [None req-58635745-5856-4fb0-9148-e4fbe1bacac9 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "0b082cd2-d1d3-4577-be0a-30b9256a223e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:13 np0005539564 nova_compute[226295]: 2025-11-29 08:21:13.809 226310 DEBUG oslo_concurrency.lockutils [None req-58635745-5856-4fb0-9148-e4fbe1bacac9 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "0b082cd2-d1d3-4577-be0a-30b9256a223e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:13 np0005539564 nova_compute[226295]: 2025-11-29 08:21:13.810 226310 DEBUG oslo_concurrency.lockutils [None req-58635745-5856-4fb0-9148-e4fbe1bacac9 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "0b082cd2-d1d3-4577-be0a-30b9256a223e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:13 np0005539564 nova_compute[226295]: 2025-11-29 08:21:13.811 226310 INFO nova.compute.manager [None req-58635745-5856-4fb0-9148-e4fbe1bacac9 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Terminating instance#033[00m
Nov 29 03:21:13 np0005539564 nova_compute[226295]: 2025-11-29 08:21:13.812 226310 DEBUG nova.compute.manager [None req-58635745-5856-4fb0-9148-e4fbe1bacac9 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:21:13 np0005539564 kernel: tap81ca5375-8e (unregistering): left promiscuous mode
Nov 29 03:21:13 np0005539564 NetworkManager[48997]: <info>  [1764404473.8767] device (tap81ca5375-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:21:13 np0005539564 ovn_controller[130591]: 2025-11-29T08:21:13Z|00531|binding|INFO|Releasing lport 81ca5375-8e5a-46e3-9340-e5375e00d3e6 from this chassis (sb_readonly=0)
Nov 29 03:21:13 np0005539564 ovn_controller[130591]: 2025-11-29T08:21:13Z|00532|binding|INFO|Setting lport 81ca5375-8e5a-46e3-9340-e5375e00d3e6 down in Southbound
Nov 29 03:21:13 np0005539564 nova_compute[226295]: 2025-11-29 08:21:13.932 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:13 np0005539564 ovn_controller[130591]: 2025-11-29T08:21:13Z|00533|binding|INFO|Removing iface tap81ca5375-8e ovn-installed in OVS
Nov 29 03:21:13 np0005539564 nova_compute[226295]: 2025-11-29 08:21:13.935 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:13.942 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:48:33 10.100.0.14'], port_security=['fa:16:3e:a1:48:33 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0b082cd2-d1d3-4577-be0a-30b9256a223e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9d4c81989d641678300c7a1c173a2c2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '96ed61db-551b-4509-9cdf-2499e8e15e48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9df4a573-88b3-436c-84aa-f335285d9a2a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=81ca5375-8e5a-46e3-9340-e5375e00d3e6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:21:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:13.943 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 81ca5375-8e5a-46e3-9340-e5375e00d3e6 in datapath 3c25940b-e63b-4443-a94b-0216a35e8dc6 unbound from our chassis#033[00m
Nov 29 03:21:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:13.944 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3c25940b-e63b-4443-a94b-0216a35e8dc6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:21:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:13.945 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d2b9f126-b245-4029-8029-a1a547082ac1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:13.952 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6 namespace which is not needed anymore#033[00m
Nov 29 03:21:13 np0005539564 nova_compute[226295]: 2025-11-29 08:21:13.952 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:13 np0005539564 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000080.scope: Deactivated successfully.
Nov 29 03:21:13 np0005539564 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000080.scope: Consumed 24.985s CPU time.
Nov 29 03:21:13 np0005539564 systemd-machined[190128]: Machine qemu-57-instance-00000080 terminated.
Nov 29 03:21:14 np0005539564 nova_compute[226295]: 2025-11-29 08:21:14.051 226310 INFO nova.virt.libvirt.driver [-] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Instance destroyed successfully.#033[00m
Nov 29 03:21:14 np0005539564 nova_compute[226295]: 2025-11-29 08:21:14.052 226310 DEBUG nova.objects.instance [None req-58635745-5856-4fb0-9148-e4fbe1bacac9 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lazy-loading 'resources' on Instance uuid 0b082cd2-d1d3-4577-be0a-30b9256a223e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:14 np0005539564 nova_compute[226295]: 2025-11-29 08:21:14.073 226310 DEBUG nova.virt.libvirt.vif [None req-58635745-5856-4fb0-9148-e4fbe1bacac9 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:17:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-2082719356',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-2082719356',id=128,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:17:49Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b9d4c81989d641678300c7a1c173a2c2',ramdisk_id='',reservation_id='r-laevdyrq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',i
mage_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1019923576',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1019923576-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:18:01Z,user_data=None,user_id='504bc6adabad4f7d8c17b0438c4d9be7',uuid=0b082cd2-d1d3-4577-be0a-30b9256a223e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "81ca5375-8e5a-46e3-9340-e5375e00d3e6", "address": "fa:16:3e:a1:48:33", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ca5375-8e", "ovs_interfaceid": "81ca5375-8e5a-46e3-9340-e5375e00d3e6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:21:14 np0005539564 nova_compute[226295]: 2025-11-29 08:21:14.074 226310 DEBUG nova.network.os_vif_util [None req-58635745-5856-4fb0-9148-e4fbe1bacac9 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converting VIF {"id": "81ca5375-8e5a-46e3-9340-e5375e00d3e6", "address": "fa:16:3e:a1:48:33", "network": {"id": "3c25940b-e63b-4443-a94b-0216a35e8dc6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1584884772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9d4c81989d641678300c7a1c173a2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ca5375-8e", "ovs_interfaceid": "81ca5375-8e5a-46e3-9340-e5375e00d3e6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:21:14 np0005539564 nova_compute[226295]: 2025-11-29 08:21:14.075 226310 DEBUG nova.network.os_vif_util [None req-58635745-5856-4fb0-9148-e4fbe1bacac9 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a1:48:33,bridge_name='br-int',has_traffic_filtering=True,id=81ca5375-8e5a-46e3-9340-e5375e00d3e6,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81ca5375-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:21:14 np0005539564 nova_compute[226295]: 2025-11-29 08:21:14.075 226310 DEBUG os_vif [None req-58635745-5856-4fb0-9148-e4fbe1bacac9 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a1:48:33,bridge_name='br-int',has_traffic_filtering=True,id=81ca5375-8e5a-46e3-9340-e5375e00d3e6,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81ca5375-8e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:21:14 np0005539564 nova_compute[226295]: 2025-11-29 08:21:14.078 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:14 np0005539564 nova_compute[226295]: 2025-11-29 08:21:14.078 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81ca5375-8e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:14 np0005539564 nova_compute[226295]: 2025-11-29 08:21:14.080 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:14 np0005539564 nova_compute[226295]: 2025-11-29 08:21:14.082 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:21:14 np0005539564 nova_compute[226295]: 2025-11-29 08:21:14.084 226310 INFO os_vif [None req-58635745-5856-4fb0-9148-e4fbe1bacac9 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a1:48:33,bridge_name='br-int',has_traffic_filtering=True,id=81ca5375-8e5a-46e3-9340-e5375e00d3e6,network=Network(3c25940b-e63b-4443-a94b-0216a35e8dc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81ca5375-8e')#033[00m
Nov 29 03:21:14 np0005539564 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[273876]: [NOTICE]   (273880) : haproxy version is 2.8.14-c23fe91
Nov 29 03:21:14 np0005539564 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[273876]: [NOTICE]   (273880) : path to executable is /usr/sbin/haproxy
Nov 29 03:21:14 np0005539564 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[273876]: [WARNING]  (273880) : Exiting Master process...
Nov 29 03:21:14 np0005539564 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[273876]: [WARNING]  (273880) : Exiting Master process...
Nov 29 03:21:14 np0005539564 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[273876]: [ALERT]    (273880) : Current worker (273882) exited with code 143 (Terminated)
Nov 29 03:21:14 np0005539564 neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6[273876]: [WARNING]  (273880) : All workers exited. Exiting... (0)
Nov 29 03:21:14 np0005539564 systemd[1]: libpod-3bbb09d736272ac1a6ecb52b9f7994b18bfe860b813a27e100b657053e86af32.scope: Deactivated successfully.
Nov 29 03:21:14 np0005539564 podman[279148]: 2025-11-29 08:21:14.130257599 +0000 UTC m=+0.054562236 container died 3bbb09d736272ac1a6ecb52b9f7994b18bfe860b813a27e100b657053e86af32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:21:14 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3bbb09d736272ac1a6ecb52b9f7994b18bfe860b813a27e100b657053e86af32-userdata-shm.mount: Deactivated successfully.
Nov 29 03:21:14 np0005539564 systemd[1]: var-lib-containers-storage-overlay-1d5e078b9430d62b528b5fa637af142fe8d53d363e05b7dd4c479027f2eb3bcb-merged.mount: Deactivated successfully.
Nov 29 03:21:14 np0005539564 podman[279148]: 2025-11-29 08:21:14.176836988 +0000 UTC m=+0.101141625 container cleanup 3bbb09d736272ac1a6ecb52b9f7994b18bfe860b813a27e100b657053e86af32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:21:14 np0005539564 systemd[1]: libpod-conmon-3bbb09d736272ac1a6ecb52b9f7994b18bfe860b813a27e100b657053e86af32.scope: Deactivated successfully.
Nov 29 03:21:14 np0005539564 podman[279194]: 2025-11-29 08:21:14.246859402 +0000 UTC m=+0.048288778 container remove 3bbb09d736272ac1a6ecb52b9f7994b18bfe860b813a27e100b657053e86af32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:21:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:14.256 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[16aa3bb2-cd71-4136-b79b-957c19283174]: (4, ('Sat Nov 29 08:21:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6 (3bbb09d736272ac1a6ecb52b9f7994b18bfe860b813a27e100b657053e86af32)\n3bbb09d736272ac1a6ecb52b9f7994b18bfe860b813a27e100b657053e86af32\nSat Nov 29 08:21:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6 (3bbb09d736272ac1a6ecb52b9f7994b18bfe860b813a27e100b657053e86af32)\n3bbb09d736272ac1a6ecb52b9f7994b18bfe860b813a27e100b657053e86af32\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:14.258 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[760d0f76-5c42-41a2-9b7d-706639bfd522]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:14.259 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c25940b-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:14 np0005539564 nova_compute[226295]: 2025-11-29 08:21:14.260 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:14 np0005539564 kernel: tap3c25940b-e0: left promiscuous mode
Nov 29 03:21:14 np0005539564 nova_compute[226295]: 2025-11-29 08:21:14.280 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:14.283 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb89d8a-3937-4ad8-a9fd-41803c1fac7e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:14.294 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d0a8d3b2-6862-40eb-a6f6-c1382315ee66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:14.295 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f97bde9d-e337-4c6b-aba2-5d39f365b2f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:14.313 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[55b60a76-440a-4308-8f2e-5d431a22edc4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724256, 'reachable_time': 18672, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279209, 'error': None, 'target': 'ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:14 np0005539564 systemd[1]: run-netns-ovnmeta\x2d3c25940b\x2de63b\x2d4443\x2da94b\x2d0216a35e8dc6.mount: Deactivated successfully.
Nov 29 03:21:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:14.316 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3c25940b-e63b-4443-a94b-0216a35e8dc6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:21:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:14.317 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[d0deadee-6ca4-4e32-a69b-1da367a513ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:14.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:14 np0005539564 nova_compute[226295]: 2025-11-29 08:21:14.610 226310 INFO nova.virt.libvirt.driver [None req-58635745-5856-4fb0-9148-e4fbe1bacac9 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Deleting instance files /var/lib/nova/instances/0b082cd2-d1d3-4577-be0a-30b9256a223e_del#033[00m
Nov 29 03:21:14 np0005539564 nova_compute[226295]: 2025-11-29 08:21:14.612 226310 INFO nova.virt.libvirt.driver [None req-58635745-5856-4fb0-9148-e4fbe1bacac9 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Deletion of /var/lib/nova/instances/0b082cd2-d1d3-4577-be0a-30b9256a223e_del complete#033[00m
Nov 29 03:21:14 np0005539564 nova_compute[226295]: 2025-11-29 08:21:14.704 226310 INFO nova.compute.manager [None req-58635745-5856-4fb0-9148-e4fbe1bacac9 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Took 0.89 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:21:14 np0005539564 nova_compute[226295]: 2025-11-29 08:21:14.705 226310 DEBUG oslo.service.loopingcall [None req-58635745-5856-4fb0-9148-e4fbe1bacac9 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:21:14 np0005539564 nova_compute[226295]: 2025-11-29 08:21:14.705 226310 DEBUG nova.compute.manager [-] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:21:14 np0005539564 nova_compute[226295]: 2025-11-29 08:21:14.705 226310 DEBUG nova.network.neutron [-] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:21:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:15.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:15 np0005539564 nova_compute[226295]: 2025-11-29 08:21:15.181 226310 DEBUG nova.compute.manager [req-0fb57984-83e9-4db6-8901-444fbca034b4 req-03d45f2d-8471-4f10-ae1c-3d307a0edd8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Received event network-vif-unplugged-81ca5375-8e5a-46e3-9340-e5375e00d3e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:15 np0005539564 nova_compute[226295]: 2025-11-29 08:21:15.182 226310 DEBUG oslo_concurrency.lockutils [req-0fb57984-83e9-4db6-8901-444fbca034b4 req-03d45f2d-8471-4f10-ae1c-3d307a0edd8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0b082cd2-d1d3-4577-be0a-30b9256a223e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:15 np0005539564 nova_compute[226295]: 2025-11-29 08:21:15.183 226310 DEBUG oslo_concurrency.lockutils [req-0fb57984-83e9-4db6-8901-444fbca034b4 req-03d45f2d-8471-4f10-ae1c-3d307a0edd8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0b082cd2-d1d3-4577-be0a-30b9256a223e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:15 np0005539564 nova_compute[226295]: 2025-11-29 08:21:15.183 226310 DEBUG oslo_concurrency.lockutils [req-0fb57984-83e9-4db6-8901-444fbca034b4 req-03d45f2d-8471-4f10-ae1c-3d307a0edd8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0b082cd2-d1d3-4577-be0a-30b9256a223e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:15 np0005539564 nova_compute[226295]: 2025-11-29 08:21:15.183 226310 DEBUG nova.compute.manager [req-0fb57984-83e9-4db6-8901-444fbca034b4 req-03d45f2d-8471-4f10-ae1c-3d307a0edd8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] No waiting events found dispatching network-vif-unplugged-81ca5375-8e5a-46e3-9340-e5375e00d3e6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:15 np0005539564 nova_compute[226295]: 2025-11-29 08:21:15.184 226310 DEBUG nova.compute.manager [req-0fb57984-83e9-4db6-8901-444fbca034b4 req-03d45f2d-8471-4f10-ae1c-3d307a0edd8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Received event network-vif-unplugged-81ca5375-8e5a-46e3-9340-e5375e00d3e6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:21:15 np0005539564 nova_compute[226295]: 2025-11-29 08:21:15.184 226310 DEBUG nova.compute.manager [req-0fb57984-83e9-4db6-8901-444fbca034b4 req-03d45f2d-8471-4f10-ae1c-3d307a0edd8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Received event network-vif-plugged-81ca5375-8e5a-46e3-9340-e5375e00d3e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:15 np0005539564 nova_compute[226295]: 2025-11-29 08:21:15.184 226310 DEBUG oslo_concurrency.lockutils [req-0fb57984-83e9-4db6-8901-444fbca034b4 req-03d45f2d-8471-4f10-ae1c-3d307a0edd8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0b082cd2-d1d3-4577-be0a-30b9256a223e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:15 np0005539564 nova_compute[226295]: 2025-11-29 08:21:15.185 226310 DEBUG oslo_concurrency.lockutils [req-0fb57984-83e9-4db6-8901-444fbca034b4 req-03d45f2d-8471-4f10-ae1c-3d307a0edd8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0b082cd2-d1d3-4577-be0a-30b9256a223e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:15 np0005539564 nova_compute[226295]: 2025-11-29 08:21:15.185 226310 DEBUG oslo_concurrency.lockutils [req-0fb57984-83e9-4db6-8901-444fbca034b4 req-03d45f2d-8471-4f10-ae1c-3d307a0edd8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0b082cd2-d1d3-4577-be0a-30b9256a223e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:15 np0005539564 nova_compute[226295]: 2025-11-29 08:21:15.185 226310 DEBUG nova.compute.manager [req-0fb57984-83e9-4db6-8901-444fbca034b4 req-03d45f2d-8471-4f10-ae1c-3d307a0edd8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] No waiting events found dispatching network-vif-plugged-81ca5375-8e5a-46e3-9340-e5375e00d3e6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:15 np0005539564 nova_compute[226295]: 2025-11-29 08:21:15.186 226310 WARNING nova.compute.manager [req-0fb57984-83e9-4db6-8901-444fbca034b4 req-03d45f2d-8471-4f10-ae1c-3d307a0edd8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Received unexpected event network-vif-plugged-81ca5375-8e5a-46e3-9340-e5375e00d3e6 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:21:15 np0005539564 nova_compute[226295]: 2025-11-29 08:21:15.365 226310 DEBUG nova.network.neutron [-] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:15 np0005539564 nova_compute[226295]: 2025-11-29 08:21:15.445 226310 INFO nova.compute.manager [-] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Took 0.74 seconds to deallocate network for instance.#033[00m
Nov 29 03:21:15 np0005539564 nova_compute[226295]: 2025-11-29 08:21:15.524 226310 DEBUG oslo_concurrency.lockutils [None req-58635745-5856-4fb0-9148-e4fbe1bacac9 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:15 np0005539564 nova_compute[226295]: 2025-11-29 08:21:15.525 226310 DEBUG oslo_concurrency.lockutils [None req-58635745-5856-4fb0-9148-e4fbe1bacac9 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:15 np0005539564 nova_compute[226295]: 2025-11-29 08:21:15.628 226310 DEBUG oslo_concurrency.processutils [None req-58635745-5856-4fb0-9148-e4fbe1bacac9 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:16 np0005539564 NetworkManager[48997]: <info>  [1764404476.0110] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/252)
Nov 29 03:21:16 np0005539564 nova_compute[226295]: 2025-11-29 08:21:16.010 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:16 np0005539564 NetworkManager[48997]: <info>  [1764404476.0122] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/253)
Nov 29 03:21:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:21:16 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2091616258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:21:16 np0005539564 nova_compute[226295]: 2025-11-29 08:21:16.108 226310 DEBUG oslo_concurrency.processutils [None req-58635745-5856-4fb0-9148-e4fbe1bacac9 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:16 np0005539564 nova_compute[226295]: 2025-11-29 08:21:16.117 226310 DEBUG nova.compute.provider_tree [None req-58635745-5856-4fb0-9148-e4fbe1bacac9 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:21:16 np0005539564 ovn_controller[130591]: 2025-11-29T08:21:16Z|00534|binding|INFO|Releasing lport 6711ba96-49f0-431a-a4d5-64f9cee27708 from this chassis (sb_readonly=0)
Nov 29 03:21:16 np0005539564 ovn_controller[130591]: 2025-11-29T08:21:16Z|00535|binding|INFO|Releasing lport 6711ba96-49f0-431a-a4d5-64f9cee27708 from this chassis (sb_readonly=0)
Nov 29 03:21:16 np0005539564 nova_compute[226295]: 2025-11-29 08:21:16.186 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:16 np0005539564 nova_compute[226295]: 2025-11-29 08:21:16.383 226310 DEBUG nova.scheduler.client.report [None req-58635745-5856-4fb0-9148-e4fbe1bacac9 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:21:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:16.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:16 np0005539564 nova_compute[226295]: 2025-11-29 08:21:16.628 226310 DEBUG oslo_concurrency.lockutils [None req-58635745-5856-4fb0-9148-e4fbe1bacac9 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:16 np0005539564 nova_compute[226295]: 2025-11-29 08:21:16.831 226310 INFO nova.scheduler.client.report [None req-58635745-5856-4fb0-9148-e4fbe1bacac9 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Deleted allocations for instance 0b082cd2-d1d3-4577-be0a-30b9256a223e#033[00m
Nov 29 03:21:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:21:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:17.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:17 np0005539564 nova_compute[226295]: 2025-11-29 08:21:17.420 226310 DEBUG nova.compute.manager [req-9b9d26ec-624a-42c2-9272-b77abffdc722 req-1f7c22dd-3255-4733-8f25-dcc4be409e71 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Received event network-vif-deleted-81ca5375-8e5a-46e3-9340-e5375e00d3e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:17 np0005539564 nova_compute[226295]: 2025-11-29 08:21:17.540 226310 DEBUG oslo_concurrency.lockutils [None req-58635745-5856-4fb0-9148-e4fbe1bacac9 504bc6adabad4f7d8c17b0438c4d9be7 b9d4c81989d641678300c7a1c173a2c2 - - default default] Lock "0b082cd2-d1d3-4577-be0a-30b9256a223e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:17 np0005539564 nova_compute[226295]: 2025-11-29 08:21:17.628 226310 DEBUG nova.compute.manager [req-482666de-a4c9-4027-bd8e-fde6bbd748c7 req-365157dd-306e-4036-9008-611dbbe832e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received event network-changed-2c29e276-97c4-4a7c-8428-caad5a74a94d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:17 np0005539564 nova_compute[226295]: 2025-11-29 08:21:17.628 226310 DEBUG nova.compute.manager [req-482666de-a4c9-4027-bd8e-fde6bbd748c7 req-365157dd-306e-4036-9008-611dbbe832e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Refreshing instance network info cache due to event network-changed-2c29e276-97c4-4a7c-8428-caad5a74a94d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:21:17 np0005539564 nova_compute[226295]: 2025-11-29 08:21:17.629 226310 DEBUG oslo_concurrency.lockutils [req-482666de-a4c9-4027-bd8e-fde6bbd748c7 req-365157dd-306e-4036-9008-611dbbe832e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-63dba7cc-b3af-4fb3-bbe8-58d5a087af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:21:17 np0005539564 nova_compute[226295]: 2025-11-29 08:21:17.629 226310 DEBUG oslo_concurrency.lockutils [req-482666de-a4c9-4027-bd8e-fde6bbd748c7 req-365157dd-306e-4036-9008-611dbbe832e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-63dba7cc-b3af-4fb3-bbe8-58d5a087af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:21:17 np0005539564 nova_compute[226295]: 2025-11-29 08:21:17.630 226310 DEBUG nova.network.neutron [req-482666de-a4c9-4027-bd8e-fde6bbd748c7 req-365157dd-306e-4036-9008-611dbbe832e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Refreshing network info cache for port 2c29e276-97c4-4a7c-8428-caad5a74a94d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:21:17 np0005539564 nova_compute[226295]: 2025-11-29 08:21:17.661 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:18.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:18 np0005539564 nova_compute[226295]: 2025-11-29 08:21:18.994 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404463.9925258, 091988cc-8042-4aa1-b909-5ca1744ff259 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:18 np0005539564 nova_compute[226295]: 2025-11-29 08:21:18.994 226310 INFO nova.compute.manager [-] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:21:19 np0005539564 nova_compute[226295]: 2025-11-29 08:21:19.021 226310 DEBUG nova.compute.manager [None req-49d3394b-9d44-4ebd-bc13-6e2db8279b23 - - - - - -] [instance: 091988cc-8042-4aa1-b909-5ca1744ff259] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:19.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:19 np0005539564 nova_compute[226295]: 2025-11-29 08:21:19.083 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:19 np0005539564 nova_compute[226295]: 2025-11-29 08:21:19.467 226310 DEBUG nova.network.neutron [req-482666de-a4c9-4027-bd8e-fde6bbd748c7 req-365157dd-306e-4036-9008-611dbbe832e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Updated VIF entry in instance network info cache for port 2c29e276-97c4-4a7c-8428-caad5a74a94d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:21:19 np0005539564 nova_compute[226295]: 2025-11-29 08:21:19.468 226310 DEBUG nova.network.neutron [req-482666de-a4c9-4027-bd8e-fde6bbd748c7 req-365157dd-306e-4036-9008-611dbbe832e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Updating instance_info_cache with network_info: [{"id": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "address": "fa:16:3e:f6:de:29", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c29e276-97", "ovs_interfaceid": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:19 np0005539564 nova_compute[226295]: 2025-11-29 08:21:19.570 226310 DEBUG oslo_concurrency.lockutils [req-482666de-a4c9-4027-bd8e-fde6bbd748c7 req-365157dd-306e-4036-9008-611dbbe832e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-63dba7cc-b3af-4fb3-bbe8-58d5a087af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:21:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:20.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:21.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:22.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:22 np0005539564 nova_compute[226295]: 2025-11-29 08:21:22.664 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:23.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e328 e328: 3 total, 3 up, 3 in
Nov 29 03:21:24 np0005539564 nova_compute[226295]: 2025-11-29 08:21:24.086 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:24.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:25.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:25 np0005539564 ovn_controller[130591]: 2025-11-29T08:21:25Z|00536|binding|INFO|Releasing lport 6711ba96-49f0-431a-a4d5-64f9cee27708 from this chassis (sb_readonly=0)
Nov 29 03:21:25 np0005539564 nova_compute[226295]: 2025-11-29 08:21:25.486 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:26.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:27.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:27 np0005539564 ovn_controller[130591]: 2025-11-29T08:21:27Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f6:de:29 10.100.0.12
Nov 29 03:21:27 np0005539564 ovn_controller[130591]: 2025-11-29T08:21:27Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:de:29 10.100.0.12
Nov 29 03:21:27 np0005539564 nova_compute[226295]: 2025-11-29 08:21:27.741 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:28.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:29.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:29 np0005539564 nova_compute[226295]: 2025-11-29 08:21:29.048 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404474.0473025, 0b082cd2-d1d3-4577-be0a-30b9256a223e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:29 np0005539564 nova_compute[226295]: 2025-11-29 08:21:29.048 226310 INFO nova.compute.manager [-] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:21:29 np0005539564 nova_compute[226295]: 2025-11-29 08:21:29.074 226310 DEBUG nova.compute.manager [None req-ff2ddfab-a3ca-49c2-8c6b-6914c5012d0b - - - - - -] [instance: 0b082cd2-d1d3-4577-be0a-30b9256a223e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:29 np0005539564 nova_compute[226295]: 2025-11-29 08:21:29.089 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:30.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:31.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:32.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:32 np0005539564 nova_compute[226295]: 2025-11-29 08:21:32.743 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:32 np0005539564 nova_compute[226295]: 2025-11-29 08:21:32.969 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:33.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:33 np0005539564 nova_compute[226295]: 2025-11-29 08:21:33.344 226310 DEBUG nova.compute.manager [None req-fdad3a07-f8c2-4d50-bfcd-7374ce7fa6de 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:33 np0005539564 nova_compute[226295]: 2025-11-29 08:21:33.421 226310 INFO nova.compute.manager [None req-fdad3a07-f8c2-4d50-bfcd-7374ce7fa6de 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] instance snapshotting#033[00m
Nov 29 03:21:33 np0005539564 nova_compute[226295]: 2025-11-29 08:21:33.750 226310 INFO nova.virt.libvirt.driver [None req-fdad3a07-f8c2-4d50-bfcd-7374ce7fa6de 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Beginning live snapshot process#033[00m
Nov 29 03:21:33 np0005539564 nova_compute[226295]: 2025-11-29 08:21:33.936 226310 DEBUG nova.virt.libvirt.imagebackend [None req-fdad3a07-f8c2-4d50-bfcd-7374ce7fa6de 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:21:34 np0005539564 nova_compute[226295]: 2025-11-29 08:21:34.092 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:34 np0005539564 nova_compute[226295]: 2025-11-29 08:21:34.193 226310 DEBUG nova.storage.rbd_utils [None req-fdad3a07-f8c2-4d50-bfcd-7374ce7fa6de 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] creating snapshot(59b9f22bbc974d45b7dab67873b808b5) on rbd image(63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:21:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:34.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:21:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.0 total, 600.0 interval#012Cumulative writes: 10K writes, 52K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s#012Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.10 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1742 writes, 8602 keys, 1742 commit groups, 1.0 writes per commit group, ingest: 16.80 MB, 0.03 MB/s#012Interval WAL: 1742 writes, 1742 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     17.5      3.51              0.23        30    0.117       0      0       0.0       0.0#012  L6      1/0    9.35 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.4     42.4     35.6      7.68              0.96        29    0.265    185K    16K       0.0       0.0#012 Sum      1/0    9.35 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.4     29.1     29.9     11.19              1.20        59    0.190    185K    16K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.7     82.7     83.8      0.86              0.29        12    0.072     49K   3092       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0     42.4     35.6      7.68              0.96        29    0.265    185K    16K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     17.5      3.51              0.23        29    0.121       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4200.0 total, 600.0 interval#012Flush(GB): cumulative 0.060, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.33 GB write, 0.08 MB/s write, 0.32 GB read, 0.08 MB/s read, 11.2 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558dc73191f0#2 capacity: 304.00 MB usage: 38.85 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000327 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2132,37.49 MB,12.3327%) FilterBlock(59,524.55 KB,0.168504%) IndexBlock(59,867.66 KB,0.278724%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 03:21:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e329 e329: 3 total, 3 up, 3 in
Nov 29 03:21:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:35.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:35 np0005539564 nova_compute[226295]: 2025-11-29 08:21:35.094 226310 DEBUG nova.storage.rbd_utils [None req-fdad3a07-f8c2-4d50-bfcd-7374ce7fa6de 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] cloning vms/63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk@59b9f22bbc974d45b7dab67873b808b5 to images/10d0dcc8-92b3-4e29-baff-aaa5549dce70 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:21:35 np0005539564 nova_compute[226295]: 2025-11-29 08:21:35.210 226310 DEBUG nova.storage.rbd_utils [None req-fdad3a07-f8c2-4d50-bfcd-7374ce7fa6de 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] flattening images/10d0dcc8-92b3-4e29-baff-aaa5549dce70 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:21:35 np0005539564 nova_compute[226295]: 2025-11-29 08:21:35.692 226310 DEBUG nova.storage.rbd_utils [None req-fdad3a07-f8c2-4d50-bfcd-7374ce7fa6de 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] removing snapshot(59b9f22bbc974d45b7dab67873b808b5) on rbd image(63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:21:35 np0005539564 nova_compute[226295]: 2025-11-29 08:21:35.733 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e330 e330: 3 total, 3 up, 3 in
Nov 29 03:21:36 np0005539564 nova_compute[226295]: 2025-11-29 08:21:36.070 226310 DEBUG nova.storage.rbd_utils [None req-fdad3a07-f8c2-4d50-bfcd-7374ce7fa6de 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] creating snapshot(snap) on rbd image(10d0dcc8-92b3-4e29-baff-aaa5549dce70) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:21:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:36.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:37.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:37 np0005539564 nova_compute[226295]: 2025-11-29 08:21:37.070 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e331 e331: 3 total, 3 up, 3 in
Nov 29 03:21:37 np0005539564 nova_compute[226295]: 2025-11-29 08:21:37.746 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:21:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:38.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:38 np0005539564 nova_compute[226295]: 2025-11-29 08:21:38.667 226310 INFO nova.virt.libvirt.driver [None req-fdad3a07-f8c2-4d50-bfcd-7374ce7fa6de 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Snapshot image upload complete#033[00m
Nov 29 03:21:38 np0005539564 nova_compute[226295]: 2025-11-29 08:21:38.668 226310 INFO nova.compute.manager [None req-fdad3a07-f8c2-4d50-bfcd-7374ce7fa6de 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Took 5.24 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 29 03:21:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:39.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:39 np0005539564 nova_compute[226295]: 2025-11-29 08:21:39.124 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:40.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:41.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:41 np0005539564 nova_compute[226295]: 2025-11-29 08:21:41.424 226310 DEBUG oslo_concurrency.lockutils [None req-11cdb3c3-9f1f-4b1f-9f07-e5dff7d00cef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:41 np0005539564 nova_compute[226295]: 2025-11-29 08:21:41.427 226310 DEBUG oslo_concurrency.lockutils [None req-11cdb3c3-9f1f-4b1f-9f07-e5dff7d00cef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:41 np0005539564 nova_compute[226295]: 2025-11-29 08:21:41.447 226310 DEBUG nova.objects.instance [None req-11cdb3c3-9f1f-4b1f-9f07-e5dff7d00cef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'flavor' on Instance uuid 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:41 np0005539564 podman[279547]: 2025-11-29 08:21:41.467474099 +0000 UTC m=+0.097822046 container exec 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:21:41 np0005539564 nova_compute[226295]: 2025-11-29 08:21:41.557 226310 DEBUG oslo_concurrency.lockutils [None req-11cdb3c3-9f1f-4b1f-9f07-e5dff7d00cef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:41 np0005539564 podman[279547]: 2025-11-29 08:21:41.626820296 +0000 UTC m=+0.257168243 container exec_died 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 29 03:21:41 np0005539564 nova_compute[226295]: 2025-11-29 08:21:41.819 226310 DEBUG oslo_concurrency.lockutils [None req-11cdb3c3-9f1f-4b1f-9f07-e5dff7d00cef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:41 np0005539564 nova_compute[226295]: 2025-11-29 08:21:41.820 226310 DEBUG oslo_concurrency.lockutils [None req-11cdb3c3-9f1f-4b1f-9f07-e5dff7d00cef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:41 np0005539564 nova_compute[226295]: 2025-11-29 08:21:41.820 226310 INFO nova.compute.manager [None req-11cdb3c3-9f1f-4b1f-9f07-e5dff7d00cef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Attaching volume 5a96f9f0-c824-49b0-8c65-4dfade22f1c1 to /dev/vdb#033[00m
Nov 29 03:21:41 np0005539564 nova_compute[226295]: 2025-11-29 08:21:41.978 226310 DEBUG os_brick.utils [None req-11cdb3c3-9f1f-4b1f-9f07-e5dff7d00cef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:21:41 np0005539564 nova_compute[226295]: 2025-11-29 08:21:41.980 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:42 np0005539564 nova_compute[226295]: 2025-11-29 08:21:42.000 231810 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:42 np0005539564 nova_compute[226295]: 2025-11-29 08:21:42.000 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[a03baec2-2737-4eef-bed5-26ba8ddef529]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:42 np0005539564 nova_compute[226295]: 2025-11-29 08:21:42.002 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:42 np0005539564 nova_compute[226295]: 2025-11-29 08:21:42.017 231810 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:42 np0005539564 nova_compute[226295]: 2025-11-29 08:21:42.017 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[d4ca5264-9781-43be-b863-2d827c0d912d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d3b4384eec44', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:42 np0005539564 nova_compute[226295]: 2025-11-29 08:21:42.020 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:42 np0005539564 nova_compute[226295]: 2025-11-29 08:21:42.033 231810 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:42 np0005539564 nova_compute[226295]: 2025-11-29 08:21:42.034 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[cf6c9075-dc38-412a-9165-6a5f6676b1f6]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:42 np0005539564 nova_compute[226295]: 2025-11-29 08:21:42.036 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[37be7fa8-dbb0-4a14-91ec-6bfcbffffb5a]: (4, '2e858761-3292-4a17-b38f-a169c3064289') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:42 np0005539564 nova_compute[226295]: 2025-11-29 08:21:42.037 226310 DEBUG oslo_concurrency.processutils [None req-11cdb3c3-9f1f-4b1f-9f07-e5dff7d00cef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:42 np0005539564 nova_compute[226295]: 2025-11-29 08:21:42.072 226310 DEBUG oslo_concurrency.processutils [None req-11cdb3c3-9f1f-4b1f-9f07-e5dff7d00cef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:42 np0005539564 nova_compute[226295]: 2025-11-29 08:21:42.075 226310 DEBUG os_brick.initiator.connectors.lightos [None req-11cdb3c3-9f1f-4b1f-9f07-e5dff7d00cef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:21:42 np0005539564 nova_compute[226295]: 2025-11-29 08:21:42.075 226310 DEBUG os_brick.initiator.connectors.lightos [None req-11cdb3c3-9f1f-4b1f-9f07-e5dff7d00cef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:21:42 np0005539564 nova_compute[226295]: 2025-11-29 08:21:42.075 226310 DEBUG os_brick.initiator.connectors.lightos [None req-11cdb3c3-9f1f-4b1f-9f07-e5dff7d00cef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:21:42 np0005539564 nova_compute[226295]: 2025-11-29 08:21:42.076 226310 DEBUG os_brick.utils [None req-11cdb3c3-9f1f-4b1f-9f07-e5dff7d00cef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] <== get_connector_properties: return (96ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d3b4384eec44', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2e858761-3292-4a17-b38f-a169c3064289', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:21:42 np0005539564 nova_compute[226295]: 2025-11-29 08:21:42.076 226310 DEBUG nova.virt.block_device [None req-11cdb3c3-9f1f-4b1f-9f07-e5dff7d00cef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Updating existing volume attachment record: c29d86c2-8342-4189-a462-03b5e805cff6 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:21:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:42.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:42 np0005539564 nova_compute[226295]: 2025-11-29 08:21:42.748 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:42 np0005539564 nova_compute[226295]: 2025-11-29 08:21:42.897 226310 DEBUG nova.objects.instance [None req-11cdb3c3-9f1f-4b1f-9f07-e5dff7d00cef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'flavor' on Instance uuid 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:42 np0005539564 nova_compute[226295]: 2025-11-29 08:21:42.946 226310 DEBUG nova.virt.libvirt.driver [None req-11cdb3c3-9f1f-4b1f-9f07-e5dff7d00cef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Attempting to attach volume 5a96f9f0-c824-49b0-8c65-4dfade22f1c1 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:21:42 np0005539564 nova_compute[226295]: 2025-11-29 08:21:42.949 226310 DEBUG nova.virt.libvirt.guest [None req-11cdb3c3-9f1f-4b1f-9f07-e5dff7d00cef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:21:42 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:21:42 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-5a96f9f0-c824-49b0-8c65-4dfade22f1c1">
Nov 29 03:21:42 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:21:42 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:21:42 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:21:42 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:21:42 np0005539564 nova_compute[226295]:  <auth username="openstack">
Nov 29 03:21:42 np0005539564 nova_compute[226295]:    <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:21:42 np0005539564 nova_compute[226295]:  </auth>
Nov 29 03:21:42 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:21:42 np0005539564 nova_compute[226295]:  <serial>5a96f9f0-c824-49b0-8c65-4dfade22f1c1</serial>
Nov 29 03:21:42 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:21:42 np0005539564 nova_compute[226295]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:21:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:43.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:43 np0005539564 nova_compute[226295]: 2025-11-29 08:21:43.073 226310 DEBUG nova.virt.libvirt.driver [None req-11cdb3c3-9f1f-4b1f-9f07-e5dff7d00cef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:21:43 np0005539564 nova_compute[226295]: 2025-11-29 08:21:43.073 226310 DEBUG nova.virt.libvirt.driver [None req-11cdb3c3-9f1f-4b1f-9f07-e5dff7d00cef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:21:43 np0005539564 nova_compute[226295]: 2025-11-29 08:21:43.073 226310 DEBUG nova.virt.libvirt.driver [None req-11cdb3c3-9f1f-4b1f-9f07-e5dff7d00cef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:21:43 np0005539564 nova_compute[226295]: 2025-11-29 08:21:43.074 226310 DEBUG nova.virt.libvirt.driver [None req-11cdb3c3-9f1f-4b1f-9f07-e5dff7d00cef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No VIF found with MAC fa:16:3e:f6:de:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:21:43 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:21:43 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:21:43 np0005539564 nova_compute[226295]: 2025-11-29 08:21:43.341 226310 DEBUG oslo_concurrency.lockutils [None req-11cdb3c3-9f1f-4b1f-9f07-e5dff7d00cef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.521s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:43 np0005539564 podman[279825]: 2025-11-29 08:21:43.524777958 +0000 UTC m=+0.069300925 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd)
Nov 29 03:21:43 np0005539564 podman[279826]: 2025-11-29 08:21:43.551878111 +0000 UTC m=+0.092264585 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 03:21:43 np0005539564 podman[279824]: 2025-11-29 08:21:43.563823954 +0000 UTC m=+0.111497916 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:21:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e332 e332: 3 total, 3 up, 3 in
Nov 29 03:21:44 np0005539564 nova_compute[226295]: 2025-11-29 08:21:44.127 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:44 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:21:44 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:21:44 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:21:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:44.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:45.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:45 np0005539564 nova_compute[226295]: 2025-11-29 08:21:45.979 226310 INFO nova.compute.manager [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Rescuing#033[00m
Nov 29 03:21:45 np0005539564 nova_compute[226295]: 2025-11-29 08:21:45.980 226310 DEBUG oslo_concurrency.lockutils [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "refresh_cache-63dba7cc-b3af-4fb3-bbe8-58d5a087af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:21:45 np0005539564 nova_compute[226295]: 2025-11-29 08:21:45.980 226310 DEBUG oslo_concurrency.lockutils [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquired lock "refresh_cache-63dba7cc-b3af-4fb3-bbe8-58d5a087af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:21:45 np0005539564 nova_compute[226295]: 2025-11-29 08:21:45.981 226310 DEBUG nova.network.neutron [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:21:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:46.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:47.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:47 np0005539564 nova_compute[226295]: 2025-11-29 08:21:47.752 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:48.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:21:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:49.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:49 np0005539564 nova_compute[226295]: 2025-11-29 08:21:49.133 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:50.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:51.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:51 np0005539564 nova_compute[226295]: 2025-11-29 08:21:51.365 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:51 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:52.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:52 np0005539564 nova_compute[226295]: 2025-11-29 08:21:52.755 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:53.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:53 np0005539564 nova_compute[226295]: 2025-11-29 08:21:53.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:53 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:21:54 np0005539564 nova_compute[226295]: 2025-11-29 08:21:54.136 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:54 np0005539564 nova_compute[226295]: 2025-11-29 08:21:54.306 226310 DEBUG nova.network.neutron [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Updating instance_info_cache with network_info: [{"id": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "address": "fa:16:3e:f6:de:29", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c29e276-97", "ovs_interfaceid": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:54 np0005539564 nova_compute[226295]: 2025-11-29 08:21:54.333 226310 DEBUG oslo_concurrency.lockutils [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Releasing lock "refresh_cache-63dba7cc-b3af-4fb3-bbe8-58d5a087af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:21:54 np0005539564 nova_compute[226295]: 2025-11-29 08:21:54.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:54.372 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:21:54 np0005539564 nova_compute[226295]: 2025-11-29 08:21:54.373 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:54.374 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:21:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:54.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:54 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:21:54 np0005539564 nova_compute[226295]: 2025-11-29 08:21:54.818 226310 DEBUG nova.virt.libvirt.driver [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:21:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:55.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:55 np0005539564 nova_compute[226295]: 2025-11-29 08:21:55.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:56.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:57.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:57 np0005539564 nova_compute[226295]: 2025-11-29 08:21:57.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:57 np0005539564 nova_compute[226295]: 2025-11-29 08:21:57.342 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:21:57 np0005539564 nova_compute[226295]: 2025-11-29 08:21:57.759 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:57 np0005539564 nova_compute[226295]: 2025-11-29 08:21:57.839 226310 INFO nova.virt.libvirt.driver [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:21:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:58 np0005539564 nova_compute[226295]: 2025-11-29 08:21:58.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:58 np0005539564 nova_compute[226295]: 2025-11-29 08:21:58.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:21:58 np0005539564 nova_compute[226295]: 2025-11-29 08:21:58.537 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:21:58 np0005539564 nova_compute[226295]: 2025-11-29 08:21:58.538 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:58.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:21:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:59.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:59 np0005539564 nova_compute[226295]: 2025-11-29 08:21:59.139 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:59 np0005539564 kernel: tap2c29e276-97 (unregistering): left promiscuous mode
Nov 29 03:21:59 np0005539564 NetworkManager[48997]: <info>  [1764404519.8394] device (tap2c29e276-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:21:59 np0005539564 nova_compute[226295]: 2025-11-29 08:21:59.847 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:59 np0005539564 ovn_controller[130591]: 2025-11-29T08:21:59Z|00537|binding|INFO|Releasing lport 2c29e276-97c4-4a7c-8428-caad5a74a94d from this chassis (sb_readonly=0)
Nov 29 03:21:59 np0005539564 ovn_controller[130591]: 2025-11-29T08:21:59Z|00538|binding|INFO|Setting lport 2c29e276-97c4-4a7c-8428-caad5a74a94d down in Southbound
Nov 29 03:21:59 np0005539564 ovn_controller[130591]: 2025-11-29T08:21:59Z|00539|binding|INFO|Removing iface tap2c29e276-97 ovn-installed in OVS
Nov 29 03:21:59 np0005539564 nova_compute[226295]: 2025-11-29 08:21:59.850 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:59.867 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:de:29 10.100.0.12'], port_security=['fa:16:3e:f6:de:29 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '63dba7cc-b3af-4fb3-bbe8-58d5a087af19', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32485b0e-177b-4dfd-a55a-0249528f32e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '358970eca7ad4b05b70f43e5507ac052', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7e478a14-56df-4311-a000-02a6d80fadda', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83a2eb53-2a5d-447d-a36c-4b9c2b295f15, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=2c29e276-97c4-4a7c-8428-caad5a74a94d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:21:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:59.870 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 2c29e276-97c4-4a7c-8428-caad5a74a94d in datapath 32485b0e-177b-4dfd-a55a-0249528f32e1 unbound from our chassis#033[00m
Nov 29 03:21:59 np0005539564 nova_compute[226295]: 2025-11-29 08:21:59.871 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:59.873 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 32485b0e-177b-4dfd-a55a-0249528f32e1#033[00m
Nov 29 03:21:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:59.894 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[56aa56fd-5c94-4100-b42d-aca24b8b2f14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:59 np0005539564 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000092.scope: Deactivated successfully.
Nov 29 03:21:59 np0005539564 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000092.scope: Consumed 16.012s CPU time.
Nov 29 03:21:59 np0005539564 systemd-machined[190128]: Machine qemu-67-instance-00000092 terminated.
Nov 29 03:21:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:59.941 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[b181a044-4620-49d7-85ad-b29a84a7e1a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:59.945 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[5b124f11-79df-45a6-96e4-a62dc27da64e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:21:59.981 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[b6ad20b3-b099-47b3-8267-72b37f43ec40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:00.005 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[64ad8993-a6cb-4c6f-8127-6a9f5fd9ea9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32485b0e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:44:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736750, 'reachable_time': 22429, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279951, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:00.027 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d5128c2b-4eb9-490f-845b-0959a0456950]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap32485b0e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736766, 'tstamp': 736766}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279952, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap32485b0e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736770, 'tstamp': 736770}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279952, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:00.030 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32485b0e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:00 np0005539564 nova_compute[226295]: 2025-11-29 08:22:00.032 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:00 np0005539564 nova_compute[226295]: 2025-11-29 08:22:00.039 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:00.040 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32485b0e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:00.041 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:00.042 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap32485b0e-10, col_values=(('external_ids', {'iface-id': '6711ba96-49f0-431a-a4d5-64f9cee27708'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:00.042 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:00 np0005539564 nova_compute[226295]: 2025-11-29 08:22:00.087 226310 INFO nova.virt.libvirt.driver [-] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Instance destroyed successfully.#033[00m
Nov 29 03:22:00 np0005539564 nova_compute[226295]: 2025-11-29 08:22:00.088 226310 DEBUG nova.objects.instance [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'numa_topology' on Instance uuid 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:00 np0005539564 nova_compute[226295]: 2025-11-29 08:22:00.124 226310 INFO nova.virt.libvirt.driver [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Attempting a stable device rescue#033[00m
Nov 29 03:22:00 np0005539564 nova_compute[226295]: 2025-11-29 08:22:00.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:00 np0005539564 nova_compute[226295]: 2025-11-29 08:22:00.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:00 np0005539564 nova_compute[226295]: 2025-11-29 08:22:00.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:22:00 np0005539564 nova_compute[226295]: 2025-11-29 08:22:00.410 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:22:00 np0005539564 nova_compute[226295]: 2025-11-29 08:22:00.541 226310 DEBUG nova.virt.libvirt.driver [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Nov 29 03:22:00 np0005539564 nova_compute[226295]: 2025-11-29 08:22:00.548 226310 DEBUG nova.virt.libvirt.driver [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:22:00 np0005539564 nova_compute[226295]: 2025-11-29 08:22:00.548 226310 INFO nova.virt.libvirt.driver [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Creating image(s)#033[00m
Nov 29 03:22:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:00.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:00 np0005539564 nova_compute[226295]: 2025-11-29 08:22:00.668 226310 DEBUG nova.storage.rbd_utils [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:00 np0005539564 nova_compute[226295]: 2025-11-29 08:22:00.674 226310 DEBUG nova.objects.instance [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:00 np0005539564 nova_compute[226295]: 2025-11-29 08:22:00.746 226310 DEBUG nova.storage.rbd_utils [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:00 np0005539564 nova_compute[226295]: 2025-11-29 08:22:00.771 226310 DEBUG nova.storage.rbd_utils [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:00 np0005539564 nova_compute[226295]: 2025-11-29 08:22:00.774 226310 DEBUG oslo_concurrency.lockutils [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "0f2ff5e201ad0f7273da69d1d17211c853ddb250" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:00 np0005539564 nova_compute[226295]: 2025-11-29 08:22:00.775 226310 DEBUG oslo_concurrency.lockutils [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "0f2ff5e201ad0f7273da69d1d17211c853ddb250" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:01.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.224 226310 DEBUG nova.compute.manager [req-15c9b822-2c77-4b14-a2ef-93ad338b076f req-e7aaaaa0-d0c8-48d1-8cf1-86d70fbe8f39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received event network-vif-unplugged-2c29e276-97c4-4a7c-8428-caad5a74a94d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.224 226310 DEBUG oslo_concurrency.lockutils [req-15c9b822-2c77-4b14-a2ef-93ad338b076f req-e7aaaaa0-d0c8-48d1-8cf1-86d70fbe8f39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.224 226310 DEBUG oslo_concurrency.lockutils [req-15c9b822-2c77-4b14-a2ef-93ad338b076f req-e7aaaaa0-d0c8-48d1-8cf1-86d70fbe8f39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.225 226310 DEBUG oslo_concurrency.lockutils [req-15c9b822-2c77-4b14-a2ef-93ad338b076f req-e7aaaaa0-d0c8-48d1-8cf1-86d70fbe8f39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.225 226310 DEBUG nova.compute.manager [req-15c9b822-2c77-4b14-a2ef-93ad338b076f req-e7aaaaa0-d0c8-48d1-8cf1-86d70fbe8f39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] No waiting events found dispatching network-vif-unplugged-2c29e276-97c4-4a7c-8428-caad5a74a94d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.225 226310 WARNING nova.compute.manager [req-15c9b822-2c77-4b14-a2ef-93ad338b076f req-e7aaaaa0-d0c8-48d1-8cf1-86d70fbe8f39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received unexpected event network-vif-unplugged-2c29e276-97c4-4a7c-8428-caad5a74a94d for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.306 226310 DEBUG nova.virt.libvirt.imagebackend [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Image locations are: [{'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/10d0dcc8-92b3-4e29-baff-aaa5549dce70/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/10d0dcc8-92b3-4e29-baff-aaa5549dce70/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.370 226310 DEBUG nova.virt.libvirt.imagebackend [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Selected location: {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/10d0dcc8-92b3-4e29-baff-aaa5549dce70/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.371 226310 DEBUG nova.storage.rbd_utils [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] cloning images/10d0dcc8-92b3-4e29-baff-aaa5549dce70@snap to None/63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.592 226310 DEBUG oslo_concurrency.lockutils [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "0f2ff5e201ad0f7273da69d1d17211c853ddb250" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.653 226310 DEBUG nova.objects.instance [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'migration_context' on Instance uuid 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.691 226310 DEBUG nova.virt.libvirt.driver [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.695 226310 DEBUG nova.virt.libvirt.driver [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Start _get_guest_xml network_info=[{"id": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "address": "fa:16:3e:f6:de:29", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "vif_mac": "fa:16:3e:f6:de:29"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c29e276-97", "ovs_interfaceid": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '10d0dcc8-92b3-4e29-baff-aaa5549dce70', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vdb', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-5a96f9f0-c824-49b0-8c65-4dfade22f1c1', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '5a96f9f0-c824-49b0-8c65-4dfade22f1c1', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '63dba7cc-b3af-4fb3-bbe8-58d5a087af19', 'attached_at': '', 'detached_at': '', 'volume_id': '5a96f9f0-c824-49b0-8c65-4dfade22f1c1', 'serial': '5a96f9f0-c824-49b0-8c65-4dfade22f1c1'}, 'guest_format': None, 'delete_on_termination': False, 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': 'c29d86c2-8342-4189-a462-03b5e805cff6', 'boot_index': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.695 226310 DEBUG nova.objects.instance [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'resources' on Instance uuid 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.743 226310 WARNING nova.virt.libvirt.driver [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.755 226310 DEBUG nova.virt.libvirt.host [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.756 226310 DEBUG nova.virt.libvirt.host [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.759 226310 DEBUG nova.virt.libvirt.host [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.759 226310 DEBUG nova.virt.libvirt.host [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.760 226310 DEBUG nova.virt.libvirt.driver [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.761 226310 DEBUG nova.virt.hardware [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.761 226310 DEBUG nova.virt.hardware [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.761 226310 DEBUG nova.virt.hardware [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.761 226310 DEBUG nova.virt.hardware [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.762 226310 DEBUG nova.virt.hardware [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.762 226310 DEBUG nova.virt.hardware [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.762 226310 DEBUG nova.virt.hardware [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.762 226310 DEBUG nova.virt.hardware [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.762 226310 DEBUG nova.virt.hardware [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.763 226310 DEBUG nova.virt.hardware [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.763 226310 DEBUG nova.virt.hardware [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.763 226310 DEBUG nova.objects.instance [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:01 np0005539564 nova_compute[226295]: 2025-11-29 08:22:01.818 226310 DEBUG oslo_concurrency.processutils [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:22:02 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/236404037' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:22:02 np0005539564 nova_compute[226295]: 2025-11-29 08:22:02.303 226310 DEBUG oslo_concurrency.processutils [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:02 np0005539564 nova_compute[226295]: 2025-11-29 08:22:02.360 226310 DEBUG oslo_concurrency.processutils [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:02.375 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:02.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:02 np0005539564 nova_compute[226295]: 2025-11-29 08:22:02.762 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:22:02 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3270619874' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:22:02 np0005539564 nova_compute[226295]: 2025-11-29 08:22:02.915 226310 DEBUG oslo_concurrency.processutils [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:02 np0005539564 nova_compute[226295]: 2025-11-29 08:22:02.979 226310 DEBUG oslo_concurrency.processutils [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:03.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:03 np0005539564 nova_compute[226295]: 2025-11-29 08:22:03.357 226310 DEBUG nova.compute.manager [req-495edd53-3223-4914-b3fd-69c0adb703e0 req-303954fd-90ed-4866-ab22-cdfd2833669a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received event network-vif-plugged-2c29e276-97c4-4a7c-8428-caad5a74a94d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:03 np0005539564 nova_compute[226295]: 2025-11-29 08:22:03.358 226310 DEBUG oslo_concurrency.lockutils [req-495edd53-3223-4914-b3fd-69c0adb703e0 req-303954fd-90ed-4866-ab22-cdfd2833669a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:03 np0005539564 nova_compute[226295]: 2025-11-29 08:22:03.359 226310 DEBUG oslo_concurrency.lockutils [req-495edd53-3223-4914-b3fd-69c0adb703e0 req-303954fd-90ed-4866-ab22-cdfd2833669a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:03 np0005539564 nova_compute[226295]: 2025-11-29 08:22:03.359 226310 DEBUG oslo_concurrency.lockutils [req-495edd53-3223-4914-b3fd-69c0adb703e0 req-303954fd-90ed-4866-ab22-cdfd2833669a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:03 np0005539564 nova_compute[226295]: 2025-11-29 08:22:03.360 226310 DEBUG nova.compute.manager [req-495edd53-3223-4914-b3fd-69c0adb703e0 req-303954fd-90ed-4866-ab22-cdfd2833669a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] No waiting events found dispatching network-vif-plugged-2c29e276-97c4-4a7c-8428-caad5a74a94d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:03 np0005539564 nova_compute[226295]: 2025-11-29 08:22:03.360 226310 WARNING nova.compute.manager [req-495edd53-3223-4914-b3fd-69c0adb703e0 req-303954fd-90ed-4866-ab22-cdfd2833669a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received unexpected event network-vif-plugged-2c29e276-97c4-4a7c-8428-caad5a74a94d for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:22:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:22:03 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1421348518' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:22:03 np0005539564 nova_compute[226295]: 2025-11-29 08:22:03.441 226310 DEBUG oslo_concurrency.processutils [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:03 np0005539564 nova_compute[226295]: 2025-11-29 08:22:03.444 226310 DEBUG nova.virt.libvirt.vif [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:20:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-926273439',display_name='tempest-ServerStableDeviceRescueTest-server-926273439',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-926273439',id=146,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM90uGuHFEZUG4e9c1+zQZPo7nyMBkO25w7+OeL6gyYWhhodtL4WK14dtSTsr1qPRwaxAcx3xf1h4uzlv3mmeGtCTv+RpFMHq6ymHA+bpygtGf2oytOmyzvB5m3+xPiJeg==',key_name='tempest-keypair-1800430740',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:21:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='358970eca7ad4b05b70f43e5507ac052',ramdisk_id='',reservation_id='r-3qu0pws6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-1105304301',owner_user_name='tempest-ServerStableDeviceRescueTest-1105304301-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:21:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3b52040d601a4a56abcaf3f046f1e349',uuid=63dba7cc-b3af-4fb3-bbe8-58d5a087af19,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "address": "fa:16:3e:f6:de:29", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "vif_mac": "fa:16:3e:f6:de:29"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c29e276-97", "ovs_interfaceid": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:22:03 np0005539564 nova_compute[226295]: 2025-11-29 08:22:03.444 226310 DEBUG nova.network.os_vif_util [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converting VIF {"id": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "address": "fa:16:3e:f6:de:29", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "vif_mac": "fa:16:3e:f6:de:29"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c29e276-97", "ovs_interfaceid": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:22:03 np0005539564 nova_compute[226295]: 2025-11-29 08:22:03.446 226310 DEBUG nova.network.os_vif_util [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:de:29,bridge_name='br-int',has_traffic_filtering=True,id=2c29e276-97c4-4a7c-8428-caad5a74a94d,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c29e276-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:22:03 np0005539564 nova_compute[226295]: 2025-11-29 08:22:03.448 226310 DEBUG nova.objects.instance [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'pci_devices' on Instance uuid 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:03 np0005539564 nova_compute[226295]: 2025-11-29 08:22:03.500 226310 DEBUG nova.virt.libvirt.driver [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:22:03 np0005539564 nova_compute[226295]:  <uuid>63dba7cc-b3af-4fb3-bbe8-58d5a087af19</uuid>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:  <name>instance-00000092</name>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-926273439</nova:name>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:22:01</nova:creationTime>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:22:03 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:        <nova:user uuid="3b52040d601a4a56abcaf3f046f1e349">tempest-ServerStableDeviceRescueTest-1105304301-project-member</nova:user>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:        <nova:project uuid="358970eca7ad4b05b70f43e5507ac052">tempest-ServerStableDeviceRescueTest-1105304301</nova:project>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:        <nova:port uuid="2c29e276-97c4-4a7c-8428-caad5a74a94d">
Nov 29 03:22:03 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <entry name="serial">63dba7cc-b3af-4fb3-bbe8-58d5a087af19</entry>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <entry name="uuid">63dba7cc-b3af-4fb3-bbe8-58d5a087af19</entry>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk">
Nov 29 03:22:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:22:03 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk.config">
Nov 29 03:22:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:22:03 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="volumes/volume-5a96f9f0-c824-49b0-8c65-4dfade22f1c1">
Nov 29 03:22:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:22:03 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <serial>5a96f9f0-c824-49b0-8c65-4dfade22f1c1</serial>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk.rescue">
Nov 29 03:22:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:22:03 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <target dev="vdc" bus="virtio"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <boot order="1"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:f6:de:29"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <target dev="tap2c29e276-97"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/63dba7cc-b3af-4fb3-bbe8-58d5a087af19/console.log" append="off"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:22:03 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:22:03 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:22:03 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:22:03 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:22:03 np0005539564 nova_compute[226295]: 2025-11-29 08:22:03.514 226310 INFO nova.virt.libvirt.driver [-] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Instance destroyed successfully.#033[00m
Nov 29 03:22:03 np0005539564 nova_compute[226295]: 2025-11-29 08:22:03.603 226310 DEBUG nova.virt.libvirt.driver [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:22:03 np0005539564 nova_compute[226295]: 2025-11-29 08:22:03.604 226310 DEBUG nova.virt.libvirt.driver [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:22:03 np0005539564 nova_compute[226295]: 2025-11-29 08:22:03.604 226310 DEBUG nova.virt.libvirt.driver [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:22:03 np0005539564 nova_compute[226295]: 2025-11-29 08:22:03.604 226310 DEBUG nova.virt.libvirt.driver [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:22:03 np0005539564 nova_compute[226295]: 2025-11-29 08:22:03.604 226310 DEBUG nova.virt.libvirt.driver [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No VIF found with MAC fa:16:3e:f6:de:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:22:03 np0005539564 nova_compute[226295]: 2025-11-29 08:22:03.605 226310 INFO nova.virt.libvirt.driver [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Using config drive
Nov 29 03:22:03 np0005539564 nova_compute[226295]: 2025-11-29 08:22:03.638 226310 DEBUG nova.storage.rbd_utils [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:22:03 np0005539564 nova_compute[226295]: 2025-11-29 08:22:03.686 226310 DEBUG nova.objects.instance [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:22:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:03.735 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:22:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:03.736 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:22:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:03.737 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:22:03 np0005539564 nova_compute[226295]: 2025-11-29 08:22:03.751 226310 DEBUG nova.objects.instance [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'keypairs' on Instance uuid 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:22:04 np0005539564 nova_compute[226295]: 2025-11-29 08:22:04.143 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:04.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:04 np0005539564 nova_compute[226295]: 2025-11-29 08:22:04.655 226310 INFO nova.virt.libvirt.driver [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Creating config drive at /var/lib/nova/instances/63dba7cc-b3af-4fb3-bbe8-58d5a087af19/disk.config.rescue
Nov 29 03:22:04 np0005539564 nova_compute[226295]: 2025-11-29 08:22:04.661 226310 DEBUG oslo_concurrency.processutils [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/63dba7cc-b3af-4fb3-bbe8-58d5a087af19/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxksb6rwo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:22:04 np0005539564 nova_compute[226295]: 2025-11-29 08:22:04.826 226310 DEBUG oslo_concurrency.processutils [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/63dba7cc-b3af-4fb3-bbe8-58d5a087af19/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxksb6rwo" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:22:04 np0005539564 nova_compute[226295]: 2025-11-29 08:22:04.871 226310 DEBUG nova.storage.rbd_utils [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:22:04 np0005539564 nova_compute[226295]: 2025-11-29 08:22:04.879 226310 DEBUG oslo_concurrency.processutils [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/63dba7cc-b3af-4fb3-bbe8-58d5a087af19/disk.config.rescue 63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:22:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:05.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:05 np0005539564 nova_compute[226295]: 2025-11-29 08:22:05.384 226310 DEBUG oslo_concurrency.processutils [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/63dba7cc-b3af-4fb3-bbe8-58d5a087af19/disk.config.rescue 63dba7cc-b3af-4fb3-bbe8-58d5a087af19_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:22:05 np0005539564 nova_compute[226295]: 2025-11-29 08:22:05.386 226310 INFO nova.virt.libvirt.driver [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Deleting local config drive /var/lib/nova/instances/63dba7cc-b3af-4fb3-bbe8-58d5a087af19/disk.config.rescue because it was imported into RBD.
Nov 29 03:22:05 np0005539564 nova_compute[226295]: 2025-11-29 08:22:05.410 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:22:05 np0005539564 kernel: tap2c29e276-97: entered promiscuous mode
Nov 29 03:22:05 np0005539564 NetworkManager[48997]: <info>  [1764404525.4538] manager: (tap2c29e276-97): new Tun device (/org/freedesktop/NetworkManager/Devices/254)
Nov 29 03:22:05 np0005539564 ovn_controller[130591]: 2025-11-29T08:22:05Z|00540|binding|INFO|Claiming lport 2c29e276-97c4-4a7c-8428-caad5a74a94d for this chassis.
Nov 29 03:22:05 np0005539564 ovn_controller[130591]: 2025-11-29T08:22:05Z|00541|binding|INFO|2c29e276-97c4-4a7c-8428-caad5a74a94d: Claiming fa:16:3e:f6:de:29 10.100.0.12
Nov 29 03:22:05 np0005539564 nova_compute[226295]: 2025-11-29 08:22:05.455 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:05 np0005539564 nova_compute[226295]: 2025-11-29 08:22:05.479 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:22:05 np0005539564 nova_compute[226295]: 2025-11-29 08:22:05.479 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:22:05 np0005539564 nova_compute[226295]: 2025-11-29 08:22:05.479 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:22:05 np0005539564 nova_compute[226295]: 2025-11-29 08:22:05.480 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:22:05 np0005539564 ovn_controller[130591]: 2025-11-29T08:22:05Z|00542|binding|INFO|Setting lport 2c29e276-97c4-4a7c-8428-caad5a74a94d ovn-installed in OVS
Nov 29 03:22:05 np0005539564 ovn_controller[130591]: 2025-11-29T08:22:05Z|00543|binding|INFO|Setting lport 2c29e276-97c4-4a7c-8428-caad5a74a94d up in Southbound
Nov 29 03:22:05 np0005539564 nova_compute[226295]: 2025-11-29 08:22:05.480 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:22:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:05.478 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:de:29 10.100.0.12'], port_security=['fa:16:3e:f6:de:29 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '63dba7cc-b3af-4fb3-bbe8-58d5a087af19', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32485b0e-177b-4dfd-a55a-0249528f32e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '358970eca7ad4b05b70f43e5507ac052', 'neutron:revision_number': '5', 'neutron:security_group_ids': '7e478a14-56df-4311-a000-02a6d80fadda', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83a2eb53-2a5d-447d-a36c-4b9c2b295f15, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=2c29e276-97c4-4a7c-8428-caad5a74a94d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:22:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:05.480 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 2c29e276-97c4-4a7c-8428-caad5a74a94d in datapath 32485b0e-177b-4dfd-a55a-0249528f32e1 bound to our chassis
Nov 29 03:22:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:05.481 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 32485b0e-177b-4dfd-a55a-0249528f32e1
Nov 29 03:22:05 np0005539564 systemd-machined[190128]: New machine qemu-68-instance-00000092.
Nov 29 03:22:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:05.502 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ca1700d5-8f27-4430-a1c7-b53f18c45d35]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:22:05 np0005539564 systemd[1]: Started Virtual Machine qemu-68-instance-00000092.
Nov 29 03:22:05 np0005539564 systemd-udevd[280263]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:22:05 np0005539564 nova_compute[226295]: 2025-11-29 08:22:05.514 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:05 np0005539564 NetworkManager[48997]: <info>  [1764404525.5254] device (tap2c29e276-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:22:05 np0005539564 NetworkManager[48997]: <info>  [1764404525.5265] device (tap2c29e276-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:22:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:05.541 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[08cdd235-fdf9-4e8a-ad23-a55d0dcc803e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:22:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:05.545 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[30f2b613-35ff-48fb-8929-761fe7287ac3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:22:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:05.578 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[ac10e418-b92a-4473-972d-d2291f612be0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:22:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:05.602 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[fcb1fcd7-34d9-4568-9081-854a273e06e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32485b0e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:44:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736750, 'reachable_time': 22429, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280275, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:22:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:05.620 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3baf11d3-8609-43fd-81b8-7ac1e1fb0632]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap32485b0e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736766, 'tstamp': 736766}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280278, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap32485b0e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736770, 'tstamp': 736770}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280278, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:22:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:05.622 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32485b0e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:22:05 np0005539564 nova_compute[226295]: 2025-11-29 08:22:05.623 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:05 np0005539564 nova_compute[226295]: 2025-11-29 08:22:05.624 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:05.626 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32485b0e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:22:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:05.627 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:22:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:05.627 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap32485b0e-10, col_values=(('external_ids', {'iface-id': '6711ba96-49f0-431a-a4d5-64f9cee27708'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:22:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:05.627 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:22:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:22:05 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/381954643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:22:05 np0005539564 nova_compute[226295]: 2025-11-29 08:22:05.940 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.007 226310 DEBUG nova.compute.manager [req-ad2dca15-3977-4bab-b6c8-7015f9ce5cce req-d30a30d4-b009-412b-9b79-98d4d35d1142 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received event network-vif-plugged-2c29e276-97c4-4a7c-8428-caad5a74a94d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.008 226310 DEBUG oslo_concurrency.lockutils [req-ad2dca15-3977-4bab-b6c8-7015f9ce5cce req-d30a30d4-b009-412b-9b79-98d4d35d1142 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.008 226310 DEBUG oslo_concurrency.lockutils [req-ad2dca15-3977-4bab-b6c8-7015f9ce5cce req-d30a30d4-b009-412b-9b79-98d4d35d1142 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.009 226310 DEBUG oslo_concurrency.lockutils [req-ad2dca15-3977-4bab-b6c8-7015f9ce5cce req-d30a30d4-b009-412b-9b79-98d4d35d1142 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.009 226310 DEBUG nova.compute.manager [req-ad2dca15-3977-4bab-b6c8-7015f9ce5cce req-d30a30d4-b009-412b-9b79-98d4d35d1142 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] No waiting events found dispatching network-vif-plugged-2c29e276-97c4-4a7c-8428-caad5a74a94d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.010 226310 WARNING nova.compute.manager [req-ad2dca15-3977-4bab-b6c8-7015f9ce5cce req-d30a30d4-b009-412b-9b79-98d4d35d1142 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received unexpected event network-vif-plugged-2c29e276-97c4-4a7c-8428-caad5a74a94d for instance with vm_state active and task_state rescuing.
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.044 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.044 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.048 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.048 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.049 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.049 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.246 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.247 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4090MB free_disk=20.673568725585938GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.247 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.247 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.489 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 38bba84b-1fb0-460a-a6aa-707ef29970b2 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.490 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.491 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.491 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.619 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:22:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:06.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.721 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Removed pending event for 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.723 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404526.7210371, 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.723 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.734 226310 DEBUG nova.compute.manager [None req-11599b1d-5568-4dc8-a844-1a2fbb9d3982 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.763 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.767 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.842 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.842 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404526.7269576, 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.842 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] VM Started (Lifecycle Event)#033[00m
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.911 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:06 np0005539564 nova_compute[226295]: 2025-11-29 08:22:06.916 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:22:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:22:07 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2978950074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:22:07 np0005539564 nova_compute[226295]: 2025-11-29 08:22:07.092 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:07 np0005539564 nova_compute[226295]: 2025-11-29 08:22:07.098 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:22:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:07.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:07 np0005539564 nova_compute[226295]: 2025-11-29 08:22:07.580 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:22:07 np0005539564 nova_compute[226295]: 2025-11-29 08:22:07.805 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:08 np0005539564 nova_compute[226295]: 2025-11-29 08:22:08.113 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:22:08 np0005539564 nova_compute[226295]: 2025-11-29 08:22:08.113 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:08 np0005539564 nova_compute[226295]: 2025-11-29 08:22:08.114 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:08 np0005539564 nova_compute[226295]: 2025-11-29 08:22:08.115 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:22:08 np0005539564 nova_compute[226295]: 2025-11-29 08:22:08.165 226310 DEBUG nova.compute.manager [req-f3d618b9-76e8-4976-8177-c2fe6100e60f req-96d19466-cf9e-4bf8-af8b-35bf3fabf59a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received event network-vif-plugged-2c29e276-97c4-4a7c-8428-caad5a74a94d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:08 np0005539564 nova_compute[226295]: 2025-11-29 08:22:08.166 226310 DEBUG oslo_concurrency.lockutils [req-f3d618b9-76e8-4976-8177-c2fe6100e60f req-96d19466-cf9e-4bf8-af8b-35bf3fabf59a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:08 np0005539564 nova_compute[226295]: 2025-11-29 08:22:08.166 226310 DEBUG oslo_concurrency.lockutils [req-f3d618b9-76e8-4976-8177-c2fe6100e60f req-96d19466-cf9e-4bf8-af8b-35bf3fabf59a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:08 np0005539564 nova_compute[226295]: 2025-11-29 08:22:08.167 226310 DEBUG oslo_concurrency.lockutils [req-f3d618b9-76e8-4976-8177-c2fe6100e60f req-96d19466-cf9e-4bf8-af8b-35bf3fabf59a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:08 np0005539564 nova_compute[226295]: 2025-11-29 08:22:08.167 226310 DEBUG nova.compute.manager [req-f3d618b9-76e8-4976-8177-c2fe6100e60f req-96d19466-cf9e-4bf8-af8b-35bf3fabf59a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] No waiting events found dispatching network-vif-plugged-2c29e276-97c4-4a7c-8428-caad5a74a94d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:08 np0005539564 nova_compute[226295]: 2025-11-29 08:22:08.167 226310 WARNING nova.compute.manager [req-f3d618b9-76e8-4976-8177-c2fe6100e60f req-96d19466-cf9e-4bf8-af8b-35bf3fabf59a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received unexpected event network-vif-plugged-2c29e276-97c4-4a7c-8428-caad5a74a94d for instance with vm_state rescued and task_state None.#033[00m
Nov 29 03:22:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:08.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:09.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:09 np0005539564 nova_compute[226295]: 2025-11-29 08:22:09.146 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:10 np0005539564 nova_compute[226295]: 2025-11-29 08:22:10.597 226310 INFO nova.compute.manager [None req-ced985d0-dfc0-4186-9b3b-121045fe043a 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Unrescuing#033[00m
Nov 29 03:22:10 np0005539564 nova_compute[226295]: 2025-11-29 08:22:10.598 226310 DEBUG oslo_concurrency.lockutils [None req-ced985d0-dfc0-4186-9b3b-121045fe043a 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "refresh_cache-63dba7cc-b3af-4fb3-bbe8-58d5a087af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:22:10 np0005539564 nova_compute[226295]: 2025-11-29 08:22:10.598 226310 DEBUG oslo_concurrency.lockutils [None req-ced985d0-dfc0-4186-9b3b-121045fe043a 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquired lock "refresh_cache-63dba7cc-b3af-4fb3-bbe8-58d5a087af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:22:10 np0005539564 nova_compute[226295]: 2025-11-29 08:22:10.598 226310 DEBUG nova.network.neutron [None req-ced985d0-dfc0-4186-9b3b-121045fe043a 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:22:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:10.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:11.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:11 np0005539564 ovn_controller[130591]: 2025-11-29T08:22:11Z|00544|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Nov 29 03:22:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:12.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:12 np0005539564 nova_compute[226295]: 2025-11-29 08:22:12.774 226310 DEBUG nova.compute.manager [req-ec8d1ba4-1499-4e87-b5d0-639c17c03465 req-1f5ba6c8-7613-42d0-ae6b-d9bc29b6ae16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received event network-changed-2c29e276-97c4-4a7c-8428-caad5a74a94d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:12 np0005539564 nova_compute[226295]: 2025-11-29 08:22:12.775 226310 DEBUG nova.compute.manager [req-ec8d1ba4-1499-4e87-b5d0-639c17c03465 req-1f5ba6c8-7613-42d0-ae6b-d9bc29b6ae16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Refreshing instance network info cache due to event network-changed-2c29e276-97c4-4a7c-8428-caad5a74a94d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:22:12 np0005539564 nova_compute[226295]: 2025-11-29 08:22:12.775 226310 DEBUG oslo_concurrency.lockutils [req-ec8d1ba4-1499-4e87-b5d0-639c17c03465 req-1f5ba6c8-7613-42d0-ae6b-d9bc29b6ae16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-63dba7cc-b3af-4fb3-bbe8-58d5a087af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:22:12 np0005539564 nova_compute[226295]: 2025-11-29 08:22:12.807 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:13.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:13 np0005539564 nova_compute[226295]: 2025-11-29 08:22:13.994 226310 DEBUG nova.network.neutron [None req-ced985d0-dfc0-4186-9b3b-121045fe043a 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Updating instance_info_cache with network_info: [{"id": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "address": "fa:16:3e:f6:de:29", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c29e276-97", "ovs_interfaceid": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:14 np0005539564 nova_compute[226295]: 2025-11-29 08:22:14.012 226310 DEBUG oslo_concurrency.lockutils [None req-ced985d0-dfc0-4186-9b3b-121045fe043a 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Releasing lock "refresh_cache-63dba7cc-b3af-4fb3-bbe8-58d5a087af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:22:14 np0005539564 nova_compute[226295]: 2025-11-29 08:22:14.013 226310 DEBUG nova.objects.instance [None req-ced985d0-dfc0-4186-9b3b-121045fe043a 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'flavor' on Instance uuid 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:14 np0005539564 nova_compute[226295]: 2025-11-29 08:22:14.014 226310 DEBUG oslo_concurrency.lockutils [req-ec8d1ba4-1499-4e87-b5d0-639c17c03465 req-1f5ba6c8-7613-42d0-ae6b-d9bc29b6ae16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-63dba7cc-b3af-4fb3-bbe8-58d5a087af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:22:14 np0005539564 nova_compute[226295]: 2025-11-29 08:22:14.015 226310 DEBUG nova.network.neutron [req-ec8d1ba4-1499-4e87-b5d0-639c17c03465 req-1f5ba6c8-7613-42d0-ae6b-d9bc29b6ae16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Refreshing network info cache for port 2c29e276-97c4-4a7c-8428-caad5a74a94d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:22:14 np0005539564 kernel: tap2c29e276-97 (unregistering): left promiscuous mode
Nov 29 03:22:14 np0005539564 NetworkManager[48997]: <info>  [1764404534.1276] device (tap2c29e276-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:22:14 np0005539564 ovn_controller[130591]: 2025-11-29T08:22:14Z|00545|binding|INFO|Releasing lport 2c29e276-97c4-4a7c-8428-caad5a74a94d from this chassis (sb_readonly=0)
Nov 29 03:22:14 np0005539564 nova_compute[226295]: 2025-11-29 08:22:14.138 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:14 np0005539564 ovn_controller[130591]: 2025-11-29T08:22:14Z|00546|binding|INFO|Setting lport 2c29e276-97c4-4a7c-8428-caad5a74a94d down in Southbound
Nov 29 03:22:14 np0005539564 ovn_controller[130591]: 2025-11-29T08:22:14Z|00547|binding|INFO|Removing iface tap2c29e276-97 ovn-installed in OVS
Nov 29 03:22:14 np0005539564 nova_compute[226295]: 2025-11-29 08:22:14.141 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:14 np0005539564 nova_compute[226295]: 2025-11-29 08:22:14.147 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.152 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:de:29 10.100.0.12'], port_security=['fa:16:3e:f6:de:29 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '63dba7cc-b3af-4fb3-bbe8-58d5a087af19', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32485b0e-177b-4dfd-a55a-0249528f32e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '358970eca7ad4b05b70f43e5507ac052', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7e478a14-56df-4311-a000-02a6d80fadda', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.201', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83a2eb53-2a5d-447d-a36c-4b9c2b295f15, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=2c29e276-97c4-4a7c-8428-caad5a74a94d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.155 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 2c29e276-97c4-4a7c-8428-caad5a74a94d in datapath 32485b0e-177b-4dfd-a55a-0249528f32e1 unbound from our chassis#033[00m
Nov 29 03:22:14 np0005539564 nova_compute[226295]: 2025-11-29 08:22:14.157 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.158 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 32485b0e-177b-4dfd-a55a-0249528f32e1#033[00m
Nov 29 03:22:14 np0005539564 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000092.scope: Deactivated successfully.
Nov 29 03:22:14 np0005539564 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000092.scope: Consumed 8.931s CPU time.
Nov 29 03:22:14 np0005539564 systemd-machined[190128]: Machine qemu-68-instance-00000092 terminated.
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.178 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[da441703-69cd-4ac4-9ba2-63634a7f6d1c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.224 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[7522d14d-b3ce-4520-869c-6f3d94f883da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.228 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[51f3442e-abe2-44c6-8d53-bd40d6c97d9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:14 np0005539564 podman[280403]: 2025-11-29 08:22:14.262818357 +0000 UTC m=+0.082467240 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.264 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[bda73a23-bcb5-4489-8d0f-87cedbf9ca6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:14 np0005539564 podman[280402]: 2025-11-29 08:22:14.269137819 +0000 UTC m=+0.086579682 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.286 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e8e204d1-005c-47d1-920c-9e5c2c8f7a80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32485b0e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:44:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736750, 'reachable_time': 22429, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280470, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:14 np0005539564 nova_compute[226295]: 2025-11-29 08:22:14.300 226310 INFO nova.virt.libvirt.driver [-] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Instance destroyed successfully.#033[00m
Nov 29 03:22:14 np0005539564 nova_compute[226295]: 2025-11-29 08:22:14.301 226310 DEBUG nova.objects.instance [None req-ced985d0-dfc0-4186-9b3b-121045fe043a 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'numa_topology' on Instance uuid 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:14 np0005539564 podman[280401]: 2025-11-29 08:22:14.302017177 +0000 UTC m=+0.128456583 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.306 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[499afb0c-c5f7-4b73-ab0b-d6034cb08ee9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap32485b0e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736766, 'tstamp': 736766}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280481, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap32485b0e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736770, 'tstamp': 736770}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280481, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.308 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32485b0e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:14 np0005539564 nova_compute[226295]: 2025-11-29 08:22:14.310 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:14 np0005539564 nova_compute[226295]: 2025-11-29 08:22:14.315 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.316 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32485b0e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.316 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.317 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap32485b0e-10, col_values=(('external_ids', {'iface-id': '6711ba96-49f0-431a-a4d5-64f9cee27708'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.317 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:14 np0005539564 kernel: tap2c29e276-97: entered promiscuous mode
Nov 29 03:22:14 np0005539564 systemd-udevd[280430]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:22:14 np0005539564 NetworkManager[48997]: <info>  [1764404534.4017] manager: (tap2c29e276-97): new Tun device (/org/freedesktop/NetworkManager/Devices/255)
Nov 29 03:22:14 np0005539564 ovn_controller[130591]: 2025-11-29T08:22:14Z|00548|binding|INFO|Claiming lport 2c29e276-97c4-4a7c-8428-caad5a74a94d for this chassis.
Nov 29 03:22:14 np0005539564 ovn_controller[130591]: 2025-11-29T08:22:14Z|00549|binding|INFO|2c29e276-97c4-4a7c-8428-caad5a74a94d: Claiming fa:16:3e:f6:de:29 10.100.0.12
Nov 29 03:22:14 np0005539564 nova_compute[226295]: 2025-11-29 08:22:14.401 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:14 np0005539564 NetworkManager[48997]: <info>  [1764404534.4131] device (tap2c29e276-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:22:14 np0005539564 NetworkManager[48997]: <info>  [1764404534.4139] device (tap2c29e276-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:22:14 np0005539564 ovn_controller[130591]: 2025-11-29T08:22:14Z|00550|binding|INFO|Setting lport 2c29e276-97c4-4a7c-8428-caad5a74a94d ovn-installed in OVS
Nov 29 03:22:14 np0005539564 nova_compute[226295]: 2025-11-29 08:22:14.421 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:14 np0005539564 nova_compute[226295]: 2025-11-29 08:22:14.423 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:14 np0005539564 systemd-machined[190128]: New machine qemu-69-instance-00000092.
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.445 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:de:29 10.100.0.12'], port_security=['fa:16:3e:f6:de:29 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '63dba7cc-b3af-4fb3-bbe8-58d5a087af19', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32485b0e-177b-4dfd-a55a-0249528f32e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '358970eca7ad4b05b70f43e5507ac052', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7e478a14-56df-4311-a000-02a6d80fadda', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.201', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83a2eb53-2a5d-447d-a36c-4b9c2b295f15, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=2c29e276-97c4-4a7c-8428-caad5a74a94d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:22:14 np0005539564 ovn_controller[130591]: 2025-11-29T08:22:14Z|00551|binding|INFO|Setting lport 2c29e276-97c4-4a7c-8428-caad5a74a94d up in Southbound
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.446 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 2c29e276-97c4-4a7c-8428-caad5a74a94d in datapath 32485b0e-177b-4dfd-a55a-0249528f32e1 bound to our chassis#033[00m
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.447 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 32485b0e-177b-4dfd-a55a-0249528f32e1#033[00m
Nov 29 03:22:14 np0005539564 systemd[1]: Started Virtual Machine qemu-69-instance-00000092.
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.467 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[194e4049-dff9-4a16-bc1d-7cd831de96dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.495 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[606f3e40-9c84-4afa-a15c-a18cb65eca0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.498 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[907d46a6-0c74-4ff7-b4bc-fdff4a5e3f32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.522 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[ce2325c5-c8a1-4f51-9b8a-eafa4bc550d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.537 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[35065f7a-a511-47d6-96c4-8f27b10229ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32485b0e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:44:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 14, 'rx_bytes': 700, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 14, 'rx_bytes': 700, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736750, 'reachable_time': 22429, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280513, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.552 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[eaa5b702-3d2a-41cb-95a9-dde956d8b75c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap32485b0e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736766, 'tstamp': 736766}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280514, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap32485b0e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736770, 'tstamp': 736770}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280514, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.553 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32485b0e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.593 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32485b0e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:14 np0005539564 nova_compute[226295]: 2025-11-29 08:22:14.593 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.593 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.594 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap32485b0e-10, col_values=(('external_ids', {'iface-id': '6711ba96-49f0-431a-a4d5-64f9cee27708'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:14.595 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:14.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #100. Immutable memtables: 0.
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:22:14.673795) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 100
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404534673873, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2511, "num_deletes": 257, "total_data_size": 5574285, "memory_usage": 5661328, "flush_reason": "Manual Compaction"}
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #101: started
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404534715893, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 101, "file_size": 3640641, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50863, "largest_seqno": 53369, "table_properties": {"data_size": 3630466, "index_size": 6413, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22445, "raw_average_key_size": 21, "raw_value_size": 3609524, "raw_average_value_size": 3389, "num_data_blocks": 277, "num_entries": 1065, "num_filter_entries": 1065, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404349, "oldest_key_time": 1764404349, "file_creation_time": 1764404534, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 42268 microseconds, and 13306 cpu microseconds.
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:22:14.716072) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #101: 3640641 bytes OK
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:22:14.716288) [db/memtable_list.cc:519] [default] Level-0 commit table #101 started
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:22:14.718206) [db/memtable_list.cc:722] [default] Level-0 commit table #101: memtable #1 done
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:22:14.718222) EVENT_LOG_v1 {"time_micros": 1764404534718216, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:22:14.718239) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 5563039, prev total WAL file size 5563039, number of live WAL files 2.
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000097.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:22:14.720010) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [101(3555KB)], [99(9578KB)]
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404534720068, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [101], "files_L6": [99], "score": -1, "input_data_size": 13449121, "oldest_snapshot_seqno": -1}
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #102: 8355 keys, 11592762 bytes, temperature: kUnknown
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404534887143, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 102, "file_size": 11592762, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11537641, "index_size": 33121, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20933, "raw_key_size": 216916, "raw_average_key_size": 25, "raw_value_size": 11389396, "raw_average_value_size": 1363, "num_data_blocks": 1294, "num_entries": 8355, "num_filter_entries": 8355, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764404534, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 102, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:22:14.888307) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 11592762 bytes
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:22:14.889864) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.1 rd, 69.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 9.4 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 8887, records dropped: 532 output_compression: NoCompression
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:22:14.889894) EVENT_LOG_v1 {"time_micros": 1764404534889880, "job": 62, "event": "compaction_finished", "compaction_time_micros": 167974, "compaction_time_cpu_micros": 49880, "output_level": 6, "num_output_files": 1, "total_output_size": 11592762, "num_input_records": 8887, "num_output_records": 8355, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404534891257, "job": 62, "event": "table_file_deletion", "file_number": 101}
Nov 29 03:22:14 np0005539564 nova_compute[226295]: 2025-11-29 08:22:14.891 226310 DEBUG nova.compute.manager [req-ac95cf56-f710-42c7-b73b-a312978cbd23 req-fc4a5b74-738b-4987-b2ae-68b69daf9aad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received event network-changed-2c29e276-97c4-4a7c-8428-caad5a74a94d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:14 np0005539564 nova_compute[226295]: 2025-11-29 08:22:14.892 226310 DEBUG nova.compute.manager [req-ac95cf56-f710-42c7-b73b-a312978cbd23 req-fc4a5b74-738b-4987-b2ae-68b69daf9aad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Refreshing instance network info cache due to event network-changed-2c29e276-97c4-4a7c-8428-caad5a74a94d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:22:14 np0005539564 nova_compute[226295]: 2025-11-29 08:22:14.892 226310 DEBUG oslo_concurrency.lockutils [req-ac95cf56-f710-42c7-b73b-a312978cbd23 req-fc4a5b74-738b-4987-b2ae-68b69daf9aad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-63dba7cc-b3af-4fb3-bbe8-58d5a087af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000099.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404534894838, "job": 62, "event": "table_file_deletion", "file_number": 99}
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:22:14.719813) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:22:14.894968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:22:14.895009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:22:14.895013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:22:14.895016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:22:14 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:22:14.895019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:22:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:15.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:15 np0005539564 nova_compute[226295]: 2025-11-29 08:22:15.276 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Removed pending event for 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:22:15 np0005539564 nova_compute[226295]: 2025-11-29 08:22:15.277 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404535.275973, 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:22:15 np0005539564 nova_compute[226295]: 2025-11-29 08:22:15.277 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:22:15 np0005539564 nova_compute[226295]: 2025-11-29 08:22:15.427 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:15 np0005539564 nova_compute[226295]: 2025-11-29 08:22:15.430 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:22:15 np0005539564 nova_compute[226295]: 2025-11-29 08:22:15.492 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:22:15 np0005539564 nova_compute[226295]: 2025-11-29 08:22:15.493 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404535.2769704, 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:22:15 np0005539564 nova_compute[226295]: 2025-11-29 08:22:15.493 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] VM Started (Lifecycle Event)#033[00m
Nov 29 03:22:15 np0005539564 nova_compute[226295]: 2025-11-29 08:22:15.802 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:15 np0005539564 nova_compute[226295]: 2025-11-29 08:22:15.807 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:22:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:16.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:16 np0005539564 nova_compute[226295]: 2025-11-29 08:22:16.706 226310 DEBUG nova.network.neutron [req-ec8d1ba4-1499-4e87-b5d0-639c17c03465 req-1f5ba6c8-7613-42d0-ae6b-d9bc29b6ae16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Updated VIF entry in instance network info cache for port 2c29e276-97c4-4a7c-8428-caad5a74a94d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:22:16 np0005539564 nova_compute[226295]: 2025-11-29 08:22:16.708 226310 DEBUG nova.network.neutron [req-ec8d1ba4-1499-4e87-b5d0-639c17c03465 req-1f5ba6c8-7613-42d0-ae6b-d9bc29b6ae16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Updating instance_info_cache with network_info: [{"id": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "address": "fa:16:3e:f6:de:29", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c29e276-97", "ovs_interfaceid": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:16 np0005539564 nova_compute[226295]: 2025-11-29 08:22:16.750 226310 DEBUG nova.compute.manager [req-d1b2dea4-ebe3-41bb-9b8c-8ca6a6a3e804 req-8c4be470-bfd9-448e-956a-571e09e5361c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received event network-vif-unplugged-2c29e276-97c4-4a7c-8428-caad5a74a94d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:16 np0005539564 nova_compute[226295]: 2025-11-29 08:22:16.751 226310 DEBUG oslo_concurrency.lockutils [req-d1b2dea4-ebe3-41bb-9b8c-8ca6a6a3e804 req-8c4be470-bfd9-448e-956a-571e09e5361c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:16 np0005539564 nova_compute[226295]: 2025-11-29 08:22:16.752 226310 DEBUG oslo_concurrency.lockutils [req-d1b2dea4-ebe3-41bb-9b8c-8ca6a6a3e804 req-8c4be470-bfd9-448e-956a-571e09e5361c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:16 np0005539564 nova_compute[226295]: 2025-11-29 08:22:16.752 226310 DEBUG oslo_concurrency.lockutils [req-d1b2dea4-ebe3-41bb-9b8c-8ca6a6a3e804 req-8c4be470-bfd9-448e-956a-571e09e5361c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:16 np0005539564 nova_compute[226295]: 2025-11-29 08:22:16.753 226310 DEBUG nova.compute.manager [req-d1b2dea4-ebe3-41bb-9b8c-8ca6a6a3e804 req-8c4be470-bfd9-448e-956a-571e09e5361c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] No waiting events found dispatching network-vif-unplugged-2c29e276-97c4-4a7c-8428-caad5a74a94d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:16 np0005539564 nova_compute[226295]: 2025-11-29 08:22:16.754 226310 WARNING nova.compute.manager [req-d1b2dea4-ebe3-41bb-9b8c-8ca6a6a3e804 req-8c4be470-bfd9-448e-956a-571e09e5361c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received unexpected event network-vif-unplugged-2c29e276-97c4-4a7c-8428-caad5a74a94d for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:22:16 np0005539564 nova_compute[226295]: 2025-11-29 08:22:16.754 226310 DEBUG nova.compute.manager [req-d1b2dea4-ebe3-41bb-9b8c-8ca6a6a3e804 req-8c4be470-bfd9-448e-956a-571e09e5361c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received event network-vif-plugged-2c29e276-97c4-4a7c-8428-caad5a74a94d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:16 np0005539564 nova_compute[226295]: 2025-11-29 08:22:16.755 226310 DEBUG oslo_concurrency.lockutils [req-d1b2dea4-ebe3-41bb-9b8c-8ca6a6a3e804 req-8c4be470-bfd9-448e-956a-571e09e5361c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:16 np0005539564 nova_compute[226295]: 2025-11-29 08:22:16.756 226310 DEBUG oslo_concurrency.lockutils [req-d1b2dea4-ebe3-41bb-9b8c-8ca6a6a3e804 req-8c4be470-bfd9-448e-956a-571e09e5361c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:16 np0005539564 nova_compute[226295]: 2025-11-29 08:22:16.757 226310 DEBUG oslo_concurrency.lockutils [req-d1b2dea4-ebe3-41bb-9b8c-8ca6a6a3e804 req-8c4be470-bfd9-448e-956a-571e09e5361c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:16 np0005539564 nova_compute[226295]: 2025-11-29 08:22:16.757 226310 DEBUG nova.compute.manager [req-d1b2dea4-ebe3-41bb-9b8c-8ca6a6a3e804 req-8c4be470-bfd9-448e-956a-571e09e5361c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] No waiting events found dispatching network-vif-plugged-2c29e276-97c4-4a7c-8428-caad5a74a94d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:16 np0005539564 nova_compute[226295]: 2025-11-29 08:22:16.758 226310 WARNING nova.compute.manager [req-d1b2dea4-ebe3-41bb-9b8c-8ca6a6a3e804 req-8c4be470-bfd9-448e-956a-571e09e5361c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received unexpected event network-vif-plugged-2c29e276-97c4-4a7c-8428-caad5a74a94d for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:22:16 np0005539564 nova_compute[226295]: 2025-11-29 08:22:16.759 226310 DEBUG nova.compute.manager [req-d1b2dea4-ebe3-41bb-9b8c-8ca6a6a3e804 req-8c4be470-bfd9-448e-956a-571e09e5361c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received event network-vif-plugged-2c29e276-97c4-4a7c-8428-caad5a74a94d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:16 np0005539564 nova_compute[226295]: 2025-11-29 08:22:16.759 226310 DEBUG oslo_concurrency.lockutils [req-d1b2dea4-ebe3-41bb-9b8c-8ca6a6a3e804 req-8c4be470-bfd9-448e-956a-571e09e5361c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:16 np0005539564 nova_compute[226295]: 2025-11-29 08:22:16.760 226310 DEBUG oslo_concurrency.lockutils [req-d1b2dea4-ebe3-41bb-9b8c-8ca6a6a3e804 req-8c4be470-bfd9-448e-956a-571e09e5361c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:16 np0005539564 nova_compute[226295]: 2025-11-29 08:22:16.760 226310 DEBUG oslo_concurrency.lockutils [req-d1b2dea4-ebe3-41bb-9b8c-8ca6a6a3e804 req-8c4be470-bfd9-448e-956a-571e09e5361c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:16 np0005539564 nova_compute[226295]: 2025-11-29 08:22:16.761 226310 DEBUG nova.compute.manager [req-d1b2dea4-ebe3-41bb-9b8c-8ca6a6a3e804 req-8c4be470-bfd9-448e-956a-571e09e5361c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] No waiting events found dispatching network-vif-plugged-2c29e276-97c4-4a7c-8428-caad5a74a94d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:16 np0005539564 nova_compute[226295]: 2025-11-29 08:22:16.762 226310 WARNING nova.compute.manager [req-d1b2dea4-ebe3-41bb-9b8c-8ca6a6a3e804 req-8c4be470-bfd9-448e-956a-571e09e5361c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received unexpected event network-vif-plugged-2c29e276-97c4-4a7c-8428-caad5a74a94d for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:22:16 np0005539564 nova_compute[226295]: 2025-11-29 08:22:16.764 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:22:16 np0005539564 nova_compute[226295]: 2025-11-29 08:22:16.765 226310 DEBUG oslo_concurrency.lockutils [req-ec8d1ba4-1499-4e87-b5d0-639c17c03465 req-1f5ba6c8-7613-42d0-ae6b-d9bc29b6ae16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-63dba7cc-b3af-4fb3-bbe8-58d5a087af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:22:16 np0005539564 nova_compute[226295]: 2025-11-29 08:22:16.767 226310 DEBUG oslo_concurrency.lockutils [req-ac95cf56-f710-42c7-b73b-a312978cbd23 req-fc4a5b74-738b-4987-b2ae-68b69daf9aad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-63dba7cc-b3af-4fb3-bbe8-58d5a087af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:22:16 np0005539564 nova_compute[226295]: 2025-11-29 08:22:16.767 226310 DEBUG nova.network.neutron [req-ac95cf56-f710-42c7-b73b-a312978cbd23 req-fc4a5b74-738b-4987-b2ae-68b69daf9aad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Refreshing network info cache for port 2c29e276-97c4-4a7c-8428-caad5a74a94d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:22:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:17.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:17 np0005539564 nova_compute[226295]: 2025-11-29 08:22:17.161 226310 DEBUG nova.compute.manager [None req-ced985d0-dfc0-4186-9b3b-121045fe043a 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:17 np0005539564 nova_compute[226295]: 2025-11-29 08:22:17.840 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:18.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:18 np0005539564 nova_compute[226295]: 2025-11-29 08:22:18.892 226310 DEBUG nova.compute.manager [req-77ad1001-134d-43c3-a72a-cfc9ca7257b4 req-43a3808b-940f-42a8-b2a2-7980b1150852 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received event network-vif-plugged-2c29e276-97c4-4a7c-8428-caad5a74a94d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:18 np0005539564 nova_compute[226295]: 2025-11-29 08:22:18.893 226310 DEBUG oslo_concurrency.lockutils [req-77ad1001-134d-43c3-a72a-cfc9ca7257b4 req-43a3808b-940f-42a8-b2a2-7980b1150852 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:18 np0005539564 nova_compute[226295]: 2025-11-29 08:22:18.894 226310 DEBUG oslo_concurrency.lockutils [req-77ad1001-134d-43c3-a72a-cfc9ca7257b4 req-43a3808b-940f-42a8-b2a2-7980b1150852 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:18 np0005539564 nova_compute[226295]: 2025-11-29 08:22:18.894 226310 DEBUG oslo_concurrency.lockutils [req-77ad1001-134d-43c3-a72a-cfc9ca7257b4 req-43a3808b-940f-42a8-b2a2-7980b1150852 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:18 np0005539564 nova_compute[226295]: 2025-11-29 08:22:18.895 226310 DEBUG nova.compute.manager [req-77ad1001-134d-43c3-a72a-cfc9ca7257b4 req-43a3808b-940f-42a8-b2a2-7980b1150852 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] No waiting events found dispatching network-vif-plugged-2c29e276-97c4-4a7c-8428-caad5a74a94d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:18 np0005539564 nova_compute[226295]: 2025-11-29 08:22:18.895 226310 WARNING nova.compute.manager [req-77ad1001-134d-43c3-a72a-cfc9ca7257b4 req-43a3808b-940f-42a8-b2a2-7980b1150852 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received unexpected event network-vif-plugged-2c29e276-97c4-4a7c-8428-caad5a74a94d for instance with vm_state active and task_state None.#033[00m
Nov 29 03:22:19 np0005539564 nova_compute[226295]: 2025-11-29 08:22:19.116 226310 DEBUG nova.network.neutron [req-ac95cf56-f710-42c7-b73b-a312978cbd23 req-fc4a5b74-738b-4987-b2ae-68b69daf9aad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Updated VIF entry in instance network info cache for port 2c29e276-97c4-4a7c-8428-caad5a74a94d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:22:19 np0005539564 nova_compute[226295]: 2025-11-29 08:22:19.117 226310 DEBUG nova.network.neutron [req-ac95cf56-f710-42c7-b73b-a312978cbd23 req-fc4a5b74-738b-4987-b2ae-68b69daf9aad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Updating instance_info_cache with network_info: [{"id": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "address": "fa:16:3e:f6:de:29", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c29e276-97", "ovs_interfaceid": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:19.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:19 np0005539564 nova_compute[226295]: 2025-11-29 08:22:19.144 226310 DEBUG oslo_concurrency.lockutils [req-ac95cf56-f710-42c7-b73b-a312978cbd23 req-fc4a5b74-738b-4987-b2ae-68b69daf9aad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-63dba7cc-b3af-4fb3-bbe8-58d5a087af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:22:19 np0005539564 nova_compute[226295]: 2025-11-29 08:22:19.150 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:20.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:21.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:22.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:22 np0005539564 nova_compute[226295]: 2025-11-29 08:22:22.843 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:23.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:23 np0005539564 nova_compute[226295]: 2025-11-29 08:22:23.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:24 np0005539564 nova_compute[226295]: 2025-11-29 08:22:24.152 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:24.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:25.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:26.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:27.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:27 np0005539564 nova_compute[226295]: 2025-11-29 08:22:27.845 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:22:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2338995606' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:22:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:22:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2338995606' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:22:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:28.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:29.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:29 np0005539564 nova_compute[226295]: 2025-11-29 08:22:29.155 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:29 np0005539564 ovn_controller[130591]: 2025-11-29T08:22:29Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:de:29 10.100.0.12
Nov 29 03:22:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:30.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:31.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:32.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:32 np0005539564 nova_compute[226295]: 2025-11-29 08:22:32.891 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:33.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:33 np0005539564 ovn_controller[130591]: 2025-11-29T08:22:33Z|00552|binding|INFO|Releasing lport 6711ba96-49f0-431a-a4d5-64f9cee27708 from this chassis (sb_readonly=0)
Nov 29 03:22:33 np0005539564 nova_compute[226295]: 2025-11-29 08:22:33.660 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:34 np0005539564 nova_compute[226295]: 2025-11-29 08:22:34.188 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:34.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:35.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:36.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:37.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:37 np0005539564 nova_compute[226295]: 2025-11-29 08:22:37.931 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:38.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:39.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:39 np0005539564 nova_compute[226295]: 2025-11-29 08:22:39.190 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:40.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:41.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:42.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:42 np0005539564 nova_compute[226295]: 2025-11-29 08:22:42.935 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:43.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:44 np0005539564 nova_compute[226295]: 2025-11-29 08:22:44.193 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:44 np0005539564 podman[280599]: 2025-11-29 08:22:44.575816324 +0000 UTC m=+0.112362439 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:22:44 np0005539564 podman[280597]: 2025-11-29 08:22:44.594811757 +0000 UTC m=+0.139312087 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:22:44 np0005539564 podman[280598]: 2025-11-29 08:22:44.605785794 +0000 UTC m=+0.145980537 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 29 03:22:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:44.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:45 np0005539564 nova_compute[226295]: 2025-11-29 08:22:45.151 226310 DEBUG oslo_concurrency.lockutils [None req-8084fe02-3b60-400a-8959-af5f40f0cebb 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:45 np0005539564 nova_compute[226295]: 2025-11-29 08:22:45.152 226310 DEBUG oslo_concurrency.lockutils [None req-8084fe02-3b60-400a-8959-af5f40f0cebb 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:45.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:45 np0005539564 nova_compute[226295]: 2025-11-29 08:22:45.171 226310 INFO nova.compute.manager [None req-8084fe02-3b60-400a-8959-af5f40f0cebb 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Detaching volume 5a96f9f0-c824-49b0-8c65-4dfade22f1c1#033[00m
Nov 29 03:22:45 np0005539564 nova_compute[226295]: 2025-11-29 08:22:45.358 226310 INFO nova.virt.block_device [None req-8084fe02-3b60-400a-8959-af5f40f0cebb 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Attempting to driver detach volume 5a96f9f0-c824-49b0-8c65-4dfade22f1c1 from mountpoint /dev/vdb#033[00m
Nov 29 03:22:45 np0005539564 nova_compute[226295]: 2025-11-29 08:22:45.370 226310 DEBUG nova.virt.libvirt.driver [None req-8084fe02-3b60-400a-8959-af5f40f0cebb 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Attempting to detach device vdb from instance 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:22:45 np0005539564 nova_compute[226295]: 2025-11-29 08:22:45.371 226310 DEBUG nova.virt.libvirt.guest [None req-8084fe02-3b60-400a-8959-af5f40f0cebb 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:22:45 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:22:45 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-5a96f9f0-c824-49b0-8c65-4dfade22f1c1">
Nov 29 03:22:45 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:22:45 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:22:45 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:22:45 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:22:45 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:22:45 np0005539564 nova_compute[226295]:  <serial>5a96f9f0-c824-49b0-8c65-4dfade22f1c1</serial>
Nov 29 03:22:45 np0005539564 nova_compute[226295]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:22:45 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:22:45 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:22:45 np0005539564 nova_compute[226295]: 2025-11-29 08:22:45.379 226310 INFO nova.virt.libvirt.driver [None req-8084fe02-3b60-400a-8959-af5f40f0cebb 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Successfully detached device vdb from instance 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 from the persistent domain config.#033[00m
Nov 29 03:22:45 np0005539564 nova_compute[226295]: 2025-11-29 08:22:45.379 226310 DEBUG nova.virt.libvirt.driver [None req-8084fe02-3b60-400a-8959-af5f40f0cebb 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:22:45 np0005539564 nova_compute[226295]: 2025-11-29 08:22:45.380 226310 DEBUG nova.virt.libvirt.guest [None req-8084fe02-3b60-400a-8959-af5f40f0cebb 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:22:45 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:22:45 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-5a96f9f0-c824-49b0-8c65-4dfade22f1c1">
Nov 29 03:22:45 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:22:45 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:22:45 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:22:45 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:22:45 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:22:45 np0005539564 nova_compute[226295]:  <serial>5a96f9f0-c824-49b0-8c65-4dfade22f1c1</serial>
Nov 29 03:22:45 np0005539564 nova_compute[226295]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:22:45 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:22:45 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:22:45 np0005539564 nova_compute[226295]: 2025-11-29 08:22:45.433 226310 DEBUG nova.virt.libvirt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Received event <DeviceRemovedEvent: 1764404565.4329567, 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:22:45 np0005539564 nova_compute[226295]: 2025-11-29 08:22:45.435 226310 DEBUG nova.virt.libvirt.driver [None req-8084fe02-3b60-400a-8959-af5f40f0cebb 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:22:45 np0005539564 nova_compute[226295]: 2025-11-29 08:22:45.437 226310 INFO nova.virt.libvirt.driver [None req-8084fe02-3b60-400a-8959-af5f40f0cebb 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Successfully detached device vdb from instance 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 from the live domain config.#033[00m
Nov 29 03:22:45 np0005539564 nova_compute[226295]: 2025-11-29 08:22:45.649 226310 DEBUG nova.objects.instance [None req-8084fe02-3b60-400a-8959-af5f40f0cebb 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'flavor' on Instance uuid 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:45 np0005539564 nova_compute[226295]: 2025-11-29 08:22:45.704 226310 DEBUG oslo_concurrency.lockutils [None req-8084fe02-3b60-400a-8959-af5f40f0cebb 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:46.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:47.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:47 np0005539564 nova_compute[226295]: 2025-11-29 08:22:47.937 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:48.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e333 e333: 3 total, 3 up, 3 in
Nov 29 03:22:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:49.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:49 np0005539564 nova_compute[226295]: 2025-11-29 08:22:49.196 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:50 np0005539564 nova_compute[226295]: 2025-11-29 08:22:50.414 226310 DEBUG oslo_concurrency.lockutils [None req-103ffef1-dd3c-456f-a994-a7b8cfc70dcf 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:50 np0005539564 nova_compute[226295]: 2025-11-29 08:22:50.414 226310 DEBUG oslo_concurrency.lockutils [None req-103ffef1-dd3c-456f-a994-a7b8cfc70dcf 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:50 np0005539564 nova_compute[226295]: 2025-11-29 08:22:50.415 226310 DEBUG oslo_concurrency.lockutils [None req-103ffef1-dd3c-456f-a994-a7b8cfc70dcf 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:50 np0005539564 nova_compute[226295]: 2025-11-29 08:22:50.415 226310 DEBUG oslo_concurrency.lockutils [None req-103ffef1-dd3c-456f-a994-a7b8cfc70dcf 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:50 np0005539564 nova_compute[226295]: 2025-11-29 08:22:50.416 226310 DEBUG oslo_concurrency.lockutils [None req-103ffef1-dd3c-456f-a994-a7b8cfc70dcf 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:50 np0005539564 nova_compute[226295]: 2025-11-29 08:22:50.417 226310 INFO nova.compute.manager [None req-103ffef1-dd3c-456f-a994-a7b8cfc70dcf 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Terminating instance#033[00m
Nov 29 03:22:50 np0005539564 nova_compute[226295]: 2025-11-29 08:22:50.418 226310 DEBUG nova.compute.manager [None req-103ffef1-dd3c-456f-a994-a7b8cfc70dcf 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:22:50 np0005539564 kernel: tap2c29e276-97 (unregistering): left promiscuous mode
Nov 29 03:22:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:50.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:50 np0005539564 NetworkManager[48997]: <info>  [1764404570.7704] device (tap2c29e276-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:22:50 np0005539564 ovn_controller[130591]: 2025-11-29T08:22:50Z|00553|binding|INFO|Releasing lport 2c29e276-97c4-4a7c-8428-caad5a74a94d from this chassis (sb_readonly=0)
Nov 29 03:22:50 np0005539564 ovn_controller[130591]: 2025-11-29T08:22:50Z|00554|binding|INFO|Setting lport 2c29e276-97c4-4a7c-8428-caad5a74a94d down in Southbound
Nov 29 03:22:50 np0005539564 nova_compute[226295]: 2025-11-29 08:22:50.783 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:50 np0005539564 ovn_controller[130591]: 2025-11-29T08:22:50Z|00555|binding|INFO|Removing iface tap2c29e276-97 ovn-installed in OVS
Nov 29 03:22:50 np0005539564 nova_compute[226295]: 2025-11-29 08:22:50.786 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:50.791 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:de:29 10.100.0.12'], port_security=['fa:16:3e:f6:de:29 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '63dba7cc-b3af-4fb3-bbe8-58d5a087af19', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32485b0e-177b-4dfd-a55a-0249528f32e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '358970eca7ad4b05b70f43e5507ac052', 'neutron:revision_number': '8', 'neutron:security_group_ids': '7e478a14-56df-4311-a000-02a6d80fadda', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.201', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83a2eb53-2a5d-447d-a36c-4b9c2b295f15, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=2c29e276-97c4-4a7c-8428-caad5a74a94d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:22:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:50.792 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 2c29e276-97c4-4a7c-8428-caad5a74a94d in datapath 32485b0e-177b-4dfd-a55a-0249528f32e1 unbound from our chassis#033[00m
Nov 29 03:22:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:50.794 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 32485b0e-177b-4dfd-a55a-0249528f32e1#033[00m
Nov 29 03:22:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:50.813 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[cc8f0ea3-da90-4214-aeae-561614b85afe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:50 np0005539564 nova_compute[226295]: 2025-11-29 08:22:50.817 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:50.855 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[e6fc9261-51b7-4419-9cf0-201f0ef89005]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:50.857 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[da849c03-29b2-494c-b10e-ff9492062cdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:50 np0005539564 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000092.scope: Deactivated successfully.
Nov 29 03:22:50 np0005539564 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000092.scope: Consumed 16.002s CPU time.
Nov 29 03:22:50 np0005539564 systemd-machined[190128]: Machine qemu-69-instance-00000092 terminated.
Nov 29 03:22:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:50.894 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[17db8c73-f59c-4c75-9da3-73def1040795]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:50.917 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b235dc53-d272-4752-996a-2db4b196283a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32485b0e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:44:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 16, 'rx_bytes': 784, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 16, 'rx_bytes': 784, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736750, 'reachable_time': 22429, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280673, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:50.936 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[82b5342d-bfd1-4145-b77f-15ecfe5bfd3b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap32485b0e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736766, 'tstamp': 736766}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280674, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap32485b0e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736770, 'tstamp': 736770}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280674, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:50.938 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32485b0e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:50 np0005539564 nova_compute[226295]: 2025-11-29 08:22:50.939 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:50 np0005539564 nova_compute[226295]: 2025-11-29 08:22:50.944 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:50.945 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32485b0e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:50.945 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:50.945 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap32485b0e-10, col_values=(('external_ids', {'iface-id': '6711ba96-49f0-431a-a4d5-64f9cee27708'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:50.945 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:51 np0005539564 nova_compute[226295]: 2025-11-29 08:22:51.057 226310 INFO nova.virt.libvirt.driver [-] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Instance destroyed successfully.#033[00m
Nov 29 03:22:51 np0005539564 nova_compute[226295]: 2025-11-29 08:22:51.057 226310 DEBUG nova.objects.instance [None req-103ffef1-dd3c-456f-a994-a7b8cfc70dcf 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'resources' on Instance uuid 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:51 np0005539564 nova_compute[226295]: 2025-11-29 08:22:51.073 226310 DEBUG nova.compute.manager [req-29a5d33a-2089-4638-8970-5fa7d3c775f3 req-ca3ab9d0-8a89-4f8d-b16e-bf3f45dfb664 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received event network-vif-unplugged-2c29e276-97c4-4a7c-8428-caad5a74a94d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:51 np0005539564 nova_compute[226295]: 2025-11-29 08:22:51.073 226310 DEBUG oslo_concurrency.lockutils [req-29a5d33a-2089-4638-8970-5fa7d3c775f3 req-ca3ab9d0-8a89-4f8d-b16e-bf3f45dfb664 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:51 np0005539564 nova_compute[226295]: 2025-11-29 08:22:51.074 226310 DEBUG oslo_concurrency.lockutils [req-29a5d33a-2089-4638-8970-5fa7d3c775f3 req-ca3ab9d0-8a89-4f8d-b16e-bf3f45dfb664 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:51 np0005539564 nova_compute[226295]: 2025-11-29 08:22:51.074 226310 DEBUG oslo_concurrency.lockutils [req-29a5d33a-2089-4638-8970-5fa7d3c775f3 req-ca3ab9d0-8a89-4f8d-b16e-bf3f45dfb664 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:51 np0005539564 nova_compute[226295]: 2025-11-29 08:22:51.074 226310 DEBUG nova.compute.manager [req-29a5d33a-2089-4638-8970-5fa7d3c775f3 req-ca3ab9d0-8a89-4f8d-b16e-bf3f45dfb664 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] No waiting events found dispatching network-vif-unplugged-2c29e276-97c4-4a7c-8428-caad5a74a94d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:51 np0005539564 nova_compute[226295]: 2025-11-29 08:22:51.074 226310 DEBUG nova.compute.manager [req-29a5d33a-2089-4638-8970-5fa7d3c775f3 req-ca3ab9d0-8a89-4f8d-b16e-bf3f45dfb664 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received event network-vif-unplugged-2c29e276-97c4-4a7c-8428-caad5a74a94d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:22:51 np0005539564 nova_compute[226295]: 2025-11-29 08:22:51.083 226310 DEBUG nova.virt.libvirt.vif [None req-103ffef1-dd3c-456f-a994-a7b8cfc70dcf 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:20:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-926273439',display_name='tempest-ServerStableDeviceRescueTest-server-926273439',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-926273439',id=146,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM90uGuHFEZUG4e9c1+zQZPo7nyMBkO25w7+OeL6gyYWhhodtL4WK14dtSTsr1qPRwaxAcx3xf1h4uzlv3mmeGtCTv+RpFMHq6ymHA+bpygtGf2oytOmyzvB5m3+xPiJeg==',key_name='tempest-keypair-1800430740',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:22:06Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='358970eca7ad4b05b70f43e5507ac052',ramdisk_id='',reservation_id='r-3qu0pws6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-1105304301',owner_user_name='tempest-ServerStableDeviceRescueTest-1105304301-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:22:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3b52040d601a4a56abcaf3f046f1e349',uuid=63dba7cc-b3af-4fb3-bbe8-58d5a087af19,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "address": "fa:16:3e:f6:de:29", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c29e276-97", "ovs_interfaceid": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:22:51 np0005539564 nova_compute[226295]: 2025-11-29 08:22:51.083 226310 DEBUG nova.network.os_vif_util [None req-103ffef1-dd3c-456f-a994-a7b8cfc70dcf 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converting VIF {"id": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "address": "fa:16:3e:f6:de:29", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c29e276-97", "ovs_interfaceid": "2c29e276-97c4-4a7c-8428-caad5a74a94d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:22:51 np0005539564 nova_compute[226295]: 2025-11-29 08:22:51.085 226310 DEBUG nova.network.os_vif_util [None req-103ffef1-dd3c-456f-a994-a7b8cfc70dcf 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:de:29,bridge_name='br-int',has_traffic_filtering=True,id=2c29e276-97c4-4a7c-8428-caad5a74a94d,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c29e276-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:22:51 np0005539564 nova_compute[226295]: 2025-11-29 08:22:51.085 226310 DEBUG os_vif [None req-103ffef1-dd3c-456f-a994-a7b8cfc70dcf 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:de:29,bridge_name='br-int',has_traffic_filtering=True,id=2c29e276-97c4-4a7c-8428-caad5a74a94d,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c29e276-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:22:51 np0005539564 nova_compute[226295]: 2025-11-29 08:22:51.087 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:51 np0005539564 nova_compute[226295]: 2025-11-29 08:22:51.088 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c29e276-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:51 np0005539564 nova_compute[226295]: 2025-11-29 08:22:51.089 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:51 np0005539564 nova_compute[226295]: 2025-11-29 08:22:51.090 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:22:51 np0005539564 nova_compute[226295]: 2025-11-29 08:22:51.092 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:51 np0005539564 nova_compute[226295]: 2025-11-29 08:22:51.095 226310 INFO os_vif [None req-103ffef1-dd3c-456f-a994-a7b8cfc70dcf 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:de:29,bridge_name='br-int',has_traffic_filtering=True,id=2c29e276-97c4-4a7c-8428-caad5a74a94d,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c29e276-97')#033[00m
Nov 29 03:22:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:51.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:51 np0005539564 nova_compute[226295]: 2025-11-29 08:22:51.352 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:51 np0005539564 nova_compute[226295]: 2025-11-29 08:22:51.522 226310 INFO nova.virt.libvirt.driver [None req-103ffef1-dd3c-456f-a994-a7b8cfc70dcf 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Deleting instance files /var/lib/nova/instances/63dba7cc-b3af-4fb3-bbe8-58d5a087af19_del#033[00m
Nov 29 03:22:51 np0005539564 nova_compute[226295]: 2025-11-29 08:22:51.523 226310 INFO nova.virt.libvirt.driver [None req-103ffef1-dd3c-456f-a994-a7b8cfc70dcf 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Deletion of /var/lib/nova/instances/63dba7cc-b3af-4fb3-bbe8-58d5a087af19_del complete#033[00m
Nov 29 03:22:51 np0005539564 nova_compute[226295]: 2025-11-29 08:22:51.589 226310 INFO nova.compute.manager [None req-103ffef1-dd3c-456f-a994-a7b8cfc70dcf 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Took 1.17 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:22:51 np0005539564 nova_compute[226295]: 2025-11-29 08:22:51.589 226310 DEBUG oslo.service.loopingcall [None req-103ffef1-dd3c-456f-a994-a7b8cfc70dcf 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:22:51 np0005539564 nova_compute[226295]: 2025-11-29 08:22:51.590 226310 DEBUG nova.compute.manager [-] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:22:51 np0005539564 nova_compute[226295]: 2025-11-29 08:22:51.590 226310 DEBUG nova.network.neutron [-] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:22:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Nov 29 03:22:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 29 03:22:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Nov 29 03:22:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 29 03:22:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Nov 29 03:22:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 29 03:22:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 29 03:22:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Nov 29 03:22:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:52.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:52 np0005539564 nova_compute[226295]: 2025-11-29 08:22:52.941 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:53 np0005539564 nova_compute[226295]: 2025-11-29 08:22:53.122 226310 DEBUG nova.network.neutron [-] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:53 np0005539564 nova_compute[226295]: 2025-11-29 08:22:53.175 226310 INFO nova.compute.manager [-] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Took 1.58 seconds to deallocate network for instance.#033[00m
Nov 29 03:22:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:53.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:53 np0005539564 nova_compute[226295]: 2025-11-29 08:22:53.204 226310 DEBUG nova.compute.manager [req-96827baf-9e1d-4da5-bc4a-be50dcdfb73e req-72299181-ab13-47fe-84d9-e14019019b7e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received event network-vif-plugged-2c29e276-97c4-4a7c-8428-caad5a74a94d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:53 np0005539564 nova_compute[226295]: 2025-11-29 08:22:53.204 226310 DEBUG oslo_concurrency.lockutils [req-96827baf-9e1d-4da5-bc4a-be50dcdfb73e req-72299181-ab13-47fe-84d9-e14019019b7e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:53 np0005539564 nova_compute[226295]: 2025-11-29 08:22:53.205 226310 DEBUG oslo_concurrency.lockutils [req-96827baf-9e1d-4da5-bc4a-be50dcdfb73e req-72299181-ab13-47fe-84d9-e14019019b7e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:53 np0005539564 nova_compute[226295]: 2025-11-29 08:22:53.205 226310 DEBUG oslo_concurrency.lockutils [req-96827baf-9e1d-4da5-bc4a-be50dcdfb73e req-72299181-ab13-47fe-84d9-e14019019b7e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:53 np0005539564 nova_compute[226295]: 2025-11-29 08:22:53.205 226310 DEBUG nova.compute.manager [req-96827baf-9e1d-4da5-bc4a-be50dcdfb73e req-72299181-ab13-47fe-84d9-e14019019b7e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] No waiting events found dispatching network-vif-plugged-2c29e276-97c4-4a7c-8428-caad5a74a94d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:53 np0005539564 nova_compute[226295]: 2025-11-29 08:22:53.205 226310 WARNING nova.compute.manager [req-96827baf-9e1d-4da5-bc4a-be50dcdfb73e req-72299181-ab13-47fe-84d9-e14019019b7e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received unexpected event network-vif-plugged-2c29e276-97c4-4a7c-8428-caad5a74a94d for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:22:53 np0005539564 nova_compute[226295]: 2025-11-29 08:22:53.247 226310 DEBUG oslo_concurrency.lockutils [None req-103ffef1-dd3c-456f-a994-a7b8cfc70dcf 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:53 np0005539564 nova_compute[226295]: 2025-11-29 08:22:53.248 226310 DEBUG oslo_concurrency.lockutils [None req-103ffef1-dd3c-456f-a994-a7b8cfc70dcf 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:53 np0005539564 nova_compute[226295]: 2025-11-29 08:22:53.318 226310 DEBUG oslo_concurrency.processutils [None req-103ffef1-dd3c-456f-a994-a7b8cfc70dcf 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:22:53 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2494565161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:22:53 np0005539564 nova_compute[226295]: 2025-11-29 08:22:53.781 226310 DEBUG oslo_concurrency.processutils [None req-103ffef1-dd3c-456f-a994-a7b8cfc70dcf 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:53 np0005539564 nova_compute[226295]: 2025-11-29 08:22:53.787 226310 DEBUG nova.compute.provider_tree [None req-103ffef1-dd3c-456f-a994-a7b8cfc70dcf 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:22:53 np0005539564 nova_compute[226295]: 2025-11-29 08:22:53.811 226310 DEBUG nova.scheduler.client.report [None req-103ffef1-dd3c-456f-a994-a7b8cfc70dcf 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:22:53 np0005539564 nova_compute[226295]: 2025-11-29 08:22:53.841 226310 DEBUG oslo_concurrency.lockutils [None req-103ffef1-dd3c-456f-a994-a7b8cfc70dcf 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:53 np0005539564 nova_compute[226295]: 2025-11-29 08:22:53.891 226310 INFO nova.scheduler.client.report [None req-103ffef1-dd3c-456f-a994-a7b8cfc70dcf 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Deleted allocations for instance 63dba7cc-b3af-4fb3-bbe8-58d5a087af19#033[00m
Nov 29 03:22:53 np0005539564 nova_compute[226295]: 2025-11-29 08:22:53.991 226310 DEBUG oslo_concurrency.lockutils [None req-103ffef1-dd3c-456f-a994-a7b8cfc70dcf 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "63dba7cc-b3af-4fb3-bbe8-58d5a087af19" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:54 np0005539564 nova_compute[226295]: 2025-11-29 08:22:54.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:54.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:55 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:22:55 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:22:55 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:22:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:55.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:55 np0005539564 nova_compute[226295]: 2025-11-29 08:22:55.300 226310 DEBUG nova.compute.manager [req-ae1deaba-079c-49c6-9f81-dcd03f882d99 req-92f77773-38bf-4d91-bca9-3244efe7642c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Received event network-vif-deleted-2c29e276-97c4-4a7c-8428-caad5a74a94d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:55 np0005539564 nova_compute[226295]: 2025-11-29 08:22:55.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:56 np0005539564 nova_compute[226295]: 2025-11-29 08:22:56.090 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:56.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:56 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:56.935 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:22:56 np0005539564 nova_compute[226295]: 2025-11-29 08:22:56.937 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:56 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:56.938 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:22:57 np0005539564 ovn_controller[130591]: 2025-11-29T08:22:57Z|00556|binding|INFO|Releasing lport 6711ba96-49f0-431a-a4d5-64f9cee27708 from this chassis (sb_readonly=0)
Nov 29 03:22:57 np0005539564 nova_compute[226295]: 2025-11-29 08:22:57.103 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:57.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:57 np0005539564 ovn_controller[130591]: 2025-11-29T08:22:57Z|00557|binding|INFO|Releasing lport 6711ba96-49f0-431a-a4d5-64f9cee27708 from this chassis (sb_readonly=0)
Nov 29 03:22:57 np0005539564 nova_compute[226295]: 2025-11-29 08:22:57.271 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:57 np0005539564 nova_compute[226295]: 2025-11-29 08:22:57.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:57 np0005539564 nova_compute[226295]: 2025-11-29 08:22:57.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:57 np0005539564 nova_compute[226295]: 2025-11-29 08:22:57.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:22:57 np0005539564 nova_compute[226295]: 2025-11-29 08:22:57.942 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e334 e334: 3 total, 3 up, 3 in
Nov 29 03:22:58 np0005539564 nova_compute[226295]: 2025-11-29 08:22:58.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:58 np0005539564 nova_compute[226295]: 2025-11-29 08:22:58.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:22:58 np0005539564 nova_compute[226295]: 2025-11-29 08:22:58.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:22:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:58.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:58 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:22:58.940 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:58 np0005539564 nova_compute[226295]: 2025-11-29 08:22:58.969 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-38bba84b-1fb0-460a-a6aa-707ef29970b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:22:58 np0005539564 nova_compute[226295]: 2025-11-29 08:22:58.970 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-38bba84b-1fb0-460a-a6aa-707ef29970b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:22:58 np0005539564 nova_compute[226295]: 2025-11-29 08:22:58.970 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:22:58 np0005539564 nova_compute[226295]: 2025-11-29 08:22:58.971 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 38bba84b-1fb0-460a-a6aa-707ef29970b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e335 e335: 3 total, 3 up, 3 in
Nov 29 03:22:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:22:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:59.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:00.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:01 np0005539564 nova_compute[226295]: 2025-11-29 08:23:01.093 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:01.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:23:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:23:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:23:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:02.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:23:02 np0005539564 nova_compute[226295]: 2025-11-29 08:23:02.919 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Updating instance_info_cache with network_info: [{"id": "0ac7b30d-dad2-4718-b060-add6421b1065", "address": "fa:16:3e:d4:9e:3a", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ac7b30d-da", "ovs_interfaceid": "0ac7b30d-dad2-4718-b060-add6421b1065", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:23:02 np0005539564 nova_compute[226295]: 2025-11-29 08:23:02.946 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:02 np0005539564 nova_compute[226295]: 2025-11-29 08:23:02.949 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-38bba84b-1fb0-460a-a6aa-707ef29970b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:23:02 np0005539564 nova_compute[226295]: 2025-11-29 08:23:02.949 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:23:02 np0005539564 nova_compute[226295]: 2025-11-29 08:23:02.950 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:02 np0005539564 nova_compute[226295]: 2025-11-29 08:23:02.951 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:03.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:03.736 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:03.737 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:03.737 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e336 e336: 3 total, 3 up, 3 in
Nov 29 03:23:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:04.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:05.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.506 226310 DEBUG oslo_concurrency.lockutils [None req-c14672df-02ee-45c3-bc67-4711aaa731f9 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "38bba84b-1fb0-460a-a6aa-707ef29970b2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.507 226310 DEBUG oslo_concurrency.lockutils [None req-c14672df-02ee-45c3-bc67-4711aaa731f9 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.507 226310 DEBUG oslo_concurrency.lockutils [None req-c14672df-02ee-45c3-bc67-4711aaa731f9 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.508 226310 DEBUG oslo_concurrency.lockutils [None req-c14672df-02ee-45c3-bc67-4711aaa731f9 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.508 226310 DEBUG oslo_concurrency.lockutils [None req-c14672df-02ee-45c3-bc67-4711aaa731f9 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.509 226310 INFO nova.compute.manager [None req-c14672df-02ee-45c3-bc67-4711aaa731f9 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Terminating instance#033[00m
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.511 226310 DEBUG nova.compute.manager [None req-c14672df-02ee-45c3-bc67-4711aaa731f9 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:23:05 np0005539564 kernel: tap0ac7b30d-da (unregistering): left promiscuous mode
Nov 29 03:23:05 np0005539564 NetworkManager[48997]: <info>  [1764404585.5862] device (tap0ac7b30d-da): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.595 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:05 np0005539564 ovn_controller[130591]: 2025-11-29T08:23:05Z|00558|binding|INFO|Releasing lport 0ac7b30d-dad2-4718-b060-add6421b1065 from this chassis (sb_readonly=0)
Nov 29 03:23:05 np0005539564 ovn_controller[130591]: 2025-11-29T08:23:05Z|00559|binding|INFO|Setting lport 0ac7b30d-dad2-4718-b060-add6421b1065 down in Southbound
Nov 29 03:23:05 np0005539564 ovn_controller[130591]: 2025-11-29T08:23:05Z|00560|binding|INFO|Removing iface tap0ac7b30d-da ovn-installed in OVS
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.599 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:05.606 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:9e:3a 10.100.0.10'], port_security=['fa:16:3e:d4:9e:3a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '38bba84b-1fb0-460a-a6aa-707ef29970b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32485b0e-177b-4dfd-a55a-0249528f32e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '358970eca7ad4b05b70f43e5507ac052', 'neutron:revision_number': '8', 'neutron:security_group_ids': '33616c4d-f137-4188-9923-071fd3df21bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83a2eb53-2a5d-447d-a36c-4b9c2b295f15, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=0ac7b30d-dad2-4718-b060-add6421b1065) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:23:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:05.608 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 0ac7b30d-dad2-4718-b060-add6421b1065 in datapath 32485b0e-177b-4dfd-a55a-0249528f32e1 unbound from our chassis#033[00m
Nov 29 03:23:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:05.610 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 32485b0e-177b-4dfd-a55a-0249528f32e1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:23:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:05.611 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4942c2ac-008e-4f15-958d-cb6f1faa9c11]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:05.612 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 namespace which is not needed anymore#033[00m
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.630 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:05 np0005539564 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000087.scope: Deactivated successfully.
Nov 29 03:23:05 np0005539564 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000087.scope: Consumed 24.212s CPU time.
Nov 29 03:23:05 np0005539564 systemd-machined[190128]: Machine qemu-63-instance-00000087 terminated.
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.755 226310 INFO nova.virt.libvirt.driver [-] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Instance destroyed successfully.#033[00m
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.755 226310 DEBUG nova.objects.instance [None req-c14672df-02ee-45c3-bc67-4711aaa731f9 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'resources' on Instance uuid 38bba84b-1fb0-460a-a6aa-707ef29970b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.783 226310 DEBUG nova.virt.libvirt.vif [None req-c14672df-02ee-45c3-bc67-4711aaa731f9 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:19:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1206692563',display_name='tempest-ServerStableDeviceRescueTest-server-1206692563',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1206692563',id=135,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:19:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='358970eca7ad4b05b70f43e5507ac052',ramdisk_id='',reservation_id='r-0h9yile4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-1105304301',owner_user_name='tempest-ServerStableDeviceRescueTest-1105304301-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:19:54Z,user_data=None,user_id='3b52040d601a4a56abcaf3f046f1e349',uuid=38bba84b-1fb0-460a-a6aa-707ef29970b2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ac7b30d-dad2-4718-b060-add6421b1065", "address": "fa:16:3e:d4:9e:3a", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ac7b30d-da", "ovs_interfaceid": "0ac7b30d-dad2-4718-b060-add6421b1065", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.783 226310 DEBUG nova.network.os_vif_util [None req-c14672df-02ee-45c3-bc67-4711aaa731f9 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converting VIF {"id": "0ac7b30d-dad2-4718-b060-add6421b1065", "address": "fa:16:3e:d4:9e:3a", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ac7b30d-da", "ovs_interfaceid": "0ac7b30d-dad2-4718-b060-add6421b1065", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.784 226310 DEBUG nova.network.os_vif_util [None req-c14672df-02ee-45c3-bc67-4711aaa731f9 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d4:9e:3a,bridge_name='br-int',has_traffic_filtering=True,id=0ac7b30d-dad2-4718-b060-add6421b1065,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ac7b30d-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.785 226310 DEBUG os_vif [None req-c14672df-02ee-45c3-bc67-4711aaa731f9 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d4:9e:3a,bridge_name='br-int',has_traffic_filtering=True,id=0ac7b30d-dad2-4718-b060-add6421b1065,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ac7b30d-da') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:23:05 np0005539564 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[277237]: [NOTICE]   (277257) : haproxy version is 2.8.14-c23fe91
Nov 29 03:23:05 np0005539564 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[277237]: [NOTICE]   (277257) : path to executable is /usr/sbin/haproxy
Nov 29 03:23:05 np0005539564 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[277237]: [WARNING]  (277257) : Exiting Master process...
Nov 29 03:23:05 np0005539564 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[277237]: [WARNING]  (277257) : Exiting Master process...
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.787 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.788 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ac7b30d-da, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:05 np0005539564 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[277237]: [ALERT]    (277257) : Current worker (277265) exited with code 143 (Terminated)
Nov 29 03:23:05 np0005539564 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[277237]: [WARNING]  (277257) : All workers exited. Exiting... (0)
Nov 29 03:23:05 np0005539564 systemd[1]: libpod-698099a571e9c7b1f9f37a546423a0c38a40c04fbfafea5af7ba156d8322bb01.scope: Deactivated successfully.
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.793 226310 DEBUG nova.compute.manager [req-0fb2a2ea-d1fa-4117-99b0-0ae5b508cc68 req-8947ad25-8f8a-46b1-90f2-1e8d6c97089d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Received event network-vif-unplugged-0ac7b30d-dad2-4718-b060-add6421b1065 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.794 226310 DEBUG oslo_concurrency.lockutils [req-0fb2a2ea-d1fa-4117-99b0-0ae5b508cc68 req-8947ad25-8f8a-46b1-90f2-1e8d6c97089d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.794 226310 DEBUG oslo_concurrency.lockutils [req-0fb2a2ea-d1fa-4117-99b0-0ae5b508cc68 req-8947ad25-8f8a-46b1-90f2-1e8d6c97089d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.794 226310 DEBUG oslo_concurrency.lockutils [req-0fb2a2ea-d1fa-4117-99b0-0ae5b508cc68 req-8947ad25-8f8a-46b1-90f2-1e8d6c97089d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.795 226310 DEBUG nova.compute.manager [req-0fb2a2ea-d1fa-4117-99b0-0ae5b508cc68 req-8947ad25-8f8a-46b1-90f2-1e8d6c97089d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] No waiting events found dispatching network-vif-unplugged-0ac7b30d-dad2-4718-b060-add6421b1065 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.795 226310 DEBUG nova.compute.manager [req-0fb2a2ea-d1fa-4117-99b0-0ae5b508cc68 req-8947ad25-8f8a-46b1-90f2-1e8d6c97089d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Received event network-vif-unplugged-0ac7b30d-dad2-4718-b060-add6421b1065 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.795 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.796 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:23:05 np0005539564 podman[280935]: 2025-11-29 08:23:05.798077566 +0000 UTC m=+0.075866852 container died 698099a571e9c7b1f9f37a546423a0c38a40c04fbfafea5af7ba156d8322bb01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.798 226310 INFO os_vif [None req-c14672df-02ee-45c3-bc67-4711aaa731f9 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d4:9e:3a,bridge_name='br-int',has_traffic_filtering=True,id=0ac7b30d-dad2-4718-b060-add6421b1065,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ac7b30d-da')#033[00m
Nov 29 03:23:05 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-698099a571e9c7b1f9f37a546423a0c38a40c04fbfafea5af7ba156d8322bb01-userdata-shm.mount: Deactivated successfully.
Nov 29 03:23:05 np0005539564 systemd[1]: var-lib-containers-storage-overlay-accbc07cc061d345538754dd6f169fe186b2d125b8a88faf001ec8a2b791ca40-merged.mount: Deactivated successfully.
Nov 29 03:23:05 np0005539564 podman[280935]: 2025-11-29 08:23:05.840126353 +0000 UTC m=+0.117915629 container cleanup 698099a571e9c7b1f9f37a546423a0c38a40c04fbfafea5af7ba156d8322bb01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:23:05 np0005539564 systemd[1]: libpod-conmon-698099a571e9c7b1f9f37a546423a0c38a40c04fbfafea5af7ba156d8322bb01.scope: Deactivated successfully.
Nov 29 03:23:05 np0005539564 podman[280992]: 2025-11-29 08:23:05.919441627 +0000 UTC m=+0.053596340 container remove 698099a571e9c7b1f9f37a546423a0c38a40c04fbfafea5af7ba156d8322bb01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:23:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:05.929 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4b6b5657-82ec-405e-a6cd-fe3737a35ed0]: (4, ('Sat Nov 29 08:23:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 (698099a571e9c7b1f9f37a546423a0c38a40c04fbfafea5af7ba156d8322bb01)\n698099a571e9c7b1f9f37a546423a0c38a40c04fbfafea5af7ba156d8322bb01\nSat Nov 29 08:23:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 (698099a571e9c7b1f9f37a546423a0c38a40c04fbfafea5af7ba156d8322bb01)\n698099a571e9c7b1f9f37a546423a0c38a40c04fbfafea5af7ba156d8322bb01\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:05.931 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c42097-bfa1-40e7-92de-04081e96bf45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:05.933 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32485b0e-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.935 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:05 np0005539564 kernel: tap32485b0e-10: left promiscuous mode
Nov 29 03:23:05 np0005539564 nova_compute[226295]: 2025-11-29 08:23:05.967 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:05.970 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[eed2a522-e763-4067-afe2-558f5194195f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:05.995 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[02d76fc7-fa7d-49f4-804c-2458481d5dac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:05.997 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[50eaba44-8b97-45a8-9d40-ad3e680a7db3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:06.017 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f114348e-c5df-45c5-b2a8-6043b7ac3c34]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736738, 'reachable_time': 38194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281007, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:06 np0005539564 systemd[1]: run-netns-ovnmeta\x2d32485b0e\x2d177b\x2d4dfd\x2da55a\x2d0249528f32e1.mount: Deactivated successfully.
Nov 29 03:23:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:06.021 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:23:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:06.022 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[1578b877-d05a-499e-85ba-a904c3c465ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:06 np0005539564 nova_compute[226295]: 2025-11-29 08:23:06.056 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404571.0544453, 63dba7cc-b3af-4fb3-bbe8-58d5a087af19 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:23:06 np0005539564 nova_compute[226295]: 2025-11-29 08:23:06.056 226310 INFO nova.compute.manager [-] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:23:06 np0005539564 nova_compute[226295]: 2025-11-29 08:23:06.082 226310 DEBUG nova.compute.manager [None req-60c1526e-b981-4d64-a88d-10d0715f3edf - - - - - -] [instance: 63dba7cc-b3af-4fb3-bbe8-58d5a087af19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:06 np0005539564 nova_compute[226295]: 2025-11-29 08:23:06.245 226310 INFO nova.virt.libvirt.driver [None req-c14672df-02ee-45c3-bc67-4711aaa731f9 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Deleting instance files /var/lib/nova/instances/38bba84b-1fb0-460a-a6aa-707ef29970b2_del#033[00m
Nov 29 03:23:06 np0005539564 nova_compute[226295]: 2025-11-29 08:23:06.246 226310 INFO nova.virt.libvirt.driver [None req-c14672df-02ee-45c3-bc67-4711aaa731f9 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Deletion of /var/lib/nova/instances/38bba84b-1fb0-460a-a6aa-707ef29970b2_del complete#033[00m
Nov 29 03:23:06 np0005539564 nova_compute[226295]: 2025-11-29 08:23:06.325 226310 INFO nova.compute.manager [None req-c14672df-02ee-45c3-bc67-4711aaa731f9 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:23:06 np0005539564 nova_compute[226295]: 2025-11-29 08:23:06.325 226310 DEBUG oslo.service.loopingcall [None req-c14672df-02ee-45c3-bc67-4711aaa731f9 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:23:06 np0005539564 nova_compute[226295]: 2025-11-29 08:23:06.326 226310 DEBUG nova.compute.manager [-] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:23:06 np0005539564 nova_compute[226295]: 2025-11-29 08:23:06.326 226310 DEBUG nova.network.neutron [-] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:23:06 np0005539564 nova_compute[226295]: 2025-11-29 08:23:06.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:06 np0005539564 nova_compute[226295]: 2025-11-29 08:23:06.382 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:06 np0005539564 nova_compute[226295]: 2025-11-29 08:23:06.382 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:06 np0005539564 nova_compute[226295]: 2025-11-29 08:23:06.383 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:06 np0005539564 nova_compute[226295]: 2025-11-29 08:23:06.383 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:23:06 np0005539564 nova_compute[226295]: 2025-11-29 08:23:06.383 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:23:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:06.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:23:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:23:06 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4279436585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:23:06 np0005539564 nova_compute[226295]: 2025-11-29 08:23:06.895 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:07 np0005539564 nova_compute[226295]: 2025-11-29 08:23:07.067 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:23:07 np0005539564 nova_compute[226295]: 2025-11-29 08:23:07.069 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4331MB free_disk=20.813190460205078GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:23:07 np0005539564 nova_compute[226295]: 2025-11-29 08:23:07.069 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:07 np0005539564 nova_compute[226295]: 2025-11-29 08:23:07.070 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:07 np0005539564 nova_compute[226295]: 2025-11-29 08:23:07.187 226310 DEBUG nova.network.neutron [-] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:23:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:07.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:07 np0005539564 nova_compute[226295]: 2025-11-29 08:23:07.211 226310 INFO nova.compute.manager [-] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Took 0.88 seconds to deallocate network for instance.#033[00m
Nov 29 03:23:07 np0005539564 nova_compute[226295]: 2025-11-29 08:23:07.237 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 38bba84b-1fb0-460a-a6aa-707ef29970b2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:23:07 np0005539564 nova_compute[226295]: 2025-11-29 08:23:07.238 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:23:07 np0005539564 nova_compute[226295]: 2025-11-29 08:23:07.238 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:23:07 np0005539564 nova_compute[226295]: 2025-11-29 08:23:07.255 226310 DEBUG oslo_concurrency.lockutils [None req-c14672df-02ee-45c3-bc67-4711aaa731f9 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:07 np0005539564 nova_compute[226295]: 2025-11-29 08:23:07.257 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing inventories for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:23:07 np0005539564 nova_compute[226295]: 2025-11-29 08:23:07.290 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating ProviderTree inventory for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:23:07 np0005539564 nova_compute[226295]: 2025-11-29 08:23:07.290 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating inventory in ProviderTree for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:23:07 np0005539564 nova_compute[226295]: 2025-11-29 08:23:07.309 226310 DEBUG nova.compute.manager [req-7b3c1847-977f-43af-973b-53da00ae1c31 req-2e3623d6-3842-41e1-b40b-1490e0ddacf4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Received event network-vif-deleted-0ac7b30d-dad2-4718-b060-add6421b1065 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:07 np0005539564 nova_compute[226295]: 2025-11-29 08:23:07.310 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing aggregate associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:23:07 np0005539564 nova_compute[226295]: 2025-11-29 08:23:07.337 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing trait associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:23:07 np0005539564 nova_compute[226295]: 2025-11-29 08:23:07.380 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:23:07 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/318140726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:23:07 np0005539564 nova_compute[226295]: 2025-11-29 08:23:07.829 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:07 np0005539564 nova_compute[226295]: 2025-11-29 08:23:07.837 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:23:07 np0005539564 nova_compute[226295]: 2025-11-29 08:23:07.947 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:07 np0005539564 nova_compute[226295]: 2025-11-29 08:23:07.960 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:23:08 np0005539564 nova_compute[226295]: 2025-11-29 08:23:08.025 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:23:08 np0005539564 nova_compute[226295]: 2025-11-29 08:23:08.026 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:08 np0005539564 nova_compute[226295]: 2025-11-29 08:23:08.027 226310 DEBUG oslo_concurrency.lockutils [None req-c14672df-02ee-45c3-bc67-4711aaa731f9 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:08 np0005539564 nova_compute[226295]: 2025-11-29 08:23:08.070 226310 DEBUG nova.compute.manager [req-cf5bdf11-6553-4163-b6e7-3db6460f924d req-0263c8f8-368b-4256-bdb9-b95b31033990 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Received event network-vif-plugged-0ac7b30d-dad2-4718-b060-add6421b1065 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:08 np0005539564 nova_compute[226295]: 2025-11-29 08:23:08.071 226310 DEBUG oslo_concurrency.lockutils [req-cf5bdf11-6553-4163-b6e7-3db6460f924d req-0263c8f8-368b-4256-bdb9-b95b31033990 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:08 np0005539564 nova_compute[226295]: 2025-11-29 08:23:08.071 226310 DEBUG oslo_concurrency.lockutils [req-cf5bdf11-6553-4163-b6e7-3db6460f924d req-0263c8f8-368b-4256-bdb9-b95b31033990 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:08 np0005539564 nova_compute[226295]: 2025-11-29 08:23:08.071 226310 DEBUG oslo_concurrency.lockutils [req-cf5bdf11-6553-4163-b6e7-3db6460f924d req-0263c8f8-368b-4256-bdb9-b95b31033990 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:08 np0005539564 nova_compute[226295]: 2025-11-29 08:23:08.071 226310 DEBUG nova.compute.manager [req-cf5bdf11-6553-4163-b6e7-3db6460f924d req-0263c8f8-368b-4256-bdb9-b95b31033990 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] No waiting events found dispatching network-vif-plugged-0ac7b30d-dad2-4718-b060-add6421b1065 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:08 np0005539564 nova_compute[226295]: 2025-11-29 08:23:08.071 226310 WARNING nova.compute.manager [req-cf5bdf11-6553-4163-b6e7-3db6460f924d req-0263c8f8-368b-4256-bdb9-b95b31033990 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Received unexpected event network-vif-plugged-0ac7b30d-dad2-4718-b060-add6421b1065 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:23:08 np0005539564 nova_compute[226295]: 2025-11-29 08:23:08.073 226310 DEBUG oslo_concurrency.processutils [None req-c14672df-02ee-45c3-bc67-4711aaa731f9 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:23:08 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3428384830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:23:08 np0005539564 nova_compute[226295]: 2025-11-29 08:23:08.532 226310 DEBUG oslo_concurrency.processutils [None req-c14672df-02ee-45c3-bc67-4711aaa731f9 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:08 np0005539564 nova_compute[226295]: 2025-11-29 08:23:08.540 226310 DEBUG nova.compute.provider_tree [None req-c14672df-02ee-45c3-bc67-4711aaa731f9 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:23:08 np0005539564 nova_compute[226295]: 2025-11-29 08:23:08.700 226310 DEBUG nova.scheduler.client.report [None req-c14672df-02ee-45c3-bc67-4711aaa731f9 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:23:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:08.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:08 np0005539564 nova_compute[226295]: 2025-11-29 08:23:08.753 226310 DEBUG oslo_concurrency.lockutils [None req-c14672df-02ee-45c3-bc67-4711aaa731f9 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:08 np0005539564 nova_compute[226295]: 2025-11-29 08:23:08.788 226310 INFO nova.scheduler.client.report [None req-c14672df-02ee-45c3-bc67-4711aaa731f9 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Deleted allocations for instance 38bba84b-1fb0-460a-a6aa-707ef29970b2#033[00m
Nov 29 03:23:08 np0005539564 nova_compute[226295]: 2025-11-29 08:23:08.864 226310 DEBUG oslo_concurrency.lockutils [None req-c14672df-02ee-45c3-bc67-4711aaa731f9 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "38bba84b-1fb0-460a-a6aa-707ef29970b2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.357s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e337 e337: 3 total, 3 up, 3 in
Nov 29 03:23:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:09.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:10.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:10 np0005539564 nova_compute[226295]: 2025-11-29 08:23:10.790 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:11.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e338 e338: 3 total, 3 up, 3 in
Nov 29 03:23:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:12.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:12 np0005539564 nova_compute[226295]: 2025-11-29 08:23:12.948 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e338 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:13.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e339 e339: 3 total, 3 up, 3 in
Nov 29 03:23:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:14.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:23:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:15.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:23:15 np0005539564 podman[281078]: 2025-11-29 08:23:15.50563939 +0000 UTC m=+0.055683046 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 03:23:15 np0005539564 podman[281077]: 2025-11-29 08:23:15.52117687 +0000 UTC m=+0.073990241 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:23:15 np0005539564 podman[281076]: 2025-11-29 08:23:15.533997696 +0000 UTC m=+0.093416586 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Nov 29 03:23:15 np0005539564 nova_compute[226295]: 2025-11-29 08:23:15.793 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:16.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:17.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:17 np0005539564 nova_compute[226295]: 2025-11-29 08:23:17.951 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:18.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e340 e340: 3 total, 3 up, 3 in
Nov 29 03:23:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:19.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:20.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:20 np0005539564 nova_compute[226295]: 2025-11-29 08:23:20.755 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404585.7530327, 38bba84b-1fb0-460a-a6aa-707ef29970b2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:23:20 np0005539564 nova_compute[226295]: 2025-11-29 08:23:20.755 226310 INFO nova.compute.manager [-] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:23:20 np0005539564 nova_compute[226295]: 2025-11-29 08:23:20.795 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:20 np0005539564 nova_compute[226295]: 2025-11-29 08:23:20.954 226310 DEBUG nova.compute.manager [None req-219d7458-2c6e-4e3b-a595-2cdb53930286 - - - - - -] [instance: 38bba84b-1fb0-460a-a6aa-707ef29970b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:21.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:22.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:22 np0005539564 nova_compute[226295]: 2025-11-29 08:23:22.954 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:23.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:24.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:25.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:25 np0005539564 nova_compute[226295]: 2025-11-29 08:23:25.235 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:25 np0005539564 nova_compute[226295]: 2025-11-29 08:23:25.797 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:26.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:27.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:27 np0005539564 nova_compute[226295]: 2025-11-29 08:23:27.956 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:28.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:29.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:30.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:30 np0005539564 nova_compute[226295]: 2025-11-29 08:23:30.799 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:31.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:32 np0005539564 nova_compute[226295]: 2025-11-29 08:23:32.021 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:32.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:32 np0005539564 nova_compute[226295]: 2025-11-29 08:23:32.959 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:33.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:34.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:35.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:35 np0005539564 nova_compute[226295]: 2025-11-29 08:23:35.801 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:36.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:23:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:37.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:23:37 np0005539564 nova_compute[226295]: 2025-11-29 08:23:37.963 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:38.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:39.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:40.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:40 np0005539564 nova_compute[226295]: 2025-11-29 08:23:40.803 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:41.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:23:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:42.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:23:42 np0005539564 nova_compute[226295]: 2025-11-29 08:23:42.964 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:43.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:43 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Nov 29 03:23:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:44.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:45.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:45 np0005539564 nova_compute[226295]: 2025-11-29 08:23:45.805 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:46 np0005539564 podman[281144]: 2025-11-29 08:23:46.532986766 +0000 UTC m=+0.088270488 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:23:46 np0005539564 podman[281145]: 2025-11-29 08:23:46.553169141 +0000 UTC m=+0.094583208 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:23:46 np0005539564 podman[281143]: 2025-11-29 08:23:46.570180411 +0000 UTC m=+0.130127379 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 03:23:46 np0005539564 nova_compute[226295]: 2025-11-29 08:23:46.727 226310 DEBUG oslo_concurrency.lockutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "5a1e0d67-b865-42ce-b195-7bfea62954af" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:46 np0005539564 nova_compute[226295]: 2025-11-29 08:23:46.727 226310 DEBUG oslo_concurrency.lockutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "5a1e0d67-b865-42ce-b195-7bfea62954af" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:46.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:47 np0005539564 nova_compute[226295]: 2025-11-29 08:23:47.102 226310 DEBUG nova.compute.manager [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:23:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:47.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:47 np0005539564 nova_compute[226295]: 2025-11-29 08:23:47.781 226310 DEBUG oslo_concurrency.lockutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:47 np0005539564 nova_compute[226295]: 2025-11-29 08:23:47.782 226310 DEBUG oslo_concurrency.lockutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:47 np0005539564 nova_compute[226295]: 2025-11-29 08:23:47.861 226310 DEBUG nova.virt.hardware [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:23:47 np0005539564 nova_compute[226295]: 2025-11-29 08:23:47.862 226310 INFO nova.compute.claims [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:23:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:48 np0005539564 nova_compute[226295]: 2025-11-29 08:23:48.005 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:48 np0005539564 nova_compute[226295]: 2025-11-29 08:23:48.160 226310 DEBUG oslo_concurrency.processutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:23:48 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2712794285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:23:48 np0005539564 nova_compute[226295]: 2025-11-29 08:23:48.599 226310 DEBUG oslo_concurrency.processutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:48 np0005539564 nova_compute[226295]: 2025-11-29 08:23:48.606 226310 DEBUG nova.compute.provider_tree [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:23:48 np0005539564 nova_compute[226295]: 2025-11-29 08:23:48.715 226310 DEBUG nova.scheduler.client.report [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:23:48 np0005539564 nova_compute[226295]: 2025-11-29 08:23:48.744 226310 DEBUG oslo_concurrency.lockutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.962s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:48 np0005539564 nova_compute[226295]: 2025-11-29 08:23:48.745 226310 DEBUG nova.compute.manager [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:23:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:48.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:48 np0005539564 nova_compute[226295]: 2025-11-29 08:23:48.901 226310 DEBUG nova.compute.manager [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:23:48 np0005539564 nova_compute[226295]: 2025-11-29 08:23:48.902 226310 DEBUG nova.network.neutron [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:23:48 np0005539564 nova_compute[226295]: 2025-11-29 08:23:48.964 226310 INFO nova.virt.libvirt.driver [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:23:48 np0005539564 nova_compute[226295]: 2025-11-29 08:23:48.994 226310 DEBUG nova.compute.manager [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:23:49 np0005539564 nova_compute[226295]: 2025-11-29 08:23:49.183 226310 DEBUG nova.compute.manager [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:23:49 np0005539564 nova_compute[226295]: 2025-11-29 08:23:49.186 226310 DEBUG nova.virt.libvirt.driver [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:23:49 np0005539564 nova_compute[226295]: 2025-11-29 08:23:49.187 226310 INFO nova.virt.libvirt.driver [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Creating image(s)#033[00m
Nov 29 03:23:49 np0005539564 nova_compute[226295]: 2025-11-29 08:23:49.216 226310 DEBUG nova.storage.rbd_utils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] rbd image 5a1e0d67-b865-42ce-b195-7bfea62954af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:49 np0005539564 nova_compute[226295]: 2025-11-29 08:23:49.245 226310 DEBUG nova.storage.rbd_utils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] rbd image 5a1e0d67-b865-42ce-b195-7bfea62954af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:49 np0005539564 nova_compute[226295]: 2025-11-29 08:23:49.277 226310 DEBUG nova.storage.rbd_utils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] rbd image 5a1e0d67-b865-42ce-b195-7bfea62954af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:49 np0005539564 nova_compute[226295]: 2025-11-29 08:23:49.281 226310 DEBUG oslo_concurrency.processutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:23:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:49.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:23:49 np0005539564 nova_compute[226295]: 2025-11-29 08:23:49.352 226310 DEBUG oslo_concurrency.processutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:23:49 np0005539564 nova_compute[226295]: 2025-11-29 08:23:49.353 226310 DEBUG oslo_concurrency.lockutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:23:49 np0005539564 nova_compute[226295]: 2025-11-29 08:23:49.354 226310 DEBUG oslo_concurrency.lockutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:23:49 np0005539564 nova_compute[226295]: 2025-11-29 08:23:49.354 226310 DEBUG oslo_concurrency.lockutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:23:49 np0005539564 nova_compute[226295]: 2025-11-29 08:23:49.384 226310 DEBUG nova.storage.rbd_utils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] rbd image 5a1e0d67-b865-42ce-b195-7bfea62954af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:23:49 np0005539564 nova_compute[226295]: 2025-11-29 08:23:49.388 226310 DEBUG oslo_concurrency.processutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 5a1e0d67-b865-42ce-b195-7bfea62954af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:23:49 np0005539564 nova_compute[226295]: 2025-11-29 08:23:49.670 226310 DEBUG oslo_concurrency.processutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 5a1e0d67-b865-42ce-b195-7bfea62954af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:23:49 np0005539564 nova_compute[226295]: 2025-11-29 08:23:49.720 226310 DEBUG nova.policy [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a37c720b9bb4273b66cd2dce30fbf48', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd9406fbc6fef486fa5b0e79549e78d00', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:23:49 np0005539564 nova_compute[226295]: 2025-11-29 08:23:49.783 226310 DEBUG nova.storage.rbd_utils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] resizing rbd image 5a1e0d67-b865-42ce-b195-7bfea62954af_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:23:49 np0005539564 nova_compute[226295]: 2025-11-29 08:23:49.912 226310 DEBUG nova.objects.instance [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lazy-loading 'migration_context' on Instance uuid 5a1e0d67-b865-42ce-b195-7bfea62954af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:23:49 np0005539564 nova_compute[226295]: 2025-11-29 08:23:49.931 226310 DEBUG nova.virt.libvirt.driver [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:23:49 np0005539564 nova_compute[226295]: 2025-11-29 08:23:49.932 226310 DEBUG nova.virt.libvirt.driver [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Ensure instance console log exists: /var/lib/nova/instances/5a1e0d67-b865-42ce-b195-7bfea62954af/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:23:49 np0005539564 nova_compute[226295]: 2025-11-29 08:23:49.933 226310 DEBUG oslo_concurrency.lockutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:23:49 np0005539564 nova_compute[226295]: 2025-11-29 08:23:49.933 226310 DEBUG oslo_concurrency.lockutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:23:49 np0005539564 nova_compute[226295]: 2025-11-29 08:23:49.934 226310 DEBUG oslo_concurrency.lockutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:23:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:50.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:50 np0005539564 nova_compute[226295]: 2025-11-29 08:23:50.808 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:51.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:51 np0005539564 nova_compute[226295]: 2025-11-29 08:23:51.354 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:23:51 np0005539564 nova_compute[226295]: 2025-11-29 08:23:51.447 226310 DEBUG nova.network.neutron [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Successfully created port: a6b7b735-6f8d-4009-8d41-26fc5a89629e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:23:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:52.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:53 np0005539564 nova_compute[226295]: 2025-11-29 08:23:53.008 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:53 np0005539564 nova_compute[226295]: 2025-11-29 08:23:53.026 226310 DEBUG nova.network.neutron [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Successfully updated port: a6b7b735-6f8d-4009-8d41-26fc5a89629e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:23:53 np0005539564 nova_compute[226295]: 2025-11-29 08:23:53.042 226310 DEBUG oslo_concurrency.lockutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "refresh_cache-5a1e0d67-b865-42ce-b195-7bfea62954af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:23:53 np0005539564 nova_compute[226295]: 2025-11-29 08:23:53.042 226310 DEBUG oslo_concurrency.lockutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquired lock "refresh_cache-5a1e0d67-b865-42ce-b195-7bfea62954af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:23:53 np0005539564 nova_compute[226295]: 2025-11-29 08:23:53.043 226310 DEBUG nova.network.neutron [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:23:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:53.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:53 np0005539564 nova_compute[226295]: 2025-11-29 08:23:53.377 226310 DEBUG nova.compute.manager [req-8f9366cf-6647-47b3-b42b-69c832a17102 req-e6003a4b-d46f-4d34-9f62-1164c5050e31 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Received event network-changed-a6b7b735-6f8d-4009-8d41-26fc5a89629e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:23:53 np0005539564 nova_compute[226295]: 2025-11-29 08:23:53.378 226310 DEBUG nova.compute.manager [req-8f9366cf-6647-47b3-b42b-69c832a17102 req-e6003a4b-d46f-4d34-9f62-1164c5050e31 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Refreshing instance network info cache due to event network-changed-a6b7b735-6f8d-4009-8d41-26fc5a89629e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:23:53 np0005539564 nova_compute[226295]: 2025-11-29 08:23:53.378 226310 DEBUG oslo_concurrency.lockutils [req-8f9366cf-6647-47b3-b42b-69c832a17102 req-e6003a4b-d46f-4d34-9f62-1164c5050e31 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-5a1e0d67-b865-42ce-b195-7bfea62954af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:23:53 np0005539564 nova_compute[226295]: 2025-11-29 08:23:53.454 226310 DEBUG nova.network.neutron [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:23:54 np0005539564 nova_compute[226295]: 2025-11-29 08:23:54.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:23:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:54.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:55.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:55 np0005539564 nova_compute[226295]: 2025-11-29 08:23:55.810 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:56 np0005539564 nova_compute[226295]: 2025-11-29 08:23:56.009 226310 DEBUG nova.network.neutron [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Updating instance_info_cache with network_info: [{"id": "a6b7b735-6f8d-4009-8d41-26fc5a89629e", "address": "fa:16:3e:9a:8a:00", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6b7b735-6f", "ovs_interfaceid": "a6b7b735-6f8d-4009-8d41-26fc5a89629e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:23:56 np0005539564 nova_compute[226295]: 2025-11-29 08:23:56.066 226310 DEBUG oslo_concurrency.lockutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Releasing lock "refresh_cache-5a1e0d67-b865-42ce-b195-7bfea62954af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:23:56 np0005539564 nova_compute[226295]: 2025-11-29 08:23:56.067 226310 DEBUG nova.compute.manager [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Instance network_info: |[{"id": "a6b7b735-6f8d-4009-8d41-26fc5a89629e", "address": "fa:16:3e:9a:8a:00", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6b7b735-6f", "ovs_interfaceid": "a6b7b735-6f8d-4009-8d41-26fc5a89629e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:23:56 np0005539564 nova_compute[226295]: 2025-11-29 08:23:56.067 226310 DEBUG oslo_concurrency.lockutils [req-8f9366cf-6647-47b3-b42b-69c832a17102 req-e6003a4b-d46f-4d34-9f62-1164c5050e31 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-5a1e0d67-b865-42ce-b195-7bfea62954af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:23:56 np0005539564 nova_compute[226295]: 2025-11-29 08:23:56.067 226310 DEBUG nova.network.neutron [req-8f9366cf-6647-47b3-b42b-69c832a17102 req-e6003a4b-d46f-4d34-9f62-1164c5050e31 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Refreshing network info cache for port a6b7b735-6f8d-4009-8d41-26fc5a89629e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:23:56 np0005539564 nova_compute[226295]: 2025-11-29 08:23:56.070 226310 DEBUG nova.virt.libvirt.driver [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Start _get_guest_xml network_info=[{"id": "a6b7b735-6f8d-4009-8d41-26fc5a89629e", "address": "fa:16:3e:9a:8a:00", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6b7b735-6f", "ovs_interfaceid": "a6b7b735-6f8d-4009-8d41-26fc5a89629e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:23:56 np0005539564 nova_compute[226295]: 2025-11-29 08:23:56.075 226310 WARNING nova.virt.libvirt.driver [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:23:56 np0005539564 nova_compute[226295]: 2025-11-29 08:23:56.105 226310 DEBUG nova.virt.libvirt.host [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:23:56 np0005539564 nova_compute[226295]: 2025-11-29 08:23:56.107 226310 DEBUG nova.virt.libvirt.host [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:23:56 np0005539564 nova_compute[226295]: 2025-11-29 08:23:56.114 226310 DEBUG nova.virt.libvirt.host [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:23:56 np0005539564 nova_compute[226295]: 2025-11-29 08:23:56.115 226310 DEBUG nova.virt.libvirt.host [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:23:56 np0005539564 nova_compute[226295]: 2025-11-29 08:23:56.117 226310 DEBUG nova.virt.libvirt.driver [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:23:56 np0005539564 nova_compute[226295]: 2025-11-29 08:23:56.117 226310 DEBUG nova.virt.hardware [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:23:56 np0005539564 nova_compute[226295]: 2025-11-29 08:23:56.118 226310 DEBUG nova.virt.hardware [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:23:56 np0005539564 nova_compute[226295]: 2025-11-29 08:23:56.119 226310 DEBUG nova.virt.hardware [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:23:56 np0005539564 nova_compute[226295]: 2025-11-29 08:23:56.119 226310 DEBUG nova.virt.hardware [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:23:56 np0005539564 nova_compute[226295]: 2025-11-29 08:23:56.119 226310 DEBUG nova.virt.hardware [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:23:56 np0005539564 nova_compute[226295]: 2025-11-29 08:23:56.120 226310 DEBUG nova.virt.hardware [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:23:56 np0005539564 nova_compute[226295]: 2025-11-29 08:23:56.120 226310 DEBUG nova.virt.hardware [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:23:56 np0005539564 nova_compute[226295]: 2025-11-29 08:23:56.121 226310 DEBUG nova.virt.hardware [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:23:56 np0005539564 nova_compute[226295]: 2025-11-29 08:23:56.121 226310 DEBUG nova.virt.hardware [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:23:56 np0005539564 nova_compute[226295]: 2025-11-29 08:23:56.122 226310 DEBUG nova.virt.hardware [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:23:56 np0005539564 nova_compute[226295]: 2025-11-29 08:23:56.122 226310 DEBUG nova.virt.hardware [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:23:56 np0005539564 nova_compute[226295]: 2025-11-29 08:23:56.127 226310 DEBUG oslo_concurrency.processutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:23:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:56.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:23:57 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3860970169' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.329 226310 DEBUG oslo_concurrency.processutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:23:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:57.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.373 226310 DEBUG nova.storage.rbd_utils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] rbd image 5a1e0d67-b865-42ce-b195-7bfea62954af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.379 226310 DEBUG oslo_concurrency.processutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.420 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.422 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.423 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:23:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:23:57 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2730004005' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.882 226310 DEBUG oslo_concurrency.processutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.884 226310 DEBUG nova.virt.libvirt.vif [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:23:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-761688547',display_name='tempest-ServersNegativeTestJSON-server-761688547',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-761688547',id=154,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9406fbc6fef486fa5b0e79549e78d00',ramdisk_id='',reservation_id='r-453csrrt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-213437080',owner_user_name='tempest-ServersNegativeTestJSON-213437080-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:23:49Z,user_data=None,user_id='3a37c720b9bb4273b66cd2dce30fbf48',uuid=5a1e0d67-b865-42ce-b195-7bfea62954af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a6b7b735-6f8d-4009-8d41-26fc5a89629e", "address": "fa:16:3e:9a:8a:00", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6b7b735-6f", "ovs_interfaceid": "a6b7b735-6f8d-4009-8d41-26fc5a89629e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.885 226310 DEBUG nova.network.os_vif_util [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Converting VIF {"id": "a6b7b735-6f8d-4009-8d41-26fc5a89629e", "address": "fa:16:3e:9a:8a:00", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6b7b735-6f", "ovs_interfaceid": "a6b7b735-6f8d-4009-8d41-26fc5a89629e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.887 226310 DEBUG nova.network.os_vif_util [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:8a:00,bridge_name='br-int',has_traffic_filtering=True,id=a6b7b735-6f8d-4009-8d41-26fc5a89629e,network=Network(258f6232-6798-4075-adab-c07c4559ef67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6b7b735-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.889 226310 DEBUG nova.objects.instance [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5a1e0d67-b865-42ce-b195-7bfea62954af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.924 226310 DEBUG nova.virt.libvirt.driver [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:23:57 np0005539564 nova_compute[226295]:  <uuid>5a1e0d67-b865-42ce-b195-7bfea62954af</uuid>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:  <name>instance-0000009a</name>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServersNegativeTestJSON-server-761688547</nova:name>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:23:56</nova:creationTime>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:23:57 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:        <nova:user uuid="3a37c720b9bb4273b66cd2dce30fbf48">tempest-ServersNegativeTestJSON-213437080-project-member</nova:user>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:        <nova:project uuid="d9406fbc6fef486fa5b0e79549e78d00">tempest-ServersNegativeTestJSON-213437080</nova:project>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:        <nova:port uuid="a6b7b735-6f8d-4009-8d41-26fc5a89629e">
Nov 29 03:23:57 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <entry name="serial">5a1e0d67-b865-42ce-b195-7bfea62954af</entry>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <entry name="uuid">5a1e0d67-b865-42ce-b195-7bfea62954af</entry>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/5a1e0d67-b865-42ce-b195-7bfea62954af_disk">
Nov 29 03:23:57 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:23:57 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/5a1e0d67-b865-42ce-b195-7bfea62954af_disk.config">
Nov 29 03:23:57 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:23:57 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:9a:8a:00"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <target dev="tapa6b7b735-6f"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/5a1e0d67-b865-42ce-b195-7bfea62954af/console.log" append="off"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:23:57 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:23:57 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:23:57 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:23:57 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.926 226310 DEBUG nova.compute.manager [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Preparing to wait for external event network-vif-plugged-a6b7b735-6f8d-4009-8d41-26fc5a89629e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.927 226310 DEBUG oslo_concurrency.lockutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "5a1e0d67-b865-42ce-b195-7bfea62954af-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.928 226310 DEBUG oslo_concurrency.lockutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "5a1e0d67-b865-42ce-b195-7bfea62954af-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.928 226310 DEBUG oslo_concurrency.lockutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "5a1e0d67-b865-42ce-b195-7bfea62954af-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.930 226310 DEBUG nova.virt.libvirt.vif [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:23:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-761688547',display_name='tempest-ServersNegativeTestJSON-server-761688547',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-761688547',id=154,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9406fbc6fef486fa5b0e79549e78d00',ramdisk_id='',reservation_id='r-453csrrt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-213437080',owner_user_name='tempest-ServersNegativeTestJSON-213437080-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:23:49Z,user_data=None,user_id='3a37c720b9bb4273b66cd2dce30fbf48',uuid=5a1e0d67-b865-42ce-b195-7bfea62954af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a6b7b735-6f8d-4009-8d41-26fc5a89629e", "address": "fa:16:3e:9a:8a:00", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6b7b735-6f", "ovs_interfaceid": "a6b7b735-6f8d-4009-8d41-26fc5a89629e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.931 226310 DEBUG nova.network.os_vif_util [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Converting VIF {"id": "a6b7b735-6f8d-4009-8d41-26fc5a89629e", "address": "fa:16:3e:9a:8a:00", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6b7b735-6f", "ovs_interfaceid": "a6b7b735-6f8d-4009-8d41-26fc5a89629e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.932 226310 DEBUG nova.network.os_vif_util [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:8a:00,bridge_name='br-int',has_traffic_filtering=True,id=a6b7b735-6f8d-4009-8d41-26fc5a89629e,network=Network(258f6232-6798-4075-adab-c07c4559ef67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6b7b735-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.933 226310 DEBUG os_vif [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:8a:00,bridge_name='br-int',has_traffic_filtering=True,id=a6b7b735-6f8d-4009-8d41-26fc5a89629e,network=Network(258f6232-6798-4075-adab-c07c4559ef67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6b7b735-6f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.934 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.935 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.936 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.942 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.943 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa6b7b735-6f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.944 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa6b7b735-6f, col_values=(('external_ids', {'iface-id': 'a6b7b735-6f8d-4009-8d41-26fc5a89629e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9a:8a:00', 'vm-uuid': '5a1e0d67-b865-42ce-b195-7bfea62954af'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.947 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:57 np0005539564 NetworkManager[48997]: <info>  [1764404637.9480] manager: (tapa6b7b735-6f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/256)
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.950 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.956 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:57 np0005539564 nova_compute[226295]: 2025-11-29 08:23:57.957 226310 INFO os_vif [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:8a:00,bridge_name='br-int',has_traffic_filtering=True,id=a6b7b735-6f8d-4009-8d41-26fc5a89629e,network=Network(258f6232-6798-4075-adab-c07c4559ef67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6b7b735-6f')#033[00m
Nov 29 03:23:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:58 np0005539564 nova_compute[226295]: 2025-11-29 08:23:58.009 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:58 np0005539564 nova_compute[226295]: 2025-11-29 08:23:58.021 226310 DEBUG nova.virt.libvirt.driver [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:23:58 np0005539564 nova_compute[226295]: 2025-11-29 08:23:58.022 226310 DEBUG nova.virt.libvirt.driver [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:23:58 np0005539564 nova_compute[226295]: 2025-11-29 08:23:58.022 226310 DEBUG nova.virt.libvirt.driver [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] No VIF found with MAC fa:16:3e:9a:8a:00, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:23:58 np0005539564 nova_compute[226295]: 2025-11-29 08:23:58.023 226310 INFO nova.virt.libvirt.driver [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Using config drive
Nov 29 03:23:58 np0005539564 nova_compute[226295]: 2025-11-29 08:23:58.056 226310 DEBUG nova.storage.rbd_utils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] rbd image 5a1e0d67-b865-42ce-b195-7bfea62954af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:23:58 np0005539564 nova_compute[226295]: 2025-11-29 08:23:58.065 226310 DEBUG nova.network.neutron [req-8f9366cf-6647-47b3-b42b-69c832a17102 req-e6003a4b-d46f-4d34-9f62-1164c5050e31 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Updated VIF entry in instance network info cache for port a6b7b735-6f8d-4009-8d41-26fc5a89629e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:23:58 np0005539564 nova_compute[226295]: 2025-11-29 08:23:58.066 226310 DEBUG nova.network.neutron [req-8f9366cf-6647-47b3-b42b-69c832a17102 req-e6003a4b-d46f-4d34-9f62-1164c5050e31 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Updating instance_info_cache with network_info: [{"id": "a6b7b735-6f8d-4009-8d41-26fc5a89629e", "address": "fa:16:3e:9a:8a:00", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6b7b735-6f", "ovs_interfaceid": "a6b7b735-6f8d-4009-8d41-26fc5a89629e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:23:58 np0005539564 nova_compute[226295]: 2025-11-29 08:23:58.098 226310 DEBUG oslo_concurrency.lockutils [req-8f9366cf-6647-47b3-b42b-69c832a17102 req-e6003a4b-d46f-4d34-9f62-1164c5050e31 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-5a1e0d67-b865-42ce-b195-7bfea62954af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:23:58 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:58.211 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:23:58 np0005539564 nova_compute[226295]: 2025-11-29 08:23:58.211 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:58 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:58.213 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:23:58 np0005539564 nova_compute[226295]: 2025-11-29 08:23:58.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:23:58 np0005539564 nova_compute[226295]: 2025-11-29 08:23:58.705 226310 INFO nova.virt.libvirt.driver [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Creating config drive at /var/lib/nova/instances/5a1e0d67-b865-42ce-b195-7bfea62954af/disk.config
Nov 29 03:23:58 np0005539564 nova_compute[226295]: 2025-11-29 08:23:58.716 226310 DEBUG oslo_concurrency.processutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5a1e0d67-b865-42ce-b195-7bfea62954af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptju2lhpb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:23:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:58.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:58 np0005539564 nova_compute[226295]: 2025-11-29 08:23:58.878 226310 DEBUG oslo_concurrency.processutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5a1e0d67-b865-42ce-b195-7bfea62954af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptju2lhpb" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:23:58 np0005539564 nova_compute[226295]: 2025-11-29 08:23:58.927 226310 DEBUG nova.storage.rbd_utils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] rbd image 5a1e0d67-b865-42ce-b195-7bfea62954af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:23:58 np0005539564 nova_compute[226295]: 2025-11-29 08:23:58.932 226310 DEBUG oslo_concurrency.processutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5a1e0d67-b865-42ce-b195-7bfea62954af/disk.config 5a1e0d67-b865-42ce-b195-7bfea62954af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:23:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:23:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:59.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:59 np0005539564 nova_compute[226295]: 2025-11-29 08:23:59.358 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:23:59 np0005539564 nova_compute[226295]: 2025-11-29 08:23:59.359 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:23:59 np0005539564 nova_compute[226295]: 2025-11-29 08:23:59.360 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:23:59 np0005539564 nova_compute[226295]: 2025-11-29 08:23:59.537 226310 DEBUG oslo_concurrency.processutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5a1e0d67-b865-42ce-b195-7bfea62954af/disk.config 5a1e0d67-b865-42ce-b195-7bfea62954af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:23:59 np0005539564 nova_compute[226295]: 2025-11-29 08:23:59.538 226310 INFO nova.virt.libvirt.driver [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Deleting local config drive /var/lib/nova/instances/5a1e0d67-b865-42ce-b195-7bfea62954af/disk.config because it was imported into RBD.
Nov 29 03:23:59 np0005539564 kernel: tapa6b7b735-6f: entered promiscuous mode
Nov 29 03:23:59 np0005539564 NetworkManager[48997]: <info>  [1764404639.5951] manager: (tapa6b7b735-6f): new Tun device (/org/freedesktop/NetworkManager/Devices/257)
Nov 29 03:23:59 np0005539564 nova_compute[226295]: 2025-11-29 08:23:59.597 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:59 np0005539564 nova_compute[226295]: 2025-11-29 08:23:59.600 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:59 np0005539564 ovn_controller[130591]: 2025-11-29T08:23:59Z|00561|binding|INFO|Claiming lport a6b7b735-6f8d-4009-8d41-26fc5a89629e for this chassis.
Nov 29 03:23:59 np0005539564 ovn_controller[130591]: 2025-11-29T08:23:59Z|00562|binding|INFO|a6b7b735-6f8d-4009-8d41-26fc5a89629e: Claiming fa:16:3e:9a:8a:00 10.100.0.13
Nov 29 03:23:59 np0005539564 nova_compute[226295]: 2025-11-29 08:23:59.604 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:59 np0005539564 systemd-udevd[281526]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:23:59 np0005539564 NetworkManager[48997]: <info>  [1764404639.6621] device (tapa6b7b735-6f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:23:59 np0005539564 NetworkManager[48997]: <info>  [1764404639.6632] device (tapa6b7b735-6f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:23:59 np0005539564 nova_compute[226295]: 2025-11-29 08:23:59.676 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:59 np0005539564 ovn_controller[130591]: 2025-11-29T08:23:59Z|00563|binding|INFO|Setting lport a6b7b735-6f8d-4009-8d41-26fc5a89629e ovn-installed in OVS
Nov 29 03:23:59 np0005539564 nova_compute[226295]: 2025-11-29 08:23:59.681 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:59 np0005539564 systemd-machined[190128]: New machine qemu-70-instance-0000009a.
Nov 29 03:23:59 np0005539564 systemd[1]: Started Virtual Machine qemu-70-instance-0000009a.
Nov 29 03:23:59 np0005539564 ovn_controller[130591]: 2025-11-29T08:23:59Z|00564|binding|INFO|Setting lport a6b7b735-6f8d-4009-8d41-26fc5a89629e up in Southbound
Nov 29 03:23:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:59.792 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:8a:00 10.100.0.13'], port_security=['fa:16:3e:9a:8a:00 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5a1e0d67-b865-42ce-b195-7bfea62954af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-258f6232-6798-4075-adab-c07c4559ef67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9406fbc6fef486fa5b0e79549e78d00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '43e688c9-ebb1-4f07-b4e2-f54248247a71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aac86bc6-5ac8-43c8-9a9b-f058a154968b, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=a6b7b735-6f8d-4009-8d41-26fc5a89629e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:23:59 np0005539564 nova_compute[226295]: 2025-11-29 08:23:59.795 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 03:23:59 np0005539564 nova_compute[226295]: 2025-11-29 08:23:59.795 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 03:23:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:59.795 139780 INFO neutron.agent.ovn.metadata.agent [-] Port a6b7b735-6f8d-4009-8d41-26fc5a89629e in datapath 258f6232-6798-4075-adab-c07c4559ef67 bound to our chassis
Nov 29 03:23:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:59.797 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 258f6232-6798-4075-adab-c07c4559ef67
Nov 29 03:23:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:59.819 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[78cb7286-f196-4fd4-9a09-d20071dd9e9c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:23:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:59.820 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap258f6232-61 in ovnmeta-258f6232-6798-4075-adab-c07c4559ef67 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 03:23:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:59.823 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap258f6232-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 03:23:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:59.823 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[03f587db-1890-4097-8656-a6f25cec5d0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:23:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:59.824 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[175dabb3-1c78-407a-9a39-18521b7b8e5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:23:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:59.843 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[1829bb43-0d14-499d-9827-4d5fe725c427]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:23:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:59.865 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6f9c6ead-64de-4b77-8b11-da7ab4fd837a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:23:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:59.902 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[e681c14a-79a4-46c9-bc4c-550ed6df06a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:23:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:59.907 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c8de47df-4a57-411e-a3fa-0aca82dc507a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:23:59 np0005539564 NetworkManager[48997]: <info>  [1764404639.9083] manager: (tap258f6232-60): new Veth device (/org/freedesktop/NetworkManager/Devices/258)
Nov 29 03:23:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:59.940 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[bd3895d0-27fb-4834-9e5e-5b1130bccc4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:23:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:59.943 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[99cdf908-6429-40b9-8542-be3beb4b2b31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:23:59 np0005539564 NetworkManager[48997]: <info>  [1764404639.9681] device (tap258f6232-60): carrier: link connected
Nov 29 03:23:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:59.974 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c85fdc-b9c6-41d9-bb02-616f41a293d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:23:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:23:59.992 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[534be273-0c4e-45d2-a565-8811b6bcdaaa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap258f6232-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:63:e2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 761464, 'reachable_time': 35252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281562, 'error': None, 'target': 'ovnmeta-258f6232-6798-4075-adab-c07c4559ef67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:00.005 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[797c80b5-88b5-47aa-b33e-9b11527c54d9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe97:63e2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 761464, 'tstamp': 761464}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281563, 'error': None, 'target': 'ovnmeta-258f6232-6798-4075-adab-c07c4559ef67', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:00.024 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5e4d1180-86a4-453f-96f1-903aed45679f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap258f6232-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:63:e2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 761464, 'reachable_time': 35252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281564, 'error': None, 'target': 'ovnmeta-258f6232-6798-4075-adab-c07c4559ef67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:00.057 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3048e153-be99-47fe-9daf-027714d17e42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:00.128 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[813507f7-182a-43e1-8a3d-5acd8c615da9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:00.132 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap258f6232-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:00.132 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:00.133 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap258f6232-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:24:00 np0005539564 kernel: tap258f6232-60: entered promiscuous mode
Nov 29 03:24:00 np0005539564 NetworkManager[48997]: <info>  [1764404640.1357] manager: (tap258f6232-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.134 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.136 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:00.139 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap258f6232-60, col_values=(('external_ids', {'iface-id': 'c87f2e10-0d06-412e-bd89-4b9ab0d16c96'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:00 np0005539564 ovn_controller[130591]: 2025-11-29T08:24:00Z|00565|binding|INFO|Releasing lport c87f2e10-0d06-412e-bd89-4b9ab0d16c96 from this chassis (sb_readonly=0)
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.140 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.141 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:00.142 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/258f6232-6798-4075-adab-c07c4559ef67.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/258f6232-6798-4075-adab-c07c4559ef67.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:00.143 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9dd353e7-f4ed-46d4-86f7-bf1c5557a3d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:00.144 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-258f6232-6798-4075-adab-c07c4559ef67
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/258f6232-6798-4075-adab-c07c4559ef67.pid.haproxy
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 258f6232-6798-4075-adab-c07c4559ef67
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:00.145 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-258f6232-6798-4075-adab-c07c4559ef67', 'env', 'PROCESS_TAG=haproxy-258f6232-6798-4075-adab-c07c4559ef67', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/258f6232-6798-4075-adab-c07c4559ef67.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.153 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:00 np0005539564 podman[281614]: 2025-11-29 08:24:00.567861029 +0000 UTC m=+0.093148940 container create a18cb3ce9c8b57139cc1a26886e68ba356f259fa8e361bbc763ef454e0caa8ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:24:00 np0005539564 podman[281614]: 2025-11-29 08:24:00.505170033 +0000 UTC m=+0.030457964 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:24:00 np0005539564 systemd[1]: Started libpod-conmon-a18cb3ce9c8b57139cc1a26886e68ba356f259fa8e361bbc763ef454e0caa8ab.scope.
Nov 29 03:24:00 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:24:00 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df382b685961fcd7311e0f89755feb96d2dbbcc1b99ee72af94e804b28111b21/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:00 np0005539564 podman[281614]: 2025-11-29 08:24:00.735047568 +0000 UTC m=+0.260335569 container init a18cb3ce9c8b57139cc1a26886e68ba356f259fa8e361bbc763ef454e0caa8ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 03:24:00 np0005539564 podman[281614]: 2025-11-29 08:24:00.7450904 +0000 UTC m=+0.270378341 container start a18cb3ce9c8b57139cc1a26886e68ba356f259fa8e361bbc763ef454e0caa8ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.758 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404640.758068, 5a1e0d67-b865-42ce-b195-7bfea62954af => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.759 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] VM Started (Lifecycle Event)#033[00m
Nov 29 03:24:00 np0005539564 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[281652]: [NOTICE]   (281657) : New worker (281659) forked
Nov 29 03:24:00 np0005539564 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[281652]: [NOTICE]   (281657) : Loading success.
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.784 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:00.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.791 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404640.7581942, 5a1e0d67-b865-42ce-b195-7bfea62954af => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.791 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.816 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.821 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:00.824 139780 INFO oslo_service.service [-] Child 281571 exited with status 0#033[00m
Nov 29 03:24:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:00.824 139780 WARNING oslo_service.service [-] pid 281571 not in child list#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.840 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.899 226310 DEBUG nova.compute.manager [req-6a5e990a-9cf4-496a-b4d8-162e6482fda0 req-4acb0280-1a78-4dc4-9300-089b3ea01afb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Received event network-vif-plugged-a6b7b735-6f8d-4009-8d41-26fc5a89629e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.900 226310 DEBUG oslo_concurrency.lockutils [req-6a5e990a-9cf4-496a-b4d8-162e6482fda0 req-4acb0280-1a78-4dc4-9300-089b3ea01afb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5a1e0d67-b865-42ce-b195-7bfea62954af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.901 226310 DEBUG oslo_concurrency.lockutils [req-6a5e990a-9cf4-496a-b4d8-162e6482fda0 req-4acb0280-1a78-4dc4-9300-089b3ea01afb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5a1e0d67-b865-42ce-b195-7bfea62954af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.901 226310 DEBUG oslo_concurrency.lockutils [req-6a5e990a-9cf4-496a-b4d8-162e6482fda0 req-4acb0280-1a78-4dc4-9300-089b3ea01afb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5a1e0d67-b865-42ce-b195-7bfea62954af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.902 226310 DEBUG nova.compute.manager [req-6a5e990a-9cf4-496a-b4d8-162e6482fda0 req-4acb0280-1a78-4dc4-9300-089b3ea01afb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Processing event network-vif-plugged-a6b7b735-6f8d-4009-8d41-26fc5a89629e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.903 226310 DEBUG nova.compute.manager [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.907 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404640.907093, 5a1e0d67-b865-42ce-b195-7bfea62954af => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.908 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.911 226310 DEBUG nova.virt.libvirt.driver [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.915 226310 INFO nova.virt.libvirt.driver [-] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Instance spawned successfully.#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.916 226310 DEBUG nova.virt.libvirt.driver [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.924 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.935 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.944 226310 DEBUG nova.virt.libvirt.driver [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.945 226310 DEBUG nova.virt.libvirt.driver [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.945 226310 DEBUG nova.virt.libvirt.driver [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.946 226310 DEBUG nova.virt.libvirt.driver [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.947 226310 DEBUG nova.virt.libvirt.driver [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:00 np0005539564 nova_compute[226295]: 2025-11-29 08:24:00.948 226310 DEBUG nova.virt.libvirt.driver [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:01 np0005539564 nova_compute[226295]: 2025-11-29 08:24:01.007 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:24:01 np0005539564 nova_compute[226295]: 2025-11-29 08:24:01.239 226310 INFO nova.compute.manager [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Took 12.06 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:24:01 np0005539564 nova_compute[226295]: 2025-11-29 08:24:01.240 226310 DEBUG nova.compute.manager [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:01.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:01 np0005539564 nova_compute[226295]: 2025-11-29 08:24:01.730 226310 INFO nova.compute.manager [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Took 14.03 seconds to build instance.#033[00m
Nov 29 03:24:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e341 e341: 3 total, 3 up, 3 in
Nov 29 03:24:02 np0005539564 nova_compute[226295]: 2025-11-29 08:24:02.140 226310 DEBUG oslo_concurrency.lockutils [None req-e67f58a7-295f-4647-ab80-a09d383176bc 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "5a1e0d67-b865-42ce-b195-7bfea62954af" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.412s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:02.215 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:24:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:24:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:24:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:24:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:02.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.018 226310 DEBUG oslo_concurrency.lockutils [None req-e22c0661-18c7-4ba4-bca9-122e35e0c4ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "5a1e0d67-b865-42ce-b195-7bfea62954af" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.019 226310 DEBUG oslo_concurrency.lockutils [None req-e22c0661-18c7-4ba4-bca9-122e35e0c4ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "5a1e0d67-b865-42ce-b195-7bfea62954af" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.019 226310 DEBUG oslo_concurrency.lockutils [None req-e22c0661-18c7-4ba4-bca9-122e35e0c4ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "5a1e0d67-b865-42ce-b195-7bfea62954af-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.019 226310 DEBUG oslo_concurrency.lockutils [None req-e22c0661-18c7-4ba4-bca9-122e35e0c4ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "5a1e0d67-b865-42ce-b195-7bfea62954af-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.019 226310 DEBUG oslo_concurrency.lockutils [None req-e22c0661-18c7-4ba4-bca9-122e35e0c4ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "5a1e0d67-b865-42ce-b195-7bfea62954af-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.020 226310 INFO nova.compute.manager [None req-e22c0661-18c7-4ba4-bca9-122e35e0c4ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Terminating instance#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.021 226310 DEBUG nova.compute.manager [None req-e22c0661-18c7-4ba4-bca9-122e35e0c4ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.060 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:03 np0005539564 kernel: tapa6b7b735-6f (unregistering): left promiscuous mode
Nov 29 03:24:03 np0005539564 NetworkManager[48997]: <info>  [1764404643.1159] device (tapa6b7b735-6f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.126 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:03 np0005539564 ovn_controller[130591]: 2025-11-29T08:24:03Z|00566|binding|INFO|Releasing lport a6b7b735-6f8d-4009-8d41-26fc5a89629e from this chassis (sb_readonly=0)
Nov 29 03:24:03 np0005539564 ovn_controller[130591]: 2025-11-29T08:24:03Z|00567|binding|INFO|Setting lport a6b7b735-6f8d-4009-8d41-26fc5a89629e down in Southbound
Nov 29 03:24:03 np0005539564 ovn_controller[130591]: 2025-11-29T08:24:03Z|00568|binding|INFO|Removing iface tapa6b7b735-6f ovn-installed in OVS
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.128 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:03.134 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:8a:00 10.100.0.13'], port_security=['fa:16:3e:9a:8a:00 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5a1e0d67-b865-42ce-b195-7bfea62954af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-258f6232-6798-4075-adab-c07c4559ef67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9406fbc6fef486fa5b0e79549e78d00', 'neutron:revision_number': '4', 'neutron:security_group_ids': '43e688c9-ebb1-4f07-b4e2-f54248247a71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aac86bc6-5ac8-43c8-9a9b-f058a154968b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=a6b7b735-6f8d-4009-8d41-26fc5a89629e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:03.136 139780 INFO neutron.agent.ovn.metadata.agent [-] Port a6b7b735-6f8d-4009-8d41-26fc5a89629e in datapath 258f6232-6798-4075-adab-c07c4559ef67 unbound from our chassis#033[00m
Nov 29 03:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:03.137 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 258f6232-6798-4075-adab-c07c4559ef67, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:03.139 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ee7317-413f-4146-b4b2-f693313b7bfe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:03.139 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-258f6232-6798-4075-adab-c07c4559ef67 namespace which is not needed anymore#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.145 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.156 226310 DEBUG nova.compute.manager [req-aa54a094-4431-4763-8657-a24f5b9bd30c req-11406a16-80b6-497c-81f9-c034d4b6da6f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Received event network-vif-plugged-a6b7b735-6f8d-4009-8d41-26fc5a89629e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.156 226310 DEBUG oslo_concurrency.lockutils [req-aa54a094-4431-4763-8657-a24f5b9bd30c req-11406a16-80b6-497c-81f9-c034d4b6da6f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5a1e0d67-b865-42ce-b195-7bfea62954af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.156 226310 DEBUG oslo_concurrency.lockutils [req-aa54a094-4431-4763-8657-a24f5b9bd30c req-11406a16-80b6-497c-81f9-c034d4b6da6f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5a1e0d67-b865-42ce-b195-7bfea62954af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.156 226310 DEBUG oslo_concurrency.lockutils [req-aa54a094-4431-4763-8657-a24f5b9bd30c req-11406a16-80b6-497c-81f9-c034d4b6da6f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5a1e0d67-b865-42ce-b195-7bfea62954af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.157 226310 DEBUG nova.compute.manager [req-aa54a094-4431-4763-8657-a24f5b9bd30c req-11406a16-80b6-497c-81f9-c034d4b6da6f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] No waiting events found dispatching network-vif-plugged-a6b7b735-6f8d-4009-8d41-26fc5a89629e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.157 226310 WARNING nova.compute.manager [req-aa54a094-4431-4763-8657-a24f5b9bd30c req-11406a16-80b6-497c-81f9-c034d4b6da6f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Received unexpected event network-vif-plugged-a6b7b735-6f8d-4009-8d41-26fc5a89629e for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:24:03 np0005539564 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000009a.scope: Deactivated successfully.
Nov 29 03:24:03 np0005539564 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000009a.scope: Consumed 3.158s CPU time.
Nov 29 03:24:03 np0005539564 systemd-machined[190128]: Machine qemu-70-instance-0000009a terminated.
Nov 29 03:24:03 np0005539564 kernel: tapa6b7b735-6f: entered promiscuous mode
Nov 29 03:24:03 np0005539564 NetworkManager[48997]: <info>  [1764404643.2791] manager: (tapa6b7b735-6f): new Tun device (/org/freedesktop/NetworkManager/Devices/260)
Nov 29 03:24:03 np0005539564 kernel: tapa6b7b735-6f (unregistering): left promiscuous mode
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.281 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:03 np0005539564 ovn_controller[130591]: 2025-11-29T08:24:03Z|00569|binding|INFO|Claiming lport a6b7b735-6f8d-4009-8d41-26fc5a89629e for this chassis.
Nov 29 03:24:03 np0005539564 ovn_controller[130591]: 2025-11-29T08:24:03Z|00570|binding|INFO|a6b7b735-6f8d-4009-8d41-26fc5a89629e: Claiming fa:16:3e:9a:8a:00 10.100.0.13
Nov 29 03:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:03.301 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:8a:00 10.100.0.13'], port_security=['fa:16:3e:9a:8a:00 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5a1e0d67-b865-42ce-b195-7bfea62954af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-258f6232-6798-4075-adab-c07c4559ef67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9406fbc6fef486fa5b0e79549e78d00', 'neutron:revision_number': '4', 'neutron:security_group_ids': '43e688c9-ebb1-4f07-b4e2-f54248247a71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aac86bc6-5ac8-43c8-9a9b-f058a154968b, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=a6b7b735-6f8d-4009-8d41-26fc5a89629e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.307 226310 INFO nova.virt.libvirt.driver [-] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Instance destroyed successfully.#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.308 226310 DEBUG nova.objects.instance [None req-e22c0661-18c7-4ba4-bca9-122e35e0c4ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lazy-loading 'resources' on Instance uuid 5a1e0d67-b865-42ce-b195-7bfea62954af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.309 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.311 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:03 np0005539564 ovn_controller[130591]: 2025-11-29T08:24:03Z|00571|binding|INFO|Releasing lport a6b7b735-6f8d-4009-8d41-26fc5a89629e from this chassis (sb_readonly=0)
Nov 29 03:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:03.321 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:8a:00 10.100.0.13'], port_security=['fa:16:3e:9a:8a:00 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5a1e0d67-b865-42ce-b195-7bfea62954af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-258f6232-6798-4075-adab-c07c4559ef67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9406fbc6fef486fa5b0e79549e78d00', 'neutron:revision_number': '4', 'neutron:security_group_ids': '43e688c9-ebb1-4f07-b4e2-f54248247a71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aac86bc6-5ac8-43c8-9a9b-f058a154968b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=a6b7b735-6f8d-4009-8d41-26fc5a89629e) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.322 226310 DEBUG nova.virt.libvirt.vif [None req-e22c0661-18c7-4ba4-bca9-122e35e0c4ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:23:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-761688547',display_name='tempest-ServersNegativeTestJSON-server-761688547',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-761688547',id=154,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:24:01Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d9406fbc6fef486fa5b0e79549e78d00',ramdisk_id='',reservation_id='r-453csrrt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-213437080',owner_user_name='tempest-ServersNegativeTestJSON-213437080-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:24:01Z,user_data=None,user_id='3a37c720b9bb4273b66cd2dce30fbf48',uuid=5a1e0d67-b865-42ce-b195-7bfea62954af,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6b7b735-6f8d-4009-8d41-26fc5a89629e", "address": "fa:16:3e:9a:8a:00", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6b7b735-6f", "ovs_interfaceid": "a6b7b735-6f8d-4009-8d41-26fc5a89629e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.323 226310 DEBUG nova.network.os_vif_util [None req-e22c0661-18c7-4ba4-bca9-122e35e0c4ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Converting VIF {"id": "a6b7b735-6f8d-4009-8d41-26fc5a89629e", "address": "fa:16:3e:9a:8a:00", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6b7b735-6f", "ovs_interfaceid": "a6b7b735-6f8d-4009-8d41-26fc5a89629e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.324 226310 DEBUG nova.network.os_vif_util [None req-e22c0661-18c7-4ba4-bca9-122e35e0c4ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:8a:00,bridge_name='br-int',has_traffic_filtering=True,id=a6b7b735-6f8d-4009-8d41-26fc5a89629e,network=Network(258f6232-6798-4075-adab-c07c4559ef67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6b7b735-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.324 226310 DEBUG os_vif [None req-e22c0661-18c7-4ba4-bca9-122e35e0c4ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:8a:00,bridge_name='br-int',has_traffic_filtering=True,id=a6b7b735-6f8d-4009-8d41-26fc5a89629e,network=Network(258f6232-6798-4075-adab-c07c4559ef67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6b7b735-6f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.327 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.328 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6b7b735-6f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.329 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.330 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.331 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.333 226310 INFO os_vif [None req-e22c0661-18c7-4ba4-bca9-122e35e0c4ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:8a:00,bridge_name='br-int',has_traffic_filtering=True,id=a6b7b735-6f8d-4009-8d41-26fc5a89629e,network=Network(258f6232-6798-4075-adab-c07c4559ef67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6b7b735-6f')#033[00m
Nov 29 03:24:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:03.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.350 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:03 np0005539564 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[281652]: [NOTICE]   (281657) : haproxy version is 2.8.14-c23fe91
Nov 29 03:24:03 np0005539564 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[281652]: [NOTICE]   (281657) : path to executable is /usr/sbin/haproxy
Nov 29 03:24:03 np0005539564 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[281652]: [WARNING]  (281657) : Exiting Master process...
Nov 29 03:24:03 np0005539564 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[281652]: [WARNING]  (281657) : Exiting Master process...
Nov 29 03:24:03 np0005539564 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[281652]: [ALERT]    (281657) : Current worker (281659) exited with code 143 (Terminated)
Nov 29 03:24:03 np0005539564 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[281652]: [WARNING]  (281657) : All workers exited. Exiting... (0)
Nov 29 03:24:03 np0005539564 systemd[1]: libpod-a18cb3ce9c8b57139cc1a26886e68ba356f259fa8e361bbc763ef454e0caa8ab.scope: Deactivated successfully.
Nov 29 03:24:03 np0005539564 podman[281929]: 2025-11-29 08:24:03.542081046 +0000 UTC m=+0.309581410 container died a18cb3ce9c8b57139cc1a26886e68ba356f259fa8e361bbc763ef454e0caa8ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:24:03 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a18cb3ce9c8b57139cc1a26886e68ba356f259fa8e361bbc763ef454e0caa8ab-userdata-shm.mount: Deactivated successfully.
Nov 29 03:24:03 np0005539564 systemd[1]: var-lib-containers-storage-overlay-df382b685961fcd7311e0f89755feb96d2dbbcc1b99ee72af94e804b28111b21-merged.mount: Deactivated successfully.
Nov 29 03:24:03 np0005539564 podman[281929]: 2025-11-29 08:24:03.704438785 +0000 UTC m=+0.471939139 container cleanup a18cb3ce9c8b57139cc1a26886e68ba356f259fa8e361bbc763ef454e0caa8ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:24:03 np0005539564 systemd[1]: libpod-conmon-a18cb3ce9c8b57139cc1a26886e68ba356f259fa8e361bbc763ef454e0caa8ab.scope: Deactivated successfully.
Nov 29 03:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:03.737 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:03.737 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:03.738 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:03 np0005539564 podman[281999]: 2025-11-29 08:24:03.907756982 +0000 UTC m=+0.173099031 container remove a18cb3ce9c8b57139cc1a26886e68ba356f259fa8e361bbc763ef454e0caa8ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:03.918 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6533e46b-f650-4980-9f7e-ba9eae6489b6]: (4, ('Sat Nov 29 08:24:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67 (a18cb3ce9c8b57139cc1a26886e68ba356f259fa8e361bbc763ef454e0caa8ab)\na18cb3ce9c8b57139cc1a26886e68ba356f259fa8e361bbc763ef454e0caa8ab\nSat Nov 29 08:24:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67 (a18cb3ce9c8b57139cc1a26886e68ba356f259fa8e361bbc763ef454e0caa8ab)\na18cb3ce9c8b57139cc1a26886e68ba356f259fa8e361bbc763ef454e0caa8ab\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:03.921 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c609e5d7-7dfc-48be-9079-61c342c0802e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:03.922 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap258f6232-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.925 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:03 np0005539564 kernel: tap258f6232-60: left promiscuous mode
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.928 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:03.933 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[42d886a9-b58d-468d-b112-e580a770021b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:03 np0005539564 nova_compute[226295]: 2025-11-29 08:24:03.953 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:03.962 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[13dc71b5-1be1-477d-85ef-383539a21ea3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:03.964 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[785ef24b-30cb-4d00-8169-0480a04e44d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:03.993 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0a5ea0b6-77d3-4bfc-b0fe-3b0358214b12]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 761457, 'reachable_time': 19969, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282014, 'error': None, 'target': 'ovnmeta-258f6232-6798-4075-adab-c07c4559ef67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:03.997 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-258f6232-6798-4075-adab-c07c4559ef67 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:03.997 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[8b41f0f7-6803-4c09-881a-a7f2b3fab916]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:03.998 139780 INFO neutron.agent.ovn.metadata.agent [-] Port a6b7b735-6f8d-4009-8d41-26fc5a89629e in datapath 258f6232-6798-4075-adab-c07c4559ef67 unbound from our chassis#033[00m
Nov 29 03:24:03 np0005539564 systemd[1]: run-netns-ovnmeta\x2d258f6232\x2d6798\x2d4075\x2dadab\x2dc07c4559ef67.mount: Deactivated successfully.
Nov 29 03:24:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:04.000 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 258f6232-6798-4075-adab-c07c4559ef67, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:24:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:04.001 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2738cb60-5180-4872-9d36-0befe2418a0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:04.002 139780 INFO neutron.agent.ovn.metadata.agent [-] Port a6b7b735-6f8d-4009-8d41-26fc5a89629e in datapath 258f6232-6798-4075-adab-c07c4559ef67 unbound from our chassis#033[00m
Nov 29 03:24:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:04.004 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 258f6232-6798-4075-adab-c07c4559ef67, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:24:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:04.005 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a2fcc36c-73d8-4c88-9718-766bc6201952]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:24:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:04.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:24:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:05.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:05 np0005539564 nova_compute[226295]: 2025-11-29 08:24:05.877 226310 DEBUG nova.compute.manager [req-cc2a3863-7e8b-45e0-9ead-99922dc571ab req-261818cd-b1a5-43f9-89e1-9505f0c2c6f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Received event network-vif-unplugged-a6b7b735-6f8d-4009-8d41-26fc5a89629e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:05 np0005539564 nova_compute[226295]: 2025-11-29 08:24:05.878 226310 DEBUG oslo_concurrency.lockutils [req-cc2a3863-7e8b-45e0-9ead-99922dc571ab req-261818cd-b1a5-43f9-89e1-9505f0c2c6f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5a1e0d67-b865-42ce-b195-7bfea62954af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:05 np0005539564 nova_compute[226295]: 2025-11-29 08:24:05.878 226310 DEBUG oslo_concurrency.lockutils [req-cc2a3863-7e8b-45e0-9ead-99922dc571ab req-261818cd-b1a5-43f9-89e1-9505f0c2c6f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5a1e0d67-b865-42ce-b195-7bfea62954af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:05 np0005539564 nova_compute[226295]: 2025-11-29 08:24:05.878 226310 DEBUG oslo_concurrency.lockutils [req-cc2a3863-7e8b-45e0-9ead-99922dc571ab req-261818cd-b1a5-43f9-89e1-9505f0c2c6f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5a1e0d67-b865-42ce-b195-7bfea62954af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:05 np0005539564 nova_compute[226295]: 2025-11-29 08:24:05.879 226310 DEBUG nova.compute.manager [req-cc2a3863-7e8b-45e0-9ead-99922dc571ab req-261818cd-b1a5-43f9-89e1-9505f0c2c6f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] No waiting events found dispatching network-vif-unplugged-a6b7b735-6f8d-4009-8d41-26fc5a89629e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:24:05 np0005539564 nova_compute[226295]: 2025-11-29 08:24:05.879 226310 DEBUG nova.compute.manager [req-cc2a3863-7e8b-45e0-9ead-99922dc571ab req-261818cd-b1a5-43f9-89e1-9505f0c2c6f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Received event network-vif-unplugged-a6b7b735-6f8d-4009-8d41-26fc5a89629e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:24:05 np0005539564 nova_compute[226295]: 2025-11-29 08:24:05.880 226310 DEBUG nova.compute.manager [req-cc2a3863-7e8b-45e0-9ead-99922dc571ab req-261818cd-b1a5-43f9-89e1-9505f0c2c6f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Received event network-vif-plugged-a6b7b735-6f8d-4009-8d41-26fc5a89629e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:05 np0005539564 nova_compute[226295]: 2025-11-29 08:24:05.880 226310 DEBUG oslo_concurrency.lockutils [req-cc2a3863-7e8b-45e0-9ead-99922dc571ab req-261818cd-b1a5-43f9-89e1-9505f0c2c6f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5a1e0d67-b865-42ce-b195-7bfea62954af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:05 np0005539564 nova_compute[226295]: 2025-11-29 08:24:05.880 226310 DEBUG oslo_concurrency.lockutils [req-cc2a3863-7e8b-45e0-9ead-99922dc571ab req-261818cd-b1a5-43f9-89e1-9505f0c2c6f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5a1e0d67-b865-42ce-b195-7bfea62954af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:05 np0005539564 nova_compute[226295]: 2025-11-29 08:24:05.881 226310 DEBUG oslo_concurrency.lockutils [req-cc2a3863-7e8b-45e0-9ead-99922dc571ab req-261818cd-b1a5-43f9-89e1-9505f0c2c6f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5a1e0d67-b865-42ce-b195-7bfea62954af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:05 np0005539564 nova_compute[226295]: 2025-11-29 08:24:05.881 226310 DEBUG nova.compute.manager [req-cc2a3863-7e8b-45e0-9ead-99922dc571ab req-261818cd-b1a5-43f9-89e1-9505f0c2c6f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] No waiting events found dispatching network-vif-plugged-a6b7b735-6f8d-4009-8d41-26fc5a89629e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:24:05 np0005539564 nova_compute[226295]: 2025-11-29 08:24:05.881 226310 WARNING nova.compute.manager [req-cc2a3863-7e8b-45e0-9ead-99922dc571ab req-261818cd-b1a5-43f9-89e1-9505f0c2c6f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Received unexpected event network-vif-plugged-a6b7b735-6f8d-4009-8d41-26fc5a89629e for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:24:05 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:24:05 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:24:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e342 e342: 3 total, 3 up, 3 in
Nov 29 03:24:05 np0005539564 nova_compute[226295]: 2025-11-29 08:24:05.997 226310 INFO nova.virt.libvirt.driver [None req-e22c0661-18c7-4ba4-bca9-122e35e0c4ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Deleting instance files /var/lib/nova/instances/5a1e0d67-b865-42ce-b195-7bfea62954af_del#033[00m
Nov 29 03:24:05 np0005539564 nova_compute[226295]: 2025-11-29 08:24:05.998 226310 INFO nova.virt.libvirt.driver [None req-e22c0661-18c7-4ba4-bca9-122e35e0c4ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Deletion of /var/lib/nova/instances/5a1e0d67-b865-42ce-b195-7bfea62954af_del complete#033[00m
Nov 29 03:24:06 np0005539564 nova_compute[226295]: 2025-11-29 08:24:06.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:06.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:06 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:24:06 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:24:06 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:24:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:07.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:07 np0005539564 nova_compute[226295]: 2025-11-29 08:24:07.906 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:07 np0005539564 nova_compute[226295]: 2025-11-29 08:24:07.906 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:07 np0005539564 nova_compute[226295]: 2025-11-29 08:24:07.907 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:07 np0005539564 nova_compute[226295]: 2025-11-29 08:24:07.907 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:24:07 np0005539564 nova_compute[226295]: 2025-11-29 08:24:07.907 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:07 np0005539564 nova_compute[226295]: 2025-11-29 08:24:07.981 226310 INFO nova.compute.manager [None req-e22c0661-18c7-4ba4-bca9-122e35e0c4ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Took 4.96 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:24:07 np0005539564 nova_compute[226295]: 2025-11-29 08:24:07.983 226310 DEBUG oslo.service.loopingcall [None req-e22c0661-18c7-4ba4-bca9-122e35e0c4ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:24:07 np0005539564 nova_compute[226295]: 2025-11-29 08:24:07.991 226310 DEBUG nova.compute.manager [-] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:24:07 np0005539564 nova_compute[226295]: 2025-11-29 08:24:07.992 226310 DEBUG nova.network.neutron [-] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:24:08 np0005539564 nova_compute[226295]: 2025-11-29 08:24:08.064 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:08 np0005539564 nova_compute[226295]: 2025-11-29 08:24:08.330 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:24:08 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2651255764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:24:08 np0005539564 nova_compute[226295]: 2025-11-29 08:24:08.564 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:08 np0005539564 nova_compute[226295]: 2025-11-29 08:24:08.765 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:24:08 np0005539564 nova_compute[226295]: 2025-11-29 08:24:08.766 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4337MB free_disk=20.932586669921875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:24:08 np0005539564 nova_compute[226295]: 2025-11-29 08:24:08.767 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:08 np0005539564 nova_compute[226295]: 2025-11-29 08:24:08.767 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:08.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:08 np0005539564 nova_compute[226295]: 2025-11-29 08:24:08.980 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 5a1e0d67-b865-42ce-b195-7bfea62954af actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:24:08 np0005539564 nova_compute[226295]: 2025-11-29 08:24:08.980 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:24:08 np0005539564 nova_compute[226295]: 2025-11-29 08:24:08.981 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:24:09 np0005539564 nova_compute[226295]: 2025-11-29 08:24:09.106 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:09 np0005539564 nova_compute[226295]: 2025-11-29 08:24:09.268 226310 DEBUG nova.network.neutron [-] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:24:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:09.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:09 np0005539564 nova_compute[226295]: 2025-11-29 08:24:09.374 226310 INFO nova.compute.manager [-] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Took 1.38 seconds to deallocate network for instance.#033[00m
Nov 29 03:24:09 np0005539564 nova_compute[226295]: 2025-11-29 08:24:09.386 226310 DEBUG nova.compute.manager [req-ab194e64-3cf8-41e4-8b92-d8885d3c48f7 req-793aa85c-fbf4-4dd0-a4c5-cd84f889bf06 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Received event network-vif-deleted-a6b7b735-6f8d-4009-8d41-26fc5a89629e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:09 np0005539564 nova_compute[226295]: 2025-11-29 08:24:09.446 226310 DEBUG oslo_concurrency.lockutils [None req-e22c0661-18c7-4ba4-bca9-122e35e0c4ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:24:09 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1975245960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:24:09 np0005539564 nova_compute[226295]: 2025-11-29 08:24:09.553 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:09 np0005539564 nova_compute[226295]: 2025-11-29 08:24:09.561 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:24:09 np0005539564 nova_compute[226295]: 2025-11-29 08:24:09.612 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:24:09 np0005539564 nova_compute[226295]: 2025-11-29 08:24:09.868 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:24:09 np0005539564 nova_compute[226295]: 2025-11-29 08:24:09.868 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:09 np0005539564 nova_compute[226295]: 2025-11-29 08:24:09.869 226310 DEBUG oslo_concurrency.lockutils [None req-e22c0661-18c7-4ba4-bca9-122e35e0c4ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:09 np0005539564 nova_compute[226295]: 2025-11-29 08:24:09.982 226310 DEBUG oslo_concurrency.processutils [None req-e22c0661-18c7-4ba4-bca9-122e35e0c4ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:24:10 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2023723965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:24:10 np0005539564 nova_compute[226295]: 2025-11-29 08:24:10.482 226310 DEBUG oslo_concurrency.processutils [None req-e22c0661-18c7-4ba4-bca9-122e35e0c4ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:10 np0005539564 nova_compute[226295]: 2025-11-29 08:24:10.489 226310 DEBUG nova.compute.provider_tree [None req-e22c0661-18c7-4ba4-bca9-122e35e0c4ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:24:10 np0005539564 nova_compute[226295]: 2025-11-29 08:24:10.513 226310 DEBUG nova.scheduler.client.report [None req-e22c0661-18c7-4ba4-bca9-122e35e0c4ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:24:10 np0005539564 nova_compute[226295]: 2025-11-29 08:24:10.554 226310 DEBUG oslo_concurrency.lockutils [None req-e22c0661-18c7-4ba4-bca9-122e35e0c4ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:10 np0005539564 nova_compute[226295]: 2025-11-29 08:24:10.646 226310 INFO nova.scheduler.client.report [None req-e22c0661-18c7-4ba4-bca9-122e35e0c4ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Deleted allocations for instance 5a1e0d67-b865-42ce-b195-7bfea62954af#033[00m
Nov 29 03:24:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:10.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:10 np0005539564 nova_compute[226295]: 2025-11-29 08:24:10.858 226310 DEBUG oslo_concurrency.lockutils [None req-e22c0661-18c7-4ba4-bca9-122e35e0c4ad 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "5a1e0d67-b865-42ce-b195-7bfea62954af" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:11.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:24:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:12.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:24:13 np0005539564 nova_compute[226295]: 2025-11-29 08:24:13.067 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:13 np0005539564 nova_compute[226295]: 2025-11-29 08:24:13.332 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:13.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:13 np0005539564 nova_compute[226295]: 2025-11-29 08:24:13.553 226310 DEBUG oslo_concurrency.lockutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:13 np0005539564 nova_compute[226295]: 2025-11-29 08:24:13.553 226310 DEBUG oslo_concurrency.lockutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:13 np0005539564 nova_compute[226295]: 2025-11-29 08:24:13.589 226310 DEBUG nova.compute.manager [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:24:13 np0005539564 nova_compute[226295]: 2025-11-29 08:24:13.656 226310 DEBUG oslo_concurrency.lockutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:13 np0005539564 nova_compute[226295]: 2025-11-29 08:24:13.657 226310 DEBUG oslo_concurrency.lockutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:13 np0005539564 nova_compute[226295]: 2025-11-29 08:24:13.666 226310 DEBUG nova.virt.hardware [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:24:13 np0005539564 nova_compute[226295]: 2025-11-29 08:24:13.667 226310 INFO nova.compute.claims [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:24:13 np0005539564 nova_compute[226295]: 2025-11-29 08:24:13.825 226310 DEBUG oslo_concurrency.processutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:14 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:24:14 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:24:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e343 e343: 3 total, 3 up, 3 in
Nov 29 03:24:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:24:14 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4089003676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:24:14 np0005539564 nova_compute[226295]: 2025-11-29 08:24:14.303 226310 DEBUG oslo_concurrency.processutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:14 np0005539564 nova_compute[226295]: 2025-11-29 08:24:14.314 226310 DEBUG nova.compute.provider_tree [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:24:14 np0005539564 nova_compute[226295]: 2025-11-29 08:24:14.365 226310 DEBUG nova.scheduler.client.report [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:24:14 np0005539564 nova_compute[226295]: 2025-11-29 08:24:14.656 226310 DEBUG oslo_concurrency.lockutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.999s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:14 np0005539564 nova_compute[226295]: 2025-11-29 08:24:14.657 226310 DEBUG nova.compute.manager [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:24:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:14.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:15.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:15 np0005539564 nova_compute[226295]: 2025-11-29 08:24:15.645 226310 DEBUG nova.compute.manager [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:24:15 np0005539564 nova_compute[226295]: 2025-11-29 08:24:15.645 226310 DEBUG nova.network.neutron [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:24:15 np0005539564 nova_compute[226295]: 2025-11-29 08:24:15.735 226310 INFO nova.virt.libvirt.driver [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:24:15 np0005539564 nova_compute[226295]: 2025-11-29 08:24:15.757 226310 DEBUG nova.compute.manager [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:24:16 np0005539564 nova_compute[226295]: 2025-11-29 08:24:16.104 226310 DEBUG nova.compute.manager [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:24:16 np0005539564 nova_compute[226295]: 2025-11-29 08:24:16.106 226310 DEBUG nova.virt.libvirt.driver [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:24:16 np0005539564 nova_compute[226295]: 2025-11-29 08:24:16.106 226310 INFO nova.virt.libvirt.driver [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Creating image(s)#033[00m
Nov 29 03:24:16 np0005539564 nova_compute[226295]: 2025-11-29 08:24:16.135 226310 DEBUG nova.storage.rbd_utils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image c3e98a32-fd92-4873-a060-88aaf76bf1fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:16 np0005539564 nova_compute[226295]: 2025-11-29 08:24:16.170 226310 DEBUG nova.storage.rbd_utils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image c3e98a32-fd92-4873-a060-88aaf76bf1fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:16 np0005539564 nova_compute[226295]: 2025-11-29 08:24:16.202 226310 DEBUG nova.storage.rbd_utils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image c3e98a32-fd92-4873-a060-88aaf76bf1fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:16 np0005539564 nova_compute[226295]: 2025-11-29 08:24:16.206 226310 DEBUG oslo_concurrency.processutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:16 np0005539564 nova_compute[226295]: 2025-11-29 08:24:16.272 226310 DEBUG oslo_concurrency.processutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:16 np0005539564 nova_compute[226295]: 2025-11-29 08:24:16.273 226310 DEBUG oslo_concurrency.lockutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:16 np0005539564 nova_compute[226295]: 2025-11-29 08:24:16.274 226310 DEBUG oslo_concurrency.lockutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:16 np0005539564 nova_compute[226295]: 2025-11-29 08:24:16.274 226310 DEBUG oslo_concurrency.lockutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:16 np0005539564 nova_compute[226295]: 2025-11-29 08:24:16.305 226310 DEBUG nova.storage.rbd_utils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image c3e98a32-fd92-4873-a060-88aaf76bf1fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:16 np0005539564 nova_compute[226295]: 2025-11-29 08:24:16.310 226310 DEBUG oslo_concurrency.processutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf c3e98a32-fd92-4873-a060-88aaf76bf1fc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:16 np0005539564 nova_compute[226295]: 2025-11-29 08:24:16.645 226310 DEBUG nova.policy [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dfcf2db50da745c09bffcf32ec016854', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '09cc8c3182d845f597dda064f9013941', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:24:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:16.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:16 np0005539564 nova_compute[226295]: 2025-11-29 08:24:16.885 226310 DEBUG oslo_concurrency.processutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf c3e98a32-fd92-4873-a060-88aaf76bf1fc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:16 np0005539564 nova_compute[226295]: 2025-11-29 08:24:16.997 226310 DEBUG nova.storage.rbd_utils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] resizing rbd image c3e98a32-fd92-4873-a060-88aaf76bf1fc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:24:17 np0005539564 nova_compute[226295]: 2025-11-29 08:24:17.175 226310 DEBUG nova.objects.instance [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'migration_context' on Instance uuid c3e98a32-fd92-4873-a060-88aaf76bf1fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:17 np0005539564 nova_compute[226295]: 2025-11-29 08:24:17.204 226310 DEBUG nova.virt.libvirt.driver [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:24:17 np0005539564 nova_compute[226295]: 2025-11-29 08:24:17.205 226310 DEBUG nova.virt.libvirt.driver [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Ensure instance console log exists: /var/lib/nova/instances/c3e98a32-fd92-4873-a060-88aaf76bf1fc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:24:17 np0005539564 nova_compute[226295]: 2025-11-29 08:24:17.206 226310 DEBUG oslo_concurrency.lockutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:17 np0005539564 nova_compute[226295]: 2025-11-29 08:24:17.206 226310 DEBUG oslo_concurrency.lockutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:17 np0005539564 nova_compute[226295]: 2025-11-29 08:24:17.207 226310 DEBUG oslo_concurrency.lockutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:17.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:17 np0005539564 podman[282322]: 2025-11-29 08:24:17.559787086 +0000 UTC m=+0.103827948 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 03:24:17 np0005539564 podman[282323]: 2025-11-29 08:24:17.579282313 +0000 UTC m=+0.118766061 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:24:17 np0005539564 nova_compute[226295]: 2025-11-29 08:24:17.591 226310 DEBUG nova.network.neutron [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Successfully created port: a7a9e323-49eb-415e-85cd-322403ba6517 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:24:17 np0005539564 podman[282321]: 2025-11-29 08:24:17.643228502 +0000 UTC m=+0.188056775 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 29 03:24:18 np0005539564 nova_compute[226295]: 2025-11-29 08:24:18.069 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:18 np0005539564 nova_compute[226295]: 2025-11-29 08:24:18.306 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404643.3043852, 5a1e0d67-b865-42ce-b195-7bfea62954af => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:18 np0005539564 nova_compute[226295]: 2025-11-29 08:24:18.306 226310 INFO nova.compute.manager [-] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:24:18 np0005539564 nova_compute[226295]: 2025-11-29 08:24:18.334 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:18 np0005539564 nova_compute[226295]: 2025-11-29 08:24:18.343 226310 DEBUG nova.compute.manager [None req-54015e18-5957-4b1c-8379-78cda5b85236 - - - - - -] [instance: 5a1e0d67-b865-42ce-b195-7bfea62954af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:18.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:18 np0005539564 nova_compute[226295]: 2025-11-29 08:24:18.863 226310 DEBUG nova.network.neutron [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Successfully updated port: a7a9e323-49eb-415e-85cd-322403ba6517 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:24:18 np0005539564 nova_compute[226295]: 2025-11-29 08:24:18.896 226310 DEBUG oslo_concurrency.lockutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "refresh_cache-c3e98a32-fd92-4873-a060-88aaf76bf1fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:24:18 np0005539564 nova_compute[226295]: 2025-11-29 08:24:18.897 226310 DEBUG oslo_concurrency.lockutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquired lock "refresh_cache-c3e98a32-fd92-4873-a060-88aaf76bf1fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:24:18 np0005539564 nova_compute[226295]: 2025-11-29 08:24:18.897 226310 DEBUG nova.network.neutron [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:24:19 np0005539564 nova_compute[226295]: 2025-11-29 08:24:19.037 226310 DEBUG nova.compute.manager [req-35c6fbe5-aae0-40ac-b1c1-49cb7bb2c692 req-bdaf9036-7035-4b1d-af3b-3147ccbffba0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Received event network-changed-a7a9e323-49eb-415e-85cd-322403ba6517 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:19 np0005539564 nova_compute[226295]: 2025-11-29 08:24:19.038 226310 DEBUG nova.compute.manager [req-35c6fbe5-aae0-40ac-b1c1-49cb7bb2c692 req-bdaf9036-7035-4b1d-af3b-3147ccbffba0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Refreshing instance network info cache due to event network-changed-a7a9e323-49eb-415e-85cd-322403ba6517. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:24:19 np0005539564 nova_compute[226295]: 2025-11-29 08:24:19.038 226310 DEBUG oslo_concurrency.lockutils [req-35c6fbe5-aae0-40ac-b1c1-49cb7bb2c692 req-bdaf9036-7035-4b1d-af3b-3147ccbffba0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-c3e98a32-fd92-4873-a060-88aaf76bf1fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:24:19 np0005539564 nova_compute[226295]: 2025-11-29 08:24:19.087 226310 DEBUG nova.network.neutron [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:24:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:19.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:20.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:21.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:21 np0005539564 nova_compute[226295]: 2025-11-29 08:24:21.723 226310 DEBUG nova.network.neutron [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Updating instance_info_cache with network_info: [{"id": "a7a9e323-49eb-415e-85cd-322403ba6517", "address": "fa:16:3e:bc:a3:e3", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a9e323-49", "ovs_interfaceid": "a7a9e323-49eb-415e-85cd-322403ba6517", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:24:21 np0005539564 nova_compute[226295]: 2025-11-29 08:24:21.772 226310 DEBUG oslo_concurrency.lockutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Releasing lock "refresh_cache-c3e98a32-fd92-4873-a060-88aaf76bf1fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:24:21 np0005539564 nova_compute[226295]: 2025-11-29 08:24:21.773 226310 DEBUG nova.compute.manager [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Instance network_info: |[{"id": "a7a9e323-49eb-415e-85cd-322403ba6517", "address": "fa:16:3e:bc:a3:e3", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a9e323-49", "ovs_interfaceid": "a7a9e323-49eb-415e-85cd-322403ba6517", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:24:21 np0005539564 nova_compute[226295]: 2025-11-29 08:24:21.773 226310 DEBUG oslo_concurrency.lockutils [req-35c6fbe5-aae0-40ac-b1c1-49cb7bb2c692 req-bdaf9036-7035-4b1d-af3b-3147ccbffba0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-c3e98a32-fd92-4873-a060-88aaf76bf1fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:24:21 np0005539564 nova_compute[226295]: 2025-11-29 08:24:21.773 226310 DEBUG nova.network.neutron [req-35c6fbe5-aae0-40ac-b1c1-49cb7bb2c692 req-bdaf9036-7035-4b1d-af3b-3147ccbffba0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Refreshing network info cache for port a7a9e323-49eb-415e-85cd-322403ba6517 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:24:21 np0005539564 nova_compute[226295]: 2025-11-29 08:24:21.777 226310 DEBUG nova.virt.libvirt.driver [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Start _get_guest_xml network_info=[{"id": "a7a9e323-49eb-415e-85cd-322403ba6517", "address": "fa:16:3e:bc:a3:e3", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a9e323-49", "ovs_interfaceid": "a7a9e323-49eb-415e-85cd-322403ba6517", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:24:21 np0005539564 nova_compute[226295]: 2025-11-29 08:24:21.782 226310 WARNING nova.virt.libvirt.driver [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:24:21 np0005539564 nova_compute[226295]: 2025-11-29 08:24:21.788 226310 DEBUG nova.virt.libvirt.host [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:24:21 np0005539564 nova_compute[226295]: 2025-11-29 08:24:21.789 226310 DEBUG nova.virt.libvirt.host [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:24:21 np0005539564 nova_compute[226295]: 2025-11-29 08:24:21.798 226310 DEBUG nova.virt.libvirt.host [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:24:21 np0005539564 nova_compute[226295]: 2025-11-29 08:24:21.798 226310 DEBUG nova.virt.libvirt.host [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:24:21 np0005539564 nova_compute[226295]: 2025-11-29 08:24:21.800 226310 DEBUG nova.virt.libvirt.driver [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:24:21 np0005539564 nova_compute[226295]: 2025-11-29 08:24:21.801 226310 DEBUG nova.virt.hardware [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:24:21 np0005539564 nova_compute[226295]: 2025-11-29 08:24:21.802 226310 DEBUG nova.virt.hardware [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:24:21 np0005539564 nova_compute[226295]: 2025-11-29 08:24:21.803 226310 DEBUG nova.virt.hardware [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:24:21 np0005539564 nova_compute[226295]: 2025-11-29 08:24:21.803 226310 DEBUG nova.virt.hardware [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:24:21 np0005539564 nova_compute[226295]: 2025-11-29 08:24:21.804 226310 DEBUG nova.virt.hardware [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:24:21 np0005539564 nova_compute[226295]: 2025-11-29 08:24:21.804 226310 DEBUG nova.virt.hardware [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:24:21 np0005539564 nova_compute[226295]: 2025-11-29 08:24:21.805 226310 DEBUG nova.virt.hardware [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:24:21 np0005539564 nova_compute[226295]: 2025-11-29 08:24:21.805 226310 DEBUG nova.virt.hardware [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:24:21 np0005539564 nova_compute[226295]: 2025-11-29 08:24:21.806 226310 DEBUG nova.virt.hardware [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:24:21 np0005539564 nova_compute[226295]: 2025-11-29 08:24:21.806 226310 DEBUG nova.virt.hardware [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:24:21 np0005539564 nova_compute[226295]: 2025-11-29 08:24:21.806 226310 DEBUG nova.virt.hardware [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:24:21 np0005539564 nova_compute[226295]: 2025-11-29 08:24:21.811 226310 DEBUG oslo_concurrency.processutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:24:22 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1060176328' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:24:22 np0005539564 nova_compute[226295]: 2025-11-29 08:24:22.350 226310 DEBUG oslo_concurrency.processutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:22 np0005539564 nova_compute[226295]: 2025-11-29 08:24:22.385 226310 DEBUG nova.storage.rbd_utils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image c3e98a32-fd92-4873-a060-88aaf76bf1fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:22 np0005539564 nova_compute[226295]: 2025-11-29 08:24:22.391 226310 DEBUG oslo_concurrency.processutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:22.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:24:22 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1509376654' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:24:22 np0005539564 nova_compute[226295]: 2025-11-29 08:24:22.881 226310 DEBUG oslo_concurrency.processutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:22 np0005539564 nova_compute[226295]: 2025-11-29 08:24:22.883 226310 DEBUG nova.virt.libvirt.vif [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:24:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-871834401',display_name='tempest-ServerRescueNegativeTestJSON-server-871834401',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-871834401',id=156,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09cc8c3182d845f597dda064f9013941',ramdisk_id='',reservation_id='r-gi24u0nl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-754875869',owner_user_name='tempest-ServerRescueNegativeTestJSON-754875869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:24:15Z,user_data=None,user_id='dfcf2db50da745c09bffcf32ec016854',uuid=c3e98a32-fd92-4873-a060-88aaf76bf1fc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a7a9e323-49eb-415e-85cd-322403ba6517", "address": "fa:16:3e:bc:a3:e3", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a9e323-49", "ovs_interfaceid": "a7a9e323-49eb-415e-85cd-322403ba6517", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:24:22 np0005539564 nova_compute[226295]: 2025-11-29 08:24:22.883 226310 DEBUG nova.network.os_vif_util [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Converting VIF {"id": "a7a9e323-49eb-415e-85cd-322403ba6517", "address": "fa:16:3e:bc:a3:e3", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a9e323-49", "ovs_interfaceid": "a7a9e323-49eb-415e-85cd-322403ba6517", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:24:22 np0005539564 nova_compute[226295]: 2025-11-29 08:24:22.884 226310 DEBUG nova.network.os_vif_util [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:e3,bridge_name='br-int',has_traffic_filtering=True,id=a7a9e323-49eb-415e-85cd-322403ba6517,network=Network(7008b597-8de2-4973-801f-fcc733e4f6c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7a9e323-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:24:22 np0005539564 nova_compute[226295]: 2025-11-29 08:24:22.885 226310 DEBUG nova.objects.instance [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'pci_devices' on Instance uuid c3e98a32-fd92-4873-a060-88aaf76bf1fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:22 np0005539564 nova_compute[226295]: 2025-11-29 08:24:22.953 226310 DEBUG nova.virt.libvirt.driver [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:24:22 np0005539564 nova_compute[226295]:  <uuid>c3e98a32-fd92-4873-a060-88aaf76bf1fc</uuid>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:  <name>instance-0000009c</name>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-871834401</nova:name>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:24:21</nova:creationTime>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:24:22 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:        <nova:user uuid="dfcf2db50da745c09bffcf32ec016854">tempest-ServerRescueNegativeTestJSON-754875869-project-member</nova:user>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:        <nova:project uuid="09cc8c3182d845f597dda064f9013941">tempest-ServerRescueNegativeTestJSON-754875869</nova:project>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:        <nova:port uuid="a7a9e323-49eb-415e-85cd-322403ba6517">
Nov 29 03:24:22 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <entry name="serial">c3e98a32-fd92-4873-a060-88aaf76bf1fc</entry>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <entry name="uuid">c3e98a32-fd92-4873-a060-88aaf76bf1fc</entry>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/c3e98a32-fd92-4873-a060-88aaf76bf1fc_disk">
Nov 29 03:24:22 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:24:22 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/c3e98a32-fd92-4873-a060-88aaf76bf1fc_disk.config">
Nov 29 03:24:22 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:24:22 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:bc:a3:e3"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <target dev="tapa7a9e323-49"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/c3e98a32-fd92-4873-a060-88aaf76bf1fc/console.log" append="off"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:24:22 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:24:22 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:24:22 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:24:22 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:24:22 np0005539564 nova_compute[226295]: 2025-11-29 08:24:22.955 226310 DEBUG nova.compute.manager [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Preparing to wait for external event network-vif-plugged-a7a9e323-49eb-415e-85cd-322403ba6517 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:24:22 np0005539564 nova_compute[226295]: 2025-11-29 08:24:22.956 226310 DEBUG oslo_concurrency.lockutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:22 np0005539564 nova_compute[226295]: 2025-11-29 08:24:22.957 226310 DEBUG oslo_concurrency.lockutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:22 np0005539564 nova_compute[226295]: 2025-11-29 08:24:22.957 226310 DEBUG oslo_concurrency.lockutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:22 np0005539564 nova_compute[226295]: 2025-11-29 08:24:22.958 226310 DEBUG nova.virt.libvirt.vif [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:24:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-871834401',display_name='tempest-ServerRescueNegativeTestJSON-server-871834401',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-871834401',id=156,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09cc8c3182d845f597dda064f9013941',ramdisk_id='',reservation_id='r-gi24u0nl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-754875869',owner_user_name='tempest-ServerRescueNegativeTestJSON-754875869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:24:15Z,user_data=None,user_id='dfcf2db50da745c09bffcf32ec016854',uuid=c3e98a32-fd92-4873-a060-88aaf76bf1fc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a7a9e323-49eb-415e-85cd-322403ba6517", "address": "fa:16:3e:bc:a3:e3", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a9e323-49", "ovs_interfaceid": "a7a9e323-49eb-415e-85cd-322403ba6517", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:24:22 np0005539564 nova_compute[226295]: 2025-11-29 08:24:22.959 226310 DEBUG nova.network.os_vif_util [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Converting VIF {"id": "a7a9e323-49eb-415e-85cd-322403ba6517", "address": "fa:16:3e:bc:a3:e3", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a9e323-49", "ovs_interfaceid": "a7a9e323-49eb-415e-85cd-322403ba6517", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:24:22 np0005539564 nova_compute[226295]: 2025-11-29 08:24:22.960 226310 DEBUG nova.network.os_vif_util [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:e3,bridge_name='br-int',has_traffic_filtering=True,id=a7a9e323-49eb-415e-85cd-322403ba6517,network=Network(7008b597-8de2-4973-801f-fcc733e4f6c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7a9e323-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:24:22 np0005539564 nova_compute[226295]: 2025-11-29 08:24:22.961 226310 DEBUG os_vif [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:e3,bridge_name='br-int',has_traffic_filtering=True,id=a7a9e323-49eb-415e-85cd-322403ba6517,network=Network(7008b597-8de2-4973-801f-fcc733e4f6c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7a9e323-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:24:22 np0005539564 nova_compute[226295]: 2025-11-29 08:24:22.962 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:22 np0005539564 nova_compute[226295]: 2025-11-29 08:24:22.962 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:22 np0005539564 nova_compute[226295]: 2025-11-29 08:24:22.963 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:24:22 np0005539564 nova_compute[226295]: 2025-11-29 08:24:22.969 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:22 np0005539564 nova_compute[226295]: 2025-11-29 08:24:22.969 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7a9e323-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:22 np0005539564 nova_compute[226295]: 2025-11-29 08:24:22.970 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa7a9e323-49, col_values=(('external_ids', {'iface-id': 'a7a9e323-49eb-415e-85cd-322403ba6517', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bc:a3:e3', 'vm-uuid': 'c3e98a32-fd92-4873-a060-88aaf76bf1fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:22 np0005539564 nova_compute[226295]: 2025-11-29 08:24:22.973 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:22 np0005539564 NetworkManager[48997]: <info>  [1764404662.9744] manager: (tapa7a9e323-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/261)
Nov 29 03:24:22 np0005539564 nova_compute[226295]: 2025-11-29 08:24:22.976 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:24:22 np0005539564 nova_compute[226295]: 2025-11-29 08:24:22.983 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:22 np0005539564 nova_compute[226295]: 2025-11-29 08:24:22.985 226310 INFO os_vif [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:e3,bridge_name='br-int',has_traffic_filtering=True,id=a7a9e323-49eb-415e-85cd-322403ba6517,network=Network(7008b597-8de2-4973-801f-fcc733e4f6c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7a9e323-49')#033[00m
Nov 29 03:24:23 np0005539564 nova_compute[226295]: 2025-11-29 08:24:23.060 226310 DEBUG nova.virt.libvirt.driver [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:24:23 np0005539564 nova_compute[226295]: 2025-11-29 08:24:23.060 226310 DEBUG nova.virt.libvirt.driver [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:24:23 np0005539564 nova_compute[226295]: 2025-11-29 08:24:23.060 226310 DEBUG nova.virt.libvirt.driver [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] No VIF found with MAC fa:16:3e:bc:a3:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:24:23 np0005539564 nova_compute[226295]: 2025-11-29 08:24:23.061 226310 INFO nova.virt.libvirt.driver [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Using config drive#033[00m
Nov 29 03:24:23 np0005539564 nova_compute[226295]: 2025-11-29 08:24:23.089 226310 DEBUG nova.storage.rbd_utils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image c3e98a32-fd92-4873-a060-88aaf76bf1fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:23 np0005539564 nova_compute[226295]: 2025-11-29 08:24:23.100 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:23.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:23 np0005539564 nova_compute[226295]: 2025-11-29 08:24:23.997 226310 INFO nova.virt.libvirt.driver [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Creating config drive at /var/lib/nova/instances/c3e98a32-fd92-4873-a060-88aaf76bf1fc/disk.config#033[00m
Nov 29 03:24:24 np0005539564 nova_compute[226295]: 2025-11-29 08:24:24.003 226310 DEBUG oslo_concurrency.processutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c3e98a32-fd92-4873-a060-88aaf76bf1fc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjxp89h0i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:24 np0005539564 nova_compute[226295]: 2025-11-29 08:24:24.069 226310 DEBUG nova.network.neutron [req-35c6fbe5-aae0-40ac-b1c1-49cb7bb2c692 req-bdaf9036-7035-4b1d-af3b-3147ccbffba0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Updated VIF entry in instance network info cache for port a7a9e323-49eb-415e-85cd-322403ba6517. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:24:24 np0005539564 nova_compute[226295]: 2025-11-29 08:24:24.070 226310 DEBUG nova.network.neutron [req-35c6fbe5-aae0-40ac-b1c1-49cb7bb2c692 req-bdaf9036-7035-4b1d-af3b-3147ccbffba0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Updating instance_info_cache with network_info: [{"id": "a7a9e323-49eb-415e-85cd-322403ba6517", "address": "fa:16:3e:bc:a3:e3", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a9e323-49", "ovs_interfaceid": "a7a9e323-49eb-415e-85cd-322403ba6517", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:24:24 np0005539564 nova_compute[226295]: 2025-11-29 08:24:24.121 226310 DEBUG oslo_concurrency.lockutils [req-35c6fbe5-aae0-40ac-b1c1-49cb7bb2c692 req-bdaf9036-7035-4b1d-af3b-3147ccbffba0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-c3e98a32-fd92-4873-a060-88aaf76bf1fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:24:24 np0005539564 nova_compute[226295]: 2025-11-29 08:24:24.151 226310 DEBUG oslo_concurrency.processutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c3e98a32-fd92-4873-a060-88aaf76bf1fc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjxp89h0i" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:24 np0005539564 nova_compute[226295]: 2025-11-29 08:24:24.184 226310 DEBUG nova.storage.rbd_utils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image c3e98a32-fd92-4873-a060-88aaf76bf1fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:24 np0005539564 nova_compute[226295]: 2025-11-29 08:24:24.187 226310 DEBUG oslo_concurrency.processutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c3e98a32-fd92-4873-a060-88aaf76bf1fc/disk.config c3e98a32-fd92-4873-a060-88aaf76bf1fc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:24.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:25.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:26.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:27.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:27 np0005539564 nova_compute[226295]: 2025-11-29 08:24:27.975 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:28 np0005539564 nova_compute[226295]: 2025-11-29 08:24:28.074 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:28 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 03:24:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:28.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:29.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:30.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:24:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:31.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:24:32 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 03:24:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:32.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:33 np0005539564 nova_compute[226295]: 2025-11-29 08:24:33.024 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:33 np0005539564 nova_compute[226295]: 2025-11-29 08:24:33.076 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:33.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:33 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 9.106650352s
Nov 29 03:24:33 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.106958389s, txc = 0x55ba4e5a3800
Nov 29 03:24:33 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 9.106650352s
Nov 29 03:24:33 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for omap_get_values, latency = 8.260603905s
Nov 29 03:24:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).paxos(paxos updating c 4519..5190) lease_timeout -- calling new election
Nov 29 03:24:33 np0005539564 ceph-mon[81769]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Nov 29 03:24:33 np0005539564 ceph-mon[81769]: paxos.2).electionLogic(52) init, last seen epoch 52
Nov 29 03:24:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 03:24:33 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.505393028s, txc = 0x55ba4eada900
Nov 29 03:24:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 03:24:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 03:24:34 np0005539564 nova_compute[226295]: 2025-11-29 08:24:34.592 226310 DEBUG oslo_concurrency.processutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c3e98a32-fd92-4873-a060-88aaf76bf1fc/disk.config c3e98a32-fd92-4873-a060-88aaf76bf1fc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 10.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:34 np0005539564 nova_compute[226295]: 2025-11-29 08:24:34.593 226310 INFO nova.virt.libvirt.driver [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Deleting local config drive /var/lib/nova/instances/c3e98a32-fd92-4873-a060-88aaf76bf1fc/disk.config because it was imported into RBD.#033[00m
Nov 29 03:24:34 np0005539564 kernel: tapa7a9e323-49: entered promiscuous mode
Nov 29 03:24:34 np0005539564 nova_compute[226295]: 2025-11-29 08:24:34.682 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:34 np0005539564 ovn_controller[130591]: 2025-11-29T08:24:34Z|00572|binding|INFO|Claiming lport a7a9e323-49eb-415e-85cd-322403ba6517 for this chassis.
Nov 29 03:24:34 np0005539564 ovn_controller[130591]: 2025-11-29T08:24:34Z|00573|binding|INFO|a7a9e323-49eb-415e-85cd-322403ba6517: Claiming fa:16:3e:bc:a3:e3 10.100.0.8
Nov 29 03:24:34 np0005539564 NetworkManager[48997]: <info>  [1764404674.6858] manager: (tapa7a9e323-49): new Tun device (/org/freedesktop/NetworkManager/Devices/262)
Nov 29 03:24:34 np0005539564 nova_compute[226295]: 2025-11-29 08:24:34.687 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:34.705 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:a3:e3 10.100.0.8'], port_security=['fa:16:3e:bc:a3:e3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c3e98a32-fd92-4873-a060-88aaf76bf1fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7008b597-8de2-4973-801f-fcc733e4f6c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09cc8c3182d845f597dda064f9013941', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dbe43642-7b06-4c12-a982-e7ee16790d67', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f1261764-1af6-4456-be86-7981c6d9ba2a, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=a7a9e323-49eb-415e-85cd-322403ba6517) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:24:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:34.706 139780 INFO neutron.agent.ovn.metadata.agent [-] Port a7a9e323-49eb-415e-85cd-322403ba6517 in datapath 7008b597-8de2-4973-801f-fcc733e4f6c9 bound to our chassis#033[00m
Nov 29 03:24:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:34.707 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7008b597-8de2-4973-801f-fcc733e4f6c9#033[00m
Nov 29 03:24:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:34.729 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9010204b-da20-4142-bc1a-f443d6dcda20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:34.731 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7008b597-81 in ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:24:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:34.733 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7008b597-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:24:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:34.734 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a4b92ed3-7646-4d5f-b3f7-aa9a4dc15782]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:34 np0005539564 systemd-udevd[282521]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:24:34 np0005539564 systemd-machined[190128]: New machine qemu-71-instance-0000009c.
Nov 29 03:24:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:34.735 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[acbc4c27-e647-46a8-b7c9-060323922131]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:34 np0005539564 NetworkManager[48997]: <info>  [1764404674.7529] device (tapa7a9e323-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:24:34 np0005539564 NetworkManager[48997]: <info>  [1764404674.7542] device (tapa7a9e323-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:24:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:34.755 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[ec663ddc-d7d6-4f2e-abc2-b54d6ce48641]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:34 np0005539564 nova_compute[226295]: 2025-11-29 08:24:34.767 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:34 np0005539564 ovn_controller[130591]: 2025-11-29T08:24:34Z|00574|binding|INFO|Setting lport a7a9e323-49eb-415e-85cd-322403ba6517 ovn-installed in OVS
Nov 29 03:24:34 np0005539564 ovn_controller[130591]: 2025-11-29T08:24:34Z|00575|binding|INFO|Setting lport a7a9e323-49eb-415e-85cd-322403ba6517 up in Southbound
Nov 29 03:24:34 np0005539564 nova_compute[226295]: 2025-11-29 08:24:34.773 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:34 np0005539564 systemd[1]: Started Virtual Machine qemu-71-instance-0000009c.
Nov 29 03:24:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:34.781 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb8cd38-9242-4bfc-9580-f191f1739267]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:34.815 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[4f982069-33e0-400b-98b2-c296d5c49180]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:34 np0005539564 systemd-udevd[282524]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:24:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:34.823 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[65307946-8277-459b-9490-ccebb8281049]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:34 np0005539564 NetworkManager[48997]: <info>  [1764404674.8241] manager: (tap7008b597-80): new Veth device (/org/freedesktop/NetworkManager/Devices/263)
Nov 29 03:24:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:34.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:34.873 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[c801aed1-2b33-4db7-a076-97e987335822]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:34.878 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[1fab882b-5e45-4a45-9e0b-87b67cc9fc73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:34 np0005539564 NetworkManager[48997]: <info>  [1764404674.9096] device (tap7008b597-80): carrier: link connected
Nov 29 03:24:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:34.917 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[3367fffa-dc4f-467c-a87b-d9c722abb847]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:34.944 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8d70d27d-6792-4987-8a14-857bed3fbcd3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7008b597-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2c:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 175], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 764959, 'reachable_time': 17770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282553, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:34.976 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3ae13fd0-2d8a-48f4-a419-a1da19de9c7d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:2c65'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 764959, 'tstamp': 764959}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282554, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:35.004 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[de1c11c5-7aee-4f1b-a0c0-3454fccf016f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7008b597-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2c:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 175], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 764959, 'reachable_time': 17770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282555, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:35.055 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[21384033-d3a4-4b61-b970-c4b5a46d1a13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:35.142 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc71571-c1cc-42d7-96ab-56dd9796562e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:35.144 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7008b597-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:35.144 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:35.145 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7008b597-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:35 np0005539564 kernel: tap7008b597-80: entered promiscuous mode
Nov 29 03:24:35 np0005539564 NetworkManager[48997]: <info>  [1764404675.1488] manager: (tap7008b597-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/264)
Nov 29 03:24:35 np0005539564 nova_compute[226295]: 2025-11-29 08:24:35.148 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:35.156 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7008b597-80, col_values=(('external_ids', {'iface-id': '42a41b42-1527-4cfa-9dcf-4b7f34b092b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:35 np0005539564 nova_compute[226295]: 2025-11-29 08:24:35.158 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:35 np0005539564 ovn_controller[130591]: 2025-11-29T08:24:35Z|00576|binding|INFO|Releasing lport 42a41b42-1527-4cfa-9dcf-4b7f34b092b7 from this chassis (sb_readonly=0)
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:35.162 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7008b597-8de2-4973-801f-fcc733e4f6c9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7008b597-8de2-4973-801f-fcc733e4f6c9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:35.163 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[663429e7-fb90-484c-aa5e-04c26a868c10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:35.165 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-7008b597-8de2-4973-801f-fcc733e4f6c9
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/7008b597-8de2-4973-801f-fcc733e4f6c9.pid.haproxy
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 7008b597-8de2-4973-801f-fcc733e4f6c9
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:24:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:35.166 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'env', 'PROCESS_TAG=haproxy-7008b597-8de2-4973-801f-fcc733e4f6c9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7008b597-8de2-4973-801f-fcc733e4f6c9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:24:35 np0005539564 nova_compute[226295]: 2025-11-29 08:24:35.177 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:35 np0005539564 nova_compute[226295]: 2025-11-29 08:24:35.225 226310 DEBUG nova.compute.manager [req-f5768f57-f8da-4f5a-bce6-7d13523371a2 req-45c1ecfd-b394-42e6-ad2b-afc4857f5764 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Received event network-vif-plugged-a7a9e323-49eb-415e-85cd-322403ba6517 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:35 np0005539564 nova_compute[226295]: 2025-11-29 08:24:35.225 226310 DEBUG oslo_concurrency.lockutils [req-f5768f57-f8da-4f5a-bce6-7d13523371a2 req-45c1ecfd-b394-42e6-ad2b-afc4857f5764 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:35 np0005539564 nova_compute[226295]: 2025-11-29 08:24:35.226 226310 DEBUG oslo_concurrency.lockutils [req-f5768f57-f8da-4f5a-bce6-7d13523371a2 req-45c1ecfd-b394-42e6-ad2b-afc4857f5764 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:35 np0005539564 nova_compute[226295]: 2025-11-29 08:24:35.226 226310 DEBUG oslo_concurrency.lockutils [req-f5768f57-f8da-4f5a-bce6-7d13523371a2 req-45c1ecfd-b394-42e6-ad2b-afc4857f5764 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:35 np0005539564 nova_compute[226295]: 2025-11-29 08:24:35.226 226310 DEBUG nova.compute.manager [req-f5768f57-f8da-4f5a-bce6-7d13523371a2 req-45c1ecfd-b394-42e6-ad2b-afc4857f5764 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Processing event network-vif-plugged-a7a9e323-49eb-415e-85cd-322403ba6517 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:24:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:35.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:35 np0005539564 ceph-mon[81769]: mon.compute-2 calling monitor election
Nov 29 03:24:35 np0005539564 ceph-mon[81769]: mon.compute-0 calling monitor election
Nov 29 03:24:35 np0005539564 ceph-mon[81769]: mon.compute-1 calling monitor election
Nov 29 03:24:35 np0005539564 ceph-mon[81769]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 03:24:35 np0005539564 ceph-mon[81769]: overall HEALTH_OK
Nov 29 03:24:35 np0005539564 podman[282601]: 2025-11-29 08:24:35.569405525 +0000 UTC m=+0.032100878 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:24:36 np0005539564 podman[282601]: 2025-11-29 08:24:36.004528049 +0000 UTC m=+0.467223342 container create d40ff8d44c5b5ff14854425b3a2107e9932c93dd0507a24b95b45ba5ab4b5d5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 03:24:36 np0005539564 systemd[1]: Started libpod-conmon-d40ff8d44c5b5ff14854425b3a2107e9932c93dd0507a24b95b45ba5ab4b5d5a.scope.
Nov 29 03:24:36 np0005539564 nova_compute[226295]: 2025-11-29 08:24:36.144 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404676.1445663, c3e98a32-fd92-4873-a060-88aaf76bf1fc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:36 np0005539564 nova_compute[226295]: 2025-11-29 08:24:36.145 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] VM Started (Lifecycle Event)#033[00m
Nov 29 03:24:36 np0005539564 nova_compute[226295]: 2025-11-29 08:24:36.148 226310 DEBUG nova.compute.manager [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:24:36 np0005539564 nova_compute[226295]: 2025-11-29 08:24:36.157 226310 DEBUG nova.virt.libvirt.driver [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:24:36 np0005539564 nova_compute[226295]: 2025-11-29 08:24:36.163 226310 INFO nova.virt.libvirt.driver [-] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Instance spawned successfully.#033[00m
Nov 29 03:24:36 np0005539564 nova_compute[226295]: 2025-11-29 08:24:36.163 226310 DEBUG nova.virt.libvirt.driver [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:24:36 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:24:36 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d315498782762f6ff5622412321f09ee087f767ce4aef519048f2fce01c39db/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:36 np0005539564 nova_compute[226295]: 2025-11-29 08:24:36.184 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:36 np0005539564 nova_compute[226295]: 2025-11-29 08:24:36.194 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:24:36 np0005539564 nova_compute[226295]: 2025-11-29 08:24:36.200 226310 DEBUG nova.virt.libvirt.driver [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:36 np0005539564 nova_compute[226295]: 2025-11-29 08:24:36.201 226310 DEBUG nova.virt.libvirt.driver [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:36 np0005539564 nova_compute[226295]: 2025-11-29 08:24:36.202 226310 DEBUG nova.virt.libvirt.driver [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:36 np0005539564 nova_compute[226295]: 2025-11-29 08:24:36.202 226310 DEBUG nova.virt.libvirt.driver [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:36 np0005539564 nova_compute[226295]: 2025-11-29 08:24:36.203 226310 DEBUG nova.virt.libvirt.driver [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:36 np0005539564 nova_compute[226295]: 2025-11-29 08:24:36.204 226310 DEBUG nova.virt.libvirt.driver [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:36 np0005539564 podman[282601]: 2025-11-29 08:24:36.21871909 +0000 UTC m=+0.681414383 container init d40ff8d44c5b5ff14854425b3a2107e9932c93dd0507a24b95b45ba5ab4b5d5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:24:36 np0005539564 podman[282601]: 2025-11-29 08:24:36.228573476 +0000 UTC m=+0.691268759 container start d40ff8d44c5b5ff14854425b3a2107e9932c93dd0507a24b95b45ba5ab4b5d5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:24:36 np0005539564 nova_compute[226295]: 2025-11-29 08:24:36.237 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:24:36 np0005539564 nova_compute[226295]: 2025-11-29 08:24:36.238 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404676.1454153, c3e98a32-fd92-4873-a060-88aaf76bf1fc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:36 np0005539564 nova_compute[226295]: 2025-11-29 08:24:36.239 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:24:36 np0005539564 nova_compute[226295]: 2025-11-29 08:24:36.272 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:36 np0005539564 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[282645]: [NOTICE]   (282649) : New worker (282651) forked
Nov 29 03:24:36 np0005539564 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[282645]: [NOTICE]   (282649) : Loading success.
Nov 29 03:24:36 np0005539564 nova_compute[226295]: 2025-11-29 08:24:36.279 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404676.1513152, c3e98a32-fd92-4873-a060-88aaf76bf1fc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:36 np0005539564 nova_compute[226295]: 2025-11-29 08:24:36.280 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:24:36 np0005539564 nova_compute[226295]: 2025-11-29 08:24:36.292 226310 INFO nova.compute.manager [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Took 20.19 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:24:36 np0005539564 nova_compute[226295]: 2025-11-29 08:24:36.293 226310 DEBUG nova.compute.manager [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:36 np0005539564 nova_compute[226295]: 2025-11-29 08:24:36.305 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:36 np0005539564 nova_compute[226295]: 2025-11-29 08:24:36.309 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:24:36 np0005539564 nova_compute[226295]: 2025-11-29 08:24:36.362 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:24:36 np0005539564 nova_compute[226295]: 2025-11-29 08:24:36.401 226310 INFO nova.compute.manager [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Took 22.77 seconds to build instance.#033[00m
Nov 29 03:24:36 np0005539564 nova_compute[226295]: 2025-11-29 08:24:36.439 226310 DEBUG oslo_concurrency.lockutils [None req-0b098a54-aadc-47b8-9287-dbae69677619 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:36.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:37 np0005539564 nova_compute[226295]: 2025-11-29 08:24:37.395 226310 DEBUG nova.compute.manager [req-f4d5c22a-f7a0-4f70-b1ba-253e88d2cb2c req-42c88c8d-3aa7-46c2-9de9-27209d6d0f2b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Received event network-vif-plugged-a7a9e323-49eb-415e-85cd-322403ba6517 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:37 np0005539564 nova_compute[226295]: 2025-11-29 08:24:37.395 226310 DEBUG oslo_concurrency.lockutils [req-f4d5c22a-f7a0-4f70-b1ba-253e88d2cb2c req-42c88c8d-3aa7-46c2-9de9-27209d6d0f2b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:37 np0005539564 nova_compute[226295]: 2025-11-29 08:24:37.396 226310 DEBUG oslo_concurrency.lockutils [req-f4d5c22a-f7a0-4f70-b1ba-253e88d2cb2c req-42c88c8d-3aa7-46c2-9de9-27209d6d0f2b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:37 np0005539564 nova_compute[226295]: 2025-11-29 08:24:37.396 226310 DEBUG oslo_concurrency.lockutils [req-f4d5c22a-f7a0-4f70-b1ba-253e88d2cb2c req-42c88c8d-3aa7-46c2-9de9-27209d6d0f2b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:37 np0005539564 nova_compute[226295]: 2025-11-29 08:24:37.397 226310 DEBUG nova.compute.manager [req-f4d5c22a-f7a0-4f70-b1ba-253e88d2cb2c req-42c88c8d-3aa7-46c2-9de9-27209d6d0f2b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] No waiting events found dispatching network-vif-plugged-a7a9e323-49eb-415e-85cd-322403ba6517 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:24:37 np0005539564 nova_compute[226295]: 2025-11-29 08:24:37.397 226310 WARNING nova.compute.manager [req-f4d5c22a-f7a0-4f70-b1ba-253e88d2cb2c req-42c88c8d-3aa7-46c2-9de9-27209d6d0f2b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Received unexpected event network-vif-plugged-a7a9e323-49eb-415e-85cd-322403ba6517 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:24:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:37.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:37 np0005539564 nova_compute[226295]: 2025-11-29 08:24:37.725 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:37 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:37.727 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:24:37 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:37.728 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:24:38 np0005539564 nova_compute[226295]: 2025-11-29 08:24:38.055 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:38 np0005539564 nova_compute[226295]: 2025-11-29 08:24:38.078 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:38.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:39.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:40 np0005539564 nova_compute[226295]: 2025-11-29 08:24:40.266 226310 INFO nova.compute.manager [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Rescuing#033[00m
Nov 29 03:24:40 np0005539564 nova_compute[226295]: 2025-11-29 08:24:40.267 226310 DEBUG oslo_concurrency.lockutils [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "refresh_cache-c3e98a32-fd92-4873-a060-88aaf76bf1fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:24:40 np0005539564 nova_compute[226295]: 2025-11-29 08:24:40.267 226310 DEBUG oslo_concurrency.lockutils [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquired lock "refresh_cache-c3e98a32-fd92-4873-a060-88aaf76bf1fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:24:40 np0005539564 nova_compute[226295]: 2025-11-29 08:24:40.267 226310 DEBUG nova.network.neutron [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:24:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:40.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:24:40 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3356872865' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:24:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:24:40 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3356872865' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:24:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:41.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:42.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:43 np0005539564 nova_compute[226295]: 2025-11-29 08:24:43.058 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:43 np0005539564 nova_compute[226295]: 2025-11-29 08:24:43.081 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:24:43 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3743860223' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:24:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:24:43 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3743860223' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:24:43 np0005539564 nova_compute[226295]: 2025-11-29 08:24:43.240 226310 DEBUG nova.network.neutron [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Updating instance_info_cache with network_info: [{"id": "a7a9e323-49eb-415e-85cd-322403ba6517", "address": "fa:16:3e:bc:a3:e3", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a9e323-49", "ovs_interfaceid": "a7a9e323-49eb-415e-85cd-322403ba6517", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:24:43 np0005539564 nova_compute[226295]: 2025-11-29 08:24:43.271 226310 DEBUG oslo_concurrency.lockutils [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Releasing lock "refresh_cache-c3e98a32-fd92-4873-a060-88aaf76bf1fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:24:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:43.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:43 np0005539564 nova_compute[226295]: 2025-11-29 08:24:43.703 226310 DEBUG nova.virt.libvirt.driver [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:24:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:24:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:44.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:24:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:45.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:45.731 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:46.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:47.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:48 np0005539564 nova_compute[226295]: 2025-11-29 08:24:48.061 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:48 np0005539564 nova_compute[226295]: 2025-11-29 08:24:48.084 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:48 np0005539564 podman[282662]: 2025-11-29 08:24:48.521012201 +0000 UTC m=+0.071404981 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible)
Nov 29 03:24:48 np0005539564 podman[282663]: 2025-11-29 08:24:48.538274479 +0000 UTC m=+0.071070773 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:48 np0005539564 podman[282661]: 2025-11-29 08:24:48.578770463 +0000 UTC m=+0.122145043 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #103. Immutable memtables: 0.
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:24:48.674844) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 103
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404688674951, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 1958, "num_deletes": 264, "total_data_size": 4258641, "memory_usage": 4310208, "flush_reason": "Manual Compaction"}
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #104: started
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404688718852, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 104, "file_size": 2784471, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53375, "largest_seqno": 55327, "table_properties": {"data_size": 2776241, "index_size": 4916, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 18307, "raw_average_key_size": 20, "raw_value_size": 2759359, "raw_average_value_size": 3146, "num_data_blocks": 213, "num_entries": 877, "num_filter_entries": 877, "num_deletions": 264, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404534, "oldest_key_time": 1764404534, "file_creation_time": 1764404688, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 44109 microseconds, and 8245 cpu microseconds.
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:24:48.718902) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #104: 2784471 bytes OK
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:24:48.718982) [db/memtable_list.cc:519] [default] Level-0 commit table #104 started
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:24:48.721738) [db/memtable_list.cc:722] [default] Level-0 commit table #104: memtable #1 done
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:24:48.721752) EVENT_LOG_v1 {"time_micros": 1764404688721748, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:24:48.721769) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 4249675, prev total WAL file size 4249675, number of live WAL files 2.
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000100.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:24:48.722868) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373633' seq:72057594037927935, type:22 .. '6C6F676D0032303136' seq:0, type:0; will stop at (end)
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [104(2719KB)], [102(11MB)]
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404688722961, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [104], "files_L6": [102], "score": -1, "input_data_size": 14377233, "oldest_snapshot_seqno": -1}
Nov 29 03:24:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:48.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #105: 8683 keys, 14207575 bytes, temperature: kUnknown
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404688906782, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 105, "file_size": 14207575, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14147388, "index_size": 37355, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21765, "raw_key_size": 225079, "raw_average_key_size": 25, "raw_value_size": 13990564, "raw_average_value_size": 1611, "num_data_blocks": 1470, "num_entries": 8683, "num_filter_entries": 8683, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764404688, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 105, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:24:48.907204) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 14207575 bytes
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:24:48.909249) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 78.2 rd, 77.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 11.1 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(10.3) write-amplify(5.1) OK, records in: 9232, records dropped: 549 output_compression: NoCompression
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:24:48.909288) EVENT_LOG_v1 {"time_micros": 1764404688909270, "job": 64, "event": "compaction_finished", "compaction_time_micros": 183912, "compaction_time_cpu_micros": 39956, "output_level": 6, "num_output_files": 1, "total_output_size": 14207575, "num_input_records": 9232, "num_output_records": 8683, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404688910448, "job": 64, "event": "table_file_deletion", "file_number": 104}
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000102.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404688914880, "job": 64, "event": "table_file_deletion", "file_number": 102}
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:24:48.722723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:24:48.914990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:24:48.914998) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:24:48.915002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:24:48.915006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:24:48.915010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:49.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:50 np0005539564 ovn_controller[130591]: 2025-11-29T08:24:50Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bc:a3:e3 10.100.0.8
Nov 29 03:24:50 np0005539564 ovn_controller[130591]: 2025-11-29T08:24:50Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bc:a3:e3 10.100.0.8
Nov 29 03:24:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:50.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:51.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:52.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:53 np0005539564 nova_compute[226295]: 2025-11-29 08:24:53.106 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:24:53 np0005539564 nova_compute[226295]: 2025-11-29 08:24:53.108 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:24:53 np0005539564 nova_compute[226295]: 2025-11-29 08:24:53.108 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5023 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 03:24:53 np0005539564 nova_compute[226295]: 2025-11-29 08:24:53.108 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 03:24:53 np0005539564 nova_compute[226295]: 2025-11-29 08:24:53.108 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 03:24:53 np0005539564 nova_compute[226295]: 2025-11-29 08:24:53.110 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:53.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:53 np0005539564 nova_compute[226295]: 2025-11-29 08:24:53.754 226310 DEBUG nova.virt.libvirt.driver [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:24:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:54.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:24:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:55.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:24:55 np0005539564 nova_compute[226295]: 2025-11-29 08:24:55.861 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:55 np0005539564 nova_compute[226295]: 2025-11-29 08:24:55.862 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:56 np0005539564 nova_compute[226295]: 2025-11-29 08:24:56.774 226310 INFO nova.virt.libvirt.driver [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Instance shutdown successfully after 13 seconds.#033[00m
Nov 29 03:24:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:56.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:57 np0005539564 nova_compute[226295]: 2025-11-29 08:24:57.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:57.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:58 np0005539564 nova_compute[226295]: 2025-11-29 08:24:58.110 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:58 np0005539564 kernel: tapa7a9e323-49 (unregistering): left promiscuous mode
Nov 29 03:24:58 np0005539564 NetworkManager[48997]: <info>  [1764404698.4328] device (tapa7a9e323-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:24:58 np0005539564 nova_compute[226295]: 2025-11-29 08:24:58.445 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:58 np0005539564 ovn_controller[130591]: 2025-11-29T08:24:58Z|00577|binding|INFO|Releasing lport a7a9e323-49eb-415e-85cd-322403ba6517 from this chassis (sb_readonly=0)
Nov 29 03:24:58 np0005539564 ovn_controller[130591]: 2025-11-29T08:24:58Z|00578|binding|INFO|Setting lport a7a9e323-49eb-415e-85cd-322403ba6517 down in Southbound
Nov 29 03:24:58 np0005539564 ovn_controller[130591]: 2025-11-29T08:24:58Z|00579|binding|INFO|Removing iface tapa7a9e323-49 ovn-installed in OVS
Nov 29 03:24:58 np0005539564 nova_compute[226295]: 2025-11-29 08:24:58.449 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:58 np0005539564 nova_compute[226295]: 2025-11-29 08:24:58.495 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:58 np0005539564 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000009c.scope: Deactivated successfully.
Nov 29 03:24:58 np0005539564 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000009c.scope: Consumed 14.596s CPU time.
Nov 29 03:24:58 np0005539564 systemd-machined[190128]: Machine qemu-71-instance-0000009c terminated.
Nov 29 03:24:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:58 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:58.621 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:a3:e3 10.100.0.8'], port_security=['fa:16:3e:bc:a3:e3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c3e98a32-fd92-4873-a060-88aaf76bf1fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7008b597-8de2-4973-801f-fcc733e4f6c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09cc8c3182d845f597dda064f9013941', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dbe43642-7b06-4c12-a982-e7ee16790d67', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f1261764-1af6-4456-be86-7981c6d9ba2a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=a7a9e323-49eb-415e-85cd-322403ba6517) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:24:58 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:58.623 139780 INFO neutron.agent.ovn.metadata.agent [-] Port a7a9e323-49eb-415e-85cd-322403ba6517 in datapath 7008b597-8de2-4973-801f-fcc733e4f6c9 unbound from our chassis#033[00m
Nov 29 03:24:58 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:58.625 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7008b597-8de2-4973-801f-fcc733e4f6c9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:24:58 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:58.627 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[fdb0adf5-a3eb-4143-b18b-5ad9ed4cf3b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:58 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:58.628 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 namespace which is not needed anymore#033[00m
Nov 29 03:24:58 np0005539564 nova_compute[226295]: 2025-11-29 08:24:58.643 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:58 np0005539564 nova_compute[226295]: 2025-11-29 08:24:58.657 226310 INFO nova.virt.libvirt.driver [-] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Instance destroyed successfully.#033[00m
Nov 29 03:24:58 np0005539564 nova_compute[226295]: 2025-11-29 08:24:58.658 226310 DEBUG nova.objects.instance [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'numa_topology' on Instance uuid c3e98a32-fd92-4873-a060-88aaf76bf1fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:58 np0005539564 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[282645]: [NOTICE]   (282649) : haproxy version is 2.8.14-c23fe91
Nov 29 03:24:58 np0005539564 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[282645]: [NOTICE]   (282649) : path to executable is /usr/sbin/haproxy
Nov 29 03:24:58 np0005539564 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[282645]: [WARNING]  (282649) : Exiting Master process...
Nov 29 03:24:58 np0005539564 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[282645]: [ALERT]    (282649) : Current worker (282651) exited with code 143 (Terminated)
Nov 29 03:24:58 np0005539564 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[282645]: [WARNING]  (282649) : All workers exited. Exiting... (0)
Nov 29 03:24:58 np0005539564 systemd[1]: libpod-d40ff8d44c5b5ff14854425b3a2107e9932c93dd0507a24b95b45ba5ab4b5d5a.scope: Deactivated successfully.
Nov 29 03:24:58 np0005539564 podman[282759]: 2025-11-29 08:24:58.853854582 +0000 UTC m=+0.070554408 container died d40ff8d44c5b5ff14854425b3a2107e9932c93dd0507a24b95b45ba5ab4b5d5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:24:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:58.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:58 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d40ff8d44c5b5ff14854425b3a2107e9932c93dd0507a24b95b45ba5ab4b5d5a-userdata-shm.mount: Deactivated successfully.
Nov 29 03:24:58 np0005539564 systemd[1]: var-lib-containers-storage-overlay-6d315498782762f6ff5622412321f09ee087f767ce4aef519048f2fce01c39db-merged.mount: Deactivated successfully.
Nov 29 03:24:58 np0005539564 podman[282759]: 2025-11-29 08:24:58.916109565 +0000 UTC m=+0.132809391 container cleanup d40ff8d44c5b5ff14854425b3a2107e9932c93dd0507a24b95b45ba5ab4b5d5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:24:58 np0005539564 systemd[1]: libpod-conmon-d40ff8d44c5b5ff14854425b3a2107e9932c93dd0507a24b95b45ba5ab4b5d5a.scope: Deactivated successfully.
Nov 29 03:24:58 np0005539564 nova_compute[226295]: 2025-11-29 08:24:58.960 226310 INFO nova.virt.libvirt.driver [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Attempting rescue#033[00m
Nov 29 03:24:58 np0005539564 nova_compute[226295]: 2025-11-29 08:24:58.962 226310 DEBUG nova.virt.libvirt.driver [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Nov 29 03:24:58 np0005539564 nova_compute[226295]: 2025-11-29 08:24:58.967 226310 DEBUG nova.virt.libvirt.driver [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:24:58 np0005539564 nova_compute[226295]: 2025-11-29 08:24:58.968 226310 INFO nova.virt.libvirt.driver [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Creating image(s)#033[00m
Nov 29 03:24:59 np0005539564 nova_compute[226295]: 2025-11-29 08:24:59.010 226310 DEBUG nova.storage.rbd_utils [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image c3e98a32-fd92-4873-a060-88aaf76bf1fc_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:59 np0005539564 nova_compute[226295]: 2025-11-29 08:24:59.016 226310 DEBUG nova.objects.instance [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'trusted_certs' on Instance uuid c3e98a32-fd92-4873-a060-88aaf76bf1fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:59 np0005539564 podman[282790]: 2025-11-29 08:24:59.027429605 +0000 UTC m=+0.074659140 container remove d40ff8d44c5b5ff14854425b3a2107e9932c93dd0507a24b95b45ba5ab4b5d5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:24:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:59.037 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[82c573b6-cc7b-406b-ab8f-ae411f2e14ec]: (4, ('Sat Nov 29 08:24:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 (d40ff8d44c5b5ff14854425b3a2107e9932c93dd0507a24b95b45ba5ab4b5d5a)\nd40ff8d44c5b5ff14854425b3a2107e9932c93dd0507a24b95b45ba5ab4b5d5a\nSat Nov 29 08:24:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 (d40ff8d44c5b5ff14854425b3a2107e9932c93dd0507a24b95b45ba5ab4b5d5a)\nd40ff8d44c5b5ff14854425b3a2107e9932c93dd0507a24b95b45ba5ab4b5d5a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:59.040 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8ef2f5db-957a-48dd-9019-6e65c744b6dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:59.041 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7008b597-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:59 np0005539564 kernel: tap7008b597-80: left promiscuous mode
Nov 29 03:24:59 np0005539564 nova_compute[226295]: 2025-11-29 08:24:59.078 226310 DEBUG nova.storage.rbd_utils [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image c3e98a32-fd92-4873-a060-88aaf76bf1fc_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:59.081 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[57cb6ce4-d5cb-4ebe-951c-c71f37c28e50]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:59.098 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f4b8c997-1686-415c-bdd3-e87d3e44bc7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:59.100 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1434e110-15a4-4a5d-8dab-822a39962df2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:59 np0005539564 nova_compute[226295]: 2025-11-29 08:24:59.127 226310 DEBUG nova.storage.rbd_utils [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image c3e98a32-fd92-4873-a060-88aaf76bf1fc_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:59.130 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[65722a0f-79a7-4052-8a7c-ae9a71da018c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 764948, 'reachable_time': 24616, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282857, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:59 np0005539564 nova_compute[226295]: 2025-11-29 08:24:59.135 226310 DEBUG oslo_concurrency.processutils [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:59 np0005539564 systemd[1]: run-netns-ovnmeta\x2d7008b597\x2d8de2\x2d4973\x2d801f\x2dfcc733e4f6c9.mount: Deactivated successfully.
Nov 29 03:24:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:59.134 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:24:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:24:59.135 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[729a7e3c-f233-43ec-ae0a-6dd90f20ec76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:59 np0005539564 nova_compute[226295]: 2025-11-29 08:24:59.179 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:59 np0005539564 nova_compute[226295]: 2025-11-29 08:24:59.256 226310 DEBUG oslo_concurrency.processutils [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:59 np0005539564 nova_compute[226295]: 2025-11-29 08:24:59.257 226310 DEBUG oslo_concurrency.lockutils [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:59 np0005539564 nova_compute[226295]: 2025-11-29 08:24:59.258 226310 DEBUG oslo_concurrency.lockutils [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:59 np0005539564 nova_compute[226295]: 2025-11-29 08:24:59.258 226310 DEBUG oslo_concurrency.lockutils [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:59 np0005539564 nova_compute[226295]: 2025-11-29 08:24:59.290 226310 DEBUG nova.storage.rbd_utils [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image c3e98a32-fd92-4873-a060-88aaf76bf1fc_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:59 np0005539564 nova_compute[226295]: 2025-11-29 08:24:59.295 226310 DEBUG oslo_concurrency.processutils [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf c3e98a32-fd92-4873-a060-88aaf76bf1fc_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:59 np0005539564 nova_compute[226295]: 2025-11-29 08:24:59.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:59 np0005539564 nova_compute[226295]: 2025-11-29 08:24:59.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:24:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:24:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:59.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:00 np0005539564 nova_compute[226295]: 2025-11-29 08:25:00.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:00 np0005539564 nova_compute[226295]: 2025-11-29 08:25:00.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:25:00 np0005539564 nova_compute[226295]: 2025-11-29 08:25:00.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:25:00 np0005539564 nova_compute[226295]: 2025-11-29 08:25:00.467 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-c3e98a32-fd92-4873-a060-88aaf76bf1fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:25:00 np0005539564 nova_compute[226295]: 2025-11-29 08:25:00.467 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-c3e98a32-fd92-4873-a060-88aaf76bf1fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:25:00 np0005539564 nova_compute[226295]: 2025-11-29 08:25:00.468 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:25:00 np0005539564 nova_compute[226295]: 2025-11-29 08:25:00.468 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c3e98a32-fd92-4873-a060-88aaf76bf1fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:00 np0005539564 nova_compute[226295]: 2025-11-29 08:25:00.690 226310 DEBUG oslo_concurrency.processutils [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf c3e98a32-fd92-4873-a060-88aaf76bf1fc_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:00 np0005539564 nova_compute[226295]: 2025-11-29 08:25:00.692 226310 DEBUG nova.objects.instance [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'migration_context' on Instance uuid c3e98a32-fd92-4873-a060-88aaf76bf1fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:00 np0005539564 nova_compute[226295]: 2025-11-29 08:25:00.733 226310 DEBUG nova.virt.libvirt.driver [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:25:00 np0005539564 nova_compute[226295]: 2025-11-29 08:25:00.734 226310 DEBUG nova.virt.libvirt.driver [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Start _get_guest_xml network_info=[{"id": "a7a9e323-49eb-415e-85cd-322403ba6517", "address": "fa:16:3e:bc:a3:e3", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "vif_mac": "fa:16:3e:bc:a3:e3"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a9e323-49", "ovs_interfaceid": "a7a9e323-49eb-415e-85cd-322403ba6517", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:25:00 np0005539564 nova_compute[226295]: 2025-11-29 08:25:00.735 226310 DEBUG nova.objects.instance [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'resources' on Instance uuid c3e98a32-fd92-4873-a060-88aaf76bf1fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:00.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:01 np0005539564 nova_compute[226295]: 2025-11-29 08:25:01.085 226310 WARNING nova.virt.libvirt.driver [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:25:01 np0005539564 nova_compute[226295]: 2025-11-29 08:25:01.095 226310 DEBUG nova.virt.libvirt.host [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:25:01 np0005539564 nova_compute[226295]: 2025-11-29 08:25:01.096 226310 DEBUG nova.virt.libvirt.host [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:25:01 np0005539564 nova_compute[226295]: 2025-11-29 08:25:01.101 226310 DEBUG nova.virt.libvirt.host [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:25:01 np0005539564 nova_compute[226295]: 2025-11-29 08:25:01.101 226310 DEBUG nova.virt.libvirt.host [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:25:01 np0005539564 nova_compute[226295]: 2025-11-29 08:25:01.103 226310 DEBUG nova.virt.libvirt.driver [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:25:01 np0005539564 nova_compute[226295]: 2025-11-29 08:25:01.103 226310 DEBUG nova.virt.hardware [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:25:01 np0005539564 nova_compute[226295]: 2025-11-29 08:25:01.104 226310 DEBUG nova.virt.hardware [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:25:01 np0005539564 nova_compute[226295]: 2025-11-29 08:25:01.104 226310 DEBUG nova.virt.hardware [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:25:01 np0005539564 nova_compute[226295]: 2025-11-29 08:25:01.104 226310 DEBUG nova.virt.hardware [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:25:01 np0005539564 nova_compute[226295]: 2025-11-29 08:25:01.104 226310 DEBUG nova.virt.hardware [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:25:01 np0005539564 nova_compute[226295]: 2025-11-29 08:25:01.105 226310 DEBUG nova.virt.hardware [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:25:01 np0005539564 nova_compute[226295]: 2025-11-29 08:25:01.105 226310 DEBUG nova.virt.hardware [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:25:01 np0005539564 nova_compute[226295]: 2025-11-29 08:25:01.105 226310 DEBUG nova.virt.hardware [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:25:01 np0005539564 nova_compute[226295]: 2025-11-29 08:25:01.105 226310 DEBUG nova.virt.hardware [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:25:01 np0005539564 nova_compute[226295]: 2025-11-29 08:25:01.105 226310 DEBUG nova.virt.hardware [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:25:01 np0005539564 nova_compute[226295]: 2025-11-29 08:25:01.106 226310 DEBUG nova.virt.hardware [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:25:01 np0005539564 nova_compute[226295]: 2025-11-29 08:25:01.106 226310 DEBUG nova.objects.instance [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c3e98a32-fd92-4873-a060-88aaf76bf1fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:01 np0005539564 nova_compute[226295]: 2025-11-29 08:25:01.154 226310 DEBUG oslo_concurrency.processutils [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:01.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:25:01 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2812038550' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:25:01 np0005539564 nova_compute[226295]: 2025-11-29 08:25:01.673 226310 DEBUG oslo_concurrency.processutils [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:01 np0005539564 nova_compute[226295]: 2025-11-29 08:25:01.675 226310 DEBUG oslo_concurrency.processutils [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:25:02 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3360976604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:25:02 np0005539564 nova_compute[226295]: 2025-11-29 08:25:02.266 226310 DEBUG oslo_concurrency.processutils [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:02 np0005539564 nova_compute[226295]: 2025-11-29 08:25:02.268 226310 DEBUG oslo_concurrency.processutils [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:25:02 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/246337387' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:25:02 np0005539564 nova_compute[226295]: 2025-11-29 08:25:02.733 226310 DEBUG oslo_concurrency.processutils [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:02 np0005539564 nova_compute[226295]: 2025-11-29 08:25:02.737 226310 DEBUG nova.virt.libvirt.vif [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:24:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-871834401',display_name='tempest-ServerRescueNegativeTestJSON-server-871834401',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-871834401',id=156,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:24:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='09cc8c3182d845f597dda064f9013941',ramdisk_id='',reservation_id='r-gi24u0nl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_m
odel='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-754875869',owner_user_name='tempest-ServerRescueNegativeTestJSON-754875869-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:24:36Z,user_data=None,user_id='dfcf2db50da745c09bffcf32ec016854',uuid=c3e98a32-fd92-4873-a060-88aaf76bf1fc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a7a9e323-49eb-415e-85cd-322403ba6517", "address": "fa:16:3e:bc:a3:e3", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "vif_mac": "fa:16:3e:bc:a3:e3"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a9e323-49", "ovs_interfaceid": "a7a9e323-49eb-415e-85cd-322403ba6517", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:25:02 np0005539564 nova_compute[226295]: 2025-11-29 08:25:02.738 226310 DEBUG nova.network.os_vif_util [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Converting VIF {"id": "a7a9e323-49eb-415e-85cd-322403ba6517", "address": "fa:16:3e:bc:a3:e3", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "vif_mac": "fa:16:3e:bc:a3:e3"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a9e323-49", "ovs_interfaceid": "a7a9e323-49eb-415e-85cd-322403ba6517", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:25:02 np0005539564 nova_compute[226295]: 2025-11-29 08:25:02.740 226310 DEBUG nova.network.os_vif_util [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bc:a3:e3,bridge_name='br-int',has_traffic_filtering=True,id=a7a9e323-49eb-415e-85cd-322403ba6517,network=Network(7008b597-8de2-4973-801f-fcc733e4f6c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7a9e323-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:25:02 np0005539564 nova_compute[226295]: 2025-11-29 08:25:02.742 226310 DEBUG nova.objects.instance [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'pci_devices' on Instance uuid c3e98a32-fd92-4873-a060-88aaf76bf1fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:25:02 np0005539564 nova_compute[226295]: 2025-11-29 08:25:02.764 226310 DEBUG nova.virt.libvirt.driver [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:25:02 np0005539564 nova_compute[226295]:  <uuid>c3e98a32-fd92-4873-a060-88aaf76bf1fc</uuid>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:  <name>instance-0000009c</name>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-871834401</nova:name>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:25:01</nova:creationTime>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:25:02 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:        <nova:user uuid="dfcf2db50da745c09bffcf32ec016854">tempest-ServerRescueNegativeTestJSON-754875869-project-member</nova:user>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:        <nova:project uuid="09cc8c3182d845f597dda064f9013941">tempest-ServerRescueNegativeTestJSON-754875869</nova:project>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:        <nova:port uuid="a7a9e323-49eb-415e-85cd-322403ba6517">
Nov 29 03:25:02 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <entry name="serial">c3e98a32-fd92-4873-a060-88aaf76bf1fc</entry>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <entry name="uuid">c3e98a32-fd92-4873-a060-88aaf76bf1fc</entry>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/c3e98a32-fd92-4873-a060-88aaf76bf1fc_disk.rescue">
Nov 29 03:25:02 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:25:02 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/c3e98a32-fd92-4873-a060-88aaf76bf1fc_disk">
Nov 29 03:25:02 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:25:02 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/c3e98a32-fd92-4873-a060-88aaf76bf1fc_disk.config.rescue">
Nov 29 03:25:02 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:25:02 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:bc:a3:e3"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <target dev="tapa7a9e323-49"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/c3e98a32-fd92-4873-a060-88aaf76bf1fc/console.log" append="off"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:25:02 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:25:02 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:25:02 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:25:02 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 03:25:02 np0005539564 nova_compute[226295]: 2025-11-29 08:25:02.772 226310 INFO nova.virt.libvirt.driver [-] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Instance destroyed successfully.
Nov 29 03:25:02 np0005539564 nova_compute[226295]: 2025-11-29 08:25:02.834 226310 DEBUG nova.virt.libvirt.driver [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:25:02 np0005539564 nova_compute[226295]: 2025-11-29 08:25:02.835 226310 DEBUG nova.virt.libvirt.driver [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:25:02 np0005539564 nova_compute[226295]: 2025-11-29 08:25:02.836 226310 DEBUG nova.virt.libvirt.driver [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:25:02 np0005539564 nova_compute[226295]: 2025-11-29 08:25:02.836 226310 DEBUG nova.virt.libvirt.driver [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] No VIF found with MAC fa:16:3e:bc:a3:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:25:02 np0005539564 nova_compute[226295]: 2025-11-29 08:25:02.837 226310 INFO nova.virt.libvirt.driver [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Using config drive
Nov 29 03:25:02 np0005539564 nova_compute[226295]: 2025-11-29 08:25:02.864 226310 DEBUG nova.storage.rbd_utils [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image c3e98a32-fd92-4873-a060-88aaf76bf1fc_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:25:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:02.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:02 np0005539564 nova_compute[226295]: 2025-11-29 08:25:02.902 226310 DEBUG nova.objects.instance [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'ec2_ids' on Instance uuid c3e98a32-fd92-4873-a060-88aaf76bf1fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:25:02 np0005539564 nova_compute[226295]: 2025-11-29 08:25:02.956 226310 DEBUG nova.objects.instance [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'keypairs' on Instance uuid c3e98a32-fd92-4873-a060-88aaf76bf1fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:25:03 np0005539564 nova_compute[226295]: 2025-11-29 08:25:03.159 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:03.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:03 np0005539564 nova_compute[226295]: 2025-11-29 08:25:03.614 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Updating instance_info_cache with network_info: [{"id": "a7a9e323-49eb-415e-85cd-322403ba6517", "address": "fa:16:3e:bc:a3:e3", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a9e323-49", "ovs_interfaceid": "a7a9e323-49eb-415e-85cd-322403ba6517", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:03 np0005539564 nova_compute[226295]: 2025-11-29 08:25:03.637 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-c3e98a32-fd92-4873-a060-88aaf76bf1fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:25:03 np0005539564 nova_compute[226295]: 2025-11-29 08:25:03.638 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:25:03 np0005539564 nova_compute[226295]: 2025-11-29 08:25:03.639 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:25:03 np0005539564 nova_compute[226295]: 2025-11-29 08:25:03.640 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:25:03 np0005539564 nova_compute[226295]: 2025-11-29 08:25:03.640 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:25:03 np0005539564 nova_compute[226295]: 2025-11-29 08:25:03.670 226310 DEBUG nova.compute.manager [req-fbf36fd0-0e33-4e6e-bbed-2fdd58641478 req-c78f673e-34ab-4a72-9b83-b83555955eb8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Received event network-vif-unplugged-a7a9e323-49eb-415e-85cd-322403ba6517 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:25:03 np0005539564 nova_compute[226295]: 2025-11-29 08:25:03.671 226310 DEBUG oslo_concurrency.lockutils [req-fbf36fd0-0e33-4e6e-bbed-2fdd58641478 req-c78f673e-34ab-4a72-9b83-b83555955eb8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:03 np0005539564 nova_compute[226295]: 2025-11-29 08:25:03.671 226310 DEBUG oslo_concurrency.lockutils [req-fbf36fd0-0e33-4e6e-bbed-2fdd58641478 req-c78f673e-34ab-4a72-9b83-b83555955eb8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:03 np0005539564 nova_compute[226295]: 2025-11-29 08:25:03.672 226310 DEBUG oslo_concurrency.lockutils [req-fbf36fd0-0e33-4e6e-bbed-2fdd58641478 req-c78f673e-34ab-4a72-9b83-b83555955eb8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:03 np0005539564 nova_compute[226295]: 2025-11-29 08:25:03.672 226310 DEBUG nova.compute.manager [req-fbf36fd0-0e33-4e6e-bbed-2fdd58641478 req-c78f673e-34ab-4a72-9b83-b83555955eb8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] No waiting events found dispatching network-vif-unplugged-a7a9e323-49eb-415e-85cd-322403ba6517 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:25:03 np0005539564 nova_compute[226295]: 2025-11-29 08:25:03.673 226310 WARNING nova.compute.manager [req-fbf36fd0-0e33-4e6e-bbed-2fdd58641478 req-c78f673e-34ab-4a72-9b83-b83555955eb8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Received unexpected event network-vif-unplugged-a7a9e323-49eb-415e-85cd-322403ba6517 for instance with vm_state active and task_state rescuing.
Nov 29 03:25:03 np0005539564 nova_compute[226295]: 2025-11-29 08:25:03.690 226310 INFO nova.virt.libvirt.driver [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Creating config drive at /var/lib/nova/instances/c3e98a32-fd92-4873-a060-88aaf76bf1fc/disk.config.rescue
Nov 29 03:25:03 np0005539564 nova_compute[226295]: 2025-11-29 08:25:03.700 226310 DEBUG oslo_concurrency.processutils [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c3e98a32-fd92-4873-a060-88aaf76bf1fc/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkw4dvy38 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:25:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:03.739 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:03.740 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:03.740 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:03 np0005539564 nova_compute[226295]: 2025-11-29 08:25:03.848 226310 DEBUG oslo_concurrency.processutils [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c3e98a32-fd92-4873-a060-88aaf76bf1fc/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkw4dvy38" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:25:03 np0005539564 nova_compute[226295]: 2025-11-29 08:25:03.891 226310 DEBUG nova.storage.rbd_utils [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image c3e98a32-fd92-4873-a060-88aaf76bf1fc_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:25:03 np0005539564 nova_compute[226295]: 2025-11-29 08:25:03.897 226310 DEBUG oslo_concurrency.processutils [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c3e98a32-fd92-4873-a060-88aaf76bf1fc/disk.config.rescue c3e98a32-fd92-4873-a060-88aaf76bf1fc_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:25:04 np0005539564 nova_compute[226295]: 2025-11-29 08:25:04.094 226310 DEBUG oslo_concurrency.processutils [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c3e98a32-fd92-4873-a060-88aaf76bf1fc/disk.config.rescue c3e98a32-fd92-4873-a060-88aaf76bf1fc_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:25:04 np0005539564 nova_compute[226295]: 2025-11-29 08:25:04.095 226310 INFO nova.virt.libvirt.driver [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Deleting local config drive /var/lib/nova/instances/c3e98a32-fd92-4873-a060-88aaf76bf1fc/disk.config.rescue because it was imported into RBD.
Nov 29 03:25:04 np0005539564 kernel: tapa7a9e323-49: entered promiscuous mode
Nov 29 03:25:04 np0005539564 NetworkManager[48997]: <info>  [1764404704.1693] manager: (tapa7a9e323-49): new Tun device (/org/freedesktop/NetworkManager/Devices/265)
Nov 29 03:25:04 np0005539564 nova_compute[226295]: 2025-11-29 08:25:04.169 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:04 np0005539564 ovn_controller[130591]: 2025-11-29T08:25:04Z|00580|binding|INFO|Claiming lport a7a9e323-49eb-415e-85cd-322403ba6517 for this chassis.
Nov 29 03:25:04 np0005539564 ovn_controller[130591]: 2025-11-29T08:25:04Z|00581|binding|INFO|a7a9e323-49eb-415e-85cd-322403ba6517: Claiming fa:16:3e:bc:a3:e3 10.100.0.8
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.178 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:a3:e3 10.100.0.8'], port_security=['fa:16:3e:bc:a3:e3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c3e98a32-fd92-4873-a060-88aaf76bf1fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7008b597-8de2-4973-801f-fcc733e4f6c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09cc8c3182d845f597dda064f9013941', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'dbe43642-7b06-4c12-a982-e7ee16790d67', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f1261764-1af6-4456-be86-7981c6d9ba2a, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=a7a9e323-49eb-415e-85cd-322403ba6517) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.180 139780 INFO neutron.agent.ovn.metadata.agent [-] Port a7a9e323-49eb-415e-85cd-322403ba6517 in datapath 7008b597-8de2-4973-801f-fcc733e4f6c9 bound to our chassis#033[00m
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.183 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7008b597-8de2-4973-801f-fcc733e4f6c9#033[00m
Nov 29 03:25:04 np0005539564 nova_compute[226295]: 2025-11-29 08:25:04.187 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:04 np0005539564 ovn_controller[130591]: 2025-11-29T08:25:04Z|00582|binding|INFO|Setting lport a7a9e323-49eb-415e-85cd-322403ba6517 ovn-installed in OVS
Nov 29 03:25:04 np0005539564 ovn_controller[130591]: 2025-11-29T08:25:04Z|00583|binding|INFO|Setting lport a7a9e323-49eb-415e-85cd-322403ba6517 up in Southbound
Nov 29 03:25:04 np0005539564 nova_compute[226295]: 2025-11-29 08:25:04.192 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.197 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[cd2292a5-3729-411d-986c-682def771630]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.198 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7008b597-81 in ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.201 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7008b597-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.201 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[cda7b37f-a8fd-4946-8c60-ddf77991810f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.202 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f386f7da-b539-4f21-bc0e-f024999f8320]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:04 np0005539564 systemd-udevd[283043]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.218 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[dfaac1cd-5eab-476b-8486-1044b7cd30e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:04 np0005539564 systemd-machined[190128]: New machine qemu-72-instance-0000009c.
Nov 29 03:25:04 np0005539564 NetworkManager[48997]: <info>  [1764404704.2276] device (tapa7a9e323-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:25:04 np0005539564 NetworkManager[48997]: <info>  [1764404704.2283] device (tapa7a9e323-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:25:04 np0005539564 systemd[1]: Started Virtual Machine qemu-72-instance-0000009c.
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.234 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b53854a6-bb04-4e41-be16-d275b2f793a3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.265 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[38a039c0-cf68-48bb-aa01-44ae0408e585]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.271 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[05c4d53d-b139-4c79-af1e-e6fa5aed95a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:04 np0005539564 NetworkManager[48997]: <info>  [1764404704.2722] manager: (tap7008b597-80): new Veth device (/org/freedesktop/NetworkManager/Devices/266)
Nov 29 03:25:04 np0005539564 systemd-udevd[283046]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.304 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[717394aa-fafb-4b62-b88a-11cc6c82bb23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.307 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[5e53df70-b4bc-45d1-9be5-2dcf7453a941]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:04 np0005539564 NetworkManager[48997]: <info>  [1764404704.3398] device (tap7008b597-80): carrier: link connected
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.346 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[c90ada3b-2a1b-4531-8088-1fd3c81d173f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.367 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[fa14f56e-f307-4d70-a5b5-faf094c04a15]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7008b597-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2c:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 767902, 'reachable_time': 28327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283074, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.383 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[46afbc40-20ba-48d8-85f7-6683dade8ecd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:2c65'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 767902, 'tstamp': 767902}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283075, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.405 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[370d29ff-ce11-4880-9c36-667253e1eb9d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7008b597-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2c:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 767902, 'reachable_time': 28327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283076, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.434 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[481b3bd7-dba4-49a7-bc6a-587987dcd242]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.493 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[39960f08-e3a0-4cef-ab35-3bb3c37ea1ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.494 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7008b597-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.494 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.495 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7008b597-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:04 np0005539564 kernel: tap7008b597-80: entered promiscuous mode
Nov 29 03:25:04 np0005539564 NetworkManager[48997]: <info>  [1764404704.4975] manager: (tap7008b597-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/267)
Nov 29 03:25:04 np0005539564 nova_compute[226295]: 2025-11-29 08:25:04.498 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.499 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7008b597-80, col_values=(('external_ids', {'iface-id': '42a41b42-1527-4cfa-9dcf-4b7f34b092b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:04 np0005539564 nova_compute[226295]: 2025-11-29 08:25:04.501 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:04 np0005539564 ovn_controller[130591]: 2025-11-29T08:25:04Z|00584|binding|INFO|Releasing lport 42a41b42-1527-4cfa-9dcf-4b7f34b092b7 from this chassis (sb_readonly=0)
Nov 29 03:25:04 np0005539564 nova_compute[226295]: 2025-11-29 08:25:04.515 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.518 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7008b597-8de2-4973-801f-fcc733e4f6c9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7008b597-8de2-4973-801f-fcc733e4f6c9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.519 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4e5cd524-7529-43ad-a260-2945f9b3ab66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.520 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-7008b597-8de2-4973-801f-fcc733e4f6c9
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/7008b597-8de2-4973-801f-fcc733e4f6c9.pid.haproxy
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 7008b597-8de2-4973-801f-fcc733e4f6c9
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:25:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:04.521 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'env', 'PROCESS_TAG=haproxy-7008b597-8de2-4973-801f-fcc733e4f6c9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7008b597-8de2-4973-801f-fcc733e4f6c9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:25:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:04.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:04 np0005539564 podman[283166]: 2025-11-29 08:25:04.919763013 +0000 UTC m=+0.069517680 container create e30bff3ebde152bfe9d1ac4e665394fa1cd6b0cfd5c50bc3f82d976b60a5aea6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 03:25:04 np0005539564 nova_compute[226295]: 2025-11-29 08:25:04.950 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Removed pending event for c3e98a32-fd92-4873-a060-88aaf76bf1fc due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:25:04 np0005539564 nova_compute[226295]: 2025-11-29 08:25:04.951 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404704.9497209, c3e98a32-fd92-4873-a060-88aaf76bf1fc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:04 np0005539564 nova_compute[226295]: 2025-11-29 08:25:04.952 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:25:04 np0005539564 nova_compute[226295]: 2025-11-29 08:25:04.956 226310 DEBUG nova.compute.manager [None req-d5269b95-d093-4124-b1c2-61f21a3e9a1d dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:04 np0005539564 podman[283166]: 2025-11-29 08:25:04.878263451 +0000 UTC m=+0.028018178 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:25:04 np0005539564 systemd[1]: Started libpod-conmon-e30bff3ebde152bfe9d1ac4e665394fa1cd6b0cfd5c50bc3f82d976b60a5aea6.scope.
Nov 29 03:25:04 np0005539564 nova_compute[226295]: 2025-11-29 08:25:04.992 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:04 np0005539564 nova_compute[226295]: 2025-11-29 08:25:04.996 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:25:05 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:25:05 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe103fd52a604e98ae48d6723fe8399987b6df35d07b9990afa74fc8c3f1d14b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:05 np0005539564 podman[283166]: 2025-11-29 08:25:05.024475935 +0000 UTC m=+0.174230612 container init e30bff3ebde152bfe9d1ac4e665394fa1cd6b0cfd5c50bc3f82d976b60a5aea6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:25:05 np0005539564 podman[283166]: 2025-11-29 08:25:05.029789597 +0000 UTC m=+0.179544274 container start e30bff3ebde152bfe9d1ac4e665394fa1cd6b0cfd5c50bc3f82d976b60a5aea6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:25:05 np0005539564 nova_compute[226295]: 2025-11-29 08:25:05.039 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Nov 29 03:25:05 np0005539564 nova_compute[226295]: 2025-11-29 08:25:05.040 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404704.9513319, c3e98a32-fd92-4873-a060-88aaf76bf1fc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:05 np0005539564 nova_compute[226295]: 2025-11-29 08:25:05.040 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] VM Started (Lifecycle Event)#033[00m
Nov 29 03:25:05 np0005539564 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[283184]: [NOTICE]   (283188) : New worker (283190) forked
Nov 29 03:25:05 np0005539564 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[283184]: [NOTICE]   (283188) : Loading success.
Nov 29 03:25:05 np0005539564 nova_compute[226295]: 2025-11-29 08:25:05.066 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:05 np0005539564 nova_compute[226295]: 2025-11-29 08:25:05.069 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:25:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:05.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:06 np0005539564 nova_compute[226295]: 2025-11-29 08:25:06.543 226310 DEBUG nova.compute.manager [req-136480bd-fb29-4149-a7d3-a3bdab0f1641 req-4e53b046-f216-4194-bb25-fad6febb99ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Received event network-vif-plugged-a7a9e323-49eb-415e-85cd-322403ba6517 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:06 np0005539564 nova_compute[226295]: 2025-11-29 08:25:06.544 226310 DEBUG oslo_concurrency.lockutils [req-136480bd-fb29-4149-a7d3-a3bdab0f1641 req-4e53b046-f216-4194-bb25-fad6febb99ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:06 np0005539564 nova_compute[226295]: 2025-11-29 08:25:06.545 226310 DEBUG oslo_concurrency.lockutils [req-136480bd-fb29-4149-a7d3-a3bdab0f1641 req-4e53b046-f216-4194-bb25-fad6febb99ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:06 np0005539564 nova_compute[226295]: 2025-11-29 08:25:06.545 226310 DEBUG oslo_concurrency.lockutils [req-136480bd-fb29-4149-a7d3-a3bdab0f1641 req-4e53b046-f216-4194-bb25-fad6febb99ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:06 np0005539564 nova_compute[226295]: 2025-11-29 08:25:06.546 226310 DEBUG nova.compute.manager [req-136480bd-fb29-4149-a7d3-a3bdab0f1641 req-4e53b046-f216-4194-bb25-fad6febb99ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] No waiting events found dispatching network-vif-plugged-a7a9e323-49eb-415e-85cd-322403ba6517 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:06 np0005539564 nova_compute[226295]: 2025-11-29 08:25:06.546 226310 WARNING nova.compute.manager [req-136480bd-fb29-4149-a7d3-a3bdab0f1641 req-4e53b046-f216-4194-bb25-fad6febb99ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Received unexpected event network-vif-plugged-a7a9e323-49eb-415e-85cd-322403ba6517 for instance with vm_state rescued and task_state None.#033[00m
Nov 29 03:25:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:06.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:25:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:07.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:25:08 np0005539564 nova_compute[226295]: 2025-11-29 08:25:08.160 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:08 np0005539564 nova_compute[226295]: 2025-11-29 08:25:08.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:08 np0005539564 nova_compute[226295]: 2025-11-29 08:25:08.382 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:08 np0005539564 nova_compute[226295]: 2025-11-29 08:25:08.383 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:08 np0005539564 nova_compute[226295]: 2025-11-29 08:25:08.383 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:08 np0005539564 nova_compute[226295]: 2025-11-29 08:25:08.384 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:25:08 np0005539564 nova_compute[226295]: 2025-11-29 08:25:08.384 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:25:08 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1154480924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:25:08 np0005539564 nova_compute[226295]: 2025-11-29 08:25:08.843 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:08.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:08 np0005539564 nova_compute[226295]: 2025-11-29 08:25:08.946 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:25:08 np0005539564 nova_compute[226295]: 2025-11-29 08:25:08.947 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:25:08 np0005539564 nova_compute[226295]: 2025-11-29 08:25:08.947 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:25:09 np0005539564 nova_compute[226295]: 2025-11-29 08:25:09.149 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:25:09 np0005539564 nova_compute[226295]: 2025-11-29 08:25:09.150 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4095MB free_disk=20.809967041015625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:25:09 np0005539564 nova_compute[226295]: 2025-11-29 08:25:09.150 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:09 np0005539564 nova_compute[226295]: 2025-11-29 08:25:09.150 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:09 np0005539564 nova_compute[226295]: 2025-11-29 08:25:09.329 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance c3e98a32-fd92-4873-a060-88aaf76bf1fc actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:25:09 np0005539564 nova_compute[226295]: 2025-11-29 08:25:09.329 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:25:09 np0005539564 nova_compute[226295]: 2025-11-29 08:25:09.330 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:25:09 np0005539564 nova_compute[226295]: 2025-11-29 08:25:09.380 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:09.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:25:09 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3431828444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:25:09 np0005539564 nova_compute[226295]: 2025-11-29 08:25:09.829 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:09 np0005539564 nova_compute[226295]: 2025-11-29 08:25:09.835 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:25:10 np0005539564 nova_compute[226295]: 2025-11-29 08:25:10.786 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:25:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:10.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:10 np0005539564 nova_compute[226295]: 2025-11-29 08:25:10.888 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:25:10 np0005539564 nova_compute[226295]: 2025-11-29 08:25:10.889 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:25:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:11.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:25:11 np0005539564 nova_compute[226295]: 2025-11-29 08:25:11.620 226310 DEBUG nova.compute.manager [req-0cbcb9ef-8895-4e07-b823-32857f855c04 req-69813f92-e514-421d-bc97-7f409f5e8a8a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Received event network-vif-plugged-a7a9e323-49eb-415e-85cd-322403ba6517 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:11 np0005539564 nova_compute[226295]: 2025-11-29 08:25:11.622 226310 DEBUG oslo_concurrency.lockutils [req-0cbcb9ef-8895-4e07-b823-32857f855c04 req-69813f92-e514-421d-bc97-7f409f5e8a8a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:11 np0005539564 nova_compute[226295]: 2025-11-29 08:25:11.622 226310 DEBUG oslo_concurrency.lockutils [req-0cbcb9ef-8895-4e07-b823-32857f855c04 req-69813f92-e514-421d-bc97-7f409f5e8a8a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:11 np0005539564 nova_compute[226295]: 2025-11-29 08:25:11.623 226310 DEBUG oslo_concurrency.lockutils [req-0cbcb9ef-8895-4e07-b823-32857f855c04 req-69813f92-e514-421d-bc97-7f409f5e8a8a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:11 np0005539564 nova_compute[226295]: 2025-11-29 08:25:11.624 226310 DEBUG nova.compute.manager [req-0cbcb9ef-8895-4e07-b823-32857f855c04 req-69813f92-e514-421d-bc97-7f409f5e8a8a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] No waiting events found dispatching network-vif-plugged-a7a9e323-49eb-415e-85cd-322403ba6517 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:11 np0005539564 nova_compute[226295]: 2025-11-29 08:25:11.625 226310 WARNING nova.compute.manager [req-0cbcb9ef-8895-4e07-b823-32857f855c04 req-69813f92-e514-421d-bc97-7f409f5e8a8a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Received unexpected event network-vif-plugged-a7a9e323-49eb-415e-85cd-322403ba6517 for instance with vm_state rescued and task_state None.#033[00m
Nov 29 03:25:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:12.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:13 np0005539564 nova_compute[226295]: 2025-11-29 08:25:13.163 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:13.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:14 np0005539564 nova_compute[226295]: 2025-11-29 08:25:14.667 226310 DEBUG nova.compute.manager [req-ddf0f1ed-0f09-4377-a5ac-37c3cf02b6c4 req-622641fb-a968-4a2c-9cda-7c31dde3194e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Received event network-vif-plugged-a7a9e323-49eb-415e-85cd-322403ba6517 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:14 np0005539564 nova_compute[226295]: 2025-11-29 08:25:14.670 226310 DEBUG oslo_concurrency.lockutils [req-ddf0f1ed-0f09-4377-a5ac-37c3cf02b6c4 req-622641fb-a968-4a2c-9cda-7c31dde3194e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:14 np0005539564 nova_compute[226295]: 2025-11-29 08:25:14.671 226310 DEBUG oslo_concurrency.lockutils [req-ddf0f1ed-0f09-4377-a5ac-37c3cf02b6c4 req-622641fb-a968-4a2c-9cda-7c31dde3194e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:14 np0005539564 nova_compute[226295]: 2025-11-29 08:25:14.672 226310 DEBUG oslo_concurrency.lockutils [req-ddf0f1ed-0f09-4377-a5ac-37c3cf02b6c4 req-622641fb-a968-4a2c-9cda-7c31dde3194e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:14 np0005539564 nova_compute[226295]: 2025-11-29 08:25:14.672 226310 DEBUG nova.compute.manager [req-ddf0f1ed-0f09-4377-a5ac-37c3cf02b6c4 req-622641fb-a968-4a2c-9cda-7c31dde3194e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] No waiting events found dispatching network-vif-plugged-a7a9e323-49eb-415e-85cd-322403ba6517 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:14 np0005539564 nova_compute[226295]: 2025-11-29 08:25:14.673 226310 WARNING nova.compute.manager [req-ddf0f1ed-0f09-4377-a5ac-37c3cf02b6c4 req-622641fb-a968-4a2c-9cda-7c31dde3194e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Received unexpected event network-vif-plugged-a7a9e323-49eb-415e-85cd-322403ba6517 for instance with vm_state rescued and task_state None.#033[00m
Nov 29 03:25:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:14.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:15.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:25:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:16.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:17.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:18 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:25:18 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:25:18 np0005539564 nova_compute[226295]: 2025-11-29 08:25:18.165 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:18 np0005539564 nova_compute[226295]: 2025-11-29 08:25:18.169 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:18.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:19.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:19 np0005539564 podman[283377]: 2025-11-29 08:25:19.543003683 +0000 UTC m=+0.086319465 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:25:19 np0005539564 podman[283378]: 2025-11-29 08:25:19.54474413 +0000 UTC m=+0.080394355 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:25:19 np0005539564 podman[283376]: 2025-11-29 08:25:19.556171079 +0000 UTC m=+0.108661219 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 03:25:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:25:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:20.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:25:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:21.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:22.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:23 np0005539564 nova_compute[226295]: 2025-11-29 08:25:23.171 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:25:23 np0005539564 nova_compute[226295]: 2025-11-29 08:25:23.172 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:25:23 np0005539564 nova_compute[226295]: 2025-11-29 08:25:23.172 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 29 03:25:23 np0005539564 nova_compute[226295]: 2025-11-29 08:25:23.172 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 29 03:25:23 np0005539564 nova_compute[226295]: 2025-11-29 08:25:23.177 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:23 np0005539564 nova_compute[226295]: 2025-11-29 08:25:23.178 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 29 03:25:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:23.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:24.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:24 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:25:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:25.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:25 np0005539564 ovn_controller[130591]: 2025-11-29T08:25:25Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bc:a3:e3 10.100.0.8
Nov 29 03:25:25 np0005539564 ovn_controller[130591]: 2025-11-29T08:25:25Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bc:a3:e3 10.100.0.8
Nov 29 03:25:25 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:25:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:26.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:27.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:28 np0005539564 nova_compute[226295]: 2025-11-29 08:25:28.179 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:28.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:29.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:30.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:31.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:32.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:33 np0005539564 nova_compute[226295]: 2025-11-29 08:25:33.182 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:25:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:33.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:34.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:35.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:25:36 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1283844332' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:25:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:36.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:37.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:37 np0005539564 nova_compute[226295]: 2025-11-29 08:25:37.882 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:25:38 np0005539564 nova_compute[226295]: 2025-11-29 08:25:38.184 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:38 np0005539564 nova_compute[226295]: 2025-11-29 08:25:38.569 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:38.569 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:25:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:38.571 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:25:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:38.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:39.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:39 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:39.574 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:25:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e344 e344: 3 total, 3 up, 3 in
Nov 29 03:25:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:40.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:41.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #106. Immutable memtables: 0.
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:25:42.634565) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 106
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404742634603, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 789, "num_deletes": 251, "total_data_size": 1468149, "memory_usage": 1485696, "flush_reason": "Manual Compaction"}
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #107: started
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404742647504, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 107, "file_size": 968521, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 55332, "largest_seqno": 56116, "table_properties": {"data_size": 964736, "index_size": 1565, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8869, "raw_average_key_size": 19, "raw_value_size": 957031, "raw_average_value_size": 2131, "num_data_blocks": 68, "num_entries": 449, "num_filter_entries": 449, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404688, "oldest_key_time": 1764404688, "file_creation_time": 1764404742, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 12996 microseconds, and 6592 cpu microseconds.
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:25:42.647559) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #107: 968521 bytes OK
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:25:42.647583) [db/memtable_list.cc:519] [default] Level-0 commit table #107 started
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:25:42.649681) [db/memtable_list.cc:722] [default] Level-0 commit table #107: memtable #1 done
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:25:42.649702) EVENT_LOG_v1 {"time_micros": 1764404742649695, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:25:42.649723) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 1463995, prev total WAL file size 1463995, number of live WAL files 2.
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000103.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:25:42.650771) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [107(945KB)], [105(13MB)]
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404742650870, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [107], "files_L6": [105], "score": -1, "input_data_size": 15176096, "oldest_snapshot_seqno": -1}
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #108: 8614 keys, 13314662 bytes, temperature: kUnknown
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404742798664, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 108, "file_size": 13314662, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13255765, "index_size": 36257, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21573, "raw_key_size": 224402, "raw_average_key_size": 26, "raw_value_size": 13100985, "raw_average_value_size": 1520, "num_data_blocks": 1417, "num_entries": 8614, "num_filter_entries": 8614, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764404742, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 108, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:25:42.799085) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 13314662 bytes
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:25:42.800867) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 102.6 rd, 90.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 13.5 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(29.4) write-amplify(13.7) OK, records in: 9132, records dropped: 518 output_compression: NoCompression
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:25:42.800899) EVENT_LOG_v1 {"time_micros": 1764404742800885, "job": 66, "event": "compaction_finished", "compaction_time_micros": 147885, "compaction_time_cpu_micros": 56486, "output_level": 6, "num_output_files": 1, "total_output_size": 13314662, "num_input_records": 9132, "num_output_records": 8614, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404742801545, "job": 66, "event": "table_file_deletion", "file_number": 107}
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000105.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404742806847, "job": 66, "event": "table_file_deletion", "file_number": 105}
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:25:42.650575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:25:42.806986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:25:42.806994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:25:42.807000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:25:42.807004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:25:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:25:42.807008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:25:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:42.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:43 np0005539564 nova_compute[226295]: 2025-11-29 08:25:43.186 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:43.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:43 np0005539564 ceph-osd[79212]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Nov 29 03:25:44 np0005539564 nova_compute[226295]: 2025-11-29 08:25:44.615 226310 DEBUG oslo_concurrency.lockutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "b86b46f9-7d8f-414f-af87-3822510de392" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:44 np0005539564 nova_compute[226295]: 2025-11-29 08:25:44.616 226310 DEBUG oslo_concurrency.lockutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:44 np0005539564 nova_compute[226295]: 2025-11-29 08:25:44.636 226310 DEBUG nova.compute.manager [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:25:44 np0005539564 nova_compute[226295]: 2025-11-29 08:25:44.735 226310 DEBUG oslo_concurrency.lockutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:44 np0005539564 nova_compute[226295]: 2025-11-29 08:25:44.736 226310 DEBUG oslo_concurrency.lockutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:44 np0005539564 nova_compute[226295]: 2025-11-29 08:25:44.746 226310 DEBUG nova.virt.hardware [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:25:44 np0005539564 nova_compute[226295]: 2025-11-29 08:25:44.747 226310 INFO nova.compute.claims [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Claim successful on node compute-1.ctlplane.example.com
Nov 29 03:25:44 np0005539564 nova_compute[226295]: 2025-11-29 08:25:44.895 226310 DEBUG oslo_concurrency.processutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:25:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:44.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:25:45 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/781650083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:25:45 np0005539564 nova_compute[226295]: 2025-11-29 08:25:45.418 226310 DEBUG oslo_concurrency.processutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:45 np0005539564 nova_compute[226295]: 2025-11-29 08:25:45.428 226310 DEBUG nova.compute.provider_tree [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:25:45 np0005539564 nova_compute[226295]: 2025-11-29 08:25:45.454 226310 DEBUG nova.scheduler.client.report [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:25:45 np0005539564 nova_compute[226295]: 2025-11-29 08:25:45.506 226310 DEBUG oslo_concurrency.lockutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:45 np0005539564 nova_compute[226295]: 2025-11-29 08:25:45.507 226310 DEBUG nova.compute.manager [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:25:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:45.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:45 np0005539564 nova_compute[226295]: 2025-11-29 08:25:45.573 226310 DEBUG nova.compute.manager [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:25:45 np0005539564 nova_compute[226295]: 2025-11-29 08:25:45.573 226310 DEBUG nova.network.neutron [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:25:45 np0005539564 nova_compute[226295]: 2025-11-29 08:25:45.593 226310 INFO nova.virt.libvirt.driver [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:25:45 np0005539564 nova_compute[226295]: 2025-11-29 08:25:45.613 226310 DEBUG nova.compute.manager [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:25:45 np0005539564 nova_compute[226295]: 2025-11-29 08:25:45.700 226310 DEBUG nova.compute.manager [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:25:45 np0005539564 nova_compute[226295]: 2025-11-29 08:25:45.701 226310 DEBUG nova.virt.libvirt.driver [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:25:45 np0005539564 nova_compute[226295]: 2025-11-29 08:25:45.701 226310 INFO nova.virt.libvirt.driver [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Creating image(s)#033[00m
Nov 29 03:25:45 np0005539564 nova_compute[226295]: 2025-11-29 08:25:45.732 226310 DEBUG nova.storage.rbd_utils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] rbd image b86b46f9-7d8f-414f-af87-3822510de392_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:45 np0005539564 nova_compute[226295]: 2025-11-29 08:25:45.774 226310 DEBUG nova.storage.rbd_utils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] rbd image b86b46f9-7d8f-414f-af87-3822510de392_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:45 np0005539564 nova_compute[226295]: 2025-11-29 08:25:45.807 226310 DEBUG nova.storage.rbd_utils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] rbd image b86b46f9-7d8f-414f-af87-3822510de392_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:45 np0005539564 nova_compute[226295]: 2025-11-29 08:25:45.811 226310 DEBUG oslo_concurrency.lockutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "bf5c4d7c97f9d868dc1070f113a186600eb4ee72" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:45 np0005539564 nova_compute[226295]: 2025-11-29 08:25:45.812 226310 DEBUG oslo_concurrency.lockutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "bf5c4d7c97f9d868dc1070f113a186600eb4ee72" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:46 np0005539564 nova_compute[226295]: 2025-11-29 08:25:46.006 226310 DEBUG nova.policy [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0cbb3ac39ebd4876ad23f2a6d1c50166', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f9a9decdabb1480da8f7d039e8b3d414', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:25:46 np0005539564 nova_compute[226295]: 2025-11-29 08:25:46.031 226310 DEBUG nova.virt.libvirt.imagebackend [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Image locations are: [{'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/d9469d26-d189-44ee-a659-1398ee5e0da2/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/d9469d26-d189-44ee-a659-1398ee5e0da2/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:25:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:25:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:46.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:25:47 np0005539564 nova_compute[226295]: 2025-11-29 08:25:47.045 226310 DEBUG nova.network.neutron [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Successfully created port: 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:25:47 np0005539564 nova_compute[226295]: 2025-11-29 08:25:47.476 226310 DEBUG oslo_concurrency.processutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bf5c4d7c97f9d868dc1070f113a186600eb4ee72.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:47.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:47 np0005539564 nova_compute[226295]: 2025-11-29 08:25:47.562 226310 DEBUG oslo_concurrency.processutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bf5c4d7c97f9d868dc1070f113a186600eb4ee72.part --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:47 np0005539564 nova_compute[226295]: 2025-11-29 08:25:47.563 226310 DEBUG nova.virt.images [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] d9469d26-d189-44ee-a659-1398ee5e0da2 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Nov 29 03:25:47 np0005539564 nova_compute[226295]: 2025-11-29 08:25:47.564 226310 DEBUG nova.privsep.utils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 29 03:25:47 np0005539564 nova_compute[226295]: 2025-11-29 08:25:47.565 226310 DEBUG oslo_concurrency.processutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/bf5c4d7c97f9d868dc1070f113a186600eb4ee72.part /var/lib/nova/instances/_base/bf5c4d7c97f9d868dc1070f113a186600eb4ee72.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:47 np0005539564 nova_compute[226295]: 2025-11-29 08:25:47.838 226310 DEBUG oslo_concurrency.processutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/bf5c4d7c97f9d868dc1070f113a186600eb4ee72.part /var/lib/nova/instances/_base/bf5c4d7c97f9d868dc1070f113a186600eb4ee72.converted" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:47 np0005539564 nova_compute[226295]: 2025-11-29 08:25:47.843 226310 DEBUG oslo_concurrency.processutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bf5c4d7c97f9d868dc1070f113a186600eb4ee72.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:47 np0005539564 nova_compute[226295]: 2025-11-29 08:25:47.925 226310 DEBUG oslo_concurrency.processutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bf5c4d7c97f9d868dc1070f113a186600eb4ee72.converted --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:47 np0005539564 nova_compute[226295]: 2025-11-29 08:25:47.927 226310 DEBUG oslo_concurrency.lockutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "bf5c4d7c97f9d868dc1070f113a186600eb4ee72" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:47 np0005539564 nova_compute[226295]: 2025-11-29 08:25:47.967 226310 DEBUG nova.storage.rbd_utils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] rbd image b86b46f9-7d8f-414f-af87-3822510de392_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:47 np0005539564 nova_compute[226295]: 2025-11-29 08:25:47.972 226310 DEBUG oslo_concurrency.processutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/bf5c4d7c97f9d868dc1070f113a186600eb4ee72 b86b46f9-7d8f-414f-af87-3822510de392_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:48 np0005539564 nova_compute[226295]: 2025-11-29 08:25:48.193 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:48 np0005539564 nova_compute[226295]: 2025-11-29 08:25:48.232 226310 DEBUG nova.network.neutron [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Successfully updated port: 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:25:48 np0005539564 nova_compute[226295]: 2025-11-29 08:25:48.257 226310 DEBUG oslo_concurrency.lockutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:25:48 np0005539564 nova_compute[226295]: 2025-11-29 08:25:48.257 226310 DEBUG oslo_concurrency.lockutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquired lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:25:48 np0005539564 nova_compute[226295]: 2025-11-29 08:25:48.257 226310 DEBUG nova.network.neutron [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:25:48 np0005539564 nova_compute[226295]: 2025-11-29 08:25:48.332 226310 DEBUG oslo_concurrency.processutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/bf5c4d7c97f9d868dc1070f113a186600eb4ee72 b86b46f9-7d8f-414f-af87-3822510de392_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:48 np0005539564 nova_compute[226295]: 2025-11-29 08:25:48.404 226310 DEBUG nova.storage.rbd_utils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] resizing rbd image b86b46f9-7d8f-414f-af87-3822510de392_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:25:48 np0005539564 nova_compute[226295]: 2025-11-29 08:25:48.529 226310 DEBUG nova.objects.instance [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lazy-loading 'migration_context' on Instance uuid b86b46f9-7d8f-414f-af87-3822510de392 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:48 np0005539564 nova_compute[226295]: 2025-11-29 08:25:48.544 226310 DEBUG nova.network.neutron [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:25:48 np0005539564 nova_compute[226295]: 2025-11-29 08:25:48.551 226310 DEBUG nova.virt.libvirt.driver [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:25:48 np0005539564 nova_compute[226295]: 2025-11-29 08:25:48.552 226310 DEBUG nova.virt.libvirt.driver [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Ensure instance console log exists: /var/lib/nova/instances/b86b46f9-7d8f-414f-af87-3822510de392/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:25:48 np0005539564 nova_compute[226295]: 2025-11-29 08:25:48.552 226310 DEBUG oslo_concurrency.lockutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:48 np0005539564 nova_compute[226295]: 2025-11-29 08:25:48.553 226310 DEBUG oslo_concurrency.lockutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:48 np0005539564 nova_compute[226295]: 2025-11-29 08:25:48.553 226310 DEBUG oslo_concurrency.lockutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:48 np0005539564 nova_compute[226295]: 2025-11-29 08:25:48.873 226310 DEBUG nova.compute.manager [req-84873315-921e-42bd-8872-c4187902ebb5 req-ee314015-9ddc-4f6d-8a43-ab650ef47fe5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Received event network-changed-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:48 np0005539564 nova_compute[226295]: 2025-11-29 08:25:48.874 226310 DEBUG nova.compute.manager [req-84873315-921e-42bd-8872-c4187902ebb5 req-ee314015-9ddc-4f6d-8a43-ab650ef47fe5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Refreshing instance network info cache due to event network-changed-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:25:48 np0005539564 nova_compute[226295]: 2025-11-29 08:25:48.874 226310 DEBUG oslo_concurrency.lockutils [req-84873315-921e-42bd-8872-c4187902ebb5 req-ee314015-9ddc-4f6d-8a43-ab650ef47fe5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:25:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:48.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:49.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:49 np0005539564 nova_compute[226295]: 2025-11-29 08:25:49.758 226310 DEBUG nova.network.neutron [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Updating instance_info_cache with network_info: [{"id": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "address": "fa:16:3e:da:92:75", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e5e343-ed", "ovs_interfaceid": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:49 np0005539564 nova_compute[226295]: 2025-11-29 08:25:49.808 226310 DEBUG oslo_concurrency.lockutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Releasing lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:25:49 np0005539564 nova_compute[226295]: 2025-11-29 08:25:49.809 226310 DEBUG nova.compute.manager [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Instance network_info: |[{"id": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "address": "fa:16:3e:da:92:75", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e5e343-ed", "ovs_interfaceid": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:25:49 np0005539564 nova_compute[226295]: 2025-11-29 08:25:49.810 226310 DEBUG oslo_concurrency.lockutils [req-84873315-921e-42bd-8872-c4187902ebb5 req-ee314015-9ddc-4f6d-8a43-ab650ef47fe5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:25:49 np0005539564 nova_compute[226295]: 2025-11-29 08:25:49.810 226310 DEBUG nova.network.neutron [req-84873315-921e-42bd-8872-c4187902ebb5 req-ee314015-9ddc-4f6d-8a43-ab650ef47fe5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Refreshing network info cache for port 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:25:49 np0005539564 nova_compute[226295]: 2025-11-29 08:25:49.816 226310 DEBUG nova.virt.libvirt.driver [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Start _get_guest_xml network_info=[{"id": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "address": "fa:16:3e:da:92:75", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e5e343-ed", "ovs_interfaceid": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T08:25:38Z,direct_url=<?>,disk_format='qcow2',id=d9469d26-d189-44ee-a659-1398ee5e0da2,min_disk=0,min_ram=0,name='tempest-scenario-img--138307583',owner='f9a9decdabb1480da8f7d039e8b3d414',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T08:25:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': 'd9469d26-d189-44ee-a659-1398ee5e0da2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:25:49 np0005539564 nova_compute[226295]: 2025-11-29 08:25:49.823 226310 WARNING nova.virt.libvirt.driver [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:25:49 np0005539564 nova_compute[226295]: 2025-11-29 08:25:49.833 226310 DEBUG nova.virt.libvirt.host [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:25:49 np0005539564 nova_compute[226295]: 2025-11-29 08:25:49.834 226310 DEBUG nova.virt.libvirt.host [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:25:49 np0005539564 nova_compute[226295]: 2025-11-29 08:25:49.845 226310 DEBUG nova.virt.libvirt.host [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:25:49 np0005539564 nova_compute[226295]: 2025-11-29 08:25:49.846 226310 DEBUG nova.virt.libvirt.host [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:25:49 np0005539564 nova_compute[226295]: 2025-11-29 08:25:49.848 226310 DEBUG nova.virt.libvirt.driver [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:25:49 np0005539564 nova_compute[226295]: 2025-11-29 08:25:49.848 226310 DEBUG nova.virt.hardware [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T08:25:38Z,direct_url=<?>,disk_format='qcow2',id=d9469d26-d189-44ee-a659-1398ee5e0da2,min_disk=0,min_ram=0,name='tempest-scenario-img--138307583',owner='f9a9decdabb1480da8f7d039e8b3d414',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T08:25:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:25:49 np0005539564 nova_compute[226295]: 2025-11-29 08:25:49.849 226310 DEBUG nova.virt.hardware [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:25:49 np0005539564 nova_compute[226295]: 2025-11-29 08:25:49.850 226310 DEBUG nova.virt.hardware [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:25:49 np0005539564 nova_compute[226295]: 2025-11-29 08:25:49.850 226310 DEBUG nova.virt.hardware [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:25:49 np0005539564 nova_compute[226295]: 2025-11-29 08:25:49.850 226310 DEBUG nova.virt.hardware [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:25:49 np0005539564 nova_compute[226295]: 2025-11-29 08:25:49.851 226310 DEBUG nova.virt.hardware [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:25:49 np0005539564 nova_compute[226295]: 2025-11-29 08:25:49.851 226310 DEBUG nova.virt.hardware [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:25:49 np0005539564 nova_compute[226295]: 2025-11-29 08:25:49.852 226310 DEBUG nova.virt.hardware [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:25:49 np0005539564 nova_compute[226295]: 2025-11-29 08:25:49.852 226310 DEBUG nova.virt.hardware [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:25:49 np0005539564 nova_compute[226295]: 2025-11-29 08:25:49.853 226310 DEBUG nova.virt.hardware [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:25:49 np0005539564 nova_compute[226295]: 2025-11-29 08:25:49.853 226310 DEBUG nova.virt.hardware [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:25:49 np0005539564 nova_compute[226295]: 2025-11-29 08:25:49.858 226310 DEBUG oslo_concurrency.processutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:25:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:25:50 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1181668759' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:25:50 np0005539564 nova_compute[226295]: 2025-11-29 08:25:50.383 226310 DEBUG oslo_concurrency.processutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:25:50 np0005539564 nova_compute[226295]: 2025-11-29 08:25:50.413 226310 DEBUG nova.storage.rbd_utils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] rbd image b86b46f9-7d8f-414f-af87-3822510de392_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:25:50 np0005539564 nova_compute[226295]: 2025-11-29 08:25:50.417 226310 DEBUG oslo_concurrency.processutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:25:50 np0005539564 podman[283734]: 2025-11-29 08:25:50.526329295 +0000 UTC m=+0.073953030 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125)
Nov 29 03:25:50 np0005539564 podman[283735]: 2025-11-29 08:25:50.526825008 +0000 UTC m=+0.066175509 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:25:50 np0005539564 podman[283732]: 2025-11-29 08:25:50.563242483 +0000 UTC m=+0.109858821 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:25:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:25:50 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3091909248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:25:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:25:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:50.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:25:50 np0005539564 nova_compute[226295]: 2025-11-29 08:25:50.951 226310 DEBUG oslo_concurrency.processutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:25:50 np0005539564 nova_compute[226295]: 2025-11-29 08:25:50.954 226310 DEBUG nova.virt.libvirt.vif [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:25:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-821850339',display_name='tempest-TestMinimumBasicScenario-server-821850339',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-821850339',id=160,image_ref='d9469d26-d189-44ee-a659-1398ee5e0da2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ7gdL4PKUc9tosM7g28DfdZ6SzPuAj8/oAyNdJYivPDpnhrJbV+U75MujLhc3BhkXGqLXqZ+FF7kcJucYEQCJU7T523I/wegT3xL9AQVlLwpt4RmAyZ0AklLUZTVxh90g==',key_name='tempest-TestMinimumBasicScenario-138660977',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f9a9decdabb1480da8f7d039e8b3d414',ramdisk_id='',reservation_id='r-dl7txott',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d9469d26-d189-44ee-a659-1398ee5e0da2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-1484268516',owner_user_name='tempest-TestMinimumBasicScenario-1484268516-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:25:45Z,user_data=None,user_id='0cbb3ac39ebd4876ad23f2a6d1c50166',uuid=b86b46f9-7d8f-414f-af87-3822510de392,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "address": "fa:16:3e:da:92:75", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e5e343-ed", "ovs_interfaceid": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 03:25:50 np0005539564 nova_compute[226295]: 2025-11-29 08:25:50.955 226310 DEBUG nova.network.os_vif_util [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Converting VIF {"id": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "address": "fa:16:3e:da:92:75", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e5e343-ed", "ovs_interfaceid": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:25:50 np0005539564 nova_compute[226295]: 2025-11-29 08:25:50.957 226310 DEBUG nova.network.os_vif_util [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:92:75,bridge_name='br-int',has_traffic_filtering=True,id=54e5e343-ed4d-4cf3-9d9f-2ae7ec672def,network=Network(cf206693-b177-47ba-9c63-2ab4e51898ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e5e343-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:25:50 np0005539564 nova_compute[226295]: 2025-11-29 08:25:50.958 226310 DEBUG nova.objects.instance [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lazy-loading 'pci_devices' on Instance uuid b86b46f9-7d8f-414f-af87-3822510de392 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:25:51 np0005539564 nova_compute[226295]: 2025-11-29 08:25:51.088 226310 DEBUG nova.virt.libvirt.driver [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:25:51 np0005539564 nova_compute[226295]:  <uuid>b86b46f9-7d8f-414f-af87-3822510de392</uuid>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:  <name>instance-000000a0</name>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <nova:name>tempest-TestMinimumBasicScenario-server-821850339</nova:name>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:25:49</nova:creationTime>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:25:51 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:        <nova:user uuid="0cbb3ac39ebd4876ad23f2a6d1c50166">tempest-TestMinimumBasicScenario-1484268516-project-member</nova:user>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:        <nova:project uuid="f9a9decdabb1480da8f7d039e8b3d414">tempest-TestMinimumBasicScenario-1484268516</nova:project>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="d9469d26-d189-44ee-a659-1398ee5e0da2"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:        <nova:port uuid="54e5e343-ed4d-4cf3-9d9f-2ae7ec672def">
Nov 29 03:25:51 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <entry name="serial">b86b46f9-7d8f-414f-af87-3822510de392</entry>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <entry name="uuid">b86b46f9-7d8f-414f-af87-3822510de392</entry>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/b86b46f9-7d8f-414f-af87-3822510de392_disk">
Nov 29 03:25:51 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:25:51 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/b86b46f9-7d8f-414f-af87-3822510de392_disk.config">
Nov 29 03:25:51 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:25:51 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:da:92:75"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <target dev="tap54e5e343-ed"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/b86b46f9-7d8f-414f-af87-3822510de392/console.log" append="off"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:25:51 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:25:51 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:25:51 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:25:51 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:25:51 np0005539564 nova_compute[226295]: 2025-11-29 08:25:51.090 226310 DEBUG nova.compute.manager [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Preparing to wait for external event network-vif-plugged-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:25:51 np0005539564 nova_compute[226295]: 2025-11-29 08:25:51.090 226310 DEBUG oslo_concurrency.lockutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "b86b46f9-7d8f-414f-af87-3822510de392-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:51 np0005539564 nova_compute[226295]: 2025-11-29 08:25:51.090 226310 DEBUG oslo_concurrency.lockutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:51 np0005539564 nova_compute[226295]: 2025-11-29 08:25:51.091 226310 DEBUG oslo_concurrency.lockutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:51 np0005539564 nova_compute[226295]: 2025-11-29 08:25:51.091 226310 DEBUG nova.virt.libvirt.vif [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:25:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-821850339',display_name='tempest-TestMinimumBasicScenario-server-821850339',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-821850339',id=160,image_ref='d9469d26-d189-44ee-a659-1398ee5e0da2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ7gdL4PKUc9tosM7g28DfdZ6SzPuAj8/oAyNdJYivPDpnhrJbV+U75MujLhc3BhkXGqLXqZ+FF7kcJucYEQCJU7T523I/wegT3xL9AQVlLwpt4RmAyZ0AklLUZTVxh90g==',key_name='tempest-TestMinimumBasicScenario-138660977',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f9a9decdabb1480da8f7d039e8b3d414',ramdisk_id='',reservation_id='r-dl7txott',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d9469d26-d189-44ee-a659-1398ee5e0da2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-1484268516',owner_user_name='tempest-TestMinimumBasicScenario-1484268516-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:25:45Z,user_data=None,user_id='0cbb3ac39ebd4876ad23f2a6d1c50166',uuid=b86b46f9-7d8f-414f-af87-3822510de392,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "address": "fa:16:3e:da:92:75", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e5e343-ed", "ovs_interfaceid": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:25:51 np0005539564 nova_compute[226295]: 2025-11-29 08:25:51.092 226310 DEBUG nova.network.os_vif_util [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Converting VIF {"id": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "address": "fa:16:3e:da:92:75", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e5e343-ed", "ovs_interfaceid": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:25:51 np0005539564 nova_compute[226295]: 2025-11-29 08:25:51.092 226310 DEBUG nova.network.os_vif_util [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:92:75,bridge_name='br-int',has_traffic_filtering=True,id=54e5e343-ed4d-4cf3-9d9f-2ae7ec672def,network=Network(cf206693-b177-47ba-9c63-2ab4e51898ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e5e343-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:25:51 np0005539564 nova_compute[226295]: 2025-11-29 08:25:51.093 226310 DEBUG os_vif [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:92:75,bridge_name='br-int',has_traffic_filtering=True,id=54e5e343-ed4d-4cf3-9d9f-2ae7ec672def,network=Network(cf206693-b177-47ba-9c63-2ab4e51898ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e5e343-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:25:51 np0005539564 nova_compute[226295]: 2025-11-29 08:25:51.094 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:51 np0005539564 nova_compute[226295]: 2025-11-29 08:25:51.094 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:51 np0005539564 nova_compute[226295]: 2025-11-29 08:25:51.095 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:25:51 np0005539564 nova_compute[226295]: 2025-11-29 08:25:51.099 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:51 np0005539564 nova_compute[226295]: 2025-11-29 08:25:51.099 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54e5e343-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:51 np0005539564 nova_compute[226295]: 2025-11-29 08:25:51.100 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap54e5e343-ed, col_values=(('external_ids', {'iface-id': '54e5e343-ed4d-4cf3-9d9f-2ae7ec672def', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:92:75', 'vm-uuid': 'b86b46f9-7d8f-414f-af87-3822510de392'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:51 np0005539564 NetworkManager[48997]: <info>  [1764404751.1029] manager: (tap54e5e343-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/268)
Nov 29 03:25:51 np0005539564 nova_compute[226295]: 2025-11-29 08:25:51.104 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:25:51 np0005539564 nova_compute[226295]: 2025-11-29 08:25:51.110 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:51 np0005539564 nova_compute[226295]: 2025-11-29 08:25:51.111 226310 INFO os_vif [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:92:75,bridge_name='br-int',has_traffic_filtering=True,id=54e5e343-ed4d-4cf3-9d9f-2ae7ec672def,network=Network(cf206693-b177-47ba-9c63-2ab4e51898ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e5e343-ed')#033[00m
Nov 29 03:25:51 np0005539564 nova_compute[226295]: 2025-11-29 08:25:51.188 226310 DEBUG nova.virt.libvirt.driver [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:25:51 np0005539564 nova_compute[226295]: 2025-11-29 08:25:51.188 226310 DEBUG nova.virt.libvirt.driver [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:25:51 np0005539564 nova_compute[226295]: 2025-11-29 08:25:51.188 226310 DEBUG nova.virt.libvirt.driver [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] No VIF found with MAC fa:16:3e:da:92:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:25:51 np0005539564 nova_compute[226295]: 2025-11-29 08:25:51.189 226310 INFO nova.virt.libvirt.driver [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Using config drive#033[00m
Nov 29 03:25:51 np0005539564 nova_compute[226295]: 2025-11-29 08:25:51.224 226310 DEBUG nova.storage.rbd_utils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] rbd image b86b46f9-7d8f-414f-af87-3822510de392_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:51.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:51 np0005539564 nova_compute[226295]: 2025-11-29 08:25:51.832 226310 INFO nova.virt.libvirt.driver [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Creating config drive at /var/lib/nova/instances/b86b46f9-7d8f-414f-af87-3822510de392/disk.config#033[00m
Nov 29 03:25:51 np0005539564 nova_compute[226295]: 2025-11-29 08:25:51.843 226310 DEBUG oslo_concurrency.processutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b86b46f9-7d8f-414f-af87-3822510de392/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqfz6djis execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:52 np0005539564 nova_compute[226295]: 2025-11-29 08:25:52.003 226310 DEBUG oslo_concurrency.processutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b86b46f9-7d8f-414f-af87-3822510de392/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqfz6djis" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:52 np0005539564 nova_compute[226295]: 2025-11-29 08:25:52.055 226310 DEBUG nova.storage.rbd_utils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] rbd image b86b46f9-7d8f-414f-af87-3822510de392_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:52 np0005539564 nova_compute[226295]: 2025-11-29 08:25:52.061 226310 DEBUG oslo_concurrency.processutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b86b46f9-7d8f-414f-af87-3822510de392/disk.config b86b46f9-7d8f-414f-af87-3822510de392_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:52 np0005539564 nova_compute[226295]: 2025-11-29 08:25:52.401 226310 DEBUG oslo_concurrency.processutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b86b46f9-7d8f-414f-af87-3822510de392/disk.config b86b46f9-7d8f-414f-af87-3822510de392_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:52 np0005539564 nova_compute[226295]: 2025-11-29 08:25:52.402 226310 INFO nova.virt.libvirt.driver [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Deleting local config drive /var/lib/nova/instances/b86b46f9-7d8f-414f-af87-3822510de392/disk.config because it was imported into RBD.#033[00m
Nov 29 03:25:52 np0005539564 kernel: tap54e5e343-ed: entered promiscuous mode
Nov 29 03:25:52 np0005539564 NetworkManager[48997]: <info>  [1764404752.4732] manager: (tap54e5e343-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/269)
Nov 29 03:25:52 np0005539564 ovn_controller[130591]: 2025-11-29T08:25:52Z|00585|binding|INFO|Claiming lport 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def for this chassis.
Nov 29 03:25:52 np0005539564 ovn_controller[130591]: 2025-11-29T08:25:52Z|00586|binding|INFO|54e5e343-ed4d-4cf3-9d9f-2ae7ec672def: Claiming fa:16:3e:da:92:75 10.100.0.14
Nov 29 03:25:52 np0005539564 nova_compute[226295]: 2025-11-29 08:25:52.526 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:52 np0005539564 nova_compute[226295]: 2025-11-29 08:25:52.528 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:52 np0005539564 nova_compute[226295]: 2025-11-29 08:25:52.532 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.544 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:92:75 10.100.0.14'], port_security=['fa:16:3e:da:92:75 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b86b46f9-7d8f-414f-af87-3822510de392', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf206693-b177-47ba-9c63-2ab4e51898ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f9a9decdabb1480da8f7d039e8b3d414', 'neutron:revision_number': '2', 'neutron:security_group_ids': '140d3240-dbee-4ff7-b341-40a578af5b67', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f140b86-0300-440f-be11-680603255cb6, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=54e5e343-ed4d-4cf3-9d9f-2ae7ec672def) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.546 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def in datapath cf206693-b177-47ba-9c63-2ab4e51898ce bound to our chassis#033[00m
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.548 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf206693-b177-47ba-9c63-2ab4e51898ce#033[00m
Nov 29 03:25:52 np0005539564 systemd-machined[190128]: New machine qemu-73-instance-000000a0.
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.561 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4606ade3-b8d2-4fd5-a7c0-80db9a33d82b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.562 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcf206693-b1 in ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.566 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcf206693-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.566 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[fc09d1e4-339a-4191-abdd-f7a8c1cd9c9a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.567 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f0701016-4c55-420c-ace3-9a9290a608cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:52 np0005539564 systemd[1]: Started Virtual Machine qemu-73-instance-000000a0.
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.582 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[d9cfab4a-2574-4380-bd79-0fa319272c6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:52 np0005539564 systemd-udevd[283891]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:25:52 np0005539564 ovn_controller[130591]: 2025-11-29T08:25:52Z|00587|binding|INFO|Setting lport 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def ovn-installed in OVS
Nov 29 03:25:52 np0005539564 ovn_controller[130591]: 2025-11-29T08:25:52Z|00588|binding|INFO|Setting lport 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def up in Southbound
Nov 29 03:25:52 np0005539564 nova_compute[226295]: 2025-11-29 08:25:52.594 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:52 np0005539564 NetworkManager[48997]: <info>  [1764404752.5978] device (tap54e5e343-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:25:52 np0005539564 NetworkManager[48997]: <info>  [1764404752.5996] device (tap54e5e343-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.602 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a88a51c1-7710-4457-b0b5-3fbb26c05c10]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.636 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[9a5e126e-5434-4e34-b978-3d06025e6eb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:52 np0005539564 NetworkManager[48997]: <info>  [1764404752.6445] manager: (tapcf206693-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/270)
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.643 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ce60027f-5e17-4ef1-850b-c61d8de5f0b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.684 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[147d0823-0874-4658-b76b-571691aaf6ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.688 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb96742-4e65-4e15-a01a-577a9330e126]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:52 np0005539564 NetworkManager[48997]: <info>  [1764404752.7129] device (tapcf206693-b0): carrier: link connected
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.719 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[cc49c116-3990-4d5f-9eda-912a115622de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.737 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ab58cbec-b7da-42ff-9303-f1dd3e51d755]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf206693-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:f8:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 772739, 'reachable_time': 42872, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283922, 'error': None, 'target': 'ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.753 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2ef9783c-0cc8-4031-8453-83fde1491a69]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febd:f810'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 772739, 'tstamp': 772739}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283930, 'error': None, 'target': 'ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.772 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7f8d3478-5d31-4eda-a8d8-908fbec69b68]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf206693-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:f8:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 772739, 'reachable_time': 42872, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283940, 'error': None, 'target': 'ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.809 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5a7fdc05-3fbc-4863-9468-a4c1581727b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.864 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4e68a20a-2434-469b-8ddb-6087539a0442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.866 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf206693-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.866 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.867 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf206693-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:52 np0005539564 NetworkManager[48997]: <info>  [1764404752.8695] manager: (tapcf206693-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/271)
Nov 29 03:25:52 np0005539564 kernel: tapcf206693-b0: entered promiscuous mode
Nov 29 03:25:52 np0005539564 nova_compute[226295]: 2025-11-29 08:25:52.869 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.875 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf206693-b0, col_values=(('external_ids', {'iface-id': '5116070e-bd28-42f7-aba2-689a78e19083'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:52 np0005539564 nova_compute[226295]: 2025-11-29 08:25:52.878 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:52 np0005539564 ovn_controller[130591]: 2025-11-29T08:25:52Z|00589|binding|INFO|Releasing lport 5116070e-bd28-42f7-aba2-689a78e19083 from this chassis (sb_readonly=0)
Nov 29 03:25:52 np0005539564 nova_compute[226295]: 2025-11-29 08:25:52.880 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.882 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cf206693-b177-47ba-9c63-2ab4e51898ce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cf206693-b177-47ba-9c63-2ab4e51898ce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.884 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c68c064d-df7c-4426-b0b6-8df7f277c9da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.885 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-cf206693-b177-47ba-9c63-2ab4e51898ce
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/cf206693-b177-47ba-9c63-2ab4e51898ce.pid.haproxy
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID cf206693-b177-47ba-9c63-2ab4e51898ce
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:25:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:25:52.886 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce', 'env', 'PROCESS_TAG=haproxy-cf206693-b177-47ba-9c63-2ab4e51898ce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cf206693-b177-47ba-9c63-2ab4e51898ce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:25:52 np0005539564 nova_compute[226295]: 2025-11-29 08:25:52.899 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:52 np0005539564 nova_compute[226295]: 2025-11-29 08:25:52.911 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404752.911172, b86b46f9-7d8f-414f-af87-3822510de392 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:52 np0005539564 nova_compute[226295]: 2025-11-29 08:25:52.912 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: b86b46f9-7d8f-414f-af87-3822510de392] VM Started (Lifecycle Event)#033[00m
Nov 29 03:25:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:52.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:52 np0005539564 nova_compute[226295]: 2025-11-29 08:25:52.942 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:52 np0005539564 nova_compute[226295]: 2025-11-29 08:25:52.948 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404752.9115415, b86b46f9-7d8f-414f-af87-3822510de392 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:52 np0005539564 nova_compute[226295]: 2025-11-29 08:25:52.948 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: b86b46f9-7d8f-414f-af87-3822510de392] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:25:52 np0005539564 nova_compute[226295]: 2025-11-29 08:25:52.986 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:52 np0005539564 nova_compute[226295]: 2025-11-29 08:25:52.991 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.021 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: b86b46f9-7d8f-414f-af87-3822510de392] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.049 226310 DEBUG nova.network.neutron [req-84873315-921e-42bd-8872-c4187902ebb5 req-ee314015-9ddc-4f6d-8a43-ab650ef47fe5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Updated VIF entry in instance network info cache for port 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.050 226310 DEBUG nova.network.neutron [req-84873315-921e-42bd-8872-c4187902ebb5 req-ee314015-9ddc-4f6d-8a43-ab650ef47fe5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Updating instance_info_cache with network_info: [{"id": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "address": "fa:16:3e:da:92:75", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e5e343-ed", "ovs_interfaceid": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.064 226310 DEBUG oslo_concurrency.lockutils [req-84873315-921e-42bd-8872-c4187902ebb5 req-ee314015-9ddc-4f6d-8a43-ab650ef47fe5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.192 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.282 226310 DEBUG nova.compute.manager [req-e7c5e2b3-ba57-4106-8c45-4c390c370b95 req-d43c95ee-3d83-4605-b905-4cde790cf337 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Received event network-vif-plugged-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.283 226310 DEBUG oslo_concurrency.lockutils [req-e7c5e2b3-ba57-4106-8c45-4c390c370b95 req-d43c95ee-3d83-4605-b905-4cde790cf337 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "b86b46f9-7d8f-414f-af87-3822510de392-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.283 226310 DEBUG oslo_concurrency.lockutils [req-e7c5e2b3-ba57-4106-8c45-4c390c370b95 req-d43c95ee-3d83-4605-b905-4cde790cf337 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.284 226310 DEBUG oslo_concurrency.lockutils [req-e7c5e2b3-ba57-4106-8c45-4c390c370b95 req-d43c95ee-3d83-4605-b905-4cde790cf337 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.284 226310 DEBUG nova.compute.manager [req-e7c5e2b3-ba57-4106-8c45-4c390c370b95 req-d43c95ee-3d83-4605-b905-4cde790cf337 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Processing event network-vif-plugged-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.285 226310 DEBUG nova.compute.manager [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.289 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404753.2888007, b86b46f9-7d8f-414f-af87-3822510de392 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.289 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: b86b46f9-7d8f-414f-af87-3822510de392] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.291 226310 DEBUG nova.virt.libvirt.driver [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.295 226310 INFO nova.virt.libvirt.driver [-] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Instance spawned successfully.#033[00m
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.295 226310 DEBUG nova.virt.libvirt.driver [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.317 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.324 226310 DEBUG nova.virt.libvirt.driver [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.325 226310 DEBUG nova.virt.libvirt.driver [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.325 226310 DEBUG nova.virt.libvirt.driver [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.326 226310 DEBUG nova.virt.libvirt.driver [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.326 226310 DEBUG nova.virt.libvirt.driver [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.327 226310 DEBUG nova.virt.libvirt.driver [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.332 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:25:53 np0005539564 podman[283997]: 2025-11-29 08:25:53.345367687 +0000 UTC m=+0.061104872 container create d3035cdaa0a882d597c4f14b25f69639f64eb5aa10c37907eb284e8bb012dd26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.368 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: b86b46f9-7d8f-414f-af87-3822510de392] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:25:53 np0005539564 systemd[1]: Started libpod-conmon-d3035cdaa0a882d597c4f14b25f69639f64eb5aa10c37907eb284e8bb012dd26.scope.
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.394 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.406 226310 INFO nova.compute.manager [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Took 7.71 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.406 226310 DEBUG nova.compute.manager [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:53 np0005539564 podman[283997]: 2025-11-29 08:25:53.314509933 +0000 UTC m=+0.030247178 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:25:53 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:25:53 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5916a910de3b8279f19fcf02bd4cf800e7326552628a10b85fea4abffc047287/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:53 np0005539564 podman[283997]: 2025-11-29 08:25:53.447215242 +0000 UTC m=+0.162952477 container init d3035cdaa0a882d597c4f14b25f69639f64eb5aa10c37907eb284e8bb012dd26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 03:25:53 np0005539564 podman[283997]: 2025-11-29 08:25:53.458468736 +0000 UTC m=+0.174205931 container start d3035cdaa0a882d597c4f14b25f69639f64eb5aa10c37907eb284e8bb012dd26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:25:53 np0005539564 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[284011]: [NOTICE]   (284015) : New worker (284017) forked
Nov 29 03:25:53 np0005539564 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[284011]: [NOTICE]   (284015) : Loading success.
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.512 226310 INFO nova.compute.manager [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Took 8.81 seconds to build instance.#033[00m
Nov 29 03:25:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:53.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:53 np0005539564 nova_compute[226295]: 2025-11-29 08:25:53.540 226310 DEBUG oslo_concurrency.lockutils [None req-f88b1353-8b71-4986-8179-a231197b791f 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:54.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:55 np0005539564 nova_compute[226295]: 2025-11-29 08:25:55.483 226310 DEBUG nova.compute.manager [req-af2c9a6c-8bfe-4199-aa2f-f39c3020c5af req-4ed7ac2d-d561-49c6-9829-e9c77d35149c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Received event network-vif-plugged-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:55 np0005539564 nova_compute[226295]: 2025-11-29 08:25:55.485 226310 DEBUG oslo_concurrency.lockutils [req-af2c9a6c-8bfe-4199-aa2f-f39c3020c5af req-4ed7ac2d-d561-49c6-9829-e9c77d35149c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "b86b46f9-7d8f-414f-af87-3822510de392-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:55 np0005539564 nova_compute[226295]: 2025-11-29 08:25:55.486 226310 DEBUG oslo_concurrency.lockutils [req-af2c9a6c-8bfe-4199-aa2f-f39c3020c5af req-4ed7ac2d-d561-49c6-9829-e9c77d35149c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:55 np0005539564 nova_compute[226295]: 2025-11-29 08:25:55.486 226310 DEBUG oslo_concurrency.lockutils [req-af2c9a6c-8bfe-4199-aa2f-f39c3020c5af req-4ed7ac2d-d561-49c6-9829-e9c77d35149c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:55 np0005539564 nova_compute[226295]: 2025-11-29 08:25:55.487 226310 DEBUG nova.compute.manager [req-af2c9a6c-8bfe-4199-aa2f-f39c3020c5af req-4ed7ac2d-d561-49c6-9829-e9c77d35149c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] No waiting events found dispatching network-vif-plugged-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:55 np0005539564 nova_compute[226295]: 2025-11-29 08:25:55.487 226310 WARNING nova.compute.manager [req-af2c9a6c-8bfe-4199-aa2f-f39c3020c5af req-4ed7ac2d-d561-49c6-9829-e9c77d35149c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Received unexpected event network-vif-plugged-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def for instance with vm_state active and task_state None.#033[00m
Nov 29 03:25:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:55.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:56 np0005539564 nova_compute[226295]: 2025-11-29 08:25:56.102 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:56.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:57 np0005539564 nova_compute[226295]: 2025-11-29 08:25:57.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:57.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:58 np0005539564 nova_compute[226295]: 2025-11-29 08:25:58.229 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:58 np0005539564 nova_compute[226295]: 2025-11-29 08:25:58.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:58.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:59 np0005539564 nova_compute[226295]: 2025-11-29 08:25:59.334 226310 DEBUG oslo_concurrency.lockutils [None req-2f9f91d5-f437-4f40-b32d-d41f2918d1b3 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "b86b46f9-7d8f-414f-af87-3822510de392" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:59 np0005539564 nova_compute[226295]: 2025-11-29 08:25:59.335 226310 DEBUG oslo_concurrency.lockutils [None req-2f9f91d5-f437-4f40-b32d-d41f2918d1b3 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:59 np0005539564 nova_compute[226295]: 2025-11-29 08:25:59.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:59 np0005539564 nova_compute[226295]: 2025-11-29 08:25:59.342 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:25:59 np0005539564 nova_compute[226295]: 2025-11-29 08:25:59.353 226310 DEBUG nova.objects.instance [None req-2f9f91d5-f437-4f40-b32d-d41f2918d1b3 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lazy-loading 'flavor' on Instance uuid b86b46f9-7d8f-414f-af87-3822510de392 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:59 np0005539564 nova_compute[226295]: 2025-11-29 08:25:59.394 226310 DEBUG oslo_concurrency.lockutils [None req-2f9f91d5-f437-4f40-b32d-d41f2918d1b3 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:25:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:59.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:59 np0005539564 nova_compute[226295]: 2025-11-29 08:25:59.660 226310 DEBUG oslo_concurrency.lockutils [None req-2f9f91d5-f437-4f40-b32d-d41f2918d1b3 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "b86b46f9-7d8f-414f-af87-3822510de392" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:59 np0005539564 nova_compute[226295]: 2025-11-29 08:25:59.660 226310 DEBUG oslo_concurrency.lockutils [None req-2f9f91d5-f437-4f40-b32d-d41f2918d1b3 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:59 np0005539564 nova_compute[226295]: 2025-11-29 08:25:59.660 226310 INFO nova.compute.manager [None req-2f9f91d5-f437-4f40-b32d-d41f2918d1b3 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Attaching volume 9a3eb32a-193c-49dc-a1e2-9c3bb4895a1a to /dev/vdb#033[00m
Nov 29 03:25:59 np0005539564 nova_compute[226295]: 2025-11-29 08:25:59.900 226310 DEBUG os_brick.utils [None req-2f9f91d5-f437-4f40-b32d-d41f2918d1b3 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:25:59 np0005539564 nova_compute[226295]: 2025-11-29 08:25:59.903 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:59 np0005539564 nova_compute[226295]: 2025-11-29 08:25:59.919 231810 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:59 np0005539564 nova_compute[226295]: 2025-11-29 08:25:59.920 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[0b270a97-1489-4abc-be4b-24ead4737257]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:59 np0005539564 nova_compute[226295]: 2025-11-29 08:25:59.922 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:59 np0005539564 nova_compute[226295]: 2025-11-29 08:25:59.933 231810 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:59 np0005539564 nova_compute[226295]: 2025-11-29 08:25:59.934 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[bb6c4c4a-bc57-451e-9115-32756134de13]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d3b4384eec44', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:59 np0005539564 nova_compute[226295]: 2025-11-29 08:25:59.935 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:59 np0005539564 nova_compute[226295]: 2025-11-29 08:25:59.951 231810 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:59 np0005539564 nova_compute[226295]: 2025-11-29 08:25:59.951 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[c8701cd2-a67e-4041-969e-448d2d2b4b65]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:59 np0005539564 nova_compute[226295]: 2025-11-29 08:25:59.953 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[5bed6da6-c423-4f13-8e8f-dc274c219d08]: (4, '2e858761-3292-4a17-b38f-a169c3064289') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:59 np0005539564 nova_compute[226295]: 2025-11-29 08:25:59.953 226310 DEBUG oslo_concurrency.processutils [None req-2f9f91d5-f437-4f40-b32d-d41f2918d1b3 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:59 np0005539564 nova_compute[226295]: 2025-11-29 08:25:59.997 226310 DEBUG oslo_concurrency.processutils [None req-2f9f91d5-f437-4f40-b32d-d41f2918d1b3 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CMD "nvme version" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:00 np0005539564 nova_compute[226295]: 2025-11-29 08:26:00.000 226310 DEBUG os_brick.initiator.connectors.lightos [None req-2f9f91d5-f437-4f40-b32d-d41f2918d1b3 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:26:00 np0005539564 nova_compute[226295]: 2025-11-29 08:26:00.000 226310 DEBUG os_brick.initiator.connectors.lightos [None req-2f9f91d5-f437-4f40-b32d-d41f2918d1b3 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:26:00 np0005539564 nova_compute[226295]: 2025-11-29 08:26:00.000 226310 DEBUG os_brick.initiator.connectors.lightos [None req-2f9f91d5-f437-4f40-b32d-d41f2918d1b3 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:26:00 np0005539564 nova_compute[226295]: 2025-11-29 08:26:00.000 226310 DEBUG os_brick.utils [None req-2f9f91d5-f437-4f40-b32d-d41f2918d1b3 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] <== get_connector_properties: return (99ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d3b4384eec44', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2e858761-3292-4a17-b38f-a169c3064289', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:26:00 np0005539564 nova_compute[226295]: 2025-11-29 08:26:00.000 226310 DEBUG nova.virt.block_device [None req-2f9f91d5-f437-4f40-b32d-d41f2918d1b3 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Updating existing volume attachment record: b1ae05d8-ccb0-4b94-ac08-8333b1650f3f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:26:00 np0005539564 nova_compute[226295]: 2025-11-29 08:26:00.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:26:00 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548668422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:26:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:00.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:01 np0005539564 nova_compute[226295]: 2025-11-29 08:26:01.106 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:01 np0005539564 nova_compute[226295]: 2025-11-29 08:26:01.115 226310 DEBUG nova.objects.instance [None req-2f9f91d5-f437-4f40-b32d-d41f2918d1b3 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lazy-loading 'flavor' on Instance uuid b86b46f9-7d8f-414f-af87-3822510de392 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:01 np0005539564 nova_compute[226295]: 2025-11-29 08:26:01.151 226310 DEBUG nova.virt.libvirt.driver [None req-2f9f91d5-f437-4f40-b32d-d41f2918d1b3 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Attempting to attach volume 9a3eb32a-193c-49dc-a1e2-9c3bb4895a1a with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:26:01 np0005539564 nova_compute[226295]: 2025-11-29 08:26:01.154 226310 DEBUG nova.virt.libvirt.guest [None req-2f9f91d5-f437-4f40-b32d-d41f2918d1b3 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:26:01 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:26:01 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-9a3eb32a-193c-49dc-a1e2-9c3bb4895a1a">
Nov 29 03:26:01 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:26:01 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:26:01 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:26:01 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:26:01 np0005539564 nova_compute[226295]:  <auth username="openstack">
Nov 29 03:26:01 np0005539564 nova_compute[226295]:    <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:26:01 np0005539564 nova_compute[226295]:  </auth>
Nov 29 03:26:01 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:26:01 np0005539564 nova_compute[226295]:  <serial>9a3eb32a-193c-49dc-a1e2-9c3bb4895a1a</serial>
Nov 29 03:26:01 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:26:01 np0005539564 nova_compute[226295]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:26:01 np0005539564 nova_compute[226295]: 2025-11-29 08:26:01.334 226310 DEBUG nova.virt.libvirt.driver [None req-2f9f91d5-f437-4f40-b32d-d41f2918d1b3 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:26:01 np0005539564 nova_compute[226295]: 2025-11-29 08:26:01.335 226310 DEBUG nova.virt.libvirt.driver [None req-2f9f91d5-f437-4f40-b32d-d41f2918d1b3 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:26:01 np0005539564 nova_compute[226295]: 2025-11-29 08:26:01.335 226310 DEBUG nova.virt.libvirt.driver [None req-2f9f91d5-f437-4f40-b32d-d41f2918d1b3 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:26:01 np0005539564 nova_compute[226295]: 2025-11-29 08:26:01.336 226310 DEBUG nova.virt.libvirt.driver [None req-2f9f91d5-f437-4f40-b32d-d41f2918d1b3 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] No VIF found with MAC fa:16:3e:da:92:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:26:01 np0005539564 nova_compute[226295]: 2025-11-29 08:26:01.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:01 np0005539564 nova_compute[226295]: 2025-11-29 08:26:01.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:26:01 np0005539564 nova_compute[226295]: 2025-11-29 08:26:01.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:26:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:01.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:01 np0005539564 nova_compute[226295]: 2025-11-29 08:26:01.655 226310 DEBUG oslo_concurrency.lockutils [None req-2f9f91d5-f437-4f40-b32d-d41f2918d1b3 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:01 np0005539564 nova_compute[226295]: 2025-11-29 08:26:01.666 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-c3e98a32-fd92-4873-a060-88aaf76bf1fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:01 np0005539564 nova_compute[226295]: 2025-11-29 08:26:01.667 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-c3e98a32-fd92-4873-a060-88aaf76bf1fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:01 np0005539564 nova_compute[226295]: 2025-11-29 08:26:01.667 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:26:01 np0005539564 nova_compute[226295]: 2025-11-29 08:26:01.667 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c3e98a32-fd92-4873-a060-88aaf76bf1fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:02 np0005539564 nova_compute[226295]: 2025-11-29 08:26:02.222 226310 DEBUG oslo_concurrency.lockutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:02 np0005539564 nova_compute[226295]: 2025-11-29 08:26:02.222 226310 DEBUG oslo_concurrency.lockutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:02 np0005539564 nova_compute[226295]: 2025-11-29 08:26:02.275 226310 DEBUG nova.compute.manager [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:26:02 np0005539564 nova_compute[226295]: 2025-11-29 08:26:02.427 226310 DEBUG oslo_concurrency.lockutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:02 np0005539564 nova_compute[226295]: 2025-11-29 08:26:02.427 226310 DEBUG oslo_concurrency.lockutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:02 np0005539564 nova_compute[226295]: 2025-11-29 08:26:02.436 226310 DEBUG nova.virt.hardware [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:26:02 np0005539564 nova_compute[226295]: 2025-11-29 08:26:02.436 226310 INFO nova.compute.claims [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:26:02 np0005539564 nova_compute[226295]: 2025-11-29 08:26:02.692 226310 DEBUG oslo_concurrency.processutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:02.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:26:03 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1752763440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:26:03 np0005539564 nova_compute[226295]: 2025-11-29 08:26:03.169 226310 DEBUG oslo_concurrency.processutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:03 np0005539564 nova_compute[226295]: 2025-11-29 08:26:03.179 226310 DEBUG nova.compute.provider_tree [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:26:03 np0005539564 nova_compute[226295]: 2025-11-29 08:26:03.229 226310 DEBUG nova.scheduler.client.report [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:26:03 np0005539564 nova_compute[226295]: 2025-11-29 08:26:03.234 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:03 np0005539564 nova_compute[226295]: 2025-11-29 08:26:03.261 226310 DEBUG oslo_concurrency.lockutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:03 np0005539564 nova_compute[226295]: 2025-11-29 08:26:03.262 226310 DEBUG nova.compute.manager [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:26:03 np0005539564 nova_compute[226295]: 2025-11-29 08:26:03.400 226310 DEBUG nova.compute.manager [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:26:03 np0005539564 nova_compute[226295]: 2025-11-29 08:26:03.401 226310 DEBUG nova.network.neutron [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:26:03 np0005539564 nova_compute[226295]: 2025-11-29 08:26:03.425 226310 INFO nova.virt.libvirt.driver [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:26:03 np0005539564 nova_compute[226295]: 2025-11-29 08:26:03.454 226310 DEBUG nova.compute.manager [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:26:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:03.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:03 np0005539564 nova_compute[226295]: 2025-11-29 08:26:03.576 226310 DEBUG nova.compute.manager [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:26:03 np0005539564 nova_compute[226295]: 2025-11-29 08:26:03.577 226310 DEBUG nova.virt.libvirt.driver [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:26:03 np0005539564 nova_compute[226295]: 2025-11-29 08:26:03.577 226310 INFO nova.virt.libvirt.driver [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Creating image(s)#033[00m
Nov 29 03:26:03 np0005539564 nova_compute[226295]: 2025-11-29 08:26:03.606 226310 DEBUG nova.storage.rbd_utils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image 554ea6a4-8de1-41bf-8772-b15e95a7fd05_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:03 np0005539564 nova_compute[226295]: 2025-11-29 08:26:03.639 226310 DEBUG nova.storage.rbd_utils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image 554ea6a4-8de1-41bf-8772-b15e95a7fd05_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:03 np0005539564 nova_compute[226295]: 2025-11-29 08:26:03.679 226310 DEBUG nova.storage.rbd_utils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image 554ea6a4-8de1-41bf-8772-b15e95a7fd05_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:03 np0005539564 nova_compute[226295]: 2025-11-29 08:26:03.683 226310 DEBUG oslo_concurrency.processutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e345 e345: 3 total, 3 up, 3 in
Nov 29 03:26:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:03.740 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:03.741 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:03.742 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:03 np0005539564 nova_compute[226295]: 2025-11-29 08:26:03.769 226310 DEBUG oslo_concurrency.processutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:03 np0005539564 nova_compute[226295]: 2025-11-29 08:26:03.770 226310 DEBUG oslo_concurrency.lockutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:03 np0005539564 nova_compute[226295]: 2025-11-29 08:26:03.771 226310 DEBUG oslo_concurrency.lockutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:03 np0005539564 nova_compute[226295]: 2025-11-29 08:26:03.772 226310 DEBUG oslo_concurrency.lockutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:03 np0005539564 nova_compute[226295]: 2025-11-29 08:26:03.803 226310 DEBUG nova.storage.rbd_utils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image 554ea6a4-8de1-41bf-8772-b15e95a7fd05_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:03 np0005539564 nova_compute[226295]: 2025-11-29 08:26:03.809 226310 DEBUG oslo_concurrency.processutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 554ea6a4-8de1-41bf-8772-b15e95a7fd05_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:04 np0005539564 nova_compute[226295]: 2025-11-29 08:26:04.020 226310 DEBUG nova.policy [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b4f4d28745dd46e586642c84c051db39', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '23450c2eaf4442459dec94c6d29f0412', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:26:04 np0005539564 nova_compute[226295]: 2025-11-29 08:26:04.025 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Updating instance_info_cache with network_info: [{"id": "a7a9e323-49eb-415e-85cd-322403ba6517", "address": "fa:16:3e:bc:a3:e3", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a9e323-49", "ovs_interfaceid": "a7a9e323-49eb-415e-85cd-322403ba6517", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:04 np0005539564 nova_compute[226295]: 2025-11-29 08:26:04.052 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-c3e98a32-fd92-4873-a060-88aaf76bf1fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:04 np0005539564 nova_compute[226295]: 2025-11-29 08:26:04.053 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:26:04 np0005539564 nova_compute[226295]: 2025-11-29 08:26:04.053 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:04 np0005539564 nova_compute[226295]: 2025-11-29 08:26:04.054 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:04 np0005539564 nova_compute[226295]: 2025-11-29 08:26:04.138 226310 DEBUG oslo_concurrency.processutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 554ea6a4-8de1-41bf-8772-b15e95a7fd05_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:04 np0005539564 nova_compute[226295]: 2025-11-29 08:26:04.229 226310 DEBUG nova.storage.rbd_utils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] resizing rbd image 554ea6a4-8de1-41bf-8772-b15e95a7fd05_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:26:04 np0005539564 nova_compute[226295]: 2025-11-29 08:26:04.365 226310 DEBUG nova.objects.instance [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'migration_context' on Instance uuid 554ea6a4-8de1-41bf-8772-b15e95a7fd05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:04 np0005539564 nova_compute[226295]: 2025-11-29 08:26:04.405 226310 DEBUG nova.virt.libvirt.driver [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:26:04 np0005539564 nova_compute[226295]: 2025-11-29 08:26:04.405 226310 DEBUG nova.virt.libvirt.driver [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Ensure instance console log exists: /var/lib/nova/instances/554ea6a4-8de1-41bf-8772-b15e95a7fd05/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:26:04 np0005539564 nova_compute[226295]: 2025-11-29 08:26:04.406 226310 DEBUG oslo_concurrency.lockutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:04 np0005539564 nova_compute[226295]: 2025-11-29 08:26:04.406 226310 DEBUG oslo_concurrency.lockutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:04 np0005539564 nova_compute[226295]: 2025-11-29 08:26:04.407 226310 DEBUG oslo_concurrency.lockutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:26:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:04.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:26:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:05.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:06 np0005539564 nova_compute[226295]: 2025-11-29 08:26:06.028 226310 DEBUG nova.network.neutron [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Successfully created port: f095bbfd-d901-4dd4-8831-72dab1104494 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:26:06 np0005539564 nova_compute[226295]: 2025-11-29 08:26:06.108 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:06.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:07 np0005539564 nova_compute[226295]: 2025-11-29 08:26:07.188 226310 DEBUG nova.network.neutron [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Successfully updated port: f095bbfd-d901-4dd4-8831-72dab1104494 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:26:07 np0005539564 nova_compute[226295]: 2025-11-29 08:26:07.214 226310 DEBUG oslo_concurrency.lockutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:07 np0005539564 nova_compute[226295]: 2025-11-29 08:26:07.214 226310 DEBUG oslo_concurrency.lockutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquired lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:07 np0005539564 nova_compute[226295]: 2025-11-29 08:26:07.215 226310 DEBUG nova.network.neutron [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:26:07 np0005539564 nova_compute[226295]: 2025-11-29 08:26:07.371 226310 DEBUG nova.compute.manager [req-8f59a440-9baa-4f5d-9598-eee2f3730450 req-cdc384ed-0bbc-4bbe-aa75-2373ec139b13 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Received event network-changed-f095bbfd-d901-4dd4-8831-72dab1104494 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:07 np0005539564 nova_compute[226295]: 2025-11-29 08:26:07.371 226310 DEBUG nova.compute.manager [req-8f59a440-9baa-4f5d-9598-eee2f3730450 req-cdc384ed-0bbc-4bbe-aa75-2373ec139b13 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Refreshing instance network info cache due to event network-changed-f095bbfd-d901-4dd4-8831-72dab1104494. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:26:07 np0005539564 nova_compute[226295]: 2025-11-29 08:26:07.372 226310 DEBUG oslo_concurrency.lockutils [req-8f59a440-9baa-4f5d-9598-eee2f3730450 req-cdc384ed-0bbc-4bbe-aa75-2373ec139b13 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:07.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:07 np0005539564 nova_compute[226295]: 2025-11-29 08:26:07.651 226310 DEBUG nova.network.neutron [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:26:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e346 e346: 3 total, 3 up, 3 in
Nov 29 03:26:07 np0005539564 ovn_controller[130591]: 2025-11-29T08:26:07Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:da:92:75 10.100.0.14
Nov 29 03:26:07 np0005539564 ovn_controller[130591]: 2025-11-29T08:26:07Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:da:92:75 10.100.0.14
Nov 29 03:26:08 np0005539564 nova_compute[226295]: 2025-11-29 08:26:08.284 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:08 np0005539564 NetworkManager[48997]: <info>  [1764404768.3928] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/272)
Nov 29 03:26:08 np0005539564 nova_compute[226295]: 2025-11-29 08:26:08.377 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:08 np0005539564 NetworkManager[48997]: <info>  [1764404768.3942] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Nov 29 03:26:08 np0005539564 nova_compute[226295]: 2025-11-29 08:26:08.540 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:08 np0005539564 ovn_controller[130591]: 2025-11-29T08:26:08Z|00590|binding|INFO|Releasing lport 42a41b42-1527-4cfa-9dcf-4b7f34b092b7 from this chassis (sb_readonly=0)
Nov 29 03:26:08 np0005539564 ovn_controller[130591]: 2025-11-29T08:26:08Z|00591|binding|INFO|Releasing lport 5116070e-bd28-42f7-aba2-689a78e19083 from this chassis (sb_readonly=0)
Nov 29 03:26:08 np0005539564 nova_compute[226295]: 2025-11-29 08:26:08.568 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e347 e347: 3 total, 3 up, 3 in
Nov 29 03:26:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:08.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:09 np0005539564 nova_compute[226295]: 2025-11-29 08:26:09.483 226310 DEBUG nova.network.neutron [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Updating instance_info_cache with network_info: [{"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:26:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:09.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:26:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e348 e348: 3 total, 3 up, 3 in
Nov 29 03:26:09 np0005539564 nova_compute[226295]: 2025-11-29 08:26:09.979 226310 DEBUG oslo_concurrency.lockutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Releasing lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:09 np0005539564 nova_compute[226295]: 2025-11-29 08:26:09.980 226310 DEBUG nova.compute.manager [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Instance network_info: |[{"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:26:09 np0005539564 nova_compute[226295]: 2025-11-29 08:26:09.982 226310 DEBUG oslo_concurrency.lockutils [req-8f59a440-9baa-4f5d-9598-eee2f3730450 req-cdc384ed-0bbc-4bbe-aa75-2373ec139b13 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:09 np0005539564 nova_compute[226295]: 2025-11-29 08:26:09.983 226310 DEBUG nova.network.neutron [req-8f59a440-9baa-4f5d-9598-eee2f3730450 req-cdc384ed-0bbc-4bbe-aa75-2373ec139b13 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Refreshing network info cache for port f095bbfd-d901-4dd4-8831-72dab1104494 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:26:09 np0005539564 nova_compute[226295]: 2025-11-29 08:26:09.988 226310 DEBUG nova.virt.libvirt.driver [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Start _get_guest_xml network_info=[{"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:26:09 np0005539564 nova_compute[226295]: 2025-11-29 08:26:09.996 226310 WARNING nova.virt.libvirt.driver [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:26:10 np0005539564 nova_compute[226295]: 2025-11-29 08:26:10.005 226310 DEBUG nova.virt.libvirt.host [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:26:10 np0005539564 nova_compute[226295]: 2025-11-29 08:26:10.006 226310 DEBUG nova.virt.libvirt.host [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:26:10 np0005539564 nova_compute[226295]: 2025-11-29 08:26:10.020 226310 DEBUG nova.virt.libvirt.host [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:26:10 np0005539564 nova_compute[226295]: 2025-11-29 08:26:10.021 226310 DEBUG nova.virt.libvirt.host [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:26:10 np0005539564 nova_compute[226295]: 2025-11-29 08:26:10.022 226310 DEBUG nova.virt.libvirt.driver [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:26:10 np0005539564 nova_compute[226295]: 2025-11-29 08:26:10.023 226310 DEBUG nova.virt.hardware [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:26:10 np0005539564 nova_compute[226295]: 2025-11-29 08:26:10.023 226310 DEBUG nova.virt.hardware [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:26:10 np0005539564 nova_compute[226295]: 2025-11-29 08:26:10.024 226310 DEBUG nova.virt.hardware [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:26:10 np0005539564 nova_compute[226295]: 2025-11-29 08:26:10.024 226310 DEBUG nova.virt.hardware [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:26:10 np0005539564 nova_compute[226295]: 2025-11-29 08:26:10.024 226310 DEBUG nova.virt.hardware [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:26:10 np0005539564 nova_compute[226295]: 2025-11-29 08:26:10.025 226310 DEBUG nova.virt.hardware [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:26:10 np0005539564 nova_compute[226295]: 2025-11-29 08:26:10.025 226310 DEBUG nova.virt.hardware [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:26:10 np0005539564 nova_compute[226295]: 2025-11-29 08:26:10.025 226310 DEBUG nova.virt.hardware [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:26:10 np0005539564 nova_compute[226295]: 2025-11-29 08:26:10.026 226310 DEBUG nova.virt.hardware [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:26:10 np0005539564 nova_compute[226295]: 2025-11-29 08:26:10.026 226310 DEBUG nova.virt.hardware [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:26:10 np0005539564 nova_compute[226295]: 2025-11-29 08:26:10.026 226310 DEBUG nova.virt.hardware [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:26:10 np0005539564 nova_compute[226295]: 2025-11-29 08:26:10.031 226310 DEBUG oslo_concurrency.processutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:10 np0005539564 nova_compute[226295]: 2025-11-29 08:26:10.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:26:10 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2199665804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:26:10 np0005539564 nova_compute[226295]: 2025-11-29 08:26:10.477 226310 DEBUG oslo_concurrency.processutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:10 np0005539564 nova_compute[226295]: 2025-11-29 08:26:10.517 226310 DEBUG nova.storage.rbd_utils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image 554ea6a4-8de1-41bf-8772-b15e95a7fd05_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:10 np0005539564 nova_compute[226295]: 2025-11-29 08:26:10.524 226310 DEBUG oslo_concurrency.processutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:26:10 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1724115301' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:26:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:10.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.110 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.130 226310 DEBUG nova.compute.manager [req-81999c48-713e-4a14-9b96-eb9c3facd020 req-a327b1cd-47f4-4e4c-8be8-fc0e403df249 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Received event network-changed-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.131 226310 DEBUG nova.compute.manager [req-81999c48-713e-4a14-9b96-eb9c3facd020 req-a327b1cd-47f4-4e4c-8be8-fc0e403df249 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Refreshing instance network info cache due to event network-changed-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.131 226310 DEBUG oslo_concurrency.lockutils [req-81999c48-713e-4a14-9b96-eb9c3facd020 req-a327b1cd-47f4-4e4c-8be8-fc0e403df249 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.132 226310 DEBUG oslo_concurrency.lockutils [req-81999c48-713e-4a14-9b96-eb9c3facd020 req-a327b1cd-47f4-4e4c-8be8-fc0e403df249 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.132 226310 DEBUG nova.network.neutron [req-81999c48-713e-4a14-9b96-eb9c3facd020 req-a327b1cd-47f4-4e4c-8be8-fc0e403df249 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Refreshing network info cache for port 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.145 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.146 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.146 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.146 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.147 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.206 226310 DEBUG oslo_concurrency.processutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.682s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.209 226310 DEBUG nova.virt.libvirt.vif [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-0',id=162,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC2U26ZMSkcfI5DQFnyWxH+S3YaQ8SAmf1n52XjS1tNMntu8AbhepbwcUWS7Z4/uA3A5Bve+j7ia9a5dnEqoCJZvLZo58KXp6UbvJn0ceeh5z06l1tL3ON8Wl2km+sS1vg==',key_name='tempest-keypair-1391434303',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='23450c2eaf4442459dec94c6d29f0412',ramdisk_id='',reservation_id='r-6fx7zmqe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',net
work_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-1454477111',owner_user_name='tempest-AttachVolumeMultiAttachTest-1454477111-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:26:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4f4d28745dd46e586642c84c051db39',uuid=554ea6a4-8de1-41bf-8772-b15e95a7fd05,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.210 226310 DEBUG nova.network.os_vif_util [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converting VIF {"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.211 226310 DEBUG nova.network.os_vif_util [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:13:85,bridge_name='br-int',has_traffic_filtering=True,id=f095bbfd-d901-4dd4-8831-72dab1104494,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf095bbfd-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.212 226310 DEBUG nova.objects.instance [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'pci_devices' on Instance uuid 554ea6a4-8de1-41bf-8772-b15e95a7fd05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.250 226310 DEBUG nova.virt.libvirt.driver [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:26:11 np0005539564 nova_compute[226295]:  <uuid>554ea6a4-8de1-41bf-8772-b15e95a7fd05</uuid>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:  <name>instance-000000a2</name>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <nova:name>multiattach-server-0</nova:name>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:26:09</nova:creationTime>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:26:11 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:        <nova:user uuid="b4f4d28745dd46e586642c84c051db39">tempest-AttachVolumeMultiAttachTest-1454477111-project-member</nova:user>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:        <nova:project uuid="23450c2eaf4442459dec94c6d29f0412">tempest-AttachVolumeMultiAttachTest-1454477111</nova:project>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:        <nova:port uuid="f095bbfd-d901-4dd4-8831-72dab1104494">
Nov 29 03:26:11 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <entry name="serial">554ea6a4-8de1-41bf-8772-b15e95a7fd05</entry>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <entry name="uuid">554ea6a4-8de1-41bf-8772-b15e95a7fd05</entry>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/554ea6a4-8de1-41bf-8772-b15e95a7fd05_disk">
Nov 29 03:26:11 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:26:11 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/554ea6a4-8de1-41bf-8772-b15e95a7fd05_disk.config">
Nov 29 03:26:11 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:26:11 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:7b:13:85"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <target dev="tapf095bbfd-d9"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/554ea6a4-8de1-41bf-8772-b15e95a7fd05/console.log" append="off"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:26:11 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:26:11 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:26:11 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:26:11 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.252 226310 DEBUG nova.compute.manager [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Preparing to wait for external event network-vif-plugged-f095bbfd-d901-4dd4-8831-72dab1104494 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.252 226310 DEBUG oslo_concurrency.lockutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.252 226310 DEBUG oslo_concurrency.lockutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.253 226310 DEBUG oslo_concurrency.lockutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.254 226310 DEBUG nova.virt.libvirt.vif [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-0',id=162,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC2U26ZMSkcfI5DQFnyWxH+S3YaQ8SAmf1n52XjS1tNMntu8AbhepbwcUWS7Z4/uA3A5Bve+j7ia9a5dnEqoCJZvLZo58KXp6UbvJn0ceeh5z06l1tL3ON8Wl2km+sS1vg==',key_name='tempest-keypair-1391434303',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='23450c2eaf4442459dec94c6d29f0412',ramdisk_id='',reservation_id='r-6fx7zmqe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-1454477111',owner_user_name='tempest-AttachVolumeMultiAttachTest-1454477111-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:26:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4f4d28745dd46e586642c84c051db39',uuid=554ea6a4-8de1-41bf-8772-b15e95a7fd05,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.254 226310 DEBUG nova.network.os_vif_util [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converting VIF {"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.255 226310 DEBUG nova.network.os_vif_util [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:13:85,bridge_name='br-int',has_traffic_filtering=True,id=f095bbfd-d901-4dd4-8831-72dab1104494,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf095bbfd-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.255 226310 DEBUG os_vif [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:13:85,bridge_name='br-int',has_traffic_filtering=True,id=f095bbfd-d901-4dd4-8831-72dab1104494,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf095bbfd-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.256 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.256 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.257 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.261 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.261 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf095bbfd-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.262 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf095bbfd-d9, col_values=(('external_ids', {'iface-id': 'f095bbfd-d901-4dd4-8831-72dab1104494', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:13:85', 'vm-uuid': '554ea6a4-8de1-41bf-8772-b15e95a7fd05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.263 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:11 np0005539564 NetworkManager[48997]: <info>  [1764404771.2657] manager: (tapf095bbfd-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/274)
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.266 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.277 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.279 226310 INFO os_vif [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:13:85,bridge_name='br-int',has_traffic_filtering=True,id=f095bbfd-d901-4dd4-8831-72dab1104494,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf095bbfd-d9')#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.354 226310 DEBUG nova.virt.libvirt.driver [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.354 226310 DEBUG nova.virt.libvirt.driver [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.354 226310 DEBUG nova.virt.libvirt.driver [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No VIF found with MAC fa:16:3e:7b:13:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.355 226310 INFO nova.virt.libvirt.driver [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Using config drive#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.385 226310 DEBUG nova.storage.rbd_utils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image 554ea6a4-8de1-41bf-8772-b15e95a7fd05_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:11.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:26:11 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2742457055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.636 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.746 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.747 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.747 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.754 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000a0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.755 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000a0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.755 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000a0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.761 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:11 np0005539564 nova_compute[226295]: 2025-11-29 08:26:11.761 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:12 np0005539564 nova_compute[226295]: 2025-11-29 08:26:12.005 226310 INFO nova.virt.libvirt.driver [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Creating config drive at /var/lib/nova/instances/554ea6a4-8de1-41bf-8772-b15e95a7fd05/disk.config#033[00m
Nov 29 03:26:12 np0005539564 nova_compute[226295]: 2025-11-29 08:26:12.014 226310 DEBUG oslo_concurrency.processutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/554ea6a4-8de1-41bf-8772-b15e95a7fd05/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_kxo_gbv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:12 np0005539564 nova_compute[226295]: 2025-11-29 08:26:12.102 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:26:12 np0005539564 nova_compute[226295]: 2025-11-29 08:26:12.104 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3750MB free_disk=20.652362823486328GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:26:12 np0005539564 nova_compute[226295]: 2025-11-29 08:26:12.105 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:12 np0005539564 nova_compute[226295]: 2025-11-29 08:26:12.105 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:12 np0005539564 nova_compute[226295]: 2025-11-29 08:26:12.175 226310 DEBUG oslo_concurrency.processutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/554ea6a4-8de1-41bf-8772-b15e95a7fd05/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_kxo_gbv" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:12 np0005539564 nova_compute[226295]: 2025-11-29 08:26:12.215 226310 DEBUG nova.storage.rbd_utils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image 554ea6a4-8de1-41bf-8772-b15e95a7fd05_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:12 np0005539564 nova_compute[226295]: 2025-11-29 08:26:12.221 226310 DEBUG oslo_concurrency.processutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/554ea6a4-8de1-41bf-8772-b15e95a7fd05/disk.config 554ea6a4-8de1-41bf-8772-b15e95a7fd05_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:12 np0005539564 nova_compute[226295]: 2025-11-29 08:26:12.325 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance c3e98a32-fd92-4873-a060-88aaf76bf1fc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:26:12 np0005539564 nova_compute[226295]: 2025-11-29 08:26:12.326 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance b86b46f9-7d8f-414f-af87-3822510de392 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:26:12 np0005539564 nova_compute[226295]: 2025-11-29 08:26:12.326 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 554ea6a4-8de1-41bf-8772-b15e95a7fd05 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:26:12 np0005539564 nova_compute[226295]: 2025-11-29 08:26:12.327 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:26:12 np0005539564 nova_compute[226295]: 2025-11-29 08:26:12.327 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:26:12 np0005539564 nova_compute[226295]: 2025-11-29 08:26:12.441 226310 DEBUG oslo_concurrency.processutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/554ea6a4-8de1-41bf-8772-b15e95a7fd05/disk.config 554ea6a4-8de1-41bf-8772-b15e95a7fd05_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.220s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:12 np0005539564 nova_compute[226295]: 2025-11-29 08:26:12.442 226310 INFO nova.virt.libvirt.driver [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Deleting local config drive /var/lib/nova/instances/554ea6a4-8de1-41bf-8772-b15e95a7fd05/disk.config because it was imported into RBD.#033[00m
Nov 29 03:26:12 np0005539564 nova_compute[226295]: 2025-11-29 08:26:12.507 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:12 np0005539564 kernel: tapf095bbfd-d9: entered promiscuous mode
Nov 29 03:26:12 np0005539564 NetworkManager[48997]: <info>  [1764404772.5242] manager: (tapf095bbfd-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/275)
Nov 29 03:26:12 np0005539564 ovn_controller[130591]: 2025-11-29T08:26:12Z|00592|binding|INFO|Claiming lport f095bbfd-d901-4dd4-8831-72dab1104494 for this chassis.
Nov 29 03:26:12 np0005539564 ovn_controller[130591]: 2025-11-29T08:26:12Z|00593|binding|INFO|f095bbfd-d901-4dd4-8831-72dab1104494: Claiming fa:16:3e:7b:13:85 10.100.0.12
Nov 29 03:26:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:12.542 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:13:85 10.100.0.12'], port_security=['fa:16:3e:7b:13:85 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '554ea6a4-8de1-41bf-8772-b15e95a7fd05', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abbc8daa-d665-4e2f-bf74-9e57db481441', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23450c2eaf4442459dec94c6d29f0412', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6e9e03ca-34d5-466f-8e26-e073c35a802c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6e85a088-d5fe-4b38-8043-a9acee66ccb5, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=f095bbfd-d901-4dd4-8831-72dab1104494) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:26:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:12.545 139780 INFO neutron.agent.ovn.metadata.agent [-] Port f095bbfd-d901-4dd4-8831-72dab1104494 in datapath abbc8daa-d665-4e2f-bf74-9e57db481441 bound to our chassis#033[00m
Nov 29 03:26:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:12.550 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abbc8daa-d665-4e2f-bf74-9e57db481441#033[00m
Nov 29 03:26:12 np0005539564 nova_compute[226295]: 2025-11-29 08:26:12.557 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:12 np0005539564 ovn_controller[130591]: 2025-11-29T08:26:12Z|00594|binding|INFO|Setting lport f095bbfd-d901-4dd4-8831-72dab1104494 ovn-installed in OVS
Nov 29 03:26:12 np0005539564 ovn_controller[130591]: 2025-11-29T08:26:12Z|00595|binding|INFO|Setting lport f095bbfd-d901-4dd4-8831-72dab1104494 up in Southbound
Nov 29 03:26:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:12.577 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[381f0dfe-9a57-4073-bd11-e671691cfed6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:12.578 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapabbc8daa-d1 in ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:26:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:12.587 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapabbc8daa-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:26:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:12.587 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[661d8613-e300-44d1-a870-09848c666bf0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:12.589 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7cf6af82-6092-40b0-bbe5-e97a7cb75b47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:12 np0005539564 systemd-udevd[284405]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:26:12 np0005539564 systemd-machined[190128]: New machine qemu-74-instance-000000a2.
Nov 29 03:26:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:12.604 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[cfdba300-d6ac-4863-bab7-e8a23651dbb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:12 np0005539564 systemd[1]: Started Virtual Machine qemu-74-instance-000000a2.
Nov 29 03:26:12 np0005539564 NetworkManager[48997]: <info>  [1764404772.6202] device (tapf095bbfd-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:26:12 np0005539564 NetworkManager[48997]: <info>  [1764404772.6219] device (tapf095bbfd-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:26:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:12.639 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[fd40227f-da55-4b7e-ac76-50d13a5feaf1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:12 np0005539564 nova_compute[226295]: 2025-11-29 08:26:12.667 226310 DEBUG nova.network.neutron [req-8f59a440-9baa-4f5d-9598-eee2f3730450 req-cdc384ed-0bbc-4bbe-aa75-2373ec139b13 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Updated VIF entry in instance network info cache for port f095bbfd-d901-4dd4-8831-72dab1104494. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:26:12 np0005539564 nova_compute[226295]: 2025-11-29 08:26:12.668 226310 DEBUG nova.network.neutron [req-8f59a440-9baa-4f5d-9598-eee2f3730450 req-cdc384ed-0bbc-4bbe-aa75-2373ec139b13 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Updating instance_info_cache with network_info: [{"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:12.685 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[20992ce4-c442-4bf2-a555-2fc3b3dbdc90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:12 np0005539564 NetworkManager[48997]: <info>  [1764404772.6932] manager: (tapabbc8daa-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/276)
Nov 29 03:26:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:12.691 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f24c93ee-6732-49fc-8d79-97e7c3f37310]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:12 np0005539564 nova_compute[226295]: 2025-11-29 08:26:12.699 226310 DEBUG oslo_concurrency.lockutils [req-8f59a440-9baa-4f5d-9598-eee2f3730450 req-cdc384ed-0bbc-4bbe-aa75-2373ec139b13 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:12.738 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[c1deed18-4533-4f6d-87db-c8cac50dfe11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:12.742 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[9c2f15aa-2793-4aae-88d6-15243dba4ea3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:12 np0005539564 NetworkManager[48997]: <info>  [1764404772.7743] device (tapabbc8daa-d0): carrier: link connected
Nov 29 03:26:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:12.788 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[b2b8950c-7d2a-450e-9646-0b877d1433cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:12.818 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bc882139-b131-4751-9baf-018b15463177]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabbc8daa-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:89:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 182], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 774745, 'reachable_time': 31871, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284470, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:12.838 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[43b7a348-4457-44ad-a650-732f0d7da42e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb6:892d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 774745, 'tstamp': 774745}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284475, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:12.865 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[866a670d-79cb-4a52-8bbd-810aa74fc964]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabbc8daa-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:89:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 182], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 774745, 'reachable_time': 31871, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284490, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:12.902 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b005cc28-98f8-409b-96bf-14552b563b37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:12.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:12.978 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[fc72e1bc-1d19-4e5f-b004-f70bb68b4b25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:12.980 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabbc8daa-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:12.980 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:26:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:12.981 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabbc8daa-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:12 np0005539564 kernel: tapabbc8daa-d0: entered promiscuous mode
Nov 29 03:26:12 np0005539564 NetworkManager[48997]: <info>  [1764404772.9850] manager: (tapabbc8daa-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/277)
Nov 29 03:26:12 np0005539564 nova_compute[226295]: 2025-11-29 08:26:12.984 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:12 np0005539564 nova_compute[226295]: 2025-11-29 08:26:12.986 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:26:12 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1175812981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:26:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:12.988 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabbc8daa-d0, col_values=(('external_ids', {'iface-id': 'fb65e0fb-a778-4ace-a666-dfdbc516af09'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:12 np0005539564 nova_compute[226295]: 2025-11-29 08:26:12.989 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:12 np0005539564 ovn_controller[130591]: 2025-11-29T08:26:12Z|00596|binding|INFO|Releasing lport fb65e0fb-a778-4ace-a666-dfdbc516af09 from this chassis (sb_readonly=0)
Nov 29 03:26:13 np0005539564 nova_compute[226295]: 2025-11-29 08:26:13.008 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:13 np0005539564 nova_compute[226295]: 2025-11-29 08:26:13.009 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:13 np0005539564 nova_compute[226295]: 2025-11-29 08:26:13.012 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:13.012 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/abbc8daa-d665-4e2f-bf74-9e57db481441.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/abbc8daa-d665-4e2f-bf74-9e57db481441.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:13.013 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4531db9d-769e-4011-9082-db8a062625fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:13.014 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-abbc8daa-d665-4e2f-bf74-9e57db481441
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/abbc8daa-d665-4e2f-bf74-9e57db481441.pid.haproxy
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID abbc8daa-d665-4e2f-bf74-9e57db481441
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:26:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:13.015 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'env', 'PROCESS_TAG=haproxy-abbc8daa-d665-4e2f-bf74-9e57db481441', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/abbc8daa-d665-4e2f-bf74-9e57db481441.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:26:13 np0005539564 nova_compute[226295]: 2025-11-29 08:26:13.018 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:26:13 np0005539564 nova_compute[226295]: 2025-11-29 08:26:13.031 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404773.0303702, 554ea6a4-8de1-41bf-8772-b15e95a7fd05 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:13 np0005539564 nova_compute[226295]: 2025-11-29 08:26:13.031 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] VM Started (Lifecycle Event)#033[00m
Nov 29 03:26:13 np0005539564 nova_compute[226295]: 2025-11-29 08:26:13.199 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:26:13 np0005539564 nova_compute[226295]: 2025-11-29 08:26:13.240 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:26:13 np0005539564 nova_compute[226295]: 2025-11-29 08:26:13.241 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:13 np0005539564 nova_compute[226295]: 2025-11-29 08:26:13.275 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:13 np0005539564 nova_compute[226295]: 2025-11-29 08:26:13.281 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404773.0332391, 554ea6a4-8de1-41bf-8772-b15e95a7fd05 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:13 np0005539564 nova_compute[226295]: 2025-11-29 08:26:13.282 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:26:13 np0005539564 nova_compute[226295]: 2025-11-29 08:26:13.288 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:13 np0005539564 nova_compute[226295]: 2025-11-29 08:26:13.302 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:13 np0005539564 nova_compute[226295]: 2025-11-29 08:26:13.307 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:26:13 np0005539564 nova_compute[226295]: 2025-11-29 08:26:13.340 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:26:13 np0005539564 podman[284534]: 2025-11-29 08:26:13.476803281 +0000 UTC m=+0.073453217 container create 0d7f9342d8766956cc7b3a8237becbb4d1c1a2d2c0f214963157c1f10b4f6e1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:26:13 np0005539564 systemd[1]: Started libpod-conmon-0d7f9342d8766956cc7b3a8237becbb4d1c1a2d2c0f214963157c1f10b4f6e1f.scope.
Nov 29 03:26:13 np0005539564 podman[284534]: 2025-11-29 08:26:13.437078068 +0000 UTC m=+0.033728034 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:26:13 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:26:13 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87ca7f56b5a170cbd7a16fd1e395d440fcbf7ef390a658a0f981e7b6164422c5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:13 np0005539564 podman[284534]: 2025-11-29 08:26:13.55705026 +0000 UTC m=+0.153700126 container init 0d7f9342d8766956cc7b3a8237becbb4d1c1a2d2c0f214963157c1f10b4f6e1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 03:26:13 np0005539564 podman[284534]: 2025-11-29 08:26:13.562274702 +0000 UTC m=+0.158924568 container start 0d7f9342d8766956cc7b3a8237becbb4d1c1a2d2c0f214963157c1f10b4f6e1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:26:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:13.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:13 np0005539564 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[284550]: [NOTICE]   (284554) : New worker (284556) forked
Nov 29 03:26:13 np0005539564 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[284550]: [NOTICE]   (284554) : Loading success.
Nov 29 03:26:13 np0005539564 nova_compute[226295]: 2025-11-29 08:26:13.779 226310 DEBUG nova.compute.manager [req-2e269313-ef54-484d-a3bb-d5fc78422e61 req-15ae58a9-e827-4a4b-af5e-ab56ba3468a4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Received event network-changed-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:13 np0005539564 nova_compute[226295]: 2025-11-29 08:26:13.782 226310 DEBUG nova.compute.manager [req-2e269313-ef54-484d-a3bb-d5fc78422e61 req-15ae58a9-e827-4a4b-af5e-ab56ba3468a4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Refreshing instance network info cache due to event network-changed-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:26:13 np0005539564 nova_compute[226295]: 2025-11-29 08:26:13.783 226310 DEBUG oslo_concurrency.lockutils [req-2e269313-ef54-484d-a3bb-d5fc78422e61 req-15ae58a9-e827-4a4b-af5e-ab56ba3468a4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:14.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:15.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:15 np0005539564 nova_compute[226295]: 2025-11-29 08:26:15.850 226310 DEBUG nova.network.neutron [req-81999c48-713e-4a14-9b96-eb9c3facd020 req-a327b1cd-47f4-4e4c-8be8-fc0e403df249 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Updated VIF entry in instance network info cache for port 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:26:15 np0005539564 nova_compute[226295]: 2025-11-29 08:26:15.850 226310 DEBUG nova.network.neutron [req-81999c48-713e-4a14-9b96-eb9c3facd020 req-a327b1cd-47f4-4e4c-8be8-fc0e403df249 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Updating instance_info_cache with network_info: [{"id": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "address": "fa:16:3e:da:92:75", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e5e343-ed", "ovs_interfaceid": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:15 np0005539564 nova_compute[226295]: 2025-11-29 08:26:15.877 226310 DEBUG oslo_concurrency.lockutils [req-81999c48-713e-4a14-9b96-eb9c3facd020 req-a327b1cd-47f4-4e4c-8be8-fc0e403df249 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:15 np0005539564 nova_compute[226295]: 2025-11-29 08:26:15.878 226310 DEBUG oslo_concurrency.lockutils [req-2e269313-ef54-484d-a3bb-d5fc78422e61 req-15ae58a9-e827-4a4b-af5e-ab56ba3468a4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:15 np0005539564 nova_compute[226295]: 2025-11-29 08:26:15.878 226310 DEBUG nova.network.neutron [req-2e269313-ef54-484d-a3bb-d5fc78422e61 req-15ae58a9-e827-4a4b-af5e-ab56ba3468a4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Refreshing network info cache for port 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:26:15 np0005539564 nova_compute[226295]: 2025-11-29 08:26:15.924 226310 DEBUG nova.compute.manager [req-bffdb01f-c2ad-4b95-b200-1949819dc57f req-aa4a32a1-dfe4-4f90-99ee-a94bb86beb76 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Received event network-vif-plugged-f095bbfd-d901-4dd4-8831-72dab1104494 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:15 np0005539564 nova_compute[226295]: 2025-11-29 08:26:15.925 226310 DEBUG oslo_concurrency.lockutils [req-bffdb01f-c2ad-4b95-b200-1949819dc57f req-aa4a32a1-dfe4-4f90-99ee-a94bb86beb76 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:15 np0005539564 nova_compute[226295]: 2025-11-29 08:26:15.925 226310 DEBUG oslo_concurrency.lockutils [req-bffdb01f-c2ad-4b95-b200-1949819dc57f req-aa4a32a1-dfe4-4f90-99ee-a94bb86beb76 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:15 np0005539564 nova_compute[226295]: 2025-11-29 08:26:15.926 226310 DEBUG oslo_concurrency.lockutils [req-bffdb01f-c2ad-4b95-b200-1949819dc57f req-aa4a32a1-dfe4-4f90-99ee-a94bb86beb76 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:15 np0005539564 nova_compute[226295]: 2025-11-29 08:26:15.926 226310 DEBUG nova.compute.manager [req-bffdb01f-c2ad-4b95-b200-1949819dc57f req-aa4a32a1-dfe4-4f90-99ee-a94bb86beb76 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Processing event network-vif-plugged-f095bbfd-d901-4dd4-8831-72dab1104494 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:26:15 np0005539564 nova_compute[226295]: 2025-11-29 08:26:15.927 226310 DEBUG nova.compute.manager [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:26:15 np0005539564 nova_compute[226295]: 2025-11-29 08:26:15.933 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404775.9324858, 554ea6a4-8de1-41bf-8772-b15e95a7fd05 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:15 np0005539564 nova_compute[226295]: 2025-11-29 08:26:15.933 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:26:15 np0005539564 nova_compute[226295]: 2025-11-29 08:26:15.937 226310 DEBUG nova.virt.libvirt.driver [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:26:15 np0005539564 nova_compute[226295]: 2025-11-29 08:26:15.942 226310 INFO nova.virt.libvirt.driver [-] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Instance spawned successfully.#033[00m
Nov 29 03:26:15 np0005539564 nova_compute[226295]: 2025-11-29 08:26:15.942 226310 DEBUG nova.virt.libvirt.driver [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:26:15 np0005539564 nova_compute[226295]: 2025-11-29 08:26:15.958 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:15 np0005539564 nova_compute[226295]: 2025-11-29 08:26:15.967 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:26:15 np0005539564 nova_compute[226295]: 2025-11-29 08:26:15.972 226310 DEBUG nova.virt.libvirt.driver [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:15 np0005539564 nova_compute[226295]: 2025-11-29 08:26:15.973 226310 DEBUG nova.virt.libvirt.driver [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:15 np0005539564 nova_compute[226295]: 2025-11-29 08:26:15.973 226310 DEBUG nova.virt.libvirt.driver [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:15 np0005539564 nova_compute[226295]: 2025-11-29 08:26:15.974 226310 DEBUG nova.virt.libvirt.driver [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:15 np0005539564 nova_compute[226295]: 2025-11-29 08:26:15.975 226310 DEBUG nova.virt.libvirt.driver [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:15 np0005539564 nova_compute[226295]: 2025-11-29 08:26:15.975 226310 DEBUG nova.virt.libvirt.driver [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:16 np0005539564 nova_compute[226295]: 2025-11-29 08:26:16.011 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:26:16 np0005539564 nova_compute[226295]: 2025-11-29 08:26:16.049 226310 INFO nova.compute.manager [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Took 12.47 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:26:16 np0005539564 nova_compute[226295]: 2025-11-29 08:26:16.049 226310 DEBUG nova.compute.manager [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:16 np0005539564 nova_compute[226295]: 2025-11-29 08:26:16.181 226310 INFO nova.compute.manager [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Took 13.84 seconds to build instance.#033[00m
Nov 29 03:26:16 np0005539564 nova_compute[226295]: 2025-11-29 08:26:16.205 226310 DEBUG oslo_concurrency.lockutils [None req-f1b18709-f4a7-4a79-9cdc-f800d1999144 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.982s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:16 np0005539564 nova_compute[226295]: 2025-11-29 08:26:16.264 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:16.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:17.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:18 np0005539564 nova_compute[226295]: 2025-11-29 08:26:18.291 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e349 e349: 3 total, 3 up, 3 in
Nov 29 03:26:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:26:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:18.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:26:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:19.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:20.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:21 np0005539564 nova_compute[226295]: 2025-11-29 08:26:21.266 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:21 np0005539564 nova_compute[226295]: 2025-11-29 08:26:21.473 226310 DEBUG nova.network.neutron [req-2e269313-ef54-484d-a3bb-d5fc78422e61 req-15ae58a9-e827-4a4b-af5e-ab56ba3468a4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Updated VIF entry in instance network info cache for port 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:26:21 np0005539564 nova_compute[226295]: 2025-11-29 08:26:21.474 226310 DEBUG nova.network.neutron [req-2e269313-ef54-484d-a3bb-d5fc78422e61 req-15ae58a9-e827-4a4b-af5e-ab56ba3468a4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Updating instance_info_cache with network_info: [{"id": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "address": "fa:16:3e:da:92:75", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e5e343-ed", "ovs_interfaceid": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:21 np0005539564 nova_compute[226295]: 2025-11-29 08:26:21.510 226310 DEBUG oslo_concurrency.lockutils [req-2e269313-ef54-484d-a3bb-d5fc78422e61 req-15ae58a9-e827-4a4b-af5e-ab56ba3468a4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:21 np0005539564 nova_compute[226295]: 2025-11-29 08:26:21.511 226310 DEBUG nova.compute.manager [req-2e269313-ef54-484d-a3bb-d5fc78422e61 req-15ae58a9-e827-4a4b-af5e-ab56ba3468a4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Received event network-changed-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:21 np0005539564 nova_compute[226295]: 2025-11-29 08:26:21.511 226310 DEBUG nova.compute.manager [req-2e269313-ef54-484d-a3bb-d5fc78422e61 req-15ae58a9-e827-4a4b-af5e-ab56ba3468a4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Refreshing instance network info cache due to event network-changed-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:26:21 np0005539564 nova_compute[226295]: 2025-11-29 08:26:21.512 226310 DEBUG oslo_concurrency.lockutils [req-2e269313-ef54-484d-a3bb-d5fc78422e61 req-15ae58a9-e827-4a4b-af5e-ab56ba3468a4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:21 np0005539564 nova_compute[226295]: 2025-11-29 08:26:21.512 226310 DEBUG oslo_concurrency.lockutils [req-2e269313-ef54-484d-a3bb-d5fc78422e61 req-15ae58a9-e827-4a4b-af5e-ab56ba3468a4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:21 np0005539564 nova_compute[226295]: 2025-11-29 08:26:21.512 226310 DEBUG nova.network.neutron [req-2e269313-ef54-484d-a3bb-d5fc78422e61 req-15ae58a9-e827-4a4b-af5e-ab56ba3468a4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Refreshing network info cache for port 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:26:21 np0005539564 podman[284568]: 2025-11-29 08:26:21.549785912 +0000 UTC m=+0.077494176 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:26:21 np0005539564 podman[284567]: 2025-11-29 08:26:21.549972248 +0000 UTC m=+0.088855774 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 03:26:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:21.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:21 np0005539564 podman[284566]: 2025-11-29 08:26:21.583764201 +0000 UTC m=+0.127692623 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:26:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:22.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:23 np0005539564 nova_compute[226295]: 2025-11-29 08:26:23.295 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:23 np0005539564 nova_compute[226295]: 2025-11-29 08:26:23.547 226310 DEBUG nova.compute.manager [req-716f5b99-a83e-48ca-b44b-71bfc28ff7ea req-bd822238-427c-471f-bd57-7802093f53ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Received event network-changed-f095bbfd-d901-4dd4-8831-72dab1104494 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:23 np0005539564 nova_compute[226295]: 2025-11-29 08:26:23.548 226310 DEBUG nova.compute.manager [req-716f5b99-a83e-48ca-b44b-71bfc28ff7ea req-bd822238-427c-471f-bd57-7802093f53ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Refreshing instance network info cache due to event network-changed-f095bbfd-d901-4dd4-8831-72dab1104494. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:26:23 np0005539564 nova_compute[226295]: 2025-11-29 08:26:23.548 226310 DEBUG oslo_concurrency.lockutils [req-716f5b99-a83e-48ca-b44b-71bfc28ff7ea req-bd822238-427c-471f-bd57-7802093f53ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:23 np0005539564 nova_compute[226295]: 2025-11-29 08:26:23.549 226310 DEBUG oslo_concurrency.lockutils [req-716f5b99-a83e-48ca-b44b-71bfc28ff7ea req-bd822238-427c-471f-bd57-7802093f53ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:23 np0005539564 nova_compute[226295]: 2025-11-29 08:26:23.549 226310 DEBUG nova.network.neutron [req-716f5b99-a83e-48ca-b44b-71bfc28ff7ea req-bd822238-427c-471f-bd57-7802093f53ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Refreshing network info cache for port f095bbfd-d901-4dd4-8831-72dab1104494 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:26:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:23.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:24.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:25.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:26 np0005539564 nova_compute[226295]: 2025-11-29 08:26:26.268 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:26 np0005539564 nova_compute[226295]: 2025-11-29 08:26:26.552 226310 DEBUG nova.network.neutron [req-2e269313-ef54-484d-a3bb-d5fc78422e61 req-15ae58a9-e827-4a4b-af5e-ab56ba3468a4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Updated VIF entry in instance network info cache for port 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:26:26 np0005539564 nova_compute[226295]: 2025-11-29 08:26:26.552 226310 DEBUG nova.network.neutron [req-2e269313-ef54-484d-a3bb-d5fc78422e61 req-15ae58a9-e827-4a4b-af5e-ab56ba3468a4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Updating instance_info_cache with network_info: [{"id": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "address": "fa:16:3e:da:92:75", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e5e343-ed", "ovs_interfaceid": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:26 np0005539564 nova_compute[226295]: 2025-11-29 08:26:26.580 226310 DEBUG oslo_concurrency.lockutils [req-2e269313-ef54-484d-a3bb-d5fc78422e61 req-15ae58a9-e827-4a4b-af5e-ab56ba3468a4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:26 np0005539564 nova_compute[226295]: 2025-11-29 08:26:26.580 226310 DEBUG nova.compute.manager [req-2e269313-ef54-484d-a3bb-d5fc78422e61 req-15ae58a9-e827-4a4b-af5e-ab56ba3468a4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Received event network-vif-plugged-f095bbfd-d901-4dd4-8831-72dab1104494 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:26 np0005539564 nova_compute[226295]: 2025-11-29 08:26:26.581 226310 DEBUG oslo_concurrency.lockutils [req-2e269313-ef54-484d-a3bb-d5fc78422e61 req-15ae58a9-e827-4a4b-af5e-ab56ba3468a4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:26 np0005539564 nova_compute[226295]: 2025-11-29 08:26:26.581 226310 DEBUG oslo_concurrency.lockutils [req-2e269313-ef54-484d-a3bb-d5fc78422e61 req-15ae58a9-e827-4a4b-af5e-ab56ba3468a4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:26 np0005539564 nova_compute[226295]: 2025-11-29 08:26:26.581 226310 DEBUG oslo_concurrency.lockutils [req-2e269313-ef54-484d-a3bb-d5fc78422e61 req-15ae58a9-e827-4a4b-af5e-ab56ba3468a4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:26 np0005539564 nova_compute[226295]: 2025-11-29 08:26:26.581 226310 DEBUG nova.compute.manager [req-2e269313-ef54-484d-a3bb-d5fc78422e61 req-15ae58a9-e827-4a4b-af5e-ab56ba3468a4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] No waiting events found dispatching network-vif-plugged-f095bbfd-d901-4dd4-8831-72dab1104494 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:26:26 np0005539564 nova_compute[226295]: 2025-11-29 08:26:26.581 226310 WARNING nova.compute.manager [req-2e269313-ef54-484d-a3bb-d5fc78422e61 req-15ae58a9-e827-4a4b-af5e-ab56ba3468a4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Received unexpected event network-vif-plugged-f095bbfd-d901-4dd4-8831-72dab1104494 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:26:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:26.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:27 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:26:27 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:26:27 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:26:27 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:26:27 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:26:27 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:26:27 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:26:27 np0005539564 nova_compute[226295]: 2025-11-29 08:26:27.102 226310 DEBUG nova.network.neutron [req-716f5b99-a83e-48ca-b44b-71bfc28ff7ea req-bd822238-427c-471f-bd57-7802093f53ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Updated VIF entry in instance network info cache for port f095bbfd-d901-4dd4-8831-72dab1104494. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:26:27 np0005539564 nova_compute[226295]: 2025-11-29 08:26:27.103 226310 DEBUG nova.network.neutron [req-716f5b99-a83e-48ca-b44b-71bfc28ff7ea req-bd822238-427c-471f-bd57-7802093f53ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Updating instance_info_cache with network_info: [{"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:27 np0005539564 nova_compute[226295]: 2025-11-29 08:26:27.154 226310 DEBUG oslo_concurrency.lockutils [req-716f5b99-a83e-48ca-b44b-71bfc28ff7ea req-bd822238-427c-471f-bd57-7802093f53ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:27.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:28 np0005539564 nova_compute[226295]: 2025-11-29 08:26:28.083 226310 DEBUG nova.compute.manager [req-d326c977-1227-4c1f-bf12-a855d73e8f18 req-e43ade97-a6e7-4fad-921c-04db448ea98d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Received event network-changed-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:28 np0005539564 nova_compute[226295]: 2025-11-29 08:26:28.084 226310 DEBUG nova.compute.manager [req-d326c977-1227-4c1f-bf12-a855d73e8f18 req-e43ade97-a6e7-4fad-921c-04db448ea98d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Refreshing instance network info cache due to event network-changed-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:26:28 np0005539564 nova_compute[226295]: 2025-11-29 08:26:28.084 226310 DEBUG oslo_concurrency.lockutils [req-d326c977-1227-4c1f-bf12-a855d73e8f18 req-e43ade97-a6e7-4fad-921c-04db448ea98d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:28 np0005539564 nova_compute[226295]: 2025-11-29 08:26:28.084 226310 DEBUG oslo_concurrency.lockutils [req-d326c977-1227-4c1f-bf12-a855d73e8f18 req-e43ade97-a6e7-4fad-921c-04db448ea98d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:28 np0005539564 nova_compute[226295]: 2025-11-29 08:26:28.085 226310 DEBUG nova.network.neutron [req-d326c977-1227-4c1f-bf12-a855d73e8f18 req-e43ade97-a6e7-4fad-921c-04db448ea98d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Refreshing network info cache for port 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:26:28 np0005539564 nova_compute[226295]: 2025-11-29 08:26:28.297 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:28.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:26:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:29.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:26:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e350 e350: 3 total, 3 up, 3 in
Nov 29 03:26:30 np0005539564 ovn_controller[130591]: 2025-11-29T08:26:30Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7b:13:85 10.100.0.12
Nov 29 03:26:30 np0005539564 ovn_controller[130591]: 2025-11-29T08:26:30Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:13:85 10.100.0.12
Nov 29 03:26:30 np0005539564 nova_compute[226295]: 2025-11-29 08:26:30.550 226310 DEBUG nova.network.neutron [req-d326c977-1227-4c1f-bf12-a855d73e8f18 req-e43ade97-a6e7-4fad-921c-04db448ea98d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Updated VIF entry in instance network info cache for port 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:26:30 np0005539564 nova_compute[226295]: 2025-11-29 08:26:30.551 226310 DEBUG nova.network.neutron [req-d326c977-1227-4c1f-bf12-a855d73e8f18 req-e43ade97-a6e7-4fad-921c-04db448ea98d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Updating instance_info_cache with network_info: [{"id": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "address": "fa:16:3e:da:92:75", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e5e343-ed", "ovs_interfaceid": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:30 np0005539564 nova_compute[226295]: 2025-11-29 08:26:30.578 226310 DEBUG oslo_concurrency.lockutils [req-d326c977-1227-4c1f-bf12-a855d73e8f18 req-e43ade97-a6e7-4fad-921c-04db448ea98d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:30.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:31 np0005539564 nova_compute[226295]: 2025-11-29 08:26:31.270 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:31 np0005539564 nova_compute[226295]: 2025-11-29 08:26:31.287 226310 DEBUG oslo_concurrency.lockutils [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "b86b46f9-7d8f-414f-af87-3822510de392" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:31 np0005539564 nova_compute[226295]: 2025-11-29 08:26:31.288 226310 DEBUG oslo_concurrency.lockutils [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:31 np0005539564 nova_compute[226295]: 2025-11-29 08:26:31.288 226310 INFO nova.compute.manager [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Rebooting instance#033[00m
Nov 29 03:26:31 np0005539564 nova_compute[226295]: 2025-11-29 08:26:31.308 226310 DEBUG oslo_concurrency.lockutils [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:31 np0005539564 nova_compute[226295]: 2025-11-29 08:26:31.308 226310 DEBUG oslo_concurrency.lockutils [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquired lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:31 np0005539564 nova_compute[226295]: 2025-11-29 08:26:31.308 226310 DEBUG nova.network.neutron [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:26:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:31.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:32.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:33 np0005539564 nova_compute[226295]: 2025-11-29 08:26:33.299 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:33.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:26:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:26:34 np0005539564 nova_compute[226295]: 2025-11-29 08:26:34.570 226310 DEBUG nova.network.neutron [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Updating instance_info_cache with network_info: [{"id": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "address": "fa:16:3e:da:92:75", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e5e343-ed", "ovs_interfaceid": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:34 np0005539564 nova_compute[226295]: 2025-11-29 08:26:34.601 226310 DEBUG oslo_concurrency.lockutils [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Releasing lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:34 np0005539564 nova_compute[226295]: 2025-11-29 08:26:34.603 226310 DEBUG nova.compute.manager [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:34 np0005539564 kernel: tap54e5e343-ed (unregistering): left promiscuous mode
Nov 29 03:26:34 np0005539564 NetworkManager[48997]: <info>  [1764404794.8318] device (tap54e5e343-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:26:34 np0005539564 ovn_controller[130591]: 2025-11-29T08:26:34Z|00597|binding|INFO|Releasing lport 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def from this chassis (sb_readonly=0)
Nov 29 03:26:34 np0005539564 nova_compute[226295]: 2025-11-29 08:26:34.845 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:34 np0005539564 ovn_controller[130591]: 2025-11-29T08:26:34Z|00598|binding|INFO|Setting lport 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def down in Southbound
Nov 29 03:26:34 np0005539564 ovn_controller[130591]: 2025-11-29T08:26:34Z|00599|binding|INFO|Removing iface tap54e5e343-ed ovn-installed in OVS
Nov 29 03:26:34 np0005539564 nova_compute[226295]: 2025-11-29 08:26:34.848 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:34.854 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:92:75 10.100.0.14'], port_security=['fa:16:3e:da:92:75 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b86b46f9-7d8f-414f-af87-3822510de392', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf206693-b177-47ba-9c63-2ab4e51898ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f9a9decdabb1480da8f7d039e8b3d414', 'neutron:revision_number': '5', 'neutron:security_group_ids': '140d3240-dbee-4ff7-b341-40a578af5b67 8317ef68-26fb-4cb8-afa5-f6a1bc395d96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.207'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f140b86-0300-440f-be11-680603255cb6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=54e5e343-ed4d-4cf3-9d9f-2ae7ec672def) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:26:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:34.857 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def in datapath cf206693-b177-47ba-9c63-2ab4e51898ce unbound from our chassis#033[00m
Nov 29 03:26:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:34.860 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cf206693-b177-47ba-9c63-2ab4e51898ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:26:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:34.861 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[966c54da-50ca-433e-82d4-8c865c8d94ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:34.863 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce namespace which is not needed anymore#033[00m
Nov 29 03:26:34 np0005539564 nova_compute[226295]: 2025-11-29 08:26:34.862 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:34 np0005539564 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d000000a0.scope: Deactivated successfully.
Nov 29 03:26:34 np0005539564 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d000000a0.scope: Consumed 15.953s CPU time.
Nov 29 03:26:34 np0005539564 systemd-machined[190128]: Machine qemu-73-instance-000000a0 terminated.
Nov 29 03:26:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:34.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:34 np0005539564 nova_compute[226295]: 2025-11-29 08:26:34.986 226310 INFO nova.virt.libvirt.driver [-] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Instance destroyed successfully.#033[00m
Nov 29 03:26:34 np0005539564 nova_compute[226295]: 2025-11-29 08:26:34.987 226310 DEBUG nova.objects.instance [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lazy-loading 'resources' on Instance uuid b86b46f9-7d8f-414f-af87-3822510de392 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:35 np0005539564 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[284011]: [NOTICE]   (284015) : haproxy version is 2.8.14-c23fe91
Nov 29 03:26:35 np0005539564 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[284011]: [NOTICE]   (284015) : path to executable is /usr/sbin/haproxy
Nov 29 03:26:35 np0005539564 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[284011]: [WARNING]  (284015) : Exiting Master process...
Nov 29 03:26:35 np0005539564 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[284011]: [ALERT]    (284015) : Current worker (284017) exited with code 143 (Terminated)
Nov 29 03:26:35 np0005539564 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[284011]: [WARNING]  (284015) : All workers exited. Exiting... (0)
Nov 29 03:26:35 np0005539564 systemd[1]: libpod-d3035cdaa0a882d597c4f14b25f69639f64eb5aa10c37907eb284e8bb012dd26.scope: Deactivated successfully.
Nov 29 03:26:35 np0005539564 podman[284836]: 2025-11-29 08:26:35.038564682 +0000 UTC m=+0.055687177 container died d3035cdaa0a882d597c4f14b25f69639f64eb5aa10c37907eb284e8bb012dd26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.057 226310 DEBUG nova.virt.libvirt.vif [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:25:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-821850339',display_name='tempest-TestMinimumBasicScenario-server-821850339',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-821850339',id=160,image_ref='d9469d26-d189-44ee-a659-1398ee5e0da2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ7gdL4PKUc9tosM7g28DfdZ6SzPuAj8/oAyNdJYivPDpnhrJbV+U75MujLhc3BhkXGqLXqZ+FF7kcJucYEQCJU7T523I/wegT3xL9AQVlLwpt4RmAyZ0AklLUZTVxh90g==',key_name='tempest-TestMinimumBasicScenario-138660977',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:25:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f9a9decdabb1480da8f7d039e8b3d414',ramdisk_id='',reservation_id='r-dl7txott',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d9469d26-d189-44ee-a659-1398ee5e0da2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-1484268516',owner_user_name='tempest-TestMinimumBasicScenario-1484268516-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:26:34Z,user_data=None,user_id='0cbb3ac39ebd4876ad23f2a6d1c50166',uuid=b86b46f9-7d8f-414f-af87-3822510de392,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "address": "fa:16:3e:da:92:75", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e5e343-ed", "ovs_interfaceid": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.058 226310 DEBUG nova.network.os_vif_util [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Converting VIF {"id": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "address": "fa:16:3e:da:92:75", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e5e343-ed", "ovs_interfaceid": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.059 226310 DEBUG nova.network.os_vif_util [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:da:92:75,bridge_name='br-int',has_traffic_filtering=True,id=54e5e343-ed4d-4cf3-9d9f-2ae7ec672def,network=Network(cf206693-b177-47ba-9c63-2ab4e51898ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e5e343-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.059 226310 DEBUG os_vif [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:92:75,bridge_name='br-int',has_traffic_filtering=True,id=54e5e343-ed4d-4cf3-9d9f-2ae7ec672def,network=Network(cf206693-b177-47ba-9c63-2ab4e51898ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e5e343-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.062 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.062 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54e5e343-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.102 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.104 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:35 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d3035cdaa0a882d597c4f14b25f69639f64eb5aa10c37907eb284e8bb012dd26-userdata-shm.mount: Deactivated successfully.
Nov 29 03:26:35 np0005539564 systemd[1]: var-lib-containers-storage-overlay-5916a910de3b8279f19fcf02bd4cf800e7326552628a10b85fea4abffc047287-merged.mount: Deactivated successfully.
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.109 226310 INFO os_vif [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:92:75,bridge_name='br-int',has_traffic_filtering=True,id=54e5e343-ed4d-4cf3-9d9f-2ae7ec672def,network=Network(cf206693-b177-47ba-9c63-2ab4e51898ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e5e343-ed')#033[00m
Nov 29 03:26:35 np0005539564 podman[284836]: 2025-11-29 08:26:35.117488635 +0000 UTC m=+0.134611120 container cleanup d3035cdaa0a882d597c4f14b25f69639f64eb5aa10c37907eb284e8bb012dd26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.118 226310 DEBUG nova.virt.libvirt.driver [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Start _get_guest_xml network_info=[{"id": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "address": "fa:16:3e:da:92:75", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e5e343-ed", "ovs_interfaceid": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=d9469d26-d189-44ee-a659-1398ee5e0da2,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': 'd9469d26-d189-44ee-a659-1398ee5e0da2'}], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vdb', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-9a3eb32a-193c-49dc-a1e2-9c3bb4895a1a', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '9a3eb32a-193c-49dc-a1e2-9c3bb4895a1a', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'b86b46f9-7d8f-414f-af87-3822510de392', 'attached_at': '', 'detached_at': '', 'volume_id': '9a3eb32a-193c-49dc-a1e2-9c3bb4895a1a', 'serial': '9a3eb32a-193c-49dc-a1e2-9c3bb4895a1a'}, 'guest_format': None, 'delete_on_termination': False, 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': 'b1ae05d8-ccb0-4b94-ac08-8333b1650f3f', 'boot_index': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.122 226310 WARNING nova.virt.libvirt.driver [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.129 226310 DEBUG nova.virt.libvirt.host [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.130 226310 DEBUG nova.virt.libvirt.host [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:26:35 np0005539564 systemd[1]: libpod-conmon-d3035cdaa0a882d597c4f14b25f69639f64eb5aa10c37907eb284e8bb012dd26.scope: Deactivated successfully.
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.133 226310 DEBUG nova.virt.libvirt.host [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.133 226310 DEBUG nova.virt.libvirt.host [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.134 226310 DEBUG nova.virt.libvirt.driver [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.134 226310 DEBUG nova.virt.hardware [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=d9469d26-d189-44ee-a659-1398ee5e0da2,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.135 226310 DEBUG nova.virt.hardware [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.135 226310 DEBUG nova.virt.hardware [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.135 226310 DEBUG nova.virt.hardware [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.135 226310 DEBUG nova.virt.hardware [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.135 226310 DEBUG nova.virt.hardware [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.136 226310 DEBUG nova.virt.hardware [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.136 226310 DEBUG nova.virt.hardware [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.136 226310 DEBUG nova.virt.hardware [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.136 226310 DEBUG nova.virt.hardware [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.136 226310 DEBUG nova.virt.hardware [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.136 226310 DEBUG nova.objects.instance [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b86b46f9-7d8f-414f-af87-3822510de392 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:35 np0005539564 podman[284876]: 2025-11-29 08:26:35.181783773 +0000 UTC m=+0.043429245 container remove d3035cdaa0a882d597c4f14b25f69639f64eb5aa10c37907eb284e8bb012dd26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:26:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:35.190 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1038a1d2-d647-42bb-96f8-f0b23e93c791]: (4, ('Sat Nov 29 08:26:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce (d3035cdaa0a882d597c4f14b25f69639f64eb5aa10c37907eb284e8bb012dd26)\nd3035cdaa0a882d597c4f14b25f69639f64eb5aa10c37907eb284e8bb012dd26\nSat Nov 29 08:26:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce (d3035cdaa0a882d597c4f14b25f69639f64eb5aa10c37907eb284e8bb012dd26)\nd3035cdaa0a882d597c4f14b25f69639f64eb5aa10c37907eb284e8bb012dd26\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:35.192 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[fccc0601-a085-416a-8e5c-c532e4fcdc95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:35.193 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf206693-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.195 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:35 np0005539564 kernel: tapcf206693-b0: left promiscuous mode
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.197 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:35.200 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8f5ad428-098a-4496-976a-c334a6219196]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.202 226310 DEBUG oslo_concurrency.processutils [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:35.217 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3279ad6e-2df0-44b1-98ef-fcac8a3d9c27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:35.219 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8ef3fbfe-24f9-4a36-a2b4-533ab5961843]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.233 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:35.244 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6cfb3243-ce0a-4b09-8791-0efe800af0bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 772731, 'reachable_time': 32321, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284891, 'error': None, 'target': 'ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:35 np0005539564 systemd[1]: run-netns-ovnmeta\x2dcf206693\x2db177\x2d47ba\x2d9c63\x2d2ab4e51898ce.mount: Deactivated successfully.
Nov 29 03:26:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:35.247 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:26:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:35.247 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[a0281559-8dec-4e51-a680-a0c6d0b642f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:35.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:26:35 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1904656645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.660 226310 DEBUG oslo_concurrency.processutils [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:35 np0005539564 nova_compute[226295]: 2025-11-29 08:26:35.699 226310 DEBUG oslo_concurrency.processutils [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:26:36 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2312093543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.177 226310 DEBUG oslo_concurrency.processutils [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.233 226310 DEBUG nova.virt.libvirt.vif [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:25:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-821850339',display_name='tempest-TestMinimumBasicScenario-server-821850339',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-821850339',id=160,image_ref='d9469d26-d189-44ee-a659-1398ee5e0da2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ7gdL4PKUc9tosM7g28DfdZ6SzPuAj8/oAyNdJYivPDpnhrJbV+U75MujLhc3BhkXGqLXqZ+FF7kcJucYEQCJU7T523I/wegT3xL9AQVlLwpt4RmAyZ0AklLUZTVxh90g==',key_name='tempest-TestMinimumBasicScenario-138660977',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:25:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f9a9decdabb1480da8f7d039e8b3d414',ramdisk_id='',reservation_id='r-dl7txott',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d9469d26-d189-44ee-a659-1398ee5e0da2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-1484268516',owner_user_name='tempest-TestMinimumBasicScenario-1484268516-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:26:34Z,user_data=None,user_id='0cbb3ac39ebd4876ad23f2a6d1c50166',uuid=b86b46f9-7d8f-414f-af87-3822510de392,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "address": "fa:16:3e:da:92:75", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e5e343-ed", "ovs_interfaceid": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.234 226310 DEBUG nova.network.os_vif_util [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Converting VIF {"id": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "address": "fa:16:3e:da:92:75", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e5e343-ed", "ovs_interfaceid": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.235 226310 DEBUG nova.network.os_vif_util [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:da:92:75,bridge_name='br-int',has_traffic_filtering=True,id=54e5e343-ed4d-4cf3-9d9f-2ae7ec672def,network=Network(cf206693-b177-47ba-9c63-2ab4e51898ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e5e343-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.237 226310 DEBUG nova.objects.instance [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lazy-loading 'pci_devices' on Instance uuid b86b46f9-7d8f-414f-af87-3822510de392 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.279 226310 DEBUG nova.virt.libvirt.driver [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:26:36 np0005539564 nova_compute[226295]:  <uuid>b86b46f9-7d8f-414f-af87-3822510de392</uuid>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:  <name>instance-000000a0</name>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <nova:name>tempest-TestMinimumBasicScenario-server-821850339</nova:name>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:26:35</nova:creationTime>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:26:36 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:        <nova:user uuid="0cbb3ac39ebd4876ad23f2a6d1c50166">tempest-TestMinimumBasicScenario-1484268516-project-member</nova:user>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:        <nova:project uuid="f9a9decdabb1480da8f7d039e8b3d414">tempest-TestMinimumBasicScenario-1484268516</nova:project>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="d9469d26-d189-44ee-a659-1398ee5e0da2"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:        <nova:port uuid="54e5e343-ed4d-4cf3-9d9f-2ae7ec672def">
Nov 29 03:26:36 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <entry name="serial">b86b46f9-7d8f-414f-af87-3822510de392</entry>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <entry name="uuid">b86b46f9-7d8f-414f-af87-3822510de392</entry>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/b86b46f9-7d8f-414f-af87-3822510de392_disk">
Nov 29 03:26:36 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:26:36 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/b86b46f9-7d8f-414f-af87-3822510de392_disk.config">
Nov 29 03:26:36 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:26:36 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="volumes/volume-9a3eb32a-193c-49dc-a1e2-9c3bb4895a1a">
Nov 29 03:26:36 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:26:36 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <serial>9a3eb32a-193c-49dc-a1e2-9c3bb4895a1a</serial>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:da:92:75"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <target dev="tap54e5e343-ed"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/b86b46f9-7d8f-414f-af87-3822510de392/console.log" append="off"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <input type="keyboard" bus="usb"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:26:36 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:26:36 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:26:36 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:26:36 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.280 226310 DEBUG nova.virt.libvirt.driver [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] skipping disk for instance-000000a0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.280 226310 DEBUG nova.virt.libvirt.driver [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] skipping disk for instance-000000a0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.281 226310 DEBUG nova.virt.libvirt.driver [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] skipping disk for instance-000000a0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.282 226310 DEBUG nova.virt.libvirt.vif [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:25:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-821850339',display_name='tempest-TestMinimumBasicScenario-server-821850339',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-821850339',id=160,image_ref='d9469d26-d189-44ee-a659-1398ee5e0da2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ7gdL4PKUc9tosM7g28DfdZ6SzPuAj8/oAyNdJYivPDpnhrJbV+U75MujLhc3BhkXGqLXqZ+FF7kcJucYEQCJU7T523I/wegT3xL9AQVlLwpt4RmAyZ0AklLUZTVxh90g==',key_name='tempest-TestMinimumBasicScenario-138660977',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:25:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='f9a9decdabb1480da8f7d039e8b3d414',ramdisk_id='',reservation_id='r-dl7txott',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d9469d26-d189-44ee-a659-1398ee5e0da2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-1484268516',owner_user_name='tempest-TestMinimumBasicScenario-1484268516-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:26:34Z,user_data=None,user_id='0cbb3ac39ebd4876ad23f2a6d1c50166',uuid=b86b46f9-7d8f-414f-af87-3822510de392,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "address": "fa:16:3e:da:92:75", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e5e343-ed", "ovs_interfaceid": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.282 226310 DEBUG nova.network.os_vif_util [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Converting VIF {"id": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "address": "fa:16:3e:da:92:75", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e5e343-ed", "ovs_interfaceid": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.283 226310 DEBUG nova.network.os_vif_util [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:da:92:75,bridge_name='br-int',has_traffic_filtering=True,id=54e5e343-ed4d-4cf3-9d9f-2ae7ec672def,network=Network(cf206693-b177-47ba-9c63-2ab4e51898ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e5e343-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.283 226310 DEBUG os_vif [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:92:75,bridge_name='br-int',has_traffic_filtering=True,id=54e5e343-ed4d-4cf3-9d9f-2ae7ec672def,network=Network(cf206693-b177-47ba-9c63-2ab4e51898ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e5e343-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.284 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.284 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.285 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.288 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.289 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54e5e343-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.289 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap54e5e343-ed, col_values=(('external_ids', {'iface-id': '54e5e343-ed4d-4cf3-9d9f-2ae7ec672def', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:92:75', 'vm-uuid': 'b86b46f9-7d8f-414f-af87-3822510de392'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.291 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:36 np0005539564 NetworkManager[48997]: <info>  [1764404796.2922] manager: (tap54e5e343-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/278)
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.295 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.297 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.298 226310 INFO os_vif [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:92:75,bridge_name='br-int',has_traffic_filtering=True,id=54e5e343-ed4d-4cf3-9d9f-2ae7ec672def,network=Network(cf206693-b177-47ba-9c63-2ab4e51898ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e5e343-ed')#033[00m
Nov 29 03:26:36 np0005539564 kernel: tap54e5e343-ed: entered promiscuous mode
Nov 29 03:26:36 np0005539564 systemd-udevd[284815]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:26:36 np0005539564 ovn_controller[130591]: 2025-11-29T08:26:36Z|00600|binding|INFO|Claiming lport 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def for this chassis.
Nov 29 03:26:36 np0005539564 ovn_controller[130591]: 2025-11-29T08:26:36Z|00601|binding|INFO|54e5e343-ed4d-4cf3-9d9f-2ae7ec672def: Claiming fa:16:3e:da:92:75 10.100.0.14
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.374 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:36 np0005539564 NetworkManager[48997]: <info>  [1764404796.3759] manager: (tap54e5e343-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/279)
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.375 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:36 np0005539564 NetworkManager[48997]: <info>  [1764404796.3888] device (tap54e5e343-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:26:36 np0005539564 NetworkManager[48997]: <info>  [1764404796.3912] device (tap54e5e343-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.392 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:36 np0005539564 ovn_controller[130591]: 2025-11-29T08:26:36Z|00602|binding|INFO|Setting lport 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def ovn-installed in OVS
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.395 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.408 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:92:75 10.100.0.14'], port_security=['fa:16:3e:da:92:75 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b86b46f9-7d8f-414f-af87-3822510de392', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf206693-b177-47ba-9c63-2ab4e51898ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f9a9decdabb1480da8f7d039e8b3d414', 'neutron:revision_number': '6', 'neutron:security_group_ids': '140d3240-dbee-4ff7-b341-40a578af5b67 8317ef68-26fb-4cb8-afa5-f6a1bc395d96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.207'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f140b86-0300-440f-be11-680603255cb6, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=54e5e343-ed4d-4cf3-9d9f-2ae7ec672def) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:26:36 np0005539564 ovn_controller[130591]: 2025-11-29T08:26:36Z|00603|binding|INFO|Setting lport 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def up in Southbound
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.410 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def in datapath cf206693-b177-47ba-9c63-2ab4e51898ce bound to our chassis#033[00m
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.413 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf206693-b177-47ba-9c63-2ab4e51898ce#033[00m
Nov 29 03:26:36 np0005539564 systemd-machined[190128]: New machine qemu-75-instance-000000a0.
Nov 29 03:26:36 np0005539564 systemd[1]: Started Virtual Machine qemu-75-instance-000000a0.
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.434 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[38e1da15-55ca-46fa-8529-6f5cfd5fbf81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.435 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcf206693-b1 in ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.439 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcf206693-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.439 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8d1794da-f81d-49e5-afce-538ff533051c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.441 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e342432a-3bef-49a3-9a59-a6211ecdfde2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.458 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[83c71812-67e5-4cfc-9130-cc62efebc599]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.475 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4786a33a-851b-4da4-9d7d-e60836c13c34]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.523 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[20427075-dc71-46b4-99b3-c6e770b043c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.529 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[26a8b799-fc6e-493e-bb95-a2d90fee3a04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:36 np0005539564 NetworkManager[48997]: <info>  [1764404796.5317] manager: (tapcf206693-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/280)
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.569 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[9d6d3d3a-f228-49f3-b39a-01f037d2baf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.573 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[f05ea5cc-ad45-43b7-ba95-22313c23bdfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:36 np0005539564 NetworkManager[48997]: <info>  [1764404796.6003] device (tapcf206693-b0): carrier: link connected
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.608 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[de8a3969-9c0d-425c-b270-2b1637dc0f6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.637 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e686feda-f9ca-41d8-9513-a82bc73b7853]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf206693-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:f8:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777128, 'reachable_time': 15415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284999, 'error': None, 'target': 'ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.659 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[adc39ff9-0060-431e-b6d0-b4d0f7ccdaf5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febd:f810'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 777128, 'tstamp': 777128}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285000, 'error': None, 'target': 'ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.699 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[67ba93dd-d71a-4cff-b24d-6de6ac20c756]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf206693-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:f8:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777128, 'reachable_time': 15415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285016, 'error': None, 'target': 'ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.751 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5c2a71c5-1b86-49a7-a784-169f7b4ba704]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.837 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[44ecab98-3c25-4690-b1b0-097171f23689]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.838 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf206693-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.839 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.839 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf206693-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:36 np0005539564 NetworkManager[48997]: <info>  [1764404796.8426] manager: (tapcf206693-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/281)
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.841 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:36 np0005539564 kernel: tapcf206693-b0: entered promiscuous mode
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.847 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf206693-b0, col_values=(('external_ids', {'iface-id': '5116070e-bd28-42f7-aba2-689a78e19083'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:36 np0005539564 ovn_controller[130591]: 2025-11-29T08:26:36Z|00604|binding|INFO|Releasing lport 5116070e-bd28-42f7-aba2-689a78e19083 from this chassis (sb_readonly=0)
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.848 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.852 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cf206693-b177-47ba-9c63-2ab4e51898ce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cf206693-b177-47ba-9c63-2ab4e51898ce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.853 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[181bdaa9-1960-4e9d-a15f-9229fccf878f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.854 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-cf206693-b177-47ba-9c63-2ab4e51898ce
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/cf206693-b177-47ba-9c63-2ab4e51898ce.pid.haproxy
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID cf206693-b177-47ba-9c63-2ab4e51898ce
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:26:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:36.855 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce', 'env', 'PROCESS_TAG=haproxy-cf206693-b177-47ba-9c63-2ab4e51898ce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cf206693-b177-47ba-9c63-2ab4e51898ce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.863 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.974 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Removed pending event for b86b46f9-7d8f-414f-af87-3822510de392 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.975 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404796.9743655, b86b46f9-7d8f-414f-af87-3822510de392 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.984 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: b86b46f9-7d8f-414f-af87-3822510de392] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:26:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.988 226310 DEBUG nova.compute.manager [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:26:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:26:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:36.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.993 226310 INFO nova.virt.libvirt.driver [-] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Instance rebooted successfully.#033[00m
Nov 29 03:26:36 np0005539564 nova_compute[226295]: 2025-11-29 08:26:36.994 226310 DEBUG nova.compute.manager [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:37 np0005539564 nova_compute[226295]: 2025-11-29 08:26:37.025 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:37 np0005539564 nova_compute[226295]: 2025-11-29 08:26:37.028 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:26:37 np0005539564 nova_compute[226295]: 2025-11-29 08:26:37.077 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: b86b46f9-7d8f-414f-af87-3822510de392] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Nov 29 03:26:37 np0005539564 nova_compute[226295]: 2025-11-29 08:26:37.078 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404796.9840598, b86b46f9-7d8f-414f-af87-3822510de392 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:37 np0005539564 nova_compute[226295]: 2025-11-29 08:26:37.078 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: b86b46f9-7d8f-414f-af87-3822510de392] VM Started (Lifecycle Event)#033[00m
Nov 29 03:26:37 np0005539564 nova_compute[226295]: 2025-11-29 08:26:37.103 226310 DEBUG oslo_concurrency.lockutils [None req-bb280ff3-d5ab-437d-b2c5-a9f7fab0e8a2 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:37 np0005539564 nova_compute[226295]: 2025-11-29 08:26:37.107 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:37 np0005539564 nova_compute[226295]: 2025-11-29 08:26:37.112 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:26:37 np0005539564 podman[285093]: 2025-11-29 08:26:37.295512897 +0000 UTC m=+0.061464082 container create 9d96ec88039e110c3acc39a79eb5393f92226c4527a748242a30f0230c61c19c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:26:37 np0005539564 systemd[1]: Started libpod-conmon-9d96ec88039e110c3acc39a79eb5393f92226c4527a748242a30f0230c61c19c.scope.
Nov 29 03:26:37 np0005539564 podman[285093]: 2025-11-29 08:26:37.266018781 +0000 UTC m=+0.031969976 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:26:37 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:26:37 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93576809bbe5f438a07fcf34f648098d1b20c2c53736d9c4a052b147db84cecd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:37 np0005539564 podman[285093]: 2025-11-29 08:26:37.386281012 +0000 UTC m=+0.152232207 container init 9d96ec88039e110c3acc39a79eb5393f92226c4527a748242a30f0230c61c19c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:26:37 np0005539564 podman[285093]: 2025-11-29 08:26:37.394147414 +0000 UTC m=+0.160098569 container start 9d96ec88039e110c3acc39a79eb5393f92226c4527a748242a30f0230c61c19c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:26:37 np0005539564 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[285108]: [NOTICE]   (285112) : New worker (285114) forked
Nov 29 03:26:37 np0005539564 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[285108]: [NOTICE]   (285112) : Loading success.
Nov 29 03:26:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:37.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:38 np0005539564 nova_compute[226295]: 2025-11-29 08:26:38.301 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:38 np0005539564 nova_compute[226295]: 2025-11-29 08:26:38.477 226310 DEBUG nova.compute.manager [req-eaf84006-e63e-40e2-85b4-8143dcd52ff6 req-bf2aaca3-6609-4b49-99b2-de864088168f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Received event network-vif-unplugged-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:38 np0005539564 nova_compute[226295]: 2025-11-29 08:26:38.477 226310 DEBUG oslo_concurrency.lockutils [req-eaf84006-e63e-40e2-85b4-8143dcd52ff6 req-bf2aaca3-6609-4b49-99b2-de864088168f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "b86b46f9-7d8f-414f-af87-3822510de392-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:38 np0005539564 nova_compute[226295]: 2025-11-29 08:26:38.478 226310 DEBUG oslo_concurrency.lockutils [req-eaf84006-e63e-40e2-85b4-8143dcd52ff6 req-bf2aaca3-6609-4b49-99b2-de864088168f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:38 np0005539564 nova_compute[226295]: 2025-11-29 08:26:38.478 226310 DEBUG oslo_concurrency.lockutils [req-eaf84006-e63e-40e2-85b4-8143dcd52ff6 req-bf2aaca3-6609-4b49-99b2-de864088168f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:38 np0005539564 nova_compute[226295]: 2025-11-29 08:26:38.478 226310 DEBUG nova.compute.manager [req-eaf84006-e63e-40e2-85b4-8143dcd52ff6 req-bf2aaca3-6609-4b49-99b2-de864088168f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] No waiting events found dispatching network-vif-unplugged-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:26:38 np0005539564 nova_compute[226295]: 2025-11-29 08:26:38.478 226310 WARNING nova.compute.manager [req-eaf84006-e63e-40e2-85b4-8143dcd52ff6 req-bf2aaca3-6609-4b49-99b2-de864088168f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Received unexpected event network-vif-unplugged-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def for instance with vm_state active and task_state None.#033[00m
Nov 29 03:26:38 np0005539564 nova_compute[226295]: 2025-11-29 08:26:38.478 226310 DEBUG nova.compute.manager [req-eaf84006-e63e-40e2-85b4-8143dcd52ff6 req-bf2aaca3-6609-4b49-99b2-de864088168f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Received event network-vif-plugged-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:38 np0005539564 nova_compute[226295]: 2025-11-29 08:26:38.479 226310 DEBUG oslo_concurrency.lockutils [req-eaf84006-e63e-40e2-85b4-8143dcd52ff6 req-bf2aaca3-6609-4b49-99b2-de864088168f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "b86b46f9-7d8f-414f-af87-3822510de392-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:38 np0005539564 nova_compute[226295]: 2025-11-29 08:26:38.479 226310 DEBUG oslo_concurrency.lockutils [req-eaf84006-e63e-40e2-85b4-8143dcd52ff6 req-bf2aaca3-6609-4b49-99b2-de864088168f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:38 np0005539564 nova_compute[226295]: 2025-11-29 08:26:38.479 226310 DEBUG oslo_concurrency.lockutils [req-eaf84006-e63e-40e2-85b4-8143dcd52ff6 req-bf2aaca3-6609-4b49-99b2-de864088168f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:38 np0005539564 nova_compute[226295]: 2025-11-29 08:26:38.479 226310 DEBUG nova.compute.manager [req-eaf84006-e63e-40e2-85b4-8143dcd52ff6 req-bf2aaca3-6609-4b49-99b2-de864088168f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] No waiting events found dispatching network-vif-plugged-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:26:38 np0005539564 nova_compute[226295]: 2025-11-29 08:26:38.479 226310 WARNING nova.compute.manager [req-eaf84006-e63e-40e2-85b4-8143dcd52ff6 req-bf2aaca3-6609-4b49-99b2-de864088168f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Received unexpected event network-vif-plugged-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def for instance with vm_state active and task_state None.#033[00m
Nov 29 03:26:38 np0005539564 nova_compute[226295]: 2025-11-29 08:26:38.479 226310 DEBUG nova.compute.manager [req-eaf84006-e63e-40e2-85b4-8143dcd52ff6 req-bf2aaca3-6609-4b49-99b2-de864088168f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Received event network-vif-plugged-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:38 np0005539564 nova_compute[226295]: 2025-11-29 08:26:38.480 226310 DEBUG oslo_concurrency.lockutils [req-eaf84006-e63e-40e2-85b4-8143dcd52ff6 req-bf2aaca3-6609-4b49-99b2-de864088168f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "b86b46f9-7d8f-414f-af87-3822510de392-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:38 np0005539564 nova_compute[226295]: 2025-11-29 08:26:38.480 226310 DEBUG oslo_concurrency.lockutils [req-eaf84006-e63e-40e2-85b4-8143dcd52ff6 req-bf2aaca3-6609-4b49-99b2-de864088168f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:38 np0005539564 nova_compute[226295]: 2025-11-29 08:26:38.480 226310 DEBUG oslo_concurrency.lockutils [req-eaf84006-e63e-40e2-85b4-8143dcd52ff6 req-bf2aaca3-6609-4b49-99b2-de864088168f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:38 np0005539564 nova_compute[226295]: 2025-11-29 08:26:38.480 226310 DEBUG nova.compute.manager [req-eaf84006-e63e-40e2-85b4-8143dcd52ff6 req-bf2aaca3-6609-4b49-99b2-de864088168f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] No waiting events found dispatching network-vif-plugged-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:26:38 np0005539564 nova_compute[226295]: 2025-11-29 08:26:38.480 226310 WARNING nova.compute.manager [req-eaf84006-e63e-40e2-85b4-8143dcd52ff6 req-bf2aaca3-6609-4b49-99b2-de864088168f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Received unexpected event network-vif-plugged-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def for instance with vm_state active and task_state None.#033[00m
Nov 29 03:26:38 np0005539564 nova_compute[226295]: 2025-11-29 08:26:38.480 226310 DEBUG nova.compute.manager [req-eaf84006-e63e-40e2-85b4-8143dcd52ff6 req-bf2aaca3-6609-4b49-99b2-de864088168f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Received event network-vif-plugged-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:38 np0005539564 nova_compute[226295]: 2025-11-29 08:26:38.481 226310 DEBUG oslo_concurrency.lockutils [req-eaf84006-e63e-40e2-85b4-8143dcd52ff6 req-bf2aaca3-6609-4b49-99b2-de864088168f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "b86b46f9-7d8f-414f-af87-3822510de392-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:38 np0005539564 nova_compute[226295]: 2025-11-29 08:26:38.481 226310 DEBUG oslo_concurrency.lockutils [req-eaf84006-e63e-40e2-85b4-8143dcd52ff6 req-bf2aaca3-6609-4b49-99b2-de864088168f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:38 np0005539564 nova_compute[226295]: 2025-11-29 08:26:38.481 226310 DEBUG oslo_concurrency.lockutils [req-eaf84006-e63e-40e2-85b4-8143dcd52ff6 req-bf2aaca3-6609-4b49-99b2-de864088168f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:38 np0005539564 nova_compute[226295]: 2025-11-29 08:26:38.481 226310 DEBUG nova.compute.manager [req-eaf84006-e63e-40e2-85b4-8143dcd52ff6 req-bf2aaca3-6609-4b49-99b2-de864088168f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] No waiting events found dispatching network-vif-plugged-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:26:38 np0005539564 nova_compute[226295]: 2025-11-29 08:26:38.481 226310 WARNING nova.compute.manager [req-eaf84006-e63e-40e2-85b4-8143dcd52ff6 req-bf2aaca3-6609-4b49-99b2-de864088168f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Received unexpected event network-vif-plugged-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def for instance with vm_state active and task_state None.#033[00m
Nov 29 03:26:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e351 e351: 3 total, 3 up, 3 in
Nov 29 03:26:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:38 np0005539564 nova_compute[226295]: 2025-11-29 08:26:38.739 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:38.740 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:26:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:38.742 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:26:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:26:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:38.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:26:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:39.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:40.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:41 np0005539564 nova_compute[226295]: 2025-11-29 08:26:41.291 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:41.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:42.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:43 np0005539564 nova_compute[226295]: 2025-11-29 08:26:43.353 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:43.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:44.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:45.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:26:45.744 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:46 np0005539564 nova_compute[226295]: 2025-11-29 08:26:46.293 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e352 e352: 3 total, 3 up, 3 in
Nov 29 03:26:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:46.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:47.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:48 np0005539564 nova_compute[226295]: 2025-11-29 08:26:48.357 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #109. Immutable memtables: 0.
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:26:48.536722) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 109
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404808536781, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1058, "num_deletes": 252, "total_data_size": 1982409, "memory_usage": 2013168, "flush_reason": "Manual Compaction"}
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #110: started
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404808561636, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 110, "file_size": 907975, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56122, "largest_seqno": 57174, "table_properties": {"data_size": 903802, "index_size": 1761, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 11203, "raw_average_key_size": 21, "raw_value_size": 894787, "raw_average_value_size": 1714, "num_data_blocks": 76, "num_entries": 522, "num_filter_entries": 522, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404742, "oldest_key_time": 1764404742, "file_creation_time": 1764404808, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 24948 microseconds, and 4623 cpu microseconds.
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:26:48.561674) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #110: 907975 bytes OK
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:26:48.561698) [db/memtable_list.cc:519] [default] Level-0 commit table #110 started
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:26:48.566345) [db/memtable_list.cc:722] [default] Level-0 commit table #110: memtable #1 done
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:26:48.566356) EVENT_LOG_v1 {"time_micros": 1764404808566353, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:26:48.566372) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 1977175, prev total WAL file size 1977175, number of live WAL files 2.
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000106.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:26:48.566959) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373634' seq:72057594037927935, type:22 .. '6D6772737461740032303135' seq:0, type:0; will stop at (end)
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [110(886KB)], [108(12MB)]
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404808567003, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [110], "files_L6": [108], "score": -1, "input_data_size": 14222637, "oldest_snapshot_seqno": -1}
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:48 np0005539564 nova_compute[226295]: 2025-11-29 08:26:48.593 226310 DEBUG nova.compute.manager [req-641bf7ed-0b1a-421b-816f-cecaaa2697d3 req-175f2177-04da-4dfc-a08e-f0d13d80033c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Received event network-changed-f095bbfd-d901-4dd4-8831-72dab1104494 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:48 np0005539564 nova_compute[226295]: 2025-11-29 08:26:48.593 226310 DEBUG nova.compute.manager [req-641bf7ed-0b1a-421b-816f-cecaaa2697d3 req-175f2177-04da-4dfc-a08e-f0d13d80033c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Refreshing instance network info cache due to event network-changed-f095bbfd-d901-4dd4-8831-72dab1104494. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:26:48 np0005539564 nova_compute[226295]: 2025-11-29 08:26:48.594 226310 DEBUG oslo_concurrency.lockutils [req-641bf7ed-0b1a-421b-816f-cecaaa2697d3 req-175f2177-04da-4dfc-a08e-f0d13d80033c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:48 np0005539564 nova_compute[226295]: 2025-11-29 08:26:48.594 226310 DEBUG oslo_concurrency.lockutils [req-641bf7ed-0b1a-421b-816f-cecaaa2697d3 req-175f2177-04da-4dfc-a08e-f0d13d80033c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:48 np0005539564 nova_compute[226295]: 2025-11-29 08:26:48.594 226310 DEBUG nova.network.neutron [req-641bf7ed-0b1a-421b-816f-cecaaa2697d3 req-175f2177-04da-4dfc-a08e-f0d13d80033c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Refreshing network info cache for port f095bbfd-d901-4dd4-8831-72dab1104494 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #111: 8639 keys, 10768248 bytes, temperature: kUnknown
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404808718977, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 111, "file_size": 10768248, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10712925, "index_size": 32619, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21637, "raw_key_size": 225293, "raw_average_key_size": 26, "raw_value_size": 10561418, "raw_average_value_size": 1222, "num_data_blocks": 1266, "num_entries": 8639, "num_filter_entries": 8639, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764404808, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 111, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:26:48.719370) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 10768248 bytes
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:26:48.721451) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 93.5 rd, 70.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 12.7 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(27.5) write-amplify(11.9) OK, records in: 9136, records dropped: 497 output_compression: NoCompression
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:26:48.721470) EVENT_LOG_v1 {"time_micros": 1764404808721460, "job": 68, "event": "compaction_finished", "compaction_time_micros": 152113, "compaction_time_cpu_micros": 24564, "output_level": 6, "num_output_files": 1, "total_output_size": 10768248, "num_input_records": 9136, "num_output_records": 8639, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404808722139, "job": 68, "event": "table_file_deletion", "file_number": 110}
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000108.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404808724300, "job": 68, "event": "table_file_deletion", "file_number": 108}
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:26:48.566883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:26:48.724353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:26:48.724359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:26:48.724360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:26:48.724362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:26:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:26:48.724363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:26:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:49.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:49.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:51.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:51 np0005539564 ovn_controller[130591]: 2025-11-29T08:26:51Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:da:92:75 10.100.0.14
Nov 29 03:26:51 np0005539564 nova_compute[226295]: 2025-11-29 08:26:51.296 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:51.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:52 np0005539564 nova_compute[226295]: 2025-11-29 08:26:52.502 226310 DEBUG nova.network.neutron [req-641bf7ed-0b1a-421b-816f-cecaaa2697d3 req-175f2177-04da-4dfc-a08e-f0d13d80033c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Updated VIF entry in instance network info cache for port f095bbfd-d901-4dd4-8831-72dab1104494. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:26:52 np0005539564 nova_compute[226295]: 2025-11-29 08:26:52.503 226310 DEBUG nova.network.neutron [req-641bf7ed-0b1a-421b-816f-cecaaa2697d3 req-175f2177-04da-4dfc-a08e-f0d13d80033c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Updating instance_info_cache with network_info: [{"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:52 np0005539564 podman[285125]: 2025-11-29 08:26:52.536030235 +0000 UTC m=+0.074465144 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:26:52 np0005539564 nova_compute[226295]: 2025-11-29 08:26:52.539 226310 DEBUG oslo_concurrency.lockutils [req-641bf7ed-0b1a-421b-816f-cecaaa2697d3 req-175f2177-04da-4dfc-a08e-f0d13d80033c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:52 np0005539564 podman[285124]: 2025-11-29 08:26:52.554150685 +0000 UTC m=+0.095615766 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:26:52 np0005539564 podman[285123]: 2025-11-29 08:26:52.585223905 +0000 UTC m=+0.126576273 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:26:52 np0005539564 ceph-osd[79212]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Nov 29 03:26:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:53.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:53 np0005539564 nova_compute[226295]: 2025-11-29 08:26:53.358 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e353 e353: 3 total, 3 up, 3 in
Nov 29 03:26:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:26:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:53.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:26:54 np0005539564 nova_compute[226295]: 2025-11-29 08:26:54.356 226310 DEBUG oslo_concurrency.lockutils [None req-0493cc8c-9c3d-408b-a866-3b22da277299 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:54 np0005539564 nova_compute[226295]: 2025-11-29 08:26:54.357 226310 DEBUG oslo_concurrency.lockutils [None req-0493cc8c-9c3d-408b-a866-3b22da277299 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:54 np0005539564 nova_compute[226295]: 2025-11-29 08:26:54.806 226310 DEBUG nova.objects.instance [None req-0493cc8c-9c3d-408b-a866-3b22da277299 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'flavor' on Instance uuid 554ea6a4-8de1-41bf-8772-b15e95a7fd05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:55.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:55.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:56 np0005539564 nova_compute[226295]: 2025-11-29 08:26:56.140 226310 DEBUG oslo_concurrency.lockutils [None req-0493cc8c-9c3d-408b-a866-3b22da277299 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 1.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:56 np0005539564 nova_compute[226295]: 2025-11-29 08:26:56.234 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:56 np0005539564 nova_compute[226295]: 2025-11-29 08:26:56.298 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:56 np0005539564 nova_compute[226295]: 2025-11-29 08:26:56.701 226310 DEBUG oslo_concurrency.lockutils [None req-0493cc8c-9c3d-408b-a866-3b22da277299 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:56 np0005539564 nova_compute[226295]: 2025-11-29 08:26:56.702 226310 DEBUG oslo_concurrency.lockutils [None req-0493cc8c-9c3d-408b-a866-3b22da277299 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:56 np0005539564 nova_compute[226295]: 2025-11-29 08:26:56.702 226310 INFO nova.compute.manager [None req-0493cc8c-9c3d-408b-a866-3b22da277299 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Attaching volume ff1e082f-e768-4c5f-850b-5e8ce6b839d1 to /dev/vdb#033[00m
Nov 29 03:26:56 np0005539564 nova_compute[226295]: 2025-11-29 08:26:56.923 226310 DEBUG os_brick.utils [None req-0493cc8c-9c3d-408b-a866-3b22da277299 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:26:56 np0005539564 nova_compute[226295]: 2025-11-29 08:26:56.924 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:56 np0005539564 nova_compute[226295]: 2025-11-29 08:26:56.936 231810 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:56 np0005539564 nova_compute[226295]: 2025-11-29 08:26:56.936 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[b42adfd2-38d0-4c89-8111-d137df9af1e2]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:56 np0005539564 nova_compute[226295]: 2025-11-29 08:26:56.938 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:56 np0005539564 nova_compute[226295]: 2025-11-29 08:26:56.949 231810 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:56 np0005539564 nova_compute[226295]: 2025-11-29 08:26:56.949 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[433d0a23-fd7b-4d4e-8b23-df944c80ee51]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d3b4384eec44', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:56 np0005539564 nova_compute[226295]: 2025-11-29 08:26:56.951 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:56 np0005539564 nova_compute[226295]: 2025-11-29 08:26:56.963 231810 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:56 np0005539564 nova_compute[226295]: 2025-11-29 08:26:56.963 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[17d6e142-732a-40fb-8a0f-dffdd666accd]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:56 np0005539564 nova_compute[226295]: 2025-11-29 08:26:56.966 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[d532edda-c35a-4e68-8a19-41be17141fb4]: (4, '2e858761-3292-4a17-b38f-a169c3064289') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:56 np0005539564 nova_compute[226295]: 2025-11-29 08:26:56.967 226310 DEBUG oslo_concurrency.processutils [None req-0493cc8c-9c3d-408b-a866-3b22da277299 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:57.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:57 np0005539564 nova_compute[226295]: 2025-11-29 08:26:57.015 226310 DEBUG oslo_concurrency.processutils [None req-0493cc8c-9c3d-408b-a866-3b22da277299 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "nvme version" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:57 np0005539564 nova_compute[226295]: 2025-11-29 08:26:57.018 226310 DEBUG os_brick.initiator.connectors.lightos [None req-0493cc8c-9c3d-408b-a866-3b22da277299 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:26:57 np0005539564 nova_compute[226295]: 2025-11-29 08:26:57.019 226310 DEBUG os_brick.initiator.connectors.lightos [None req-0493cc8c-9c3d-408b-a866-3b22da277299 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:26:57 np0005539564 nova_compute[226295]: 2025-11-29 08:26:57.019 226310 DEBUG os_brick.initiator.connectors.lightos [None req-0493cc8c-9c3d-408b-a866-3b22da277299 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:26:57 np0005539564 nova_compute[226295]: 2025-11-29 08:26:57.019 226310 DEBUG os_brick.utils [None req-0493cc8c-9c3d-408b-a866-3b22da277299 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] <== get_connector_properties: return (95ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d3b4384eec44', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2e858761-3292-4a17-b38f-a169c3064289', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:26:57 np0005539564 nova_compute[226295]: 2025-11-29 08:26:57.020 226310 DEBUG nova.virt.block_device [None req-0493cc8c-9c3d-408b-a866-3b22da277299 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Updating existing volume attachment record: 9f8b0e94-930f-4b62-b242-de18df49f4d7 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:26:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:57.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:58 np0005539564 nova_compute[226295]: 2025-11-29 08:26:58.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:58 np0005539564 nova_compute[226295]: 2025-11-29 08:26:58.363 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:26:58 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3349961666' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:26:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:58 np0005539564 nova_compute[226295]: 2025-11-29 08:26:58.943 226310 DEBUG nova.objects.instance [None req-0493cc8c-9c3d-408b-a866-3b22da277299 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'flavor' on Instance uuid 554ea6a4-8de1-41bf-8772-b15e95a7fd05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:58 np0005539564 nova_compute[226295]: 2025-11-29 08:26:58.977 226310 DEBUG nova.virt.libvirt.driver [None req-0493cc8c-9c3d-408b-a866-3b22da277299 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Attempting to attach volume ff1e082f-e768-4c5f-850b-5e8ce6b839d1 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:26:58 np0005539564 nova_compute[226295]: 2025-11-29 08:26:58.993 226310 DEBUG nova.virt.libvirt.guest [None req-0493cc8c-9c3d-408b-a866-3b22da277299 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:26:58 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:26:58 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-ff1e082f-e768-4c5f-850b-5e8ce6b839d1">
Nov 29 03:26:58 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:26:58 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:26:58 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:26:58 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:26:58 np0005539564 nova_compute[226295]:  <auth username="openstack">
Nov 29 03:26:58 np0005539564 nova_compute[226295]:    <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:26:58 np0005539564 nova_compute[226295]:  </auth>
Nov 29 03:26:58 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:26:58 np0005539564 nova_compute[226295]:  <serial>ff1e082f-e768-4c5f-850b-5e8ce6b839d1</serial>
Nov 29 03:26:58 np0005539564 nova_compute[226295]:  <shareable/>
Nov 29 03:26:58 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:26:58 np0005539564 nova_compute[226295]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:26:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:59.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:59 np0005539564 nova_compute[226295]: 2025-11-29 08:26:59.155 226310 DEBUG nova.virt.libvirt.driver [None req-0493cc8c-9c3d-408b-a866-3b22da277299 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:26:59 np0005539564 nova_compute[226295]: 2025-11-29 08:26:59.156 226310 DEBUG nova.virt.libvirt.driver [None req-0493cc8c-9c3d-408b-a866-3b22da277299 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:26:59 np0005539564 nova_compute[226295]: 2025-11-29 08:26:59.156 226310 DEBUG nova.virt.libvirt.driver [None req-0493cc8c-9c3d-408b-a866-3b22da277299 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:26:59 np0005539564 nova_compute[226295]: 2025-11-29 08:26:59.157 226310 DEBUG nova.virt.libvirt.driver [None req-0493cc8c-9c3d-408b-a866-3b22da277299 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No VIF found with MAC fa:16:3e:7b:13:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:26:59 np0005539564 nova_compute[226295]: 2025-11-29 08:26:59.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:59 np0005539564 nova_compute[226295]: 2025-11-29 08:26:59.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:26:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:26:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:59.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:59 np0005539564 nova_compute[226295]: 2025-11-29 08:26:59.655 226310 DEBUG oslo_concurrency.lockutils [None req-0493cc8c-9c3d-408b-a866-3b22da277299 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.954s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:00 np0005539564 nova_compute[226295]: 2025-11-29 08:27:00.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:27:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:01.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:27:01 np0005539564 nova_compute[226295]: 2025-11-29 08:27:01.300 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:01 np0005539564 nova_compute[226295]: 2025-11-29 08:27:01.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:01 np0005539564 nova_compute[226295]: 2025-11-29 08:27:01.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:27:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:01.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:02 np0005539564 nova_compute[226295]: 2025-11-29 08:27:02.215 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:27:02 np0005539564 nova_compute[226295]: 2025-11-29 08:27:02.215 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:27:02 np0005539564 nova_compute[226295]: 2025-11-29 08:27:02.216 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:27:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:03.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:03 np0005539564 nova_compute[226295]: 2025-11-29 08:27:03.365 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:03.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:03.741 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:03.742 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:03.744 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:05.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:05.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:05 np0005539564 nova_compute[226295]: 2025-11-29 08:27:05.978 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Updating instance_info_cache with network_info: [{"id": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "address": "fa:16:3e:da:92:75", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e5e343-ed", "ovs_interfaceid": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:06 np0005539564 nova_compute[226295]: 2025-11-29 08:27:06.009 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:27:06 np0005539564 nova_compute[226295]: 2025-11-29 08:27:06.010 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:27:06 np0005539564 nova_compute[226295]: 2025-11-29 08:27:06.010 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:06 np0005539564 nova_compute[226295]: 2025-11-29 08:27:06.010 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:06 np0005539564 nova_compute[226295]: 2025-11-29 08:27:06.011 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:06 np0005539564 nova_compute[226295]: 2025-11-29 08:27:06.334 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:27:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:07.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:27:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:07.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:07 np0005539564 ceph-mgr[82125]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2945860420
Nov 29 03:27:08 np0005539564 nova_compute[226295]: 2025-11-29 08:27:08.368 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:09.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:09.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:10 np0005539564 nova_compute[226295]: 2025-11-29 08:27:10.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:10 np0005539564 nova_compute[226295]: 2025-11-29 08:27:10.562 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:10 np0005539564 nova_compute[226295]: 2025-11-29 08:27:10.563 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:10 np0005539564 nova_compute[226295]: 2025-11-29 08:27:10.564 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:10 np0005539564 nova_compute[226295]: 2025-11-29 08:27:10.564 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:27:10 np0005539564 nova_compute[226295]: 2025-11-29 08:27:10.565 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:11.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:27:11 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3326238135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:27:11 np0005539564 nova_compute[226295]: 2025-11-29 08:27:11.092 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:11 np0005539564 nova_compute[226295]: 2025-11-29 08:27:11.213 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:27:11 np0005539564 nova_compute[226295]: 2025-11-29 08:27:11.214 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:27:11 np0005539564 nova_compute[226295]: 2025-11-29 08:27:11.214 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:27:11 np0005539564 nova_compute[226295]: 2025-11-29 08:27:11.220 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000a0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:27:11 np0005539564 nova_compute[226295]: 2025-11-29 08:27:11.221 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000a0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:27:11 np0005539564 nova_compute[226295]: 2025-11-29 08:27:11.221 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000a0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:27:11 np0005539564 nova_compute[226295]: 2025-11-29 08:27:11.226 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:27:11 np0005539564 nova_compute[226295]: 2025-11-29 08:27:11.226 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:27:11 np0005539564 nova_compute[226295]: 2025-11-29 08:27:11.226 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:27:11 np0005539564 nova_compute[226295]: 2025-11-29 08:27:11.336 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:11 np0005539564 nova_compute[226295]: 2025-11-29 08:27:11.451 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:27:11 np0005539564 nova_compute[226295]: 2025-11-29 08:27:11.452 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3592MB free_disk=20.556934356689453GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:27:11 np0005539564 nova_compute[226295]: 2025-11-29 08:27:11.452 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:11 np0005539564 nova_compute[226295]: 2025-11-29 08:27:11.452 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:11 np0005539564 nova_compute[226295]: 2025-11-29 08:27:11.535 226310 INFO nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Updating resource usage from migration eb7ab834-06eb-430d-b4a7-c662625ee1a3#033[00m
Nov 29 03:27:11 np0005539564 nova_compute[226295]: 2025-11-29 08:27:11.648 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance c3e98a32-fd92-4873-a060-88aaf76bf1fc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:27:11 np0005539564 nova_compute[226295]: 2025-11-29 08:27:11.648 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance b86b46f9-7d8f-414f-af87-3822510de392 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:27:11 np0005539564 nova_compute[226295]: 2025-11-29 08:27:11.648 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Migration eb7ab834-06eb-430d-b4a7-c662625ee1a3 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Nov 29 03:27:11 np0005539564 nova_compute[226295]: 2025-11-29 08:27:11.648 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:27:11 np0005539564 nova_compute[226295]: 2025-11-29 08:27:11.649 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:27:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:11.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:11 np0005539564 nova_compute[226295]: 2025-11-29 08:27:11.852 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:11 np0005539564 nova_compute[226295]: 2025-11-29 08:27:11.971 226310 DEBUG oslo_concurrency.lockutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:27:11 np0005539564 nova_compute[226295]: 2025-11-29 08:27:11.972 226310 DEBUG oslo_concurrency.lockutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquired lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:27:11 np0005539564 nova_compute[226295]: 2025-11-29 08:27:11.973 226310 DEBUG nova.network.neutron [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:27:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:27:12 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1163314426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:27:12 np0005539564 nova_compute[226295]: 2025-11-29 08:27:12.321 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:12 np0005539564 nova_compute[226295]: 2025-11-29 08:27:12.327 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:27:12 np0005539564 nova_compute[226295]: 2025-11-29 08:27:12.341 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:27:12 np0005539564 nova_compute[226295]: 2025-11-29 08:27:12.422 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:27:12 np0005539564 nova_compute[226295]: 2025-11-29 08:27:12.423 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.971s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:12 np0005539564 nova_compute[226295]: 2025-11-29 08:27:12.424 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:12 np0005539564 nova_compute[226295]: 2025-11-29 08:27:12.425 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:27:12 np0005539564 nova_compute[226295]: 2025-11-29 08:27:12.476 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:27:12 np0005539564 nova_compute[226295]: 2025-11-29 08:27:12.477 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:12 np0005539564 nova_compute[226295]: 2025-11-29 08:27:12.478 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:27:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:13.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:13 np0005539564 nova_compute[226295]: 2025-11-29 08:27:13.404 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:13.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:14 np0005539564 nova_compute[226295]: 2025-11-29 08:27:14.324 226310 DEBUG nova.compute.manager [req-3fdbce47-21e0-4b91-a51b-e6e0fb898167 req-05d8ab7e-15b7-4758-ae57-852348fd3b87 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Received event network-changed-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:14 np0005539564 nova_compute[226295]: 2025-11-29 08:27:14.325 226310 DEBUG nova.compute.manager [req-3fdbce47-21e0-4b91-a51b-e6e0fb898167 req-05d8ab7e-15b7-4758-ae57-852348fd3b87 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Refreshing instance network info cache due to event network-changed-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:27:14 np0005539564 nova_compute[226295]: 2025-11-29 08:27:14.325 226310 DEBUG oslo_concurrency.lockutils [req-3fdbce47-21e0-4b91-a51b-e6e0fb898167 req-05d8ab7e-15b7-4758-ae57-852348fd3b87 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:27:14 np0005539564 nova_compute[226295]: 2025-11-29 08:27:14.325 226310 DEBUG oslo_concurrency.lockutils [req-3fdbce47-21e0-4b91-a51b-e6e0fb898167 req-05d8ab7e-15b7-4758-ae57-852348fd3b87 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:27:14 np0005539564 nova_compute[226295]: 2025-11-29 08:27:14.325 226310 DEBUG nova.network.neutron [req-3fdbce47-21e0-4b91-a51b-e6e0fb898167 req-05d8ab7e-15b7-4758-ae57-852348fd3b87 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Refreshing network info cache for port 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:27:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:15.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:15 np0005539564 nova_compute[226295]: 2025-11-29 08:27:15.421 226310 DEBUG nova.network.neutron [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Updating instance_info_cache with network_info: [{"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:15 np0005539564 nova_compute[226295]: 2025-11-29 08:27:15.459 226310 DEBUG oslo_concurrency.lockutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Releasing lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:27:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:15.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:15 np0005539564 nova_compute[226295]: 2025-11-29 08:27:15.699 226310 DEBUG nova.virt.libvirt.driver [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Nov 29 03:27:15 np0005539564 nova_compute[226295]: 2025-11-29 08:27:15.700 226310 DEBUG nova.virt.libvirt.volume.remotefs [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Creating file /var/lib/nova/instances/554ea6a4-8de1-41bf-8772-b15e95a7fd05/952d39e646254944af9973b20ca2c0a8.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Nov 29 03:27:15 np0005539564 nova_compute[226295]: 2025-11-29 08:27:15.700 226310 DEBUG oslo_concurrency.processutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/554ea6a4-8de1-41bf-8772-b15e95a7fd05/952d39e646254944af9973b20ca2c0a8.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:16 np0005539564 nova_compute[226295]: 2025-11-29 08:27:16.181 226310 DEBUG oslo_concurrency.processutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/554ea6a4-8de1-41bf-8772-b15e95a7fd05/952d39e646254944af9973b20ca2c0a8.tmp" returned: 1 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:16 np0005539564 nova_compute[226295]: 2025-11-29 08:27:16.182 226310 DEBUG oslo_concurrency.processutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/554ea6a4-8de1-41bf-8772-b15e95a7fd05/952d39e646254944af9973b20ca2c0a8.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 03:27:16 np0005539564 nova_compute[226295]: 2025-11-29 08:27:16.182 226310 DEBUG nova.virt.libvirt.volume.remotefs [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Creating directory /var/lib/nova/instances/554ea6a4-8de1-41bf-8772-b15e95a7fd05 on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Nov 29 03:27:16 np0005539564 nova_compute[226295]: 2025-11-29 08:27:16.183 226310 DEBUG oslo_concurrency.processutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/554ea6a4-8de1-41bf-8772-b15e95a7fd05 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:16 np0005539564 nova_compute[226295]: 2025-11-29 08:27:16.338 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:16 np0005539564 nova_compute[226295]: 2025-11-29 08:27:16.432 226310 DEBUG oslo_concurrency.processutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/554ea6a4-8de1-41bf-8772-b15e95a7fd05" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:16 np0005539564 nova_compute[226295]: 2025-11-29 08:27:16.438 226310 DEBUG nova.virt.libvirt.driver [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:27:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:27:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:17.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:27:17 np0005539564 nova_compute[226295]: 2025-11-29 08:27:17.351 226310 DEBUG nova.network.neutron [req-3fdbce47-21e0-4b91-a51b-e6e0fb898167 req-05d8ab7e-15b7-4758-ae57-852348fd3b87 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Updated VIF entry in instance network info cache for port 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:27:17 np0005539564 nova_compute[226295]: 2025-11-29 08:27:17.353 226310 DEBUG nova.network.neutron [req-3fdbce47-21e0-4b91-a51b-e6e0fb898167 req-05d8ab7e-15b7-4758-ae57-852348fd3b87 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Updating instance_info_cache with network_info: [{"id": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "address": "fa:16:3e:da:92:75", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e5e343-ed", "ovs_interfaceid": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:17 np0005539564 nova_compute[226295]: 2025-11-29 08:27:17.405 226310 DEBUG oslo_concurrency.lockutils [req-3fdbce47-21e0-4b91-a51b-e6e0fb898167 req-05d8ab7e-15b7-4758-ae57-852348fd3b87 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:27:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:17.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:18 np0005539564 nova_compute[226295]: 2025-11-29 08:27:18.408 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:18 np0005539564 kernel: tapf095bbfd-d9 (unregistering): left promiscuous mode
Nov 29 03:27:18 np0005539564 NetworkManager[48997]: <info>  [1764404838.8181] device (tapf095bbfd-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:27:18 np0005539564 ovn_controller[130591]: 2025-11-29T08:27:18Z|00605|binding|INFO|Releasing lport f095bbfd-d901-4dd4-8831-72dab1104494 from this chassis (sb_readonly=0)
Nov 29 03:27:18 np0005539564 nova_compute[226295]: 2025-11-29 08:27:18.829 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:18 np0005539564 ovn_controller[130591]: 2025-11-29T08:27:18Z|00606|binding|INFO|Setting lport f095bbfd-d901-4dd4-8831-72dab1104494 down in Southbound
Nov 29 03:27:18 np0005539564 ovn_controller[130591]: 2025-11-29T08:27:18Z|00607|binding|INFO|Removing iface tapf095bbfd-d9 ovn-installed in OVS
Nov 29 03:27:18 np0005539564 nova_compute[226295]: 2025-11-29 08:27:18.832 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:18.846 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:13:85 10.100.0.12'], port_security=['fa:16:3e:7b:13:85 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '554ea6a4-8de1-41bf-8772-b15e95a7fd05', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abbc8daa-d665-4e2f-bf74-9e57db481441', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23450c2eaf4442459dec94c6d29f0412', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6e9e03ca-34d5-466f-8e26-e073c35a802c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6e85a088-d5fe-4b38-8043-a9acee66ccb5, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=f095bbfd-d901-4dd4-8831-72dab1104494) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:27:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:18.847 139780 INFO neutron.agent.ovn.metadata.agent [-] Port f095bbfd-d901-4dd4-8831-72dab1104494 in datapath abbc8daa-d665-4e2f-bf74-9e57db481441 unbound from our chassis#033[00m
Nov 29 03:27:18 np0005539564 nova_compute[226295]: 2025-11-29 08:27:18.850 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:18.850 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network abbc8daa-d665-4e2f-bf74-9e57db481441, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:27:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:18.852 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[357eaa76-deae-4d85-b3f7-b1c66cdc5d4a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:18.853 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441 namespace which is not needed anymore#033[00m
Nov 29 03:27:18 np0005539564 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d000000a2.scope: Deactivated successfully.
Nov 29 03:27:18 np0005539564 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d000000a2.scope: Consumed 18.023s CPU time.
Nov 29 03:27:18 np0005539564 systemd-machined[190128]: Machine qemu-74-instance-000000a2 terminated.
Nov 29 03:27:19 np0005539564 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[284550]: [NOTICE]   (284554) : haproxy version is 2.8.14-c23fe91
Nov 29 03:27:19 np0005539564 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[284550]: [NOTICE]   (284554) : path to executable is /usr/sbin/haproxy
Nov 29 03:27:19 np0005539564 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[284550]: [WARNING]  (284554) : Exiting Master process...
Nov 29 03:27:19 np0005539564 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[284550]: [ALERT]    (284554) : Current worker (284556) exited with code 143 (Terminated)
Nov 29 03:27:19 np0005539564 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[284550]: [WARNING]  (284554) : All workers exited. Exiting... (0)
Nov 29 03:27:19 np0005539564 systemd[1]: libpod-0d7f9342d8766956cc7b3a8237becbb4d1c1a2d2c0f214963157c1f10b4f6e1f.scope: Deactivated successfully.
Nov 29 03:27:19 np0005539564 podman[285279]: 2025-11-29 08:27:19.04089714 +0000 UTC m=+0.062686438 container died 0d7f9342d8766956cc7b3a8237becbb4d1c1a2d2c0f214963157c1f10b4f6e1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:27:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:19.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:19 np0005539564 nova_compute[226295]: 2025-11-29 08:27:19.074 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:19 np0005539564 nova_compute[226295]: 2025-11-29 08:27:19.082 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:19 np0005539564 systemd[1]: var-lib-containers-storage-overlay-87ca7f56b5a170cbd7a16fd1e395d440fcbf7ef390a658a0f981e7b6164422c5-merged.mount: Deactivated successfully.
Nov 29 03:27:19 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0d7f9342d8766956cc7b3a8237becbb4d1c1a2d2c0f214963157c1f10b4f6e1f-userdata-shm.mount: Deactivated successfully.
Nov 29 03:27:19 np0005539564 podman[285279]: 2025-11-29 08:27:19.105244771 +0000 UTC m=+0.127034019 container cleanup 0d7f9342d8766956cc7b3a8237becbb4d1c1a2d2c0f214963157c1f10b4f6e1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:27:19 np0005539564 systemd[1]: libpod-conmon-0d7f9342d8766956cc7b3a8237becbb4d1c1a2d2c0f214963157c1f10b4f6e1f.scope: Deactivated successfully.
Nov 29 03:27:19 np0005539564 podman[285321]: 2025-11-29 08:27:19.190275961 +0000 UTC m=+0.053773556 container remove 0d7f9342d8766956cc7b3a8237becbb4d1c1a2d2c0f214963157c1f10b4f6e1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:27:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:19.200 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[09465445-5005-4bc1-a226-f58c4c31bf86]: (4, ('Sat Nov 29 08:27:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441 (0d7f9342d8766956cc7b3a8237becbb4d1c1a2d2c0f214963157c1f10b4f6e1f)\n0d7f9342d8766956cc7b3a8237becbb4d1c1a2d2c0f214963157c1f10b4f6e1f\nSat Nov 29 08:27:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441 (0d7f9342d8766956cc7b3a8237becbb4d1c1a2d2c0f214963157c1f10b4f6e1f)\n0d7f9342d8766956cc7b3a8237becbb4d1c1a2d2c0f214963157c1f10b4f6e1f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:19.203 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[abb0f3a3-8339-479f-a9d1-48232a9034d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:19.205 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabbc8daa-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:19 np0005539564 nova_compute[226295]: 2025-11-29 08:27:19.208 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:19 np0005539564 kernel: tapabbc8daa-d0: left promiscuous mode
Nov 29 03:27:19 np0005539564 nova_compute[226295]: 2025-11-29 08:27:19.225 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:19.230 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f0f52b70-707d-42be-8694-4b9d7f64f2c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:19.249 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[dc61803a-da59-46c2-afbe-159b4bc87bce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:19.250 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[255a978f-f3f6-41e2-8429-4e85a94f1542]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:19.267 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[460bf87b-08f5-4b68-9919-1fffd58dbe22]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 774735, 'reachable_time': 28762, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285341, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:19.270 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:27:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:19.270 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[e297242f-2a14-45a9-9412-3b2412149ac0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:19 np0005539564 systemd[1]: run-netns-ovnmeta\x2dabbc8daa\x2dd665\x2d4e2f\x2dbf74\x2d9e57db481441.mount: Deactivated successfully.
Nov 29 03:27:19 np0005539564 nova_compute[226295]: 2025-11-29 08:27:19.458 226310 INFO nova.virt.libvirt.driver [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:27:19 np0005539564 nova_compute[226295]: 2025-11-29 08:27:19.465 226310 INFO nova.virt.libvirt.driver [-] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Instance destroyed successfully.#033[00m
Nov 29 03:27:19 np0005539564 nova_compute[226295]: 2025-11-29 08:27:19.466 226310 DEBUG nova.virt.libvirt.vif [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-0',id=162,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC2U26ZMSkcfI5DQFnyWxH+S3YaQ8SAmf1n52XjS1tNMntu8AbhepbwcUWS7Z4/uA3A5Bve+j7ia9a5dnEqoCJZvLZo58KXp6UbvJn0ceeh5z06l1tL3ON8Wl2km+sS1vg==',key_name='tempest-keypair-1391434303',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:26:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='23450c2eaf4442459dec94c6d29f0412',ramdisk_id='',reservation_id='r-6fx7zmqe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',
image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-1454477111',owner_user_name='tempest-AttachVolumeMultiAttachTest-1454477111-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:27:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4f4d28745dd46e586642c84c051db39',uuid=554ea6a4-8de1-41bf-8772-b15e95a7fd05,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "vif_mac": "fa:16:3e:7b:13:85"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:27:19 np0005539564 nova_compute[226295]: 2025-11-29 08:27:19.466 226310 DEBUG nova.network.os_vif_util [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converting VIF {"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "vif_mac": "fa:16:3e:7b:13:85"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:27:19 np0005539564 nova_compute[226295]: 2025-11-29 08:27:19.467 226310 DEBUG nova.network.os_vif_util [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:13:85,bridge_name='br-int',has_traffic_filtering=True,id=f095bbfd-d901-4dd4-8831-72dab1104494,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf095bbfd-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:27:19 np0005539564 nova_compute[226295]: 2025-11-29 08:27:19.467 226310 DEBUG os_vif [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:13:85,bridge_name='br-int',has_traffic_filtering=True,id=f095bbfd-d901-4dd4-8831-72dab1104494,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf095bbfd-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:27:19 np0005539564 nova_compute[226295]: 2025-11-29 08:27:19.469 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:19 np0005539564 nova_compute[226295]: 2025-11-29 08:27:19.469 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf095bbfd-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:19 np0005539564 nova_compute[226295]: 2025-11-29 08:27:19.471 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:19 np0005539564 nova_compute[226295]: 2025-11-29 08:27:19.473 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:27:19 np0005539564 nova_compute[226295]: 2025-11-29 08:27:19.473 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:19 np0005539564 nova_compute[226295]: 2025-11-29 08:27:19.476 226310 INFO os_vif [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:13:85,bridge_name='br-int',has_traffic_filtering=True,id=f095bbfd-d901-4dd4-8831-72dab1104494,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf095bbfd-d9')#033[00m
Nov 29 03:27:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:19.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:19 np0005539564 nova_compute[226295]: 2025-11-29 08:27:19.748 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:19.750 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:27:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:19.751 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:27:19 np0005539564 nova_compute[226295]: 2025-11-29 08:27:19.816 226310 DEBUG nova.virt.libvirt.driver [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:27:19 np0005539564 nova_compute[226295]: 2025-11-29 08:27:19.816 226310 DEBUG nova.virt.libvirt.driver [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:27:19 np0005539564 nova_compute[226295]: 2025-11-29 08:27:19.817 226310 DEBUG nova.virt.libvirt.driver [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:27:19 np0005539564 nova_compute[226295]: 2025-11-29 08:27:19.827 226310 DEBUG nova.compute.manager [req-0eec042c-dd59-464b-ba14-68926d47cc61 req-ecd8a12c-08e7-450c-b920-f759570536ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Received event network-vif-unplugged-f095bbfd-d901-4dd4-8831-72dab1104494 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:19 np0005539564 nova_compute[226295]: 2025-11-29 08:27:19.828 226310 DEBUG oslo_concurrency.lockutils [req-0eec042c-dd59-464b-ba14-68926d47cc61 req-ecd8a12c-08e7-450c-b920-f759570536ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:19 np0005539564 nova_compute[226295]: 2025-11-29 08:27:19.829 226310 DEBUG oslo_concurrency.lockutils [req-0eec042c-dd59-464b-ba14-68926d47cc61 req-ecd8a12c-08e7-450c-b920-f759570536ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:19 np0005539564 nova_compute[226295]: 2025-11-29 08:27:19.829 226310 DEBUG oslo_concurrency.lockutils [req-0eec042c-dd59-464b-ba14-68926d47cc61 req-ecd8a12c-08e7-450c-b920-f759570536ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:19 np0005539564 nova_compute[226295]: 2025-11-29 08:27:19.830 226310 DEBUG nova.compute.manager [req-0eec042c-dd59-464b-ba14-68926d47cc61 req-ecd8a12c-08e7-450c-b920-f759570536ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] No waiting events found dispatching network-vif-unplugged-f095bbfd-d901-4dd4-8831-72dab1104494 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:19 np0005539564 nova_compute[226295]: 2025-11-29 08:27:19.831 226310 WARNING nova.compute.manager [req-0eec042c-dd59-464b-ba14-68926d47cc61 req-ecd8a12c-08e7-450c-b920-f759570536ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Received unexpected event network-vif-unplugged-f095bbfd-d901-4dd4-8831-72dab1104494 for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:27:20 np0005539564 nova_compute[226295]: 2025-11-29 08:27:20.738 226310 DEBUG neutronclient.v2_0.client [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port f095bbfd-d901-4dd4-8831-72dab1104494 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:27:20 np0005539564 nova_compute[226295]: 2025-11-29 08:27:20.892 226310 DEBUG oslo_concurrency.lockutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:20 np0005539564 nova_compute[226295]: 2025-11-29 08:27:20.893 226310 DEBUG oslo_concurrency.lockutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:20 np0005539564 nova_compute[226295]: 2025-11-29 08:27:20.894 226310 DEBUG oslo_concurrency.lockutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:21.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:21 np0005539564 nova_compute[226295]: 2025-11-29 08:27:21.533 226310 DEBUG nova.compute.manager [req-01b7622f-b516-4fa7-b451-6c0b95b753f0 req-d8d6c719-b8e5-40a0-b8d7-cdfe22d2089c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Received event network-changed-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:21 np0005539564 nova_compute[226295]: 2025-11-29 08:27:21.533 226310 DEBUG nova.compute.manager [req-01b7622f-b516-4fa7-b451-6c0b95b753f0 req-d8d6c719-b8e5-40a0-b8d7-cdfe22d2089c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Refreshing instance network info cache due to event network-changed-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:27:21 np0005539564 nova_compute[226295]: 2025-11-29 08:27:21.534 226310 DEBUG oslo_concurrency.lockutils [req-01b7622f-b516-4fa7-b451-6c0b95b753f0 req-d8d6c719-b8e5-40a0-b8d7-cdfe22d2089c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:27:21 np0005539564 nova_compute[226295]: 2025-11-29 08:27:21.534 226310 DEBUG oslo_concurrency.lockutils [req-01b7622f-b516-4fa7-b451-6c0b95b753f0 req-d8d6c719-b8e5-40a0-b8d7-cdfe22d2089c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:27:21 np0005539564 nova_compute[226295]: 2025-11-29 08:27:21.535 226310 DEBUG nova.network.neutron [req-01b7622f-b516-4fa7-b451-6c0b95b753f0 req-d8d6c719-b8e5-40a0-b8d7-cdfe22d2089c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Refreshing network info cache for port 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:27:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:21.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:21.754 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:22 np0005539564 nova_compute[226295]: 2025-11-29 08:27:22.278 226310 DEBUG nova.compute.manager [req-0a262ee5-1244-4b48-a6b1-34edecaef47c req-4f828b4c-d745-406a-97c1-4d8949d6a479 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Received event network-vif-plugged-f095bbfd-d901-4dd4-8831-72dab1104494 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:22 np0005539564 nova_compute[226295]: 2025-11-29 08:27:22.279 226310 DEBUG oslo_concurrency.lockutils [req-0a262ee5-1244-4b48-a6b1-34edecaef47c req-4f828b4c-d745-406a-97c1-4d8949d6a479 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:22 np0005539564 nova_compute[226295]: 2025-11-29 08:27:22.279 226310 DEBUG oslo_concurrency.lockutils [req-0a262ee5-1244-4b48-a6b1-34edecaef47c req-4f828b4c-d745-406a-97c1-4d8949d6a479 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:22 np0005539564 nova_compute[226295]: 2025-11-29 08:27:22.279 226310 DEBUG oslo_concurrency.lockutils [req-0a262ee5-1244-4b48-a6b1-34edecaef47c req-4f828b4c-d745-406a-97c1-4d8949d6a479 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:22 np0005539564 nova_compute[226295]: 2025-11-29 08:27:22.280 226310 DEBUG nova.compute.manager [req-0a262ee5-1244-4b48-a6b1-34edecaef47c req-4f828b4c-d745-406a-97c1-4d8949d6a479 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] No waiting events found dispatching network-vif-plugged-f095bbfd-d901-4dd4-8831-72dab1104494 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:22 np0005539564 nova_compute[226295]: 2025-11-29 08:27:22.280 226310 WARNING nova.compute.manager [req-0a262ee5-1244-4b48-a6b1-34edecaef47c req-4f828b4c-d745-406a-97c1-4d8949d6a479 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Received unexpected event network-vif-plugged-f095bbfd-d901-4dd4-8831-72dab1104494 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:27:22 np0005539564 nova_compute[226295]: 2025-11-29 08:27:22.857 226310 DEBUG nova.compute.manager [req-26edc2cc-58c4-4aa8-8dbd-ec8daccef9a1 req-ac8472db-bcb1-46ee-857f-ed530e51e9c9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Received event network-changed-f095bbfd-d901-4dd4-8831-72dab1104494 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:22 np0005539564 nova_compute[226295]: 2025-11-29 08:27:22.857 226310 DEBUG nova.compute.manager [req-26edc2cc-58c4-4aa8-8dbd-ec8daccef9a1 req-ac8472db-bcb1-46ee-857f-ed530e51e9c9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Refreshing instance network info cache due to event network-changed-f095bbfd-d901-4dd4-8831-72dab1104494. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:27:22 np0005539564 nova_compute[226295]: 2025-11-29 08:27:22.857 226310 DEBUG oslo_concurrency.lockutils [req-26edc2cc-58c4-4aa8-8dbd-ec8daccef9a1 req-ac8472db-bcb1-46ee-857f-ed530e51e9c9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:27:22 np0005539564 nova_compute[226295]: 2025-11-29 08:27:22.858 226310 DEBUG oslo_concurrency.lockutils [req-26edc2cc-58c4-4aa8-8dbd-ec8daccef9a1 req-ac8472db-bcb1-46ee-857f-ed530e51e9c9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:27:22 np0005539564 nova_compute[226295]: 2025-11-29 08:27:22.858 226310 DEBUG nova.network.neutron [req-26edc2cc-58c4-4aa8-8dbd-ec8daccef9a1 req-ac8472db-bcb1-46ee-857f-ed530e51e9c9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Refreshing network info cache for port f095bbfd-d901-4dd4-8831-72dab1104494 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:27:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:23.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:23 np0005539564 nova_compute[226295]: 2025-11-29 08:27:23.412 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:23 np0005539564 podman[285343]: 2025-11-29 08:27:23.553094522 +0000 UTC m=+0.079232985 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:27:23 np0005539564 podman[285344]: 2025-11-29 08:27:23.553720669 +0000 UTC m=+0.078943426 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:27:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:23 np0005539564 podman[285342]: 2025-11-29 08:27:23.59185008 +0000 UTC m=+0.125647439 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Nov 29 03:27:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:23.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:24 np0005539564 nova_compute[226295]: 2025-11-29 08:27:24.438 226310 DEBUG nova.network.neutron [req-01b7622f-b516-4fa7-b451-6c0b95b753f0 req-d8d6c719-b8e5-40a0-b8d7-cdfe22d2089c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Updated VIF entry in instance network info cache for port 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:27:24 np0005539564 nova_compute[226295]: 2025-11-29 08:27:24.439 226310 DEBUG nova.network.neutron [req-01b7622f-b516-4fa7-b451-6c0b95b753f0 req-d8d6c719-b8e5-40a0-b8d7-cdfe22d2089c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Updating instance_info_cache with network_info: [{"id": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "address": "fa:16:3e:da:92:75", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e5e343-ed", "ovs_interfaceid": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:24 np0005539564 nova_compute[226295]: 2025-11-29 08:27:24.472 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:24 np0005539564 nova_compute[226295]: 2025-11-29 08:27:24.487 226310 DEBUG oslo_concurrency.lockutils [req-01b7622f-b516-4fa7-b451-6c0b95b753f0 req-d8d6c719-b8e5-40a0-b8d7-cdfe22d2089c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-b86b46f9-7d8f-414f-af87-3822510de392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:27:24 np0005539564 nova_compute[226295]: 2025-11-29 08:27:24.561 226310 DEBUG oslo_concurrency.lockutils [None req-f632059d-6264-4e89-9760-b112a184ce3a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "b86b46f9-7d8f-414f-af87-3822510de392" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:24 np0005539564 nova_compute[226295]: 2025-11-29 08:27:24.562 226310 DEBUG oslo_concurrency.lockutils [None req-f632059d-6264-4e89-9760-b112a184ce3a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:24 np0005539564 nova_compute[226295]: 2025-11-29 08:27:24.591 226310 INFO nova.compute.manager [None req-f632059d-6264-4e89-9760-b112a184ce3a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Detaching volume 9a3eb32a-193c-49dc-a1e2-9c3bb4895a1a#033[00m
Nov 29 03:27:24 np0005539564 nova_compute[226295]: 2025-11-29 08:27:24.818 226310 INFO nova.virt.block_device [None req-f632059d-6264-4e89-9760-b112a184ce3a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Attempting to driver detach volume 9a3eb32a-193c-49dc-a1e2-9c3bb4895a1a from mountpoint /dev/vdb#033[00m
Nov 29 03:27:24 np0005539564 nova_compute[226295]: 2025-11-29 08:27:24.830 226310 DEBUG nova.virt.libvirt.driver [None req-f632059d-6264-4e89-9760-b112a184ce3a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Attempting to detach device vdb from instance b86b46f9-7d8f-414f-af87-3822510de392 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:27:24 np0005539564 nova_compute[226295]: 2025-11-29 08:27:24.831 226310 DEBUG nova.virt.libvirt.guest [None req-f632059d-6264-4e89-9760-b112a184ce3a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:27:24 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:27:24 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-9a3eb32a-193c-49dc-a1e2-9c3bb4895a1a">
Nov 29 03:27:24 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:27:24 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:27:24 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:27:24 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:27:24 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:27:24 np0005539564 nova_compute[226295]:  <serial>9a3eb32a-193c-49dc-a1e2-9c3bb4895a1a</serial>
Nov 29 03:27:24 np0005539564 nova_compute[226295]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Nov 29 03:27:24 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:27:24 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:27:24 np0005539564 nova_compute[226295]: 2025-11-29 08:27:24.842 226310 INFO nova.virt.libvirt.driver [None req-f632059d-6264-4e89-9760-b112a184ce3a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Successfully detached device vdb from instance b86b46f9-7d8f-414f-af87-3822510de392 from the persistent domain config.#033[00m
Nov 29 03:27:24 np0005539564 nova_compute[226295]: 2025-11-29 08:27:24.842 226310 DEBUG nova.virt.libvirt.driver [None req-f632059d-6264-4e89-9760-b112a184ce3a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance b86b46f9-7d8f-414f-af87-3822510de392 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:27:24 np0005539564 nova_compute[226295]: 2025-11-29 08:27:24.843 226310 DEBUG nova.virt.libvirt.guest [None req-f632059d-6264-4e89-9760-b112a184ce3a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:27:24 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:27:24 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-9a3eb32a-193c-49dc-a1e2-9c3bb4895a1a">
Nov 29 03:27:24 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:27:24 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:27:24 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:27:24 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:27:24 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:27:24 np0005539564 nova_compute[226295]:  <serial>9a3eb32a-193c-49dc-a1e2-9c3bb4895a1a</serial>
Nov 29 03:27:24 np0005539564 nova_compute[226295]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Nov 29 03:27:24 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:27:24 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:27:24 np0005539564 nova_compute[226295]: 2025-11-29 08:27:24.908 226310 DEBUG nova.virt.libvirt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Received event <DeviceRemovedEvent: 1764404844.907863, b86b46f9-7d8f-414f-af87-3822510de392 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:27:24 np0005539564 nova_compute[226295]: 2025-11-29 08:27:24.911 226310 DEBUG nova.virt.libvirt.driver [None req-f632059d-6264-4e89-9760-b112a184ce3a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance b86b46f9-7d8f-414f-af87-3822510de392 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:27:24 np0005539564 nova_compute[226295]: 2025-11-29 08:27:24.915 226310 INFO nova.virt.libvirt.driver [None req-f632059d-6264-4e89-9760-b112a184ce3a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Successfully detached device vdb from instance b86b46f9-7d8f-414f-af87-3822510de392 from the live domain config.#033[00m
Nov 29 03:27:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:25.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:25 np0005539564 nova_compute[226295]: 2025-11-29 08:27:25.269 226310 DEBUG nova.objects.instance [None req-f632059d-6264-4e89-9760-b112a184ce3a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lazy-loading 'flavor' on Instance uuid b86b46f9-7d8f-414f-af87-3822510de392 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:25 np0005539564 nova_compute[226295]: 2025-11-29 08:27:25.319 226310 DEBUG oslo_concurrency.lockutils [None req-f632059d-6264-4e89-9760-b112a184ce3a 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:25 np0005539564 nova_compute[226295]: 2025-11-29 08:27:25.631 226310 DEBUG nova.network.neutron [req-26edc2cc-58c4-4aa8-8dbd-ec8daccef9a1 req-ac8472db-bcb1-46ee-857f-ed530e51e9c9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Updated VIF entry in instance network info cache for port f095bbfd-d901-4dd4-8831-72dab1104494. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:27:25 np0005539564 nova_compute[226295]: 2025-11-29 08:27:25.631 226310 DEBUG nova.network.neutron [req-26edc2cc-58c4-4aa8-8dbd-ec8daccef9a1 req-ac8472db-bcb1-46ee-857f-ed530e51e9c9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Updating instance_info_cache with network_info: [{"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:25 np0005539564 nova_compute[226295]: 2025-11-29 08:27:25.652 226310 DEBUG oslo_concurrency.lockutils [req-26edc2cc-58c4-4aa8-8dbd-ec8daccef9a1 req-ac8472db-bcb1-46ee-857f-ed530e51e9c9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:27:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:25.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:27:26 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2019255988' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:27:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:27.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:27:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:27.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:27:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e354 e354: 3 total, 3 up, 3 in
Nov 29 03:27:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:27:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/982805940' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:27:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:27:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/982805940' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:27:28 np0005539564 nova_compute[226295]: 2025-11-29 08:27:28.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:28 np0005539564 nova_compute[226295]: 2025-11-29 08:27:28.415 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:29.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:29 np0005539564 nova_compute[226295]: 2025-11-29 08:27:29.475 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:29.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:30 np0005539564 nova_compute[226295]: 2025-11-29 08:27:30.367 226310 DEBUG nova.compute.manager [req-3c8981b9-a814-4ae2-87bd-ad2057cc8ff4 req-82764bc9-ab5d-448d-8688-06c6fb9b286a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Received event network-vif-plugged-f095bbfd-d901-4dd4-8831-72dab1104494 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:30 np0005539564 nova_compute[226295]: 2025-11-29 08:27:30.367 226310 DEBUG oslo_concurrency.lockutils [req-3c8981b9-a814-4ae2-87bd-ad2057cc8ff4 req-82764bc9-ab5d-448d-8688-06c6fb9b286a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:30 np0005539564 nova_compute[226295]: 2025-11-29 08:27:30.367 226310 DEBUG oslo_concurrency.lockutils [req-3c8981b9-a814-4ae2-87bd-ad2057cc8ff4 req-82764bc9-ab5d-448d-8688-06c6fb9b286a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:30 np0005539564 nova_compute[226295]: 2025-11-29 08:27:30.368 226310 DEBUG oslo_concurrency.lockutils [req-3c8981b9-a814-4ae2-87bd-ad2057cc8ff4 req-82764bc9-ab5d-448d-8688-06c6fb9b286a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:30 np0005539564 nova_compute[226295]: 2025-11-29 08:27:30.368 226310 DEBUG nova.compute.manager [req-3c8981b9-a814-4ae2-87bd-ad2057cc8ff4 req-82764bc9-ab5d-448d-8688-06c6fb9b286a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] No waiting events found dispatching network-vif-plugged-f095bbfd-d901-4dd4-8831-72dab1104494 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:30 np0005539564 nova_compute[226295]: 2025-11-29 08:27:30.368 226310 WARNING nova.compute.manager [req-3c8981b9-a814-4ae2-87bd-ad2057cc8ff4 req-82764bc9-ab5d-448d-8688-06c6fb9b286a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Received unexpected event network-vif-plugged-f095bbfd-d901-4dd4-8831-72dab1104494 for instance with vm_state active and task_state resize_finish.#033[00m
Nov 29 03:27:30 np0005539564 nova_compute[226295]: 2025-11-29 08:27:30.578 226310 DEBUG oslo_concurrency.lockutils [None req-33a3f74f-5dd1-4481-bd7c-5dd5d66ede47 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "b86b46f9-7d8f-414f-af87-3822510de392" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:30 np0005539564 nova_compute[226295]: 2025-11-29 08:27:30.579 226310 DEBUG oslo_concurrency.lockutils [None req-33a3f74f-5dd1-4481-bd7c-5dd5d66ede47 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:30 np0005539564 nova_compute[226295]: 2025-11-29 08:27:30.580 226310 DEBUG oslo_concurrency.lockutils [None req-33a3f74f-5dd1-4481-bd7c-5dd5d66ede47 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "b86b46f9-7d8f-414f-af87-3822510de392-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:30 np0005539564 nova_compute[226295]: 2025-11-29 08:27:30.580 226310 DEBUG oslo_concurrency.lockutils [None req-33a3f74f-5dd1-4481-bd7c-5dd5d66ede47 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:30 np0005539564 nova_compute[226295]: 2025-11-29 08:27:30.581 226310 DEBUG oslo_concurrency.lockutils [None req-33a3f74f-5dd1-4481-bd7c-5dd5d66ede47 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:30 np0005539564 nova_compute[226295]: 2025-11-29 08:27:30.583 226310 INFO nova.compute.manager [None req-33a3f74f-5dd1-4481-bd7c-5dd5d66ede47 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Terminating instance#033[00m
Nov 29 03:27:30 np0005539564 nova_compute[226295]: 2025-11-29 08:27:30.585 226310 DEBUG nova.compute.manager [None req-33a3f74f-5dd1-4481-bd7c-5dd5d66ede47 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:27:30 np0005539564 kernel: tap54e5e343-ed (unregistering): left promiscuous mode
Nov 29 03:27:30 np0005539564 NetworkManager[48997]: <info>  [1764404850.6462] device (tap54e5e343-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:27:30 np0005539564 ovn_controller[130591]: 2025-11-29T08:27:30Z|00608|binding|INFO|Releasing lport 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def from this chassis (sb_readonly=0)
Nov 29 03:27:30 np0005539564 ovn_controller[130591]: 2025-11-29T08:27:30Z|00609|binding|INFO|Setting lport 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def down in Southbound
Nov 29 03:27:30 np0005539564 nova_compute[226295]: 2025-11-29 08:27:30.659 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:30 np0005539564 ovn_controller[130591]: 2025-11-29T08:27:30Z|00610|binding|INFO|Removing iface tap54e5e343-ed ovn-installed in OVS
Nov 29 03:27:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:30.669 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:92:75 10.100.0.14'], port_security=['fa:16:3e:da:92:75 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b86b46f9-7d8f-414f-af87-3822510de392', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf206693-b177-47ba-9c63-2ab4e51898ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f9a9decdabb1480da8f7d039e8b3d414', 'neutron:revision_number': '8', 'neutron:security_group_ids': '140d3240-dbee-4ff7-b341-40a578af5b67', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f140b86-0300-440f-be11-680603255cb6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=54e5e343-ed4d-4cf3-9d9f-2ae7ec672def) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:27:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:30.670 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 54e5e343-ed4d-4cf3-9d9f-2ae7ec672def in datapath cf206693-b177-47ba-9c63-2ab4e51898ce unbound from our chassis#033[00m
Nov 29 03:27:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:30.672 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cf206693-b177-47ba-9c63-2ab4e51898ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:27:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:30.673 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[99bcb14b-cfaf-4673-819b-cd96c01e0c25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:30.674 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce namespace which is not needed anymore#033[00m
Nov 29 03:27:30 np0005539564 nova_compute[226295]: 2025-11-29 08:27:30.675 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:30 np0005539564 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d000000a0.scope: Deactivated successfully.
Nov 29 03:27:30 np0005539564 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d000000a0.scope: Consumed 16.724s CPU time.
Nov 29 03:27:30 np0005539564 systemd-machined[190128]: Machine qemu-75-instance-000000a0 terminated.
Nov 29 03:27:30 np0005539564 nova_compute[226295]: 2025-11-29 08:27:30.851 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:30 np0005539564 nova_compute[226295]: 2025-11-29 08:27:30.859 226310 INFO nova.virt.libvirt.driver [-] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Instance destroyed successfully.#033[00m
Nov 29 03:27:30 np0005539564 nova_compute[226295]: 2025-11-29 08:27:30.859 226310 DEBUG nova.objects.instance [None req-33a3f74f-5dd1-4481-bd7c-5dd5d66ede47 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lazy-loading 'resources' on Instance uuid b86b46f9-7d8f-414f-af87-3822510de392 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:30 np0005539564 nova_compute[226295]: 2025-11-29 08:27:30.899 226310 DEBUG nova.virt.libvirt.vif [None req-33a3f74f-5dd1-4481-bd7c-5dd5d66ede47 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:25:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-821850339',display_name='tempest-TestMinimumBasicScenario-server-821850339',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-821850339',id=160,image_ref='d9469d26-d189-44ee-a659-1398ee5e0da2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ7gdL4PKUc9tosM7g28DfdZ6SzPuAj8/oAyNdJYivPDpnhrJbV+U75MujLhc3BhkXGqLXqZ+FF7kcJucYEQCJU7T523I/wegT3xL9AQVlLwpt4RmAyZ0AklLUZTVxh90g==',key_name='tempest-TestMinimumBasicScenario-138660977',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:25:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f9a9decdabb1480da8f7d039e8b3d414',ramdisk_id='',reservation_id='r-dl7txott',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d9469d26-d189-44ee-a659-1398ee5e0da2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-1484268516',owner_user_name='tempest-TestMinimumBasicScenario-1484268516-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:26:37Z,user_data=None,user_id='0cbb3ac39ebd4876ad23f2a6d1c50166',uuid=b86b46f9-7d8f-414f-af87-3822510de392,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "address": "fa:16:3e:da:92:75", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e5e343-ed", "ovs_interfaceid": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:27:30 np0005539564 nova_compute[226295]: 2025-11-29 08:27:30.900 226310 DEBUG nova.network.os_vif_util [None req-33a3f74f-5dd1-4481-bd7c-5dd5d66ede47 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Converting VIF {"id": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "address": "fa:16:3e:da:92:75", "network": {"id": "cf206693-b177-47ba-9c63-2ab4e51898ce", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1661358754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9a9decdabb1480da8f7d039e8b3d414", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e5e343-ed", "ovs_interfaceid": "54e5e343-ed4d-4cf3-9d9f-2ae7ec672def", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:27:30 np0005539564 nova_compute[226295]: 2025-11-29 08:27:30.900 226310 DEBUG nova.network.os_vif_util [None req-33a3f74f-5dd1-4481-bd7c-5dd5d66ede47 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:da:92:75,bridge_name='br-int',has_traffic_filtering=True,id=54e5e343-ed4d-4cf3-9d9f-2ae7ec672def,network=Network(cf206693-b177-47ba-9c63-2ab4e51898ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e5e343-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:27:30 np0005539564 nova_compute[226295]: 2025-11-29 08:27:30.901 226310 DEBUG os_vif [None req-33a3f74f-5dd1-4481-bd7c-5dd5d66ede47 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:92:75,bridge_name='br-int',has_traffic_filtering=True,id=54e5e343-ed4d-4cf3-9d9f-2ae7ec672def,network=Network(cf206693-b177-47ba-9c63-2ab4e51898ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e5e343-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:27:30 np0005539564 nova_compute[226295]: 2025-11-29 08:27:30.903 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:30 np0005539564 nova_compute[226295]: 2025-11-29 08:27:30.903 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54e5e343-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:30 np0005539564 nova_compute[226295]: 2025-11-29 08:27:30.905 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:30 np0005539564 nova_compute[226295]: 2025-11-29 08:27:30.907 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:30 np0005539564 nova_compute[226295]: 2025-11-29 08:27:30.910 226310 INFO os_vif [None req-33a3f74f-5dd1-4481-bd7c-5dd5d66ede47 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:92:75,bridge_name='br-int',has_traffic_filtering=True,id=54e5e343-ed4d-4cf3-9d9f-2ae7ec672def,network=Network(cf206693-b177-47ba-9c63-2ab4e51898ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e5e343-ed')#033[00m
Nov 29 03:27:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:31.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:31 np0005539564 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[285108]: [NOTICE]   (285112) : haproxy version is 2.8.14-c23fe91
Nov 29 03:27:31 np0005539564 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[285108]: [NOTICE]   (285112) : path to executable is /usr/sbin/haproxy
Nov 29 03:27:31 np0005539564 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[285108]: [WARNING]  (285112) : Exiting Master process...
Nov 29 03:27:31 np0005539564 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[285108]: [ALERT]    (285112) : Current worker (285114) exited with code 143 (Terminated)
Nov 29 03:27:31 np0005539564 neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce[285108]: [WARNING]  (285112) : All workers exited. Exiting... (0)
Nov 29 03:27:31 np0005539564 systemd[1]: libpod-9d96ec88039e110c3acc39a79eb5393f92226c4527a748242a30f0230c61c19c.scope: Deactivated successfully.
Nov 29 03:27:31 np0005539564 podman[285431]: 2025-11-29 08:27:31.160619716 +0000 UTC m=+0.381534723 container died 9d96ec88039e110c3acc39a79eb5393f92226c4527a748242a30f0230c61c19c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:27:31 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9d96ec88039e110c3acc39a79eb5393f92226c4527a748242a30f0230c61c19c-userdata-shm.mount: Deactivated successfully.
Nov 29 03:27:31 np0005539564 systemd[1]: var-lib-containers-storage-overlay-93576809bbe5f438a07fcf34f648098d1b20c2c53736d9c4a052b147db84cecd-merged.mount: Deactivated successfully.
Nov 29 03:27:31 np0005539564 podman[285431]: 2025-11-29 08:27:31.221159184 +0000 UTC m=+0.442074121 container cleanup 9d96ec88039e110c3acc39a79eb5393f92226c4527a748242a30f0230c61c19c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:27:31 np0005539564 nova_compute[226295]: 2025-11-29 08:27:31.244 226310 DEBUG nova.compute.manager [req-a28590d8-e7b1-413c-b198-1f757390e684 req-36cc0657-003a-4fd8-9bee-3dd62b5a6af2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Received event network-vif-unplugged-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:31 np0005539564 nova_compute[226295]: 2025-11-29 08:27:31.245 226310 DEBUG oslo_concurrency.lockutils [req-a28590d8-e7b1-413c-b198-1f757390e684 req-36cc0657-003a-4fd8-9bee-3dd62b5a6af2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "b86b46f9-7d8f-414f-af87-3822510de392-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:31 np0005539564 nova_compute[226295]: 2025-11-29 08:27:31.246 226310 DEBUG oslo_concurrency.lockutils [req-a28590d8-e7b1-413c-b198-1f757390e684 req-36cc0657-003a-4fd8-9bee-3dd62b5a6af2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:31 np0005539564 nova_compute[226295]: 2025-11-29 08:27:31.246 226310 DEBUG oslo_concurrency.lockutils [req-a28590d8-e7b1-413c-b198-1f757390e684 req-36cc0657-003a-4fd8-9bee-3dd62b5a6af2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:31 np0005539564 nova_compute[226295]: 2025-11-29 08:27:31.246 226310 DEBUG nova.compute.manager [req-a28590d8-e7b1-413c-b198-1f757390e684 req-36cc0657-003a-4fd8-9bee-3dd62b5a6af2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] No waiting events found dispatching network-vif-unplugged-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:31 np0005539564 nova_compute[226295]: 2025-11-29 08:27:31.246 226310 DEBUG nova.compute.manager [req-a28590d8-e7b1-413c-b198-1f757390e684 req-36cc0657-003a-4fd8-9bee-3dd62b5a6af2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Received event network-vif-unplugged-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:27:31 np0005539564 systemd[1]: libpod-conmon-9d96ec88039e110c3acc39a79eb5393f92226c4527a748242a30f0230c61c19c.scope: Deactivated successfully.
Nov 29 03:27:31 np0005539564 podman[285490]: 2025-11-29 08:27:31.310180182 +0000 UTC m=+0.056397347 container remove 9d96ec88039e110c3acc39a79eb5393f92226c4527a748242a30f0230c61c19c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:27:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:31.318 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[70f2fc51-9691-4c78-860d-618592144000]: (4, ('Sat Nov 29 08:27:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce (9d96ec88039e110c3acc39a79eb5393f92226c4527a748242a30f0230c61c19c)\n9d96ec88039e110c3acc39a79eb5393f92226c4527a748242a30f0230c61c19c\nSat Nov 29 08:27:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce (9d96ec88039e110c3acc39a79eb5393f92226c4527a748242a30f0230c61c19c)\n9d96ec88039e110c3acc39a79eb5393f92226c4527a748242a30f0230c61c19c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:31.321 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6e351541-c1b6-4d5b-86d8-6d6b81b2dd34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:31.322 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf206693-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:31 np0005539564 kernel: tapcf206693-b0: left promiscuous mode
Nov 29 03:27:31 np0005539564 nova_compute[226295]: 2025-11-29 08:27:31.324 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:31.331 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[15f4a334-34b7-4273-a15a-715551c193c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:31 np0005539564 nova_compute[226295]: 2025-11-29 08:27:31.341 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:31.343 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[02150039-67a3-4d15-8542-252882dc756c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:31.345 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6e832df7-2474-4355-9d99-57fb83d16f00]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:31.365 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[55925529-d6ec-451d-9ccf-9cc03d2ed55d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777119, 'reachable_time': 41253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285505, 'error': None, 'target': 'ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:31.369 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cf206693-b177-47ba-9c63-2ab4e51898ce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:27:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:31.369 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[d533242d-fed5-401a-baea-015881324994]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:31 np0005539564 systemd[1]: run-netns-ovnmeta\x2dcf206693\x2db177\x2d47ba\x2d9c63\x2d2ab4e51898ce.mount: Deactivated successfully.
Nov 29 03:27:31 np0005539564 nova_compute[226295]: 2025-11-29 08:27:31.580 226310 INFO nova.virt.libvirt.driver [None req-33a3f74f-5dd1-4481-bd7c-5dd5d66ede47 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Deleting instance files /var/lib/nova/instances/b86b46f9-7d8f-414f-af87-3822510de392_del#033[00m
Nov 29 03:27:31 np0005539564 nova_compute[226295]: 2025-11-29 08:27:31.582 226310 INFO nova.virt.libvirt.driver [None req-33a3f74f-5dd1-4481-bd7c-5dd5d66ede47 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Deletion of /var/lib/nova/instances/b86b46f9-7d8f-414f-af87-3822510de392_del complete#033[00m
Nov 29 03:27:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:31.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:31 np0005539564 nova_compute[226295]: 2025-11-29 08:27:31.781 226310 INFO nova.compute.manager [None req-33a3f74f-5dd1-4481-bd7c-5dd5d66ede47 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Took 1.20 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:27:31 np0005539564 nova_compute[226295]: 2025-11-29 08:27:31.781 226310 DEBUG oslo.service.loopingcall [None req-33a3f74f-5dd1-4481-bd7c-5dd5d66ede47 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:27:31 np0005539564 nova_compute[226295]: 2025-11-29 08:27:31.782 226310 DEBUG nova.compute.manager [-] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:27:31 np0005539564 nova_compute[226295]: 2025-11-29 08:27:31.782 226310 DEBUG nova.network.neutron [-] [instance: b86b46f9-7d8f-414f-af87-3822510de392] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:27:32 np0005539564 nova_compute[226295]: 2025-11-29 08:27:32.787 226310 DEBUG nova.compute.manager [req-f6ee70c1-4e31-4a33-b4f0-0714ba937d7b req-ba503ef7-5863-4919-b66c-f5c1dd307397 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Received event network-vif-plugged-f095bbfd-d901-4dd4-8831-72dab1104494 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:32 np0005539564 nova_compute[226295]: 2025-11-29 08:27:32.788 226310 DEBUG oslo_concurrency.lockutils [req-f6ee70c1-4e31-4a33-b4f0-0714ba937d7b req-ba503ef7-5863-4919-b66c-f5c1dd307397 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:32 np0005539564 nova_compute[226295]: 2025-11-29 08:27:32.788 226310 DEBUG oslo_concurrency.lockutils [req-f6ee70c1-4e31-4a33-b4f0-0714ba937d7b req-ba503ef7-5863-4919-b66c-f5c1dd307397 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:32 np0005539564 nova_compute[226295]: 2025-11-29 08:27:32.788 226310 DEBUG oslo_concurrency.lockutils [req-f6ee70c1-4e31-4a33-b4f0-0714ba937d7b req-ba503ef7-5863-4919-b66c-f5c1dd307397 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:32 np0005539564 nova_compute[226295]: 2025-11-29 08:27:32.788 226310 DEBUG nova.compute.manager [req-f6ee70c1-4e31-4a33-b4f0-0714ba937d7b req-ba503ef7-5863-4919-b66c-f5c1dd307397 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] No waiting events found dispatching network-vif-plugged-f095bbfd-d901-4dd4-8831-72dab1104494 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:32 np0005539564 nova_compute[226295]: 2025-11-29 08:27:32.789 226310 WARNING nova.compute.manager [req-f6ee70c1-4e31-4a33-b4f0-0714ba937d7b req-ba503ef7-5863-4919-b66c-f5c1dd307397 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Received unexpected event network-vif-plugged-f095bbfd-d901-4dd4-8831-72dab1104494 for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:27:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:33.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:33 np0005539564 nova_compute[226295]: 2025-11-29 08:27:33.417 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:33 np0005539564 nova_compute[226295]: 2025-11-29 08:27:33.623 226310 DEBUG nova.network.neutron [-] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:33 np0005539564 nova_compute[226295]: 2025-11-29 08:27:33.685 226310 INFO nova.compute.manager [-] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Took 1.90 seconds to deallocate network for instance.#033[00m
Nov 29 03:27:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:33.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:33 np0005539564 nova_compute[226295]: 2025-11-29 08:27:33.774 226310 DEBUG oslo_concurrency.lockutils [None req-33a3f74f-5dd1-4481-bd7c-5dd5d66ede47 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:33 np0005539564 nova_compute[226295]: 2025-11-29 08:27:33.775 226310 DEBUG oslo_concurrency.lockutils [None req-33a3f74f-5dd1-4481-bd7c-5dd5d66ede47 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:33 np0005539564 nova_compute[226295]: 2025-11-29 08:27:33.917 226310 DEBUG nova.compute.manager [req-e1d67c5c-4f5c-45c3-8c8b-4e7a2d338b10 req-1f40b17b-3a13-4a7b-bdcf-e4655ead435b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Received event network-vif-deleted-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:33 np0005539564 nova_compute[226295]: 2025-11-29 08:27:33.982 226310 DEBUG oslo_concurrency.processutils [None req-33a3f74f-5dd1-4481-bd7c-5dd5d66ede47 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:34 np0005539564 nova_compute[226295]: 2025-11-29 08:27:34.025 226310 DEBUG nova.compute.manager [req-19abe738-7cc5-43b3-af8a-ce618fb0e589 req-33223b9f-2d33-426f-8e8d-4eedbe84f922 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Received event network-vif-plugged-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:34 np0005539564 nova_compute[226295]: 2025-11-29 08:27:34.026 226310 DEBUG oslo_concurrency.lockutils [req-19abe738-7cc5-43b3-af8a-ce618fb0e589 req-33223b9f-2d33-426f-8e8d-4eedbe84f922 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "b86b46f9-7d8f-414f-af87-3822510de392-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:34 np0005539564 nova_compute[226295]: 2025-11-29 08:27:34.026 226310 DEBUG oslo_concurrency.lockutils [req-19abe738-7cc5-43b3-af8a-ce618fb0e589 req-33223b9f-2d33-426f-8e8d-4eedbe84f922 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:34 np0005539564 nova_compute[226295]: 2025-11-29 08:27:34.027 226310 DEBUG oslo_concurrency.lockutils [req-19abe738-7cc5-43b3-af8a-ce618fb0e589 req-33223b9f-2d33-426f-8e8d-4eedbe84f922 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:34 np0005539564 nova_compute[226295]: 2025-11-29 08:27:34.027 226310 DEBUG nova.compute.manager [req-19abe738-7cc5-43b3-af8a-ce618fb0e589 req-33223b9f-2d33-426f-8e8d-4eedbe84f922 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] No waiting events found dispatching network-vif-plugged-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:34 np0005539564 nova_compute[226295]: 2025-11-29 08:27:34.027 226310 WARNING nova.compute.manager [req-19abe738-7cc5-43b3-af8a-ce618fb0e589 req-33223b9f-2d33-426f-8e8d-4eedbe84f922 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Received unexpected event network-vif-plugged-54e5e343-ed4d-4cf3-9d9f-2ae7ec672def for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:27:34 np0005539564 nova_compute[226295]: 2025-11-29 08:27:34.084 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404839.0838113, 554ea6a4-8de1-41bf-8772-b15e95a7fd05 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:34 np0005539564 nova_compute[226295]: 2025-11-29 08:27:34.085 226310 INFO nova.compute.manager [-] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:27:34 np0005539564 nova_compute[226295]: 2025-11-29 08:27:34.269 226310 DEBUG nova.compute.manager [None req-97ab2533-e6b5-45ee-8d97-cf91fa2a5b8e - - - - - -] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:34 np0005539564 nova_compute[226295]: 2025-11-29 08:27:34.276 226310 DEBUG nova.compute.manager [None req-97ab2533-e6b5-45ee-8d97-cf91fa2a5b8e - - - - - -] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:27:34 np0005539564 nova_compute[226295]: 2025-11-29 08:27:34.304 226310 INFO nova.compute.manager [None req-97ab2533-e6b5-45ee-8d97-cf91fa2a5b8e - - - - - -] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Nov 29 03:27:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:27:34 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2507905717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:27:34 np0005539564 nova_compute[226295]: 2025-11-29 08:27:34.438 226310 DEBUG oslo_concurrency.processutils [None req-33a3f74f-5dd1-4481-bd7c-5dd5d66ede47 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:34 np0005539564 nova_compute[226295]: 2025-11-29 08:27:34.446 226310 DEBUG nova.compute.provider_tree [None req-33a3f74f-5dd1-4481-bd7c-5dd5d66ede47 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:27:34 np0005539564 nova_compute[226295]: 2025-11-29 08:27:34.505 226310 DEBUG oslo_concurrency.lockutils [None req-31a7c5c2-0ff1-458e-96e8-2619345cf208 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:34 np0005539564 nova_compute[226295]: 2025-11-29 08:27:34.506 226310 DEBUG oslo_concurrency.lockutils [None req-31a7c5c2-0ff1-458e-96e8-2619345cf208 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:34 np0005539564 nova_compute[226295]: 2025-11-29 08:27:34.506 226310 DEBUG nova.compute.manager [None req-31a7c5c2-0ff1-458e-96e8-2619345cf208 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Going to confirm migration 19 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Nov 29 03:27:34 np0005539564 nova_compute[226295]: 2025-11-29 08:27:34.520 226310 DEBUG nova.scheduler.client.report [None req-33a3f74f-5dd1-4481-bd7c-5dd5d66ede47 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:27:34 np0005539564 nova_compute[226295]: 2025-11-29 08:27:34.583 226310 DEBUG oslo_concurrency.lockutils [None req-33a3f74f-5dd1-4481-bd7c-5dd5d66ede47 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:34 np0005539564 nova_compute[226295]: 2025-11-29 08:27:34.667 226310 INFO nova.scheduler.client.report [None req-33a3f74f-5dd1-4481-bd7c-5dd5d66ede47 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Deleted allocations for instance b86b46f9-7d8f-414f-af87-3822510de392#033[00m
Nov 29 03:27:34 np0005539564 nova_compute[226295]: 2025-11-29 08:27:34.802 226310 DEBUG oslo_concurrency.lockutils [None req-33a3f74f-5dd1-4481-bd7c-5dd5d66ede47 0cbb3ac39ebd4876ad23f2a6d1c50166 f9a9decdabb1480da8f7d039e8b3d414 - - default default] Lock "b86b46f9-7d8f-414f-af87-3822510de392" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.223s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:35 np0005539564 nova_compute[226295]: 2025-11-29 08:27:35.046 226310 DEBUG neutronclient.v2_0.client [None req-31a7c5c2-0ff1-458e-96e8-2619345cf208 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port f095bbfd-d901-4dd4-8831-72dab1104494 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:27:35 np0005539564 nova_compute[226295]: 2025-11-29 08:27:35.048 226310 DEBUG oslo_concurrency.lockutils [None req-31a7c5c2-0ff1-458e-96e8-2619345cf208 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:27:35 np0005539564 nova_compute[226295]: 2025-11-29 08:27:35.048 226310 DEBUG oslo_concurrency.lockutils [None req-31a7c5c2-0ff1-458e-96e8-2619345cf208 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquired lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:27:35 np0005539564 nova_compute[226295]: 2025-11-29 08:27:35.049 226310 DEBUG nova.network.neutron [None req-31a7c5c2-0ff1-458e-96e8-2619345cf208 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:27:35 np0005539564 nova_compute[226295]: 2025-11-29 08:27:35.050 226310 DEBUG nova.objects.instance [None req-31a7c5c2-0ff1-458e-96e8-2619345cf208 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'info_cache' on Instance uuid 554ea6a4-8de1-41bf-8772-b15e95a7fd05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:35.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:35 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:27:35 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:27:35 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:27:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:35.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:35 np0005539564 nova_compute[226295]: 2025-11-29 08:27:35.905 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:36 np0005539564 nova_compute[226295]: 2025-11-29 08:27:36.465 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:36 np0005539564 nova_compute[226295]: 2025-11-29 08:27:36.787 226310 DEBUG nova.network.neutron [None req-31a7c5c2-0ff1-458e-96e8-2619345cf208 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Updating instance_info_cache with network_info: [{"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:36 np0005539564 nova_compute[226295]: 2025-11-29 08:27:36.822 226310 DEBUG oslo_concurrency.lockutils [None req-31a7c5c2-0ff1-458e-96e8-2619345cf208 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Releasing lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:27:36 np0005539564 nova_compute[226295]: 2025-11-29 08:27:36.822 226310 DEBUG nova.objects.instance [None req-31a7c5c2-0ff1-458e-96e8-2619345cf208 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'migration_context' on Instance uuid 554ea6a4-8de1-41bf-8772-b15e95a7fd05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:37.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e355 e355: 3 total, 3 up, 3 in
Nov 29 03:27:37 np0005539564 nova_compute[226295]: 2025-11-29 08:27:37.434 226310 DEBUG nova.storage.rbd_utils [None req-31a7c5c2-0ff1-458e-96e8-2619345cf208 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] removing snapshot(nova-resize) on rbd image(554ea6a4-8de1-41bf-8772-b15e95a7fd05_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:27:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:37.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:38 np0005539564 nova_compute[226295]: 2025-11-29 08:27:38.420 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e356 e356: 3 total, 3 up, 3 in
Nov 29 03:27:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:38 np0005539564 nova_compute[226295]: 2025-11-29 08:27:38.858 226310 DEBUG nova.virt.libvirt.vif [None req-31a7c5c2-0ff1-458e-96e8-2619345cf208 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=162,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC2U26ZMSkcfI5DQFnyWxH+S3YaQ8SAmf1n52XjS1tNMntu8AbhepbwcUWS7Z4/uA3A5Bve+j7ia9a5dnEqoCJZvLZo58KXp6UbvJn0ceeh5z06l1tL3ON8Wl2km+sS1vg==',key_name='tempest-keypair-1391434303',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:27:30Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='23450c2eaf4442459dec94c6d29f0412',ramdisk_id='',reservation_id='r-6fx7zmqe',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-1454477111',owner_user_name='tempest-AttachVolumeMultiAttachTest-1454477111-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:27:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4f4d28745dd46e586642c84c051db39',uuid=554ea6a4-8de1-41bf-8772-b15e95a7fd05,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:27:38 np0005539564 nova_compute[226295]: 2025-11-29 08:27:38.859 226310 DEBUG nova.network.os_vif_util [None req-31a7c5c2-0ff1-458e-96e8-2619345cf208 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converting VIF {"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:27:38 np0005539564 nova_compute[226295]: 2025-11-29 08:27:38.860 226310 DEBUG nova.network.os_vif_util [None req-31a7c5c2-0ff1-458e-96e8-2619345cf208 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:13:85,bridge_name='br-int',has_traffic_filtering=True,id=f095bbfd-d901-4dd4-8831-72dab1104494,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf095bbfd-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:27:38 np0005539564 nova_compute[226295]: 2025-11-29 08:27:38.860 226310 DEBUG os_vif [None req-31a7c5c2-0ff1-458e-96e8-2619345cf208 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:13:85,bridge_name='br-int',has_traffic_filtering=True,id=f095bbfd-d901-4dd4-8831-72dab1104494,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf095bbfd-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:27:38 np0005539564 nova_compute[226295]: 2025-11-29 08:27:38.862 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:38 np0005539564 nova_compute[226295]: 2025-11-29 08:27:38.862 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf095bbfd-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:38 np0005539564 nova_compute[226295]: 2025-11-29 08:27:38.863 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:27:38 np0005539564 nova_compute[226295]: 2025-11-29 08:27:38.865 226310 INFO os_vif [None req-31a7c5c2-0ff1-458e-96e8-2619345cf208 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:13:85,bridge_name='br-int',has_traffic_filtering=True,id=f095bbfd-d901-4dd4-8831-72dab1104494,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf095bbfd-d9')#033[00m
Nov 29 03:27:38 np0005539564 nova_compute[226295]: 2025-11-29 08:27:38.865 226310 DEBUG oslo_concurrency.lockutils [None req-31a7c5c2-0ff1-458e-96e8-2619345cf208 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:38 np0005539564 nova_compute[226295]: 2025-11-29 08:27:38.865 226310 DEBUG oslo_concurrency.lockutils [None req-31a7c5c2-0ff1-458e-96e8-2619345cf208 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:38 np0005539564 nova_compute[226295]: 2025-11-29 08:27:38.972 226310 DEBUG oslo_concurrency.processutils [None req-31a7c5c2-0ff1-458e-96e8-2619345cf208 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:39.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:27:39 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1711814554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:27:39 np0005539564 nova_compute[226295]: 2025-11-29 08:27:39.452 226310 DEBUG oslo_concurrency.processutils [None req-31a7c5c2-0ff1-458e-96e8-2619345cf208 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:39 np0005539564 nova_compute[226295]: 2025-11-29 08:27:39.462 226310 DEBUG nova.compute.provider_tree [None req-31a7c5c2-0ff1-458e-96e8-2619345cf208 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:27:39 np0005539564 nova_compute[226295]: 2025-11-29 08:27:39.487 226310 DEBUG nova.scheduler.client.report [None req-31a7c5c2-0ff1-458e-96e8-2619345cf208 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:27:39 np0005539564 nova_compute[226295]: 2025-11-29 08:27:39.556 226310 DEBUG oslo_concurrency.lockutils [None req-31a7c5c2-0ff1-458e-96e8-2619345cf208 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:39 np0005539564 nova_compute[226295]: 2025-11-29 08:27:39.664 226310 INFO nova.scheduler.client.report [None req-31a7c5c2-0ff1-458e-96e8-2619345cf208 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Deleted allocation for migration eb7ab834-06eb-430d-b4a7-c662625ee1a3#033[00m
Nov 29 03:27:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:39.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:39 np0005539564 nova_compute[226295]: 2025-11-29 08:27:39.742 226310 DEBUG oslo_concurrency.lockutils [None req-31a7c5c2-0ff1-458e-96e8-2619345cf208 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 5.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.361 226310 DEBUG oslo_concurrency.lockutils [None req-e4b0a651-72ae-4be7-998b-32372155a1a0 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.362 226310 DEBUG oslo_concurrency.lockutils [None req-e4b0a651-72ae-4be7-998b-32372155a1a0 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.363 226310 DEBUG oslo_concurrency.lockutils [None req-e4b0a651-72ae-4be7-998b-32372155a1a0 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.363 226310 DEBUG oslo_concurrency.lockutils [None req-e4b0a651-72ae-4be7-998b-32372155a1a0 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.363 226310 DEBUG oslo_concurrency.lockutils [None req-e4b0a651-72ae-4be7-998b-32372155a1a0 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.365 226310 INFO nova.compute.manager [None req-e4b0a651-72ae-4be7-998b-32372155a1a0 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Terminating instance#033[00m
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.366 226310 DEBUG nova.compute.manager [None req-e4b0a651-72ae-4be7-998b-32372155a1a0 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:27:40 np0005539564 kernel: tapa7a9e323-49 (unregistering): left promiscuous mode
Nov 29 03:27:40 np0005539564 NetworkManager[48997]: <info>  [1764404860.4677] device (tapa7a9e323-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.479 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:40 np0005539564 ovn_controller[130591]: 2025-11-29T08:27:40Z|00611|binding|INFO|Releasing lport a7a9e323-49eb-415e-85cd-322403ba6517 from this chassis (sb_readonly=0)
Nov 29 03:27:40 np0005539564 ovn_controller[130591]: 2025-11-29T08:27:40Z|00612|binding|INFO|Setting lport a7a9e323-49eb-415e-85cd-322403ba6517 down in Southbound
Nov 29 03:27:40 np0005539564 ovn_controller[130591]: 2025-11-29T08:27:40Z|00613|binding|INFO|Removing iface tapa7a9e323-49 ovn-installed in OVS
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.483 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:40.492 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:a3:e3 10.100.0.8'], port_security=['fa:16:3e:bc:a3:e3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c3e98a32-fd92-4873-a060-88aaf76bf1fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7008b597-8de2-4973-801f-fcc733e4f6c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09cc8c3182d845f597dda064f9013941', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'dbe43642-7b06-4c12-a982-e7ee16790d67', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f1261764-1af6-4456-be86-7981c6d9ba2a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=a7a9e323-49eb-415e-85cd-322403ba6517) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:27:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:40.493 139780 INFO neutron.agent.ovn.metadata.agent [-] Port a7a9e323-49eb-415e-85cd-322403ba6517 in datapath 7008b597-8de2-4973-801f-fcc733e4f6c9 unbound from our chassis#033[00m
Nov 29 03:27:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:40.495 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7008b597-8de2-4973-801f-fcc733e4f6c9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:27:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:40.496 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2bce67f8-c41e-439e-8f23-e02572575f75]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:40.496 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 namespace which is not needed anymore#033[00m
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.504 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:40 np0005539564 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000009c.scope: Deactivated successfully.
Nov 29 03:27:40 np0005539564 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000009c.scope: Consumed 22.437s CPU time.
Nov 29 03:27:40 np0005539564 systemd-machined[190128]: Machine qemu-72-instance-0000009c terminated.
Nov 29 03:27:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e357 e357: 3 total, 3 up, 3 in
Nov 29 03:27:40 np0005539564 kernel: tapa7a9e323-49: entered promiscuous mode
Nov 29 03:27:40 np0005539564 kernel: tapa7a9e323-49 (unregistering): left promiscuous mode
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.598 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:40 np0005539564 ovn_controller[130591]: 2025-11-29T08:27:40Z|00614|binding|INFO|Claiming lport a7a9e323-49eb-415e-85cd-322403ba6517 for this chassis.
Nov 29 03:27:40 np0005539564 ovn_controller[130591]: 2025-11-29T08:27:40Z|00615|binding|INFO|a7a9e323-49eb-415e-85cd-322403ba6517: Claiming fa:16:3e:bc:a3:e3 10.100.0.8
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.613 226310 INFO nova.virt.libvirt.driver [-] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Instance destroyed successfully.#033[00m
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.614 226310 DEBUG nova.objects.instance [None req-e4b0a651-72ae-4be7-998b-32372155a1a0 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'resources' on Instance uuid c3e98a32-fd92-4873-a060-88aaf76bf1fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:40.620 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:a3:e3 10.100.0.8'], port_security=['fa:16:3e:bc:a3:e3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c3e98a32-fd92-4873-a060-88aaf76bf1fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7008b597-8de2-4973-801f-fcc733e4f6c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09cc8c3182d845f597dda064f9013941', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'dbe43642-7b06-4c12-a982-e7ee16790d67', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f1261764-1af6-4456-be86-7981c6d9ba2a, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=a7a9e323-49eb-415e-85cd-322403ba6517) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.620 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:40 np0005539564 ovn_controller[130591]: 2025-11-29T08:27:40Z|00616|binding|INFO|Releasing lport a7a9e323-49eb-415e-85cd-322403ba6517 from this chassis (sb_readonly=0)
Nov 29 03:27:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:40.638 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:a3:e3 10.100.0.8'], port_security=['fa:16:3e:bc:a3:e3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c3e98a32-fd92-4873-a060-88aaf76bf1fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7008b597-8de2-4973-801f-fcc733e4f6c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09cc8c3182d845f597dda064f9013941', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'dbe43642-7b06-4c12-a982-e7ee16790d67', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f1261764-1af6-4456-be86-7981c6d9ba2a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=a7a9e323-49eb-415e-85cd-322403ba6517) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.642 226310 DEBUG nova.virt.libvirt.vif [None req-e4b0a651-72ae-4be7-998b-32372155a1a0 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:24:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-871834401',display_name='tempest-ServerRescueNegativeTestJSON-server-871834401',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-871834401',id=156,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:25:04Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='09cc8c3182d845f597dda064f9013941',ramdisk_id='',reservation_id='r-gi24u0nl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-754875869',owner_user_name='tempest-ServerRescueNegativeTestJSON-754875869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:25:05Z,user_data=None,user_id='dfcf2db50da745c09bffcf32ec016854',uuid=c3e98a32-fd92-4873-a060-88aaf76bf1fc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "a7a9e323-49eb-415e-85cd-322403ba6517", "address": "fa:16:3e:bc:a3:e3", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a9e323-49", "ovs_interfaceid": "a7a9e323-49eb-415e-85cd-322403ba6517", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.643 226310 DEBUG nova.network.os_vif_util [None req-e4b0a651-72ae-4be7-998b-32372155a1a0 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Converting VIF {"id": "a7a9e323-49eb-415e-85cd-322403ba6517", "address": "fa:16:3e:bc:a3:e3", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a9e323-49", "ovs_interfaceid": "a7a9e323-49eb-415e-85cd-322403ba6517", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.644 226310 DEBUG nova.network.os_vif_util [None req-e4b0a651-72ae-4be7-998b-32372155a1a0 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bc:a3:e3,bridge_name='br-int',has_traffic_filtering=True,id=a7a9e323-49eb-415e-85cd-322403ba6517,network=Network(7008b597-8de2-4973-801f-fcc733e4f6c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7a9e323-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.645 226310 DEBUG os_vif [None req-e4b0a651-72ae-4be7-998b-32372155a1a0 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bc:a3:e3,bridge_name='br-int',has_traffic_filtering=True,id=a7a9e323-49eb-415e-85cd-322403ba6517,network=Network(7008b597-8de2-4973-801f-fcc733e4f6c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7a9e323-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.647 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.647 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7a9e323-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.649 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.652 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.655 226310 INFO os_vif [None req-e4b0a651-72ae-4be7-998b-32372155a1a0 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bc:a3:e3,bridge_name='br-int',has_traffic_filtering=True,id=a7a9e323-49eb-415e-85cd-322403ba6517,network=Network(7008b597-8de2-4973-801f-fcc733e4f6c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7a9e323-49')#033[00m
Nov 29 03:27:40 np0005539564 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[283184]: [NOTICE]   (283188) : haproxy version is 2.8.14-c23fe91
Nov 29 03:27:40 np0005539564 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[283184]: [NOTICE]   (283188) : path to executable is /usr/sbin/haproxy
Nov 29 03:27:40 np0005539564 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[283184]: [WARNING]  (283188) : Exiting Master process...
Nov 29 03:27:40 np0005539564 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[283184]: [WARNING]  (283188) : Exiting Master process...
Nov 29 03:27:40 np0005539564 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[283184]: [ALERT]    (283188) : Current worker (283190) exited with code 143 (Terminated)
Nov 29 03:27:40 np0005539564 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[283184]: [WARNING]  (283188) : All workers exited. Exiting... (0)
Nov 29 03:27:40 np0005539564 systemd[1]: libpod-e30bff3ebde152bfe9d1ac4e665394fa1cd6b0cfd5c50bc3f82d976b60a5aea6.scope: Deactivated successfully.
Nov 29 03:27:40 np0005539564 podman[285745]: 2025-11-29 08:27:40.678314258 +0000 UTC m=+0.052751479 container died e30bff3ebde152bfe9d1ac4e665394fa1cd6b0cfd5c50bc3f82d976b60a5aea6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:27:40 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e30bff3ebde152bfe9d1ac4e665394fa1cd6b0cfd5c50bc3f82d976b60a5aea6-userdata-shm.mount: Deactivated successfully.
Nov 29 03:27:40 np0005539564 systemd[1]: var-lib-containers-storage-overlay-fe103fd52a604e98ae48d6723fe8399987b6df35d07b9990afa74fc8c3f1d14b-merged.mount: Deactivated successfully.
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.720 226310 DEBUG nova.compute.manager [req-12e2563a-202a-4003-b278-c66c2179478a req-18ecf0b7-aca5-4bd1-831c-b9269c3f7f59 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Received event network-vif-unplugged-a7a9e323-49eb-415e-85cd-322403ba6517 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.720 226310 DEBUG oslo_concurrency.lockutils [req-12e2563a-202a-4003-b278-c66c2179478a req-18ecf0b7-aca5-4bd1-831c-b9269c3f7f59 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.720 226310 DEBUG oslo_concurrency.lockutils [req-12e2563a-202a-4003-b278-c66c2179478a req-18ecf0b7-aca5-4bd1-831c-b9269c3f7f59 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.720 226310 DEBUG oslo_concurrency.lockutils [req-12e2563a-202a-4003-b278-c66c2179478a req-18ecf0b7-aca5-4bd1-831c-b9269c3f7f59 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.721 226310 DEBUG nova.compute.manager [req-12e2563a-202a-4003-b278-c66c2179478a req-18ecf0b7-aca5-4bd1-831c-b9269c3f7f59 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] No waiting events found dispatching network-vif-unplugged-a7a9e323-49eb-415e-85cd-322403ba6517 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.721 226310 DEBUG nova.compute.manager [req-12e2563a-202a-4003-b278-c66c2179478a req-18ecf0b7-aca5-4bd1-831c-b9269c3f7f59 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Received event network-vif-unplugged-a7a9e323-49eb-415e-85cd-322403ba6517 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:27:40 np0005539564 podman[285745]: 2025-11-29 08:27:40.721418343 +0000 UTC m=+0.095855574 container cleanup e30bff3ebde152bfe9d1ac4e665394fa1cd6b0cfd5c50bc3f82d976b60a5aea6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 03:27:40 np0005539564 systemd[1]: libpod-conmon-e30bff3ebde152bfe9d1ac4e665394fa1cd6b0cfd5c50bc3f82d976b60a5aea6.scope: Deactivated successfully.
Nov 29 03:27:40 np0005539564 podman[285790]: 2025-11-29 08:27:40.789126185 +0000 UTC m=+0.045205014 container remove e30bff3ebde152bfe9d1ac4e665394fa1cd6b0cfd5c50bc3f82d976b60a5aea6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:27:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:40.799 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[471a2c86-d8f1-4544-96b8-ca5b90dd4cf5]: (4, ('Sat Nov 29 08:27:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 (e30bff3ebde152bfe9d1ac4e665394fa1cd6b0cfd5c50bc3f82d976b60a5aea6)\ne30bff3ebde152bfe9d1ac4e665394fa1cd6b0cfd5c50bc3f82d976b60a5aea6\nSat Nov 29 08:27:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 (e30bff3ebde152bfe9d1ac4e665394fa1cd6b0cfd5c50bc3f82d976b60a5aea6)\ne30bff3ebde152bfe9d1ac4e665394fa1cd6b0cfd5c50bc3f82d976b60a5aea6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:40.801 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[90a96d3d-59a4-4e2e-83fe-1100f07a3f3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:40.803 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7008b597-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.806 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:40 np0005539564 kernel: tap7008b597-80: left promiscuous mode
Nov 29 03:27:40 np0005539564 nova_compute[226295]: 2025-11-29 08:27:40.824 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:40.827 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[01d6efcf-1844-4478-bf71-c3836811ba16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:40.838 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[99b1c4a9-11de-47f8-ad31-ef8d10544f99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:40.839 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1e9e6905-92f0-4a9e-9564-c82978671080]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:40.856 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2c681ee3-09cf-4bb5-b736-2029c32e31f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 767893, 'reachable_time': 21940, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285805, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:40 np0005539564 systemd[1]: run-netns-ovnmeta\x2d7008b597\x2d8de2\x2d4973\x2d801f\x2dfcc733e4f6c9.mount: Deactivated successfully.
Nov 29 03:27:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:40.860 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:27:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:40.860 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ab30b3-9cfb-4948-a633-c389e45e54f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:40.861 139780 INFO neutron.agent.ovn.metadata.agent [-] Port a7a9e323-49eb-415e-85cd-322403ba6517 in datapath 7008b597-8de2-4973-801f-fcc733e4f6c9 unbound from our chassis#033[00m
Nov 29 03:27:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:40.863 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7008b597-8de2-4973-801f-fcc733e4f6c9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:27:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:40.864 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6171a6de-bcb8-4885-b019-13b0d33fbe63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:40.864 139780 INFO neutron.agent.ovn.metadata.agent [-] Port a7a9e323-49eb-415e-85cd-322403ba6517 in datapath 7008b597-8de2-4973-801f-fcc733e4f6c9 unbound from our chassis#033[00m
Nov 29 03:27:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:40.866 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7008b597-8de2-4973-801f-fcc733e4f6c9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:27:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:27:40.867 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e01509e6-77ef-4b56-88da-fa994387315b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:41 np0005539564 nova_compute[226295]: 2025-11-29 08:27:41.083 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:27:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:41.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:27:41 np0005539564 nova_compute[226295]: 2025-11-29 08:27:41.345 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:41.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:42 np0005539564 nova_compute[226295]: 2025-11-29 08:27:42.285 226310 INFO nova.virt.libvirt.driver [None req-e4b0a651-72ae-4be7-998b-32372155a1a0 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Deleting instance files /var/lib/nova/instances/c3e98a32-fd92-4873-a060-88aaf76bf1fc_del#033[00m
Nov 29 03:27:42 np0005539564 nova_compute[226295]: 2025-11-29 08:27:42.287 226310 INFO nova.virt.libvirt.driver [None req-e4b0a651-72ae-4be7-998b-32372155a1a0 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Deletion of /var/lib/nova/instances/c3e98a32-fd92-4873-a060-88aaf76bf1fc_del complete#033[00m
Nov 29 03:27:42 np0005539564 nova_compute[226295]: 2025-11-29 08:27:42.368 226310 INFO nova.compute.manager [None req-e4b0a651-72ae-4be7-998b-32372155a1a0 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Took 2.00 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:27:42 np0005539564 nova_compute[226295]: 2025-11-29 08:27:42.369 226310 DEBUG oslo.service.loopingcall [None req-e4b0a651-72ae-4be7-998b-32372155a1a0 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:27:42 np0005539564 nova_compute[226295]: 2025-11-29 08:27:42.369 226310 DEBUG nova.compute.manager [-] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:27:42 np0005539564 nova_compute[226295]: 2025-11-29 08:27:42.370 226310 DEBUG nova.network.neutron [-] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:27:42 np0005539564 nova_compute[226295]: 2025-11-29 08:27:42.815 226310 DEBUG nova.compute.manager [req-499ef317-3e77-4cf1-8874-c103e580d9db req-8d8e7694-9c16-4bd0-bad5-7cfd802b2f03 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Received event network-vif-plugged-a7a9e323-49eb-415e-85cd-322403ba6517 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:42 np0005539564 nova_compute[226295]: 2025-11-29 08:27:42.815 226310 DEBUG oslo_concurrency.lockutils [req-499ef317-3e77-4cf1-8874-c103e580d9db req-8d8e7694-9c16-4bd0-bad5-7cfd802b2f03 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:42 np0005539564 nova_compute[226295]: 2025-11-29 08:27:42.816 226310 DEBUG oslo_concurrency.lockutils [req-499ef317-3e77-4cf1-8874-c103e580d9db req-8d8e7694-9c16-4bd0-bad5-7cfd802b2f03 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:42 np0005539564 nova_compute[226295]: 2025-11-29 08:27:42.816 226310 DEBUG oslo_concurrency.lockutils [req-499ef317-3e77-4cf1-8874-c103e580d9db req-8d8e7694-9c16-4bd0-bad5-7cfd802b2f03 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:42 np0005539564 nova_compute[226295]: 2025-11-29 08:27:42.816 226310 DEBUG nova.compute.manager [req-499ef317-3e77-4cf1-8874-c103e580d9db req-8d8e7694-9c16-4bd0-bad5-7cfd802b2f03 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] No waiting events found dispatching network-vif-plugged-a7a9e323-49eb-415e-85cd-322403ba6517 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:42 np0005539564 nova_compute[226295]: 2025-11-29 08:27:42.816 226310 WARNING nova.compute.manager [req-499ef317-3e77-4cf1-8874-c103e580d9db req-8d8e7694-9c16-4bd0-bad5-7cfd802b2f03 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Received unexpected event network-vif-plugged-a7a9e323-49eb-415e-85cd-322403ba6517 for instance with vm_state rescued and task_state deleting.#033[00m
Nov 29 03:27:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:43.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:43 np0005539564 nova_compute[226295]: 2025-11-29 08:27:43.320 226310 DEBUG nova.network.neutron [-] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:43 np0005539564 nova_compute[226295]: 2025-11-29 08:27:43.424 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e358 e358: 3 total, 3 up, 3 in
Nov 29 03:27:43 np0005539564 nova_compute[226295]: 2025-11-29 08:27:43.585 226310 INFO nova.compute.manager [-] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Took 1.22 seconds to deallocate network for instance.#033[00m
Nov 29 03:27:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:43 np0005539564 nova_compute[226295]: 2025-11-29 08:27:43.637 226310 DEBUG oslo_concurrency.lockutils [None req-e4b0a651-72ae-4be7-998b-32372155a1a0 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:43 np0005539564 nova_compute[226295]: 2025-11-29 08:27:43.637 226310 DEBUG oslo_concurrency.lockutils [None req-e4b0a651-72ae-4be7-998b-32372155a1a0 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:43 np0005539564 nova_compute[226295]: 2025-11-29 08:27:43.700 226310 DEBUG oslo_concurrency.processutils [None req-e4b0a651-72ae-4be7-998b-32372155a1a0 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:43 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:27:43 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:27:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:43.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:27:44 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2016664186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:27:44 np0005539564 nova_compute[226295]: 2025-11-29 08:27:44.107 226310 DEBUG oslo_concurrency.processutils [None req-e4b0a651-72ae-4be7-998b-32372155a1a0 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:44 np0005539564 nova_compute[226295]: 2025-11-29 08:27:44.116 226310 DEBUG nova.compute.provider_tree [None req-e4b0a651-72ae-4be7-998b-32372155a1a0 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:27:44 np0005539564 nova_compute[226295]: 2025-11-29 08:27:44.155 226310 DEBUG nova.scheduler.client.report [None req-e4b0a651-72ae-4be7-998b-32372155a1a0 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:27:44 np0005539564 nova_compute[226295]: 2025-11-29 08:27:44.202 226310 DEBUG oslo_concurrency.lockutils [None req-e4b0a651-72ae-4be7-998b-32372155a1a0 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:44 np0005539564 nova_compute[226295]: 2025-11-29 08:27:44.245 226310 INFO nova.scheduler.client.report [None req-e4b0a651-72ae-4be7-998b-32372155a1a0 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Deleted allocations for instance c3e98a32-fd92-4873-a060-88aaf76bf1fc#033[00m
Nov 29 03:27:44 np0005539564 nova_compute[226295]: 2025-11-29 08:27:44.332 226310 DEBUG oslo_concurrency.lockutils [None req-e4b0a651-72ae-4be7-998b-32372155a1a0 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "c3e98a32-fd92-4873-a060-88aaf76bf1fc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:44 np0005539564 nova_compute[226295]: 2025-11-29 08:27:44.993 226310 DEBUG nova.compute.manager [req-0cc3384e-34b8-445f-bdc9-f7c8e53c418b req-595204de-6b29-43e7-90c5-b99775c7d047 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Received event network-vif-deleted-a7a9e323-49eb-415e-85cd-322403ba6517 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:45.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:45 np0005539564 nova_compute[226295]: 2025-11-29 08:27:45.652 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:45.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:45 np0005539564 nova_compute[226295]: 2025-11-29 08:27:45.857 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404850.856239, b86b46f9-7d8f-414f-af87-3822510de392 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:45 np0005539564 nova_compute[226295]: 2025-11-29 08:27:45.858 226310 INFO nova.compute.manager [-] [instance: b86b46f9-7d8f-414f-af87-3822510de392] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:27:45 np0005539564 nova_compute[226295]: 2025-11-29 08:27:45.903 226310 DEBUG nova.compute.manager [None req-0b7ba64c-8c64-4134-bc41-284e1cd690d9 - - - - - -] [instance: b86b46f9-7d8f-414f-af87-3822510de392] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:47.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:47.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:48 np0005539564 nova_compute[226295]: 2025-11-29 08:27:48.424 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:49.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e359 e359: 3 total, 3 up, 3 in
Nov 29 03:27:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:49.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:50 np0005539564 nova_compute[226295]: 2025-11-29 08:27:50.656 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:27:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:51.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:27:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:51.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:53.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:53 np0005539564 nova_compute[226295]: 2025-11-29 08:27:53.425 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:53.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:54 np0005539564 nova_compute[226295]: 2025-11-29 08:27:54.356 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:54 np0005539564 podman[285881]: 2025-11-29 08:27:54.518376615 +0000 UTC m=+0.067479637 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 29 03:27:54 np0005539564 podman[285880]: 2025-11-29 08:27:54.526753492 +0000 UTC m=+0.081096116 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 29 03:27:54 np0005539564 podman[285879]: 2025-11-29 08:27:54.553723342 +0000 UTC m=+0.109707719 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:27:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:55.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:55 np0005539564 nova_compute[226295]: 2025-11-29 08:27:55.611 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404860.6096127, c3e98a32-fd92-4873-a060-88aaf76bf1fc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:55 np0005539564 nova_compute[226295]: 2025-11-29 08:27:55.612 226310 INFO nova.compute.manager [-] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:27:55 np0005539564 nova_compute[226295]: 2025-11-29 08:27:55.632 226310 DEBUG nova.compute.manager [None req-260ad7bb-eed9-47e4-a8f0-517d0fa5dea0 - - - - - -] [instance: c3e98a32-fd92-4873-a060-88aaf76bf1fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:55 np0005539564 nova_compute[226295]: 2025-11-29 08:27:55.658 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:55.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:57.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:57.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:58 np0005539564 nova_compute[226295]: 2025-11-29 08:27:58.427 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:58 np0005539564 nova_compute[226295]: 2025-11-29 08:27:58.506 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:59.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:59 np0005539564 nova_compute[226295]: 2025-11-29 08:27:59.442 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:27:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:59.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:00 np0005539564 nova_compute[226295]: 2025-11-29 08:28:00.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:00 np0005539564 nova_compute[226295]: 2025-11-29 08:28:00.661 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.003000080s ======
Nov 29 03:28:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:01.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Nov 29 03:28:01 np0005539564 nova_compute[226295]: 2025-11-29 08:28:01.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:01 np0005539564 nova_compute[226295]: 2025-11-29 08:28:01.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:01 np0005539564 nova_compute[226295]: 2025-11-29 08:28:01.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:28:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:01.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:03.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:03 np0005539564 nova_compute[226295]: 2025-11-29 08:28:03.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:03 np0005539564 nova_compute[226295]: 2025-11-29 08:28:03.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:28:03 np0005539564 nova_compute[226295]: 2025-11-29 08:28:03.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:28:03 np0005539564 nova_compute[226295]: 2025-11-29 08:28:03.378 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:28:03 np0005539564 nova_compute[226295]: 2025-11-29 08:28:03.379 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:03 np0005539564 nova_compute[226295]: 2025-11-29 08:28:03.430 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:03.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:03.742 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:03.742 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:03.742 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:05.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:05 np0005539564 nova_compute[226295]: 2025-11-29 08:28:05.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:05 np0005539564 nova_compute[226295]: 2025-11-29 08:28:05.664 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:05.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:28:06 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/774689083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:28:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:07.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:07.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:08 np0005539564 nova_compute[226295]: 2025-11-29 08:28:08.433 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:09.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:09.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:10 np0005539564 nova_compute[226295]: 2025-11-29 08:28:10.668 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:11.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e360 e360: 3 total, 3 up, 3 in
Nov 29 03:28:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:28:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:11.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:28:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:11.878 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:28:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:11.879 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:28:11 np0005539564 nova_compute[226295]: 2025-11-29 08:28:11.881 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:12 np0005539564 nova_compute[226295]: 2025-11-29 08:28:12.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:12 np0005539564 nova_compute[226295]: 2025-11-29 08:28:12.373 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:12 np0005539564 nova_compute[226295]: 2025-11-29 08:28:12.373 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:12 np0005539564 nova_compute[226295]: 2025-11-29 08:28:12.373 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:12 np0005539564 nova_compute[226295]: 2025-11-29 08:28:12.374 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:28:12 np0005539564 nova_compute[226295]: 2025-11-29 08:28:12.374 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:28:12 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1984822392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:28:12 np0005539564 nova_compute[226295]: 2025-11-29 08:28:12.865 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:13 np0005539564 nova_compute[226295]: 2025-11-29 08:28:13.104 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:28:13 np0005539564 nova_compute[226295]: 2025-11-29 08:28:13.106 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4277MB free_disk=20.77997589111328GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:28:13 np0005539564 nova_compute[226295]: 2025-11-29 08:28:13.107 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:13 np0005539564 nova_compute[226295]: 2025-11-29 08:28:13.108 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:13.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:13 np0005539564 nova_compute[226295]: 2025-11-29 08:28:13.171 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:28:13 np0005539564 nova_compute[226295]: 2025-11-29 08:28:13.171 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:28:13 np0005539564 nova_compute[226295]: 2025-11-29 08:28:13.360 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing inventories for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:28:13 np0005539564 nova_compute[226295]: 2025-11-29 08:28:13.399 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating ProviderTree inventory for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:28:13 np0005539564 nova_compute[226295]: 2025-11-29 08:28:13.400 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating inventory in ProviderTree for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:28:13 np0005539564 nova_compute[226295]: 2025-11-29 08:28:13.414 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing aggregate associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:28:13 np0005539564 nova_compute[226295]: 2025-11-29 08:28:13.433 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing trait associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:28:13 np0005539564 nova_compute[226295]: 2025-11-29 08:28:13.439 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:13 np0005539564 nova_compute[226295]: 2025-11-29 08:28:13.447 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:13.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:28:13 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1456162095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:28:13 np0005539564 nova_compute[226295]: 2025-11-29 08:28:13.975 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:13 np0005539564 nova_compute[226295]: 2025-11-29 08:28:13.984 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:28:14 np0005539564 nova_compute[226295]: 2025-11-29 08:28:14.013 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:28:14 np0005539564 nova_compute[226295]: 2025-11-29 08:28:14.056 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:28:14 np0005539564 nova_compute[226295]: 2025-11-29 08:28:14.057 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.950s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:15.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:15 np0005539564 nova_compute[226295]: 2025-11-29 08:28:15.672 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:15.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:15.881 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:17.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:17.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:18 np0005539564 nova_compute[226295]: 2025-11-29 08:28:18.445 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e361 e361: 3 total, 3 up, 3 in
Nov 29 03:28:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:19.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:19 np0005539564 nova_compute[226295]: 2025-11-29 08:28:19.629 226310 DEBUG oslo_concurrency.lockutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "bf4c2292-18d7-4c4b-b97e-abb227923156" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:19 np0005539564 nova_compute[226295]: 2025-11-29 08:28:19.630 226310 DEBUG oslo_concurrency.lockutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:19 np0005539564 nova_compute[226295]: 2025-11-29 08:28:19.663 226310 DEBUG nova.compute.manager [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:28:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:19.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:19 np0005539564 nova_compute[226295]: 2025-11-29 08:28:19.791 226310 DEBUG oslo_concurrency.lockutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:19 np0005539564 nova_compute[226295]: 2025-11-29 08:28:19.791 226310 DEBUG oslo_concurrency.lockutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:19 np0005539564 nova_compute[226295]: 2025-11-29 08:28:19.798 226310 DEBUG nova.virt.hardware [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:28:19 np0005539564 nova_compute[226295]: 2025-11-29 08:28:19.799 226310 INFO nova.compute.claims [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:28:19 np0005539564 nova_compute[226295]: 2025-11-29 08:28:19.937 226310 DEBUG oslo_concurrency.processutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:28:20 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3296194091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:28:20 np0005539564 nova_compute[226295]: 2025-11-29 08:28:20.421 226310 DEBUG oslo_concurrency.processutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:20 np0005539564 nova_compute[226295]: 2025-11-29 08:28:20.430 226310 DEBUG nova.compute.provider_tree [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:28:20 np0005539564 nova_compute[226295]: 2025-11-29 08:28:20.455 226310 DEBUG nova.scheduler.client.report [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:28:20 np0005539564 nova_compute[226295]: 2025-11-29 08:28:20.480 226310 DEBUG oslo_concurrency.lockutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:20 np0005539564 nova_compute[226295]: 2025-11-29 08:28:20.481 226310 DEBUG nova.compute.manager [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:28:20 np0005539564 nova_compute[226295]: 2025-11-29 08:28:20.527 226310 DEBUG nova.compute.manager [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:28:20 np0005539564 nova_compute[226295]: 2025-11-29 08:28:20.528 226310 DEBUG nova.network.neutron [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:28:20 np0005539564 nova_compute[226295]: 2025-11-29 08:28:20.550 226310 INFO nova.virt.libvirt.driver [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:28:20 np0005539564 nova_compute[226295]: 2025-11-29 08:28:20.575 226310 DEBUG nova.compute.manager [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:28:20 np0005539564 nova_compute[226295]: 2025-11-29 08:28:20.675 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:20 np0005539564 nova_compute[226295]: 2025-11-29 08:28:20.685 226310 DEBUG nova.compute.manager [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:28:20 np0005539564 nova_compute[226295]: 2025-11-29 08:28:20.687 226310 DEBUG nova.virt.libvirt.driver [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:28:20 np0005539564 nova_compute[226295]: 2025-11-29 08:28:20.688 226310 INFO nova.virt.libvirt.driver [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Creating image(s)#033[00m
Nov 29 03:28:20 np0005539564 nova_compute[226295]: 2025-11-29 08:28:20.728 226310 DEBUG nova.storage.rbd_utils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] rbd image bf4c2292-18d7-4c4b-b97e-abb227923156_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:20 np0005539564 nova_compute[226295]: 2025-11-29 08:28:20.761 226310 DEBUG nova.storage.rbd_utils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] rbd image bf4c2292-18d7-4c4b-b97e-abb227923156_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:20 np0005539564 nova_compute[226295]: 2025-11-29 08:28:20.786 226310 DEBUG nova.storage.rbd_utils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] rbd image bf4c2292-18d7-4c4b-b97e-abb227923156_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:20 np0005539564 nova_compute[226295]: 2025-11-29 08:28:20.791 226310 DEBUG oslo_concurrency.processutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:20 np0005539564 nova_compute[226295]: 2025-11-29 08:28:20.821 226310 DEBUG nova.policy [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a362a419f6a492aae2f102ad2bbd5e9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eb0810bf6f5b4eb59638b7a2cf59ed5b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:28:20 np0005539564 nova_compute[226295]: 2025-11-29 08:28:20.855 226310 DEBUG oslo_concurrency.processutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:20 np0005539564 nova_compute[226295]: 2025-11-29 08:28:20.856 226310 DEBUG oslo_concurrency.lockutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:20 np0005539564 nova_compute[226295]: 2025-11-29 08:28:20.856 226310 DEBUG oslo_concurrency.lockutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:20 np0005539564 nova_compute[226295]: 2025-11-29 08:28:20.857 226310 DEBUG oslo_concurrency.lockutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:20 np0005539564 nova_compute[226295]: 2025-11-29 08:28:20.887 226310 DEBUG nova.storage.rbd_utils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] rbd image bf4c2292-18d7-4c4b-b97e-abb227923156_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:20 np0005539564 nova_compute[226295]: 2025-11-29 08:28:20.891 226310 DEBUG oslo_concurrency.processutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf bf4c2292-18d7-4c4b-b97e-abb227923156_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:28:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:21.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:28:21 np0005539564 nova_compute[226295]: 2025-11-29 08:28:21.635 226310 DEBUG oslo_concurrency.processutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf bf4c2292-18d7-4c4b-b97e-abb227923156_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.743s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:21 np0005539564 nova_compute[226295]: 2025-11-29 08:28:21.748 226310 DEBUG nova.storage.rbd_utils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] resizing rbd image bf4c2292-18d7-4c4b-b97e-abb227923156_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:28:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:21.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:21 np0005539564 nova_compute[226295]: 2025-11-29 08:28:21.891 226310 DEBUG nova.objects.instance [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'migration_context' on Instance uuid bf4c2292-18d7-4c4b-b97e-abb227923156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:21 np0005539564 nova_compute[226295]: 2025-11-29 08:28:21.917 226310 DEBUG nova.virt.libvirt.driver [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:28:21 np0005539564 nova_compute[226295]: 2025-11-29 08:28:21.918 226310 DEBUG nova.virt.libvirt.driver [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Ensure instance console log exists: /var/lib/nova/instances/bf4c2292-18d7-4c4b-b97e-abb227923156/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:28:21 np0005539564 nova_compute[226295]: 2025-11-29 08:28:21.918 226310 DEBUG oslo_concurrency.lockutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:21 np0005539564 nova_compute[226295]: 2025-11-29 08:28:21.919 226310 DEBUG oslo_concurrency.lockutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:21 np0005539564 nova_compute[226295]: 2025-11-29 08:28:21.919 226310 DEBUG oslo_concurrency.lockutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:22 np0005539564 nova_compute[226295]: 2025-11-29 08:28:22.122 226310 DEBUG nova.network.neutron [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Successfully created port: bec1fbdf-d4dc-4b2c-af66-9ba123464651 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:28:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e362 e362: 3 total, 3 up, 3 in
Nov 29 03:28:22 np0005539564 nova_compute[226295]: 2025-11-29 08:28:22.854 226310 DEBUG nova.network.neutron [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Successfully updated port: bec1fbdf-d4dc-4b2c-af66-9ba123464651 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:28:22 np0005539564 nova_compute[226295]: 2025-11-29 08:28:22.874 226310 DEBUG oslo_concurrency.lockutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "refresh_cache-bf4c2292-18d7-4c4b-b97e-abb227923156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:28:22 np0005539564 nova_compute[226295]: 2025-11-29 08:28:22.874 226310 DEBUG oslo_concurrency.lockutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquired lock "refresh_cache-bf4c2292-18d7-4c4b-b97e-abb227923156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:28:22 np0005539564 nova_compute[226295]: 2025-11-29 08:28:22.874 226310 DEBUG nova.network.neutron [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:28:22 np0005539564 nova_compute[226295]: 2025-11-29 08:28:22.987 226310 DEBUG nova.compute.manager [req-b4ea3047-a9d9-4109-b08a-b9b20d39945e req-42b29505-ad6a-416b-b9c6-2cc29732d6d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Received event network-changed-bec1fbdf-d4dc-4b2c-af66-9ba123464651 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:22 np0005539564 nova_compute[226295]: 2025-11-29 08:28:22.988 226310 DEBUG nova.compute.manager [req-b4ea3047-a9d9-4109-b08a-b9b20d39945e req-42b29505-ad6a-416b-b9c6-2cc29732d6d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Refreshing instance network info cache due to event network-changed-bec1fbdf-d4dc-4b2c-af66-9ba123464651. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:28:22 np0005539564 nova_compute[226295]: 2025-11-29 08:28:22.989 226310 DEBUG oslo_concurrency.lockutils [req-b4ea3047-a9d9-4109-b08a-b9b20d39945e req-42b29505-ad6a-416b-b9c6-2cc29732d6d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-bf4c2292-18d7-4c4b-b97e-abb227923156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:28:23 np0005539564 nova_compute[226295]: 2025-11-29 08:28:23.137 226310 DEBUG nova.network.neutron [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:28:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:23.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:23 np0005539564 nova_compute[226295]: 2025-11-29 08:28:23.481 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:23.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:24 np0005539564 nova_compute[226295]: 2025-11-29 08:28:24.231 226310 DEBUG nova.network.neutron [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Updating instance_info_cache with network_info: [{"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:28:24 np0005539564 nova_compute[226295]: 2025-11-29 08:28:24.283 226310 DEBUG oslo_concurrency.lockutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Releasing lock "refresh_cache-bf4c2292-18d7-4c4b-b97e-abb227923156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:28:24 np0005539564 nova_compute[226295]: 2025-11-29 08:28:24.284 226310 DEBUG nova.compute.manager [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Instance network_info: |[{"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:28:24 np0005539564 nova_compute[226295]: 2025-11-29 08:28:24.285 226310 DEBUG oslo_concurrency.lockutils [req-b4ea3047-a9d9-4109-b08a-b9b20d39945e req-42b29505-ad6a-416b-b9c6-2cc29732d6d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-bf4c2292-18d7-4c4b-b97e-abb227923156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:28:24 np0005539564 nova_compute[226295]: 2025-11-29 08:28:24.285 226310 DEBUG nova.network.neutron [req-b4ea3047-a9d9-4109-b08a-b9b20d39945e req-42b29505-ad6a-416b-b9c6-2cc29732d6d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Refreshing network info cache for port bec1fbdf-d4dc-4b2c-af66-9ba123464651 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:28:24 np0005539564 nova_compute[226295]: 2025-11-29 08:28:24.292 226310 DEBUG nova.virt.libvirt.driver [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Start _get_guest_xml network_info=[{"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:28:24 np0005539564 nova_compute[226295]: 2025-11-29 08:28:24.297 226310 WARNING nova.virt.libvirt.driver [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:28:24 np0005539564 nova_compute[226295]: 2025-11-29 08:28:24.302 226310 DEBUG nova.virt.libvirt.host [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:28:24 np0005539564 nova_compute[226295]: 2025-11-29 08:28:24.303 226310 DEBUG nova.virt.libvirt.host [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:28:24 np0005539564 nova_compute[226295]: 2025-11-29 08:28:24.306 226310 DEBUG nova.virt.libvirt.host [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:28:24 np0005539564 nova_compute[226295]: 2025-11-29 08:28:24.307 226310 DEBUG nova.virt.libvirt.host [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:28:24 np0005539564 nova_compute[226295]: 2025-11-29 08:28:24.309 226310 DEBUG nova.virt.libvirt.driver [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:28:24 np0005539564 nova_compute[226295]: 2025-11-29 08:28:24.309 226310 DEBUG nova.virt.hardware [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:28:24 np0005539564 nova_compute[226295]: 2025-11-29 08:28:24.310 226310 DEBUG nova.virt.hardware [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:28:24 np0005539564 nova_compute[226295]: 2025-11-29 08:28:24.310 226310 DEBUG nova.virt.hardware [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:28:24 np0005539564 nova_compute[226295]: 2025-11-29 08:28:24.310 226310 DEBUG nova.virt.hardware [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:28:24 np0005539564 nova_compute[226295]: 2025-11-29 08:28:24.311 226310 DEBUG nova.virt.hardware [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:28:24 np0005539564 nova_compute[226295]: 2025-11-29 08:28:24.311 226310 DEBUG nova.virt.hardware [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:28:24 np0005539564 nova_compute[226295]: 2025-11-29 08:28:24.311 226310 DEBUG nova.virt.hardware [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:28:24 np0005539564 nova_compute[226295]: 2025-11-29 08:28:24.312 226310 DEBUG nova.virt.hardware [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:28:24 np0005539564 nova_compute[226295]: 2025-11-29 08:28:24.312 226310 DEBUG nova.virt.hardware [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:28:24 np0005539564 nova_compute[226295]: 2025-11-29 08:28:24.312 226310 DEBUG nova.virt.hardware [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:28:24 np0005539564 nova_compute[226295]: 2025-11-29 08:28:24.312 226310 DEBUG nova.virt.hardware [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:28:24 np0005539564 nova_compute[226295]: 2025-11-29 08:28:24.316 226310 DEBUG oslo_concurrency.processutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:28:24 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3816501715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:28:24 np0005539564 nova_compute[226295]: 2025-11-29 08:28:24.905 226310 DEBUG oslo_concurrency.processutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:24 np0005539564 nova_compute[226295]: 2025-11-29 08:28:24.946 226310 DEBUG nova.storage.rbd_utils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] rbd image bf4c2292-18d7-4c4b-b97e-abb227923156_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:24 np0005539564 nova_compute[226295]: 2025-11-29 08:28:24.952 226310 DEBUG oslo_concurrency.processutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:25.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:28:25 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/303436852' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.442 226310 DEBUG oslo_concurrency.processutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.445 226310 DEBUG nova.virt.libvirt.vif [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:28:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-958672118',display_name='tempest-AttachVolumeTestJSON-server-958672118',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-958672118',id=165,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFdrbbSC0C45QsA4WoH0skfLoEJuEvLicsgsNB+yVNBuuMlISuhPWp9oFNtgFMFejIvvaZEt/QMuFcGBBZNxI6yPKa6qrk6Y9WbQ5vb9c2PbERoEO1CjXu5JuNOGCArooQ==',key_name='tempest-keypair-1718432726',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eb0810bf6f5b4eb59638b7a2cf59ed5b',ramdisk_id='',reservation_id='r-unuosnba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeTestJSON-942041170',owner_user_name='tempest-AttachVolumeTestJSON-942041170-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:28:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a362a419f6a492aae2f102ad2bbd5e9',uuid=bf4c2292-18d7-4c4b-b97e-abb227923156,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.445 226310 DEBUG nova.network.os_vif_util [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Converting VIF {"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.446 226310 DEBUG nova.network.os_vif_util [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=bec1fbdf-d4dc-4b2c-af66-9ba123464651,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec1fbdf-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.448 226310 DEBUG nova.objects.instance [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'pci_devices' on Instance uuid bf4c2292-18d7-4c4b-b97e-abb227923156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.519 226310 DEBUG nova.virt.libvirt.driver [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:28:25 np0005539564 nova_compute[226295]:  <uuid>bf4c2292-18d7-4c4b-b97e-abb227923156</uuid>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:  <name>instance-000000a5</name>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <nova:name>tempest-AttachVolumeTestJSON-server-958672118</nova:name>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:28:24</nova:creationTime>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:28:25 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:        <nova:user uuid="7a362a419f6a492aae2f102ad2bbd5e9">tempest-AttachVolumeTestJSON-942041170-project-member</nova:user>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:        <nova:project uuid="eb0810bf6f5b4eb59638b7a2cf59ed5b">tempest-AttachVolumeTestJSON-942041170</nova:project>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:        <nova:port uuid="bec1fbdf-d4dc-4b2c-af66-9ba123464651">
Nov 29 03:28:25 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <entry name="serial">bf4c2292-18d7-4c4b-b97e-abb227923156</entry>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <entry name="uuid">bf4c2292-18d7-4c4b-b97e-abb227923156</entry>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/bf4c2292-18d7-4c4b-b97e-abb227923156_disk">
Nov 29 03:28:25 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:28:25 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/bf4c2292-18d7-4c4b-b97e-abb227923156_disk.config">
Nov 29 03:28:25 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:28:25 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:68:a4:7d"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <target dev="tapbec1fbdf-d4"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/bf4c2292-18d7-4c4b-b97e-abb227923156/console.log" append="off"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:28:25 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:28:25 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:28:25 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:28:25 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.520 226310 DEBUG nova.compute.manager [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Preparing to wait for external event network-vif-plugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.520 226310 DEBUG oslo_concurrency.lockutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.520 226310 DEBUG oslo_concurrency.lockutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.520 226310 DEBUG oslo_concurrency.lockutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.521 226310 DEBUG nova.virt.libvirt.vif [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:28:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-958672118',display_name='tempest-AttachVolumeTestJSON-server-958672118',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-958672118',id=165,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFdrbbSC0C45QsA4WoH0skfLoEJuEvLicsgsNB+yVNBuuMlISuhPWp9oFNtgFMFejIvvaZEt/QMuFcGBBZNxI6yPKa6qrk6Y9WbQ5vb9c2PbERoEO1CjXu5JuNOGCArooQ==',key_name='tempest-keypair-1718432726',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eb0810bf6f5b4eb59638b7a2cf59ed5b',ramdisk_id='',reservation_id='r-unuosnba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeTestJSON-942041170',owner_user_name='tempest-AttachVolumeTestJSON-942041170-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:28:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a362a419f6a492aae2f102ad2bbd5e9',uuid=bf4c2292-18d7-4c4b-b97e-abb227923156,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.521 226310 DEBUG nova.network.os_vif_util [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Converting VIF {"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.521 226310 DEBUG nova.network.os_vif_util [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=bec1fbdf-d4dc-4b2c-af66-9ba123464651,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec1fbdf-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.522 226310 DEBUG os_vif [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=bec1fbdf-d4dc-4b2c-af66-9ba123464651,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec1fbdf-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:28:25 np0005539564 podman[286241]: 2025-11-29 08:28:25.523450748 +0000 UTC m=+0.073788507 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.522 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.523 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.523 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.527 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.528 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbec1fbdf-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.528 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbec1fbdf-d4, col_values=(('external_ids', {'iface-id': 'bec1fbdf-d4dc-4b2c-af66-9ba123464651', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:a4:7d', 'vm-uuid': 'bf4c2292-18d7-4c4b-b97e-abb227923156'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.529 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:25 np0005539564 NetworkManager[48997]: <info>  [1764404905.5307] manager: (tapbec1fbdf-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/282)
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.531 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.538 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.539 226310 INFO os_vif [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=bec1fbdf-d4dc-4b2c-af66-9ba123464651,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec1fbdf-d4')#033[00m
Nov 29 03:28:25 np0005539564 podman[286237]: 2025-11-29 08:28:25.541364802 +0000 UTC m=+0.101164187 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:28:25 np0005539564 podman[286240]: 2025-11-29 08:28:25.553661246 +0000 UTC m=+0.100460359 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.589 226310 DEBUG nova.virt.libvirt.driver [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.589 226310 DEBUG nova.virt.libvirt.driver [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.589 226310 DEBUG nova.virt.libvirt.driver [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] No VIF found with MAC fa:16:3e:68:a4:7d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.590 226310 INFO nova.virt.libvirt.driver [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Using config drive#033[00m
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.617 226310 DEBUG nova.storage.rbd_utils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] rbd image bf4c2292-18d7-4c4b-b97e-abb227923156_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:25.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:25 np0005539564 nova_compute[226295]: 2025-11-29 08:28:25.998 226310 INFO nova.virt.libvirt.driver [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Creating config drive at /var/lib/nova/instances/bf4c2292-18d7-4c4b-b97e-abb227923156/disk.config#033[00m
Nov 29 03:28:26 np0005539564 nova_compute[226295]: 2025-11-29 08:28:26.013 226310 DEBUG oslo_concurrency.processutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bf4c2292-18d7-4c4b-b97e-abb227923156/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptl31tp9f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:26 np0005539564 nova_compute[226295]: 2025-11-29 08:28:26.156 226310 DEBUG oslo_concurrency.processutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bf4c2292-18d7-4c4b-b97e-abb227923156/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptl31tp9f" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:26 np0005539564 nova_compute[226295]: 2025-11-29 08:28:26.193 226310 DEBUG nova.storage.rbd_utils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] rbd image bf4c2292-18d7-4c4b-b97e-abb227923156_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:26 np0005539564 nova_compute[226295]: 2025-11-29 08:28:26.199 226310 DEBUG oslo_concurrency.processutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bf4c2292-18d7-4c4b-b97e-abb227923156/disk.config bf4c2292-18d7-4c4b-b97e-abb227923156_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:26 np0005539564 nova_compute[226295]: 2025-11-29 08:28:26.352 226310 DEBUG nova.network.neutron [req-b4ea3047-a9d9-4109-b08a-b9b20d39945e req-42b29505-ad6a-416b-b9c6-2cc29732d6d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Updated VIF entry in instance network info cache for port bec1fbdf-d4dc-4b2c-af66-9ba123464651. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:28:26 np0005539564 nova_compute[226295]: 2025-11-29 08:28:26.354 226310 DEBUG nova.network.neutron [req-b4ea3047-a9d9-4109-b08a-b9b20d39945e req-42b29505-ad6a-416b-b9c6-2cc29732d6d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Updating instance_info_cache with network_info: [{"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:28:26 np0005539564 nova_compute[226295]: 2025-11-29 08:28:26.375 226310 DEBUG oslo_concurrency.lockutils [req-b4ea3047-a9d9-4109-b08a-b9b20d39945e req-42b29505-ad6a-416b-b9c6-2cc29732d6d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-bf4c2292-18d7-4c4b-b97e-abb227923156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:28:26 np0005539564 nova_compute[226295]: 2025-11-29 08:28:26.415 226310 DEBUG oslo_concurrency.processutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bf4c2292-18d7-4c4b-b97e-abb227923156/disk.config bf4c2292-18d7-4c4b-b97e-abb227923156_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.216s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:26 np0005539564 nova_compute[226295]: 2025-11-29 08:28:26.416 226310 INFO nova.virt.libvirt.driver [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Deleting local config drive /var/lib/nova/instances/bf4c2292-18d7-4c4b-b97e-abb227923156/disk.config because it was imported into RBD.#033[00m
Nov 29 03:28:26 np0005539564 NetworkManager[48997]: <info>  [1764404906.5043] manager: (tapbec1fbdf-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/283)
Nov 29 03:28:26 np0005539564 kernel: tapbec1fbdf-d4: entered promiscuous mode
Nov 29 03:28:26 np0005539564 ovn_controller[130591]: 2025-11-29T08:28:26Z|00617|binding|INFO|Claiming lport bec1fbdf-d4dc-4b2c-af66-9ba123464651 for this chassis.
Nov 29 03:28:26 np0005539564 ovn_controller[130591]: 2025-11-29T08:28:26Z|00618|binding|INFO|bec1fbdf-d4dc-4b2c-af66-9ba123464651: Claiming fa:16:3e:68:a4:7d 10.100.0.7
Nov 29 03:28:26 np0005539564 nova_compute[226295]: 2025-11-29 08:28:26.508 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:26 np0005539564 nova_compute[226295]: 2025-11-29 08:28:26.518 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:26.531 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:a4:7d 10.100.0.7'], port_security=['fa:16:3e:68:a4:7d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bf4c2292-18d7-4c4b-b97e-abb227923156', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eb0810bf6f5b4eb59638b7a2cf59ed5b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '02abc3ce-c8f1-4034-8c00-97d80a9dca82', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1429573d-31ea-4b00-8580-1031fbde1ea5, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=bec1fbdf-d4dc-4b2c-af66-9ba123464651) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:28:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:26.532 139780 INFO neutron.agent.ovn.metadata.agent [-] Port bec1fbdf-d4dc-4b2c-af66-9ba123464651 in datapath a259ebcb-7cce-4363-8e50-c25ed4a3daec bound to our chassis#033[00m
Nov 29 03:28:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:26.534 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a259ebcb-7cce-4363-8e50-c25ed4a3daec#033[00m
Nov 29 03:28:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:26.552 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3c72a54a-cdb7-4f38-9f8d-c9d0e6e744e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:26.553 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa259ebcb-71 in ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:28:26 np0005539564 systemd-machined[190128]: New machine qemu-76-instance-000000a5.
Nov 29 03:28:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:26.557 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa259ebcb-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:28:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:26.557 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f2123b34-54c4-4002-ad3f-c99197a824ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:26.558 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0435f341-3a8c-4ca2-a954-f77ef59d1493]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:26.573 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[d9c349f8-0be2-4c14-9cc6-f41eac6a6b96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:26 np0005539564 systemd[1]: Started Virtual Machine qemu-76-instance-000000a5.
Nov 29 03:28:26 np0005539564 systemd-udevd[286379]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:28:26 np0005539564 nova_compute[226295]: 2025-11-29 08:28:26.607 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:26.609 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c2b6199b-acf6-45d2-ad8a-c01a9e02793b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:26 np0005539564 ovn_controller[130591]: 2025-11-29T08:28:26Z|00619|binding|INFO|Setting lport bec1fbdf-d4dc-4b2c-af66-9ba123464651 ovn-installed in OVS
Nov 29 03:28:26 np0005539564 ovn_controller[130591]: 2025-11-29T08:28:26Z|00620|binding|INFO|Setting lport bec1fbdf-d4dc-4b2c-af66-9ba123464651 up in Southbound
Nov 29 03:28:26 np0005539564 nova_compute[226295]: 2025-11-29 08:28:26.613 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:26 np0005539564 NetworkManager[48997]: <info>  [1764404906.6228] device (tapbec1fbdf-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:28:26 np0005539564 NetworkManager[48997]: <info>  [1764404906.6237] device (tapbec1fbdf-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:28:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:26.656 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[fcbdef18-2328-4db7-9c6f-deb4ffce16b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:26 np0005539564 systemd-udevd[286382]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:28:26 np0005539564 NetworkManager[48997]: <info>  [1764404906.6651] manager: (tapa259ebcb-70): new Veth device (/org/freedesktop/NetworkManager/Devices/284)
Nov 29 03:28:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:26.664 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6387769d-1c0a-4f7c-bbf5-7c5bc40c1d50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:26.714 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[7d5911a5-985e-4819-8a01-9e29be56a4f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:26.719 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[af77e05f-028d-46cb-99b1-e2cbc6ab1cca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:26 np0005539564 NetworkManager[48997]: <info>  [1764404906.7531] device (tapa259ebcb-70): carrier: link connected
Nov 29 03:28:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:26.762 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[0864a469-2d14-49db-b987-62c73bbf1cd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:26.789 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8c7ba999-1fae-4528-9162-71ead23ddaa1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa259ebcb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:16:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 190], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 788143, 'reachable_time': 24119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286409, 'error': None, 'target': 'ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:26.814 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e3515113-de47-414c-b617-40e42aff5450]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef3:1647'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 788143, 'tstamp': 788143}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286410, 'error': None, 'target': 'ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:26.838 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[75623b5c-f6a2-4308-8875-0c2abdb19704]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa259ebcb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:16:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 190], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 788143, 'reachable_time': 24119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286411, 'error': None, 'target': 'ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:26.879 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c13b8ed0-f46a-4e25-9bcb-cd10b3a3ecf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:26.980 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8b03a374-e070-438f-a167-934e1bcf71f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:26.982 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa259ebcb-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:26.982 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:28:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:26.983 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa259ebcb-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:26 np0005539564 NetworkManager[48997]: <info>  [1764404906.9855] manager: (tapa259ebcb-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/285)
Nov 29 03:28:26 np0005539564 nova_compute[226295]: 2025-11-29 08:28:26.985 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:26 np0005539564 kernel: tapa259ebcb-70: entered promiscuous mode
Nov 29 03:28:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:26.991 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa259ebcb-70, col_values=(('external_ids', {'iface-id': '5c5c4b01-f2eb-4ea5-9341-cfa577051cf7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:26 np0005539564 ovn_controller[130591]: 2025-11-29T08:28:26Z|00621|binding|INFO|Releasing lport 5c5c4b01-f2eb-4ea5-9341-cfa577051cf7 from this chassis (sb_readonly=0)
Nov 29 03:28:26 np0005539564 nova_compute[226295]: 2025-11-29 08:28:26.994 226310 DEBUG nova.compute.manager [req-93d6e514-2b7d-4362-8f2a-1ed7500dac0d req-4571d0b7-bb6c-4453-9482-f6ac4e97cf18 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Received event network-vif-plugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:26 np0005539564 nova_compute[226295]: 2025-11-29 08:28:26.995 226310 DEBUG oslo_concurrency.lockutils [req-93d6e514-2b7d-4362-8f2a-1ed7500dac0d req-4571d0b7-bb6c-4453-9482-f6ac4e97cf18 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:26 np0005539564 nova_compute[226295]: 2025-11-29 08:28:26.995 226310 DEBUG oslo_concurrency.lockutils [req-93d6e514-2b7d-4362-8f2a-1ed7500dac0d req-4571d0b7-bb6c-4453-9482-f6ac4e97cf18 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:26 np0005539564 nova_compute[226295]: 2025-11-29 08:28:26.996 226310 DEBUG oslo_concurrency.lockutils [req-93d6e514-2b7d-4362-8f2a-1ed7500dac0d req-4571d0b7-bb6c-4453-9482-f6ac4e97cf18 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:26 np0005539564 nova_compute[226295]: 2025-11-29 08:28:26.996 226310 DEBUG nova.compute.manager [req-93d6e514-2b7d-4362-8f2a-1ed7500dac0d req-4571d0b7-bb6c-4453-9482-f6ac4e97cf18 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Processing event network-vif-plugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:28:26 np0005539564 nova_compute[226295]: 2025-11-29 08:28:26.997 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.028 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:27.030 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a259ebcb-7cce-4363-8e50-c25ed4a3daec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a259ebcb-7cce-4363-8e50-c25ed4a3daec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:27.031 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b6ab4b4e-acda-40aa-a780-125b9d245bcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:27.033 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-a259ebcb-7cce-4363-8e50-c25ed4a3daec
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/a259ebcb-7cce-4363-8e50-c25ed4a3daec.pid.haproxy
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID a259ebcb-7cce-4363-8e50-c25ed4a3daec
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:28:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:27.034 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'env', 'PROCESS_TAG=haproxy-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a259ebcb-7cce-4363-8e50-c25ed4a3daec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:28:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:27.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.316 226310 DEBUG nova.compute.manager [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.318 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404907.315205, bf4c2292-18d7-4c4b-b97e-abb227923156 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.320 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] VM Started (Lifecycle Event)#033[00m
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.324 226310 DEBUG nova.virt.libvirt.driver [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.330 226310 INFO nova.virt.libvirt.driver [-] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Instance spawned successfully.#033[00m
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.330 226310 DEBUG nova.virt.libvirt.driver [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.350 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.357 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.363 226310 DEBUG nova.virt.libvirt.driver [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.363 226310 DEBUG nova.virt.libvirt.driver [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.364 226310 DEBUG nova.virt.libvirt.driver [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.364 226310 DEBUG nova.virt.libvirt.driver [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.365 226310 DEBUG nova.virt.libvirt.driver [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.366 226310 DEBUG nova.virt.libvirt.driver [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.391 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.391 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404907.3156111, bf4c2292-18d7-4c4b-b97e-abb227923156 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.391 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.407 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.413 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404907.3211231, bf4c2292-18d7-4c4b-b97e-abb227923156 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.413 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.436 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.442 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.454 226310 INFO nova.compute.manager [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Took 6.77 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.455 226310 DEBUG nova.compute.manager [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.468 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:28:27 np0005539564 podman[286484]: 2025-11-29 08:28:27.475441008 +0000 UTC m=+0.085030713 container create 0a0807d4716df2b3af3032057b3799e5d5fd317a68ddd2ea1effac4f16240524 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.515 226310 INFO nova.compute.manager [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Took 7.76 seconds to build instance.#033[00m
Nov 29 03:28:27 np0005539564 systemd[1]: Started libpod-conmon-0a0807d4716df2b3af3032057b3799e5d5fd317a68ddd2ea1effac4f16240524.scope.
Nov 29 03:28:27 np0005539564 podman[286484]: 2025-11-29 08:28:27.436426832 +0000 UTC m=+0.046016607 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:28:27 np0005539564 nova_compute[226295]: 2025-11-29 08:28:27.529 226310 DEBUG oslo_concurrency.lockutils [None req-3b17cf1c-7580-4779-bbad-12c52b24086a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:27 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:28:27 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdf8c257fbb2016fea3315b4f0572b30acc3750345478adb5f8feec71cf042f3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:28:27 np0005539564 podman[286484]: 2025-11-29 08:28:27.584468277 +0000 UTC m=+0.194057982 container init 0a0807d4716df2b3af3032057b3799e5d5fd317a68ddd2ea1effac4f16240524 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:28:27 np0005539564 podman[286484]: 2025-11-29 08:28:27.595351471 +0000 UTC m=+0.204941166 container start 0a0807d4716df2b3af3032057b3799e5d5fd317a68ddd2ea1effac4f16240524 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:28:27 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[286499]: [NOTICE]   (286503) : New worker (286505) forked
Nov 29 03:28:27 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[286499]: [NOTICE]   (286503) : Loading success.
Nov 29 03:28:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:27.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:28 np0005539564 nova_compute[226295]: 2025-11-29 08:28:28.484 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e363 e363: 3 total, 3 up, 3 in
Nov 29 03:28:29 np0005539564 nova_compute[226295]: 2025-11-29 08:28:29.092 226310 DEBUG nova.compute.manager [req-f4ec3bd8-b02f-4500-aa0b-60239f37fac5 req-0bae2b29-d74a-4f37-ab95-6a56e4a2a203 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Received event network-vif-plugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:29 np0005539564 nova_compute[226295]: 2025-11-29 08:28:29.092 226310 DEBUG oslo_concurrency.lockutils [req-f4ec3bd8-b02f-4500-aa0b-60239f37fac5 req-0bae2b29-d74a-4f37-ab95-6a56e4a2a203 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:29 np0005539564 nova_compute[226295]: 2025-11-29 08:28:29.093 226310 DEBUG oslo_concurrency.lockutils [req-f4ec3bd8-b02f-4500-aa0b-60239f37fac5 req-0bae2b29-d74a-4f37-ab95-6a56e4a2a203 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:29 np0005539564 nova_compute[226295]: 2025-11-29 08:28:29.093 226310 DEBUG oslo_concurrency.lockutils [req-f4ec3bd8-b02f-4500-aa0b-60239f37fac5 req-0bae2b29-d74a-4f37-ab95-6a56e4a2a203 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:29 np0005539564 nova_compute[226295]: 2025-11-29 08:28:29.094 226310 DEBUG nova.compute.manager [req-f4ec3bd8-b02f-4500-aa0b-60239f37fac5 req-0bae2b29-d74a-4f37-ab95-6a56e4a2a203 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] No waiting events found dispatching network-vif-plugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:28:29 np0005539564 nova_compute[226295]: 2025-11-29 08:28:29.094 226310 WARNING nova.compute.manager [req-f4ec3bd8-b02f-4500-aa0b-60239f37fac5 req-0bae2b29-d74a-4f37-ab95-6a56e4a2a203 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Received unexpected event network-vif-plugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:28:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:29.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:29.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:29 np0005539564 nova_compute[226295]: 2025-11-29 08:28:29.919 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:29 np0005539564 NetworkManager[48997]: <info>  [1764404909.9223] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/286)
Nov 29 03:28:29 np0005539564 NetworkManager[48997]: <info>  [1764404909.9230] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/287)
Nov 29 03:28:30 np0005539564 nova_compute[226295]: 2025-11-29 08:28:30.215 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:30 np0005539564 ovn_controller[130591]: 2025-11-29T08:28:30Z|00622|binding|INFO|Releasing lport 5c5c4b01-f2eb-4ea5-9341-cfa577051cf7 from this chassis (sb_readonly=0)
Nov 29 03:28:30 np0005539564 nova_compute[226295]: 2025-11-29 08:28:30.247 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:30 np0005539564 nova_compute[226295]: 2025-11-29 08:28:30.272 226310 DEBUG nova.compute.manager [req-95ecfa4d-e67b-4e94-9105-aab46307a8e5 req-9b47edf9-145a-4037-bfa6-9715d2cf1457 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Received event network-changed-bec1fbdf-d4dc-4b2c-af66-9ba123464651 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:30 np0005539564 nova_compute[226295]: 2025-11-29 08:28:30.272 226310 DEBUG nova.compute.manager [req-95ecfa4d-e67b-4e94-9105-aab46307a8e5 req-9b47edf9-145a-4037-bfa6-9715d2cf1457 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Refreshing instance network info cache due to event network-changed-bec1fbdf-d4dc-4b2c-af66-9ba123464651. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:28:30 np0005539564 nova_compute[226295]: 2025-11-29 08:28:30.273 226310 DEBUG oslo_concurrency.lockutils [req-95ecfa4d-e67b-4e94-9105-aab46307a8e5 req-9b47edf9-145a-4037-bfa6-9715d2cf1457 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-bf4c2292-18d7-4c4b-b97e-abb227923156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:28:30 np0005539564 nova_compute[226295]: 2025-11-29 08:28:30.273 226310 DEBUG oslo_concurrency.lockutils [req-95ecfa4d-e67b-4e94-9105-aab46307a8e5 req-9b47edf9-145a-4037-bfa6-9715d2cf1457 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-bf4c2292-18d7-4c4b-b97e-abb227923156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:28:30 np0005539564 nova_compute[226295]: 2025-11-29 08:28:30.273 226310 DEBUG nova.network.neutron [req-95ecfa4d-e67b-4e94-9105-aab46307a8e5 req-9b47edf9-145a-4037-bfa6-9715d2cf1457 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Refreshing network info cache for port bec1fbdf-d4dc-4b2c-af66-9ba123464651 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:28:30 np0005539564 nova_compute[226295]: 2025-11-29 08:28:30.530 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:31.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:31 np0005539564 nova_compute[226295]: 2025-11-29 08:28:31.769 226310 DEBUG nova.network.neutron [req-95ecfa4d-e67b-4e94-9105-aab46307a8e5 req-9b47edf9-145a-4037-bfa6-9715d2cf1457 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Updated VIF entry in instance network info cache for port bec1fbdf-d4dc-4b2c-af66-9ba123464651. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:28:31 np0005539564 nova_compute[226295]: 2025-11-29 08:28:31.769 226310 DEBUG nova.network.neutron [req-95ecfa4d-e67b-4e94-9105-aab46307a8e5 req-9b47edf9-145a-4037-bfa6-9715d2cf1457 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Updating instance_info_cache with network_info: [{"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:28:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:31.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:32 np0005539564 nova_compute[226295]: 2025-11-29 08:28:32.085 226310 DEBUG oslo_concurrency.lockutils [req-95ecfa4d-e67b-4e94-9105-aab46307a8e5 req-9b47edf9-145a-4037-bfa6-9715d2cf1457 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-bf4c2292-18d7-4c4b-b97e-abb227923156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:28:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:33.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:33 np0005539564 nova_compute[226295]: 2025-11-29 08:28:33.486 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:33.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:35.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:35 np0005539564 nova_compute[226295]: 2025-11-29 08:28:35.534 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:35.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:37.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:37.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:38 np0005539564 nova_compute[226295]: 2025-11-29 08:28:38.488 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:39.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:39.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:40 np0005539564 nova_compute[226295]: 2025-11-29 08:28:40.539 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:41.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:41.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:43 np0005539564 ovn_controller[130591]: 2025-11-29T08:28:43Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:68:a4:7d 10.100.0.7
Nov 29 03:28:43 np0005539564 ovn_controller[130591]: 2025-11-29T08:28:43Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:68:a4:7d 10.100.0.7
Nov 29 03:28:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:43.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:43 np0005539564 nova_compute[226295]: 2025-11-29 08:28:43.491 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:43.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:45.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:45 np0005539564 nova_compute[226295]: 2025-11-29 08:28:45.541 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:45 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:28:45 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:28:45 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:28:45 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:28:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:45.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:46 np0005539564 nova_compute[226295]: 2025-11-29 08:28:46.009 226310 DEBUG oslo_concurrency.lockutils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:46 np0005539564 nova_compute[226295]: 2025-11-29 08:28:46.010 226310 DEBUG oslo_concurrency.lockutils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:46 np0005539564 nova_compute[226295]: 2025-11-29 08:28:46.030 226310 DEBUG nova.compute.manager [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:28:46 np0005539564 nova_compute[226295]: 2025-11-29 08:28:46.160 226310 DEBUG oslo_concurrency.lockutils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:46 np0005539564 nova_compute[226295]: 2025-11-29 08:28:46.160 226310 DEBUG oslo_concurrency.lockutils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:46 np0005539564 nova_compute[226295]: 2025-11-29 08:28:46.171 226310 DEBUG nova.virt.hardware [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:28:46 np0005539564 nova_compute[226295]: 2025-11-29 08:28:46.172 226310 INFO nova.compute.claims [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:28:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:47.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:47 np0005539564 nova_compute[226295]: 2025-11-29 08:28:47.540 226310 DEBUG oslo_concurrency.processutils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:47.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:28:48 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1333799630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:28:48 np0005539564 nova_compute[226295]: 2025-11-29 08:28:48.035 226310 DEBUG oslo_concurrency.processutils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:48 np0005539564 nova_compute[226295]: 2025-11-29 08:28:48.045 226310 DEBUG nova.compute.provider_tree [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:28:48 np0005539564 nova_compute[226295]: 2025-11-29 08:28:48.441 226310 DEBUG nova.scheduler.client.report [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:28:48 np0005539564 nova_compute[226295]: 2025-11-29 08:28:48.494 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:49.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:49 np0005539564 nova_compute[226295]: 2025-11-29 08:28:49.606 226310 DEBUG oslo_concurrency.lockutils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.446s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:49 np0005539564 nova_compute[226295]: 2025-11-29 08:28:49.607 226310 DEBUG nova.compute.manager [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:28:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:49.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:50 np0005539564 nova_compute[226295]: 2025-11-29 08:28:50.546 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:50 np0005539564 nova_compute[226295]: 2025-11-29 08:28:50.576 226310 DEBUG nova.compute.manager [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:28:50 np0005539564 nova_compute[226295]: 2025-11-29 08:28:50.577 226310 DEBUG nova.network.neutron [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:28:50 np0005539564 nova_compute[226295]: 2025-11-29 08:28:50.907 226310 INFO nova.virt.libvirt.driver [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:28:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:51.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:51 np0005539564 nova_compute[226295]: 2025-11-29 08:28:51.368 226310 DEBUG nova.compute.manager [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:28:51 np0005539564 nova_compute[226295]: 2025-11-29 08:28:51.438 226310 DEBUG nova.policy [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd039e57f31de4717a235fc96ebd56559', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '527c6a274d1e478eadfe67139e121185', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:28:51 np0005539564 nova_compute[226295]: 2025-11-29 08:28:51.555 226310 INFO nova.virt.block_device [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Booting with volume 32bcf381-1196-418b-a68f-b868a3c26635 at /dev/vda#033[00m
Nov 29 03:28:51 np0005539564 nova_compute[226295]: 2025-11-29 08:28:51.689 226310 DEBUG os_brick.utils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:28:51 np0005539564 nova_compute[226295]: 2025-11-29 08:28:51.691 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:51 np0005539564 nova_compute[226295]: 2025-11-29 08:28:51.705 231810 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:51 np0005539564 nova_compute[226295]: 2025-11-29 08:28:51.705 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[4645b61f-ca29-4f89-8726-5c3e17f31132]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:51 np0005539564 nova_compute[226295]: 2025-11-29 08:28:51.707 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:51 np0005539564 nova_compute[226295]: 2025-11-29 08:28:51.716 231810 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:51 np0005539564 nova_compute[226295]: 2025-11-29 08:28:51.717 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[4524a9ba-2258-491f-9819-96bcb758ecf5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d3b4384eec44', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:51 np0005539564 nova_compute[226295]: 2025-11-29 08:28:51.719 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:51 np0005539564 nova_compute[226295]: 2025-11-29 08:28:51.731 231810 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:51 np0005539564 nova_compute[226295]: 2025-11-29 08:28:51.731 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[30ac32a8-cce0-4fdb-99e5-f724db90a5d5]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:51 np0005539564 nova_compute[226295]: 2025-11-29 08:28:51.733 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[ad50630f-7e50-4deb-a684-d4239482606d]: (4, '2e858761-3292-4a17-b38f-a169c3064289') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:51 np0005539564 nova_compute[226295]: 2025-11-29 08:28:51.734 226310 DEBUG oslo_concurrency.processutils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:51 np0005539564 nova_compute[226295]: 2025-11-29 08:28:51.775 226310 DEBUG oslo_concurrency.processutils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CMD "nvme version" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:51 np0005539564 nova_compute[226295]: 2025-11-29 08:28:51.779 226310 DEBUG os_brick.initiator.connectors.lightos [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:28:51 np0005539564 nova_compute[226295]: 2025-11-29 08:28:51.780 226310 DEBUG os_brick.initiator.connectors.lightos [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:28:51 np0005539564 nova_compute[226295]: 2025-11-29 08:28:51.780 226310 DEBUG os_brick.initiator.connectors.lightos [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:28:51 np0005539564 nova_compute[226295]: 2025-11-29 08:28:51.781 226310 DEBUG os_brick.utils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] <== get_connector_properties: return (90ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d3b4384eec44', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2e858761-3292-4a17-b38f-a169c3064289', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:28:51 np0005539564 nova_compute[226295]: 2025-11-29 08:28:51.781 226310 DEBUG nova.virt.block_device [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Updating existing volume attachment record: 9b787483-1583-4e61-9695-239723e19162 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:28:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:51.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:52 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:28:52 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:28:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:28:52 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3833390503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:28:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:53.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:53 np0005539564 nova_compute[226295]: 2025-11-29 08:28:53.497 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:53.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:53.910 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:28:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:53.910 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:28:53 np0005539564 nova_compute[226295]: 2025-11-29 08:28:53.911 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:55.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:55 np0005539564 nova_compute[226295]: 2025-11-29 08:28:55.554 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:55 np0005539564 nova_compute[226295]: 2025-11-29 08:28:55.695 226310 DEBUG nova.network.neutron [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Successfully created port: eac06205-cdc0-424d-b7e2-7740e0db232d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:28:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:55.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:56 np0005539564 nova_compute[226295]: 2025-11-29 08:28:56.224 226310 DEBUG nova.compute.manager [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:28:56 np0005539564 nova_compute[226295]: 2025-11-29 08:28:56.226 226310 DEBUG nova.virt.libvirt.driver [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:28:56 np0005539564 nova_compute[226295]: 2025-11-29 08:28:56.227 226310 INFO nova.virt.libvirt.driver [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Creating image(s)#033[00m
Nov 29 03:28:56 np0005539564 nova_compute[226295]: 2025-11-29 08:28:56.228 226310 DEBUG nova.virt.libvirt.driver [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:28:56 np0005539564 nova_compute[226295]: 2025-11-29 08:28:56.228 226310 DEBUG nova.virt.libvirt.driver [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Ensure instance console log exists: /var/lib/nova/instances/ef2296eb-4538-4e04-8c0b-42370d9e5b12/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:28:56 np0005539564 nova_compute[226295]: 2025-11-29 08:28:56.229 226310 DEBUG oslo_concurrency.lockutils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:56 np0005539564 nova_compute[226295]: 2025-11-29 08:28:56.230 226310 DEBUG oslo_concurrency.lockutils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:56 np0005539564 nova_compute[226295]: 2025-11-29 08:28:56.230 226310 DEBUG oslo_concurrency.lockutils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:56 np0005539564 podman[286727]: 2025-11-29 08:28:56.54220503 +0000 UTC m=+0.087155200 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 29 03:28:56 np0005539564 podman[286728]: 2025-11-29 08:28:56.551559793 +0000 UTC m=+0.084262381 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:28:56 np0005539564 podman[286726]: 2025-11-29 08:28:56.585200063 +0000 UTC m=+0.125561368 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:28:57 np0005539564 nova_compute[226295]: 2025-11-29 08:28:57.051 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #112. Immutable memtables: 0.
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:28:57.111628) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 112
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404937111715, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 1693, "num_deletes": 255, "total_data_size": 3703389, "memory_usage": 3747080, "flush_reason": "Manual Compaction"}
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #113: started
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404937141303, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 113, "file_size": 2429582, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57179, "largest_seqno": 58867, "table_properties": {"data_size": 2422494, "index_size": 4095, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15891, "raw_average_key_size": 20, "raw_value_size": 2407941, "raw_average_value_size": 3139, "num_data_blocks": 178, "num_entries": 767, "num_filter_entries": 767, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404809, "oldest_key_time": 1764404809, "file_creation_time": 1764404937, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 29734 microseconds, and 13473 cpu microseconds.
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:28:57.141367) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #113: 2429582 bytes OK
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:28:57.141401) [db/memtable_list.cc:519] [default] Level-0 commit table #113 started
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:28:57.144645) [db/memtable_list.cc:722] [default] Level-0 commit table #113: memtable #1 done
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:28:57.144662) EVENT_LOG_v1 {"time_micros": 1764404937144656, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:28:57.144689) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 3695579, prev total WAL file size 3695579, number of live WAL files 2.
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000109.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:28:57.146074) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [113(2372KB)], [111(10MB)]
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404937146110, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [113], "files_L6": [111], "score": -1, "input_data_size": 13197830, "oldest_snapshot_seqno": -1}
Nov 29 03:28:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:57.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #114: 8880 keys, 11321328 bytes, temperature: kUnknown
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404937255745, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 114, "file_size": 11321328, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11263916, "index_size": 34132, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22213, "raw_key_size": 231285, "raw_average_key_size": 26, "raw_value_size": 11107966, "raw_average_value_size": 1250, "num_data_blocks": 1322, "num_entries": 8880, "num_filter_entries": 8880, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764404937, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 114, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:28:57.256113) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 11321328 bytes
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:28:57.258519) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.3 rd, 103.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 10.3 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(10.1) write-amplify(4.7) OK, records in: 9406, records dropped: 526 output_compression: NoCompression
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:28:57.258564) EVENT_LOG_v1 {"time_micros": 1764404937258548, "job": 70, "event": "compaction_finished", "compaction_time_micros": 109731, "compaction_time_cpu_micros": 26593, "output_level": 6, "num_output_files": 1, "total_output_size": 11321328, "num_input_records": 9406, "num_output_records": 8880, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404937259275, "job": 70, "event": "table_file_deletion", "file_number": 113}
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000111.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404937261632, "job": 70, "event": "table_file_deletion", "file_number": 111}
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:28:57.145977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:28:57.261789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:28:57.261797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:28:57.261800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:28:57.261803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:28:57.261805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:28:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:57.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:28:58 np0005539564 nova_compute[226295]: 2025-11-29 08:28:58.415 226310 DEBUG nova.network.neutron [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Successfully updated port: eac06205-cdc0-424d-b7e2-7740e0db232d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:28:58 np0005539564 nova_compute[226295]: 2025-11-29 08:28:58.498 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:59 np0005539564 nova_compute[226295]: 2025-11-29 08:28:59.161 226310 DEBUG oslo_concurrency.lockutils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "refresh_cache-ef2296eb-4538-4e04-8c0b-42370d9e5b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:28:59 np0005539564 nova_compute[226295]: 2025-11-29 08:28:59.162 226310 DEBUG oslo_concurrency.lockutils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquired lock "refresh_cache-ef2296eb-4538-4e04-8c0b-42370d9e5b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:28:59 np0005539564 nova_compute[226295]: 2025-11-29 08:28:59.162 226310 DEBUG nova.network.neutron [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:28:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:59.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:59 np0005539564 nova_compute[226295]: 2025-11-29 08:28:59.297 226310 DEBUG nova.compute.manager [req-5920b981-8d9f-486a-aa5d-d673d396f5d8 req-e64e1ef2-a10c-4721-839e-afa195fb845f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Received event network-changed-eac06205-cdc0-424d-b7e2-7740e0db232d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:59 np0005539564 nova_compute[226295]: 2025-11-29 08:28:59.298 226310 DEBUG nova.compute.manager [req-5920b981-8d9f-486a-aa5d-d673d396f5d8 req-e64e1ef2-a10c-4721-839e-afa195fb845f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Refreshing instance network info cache due to event network-changed-eac06205-cdc0-424d-b7e2-7740e0db232d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:28:59 np0005539564 nova_compute[226295]: 2025-11-29 08:28:59.298 226310 DEBUG oslo_concurrency.lockutils [req-5920b981-8d9f-486a-aa5d-d673d396f5d8 req-e64e1ef2-a10c-4721-839e-afa195fb845f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-ef2296eb-4538-4e04-8c0b-42370d9e5b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:28:59 np0005539564 nova_compute[226295]: 2025-11-29 08:28:59.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:59 np0005539564 nova_compute[226295]: 2025-11-29 08:28:59.775 226310 DEBUG nova.network.neutron [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:28:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:28:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:59.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:28:59.914 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:00 np0005539564 nova_compute[226295]: 2025-11-29 08:29:00.529 226310 DEBUG nova.network.neutron [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Updating instance_info_cache with network_info: [{"id": "eac06205-cdc0-424d-b7e2-7740e0db232d", "address": "fa:16:3e:fb:c3:a9", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeac06205-cd", "ovs_interfaceid": "eac06205-cdc0-424d-b7e2-7740e0db232d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:00 np0005539564 nova_compute[226295]: 2025-11-29 08:29:00.557 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:01.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:01 np0005539564 nova_compute[226295]: 2025-11-29 08:29:01.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:01 np0005539564 nova_compute[226295]: 2025-11-29 08:29:01.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:01 np0005539564 nova_compute[226295]: 2025-11-29 08:29:01.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:29:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:01.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:02 np0005539564 nova_compute[226295]: 2025-11-29 08:29:02.064 226310 DEBUG oslo_concurrency.lockutils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Releasing lock "refresh_cache-ef2296eb-4538-4e04-8c0b-42370d9e5b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:29:02 np0005539564 nova_compute[226295]: 2025-11-29 08:29:02.064 226310 DEBUG nova.compute.manager [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Instance network_info: |[{"id": "eac06205-cdc0-424d-b7e2-7740e0db232d", "address": "fa:16:3e:fb:c3:a9", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeac06205-cd", "ovs_interfaceid": "eac06205-cdc0-424d-b7e2-7740e0db232d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:29:02 np0005539564 nova_compute[226295]: 2025-11-29 08:29:02.065 226310 DEBUG oslo_concurrency.lockutils [req-5920b981-8d9f-486a-aa5d-d673d396f5d8 req-e64e1ef2-a10c-4721-839e-afa195fb845f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-ef2296eb-4538-4e04-8c0b-42370d9e5b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:29:02 np0005539564 nova_compute[226295]: 2025-11-29 08:29:02.065 226310 DEBUG nova.network.neutron [req-5920b981-8d9f-486a-aa5d-d673d396f5d8 req-e64e1ef2-a10c-4721-839e-afa195fb845f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Refreshing network info cache for port eac06205-cdc0-424d-b7e2-7740e0db232d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:29:02 np0005539564 nova_compute[226295]: 2025-11-29 08:29:02.068 226310 DEBUG nova.virt.libvirt.driver [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Start _get_guest_xml network_info=[{"id": "eac06205-cdc0-424d-b7e2-7740e0db232d", "address": "fa:16:3e:fb:c3:a9", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeac06205-cd", "ovs_interfaceid": "eac06205-cdc0-424d-b7e2-7740e0db232d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-32bcf381-1196-418b-a68f-b868a3c26635', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '32bcf381-1196-418b-a68f-b868a3c26635', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'ef2296eb-4538-4e04-8c0b-42370d9e5b12', 'attached_at': '', 'detached_at': '', 'volume_id': '32bcf381-1196-418b-a68f-b868a3c26635', 'serial': '32bcf381-1196-418b-a68f-b868a3c26635'}, 'guest_format': None, 'delete_on_termination': False, 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': '9b787483-1583-4e61-9695-239723e19162', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:29:02 np0005539564 nova_compute[226295]: 2025-11-29 08:29:02.075 226310 WARNING nova.virt.libvirt.driver [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:29:02 np0005539564 nova_compute[226295]: 2025-11-29 08:29:02.080 226310 DEBUG nova.virt.libvirt.host [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:29:02 np0005539564 nova_compute[226295]: 2025-11-29 08:29:02.081 226310 DEBUG nova.virt.libvirt.host [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:29:02 np0005539564 nova_compute[226295]: 2025-11-29 08:29:02.085 226310 DEBUG nova.virt.libvirt.host [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:29:02 np0005539564 nova_compute[226295]: 2025-11-29 08:29:02.086 226310 DEBUG nova.virt.libvirt.host [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:29:02 np0005539564 nova_compute[226295]: 2025-11-29 08:29:02.087 226310 DEBUG nova.virt.libvirt.driver [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:29:02 np0005539564 nova_compute[226295]: 2025-11-29 08:29:02.087 226310 DEBUG nova.virt.hardware [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:29:02 np0005539564 nova_compute[226295]: 2025-11-29 08:29:02.088 226310 DEBUG nova.virt.hardware [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:29:02 np0005539564 nova_compute[226295]: 2025-11-29 08:29:02.088 226310 DEBUG nova.virt.hardware [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:29:02 np0005539564 nova_compute[226295]: 2025-11-29 08:29:02.088 226310 DEBUG nova.virt.hardware [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:29:02 np0005539564 nova_compute[226295]: 2025-11-29 08:29:02.089 226310 DEBUG nova.virt.hardware [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:29:02 np0005539564 nova_compute[226295]: 2025-11-29 08:29:02.089 226310 DEBUG nova.virt.hardware [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:29:02 np0005539564 nova_compute[226295]: 2025-11-29 08:29:02.089 226310 DEBUG nova.virt.hardware [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:29:02 np0005539564 nova_compute[226295]: 2025-11-29 08:29:02.089 226310 DEBUG nova.virt.hardware [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:29:02 np0005539564 nova_compute[226295]: 2025-11-29 08:29:02.089 226310 DEBUG nova.virt.hardware [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:29:02 np0005539564 nova_compute[226295]: 2025-11-29 08:29:02.090 226310 DEBUG nova.virt.hardware [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:29:02 np0005539564 nova_compute[226295]: 2025-11-29 08:29:02.090 226310 DEBUG nova.virt.hardware [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:29:02 np0005539564 nova_compute[226295]: 2025-11-29 08:29:02.119 226310 DEBUG nova.storage.rbd_utils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] rbd image ef2296eb-4538-4e04-8c0b-42370d9e5b12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:02 np0005539564 nova_compute[226295]: 2025-11-29 08:29:02.122 226310 DEBUG oslo_concurrency.processutils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:02 np0005539564 nova_compute[226295]: 2025-11-29 08:29:02.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:29:02 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2060121373' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:29:02 np0005539564 nova_compute[226295]: 2025-11-29 08:29:02.670 226310 DEBUG oslo_concurrency.processutils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:03.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:03 np0005539564 nova_compute[226295]: 2025-11-29 08:29:03.501 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:03.744 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:03.744 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:03.745 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:03.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:04 np0005539564 nova_compute[226295]: 2025-11-29 08:29:04.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:04 np0005539564 nova_compute[226295]: 2025-11-29 08:29:04.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:29:04 np0005539564 nova_compute[226295]: 2025-11-29 08:29:04.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:29:04 np0005539564 nova_compute[226295]: 2025-11-29 08:29:04.703 226310 DEBUG nova.virt.libvirt.vif [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:28:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1293570209',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1293570209',id=168,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDLuAg2lLvJL1IbHQI4zWjduPL00fGBTgnUuLmVxh8Papw1HN8YCJ1MjiVOY2IjiYFlPS7NCeNdc1wi8bfIbI4zqr01CElkg8VYpaZv/gY5PmkQnremSmt7jl09ZoO4cYg==',key_name='tempest-TestInstancesWithCinderVolumes-1453989920',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='527c6a274d1e478eadfe67139e121185',ramdisk_id='',reservation_id='r-l0g9m97g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-663978016',owner_user_name='tempest-TestInstancesWithCinderVolumes-663978016-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:28:51Z,user_data=None,user_id='d039e57f31de4717a235fc96ebd56559',uuid=ef2296eb-4538-4e04-8c0b-42370d9e5b12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eac06205-cdc0-424d-b7e2-7740e0db232d", "address": "fa:16:3e:fb:c3:a9", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeac06205-cd", "ovs_interfaceid": "eac06205-cdc0-424d-b7e2-7740e0db232d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:29:04 np0005539564 nova_compute[226295]: 2025-11-29 08:29:04.705 226310 DEBUG nova.network.os_vif_util [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Converting VIF {"id": "eac06205-cdc0-424d-b7e2-7740e0db232d", "address": "fa:16:3e:fb:c3:a9", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeac06205-cd", "ovs_interfaceid": "eac06205-cdc0-424d-b7e2-7740e0db232d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:29:04 np0005539564 nova_compute[226295]: 2025-11-29 08:29:04.706 226310 DEBUG nova.network.os_vif_util [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:c3:a9,bridge_name='br-int',has_traffic_filtering=True,id=eac06205-cdc0-424d-b7e2-7740e0db232d,network=Network(371b699e-06e1-407e-ac77-9768d9a0e76e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeac06205-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:29:04 np0005539564 nova_compute[226295]: 2025-11-29 08:29:04.709 226310 DEBUG nova.objects.instance [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lazy-loading 'pci_devices' on Instance uuid ef2296eb-4538-4e04-8c0b-42370d9e5b12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:04 np0005539564 nova_compute[226295]: 2025-11-29 08:29:04.778 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:29:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:05.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:05 np0005539564 nova_compute[226295]: 2025-11-29 08:29:05.479 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-bf4c2292-18d7-4c4b-b97e-abb227923156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:29:05 np0005539564 nova_compute[226295]: 2025-11-29 08:29:05.480 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-bf4c2292-18d7-4c4b-b97e-abb227923156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:29:05 np0005539564 nova_compute[226295]: 2025-11-29 08:29:05.481 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:29:05 np0005539564 nova_compute[226295]: 2025-11-29 08:29:05.481 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid bf4c2292-18d7-4c4b-b97e-abb227923156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:05 np0005539564 nova_compute[226295]: 2025-11-29 08:29:05.562 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:05 np0005539564 nova_compute[226295]: 2025-11-29 08:29:05.829 226310 DEBUG nova.virt.libvirt.driver [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:29:05 np0005539564 nova_compute[226295]:  <uuid>ef2296eb-4538-4e04-8c0b-42370d9e5b12</uuid>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:  <name>instance-000000a8</name>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <nova:name>tempest-TestInstancesWithCinderVolumes-server-1293570209</nova:name>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:29:02</nova:creationTime>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:29:05 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:        <nova:user uuid="d039e57f31de4717a235fc96ebd56559">tempest-TestInstancesWithCinderVolumes-663978016-project-member</nova:user>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:        <nova:project uuid="527c6a274d1e478eadfe67139e121185">tempest-TestInstancesWithCinderVolumes-663978016</nova:project>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:        <nova:port uuid="eac06205-cdc0-424d-b7e2-7740e0db232d">
Nov 29 03:29:05 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <entry name="serial">ef2296eb-4538-4e04-8c0b-42370d9e5b12</entry>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <entry name="uuid">ef2296eb-4538-4e04-8c0b-42370d9e5b12</entry>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/ef2296eb-4538-4e04-8c0b-42370d9e5b12_disk.config">
Nov 29 03:29:05 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:29:05 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="volumes/volume-32bcf381-1196-418b-a68f-b868a3c26635">
Nov 29 03:29:05 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:29:05 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <serial>32bcf381-1196-418b-a68f-b868a3c26635</serial>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:fb:c3:a9"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <target dev="tapeac06205-cd"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/ef2296eb-4538-4e04-8c0b-42370d9e5b12/console.log" append="off"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:29:05 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:29:05 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:29:05 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:29:05 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:29:05 np0005539564 nova_compute[226295]: 2025-11-29 08:29:05.831 226310 DEBUG nova.compute.manager [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Preparing to wait for external event network-vif-plugged-eac06205-cdc0-424d-b7e2-7740e0db232d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:29:05 np0005539564 nova_compute[226295]: 2025-11-29 08:29:05.832 226310 DEBUG oslo_concurrency.lockutils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:05 np0005539564 nova_compute[226295]: 2025-11-29 08:29:05.832 226310 DEBUG oslo_concurrency.lockutils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:05 np0005539564 nova_compute[226295]: 2025-11-29 08:29:05.833 226310 DEBUG oslo_concurrency.lockutils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:05 np0005539564 nova_compute[226295]: 2025-11-29 08:29:05.834 226310 DEBUG nova.virt.libvirt.vif [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:28:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1293570209',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1293570209',id=168,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDLuAg2lLvJL1IbHQI4zWjduPL00fGBTgnUuLmVxh8Papw1HN8YCJ1MjiVOY2IjiYFlPS7NCeNdc1wi8bfIbI4zqr01CElkg8VYpaZv/gY5PmkQnremSmt7jl09ZoO4cYg==',key_name='tempest-TestInstancesWithCinderVolumes-1453989920',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='527c6a274d1e478eadfe67139e121185',ramdisk_id='',reservation_id='r-l0g9m97g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='Tru
e',owner_project_name='tempest-TestInstancesWithCinderVolumes-663978016',owner_user_name='tempest-TestInstancesWithCinderVolumes-663978016-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:28:51Z,user_data=None,user_id='d039e57f31de4717a235fc96ebd56559',uuid=ef2296eb-4538-4e04-8c0b-42370d9e5b12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eac06205-cdc0-424d-b7e2-7740e0db232d", "address": "fa:16:3e:fb:c3:a9", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeac06205-cd", "ovs_interfaceid": "eac06205-cdc0-424d-b7e2-7740e0db232d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:29:05 np0005539564 nova_compute[226295]: 2025-11-29 08:29:05.834 226310 DEBUG nova.network.os_vif_util [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Converting VIF {"id": "eac06205-cdc0-424d-b7e2-7740e0db232d", "address": "fa:16:3e:fb:c3:a9", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeac06205-cd", "ovs_interfaceid": "eac06205-cdc0-424d-b7e2-7740e0db232d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:29:05 np0005539564 nova_compute[226295]: 2025-11-29 08:29:05.836 226310 DEBUG nova.network.os_vif_util [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:c3:a9,bridge_name='br-int',has_traffic_filtering=True,id=eac06205-cdc0-424d-b7e2-7740e0db232d,network=Network(371b699e-06e1-407e-ac77-9768d9a0e76e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeac06205-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:29:05 np0005539564 nova_compute[226295]: 2025-11-29 08:29:05.836 226310 DEBUG os_vif [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:c3:a9,bridge_name='br-int',has_traffic_filtering=True,id=eac06205-cdc0-424d-b7e2-7740e0db232d,network=Network(371b699e-06e1-407e-ac77-9768d9a0e76e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeac06205-cd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:29:05 np0005539564 nova_compute[226295]: 2025-11-29 08:29:05.837 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:05 np0005539564 nova_compute[226295]: 2025-11-29 08:29:05.838 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:05 np0005539564 nova_compute[226295]: 2025-11-29 08:29:05.838 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:29:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:05.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:05 np0005539564 nova_compute[226295]: 2025-11-29 08:29:05.843 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:05 np0005539564 nova_compute[226295]: 2025-11-29 08:29:05.843 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeac06205-cd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:05 np0005539564 nova_compute[226295]: 2025-11-29 08:29:05.844 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeac06205-cd, col_values=(('external_ids', {'iface-id': 'eac06205-cdc0-424d-b7e2-7740e0db232d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:c3:a9', 'vm-uuid': 'ef2296eb-4538-4e04-8c0b-42370d9e5b12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:05 np0005539564 nova_compute[226295]: 2025-11-29 08:29:05.846 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:05 np0005539564 NetworkManager[48997]: <info>  [1764404945.8476] manager: (tapeac06205-cd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/288)
Nov 29 03:29:05 np0005539564 nova_compute[226295]: 2025-11-29 08:29:05.848 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:29:05 np0005539564 nova_compute[226295]: 2025-11-29 08:29:05.854 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:05 np0005539564 nova_compute[226295]: 2025-11-29 08:29:05.855 226310 INFO os_vif [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:c3:a9,bridge_name='br-int',has_traffic_filtering=True,id=eac06205-cdc0-424d-b7e2-7740e0db232d,network=Network(371b699e-06e1-407e-ac77-9768d9a0e76e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeac06205-cd')#033[00m
Nov 29 03:29:06 np0005539564 nova_compute[226295]: 2025-11-29 08:29:06.671 226310 DEBUG nova.virt.libvirt.driver [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:29:06 np0005539564 nova_compute[226295]: 2025-11-29 08:29:06.672 226310 DEBUG nova.virt.libvirt.driver [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:29:06 np0005539564 nova_compute[226295]: 2025-11-29 08:29:06.673 226310 DEBUG nova.virt.libvirt.driver [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No VIF found with MAC fa:16:3e:fb:c3:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:29:06 np0005539564 nova_compute[226295]: 2025-11-29 08:29:06.673 226310 INFO nova.virt.libvirt.driver [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Using config drive#033[00m
Nov 29 03:29:06 np0005539564 nova_compute[226295]: 2025-11-29 08:29:06.706 226310 DEBUG nova.storage.rbd_utils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] rbd image ef2296eb-4538-4e04-8c0b-42370d9e5b12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:07.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:07.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:08 np0005539564 nova_compute[226295]: 2025-11-29 08:29:08.194 226310 DEBUG nova.network.neutron [req-5920b981-8d9f-486a-aa5d-d673d396f5d8 req-e64e1ef2-a10c-4721-839e-afa195fb845f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Updated VIF entry in instance network info cache for port eac06205-cdc0-424d-b7e2-7740e0db232d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:29:08 np0005539564 nova_compute[226295]: 2025-11-29 08:29:08.195 226310 DEBUG nova.network.neutron [req-5920b981-8d9f-486a-aa5d-d673d396f5d8 req-e64e1ef2-a10c-4721-839e-afa195fb845f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Updating instance_info_cache with network_info: [{"id": "eac06205-cdc0-424d-b7e2-7740e0db232d", "address": "fa:16:3e:fb:c3:a9", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeac06205-cd", "ovs_interfaceid": "eac06205-cdc0-424d-b7e2-7740e0db232d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:08 np0005539564 nova_compute[226295]: 2025-11-29 08:29:08.375 226310 DEBUG oslo_concurrency.lockutils [req-5920b981-8d9f-486a-aa5d-d673d396f5d8 req-e64e1ef2-a10c-4721-839e-afa195fb845f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-ef2296eb-4538-4e04-8c0b-42370d9e5b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:29:08 np0005539564 nova_compute[226295]: 2025-11-29 08:29:08.503 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:08 np0005539564 nova_compute[226295]: 2025-11-29 08:29:08.521 226310 INFO nova.virt.libvirt.driver [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Creating config drive at /var/lib/nova/instances/ef2296eb-4538-4e04-8c0b-42370d9e5b12/disk.config
Nov 29 03:29:08 np0005539564 nova_compute[226295]: 2025-11-29 08:29:08.528 226310 DEBUG oslo_concurrency.processutils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ef2296eb-4538-4e04-8c0b-42370d9e5b12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_65u53m4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:29:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:08 np0005539564 nova_compute[226295]: 2025-11-29 08:29:08.673 226310 DEBUG oslo_concurrency.processutils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ef2296eb-4538-4e04-8c0b-42370d9e5b12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_65u53m4" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:29:08 np0005539564 nova_compute[226295]: 2025-11-29 08:29:08.708 226310 DEBUG nova.storage.rbd_utils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] rbd image ef2296eb-4538-4e04-8c0b-42370d9e5b12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:29:08 np0005539564 nova_compute[226295]: 2025-11-29 08:29:08.714 226310 DEBUG oslo_concurrency.processutils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ef2296eb-4538-4e04-8c0b-42370d9e5b12/disk.config ef2296eb-4538-4e04-8c0b-42370d9e5b12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:29:09 np0005539564 nova_compute[226295]: 2025-11-29 08:29:09.075 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Updating instance_info_cache with network_info: [{"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:09 np0005539564 nova_compute[226295]: 2025-11-29 08:29:09.199 226310 DEBUG oslo_concurrency.lockutils [None req-0255768e-d954-46ab-bfc5-91ce3d1551ba 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "bf4c2292-18d7-4c4b-b97e-abb227923156" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:29:09 np0005539564 nova_compute[226295]: 2025-11-29 08:29:09.200 226310 DEBUG oslo_concurrency.lockutils [None req-0255768e-d954-46ab-bfc5-91ce3d1551ba 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:29:09 np0005539564 nova_compute[226295]: 2025-11-29 08:29:09.239 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-bf4c2292-18d7-4c4b-b97e-abb227923156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:29:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:09.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:09 np0005539564 nova_compute[226295]: 2025-11-29 08:29:09.241 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:29:09 np0005539564 nova_compute[226295]: 2025-11-29 08:29:09.243 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:29:09 np0005539564 nova_compute[226295]: 2025-11-29 08:29:09.243 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:29:09 np0005539564 nova_compute[226295]: 2025-11-29 08:29:09.251 226310 DEBUG nova.objects.instance [None req-0255768e-d954-46ab-bfc5-91ce3d1551ba 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'flavor' on Instance uuid bf4c2292-18d7-4c4b-b97e-abb227923156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:29:09 np0005539564 nova_compute[226295]: 2025-11-29 08:29:09.371 226310 DEBUG oslo_concurrency.lockutils [None req-0255768e-d954-46ab-bfc5-91ce3d1551ba 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:29:09 np0005539564 nova_compute[226295]: 2025-11-29 08:29:09.623 226310 DEBUG oslo_concurrency.processutils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ef2296eb-4538-4e04-8c0b-42370d9e5b12/disk.config ef2296eb-4538-4e04-8c0b-42370d9e5b12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.908s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:29:09 np0005539564 nova_compute[226295]: 2025-11-29 08:29:09.624 226310 INFO nova.virt.libvirt.driver [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Deleting local config drive /var/lib/nova/instances/ef2296eb-4538-4e04-8c0b-42370d9e5b12/disk.config because it was imported into RBD.
Nov 29 03:29:09 np0005539564 kernel: tapeac06205-cd: entered promiscuous mode
Nov 29 03:29:09 np0005539564 NetworkManager[48997]: <info>  [1764404949.7174] manager: (tapeac06205-cd): new Tun device (/org/freedesktop/NetworkManager/Devices/289)
Nov 29 03:29:09 np0005539564 nova_compute[226295]: 2025-11-29 08:29:09.730 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:09 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:09Z|00623|binding|INFO|Claiming lport eac06205-cdc0-424d-b7e2-7740e0db232d for this chassis.
Nov 29 03:29:09 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:09Z|00624|binding|INFO|eac06205-cdc0-424d-b7e2-7740e0db232d: Claiming fa:16:3e:fb:c3:a9 10.100.0.11
Nov 29 03:29:09 np0005539564 nova_compute[226295]: 2025-11-29 08:29:09.735 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:09 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:09Z|00625|binding|INFO|Setting lport eac06205-cdc0-424d-b7e2-7740e0db232d ovn-installed in OVS
Nov 29 03:29:09 np0005539564 nova_compute[226295]: 2025-11-29 08:29:09.748 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:09 np0005539564 systemd-machined[190128]: New machine qemu-77-instance-000000a8.
Nov 29 03:29:09 np0005539564 systemd-udevd[286904]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:29:09 np0005539564 NetworkManager[48997]: <info>  [1764404949.7846] device (tapeac06205-cd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:29:09 np0005539564 NetworkManager[48997]: <info>  [1764404949.7860] device (tapeac06205-cd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:29:09 np0005539564 systemd[1]: Started Virtual Machine qemu-77-instance-000000a8.
Nov 29 03:29:09 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:09Z|00626|binding|INFO|Setting lport eac06205-cdc0-424d-b7e2-7740e0db232d up in Southbound
Nov 29 03:29:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:09.799 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:c3:a9 10.100.0.11'], port_security=['fa:16:3e:fb:c3:a9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'ef2296eb-4538-4e04-8c0b-42370d9e5b12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-371b699e-06e1-407e-ac77-9768d9a0e76e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '527c6a274d1e478eadfe67139e121185', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4e734722-bbf6-4c47-9bc6-bf8d5f52e07d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c0188f4-aa09-4b91-9f84-524ffee1218e, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=eac06205-cdc0-424d-b7e2-7740e0db232d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:29:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:09.801 139780 INFO neutron.agent.ovn.metadata.agent [-] Port eac06205-cdc0-424d-b7e2-7740e0db232d in datapath 371b699e-06e1-407e-ac77-9768d9a0e76e bound to our chassis
Nov 29 03:29:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:09.804 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 371b699e-06e1-407e-ac77-9768d9a0e76e
Nov 29 03:29:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:09.816 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3b8dd1fe-fe2e-4418-a71a-02aed874521f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:29:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:09.817 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap371b699e-01 in ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 03:29:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:09.818 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap371b699e-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 03:29:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:09.818 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b4acfd8d-7d51-40e7-9a95-25f432254ee4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:29:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:09.819 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f0bd2e-66bf-4172-95b8-d9363ffac72f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:29:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:09.831 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[8d49a346-e7af-4a7a-a421-60c778a0eb7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:29:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:09.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:09.854 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[49b0f000-7a21-44ea-9954-ad392ddf8472]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:29:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:09.887 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[b2085d86-388f-456b-bb74-71a5ef420cba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:29:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:09.893 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bf4aca5e-c1ce-495f-a7d9-1a425adf3375]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:29:09 np0005539564 NetworkManager[48997]: <info>  [1764404949.8950] manager: (tap371b699e-00): new Veth device (/org/freedesktop/NetworkManager/Devices/290)
Nov 29 03:29:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:09.931 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[1f8f05f8-2c8a-46ea-b59a-beb68745ba6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:29:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:09.935 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[2859ea2f-d3e3-47e6-b247-5d9921b49019]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:29:09 np0005539564 NetworkManager[48997]: <info>  [1764404949.9639] device (tap371b699e-00): carrier: link connected
Nov 29 03:29:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:09.972 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[79665e08-7ef5-4f36-8491-6c2b51ca26f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:29:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:09.995 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9536e7e4-c9e9-4ee9-be9f-5c5d395ba1c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap371b699e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:80:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 192], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 792464, 'reachable_time': 25446, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286937, 'error': None, 'target': 'ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:10.017 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[72239944-fd4f-4794-8494-b5c2ac31042c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:80be'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 792464, 'tstamp': 792464}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286938, 'error': None, 'target': 'ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:10.038 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c34da290-cd10-41a1-bec7-9b3e33417b34]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap371b699e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:80:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 192], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 792464, 'reachable_time': 25446, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286939, 'error': None, 'target': 'ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:10.070 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c6709d4a-dbb3-4bbe-8d8a-462d6111018e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:10.144 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a4daa8dd-fc71-4ddb-bf92-8ade400c91ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:10.146 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap371b699e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:10.147 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:10.147 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap371b699e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.149 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:10 np0005539564 NetworkManager[48997]: <info>  [1764404950.1502] manager: (tap371b699e-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/291)
Nov 29 03:29:10 np0005539564 kernel: tap371b699e-00: entered promiscuous mode
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.153 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:10.153 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap371b699e-00, col_values=(('external_ids', {'iface-id': 'bf759292-fede-4172-b0b8-efd6e3442b62'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:10 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:10Z|00627|binding|INFO|Releasing lport bf759292-fede-4172-b0b8-efd6e3442b62 from this chassis (sb_readonly=0)
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.174 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:10.175 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/371b699e-06e1-407e-ac77-9768d9a0e76e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/371b699e-06e1-407e-ac77-9768d9a0e76e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:10.176 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9a7a4332-122e-4883-b258-5d7cd883c92d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:10.177 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-371b699e-06e1-407e-ac77-9768d9a0e76e
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/371b699e-06e1-407e-ac77-9768d9a0e76e.pid.haproxy
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 371b699e-06e1-407e-ac77-9768d9a0e76e
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:29:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:10.178 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e', 'env', 'PROCESS_TAG=haproxy-371b699e-06e1-407e-ac77-9768d9a0e76e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/371b699e-06e1-407e-ac77-9768d9a0e76e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.277 226310 DEBUG oslo_concurrency.lockutils [None req-0255768e-d954-46ab-bfc5-91ce3d1551ba 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "bf4c2292-18d7-4c4b-b97e-abb227923156" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.278 226310 DEBUG oslo_concurrency.lockutils [None req-0255768e-d954-46ab-bfc5-91ce3d1551ba 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.279 226310 INFO nova.compute.manager [None req-0255768e-d954-46ab-bfc5-91ce3d1551ba 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Attaching volume bd7644b6-0d0f-4a70-962b-b60c03d49643 to /dev/vdb#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.489 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404950.48929, ef2296eb-4538-4e04-8c0b-42370d9e5b12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.490 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] VM Started (Lifecycle Event)#033[00m
Nov 29 03:29:10 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.508 226310 DEBUG os_brick.utils [None req-0255768e-d954-46ab-bfc5-91ce3d1551ba 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.509 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.526 231810 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.526 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[65504ef4-9501-425a-88ca-fb5d1a0e30a0]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.527 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.540 231810 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.541 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb460b0-249c-46b7-bba8-a2800fb2f3af]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d3b4384eec44', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.542 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.552 231810 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.552 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[bf057e53-f078-4db0-bf62-cf97cad794b9]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.553 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[2e349487-d6ca-466f-a6c6-d5014943ef2d]: (4, '2e858761-3292-4a17-b38f-a169c3064289') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.553 226310 DEBUG oslo_concurrency.processutils [None req-0255768e-d954-46ab-bfc5-91ce3d1551ba 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:10 np0005539564 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:29:10 np0005539564 podman[287011]: 2025-11-29 08:29:10.583899003 +0000 UTC m=+0.064314681 container create b59e56ae3dac7fc4561a038fb03570ef20f194c17928ac6980a213a7a08ba63e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.597 226310 DEBUG oslo_concurrency.processutils [None req-0255768e-d954-46ab-bfc5-91ce3d1551ba 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CMD "nvme version" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.600 226310 DEBUG os_brick.initiator.connectors.lightos [None req-0255768e-d954-46ab-bfc5-91ce3d1551ba 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.600 226310 DEBUG os_brick.initiator.connectors.lightos [None req-0255768e-d954-46ab-bfc5-91ce3d1551ba 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.600 226310 DEBUG os_brick.initiator.connectors.lightos [None req-0255768e-d954-46ab-bfc5-91ce3d1551ba 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.601 226310 DEBUG os_brick.utils [None req-0255768e-d954-46ab-bfc5-91ce3d1551ba 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] <== get_connector_properties: return (92ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d3b4384eec44', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2e858761-3292-4a17-b38f-a169c3064289', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.601 226310 DEBUG nova.virt.block_device [None req-0255768e-d954-46ab-bfc5-91ce3d1551ba 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Updating existing volume attachment record: c0fa40c4-7c54-4de4-bf87-07d04de2570d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.608 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.613 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404950.489439, ef2296eb-4538-4e04-8c0b-42370d9e5b12 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.613 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:29:10 np0005539564 systemd[1]: Started libpod-conmon-b59e56ae3dac7fc4561a038fb03570ef20f194c17928ac6980a213a7a08ba63e.scope.
Nov 29 03:29:10 np0005539564 podman[287011]: 2025-11-29 08:29:10.54645082 +0000 UTC m=+0.026866538 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:29:10 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:29:10 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46ac4d39acbf99279a9d2f5345552d857bb32180dd58a6dabb263cd95a20620e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:10 np0005539564 podman[287011]: 2025-11-29 08:29:10.714179217 +0000 UTC m=+0.194594915 container init b59e56ae3dac7fc4561a038fb03570ef20f194c17928ac6980a213a7a08ba63e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:29:10 np0005539564 podman[287011]: 2025-11-29 08:29:10.721186597 +0000 UTC m=+0.201602275 container start b59e56ae3dac7fc4561a038fb03570ef20f194c17928ac6980a213a7a08ba63e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:29:10 np0005539564 neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e[287033]: [NOTICE]   (287037) : New worker (287039) forked
Nov 29 03:29:10 np0005539564 neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e[287033]: [NOTICE]   (287037) : Loading success.
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.800 226310 DEBUG nova.compute.manager [req-004a4cc2-a98c-414d-85a2-3840efc889c7 req-c32294b4-79a3-49bd-be21-a961e74a3905 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Received event network-vif-plugged-eac06205-cdc0-424d-b7e2-7740e0db232d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.800 226310 DEBUG oslo_concurrency.lockutils [req-004a4cc2-a98c-414d-85a2-3840efc889c7 req-c32294b4-79a3-49bd-be21-a961e74a3905 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.801 226310 DEBUG oslo_concurrency.lockutils [req-004a4cc2-a98c-414d-85a2-3840efc889c7 req-c32294b4-79a3-49bd-be21-a961e74a3905 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.801 226310 DEBUG oslo_concurrency.lockutils [req-004a4cc2-a98c-414d-85a2-3840efc889c7 req-c32294b4-79a3-49bd-be21-a961e74a3905 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.801 226310 DEBUG nova.compute.manager [req-004a4cc2-a98c-414d-85a2-3840efc889c7 req-c32294b4-79a3-49bd-be21-a961e74a3905 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Processing event network-vif-plugged-eac06205-cdc0-424d-b7e2-7740e0db232d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.802 226310 DEBUG nova.compute.manager [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.809 226310 DEBUG nova.virt.libvirt.driver [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.815 226310 INFO nova.virt.libvirt.driver [-] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Instance spawned successfully.#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.815 226310 DEBUG nova.virt.libvirt.driver [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.846 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.869 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.874 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404950.8075876, ef2296eb-4538-4e04-8c0b-42370d9e5b12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.874 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.922 226310 DEBUG nova.virt.libvirt.driver [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.923 226310 DEBUG nova.virt.libvirt.driver [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.924 226310 DEBUG nova.virt.libvirt.driver [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.925 226310 DEBUG nova.virt.libvirt.driver [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.925 226310 DEBUG nova.virt.libvirt.driver [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.926 226310 DEBUG nova.virt.libvirt.driver [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.934 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:10 np0005539564 nova_compute[226295]: 2025-11-29 08:29:10.940 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:29:11 np0005539564 nova_compute[226295]: 2025-11-29 08:29:11.125 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:29:11 np0005539564 nova_compute[226295]: 2025-11-29 08:29:11.189 226310 INFO nova.compute.manager [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Took 14.96 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:29:11 np0005539564 nova_compute[226295]: 2025-11-29 08:29:11.189 226310 DEBUG nova.compute.manager [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:11.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:11 np0005539564 nova_compute[226295]: 2025-11-29 08:29:11.474 226310 INFO nova.compute.manager [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Took 25.34 seconds to build instance.#033[00m
Nov 29 03:29:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:29:11 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3319369817' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:29:11 np0005539564 nova_compute[226295]: 2025-11-29 08:29:11.522 226310 DEBUG oslo_concurrency.lockutils [None req-6e5ad9a5-15e3-426d-9c3d-66c707efffc5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 25.512s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:11.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:12 np0005539564 nova_compute[226295]: 2025-11-29 08:29:12.130 226310 DEBUG nova.objects.instance [None req-0255768e-d954-46ab-bfc5-91ce3d1551ba 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'flavor' on Instance uuid bf4c2292-18d7-4c4b-b97e-abb227923156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:12 np0005539564 nova_compute[226295]: 2025-11-29 08:29:12.247 226310 DEBUG nova.virt.libvirt.driver [None req-0255768e-d954-46ab-bfc5-91ce3d1551ba 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Attempting to attach volume bd7644b6-0d0f-4a70-962b-b60c03d49643 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:29:12 np0005539564 nova_compute[226295]: 2025-11-29 08:29:12.251 226310 DEBUG nova.virt.libvirt.guest [None req-0255768e-d954-46ab-bfc5-91ce3d1551ba 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:29:12 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:29:12 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-bd7644b6-0d0f-4a70-962b-b60c03d49643">
Nov 29 03:29:12 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:29:12 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:29:12 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:29:12 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:29:12 np0005539564 nova_compute[226295]:  <auth username="openstack">
Nov 29 03:29:12 np0005539564 nova_compute[226295]:    <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:29:12 np0005539564 nova_compute[226295]:  </auth>
Nov 29 03:29:12 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:29:12 np0005539564 nova_compute[226295]:  <serial>bd7644b6-0d0f-4a70-962b-b60c03d49643</serial>
Nov 29 03:29:12 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:29:12 np0005539564 nova_compute[226295]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:29:12 np0005539564 nova_compute[226295]: 2025-11-29 08:29:12.968 226310 DEBUG nova.virt.libvirt.driver [None req-0255768e-d954-46ab-bfc5-91ce3d1551ba 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:29:12 np0005539564 nova_compute[226295]: 2025-11-29 08:29:12.969 226310 DEBUG nova.virt.libvirt.driver [None req-0255768e-d954-46ab-bfc5-91ce3d1551ba 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:29:12 np0005539564 nova_compute[226295]: 2025-11-29 08:29:12.969 226310 DEBUG nova.virt.libvirt.driver [None req-0255768e-d954-46ab-bfc5-91ce3d1551ba 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:29:12 np0005539564 nova_compute[226295]: 2025-11-29 08:29:12.969 226310 DEBUG nova.virt.libvirt.driver [None req-0255768e-d954-46ab-bfc5-91ce3d1551ba 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] No VIF found with MAC fa:16:3e:68:a4:7d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:29:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:13.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:13 np0005539564 nova_compute[226295]: 2025-11-29 08:29:13.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:13 np0005539564 nova_compute[226295]: 2025-11-29 08:29:13.432 226310 DEBUG nova.compute.manager [req-235921c7-9768-416a-8c7e-945d55a9a571 req-0511398a-6ebe-4883-91c3-8c61571817d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Received event network-vif-plugged-eac06205-cdc0-424d-b7e2-7740e0db232d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:13 np0005539564 nova_compute[226295]: 2025-11-29 08:29:13.432 226310 DEBUG oslo_concurrency.lockutils [req-235921c7-9768-416a-8c7e-945d55a9a571 req-0511398a-6ebe-4883-91c3-8c61571817d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:13 np0005539564 nova_compute[226295]: 2025-11-29 08:29:13.433 226310 DEBUG oslo_concurrency.lockutils [req-235921c7-9768-416a-8c7e-945d55a9a571 req-0511398a-6ebe-4883-91c3-8c61571817d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:13 np0005539564 nova_compute[226295]: 2025-11-29 08:29:13.433 226310 DEBUG oslo_concurrency.lockutils [req-235921c7-9768-416a-8c7e-945d55a9a571 req-0511398a-6ebe-4883-91c3-8c61571817d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:13 np0005539564 nova_compute[226295]: 2025-11-29 08:29:13.433 226310 DEBUG nova.compute.manager [req-235921c7-9768-416a-8c7e-945d55a9a571 req-0511398a-6ebe-4883-91c3-8c61571817d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] No waiting events found dispatching network-vif-plugged-eac06205-cdc0-424d-b7e2-7740e0db232d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:13 np0005539564 nova_compute[226295]: 2025-11-29 08:29:13.433 226310 WARNING nova.compute.manager [req-235921c7-9768-416a-8c7e-945d55a9a571 req-0511398a-6ebe-4883-91c3-8c61571817d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Received unexpected event network-vif-plugged-eac06205-cdc0-424d-b7e2-7740e0db232d for instance with vm_state active and task_state None.#033[00m
Nov 29 03:29:13 np0005539564 nova_compute[226295]: 2025-11-29 08:29:13.506 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:13 np0005539564 nova_compute[226295]: 2025-11-29 08:29:13.527 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:13 np0005539564 nova_compute[226295]: 2025-11-29 08:29:13.528 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:13 np0005539564 nova_compute[226295]: 2025-11-29 08:29:13.528 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:13 np0005539564 nova_compute[226295]: 2025-11-29 08:29:13.528 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:29:13 np0005539564 nova_compute[226295]: 2025-11-29 08:29:13.528 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:13.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:29:13 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4031100585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:29:14 np0005539564 nova_compute[226295]: 2025-11-29 08:29:14.010 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:14 np0005539564 nova_compute[226295]: 2025-11-29 08:29:14.334 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:29:14 np0005539564 nova_compute[226295]: 2025-11-29 08:29:14.335 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:29:14 np0005539564 nova_compute[226295]: 2025-11-29 08:29:14.335 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:29:14 np0005539564 nova_compute[226295]: 2025-11-29 08:29:14.339 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000a8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:29:14 np0005539564 nova_compute[226295]: 2025-11-29 08:29:14.340 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000a8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:29:14 np0005539564 nova_compute[226295]: 2025-11-29 08:29:14.517 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:29:14 np0005539564 nova_compute[226295]: 2025-11-29 08:29:14.518 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3827MB free_disk=20.714061737060547GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:29:14 np0005539564 nova_compute[226295]: 2025-11-29 08:29:14.518 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:14 np0005539564 nova_compute[226295]: 2025-11-29 08:29:14.518 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:14 np0005539564 nova_compute[226295]: 2025-11-29 08:29:14.725 226310 DEBUG oslo_concurrency.lockutils [None req-0255768e-d954-46ab-bfc5-91ce3d1551ba 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.447s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:15 np0005539564 nova_compute[226295]: 2025-11-29 08:29:15.013 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance bf4c2292-18d7-4c4b-b97e-abb227923156 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:29:15 np0005539564 nova_compute[226295]: 2025-11-29 08:29:15.013 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance ef2296eb-4538-4e04-8c0b-42370d9e5b12 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:29:15 np0005539564 nova_compute[226295]: 2025-11-29 08:29:15.014 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:29:15 np0005539564 nova_compute[226295]: 2025-11-29 08:29:15.014 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:29:15 np0005539564 nova_compute[226295]: 2025-11-29 08:29:15.066 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:15.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:29:15 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3632048410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:29:15 np0005539564 nova_compute[226295]: 2025-11-29 08:29:15.566 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:15 np0005539564 nova_compute[226295]: 2025-11-29 08:29:15.571 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:29:15 np0005539564 nova_compute[226295]: 2025-11-29 08:29:15.848 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:15.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:16 np0005539564 nova_compute[226295]: 2025-11-29 08:29:16.129 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:29:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:17.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:17 np0005539564 nova_compute[226295]: 2025-11-29 08:29:17.514 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:29:17 np0005539564 nova_compute[226295]: 2025-11-29 08:29:17.515 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.997s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:17 np0005539564 nova_compute[226295]: 2025-11-29 08:29:17.516 226310 DEBUG oslo_concurrency.lockutils [None req-f278a9a5-4192-4046-a948-7ed0da12425a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "bf4c2292-18d7-4c4b-b97e-abb227923156" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:17 np0005539564 nova_compute[226295]: 2025-11-29 08:29:17.516 226310 DEBUG oslo_concurrency.lockutils [None req-f278a9a5-4192-4046-a948-7ed0da12425a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:17 np0005539564 nova_compute[226295]: 2025-11-29 08:29:17.516 226310 DEBUG nova.compute.manager [None req-f278a9a5-4192-4046-a948-7ed0da12425a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:17 np0005539564 nova_compute[226295]: 2025-11-29 08:29:17.521 226310 DEBUG nova.compute.manager [None req-f278a9a5-4192-4046-a948-7ed0da12425a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 29 03:29:17 np0005539564 nova_compute[226295]: 2025-11-29 08:29:17.523 226310 DEBUG nova.objects.instance [None req-f278a9a5-4192-4046-a948-7ed0da12425a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'flavor' on Instance uuid bf4c2292-18d7-4c4b-b97e-abb227923156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:17 np0005539564 nova_compute[226295]: 2025-11-29 08:29:17.577 226310 DEBUG nova.virt.libvirt.driver [None req-f278a9a5-4192-4046-a948-7ed0da12425a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:29:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:17.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:18 np0005539564 nova_compute[226295]: 2025-11-29 08:29:18.508 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:19.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:19.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:20 np0005539564 nova_compute[226295]: 2025-11-29 08:29:20.851 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:21.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:21.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:22 np0005539564 nova_compute[226295]: 2025-11-29 08:29:22.759 226310 DEBUG oslo_concurrency.lockutils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "841820bc-c4f3-4198-b22c-ddd672e9cc75" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:22 np0005539564 nova_compute[226295]: 2025-11-29 08:29:22.760 226310 DEBUG oslo_concurrency.lockutils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "841820bc-c4f3-4198-b22c-ddd672e9cc75" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:22 np0005539564 nova_compute[226295]: 2025-11-29 08:29:22.864 226310 DEBUG nova.compute.manager [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:29:23 np0005539564 nova_compute[226295]: 2025-11-29 08:29:23.059 226310 DEBUG oslo_concurrency.lockutils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:23 np0005539564 nova_compute[226295]: 2025-11-29 08:29:23.060 226310 DEBUG oslo_concurrency.lockutils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:23 np0005539564 nova_compute[226295]: 2025-11-29 08:29:23.066 226310 DEBUG nova.virt.hardware [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:29:23 np0005539564 nova_compute[226295]: 2025-11-29 08:29:23.066 226310 INFO nova.compute.claims [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:29:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:23.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:23 np0005539564 nova_compute[226295]: 2025-11-29 08:29:23.511 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:23 np0005539564 nova_compute[226295]: 2025-11-29 08:29:23.631 226310 DEBUG oslo_concurrency.processutils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:23.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:29:24 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3665741576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:29:24 np0005539564 nova_compute[226295]: 2025-11-29 08:29:24.121 226310 DEBUG oslo_concurrency.processutils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:24 np0005539564 nova_compute[226295]: 2025-11-29 08:29:24.128 226310 DEBUG nova.compute.provider_tree [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:29:24 np0005539564 nova_compute[226295]: 2025-11-29 08:29:24.238 226310 DEBUG nova.scheduler.client.report [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:29:24 np0005539564 kernel: tapbec1fbdf-d4 (unregistering): left promiscuous mode
Nov 29 03:29:24 np0005539564 NetworkManager[48997]: <info>  [1764404964.3366] device (tapbec1fbdf-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:29:24 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:24Z|00628|binding|INFO|Releasing lport bec1fbdf-d4dc-4b2c-af66-9ba123464651 from this chassis (sb_readonly=0)
Nov 29 03:29:24 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:24Z|00629|binding|INFO|Setting lport bec1fbdf-d4dc-4b2c-af66-9ba123464651 down in Southbound
Nov 29 03:29:24 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:24Z|00630|binding|INFO|Removing iface tapbec1fbdf-d4 ovn-installed in OVS
Nov 29 03:29:24 np0005539564 nova_compute[226295]: 2025-11-29 08:29:24.348 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:24 np0005539564 nova_compute[226295]: 2025-11-29 08:29:24.350 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:24 np0005539564 nova_compute[226295]: 2025-11-29 08:29:24.373 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:24 np0005539564 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d000000a5.scope: Deactivated successfully.
Nov 29 03:29:24 np0005539564 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d000000a5.scope: Consumed 17.145s CPU time.
Nov 29 03:29:24 np0005539564 systemd-machined[190128]: Machine qemu-76-instance-000000a5 terminated.
Nov 29 03:29:24 np0005539564 nova_compute[226295]: 2025-11-29 08:29:24.484 226310 DEBUG oslo_concurrency.lockutils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.424s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:24 np0005539564 nova_compute[226295]: 2025-11-29 08:29:24.485 226310 DEBUG nova.compute.manager [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:29:24 np0005539564 nova_compute[226295]: 2025-11-29 08:29:24.616 226310 INFO nova.virt.libvirt.driver [None req-f278a9a5-4192-4046-a948-7ed0da12425a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Instance shutdown successfully after 7 seconds.#033[00m
Nov 29 03:29:24 np0005539564 nova_compute[226295]: 2025-11-29 08:29:24.626 226310 INFO nova.virt.libvirt.driver [-] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Instance destroyed successfully.#033[00m
Nov 29 03:29:24 np0005539564 nova_compute[226295]: 2025-11-29 08:29:24.628 226310 DEBUG nova.objects.instance [None req-f278a9a5-4192-4046-a948-7ed0da12425a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'numa_topology' on Instance uuid bf4c2292-18d7-4c4b-b97e-abb227923156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:24.724 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:a4:7d 10.100.0.7'], port_security=['fa:16:3e:68:a4:7d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bf4c2292-18d7-4c4b-b97e-abb227923156', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eb0810bf6f5b4eb59638b7a2cf59ed5b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '02abc3ce-c8f1-4034-8c00-97d80a9dca82', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.207'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1429573d-31ea-4b00-8580-1031fbde1ea5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=bec1fbdf-d4dc-4b2c-af66-9ba123464651) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:29:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:24.727 139780 INFO neutron.agent.ovn.metadata.agent [-] Port bec1fbdf-d4dc-4b2c-af66-9ba123464651 in datapath a259ebcb-7cce-4363-8e50-c25ed4a3daec unbound from our chassis#033[00m
Nov 29 03:29:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:24.730 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a259ebcb-7cce-4363-8e50-c25ed4a3daec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:29:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:24.732 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[091b440c-c655-45f9-9eaa-16c75a9ef33c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:24.733 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec namespace which is not needed anymore#033[00m
Nov 29 03:29:24 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[286499]: [NOTICE]   (286503) : haproxy version is 2.8.14-c23fe91
Nov 29 03:29:24 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[286499]: [NOTICE]   (286503) : path to executable is /usr/sbin/haproxy
Nov 29 03:29:24 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[286499]: [WARNING]  (286503) : Exiting Master process...
Nov 29 03:29:24 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[286499]: [ALERT]    (286503) : Current worker (286505) exited with code 143 (Terminated)
Nov 29 03:29:24 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[286499]: [WARNING]  (286503) : All workers exited. Exiting... (0)
Nov 29 03:29:24 np0005539564 systemd[1]: libpod-0a0807d4716df2b3af3032057b3799e5d5fd317a68ddd2ea1effac4f16240524.scope: Deactivated successfully.
Nov 29 03:29:24 np0005539564 conmon[286499]: conmon 0a0807d4716df2b3af30 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0a0807d4716df2b3af3032057b3799e5d5fd317a68ddd2ea1effac4f16240524.scope/container/memory.events
Nov 29 03:29:24 np0005539564 podman[287171]: 2025-11-29 08:29:24.896905266 +0000 UTC m=+0.054310660 container died 0a0807d4716df2b3af3032057b3799e5d5fd317a68ddd2ea1effac4f16240524 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:29:24 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0a0807d4716df2b3af3032057b3799e5d5fd317a68ddd2ea1effac4f16240524-userdata-shm.mount: Deactivated successfully.
Nov 29 03:29:24 np0005539564 systemd[1]: var-lib-containers-storage-overlay-fdf8c257fbb2016fea3315b4f0572b30acc3750345478adb5f8feec71cf042f3-merged.mount: Deactivated successfully.
Nov 29 03:29:24 np0005539564 podman[287171]: 2025-11-29 08:29:24.941572905 +0000 UTC m=+0.098978299 container cleanup 0a0807d4716df2b3af3032057b3799e5d5fd317a68ddd2ea1effac4f16240524 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:29:24 np0005539564 systemd[1]: libpod-conmon-0a0807d4716df2b3af3032057b3799e5d5fd317a68ddd2ea1effac4f16240524.scope: Deactivated successfully.
Nov 29 03:29:24 np0005539564 nova_compute[226295]: 2025-11-29 08:29:24.997 226310 DEBUG nova.compute.manager [None req-f278a9a5-4192-4046-a948-7ed0da12425a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:25 np0005539564 podman[287200]: 2025-11-29 08:29:25.030538022 +0000 UTC m=+0.053664193 container remove 0a0807d4716df2b3af3032057b3799e5d5fd317a68ddd2ea1effac4f16240524 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:29:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:25.038 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bebae87d-5bab-4fe0-a15c-87f4e992a3d6]: (4, ('Sat Nov 29 08:29:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec (0a0807d4716df2b3af3032057b3799e5d5fd317a68ddd2ea1effac4f16240524)\n0a0807d4716df2b3af3032057b3799e5d5fd317a68ddd2ea1effac4f16240524\nSat Nov 29 08:29:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec (0a0807d4716df2b3af3032057b3799e5d5fd317a68ddd2ea1effac4f16240524)\n0a0807d4716df2b3af3032057b3799e5d5fd317a68ddd2ea1effac4f16240524\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:25.040 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[65734f23-e766-400e-a89d-7f66dafbb5eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:25.041 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa259ebcb-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:25 np0005539564 nova_compute[226295]: 2025-11-29 08:29:25.044 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:25 np0005539564 kernel: tapa259ebcb-70: left promiscuous mode
Nov 29 03:29:25 np0005539564 nova_compute[226295]: 2025-11-29 08:29:25.062 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:25.067 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[45f8e534-a717-4ec7-91e9-75744fa4c86c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:25.091 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a2377303-301f-4483-8142-b276f3fb8eec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:25.092 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[371857c6-8455-4142-8b64-6438f3981832]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:25.117 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[cc6d9295-7350-4efb-b4ad-e34a2b868480]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 788132, 'reachable_time': 15059, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287221, 'error': None, 'target': 'ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:25.120 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:29:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:25.120 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[5164fdf0-bb85-454f-95ba-406e3386695e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:25 np0005539564 systemd[1]: run-netns-ovnmeta\x2da259ebcb\x2d7cce\x2d4363\x2d8e50\x2dc25ed4a3daec.mount: Deactivated successfully.
Nov 29 03:29:25 np0005539564 nova_compute[226295]: 2025-11-29 08:29:25.215 226310 DEBUG nova.compute.manager [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:29:25 np0005539564 nova_compute[226295]: 2025-11-29 08:29:25.216 226310 DEBUG nova.network.neutron [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:29:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:25.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:25 np0005539564 nova_compute[226295]: 2025-11-29 08:29:25.540 226310 DEBUG nova.policy [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b4f4d28745dd46e586642c84c051db39', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '23450c2eaf4442459dec94c6d29f0412', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:29:25 np0005539564 nova_compute[226295]: 2025-11-29 08:29:25.856 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:25.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:25 np0005539564 nova_compute[226295]: 2025-11-29 08:29:25.892 226310 INFO nova.virt.libvirt.driver [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:29:26 np0005539564 nova_compute[226295]: 2025-11-29 08:29:26.196 226310 DEBUG oslo_concurrency.lockutils [None req-f278a9a5-4192-4046-a948-7ed0da12425a 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 8.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:26 np0005539564 nova_compute[226295]: 2025-11-29 08:29:26.247 226310 DEBUG nova.compute.manager [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:29:26 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:26Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fb:c3:a9 10.100.0.11
Nov 29 03:29:26 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:26Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fb:c3:a9 10.100.0.11
Nov 29 03:29:26 np0005539564 nova_compute[226295]: 2025-11-29 08:29:26.542 226310 DEBUG nova.compute.manager [req-d203afa7-c47c-42db-8dd0-1b99233358b9 req-38f4d2c2-afc1-4343-b3df-d7a954d0d8db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Received event network-vif-unplugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:26 np0005539564 nova_compute[226295]: 2025-11-29 08:29:26.542 226310 DEBUG oslo_concurrency.lockutils [req-d203afa7-c47c-42db-8dd0-1b99233358b9 req-38f4d2c2-afc1-4343-b3df-d7a954d0d8db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:26 np0005539564 nova_compute[226295]: 2025-11-29 08:29:26.543 226310 DEBUG oslo_concurrency.lockutils [req-d203afa7-c47c-42db-8dd0-1b99233358b9 req-38f4d2c2-afc1-4343-b3df-d7a954d0d8db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:26 np0005539564 nova_compute[226295]: 2025-11-29 08:29:26.543 226310 DEBUG oslo_concurrency.lockutils [req-d203afa7-c47c-42db-8dd0-1b99233358b9 req-38f4d2c2-afc1-4343-b3df-d7a954d0d8db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:26 np0005539564 nova_compute[226295]: 2025-11-29 08:29:26.543 226310 DEBUG nova.compute.manager [req-d203afa7-c47c-42db-8dd0-1b99233358b9 req-38f4d2c2-afc1-4343-b3df-d7a954d0d8db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] No waiting events found dispatching network-vif-unplugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:26 np0005539564 nova_compute[226295]: 2025-11-29 08:29:26.543 226310 WARNING nova.compute.manager [req-d203afa7-c47c-42db-8dd0-1b99233358b9 req-38f4d2c2-afc1-4343-b3df-d7a954d0d8db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Received unexpected event network-vif-unplugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 for instance with vm_state stopped and task_state None.#033[00m
Nov 29 03:29:26 np0005539564 nova_compute[226295]: 2025-11-29 08:29:26.568 226310 INFO nova.virt.block_device [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Booting with volume 5fdf7d48-8c9e-4f5c-9e3e-276135ea4364 at /dev/vda#033[00m
Nov 29 03:29:26 np0005539564 nova_compute[226295]: 2025-11-29 08:29:26.810 226310 DEBUG os_brick.utils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:29:26 np0005539564 nova_compute[226295]: 2025-11-29 08:29:26.811 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:26 np0005539564 nova_compute[226295]: 2025-11-29 08:29:26.829 231810 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:26 np0005539564 nova_compute[226295]: 2025-11-29 08:29:26.829 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[e68f8137-a0cd-4ca2-97ab-a9d78d63c160]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:26 np0005539564 nova_compute[226295]: 2025-11-29 08:29:26.831 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:26 np0005539564 nova_compute[226295]: 2025-11-29 08:29:26.844 231810 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:26 np0005539564 nova_compute[226295]: 2025-11-29 08:29:26.844 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[fffee49d-0f17-482e-838d-9ee2e82faad3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d3b4384eec44', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:26 np0005539564 nova_compute[226295]: 2025-11-29 08:29:26.845 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:26 np0005539564 nova_compute[226295]: 2025-11-29 08:29:26.859 231810 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:26 np0005539564 nova_compute[226295]: 2025-11-29 08:29:26.860 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[88928a06-c098-4fc8-a31e-50a0c562c293]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:26 np0005539564 nova_compute[226295]: 2025-11-29 08:29:26.861 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4c57d4-b9a4-4330-9809-926f9fbb5a50]: (4, '2e858761-3292-4a17-b38f-a169c3064289') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:26 np0005539564 nova_compute[226295]: 2025-11-29 08:29:26.862 226310 DEBUG oslo_concurrency.processutils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:26 np0005539564 nova_compute[226295]: 2025-11-29 08:29:26.914 226310 DEBUG oslo_concurrency.processutils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "nvme version" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:26 np0005539564 nova_compute[226295]: 2025-11-29 08:29:26.917 226310 DEBUG os_brick.initiator.connectors.lightos [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:29:26 np0005539564 nova_compute[226295]: 2025-11-29 08:29:26.918 226310 DEBUG os_brick.initiator.connectors.lightos [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:29:26 np0005539564 nova_compute[226295]: 2025-11-29 08:29:26.918 226310 DEBUG os_brick.initiator.connectors.lightos [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:29:26 np0005539564 nova_compute[226295]: 2025-11-29 08:29:26.918 226310 DEBUG os_brick.utils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] <== get_connector_properties: return (107ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d3b4384eec44', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2e858761-3292-4a17-b38f-a169c3064289', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:29:26 np0005539564 nova_compute[226295]: 2025-11-29 08:29:26.919 226310 DEBUG nova.virt.block_device [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Updating existing volume attachment record: abe74d4b-45fe-467e-87cb-a4e0f8dc4dba _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:29:27 np0005539564 nova_compute[226295]: 2025-11-29 08:29:27.231 226310 DEBUG nova.objects.instance [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'flavor' on Instance uuid bf4c2292-18d7-4c4b-b97e-abb227923156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:27 np0005539564 nova_compute[226295]: 2025-11-29 08:29:27.261 226310 DEBUG oslo_concurrency.lockutils [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "refresh_cache-bf4c2292-18d7-4c4b-b97e-abb227923156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:29:27 np0005539564 nova_compute[226295]: 2025-11-29 08:29:27.262 226310 DEBUG oslo_concurrency.lockutils [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquired lock "refresh_cache-bf4c2292-18d7-4c4b-b97e-abb227923156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:29:27 np0005539564 nova_compute[226295]: 2025-11-29 08:29:27.262 226310 DEBUG nova.network.neutron [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:29:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:27.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:27 np0005539564 nova_compute[226295]: 2025-11-29 08:29:27.263 226310 DEBUG nova.objects.instance [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'info_cache' on Instance uuid bf4c2292-18d7-4c4b-b97e-abb227923156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:27 np0005539564 podman[287232]: 2025-11-29 08:29:27.550051125 +0000 UTC m=+0.070402706 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:29:27 np0005539564 podman[287231]: 2025-11-29 08:29:27.571974008 +0000 UTC m=+0.103809280 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd)
Nov 29 03:29:27 np0005539564 podman[287230]: 2025-11-29 08:29:27.614319703 +0000 UTC m=+0.148672073 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 03:29:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:29:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:27.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:29:27 np0005539564 nova_compute[226295]: 2025-11-29 08:29:27.878 226310 DEBUG nova.network.neutron [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Successfully created port: 14db1945-d4bf-40c6-bc47-10a1048900de _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:29:28 np0005539564 nova_compute[226295]: 2025-11-29 08:29:28.251 226310 DEBUG nova.compute.manager [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:29:28 np0005539564 nova_compute[226295]: 2025-11-29 08:29:28.253 226310 DEBUG nova.virt.libvirt.driver [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:29:28 np0005539564 nova_compute[226295]: 2025-11-29 08:29:28.254 226310 INFO nova.virt.libvirt.driver [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Creating image(s)#033[00m
Nov 29 03:29:28 np0005539564 nova_compute[226295]: 2025-11-29 08:29:28.255 226310 DEBUG nova.virt.libvirt.driver [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:29:28 np0005539564 nova_compute[226295]: 2025-11-29 08:29:28.255 226310 DEBUG nova.virt.libvirt.driver [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Ensure instance console log exists: /var/lib/nova/instances/841820bc-c4f3-4198-b22c-ddd672e9cc75/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:29:28 np0005539564 nova_compute[226295]: 2025-11-29 08:29:28.256 226310 DEBUG oslo_concurrency.lockutils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:28 np0005539564 nova_compute[226295]: 2025-11-29 08:29:28.257 226310 DEBUG oslo_concurrency.lockutils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:28 np0005539564 nova_compute[226295]: 2025-11-29 08:29:28.257 226310 DEBUG oslo_concurrency.lockutils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:28 np0005539564 nova_compute[226295]: 2025-11-29 08:29:28.513 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:28 np0005539564 nova_compute[226295]: 2025-11-29 08:29:28.636 226310 DEBUG nova.compute.manager [req-6451443c-b0b3-4e5a-9cc3-19fc351b19ee req-28b0b5ba-b6f8-4f5e-9212-9bb60543685e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Received event network-vif-plugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:28 np0005539564 nova_compute[226295]: 2025-11-29 08:29:28.637 226310 DEBUG oslo_concurrency.lockutils [req-6451443c-b0b3-4e5a-9cc3-19fc351b19ee req-28b0b5ba-b6f8-4f5e-9212-9bb60543685e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:28 np0005539564 nova_compute[226295]: 2025-11-29 08:29:28.637 226310 DEBUG oslo_concurrency.lockutils [req-6451443c-b0b3-4e5a-9cc3-19fc351b19ee req-28b0b5ba-b6f8-4f5e-9212-9bb60543685e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:28 np0005539564 nova_compute[226295]: 2025-11-29 08:29:28.638 226310 DEBUG oslo_concurrency.lockutils [req-6451443c-b0b3-4e5a-9cc3-19fc351b19ee req-28b0b5ba-b6f8-4f5e-9212-9bb60543685e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:28 np0005539564 nova_compute[226295]: 2025-11-29 08:29:28.638 226310 DEBUG nova.compute.manager [req-6451443c-b0b3-4e5a-9cc3-19fc351b19ee req-28b0b5ba-b6f8-4f5e-9212-9bb60543685e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] No waiting events found dispatching network-vif-plugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:28 np0005539564 nova_compute[226295]: 2025-11-29 08:29:28.638 226310 WARNING nova.compute.manager [req-6451443c-b0b3-4e5a-9cc3-19fc351b19ee req-28b0b5ba-b6f8-4f5e-9212-9bb60543685e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Received unexpected event network-vif-plugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 for instance with vm_state stopped and task_state powering-on.#033[00m
Nov 29 03:29:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:29:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:29.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.552 226310 DEBUG nova.network.neutron [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Updating instance_info_cache with network_info: [{"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.681 226310 DEBUG oslo_concurrency.lockutils [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Releasing lock "refresh_cache-bf4c2292-18d7-4c4b-b97e-abb227923156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.725 226310 INFO nova.virt.libvirt.driver [-] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Instance destroyed successfully.#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.727 226310 DEBUG nova.objects.instance [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'numa_topology' on Instance uuid bf4c2292-18d7-4c4b-b97e-abb227923156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.742 226310 DEBUG nova.objects.instance [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'resources' on Instance uuid bf4c2292-18d7-4c4b-b97e-abb227923156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.757 226310 DEBUG nova.virt.libvirt.vif [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:28:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-958672118',display_name='tempest-AttachVolumeTestJSON-server-958672118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-958672118',id=165,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFdrbbSC0C45QsA4WoH0skfLoEJuEvLicsgsNB+yVNBuuMlISuhPWp9oFNtgFMFejIvvaZEt/QMuFcGBBZNxI6yPKa6qrk6Y9WbQ5vb9c2PbERoEO1CjXu5JuNOGCArooQ==',key_name='tempest-keypair-1718432726',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:28:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='eb0810bf6f5b4eb59638b7a2cf59ed5b',ramdisk_id='',reservation_id='r-unuosnba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-942041170',owner_user_name='tempest-AttachVolumeTestJSON-942041170-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:29:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a362a419f6a492aae2f102ad2bbd5e9',uuid=bf4c2292-18d7-4c4b-b97e-abb227923156,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.759 226310 DEBUG nova.network.os_vif_util [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Converting VIF {"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.761 226310 DEBUG nova.network.os_vif_util [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=bec1fbdf-d4dc-4b2c-af66-9ba123464651,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec1fbdf-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.762 226310 DEBUG os_vif [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=bec1fbdf-d4dc-4b2c-af66-9ba123464651,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec1fbdf-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.764 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.765 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbec1fbdf-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.767 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.770 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.773 226310 INFO os_vif [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=bec1fbdf-d4dc-4b2c-af66-9ba123464651,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec1fbdf-d4')#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.785 226310 DEBUG nova.virt.libvirt.driver [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Start _get_guest_xml network_info=[{"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vdb', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-bd7644b6-0d0f-4a70-962b-b60c03d49643', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'bd7644b6-0d0f-4a70-962b-b60c03d49643', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'bf4c2292-18d7-4c4b-b97e-abb227923156', 'attached_at': '', 'detached_at': '', 'volume_id': 'bd7644b6-0d0f-4a70-962b-b60c03d49643', 'serial': 'bd7644b6-0d0f-4a70-962b-b60c03d49643'}, 'guest_format': None, 'delete_on_termination': False, 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': 'c0fa40c4-7c54-4de4-bf87-07d04de2570d', 'boot_index': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.790 226310 WARNING nova.virt.libvirt.driver [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.795 226310 DEBUG nova.virt.libvirt.host [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.796 226310 DEBUG nova.virt.libvirt.host [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.799 226310 DEBUG nova.virt.libvirt.host [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.799 226310 DEBUG nova.virt.libvirt.host [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.801 226310 DEBUG nova.virt.libvirt.driver [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.801 226310 DEBUG nova.virt.hardware [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.802 226310 DEBUG nova.virt.hardware [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.802 226310 DEBUG nova.virt.hardware [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.802 226310 DEBUG nova.virt.hardware [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.803 226310 DEBUG nova.virt.hardware [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.803 226310 DEBUG nova.virt.hardware [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.805 226310 DEBUG nova.virt.hardware [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.806 226310 DEBUG nova.virt.hardware [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.806 226310 DEBUG nova.virt.hardware [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.806 226310 DEBUG nova.virt.hardware [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.807 226310 DEBUG nova.virt.hardware [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.807 226310 DEBUG nova.objects.instance [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'vcpu_model' on Instance uuid bf4c2292-18d7-4c4b-b97e-abb227923156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:29 np0005539564 nova_compute[226295]: 2025-11-29 08:29:29.823 226310 DEBUG oslo_concurrency.processutils [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:29.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:29:30 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2784251390' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:29:30 np0005539564 nova_compute[226295]: 2025-11-29 08:29:30.313 226310 DEBUG oslo_concurrency.processutils [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:30 np0005539564 nova_compute[226295]: 2025-11-29 08:29:30.359 226310 DEBUG oslo_concurrency.processutils [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:30 np0005539564 nova_compute[226295]: 2025-11-29 08:29:30.864 226310 DEBUG oslo_concurrency.processutils [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.167 226310 DEBUG nova.virt.libvirt.vif [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:28:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-958672118',display_name='tempest-AttachVolumeTestJSON-server-958672118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-958672118',id=165,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFdrbbSC0C45QsA4WoH0skfLoEJuEvLicsgsNB+yVNBuuMlISuhPWp9oFNtgFMFejIvvaZEt/QMuFcGBBZNxI6yPKa6qrk6Y9WbQ5vb9c2PbERoEO1CjXu5JuNOGCArooQ==',key_name='tempest-keypair-1718432726',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:28:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='eb0810bf6f5b4eb59638b7a2cf59ed5b',ramdisk_id='',reservation_id='r-unuosnba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-942041170',owner_user_name='tempest-AttachVolumeTestJSON-942041170-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:29:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a362a419f6a492aae2f102ad2bbd5e9',uuid=bf4c2292-18d7-4c4b-b97e-abb227923156,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.168 226310 DEBUG nova.network.os_vif_util [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Converting VIF {"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.170 226310 DEBUG nova.network.os_vif_util [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=bec1fbdf-d4dc-4b2c-af66-9ba123464651,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec1fbdf-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.172 226310 DEBUG nova.objects.instance [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'pci_devices' on Instance uuid bf4c2292-18d7-4c4b-b97e-abb227923156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.192 226310 DEBUG nova.virt.libvirt.driver [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:29:31 np0005539564 nova_compute[226295]:  <uuid>bf4c2292-18d7-4c4b-b97e-abb227923156</uuid>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:  <name>instance-000000a5</name>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <nova:name>tempest-AttachVolumeTestJSON-server-958672118</nova:name>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:29:29</nova:creationTime>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:29:31 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:        <nova:user uuid="7a362a419f6a492aae2f102ad2bbd5e9">tempest-AttachVolumeTestJSON-942041170-project-member</nova:user>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:        <nova:project uuid="eb0810bf6f5b4eb59638b7a2cf59ed5b">tempest-AttachVolumeTestJSON-942041170</nova:project>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:        <nova:port uuid="bec1fbdf-d4dc-4b2c-af66-9ba123464651">
Nov 29 03:29:31 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <entry name="serial">bf4c2292-18d7-4c4b-b97e-abb227923156</entry>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <entry name="uuid">bf4c2292-18d7-4c4b-b97e-abb227923156</entry>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/bf4c2292-18d7-4c4b-b97e-abb227923156_disk">
Nov 29 03:29:31 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:29:31 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/bf4c2292-18d7-4c4b-b97e-abb227923156_disk.config">
Nov 29 03:29:31 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:29:31 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="volumes/volume-bd7644b6-0d0f-4a70-962b-b60c03d49643">
Nov 29 03:29:31 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:29:31 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <serial>bd7644b6-0d0f-4a70-962b-b60c03d49643</serial>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:68:a4:7d"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <target dev="tapbec1fbdf-d4"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/bf4c2292-18d7-4c4b-b97e-abb227923156/console.log" append="off"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <input type="keyboard" bus="usb"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:29:31 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:29:31 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:29:31 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:29:31 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.195 226310 DEBUG nova.virt.libvirt.driver [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.196 226310 DEBUG nova.virt.libvirt.driver [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.196 226310 DEBUG nova.virt.libvirt.driver [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.198 226310 DEBUG nova.virt.libvirt.vif [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:28:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-958672118',display_name='tempest-AttachVolumeTestJSON-server-958672118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-958672118',id=165,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFdrbbSC0C45QsA4WoH0skfLoEJuEvLicsgsNB+yVNBuuMlISuhPWp9oFNtgFMFejIvvaZEt/QMuFcGBBZNxI6yPKa6qrk6Y9WbQ5vb9c2PbERoEO1CjXu5JuNOGCArooQ==',key_name='tempest-keypair-1718432726',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:28:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='eb0810bf6f5b4eb59638b7a2cf59ed5b',ramdisk_id='',reservation_id='r-unuosnba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-942041170',owner_user_name='tempest-AttachVolumeTestJSON-942041170-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:29:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a362a419f6a492aae2f102ad2bbd5e9',uuid=bf4c2292-18d7-4c4b-b97e-abb227923156,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.198 226310 DEBUG nova.network.os_vif_util [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Converting VIF {"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.199 226310 DEBUG nova.network.os_vif_util [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=bec1fbdf-d4dc-4b2c-af66-9ba123464651,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec1fbdf-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.200 226310 DEBUG os_vif [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=bec1fbdf-d4dc-4b2c-af66-9ba123464651,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec1fbdf-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.201 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.202 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.203 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.207 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.207 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbec1fbdf-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.208 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbec1fbdf-d4, col_values=(('external_ids', {'iface-id': 'bec1fbdf-d4dc-4b2c-af66-9ba123464651', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:a4:7d', 'vm-uuid': 'bf4c2292-18d7-4c4b-b97e-abb227923156'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:31 np0005539564 NetworkManager[48997]: <info>  [1764404971.2121] manager: (tapbec1fbdf-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/292)
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.210 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.215 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.217 226310 INFO os_vif [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=bec1fbdf-d4dc-4b2c-af66-9ba123464651,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec1fbdf-d4')#033[00m
Nov 29 03:29:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:31.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:31 np0005539564 kernel: tapbec1fbdf-d4: entered promiscuous mode
Nov 29 03:29:31 np0005539564 NetworkManager[48997]: <info>  [1764404971.5692] manager: (tapbec1fbdf-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/293)
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.570 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:31 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:31Z|00631|binding|INFO|Claiming lport bec1fbdf-d4dc-4b2c-af66-9ba123464651 for this chassis.
Nov 29 03:29:31 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:31Z|00632|binding|INFO|bec1fbdf-d4dc-4b2c-af66-9ba123464651: Claiming fa:16:3e:68:a4:7d 10.100.0.7
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.582 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:a4:7d 10.100.0.7'], port_security=['fa:16:3e:68:a4:7d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bf4c2292-18d7-4c4b-b97e-abb227923156', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eb0810bf6f5b4eb59638b7a2cf59ed5b', 'neutron:revision_number': '5', 'neutron:security_group_ids': '02abc3ce-c8f1-4034-8c00-97d80a9dca82', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.207'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1429573d-31ea-4b00-8580-1031fbde1ea5, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=bec1fbdf-d4dc-4b2c-af66-9ba123464651) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.585 139780 INFO neutron.agent.ovn.metadata.agent [-] Port bec1fbdf-d4dc-4b2c-af66-9ba123464651 in datapath a259ebcb-7cce-4363-8e50-c25ed4a3daec bound to our chassis#033[00m
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.588 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a259ebcb-7cce-4363-8e50-c25ed4a3daec#033[00m
Nov 29 03:29:31 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:31Z|00633|binding|INFO|Setting lport bec1fbdf-d4dc-4b2c-af66-9ba123464651 ovn-installed in OVS
Nov 29 03:29:31 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:31Z|00634|binding|INFO|Setting lport bec1fbdf-d4dc-4b2c-af66-9ba123464651 up in Southbound
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.598 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.601 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.609 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4bf9e003-2e3e-463c-82fa-2cd4ba72a713]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.610 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa259ebcb-71 in ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.613 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa259ebcb-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.614 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[fee93c3f-1519-4e0a-9fa7-24925174eea3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.615 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[de84f954-b327-4bbb-85c6-dbc9b516d696]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:31 np0005539564 systemd-udevd[287368]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:29:31 np0005539564 systemd-machined[190128]: New machine qemu-78-instance-000000a5.
Nov 29 03:29:31 np0005539564 NetworkManager[48997]: <info>  [1764404971.6321] device (tapbec1fbdf-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:29:31 np0005539564 NetworkManager[48997]: <info>  [1764404971.6333] device (tapbec1fbdf-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.636 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[905c86db-6ceb-4017-89d5-8af6af37bf99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:31 np0005539564 systemd[1]: Started Virtual Machine qemu-78-instance-000000a5.
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.670 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e14e01a2-3a57-4369-91b7-5f20fc251b00]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.717 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[4938c787-d0ac-4340-82fe-04df63d52351]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.725 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[78315b2e-b8bd-42a7-afeb-d7f4cf11923a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:31 np0005539564 NetworkManager[48997]: <info>  [1764404971.7263] manager: (tapa259ebcb-70): new Veth device (/org/freedesktop/NetworkManager/Devices/294)
Nov 29 03:29:31 np0005539564 systemd-udevd[287371]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.747 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.748 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.766 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[c261ad5f-f8e2-4e1d-b1d9-e1b175d47f97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.772 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[bb07efea-04dc-4c9a-a481-fe302ef2c2f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:31 np0005539564 NetworkManager[48997]: <info>  [1764404971.8055] device (tapa259ebcb-70): carrier: link connected
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.814 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[1a14d1e8-5407-478f-9a8e-abf35aa10d2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.833 226310 DEBUG nova.network.neutron [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Successfully updated port: 14db1945-d4bf-40c6-bc47-10a1048900de _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.836 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[950924c5-53a1-4c3c-a5a5-757364a86d8a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa259ebcb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:16:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 794648, 'reachable_time': 35324, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287402, 'error': None, 'target': 'ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.857 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3baad8d2-09b7-4f71-9b28-1cdc8a729c72]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef3:1647'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 794648, 'tstamp': 794648}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287403, 'error': None, 'target': 'ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.870 226310 DEBUG oslo_concurrency.lockutils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "refresh_cache-841820bc-c4f3-4198-b22c-ddd672e9cc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.870 226310 DEBUG oslo_concurrency.lockutils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquired lock "refresh_cache-841820bc-c4f3-4198-b22c-ddd672e9cc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.871 226310 DEBUG nova.network.neutron [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.877 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[076ba328-19ed-4c64-8636-bba64e1c4d11]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa259ebcb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:16:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 794648, 'reachable_time': 35324, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287404, 'error': None, 'target': 'ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:31.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.908 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3abe9ab3-1de5-4569-9305-d77e1db0eaea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.977 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7c172a27-8df1-48ae-9ca8-a743e0a0b57b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.979 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa259ebcb-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.979 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.980 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa259ebcb-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:31 np0005539564 kernel: tapa259ebcb-70: entered promiscuous mode
Nov 29 03:29:31 np0005539564 NetworkManager[48997]: <info>  [1764404971.9829] manager: (tapa259ebcb-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/295)
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.982 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.984 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.988 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa259ebcb-70, col_values=(('external_ids', {'iface-id': '5c5c4b01-f2eb-4ea5-9341-cfa577051cf7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.989 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:31 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:31Z|00635|binding|INFO|Releasing lport 5c5c4b01-f2eb-4ea5-9341-cfa577051cf7 from this chassis (sb_readonly=0)
Nov 29 03:29:31 np0005539564 nova_compute[226295]: 2025-11-29 08:29:31.990 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.991 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a259ebcb-7cce-4363-8e50-c25ed4a3daec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a259ebcb-7cce-4363-8e50-c25ed4a3daec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.992 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c9d37a96-ad31-4cbc-a528-76a551029ce5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.993 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-a259ebcb-7cce-4363-8e50-c25ed4a3daec
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/a259ebcb-7cce-4363-8e50-c25ed4a3daec.pid.haproxy
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID a259ebcb-7cce-4363-8e50-c25ed4a3daec
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:29:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:31.993 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'env', 'PROCESS_TAG=haproxy-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a259ebcb-7cce-4363-8e50-c25ed4a3daec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:29:32 np0005539564 nova_compute[226295]: 2025-11-29 08:29:32.005 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:32 np0005539564 nova_compute[226295]: 2025-11-29 08:29:32.134 226310 DEBUG nova.network.neutron [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:29:32 np0005539564 nova_compute[226295]: 2025-11-29 08:29:32.338 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Removed pending event for bf4c2292-18d7-4c4b-b97e-abb227923156 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:29:32 np0005539564 nova_compute[226295]: 2025-11-29 08:29:32.338 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404972.3370297, bf4c2292-18d7-4c4b-b97e-abb227923156 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:32 np0005539564 nova_compute[226295]: 2025-11-29 08:29:32.339 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:29:32 np0005539564 nova_compute[226295]: 2025-11-29 08:29:32.341 226310 DEBUG nova.compute.manager [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:29:32 np0005539564 nova_compute[226295]: 2025-11-29 08:29:32.344 226310 INFO nova.virt.libvirt.driver [-] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Instance rebooted successfully.#033[00m
Nov 29 03:29:32 np0005539564 nova_compute[226295]: 2025-11-29 08:29:32.345 226310 DEBUG nova.compute.manager [None req-e215cd81-72a4-4b5b-a2df-adeca15f0daf 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:32 np0005539564 podman[287496]: 2025-11-29 08:29:32.363378784 +0000 UTC m=+0.044232547 container create f44054d1927af8e839df5f31c5f2590a8093401a1fb83e0e208c545d15b1fa99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:29:32 np0005539564 nova_compute[226295]: 2025-11-29 08:29:32.397 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:32 np0005539564 systemd[1]: Started libpod-conmon-f44054d1927af8e839df5f31c5f2590a8093401a1fb83e0e208c545d15b1fa99.scope.
Nov 29 03:29:32 np0005539564 nova_compute[226295]: 2025-11-29 08:29:32.401 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:29:32 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:29:32 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a6a46dcce5874f05f0490d8869113d6b32bccedef09d1c6bded73e35c3c1bce/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:32 np0005539564 nova_compute[226295]: 2025-11-29 08:29:32.431 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Nov 29 03:29:32 np0005539564 nova_compute[226295]: 2025-11-29 08:29:32.432 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404972.337658, bf4c2292-18d7-4c4b-b97e-abb227923156 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:32 np0005539564 nova_compute[226295]: 2025-11-29 08:29:32.432 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] VM Started (Lifecycle Event)#033[00m
Nov 29 03:29:32 np0005539564 podman[287496]: 2025-11-29 08:29:32.34066299 +0000 UTC m=+0.021516753 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:29:32 np0005539564 podman[287496]: 2025-11-29 08:29:32.440411139 +0000 UTC m=+0.121265002 container init f44054d1927af8e839df5f31c5f2590a8093401a1fb83e0e208c545d15b1fa99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:29:32 np0005539564 podman[287496]: 2025-11-29 08:29:32.445198959 +0000 UTC m=+0.126052762 container start f44054d1927af8e839df5f31c5f2590a8093401a1fb83e0e208c545d15b1fa99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:29:32 np0005539564 nova_compute[226295]: 2025-11-29 08:29:32.462 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:32 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[287511]: [NOTICE]   (287515) : New worker (287517) forked
Nov 29 03:29:32 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[287511]: [NOTICE]   (287515) : Loading success.
Nov 29 03:29:32 np0005539564 nova_compute[226295]: 2025-11-29 08:29:32.466 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:29:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:32.508 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:29:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:33.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.458 226310 DEBUG nova.compute.manager [req-d2f6a8d9-930b-48d2-b745-df148fdda851 req-fd08278e-be7c-4382-bb1d-d114cc98e587 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Received event network-changed-14db1945-d4bf-40c6-bc47-10a1048900de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.458 226310 DEBUG nova.compute.manager [req-d2f6a8d9-930b-48d2-b745-df148fdda851 req-fd08278e-be7c-4382-bb1d-d114cc98e587 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Refreshing instance network info cache due to event network-changed-14db1945-d4bf-40c6-bc47-10a1048900de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.458 226310 DEBUG oslo_concurrency.lockutils [req-d2f6a8d9-930b-48d2-b745-df148fdda851 req-fd08278e-be7c-4382-bb1d-d114cc98e587 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-841820bc-c4f3-4198-b22c-ddd672e9cc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.468 226310 DEBUG nova.network.neutron [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Updating instance_info_cache with network_info: [{"id": "14db1945-d4bf-40c6-bc47-10a1048900de", "address": "fa:16:3e:6e:1b:52", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14db1945-d4", "ovs_interfaceid": "14db1945-d4bf-40c6-bc47-10a1048900de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.493 226310 DEBUG oslo_concurrency.lockutils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Releasing lock "refresh_cache-841820bc-c4f3-4198-b22c-ddd672e9cc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.494 226310 DEBUG nova.compute.manager [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Instance network_info: |[{"id": "14db1945-d4bf-40c6-bc47-10a1048900de", "address": "fa:16:3e:6e:1b:52", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14db1945-d4", "ovs_interfaceid": "14db1945-d4bf-40c6-bc47-10a1048900de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.494 226310 DEBUG oslo_concurrency.lockutils [req-d2f6a8d9-930b-48d2-b745-df148fdda851 req-fd08278e-be7c-4382-bb1d-d114cc98e587 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-841820bc-c4f3-4198-b22c-ddd672e9cc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.495 226310 DEBUG nova.network.neutron [req-d2f6a8d9-930b-48d2-b745-df148fdda851 req-fd08278e-be7c-4382-bb1d-d114cc98e587 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Refreshing network info cache for port 14db1945-d4bf-40c6-bc47-10a1048900de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.502 226310 DEBUG nova.virt.libvirt.driver [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Start _get_guest_xml network_info=[{"id": "14db1945-d4bf-40c6-bc47-10a1048900de", "address": "fa:16:3e:6e:1b:52", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14db1945-d4", "ovs_interfaceid": "14db1945-d4bf-40c6-bc47-10a1048900de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-5fdf7d48-8c9e-4f5c-9e3e-276135ea4364', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '5fdf7d48-8c9e-4f5c-9e3e-276135ea4364', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '841820bc-c4f3-4198-b22c-ddd672e9cc75', 'attached_at': '', 'detached_at': '', 'volume_id': '5fdf7d48-8c9e-4f5c-9e3e-276135ea4364', 'serial': '5fdf7d48-8c9e-4f5c-9e3e-276135ea4364', 'multiattach': True}, 'guest_format': None, 'delete_on_termination': False, 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': 'abe74d4b-45fe-467e-87cb-a4e0f8dc4dba', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.510 226310 WARNING nova.virt.libvirt.driver [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.521 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.526 226310 DEBUG nova.virt.libvirt.host [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.527 226310 DEBUG nova.virt.libvirt.host [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.533 226310 DEBUG nova.virt.libvirt.host [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.533 226310 DEBUG nova.virt.libvirt.host [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.535 226310 DEBUG nova.virt.libvirt.driver [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.536 226310 DEBUG nova.virt.hardware [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.537 226310 DEBUG nova.virt.hardware [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.537 226310 DEBUG nova.virt.hardware [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.538 226310 DEBUG nova.virt.hardware [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.538 226310 DEBUG nova.virt.hardware [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.538 226310 DEBUG nova.virt.hardware [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.539 226310 DEBUG nova.virt.hardware [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.539 226310 DEBUG nova.virt.hardware [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.540 226310 DEBUG nova.virt.hardware [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.540 226310 DEBUG nova.virt.hardware [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.541 226310 DEBUG nova.virt.hardware [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.588 226310 DEBUG nova.storage.rbd_utils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image 841820bc-c4f3-4198-b22c-ddd672e9cc75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.593 226310 DEBUG oslo_concurrency.processutils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:33.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.938 226310 DEBUG nova.compute.manager [req-ba3249dc-1fbb-4482-8d95-cd239982fc76 req-b5eff5a2-ff3e-4837-9175-394f05cc1f0e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Received event network-vif-plugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.939 226310 DEBUG oslo_concurrency.lockutils [req-ba3249dc-1fbb-4482-8d95-cd239982fc76 req-b5eff5a2-ff3e-4837-9175-394f05cc1f0e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.939 226310 DEBUG oslo_concurrency.lockutils [req-ba3249dc-1fbb-4482-8d95-cd239982fc76 req-b5eff5a2-ff3e-4837-9175-394f05cc1f0e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.940 226310 DEBUG oslo_concurrency.lockutils [req-ba3249dc-1fbb-4482-8d95-cd239982fc76 req-b5eff5a2-ff3e-4837-9175-394f05cc1f0e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.940 226310 DEBUG nova.compute.manager [req-ba3249dc-1fbb-4482-8d95-cd239982fc76 req-b5eff5a2-ff3e-4837-9175-394f05cc1f0e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] No waiting events found dispatching network-vif-plugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.940 226310 WARNING nova.compute.manager [req-ba3249dc-1fbb-4482-8d95-cd239982fc76 req-b5eff5a2-ff3e-4837-9175-394f05cc1f0e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Received unexpected event network-vif-plugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.941 226310 DEBUG nova.compute.manager [req-ba3249dc-1fbb-4482-8d95-cd239982fc76 req-b5eff5a2-ff3e-4837-9175-394f05cc1f0e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Received event network-vif-plugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.941 226310 DEBUG oslo_concurrency.lockutils [req-ba3249dc-1fbb-4482-8d95-cd239982fc76 req-b5eff5a2-ff3e-4837-9175-394f05cc1f0e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.941 226310 DEBUG oslo_concurrency.lockutils [req-ba3249dc-1fbb-4482-8d95-cd239982fc76 req-b5eff5a2-ff3e-4837-9175-394f05cc1f0e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.942 226310 DEBUG oslo_concurrency.lockutils [req-ba3249dc-1fbb-4482-8d95-cd239982fc76 req-b5eff5a2-ff3e-4837-9175-394f05cc1f0e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.942 226310 DEBUG nova.compute.manager [req-ba3249dc-1fbb-4482-8d95-cd239982fc76 req-b5eff5a2-ff3e-4837-9175-394f05cc1f0e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] No waiting events found dispatching network-vif-plugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:33 np0005539564 nova_compute[226295]: 2025-11-29 08:29:33.942 226310 WARNING nova.compute.manager [req-ba3249dc-1fbb-4482-8d95-cd239982fc76 req-b5eff5a2-ff3e-4837-9175-394f05cc1f0e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Received unexpected event network-vif-plugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:29:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e364 e364: 3 total, 3 up, 3 in
Nov 29 03:29:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:29:34 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1620097015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.064 226310 DEBUG oslo_concurrency.processutils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.106 226310 DEBUG nova.virt.libvirt.vif [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:29:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-1179306640',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-1179306640',id=169,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='23450c2eaf4442459dec94c6d29f0412',ramdisk_id='',reservation_id='r-gwuyevua',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-1454477111',owner_user_name='tempest-AttachVolumeMultiAttachTest-1454477111-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:29:26Z,user_data=None,user_id='b4f4d28745dd46e586642c84c051db39',uuid=841820bc-c4f3-4198-b22c-ddd672e9cc75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14db1945-d4bf-40c6-bc47-10a1048900de", "address": "fa:16:3e:6e:1b:52", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14db1945-d4", "ovs_interfaceid": "14db1945-d4bf-40c6-bc47-10a1048900de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.107 226310 DEBUG nova.network.os_vif_util [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converting VIF {"id": "14db1945-d4bf-40c6-bc47-10a1048900de", "address": "fa:16:3e:6e:1b:52", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14db1945-d4", "ovs_interfaceid": "14db1945-d4bf-40c6-bc47-10a1048900de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.108 226310 DEBUG nova.network.os_vif_util [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:1b:52,bridge_name='br-int',has_traffic_filtering=True,id=14db1945-d4bf-40c6-bc47-10a1048900de,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14db1945-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.110 226310 DEBUG nova.objects.instance [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'pci_devices' on Instance uuid 841820bc-c4f3-4198-b22c-ddd672e9cc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.125 226310 DEBUG nova.virt.libvirt.driver [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:29:34 np0005539564 nova_compute[226295]:  <uuid>841820bc-c4f3-4198-b22c-ddd672e9cc75</uuid>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:  <name>instance-000000a9</name>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <nova:name>tempest-AttachVolumeMultiAttachTest-server-1179306640</nova:name>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:29:33</nova:creationTime>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:29:34 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:        <nova:user uuid="b4f4d28745dd46e586642c84c051db39">tempest-AttachVolumeMultiAttachTest-1454477111-project-member</nova:user>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:        <nova:project uuid="23450c2eaf4442459dec94c6d29f0412">tempest-AttachVolumeMultiAttachTest-1454477111</nova:project>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:        <nova:port uuid="14db1945-d4bf-40c6-bc47-10a1048900de">
Nov 29 03:29:34 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <entry name="serial">841820bc-c4f3-4198-b22c-ddd672e9cc75</entry>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <entry name="uuid">841820bc-c4f3-4198-b22c-ddd672e9cc75</entry>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/841820bc-c4f3-4198-b22c-ddd672e9cc75_disk.config">
Nov 29 03:29:34 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:29:34 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="volumes/volume-5fdf7d48-8c9e-4f5c-9e3e-276135ea4364">
Nov 29 03:29:34 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:29:34 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <serial>5fdf7d48-8c9e-4f5c-9e3e-276135ea4364</serial>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <shareable/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:6e:1b:52"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <target dev="tap14db1945-d4"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/841820bc-c4f3-4198-b22c-ddd672e9cc75/console.log" append="off"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:29:34 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:29:34 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:29:34 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:29:34 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.125 226310 DEBUG nova.compute.manager [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Preparing to wait for external event network-vif-plugged-14db1945-d4bf-40c6-bc47-10a1048900de prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.125 226310 DEBUG oslo_concurrency.lockutils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.126 226310 DEBUG oslo_concurrency.lockutils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.126 226310 DEBUG oslo_concurrency.lockutils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.126 226310 DEBUG nova.virt.libvirt.vif [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:29:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-1179306640',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-1179306640',id=169,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='23450c2eaf4442459dec94c6d29f0412',ramdisk_id='',reservation_id='r-gwuyevua',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-1454477111',owner_user_name='tempest-AttachVolumeMultiAttachTest-1454477111-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:29:26Z,user_data=None,user_id='b4f4d28745dd46e586642c84c051db39',uuid=841820bc-c4f3-4198-b22c-ddd672e9cc75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14db1945-d4bf-40c6-bc47-10a1048900de", "address": "fa:16:3e:6e:1b:52", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14db1945-d4", "ovs_interfaceid": "14db1945-d4bf-40c6-bc47-10a1048900de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.127 226310 DEBUG nova.network.os_vif_util [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converting VIF {"id": "14db1945-d4bf-40c6-bc47-10a1048900de", "address": "fa:16:3e:6e:1b:52", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14db1945-d4", "ovs_interfaceid": "14db1945-d4bf-40c6-bc47-10a1048900de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.127 226310 DEBUG nova.network.os_vif_util [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:1b:52,bridge_name='br-int',has_traffic_filtering=True,id=14db1945-d4bf-40c6-bc47-10a1048900de,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14db1945-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.127 226310 DEBUG os_vif [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:1b:52,bridge_name='br-int',has_traffic_filtering=True,id=14db1945-d4bf-40c6-bc47-10a1048900de,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14db1945-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.128 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.128 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.128 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.130 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.131 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14db1945-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.131 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap14db1945-d4, col_values=(('external_ids', {'iface-id': '14db1945-d4bf-40c6-bc47-10a1048900de', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6e:1b:52', 'vm-uuid': '841820bc-c4f3-4198-b22c-ddd672e9cc75'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:34 np0005539564 NetworkManager[48997]: <info>  [1764404974.1851] manager: (tap14db1945-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/296)
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.183 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.188 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.192 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.194 226310 INFO os_vif [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:1b:52,bridge_name='br-int',has_traffic_filtering=True,id=14db1945-d4bf-40c6-bc47-10a1048900de,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14db1945-d4')#033[00m
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.806 226310 DEBUG nova.virt.libvirt.driver [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.807 226310 DEBUG nova.virt.libvirt.driver [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.807 226310 DEBUG nova.virt.libvirt.driver [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No VIF found with MAC fa:16:3e:6e:1b:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.808 226310 INFO nova.virt.libvirt.driver [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Using config drive#033[00m
Nov 29 03:29:34 np0005539564 nova_compute[226295]: 2025-11-29 08:29:34.838 226310 DEBUG nova.storage.rbd_utils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image 841820bc-c4f3-4198-b22c-ddd672e9cc75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:35.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:35 np0005539564 nova_compute[226295]: 2025-11-29 08:29:35.291 226310 INFO nova.virt.libvirt.driver [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Creating config drive at /var/lib/nova/instances/841820bc-c4f3-4198-b22c-ddd672e9cc75/disk.config#033[00m
Nov 29 03:29:35 np0005539564 nova_compute[226295]: 2025-11-29 08:29:35.296 226310 DEBUG oslo_concurrency.processutils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/841820bc-c4f3-4198-b22c-ddd672e9cc75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpapg7zt6i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:35 np0005539564 nova_compute[226295]: 2025-11-29 08:29:35.442 226310 DEBUG oslo_concurrency.processutils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/841820bc-c4f3-4198-b22c-ddd672e9cc75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpapg7zt6i" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:35 np0005539564 nova_compute[226295]: 2025-11-29 08:29:35.476 226310 DEBUG nova.storage.rbd_utils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image 841820bc-c4f3-4198-b22c-ddd672e9cc75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:35 np0005539564 nova_compute[226295]: 2025-11-29 08:29:35.480 226310 DEBUG oslo_concurrency.processutils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/841820bc-c4f3-4198-b22c-ddd672e9cc75/disk.config 841820bc-c4f3-4198-b22c-ddd672e9cc75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:35 np0005539564 nova_compute[226295]: 2025-11-29 08:29:35.667 226310 DEBUG oslo_concurrency.processutils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/841820bc-c4f3-4198-b22c-ddd672e9cc75/disk.config 841820bc-c4f3-4198-b22c-ddd672e9cc75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:35 np0005539564 nova_compute[226295]: 2025-11-29 08:29:35.668 226310 INFO nova.virt.libvirt.driver [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Deleting local config drive /var/lib/nova/instances/841820bc-c4f3-4198-b22c-ddd672e9cc75/disk.config because it was imported into RBD.#033[00m
Nov 29 03:29:35 np0005539564 kernel: tap14db1945-d4: entered promiscuous mode
Nov 29 03:29:35 np0005539564 NetworkManager[48997]: <info>  [1764404975.7305] manager: (tap14db1945-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/297)
Nov 29 03:29:35 np0005539564 nova_compute[226295]: 2025-11-29 08:29:35.732 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:35 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:35Z|00636|binding|INFO|Claiming lport 14db1945-d4bf-40c6-bc47-10a1048900de for this chassis.
Nov 29 03:29:35 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:35Z|00637|binding|INFO|14db1945-d4bf-40c6-bc47-10a1048900de: Claiming fa:16:3e:6e:1b:52 10.100.0.11
Nov 29 03:29:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:35.741 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:1b:52 10.100.0.11'], port_security=['fa:16:3e:6e:1b:52 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '841820bc-c4f3-4198-b22c-ddd672e9cc75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abbc8daa-d665-4e2f-bf74-9e57db481441', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23450c2eaf4442459dec94c6d29f0412', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f38b737a-f658-4b72-a53c-7f8397e745b2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6e85a088-d5fe-4b38-8043-a9acee66ccb5, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=14db1945-d4bf-40c6-bc47-10a1048900de) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:29:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:35.743 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 14db1945-d4bf-40c6-bc47-10a1048900de in datapath abbc8daa-d665-4e2f-bf74-9e57db481441 bound to our chassis#033[00m
Nov 29 03:29:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:35.745 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abbc8daa-d665-4e2f-bf74-9e57db481441#033[00m
Nov 29 03:29:35 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:35Z|00638|binding|INFO|Setting lport 14db1945-d4bf-40c6-bc47-10a1048900de ovn-installed in OVS
Nov 29 03:29:35 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:35Z|00639|binding|INFO|Setting lport 14db1945-d4bf-40c6-bc47-10a1048900de up in Southbound
Nov 29 03:29:35 np0005539564 nova_compute[226295]: 2025-11-29 08:29:35.759 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:35 np0005539564 nova_compute[226295]: 2025-11-29 08:29:35.762 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:35.769 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3630fbd8-3717-4207-9140-9272c49d330f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:35.770 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapabbc8daa-d1 in ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:29:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:35.775 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapabbc8daa-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:29:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:35.775 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[56233899-aefd-45d7-b2fe-341f0a92681d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:35.779 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[529a639a-7534-425f-9e1b-b1b3c376d202]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:35 np0005539564 systemd-machined[190128]: New machine qemu-79-instance-000000a9.
Nov 29 03:29:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:35.792 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[60a263ce-5bf1-4872-932f-119e82aba858]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:35 np0005539564 systemd[1]: Started Virtual Machine qemu-79-instance-000000a9.
Nov 29 03:29:35 np0005539564 systemd-udevd[287643]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:29:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:35.818 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f0097216-bba3-4f8b-a29b-da19f8b1ac4f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:35 np0005539564 NetworkManager[48997]: <info>  [1764404975.8235] device (tap14db1945-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:29:35 np0005539564 NetworkManager[48997]: <info>  [1764404975.8243] device (tap14db1945-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:29:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:35.852 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[87c6027f-bec8-4f68-8cfd-436b8b501ce3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:35 np0005539564 systemd-udevd[287645]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:29:35 np0005539564 NetworkManager[48997]: <info>  [1764404975.8584] manager: (tapabbc8daa-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/298)
Nov 29 03:29:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:35.857 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b0f4e38e-f85a-4f98-8a2a-134a0b3dc57f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:35.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:35.899 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[7cb61944-7fa8-4d2c-8940-0698dcc22d9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:35.902 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[1ae99674-4c54-4c57-8835-9b612dd0bd35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:35 np0005539564 NetworkManager[48997]: <info>  [1764404975.9247] device (tapabbc8daa-d0): carrier: link connected
Nov 29 03:29:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:35.929 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[7bb5061a-f60b-438b-ad36-a8c2a3bf4b0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:35.944 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1b21d32f-4132-4669-b685-75be5abcc76a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabbc8daa-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:89:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795060, 'reachable_time': 37119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287673, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:35.958 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b1df5bda-42ae-4874-8e5e-2610301304d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb6:892d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795060, 'tstamp': 795060}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287674, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:35.973 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ecb43f04-4f14-42ff-a7ed-d28873783bf4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabbc8daa-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:89:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795060, 'reachable_time': 37119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287675, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:36.006 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[edabdded-a505-4952-9da9-eb7275878922]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:36.066 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6f20febd-4f1d-4acc-8ef9-d4211b2b86f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:36.067 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabbc8daa-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:36.067 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:36.068 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabbc8daa-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:36 np0005539564 nova_compute[226295]: 2025-11-29 08:29:36.070 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:36 np0005539564 NetworkManager[48997]: <info>  [1764404976.0709] manager: (tapabbc8daa-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/299)
Nov 29 03:29:36 np0005539564 kernel: tapabbc8daa-d0: entered promiscuous mode
Nov 29 03:29:36 np0005539564 nova_compute[226295]: 2025-11-29 08:29:36.072 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:36.074 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabbc8daa-d0, col_values=(('external_ids', {'iface-id': 'fb65e0fb-a778-4ace-a666-dfdbc516af09'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:29:36 np0005539564 nova_compute[226295]: 2025-11-29 08:29:36.075 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:36 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:36Z|00640|binding|INFO|Releasing lport fb65e0fb-a778-4ace-a666-dfdbc516af09 from this chassis (sb_readonly=0)
Nov 29 03:29:36 np0005539564 nova_compute[226295]: 2025-11-29 08:29:36.094 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:36.095 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/abbc8daa-d665-4e2f-bf74-9e57db481441.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/abbc8daa-d665-4e2f-bf74-9e57db481441.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:36.097 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e04325ff-6f29-4283-a25c-70a50818aca3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:36.098 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-abbc8daa-d665-4e2f-bf74-9e57db481441
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/abbc8daa-d665-4e2f-bf74-9e57db481441.pid.haproxy
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID abbc8daa-d665-4e2f-bf74-9e57db481441
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
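The haproxy configuration above is logged one line at a time, each line carrying the syslog prefix. A minimal sketch for recovering the raw config text from such journal lines (the prefix pattern assumed here matches the `Mon DD HH:MM:SS host unit[pid]:` layout used in this log; it is not a general syslog parser):

```python
import re

# Matches the journal prefix seen in this log:
# "Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]: "
PREFIX = re.compile(r"^\w{3} +\d+ [\d:]+ \S+ \S+\[\d+\]: ?")

def reassemble(lines):
    """Strip the journal prefix from each line and rejoin the payload."""
    out = []
    for line in lines:
        m = PREFIX.match(line)
        if m:
            out.append(line[m.end():])
    return "\n".join(out)

# Hypothetical sample lines in the same shape as the log above.
sample = [
    "Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]: global",
    "Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024",
]
print(reassemble(sample))
```

The recovered text should match what the agent writes to `/var/lib/neutron/ovn-metadata-proxy/<network-id>.conf` before launching haproxy, as the subsequent `create_process` line shows.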
Nov 29 03:29:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:36.099 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'env', 'PROCESS_TAG=haproxy-abbc8daa-d665-4e2f-bf74-9e57db481441', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/abbc8daa-d665-4e2f-bf74-9e57db481441.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 03:29:36 np0005539564 nova_compute[226295]: 2025-11-29 08:29:36.307 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404976.306641, 841820bc-c4f3-4198-b22c-ddd672e9cc75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:29:36 np0005539564 nova_compute[226295]: 2025-11-29 08:29:36.307 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] VM Started (Lifecycle Event)
Nov 29 03:29:36 np0005539564 nova_compute[226295]: 2025-11-29 08:29:36.333 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:29:36 np0005539564 nova_compute[226295]: 2025-11-29 08:29:36.337 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404976.3068085, 841820bc-c4f3-4198-b22c-ddd672e9cc75 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:29:36 np0005539564 nova_compute[226295]: 2025-11-29 08:29:36.338 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] VM Paused (Lifecycle Event)
Nov 29 03:29:36 np0005539564 nova_compute[226295]: 2025-11-29 08:29:36.371 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:29:36 np0005539564 nova_compute[226295]: 2025-11-29 08:29:36.376 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:29:36 np0005539564 nova_compute[226295]: 2025-11-29 08:29:36.422 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:29:36 np0005539564 podman[287748]: 2025-11-29 08:29:36.485542066 +0000 UTC m=+0.049142021 container create 527cf55f4ae36b4499e232466196aaf837c6100539f48b8a5fad45d6ae7383f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 03:29:36 np0005539564 nova_compute[226295]: 2025-11-29 08:29:36.506 226310 DEBUG nova.network.neutron [req-d2f6a8d9-930b-48d2-b745-df148fdda851 req-fd08278e-be7c-4382-bb1d-d114cc98e587 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Updated VIF entry in instance network info cache for port 14db1945-d4bf-40c6-bc47-10a1048900de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:29:36 np0005539564 nova_compute[226295]: 2025-11-29 08:29:36.507 226310 DEBUG nova.network.neutron [req-d2f6a8d9-930b-48d2-b745-df148fdda851 req-fd08278e-be7c-4382-bb1d-d114cc98e587 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Updating instance_info_cache with network_info: [{"id": "14db1945-d4bf-40c6-bc47-10a1048900de", "address": "fa:16:3e:6e:1b:52", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14db1945-d4", "ovs_interfaceid": "14db1945-d4bf-40c6-bc47-10a1048900de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:29:36 np0005539564 systemd[1]: Started libpod-conmon-527cf55f4ae36b4499e232466196aaf837c6100539f48b8a5fad45d6ae7383f5.scope.
Nov 29 03:29:36 np0005539564 nova_compute[226295]: 2025-11-29 08:29:36.541 226310 DEBUG oslo_concurrency.lockutils [req-d2f6a8d9-930b-48d2-b745-df148fdda851 req-fd08278e-be7c-4382-bb1d-d114cc98e587 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-841820bc-c4f3-4198-b22c-ddd672e9cc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:29:36 np0005539564 podman[287748]: 2025-11-29 08:29:36.459510742 +0000 UTC m=+0.023110687 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:29:36 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:29:36 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66d53b37fa564082c8481e21604e9347a8d7d3f5d3d976451ab306ece688c4b0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:36 np0005539564 podman[287748]: 2025-11-29 08:29:36.581369668 +0000 UTC m=+0.144969693 container init 527cf55f4ae36b4499e232466196aaf837c6100539f48b8a5fad45d6ae7383f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Nov 29 03:29:36 np0005539564 podman[287748]: 2025-11-29 08:29:36.591906993 +0000 UTC m=+0.155506958 container start 527cf55f4ae36b4499e232466196aaf837c6100539f48b8a5fad45d6ae7383f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 03:29:36 np0005539564 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[287764]: [NOTICE]   (287768) : New worker (287770) forked
Nov 29 03:29:36 np0005539564 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[287764]: [NOTICE]   (287768) : Loading success.
Nov 29 03:29:37 np0005539564 nova_compute[226295]: 2025-11-29 08:29:37.028 226310 DEBUG nova.compute.manager [req-037a6122-e781-45c2-8365-93a4ba2548ea req-96fb9fdf-0bee-4271-9aa3-bac7b74aaafc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Received event network-vif-plugged-14db1945-d4bf-40c6-bc47-10a1048900de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:29:37 np0005539564 nova_compute[226295]: 2025-11-29 08:29:37.029 226310 DEBUG oslo_concurrency.lockutils [req-037a6122-e781-45c2-8365-93a4ba2548ea req-96fb9fdf-0bee-4271-9aa3-bac7b74aaafc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:29:37 np0005539564 nova_compute[226295]: 2025-11-29 08:29:37.030 226310 DEBUG oslo_concurrency.lockutils [req-037a6122-e781-45c2-8365-93a4ba2548ea req-96fb9fdf-0bee-4271-9aa3-bac7b74aaafc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:29:37 np0005539564 nova_compute[226295]: 2025-11-29 08:29:37.030 226310 DEBUG oslo_concurrency.lockutils [req-037a6122-e781-45c2-8365-93a4ba2548ea req-96fb9fdf-0bee-4271-9aa3-bac7b74aaafc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:29:37 np0005539564 nova_compute[226295]: 2025-11-29 08:29:37.030 226310 DEBUG nova.compute.manager [req-037a6122-e781-45c2-8365-93a4ba2548ea req-96fb9fdf-0bee-4271-9aa3-bac7b74aaafc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Processing event network-vif-plugged-14db1945-d4bf-40c6-bc47-10a1048900de _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:29:37 np0005539564 nova_compute[226295]: 2025-11-29 08:29:37.032 226310 DEBUG nova.compute.manager [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:29:37 np0005539564 nova_compute[226295]: 2025-11-29 08:29:37.038 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764404977.0380762, 841820bc-c4f3-4198-b22c-ddd672e9cc75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:29:37 np0005539564 nova_compute[226295]: 2025-11-29 08:29:37.039 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] VM Resumed (Lifecycle Event)
Nov 29 03:29:37 np0005539564 nova_compute[226295]: 2025-11-29 08:29:37.042 226310 DEBUG nova.virt.libvirt.driver [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:29:37 np0005539564 nova_compute[226295]: 2025-11-29 08:29:37.046 226310 INFO nova.virt.libvirt.driver [-] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Instance spawned successfully.
Nov 29 03:29:37 np0005539564 nova_compute[226295]: 2025-11-29 08:29:37.047 226310 DEBUG nova.virt.libvirt.driver [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:29:37 np0005539564 nova_compute[226295]: 2025-11-29 08:29:37.082 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:29:37 np0005539564 nova_compute[226295]: 2025-11-29 08:29:37.093 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:29:37 np0005539564 nova_compute[226295]: 2025-11-29 08:29:37.099 226310 DEBUG nova.virt.libvirt.driver [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:29:37 np0005539564 nova_compute[226295]: 2025-11-29 08:29:37.100 226310 DEBUG nova.virt.libvirt.driver [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:29:37 np0005539564 nova_compute[226295]: 2025-11-29 08:29:37.101 226310 DEBUG nova.virt.libvirt.driver [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:29:37 np0005539564 nova_compute[226295]: 2025-11-29 08:29:37.102 226310 DEBUG nova.virt.libvirt.driver [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:29:37 np0005539564 nova_compute[226295]: 2025-11-29 08:29:37.103 226310 DEBUG nova.virt.libvirt.driver [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:29:37 np0005539564 nova_compute[226295]: 2025-11-29 08:29:37.103 226310 DEBUG nova.virt.libvirt.driver [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:29:37 np0005539564 nova_compute[226295]: 2025-11-29 08:29:37.134 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:29:37 np0005539564 nova_compute[226295]: 2025-11-29 08:29:37.183 226310 INFO nova.compute.manager [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Took 8.93 seconds to spawn the instance on the hypervisor.
Nov 29 03:29:37 np0005539564 nova_compute[226295]: 2025-11-29 08:29:37.183 226310 DEBUG nova.compute.manager [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:29:37 np0005539564 nova_compute[226295]: 2025-11-29 08:29:37.253 226310 INFO nova.compute.manager [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Took 14.22 seconds to build instance.
Nov 29 03:29:37 np0005539564 nova_compute[226295]: 2025-11-29 08:29:37.271 226310 DEBUG oslo_concurrency.lockutils [None req-7e1dad2c-afc5-446b-b2b3-8ba6fda4d3fc b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "841820bc-c4f3-4198-b22c-ddd672e9cc75" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.511s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
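The build sequence above ends with nova's "Took N seconds to ..." summary lines (8.93 s to spawn, 14.22 s end to end). A minimal sketch for collecting those measurements per instance from lines of this shape; the `TOOK` pattern and `timings` helper are illustrative names, not nova APIs:

```python
import re

# Matches nova.compute.manager's end-of-boot summaries, e.g.
# "[instance: <uuid>] Took 14.22 seconds to build instance."
TOOK = re.compile(
    r"\[instance: (?P<uuid>[0-9a-f-]{36})\] "
    r"Took (?P<secs>[\d.]+) seconds to (?P<what>[^.]+)\."
)

def timings(lines):
    """Return {instance_uuid: {action: seconds}} for all matching lines."""
    out = {}
    for line in lines:
        m = TOOK.search(line)
        if m:
            out.setdefault(m["uuid"], {})[m["what"]] = float(m["secs"])
    return out

# Hypothetical sample lines with the same message shape as the log above.
sample = [
    "[instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Took 8.93 seconds "
    "to spawn the instance on the hypervisor.",
    "[instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Took 14.22 seconds "
    "to build instance.",
]
print(timings(sample))
```

Run over a full journal, this gives a quick per-instance view of where boot time went (hypervisor spawn versus the rest of build_and_run_instance).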
Nov 29 03:29:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:37.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:37 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:37.510 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:29:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:37.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:38 np0005539564 nova_compute[226295]: 2025-11-29 08:29:38.521 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:39 np0005539564 nova_compute[226295]: 2025-11-29 08:29:39.185 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:39.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:39 np0005539564 nova_compute[226295]: 2025-11-29 08:29:39.550 226310 DEBUG nova.compute.manager [req-60bc6b9e-282e-4256-982e-452c12cba372 req-daa10085-e1b5-441c-b559-aa868f84dba6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Received event network-vif-plugged-14db1945-d4bf-40c6-bc47-10a1048900de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:29:39 np0005539564 nova_compute[226295]: 2025-11-29 08:29:39.550 226310 DEBUG oslo_concurrency.lockutils [req-60bc6b9e-282e-4256-982e-452c12cba372 req-daa10085-e1b5-441c-b559-aa868f84dba6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:29:39 np0005539564 nova_compute[226295]: 2025-11-29 08:29:39.551 226310 DEBUG oslo_concurrency.lockutils [req-60bc6b9e-282e-4256-982e-452c12cba372 req-daa10085-e1b5-441c-b559-aa868f84dba6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:29:39 np0005539564 nova_compute[226295]: 2025-11-29 08:29:39.551 226310 DEBUG oslo_concurrency.lockutils [req-60bc6b9e-282e-4256-982e-452c12cba372 req-daa10085-e1b5-441c-b559-aa868f84dba6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:29:39 np0005539564 nova_compute[226295]: 2025-11-29 08:29:39.551 226310 DEBUG nova.compute.manager [req-60bc6b9e-282e-4256-982e-452c12cba372 req-daa10085-e1b5-441c-b559-aa868f84dba6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] No waiting events found dispatching network-vif-plugged-14db1945-d4bf-40c6-bc47-10a1048900de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:29:39 np0005539564 nova_compute[226295]: 2025-11-29 08:29:39.552 226310 WARNING nova.compute.manager [req-60bc6b9e-282e-4256-982e-452c12cba372 req-daa10085-e1b5-441c-b559-aa868f84dba6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Received unexpected event network-vif-plugged-14db1945-d4bf-40c6-bc47-10a1048900de for instance with vm_state active and task_state None.
Nov 29 03:29:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:39.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:41.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e365 e365: 3 total, 3 up, 3 in
Nov 29 03:29:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:41.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:42 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:42Z|00641|binding|INFO|Releasing lport bf759292-fede-4172-b0b8-efd6e3442b62 from this chassis (sb_readonly=0)
Nov 29 03:29:42 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:42Z|00642|binding|INFO|Releasing lport 5c5c4b01-f2eb-4ea5-9341-cfa577051cf7 from this chassis (sb_readonly=0)
Nov 29 03:29:42 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:42Z|00643|binding|INFO|Releasing lport fb65e0fb-a778-4ace-a666-dfdbc516af09 from this chassis (sb_readonly=0)
Nov 29 03:29:42 np0005539564 nova_compute[226295]: 2025-11-29 08:29:42.817 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:43.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:43 np0005539564 nova_compute[226295]: 2025-11-29 08:29:43.523 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e366 e366: 3 total, 3 up, 3 in
Nov 29 03:29:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:43.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:44 np0005539564 nova_compute[226295]: 2025-11-29 08:29:44.240 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:44 np0005539564 nova_compute[226295]: 2025-11-29 08:29:44.509 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.213 226310 DEBUG oslo_concurrency.lockutils [None req-15fc77b8-663e-42e6-b5fd-14631af7ae75 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "841820bc-c4f3-4198-b22c-ddd672e9cc75" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.214 226310 DEBUG oslo_concurrency.lockutils [None req-15fc77b8-663e-42e6-b5fd-14631af7ae75 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "841820bc-c4f3-4198-b22c-ddd672e9cc75" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.214 226310 DEBUG oslo_concurrency.lockutils [None req-15fc77b8-663e-42e6-b5fd-14631af7ae75 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.215 226310 DEBUG oslo_concurrency.lockutils [None req-15fc77b8-663e-42e6-b5fd-14631af7ae75 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.215 226310 DEBUG oslo_concurrency.lockutils [None req-15fc77b8-663e-42e6-b5fd-14631af7ae75 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.217 226310 INFO nova.compute.manager [None req-15fc77b8-663e-42e6-b5fd-14631af7ae75 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Terminating instance#033[00m
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.219 226310 DEBUG nova.compute.manager [None req-15fc77b8-663e-42e6-b5fd-14631af7ae75 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:29:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:45.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:45 np0005539564 kernel: tap14db1945-d4 (unregistering): left promiscuous mode
Nov 29 03:29:45 np0005539564 NetworkManager[48997]: <info>  [1764404985.3251] device (tap14db1945-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.401 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:45 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:45Z|00644|binding|INFO|Releasing lport 14db1945-d4bf-40c6-bc47-10a1048900de from this chassis (sb_readonly=0)
Nov 29 03:29:45 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:45Z|00645|binding|INFO|Setting lport 14db1945-d4bf-40c6-bc47-10a1048900de down in Southbound
Nov 29 03:29:45 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:45Z|00646|binding|INFO|Removing iface tap14db1945-d4 ovn-installed in OVS
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.404 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:45.413 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:1b:52 10.100.0.11'], port_security=['fa:16:3e:6e:1b:52 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '841820bc-c4f3-4198-b22c-ddd672e9cc75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abbc8daa-d665-4e2f-bf74-9e57db481441', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23450c2eaf4442459dec94c6d29f0412', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f38b737a-f658-4b72-a53c-7f8397e745b2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6e85a088-d5fe-4b38-8043-a9acee66ccb5, chassis=[], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=14db1945-d4bf-40c6-bc47-10a1048900de) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:29:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:45.414 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 14db1945-d4bf-40c6-bc47-10a1048900de in datapath abbc8daa-d665-4e2f-bf74-9e57db481441 unbound from our chassis#033[00m
Nov 29 03:29:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:45.416 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network abbc8daa-d665-4e2f-bf74-9e57db481441, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.417 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:45.417 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9951546d-442c-4d2f-8bd5-c932384e2f76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:45.418 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441 namespace which is not needed anymore#033[00m
Nov 29 03:29:45 np0005539564 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000a9.scope: Deactivated successfully.
Nov 29 03:29:45 np0005539564 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000a9.scope: Consumed 8.917s CPU time.
Nov 29 03:29:45 np0005539564 systemd-machined[190128]: Machine qemu-79-instance-000000a9 terminated.
Nov 29 03:29:45 np0005539564 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[287764]: [NOTICE]   (287768) : haproxy version is 2.8.14-c23fe91
Nov 29 03:29:45 np0005539564 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[287764]: [NOTICE]   (287768) : path to executable is /usr/sbin/haproxy
Nov 29 03:29:45 np0005539564 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[287764]: [WARNING]  (287768) : Exiting Master process...
Nov 29 03:29:45 np0005539564 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[287764]: [WARNING]  (287768) : Exiting Master process...
Nov 29 03:29:45 np0005539564 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[287764]: [ALERT]    (287768) : Current worker (287770) exited with code 143 (Terminated)
Nov 29 03:29:45 np0005539564 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[287764]: [WARNING]  (287768) : All workers exited. Exiting... (0)
Nov 29 03:29:45 np0005539564 systemd[1]: libpod-527cf55f4ae36b4499e232466196aaf837c6100539f48b8a5fad45d6ae7383f5.scope: Deactivated successfully.
Nov 29 03:29:45 np0005539564 podman[287803]: 2025-11-29 08:29:45.584347071 +0000 UTC m=+0.082244497 container died 527cf55f4ae36b4499e232466196aaf837c6100539f48b8a5fad45d6ae7383f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:29:45 np0005539564 kernel: tap14db1945-d4: entered promiscuous mode
Nov 29 03:29:45 np0005539564 systemd-udevd[287782]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.649 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:45 np0005539564 NetworkManager[48997]: <info>  [1764404985.6551] manager: (tap14db1945-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/300)
Nov 29 03:29:45 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:45Z|00647|binding|INFO|Claiming lport 14db1945-d4bf-40c6-bc47-10a1048900de for this chassis.
Nov 29 03:29:45 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:45Z|00648|binding|INFO|14db1945-d4bf-40c6-bc47-10a1048900de: Claiming fa:16:3e:6e:1b:52 10.100.0.11
Nov 29 03:29:45 np0005539564 kernel: tap14db1945-d4 (unregistering): left promiscuous mode
Nov 29 03:29:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:45.666 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:1b:52 10.100.0.11'], port_security=['fa:16:3e:6e:1b:52 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '841820bc-c4f3-4198-b22c-ddd672e9cc75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abbc8daa-d665-4e2f-bf74-9e57db481441', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23450c2eaf4442459dec94c6d29f0412', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f38b737a-f658-4b72-a53c-7f8397e745b2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6e85a088-d5fe-4b38-8043-a9acee66ccb5, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=14db1945-d4bf-40c6-bc47-10a1048900de) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:29:45 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-527cf55f4ae36b4499e232466196aaf837c6100539f48b8a5fad45d6ae7383f5-userdata-shm.mount: Deactivated successfully.
Nov 29 03:29:45 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:45Z|00649|binding|INFO|Setting lport 14db1945-d4bf-40c6-bc47-10a1048900de ovn-installed in OVS
Nov 29 03:29:45 np0005539564 systemd[1]: var-lib-containers-storage-overlay-66d53b37fa564082c8481e21604e9347a8d7d3f5d3d976451ab306ece688c4b0-merged.mount: Deactivated successfully.
Nov 29 03:29:45 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:45Z|00650|binding|INFO|Setting lport 14db1945-d4bf-40c6-bc47-10a1048900de up in Southbound
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.688 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.691 226310 INFO nova.virt.libvirt.driver [-] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Instance destroyed successfully.#033[00m
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.692 226310 DEBUG nova.objects.instance [None req-15fc77b8-663e-42e6-b5fd-14631af7ae75 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'resources' on Instance uuid 841820bc-c4f3-4198-b22c-ddd672e9cc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:45 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:45Z|00651|binding|INFO|Releasing lport 14db1945-d4bf-40c6-bc47-10a1048900de from this chassis (sb_readonly=0)
Nov 29 03:29:45 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:45Z|00652|binding|INFO|Setting lport 14db1945-d4bf-40c6-bc47-10a1048900de down in Southbound
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.697 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:45 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:45Z|00653|binding|INFO|Removing iface tap14db1945-d4 ovn-installed in OVS
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.698 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:45.704 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:1b:52 10.100.0.11'], port_security=['fa:16:3e:6e:1b:52 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '841820bc-c4f3-4198-b22c-ddd672e9cc75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abbc8daa-d665-4e2f-bf74-9e57db481441', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23450c2eaf4442459dec94c6d29f0412', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f38b737a-f658-4b72-a53c-7f8397e745b2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6e85a088-d5fe-4b38-8043-a9acee66ccb5, chassis=[], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=14db1945-d4bf-40c6-bc47-10a1048900de) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.718 226310 DEBUG nova.virt.libvirt.vif [None req-15fc77b8-663e-42e6-b5fd-14631af7ae75 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:29:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-1179306640',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-1179306640',id=169,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:29:37Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='23450c2eaf4442459dec94c6d29f0412',ramdisk_id='',reservation_id='r-gwuyevua',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-AttachVolumeMultiAttachTest-1454477111',owner_user_name='tempest-AttachVolumeMultiAttachTest-1454477111-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:29:37Z,user_data=None,user_id='b4f4d28745dd46e586642c84c051db39',uuid=841820bc-c4f3-4198-b22c-ddd672e9cc75,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "14db1945-d4bf-40c6-bc47-10a1048900de", "address": "fa:16:3e:6e:1b:52", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14db1945-d4", "ovs_interfaceid": "14db1945-d4bf-40c6-bc47-10a1048900de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.718 226310 DEBUG nova.network.os_vif_util [None req-15fc77b8-663e-42e6-b5fd-14631af7ae75 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converting VIF {"id": "14db1945-d4bf-40c6-bc47-10a1048900de", "address": "fa:16:3e:6e:1b:52", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14db1945-d4", "ovs_interfaceid": "14db1945-d4bf-40c6-bc47-10a1048900de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.719 226310 DEBUG nova.network.os_vif_util [None req-15fc77b8-663e-42e6-b5fd-14631af7ae75 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:1b:52,bridge_name='br-int',has_traffic_filtering=True,id=14db1945-d4bf-40c6-bc47-10a1048900de,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14db1945-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.720 226310 DEBUG os_vif [None req-15fc77b8-663e-42e6-b5fd-14631af7ae75 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:1b:52,bridge_name='br-int',has_traffic_filtering=True,id=14db1945-d4bf-40c6-bc47-10a1048900de,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14db1945-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:29:45 np0005539564 podman[287803]: 2025-11-29 08:29:45.724849262 +0000 UTC m=+0.222746688 container cleanup 527cf55f4ae36b4499e232466196aaf837c6100539f48b8a5fad45d6ae7383f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.725 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.726 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14db1945-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.727 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.728 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.730 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.732 226310 INFO os_vif [None req-15fc77b8-663e-42e6-b5fd-14631af7ae75 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:1b:52,bridge_name='br-int',has_traffic_filtering=True,id=14db1945-d4bf-40c6-bc47-10a1048900de,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14db1945-d4')#033[00m
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.735 226310 DEBUG nova.compute.manager [req-0881adc5-07d6-4e72-8580-572b8dd22c73 req-584fb2aa-f8f9-4cb7-9ee7-22c2cd0be8e5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Received event network-vif-unplugged-14db1945-d4bf-40c6-bc47-10a1048900de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.735 226310 DEBUG oslo_concurrency.lockutils [req-0881adc5-07d6-4e72-8580-572b8dd22c73 req-584fb2aa-f8f9-4cb7-9ee7-22c2cd0be8e5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.736 226310 DEBUG oslo_concurrency.lockutils [req-0881adc5-07d6-4e72-8580-572b8dd22c73 req-584fb2aa-f8f9-4cb7-9ee7-22c2cd0be8e5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:45 np0005539564 systemd[1]: libpod-conmon-527cf55f4ae36b4499e232466196aaf837c6100539f48b8a5fad45d6ae7383f5.scope: Deactivated successfully.
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.737 226310 DEBUG oslo_concurrency.lockutils [req-0881adc5-07d6-4e72-8580-572b8dd22c73 req-584fb2aa-f8f9-4cb7-9ee7-22c2cd0be8e5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.737 226310 DEBUG nova.compute.manager [req-0881adc5-07d6-4e72-8580-572b8dd22c73 req-584fb2aa-f8f9-4cb7-9ee7-22c2cd0be8e5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] No waiting events found dispatching network-vif-unplugged-14db1945-d4bf-40c6-bc47-10a1048900de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.738 226310 DEBUG nova.compute.manager [req-0881adc5-07d6-4e72-8580-572b8dd22c73 req-584fb2aa-f8f9-4cb7-9ee7-22c2cd0be8e5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Received event network-vif-unplugged-14db1945-d4bf-40c6-bc47-10a1048900de for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:29:45 np0005539564 podman[287838]: 2025-11-29 08:29:45.830747588 +0000 UTC m=+0.083286495 container remove 527cf55f4ae36b4499e232466196aaf837c6100539f48b8a5fad45d6ae7383f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:29:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:45.841 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a5f44c2c-57b8-4661-8314-f8144aa5bceb]: (4, ('Sat Nov 29 08:29:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441 (527cf55f4ae36b4499e232466196aaf837c6100539f48b8a5fad45d6ae7383f5)\n527cf55f4ae36b4499e232466196aaf837c6100539f48b8a5fad45d6ae7383f5\nSat Nov 29 08:29:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441 (527cf55f4ae36b4499e232466196aaf837c6100539f48b8a5fad45d6ae7383f5)\n527cf55f4ae36b4499e232466196aaf837c6100539f48b8a5fad45d6ae7383f5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:45.844 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a6df0cd7-7997-404e-935a-af120fe2d56b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:45.845 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabbc8daa-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.848 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:45 np0005539564 kernel: tapabbc8daa-d0: left promiscuous mode
Nov 29 03:29:45 np0005539564 nova_compute[226295]: 2025-11-29 08:29:45.875 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:45.880 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6514d0f5-0aa6-44d1-b11e-2f84424d029c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:45.900 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a1b6acdf-5206-445c-9733-4a788bec67ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:45.902 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[748d8183-d0ed-4aca-985c-4847629ca9aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:29:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:45.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:29:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:45.925 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f5064159-a6c8-48c7-bd7f-dc23e48bd214]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795052, 'reachable_time': 16907, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287860, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:45 np0005539564 systemd[1]: run-netns-ovnmeta\x2dabbc8daa\x2dd665\x2d4e2f\x2dbf74\x2d9e57db481441.mount: Deactivated successfully.
Nov 29 03:29:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:45.931 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:29:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:45.931 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[cde657d5-8263-455e-8acd-ad9bd58a42eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:45.932 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 14db1945-d4bf-40c6-bc47-10a1048900de in datapath abbc8daa-d665-4e2f-bf74-9e57db481441 unbound from our chassis#033[00m
Nov 29 03:29:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:45.933 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network abbc8daa-d665-4e2f-bf74-9e57db481441, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:29:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:45.934 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ff249ebb-bee4-445d-b474-e07fc9e0bb5a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:45.935 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 14db1945-d4bf-40c6-bc47-10a1048900de in datapath abbc8daa-d665-4e2f-bf74-9e57db481441 unbound from our chassis#033[00m
Nov 29 03:29:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:45.936 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network abbc8daa-d665-4e2f-bf74-9e57db481441, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:29:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:29:45.937 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f04184f5-6793-42fe-ac1e-90333833dc3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:46 np0005539564 nova_compute[226295]: 2025-11-29 08:29:46.798 226310 INFO nova.virt.libvirt.driver [None req-15fc77b8-663e-42e6-b5fd-14631af7ae75 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Deleting instance files /var/lib/nova/instances/841820bc-c4f3-4198-b22c-ddd672e9cc75_del#033[00m
Nov 29 03:29:46 np0005539564 nova_compute[226295]: 2025-11-29 08:29:46.801 226310 INFO nova.virt.libvirt.driver [None req-15fc77b8-663e-42e6-b5fd-14631af7ae75 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Deletion of /var/lib/nova/instances/841820bc-c4f3-4198-b22c-ddd672e9cc75_del complete#033[00m
Nov 29 03:29:46 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:46Z|00654|binding|INFO|Releasing lport bf759292-fede-4172-b0b8-efd6e3442b62 from this chassis (sb_readonly=0)
Nov 29 03:29:46 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:46Z|00655|binding|INFO|Releasing lport 5c5c4b01-f2eb-4ea5-9341-cfa577051cf7 from this chassis (sb_readonly=0)
Nov 29 03:29:46 np0005539564 nova_compute[226295]: 2025-11-29 08:29:46.864 226310 INFO nova.compute.manager [None req-15fc77b8-663e-42e6-b5fd-14631af7ae75 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Took 1.64 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:29:46 np0005539564 nova_compute[226295]: 2025-11-29 08:29:46.865 226310 DEBUG oslo.service.loopingcall [None req-15fc77b8-663e-42e6-b5fd-14631af7ae75 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:29:46 np0005539564 nova_compute[226295]: 2025-11-29 08:29:46.865 226310 DEBUG nova.compute.manager [-] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:29:46 np0005539564 nova_compute[226295]: 2025-11-29 08:29:46.866 226310 DEBUG nova.network.neutron [-] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:29:46 np0005539564 nova_compute[226295]: 2025-11-29 08:29:46.893 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:47 np0005539564 ovn_controller[130591]: 2025-11-29T08:29:47Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:68:a4:7d 10.100.0.7
Nov 29 03:29:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:29:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:47.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.817 226310 DEBUG nova.compute.manager [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Received event network-vif-plugged-14db1945-d4bf-40c6-bc47-10a1048900de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.817 226310 DEBUG oslo_concurrency.lockutils [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.817 226310 DEBUG oslo_concurrency.lockutils [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.818 226310 DEBUG oslo_concurrency.lockutils [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.818 226310 DEBUG nova.compute.manager [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] No waiting events found dispatching network-vif-plugged-14db1945-d4bf-40c6-bc47-10a1048900de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.818 226310 WARNING nova.compute.manager [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Received unexpected event network-vif-plugged-14db1945-d4bf-40c6-bc47-10a1048900de for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.819 226310 DEBUG nova.compute.manager [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Received event network-vif-plugged-14db1945-d4bf-40c6-bc47-10a1048900de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.819 226310 DEBUG oslo_concurrency.lockutils [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.819 226310 DEBUG oslo_concurrency.lockutils [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.819 226310 DEBUG oslo_concurrency.lockutils [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.820 226310 DEBUG nova.compute.manager [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] No waiting events found dispatching network-vif-plugged-14db1945-d4bf-40c6-bc47-10a1048900de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.820 226310 WARNING nova.compute.manager [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Received unexpected event network-vif-plugged-14db1945-d4bf-40c6-bc47-10a1048900de for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.820 226310 DEBUG nova.compute.manager [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Received event network-vif-plugged-14db1945-d4bf-40c6-bc47-10a1048900de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.820 226310 DEBUG oslo_concurrency.lockutils [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.821 226310 DEBUG oslo_concurrency.lockutils [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.821 226310 DEBUG oslo_concurrency.lockutils [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.821 226310 DEBUG nova.compute.manager [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] No waiting events found dispatching network-vif-plugged-14db1945-d4bf-40c6-bc47-10a1048900de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.821 226310 WARNING nova.compute.manager [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Received unexpected event network-vif-plugged-14db1945-d4bf-40c6-bc47-10a1048900de for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.822 226310 DEBUG nova.compute.manager [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Received event network-vif-unplugged-14db1945-d4bf-40c6-bc47-10a1048900de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.822 226310 DEBUG oslo_concurrency.lockutils [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.822 226310 DEBUG oslo_concurrency.lockutils [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.822 226310 DEBUG oslo_concurrency.lockutils [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.823 226310 DEBUG nova.compute.manager [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] No waiting events found dispatching network-vif-unplugged-14db1945-d4bf-40c6-bc47-10a1048900de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.823 226310 DEBUG nova.compute.manager [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Received event network-vif-unplugged-14db1945-d4bf-40c6-bc47-10a1048900de for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.823 226310 DEBUG nova.compute.manager [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Received event network-vif-plugged-14db1945-d4bf-40c6-bc47-10a1048900de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.823 226310 DEBUG oslo_concurrency.lockutils [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.824 226310 DEBUG oslo_concurrency.lockutils [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.824 226310 DEBUG oslo_concurrency.lockutils [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "841820bc-c4f3-4198-b22c-ddd672e9cc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.824 226310 DEBUG nova.compute.manager [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] No waiting events found dispatching network-vif-plugged-14db1945-d4bf-40c6-bc47-10a1048900de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.824 226310 WARNING nova.compute.manager [req-b85561f9-93c5-4d41-b47e-f322803ffd2a req-799b9005-332a-4108-b264-d2f769a04cdb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Received unexpected event network-vif-plugged-14db1945-d4bf-40c6-bc47-10a1048900de for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.903 226310 DEBUG nova.network.neutron [-] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:47.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:47 np0005539564 nova_compute[226295]: 2025-11-29 08:29:47.931 226310 INFO nova.compute.manager [-] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Took 1.06 seconds to deallocate network for instance.#033[00m
Nov 29 03:29:48 np0005539564 nova_compute[226295]: 2025-11-29 08:29:48.005 226310 DEBUG nova.compute.manager [req-6bf20140-f9d7-46f6-bfeb-32ed5833eef3 req-e2a070d1-0c21-4194-b862-58d6d1a58729 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Received event network-vif-deleted-14db1945-d4bf-40c6-bc47-10a1048900de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:48 np0005539564 nova_compute[226295]: 2025-11-29 08:29:48.128 226310 INFO nova.compute.manager [None req-15fc77b8-663e-42e6-b5fd-14631af7ae75 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Took 0.20 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:29:48 np0005539564 nova_compute[226295]: 2025-11-29 08:29:48.188 226310 DEBUG oslo_concurrency.lockutils [None req-15fc77b8-663e-42e6-b5fd-14631af7ae75 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:48 np0005539564 nova_compute[226295]: 2025-11-29 08:29:48.189 226310 DEBUG oslo_concurrency.lockutils [None req-15fc77b8-663e-42e6-b5fd-14631af7ae75 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:48 np0005539564 nova_compute[226295]: 2025-11-29 08:29:48.278 226310 DEBUG oslo_concurrency.processutils [None req-15fc77b8-663e-42e6-b5fd-14631af7ae75 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:48 np0005539564 nova_compute[226295]: 2025-11-29 08:29:48.525 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:29:48 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3484673518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:29:48 np0005539564 nova_compute[226295]: 2025-11-29 08:29:48.779 226310 DEBUG oslo_concurrency.processutils [None req-15fc77b8-663e-42e6-b5fd-14631af7ae75 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:48 np0005539564 nova_compute[226295]: 2025-11-29 08:29:48.785 226310 DEBUG nova.compute.provider_tree [None req-15fc77b8-663e-42e6-b5fd-14631af7ae75 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:29:48 np0005539564 nova_compute[226295]: 2025-11-29 08:29:48.875 226310 DEBUG nova.scheduler.client.report [None req-15fc77b8-663e-42e6-b5fd-14631af7ae75 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:29:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:49.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:49 np0005539564 nova_compute[226295]: 2025-11-29 08:29:49.631 226310 DEBUG oslo_concurrency.lockutils [None req-15fc77b8-663e-42e6-b5fd-14631af7ae75 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.442s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:49.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:50 np0005539564 nova_compute[226295]: 2025-11-29 08:29:50.270 226310 INFO nova.scheduler.client.report [None req-15fc77b8-663e-42e6-b5fd-14631af7ae75 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Deleted allocations for instance 841820bc-c4f3-4198-b22c-ddd672e9cc75#033[00m
Nov 29 03:29:50 np0005539564 nova_compute[226295]: 2025-11-29 08:29:50.404 226310 DEBUG oslo_concurrency.lockutils [None req-15fc77b8-663e-42e6-b5fd-14631af7ae75 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "841820bc-c4f3-4198-b22c-ddd672e9cc75" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:50 np0005539564 nova_compute[226295]: 2025-11-29 08:29:50.728 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:51.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:51.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e367 e367: 3 total, 3 up, 3 in
Nov 29 03:29:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:53.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:53 np0005539564 nova_compute[226295]: 2025-11-29 08:29:53.529 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:53 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:29:53 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:29:53 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:29:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:53.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e368 e368: 3 total, 3 up, 3 in
Nov 29 03:29:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:55.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:55 np0005539564 nova_compute[226295]: 2025-11-29 08:29:55.733 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:55.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:57.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:57 np0005539564 nova_compute[226295]: 2025-11-29 08:29:57.355 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:57 np0005539564 nova_compute[226295]: 2025-11-29 08:29:57.925 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:57.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:58 np0005539564 podman[288030]: 2025-11-29 08:29:58.53084186 +0000 UTC m=+0.077672272 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:29:58 np0005539564 nova_compute[226295]: 2025-11-29 08:29:58.532 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:58 np0005539564 podman[288028]: 2025-11-29 08:29:58.552188527 +0000 UTC m=+0.108000263 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:29:58 np0005539564 podman[288029]: 2025-11-29 08:29:58.560205195 +0000 UTC m=+0.107837899 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:29:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e369 e369: 3 total, 3 up, 3 in
Nov 29 03:29:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:59.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:59 np0005539564 nova_compute[226295]: 2025-11-29 08:29:59.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:29:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:59.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:00 np0005539564 nova_compute[226295]: 2025-11-29 08:30:00.683 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404985.6815505, 841820bc-c4f3-4198-b22c-ddd672e9cc75 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:00 np0005539564 nova_compute[226295]: 2025-11-29 08:30:00.683 226310 INFO nova.compute.manager [-] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:30:00 np0005539564 nova_compute[226295]: 2025-11-29 08:30:00.716 226310 DEBUG nova.compute.manager [None req-f4080f48-cf99-4677-b0b7-85b2a68a3476 - - - - - -] [instance: 841820bc-c4f3-4198-b22c-ddd672e9cc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:00 np0005539564 nova_compute[226295]: 2025-11-29 08:30:00.738 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:01.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:01 np0005539564 ceph-mon[81769]: overall HEALTH_OK
Nov 29 03:30:01 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:30:01 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:30:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:01.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:02 np0005539564 nova_compute[226295]: 2025-11-29 08:30:02.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:02 np0005539564 nova_compute[226295]: 2025-11-29 08:30:02.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:30:02 np0005539564 nova_compute[226295]: 2025-11-29 08:30:02.463 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:03.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:03 np0005539564 nova_compute[226295]: 2025-11-29 08:30:03.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:03 np0005539564 nova_compute[226295]: 2025-11-29 08:30:03.535 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:03.744 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:03.745 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:03.745 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:03.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:04 np0005539564 nova_compute[226295]: 2025-11-29 08:30:04.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:04 np0005539564 nova_compute[226295]: 2025-11-29 08:30:04.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:30:04 np0005539564 nova_compute[226295]: 2025-11-29 08:30:04.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:30:04 np0005539564 nova_compute[226295]: 2025-11-29 08:30:04.571 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-bf4c2292-18d7-4c4b-b97e-abb227923156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:30:04 np0005539564 nova_compute[226295]: 2025-11-29 08:30:04.571 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-bf4c2292-18d7-4c4b-b97e-abb227923156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:30:04 np0005539564 nova_compute[226295]: 2025-11-29 08:30:04.572 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:30:04 np0005539564 nova_compute[226295]: 2025-11-29 08:30:04.572 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid bf4c2292-18d7-4c4b-b97e-abb227923156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:05.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:05 np0005539564 nova_compute[226295]: 2025-11-29 08:30:05.741 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:05.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:06 np0005539564 nova_compute[226295]: 2025-11-29 08:30:06.224 226310 DEBUG oslo_concurrency.lockutils [None req-2377ca7a-1281-4a8d-982f-bc9fd221985f 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "bf4c2292-18d7-4c4b-b97e-abb227923156" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:06 np0005539564 nova_compute[226295]: 2025-11-29 08:30:06.224 226310 DEBUG oslo_concurrency.lockutils [None req-2377ca7a-1281-4a8d-982f-bc9fd221985f 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:06 np0005539564 nova_compute[226295]: 2025-11-29 08:30:06.238 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Updating instance_info_cache with network_info: [{"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:06 np0005539564 nova_compute[226295]: 2025-11-29 08:30:06.243 226310 INFO nova.compute.manager [None req-2377ca7a-1281-4a8d-982f-bc9fd221985f 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Detaching volume bd7644b6-0d0f-4a70-962b-b60c03d49643#033[00m
Nov 29 03:30:06 np0005539564 nova_compute[226295]: 2025-11-29 08:30:06.265 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-bf4c2292-18d7-4c4b-b97e-abb227923156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:30:06 np0005539564 nova_compute[226295]: 2025-11-29 08:30:06.265 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:30:06 np0005539564 nova_compute[226295]: 2025-11-29 08:30:06.266 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:06 np0005539564 nova_compute[226295]: 2025-11-29 08:30:06.267 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:06 np0005539564 nova_compute[226295]: 2025-11-29 08:30:06.547 226310 INFO nova.virt.block_device [None req-2377ca7a-1281-4a8d-982f-bc9fd221985f 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Attempting to driver detach volume bd7644b6-0d0f-4a70-962b-b60c03d49643 from mountpoint /dev/vdb#033[00m
Nov 29 03:30:06 np0005539564 nova_compute[226295]: 2025-11-29 08:30:06.566 226310 DEBUG nova.virt.libvirt.driver [None req-2377ca7a-1281-4a8d-982f-bc9fd221985f 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Attempting to detach device vdb from instance bf4c2292-18d7-4c4b-b97e-abb227923156 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:30:06 np0005539564 nova_compute[226295]: 2025-11-29 08:30:06.567 226310 DEBUG nova.virt.libvirt.guest [None req-2377ca7a-1281-4a8d-982f-bc9fd221985f 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:30:06 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:30:06 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-bd7644b6-0d0f-4a70-962b-b60c03d49643">
Nov 29 03:30:06 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:06 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:06 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:06 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:30:06 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:30:06 np0005539564 nova_compute[226295]:  <serial>bd7644b6-0d0f-4a70-962b-b60c03d49643</serial>
Nov 29 03:30:06 np0005539564 nova_compute[226295]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Nov 29 03:30:06 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:30:06 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:30:06 np0005539564 nova_compute[226295]: 2025-11-29 08:30:06.579 226310 INFO nova.virt.libvirt.driver [None req-2377ca7a-1281-4a8d-982f-bc9fd221985f 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Successfully detached device vdb from instance bf4c2292-18d7-4c4b-b97e-abb227923156 from the persistent domain config.#033[00m
Nov 29 03:30:06 np0005539564 nova_compute[226295]: 2025-11-29 08:30:06.580 226310 DEBUG nova.virt.libvirt.driver [None req-2377ca7a-1281-4a8d-982f-bc9fd221985f 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance bf4c2292-18d7-4c4b-b97e-abb227923156 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:30:06 np0005539564 nova_compute[226295]: 2025-11-29 08:30:06.582 226310 DEBUG nova.virt.libvirt.guest [None req-2377ca7a-1281-4a8d-982f-bc9fd221985f 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:30:06 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:30:06 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-bd7644b6-0d0f-4a70-962b-b60c03d49643">
Nov 29 03:30:06 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:06 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:06 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:06 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:30:06 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:30:06 np0005539564 nova_compute[226295]:  <serial>bd7644b6-0d0f-4a70-962b-b60c03d49643</serial>
Nov 29 03:30:06 np0005539564 nova_compute[226295]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Nov 29 03:30:06 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:30:06 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:30:06 np0005539564 nova_compute[226295]: 2025-11-29 08:30:06.713 226310 DEBUG nova.virt.libvirt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Received event <DeviceRemovedEvent: 1764405006.7128735, bf4c2292-18d7-4c4b-b97e-abb227923156 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:30:06 np0005539564 nova_compute[226295]: 2025-11-29 08:30:06.716 226310 DEBUG nova.virt.libvirt.driver [None req-2377ca7a-1281-4a8d-982f-bc9fd221985f 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance bf4c2292-18d7-4c4b-b97e-abb227923156 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:30:06 np0005539564 nova_compute[226295]: 2025-11-29 08:30:06.719 226310 INFO nova.virt.libvirt.driver [None req-2377ca7a-1281-4a8d-982f-bc9fd221985f 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Successfully detached device vdb from instance bf4c2292-18d7-4c4b-b97e-abb227923156 from the live domain config.#033[00m
Nov 29 03:30:06 np0005539564 nova_compute[226295]: 2025-11-29 08:30:06.892 226310 DEBUG nova.objects.instance [None req-2377ca7a-1281-4a8d-982f-bc9fd221985f 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'flavor' on Instance uuid bf4c2292-18d7-4c4b-b97e-abb227923156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:06 np0005539564 nova_compute[226295]: 2025-11-29 08:30:06.944 226310 DEBUG oslo_concurrency.lockutils [None req-2377ca7a-1281-4a8d-982f-bc9fd221985f 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:07.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:07 np0005539564 nova_compute[226295]: 2025-11-29 08:30:07.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:07 np0005539564 nova_compute[226295]: 2025-11-29 08:30:07.671 226310 DEBUG oslo_concurrency.lockutils [None req-08e9d5ff-cf5f-422a-b187-b5711a97d1b2 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "bf4c2292-18d7-4c4b-b97e-abb227923156" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:07 np0005539564 nova_compute[226295]: 2025-11-29 08:30:07.672 226310 DEBUG oslo_concurrency.lockutils [None req-08e9d5ff-cf5f-422a-b187-b5711a97d1b2 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:07 np0005539564 nova_compute[226295]: 2025-11-29 08:30:07.673 226310 DEBUG nova.compute.manager [None req-08e9d5ff-cf5f-422a-b187-b5711a97d1b2 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:07 np0005539564 nova_compute[226295]: 2025-11-29 08:30:07.679 226310 DEBUG nova.compute.manager [None req-08e9d5ff-cf5f-422a-b187-b5711a97d1b2 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 29 03:30:07 np0005539564 nova_compute[226295]: 2025-11-29 08:30:07.680 226310 DEBUG nova.objects.instance [None req-08e9d5ff-cf5f-422a-b187-b5711a97d1b2 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'flavor' on Instance uuid bf4c2292-18d7-4c4b-b97e-abb227923156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:07 np0005539564 nova_compute[226295]: 2025-11-29 08:30:07.717 226310 DEBUG nova.virt.libvirt.driver [None req-08e9d5ff-cf5f-422a-b187-b5711a97d1b2 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:30:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:07.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:08 np0005539564 nova_compute[226295]: 2025-11-29 08:30:08.124 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:08 np0005539564 nova_compute[226295]: 2025-11-29 08:30:08.538 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:09.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:09.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:10 np0005539564 kernel: tapbec1fbdf-d4 (unregistering): left promiscuous mode
Nov 29 03:30:10 np0005539564 NetworkManager[48997]: <info>  [1764405010.0450] device (tapbec1fbdf-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:30:10 np0005539564 ovn_controller[130591]: 2025-11-29T08:30:10Z|00656|binding|INFO|Releasing lport bec1fbdf-d4dc-4b2c-af66-9ba123464651 from this chassis (sb_readonly=0)
Nov 29 03:30:10 np0005539564 ovn_controller[130591]: 2025-11-29T08:30:10Z|00657|binding|INFO|Setting lport bec1fbdf-d4dc-4b2c-af66-9ba123464651 down in Southbound
Nov 29 03:30:10 np0005539564 nova_compute[226295]: 2025-11-29 08:30:10.058 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:10 np0005539564 ovn_controller[130591]: 2025-11-29T08:30:10Z|00658|binding|INFO|Removing iface tapbec1fbdf-d4 ovn-installed in OVS
Nov 29 03:30:10 np0005539564 nova_compute[226295]: 2025-11-29 08:30:10.062 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:10.072 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:a4:7d 10.100.0.7'], port_security=['fa:16:3e:68:a4:7d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bf4c2292-18d7-4c4b-b97e-abb227923156', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eb0810bf6f5b4eb59638b7a2cf59ed5b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '02abc3ce-c8f1-4034-8c00-97d80a9dca82', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.207', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1429573d-31ea-4b00-8580-1031fbde1ea5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=bec1fbdf-d4dc-4b2c-af66-9ba123464651) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:10.073 139780 INFO neutron.agent.ovn.metadata.agent [-] Port bec1fbdf-d4dc-4b2c-af66-9ba123464651 in datapath a259ebcb-7cce-4363-8e50-c25ed4a3daec unbound from our chassis#033[00m
Nov 29 03:30:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:10.076 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a259ebcb-7cce-4363-8e50-c25ed4a3daec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:30:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:10.077 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[321c95ca-b85d-4b69-9c3d-be47b797c6f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:10.077 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec namespace which is not needed anymore#033[00m
Nov 29 03:30:10 np0005539564 nova_compute[226295]: 2025-11-29 08:30:10.094 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:10 np0005539564 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000a5.scope: Deactivated successfully.
Nov 29 03:30:10 np0005539564 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000a5.scope: Consumed 16.598s CPU time.
Nov 29 03:30:10 np0005539564 systemd-machined[190128]: Machine qemu-78-instance-000000a5 terminated.
Nov 29 03:30:10 np0005539564 nova_compute[226295]: 2025-11-29 08:30:10.213 226310 DEBUG nova.compute.manager [req-99466258-cc08-4d39-aad7-741a663ffb20 req-583c4f5c-0499-4797-9a1c-9118ae869648 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Received event network-vif-unplugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:10 np0005539564 nova_compute[226295]: 2025-11-29 08:30:10.213 226310 DEBUG oslo_concurrency.lockutils [req-99466258-cc08-4d39-aad7-741a663ffb20 req-583c4f5c-0499-4797-9a1c-9118ae869648 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:10 np0005539564 nova_compute[226295]: 2025-11-29 08:30:10.214 226310 DEBUG oslo_concurrency.lockutils [req-99466258-cc08-4d39-aad7-741a663ffb20 req-583c4f5c-0499-4797-9a1c-9118ae869648 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:10 np0005539564 nova_compute[226295]: 2025-11-29 08:30:10.214 226310 DEBUG oslo_concurrency.lockutils [req-99466258-cc08-4d39-aad7-741a663ffb20 req-583c4f5c-0499-4797-9a1c-9118ae869648 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:10 np0005539564 nova_compute[226295]: 2025-11-29 08:30:10.215 226310 DEBUG nova.compute.manager [req-99466258-cc08-4d39-aad7-741a663ffb20 req-583c4f5c-0499-4797-9a1c-9118ae869648 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] No waiting events found dispatching network-vif-unplugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:10 np0005539564 nova_compute[226295]: 2025-11-29 08:30:10.215 226310 WARNING nova.compute.manager [req-99466258-cc08-4d39-aad7-741a663ffb20 req-583c4f5c-0499-4797-9a1c-9118ae869648 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Received unexpected event network-vif-unplugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 for instance with vm_state active and task_state powering-off.#033[00m
Nov 29 03:30:10 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[287511]: [NOTICE]   (287515) : haproxy version is 2.8.14-c23fe91
Nov 29 03:30:10 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[287511]: [NOTICE]   (287515) : path to executable is /usr/sbin/haproxy
Nov 29 03:30:10 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[287511]: [WARNING]  (287515) : Exiting Master process...
Nov 29 03:30:10 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[287511]: [ALERT]    (287515) : Current worker (287517) exited with code 143 (Terminated)
Nov 29 03:30:10 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[287511]: [WARNING]  (287515) : All workers exited. Exiting... (0)
Nov 29 03:30:10 np0005539564 systemd[1]: libpod-f44054d1927af8e839df5f31c5f2590a8093401a1fb83e0e208c545d15b1fa99.scope: Deactivated successfully.
Nov 29 03:30:10 np0005539564 podman[288162]: 2025-11-29 08:30:10.229476174 +0000 UTC m=+0.046028056 container died f44054d1927af8e839df5f31c5f2590a8093401a1fb83e0e208c545d15b1fa99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:30:10 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f44054d1927af8e839df5f31c5f2590a8093401a1fb83e0e208c545d15b1fa99-userdata-shm.mount: Deactivated successfully.
Nov 29 03:30:10 np0005539564 systemd[1]: var-lib-containers-storage-overlay-6a6a46dcce5874f05f0490d8869113d6b32bccedef09d1c6bded73e35c3c1bce-merged.mount: Deactivated successfully.
Nov 29 03:30:10 np0005539564 podman[288162]: 2025-11-29 08:30:10.29509897 +0000 UTC m=+0.111650812 container cleanup f44054d1927af8e839df5f31c5f2590a8093401a1fb83e0e208c545d15b1fa99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:30:10 np0005539564 systemd[1]: libpod-conmon-f44054d1927af8e839df5f31c5f2590a8093401a1fb83e0e208c545d15b1fa99.scope: Deactivated successfully.
Nov 29 03:30:10 np0005539564 podman[288201]: 2025-11-29 08:30:10.378681801 +0000 UTC m=+0.052869461 container remove f44054d1927af8e839df5f31c5f2590a8093401a1fb83e0e208c545d15b1fa99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:30:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:10.385 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a6caca8c-ff69-4e0c-980c-b746df942bac]: (4, ('Sat Nov 29 08:30:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec (f44054d1927af8e839df5f31c5f2590a8093401a1fb83e0e208c545d15b1fa99)\nf44054d1927af8e839df5f31c5f2590a8093401a1fb83e0e208c545d15b1fa99\nSat Nov 29 08:30:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec (f44054d1927af8e839df5f31c5f2590a8093401a1fb83e0e208c545d15b1fa99)\nf44054d1927af8e839df5f31c5f2590a8093401a1fb83e0e208c545d15b1fa99\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:10.389 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[681074b2-0de1-452e-a99b-1ab9127687e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:10.391 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa259ebcb-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:10 np0005539564 nova_compute[226295]: 2025-11-29 08:30:10.394 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:10 np0005539564 kernel: tapa259ebcb-70: left promiscuous mode
Nov 29 03:30:10 np0005539564 nova_compute[226295]: 2025-11-29 08:30:10.418 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:10.421 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b9ef5058-ebc9-4bc9-b1ee-351d7662e29b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:10.443 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[54eea868-c9f0-424c-b17b-97d84b0550d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:10.445 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c0ce8fb4-7363-4dda-b926-e6165903c781]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:10.465 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[40965606-9855-4e26-9afd-031b70381107]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 794638, 'reachable_time': 43374, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288221, 'error': None, 'target': 'ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:10.469 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:30:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:10.469 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[b1c89e7d-95de-4a29-9ccf-de81f32ee3de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:10 np0005539564 systemd[1]: run-netns-ovnmeta\x2da259ebcb\x2d7cce\x2d4363\x2d8e50\x2dc25ed4a3daec.mount: Deactivated successfully.
Nov 29 03:30:10 np0005539564 nova_compute[226295]: 2025-11-29 08:30:10.639 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:10 np0005539564 nova_compute[226295]: 2025-11-29 08:30:10.736 226310 INFO nova.virt.libvirt.driver [None req-08e9d5ff-cf5f-422a-b187-b5711a97d1b2 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:30:10 np0005539564 nova_compute[226295]: 2025-11-29 08:30:10.743 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:10 np0005539564 nova_compute[226295]: 2025-11-29 08:30:10.745 226310 INFO nova.virt.libvirt.driver [-] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Instance destroyed successfully.#033[00m
Nov 29 03:30:10 np0005539564 nova_compute[226295]: 2025-11-29 08:30:10.746 226310 DEBUG nova.objects.instance [None req-08e9d5ff-cf5f-422a-b187-b5711a97d1b2 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'numa_topology' on Instance uuid bf4c2292-18d7-4c4b-b97e-abb227923156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:10 np0005539564 nova_compute[226295]: 2025-11-29 08:30:10.770 226310 DEBUG nova.compute.manager [None req-08e9d5ff-cf5f-422a-b187-b5711a97d1b2 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:10 np0005539564 nova_compute[226295]: 2025-11-29 08:30:10.823 226310 DEBUG oslo_concurrency.lockutils [None req-08e9d5ff-cf5f-422a-b187-b5711a97d1b2 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:11.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:11 np0005539564 nova_compute[226295]: 2025-11-29 08:30:11.527 226310 DEBUG nova.objects.instance [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'flavor' on Instance uuid bf4c2292-18d7-4c4b-b97e-abb227923156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:11 np0005539564 nova_compute[226295]: 2025-11-29 08:30:11.548 226310 DEBUG oslo_concurrency.lockutils [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "refresh_cache-bf4c2292-18d7-4c4b-b97e-abb227923156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:30:11 np0005539564 nova_compute[226295]: 2025-11-29 08:30:11.548 226310 DEBUG oslo_concurrency.lockutils [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquired lock "refresh_cache-bf4c2292-18d7-4c4b-b97e-abb227923156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:30:11 np0005539564 nova_compute[226295]: 2025-11-29 08:30:11.548 226310 DEBUG nova.network.neutron [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:30:11 np0005539564 nova_compute[226295]: 2025-11-29 08:30:11.549 226310 DEBUG nova.objects.instance [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'info_cache' on Instance uuid bf4c2292-18d7-4c4b-b97e-abb227923156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:11.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:12 np0005539564 nova_compute[226295]: 2025-11-29 08:30:12.560 226310 DEBUG nova.compute.manager [req-95495639-5516-4b2a-ae1f-55507cbd4ab4 req-c1553bb2-fe13-4ca1-b1c4-6b4c2f272063 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Received event network-vif-plugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:12 np0005539564 nova_compute[226295]: 2025-11-29 08:30:12.561 226310 DEBUG oslo_concurrency.lockutils [req-95495639-5516-4b2a-ae1f-55507cbd4ab4 req-c1553bb2-fe13-4ca1-b1c4-6b4c2f272063 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:12 np0005539564 nova_compute[226295]: 2025-11-29 08:30:12.562 226310 DEBUG oslo_concurrency.lockutils [req-95495639-5516-4b2a-ae1f-55507cbd4ab4 req-c1553bb2-fe13-4ca1-b1c4-6b4c2f272063 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:12 np0005539564 nova_compute[226295]: 2025-11-29 08:30:12.562 226310 DEBUG oslo_concurrency.lockutils [req-95495639-5516-4b2a-ae1f-55507cbd4ab4 req-c1553bb2-fe13-4ca1-b1c4-6b4c2f272063 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:12 np0005539564 nova_compute[226295]: 2025-11-29 08:30:12.563 226310 DEBUG nova.compute.manager [req-95495639-5516-4b2a-ae1f-55507cbd4ab4 req-c1553bb2-fe13-4ca1-b1c4-6b4c2f272063 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] No waiting events found dispatching network-vif-plugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:12 np0005539564 nova_compute[226295]: 2025-11-29 08:30:12.563 226310 WARNING nova.compute.manager [req-95495639-5516-4b2a-ae1f-55507cbd4ab4 req-c1553bb2-fe13-4ca1-b1c4-6b4c2f272063 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Received unexpected event network-vif-plugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 for instance with vm_state stopped and task_state powering-on.#033[00m
Nov 29 03:30:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:13.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:13 np0005539564 nova_compute[226295]: 2025-11-29 08:30:13.539 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:13.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.517 226310 DEBUG nova.network.neutron [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Updating instance_info_cache with network_info: [{"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.536 226310 DEBUG oslo_concurrency.lockutils [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Releasing lock "refresh_cache-bf4c2292-18d7-4c4b-b97e-abb227923156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.572 226310 INFO nova.virt.libvirt.driver [-] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Instance destroyed successfully.#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.573 226310 DEBUG nova.objects.instance [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'numa_topology' on Instance uuid bf4c2292-18d7-4c4b-b97e-abb227923156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.594 226310 DEBUG nova.objects.instance [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'resources' on Instance uuid bf4c2292-18d7-4c4b-b97e-abb227923156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.611 226310 DEBUG nova.virt.libvirt.vif [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:28:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-958672118',display_name='tempest-AttachVolumeTestJSON-server-958672118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-958672118',id=165,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFdrbbSC0C45QsA4WoH0skfLoEJuEvLicsgsNB+yVNBuuMlISuhPWp9oFNtgFMFejIvvaZEt/QMuFcGBBZNxI6yPKa6qrk6Y9WbQ5vb9c2PbERoEO1CjXu5JuNOGCArooQ==',key_name='tempest-keypair-1718432726',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:28:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='eb0810bf6f5b4eb59638b7a2cf59ed5b',ramdisk_id='',reservation_id='r-unuosnba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-942041170',owner_user_name='tempest-AttachVolumeTestJSON-942041170-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:30:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a362a419f6a492aae2f102ad2bbd5e9',uuid=bf4c2292-18d7-4c4b-b97e-abb227923156,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.611 226310 DEBUG nova.network.os_vif_util [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Converting VIF {"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.612 226310 DEBUG nova.network.os_vif_util [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=bec1fbdf-d4dc-4b2c-af66-9ba123464651,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec1fbdf-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.613 226310 DEBUG os_vif [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=bec1fbdf-d4dc-4b2c-af66-9ba123464651,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec1fbdf-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.616 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.616 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbec1fbdf-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.619 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.621 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.625 226310 INFO os_vif [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=bec1fbdf-d4dc-4b2c-af66-9ba123464651,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec1fbdf-d4')#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.635 226310 DEBUG nova.virt.libvirt.driver [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Start _get_guest_xml network_info=[{"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.640 226310 WARNING nova.virt.libvirt.driver [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.647 226310 DEBUG nova.virt.libvirt.host [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.648 226310 DEBUG nova.virt.libvirt.host [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.652 226310 DEBUG nova.virt.libvirt.host [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.652 226310 DEBUG nova.virt.libvirt.host [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.654 226310 DEBUG nova.virt.libvirt.driver [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.655 226310 DEBUG nova.virt.hardware [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.656 226310 DEBUG nova.virt.hardware [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.656 226310 DEBUG nova.virt.hardware [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.656 226310 DEBUG nova.virt.hardware [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.657 226310 DEBUG nova.virt.hardware [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.657 226310 DEBUG nova.virt.hardware [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.658 226310 DEBUG nova.virt.hardware [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.658 226310 DEBUG nova.virt.hardware [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.659 226310 DEBUG nova.virt.hardware [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.659 226310 DEBUG nova.virt.hardware [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.660 226310 DEBUG nova.virt.hardware [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.660 226310 DEBUG nova.objects.instance [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'vcpu_model' on Instance uuid bf4c2292-18d7-4c4b-b97e-abb227923156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:14 np0005539564 nova_compute[226295]: 2025-11-29 08:30:14.681 226310 DEBUG oslo_concurrency.processutils [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:30:15 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/484854823' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.159 226310 DEBUG oslo_concurrency.processutils [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.217 226310 DEBUG oslo_concurrency.processutils [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:15.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.385 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.386 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.386 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.387 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.387 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:30:15 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3417512963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.753 226310 DEBUG oslo_concurrency.processutils [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.755 226310 DEBUG nova.virt.libvirt.vif [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:28:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-958672118',display_name='tempest-AttachVolumeTestJSON-server-958672118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-958672118',id=165,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFdrbbSC0C45QsA4WoH0skfLoEJuEvLicsgsNB+yVNBuuMlISuhPWp9oFNtgFMFejIvvaZEt/QMuFcGBBZNxI6yPKa6qrk6Y9WbQ5vb9c2PbERoEO1CjXu5JuNOGCArooQ==',key_name='tempest-keypair-1718432726',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:28:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='eb0810bf6f5b4eb59638b7a2cf59ed5b',ramdisk_id='',reservation_id='r-unuosnba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-942041170',owner_user_name='tempest-AttachVolumeTestJSON-942041170-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:30:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a362a419f6a492aae2f102ad2bbd5e9',uuid=bf4c2292-18d7-4c4b-b97e-abb227923156,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.756 226310 DEBUG nova.network.os_vif_util [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Converting VIF {"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.757 226310 DEBUG nova.network.os_vif_util [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=bec1fbdf-d4dc-4b2c-af66-9ba123464651,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec1fbdf-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.759 226310 DEBUG nova.objects.instance [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'pci_devices' on Instance uuid bf4c2292-18d7-4c4b-b97e-abb227923156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.776 226310 DEBUG nova.virt.libvirt.driver [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:30:15 np0005539564 nova_compute[226295]:  <uuid>bf4c2292-18d7-4c4b-b97e-abb227923156</uuid>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:  <name>instance-000000a5</name>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <nova:name>tempest-AttachVolumeTestJSON-server-958672118</nova:name>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:30:14</nova:creationTime>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:30:15 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:        <nova:user uuid="7a362a419f6a492aae2f102ad2bbd5e9">tempest-AttachVolumeTestJSON-942041170-project-member</nova:user>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:        <nova:project uuid="eb0810bf6f5b4eb59638b7a2cf59ed5b">tempest-AttachVolumeTestJSON-942041170</nova:project>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:        <nova:port uuid="bec1fbdf-d4dc-4b2c-af66-9ba123464651">
Nov 29 03:30:15 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <entry name="serial">bf4c2292-18d7-4c4b-b97e-abb227923156</entry>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <entry name="uuid">bf4c2292-18d7-4c4b-b97e-abb227923156</entry>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/bf4c2292-18d7-4c4b-b97e-abb227923156_disk">
Nov 29 03:30:15 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:30:15 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/bf4c2292-18d7-4c4b-b97e-abb227923156_disk.config">
Nov 29 03:30:15 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:30:15 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:68:a4:7d"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <target dev="tapbec1fbdf-d4"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/bf4c2292-18d7-4c4b-b97e-abb227923156/console.log" append="off"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <input type="keyboard" bus="usb"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:30:15 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:30:15 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:30:15 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:30:15 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.785 226310 DEBUG nova.virt.libvirt.driver [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.785 226310 DEBUG nova.virt.libvirt.driver [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.786 226310 DEBUG nova.virt.libvirt.vif [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:28:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-958672118',display_name='tempest-AttachVolumeTestJSON-server-958672118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-958672118',id=165,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFdrbbSC0C45QsA4WoH0skfLoEJuEvLicsgsNB+yVNBuuMlISuhPWp9oFNtgFMFejIvvaZEt/QMuFcGBBZNxI6yPKa6qrk6Y9WbQ5vb9c2PbERoEO1CjXu5JuNOGCArooQ==',key_name='tempest-keypair-1718432726',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:28:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='eb0810bf6f5b4eb59638b7a2cf59ed5b',ramdisk_id='',reservation_id='r-unuosnba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-942041170',owner_user_name='tempest-AttachVolumeTestJSON-942041170-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:30:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a362a419f6a492aae2f102ad2bbd5e9',uuid=bf4c2292-18d7-4c4b-b97e-abb227923156,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.787 226310 DEBUG nova.network.os_vif_util [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Converting VIF {"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.788 226310 DEBUG nova.network.os_vif_util [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=bec1fbdf-d4dc-4b2c-af66-9ba123464651,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec1fbdf-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.789 226310 DEBUG os_vif [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=bec1fbdf-d4dc-4b2c-af66-9ba123464651,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec1fbdf-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.789 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.790 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.791 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.793 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.794 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbec1fbdf-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.794 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbec1fbdf-d4, col_values=(('external_ids', {'iface-id': 'bec1fbdf-d4dc-4b2c-af66-9ba123464651', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:a4:7d', 'vm-uuid': 'bf4c2292-18d7-4c4b-b97e-abb227923156'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.796 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:15 np0005539564 NetworkManager[48997]: <info>  [1764405015.7979] manager: (tapbec1fbdf-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/301)
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.799 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.800 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.801 226310 INFO os_vif [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=bec1fbdf-d4dc-4b2c-af66-9ba123464651,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec1fbdf-d4')#033[00m
Nov 29 03:30:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:15 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1639437657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.832 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:15 np0005539564 kernel: tapbec1fbdf-d4: entered promiscuous mode
Nov 29 03:30:15 np0005539564 NetworkManager[48997]: <info>  [1764405015.8655] manager: (tapbec1fbdf-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/302)
Nov 29 03:30:15 np0005539564 ovn_controller[130591]: 2025-11-29T08:30:15Z|00659|binding|INFO|Claiming lport bec1fbdf-d4dc-4b2c-af66-9ba123464651 for this chassis.
Nov 29 03:30:15 np0005539564 ovn_controller[130591]: 2025-11-29T08:30:15Z|00660|binding|INFO|bec1fbdf-d4dc-4b2c-af66-9ba123464651: Claiming fa:16:3e:68:a4:7d 10.100.0.7
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.867 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:15.884 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:a4:7d 10.100.0.7'], port_security=['fa:16:3e:68:a4:7d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bf4c2292-18d7-4c4b-b97e-abb227923156', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eb0810bf6f5b4eb59638b7a2cf59ed5b', 'neutron:revision_number': '7', 'neutron:security_group_ids': '02abc3ce-c8f1-4034-8c00-97d80a9dca82', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.207'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1429573d-31ea-4b00-8580-1031fbde1ea5, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=bec1fbdf-d4dc-4b2c-af66-9ba123464651) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:15.885 139780 INFO neutron.agent.ovn.metadata.agent [-] Port bec1fbdf-d4dc-4b2c-af66-9ba123464651 in datapath a259ebcb-7cce-4363-8e50-c25ed4a3daec bound to our chassis#033[00m
Nov 29 03:30:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:15.887 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a259ebcb-7cce-4363-8e50-c25ed4a3daec#033[00m
Nov 29 03:30:15 np0005539564 systemd-udevd[288320]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:30:15 np0005539564 ovn_controller[130591]: 2025-11-29T08:30:15Z|00661|binding|INFO|Setting lport bec1fbdf-d4dc-4b2c-af66-9ba123464651 ovn-installed in OVS
Nov 29 03:30:15 np0005539564 ovn_controller[130591]: 2025-11-29T08:30:15Z|00662|binding|INFO|Setting lport bec1fbdf-d4dc-4b2c-af66-9ba123464651 up in Southbound
Nov 29 03:30:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:15.896 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[943d986a-e377-4afb-8f6e-3da22eea7130]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:15.897 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa259ebcb-71 in ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.897 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:15 np0005539564 nova_compute[226295]: 2025-11-29 08:30:15.899 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:15.899 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa259ebcb-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:30:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:15.899 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6c15d0e0-0cb2-470e-9035-3c53653dcff4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:15.900 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1a51cca1-cc68-41a8-8324-efd117841237]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:15 np0005539564 NetworkManager[48997]: <info>  [1764405015.9028] device (tapbec1fbdf-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:30:15 np0005539564 systemd-machined[190128]: New machine qemu-80-instance-000000a5.
Nov 29 03:30:15 np0005539564 NetworkManager[48997]: <info>  [1764405015.9038] device (tapbec1fbdf-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:30:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:15.910 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[201bc3a1-6efb-4812-aac4-38b9b40cc46b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:15 np0005539564 systemd[1]: Started Virtual Machine qemu-80-instance-000000a5.
Nov 29 03:30:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:15.932 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[48e21f5c-573d-4789-92ce-6a05aad7a337]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:15.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:15.958 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[790cb0ac-df35-413b-837a-88516d7503d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:15.964 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[355e3fa3-dfad-4659-8103-eee8c360aaea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:15 np0005539564 NetworkManager[48997]: <info>  [1764405015.9661] manager: (tapa259ebcb-70): new Veth device (/org/freedesktop/NetworkManager/Devices/303)
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:16.005 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[b5fc1ce2-945f-49f1-aed0-13200b6f79ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:16.008 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[afdce698-0e21-40f7-8cb3-7886098a55ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:16 np0005539564 NetworkManager[48997]: <info>  [1764405016.0378] device (tapa259ebcb-70): carrier: link connected
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:16.045 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[a0112845-da29-4b4a-838f-0282d1d55138]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:16.069 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[aff3ef38-f8f2-4b76-85ff-5cc67c920625]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa259ebcb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:16:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 799071, 'reachable_time': 29852, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288354, 'error': None, 'target': 'ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.080 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.081 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.084 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000a8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.085 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000a8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:16.095 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e8aea4ee-8c3a-4b1b-b624-18fefb2c4db7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef3:1647'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 799071, 'tstamp': 799071}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288355, 'error': None, 'target': 'ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:16.129 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f862f4b7-9bbd-4440-b1c7-4102c6caf565]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa259ebcb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:16:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 799071, 'reachable_time': 29852, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288356, 'error': None, 'target': 'ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:16.180 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[73422364-2bd3-4e54-a9eb-b0ee73cbc0df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.252 226310 DEBUG nova.compute.manager [req-9090457c-ba0b-470f-8fcb-f8762653a68c req-a03d4535-7bf8-49f6-ac90-0aefd0f6c07f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Received event network-vif-plugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.252 226310 DEBUG oslo_concurrency.lockutils [req-9090457c-ba0b-470f-8fcb-f8762653a68c req-a03d4535-7bf8-49f6-ac90-0aefd0f6c07f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.253 226310 DEBUG oslo_concurrency.lockutils [req-9090457c-ba0b-470f-8fcb-f8762653a68c req-a03d4535-7bf8-49f6-ac90-0aefd0f6c07f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.253 226310 DEBUG oslo_concurrency.lockutils [req-9090457c-ba0b-470f-8fcb-f8762653a68c req-a03d4535-7bf8-49f6-ac90-0aefd0f6c07f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:16.253 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[cadda14a-8978-4bf9-9a1d-8c38f977618f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.254 226310 DEBUG nova.compute.manager [req-9090457c-ba0b-470f-8fcb-f8762653a68c req-a03d4535-7bf8-49f6-ac90-0aefd0f6c07f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] No waiting events found dispatching network-vif-plugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:16.254 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa259ebcb-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:16.254 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.254 226310 WARNING nova.compute.manager [req-9090457c-ba0b-470f-8fcb-f8762653a68c req-a03d4535-7bf8-49f6-ac90-0aefd0f6c07f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Received unexpected event network-vif-plugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 for instance with vm_state stopped and task_state powering-on.#033[00m
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:16.255 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa259ebcb-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.256 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:16 np0005539564 kernel: tapa259ebcb-70: entered promiscuous mode
Nov 29 03:30:16 np0005539564 NetworkManager[48997]: <info>  [1764405016.2581] manager: (tapa259ebcb-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/304)
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:16.261 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa259ebcb-70, col_values=(('external_ids', {'iface-id': '5c5c4b01-f2eb-4ea5-9341-cfa577051cf7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.262 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:16 np0005539564 ovn_controller[130591]: 2025-11-29T08:30:16Z|00663|binding|INFO|Releasing lport 5c5c4b01-f2eb-4ea5-9341-cfa577051cf7 from this chassis (sb_readonly=0)
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.277 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:16.278 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a259ebcb-7cce-4363-8e50-c25ed4a3daec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a259ebcb-7cce-4363-8e50-c25ed4a3daec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:16.280 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7c5c8c9f-d00e-42ce-9c90-77fe41bd351f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:16.281 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-a259ebcb-7cce-4363-8e50-c25ed4a3daec
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/a259ebcb-7cce-4363-8e50-c25ed4a3daec.pid.haproxy
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID a259ebcb-7cce-4363-8e50-c25ed4a3daec
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:30:16 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:16.281 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'env', 'PROCESS_TAG=haproxy-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a259ebcb-7cce-4363-8e50-c25ed4a3daec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.317 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.318 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3983MB free_disk=20.94214630126953GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.318 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.318 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.356 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Removed pending event for bf4c2292-18d7-4c4b-b97e-abb227923156 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.356 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405016.3555777, bf4c2292-18d7-4c4b-b97e-abb227923156 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.356 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.359 226310 DEBUG nova.compute.manager [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.363 226310 INFO nova.virt.libvirt.driver [-] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Instance rebooted successfully.#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.363 226310 DEBUG nova.compute.manager [None req-5b1c6db3-ab57-47d3-8aa6-2097fd7985fa 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.418 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.423 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.450 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405016.3560083, bf4c2292-18d7-4c4b-b97e-abb227923156 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.450 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] VM Started (Lifecycle Event)#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.459 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance bf4c2292-18d7-4c4b-b97e-abb227923156 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.459 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance ef2296eb-4538-4e04-8c0b-42370d9e5b12 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.460 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.460 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.481 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.486 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:30:16 np0005539564 nova_compute[226295]: 2025-11-29 08:30:16.511 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:16 np0005539564 podman[288430]: 2025-11-29 08:30:16.644868826 +0000 UTC m=+0.046770386 container create 75f8ff280b9bf2f35fbff76dc9c33519ab5f7ce444abfef99080cbd6acfeec95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:30:16 np0005539564 systemd[1]: Started libpod-conmon-75f8ff280b9bf2f35fbff76dc9c33519ab5f7ce444abfef99080cbd6acfeec95.scope.
Nov 29 03:30:16 np0005539564 podman[288430]: 2025-11-29 08:30:16.621622747 +0000 UTC m=+0.023524347 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:30:16 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:30:16 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e1effdc16fec3f2b98bc6ec33e174837a80d908b32c6beea4097a0d524dd14/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:30:16 np0005539564 podman[288430]: 2025-11-29 08:30:16.744137392 +0000 UTC m=+0.146038962 container init 75f8ff280b9bf2f35fbff76dc9c33519ab5f7ce444abfef99080cbd6acfeec95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:30:16 np0005539564 podman[288430]: 2025-11-29 08:30:16.752608011 +0000 UTC m=+0.154509581 container start 75f8ff280b9bf2f35fbff76dc9c33519ab5f7ce444abfef99080cbd6acfeec95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:30:16 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[288465]: [NOTICE]   (288469) : New worker (288471) forked
Nov 29 03:30:16 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[288465]: [NOTICE]   (288469) : Loading success.
Nov 29 03:30:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:17 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3144057685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:17 np0005539564 nova_compute[226295]: 2025-11-29 08:30:17.020 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:17 np0005539564 nova_compute[226295]: 2025-11-29 08:30:17.029 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:30:17 np0005539564 nova_compute[226295]: 2025-11-29 08:30:17.150 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:30:17 np0005539564 nova_compute[226295]: 2025-11-29 08:30:17.180 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:30:17 np0005539564 nova_compute[226295]: 2025-11-29 08:30:17.181 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:17.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:17.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:18 np0005539564 nova_compute[226295]: 2025-11-29 08:30:18.333 226310 DEBUG nova.compute.manager [req-8656c071-7a2f-4921-ae57-906b1fcff39b req-368033d9-27ba-4ea8-937d-9dbdf46609f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Received event network-vif-plugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:18 np0005539564 nova_compute[226295]: 2025-11-29 08:30:18.334 226310 DEBUG oslo_concurrency.lockutils [req-8656c071-7a2f-4921-ae57-906b1fcff39b req-368033d9-27ba-4ea8-937d-9dbdf46609f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:18 np0005539564 nova_compute[226295]: 2025-11-29 08:30:18.334 226310 DEBUG oslo_concurrency.lockutils [req-8656c071-7a2f-4921-ae57-906b1fcff39b req-368033d9-27ba-4ea8-937d-9dbdf46609f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:18 np0005539564 nova_compute[226295]: 2025-11-29 08:30:18.334 226310 DEBUG oslo_concurrency.lockutils [req-8656c071-7a2f-4921-ae57-906b1fcff39b req-368033d9-27ba-4ea8-937d-9dbdf46609f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:18 np0005539564 nova_compute[226295]: 2025-11-29 08:30:18.335 226310 DEBUG nova.compute.manager [req-8656c071-7a2f-4921-ae57-906b1fcff39b req-368033d9-27ba-4ea8-937d-9dbdf46609f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] No waiting events found dispatching network-vif-plugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:18 np0005539564 nova_compute[226295]: 2025-11-29 08:30:18.335 226310 WARNING nova.compute.manager [req-8656c071-7a2f-4921-ae57-906b1fcff39b req-368033d9-27ba-4ea8-937d-9dbdf46609f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Received unexpected event network-vif-plugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:30:18 np0005539564 nova_compute[226295]: 2025-11-29 08:30:18.543 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:18.671 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:18 np0005539564 nova_compute[226295]: 2025-11-29 08:30:18.672 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:18 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:18.673 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:30:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:19.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:19.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:20 np0005539564 nova_compute[226295]: 2025-11-29 08:30:20.798 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:20 np0005539564 ovn_controller[130591]: 2025-11-29T08:30:20Z|00664|binding|INFO|Releasing lport bf759292-fede-4172-b0b8-efd6e3442b62 from this chassis (sb_readonly=0)
Nov 29 03:30:20 np0005539564 ovn_controller[130591]: 2025-11-29T08:30:20Z|00665|binding|INFO|Releasing lport 5c5c4b01-f2eb-4ea5-9341-cfa577051cf7 from this chassis (sb_readonly=0)
Nov 29 03:30:20 np0005539564 nova_compute[226295]: 2025-11-29 08:30:20.967 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:21.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:21.676 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:21.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:23.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:23 np0005539564 nova_compute[226295]: 2025-11-29 08:30:23.546 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:23.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:25.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:25 np0005539564 nova_compute[226295]: 2025-11-29 08:30:25.847 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:25.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:26 np0005539564 nova_compute[226295]: 2025-11-29 08:30:26.718 226310 DEBUG oslo_concurrency.lockutils [None req-65a7c932-bf76-43f0-aff0-0f5b3f7f1001 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:26 np0005539564 nova_compute[226295]: 2025-11-29 08:30:26.718 226310 DEBUG oslo_concurrency.lockutils [None req-65a7c932-bf76-43f0-aff0-0f5b3f7f1001 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:26 np0005539564 nova_compute[226295]: 2025-11-29 08:30:26.841 226310 DEBUG nova.objects.instance [None req-65a7c932-bf76-43f0-aff0-0f5b3f7f1001 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lazy-loading 'flavor' on Instance uuid ef2296eb-4538-4e04-8c0b-42370d9e5b12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:26 np0005539564 nova_compute[226295]: 2025-11-29 08:30:26.971 226310 DEBUG oslo_concurrency.lockutils [None req-65a7c932-bf76-43f0-aff0-0f5b3f7f1001 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:27 np0005539564 nova_compute[226295]: 2025-11-29 08:30:27.296 226310 DEBUG oslo_concurrency.lockutils [None req-65a7c932-bf76-43f0-aff0-0f5b3f7f1001 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:27 np0005539564 nova_compute[226295]: 2025-11-29 08:30:27.296 226310 DEBUG oslo_concurrency.lockutils [None req-65a7c932-bf76-43f0-aff0-0f5b3f7f1001 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:27 np0005539564 nova_compute[226295]: 2025-11-29 08:30:27.296 226310 INFO nova.compute.manager [None req-65a7c932-bf76-43f0-aff0-0f5b3f7f1001 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Attaching volume 1c425827-5356-476a-a66c-a9193ddaa105 to /dev/vdb#033[00m
Nov 29 03:30:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:27.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:27 np0005539564 nova_compute[226295]: 2025-11-29 08:30:27.525 226310 DEBUG os_brick.utils [None req-65a7c932-bf76-43f0-aff0-0f5b3f7f1001 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:30:27 np0005539564 nova_compute[226295]: 2025-11-29 08:30:27.526 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:27 np0005539564 nova_compute[226295]: 2025-11-29 08:30:27.542 231810 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:27 np0005539564 nova_compute[226295]: 2025-11-29 08:30:27.542 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[54c99cbf-d32a-46cd-9429-6d5e8cf4d3a3]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:27 np0005539564 nova_compute[226295]: 2025-11-29 08:30:27.543 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:27 np0005539564 nova_compute[226295]: 2025-11-29 08:30:27.552 231810 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:27 np0005539564 nova_compute[226295]: 2025-11-29 08:30:27.552 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[7ce28cf7-5631-4d0b-a611-ffe1e6ea5bce]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d3b4384eec44', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:27 np0005539564 nova_compute[226295]: 2025-11-29 08:30:27.554 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:27 np0005539564 nova_compute[226295]: 2025-11-29 08:30:27.561 231810 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:27 np0005539564 nova_compute[226295]: 2025-11-29 08:30:27.561 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[c5f97388-ad23-437a-8fab-c474b2b9fbb6]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:27 np0005539564 nova_compute[226295]: 2025-11-29 08:30:27.563 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[4cd5ccc1-637d-4116-ac20-4c2721f542fe]: (4, '2e858761-3292-4a17-b38f-a169c3064289') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:27 np0005539564 nova_compute[226295]: 2025-11-29 08:30:27.564 226310 DEBUG oslo_concurrency.processutils [None req-65a7c932-bf76-43f0-aff0-0f5b3f7f1001 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:27 np0005539564 nova_compute[226295]: 2025-11-29 08:30:27.603 226310 DEBUG oslo_concurrency.processutils [None req-65a7c932-bf76-43f0-aff0-0f5b3f7f1001 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CMD "nvme version" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:27 np0005539564 nova_compute[226295]: 2025-11-29 08:30:27.608 226310 DEBUG os_brick.initiator.connectors.lightos [None req-65a7c932-bf76-43f0-aff0-0f5b3f7f1001 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:30:27 np0005539564 nova_compute[226295]: 2025-11-29 08:30:27.608 226310 DEBUG os_brick.initiator.connectors.lightos [None req-65a7c932-bf76-43f0-aff0-0f5b3f7f1001 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:30:27 np0005539564 nova_compute[226295]: 2025-11-29 08:30:27.609 226310 DEBUG os_brick.initiator.connectors.lightos [None req-65a7c932-bf76-43f0-aff0-0f5b3f7f1001 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:30:27 np0005539564 nova_compute[226295]: 2025-11-29 08:30:27.609 226310 DEBUG os_brick.utils [None req-65a7c932-bf76-43f0-aff0-0f5b3f7f1001 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] <== get_connector_properties: return (84ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d3b4384eec44', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2e858761-3292-4a17-b38f-a169c3064289', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:30:27 np0005539564 nova_compute[226295]: 2025-11-29 08:30:27.610 226310 DEBUG nova.virt.block_device [None req-65a7c932-bf76-43f0-aff0-0f5b3f7f1001 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Updating existing volume attachment record: 7990cef2-1b71-4efc-81a0-f7ba345cacb8 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:30:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:27.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:28 np0005539564 nova_compute[226295]: 2025-11-29 08:30:28.549 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:29.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:29 np0005539564 nova_compute[226295]: 2025-11-29 08:30:29.481 226310 DEBUG nova.objects.instance [None req-65a7c932-bf76-43f0-aff0-0f5b3f7f1001 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lazy-loading 'flavor' on Instance uuid ef2296eb-4538-4e04-8c0b-42370d9e5b12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:29 np0005539564 podman[288491]: 2025-11-29 08:30:29.515776156 +0000 UTC m=+0.066712166 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:30:29 np0005539564 podman[288490]: 2025-11-29 08:30:29.523557637 +0000 UTC m=+0.076798390 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible)
Nov 29 03:30:29 np0005539564 podman[288489]: 2025-11-29 08:30:29.554667448 +0000 UTC m=+0.107026986 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:30:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:29.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:30 np0005539564 nova_compute[226295]: 2025-11-29 08:30:30.178 226310 DEBUG nova.virt.libvirt.driver [None req-65a7c932-bf76-43f0-aff0-0f5b3f7f1001 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Attempting to attach volume 1c425827-5356-476a-a66c-a9193ddaa105 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:30:30 np0005539564 nova_compute[226295]: 2025-11-29 08:30:30.182 226310 DEBUG nova.virt.libvirt.guest [None req-65a7c932-bf76-43f0-aff0-0f5b3f7f1001 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:30:30 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:30:30 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-1c425827-5356-476a-a66c-a9193ddaa105">
Nov 29 03:30:30 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:30 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:30 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:30 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:30:30 np0005539564 nova_compute[226295]:  <auth username="openstack">
Nov 29 03:30:30 np0005539564 nova_compute[226295]:    <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:30:30 np0005539564 nova_compute[226295]:  </auth>
Nov 29 03:30:30 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:30:30 np0005539564 nova_compute[226295]:  <serial>1c425827-5356-476a-a66c-a9193ddaa105</serial>
Nov 29 03:30:30 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:30:30 np0005539564 nova_compute[226295]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:30:30 np0005539564 ovn_controller[130591]: 2025-11-29T08:30:30Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:68:a4:7d 10.100.0.7
Nov 29 03:30:30 np0005539564 nova_compute[226295]: 2025-11-29 08:30:30.850 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:30 np0005539564 nova_compute[226295]: 2025-11-29 08:30:30.929 226310 DEBUG nova.virt.libvirt.driver [None req-65a7c932-bf76-43f0-aff0-0f5b3f7f1001 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:30:30 np0005539564 nova_compute[226295]: 2025-11-29 08:30:30.930 226310 DEBUG nova.virt.libvirt.driver [None req-65a7c932-bf76-43f0-aff0-0f5b3f7f1001 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:30:30 np0005539564 nova_compute[226295]: 2025-11-29 08:30:30.930 226310 DEBUG nova.virt.libvirt.driver [None req-65a7c932-bf76-43f0-aff0-0f5b3f7f1001 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:30:30 np0005539564 nova_compute[226295]: 2025-11-29 08:30:30.931 226310 DEBUG nova.virt.libvirt.driver [None req-65a7c932-bf76-43f0-aff0-0f5b3f7f1001 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No VIF found with MAC fa:16:3e:fb:c3:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:30:31 np0005539564 nova_compute[226295]: 2025-11-29 08:30:31.162 226310 DEBUG oslo_concurrency.lockutils [None req-65a7c932-bf76-43f0-aff0-0f5b3f7f1001 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:31.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:31.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:32 np0005539564 nova_compute[226295]: 2025-11-29 08:30:32.265 226310 DEBUG oslo_concurrency.lockutils [None req-f741cd73-ebe7-4bbf-96dc-247e1a9cfa4d d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:32 np0005539564 nova_compute[226295]: 2025-11-29 08:30:32.266 226310 DEBUG oslo_concurrency.lockutils [None req-f741cd73-ebe7-4bbf-96dc-247e1a9cfa4d d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:32 np0005539564 nova_compute[226295]: 2025-11-29 08:30:32.287 226310 DEBUG nova.objects.instance [None req-f741cd73-ebe7-4bbf-96dc-247e1a9cfa4d d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lazy-loading 'flavor' on Instance uuid ef2296eb-4538-4e04-8c0b-42370d9e5b12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:32 np0005539564 nova_compute[226295]: 2025-11-29 08:30:32.328 226310 DEBUG oslo_concurrency.lockutils [None req-f741cd73-ebe7-4bbf-96dc-247e1a9cfa4d d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:32 np0005539564 nova_compute[226295]: 2025-11-29 08:30:32.595 226310 DEBUG oslo_concurrency.lockutils [None req-f741cd73-ebe7-4bbf-96dc-247e1a9cfa4d d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:32 np0005539564 nova_compute[226295]: 2025-11-29 08:30:32.595 226310 DEBUG oslo_concurrency.lockutils [None req-f741cd73-ebe7-4bbf-96dc-247e1a9cfa4d d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:32 np0005539564 nova_compute[226295]: 2025-11-29 08:30:32.595 226310 INFO nova.compute.manager [None req-f741cd73-ebe7-4bbf-96dc-247e1a9cfa4d d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Attaching volume 2c994368-e80f-41ad-8820-f509c2008bb5 to /dev/vdc#033[00m
Nov 29 03:30:32 np0005539564 nova_compute[226295]: 2025-11-29 08:30:32.809 226310 DEBUG os_brick.utils [None req-f741cd73-ebe7-4bbf-96dc-247e1a9cfa4d d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:30:32 np0005539564 nova_compute[226295]: 2025-11-29 08:30:32.810 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:32 np0005539564 nova_compute[226295]: 2025-11-29 08:30:32.830 231810 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:32 np0005539564 nova_compute[226295]: 2025-11-29 08:30:32.831 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[c2d4270c-2f93-4461-9559-3320591aafca]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:32 np0005539564 nova_compute[226295]: 2025-11-29 08:30:32.832 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:32 np0005539564 nova_compute[226295]: 2025-11-29 08:30:32.845 231810 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:32 np0005539564 nova_compute[226295]: 2025-11-29 08:30:32.846 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[4a3c0b14-d968-459b-bdef-964407f35dc5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d3b4384eec44', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:32 np0005539564 nova_compute[226295]: 2025-11-29 08:30:32.848 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:32 np0005539564 nova_compute[226295]: 2025-11-29 08:30:32.860 231810 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:32 np0005539564 nova_compute[226295]: 2025-11-29 08:30:32.860 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[d9bd6207-f179-4d44-b68a-27e19422fa06]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:32 np0005539564 nova_compute[226295]: 2025-11-29 08:30:32.862 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[4cfe38ac-b0f8-4697-b3e8-c635b5bfbd01]: (4, '2e858761-3292-4a17-b38f-a169c3064289') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:32 np0005539564 nova_compute[226295]: 2025-11-29 08:30:32.863 226310 DEBUG oslo_concurrency.processutils [None req-f741cd73-ebe7-4bbf-96dc-247e1a9cfa4d d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:32 np0005539564 nova_compute[226295]: 2025-11-29 08:30:32.898 226310 DEBUG oslo_concurrency.processutils [None req-f741cd73-ebe7-4bbf-96dc-247e1a9cfa4d d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CMD "nvme version" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:32 np0005539564 nova_compute[226295]: 2025-11-29 08:30:32.901 226310 DEBUG os_brick.initiator.connectors.lightos [None req-f741cd73-ebe7-4bbf-96dc-247e1a9cfa4d d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:30:32 np0005539564 nova_compute[226295]: 2025-11-29 08:30:32.901 226310 DEBUG os_brick.initiator.connectors.lightos [None req-f741cd73-ebe7-4bbf-96dc-247e1a9cfa4d d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:30:32 np0005539564 nova_compute[226295]: 2025-11-29 08:30:32.901 226310 DEBUG os_brick.initiator.connectors.lightos [None req-f741cd73-ebe7-4bbf-96dc-247e1a9cfa4d d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:30:32 np0005539564 nova_compute[226295]: 2025-11-29 08:30:32.902 226310 DEBUG os_brick.utils [None req-f741cd73-ebe7-4bbf-96dc-247e1a9cfa4d d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] <== get_connector_properties: return (91ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d3b4384eec44', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2e858761-3292-4a17-b38f-a169c3064289', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:30:32 np0005539564 nova_compute[226295]: 2025-11-29 08:30:32.902 226310 DEBUG nova.virt.block_device [None req-f741cd73-ebe7-4bbf-96dc-247e1a9cfa4d d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Updating existing volume attachment record: edcd20d2-5f69-47ab-8706-511c8119892f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:30:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:33.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:33 np0005539564 nova_compute[226295]: 2025-11-29 08:30:33.562 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:33 np0005539564 nova_compute[226295]: 2025-11-29 08:30:33.671 226310 DEBUG nova.objects.instance [None req-f741cd73-ebe7-4bbf-96dc-247e1a9cfa4d d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lazy-loading 'flavor' on Instance uuid ef2296eb-4538-4e04-8c0b-42370d9e5b12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:33 np0005539564 nova_compute[226295]: 2025-11-29 08:30:33.698 226310 DEBUG nova.virt.libvirt.driver [None req-f741cd73-ebe7-4bbf-96dc-247e1a9cfa4d d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Attempting to attach volume 2c994368-e80f-41ad-8820-f509c2008bb5 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:30:33 np0005539564 nova_compute[226295]: 2025-11-29 08:30:33.700 226310 DEBUG nova.virt.libvirt.guest [None req-f741cd73-ebe7-4bbf-96dc-247e1a9cfa4d d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:30:33 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:30:33 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-2c994368-e80f-41ad-8820-f509c2008bb5">
Nov 29 03:30:33 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:33 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:33 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:33 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:30:33 np0005539564 nova_compute[226295]:  <auth username="openstack">
Nov 29 03:30:33 np0005539564 nova_compute[226295]:    <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:30:33 np0005539564 nova_compute[226295]:  </auth>
Nov 29 03:30:33 np0005539564 nova_compute[226295]:  <target dev="vdc" bus="virtio"/>
Nov 29 03:30:33 np0005539564 nova_compute[226295]:  <serial>2c994368-e80f-41ad-8820-f509c2008bb5</serial>
Nov 29 03:30:33 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:30:33 np0005539564 nova_compute[226295]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:30:33 np0005539564 nova_compute[226295]: 2025-11-29 08:30:33.854 226310 DEBUG nova.virt.libvirt.driver [None req-f741cd73-ebe7-4bbf-96dc-247e1a9cfa4d d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:30:33 np0005539564 nova_compute[226295]: 2025-11-29 08:30:33.854 226310 DEBUG nova.virt.libvirt.driver [None req-f741cd73-ebe7-4bbf-96dc-247e1a9cfa4d d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:30:33 np0005539564 nova_compute[226295]: 2025-11-29 08:30:33.855 226310 DEBUG nova.virt.libvirt.driver [None req-f741cd73-ebe7-4bbf-96dc-247e1a9cfa4d d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:30:33 np0005539564 nova_compute[226295]: 2025-11-29 08:30:33.855 226310 DEBUG nova.virt.libvirt.driver [None req-f741cd73-ebe7-4bbf-96dc-247e1a9cfa4d d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:30:33 np0005539564 nova_compute[226295]: 2025-11-29 08:30:33.856 226310 DEBUG nova.virt.libvirt.driver [None req-f741cd73-ebe7-4bbf-96dc-247e1a9cfa4d d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No VIF found with MAC fa:16:3e:fb:c3:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:30:33 np0005539564 nova_compute[226295]: 2025-11-29 08:30:33.895 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:33.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:34 np0005539564 nova_compute[226295]: 2025-11-29 08:30:34.094 226310 DEBUG oslo_concurrency.lockutils [None req-f741cd73-ebe7-4bbf-96dc-247e1a9cfa4d d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:35.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:35 np0005539564 nova_compute[226295]: 2025-11-29 08:30:35.892 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:35.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:37.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:37.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:38 np0005539564 nova_compute[226295]: 2025-11-29 08:30:38.566 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:38 np0005539564 nova_compute[226295]: 2025-11-29 08:30:38.760 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:39 np0005539564 nova_compute[226295]: 2025-11-29 08:30:39.291 226310 DEBUG nova.compute.manager [req-5cbe64fd-bc6e-47c2-bba1-e703a85e55ed req-4aa59dc7-b5bf-45b2-868e-82b6404fb0db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Received event network-changed-eac06205-cdc0-424d-b7e2-7740e0db232d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:39 np0005539564 nova_compute[226295]: 2025-11-29 08:30:39.291 226310 DEBUG nova.compute.manager [req-5cbe64fd-bc6e-47c2-bba1-e703a85e55ed req-4aa59dc7-b5bf-45b2-868e-82b6404fb0db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Refreshing instance network info cache due to event network-changed-eac06205-cdc0-424d-b7e2-7740e0db232d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:30:39 np0005539564 nova_compute[226295]: 2025-11-29 08:30:39.292 226310 DEBUG oslo_concurrency.lockutils [req-5cbe64fd-bc6e-47c2-bba1-e703a85e55ed req-4aa59dc7-b5bf-45b2-868e-82b6404fb0db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-ef2296eb-4538-4e04-8c0b-42370d9e5b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:30:39 np0005539564 nova_compute[226295]: 2025-11-29 08:30:39.292 226310 DEBUG oslo_concurrency.lockutils [req-5cbe64fd-bc6e-47c2-bba1-e703a85e55ed req-4aa59dc7-b5bf-45b2-868e-82b6404fb0db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-ef2296eb-4538-4e04-8c0b-42370d9e5b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:30:39 np0005539564 nova_compute[226295]: 2025-11-29 08:30:39.292 226310 DEBUG nova.network.neutron [req-5cbe64fd-bc6e-47c2-bba1-e703a85e55ed req-4aa59dc7-b5bf-45b2-868e-82b6404fb0db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Refreshing network info cache for port eac06205-cdc0-424d-b7e2-7740e0db232d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:30:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:39.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:39.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:40 np0005539564 nova_compute[226295]: 2025-11-29 08:30:40.868 226310 DEBUG nova.network.neutron [req-5cbe64fd-bc6e-47c2-bba1-e703a85e55ed req-4aa59dc7-b5bf-45b2-868e-82b6404fb0db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Updated VIF entry in instance network info cache for port eac06205-cdc0-424d-b7e2-7740e0db232d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:30:40 np0005539564 nova_compute[226295]: 2025-11-29 08:30:40.869 226310 DEBUG nova.network.neutron [req-5cbe64fd-bc6e-47c2-bba1-e703a85e55ed req-4aa59dc7-b5bf-45b2-868e-82b6404fb0db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Updating instance_info_cache with network_info: [{"id": "eac06205-cdc0-424d-b7e2-7740e0db232d", "address": "fa:16:3e:fb:c3:a9", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeac06205-cd", "ovs_interfaceid": "eac06205-cdc0-424d-b7e2-7740e0db232d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:40 np0005539564 nova_compute[226295]: 2025-11-29 08:30:40.892 226310 DEBUG oslo_concurrency.lockutils [req-5cbe64fd-bc6e-47c2-bba1-e703a85e55ed req-4aa59dc7-b5bf-45b2-868e-82b6404fb0db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-ef2296eb-4538-4e04-8c0b-42370d9e5b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:30:40 np0005539564 nova_compute[226295]: 2025-11-29 08:30:40.895 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:40 np0005539564 nova_compute[226295]: 2025-11-29 08:30:40.988 226310 DEBUG nova.compute.manager [req-c84b6088-dcdc-4327-83da-9dbf90f39930 req-2d5f5f94-78a8-44b1-9566-03950e42e302 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Received event network-changed-eac06205-cdc0-424d-b7e2-7740e0db232d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:40 np0005539564 nova_compute[226295]: 2025-11-29 08:30:40.989 226310 DEBUG nova.compute.manager [req-c84b6088-dcdc-4327-83da-9dbf90f39930 req-2d5f5f94-78a8-44b1-9566-03950e42e302 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Refreshing instance network info cache due to event network-changed-eac06205-cdc0-424d-b7e2-7740e0db232d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:30:40 np0005539564 nova_compute[226295]: 2025-11-29 08:30:40.989 226310 DEBUG oslo_concurrency.lockutils [req-c84b6088-dcdc-4327-83da-9dbf90f39930 req-2d5f5f94-78a8-44b1-9566-03950e42e302 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-ef2296eb-4538-4e04-8c0b-42370d9e5b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:30:40 np0005539564 nova_compute[226295]: 2025-11-29 08:30:40.990 226310 DEBUG oslo_concurrency.lockutils [req-c84b6088-dcdc-4327-83da-9dbf90f39930 req-2d5f5f94-78a8-44b1-9566-03950e42e302 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-ef2296eb-4538-4e04-8c0b-42370d9e5b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:30:40 np0005539564 nova_compute[226295]: 2025-11-29 08:30:40.990 226310 DEBUG nova.network.neutron [req-c84b6088-dcdc-4327-83da-9dbf90f39930 req-2d5f5f94-78a8-44b1-9566-03950e42e302 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Refreshing network info cache for port eac06205-cdc0-424d-b7e2-7740e0db232d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:30:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:41.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:41.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:42 np0005539564 nova_compute[226295]: 2025-11-29 08:30:42.161 226310 DEBUG nova.network.neutron [req-c84b6088-dcdc-4327-83da-9dbf90f39930 req-2d5f5f94-78a8-44b1-9566-03950e42e302 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Updated VIF entry in instance network info cache for port eac06205-cdc0-424d-b7e2-7740e0db232d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:30:42 np0005539564 nova_compute[226295]: 2025-11-29 08:30:42.161 226310 DEBUG nova.network.neutron [req-c84b6088-dcdc-4327-83da-9dbf90f39930 req-2d5f5f94-78a8-44b1-9566-03950e42e302 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Updating instance_info_cache with network_info: [{"id": "eac06205-cdc0-424d-b7e2-7740e0db232d", "address": "fa:16:3e:fb:c3:a9", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeac06205-cd", "ovs_interfaceid": "eac06205-cdc0-424d-b7e2-7740e0db232d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:42 np0005539564 nova_compute[226295]: 2025-11-29 08:30:42.175 226310 DEBUG oslo_concurrency.lockutils [req-c84b6088-dcdc-4327-83da-9dbf90f39930 req-2d5f5f94-78a8-44b1-9566-03950e42e302 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-ef2296eb-4538-4e04-8c0b-42370d9e5b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:30:43 np0005539564 nova_compute[226295]: 2025-11-29 08:30:43.096 226310 DEBUG nova.compute.manager [req-f01e5daa-a00a-403b-be54-27302dd373ec req-1cd1deb3-a257-45b5-aaf8-6b8321ac7b9b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Received event network-changed-eac06205-cdc0-424d-b7e2-7740e0db232d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:43 np0005539564 nova_compute[226295]: 2025-11-29 08:30:43.096 226310 DEBUG nova.compute.manager [req-f01e5daa-a00a-403b-be54-27302dd373ec req-1cd1deb3-a257-45b5-aaf8-6b8321ac7b9b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Refreshing instance network info cache due to event network-changed-eac06205-cdc0-424d-b7e2-7740e0db232d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:30:43 np0005539564 nova_compute[226295]: 2025-11-29 08:30:43.097 226310 DEBUG oslo_concurrency.lockutils [req-f01e5daa-a00a-403b-be54-27302dd373ec req-1cd1deb3-a257-45b5-aaf8-6b8321ac7b9b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-ef2296eb-4538-4e04-8c0b-42370d9e5b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:30:43 np0005539564 nova_compute[226295]: 2025-11-29 08:30:43.097 226310 DEBUG oslo_concurrency.lockutils [req-f01e5daa-a00a-403b-be54-27302dd373ec req-1cd1deb3-a257-45b5-aaf8-6b8321ac7b9b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-ef2296eb-4538-4e04-8c0b-42370d9e5b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:30:43 np0005539564 nova_compute[226295]: 2025-11-29 08:30:43.097 226310 DEBUG nova.network.neutron [req-f01e5daa-a00a-403b-be54-27302dd373ec req-1cd1deb3-a257-45b5-aaf8-6b8321ac7b9b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Refreshing network info cache for port eac06205-cdc0-424d-b7e2-7740e0db232d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:30:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:43.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:43 np0005539564 nova_compute[226295]: 2025-11-29 08:30:43.570 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:44.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:44 np0005539564 nova_compute[226295]: 2025-11-29 08:30:44.561 226310 DEBUG nova.network.neutron [req-f01e5daa-a00a-403b-be54-27302dd373ec req-1cd1deb3-a257-45b5-aaf8-6b8321ac7b9b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Updated VIF entry in instance network info cache for port eac06205-cdc0-424d-b7e2-7740e0db232d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:30:44 np0005539564 nova_compute[226295]: 2025-11-29 08:30:44.562 226310 DEBUG nova.network.neutron [req-f01e5daa-a00a-403b-be54-27302dd373ec req-1cd1deb3-a257-45b5-aaf8-6b8321ac7b9b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Updating instance_info_cache with network_info: [{"id": "eac06205-cdc0-424d-b7e2-7740e0db232d", "address": "fa:16:3e:fb:c3:a9", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeac06205-cd", "ovs_interfaceid": "eac06205-cdc0-424d-b7e2-7740e0db232d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:44 np0005539564 nova_compute[226295]: 2025-11-29 08:30:44.575 226310 DEBUG oslo_concurrency.lockutils [req-f01e5daa-a00a-403b-be54-27302dd373ec req-1cd1deb3-a257-45b5-aaf8-6b8321ac7b9b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-ef2296eb-4538-4e04-8c0b-42370d9e5b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:30:44 np0005539564 nova_compute[226295]: 2025-11-29 08:30:44.606 226310 DEBUG nova.compute.manager [req-cc7707dd-9c3e-4552-b5e6-0846218524d7 req-e503c848-3887-48cb-aa32-64b1c8782e1a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Received event network-changed-eac06205-cdc0-424d-b7e2-7740e0db232d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:44 np0005539564 nova_compute[226295]: 2025-11-29 08:30:44.607 226310 DEBUG nova.compute.manager [req-cc7707dd-9c3e-4552-b5e6-0846218524d7 req-e503c848-3887-48cb-aa32-64b1c8782e1a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Refreshing instance network info cache due to event network-changed-eac06205-cdc0-424d-b7e2-7740e0db232d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:30:44 np0005539564 nova_compute[226295]: 2025-11-29 08:30:44.607 226310 DEBUG oslo_concurrency.lockutils [req-cc7707dd-9c3e-4552-b5e6-0846218524d7 req-e503c848-3887-48cb-aa32-64b1c8782e1a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-ef2296eb-4538-4e04-8c0b-42370d9e5b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:30:44 np0005539564 nova_compute[226295]: 2025-11-29 08:30:44.607 226310 DEBUG oslo_concurrency.lockutils [req-cc7707dd-9c3e-4552-b5e6-0846218524d7 req-e503c848-3887-48cb-aa32-64b1c8782e1a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-ef2296eb-4538-4e04-8c0b-42370d9e5b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:30:44 np0005539564 nova_compute[226295]: 2025-11-29 08:30:44.607 226310 DEBUG nova.network.neutron [req-cc7707dd-9c3e-4552-b5e6-0846218524d7 req-e503c848-3887-48cb-aa32-64b1c8782e1a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Refreshing network info cache for port eac06205-cdc0-424d-b7e2-7740e0db232d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:30:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:45.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:45 np0005539564 nova_compute[226295]: 2025-11-29 08:30:45.902 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:46.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:46 np0005539564 nova_compute[226295]: 2025-11-29 08:30:46.091 226310 DEBUG nova.network.neutron [req-cc7707dd-9c3e-4552-b5e6-0846218524d7 req-e503c848-3887-48cb-aa32-64b1c8782e1a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Updated VIF entry in instance network info cache for port eac06205-cdc0-424d-b7e2-7740e0db232d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:30:46 np0005539564 nova_compute[226295]: 2025-11-29 08:30:46.092 226310 DEBUG nova.network.neutron [req-cc7707dd-9c3e-4552-b5e6-0846218524d7 req-e503c848-3887-48cb-aa32-64b1c8782e1a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Updating instance_info_cache with network_info: [{"id": "eac06205-cdc0-424d-b7e2-7740e0db232d", "address": "fa:16:3e:fb:c3:a9", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeac06205-cd", "ovs_interfaceid": "eac06205-cdc0-424d-b7e2-7740e0db232d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:46 np0005539564 nova_compute[226295]: 2025-11-29 08:30:46.114 226310 DEBUG oslo_concurrency.lockutils [req-cc7707dd-9c3e-4552-b5e6-0846218524d7 req-e503c848-3887-48cb-aa32-64b1c8782e1a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-ef2296eb-4538-4e04-8c0b-42370d9e5b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:30:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:30:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 54K writes, 211K keys, 54K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.04 MB/s#012Cumulative WAL: 54K writes, 19K syncs, 2.77 writes per sync, written: 0.20 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 40K keys, 11K commit groups, 1.0 writes per commit group, ingest: 40.17 MB, 0.07 MB/s#012Interval WAL: 11K writes, 4428 syncs, 2.53 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:30:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:47.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:48.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:48 np0005539564 nova_compute[226295]: 2025-11-29 08:30:48.574 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:48 np0005539564 nova_compute[226295]: 2025-11-29 08:30:48.729 226310 DEBUG oslo_concurrency.lockutils [None req-e9e3faef-6142-4af8-841a-6776fa41ff8c d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:48 np0005539564 nova_compute[226295]: 2025-11-29 08:30:48.730 226310 DEBUG oslo_concurrency.lockutils [None req-e9e3faef-6142-4af8-841a-6776fa41ff8c d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:48 np0005539564 nova_compute[226295]: 2025-11-29 08:30:48.744 226310 INFO nova.compute.manager [None req-e9e3faef-6142-4af8-841a-6776fa41ff8c d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Detaching volume 1c425827-5356-476a-a66c-a9193ddaa105#033[00m
Nov 29 03:30:48 np0005539564 nova_compute[226295]: 2025-11-29 08:30:48.929 226310 INFO nova.virt.block_device [None req-e9e3faef-6142-4af8-841a-6776fa41ff8c d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Attempting to driver detach volume 1c425827-5356-476a-a66c-a9193ddaa105 from mountpoint /dev/vdb#033[00m
Nov 29 03:30:48 np0005539564 nova_compute[226295]: 2025-11-29 08:30:48.940 226310 DEBUG nova.virt.libvirt.driver [None req-e9e3faef-6142-4af8-841a-6776fa41ff8c d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Attempting to detach device vdb from instance ef2296eb-4538-4e04-8c0b-42370d9e5b12 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:30:48 np0005539564 nova_compute[226295]: 2025-11-29 08:30:48.942 226310 DEBUG nova.virt.libvirt.guest [None req-e9e3faef-6142-4af8-841a-6776fa41ff8c d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:30:48 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:30:48 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-1c425827-5356-476a-a66c-a9193ddaa105">
Nov 29 03:30:48 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:48 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:48 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:48 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:30:48 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:30:48 np0005539564 nova_compute[226295]:  <serial>1c425827-5356-476a-a66c-a9193ddaa105</serial>
Nov 29 03:30:48 np0005539564 nova_compute[226295]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:30:48 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:30:48 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:30:48 np0005539564 nova_compute[226295]: 2025-11-29 08:30:48.953 226310 INFO nova.virt.libvirt.driver [None req-e9e3faef-6142-4af8-841a-6776fa41ff8c d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Successfully detached device vdb from instance ef2296eb-4538-4e04-8c0b-42370d9e5b12 from the persistent domain config.#033[00m
Nov 29 03:30:48 np0005539564 nova_compute[226295]: 2025-11-29 08:30:48.954 226310 DEBUG nova.virt.libvirt.driver [None req-e9e3faef-6142-4af8-841a-6776fa41ff8c d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance ef2296eb-4538-4e04-8c0b-42370d9e5b12 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:30:48 np0005539564 nova_compute[226295]: 2025-11-29 08:30:48.954 226310 DEBUG nova.virt.libvirt.guest [None req-e9e3faef-6142-4af8-841a-6776fa41ff8c d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:30:48 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:30:48 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-1c425827-5356-476a-a66c-a9193ddaa105">
Nov 29 03:30:48 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:48 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:48 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:48 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:30:48 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:30:48 np0005539564 nova_compute[226295]:  <serial>1c425827-5356-476a-a66c-a9193ddaa105</serial>
Nov 29 03:30:48 np0005539564 nova_compute[226295]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:30:48 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:30:48 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:30:49 np0005539564 nova_compute[226295]: 2025-11-29 08:30:49.095 226310 DEBUG nova.virt.libvirt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Received event <DeviceRemovedEvent: 1764405049.095347, ef2296eb-4538-4e04-8c0b-42370d9e5b12 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:30:49 np0005539564 nova_compute[226295]: 2025-11-29 08:30:49.099 226310 DEBUG nova.virt.libvirt.driver [None req-e9e3faef-6142-4af8-841a-6776fa41ff8c d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance ef2296eb-4538-4e04-8c0b-42370d9e5b12 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:30:49 np0005539564 nova_compute[226295]: 2025-11-29 08:30:49.102 226310 INFO nova.virt.libvirt.driver [None req-e9e3faef-6142-4af8-841a-6776fa41ff8c d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Successfully detached device vdb from instance ef2296eb-4538-4e04-8c0b-42370d9e5b12 from the live domain config.#033[00m
Nov 29 03:30:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:49 np0005539564 nova_compute[226295]: 2025-11-29 08:30:49.318 226310 DEBUG nova.objects.instance [None req-e9e3faef-6142-4af8-841a-6776fa41ff8c d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lazy-loading 'flavor' on Instance uuid ef2296eb-4538-4e04-8c0b-42370d9e5b12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:49.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:49 np0005539564 nova_compute[226295]: 2025-11-29 08:30:49.392 226310 DEBUG oslo_concurrency.lockutils [None req-e9e3faef-6142-4af8-841a-6776fa41ff8c d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:50.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:50 np0005539564 nova_compute[226295]: 2025-11-29 08:30:50.213 226310 DEBUG oslo_concurrency.lockutils [None req-312548a7-14c0-4829-a4a0-e73bbc6d07b1 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "bf4c2292-18d7-4c4b-b97e-abb227923156" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:50 np0005539564 nova_compute[226295]: 2025-11-29 08:30:50.214 226310 DEBUG oslo_concurrency.lockutils [None req-312548a7-14c0-4829-a4a0-e73bbc6d07b1 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:50 np0005539564 nova_compute[226295]: 2025-11-29 08:30:50.214 226310 DEBUG oslo_concurrency.lockutils [None req-312548a7-14c0-4829-a4a0-e73bbc6d07b1 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:50 np0005539564 nova_compute[226295]: 2025-11-29 08:30:50.214 226310 DEBUG oslo_concurrency.lockutils [None req-312548a7-14c0-4829-a4a0-e73bbc6d07b1 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:50 np0005539564 nova_compute[226295]: 2025-11-29 08:30:50.215 226310 DEBUG oslo_concurrency.lockutils [None req-312548a7-14c0-4829-a4a0-e73bbc6d07b1 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:50 np0005539564 nova_compute[226295]: 2025-11-29 08:30:50.216 226310 INFO nova.compute.manager [None req-312548a7-14c0-4829-a4a0-e73bbc6d07b1 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Terminating instance#033[00m
Nov 29 03:30:50 np0005539564 nova_compute[226295]: 2025-11-29 08:30:50.217 226310 DEBUG nova.compute.manager [None req-312548a7-14c0-4829-a4a0-e73bbc6d07b1 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:30:50 np0005539564 nova_compute[226295]: 2025-11-29 08:30:50.903 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:51 np0005539564 kernel: tapbec1fbdf-d4 (unregistering): left promiscuous mode
Nov 29 03:30:51 np0005539564 NetworkManager[48997]: <info>  [1764405051.1057] device (tapbec1fbdf-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:30:51 np0005539564 nova_compute[226295]: 2025-11-29 08:30:51.116 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:51 np0005539564 ovn_controller[130591]: 2025-11-29T08:30:51Z|00666|binding|INFO|Releasing lport bec1fbdf-d4dc-4b2c-af66-9ba123464651 from this chassis (sb_readonly=0)
Nov 29 03:30:51 np0005539564 ovn_controller[130591]: 2025-11-29T08:30:51Z|00667|binding|INFO|Setting lport bec1fbdf-d4dc-4b2c-af66-9ba123464651 down in Southbound
Nov 29 03:30:51 np0005539564 ovn_controller[130591]: 2025-11-29T08:30:51Z|00668|binding|INFO|Removing iface tapbec1fbdf-d4 ovn-installed in OVS
Nov 29 03:30:51 np0005539564 nova_compute[226295]: 2025-11-29 08:30:51.118 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:51.128 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:a4:7d 10.100.0.7'], port_security=['fa:16:3e:68:a4:7d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bf4c2292-18d7-4c4b-b97e-abb227923156', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eb0810bf6f5b4eb59638b7a2cf59ed5b', 'neutron:revision_number': '8', 'neutron:security_group_ids': '02abc3ce-c8f1-4034-8c00-97d80a9dca82', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.207', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1429573d-31ea-4b00-8580-1031fbde1ea5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=bec1fbdf-d4dc-4b2c-af66-9ba123464651) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:51.129 139780 INFO neutron.agent.ovn.metadata.agent [-] Port bec1fbdf-d4dc-4b2c-af66-9ba123464651 in datapath a259ebcb-7cce-4363-8e50-c25ed4a3daec unbound from our chassis#033[00m
Nov 29 03:30:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:51.131 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a259ebcb-7cce-4363-8e50-c25ed4a3daec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:30:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:51.132 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[24ba0524-0fcb-4773-a34b-907cf1c1a58c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:51.133 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec namespace which is not needed anymore#033[00m
Nov 29 03:30:51 np0005539564 nova_compute[226295]: 2025-11-29 08:30:51.139 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:51 np0005539564 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000a5.scope: Deactivated successfully.
Nov 29 03:30:51 np0005539564 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000a5.scope: Consumed 15.412s CPU time.
Nov 29 03:30:51 np0005539564 systemd-machined[190128]: Machine qemu-80-instance-000000a5 terminated.
Nov 29 03:30:51 np0005539564 nova_compute[226295]: 2025-11-29 08:30:51.257 226310 INFO nova.virt.libvirt.driver [-] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Instance destroyed successfully.#033[00m
Nov 29 03:30:51 np0005539564 nova_compute[226295]: 2025-11-29 08:30:51.257 226310 DEBUG nova.objects.instance [None req-312548a7-14c0-4829-a4a0-e73bbc6d07b1 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'resources' on Instance uuid bf4c2292-18d7-4c4b-b97e-abb227923156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:51 np0005539564 nova_compute[226295]: 2025-11-29 08:30:51.275 226310 DEBUG nova.virt.libvirt.vif [None req-312548a7-14c0-4829-a4a0-e73bbc6d07b1 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:28:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-958672118',display_name='tempest-AttachVolumeTestJSON-server-958672118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-958672118',id=165,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFdrbbSC0C45QsA4WoH0skfLoEJuEvLicsgsNB+yVNBuuMlISuhPWp9oFNtgFMFejIvvaZEt/QMuFcGBBZNxI6yPKa6qrk6Y9WbQ5vb9c2PbERoEO1CjXu5JuNOGCArooQ==',key_name='tempest-keypair-1718432726',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:28:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eb0810bf6f5b4eb59638b7a2cf59ed5b',ramdisk_id='',reservation_id='r-unuosnba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-942041170',owner_user_name='tempest-AttachVolumeTestJSON-942041170-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:30:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a362a419f6a492aae2f102ad2bbd5e9',uuid=bf4c2292-18d7-4c4b-b97e-abb227923156,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:30:51 np0005539564 nova_compute[226295]: 2025-11-29 08:30:51.276 226310 DEBUG nova.network.os_vif_util [None req-312548a7-14c0-4829-a4a0-e73bbc6d07b1 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Converting VIF {"id": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "address": "fa:16:3e:68:a4:7d", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec1fbdf-d4", "ovs_interfaceid": "bec1fbdf-d4dc-4b2c-af66-9ba123464651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:51 np0005539564 nova_compute[226295]: 2025-11-29 08:30:51.276 226310 DEBUG nova.network.os_vif_util [None req-312548a7-14c0-4829-a4a0-e73bbc6d07b1 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=bec1fbdf-d4dc-4b2c-af66-9ba123464651,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec1fbdf-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:51 np0005539564 nova_compute[226295]: 2025-11-29 08:30:51.277 226310 DEBUG os_vif [None req-312548a7-14c0-4829-a4a0-e73bbc6d07b1 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=bec1fbdf-d4dc-4b2c-af66-9ba123464651,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec1fbdf-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:30:51 np0005539564 nova_compute[226295]: 2025-11-29 08:30:51.278 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:51 np0005539564 nova_compute[226295]: 2025-11-29 08:30:51.278 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbec1fbdf-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:51 np0005539564 nova_compute[226295]: 2025-11-29 08:30:51.279 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:51 np0005539564 nova_compute[226295]: 2025-11-29 08:30:51.281 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:51 np0005539564 nova_compute[226295]: 2025-11-29 08:30:51.283 226310 INFO os_vif [None req-312548a7-14c0-4829-a4a0-e73bbc6d07b1 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=bec1fbdf-d4dc-4b2c-af66-9ba123464651,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec1fbdf-d4')#033[00m
Nov 29 03:30:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:51.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:51 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[288465]: [NOTICE]   (288469) : haproxy version is 2.8.14-c23fe91
Nov 29 03:30:51 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[288465]: [NOTICE]   (288469) : path to executable is /usr/sbin/haproxy
Nov 29 03:30:51 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[288465]: [WARNING]  (288469) : Exiting Master process...
Nov 29 03:30:51 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[288465]: [WARNING]  (288469) : Exiting Master process...
Nov 29 03:30:51 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[288465]: [ALERT]    (288469) : Current worker (288471) exited with code 143 (Terminated)
Nov 29 03:30:51 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[288465]: [WARNING]  (288469) : All workers exited. Exiting... (0)
Nov 29 03:30:51 np0005539564 systemd[1]: libpod-75f8ff280b9bf2f35fbff76dc9c33519ab5f7ce444abfef99080cbd6acfeec95.scope: Deactivated successfully.
Nov 29 03:30:51 np0005539564 podman[288624]: 2025-11-29 08:30:51.54780017 +0000 UTC m=+0.317791009 container died 75f8ff280b9bf2f35fbff76dc9c33519ab5f7ce444abfef99080cbd6acfeec95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:30:51 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-75f8ff280b9bf2f35fbff76dc9c33519ab5f7ce444abfef99080cbd6acfeec95-userdata-shm.mount: Deactivated successfully.
Nov 29 03:30:51 np0005539564 systemd[1]: var-lib-containers-storage-overlay-10e1effdc16fec3f2b98bc6ec33e174837a80d908b32c6beea4097a0d524dd14-merged.mount: Deactivated successfully.
Nov 29 03:30:51 np0005539564 podman[288624]: 2025-11-29 08:30:51.978863592 +0000 UTC m=+0.748854411 container cleanup 75f8ff280b9bf2f35fbff76dc9c33519ab5f7ce444abfef99080cbd6acfeec95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:30:51 np0005539564 systemd[1]: libpod-conmon-75f8ff280b9bf2f35fbff76dc9c33519ab5f7ce444abfef99080cbd6acfeec95.scope: Deactivated successfully.
Nov 29 03:30:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:52.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:52 np0005539564 podman[288679]: 2025-11-29 08:30:52.059458091 +0000 UTC m=+0.050526498 container remove 75f8ff280b9bf2f35fbff76dc9c33519ab5f7ce444abfef99080cbd6acfeec95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:30:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:52.066 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[30ef657e-ed18-43f4-91a1-d26f4fe25f6d]: (4, ('Sat Nov 29 08:30:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec (75f8ff280b9bf2f35fbff76dc9c33519ab5f7ce444abfef99080cbd6acfeec95)\n75f8ff280b9bf2f35fbff76dc9c33519ab5f7ce444abfef99080cbd6acfeec95\nSat Nov 29 08:30:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec (75f8ff280b9bf2f35fbff76dc9c33519ab5f7ce444abfef99080cbd6acfeec95)\n75f8ff280b9bf2f35fbff76dc9c33519ab5f7ce444abfef99080cbd6acfeec95\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:52.069 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[abde009e-2e9e-45e7-9e4b-cb46ffbfdbe1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:52.070 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa259ebcb-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:52 np0005539564 nova_compute[226295]: 2025-11-29 08:30:52.072 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:52 np0005539564 kernel: tapa259ebcb-70: left promiscuous mode
Nov 29 03:30:52 np0005539564 nova_compute[226295]: 2025-11-29 08:30:52.087 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:52.091 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c032deec-93e4-4b9a-815c-dce77854dfd9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:52.111 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[275f434e-7bac-4048-99e7-37e21aa52600]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:52.113 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[68742bfb-fc6c-4700-8ce3-3a27f4aef472]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:52.131 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[39c64081-d115-496b-9a91-a6e0e6be1e4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 799063, 'reachable_time': 25345, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288691, 'error': None, 'target': 'ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539564 systemd[1]: run-netns-ovnmeta\x2da259ebcb\x2d7cce\x2d4363\x2d8e50\x2dc25ed4a3daec.mount: Deactivated successfully.
Nov 29 03:30:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:52.142 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:30:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:52.142 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[ce3c6cb6-2b90-4e9f-8086-d76e2bec4745]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539564 nova_compute[226295]: 2025-11-29 08:30:52.231 226310 DEBUG nova.compute.manager [req-e3e8c1e4-88f5-476e-99f6-41e0864b4bcc req-b971abe5-6a5e-4d7c-8f09-cc4a082c0741 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Received event network-vif-unplugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:52 np0005539564 nova_compute[226295]: 2025-11-29 08:30:52.231 226310 DEBUG oslo_concurrency.lockutils [req-e3e8c1e4-88f5-476e-99f6-41e0864b4bcc req-b971abe5-6a5e-4d7c-8f09-cc4a082c0741 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:52 np0005539564 nova_compute[226295]: 2025-11-29 08:30:52.231 226310 DEBUG oslo_concurrency.lockutils [req-e3e8c1e4-88f5-476e-99f6-41e0864b4bcc req-b971abe5-6a5e-4d7c-8f09-cc4a082c0741 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:52 np0005539564 nova_compute[226295]: 2025-11-29 08:30:52.232 226310 DEBUG oslo_concurrency.lockutils [req-e3e8c1e4-88f5-476e-99f6-41e0864b4bcc req-b971abe5-6a5e-4d7c-8f09-cc4a082c0741 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:52 np0005539564 nova_compute[226295]: 2025-11-29 08:30:52.232 226310 DEBUG nova.compute.manager [req-e3e8c1e4-88f5-476e-99f6-41e0864b4bcc req-b971abe5-6a5e-4d7c-8f09-cc4a082c0741 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] No waiting events found dispatching network-vif-unplugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:52 np0005539564 nova_compute[226295]: 2025-11-29 08:30:52.232 226310 DEBUG nova.compute.manager [req-e3e8c1e4-88f5-476e-99f6-41e0864b4bcc req-b971abe5-6a5e-4d7c-8f09-cc4a082c0741 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Received event network-vif-unplugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:30:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:53.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:53 np0005539564 nova_compute[226295]: 2025-11-29 08:30:53.654 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:53 np0005539564 nova_compute[226295]: 2025-11-29 08:30:53.812 226310 INFO nova.virt.libvirt.driver [None req-312548a7-14c0-4829-a4a0-e73bbc6d07b1 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Deleting instance files /var/lib/nova/instances/bf4c2292-18d7-4c4b-b97e-abb227923156_del#033[00m
Nov 29 03:30:53 np0005539564 nova_compute[226295]: 2025-11-29 08:30:53.812 226310 INFO nova.virt.libvirt.driver [None req-312548a7-14c0-4829-a4a0-e73bbc6d07b1 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Deletion of /var/lib/nova/instances/bf4c2292-18d7-4c4b-b97e-abb227923156_del complete#033[00m
Nov 29 03:30:53 np0005539564 nova_compute[226295]: 2025-11-29 08:30:53.883 226310 INFO nova.compute.manager [None req-312548a7-14c0-4829-a4a0-e73bbc6d07b1 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Took 3.67 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:30:53 np0005539564 nova_compute[226295]: 2025-11-29 08:30:53.884 226310 DEBUG oslo.service.loopingcall [None req-312548a7-14c0-4829-a4a0-e73bbc6d07b1 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:30:53 np0005539564 nova_compute[226295]: 2025-11-29 08:30:53.884 226310 DEBUG nova.compute.manager [-] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:30:53 np0005539564 nova_compute[226295]: 2025-11-29 08:30:53.884 226310 DEBUG nova.network.neutron [-] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:30:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:54.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:54 np0005539564 nova_compute[226295]: 2025-11-29 08:30:54.392 226310 DEBUG nova.compute.manager [req-7d823627-eddc-4fda-9beb-769c0726ad6e req-464548ef-80aa-4060-a8ac-51e16fb0840a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Received event network-vif-plugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:54 np0005539564 nova_compute[226295]: 2025-11-29 08:30:54.393 226310 DEBUG oslo_concurrency.lockutils [req-7d823627-eddc-4fda-9beb-769c0726ad6e req-464548ef-80aa-4060-a8ac-51e16fb0840a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:54 np0005539564 nova_compute[226295]: 2025-11-29 08:30:54.394 226310 DEBUG oslo_concurrency.lockutils [req-7d823627-eddc-4fda-9beb-769c0726ad6e req-464548ef-80aa-4060-a8ac-51e16fb0840a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:54 np0005539564 nova_compute[226295]: 2025-11-29 08:30:54.394 226310 DEBUG oslo_concurrency.lockutils [req-7d823627-eddc-4fda-9beb-769c0726ad6e req-464548ef-80aa-4060-a8ac-51e16fb0840a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:54 np0005539564 nova_compute[226295]: 2025-11-29 08:30:54.394 226310 DEBUG nova.compute.manager [req-7d823627-eddc-4fda-9beb-769c0726ad6e req-464548ef-80aa-4060-a8ac-51e16fb0840a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] No waiting events found dispatching network-vif-plugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:54 np0005539564 nova_compute[226295]: 2025-11-29 08:30:54.395 226310 WARNING nova.compute.manager [req-7d823627-eddc-4fda-9beb-769c0726ad6e req-464548ef-80aa-4060-a8ac-51e16fb0840a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Received unexpected event network-vif-plugged-bec1fbdf-d4dc-4b2c-af66-9ba123464651 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:30:54 np0005539564 nova_compute[226295]: 2025-11-29 08:30:54.587 226310 DEBUG oslo_concurrency.lockutils [None req-66ad6fc7-1eb8-4e9c-b6db-403db6c535e5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:54 np0005539564 nova_compute[226295]: 2025-11-29 08:30:54.588 226310 DEBUG oslo_concurrency.lockutils [None req-66ad6fc7-1eb8-4e9c-b6db-403db6c535e5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:54 np0005539564 nova_compute[226295]: 2025-11-29 08:30:54.610 226310 INFO nova.compute.manager [None req-66ad6fc7-1eb8-4e9c-b6db-403db6c535e5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Detaching volume 2c994368-e80f-41ad-8820-f509c2008bb5#033[00m
Nov 29 03:30:54 np0005539564 nova_compute[226295]: 2025-11-29 08:30:54.739 226310 INFO nova.virt.block_device [None req-66ad6fc7-1eb8-4e9c-b6db-403db6c535e5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Attempting to driver detach volume 2c994368-e80f-41ad-8820-f509c2008bb5 from mountpoint /dev/vdc#033[00m
Nov 29 03:30:54 np0005539564 nova_compute[226295]: 2025-11-29 08:30:54.754 226310 DEBUG nova.virt.libvirt.driver [None req-66ad6fc7-1eb8-4e9c-b6db-403db6c535e5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Attempting to detach device vdc from instance ef2296eb-4538-4e04-8c0b-42370d9e5b12 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:30:54 np0005539564 nova_compute[226295]: 2025-11-29 08:30:54.755 226310 DEBUG nova.virt.libvirt.guest [None req-66ad6fc7-1eb8-4e9c-b6db-403db6c535e5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:30:54 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:30:54 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-2c994368-e80f-41ad-8820-f509c2008bb5">
Nov 29 03:30:54 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:54 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:54 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:54 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:30:54 np0005539564 nova_compute[226295]:  <target dev="vdc" bus="virtio"/>
Nov 29 03:30:54 np0005539564 nova_compute[226295]:  <serial>2c994368-e80f-41ad-8820-f509c2008bb5</serial>
Nov 29 03:30:54 np0005539564 nova_compute[226295]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Nov 29 03:30:54 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:30:54 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:30:54 np0005539564 nova_compute[226295]: 2025-11-29 08:30:54.826 226310 DEBUG nova.network.neutron [-] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:54 np0005539564 nova_compute[226295]: 2025-11-29 08:30:54.844 226310 INFO nova.compute.manager [-] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Took 0.96 seconds to deallocate network for instance.#033[00m
Nov 29 03:30:54 np0005539564 nova_compute[226295]: 2025-11-29 08:30:54.917 226310 DEBUG oslo_concurrency.lockutils [None req-312548a7-14c0-4829-a4a0-e73bbc6d07b1 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:54 np0005539564 nova_compute[226295]: 2025-11-29 08:30:54.918 226310 DEBUG oslo_concurrency.lockutils [None req-312548a7-14c0-4829-a4a0-e73bbc6d07b1 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:54 np0005539564 nova_compute[226295]: 2025-11-29 08:30:54.979 226310 DEBUG nova.compute.manager [req-de371b12-5ba1-424d-b800-23b5fa22691f req-2a1f1c82-c0ca-4773-9743-dc94f62be3e5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Received event network-vif-deleted-bec1fbdf-d4dc-4b2c-af66-9ba123464651 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:55 np0005539564 nova_compute[226295]: 2025-11-29 08:30:55.014 226310 DEBUG oslo_concurrency.processutils [None req-312548a7-14c0-4829-a4a0-e73bbc6d07b1 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:55 np0005539564 nova_compute[226295]: 2025-11-29 08:30:55.153 226310 INFO nova.virt.libvirt.driver [None req-66ad6fc7-1eb8-4e9c-b6db-403db6c535e5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Successfully detached device vdc from instance ef2296eb-4538-4e04-8c0b-42370d9e5b12 from the persistent domain config.#033[00m
Nov 29 03:30:55 np0005539564 nova_compute[226295]: 2025-11-29 08:30:55.154 226310 DEBUG nova.virt.libvirt.driver [None req-66ad6fc7-1eb8-4e9c-b6db-403db6c535e5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance ef2296eb-4538-4e04-8c0b-42370d9e5b12 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:30:55 np0005539564 nova_compute[226295]: 2025-11-29 08:30:55.155 226310 DEBUG nova.virt.libvirt.guest [None req-66ad6fc7-1eb8-4e9c-b6db-403db6c535e5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:30:55 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:30:55 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-2c994368-e80f-41ad-8820-f509c2008bb5">
Nov 29 03:30:55 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:55 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:55 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:55 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:30:55 np0005539564 nova_compute[226295]:  <target dev="vdc" bus="virtio"/>
Nov 29 03:30:55 np0005539564 nova_compute[226295]:  <serial>2c994368-e80f-41ad-8820-f509c2008bb5</serial>
Nov 29 03:30:55 np0005539564 nova_compute[226295]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Nov 29 03:30:55 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:30:55 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:30:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:55.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:55 np0005539564 nova_compute[226295]: 2025-11-29 08:30:55.444 226310 DEBUG nova.virt.libvirt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Received event <DeviceRemovedEvent: 1764405055.434307, ef2296eb-4538-4e04-8c0b-42370d9e5b12 => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:30:55 np0005539564 nova_compute[226295]: 2025-11-29 08:30:55.445 226310 DEBUG nova.virt.libvirt.driver [None req-66ad6fc7-1eb8-4e9c-b6db-403db6c535e5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance ef2296eb-4538-4e04-8c0b-42370d9e5b12 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:30:55 np0005539564 nova_compute[226295]: 2025-11-29 08:30:55.450 226310 INFO nova.virt.libvirt.driver [None req-66ad6fc7-1eb8-4e9c-b6db-403db6c535e5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Successfully detached device vdc from instance ef2296eb-4538-4e04-8c0b-42370d9e5b12 from the live domain config.#033[00m
Nov 29 03:30:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:55 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2115914059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:55 np0005539564 nova_compute[226295]: 2025-11-29 08:30:55.595 226310 DEBUG oslo_concurrency.processutils [None req-312548a7-14c0-4829-a4a0-e73bbc6d07b1 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:55 np0005539564 nova_compute[226295]: 2025-11-29 08:30:55.605 226310 DEBUG nova.compute.provider_tree [None req-312548a7-14c0-4829-a4a0-e73bbc6d07b1 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:30:55 np0005539564 nova_compute[226295]: 2025-11-29 08:30:55.625 226310 DEBUG nova.scheduler.client.report [None req-312548a7-14c0-4829-a4a0-e73bbc6d07b1 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:30:55 np0005539564 nova_compute[226295]: 2025-11-29 08:30:55.636 226310 DEBUG nova.objects.instance [None req-66ad6fc7-1eb8-4e9c-b6db-403db6c535e5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lazy-loading 'flavor' on Instance uuid ef2296eb-4538-4e04-8c0b-42370d9e5b12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:55 np0005539564 nova_compute[226295]: 2025-11-29 08:30:55.659 226310 DEBUG oslo_concurrency.lockutils [None req-312548a7-14c0-4829-a4a0-e73bbc6d07b1 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:55 np0005539564 nova_compute[226295]: 2025-11-29 08:30:55.680 226310 DEBUG oslo_concurrency.lockutils [None req-66ad6fc7-1eb8-4e9c-b6db-403db6c535e5 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:55 np0005539564 nova_compute[226295]: 2025-11-29 08:30:55.684 226310 INFO nova.scheduler.client.report [None req-312548a7-14c0-4829-a4a0-e73bbc6d07b1 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Deleted allocations for instance bf4c2292-18d7-4c4b-b97e-abb227923156#033[00m
Nov 29 03:30:55 np0005539564 nova_compute[226295]: 2025-11-29 08:30:55.745 226310 DEBUG oslo_concurrency.lockutils [None req-312548a7-14c0-4829-a4a0-e73bbc6d07b1 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "bf4c2292-18d7-4c4b-b97e-abb227923156" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:56.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:56 np0005539564 nova_compute[226295]: 2025-11-29 08:30:56.280 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:57.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:58.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:58 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:58.640 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:58 np0005539564 nova_compute[226295]: 2025-11-29 08:30:58.641 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:58 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:30:58.642 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:30:58 np0005539564 nova_compute[226295]: 2025-11-29 08:30:58.657 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:59 np0005539564 nova_compute[226295]: 2025-11-29 08:30:59.066 226310 DEBUG nova.compute.manager [req-70412bb7-eb13-4752-b4fe-1193a94bd89a req-b6651c5d-ed25-41d4-87c2-4e2036bfdb27 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Received event network-changed-eac06205-cdc0-424d-b7e2-7740e0db232d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:59 np0005539564 nova_compute[226295]: 2025-11-29 08:30:59.066 226310 DEBUG nova.compute.manager [req-70412bb7-eb13-4752-b4fe-1193a94bd89a req-b6651c5d-ed25-41d4-87c2-4e2036bfdb27 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Refreshing instance network info cache due to event network-changed-eac06205-cdc0-424d-b7e2-7740e0db232d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:30:59 np0005539564 nova_compute[226295]: 2025-11-29 08:30:59.066 226310 DEBUG oslo_concurrency.lockutils [req-70412bb7-eb13-4752-b4fe-1193a94bd89a req-b6651c5d-ed25-41d4-87c2-4e2036bfdb27 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-ef2296eb-4538-4e04-8c0b-42370d9e5b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:30:59 np0005539564 nova_compute[226295]: 2025-11-29 08:30:59.067 226310 DEBUG oslo_concurrency.lockutils [req-70412bb7-eb13-4752-b4fe-1193a94bd89a req-b6651c5d-ed25-41d4-87c2-4e2036bfdb27 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-ef2296eb-4538-4e04-8c0b-42370d9e5b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:30:59 np0005539564 nova_compute[226295]: 2025-11-29 08:30:59.067 226310 DEBUG nova.network.neutron [req-70412bb7-eb13-4752-b4fe-1193a94bd89a req-b6651c5d-ed25-41d4-87c2-4e2036bfdb27 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Refreshing network info cache for port eac06205-cdc0-424d-b7e2-7740e0db232d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:30:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:30:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:59.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:00.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:00 np0005539564 nova_compute[226295]: 2025-11-29 08:31:00.172 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:00 np0005539564 nova_compute[226295]: 2025-11-29 08:31:00.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:00 np0005539564 podman[288718]: 2025-11-29 08:31:00.525860332 +0000 UTC m=+0.074155188 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 29 03:31:00 np0005539564 podman[288719]: 2025-11-29 08:31:00.534519246 +0000 UTC m=+0.080821478 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:31:00 np0005539564 podman[288717]: 2025-11-29 08:31:00.633158644 +0000 UTC m=+0.175587402 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller)
Nov 29 03:31:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:00.643 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e370 e370: 3 total, 3 up, 3 in
Nov 29 03:31:01 np0005539564 nova_compute[226295]: 2025-11-29 08:31:01.089 226310 DEBUG nova.network.neutron [req-70412bb7-eb13-4752-b4fe-1193a94bd89a req-b6651c5d-ed25-41d4-87c2-4e2036bfdb27 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Updated VIF entry in instance network info cache for port eac06205-cdc0-424d-b7e2-7740e0db232d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:31:01 np0005539564 nova_compute[226295]: 2025-11-29 08:31:01.090 226310 DEBUG nova.network.neutron [req-70412bb7-eb13-4752-b4fe-1193a94bd89a req-b6651c5d-ed25-41d4-87c2-4e2036bfdb27 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Updating instance_info_cache with network_info: [{"id": "eac06205-cdc0-424d-b7e2-7740e0db232d", "address": "fa:16:3e:fb:c3:a9", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeac06205-cd", "ovs_interfaceid": "eac06205-cdc0-424d-b7e2-7740e0db232d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:01 np0005539564 nova_compute[226295]: 2025-11-29 08:31:01.109 226310 DEBUG oslo_concurrency.lockutils [req-70412bb7-eb13-4752-b4fe-1193a94bd89a req-b6651c5d-ed25-41d4-87c2-4e2036bfdb27 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-ef2296eb-4538-4e04-8c0b-42370d9e5b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:31:01 np0005539564 nova_compute[226295]: 2025-11-29 08:31:01.281 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:01.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:02.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:02 np0005539564 nova_compute[226295]: 2025-11-29 08:31:02.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:02 np0005539564 nova_compute[226295]: 2025-11-29 08:31:02.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:31:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:31:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:03.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:31:03 np0005539564 nova_compute[226295]: 2025-11-29 08:31:03.661 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:03.745 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:03.746 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:03.747 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:03 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:31:03 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:31:03 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:31:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:04.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:04 np0005539564 nova_compute[226295]: 2025-11-29 08:31:04.112 226310 DEBUG nova.compute.manager [req-0d22b8b5-2baa-44dc-bc9f-96052b061e53 req-4338da31-d621-4a12-a7ed-5ae5ce48c012 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Received event network-changed-eac06205-cdc0-424d-b7e2-7740e0db232d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:04 np0005539564 nova_compute[226295]: 2025-11-29 08:31:04.112 226310 DEBUG nova.compute.manager [req-0d22b8b5-2baa-44dc-bc9f-96052b061e53 req-4338da31-d621-4a12-a7ed-5ae5ce48c012 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Refreshing instance network info cache due to event network-changed-eac06205-cdc0-424d-b7e2-7740e0db232d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:31:04 np0005539564 nova_compute[226295]: 2025-11-29 08:31:04.113 226310 DEBUG oslo_concurrency.lockutils [req-0d22b8b5-2baa-44dc-bc9f-96052b061e53 req-4338da31-d621-4a12-a7ed-5ae5ce48c012 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-ef2296eb-4538-4e04-8c0b-42370d9e5b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:31:04 np0005539564 nova_compute[226295]: 2025-11-29 08:31:04.113 226310 DEBUG oslo_concurrency.lockutils [req-0d22b8b5-2baa-44dc-bc9f-96052b061e53 req-4338da31-d621-4a12-a7ed-5ae5ce48c012 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-ef2296eb-4538-4e04-8c0b-42370d9e5b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:31:04 np0005539564 nova_compute[226295]: 2025-11-29 08:31:04.113 226310 DEBUG nova.network.neutron [req-0d22b8b5-2baa-44dc-bc9f-96052b061e53 req-4338da31-d621-4a12-a7ed-5ae5ce48c012 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Refreshing network info cache for port eac06205-cdc0-424d-b7e2-7740e0db232d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:31:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:04 np0005539564 nova_compute[226295]: 2025-11-29 08:31:04.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:05 np0005539564 nova_compute[226295]: 2025-11-29 08:31:05.346 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:31:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:05.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:31:05 np0005539564 nova_compute[226295]: 2025-11-29 08:31:05.769 226310 DEBUG oslo_concurrency.lockutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "b4b22d90-9381-462a-bb31-7c87d8627c3b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:05 np0005539564 nova_compute[226295]: 2025-11-29 08:31:05.770 226310 DEBUG oslo_concurrency.lockutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:05 np0005539564 nova_compute[226295]: 2025-11-29 08:31:05.786 226310 DEBUG nova.compute.manager [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:31:05 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:31:05 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:31:05 np0005539564 nova_compute[226295]: 2025-11-29 08:31:05.928 226310 DEBUG oslo_concurrency.lockutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:05 np0005539564 nova_compute[226295]: 2025-11-29 08:31:05.929 226310 DEBUG oslo_concurrency.lockutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:05 np0005539564 nova_compute[226295]: 2025-11-29 08:31:05.935 226310 DEBUG nova.network.neutron [req-0d22b8b5-2baa-44dc-bc9f-96052b061e53 req-4338da31-d621-4a12-a7ed-5ae5ce48c012 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Updated VIF entry in instance network info cache for port eac06205-cdc0-424d-b7e2-7740e0db232d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:31:05 np0005539564 nova_compute[226295]: 2025-11-29 08:31:05.935 226310 DEBUG nova.network.neutron [req-0d22b8b5-2baa-44dc-bc9f-96052b061e53 req-4338da31-d621-4a12-a7ed-5ae5ce48c012 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Updating instance_info_cache with network_info: [{"id": "eac06205-cdc0-424d-b7e2-7740e0db232d", "address": "fa:16:3e:fb:c3:a9", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeac06205-cd", "ovs_interfaceid": "eac06205-cdc0-424d-b7e2-7740e0db232d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:05 np0005539564 nova_compute[226295]: 2025-11-29 08:31:05.941 226310 DEBUG nova.virt.hardware [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:31:05 np0005539564 nova_compute[226295]: 2025-11-29 08:31:05.942 226310 INFO nova.compute.claims [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:31:05 np0005539564 nova_compute[226295]: 2025-11-29 08:31:05.976 226310 DEBUG oslo_concurrency.lockutils [req-0d22b8b5-2baa-44dc-bc9f-96052b061e53 req-4338da31-d621-4a12-a7ed-5ae5ce48c012 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-ef2296eb-4538-4e04-8c0b-42370d9e5b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:31:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:06.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:06 np0005539564 nova_compute[226295]: 2025-11-29 08:31:06.222 226310 DEBUG oslo_concurrency.processutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:06 np0005539564 nova_compute[226295]: 2025-11-29 08:31:06.269 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405051.251547, bf4c2292-18d7-4c4b-b97e-abb227923156 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:31:06 np0005539564 nova_compute[226295]: 2025-11-29 08:31:06.270 226310 INFO nova.compute.manager [-] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:31:06 np0005539564 nova_compute[226295]: 2025-11-29 08:31:06.283 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:06 np0005539564 nova_compute[226295]: 2025-11-29 08:31:06.301 226310 DEBUG nova.compute.manager [None req-e96a4b7e-988b-46dc-bbff-ec745e488adb - - - - - -] [instance: bf4c2292-18d7-4c4b-b97e-abb227923156] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:06 np0005539564 nova_compute[226295]: 2025-11-29 08:31:06.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:06 np0005539564 nova_compute[226295]: 2025-11-29 08:31:06.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:31:06 np0005539564 nova_compute[226295]: 2025-11-29 08:31:06.499 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-ef2296eb-4538-4e04-8c0b-42370d9e5b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:31:06 np0005539564 nova_compute[226295]: 2025-11-29 08:31:06.499 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-ef2296eb-4538-4e04-8c0b-42370d9e5b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:31:06 np0005539564 nova_compute[226295]: 2025-11-29 08:31:06.500 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:31:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e371 e371: 3 total, 3 up, 3 in
Nov 29 03:31:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:31:06 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3782315917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:31:06 np0005539564 nova_compute[226295]: 2025-11-29 08:31:06.693 226310 DEBUG oslo_concurrency.processutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:06 np0005539564 nova_compute[226295]: 2025-11-29 08:31:06.703 226310 DEBUG nova.compute.provider_tree [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:31:06 np0005539564 nova_compute[226295]: 2025-11-29 08:31:06.719 226310 DEBUG nova.scheduler.client.report [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:31:06 np0005539564 nova_compute[226295]: 2025-11-29 08:31:06.749 226310 DEBUG oslo_concurrency.lockutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:06 np0005539564 nova_compute[226295]: 2025-11-29 08:31:06.750 226310 DEBUG nova.compute.manager [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:31:06 np0005539564 nova_compute[226295]: 2025-11-29 08:31:06.801 226310 DEBUG nova.compute.manager [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:31:06 np0005539564 nova_compute[226295]: 2025-11-29 08:31:06.802 226310 DEBUG nova.network.neutron [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:31:06 np0005539564 nova_compute[226295]: 2025-11-29 08:31:06.819 226310 INFO nova.virt.libvirt.driver [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:31:06 np0005539564 nova_compute[226295]: 2025-11-29 08:31:06.837 226310 DEBUG nova.compute.manager [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:31:06 np0005539564 nova_compute[226295]: 2025-11-29 08:31:06.947 226310 DEBUG nova.compute.manager [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:31:06 np0005539564 nova_compute[226295]: 2025-11-29 08:31:06.949 226310 DEBUG nova.virt.libvirt.driver [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:31:06 np0005539564 nova_compute[226295]: 2025-11-29 08:31:06.951 226310 INFO nova.virt.libvirt.driver [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Creating image(s)#033[00m
Nov 29 03:31:06 np0005539564 nova_compute[226295]: 2025-11-29 08:31:06.986 226310 DEBUG nova.storage.rbd_utils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] rbd image b4b22d90-9381-462a-bb31-7c87d8627c3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:07 np0005539564 nova_compute[226295]: 2025-11-29 08:31:07.041 226310 DEBUG nova.storage.rbd_utils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] rbd image b4b22d90-9381-462a-bb31-7c87d8627c3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:07 np0005539564 nova_compute[226295]: 2025-11-29 08:31:07.073 226310 DEBUG nova.storage.rbd_utils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] rbd image b4b22d90-9381-462a-bb31-7c87d8627c3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:07 np0005539564 nova_compute[226295]: 2025-11-29 08:31:07.078 226310 DEBUG oslo_concurrency.processutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:07 np0005539564 nova_compute[226295]: 2025-11-29 08:31:07.123 226310 DEBUG nova.policy [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a362a419f6a492aae2f102ad2bbd5e9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eb0810bf6f5b4eb59638b7a2cf59ed5b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:31:07 np0005539564 nova_compute[226295]: 2025-11-29 08:31:07.167 226310 DEBUG oslo_concurrency.processutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:07 np0005539564 nova_compute[226295]: 2025-11-29 08:31:07.168 226310 DEBUG oslo_concurrency.lockutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:07 np0005539564 nova_compute[226295]: 2025-11-29 08:31:07.169 226310 DEBUG oslo_concurrency.lockutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:07 np0005539564 nova_compute[226295]: 2025-11-29 08:31:07.169 226310 DEBUG oslo_concurrency.lockutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:07 np0005539564 nova_compute[226295]: 2025-11-29 08:31:07.220 226310 DEBUG nova.storage.rbd_utils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] rbd image b4b22d90-9381-462a-bb31-7c87d8627c3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:07 np0005539564 nova_compute[226295]: 2025-11-29 08:31:07.224 226310 DEBUG oslo_concurrency.processutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf b4b22d90-9381-462a-bb31-7c87d8627c3b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:07.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:07 np0005539564 nova_compute[226295]: 2025-11-29 08:31:07.563 226310 DEBUG oslo_concurrency.processutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf b4b22d90-9381-462a-bb31-7c87d8627c3b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:07 np0005539564 nova_compute[226295]: 2025-11-29 08:31:07.651 226310 DEBUG nova.storage.rbd_utils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] resizing rbd image b4b22d90-9381-462a-bb31-7c87d8627c3b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:31:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e372 e372: 3 total, 3 up, 3 in
Nov 29 03:31:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:08.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:08 np0005539564 nova_compute[226295]: 2025-11-29 08:31:08.108 226310 DEBUG nova.objects.instance [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'migration_context' on Instance uuid b4b22d90-9381-462a-bb31-7c87d8627c3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:08 np0005539564 nova_compute[226295]: 2025-11-29 08:31:08.123 226310 DEBUG nova.virt.libvirt.driver [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:31:08 np0005539564 nova_compute[226295]: 2025-11-29 08:31:08.124 226310 DEBUG nova.virt.libvirt.driver [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Ensure instance console log exists: /var/lib/nova/instances/b4b22d90-9381-462a-bb31-7c87d8627c3b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:31:08 np0005539564 nova_compute[226295]: 2025-11-29 08:31:08.124 226310 DEBUG oslo_concurrency.lockutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:08 np0005539564 nova_compute[226295]: 2025-11-29 08:31:08.125 226310 DEBUG oslo_concurrency.lockutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:08 np0005539564 nova_compute[226295]: 2025-11-29 08:31:08.125 226310 DEBUG oslo_concurrency.lockutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:08 np0005539564 nova_compute[226295]: 2025-11-29 08:31:08.208 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Updating instance_info_cache with network_info: [{"id": "eac06205-cdc0-424d-b7e2-7740e0db232d", "address": "fa:16:3e:fb:c3:a9", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeac06205-cd", "ovs_interfaceid": "eac06205-cdc0-424d-b7e2-7740e0db232d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:08 np0005539564 nova_compute[226295]: 2025-11-29 08:31:08.223 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-ef2296eb-4538-4e04-8c0b-42370d9e5b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:31:08 np0005539564 nova_compute[226295]: 2025-11-29 08:31:08.224 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:31:08 np0005539564 nova_compute[226295]: 2025-11-29 08:31:08.225 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:08 np0005539564 nova_compute[226295]: 2025-11-29 08:31:08.232 226310 DEBUG nova.network.neutron [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Successfully created port: 61be4f14-5854-4137-968b-8b44c045bc1b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:31:08 np0005539564 nova_compute[226295]: 2025-11-29 08:31:08.693 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:08 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #115. Immutable memtables: 0.
Nov 29 03:31:08 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:31:08.818984) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:31:08 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 115
Nov 29 03:31:08 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405068819048, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 1749, "num_deletes": 258, "total_data_size": 3733318, "memory_usage": 3784704, "flush_reason": "Manual Compaction"}
Nov 29 03:31:08 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #116: started
Nov 29 03:31:08 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405068837337, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 116, "file_size": 2449043, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58872, "largest_seqno": 60616, "table_properties": {"data_size": 2441815, "index_size": 4171, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16166, "raw_average_key_size": 20, "raw_value_size": 2426842, "raw_average_value_size": 3064, "num_data_blocks": 183, "num_entries": 792, "num_filter_entries": 792, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404937, "oldest_key_time": 1764404937, "file_creation_time": 1764405068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:31:08 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 18487 microseconds, and 8061 cpu microseconds.
Nov 29 03:31:08 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:31:08 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:31:08.837478) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #116: 2449043 bytes OK
Nov 29 03:31:08 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:31:08.837601) [db/memtable_list.cc:519] [default] Level-0 commit table #116 started
Nov 29 03:31:08 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:31:08.902684) [db/memtable_list.cc:722] [default] Level-0 commit table #116: memtable #1 done
Nov 29 03:31:08 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:31:08.902716) EVENT_LOG_v1 {"time_micros": 1764405068902706, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:31:08 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:31:08.902738) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:31:08 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 3725269, prev total WAL file size 3725269, number of live WAL files 2.
Nov 29 03:31:08 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000112.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:31:08 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:31:08.904362) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303135' seq:72057594037927935, type:22 .. '6C6F676D0032323636' seq:0, type:0; will stop at (end)
Nov 29 03:31:08 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:31:08 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [116(2391KB)], [114(10MB)]
Nov 29 03:31:08 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405068904392, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [116], "files_L6": [114], "score": -1, "input_data_size": 13770371, "oldest_snapshot_seqno": -1}
Nov 29 03:31:09 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #117: 9140 keys, 13628706 bytes, temperature: kUnknown
Nov 29 03:31:09 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405069138524, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 117, "file_size": 13628706, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13566974, "index_size": 37748, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22917, "raw_key_size": 237963, "raw_average_key_size": 26, "raw_value_size": 13403846, "raw_average_value_size": 1466, "num_data_blocks": 1473, "num_entries": 9140, "num_filter_entries": 9140, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764405068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 117, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:31:09 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:31:09 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:31:09.138776) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 13628706 bytes
Nov 29 03:31:09 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:31:09.153563) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 58.8 rd, 58.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 10.8 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(11.2) write-amplify(5.6) OK, records in: 9672, records dropped: 532 output_compression: NoCompression
Nov 29 03:31:09 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:31:09.153615) EVENT_LOG_v1 {"time_micros": 1764405069153595, "job": 72, "event": "compaction_finished", "compaction_time_micros": 234216, "compaction_time_cpu_micros": 27303, "output_level": 6, "num_output_files": 1, "total_output_size": 13628706, "num_input_records": 9672, "num_output_records": 9140, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:31:09 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:31:09 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405069154764, "job": 72, "event": "table_file_deletion", "file_number": 116}
Nov 29 03:31:09 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000114.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:31:09 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405069159648, "job": 72, "event": "table_file_deletion", "file_number": 114}
Nov 29 03:31:09 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:31:08.904271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:09 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:31:09.159809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:09 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:31:09.159818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:09 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:31:09.159822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:09 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:31:09.159824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:09 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:31:09.159827) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:09 np0005539564 nova_compute[226295]: 2025-11-29 08:31:09.173 226310 DEBUG nova.network.neutron [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Successfully updated port: 61be4f14-5854-4137-968b-8b44c045bc1b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:31:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e373 e373: 3 total, 3 up, 3 in
Nov 29 03:31:09 np0005539564 nova_compute[226295]: 2025-11-29 08:31:09.192 226310 DEBUG oslo_concurrency.lockutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "refresh_cache-b4b22d90-9381-462a-bb31-7c87d8627c3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:31:09 np0005539564 nova_compute[226295]: 2025-11-29 08:31:09.193 226310 DEBUG oslo_concurrency.lockutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquired lock "refresh_cache-b4b22d90-9381-462a-bb31-7c87d8627c3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:31:09 np0005539564 nova_compute[226295]: 2025-11-29 08:31:09.193 226310 DEBUG nova.network.neutron [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:31:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:09 np0005539564 nova_compute[226295]: 2025-11-29 08:31:09.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:09 np0005539564 nova_compute[226295]: 2025-11-29 08:31:09.390 226310 DEBUG nova.network.neutron [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:31:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:09.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:10.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.577 226310 DEBUG oslo_concurrency.lockutils [None req-dd8136f9-24b9-459f-b908-8de03fdad22f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.578 226310 DEBUG oslo_concurrency.lockutils [None req-dd8136f9-24b9-459f-b908-8de03fdad22f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.578 226310 DEBUG oslo_concurrency.lockutils [None req-dd8136f9-24b9-459f-b908-8de03fdad22f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.579 226310 DEBUG oslo_concurrency.lockutils [None req-dd8136f9-24b9-459f-b908-8de03fdad22f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.579 226310 DEBUG oslo_concurrency.lockutils [None req-dd8136f9-24b9-459f-b908-8de03fdad22f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.581 226310 INFO nova.compute.manager [None req-dd8136f9-24b9-459f-b908-8de03fdad22f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Terminating instance#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.582 226310 DEBUG nova.compute.manager [None req-dd8136f9-24b9-459f-b908-8de03fdad22f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.645 226310 DEBUG nova.network.neutron [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Updating instance_info_cache with network_info: [{"id": "61be4f14-5854-4137-968b-8b44c045bc1b", "address": "fa:16:3e:89:42:86", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61be4f14-58", "ovs_interfaceid": "61be4f14-5854-4137-968b-8b44c045bc1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.668 226310 DEBUG oslo_concurrency.lockutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Releasing lock "refresh_cache-b4b22d90-9381-462a-bb31-7c87d8627c3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.669 226310 DEBUG nova.compute.manager [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Instance network_info: |[{"id": "61be4f14-5854-4137-968b-8b44c045bc1b", "address": "fa:16:3e:89:42:86", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61be4f14-58", "ovs_interfaceid": "61be4f14-5854-4137-968b-8b44c045bc1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.675 226310 DEBUG nova.virt.libvirt.driver [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Start _get_guest_xml network_info=[{"id": "61be4f14-5854-4137-968b-8b44c045bc1b", "address": "fa:16:3e:89:42:86", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61be4f14-58", "ovs_interfaceid": "61be4f14-5854-4137-968b-8b44c045bc1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.682 226310 WARNING nova.virt.libvirt.driver [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.688 226310 DEBUG nova.virt.libvirt.host [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.689 226310 DEBUG nova.virt.libvirt.host [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.694 226310 DEBUG nova.virt.libvirt.host [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.695 226310 DEBUG nova.virt.libvirt.host [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.696 226310 DEBUG nova.virt.libvirt.driver [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.697 226310 DEBUG nova.virt.hardware [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.698 226310 DEBUG nova.virt.hardware [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.698 226310 DEBUG nova.virt.hardware [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.699 226310 DEBUG nova.virt.hardware [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.699 226310 DEBUG nova.virt.hardware [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.699 226310 DEBUG nova.virt.hardware [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.700 226310 DEBUG nova.virt.hardware [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.700 226310 DEBUG nova.virt.hardware [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.701 226310 DEBUG nova.virt.hardware [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.701 226310 DEBUG nova.virt.hardware [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.702 226310 DEBUG nova.virt.hardware [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:31:10 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.706 226310 DEBUG oslo_concurrency.processutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:10 np0005539564 kernel: tapeac06205-cd (unregistering): left promiscuous mode
Nov 29 03:31:10 np0005539564 NetworkManager[48997]: <info>  [1764405070.9418] device (tapeac06205-cd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:31:10 np0005539564 ovn_controller[130591]: 2025-11-29T08:31:10Z|00669|binding|INFO|Releasing lport eac06205-cdc0-424d-b7e2-7740e0db232d from this chassis (sb_readonly=0)
Nov 29 03:31:10 np0005539564 ovn_controller[130591]: 2025-11-29T08:31:10Z|00670|binding|INFO|Setting lport eac06205-cdc0-424d-b7e2-7740e0db232d down in Southbound
Nov 29 03:31:10 np0005539564 ovn_controller[130591]: 2025-11-29T08:31:10Z|00671|binding|INFO|Removing iface tapeac06205-cd ovn-installed in OVS
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:10.997 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:11.009 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:c3:a9 10.100.0.11'], port_security=['fa:16:3e:fb:c3:a9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'ef2296eb-4538-4e04-8c0b-42370d9e5b12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-371b699e-06e1-407e-ac77-9768d9a0e76e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '527c6a274d1e478eadfe67139e121185', 'neutron:revision_number': '6', 'neutron:security_group_ids': '4e734722-bbf6-4c47-9bc6-bf8d5f52e07d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c0188f4-aa09-4b91-9f84-524ffee1218e, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=eac06205-cdc0-424d-b7e2-7740e0db232d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:31:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:11.011 139780 INFO neutron.agent.ovn.metadata.agent [-] Port eac06205-cdc0-424d-b7e2-7740e0db232d in datapath 371b699e-06e1-407e-ac77-9768d9a0e76e unbound from our chassis#033[00m
Nov 29 03:31:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:11.013 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 371b699e-06e1-407e-ac77-9768d9a0e76e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:31:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:11.014 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[626476d2-f75f-46c2-97f7-3aa19f6a4d9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:11.015 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e namespace which is not needed anymore#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.027 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:11 np0005539564 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d000000a8.scope: Deactivated successfully.
Nov 29 03:31:11 np0005539564 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d000000a8.scope: Consumed 20.942s CPU time.
Nov 29 03:31:11 np0005539564 systemd-machined[190128]: Machine qemu-77-instance-000000a8 terminated.
Nov 29 03:31:11 np0005539564 neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e[287033]: [NOTICE]   (287037) : haproxy version is 2.8.14-c23fe91
Nov 29 03:31:11 np0005539564 neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e[287033]: [NOTICE]   (287037) : path to executable is /usr/sbin/haproxy
Nov 29 03:31:11 np0005539564 neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e[287033]: [WARNING]  (287037) : Exiting Master process...
Nov 29 03:31:11 np0005539564 neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e[287033]: [WARNING]  (287037) : Exiting Master process...
Nov 29 03:31:11 np0005539564 neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e[287033]: [ALERT]    (287037) : Current worker (287039) exited with code 143 (Terminated)
Nov 29 03:31:11 np0005539564 neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e[287033]: [WARNING]  (287037) : All workers exited. Exiting... (0)
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.166 226310 DEBUG nova.compute.manager [req-475ae3c7-fbd9-4760-b452-784df8abcad7 req-9c1caa82-cea0-456c-bce9-b71db1079909 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Received event network-changed-61be4f14-5854-4137-968b-8b44c045bc1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.167 226310 DEBUG nova.compute.manager [req-475ae3c7-fbd9-4760-b452-784df8abcad7 req-9c1caa82-cea0-456c-bce9-b71db1079909 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Refreshing instance network info cache due to event network-changed-61be4f14-5854-4137-968b-8b44c045bc1b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.167 226310 DEBUG oslo_concurrency.lockutils [req-475ae3c7-fbd9-4760-b452-784df8abcad7 req-9c1caa82-cea0-456c-bce9-b71db1079909 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-b4b22d90-9381-462a-bb31-7c87d8627c3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.167 226310 DEBUG oslo_concurrency.lockutils [req-475ae3c7-fbd9-4760-b452-784df8abcad7 req-9c1caa82-cea0-456c-bce9-b71db1079909 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-b4b22d90-9381-462a-bb31-7c87d8627c3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:31:11 np0005539564 systemd[1]: libpod-b59e56ae3dac7fc4561a038fb03570ef20f194c17928ac6980a213a7a08ba63e.scope: Deactivated successfully.
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.168 226310 DEBUG nova.network.neutron [req-475ae3c7-fbd9-4760-b452-784df8abcad7 req-9c1caa82-cea0-456c-bce9-b71db1079909 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Refreshing network info cache for port 61be4f14-5854-4137-968b-8b44c045bc1b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:31:11 np0005539564 podman[289143]: 2025-11-29 08:31:11.176867884 +0000 UTC m=+0.055319318 container died b59e56ae3dac7fc4561a038fb03570ef20f194c17928ac6980a213a7a08ba63e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:31:11 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b59e56ae3dac7fc4561a038fb03570ef20f194c17928ac6980a213a7a08ba63e-userdata-shm.mount: Deactivated successfully.
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.215 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:11 np0005539564 systemd[1]: var-lib-containers-storage-overlay-46ac4d39acbf99279a9d2f5345552d857bb32180dd58a6dabb263cd95a20620e-merged.mount: Deactivated successfully.
Nov 29 03:31:11 np0005539564 podman[289143]: 2025-11-29 08:31:11.227127284 +0000 UTC m=+0.105578658 container cleanup b59e56ae3dac7fc4561a038fb03570ef20f194c17928ac6980a213a7a08ba63e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.226 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:31:11 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1382191317' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.230 226310 INFO nova.virt.libvirt.driver [-] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Instance destroyed successfully.#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.231 226310 DEBUG nova.objects.instance [None req-dd8136f9-24b9-459f-b908-8de03fdad22f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lazy-loading 'resources' on Instance uuid ef2296eb-4538-4e04-8c0b-42370d9e5b12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:11 np0005539564 systemd[1]: libpod-conmon-b59e56ae3dac7fc4561a038fb03570ef20f194c17928ac6980a213a7a08ba63e.scope: Deactivated successfully.
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.254 226310 DEBUG oslo_concurrency.processutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.286 226310 DEBUG nova.storage.rbd_utils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] rbd image b4b22d90-9381-462a-bb31-7c87d8627c3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.290 226310 DEBUG oslo_concurrency.processutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:11 np0005539564 podman[289183]: 2025-11-29 08:31:11.291382891 +0000 UTC m=+0.043273141 container remove b59e56ae3dac7fc4561a038fb03570ef20f194c17928ac6980a213a7a08ba63e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:31:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:11.296 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[46a8907f-7d70-49bd-a839-f6a0b213bedb]: (4, ('Sat Nov 29 08:31:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e (b59e56ae3dac7fc4561a038fb03570ef20f194c17928ac6980a213a7a08ba63e)\nb59e56ae3dac7fc4561a038fb03570ef20f194c17928ac6980a213a7a08ba63e\nSat Nov 29 08:31:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e (b59e56ae3dac7fc4561a038fb03570ef20f194c17928ac6980a213a7a08ba63e)\nb59e56ae3dac7fc4561a038fb03570ef20f194c17928ac6980a213a7a08ba63e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:11.298 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[fd7f7618-a407-46b9-b8ea-546c20488f75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:11.298 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap371b699e-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:11 np0005539564 kernel: tap371b699e-00: left promiscuous mode
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.321 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:11.322 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[feb71ac0-2418-41ca-9921-3ca48ac6d832]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.328 226310 DEBUG nova.virt.libvirt.vif [None req-dd8136f9-24b9-459f-b908-8de03fdad22f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:28:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1293570209',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1293570209',id=168,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDLuAg2lLvJL1IbHQI4zWjduPL00fGBTgnUuLmVxh8Papw1HN8YCJ1MjiVOY2IjiYFlPS7NCeNdc1wi8bfIbI4zqr01CElkg8VYpaZv/gY5PmkQnremSmt7jl09ZoO4cYg==',key_name='tempest-TestInstancesWithCinderVolumes-1453989920',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:29:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='527c6a274d1e478eadfe67139e121185',ramdisk_id='',reservation_id='r-l0g9m97g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestInstancesWithCinderVolumes-663978016',owner_user_name='tempest-TestInstancesWithCinderVolumes-663978016-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:29:11Z,user_data=None,user_id='d039e57f31de4717a235fc96ebd56559',uuid=ef2296eb-4538-4e04-8c0b-42370d9e5b12,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eac06205-cdc0-424d-b7e2-7740e0db232d", "address": "fa:16:3e:fb:c3:a9", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeac06205-cd", "ovs_interfaceid": "eac06205-cdc0-424d-b7e2-7740e0db232d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.329 226310 DEBUG nova.network.os_vif_util [None req-dd8136f9-24b9-459f-b908-8de03fdad22f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Converting VIF {"id": "eac06205-cdc0-424d-b7e2-7740e0db232d", "address": "fa:16:3e:fb:c3:a9", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeac06205-cd", "ovs_interfaceid": "eac06205-cdc0-424d-b7e2-7740e0db232d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.330 226310 DEBUG nova.network.os_vif_util [None req-dd8136f9-24b9-459f-b908-8de03fdad22f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fb:c3:a9,bridge_name='br-int',has_traffic_filtering=True,id=eac06205-cdc0-424d-b7e2-7740e0db232d,network=Network(371b699e-06e1-407e-ac77-9768d9a0e76e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeac06205-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.331 226310 DEBUG os_vif [None req-dd8136f9-24b9-459f-b908-8de03fdad22f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:c3:a9,bridge_name='br-int',has_traffic_filtering=True,id=eac06205-cdc0-424d-b7e2-7740e0db232d,network=Network(371b699e-06e1-407e-ac77-9768d9a0e76e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeac06205-cd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.334 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.336 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeac06205-cd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:11.338 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b56995d0-b5f8-40ae-ad14-b670666276da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:11.339 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4887eb-e0a0-4874-af03-c34d536b7ba5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.341 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.342 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.345 226310 INFO os_vif [None req-dd8136f9-24b9-459f-b908-8de03fdad22f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:c3:a9,bridge_name='br-int',has_traffic_filtering=True,id=eac06205-cdc0-424d-b7e2-7740e0db232d,network=Network(371b699e-06e1-407e-ac77-9768d9a0e76e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeac06205-cd')#033[00m
Nov 29 03:31:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:11.357 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e3127d15-e8ad-432b-bcc6-1636470374e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 792456, 'reachable_time': 23102, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289219, 'error': None, 'target': 'ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:11 np0005539564 systemd[1]: run-netns-ovnmeta\x2d371b699e\x2d06e1\x2d407e\x2dac77\x2d9768d9a0e76e.mount: Deactivated successfully.
Nov 29 03:31:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:11.361 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:31:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:11.362 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[f869140e-c177-43ee-b7fe-0c64d725ce74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:11.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.478 226310 DEBUG nova.compute.manager [req-61548257-67e6-4b3e-a543-4db9d046de69 req-0555ef68-b3dc-4883-b705-59e9a887bf75 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Received event network-vif-unplugged-eac06205-cdc0-424d-b7e2-7740e0db232d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.479 226310 DEBUG oslo_concurrency.lockutils [req-61548257-67e6-4b3e-a543-4db9d046de69 req-0555ef68-b3dc-4883-b705-59e9a887bf75 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.480 226310 DEBUG oslo_concurrency.lockutils [req-61548257-67e6-4b3e-a543-4db9d046de69 req-0555ef68-b3dc-4883-b705-59e9a887bf75 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.480 226310 DEBUG oslo_concurrency.lockutils [req-61548257-67e6-4b3e-a543-4db9d046de69 req-0555ef68-b3dc-4883-b705-59e9a887bf75 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.481 226310 DEBUG nova.compute.manager [req-61548257-67e6-4b3e-a543-4db9d046de69 req-0555ef68-b3dc-4883-b705-59e9a887bf75 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] No waiting events found dispatching network-vif-unplugged-eac06205-cdc0-424d-b7e2-7740e0db232d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.481 226310 DEBUG nova.compute.manager [req-61548257-67e6-4b3e-a543-4db9d046de69 req-0555ef68-b3dc-4883-b705-59e9a887bf75 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Received event network-vif-unplugged-eac06205-cdc0-424d-b7e2-7740e0db232d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:31:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:31:11 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/105282154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.739 226310 DEBUG oslo_concurrency.processutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.740 226310 DEBUG nova.virt.libvirt.vif [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:31:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-319067414',display_name='tempest-AttachVolumeTestJSON-server-319067414',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-319067414',id=172,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLyqkXOK08CTph69txbTtq6hDXOFs7dzkRY0n3OL57A4cq65MQYsgTEl6dcAndCU2WpIdYkwjr+sWSAMTpecJCNMmd9pbWSWA1W/5SBqzNTXW4i70eNP1fMxq0ypOfA/2A==',key_name='tempest-keypair-318616355',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eb0810bf6f5b4eb59638b7a2cf59ed5b',ramdisk_id='',reservation_id='r-4luqh18o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeTestJSON-942041170',owner_user_name='tempest-AttachVolumeTestJSON-942041170-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:31:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a362a419f6a492aae2f102ad2bbd5e9',uuid=b4b22d90-9381-462a-bb31-7c87d8627c3b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "61be4f14-5854-4137-968b-8b44c045bc1b", "address": "fa:16:3e:89:42:86", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61be4f14-58", "ovs_interfaceid": "61be4f14-5854-4137-968b-8b44c045bc1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.741 226310 DEBUG nova.network.os_vif_util [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Converting VIF {"id": "61be4f14-5854-4137-968b-8b44c045bc1b", "address": "fa:16:3e:89:42:86", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61be4f14-58", "ovs_interfaceid": "61be4f14-5854-4137-968b-8b44c045bc1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.741 226310 DEBUG nova.network.os_vif_util [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:42:86,bridge_name='br-int',has_traffic_filtering=True,id=61be4f14-5854-4137-968b-8b44c045bc1b,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61be4f14-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.742 226310 DEBUG nova.objects.instance [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'pci_devices' on Instance uuid b4b22d90-9381-462a-bb31-7c87d8627c3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.758 226310 DEBUG nova.virt.libvirt.driver [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:31:11 np0005539564 nova_compute[226295]:  <uuid>b4b22d90-9381-462a-bb31-7c87d8627c3b</uuid>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:  <name>instance-000000ac</name>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <nova:name>tempest-AttachVolumeTestJSON-server-319067414</nova:name>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:31:10</nova:creationTime>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:31:11 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:        <nova:user uuid="7a362a419f6a492aae2f102ad2bbd5e9">tempest-AttachVolumeTestJSON-942041170-project-member</nova:user>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:        <nova:project uuid="eb0810bf6f5b4eb59638b7a2cf59ed5b">tempest-AttachVolumeTestJSON-942041170</nova:project>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:        <nova:port uuid="61be4f14-5854-4137-968b-8b44c045bc1b">
Nov 29 03:31:11 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <entry name="serial">b4b22d90-9381-462a-bb31-7c87d8627c3b</entry>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <entry name="uuid">b4b22d90-9381-462a-bb31-7c87d8627c3b</entry>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/b4b22d90-9381-462a-bb31-7c87d8627c3b_disk">
Nov 29 03:31:11 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:31:11 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/b4b22d90-9381-462a-bb31-7c87d8627c3b_disk.config">
Nov 29 03:31:11 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:31:11 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:89:42:86"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <target dev="tap61be4f14-58"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/b4b22d90-9381-462a-bb31-7c87d8627c3b/console.log" append="off"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:31:11 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:31:11 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:31:11 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:31:11 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.760 226310 DEBUG nova.compute.manager [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Preparing to wait for external event network-vif-plugged-61be4f14-5854-4137-968b-8b44c045bc1b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.760 226310 DEBUG oslo_concurrency.lockutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "b4b22d90-9381-462a-bb31-7c87d8627c3b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.761 226310 DEBUG oslo_concurrency.lockutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.761 226310 DEBUG oslo_concurrency.lockutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.762 226310 DEBUG nova.virt.libvirt.vif [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:31:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-319067414',display_name='tempest-AttachVolumeTestJSON-server-319067414',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-319067414',id=172,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLyqkXOK08CTph69txbTtq6hDXOFs7dzkRY0n3OL57A4cq65MQYsgTEl6dcAndCU2WpIdYkwjr+sWSAMTpecJCNMmd9pbWSWA1W/5SBqzNTXW4i70eNP1fMxq0ypOfA/2A==',key_name='tempest-keypair-318616355',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eb0810bf6f5b4eb59638b7a2cf59ed5b',ramdisk_id='',reservation_id='r-4luqh18o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeTestJSON-942041170',owner_user_name='tempest-AttachVolumeTestJSON-942041170-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:31:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a362a419f6a492aae2f102ad2bbd5e9',uuid=b4b22d90-9381-462a-bb31-7c87d8627c3b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "61be4f14-5854-4137-968b-8b44c045bc1b", "address": "fa:16:3e:89:42:86", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61be4f14-58", "ovs_interfaceid": "61be4f14-5854-4137-968b-8b44c045bc1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.762 226310 DEBUG nova.network.os_vif_util [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Converting VIF {"id": "61be4f14-5854-4137-968b-8b44c045bc1b", "address": "fa:16:3e:89:42:86", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61be4f14-58", "ovs_interfaceid": "61be4f14-5854-4137-968b-8b44c045bc1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.763 226310 DEBUG nova.network.os_vif_util [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:42:86,bridge_name='br-int',has_traffic_filtering=True,id=61be4f14-5854-4137-968b-8b44c045bc1b,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61be4f14-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.763 226310 DEBUG os_vif [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:42:86,bridge_name='br-int',has_traffic_filtering=True,id=61be4f14-5854-4137-968b-8b44c045bc1b,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61be4f14-58') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.764 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.764 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.765 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.767 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.768 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61be4f14-58, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.768 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap61be4f14-58, col_values=(('external_ids', {'iface-id': '61be4f14-5854-4137-968b-8b44c045bc1b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:42:86', 'vm-uuid': 'b4b22d90-9381-462a-bb31-7c87d8627c3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:11 np0005539564 NetworkManager[48997]: <info>  [1764405071.7713] manager: (tap61be4f14-58): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/305)
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.770 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.774 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.776 226310 INFO os_vif [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:42:86,bridge_name='br-int',has_traffic_filtering=True,id=61be4f14-5854-4137-968b-8b44c045bc1b,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61be4f14-58')#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.919 226310 DEBUG nova.virt.libvirt.driver [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.920 226310 DEBUG nova.virt.libvirt.driver [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.920 226310 DEBUG nova.virt.libvirt.driver [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] No VIF found with MAC fa:16:3e:89:42:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.921 226310 INFO nova.virt.libvirt.driver [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Using config drive#033[00m
Nov 29 03:31:11 np0005539564 nova_compute[226295]: 2025-11-29 08:31:11.960 226310 DEBUG nova.storage.rbd_utils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] rbd image b4b22d90-9381-462a-bb31-7c87d8627c3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:12.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:12 np0005539564 nova_compute[226295]: 2025-11-29 08:31:12.369 226310 INFO nova.virt.libvirt.driver [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Creating config drive at /var/lib/nova/instances/b4b22d90-9381-462a-bb31-7c87d8627c3b/disk.config#033[00m
Nov 29 03:31:12 np0005539564 nova_compute[226295]: 2025-11-29 08:31:12.381 226310 DEBUG oslo_concurrency.processutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4b22d90-9381-462a-bb31-7c87d8627c3b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbeanv7dx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:12 np0005539564 nova_compute[226295]: 2025-11-29 08:31:12.550 226310 DEBUG oslo_concurrency.processutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4b22d90-9381-462a-bb31-7c87d8627c3b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbeanv7dx" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:12 np0005539564 nova_compute[226295]: 2025-11-29 08:31:12.600 226310 DEBUG nova.storage.rbd_utils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] rbd image b4b22d90-9381-462a-bb31-7c87d8627c3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:12 np0005539564 nova_compute[226295]: 2025-11-29 08:31:12.607 226310 DEBUG oslo_concurrency.processutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4b22d90-9381-462a-bb31-7c87d8627c3b/disk.config b4b22d90-9381-462a-bb31-7c87d8627c3b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:12 np0005539564 nova_compute[226295]: 2025-11-29 08:31:12.648 226310 DEBUG nova.network.neutron [req-475ae3c7-fbd9-4760-b452-784df8abcad7 req-9c1caa82-cea0-456c-bce9-b71db1079909 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Updated VIF entry in instance network info cache for port 61be4f14-5854-4137-968b-8b44c045bc1b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:31:12 np0005539564 nova_compute[226295]: 2025-11-29 08:31:12.649 226310 DEBUG nova.network.neutron [req-475ae3c7-fbd9-4760-b452-784df8abcad7 req-9c1caa82-cea0-456c-bce9-b71db1079909 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Updating instance_info_cache with network_info: [{"id": "61be4f14-5854-4137-968b-8b44c045bc1b", "address": "fa:16:3e:89:42:86", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61be4f14-58", "ovs_interfaceid": "61be4f14-5854-4137-968b-8b44c045bc1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:12 np0005539564 nova_compute[226295]: 2025-11-29 08:31:12.704 226310 DEBUG oslo_concurrency.lockutils [req-475ae3c7-fbd9-4760-b452-784df8abcad7 req-9c1caa82-cea0-456c-bce9-b71db1079909 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-b4b22d90-9381-462a-bb31-7c87d8627c3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:31:13 np0005539564 nova_compute[226295]: 2025-11-29 08:31:13.265 226310 DEBUG oslo_concurrency.processutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4b22d90-9381-462a-bb31-7c87d8627c3b/disk.config b4b22d90-9381-462a-bb31-7c87d8627c3b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:13 np0005539564 nova_compute[226295]: 2025-11-29 08:31:13.266 226310 INFO nova.virt.libvirt.driver [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Deleting local config drive /var/lib/nova/instances/b4b22d90-9381-462a-bb31-7c87d8627c3b/disk.config because it was imported into RBD.#033[00m
Nov 29 03:31:13 np0005539564 kernel: tap61be4f14-58: entered promiscuous mode
Nov 29 03:31:13 np0005539564 NetworkManager[48997]: <info>  [1764405073.3533] manager: (tap61be4f14-58): new Tun device (/org/freedesktop/NetworkManager/Devices/306)
Nov 29 03:31:13 np0005539564 systemd-udevd[289122]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:31:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:13 np0005539564 ovn_controller[130591]: 2025-11-29T08:31:13Z|00672|binding|INFO|Claiming lport 61be4f14-5854-4137-968b-8b44c045bc1b for this chassis.
Nov 29 03:31:13 np0005539564 ovn_controller[130591]: 2025-11-29T08:31:13Z|00673|binding|INFO|61be4f14-5854-4137-968b-8b44c045bc1b: Claiming fa:16:3e:89:42:86 10.100.0.10
Nov 29 03:31:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:13.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:13 np0005539564 nova_compute[226295]: 2025-11-29 08:31:13.430 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:13 np0005539564 NetworkManager[48997]: <info>  [1764405073.4358] device (tap61be4f14-58): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:31:13 np0005539564 NetworkManager[48997]: <info>  [1764405073.4364] device (tap61be4f14-58): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:31:13 np0005539564 ovn_controller[130591]: 2025-11-29T08:31:13Z|00674|binding|INFO|Setting lport 61be4f14-5854-4137-968b-8b44c045bc1b ovn-installed in OVS
Nov 29 03:31:13 np0005539564 nova_compute[226295]: 2025-11-29 08:31:13.446 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:13 np0005539564 systemd-machined[190128]: New machine qemu-81-instance-000000ac.
Nov 29 03:31:13 np0005539564 ovn_controller[130591]: 2025-11-29T08:31:13Z|00675|binding|INFO|Setting lport 61be4f14-5854-4137-968b-8b44c045bc1b up in Southbound
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.459 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:42:86 10.100.0.10'], port_security=['fa:16:3e:89:42:86 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b4b22d90-9381-462a-bb31-7c87d8627c3b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eb0810bf6f5b4eb59638b7a2cf59ed5b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dfe85a60-69af-44d9-855c-fe4b98539b8a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1429573d-31ea-4b00-8580-1031fbde1ea5, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=61be4f14-5854-4137-968b-8b44c045bc1b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.461 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 61be4f14-5854-4137-968b-8b44c045bc1b in datapath a259ebcb-7cce-4363-8e50-c25ed4a3daec bound to our chassis#033[00m
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.462 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a259ebcb-7cce-4363-8e50-c25ed4a3daec#033[00m
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.472 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[461d9264-bae2-4b47-b608-cf3cb929037d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.472 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa259ebcb-71 in ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:31:13 np0005539564 systemd[1]: Started Virtual Machine qemu-81-instance-000000ac.
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.474 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa259ebcb-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.474 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8263b706-daf7-491f-9516-a988d2a62926]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.475 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d00ecb0b-4d4f-4ed1-894b-76489b6cba41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.488 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[743b5a39-9560-4837-9069-e65d46547178]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.513 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e0a5befb-0bfc-4cc7-b48b-04c940a48258]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.540 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[ead65651-7304-4fc1-bbae-fb0e1373d81f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:13 np0005539564 NetworkManager[48997]: <info>  [1764405073.5476] manager: (tapa259ebcb-70): new Veth device (/org/freedesktop/NetworkManager/Devices/307)
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.547 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d4b70e8c-a392-41bc-b62b-aeb4003539c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:13 np0005539564 nova_compute[226295]: 2025-11-29 08:31:13.553 226310 DEBUG nova.compute.manager [req-b43fe90a-7321-48df-ad5b-e4f514ceee74 req-e31346df-04a1-4344-a843-784782b5d6eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Received event network-vif-plugged-eac06205-cdc0-424d-b7e2-7740e0db232d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:13 np0005539564 nova_compute[226295]: 2025-11-29 08:31:13.554 226310 DEBUG oslo_concurrency.lockutils [req-b43fe90a-7321-48df-ad5b-e4f514ceee74 req-e31346df-04a1-4344-a843-784782b5d6eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:13 np0005539564 nova_compute[226295]: 2025-11-29 08:31:13.554 226310 DEBUG oslo_concurrency.lockutils [req-b43fe90a-7321-48df-ad5b-e4f514ceee74 req-e31346df-04a1-4344-a843-784782b5d6eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:13 np0005539564 nova_compute[226295]: 2025-11-29 08:31:13.555 226310 DEBUG oslo_concurrency.lockutils [req-b43fe90a-7321-48df-ad5b-e4f514ceee74 req-e31346df-04a1-4344-a843-784782b5d6eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:13 np0005539564 nova_compute[226295]: 2025-11-29 08:31:13.555 226310 DEBUG nova.compute.manager [req-b43fe90a-7321-48df-ad5b-e4f514ceee74 req-e31346df-04a1-4344-a843-784782b5d6eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] No waiting events found dispatching network-vif-plugged-eac06205-cdc0-424d-b7e2-7740e0db232d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:31:13 np0005539564 nova_compute[226295]: 2025-11-29 08:31:13.555 226310 WARNING nova.compute.manager [req-b43fe90a-7321-48df-ad5b-e4f514ceee74 req-e31346df-04a1-4344-a843-784782b5d6eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Received unexpected event network-vif-plugged-eac06205-cdc0-424d-b7e2-7740e0db232d for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.581 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[b81de03e-5e72-4219-94dd-7be1cab1f179]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.585 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[6364ae3c-6884-4a76-be30-d5c6891ffc1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:13 np0005539564 NetworkManager[48997]: <info>  [1764405073.6140] device (tapa259ebcb-70): carrier: link connected
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.622 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[0aa40943-e18f-49e7-901d-9b01164729a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.644 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ede47eb5-452b-4542-9f70-d9d388bfb603]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa259ebcb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:16:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 205], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804829, 'reachable_time': 42832, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289362, 'error': None, 'target': 'ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.663 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b9db29fd-11fa-48e8-812e-b618a92828af]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef3:1647'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 804829, 'tstamp': 804829}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289363, 'error': None, 'target': 'ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.684 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7e76c7c0-ca80-4425-b675-d71425a97158]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa259ebcb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:16:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 205], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804829, 'reachable_time': 42832, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289364, 'error': None, 'target': 'ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:13 np0005539564 nova_compute[226295]: 2025-11-29 08:31:13.695 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.731 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8435146e-58a8-4853-8ae1-97c33aab4754]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:13 np0005539564 nova_compute[226295]: 2025-11-29 08:31:13.798 226310 DEBUG nova.compute.manager [req-6d591ed3-bc37-4d90-b541-8328addba35c req-0f8089b6-e22c-4f56-bc20-19cc811e9dbe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Received event network-vif-plugged-61be4f14-5854-4137-968b-8b44c045bc1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:13 np0005539564 nova_compute[226295]: 2025-11-29 08:31:13.799 226310 DEBUG oslo_concurrency.lockutils [req-6d591ed3-bc37-4d90-b541-8328addba35c req-0f8089b6-e22c-4f56-bc20-19cc811e9dbe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "b4b22d90-9381-462a-bb31-7c87d8627c3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:13 np0005539564 nova_compute[226295]: 2025-11-29 08:31:13.800 226310 DEBUG oslo_concurrency.lockutils [req-6d591ed3-bc37-4d90-b541-8328addba35c req-0f8089b6-e22c-4f56-bc20-19cc811e9dbe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:13 np0005539564 nova_compute[226295]: 2025-11-29 08:31:13.800 226310 DEBUG oslo_concurrency.lockutils [req-6d591ed3-bc37-4d90-b541-8328addba35c req-0f8089b6-e22c-4f56-bc20-19cc811e9dbe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:13 np0005539564 nova_compute[226295]: 2025-11-29 08:31:13.801 226310 DEBUG nova.compute.manager [req-6d591ed3-bc37-4d90-b541-8328addba35c req-0f8089b6-e22c-4f56-bc20-19cc811e9dbe 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Processing event network-vif-plugged-61be4f14-5854-4137-968b-8b44c045bc1b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.822 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[686a60fe-0e45-4c92-85ad-ed9be5ba6615]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.825 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa259ebcb-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.826 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.827 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa259ebcb-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:13 np0005539564 NetworkManager[48997]: <info>  [1764405073.8315] manager: (tapa259ebcb-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/308)
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.836 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa259ebcb-70, col_values=(('external_ids', {'iface-id': '5c5c4b01-f2eb-4ea5-9341-cfa577051cf7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:13 np0005539564 ovn_controller[130591]: 2025-11-29T08:31:13Z|00676|binding|INFO|Releasing lport 5c5c4b01-f2eb-4ea5-9341-cfa577051cf7 from this chassis (sb_readonly=0)
Nov 29 03:31:13 np0005539564 kernel: tapa259ebcb-70: entered promiscuous mode
Nov 29 03:31:13 np0005539564 nova_compute[226295]: 2025-11-29 08:31:13.830 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:13 np0005539564 nova_compute[226295]: 2025-11-29 08:31:13.834 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:13 np0005539564 nova_compute[226295]: 2025-11-29 08:31:13.838 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:13 np0005539564 nova_compute[226295]: 2025-11-29 08:31:13.866 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.868 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a259ebcb-7cce-4363-8e50-c25ed4a3daec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a259ebcb-7cce-4363-8e50-c25ed4a3daec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.870 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5eefa04d-683c-43c9-ba27-d75b2447e383]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.871 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-a259ebcb-7cce-4363-8e50-c25ed4a3daec
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/a259ebcb-7cce-4363-8e50-c25ed4a3daec.pid.haproxy
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID a259ebcb-7cce-4363-8e50-c25ed4a3daec
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:31:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:13.873 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'env', 'PROCESS_TAG=haproxy-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a259ebcb-7cce-4363-8e50-c25ed4a3daec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:31:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:14.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.166 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405074.1656175, b4b22d90-9381-462a-bb31-7c87d8627c3b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.166 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] VM Started (Lifecycle Event)#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.168 226310 DEBUG nova.compute.manager [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.173 226310 DEBUG nova.virt.libvirt.driver [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.176 226310 INFO nova.virt.libvirt.driver [-] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Instance spawned successfully.#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.176 226310 DEBUG nova.virt.libvirt.driver [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.202 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.204 226310 DEBUG nova.virt.libvirt.driver [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.204 226310 DEBUG nova.virt.libvirt.driver [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.204 226310 DEBUG nova.virt.libvirt.driver [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.205 226310 DEBUG nova.virt.libvirt.driver [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.205 226310 DEBUG nova.virt.libvirt.driver [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.206 226310 DEBUG nova.virt.libvirt.driver [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.210 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:31:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.257 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.258 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405074.165786, b4b22d90-9381-462a-bb31-7c87d8627c3b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.258 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.287 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.290 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405074.172771, b4b22d90-9381-462a-bb31-7c87d8627c3b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.291 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.296 226310 INFO nova.compute.manager [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Took 7.35 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.296 226310 DEBUG nova.compute.manager [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.307 226310 INFO nova.virt.libvirt.driver [None req-dd8136f9-24b9-459f-b908-8de03fdad22f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Deleting instance files /var/lib/nova/instances/ef2296eb-4538-4e04-8c0b-42370d9e5b12_del#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.308 226310 INFO nova.virt.libvirt.driver [None req-dd8136f9-24b9-459f-b908-8de03fdad22f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Deletion of /var/lib/nova/instances/ef2296eb-4538-4e04-8c0b-42370d9e5b12_del complete#033[00m
Nov 29 03:31:14 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:31:14 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.318 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.326 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.418 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.457 226310 INFO nova.compute.manager [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Took 8.59 seconds to build instance.#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.461 226310 INFO nova.compute.manager [None req-dd8136f9-24b9-459f-b908-8de03fdad22f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Took 3.88 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.461 226310 DEBUG oslo.service.loopingcall [None req-dd8136f9-24b9-459f-b908-8de03fdad22f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.461 226310 DEBUG nova.compute.manager [-] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.461 226310 DEBUG nova.network.neutron [-] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:31:14 np0005539564 podman[289490]: 2025-11-29 08:31:14.380136645 +0000 UTC m=+0.028605355 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:31:14 np0005539564 nova_compute[226295]: 2025-11-29 08:31:14.489 226310 DEBUG oslo_concurrency.lockutils [None req-c48bee2a-f562-42bd-bf3e-278b2a81a3cb 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:14 np0005539564 podman[289490]: 2025-11-29 08:31:14.59695928 +0000 UTC m=+0.245427950 container create 507ac7917adac9e427127e1d9d1275a966b78838aa1b5cea7ba9c4bcd6d05d18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:31:14 np0005539564 systemd[1]: Started libpod-conmon-507ac7917adac9e427127e1d9d1275a966b78838aa1b5cea7ba9c4bcd6d05d18.scope.
Nov 29 03:31:14 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:31:14 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bb08b2982fa07525c6aa73426b7832ba29db935e1e69d468c26845802408d19/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:15 np0005539564 podman[289490]: 2025-11-29 08:31:15.128362607 +0000 UTC m=+0.776831357 container init 507ac7917adac9e427127e1d9d1275a966b78838aa1b5cea7ba9c4bcd6d05d18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:31:15 np0005539564 podman[289490]: 2025-11-29 08:31:15.135352607 +0000 UTC m=+0.783821277 container start 507ac7917adac9e427127e1d9d1275a966b78838aa1b5cea7ba9c4bcd6d05d18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 03:31:15 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[289505]: [NOTICE]   (289509) : New worker (289511) forked
Nov 29 03:31:15 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[289505]: [NOTICE]   (289509) : Loading success.
Nov 29 03:31:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:15.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:15 np0005539564 nova_compute[226295]: 2025-11-29 08:31:15.684 226310 DEBUG nova.network.neutron [-] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:15 np0005539564 nova_compute[226295]: 2025-11-29 08:31:15.708 226310 INFO nova.compute.manager [-] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Took 1.25 seconds to deallocate network for instance.#033[00m
Nov 29 03:31:16 np0005539564 nova_compute[226295]: 2025-11-29 08:31:16.069 226310 DEBUG nova.compute.manager [req-8327ae42-17a4-4e4e-a5e4-dee1dddaef73 req-dc5fc79d-a96e-4740-a63b-2ac9620bb435 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Received event network-vif-plugged-61be4f14-5854-4137-968b-8b44c045bc1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:16 np0005539564 nova_compute[226295]: 2025-11-29 08:31:16.070 226310 DEBUG oslo_concurrency.lockutils [req-8327ae42-17a4-4e4e-a5e4-dee1dddaef73 req-dc5fc79d-a96e-4740-a63b-2ac9620bb435 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "b4b22d90-9381-462a-bb31-7c87d8627c3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:16 np0005539564 nova_compute[226295]: 2025-11-29 08:31:16.071 226310 DEBUG oslo_concurrency.lockutils [req-8327ae42-17a4-4e4e-a5e4-dee1dddaef73 req-dc5fc79d-a96e-4740-a63b-2ac9620bb435 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:16 np0005539564 nova_compute[226295]: 2025-11-29 08:31:16.071 226310 DEBUG oslo_concurrency.lockutils [req-8327ae42-17a4-4e4e-a5e4-dee1dddaef73 req-dc5fc79d-a96e-4740-a63b-2ac9620bb435 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:16 np0005539564 nova_compute[226295]: 2025-11-29 08:31:16.072 226310 DEBUG nova.compute.manager [req-8327ae42-17a4-4e4e-a5e4-dee1dddaef73 req-dc5fc79d-a96e-4740-a63b-2ac9620bb435 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] No waiting events found dispatching network-vif-plugged-61be4f14-5854-4137-968b-8b44c045bc1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:31:16 np0005539564 nova_compute[226295]: 2025-11-29 08:31:16.072 226310 WARNING nova.compute.manager [req-8327ae42-17a4-4e4e-a5e4-dee1dddaef73 req-dc5fc79d-a96e-4740-a63b-2ac9620bb435 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Received unexpected event network-vif-plugged-61be4f14-5854-4137-968b-8b44c045bc1b for instance with vm_state active and task_state None.#033[00m
Nov 29 03:31:16 np0005539564 nova_compute[226295]: 2025-11-29 08:31:16.073 226310 DEBUG nova.compute.manager [req-8327ae42-17a4-4e4e-a5e4-dee1dddaef73 req-dc5fc79d-a96e-4740-a63b-2ac9620bb435 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Received event network-vif-deleted-eac06205-cdc0-424d-b7e2-7740e0db232d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:16 np0005539564 nova_compute[226295]: 2025-11-29 08:31:16.091 226310 INFO nova.compute.manager [None req-dd8136f9-24b9-459f-b908-8de03fdad22f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Took 0.38 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:31:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:16.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:16 np0005539564 nova_compute[226295]: 2025-11-29 08:31:16.541 226310 DEBUG oslo_concurrency.lockutils [None req-dd8136f9-24b9-459f-b908-8de03fdad22f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:16 np0005539564 nova_compute[226295]: 2025-11-29 08:31:16.542 226310 DEBUG oslo_concurrency.lockutils [None req-dd8136f9-24b9-459f-b908-8de03fdad22f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:16 np0005539564 nova_compute[226295]: 2025-11-29 08:31:16.612 226310 DEBUG oslo_concurrency.processutils [None req-dd8136f9-24b9-459f-b908-8de03fdad22f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:16 np0005539564 nova_compute[226295]: 2025-11-29 08:31:16.808 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:31:17 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4027604990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:31:17 np0005539564 nova_compute[226295]: 2025-11-29 08:31:17.194 226310 DEBUG oslo_concurrency.processutils [None req-dd8136f9-24b9-459f-b908-8de03fdad22f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:17 np0005539564 nova_compute[226295]: 2025-11-29 08:31:17.202 226310 DEBUG nova.compute.provider_tree [None req-dd8136f9-24b9-459f-b908-8de03fdad22f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:31:17 np0005539564 nova_compute[226295]: 2025-11-29 08:31:17.259 226310 DEBUG nova.scheduler.client.report [None req-dd8136f9-24b9-459f-b908-8de03fdad22f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:31:17 np0005539564 nova_compute[226295]: 2025-11-29 08:31:17.287 226310 DEBUG oslo_concurrency.lockutils [None req-dd8136f9-24b9-459f-b908-8de03fdad22f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:17 np0005539564 nova_compute[226295]: 2025-11-29 08:31:17.319 226310 INFO nova.scheduler.client.report [None req-dd8136f9-24b9-459f-b908-8de03fdad22f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Deleted allocations for instance ef2296eb-4538-4e04-8c0b-42370d9e5b12#033[00m
Nov 29 03:31:17 np0005539564 nova_compute[226295]: 2025-11-29 08:31:17.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:17 np0005539564 nova_compute[226295]: 2025-11-29 08:31:17.379 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:17 np0005539564 nova_compute[226295]: 2025-11-29 08:31:17.380 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:17 np0005539564 nova_compute[226295]: 2025-11-29 08:31:17.380 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:17 np0005539564 nova_compute[226295]: 2025-11-29 08:31:17.381 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:31:17 np0005539564 nova_compute[226295]: 2025-11-29 08:31:17.381 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:17 np0005539564 nova_compute[226295]: 2025-11-29 08:31:17.430 226310 DEBUG oslo_concurrency.lockutils [None req-dd8136f9-24b9-459f-b908-8de03fdad22f d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "ef2296eb-4538-4e04-8c0b-42370d9e5b12" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:31:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:17.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:31:17 np0005539564 nova_compute[226295]: 2025-11-29 08:31:17.585 226310 DEBUG nova.compute.manager [req-652c314c-0a11-4bfe-b96b-531b8b35e8bd req-a9a8d449-600b-4992-a7d6-ee512241cee9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Received event network-changed-61be4f14-5854-4137-968b-8b44c045bc1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:17 np0005539564 nova_compute[226295]: 2025-11-29 08:31:17.585 226310 DEBUG nova.compute.manager [req-652c314c-0a11-4bfe-b96b-531b8b35e8bd req-a9a8d449-600b-4992-a7d6-ee512241cee9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Refreshing instance network info cache due to event network-changed-61be4f14-5854-4137-968b-8b44c045bc1b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:31:17 np0005539564 nova_compute[226295]: 2025-11-29 08:31:17.585 226310 DEBUG oslo_concurrency.lockutils [req-652c314c-0a11-4bfe-b96b-531b8b35e8bd req-a9a8d449-600b-4992-a7d6-ee512241cee9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-b4b22d90-9381-462a-bb31-7c87d8627c3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:31:17 np0005539564 nova_compute[226295]: 2025-11-29 08:31:17.585 226310 DEBUG oslo_concurrency.lockutils [req-652c314c-0a11-4bfe-b96b-531b8b35e8bd req-a9a8d449-600b-4992-a7d6-ee512241cee9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-b4b22d90-9381-462a-bb31-7c87d8627c3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:31:17 np0005539564 nova_compute[226295]: 2025-11-29 08:31:17.586 226310 DEBUG nova.network.neutron [req-652c314c-0a11-4bfe-b96b-531b8b35e8bd req-a9a8d449-600b-4992-a7d6-ee512241cee9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Refreshing network info cache for port 61be4f14-5854-4137-968b-8b44c045bc1b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:31:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:31:17 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3842349862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:31:17 np0005539564 nova_compute[226295]: 2025-11-29 08:31:17.870 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:18.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:18 np0005539564 nova_compute[226295]: 2025-11-29 08:31:18.245 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000ac as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:31:18 np0005539564 nova_compute[226295]: 2025-11-29 08:31:18.246 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000ac as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:31:18 np0005539564 nova_compute[226295]: 2025-11-29 08:31:18.429 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:31:18 np0005539564 nova_compute[226295]: 2025-11-29 08:31:18.430 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4081MB free_disk=20.87588119506836GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:31:18 np0005539564 nova_compute[226295]: 2025-11-29 08:31:18.431 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:18 np0005539564 nova_compute[226295]: 2025-11-29 08:31:18.431 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:18 np0005539564 nova_compute[226295]: 2025-11-29 08:31:18.697 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e374 e374: 3 total, 3 up, 3 in
Nov 29 03:31:19 np0005539564 nova_compute[226295]: 2025-11-29 08:31:19.235 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance b4b22d90-9381-462a-bb31-7c87d8627c3b actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:31:19 np0005539564 nova_compute[226295]: 2025-11-29 08:31:19.236 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:31:19 np0005539564 nova_compute[226295]: 2025-11-29 08:31:19.236 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:31:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:19 np0005539564 nova_compute[226295]: 2025-11-29 08:31:19.327 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:19.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:31:19 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2437782084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:31:19 np0005539564 nova_compute[226295]: 2025-11-29 08:31:19.847 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:19 np0005539564 nova_compute[226295]: 2025-11-29 08:31:19.854 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:31:19 np0005539564 nova_compute[226295]: 2025-11-29 08:31:19.875 226310 DEBUG nova.network.neutron [req-652c314c-0a11-4bfe-b96b-531b8b35e8bd req-a9a8d449-600b-4992-a7d6-ee512241cee9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Updated VIF entry in instance network info cache for port 61be4f14-5854-4137-968b-8b44c045bc1b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:31:19 np0005539564 nova_compute[226295]: 2025-11-29 08:31:19.876 226310 DEBUG nova.network.neutron [req-652c314c-0a11-4bfe-b96b-531b8b35e8bd req-a9a8d449-600b-4992-a7d6-ee512241cee9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Updating instance_info_cache with network_info: [{"id": "61be4f14-5854-4137-968b-8b44c045bc1b", "address": "fa:16:3e:89:42:86", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61be4f14-58", "ovs_interfaceid": "61be4f14-5854-4137-968b-8b44c045bc1b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:20.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:20 np0005539564 nova_compute[226295]: 2025-11-29 08:31:20.158 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:31:20 np0005539564 nova_compute[226295]: 2025-11-29 08:31:20.498 226310 DEBUG oslo_concurrency.lockutils [req-652c314c-0a11-4bfe-b96b-531b8b35e8bd req-a9a8d449-600b-4992-a7d6-ee512241cee9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-b4b22d90-9381-462a-bb31-7c87d8627c3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:31:21 np0005539564 nova_compute[226295]: 2025-11-29 08:31:21.142 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:31:21 np0005539564 nova_compute[226295]: 2025-11-29 08:31:21.143 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:21.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:21 np0005539564 nova_compute[226295]: 2025-11-29 08:31:21.812 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:22.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:23.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:23 np0005539564 nova_compute[226295]: 2025-11-29 08:31:23.700 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:24.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e375 e375: 3 total, 3 up, 3 in
Nov 29 03:31:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:25.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:26.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:26 np0005539564 nova_compute[226295]: 2025-11-29 08:31:26.228 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405071.2232916, ef2296eb-4538-4e04-8c0b-42370d9e5b12 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:31:26 np0005539564 nova_compute[226295]: 2025-11-29 08:31:26.229 226310 INFO nova.compute.manager [-] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:31:26 np0005539564 nova_compute[226295]: 2025-11-29 08:31:26.545 226310 DEBUG nova.compute.manager [None req-e8cb6143-5662-4245-95c1-1a52e0c355ce - - - - - -] [instance: ef2296eb-4538-4e04-8c0b-42370d9e5b12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:26 np0005539564 nova_compute[226295]: 2025-11-29 08:31:26.815 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:27.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:28.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:28 np0005539564 nova_compute[226295]: 2025-11-29 08:31:28.703 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:29.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:30.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e376 e376: 3 total, 3 up, 3 in
Nov 29 03:31:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:31.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:31 np0005539564 podman[289589]: 2025-11-29 08:31:31.534631404 +0000 UTC m=+0.068836954 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 03:31:31 np0005539564 podman[289588]: 2025-11-29 08:31:31.536685449 +0000 UTC m=+0.078609048 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:31:31 np0005539564 podman[289587]: 2025-11-29 08:31:31.552415615 +0000 UTC m=+0.099896564 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:31:31 np0005539564 nova_compute[226295]: 2025-11-29 08:31:31.817 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e377 e377: 3 total, 3 up, 3 in
Nov 29 03:31:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:32.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:32 np0005539564 ovn_controller[130591]: 2025-11-29T08:31:32Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:89:42:86 10.100.0.10
Nov 29 03:31:32 np0005539564 ovn_controller[130591]: 2025-11-29T08:31:32Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:89:42:86 10.100.0.10
Nov 29 03:31:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:31:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:33.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:31:33 np0005539564 nova_compute[226295]: 2025-11-29 08:31:33.705 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:34.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:31:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.0 total, 600.0 interval#012Cumulative writes: 11K writes, 60K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s#012Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.12 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1684 writes, 8325 keys, 1684 commit groups, 1.0 writes per commit group, ingest: 16.46 MB, 0.03 MB/s#012Interval WAL: 1683 writes, 1683 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     20.1      3.68              0.29        36    0.102       0      0       0.0       0.0#012  L6      1/0   13.00 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.7     46.7     39.7      8.68              1.19        35    0.248    240K    19K       0.0       0.0#012 Sum      1/0   13.00 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.7     32.8     33.9     12.36              1.48        71    0.174    240K    19K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.7     68.7     71.8      1.17              0.28        12    0.097     55K   3154       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0     46.7     39.7      8.68              1.19        35    0.248    240K    19K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     20.1      3.68              0.29        35    0.105       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4800.0 total, 600.0 interval#012Flush(GB): cumulative 0.072, interval 0.012#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.41 GB write, 0.09 MB/s write, 0.40 GB read, 0.08 MB/s read, 12.4 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.13 MB/s read, 1.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558dc73191f0#2 capacity: 304.00 MB usage: 47.22 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000498 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2583,45.46 MB,14.9525%) FilterBlock(71,680.55 KB,0.218617%) IndexBlock(71,1.10 MB,0.361101%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 03:31:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:35.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:31:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:36.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:31:36 np0005539564 nova_compute[226295]: 2025-11-29 08:31:36.321 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:36.321 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:31:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:36.324 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:31:36 np0005539564 nova_compute[226295]: 2025-11-29 08:31:36.819 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:37.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:38.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:38 np0005539564 nova_compute[226295]: 2025-11-29 08:31:38.708 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e378 e378: 3 total, 3 up, 3 in
Nov 29 03:31:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:39.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:31:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:40.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:31:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:31:41.326 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:41.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:41 np0005539564 nova_compute[226295]: 2025-11-29 08:31:41.823 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:42.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:43.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:43 np0005539564 nova_compute[226295]: 2025-11-29 08:31:43.711 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:44.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:45.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e379 e379: 3 total, 3 up, 3 in
Nov 29 03:31:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:46.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:46 np0005539564 nova_compute[226295]: 2025-11-29 08:31:46.825 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:47.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:48.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:48 np0005539564 nova_compute[226295]: 2025-11-29 08:31:48.713 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:49 np0005539564 nova_compute[226295]: 2025-11-29 08:31:49.137 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:49.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:50.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:51.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:51 np0005539564 nova_compute[226295]: 2025-11-29 08:31:51.827 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:52.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:53.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:53 np0005539564 nova_compute[226295]: 2025-11-29 08:31:53.716 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e380 e380: 3 total, 3 up, 3 in
Nov 29 03:31:54 np0005539564 nova_compute[226295]: 2025-11-29 08:31:54.094 226310 DEBUG oslo_concurrency.lockutils [None req-5105d03e-eac5-4110-bf8e-9c945e6d3ff3 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "b4b22d90-9381-462a-bb31-7c87d8627c3b" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:54 np0005539564 nova_compute[226295]: 2025-11-29 08:31:54.095 226310 DEBUG oslo_concurrency.lockutils [None req-5105d03e-eac5-4110-bf8e-9c945e6d3ff3 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:54.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:54 np0005539564 nova_compute[226295]: 2025-11-29 08:31:54.432 226310 DEBUG nova.objects.instance [None req-5105d03e-eac5-4110-bf8e-9c945e6d3ff3 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'flavor' on Instance uuid b4b22d90-9381-462a-bb31-7c87d8627c3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:54 np0005539564 nova_compute[226295]: 2025-11-29 08:31:54.606 226310 DEBUG oslo_concurrency.lockutils [None req-5105d03e-eac5-4110-bf8e-9c945e6d3ff3 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.511s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:31:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:55.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:31:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:56.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:56 np0005539564 nova_compute[226295]: 2025-11-29 08:31:56.830 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:57.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:58.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:58 np0005539564 nova_compute[226295]: 2025-11-29 08:31:58.719 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:59 np0005539564 nova_compute[226295]: 2025-11-29 08:31:59.425 226310 DEBUG oslo_concurrency.lockutils [None req-5105d03e-eac5-4110-bf8e-9c945e6d3ff3 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "b4b22d90-9381-462a-bb31-7c87d8627c3b" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:59 np0005539564 nova_compute[226295]: 2025-11-29 08:31:59.426 226310 DEBUG oslo_concurrency.lockutils [None req-5105d03e-eac5-4110-bf8e-9c945e6d3ff3 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:59 np0005539564 nova_compute[226295]: 2025-11-29 08:31:59.426 226310 INFO nova.compute.manager [None req-5105d03e-eac5-4110-bf8e-9c945e6d3ff3 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Attaching volume d568fd4b-b177-4a6f-a3fc-768d9d38f999 to /dev/vdb#033[00m
Nov 29 03:31:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:31:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:59.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:59 np0005539564 nova_compute[226295]: 2025-11-29 08:31:59.563 226310 DEBUG os_brick.utils [None req-5105d03e-eac5-4110-bf8e-9c945e6d3ff3 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:31:59 np0005539564 nova_compute[226295]: 2025-11-29 08:31:59.564 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:59 np0005539564 nova_compute[226295]: 2025-11-29 08:31:59.582 231810 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:59 np0005539564 nova_compute[226295]: 2025-11-29 08:31:59.583 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[8d53f996-066c-4561-a73f-572c0ce12318]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:59 np0005539564 nova_compute[226295]: 2025-11-29 08:31:59.584 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:59 np0005539564 nova_compute[226295]: 2025-11-29 08:31:59.596 231810 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:59 np0005539564 nova_compute[226295]: 2025-11-29 08:31:59.596 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[e3b054d5-6b24-461b-86ad-acee3655aab5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d3b4384eec44', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:59 np0005539564 nova_compute[226295]: 2025-11-29 08:31:59.597 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:59 np0005539564 nova_compute[226295]: 2025-11-29 08:31:59.611 231810 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:59 np0005539564 nova_compute[226295]: 2025-11-29 08:31:59.611 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[73a7e05b-168f-4b1d-b376-128ccbdcd749]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:59 np0005539564 nova_compute[226295]: 2025-11-29 08:31:59.612 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[47554d4f-2151-428a-be7d-eb8babab3775]: (4, '2e858761-3292-4a17-b38f-a169c3064289') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:59 np0005539564 nova_compute[226295]: 2025-11-29 08:31:59.613 226310 DEBUG oslo_concurrency.processutils [None req-5105d03e-eac5-4110-bf8e-9c945e6d3ff3 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:59 np0005539564 nova_compute[226295]: 2025-11-29 08:31:59.660 226310 DEBUG oslo_concurrency.processutils [None req-5105d03e-eac5-4110-bf8e-9c945e6d3ff3 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CMD "nvme version" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:59 np0005539564 nova_compute[226295]: 2025-11-29 08:31:59.664 226310 DEBUG os_brick.initiator.connectors.lightos [None req-5105d03e-eac5-4110-bf8e-9c945e6d3ff3 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:31:59 np0005539564 nova_compute[226295]: 2025-11-29 08:31:59.664 226310 DEBUG os_brick.initiator.connectors.lightos [None req-5105d03e-eac5-4110-bf8e-9c945e6d3ff3 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:31:59 np0005539564 nova_compute[226295]: 2025-11-29 08:31:59.665 226310 DEBUG os_brick.initiator.connectors.lightos [None req-5105d03e-eac5-4110-bf8e-9c945e6d3ff3 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:31:59 np0005539564 nova_compute[226295]: 2025-11-29 08:31:59.665 226310 DEBUG os_brick.utils [None req-5105d03e-eac5-4110-bf8e-9c945e6d3ff3 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] <== get_connector_properties: return (101ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d3b4384eec44', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2e858761-3292-4a17-b38f-a169c3064289', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:31:59 np0005539564 nova_compute[226295]: 2025-11-29 08:31:59.666 226310 DEBUG nova.virt.block_device [None req-5105d03e-eac5-4110-bf8e-9c945e6d3ff3 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Updating existing volume attachment record: d3a3c51b-829b-4ba0-ac37-21086c77f8ca _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:32:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:00.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:00 np0005539564 nova_compute[226295]: 2025-11-29 08:32:00.367 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:01 np0005539564 nova_compute[226295]: 2025-11-29 08:32:01.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:01.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:01 np0005539564 nova_compute[226295]: 2025-11-29 08:32:01.833 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:01 np0005539564 nova_compute[226295]: 2025-11-29 08:32:01.948 226310 DEBUG nova.objects.instance [None req-5105d03e-eac5-4110-bf8e-9c945e6d3ff3 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'flavor' on Instance uuid b4b22d90-9381-462a-bb31-7c87d8627c3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:02 np0005539564 ovn_controller[130591]: 2025-11-29T08:32:02Z|00677|binding|INFO|Releasing lport 5c5c4b01-f2eb-4ea5-9341-cfa577051cf7 from this chassis (sb_readonly=0)
Nov 29 03:32:02 np0005539564 nova_compute[226295]: 2025-11-29 08:32:02.117 226310 DEBUG nova.virt.libvirt.driver [None req-5105d03e-eac5-4110-bf8e-9c945e6d3ff3 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Attempting to attach volume d568fd4b-b177-4a6f-a3fc-768d9d38f999 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:32:02 np0005539564 nova_compute[226295]: 2025-11-29 08:32:02.121 226310 DEBUG nova.virt.libvirt.guest [None req-5105d03e-eac5-4110-bf8e-9c945e6d3ff3 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:32:02 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:32:02 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-d568fd4b-b177-4a6f-a3fc-768d9d38f999">
Nov 29 03:32:02 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:32:02 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:32:02 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:32:02 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:32:02 np0005539564 nova_compute[226295]:  <auth username="openstack">
Nov 29 03:32:02 np0005539564 nova_compute[226295]:    <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:32:02 np0005539564 nova_compute[226295]:  </auth>
Nov 29 03:32:02 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:32:02 np0005539564 nova_compute[226295]:  <serial>d568fd4b-b177-4a6f-a3fc-768d9d38f999</serial>
Nov 29 03:32:02 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:32:02 np0005539564 nova_compute[226295]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:32:02 np0005539564 nova_compute[226295]: 2025-11-29 08:32:02.145 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:02.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:02 np0005539564 nova_compute[226295]: 2025-11-29 08:32:02.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:02 np0005539564 nova_compute[226295]: 2025-11-29 08:32:02.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:32:02 np0005539564 podman[289681]: 2025-11-29 08:32:02.54944559 +0000 UTC m=+0.089085942 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:32:02 np0005539564 podman[289682]: 2025-11-29 08:32:02.550154219 +0000 UTC m=+0.087826217 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:32:02 np0005539564 podman[289680]: 2025-11-29 08:32:02.599891784 +0000 UTC m=+0.146576146 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125)
Nov 29 03:32:02 np0005539564 nova_compute[226295]: 2025-11-29 08:32:02.868 226310 DEBUG nova.virt.libvirt.driver [None req-5105d03e-eac5-4110-bf8e-9c945e6d3ff3 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:32:02 np0005539564 nova_compute[226295]: 2025-11-29 08:32:02.868 226310 DEBUG nova.virt.libvirt.driver [None req-5105d03e-eac5-4110-bf8e-9c945e6d3ff3 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:32:02 np0005539564 nova_compute[226295]: 2025-11-29 08:32:02.869 226310 DEBUG nova.virt.libvirt.driver [None req-5105d03e-eac5-4110-bf8e-9c945e6d3ff3 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:32:02 np0005539564 nova_compute[226295]: 2025-11-29 08:32:02.869 226310 DEBUG nova.virt.libvirt.driver [None req-5105d03e-eac5-4110-bf8e-9c945e6d3ff3 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] No VIF found with MAC fa:16:3e:89:42:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:32:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:03.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:03 np0005539564 nova_compute[226295]: 2025-11-29 08:32:03.660 226310 DEBUG oslo_concurrency.lockutils [None req-5105d03e-eac5-4110-bf8e-9c945e6d3ff3 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:03 np0005539564 nova_compute[226295]: 2025-11-29 08:32:03.721 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:32:03.745 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:32:03.746 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:32:03.747 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:04.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:04 np0005539564 nova_compute[226295]: 2025-11-29 08:32:04.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:05 np0005539564 nova_compute[226295]: 2025-11-29 08:32:05.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:05.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:06.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:06 np0005539564 nova_compute[226295]: 2025-11-29 08:32:06.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:06 np0005539564 nova_compute[226295]: 2025-11-29 08:32:06.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:32:06 np0005539564 nova_compute[226295]: 2025-11-29 08:32:06.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:32:06 np0005539564 nova_compute[226295]: 2025-11-29 08:32:06.822 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-b4b22d90-9381-462a-bb31-7c87d8627c3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:32:06 np0005539564 nova_compute[226295]: 2025-11-29 08:32:06.822 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-b4b22d90-9381-462a-bb31-7c87d8627c3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:32:06 np0005539564 nova_compute[226295]: 2025-11-29 08:32:06.822 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:32:06 np0005539564 nova_compute[226295]: 2025-11-29 08:32:06.823 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b4b22d90-9381-462a-bb31-7c87d8627c3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:06 np0005539564 nova_compute[226295]: 2025-11-29 08:32:06.836 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:07.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:07 np0005539564 nova_compute[226295]: 2025-11-29 08:32:07.903 226310 DEBUG oslo_concurrency.lockutils [None req-c98a06a6-1d3e-4d48-a8a2-fa3dbceb644e 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "b4b22d90-9381-462a-bb31-7c87d8627c3b" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:07 np0005539564 nova_compute[226295]: 2025-11-29 08:32:07.904 226310 DEBUG oslo_concurrency.lockutils [None req-c98a06a6-1d3e-4d48-a8a2-fa3dbceb644e 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:08.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:08 np0005539564 nova_compute[226295]: 2025-11-29 08:32:08.236 226310 DEBUG nova.objects.instance [None req-c98a06a6-1d3e-4d48-a8a2-fa3dbceb644e 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'flavor' on Instance uuid b4b22d90-9381-462a-bb31-7c87d8627c3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:08 np0005539564 nova_compute[226295]: 2025-11-29 08:32:08.696 226310 DEBUG oslo_concurrency.lockutils [None req-c98a06a6-1d3e-4d48-a8a2-fa3dbceb644e 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:08 np0005539564 nova_compute[226295]: 2025-11-29 08:32:08.724 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:09 np0005539564 nova_compute[226295]: 2025-11-29 08:32:09.263 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Updating instance_info_cache with network_info: [{"id": "61be4f14-5854-4137-968b-8b44c045bc1b", "address": "fa:16:3e:89:42:86", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61be4f14-58", "ovs_interfaceid": "61be4f14-5854-4137-968b-8b44c045bc1b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:32:09 np0005539564 nova_compute[226295]: 2025-11-29 08:32:09.505 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-b4b22d90-9381-462a-bb31-7c87d8627c3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:32:09 np0005539564 nova_compute[226295]: 2025-11-29 08:32:09.506 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:32:09 np0005539564 nova_compute[226295]: 2025-11-29 08:32:09.507 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:09.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:09 np0005539564 nova_compute[226295]: 2025-11-29 08:32:09.900 226310 DEBUG oslo_concurrency.lockutils [None req-c98a06a6-1d3e-4d48-a8a2-fa3dbceb644e 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "b4b22d90-9381-462a-bb31-7c87d8627c3b" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:09 np0005539564 nova_compute[226295]: 2025-11-29 08:32:09.901 226310 DEBUG oslo_concurrency.lockutils [None req-c98a06a6-1d3e-4d48-a8a2-fa3dbceb644e 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:09 np0005539564 nova_compute[226295]: 2025-11-29 08:32:09.901 226310 INFO nova.compute.manager [None req-c98a06a6-1d3e-4d48-a8a2-fa3dbceb644e 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Attaching volume 8a0ffcae-e95b-4559-b222-b006f2a6a46d to /dev/vdc#033[00m
Nov 29 03:32:10 np0005539564 nova_compute[226295]: 2025-11-29 08:32:10.083 226310 DEBUG os_brick.utils [None req-c98a06a6-1d3e-4d48-a8a2-fa3dbceb644e 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:32:10 np0005539564 nova_compute[226295]: 2025-11-29 08:32:10.085 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:10 np0005539564 nova_compute[226295]: 2025-11-29 08:32:10.105 231810 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:10 np0005539564 nova_compute[226295]: 2025-11-29 08:32:10.106 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[ea51dc8a-54db-4381-9b6e-c85837b209a0]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:10 np0005539564 nova_compute[226295]: 2025-11-29 08:32:10.108 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:10 np0005539564 nova_compute[226295]: 2025-11-29 08:32:10.122 231810 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:10 np0005539564 nova_compute[226295]: 2025-11-29 08:32:10.122 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f8393e-d1f2-4f80-8978-9e93d280d1f6]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d3b4384eec44', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:10 np0005539564 nova_compute[226295]: 2025-11-29 08:32:10.124 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:10 np0005539564 nova_compute[226295]: 2025-11-29 08:32:10.134 231810 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:10 np0005539564 nova_compute[226295]: 2025-11-29 08:32:10.135 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[6a9d3258-4650-4c8f-9552-21bb7882d6c4]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:10 np0005539564 nova_compute[226295]: 2025-11-29 08:32:10.137 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[7e0bd277-f122-429e-9656-875d1a23e600]: (4, '2e858761-3292-4a17-b38f-a169c3064289') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:10 np0005539564 nova_compute[226295]: 2025-11-29 08:32:10.137 226310 DEBUG oslo_concurrency.processutils [None req-c98a06a6-1d3e-4d48-a8a2-fa3dbceb644e 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:10 np0005539564 nova_compute[226295]: 2025-11-29 08:32:10.183 226310 DEBUG oslo_concurrency.processutils [None req-c98a06a6-1d3e-4d48-a8a2-fa3dbceb644e 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CMD "nvme version" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:10 np0005539564 nova_compute[226295]: 2025-11-29 08:32:10.185 226310 DEBUG os_brick.initiator.connectors.lightos [None req-c98a06a6-1d3e-4d48-a8a2-fa3dbceb644e 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:32:10 np0005539564 nova_compute[226295]: 2025-11-29 08:32:10.186 226310 DEBUG os_brick.initiator.connectors.lightos [None req-c98a06a6-1d3e-4d48-a8a2-fa3dbceb644e 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:32:10 np0005539564 nova_compute[226295]: 2025-11-29 08:32:10.186 226310 DEBUG os_brick.initiator.connectors.lightos [None req-c98a06a6-1d3e-4d48-a8a2-fa3dbceb644e 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:32:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:10 np0005539564 nova_compute[226295]: 2025-11-29 08:32:10.186 226310 DEBUG os_brick.utils [None req-c98a06a6-1d3e-4d48-a8a2-fa3dbceb644e 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] <== get_connector_properties: return (103ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d3b4384eec44', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2e858761-3292-4a17-b38f-a169c3064289', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:32:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:10.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:10 np0005539564 nova_compute[226295]: 2025-11-29 08:32:10.188 226310 DEBUG nova.virt.block_device [None req-c98a06a6-1d3e-4d48-a8a2-fa3dbceb644e 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Updating existing volume attachment record: 86294c41-53ca-46a2-b51c-d94e019aa3ed _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:32:11 np0005539564 nova_compute[226295]: 2025-11-29 08:32:11.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:11.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:11 np0005539564 nova_compute[226295]: 2025-11-29 08:32:11.600 226310 DEBUG nova.objects.instance [None req-c98a06a6-1d3e-4d48-a8a2-fa3dbceb644e 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'flavor' on Instance uuid b4b22d90-9381-462a-bb31-7c87d8627c3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:11 np0005539564 nova_compute[226295]: 2025-11-29 08:32:11.627 226310 DEBUG nova.virt.libvirt.driver [None req-c98a06a6-1d3e-4d48-a8a2-fa3dbceb644e 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Attempting to attach volume 8a0ffcae-e95b-4559-b222-b006f2a6a46d with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:32:11 np0005539564 ovn_controller[130591]: 2025-11-29T08:32:11Z|00678|binding|INFO|Releasing lport 5c5c4b01-f2eb-4ea5-9341-cfa577051cf7 from this chassis (sb_readonly=0)
Nov 29 03:32:11 np0005539564 nova_compute[226295]: 2025-11-29 08:32:11.633 226310 DEBUG nova.virt.libvirt.guest [None req-c98a06a6-1d3e-4d48-a8a2-fa3dbceb644e 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:32:11 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:32:11 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-8a0ffcae-e95b-4559-b222-b006f2a6a46d">
Nov 29 03:32:11 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:32:11 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:32:11 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:32:11 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:32:11 np0005539564 nova_compute[226295]:  <auth username="openstack">
Nov 29 03:32:11 np0005539564 nova_compute[226295]:    <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:32:11 np0005539564 nova_compute[226295]:  </auth>
Nov 29 03:32:11 np0005539564 nova_compute[226295]:  <target dev="vdc" bus="virtio"/>
Nov 29 03:32:11 np0005539564 nova_compute[226295]:  <serial>8a0ffcae-e95b-4559-b222-b006f2a6a46d</serial>
Nov 29 03:32:11 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:32:11 np0005539564 nova_compute[226295]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:32:11 np0005539564 nova_compute[226295]: 2025-11-29 08:32:11.706 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:11 np0005539564 nova_compute[226295]: 2025-11-29 08:32:11.838 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:11 np0005539564 nova_compute[226295]: 2025-11-29 08:32:11.886 226310 DEBUG nova.virt.libvirt.driver [None req-c98a06a6-1d3e-4d48-a8a2-fa3dbceb644e 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:32:11 np0005539564 nova_compute[226295]: 2025-11-29 08:32:11.887 226310 DEBUG nova.virt.libvirt.driver [None req-c98a06a6-1d3e-4d48-a8a2-fa3dbceb644e 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:32:11 np0005539564 nova_compute[226295]: 2025-11-29 08:32:11.887 226310 DEBUG nova.virt.libvirt.driver [None req-c98a06a6-1d3e-4d48-a8a2-fa3dbceb644e 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:32:11 np0005539564 nova_compute[226295]: 2025-11-29 08:32:11.888 226310 DEBUG nova.virt.libvirt.driver [None req-c98a06a6-1d3e-4d48-a8a2-fa3dbceb644e 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:32:11 np0005539564 nova_compute[226295]: 2025-11-29 08:32:11.888 226310 DEBUG nova.virt.libvirt.driver [None req-c98a06a6-1d3e-4d48-a8a2-fa3dbceb644e 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] No VIF found with MAC fa:16:3e:89:42:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:32:12 np0005539564 nova_compute[226295]: 2025-11-29 08:32:12.059 226310 DEBUG oslo_concurrency.lockutils [None req-c98a06a6-1d3e-4d48-a8a2-fa3dbceb644e 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:12.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #118. Immutable memtables: 0.
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:32:13.485386) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 118
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405133485461, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1005, "num_deletes": 254, "total_data_size": 1942117, "memory_usage": 1965008, "flush_reason": "Manual Compaction"}
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #119: started
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405133497220, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 119, "file_size": 1279833, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60621, "largest_seqno": 61621, "table_properties": {"data_size": 1275156, "index_size": 2265, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10750, "raw_average_key_size": 20, "raw_value_size": 1265604, "raw_average_value_size": 2396, "num_data_blocks": 99, "num_entries": 528, "num_filter_entries": 528, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405069, "oldest_key_time": 1764405069, "file_creation_time": 1764405133, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 12025 microseconds, and 6982 cpu microseconds.
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:32:13.497416) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #119: 1279833 bytes OK
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:32:13.497500) [db/memtable_list.cc:519] [default] Level-0 commit table #119 started
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:32:13.499230) [db/memtable_list.cc:722] [default] Level-0 commit table #119: memtable #1 done
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:32:13.499251) EVENT_LOG_v1 {"time_micros": 1764405133499244, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:32:13.499275) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 1937072, prev total WAL file size 1937072, number of live WAL files 2.
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000115.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:32:13.500732) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [119(1249KB)], [117(12MB)]
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405133500770, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [119], "files_L6": [117], "score": -1, "input_data_size": 14908539, "oldest_snapshot_seqno": -1}
Nov 29 03:32:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:13.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:13 np0005539564 nova_compute[226295]: 2025-11-29 08:32:13.730 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #120: 9143 keys, 12981199 bytes, temperature: kUnknown
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405133841354, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 120, "file_size": 12981199, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12919911, "index_size": 37304, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22917, "raw_key_size": 238865, "raw_average_key_size": 26, "raw_value_size": 12757284, "raw_average_value_size": 1395, "num_data_blocks": 1447, "num_entries": 9143, "num_filter_entries": 9143, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764405133, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 120, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:32:13.841705) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 12981199 bytes
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:32:13.844022) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 43.8 rd, 38.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 13.0 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(21.8) write-amplify(10.1) OK, records in: 9668, records dropped: 525 output_compression: NoCompression
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:32:13.844051) EVENT_LOG_v1 {"time_micros": 1764405133844038, "job": 74, "event": "compaction_finished", "compaction_time_micros": 340715, "compaction_time_cpu_micros": 37597, "output_level": 6, "num_output_files": 1, "total_output_size": 12981199, "num_input_records": 9668, "num_output_records": 9143, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405133845085, "job": 74, "event": "table_file_deletion", "file_number": 119}
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000117.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405133849773, "job": 74, "event": "table_file_deletion", "file_number": 117}
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:32:13.500598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:32:13.850068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:32:13.850077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:32:13.850079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:32:13.850081) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:32:13 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:32:13.850083) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:32:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:14.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:14 np0005539564 nova_compute[226295]: 2025-11-29 08:32:14.429 226310 DEBUG oslo_concurrency.lockutils [None req-4a8a62ec-00d9-40dd-ab5a-954ac4d0e081 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "b4b22d90-9381-462a-bb31-7c87d8627c3b" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:14 np0005539564 nova_compute[226295]: 2025-11-29 08:32:14.430 226310 DEBUG oslo_concurrency.lockutils [None req-4a8a62ec-00d9-40dd-ab5a-954ac4d0e081 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:14 np0005539564 nova_compute[226295]: 2025-11-29 08:32:14.473 226310 INFO nova.compute.manager [None req-4a8a62ec-00d9-40dd-ab5a-954ac4d0e081 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Detaching volume d568fd4b-b177-4a6f-a3fc-768d9d38f999#033[00m
Nov 29 03:32:14 np0005539564 nova_compute[226295]: 2025-11-29 08:32:14.618 226310 INFO nova.virt.block_device [None req-4a8a62ec-00d9-40dd-ab5a-954ac4d0e081 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Attempting to driver detach volume d568fd4b-b177-4a6f-a3fc-768d9d38f999 from mountpoint /dev/vdb#033[00m
Nov 29 03:32:14 np0005539564 nova_compute[226295]: 2025-11-29 08:32:14.631 226310 DEBUG nova.virt.libvirt.driver [None req-4a8a62ec-00d9-40dd-ab5a-954ac4d0e081 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Attempting to detach device vdb from instance b4b22d90-9381-462a-bb31-7c87d8627c3b from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:32:14 np0005539564 nova_compute[226295]: 2025-11-29 08:32:14.632 226310 DEBUG nova.virt.libvirt.guest [None req-4a8a62ec-00d9-40dd-ab5a-954ac4d0e081 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:32:14 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:32:14 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-d568fd4b-b177-4a6f-a3fc-768d9d38f999">
Nov 29 03:32:14 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:32:14 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:32:14 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:32:14 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:32:14 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:32:14 np0005539564 nova_compute[226295]:  <serial>d568fd4b-b177-4a6f-a3fc-768d9d38f999</serial>
Nov 29 03:32:14 np0005539564 nova_compute[226295]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:32:14 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:32:14 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:32:14 np0005539564 nova_compute[226295]: 2025-11-29 08:32:14.643 226310 INFO nova.virt.libvirt.driver [None req-4a8a62ec-00d9-40dd-ab5a-954ac4d0e081 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Successfully detached device vdb from instance b4b22d90-9381-462a-bb31-7c87d8627c3b from the persistent domain config.#033[00m
Nov 29 03:32:14 np0005539564 nova_compute[226295]: 2025-11-29 08:32:14.644 226310 DEBUG nova.virt.libvirt.driver [None req-4a8a62ec-00d9-40dd-ab5a-954ac4d0e081 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance b4b22d90-9381-462a-bb31-7c87d8627c3b from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:32:14 np0005539564 nova_compute[226295]: 2025-11-29 08:32:14.645 226310 DEBUG nova.virt.libvirt.guest [None req-4a8a62ec-00d9-40dd-ab5a-954ac4d0e081 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:32:14 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:32:14 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-d568fd4b-b177-4a6f-a3fc-768d9d38f999">
Nov 29 03:32:14 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:32:14 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:32:14 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:32:14 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:32:14 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:32:14 np0005539564 nova_compute[226295]:  <serial>d568fd4b-b177-4a6f-a3fc-768d9d38f999</serial>
Nov 29 03:32:14 np0005539564 nova_compute[226295]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:32:14 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:32:14 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:32:14 np0005539564 nova_compute[226295]: 2025-11-29 08:32:14.772 226310 DEBUG nova.virt.libvirt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Received event <DeviceRemovedEvent: 1764405134.771651, b4b22d90-9381-462a-bb31-7c87d8627c3b => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:32:14 np0005539564 nova_compute[226295]: 2025-11-29 08:32:14.775 226310 DEBUG nova.virt.libvirt.driver [None req-4a8a62ec-00d9-40dd-ab5a-954ac4d0e081 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance b4b22d90-9381-462a-bb31-7c87d8627c3b _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:32:14 np0005539564 nova_compute[226295]: 2025-11-29 08:32:14.779 226310 INFO nova.virt.libvirt.driver [None req-4a8a62ec-00d9-40dd-ab5a-954ac4d0e081 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Successfully detached device vdb from instance b4b22d90-9381-462a-bb31-7c87d8627c3b from the live domain config.#033[00m
Nov 29 03:32:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:32:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:15.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:32:16 np0005539564 podman[289944]: 2025-11-29 08:32:16.063009545 +0000 UTC m=+0.688649212 container exec 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:32:16 np0005539564 podman[289944]: 2025-11-29 08:32:16.18739897 +0000 UTC m=+0.813038647 container exec_died 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 03:32:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:16.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:16 np0005539564 nova_compute[226295]: 2025-11-29 08:32:16.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:16 np0005539564 nova_compute[226295]: 2025-11-29 08:32:16.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:32:16 np0005539564 nova_compute[226295]: 2025-11-29 08:32:16.415 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:32:16 np0005539564 nova_compute[226295]: 2025-11-29 08:32:16.779 226310 DEBUG nova.objects.instance [None req-4a8a62ec-00d9-40dd-ab5a-954ac4d0e081 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'flavor' on Instance uuid b4b22d90-9381-462a-bb31-7c87d8627c3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:16 np0005539564 nova_compute[226295]: 2025-11-29 08:32:16.839 226310 DEBUG oslo_concurrency.lockutils [None req-4a8a62ec-00d9-40dd-ab5a-954ac4d0e081 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 2.409s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:16 np0005539564 nova_compute[226295]: 2025-11-29 08:32:16.840 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:32:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:17.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:32:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:18.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:18 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:32:18 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:32:18 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:32:18 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:32:18 np0005539564 nova_compute[226295]: 2025-11-29 08:32:18.378 226310 DEBUG oslo_concurrency.lockutils [None req-2a822203-c025-42b4-a94d-d0197b51fad2 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "b4b22d90-9381-462a-bb31-7c87d8627c3b" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:18 np0005539564 nova_compute[226295]: 2025-11-29 08:32:18.378 226310 DEBUG oslo_concurrency.lockutils [None req-2a822203-c025-42b4-a94d-d0197b51fad2 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:18 np0005539564 nova_compute[226295]: 2025-11-29 08:32:18.428 226310 INFO nova.compute.manager [None req-2a822203-c025-42b4-a94d-d0197b51fad2 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Detaching volume 8a0ffcae-e95b-4559-b222-b006f2a6a46d#033[00m
Nov 29 03:32:18 np0005539564 nova_compute[226295]: 2025-11-29 08:32:18.588 226310 INFO nova.virt.block_device [None req-2a822203-c025-42b4-a94d-d0197b51fad2 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Attempting to driver detach volume 8a0ffcae-e95b-4559-b222-b006f2a6a46d from mountpoint /dev/vdc#033[00m
Nov 29 03:32:18 np0005539564 nova_compute[226295]: 2025-11-29 08:32:18.597 226310 DEBUG nova.virt.libvirt.driver [None req-2a822203-c025-42b4-a94d-d0197b51fad2 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Attempting to detach device vdc from instance b4b22d90-9381-462a-bb31-7c87d8627c3b from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:32:18 np0005539564 nova_compute[226295]: 2025-11-29 08:32:18.598 226310 DEBUG nova.virt.libvirt.guest [None req-2a822203-c025-42b4-a94d-d0197b51fad2 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:32:18 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:32:18 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-8a0ffcae-e95b-4559-b222-b006f2a6a46d">
Nov 29 03:32:18 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:32:18 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:32:18 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:32:18 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:32:18 np0005539564 nova_compute[226295]:  <target dev="vdc" bus="virtio"/>
Nov 29 03:32:18 np0005539564 nova_compute[226295]:  <serial>8a0ffcae-e95b-4559-b222-b006f2a6a46d</serial>
Nov 29 03:32:18 np0005539564 nova_compute[226295]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Nov 29 03:32:18 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:32:18 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:32:18 np0005539564 nova_compute[226295]: 2025-11-29 08:32:18.606 226310 INFO nova.virt.libvirt.driver [None req-2a822203-c025-42b4-a94d-d0197b51fad2 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Successfully detached device vdc from instance b4b22d90-9381-462a-bb31-7c87d8627c3b from the persistent domain config.#033[00m
Nov 29 03:32:18 np0005539564 nova_compute[226295]: 2025-11-29 08:32:18.606 226310 DEBUG nova.virt.libvirt.driver [None req-2a822203-c025-42b4-a94d-d0197b51fad2 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance b4b22d90-9381-462a-bb31-7c87d8627c3b from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:32:18 np0005539564 nova_compute[226295]: 2025-11-29 08:32:18.607 226310 DEBUG nova.virt.libvirt.guest [None req-2a822203-c025-42b4-a94d-d0197b51fad2 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:32:18 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:32:18 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-8a0ffcae-e95b-4559-b222-b006f2a6a46d">
Nov 29 03:32:18 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:32:18 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:32:18 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:32:18 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:32:18 np0005539564 nova_compute[226295]:  <target dev="vdc" bus="virtio"/>
Nov 29 03:32:18 np0005539564 nova_compute[226295]:  <serial>8a0ffcae-e95b-4559-b222-b006f2a6a46d</serial>
Nov 29 03:32:18 np0005539564 nova_compute[226295]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Nov 29 03:32:18 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:32:18 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:32:18 np0005539564 nova_compute[226295]: 2025-11-29 08:32:18.692 226310 DEBUG nova.virt.libvirt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Received event <DeviceRemovedEvent: 1764405138.6922631, b4b22d90-9381-462a-bb31-7c87d8627c3b => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:32:18 np0005539564 nova_compute[226295]: 2025-11-29 08:32:18.695 226310 DEBUG nova.virt.libvirt.driver [None req-2a822203-c025-42b4-a94d-d0197b51fad2 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance b4b22d90-9381-462a-bb31-7c87d8627c3b _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:32:18 np0005539564 nova_compute[226295]: 2025-11-29 08:32:18.697 226310 INFO nova.virt.libvirt.driver [None req-2a822203-c025-42b4-a94d-d0197b51fad2 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Successfully detached device vdc from instance b4b22d90-9381-462a-bb31-7c87d8627c3b from the live domain config.#033[00m
Nov 29 03:32:18 np0005539564 nova_compute[226295]: 2025-11-29 08:32:18.733 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:18 np0005539564 nova_compute[226295]: 2025-11-29 08:32:18.883 226310 DEBUG nova.objects.instance [None req-2a822203-c025-42b4-a94d-d0197b51fad2 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'flavor' on Instance uuid b4b22d90-9381-462a-bb31-7c87d8627c3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:18 np0005539564 nova_compute[226295]: 2025-11-29 08:32:18.927 226310 DEBUG oslo_concurrency.lockutils [None req-2a822203-c025-42b4-a94d-d0197b51fad2 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:19 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:32:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:19 np0005539564 nova_compute[226295]: 2025-11-29 08:32:19.416 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e381 e381: 3 total, 3 up, 3 in
Nov 29 03:32:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:19.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:19 np0005539564 nova_compute[226295]: 2025-11-29 08:32:19.575 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:19 np0005539564 nova_compute[226295]: 2025-11-29 08:32:19.575 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:19 np0005539564 nova_compute[226295]: 2025-11-29 08:32:19.575 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:19 np0005539564 nova_compute[226295]: 2025-11-29 08:32:19.576 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:32:19 np0005539564 nova_compute[226295]: 2025-11-29 08:32:19.576 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:32:20 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2750205141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:32:20 np0005539564 nova_compute[226295]: 2025-11-29 08:32:20.111 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:20.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:20 np0005539564 nova_compute[226295]: 2025-11-29 08:32:20.259 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000ac as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:32:20 np0005539564 nova_compute[226295]: 2025-11-29 08:32:20.260 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000ac as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:32:20 np0005539564 nova_compute[226295]: 2025-11-29 08:32:20.498 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:32:20 np0005539564 nova_compute[226295]: 2025-11-29 08:32:20.500 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4023MB free_disk=20.84530258178711GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:32:20 np0005539564 nova_compute[226295]: 2025-11-29 08:32:20.501 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:20 np0005539564 nova_compute[226295]: 2025-11-29 08:32:20.501 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:20 np0005539564 nova_compute[226295]: 2025-11-29 08:32:20.837 226310 DEBUG oslo_concurrency.lockutils [None req-dc647315-4d15-4bca-88ba-b403bda23c23 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "b4b22d90-9381-462a-bb31-7c87d8627c3b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:20 np0005539564 nova_compute[226295]: 2025-11-29 08:32:20.837 226310 DEBUG oslo_concurrency.lockutils [None req-dc647315-4d15-4bca-88ba-b403bda23c23 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:20 np0005539564 nova_compute[226295]: 2025-11-29 08:32:20.838 226310 DEBUG oslo_concurrency.lockutils [None req-dc647315-4d15-4bca-88ba-b403bda23c23 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "b4b22d90-9381-462a-bb31-7c87d8627c3b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:20 np0005539564 nova_compute[226295]: 2025-11-29 08:32:20.838 226310 DEBUG oslo_concurrency.lockutils [None req-dc647315-4d15-4bca-88ba-b403bda23c23 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:20 np0005539564 nova_compute[226295]: 2025-11-29 08:32:20.838 226310 DEBUG oslo_concurrency.lockutils [None req-dc647315-4d15-4bca-88ba-b403bda23c23 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:20 np0005539564 nova_compute[226295]: 2025-11-29 08:32:20.839 226310 INFO nova.compute.manager [None req-dc647315-4d15-4bca-88ba-b403bda23c23 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Terminating instance#033[00m
Nov 29 03:32:20 np0005539564 nova_compute[226295]: 2025-11-29 08:32:20.841 226310 DEBUG nova.compute.manager [None req-dc647315-4d15-4bca-88ba-b403bda23c23 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:32:20 np0005539564 nova_compute[226295]: 2025-11-29 08:32:20.892 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance b4b22d90-9381-462a-bb31-7c87d8627c3b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:32:20 np0005539564 nova_compute[226295]: 2025-11-29 08:32:20.893 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:32:20 np0005539564 nova_compute[226295]: 2025-11-29 08:32:20.894 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:32:21 np0005539564 nova_compute[226295]: 2025-11-29 08:32:21.058 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:21 np0005539564 kernel: tap61be4f14-58 (unregistering): left promiscuous mode
Nov 29 03:32:21 np0005539564 NetworkManager[48997]: <info>  [1764405141.0739] device (tap61be4f14-58): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:32:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:32:21Z|00679|binding|INFO|Releasing lport 61be4f14-5854-4137-968b-8b44c045bc1b from this chassis (sb_readonly=0)
Nov 29 03:32:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:32:21Z|00680|binding|INFO|Setting lport 61be4f14-5854-4137-968b-8b44c045bc1b down in Southbound
Nov 29 03:32:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:32:21Z|00681|binding|INFO|Removing iface tap61be4f14-58 ovn-installed in OVS
Nov 29 03:32:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:32:21.107 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:42:86 10.100.0.10'], port_security=['fa:16:3e:89:42:86 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b4b22d90-9381-462a-bb31-7c87d8627c3b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eb0810bf6f5b4eb59638b7a2cf59ed5b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dfe85a60-69af-44d9-855c-fe4b98539b8a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1429573d-31ea-4b00-8580-1031fbde1ea5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=61be4f14-5854-4137-968b-8b44c045bc1b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:32:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:32:21.109 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 61be4f14-5854-4137-968b-8b44c045bc1b in datapath a259ebcb-7cce-4363-8e50-c25ed4a3daec unbound from our chassis#033[00m
Nov 29 03:32:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:32:21.111 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a259ebcb-7cce-4363-8e50-c25ed4a3daec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:32:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:32:21.114 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[068e31ce-6494-4987-a62f-2e6be16dfdea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:32:21.114 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec namespace which is not needed anymore#033[00m
Nov 29 03:32:21 np0005539564 nova_compute[226295]: 2025-11-29 08:32:21.114 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:21 np0005539564 nova_compute[226295]: 2025-11-29 08:32:21.136 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:21 np0005539564 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000ac.scope: Deactivated successfully.
Nov 29 03:32:21 np0005539564 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000ac.scope: Consumed 17.561s CPU time.
Nov 29 03:32:21 np0005539564 systemd-machined[190128]: Machine qemu-81-instance-000000ac terminated.
Nov 29 03:32:21 np0005539564 nova_compute[226295]: 2025-11-29 08:32:21.287 226310 INFO nova.virt.libvirt.driver [-] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Instance destroyed successfully.#033[00m
Nov 29 03:32:21 np0005539564 nova_compute[226295]: 2025-11-29 08:32:21.290 226310 DEBUG nova.objects.instance [None req-dc647315-4d15-4bca-88ba-b403bda23c23 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lazy-loading 'resources' on Instance uuid b4b22d90-9381-462a-bb31-7c87d8627c3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:21 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[289505]: [NOTICE]   (289509) : haproxy version is 2.8.14-c23fe91
Nov 29 03:32:21 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[289505]: [NOTICE]   (289509) : path to executable is /usr/sbin/haproxy
Nov 29 03:32:21 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[289505]: [WARNING]  (289509) : Exiting Master process...
Nov 29 03:32:21 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[289505]: [WARNING]  (289509) : Exiting Master process...
Nov 29 03:32:21 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[289505]: [ALERT]    (289509) : Current worker (289511) exited with code 143 (Terminated)
Nov 29 03:32:21 np0005539564 neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec[289505]: [WARNING]  (289509) : All workers exited. Exiting... (0)
Nov 29 03:32:21 np0005539564 systemd[1]: libpod-507ac7917adac9e427127e1d9d1275a966b78838aa1b5cea7ba9c4bcd6d05d18.scope: Deactivated successfully.
Nov 29 03:32:21 np0005539564 conmon[289505]: conmon 507ac7917adac9e42712 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-507ac7917adac9e427127e1d9d1275a966b78838aa1b5cea7ba9c4bcd6d05d18.scope/container/memory.events
Nov 29 03:32:21 np0005539564 podman[290264]: 2025-11-29 08:32:21.321498569 +0000 UTC m=+0.070198210 container died 507ac7917adac9e427127e1d9d1275a966b78838aa1b5cea7ba9c4bcd6d05d18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:32:21 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-507ac7917adac9e427127e1d9d1275a966b78838aa1b5cea7ba9c4bcd6d05d18-userdata-shm.mount: Deactivated successfully.
Nov 29 03:32:21 np0005539564 systemd[1]: var-lib-containers-storage-overlay-8bb08b2982fa07525c6aa73426b7832ba29db935e1e69d468c26845802408d19-merged.mount: Deactivated successfully.
Nov 29 03:32:21 np0005539564 podman[290264]: 2025-11-29 08:32:21.376426284 +0000 UTC m=+0.125125925 container cleanup 507ac7917adac9e427127e1d9d1275a966b78838aa1b5cea7ba9c4bcd6d05d18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:32:21 np0005539564 systemd[1]: libpod-conmon-507ac7917adac9e427127e1d9d1275a966b78838aa1b5cea7ba9c4bcd6d05d18.scope: Deactivated successfully.
Nov 29 03:32:21 np0005539564 podman[290304]: 2025-11-29 08:32:21.467442697 +0000 UTC m=+0.060551080 container remove 507ac7917adac9e427127e1d9d1275a966b78838aa1b5cea7ba9c4bcd6d05d18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:32:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:32:21.475 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[fe257d8f-93ec-43f9-801c-5c7c005b1340]: (4, ('Sat Nov 29 08:32:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec (507ac7917adac9e427127e1d9d1275a966b78838aa1b5cea7ba9c4bcd6d05d18)\n507ac7917adac9e427127e1d9d1275a966b78838aa1b5cea7ba9c4bcd6d05d18\nSat Nov 29 08:32:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec (507ac7917adac9e427127e1d9d1275a966b78838aa1b5cea7ba9c4bcd6d05d18)\n507ac7917adac9e427127e1d9d1275a966b78838aa1b5cea7ba9c4bcd6d05d18\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:32:21.478 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1ee55279-a216-4dcb-9c3f-8fff8606650d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:32:21.479 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa259ebcb-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:21 np0005539564 nova_compute[226295]: 2025-11-29 08:32:21.481 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:21 np0005539564 kernel: tapa259ebcb-70: left promiscuous mode
Nov 29 03:32:21 np0005539564 nova_compute[226295]: 2025-11-29 08:32:21.518 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:21 np0005539564 nova_compute[226295]: 2025-11-29 08:32:21.521 226310 DEBUG nova.virt.libvirt.vif [None req-dc647315-4d15-4bca-88ba-b403bda23c23 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:31:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-319067414',display_name='tempest-AttachVolumeTestJSON-server-319067414',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-319067414',id=172,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLyqkXOK08CTph69txbTtq6hDXOFs7dzkRY0n3OL57A4cq65MQYsgTEl6dcAndCU2WpIdYkwjr+sWSAMTpecJCNMmd9pbWSWA1W/5SBqzNTXW4i70eNP1fMxq0ypOfA/2A==',key_name='tempest-keypair-318616355',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:31:14Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eb0810bf6f5b4eb59638b7a2cf59ed5b',ramdisk_id='',reservation_id='r-4luqh18o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-942041170',owner_user_name='tempest-AttachVolumeTestJSON-942041170-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:31:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a362a419f6a492aae2f102ad2bbd5e9',uuid=b4b22d90-9381-462a-bb31-7c87d8627c3b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "61be4f14-5854-4137-968b-8b44c045bc1b", "address": "fa:16:3e:89:42:86", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61be4f14-58", "ovs_interfaceid": "61be4f14-5854-4137-968b-8b44c045bc1b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:32:21 np0005539564 nova_compute[226295]: 2025-11-29 08:32:21.522 226310 DEBUG nova.network.os_vif_util [None req-dc647315-4d15-4bca-88ba-b403bda23c23 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Converting VIF {"id": "61be4f14-5854-4137-968b-8b44c045bc1b", "address": "fa:16:3e:89:42:86", "network": {"id": "a259ebcb-7cce-4363-8e50-c25ed4a3daec", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-376014059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb0810bf6f5b4eb59638b7a2cf59ed5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61be4f14-58", "ovs_interfaceid": "61be4f14-5854-4137-968b-8b44c045bc1b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:32:21 np0005539564 nova_compute[226295]: 2025-11-29 08:32:21.523 226310 DEBUG nova.network.os_vif_util [None req-dc647315-4d15-4bca-88ba-b403bda23c23 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:89:42:86,bridge_name='br-int',has_traffic_filtering=True,id=61be4f14-5854-4137-968b-8b44c045bc1b,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61be4f14-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:32:21 np0005539564 nova_compute[226295]: 2025-11-29 08:32:21.524 226310 DEBUG os_vif [None req-dc647315-4d15-4bca-88ba-b403bda23c23 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:42:86,bridge_name='br-int',has_traffic_filtering=True,id=61be4f14-5854-4137-968b-8b44c045bc1b,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61be4f14-58') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:32:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:21.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:21 np0005539564 nova_compute[226295]: 2025-11-29 08:32:21.527 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:32:21.526 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c94181d2-46af-4a5c-b183-b893b42b2cee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:21 np0005539564 nova_compute[226295]: 2025-11-29 08:32:21.527 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61be4f14-58, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:21 np0005539564 nova_compute[226295]: 2025-11-29 08:32:21.529 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:21 np0005539564 nova_compute[226295]: 2025-11-29 08:32:21.531 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:32:21 np0005539564 nova_compute[226295]: 2025-11-29 08:32:21.532 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:21 np0005539564 nova_compute[226295]: 2025-11-29 08:32:21.536 226310 INFO os_vif [None req-dc647315-4d15-4bca-88ba-b403bda23c23 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:42:86,bridge_name='br-int',has_traffic_filtering=True,id=61be4f14-5854-4137-968b-8b44c045bc1b,network=Network(a259ebcb-7cce-4363-8e50-c25ed4a3daec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61be4f14-58')#033[00m
Nov 29 03:32:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:32:21.548 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb45c3e-d50b-4128-817d-4d286ae65b6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:32:21 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3905745017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:32:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:32:21.550 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[cb6beadb-ef6b-4d8d-8f34-e423b8f04d5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:32:21.568 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[232d1d94-59ed-49ab-8874-868a3410281f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804821, 'reachable_time': 33017, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290338, 'error': None, 'target': 'ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:32:21.570 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a259ebcb-7cce-4363-8e50-c25ed4a3daec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:32:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:32:21.571 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[5eb7b90b-8b3a-47f4-8d94-6930e0a936cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:21 np0005539564 systemd[1]: run-netns-ovnmeta\x2da259ebcb\x2d7cce\x2d4363\x2d8e50\x2dc25ed4a3daec.mount: Deactivated successfully.
Nov 29 03:32:21 np0005539564 nova_compute[226295]: 2025-11-29 08:32:21.579 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:21 np0005539564 nova_compute[226295]: 2025-11-29 08:32:21.585 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:32:21 np0005539564 nova_compute[226295]: 2025-11-29 08:32:21.613 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:32:21 np0005539564 nova_compute[226295]: 2025-11-29 08:32:21.707 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:32:21 np0005539564 nova_compute[226295]: 2025-11-29 08:32:21.707 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:22 np0005539564 nova_compute[226295]: 2025-11-29 08:32:22.041 226310 DEBUG nova.compute.manager [req-646a9852-fade-46c6-bfc9-ad0c2ed6d145 req-83c025d5-842a-4f25-b8ba-4a8b201fbaae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Received event network-vif-unplugged-61be4f14-5854-4137-968b-8b44c045bc1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:22 np0005539564 nova_compute[226295]: 2025-11-29 08:32:22.042 226310 DEBUG oslo_concurrency.lockutils [req-646a9852-fade-46c6-bfc9-ad0c2ed6d145 req-83c025d5-842a-4f25-b8ba-4a8b201fbaae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "b4b22d90-9381-462a-bb31-7c87d8627c3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:22 np0005539564 nova_compute[226295]: 2025-11-29 08:32:22.043 226310 DEBUG oslo_concurrency.lockutils [req-646a9852-fade-46c6-bfc9-ad0c2ed6d145 req-83c025d5-842a-4f25-b8ba-4a8b201fbaae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:22 np0005539564 nova_compute[226295]: 2025-11-29 08:32:22.044 226310 DEBUG oslo_concurrency.lockutils [req-646a9852-fade-46c6-bfc9-ad0c2ed6d145 req-83c025d5-842a-4f25-b8ba-4a8b201fbaae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:22 np0005539564 nova_compute[226295]: 2025-11-29 08:32:22.044 226310 DEBUG nova.compute.manager [req-646a9852-fade-46c6-bfc9-ad0c2ed6d145 req-83c025d5-842a-4f25-b8ba-4a8b201fbaae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] No waiting events found dispatching network-vif-unplugged-61be4f14-5854-4137-968b-8b44c045bc1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:32:22 np0005539564 nova_compute[226295]: 2025-11-29 08:32:22.045 226310 DEBUG nova.compute.manager [req-646a9852-fade-46c6-bfc9-ad0c2ed6d145 req-83c025d5-842a-4f25-b8ba-4a8b201fbaae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Received event network-vif-unplugged-61be4f14-5854-4137-968b-8b44c045bc1b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:32:22 np0005539564 nova_compute[226295]: 2025-11-29 08:32:22.046 226310 DEBUG nova.compute.manager [req-646a9852-fade-46c6-bfc9-ad0c2ed6d145 req-83c025d5-842a-4f25-b8ba-4a8b201fbaae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Received event network-vif-plugged-61be4f14-5854-4137-968b-8b44c045bc1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:22 np0005539564 nova_compute[226295]: 2025-11-29 08:32:22.046 226310 DEBUG oslo_concurrency.lockutils [req-646a9852-fade-46c6-bfc9-ad0c2ed6d145 req-83c025d5-842a-4f25-b8ba-4a8b201fbaae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "b4b22d90-9381-462a-bb31-7c87d8627c3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:22 np0005539564 nova_compute[226295]: 2025-11-29 08:32:22.047 226310 DEBUG oslo_concurrency.lockutils [req-646a9852-fade-46c6-bfc9-ad0c2ed6d145 req-83c025d5-842a-4f25-b8ba-4a8b201fbaae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:22 np0005539564 nova_compute[226295]: 2025-11-29 08:32:22.047 226310 DEBUG oslo_concurrency.lockutils [req-646a9852-fade-46c6-bfc9-ad0c2ed6d145 req-83c025d5-842a-4f25-b8ba-4a8b201fbaae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:22 np0005539564 nova_compute[226295]: 2025-11-29 08:32:22.048 226310 DEBUG nova.compute.manager [req-646a9852-fade-46c6-bfc9-ad0c2ed6d145 req-83c025d5-842a-4f25-b8ba-4a8b201fbaae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] No waiting events found dispatching network-vif-plugged-61be4f14-5854-4137-968b-8b44c045bc1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:32:22 np0005539564 nova_compute[226295]: 2025-11-29 08:32:22.049 226310 WARNING nova.compute.manager [req-646a9852-fade-46c6-bfc9-ad0c2ed6d145 req-83c025d5-842a-4f25-b8ba-4a8b201fbaae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Received unexpected event network-vif-plugged-61be4f14-5854-4137-968b-8b44c045bc1b for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:32:22 np0005539564 nova_compute[226295]: 2025-11-29 08:32:22.093 226310 INFO nova.virt.libvirt.driver [None req-dc647315-4d15-4bca-88ba-b403bda23c23 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Deleting instance files /var/lib/nova/instances/b4b22d90-9381-462a-bb31-7c87d8627c3b_del#033[00m
Nov 29 03:32:22 np0005539564 nova_compute[226295]: 2025-11-29 08:32:22.095 226310 INFO nova.virt.libvirt.driver [None req-dc647315-4d15-4bca-88ba-b403bda23c23 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Deletion of /var/lib/nova/instances/b4b22d90-9381-462a-bb31-7c87d8627c3b_del complete#033[00m
Nov 29 03:32:22 np0005539564 nova_compute[226295]: 2025-11-29 08:32:22.177 226310 INFO nova.compute.manager [None req-dc647315-4d15-4bca-88ba-b403bda23c23 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Took 1.34 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:32:22 np0005539564 nova_compute[226295]: 2025-11-29 08:32:22.178 226310 DEBUG oslo.service.loopingcall [None req-dc647315-4d15-4bca-88ba-b403bda23c23 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:32:22 np0005539564 nova_compute[226295]: 2025-11-29 08:32:22.179 226310 DEBUG nova.compute.manager [-] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:32:22 np0005539564 nova_compute[226295]: 2025-11-29 08:32:22.179 226310 DEBUG nova.network.neutron [-] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:32:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:22.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:23.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:23 np0005539564 nova_compute[226295]: 2025-11-29 08:32:23.736 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:24.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:24 np0005539564 nova_compute[226295]: 2025-11-29 08:32:24.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:24 np0005539564 nova_compute[226295]: 2025-11-29 08:32:24.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:32:25 np0005539564 nova_compute[226295]: 2025-11-29 08:32:25.433 226310 DEBUG nova.network.neutron [-] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:32:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:25.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:25 np0005539564 nova_compute[226295]: 2025-11-29 08:32:25.585 226310 DEBUG nova.compute.manager [req-a43bce30-b473-4a2d-8143-4ea14e9107aa req-ce10816d-1f35-4dc8-93ec-fbef89c1c203 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Received event network-vif-deleted-61be4f14-5854-4137-968b-8b44c045bc1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:25 np0005539564 nova_compute[226295]: 2025-11-29 08:32:25.586 226310 INFO nova.compute.manager [req-a43bce30-b473-4a2d-8143-4ea14e9107aa req-ce10816d-1f35-4dc8-93ec-fbef89c1c203 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Neutron deleted interface 61be4f14-5854-4137-968b-8b44c045bc1b; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:32:25 np0005539564 nova_compute[226295]: 2025-11-29 08:32:25.586 226310 DEBUG nova.network.neutron [req-a43bce30-b473-4a2d-8143-4ea14e9107aa req-ce10816d-1f35-4dc8-93ec-fbef89c1c203 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:32:25 np0005539564 nova_compute[226295]: 2025-11-29 08:32:25.590 226310 INFO nova.compute.manager [-] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Took 3.41 seconds to deallocate network for instance.#033[00m
Nov 29 03:32:25 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:32:25 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:32:25 np0005539564 nova_compute[226295]: 2025-11-29 08:32:25.663 226310 DEBUG nova.compute.manager [req-a43bce30-b473-4a2d-8143-4ea14e9107aa req-ce10816d-1f35-4dc8-93ec-fbef89c1c203 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Detach interface failed, port_id=61be4f14-5854-4137-968b-8b44c045bc1b, reason: Instance b4b22d90-9381-462a-bb31-7c87d8627c3b could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:32:25 np0005539564 nova_compute[226295]: 2025-11-29 08:32:25.725 226310 DEBUG oslo_concurrency.lockutils [None req-dc647315-4d15-4bca-88ba-b403bda23c23 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:25 np0005539564 nova_compute[226295]: 2025-11-29 08:32:25.725 226310 DEBUG oslo_concurrency.lockutils [None req-dc647315-4d15-4bca-88ba-b403bda23c23 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:25 np0005539564 nova_compute[226295]: 2025-11-29 08:32:25.787 226310 DEBUG oslo_concurrency.processutils [None req-dc647315-4d15-4bca-88ba-b403bda23c23 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:26.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:32:26 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2042810630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:32:26 np0005539564 nova_compute[226295]: 2025-11-29 08:32:26.237 226310 DEBUG oslo_concurrency.processutils [None req-dc647315-4d15-4bca-88ba-b403bda23c23 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:26 np0005539564 nova_compute[226295]: 2025-11-29 08:32:26.248 226310 DEBUG nova.compute.provider_tree [None req-dc647315-4d15-4bca-88ba-b403bda23c23 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:32:26 np0005539564 nova_compute[226295]: 2025-11-29 08:32:26.292 226310 DEBUG nova.scheduler.client.report [None req-dc647315-4d15-4bca-88ba-b403bda23c23 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:32:26 np0005539564 nova_compute[226295]: 2025-11-29 08:32:26.400 226310 DEBUG oslo_concurrency.lockutils [None req-dc647315-4d15-4bca-88ba-b403bda23c23 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:26 np0005539564 nova_compute[226295]: 2025-11-29 08:32:26.501 226310 INFO nova.scheduler.client.report [None req-dc647315-4d15-4bca-88ba-b403bda23c23 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Deleted allocations for instance b4b22d90-9381-462a-bb31-7c87d8627c3b#033[00m
Nov 29 03:32:26 np0005539564 nova_compute[226295]: 2025-11-29 08:32:26.530 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:26 np0005539564 nova_compute[226295]: 2025-11-29 08:32:26.734 226310 DEBUG oslo_concurrency.lockutils [None req-dc647315-4d15-4bca-88ba-b403bda23c23 7a362a419f6a492aae2f102ad2bbd5e9 eb0810bf6f5b4eb59638b7a2cf59ed5b - - default default] Lock "b4b22d90-9381-462a-bb31-7c87d8627c3b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:27.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:28.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:32:28.679 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:32:28 np0005539564 nova_compute[226295]: 2025-11-29 08:32:28.679 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:28 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:32:28.681 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:32:28 np0005539564 nova_compute[226295]: 2025-11-29 08:32:28.738 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e382 e382: 3 total, 3 up, 3 in
Nov 29 03:32:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:29.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:32:29.683 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:30.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e383 e383: 3 total, 3 up, 3 in
Nov 29 03:32:31 np0005539564 nova_compute[226295]: 2025-11-29 08:32:31.533 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:31.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:32.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:33 np0005539564 podman[290418]: 2025-11-29 08:32:33.5127656 +0000 UTC m=+0.061904796 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 29 03:32:33 np0005539564 podman[290417]: 2025-11-29 08:32:33.520938171 +0000 UTC m=+0.074413644 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, managed_by=edpm_ansible)
Nov 29 03:32:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:33.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:33 np0005539564 podman[290416]: 2025-11-29 08:32:33.548100745 +0000 UTC m=+0.103252104 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:32:33 np0005539564 nova_compute[226295]: 2025-11-29 08:32:33.740 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:34.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e384 e384: 3 total, 3 up, 3 in
Nov 29 03:32:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:35.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:36.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:36 np0005539564 nova_compute[226295]: 2025-11-29 08:32:36.285 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405141.2833982, b4b22d90-9381-462a-bb31-7c87d8627c3b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:32:36 np0005539564 nova_compute[226295]: 2025-11-29 08:32:36.286 226310 INFO nova.compute.manager [-] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:32:36 np0005539564 nova_compute[226295]: 2025-11-29 08:32:36.479 226310 DEBUG nova.compute.manager [None req-0c3b94e0-542a-49d3-823e-48919ca4a25c - - - - - -] [instance: b4b22d90-9381-462a-bb31-7c87d8627c3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:32:36 np0005539564 nova_compute[226295]: 2025-11-29 08:32:36.535 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:37 np0005539564 nova_compute[226295]: 2025-11-29 08:32:37.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:37.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:38.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:38 np0005539564 nova_compute[226295]: 2025-11-29 08:32:38.743 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e385 e385: 3 total, 3 up, 3 in
Nov 29 03:32:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:39.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:40.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:41 np0005539564 nova_compute[226295]: 2025-11-29 08:32:41.537 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:41.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:42.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:42 np0005539564 nova_compute[226295]: 2025-11-29 08:32:42.588 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:42 np0005539564 nova_compute[226295]: 2025-11-29 08:32:42.814 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:43.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:43 np0005539564 nova_compute[226295]: 2025-11-29 08:32:43.745 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e386 e386: 3 total, 3 up, 3 in
Nov 29 03:32:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:44.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:45.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:32:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:46.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:32:46 np0005539564 nova_compute[226295]: 2025-11-29 08:32:46.539 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:47.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:48.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:48 np0005539564 nova_compute[226295]: 2025-11-29 08:32:48.748 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:49.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:50.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:51 np0005539564 nova_compute[226295]: 2025-11-29 08:32:51.541 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:51.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:52.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Nov 29 03:32:53 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Nov 29 03:32:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:53.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:53 np0005539564 nova_compute[226295]: 2025-11-29 08:32:53.750 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:54.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:55.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:56.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:56 np0005539564 nova_compute[226295]: 2025-11-29 08:32:56.543 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:57.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:58.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:58 np0005539564 nova_compute[226295]: 2025-11-29 08:32:58.753 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:32:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:59.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:00.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:01 np0005539564 nova_compute[226295]: 2025-11-29 08:33:01.545 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:01.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:02.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:02 np0005539564 nova_compute[226295]: 2025-11-29 08:33:02.388 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:02 np0005539564 nova_compute[226295]: 2025-11-29 08:33:02.389 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:02 np0005539564 nova_compute[226295]: 2025-11-29 08:33:02.389 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:33:03 np0005539564 nova_compute[226295]: 2025-11-29 08:33:03.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:03.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:03.747 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:03.747 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:03.747 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:03 np0005539564 nova_compute[226295]: 2025-11-29 08:33:03.755 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:04.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:04 np0005539564 podman[290487]: 2025-11-29 08:33:04.542879669 +0000 UTC m=+0.078470954 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:33:04 np0005539564 podman[290486]: 2025-11-29 08:33:04.557944907 +0000 UTC m=+0.104277481 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:33:04 np0005539564 podman[290485]: 2025-11-29 08:33:04.604149077 +0000 UTC m=+0.151130410 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:33:05 np0005539564 nova_compute[226295]: 2025-11-29 08:33:05.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:05.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:06.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:06 np0005539564 nova_compute[226295]: 2025-11-29 08:33:06.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:06 np0005539564 nova_compute[226295]: 2025-11-29 08:33:06.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:33:06 np0005539564 nova_compute[226295]: 2025-11-29 08:33:06.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:33:06 np0005539564 nova_compute[226295]: 2025-11-29 08:33:06.367 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:33:06 np0005539564 nova_compute[226295]: 2025-11-29 08:33:06.368 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:06 np0005539564 nova_compute[226295]: 2025-11-29 08:33:06.547 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:07.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:08.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:08 np0005539564 nova_compute[226295]: 2025-11-29 08:33:08.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:08 np0005539564 nova_compute[226295]: 2025-11-29 08:33:08.758 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:09.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:10.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:11 np0005539564 nova_compute[226295]: 2025-11-29 08:33:11.549 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:11.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:12.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:12 np0005539564 nova_compute[226295]: 2025-11-29 08:33:12.670 226310 DEBUG oslo_concurrency.lockutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "782511b8-9841-4558-bc21-9a81d3913b54" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:12 np0005539564 nova_compute[226295]: 2025-11-29 08:33:12.670 226310 DEBUG oslo_concurrency.lockutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:12 np0005539564 nova_compute[226295]: 2025-11-29 08:33:12.700 226310 DEBUG nova.compute.manager [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:33:12 np0005539564 nova_compute[226295]: 2025-11-29 08:33:12.803 226310 DEBUG oslo_concurrency.lockutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:12 np0005539564 nova_compute[226295]: 2025-11-29 08:33:12.804 226310 DEBUG oslo_concurrency.lockutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:12 np0005539564 nova_compute[226295]: 2025-11-29 08:33:12.811 226310 DEBUG nova.virt.hardware [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:33:12 np0005539564 nova_compute[226295]: 2025-11-29 08:33:12.811 226310 INFO nova.compute.claims [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:33:12 np0005539564 nova_compute[226295]: 2025-11-29 08:33:12.935 226310 DEBUG oslo_concurrency.processutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:33:13 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/872567019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:33:13 np0005539564 nova_compute[226295]: 2025-11-29 08:33:13.326 226310 DEBUG oslo_concurrency.processutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:13 np0005539564 nova_compute[226295]: 2025-11-29 08:33:13.334 226310 DEBUG nova.compute.provider_tree [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:33:13 np0005539564 nova_compute[226295]: 2025-11-29 08:33:13.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:13 np0005539564 nova_compute[226295]: 2025-11-29 08:33:13.407 226310 DEBUG nova.scheduler.client.report [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:33:13 np0005539564 nova_compute[226295]: 2025-11-29 08:33:13.442 226310 DEBUG oslo_concurrency.lockutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:13 np0005539564 nova_compute[226295]: 2025-11-29 08:33:13.443 226310 DEBUG nova.compute.manager [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:33:13 np0005539564 nova_compute[226295]: 2025-11-29 08:33:13.496 226310 DEBUG nova.compute.manager [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:33:13 np0005539564 nova_compute[226295]: 2025-11-29 08:33:13.497 226310 DEBUG nova.network.neutron [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:33:13 np0005539564 nova_compute[226295]: 2025-11-29 08:33:13.517 226310 INFO nova.virt.libvirt.driver [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:33:13 np0005539564 nova_compute[226295]: 2025-11-29 08:33:13.533 226310 DEBUG nova.compute.manager [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:33:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:13.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:13 np0005539564 nova_compute[226295]: 2025-11-29 08:33:13.659 226310 DEBUG nova.compute.manager [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:33:13 np0005539564 nova_compute[226295]: 2025-11-29 08:33:13.661 226310 DEBUG nova.virt.libvirt.driver [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:33:13 np0005539564 nova_compute[226295]: 2025-11-29 08:33:13.661 226310 INFO nova.virt.libvirt.driver [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Creating image(s)#033[00m
Nov 29 03:33:13 np0005539564 nova_compute[226295]: 2025-11-29 08:33:13.688 226310 DEBUG nova.storage.rbd_utils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 782511b8-9841-4558-bc21-9a81d3913b54_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:13 np0005539564 nova_compute[226295]: 2025-11-29 08:33:13.716 226310 DEBUG nova.storage.rbd_utils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 782511b8-9841-4558-bc21-9a81d3913b54_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:13 np0005539564 nova_compute[226295]: 2025-11-29 08:33:13.740 226310 DEBUG nova.storage.rbd_utils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 782511b8-9841-4558-bc21-9a81d3913b54_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:13 np0005539564 nova_compute[226295]: 2025-11-29 08:33:13.744 226310 DEBUG oslo_concurrency.processutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:13 np0005539564 nova_compute[226295]: 2025-11-29 08:33:13.773 226310 DEBUG nova.policy [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '686f527a5723407b85ed34c8a312583f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:33:13 np0005539564 nova_compute[226295]: 2025-11-29 08:33:13.775 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:13 np0005539564 nova_compute[226295]: 2025-11-29 08:33:13.811 226310 DEBUG oslo_concurrency.processutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:13 np0005539564 nova_compute[226295]: 2025-11-29 08:33:13.811 226310 DEBUG oslo_concurrency.lockutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:13 np0005539564 nova_compute[226295]: 2025-11-29 08:33:13.812 226310 DEBUG oslo_concurrency.lockutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:13 np0005539564 nova_compute[226295]: 2025-11-29 08:33:13.812 226310 DEBUG oslo_concurrency.lockutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:13 np0005539564 nova_compute[226295]: 2025-11-29 08:33:13.836 226310 DEBUG nova.storage.rbd_utils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 782511b8-9841-4558-bc21-9a81d3913b54_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:13 np0005539564 nova_compute[226295]: 2025-11-29 08:33:13.840 226310 DEBUG oslo_concurrency.processutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 782511b8-9841-4558-bc21-9a81d3913b54_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:14 np0005539564 nova_compute[226295]: 2025-11-29 08:33:14.179 226310 DEBUG oslo_concurrency.processutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 782511b8-9841-4558-bc21-9a81d3913b54_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:14 np0005539564 nova_compute[226295]: 2025-11-29 08:33:14.258 226310 DEBUG nova.storage.rbd_utils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] resizing rbd image 782511b8-9841-4558-bc21-9a81d3913b54_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:33:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:14.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:14 np0005539564 nova_compute[226295]: 2025-11-29 08:33:14.377 226310 DEBUG nova.objects.instance [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'migration_context' on Instance uuid 782511b8-9841-4558-bc21-9a81d3913b54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:14 np0005539564 nova_compute[226295]: 2025-11-29 08:33:14.393 226310 DEBUG nova.virt.libvirt.driver [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:33:14 np0005539564 nova_compute[226295]: 2025-11-29 08:33:14.393 226310 DEBUG nova.virt.libvirt.driver [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Ensure instance console log exists: /var/lib/nova/instances/782511b8-9841-4558-bc21-9a81d3913b54/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:33:14 np0005539564 nova_compute[226295]: 2025-11-29 08:33:14.394 226310 DEBUG oslo_concurrency.lockutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:14 np0005539564 nova_compute[226295]: 2025-11-29 08:33:14.394 226310 DEBUG oslo_concurrency.lockutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:14 np0005539564 nova_compute[226295]: 2025-11-29 08:33:14.394 226310 DEBUG oslo_concurrency.lockutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:15 np0005539564 nova_compute[226295]: 2025-11-29 08:33:15.071 226310 DEBUG nova.network.neutron [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Successfully created port: 8a130a46-1e4c-4c18-8d1f-c60c770a5f49 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:33:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:15.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:15 np0005539564 nova_compute[226295]: 2025-11-29 08:33:15.917 226310 DEBUG nova.network.neutron [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Successfully updated port: 8a130a46-1e4c-4c18-8d1f-c60c770a5f49 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:33:15 np0005539564 nova_compute[226295]: 2025-11-29 08:33:15.934 226310 DEBUG oslo_concurrency.lockutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:33:15 np0005539564 nova_compute[226295]: 2025-11-29 08:33:15.935 226310 DEBUG oslo_concurrency.lockutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquired lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:33:15 np0005539564 nova_compute[226295]: 2025-11-29 08:33:15.935 226310 DEBUG nova.network.neutron [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:33:16 np0005539564 nova_compute[226295]: 2025-11-29 08:33:16.029 226310 DEBUG nova.compute.manager [req-9ee4ca48-07af-4e44-9489-d01b5d4f332e req-8c83ecd0-d3a0-492c-be50-315bdb91e40d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received event network-changed-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:16 np0005539564 nova_compute[226295]: 2025-11-29 08:33:16.029 226310 DEBUG nova.compute.manager [req-9ee4ca48-07af-4e44-9489-d01b5d4f332e req-8c83ecd0-d3a0-492c-be50-315bdb91e40d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Refreshing instance network info cache due to event network-changed-8a130a46-1e4c-4c18-8d1f-c60c770a5f49. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:33:16 np0005539564 nova_compute[226295]: 2025-11-29 08:33:16.029 226310 DEBUG oslo_concurrency.lockutils [req-9ee4ca48-07af-4e44-9489-d01b5d4f332e req-8c83ecd0-d3a0-492c-be50-315bdb91e40d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:33:16 np0005539564 nova_compute[226295]: 2025-11-29 08:33:16.095 226310 DEBUG nova.network.neutron [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:33:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:16.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:16 np0005539564 nova_compute[226295]: 2025-11-29 08:33:16.551 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:17.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:17 np0005539564 nova_compute[226295]: 2025-11-29 08:33:17.669 226310 DEBUG nova.network.neutron [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Updating instance_info_cache with network_info: [{"id": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "address": "fa:16:3e:c0:6e:5a", "network": {"id": "f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c", "bridge": "br-int", "label": "tempest-network-smoke--1013637972", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a130a46-1e", "ovs_interfaceid": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:33:17 np0005539564 nova_compute[226295]: 2025-11-29 08:33:17.690 226310 DEBUG oslo_concurrency.lockutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Releasing lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:33:17 np0005539564 nova_compute[226295]: 2025-11-29 08:33:17.691 226310 DEBUG nova.compute.manager [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Instance network_info: |[{"id": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "address": "fa:16:3e:c0:6e:5a", "network": {"id": "f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c", "bridge": "br-int", "label": "tempest-network-smoke--1013637972", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a130a46-1e", "ovs_interfaceid": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:33:17 np0005539564 nova_compute[226295]: 2025-11-29 08:33:17.691 226310 DEBUG oslo_concurrency.lockutils [req-9ee4ca48-07af-4e44-9489-d01b5d4f332e req-8c83ecd0-d3a0-492c-be50-315bdb91e40d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:33:17 np0005539564 nova_compute[226295]: 2025-11-29 08:33:17.692 226310 DEBUG nova.network.neutron [req-9ee4ca48-07af-4e44-9489-d01b5d4f332e req-8c83ecd0-d3a0-492c-be50-315bdb91e40d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Refreshing network info cache for port 8a130a46-1e4c-4c18-8d1f-c60c770a5f49 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:33:17 np0005539564 nova_compute[226295]: 2025-11-29 08:33:17.697 226310 DEBUG nova.virt.libvirt.driver [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Start _get_guest_xml network_info=[{"id": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "address": "fa:16:3e:c0:6e:5a", "network": {"id": "f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c", "bridge": "br-int", "label": "tempest-network-smoke--1013637972", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a130a46-1e", "ovs_interfaceid": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:33:17 np0005539564 nova_compute[226295]: 2025-11-29 08:33:17.704 226310 WARNING nova.virt.libvirt.driver [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:33:17 np0005539564 nova_compute[226295]: 2025-11-29 08:33:17.711 226310 DEBUG nova.virt.libvirt.host [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:33:17 np0005539564 nova_compute[226295]: 2025-11-29 08:33:17.711 226310 DEBUG nova.virt.libvirt.host [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:33:17 np0005539564 nova_compute[226295]: 2025-11-29 08:33:17.721 226310 DEBUG nova.virt.libvirt.host [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:33:17 np0005539564 nova_compute[226295]: 2025-11-29 08:33:17.722 226310 DEBUG nova.virt.libvirt.host [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:33:17 np0005539564 nova_compute[226295]: 2025-11-29 08:33:17.723 226310 DEBUG nova.virt.libvirt.driver [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:33:17 np0005539564 nova_compute[226295]: 2025-11-29 08:33:17.724 226310 DEBUG nova.virt.hardware [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:33:17 np0005539564 nova_compute[226295]: 2025-11-29 08:33:17.724 226310 DEBUG nova.virt.hardware [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:33:17 np0005539564 nova_compute[226295]: 2025-11-29 08:33:17.725 226310 DEBUG nova.virt.hardware [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:33:17 np0005539564 nova_compute[226295]: 2025-11-29 08:33:17.725 226310 DEBUG nova.virt.hardware [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:33:17 np0005539564 nova_compute[226295]: 2025-11-29 08:33:17.725 226310 DEBUG nova.virt.hardware [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:33:17 np0005539564 nova_compute[226295]: 2025-11-29 08:33:17.725 226310 DEBUG nova.virt.hardware [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:33:17 np0005539564 nova_compute[226295]: 2025-11-29 08:33:17.726 226310 DEBUG nova.virt.hardware [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:33:17 np0005539564 nova_compute[226295]: 2025-11-29 08:33:17.726 226310 DEBUG nova.virt.hardware [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:33:17 np0005539564 nova_compute[226295]: 2025-11-29 08:33:17.726 226310 DEBUG nova.virt.hardware [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:33:17 np0005539564 nova_compute[226295]: 2025-11-29 08:33:17.727 226310 DEBUG nova.virt.hardware [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:33:17 np0005539564 nova_compute[226295]: 2025-11-29 08:33:17.727 226310 DEBUG nova.virt.hardware [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:33:17 np0005539564 nova_compute[226295]: 2025-11-29 08:33:17.730 226310 DEBUG oslo_concurrency.processutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:33:18 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3608195406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.160 226310 DEBUG oslo_concurrency.processutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.194 226310 DEBUG nova.storage.rbd_utils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 782511b8-9841-4558-bc21-9a81d3913b54_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.198 226310 DEBUG oslo_concurrency.processutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:18.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:33:18 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3236600486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.649 226310 DEBUG oslo_concurrency.processutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.650 226310 DEBUG nova.virt.libvirt.vif [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:33:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1326828449',display_name='tempest-TestNetworkAdvancedServerOps-server-1326828449',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1326828449',id=175,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGSsTatB0ntZvgpT1iFQOjTdjEe6U2LspHqhHVlH5yZ8EV93LX7uxMrpvCyJRoDivS5erw2JcnGjpRKngF+GjO4y0hQO2CgxrKJ2TL+ibBoOMIlLXbWea/NN/kfP4yKXpw==',key_name='tempest-TestNetworkAdvancedServerOps-1995448580',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-d1ar0502',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:33:13Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=782511b8-9841-4558-bc21-9a81d3913b54,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "address": "fa:16:3e:c0:6e:5a", "network": {"id": "f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c", "bridge": "br-int", "label": "tempest-network-smoke--1013637972", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a130a46-1e", "ovs_interfaceid": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.651 226310 DEBUG nova.network.os_vif_util [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "address": "fa:16:3e:c0:6e:5a", "network": {"id": "f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c", "bridge": "br-int", "label": "tempest-network-smoke--1013637972", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a130a46-1e", "ovs_interfaceid": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.652 226310 DEBUG nova.network.os_vif_util [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:6e:5a,bridge_name='br-int',has_traffic_filtering=True,id=8a130a46-1e4c-4c18-8d1f-c60c770a5f49,network=Network(f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a130a46-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.654 226310 DEBUG nova.objects.instance [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'pci_devices' on Instance uuid 782511b8-9841-4558-bc21-9a81d3913b54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.671 226310 DEBUG nova.virt.libvirt.driver [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:33:18 np0005539564 nova_compute[226295]:  <uuid>782511b8-9841-4558-bc21-9a81d3913b54</uuid>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:  <name>instance-000000af</name>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1326828449</nova:name>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:33:17</nova:creationTime>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:33:18 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:        <nova:user uuid="686f527a5723407b85ed34c8a312583f">tempest-TestNetworkAdvancedServerOps-382266774-project-member</nova:user>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:        <nova:project uuid="c4ca87a38a19497f84b6d2c170c4fe75">tempest-TestNetworkAdvancedServerOps-382266774</nova:project>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:        <nova:port uuid="8a130a46-1e4c-4c18-8d1f-c60c770a5f49">
Nov 29 03:33:18 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <entry name="serial">782511b8-9841-4558-bc21-9a81d3913b54</entry>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <entry name="uuid">782511b8-9841-4558-bc21-9a81d3913b54</entry>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/782511b8-9841-4558-bc21-9a81d3913b54_disk">
Nov 29 03:33:18 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:33:18 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/782511b8-9841-4558-bc21-9a81d3913b54_disk.config">
Nov 29 03:33:18 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:33:18 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:c0:6e:5a"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <target dev="tap8a130a46-1e"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/782511b8-9841-4558-bc21-9a81d3913b54/console.log" append="off"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:33:18 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:33:18 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:33:18 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:33:18 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.673 226310 DEBUG nova.compute.manager [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Preparing to wait for external event network-vif-plugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.674 226310 DEBUG oslo_concurrency.lockutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "782511b8-9841-4558-bc21-9a81d3913b54-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.674 226310 DEBUG oslo_concurrency.lockutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.674 226310 DEBUG oslo_concurrency.lockutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.675 226310 DEBUG nova.virt.libvirt.vif [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:33:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1326828449',display_name='tempest-TestNetworkAdvancedServerOps-server-1326828449',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1326828449',id=175,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGSsTatB0ntZvgpT1iFQOjTdjEe6U2LspHqhHVlH5yZ8EV93LX7uxMrpvCyJRoDivS5erw2JcnGjpRKngF+GjO4y0hQO2CgxrKJ2TL+ibBoOMIlLXbWea/NN/kfP4yKXpw==',key_name='tempest-TestNetworkAdvancedServerOps-1995448580',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-d1ar0502',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:33:13Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=782511b8-9841-4558-bc21-9a81d3913b54,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "address": "fa:16:3e:c0:6e:5a", "network": {"id": "f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c", "bridge": "br-int", "label": "tempest-network-smoke--1013637972", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a130a46-1e", "ovs_interfaceid": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.675 226310 DEBUG nova.network.os_vif_util [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "address": "fa:16:3e:c0:6e:5a", "network": {"id": "f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c", "bridge": "br-int", "label": "tempest-network-smoke--1013637972", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a130a46-1e", "ovs_interfaceid": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.676 226310 DEBUG nova.network.os_vif_util [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:6e:5a,bridge_name='br-int',has_traffic_filtering=True,id=8a130a46-1e4c-4c18-8d1f-c60c770a5f49,network=Network(f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a130a46-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.676 226310 DEBUG os_vif [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:6e:5a,bridge_name='br-int',has_traffic_filtering=True,id=8a130a46-1e4c-4c18-8d1f-c60c770a5f49,network=Network(f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a130a46-1e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.677 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.677 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.677 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.680 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.681 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a130a46-1e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.681 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8a130a46-1e, col_values=(('external_ids', {'iface-id': '8a130a46-1e4c-4c18-8d1f-c60c770a5f49', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c0:6e:5a', 'vm-uuid': '782511b8-9841-4558-bc21-9a81d3913b54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.683 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:18 np0005539564 NetworkManager[48997]: <info>  [1764405198.6844] manager: (tap8a130a46-1e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/309)
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.685 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.688 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.689 226310 INFO os_vif [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:6e:5a,bridge_name='br-int',has_traffic_filtering=True,id=8a130a46-1e4c-4c18-8d1f-c60c770a5f49,network=Network(f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a130a46-1e')#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.759 226310 DEBUG nova.virt.libvirt.driver [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.760 226310 DEBUG nova.virt.libvirt.driver [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.760 226310 DEBUG nova.virt.libvirt.driver [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No VIF found with MAC fa:16:3e:c0:6e:5a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.760 226310 INFO nova.virt.libvirt.driver [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Using config drive#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.793 226310 DEBUG nova.storage.rbd_utils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 782511b8-9841-4558-bc21-9a81d3913b54_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:18 np0005539564 nova_compute[226295]: 2025-11-29 08:33:18.799 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:19 np0005539564 nova_compute[226295]: 2025-11-29 08:33:19.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:19 np0005539564 nova_compute[226295]: 2025-11-29 08:33:19.383 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:19 np0005539564 nova_compute[226295]: 2025-11-29 08:33:19.384 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:19 np0005539564 nova_compute[226295]: 2025-11-29 08:33:19.384 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:19 np0005539564 nova_compute[226295]: 2025-11-29 08:33:19.384 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:33:19 np0005539564 nova_compute[226295]: 2025-11-29 08:33:19.385 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:19.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:19 np0005539564 nova_compute[226295]: 2025-11-29 08:33:19.823 226310 INFO nova.virt.libvirt.driver [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Creating config drive at /var/lib/nova/instances/782511b8-9841-4558-bc21-9a81d3913b54/disk.config#033[00m
Nov 29 03:33:19 np0005539564 nova_compute[226295]: 2025-11-29 08:33:19.833 226310 DEBUG oslo_concurrency.processutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/782511b8-9841-4558-bc21-9a81d3913b54/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpanfxvmd8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:19 np0005539564 nova_compute[226295]: 2025-11-29 08:33:19.908 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:19 np0005539564 nova_compute[226295]: 2025-11-29 08:33:19.976 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000af as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:33:19 np0005539564 nova_compute[226295]: 2025-11-29 08:33:19.976 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000af as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:33:19 np0005539564 nova_compute[226295]: 2025-11-29 08:33:19.981 226310 DEBUG oslo_concurrency.processutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/782511b8-9841-4558-bc21-9a81d3913b54/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpanfxvmd8" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.019 226310 DEBUG nova.storage.rbd_utils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 782511b8-9841-4558-bc21-9a81d3913b54_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.025 226310 DEBUG oslo_concurrency.processutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/782511b8-9841-4558-bc21-9a81d3913b54/disk.config 782511b8-9841-4558-bc21-9a81d3913b54_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.253 226310 DEBUG oslo_concurrency.processutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/782511b8-9841-4558-bc21-9a81d3913b54/disk.config 782511b8-9841-4558-bc21-9a81d3913b54_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.228s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.254 226310 INFO nova.virt.libvirt.driver [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Deleting local config drive /var/lib/nova/instances/782511b8-9841-4558-bc21-9a81d3913b54/disk.config because it was imported into RBD.#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.268 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.270 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4247MB free_disk=20.946605682373047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.270 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.271 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:20.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:20 np0005539564 kernel: tap8a130a46-1e: entered promiscuous mode
Nov 29 03:33:20 np0005539564 NetworkManager[48997]: <info>  [1764405200.3313] manager: (tap8a130a46-1e): new Tun device (/org/freedesktop/NetworkManager/Devices/310)
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.331 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:20 np0005539564 ovn_controller[130591]: 2025-11-29T08:33:20Z|00682|binding|INFO|Claiming lport 8a130a46-1e4c-4c18-8d1f-c60c770a5f49 for this chassis.
Nov 29 03:33:20 np0005539564 ovn_controller[130591]: 2025-11-29T08:33:20Z|00683|binding|INFO|8a130a46-1e4c-4c18-8d1f-c60c770a5f49: Claiming fa:16:3e:c0:6e:5a 10.100.0.4
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.344 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.348 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.357 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:6e:5a 10.100.0.4'], port_security=['fa:16:3e:c0:6e:5a 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '782511b8-9841-4558-bc21-9a81d3913b54', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bfa4c1a9-d993-4c80-84c8-af76e286907f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d02d2157-b362-405a-8753-8c1be0d0ef4c, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=8a130a46-1e4c-4c18-8d1f-c60c770a5f49) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.359 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 8a130a46-1e4c-4c18-8d1f-c60c770a5f49 in datapath f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c bound to our chassis#033[00m
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.361 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c#033[00m
Nov 29 03:33:20 np0005539564 systemd-udevd[290892]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.377 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[43980a55-2066-4d7c-94e2-63f2e24a2364]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.378 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf4afd5c3-f1 in ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:33:20 np0005539564 NetworkManager[48997]: <info>  [1764405200.3808] device (tap8a130a46-1e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:33:20 np0005539564 NetworkManager[48997]: <info>  [1764405200.3820] device (tap8a130a46-1e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:33:20 np0005539564 systemd-machined[190128]: New machine qemu-82-instance-000000af.
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.381 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf4afd5c3-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.381 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4f6d9ea7-2de2-4949-a896-56794dc49116]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.383 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3743534b-1316-4b72-8162-2bc78780cb05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.394 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[36e73661-0e7f-4e58-8edc-d746f44e78c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.397 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 782511b8-9841-4558-bc21-9a81d3913b54 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.398 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.398 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:33:20 np0005539564 ovn_controller[130591]: 2025-11-29T08:33:20Z|00684|binding|INFO|Setting lport 8a130a46-1e4c-4c18-8d1f-c60c770a5f49 ovn-installed in OVS
Nov 29 03:33:20 np0005539564 systemd[1]: Started Virtual Machine qemu-82-instance-000000af.
Nov 29 03:33:20 np0005539564 ovn_controller[130591]: 2025-11-29T08:33:20Z|00685|binding|INFO|Setting lport 8a130a46-1e4c-4c18-8d1f-c60c770a5f49 up in Southbound
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.411 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.428 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing inventories for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.436 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3c4138df-5a03-4931-88ab-efb3ac01e38a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.464 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[de8800c0-eb35-46d1-a34b-39e674f97c87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:20 np0005539564 NetworkManager[48997]: <info>  [1764405200.4706] manager: (tapf4afd5c3-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/311)
Nov 29 03:33:20 np0005539564 systemd-udevd[290896]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.469 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7c4043c5-f777-4a70-9936-1088d8d048f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.505 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[d3b65064-dab4-4616-abb6-5e6e049e8701]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.509 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b8746d-8dfd-46a1-ada5-3c02aa528f18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.520 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating ProviderTree inventory for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.520 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating inventory in ProviderTree for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:33:20 np0005539564 NetworkManager[48997]: <info>  [1764405200.5354] device (tapf4afd5c3-f0): carrier: link connected
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.537 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing aggregate associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.542 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[cf0f3222-9ece-4ac2-995f-8f1c1b79cbbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.558 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5d72cadc-0c0b-4d02-a72c-a428ca5bdce9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4afd5c3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:97:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 208], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817521, 'reachable_time': 28810, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290926, 'error': None, 'target': 'ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.559 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing trait associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.570 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[cb32a6a4-63e2-47f6-8b95-7d6978882699]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8e:97be'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 817521, 'tstamp': 817521}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290927, 'error': None, 'target': 'ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.590 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7f3612a1-d3a9-41ac-927e-a2e9d4a7e66e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4afd5c3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:97:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 208], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817521, 'reachable_time': 28810, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290928, 'error': None, 'target': 'ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.603 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.620 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[effcc669-dd94-4fc9-b63e-a8a7230e1914]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.682 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[23478d09-8cfc-4ca6-a884-1bc384a07f47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.684 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4afd5c3-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.684 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.684 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4afd5c3-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:20 np0005539564 NetworkManager[48997]: <info>  [1764405200.6915] manager: (tapf4afd5c3-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/312)
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.691 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:20 np0005539564 kernel: tapf4afd5c3-f0: entered promiscuous mode
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.694 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf4afd5c3-f0, col_values=(('external_ids', {'iface-id': 'ff03e0e0-7321-4974-89e3-44f271d6956a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.694 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.696 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:20 np0005539564 ovn_controller[130591]: 2025-11-29T08:33:20Z|00686|binding|INFO|Releasing lport ff03e0e0-7321-4974-89e3-44f271d6956a from this chassis (sb_readonly=0)
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.708 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.709 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b26b41e6-1674-4df5-84db-69d8cb03efe8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.710 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c.pid.haproxy
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:33:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:20.710 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c', 'env', 'PROCESS_TAG=haproxy-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.715 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.832 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405200.8314726, 782511b8-9841-4558-bc21-9a81d3913b54 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.833 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] VM Started (Lifecycle Event)#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.858 226310 DEBUG nova.compute.manager [req-b3d80aca-b8f6-4e3f-9a49-3b9ed3812fce req-578064a3-5c3b-49dd-b43a-9ff4fe57212d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received event network-vif-plugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.858 226310 DEBUG oslo_concurrency.lockutils [req-b3d80aca-b8f6-4e3f-9a49-3b9ed3812fce req-578064a3-5c3b-49dd-b43a-9ff4fe57212d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "782511b8-9841-4558-bc21-9a81d3913b54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.859 226310 DEBUG oslo_concurrency.lockutils [req-b3d80aca-b8f6-4e3f-9a49-3b9ed3812fce req-578064a3-5c3b-49dd-b43a-9ff4fe57212d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.859 226310 DEBUG oslo_concurrency.lockutils [req-b3d80aca-b8f6-4e3f-9a49-3b9ed3812fce req-578064a3-5c3b-49dd-b43a-9ff4fe57212d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.860 226310 DEBUG nova.compute.manager [req-b3d80aca-b8f6-4e3f-9a49-3b9ed3812fce req-578064a3-5c3b-49dd-b43a-9ff4fe57212d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Processing event network-vif-plugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.862 226310 DEBUG nova.compute.manager [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.863 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.870 226310 DEBUG nova.virt.libvirt.driver [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.874 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.883 226310 INFO nova.virt.libvirt.driver [-] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Instance spawned successfully.#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.884 226310 DEBUG nova.virt.libvirt.driver [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.897 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.898 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405200.8371005, 782511b8-9841-4558-bc21-9a81d3913b54 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.898 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.909 226310 DEBUG nova.virt.libvirt.driver [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.910 226310 DEBUG nova.virt.libvirt.driver [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.910 226310 DEBUG nova.virt.libvirt.driver [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.910 226310 DEBUG nova.virt.libvirt.driver [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.911 226310 DEBUG nova.virt.libvirt.driver [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.911 226310 DEBUG nova.virt.libvirt.driver [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.916 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.919 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405200.887461, 782511b8-9841-4558-bc21-9a81d3913b54 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.920 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.947 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.950 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:33:20 np0005539564 nova_compute[226295]: 2025-11-29 08:33:20.989 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:33:21 np0005539564 nova_compute[226295]: 2025-11-29 08:33:21.009 226310 INFO nova.compute.manager [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Took 7.35 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:33:21 np0005539564 nova_compute[226295]: 2025-11-29 08:33:21.009 226310 DEBUG nova.compute.manager [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:33:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:33:21 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4064002057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:33:21 np0005539564 nova_compute[226295]: 2025-11-29 08:33:21.054 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:21 np0005539564 nova_compute[226295]: 2025-11-29 08:33:21.059 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:33:21 np0005539564 nova_compute[226295]: 2025-11-29 08:33:21.094 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:33:21 np0005539564 nova_compute[226295]: 2025-11-29 08:33:21.115 226310 INFO nova.compute.manager [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Took 8.35 seconds to build instance.#033[00m
Nov 29 03:33:21 np0005539564 podman[291022]: 2025-11-29 08:33:21.117235562 +0000 UTC m=+0.056094679 container create d9196cdc7b4c1ed4a86fdbd5426fb60b2ec03490af065d08918a554a835cc7bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:33:21 np0005539564 nova_compute[226295]: 2025-11-29 08:33:21.133 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:33:21 np0005539564 nova_compute[226295]: 2025-11-29 08:33:21.134 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:21 np0005539564 nova_compute[226295]: 2025-11-29 08:33:21.151 226310 DEBUG nova.network.neutron [req-9ee4ca48-07af-4e44-9489-d01b5d4f332e req-8c83ecd0-d3a0-492c-be50-315bdb91e40d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Updated VIF entry in instance network info cache for port 8a130a46-1e4c-4c18-8d1f-c60c770a5f49. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:33:21 np0005539564 nova_compute[226295]: 2025-11-29 08:33:21.152 226310 DEBUG nova.network.neutron [req-9ee4ca48-07af-4e44-9489-d01b5d4f332e req-8c83ecd0-d3a0-492c-be50-315bdb91e40d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Updating instance_info_cache with network_info: [{"id": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "address": "fa:16:3e:c0:6e:5a", "network": {"id": "f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c", "bridge": "br-int", "label": "tempest-network-smoke--1013637972", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a130a46-1e", "ovs_interfaceid": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:33:21 np0005539564 nova_compute[226295]: 2025-11-29 08:33:21.154 226310 DEBUG oslo_concurrency.lockutils [None req-70a79fc4-1c7f-4fb7-8a3f-3ce4aa92f91e 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.484s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:21 np0005539564 systemd[1]: Started libpod-conmon-d9196cdc7b4c1ed4a86fdbd5426fb60b2ec03490af065d08918a554a835cc7bd.scope.
Nov 29 03:33:21 np0005539564 nova_compute[226295]: 2025-11-29 08:33:21.169 226310 DEBUG oslo_concurrency.lockutils [req-9ee4ca48-07af-4e44-9489-d01b5d4f332e req-8c83ecd0-d3a0-492c-be50-315bdb91e40d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:33:21 np0005539564 podman[291022]: 2025-11-29 08:33:21.08835002 +0000 UTC m=+0.027209177 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:33:21 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:33:21 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada6b0f0e055a59f533cd382677ef17f49516cccb9bf181bb168df837e52a153/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:21 np0005539564 podman[291022]: 2025-11-29 08:33:21.214279047 +0000 UTC m=+0.153138214 container init d9196cdc7b4c1ed4a86fdbd5426fb60b2ec03490af065d08918a554a835cc7bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:33:21 np0005539564 podman[291022]: 2025-11-29 08:33:21.222059947 +0000 UTC m=+0.160919064 container start d9196cdc7b4c1ed4a86fdbd5426fb60b2ec03490af065d08918a554a835cc7bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:33:21 np0005539564 neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c[291037]: [NOTICE]   (291041) : New worker (291043) forked
Nov 29 03:33:21 np0005539564 neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c[291037]: [NOTICE]   (291041) : Loading success.
Nov 29 03:33:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:21.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:22.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:23 np0005539564 nova_compute[226295]: 2025-11-29 08:33:23.009 226310 DEBUG nova.compute.manager [req-590a481e-89b5-45a4-9aa0-159526c9ac5f req-e0d06237-cd31-4532-8b8e-30a1500138d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received event network-vif-plugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:23 np0005539564 nova_compute[226295]: 2025-11-29 08:33:23.011 226310 DEBUG oslo_concurrency.lockutils [req-590a481e-89b5-45a4-9aa0-159526c9ac5f req-e0d06237-cd31-4532-8b8e-30a1500138d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "782511b8-9841-4558-bc21-9a81d3913b54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:23 np0005539564 nova_compute[226295]: 2025-11-29 08:33:23.011 226310 DEBUG oslo_concurrency.lockutils [req-590a481e-89b5-45a4-9aa0-159526c9ac5f req-e0d06237-cd31-4532-8b8e-30a1500138d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:23 np0005539564 nova_compute[226295]: 2025-11-29 08:33:23.012 226310 DEBUG oslo_concurrency.lockutils [req-590a481e-89b5-45a4-9aa0-159526c9ac5f req-e0d06237-cd31-4532-8b8e-30a1500138d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:23 np0005539564 nova_compute[226295]: 2025-11-29 08:33:23.013 226310 DEBUG nova.compute.manager [req-590a481e-89b5-45a4-9aa0-159526c9ac5f req-e0d06237-cd31-4532-8b8e-30a1500138d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] No waiting events found dispatching network-vif-plugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:33:23 np0005539564 nova_compute[226295]: 2025-11-29 08:33:23.013 226310 WARNING nova.compute.manager [req-590a481e-89b5-45a4-9aa0-159526c9ac5f req-e0d06237-cd31-4532-8b8e-30a1500138d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received unexpected event network-vif-plugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:33:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:23.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:23 np0005539564 nova_compute[226295]: 2025-11-29 08:33:23.685 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:23 np0005539564 nova_compute[226295]: 2025-11-29 08:33:23.764 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:24.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:25 np0005539564 nova_compute[226295]: 2025-11-29 08:33:25.391 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:25 np0005539564 NetworkManager[48997]: <info>  [1764405205.3929] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/313)
Nov 29 03:33:25 np0005539564 NetworkManager[48997]: <info>  [1764405205.3948] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/314)
Nov 29 03:33:25 np0005539564 nova_compute[226295]: 2025-11-29 08:33:25.556 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:25 np0005539564 ovn_controller[130591]: 2025-11-29T08:33:25Z|00687|binding|INFO|Releasing lport ff03e0e0-7321-4974-89e3-44f271d6956a from this chassis (sb_readonly=0)
Nov 29 03:33:25 np0005539564 nova_compute[226295]: 2025-11-29 08:33:25.578 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:25.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:26.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:27.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:28 np0005539564 nova_compute[226295]: 2025-11-29 08:33:28.007 226310 DEBUG nova.compute.manager [req-a7062fee-b976-4f95-b119-1bb9c57bad94 req-ca55f7da-9ae7-4d4d-b2ae-c9ce9b96c70c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received event network-changed-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:28 np0005539564 nova_compute[226295]: 2025-11-29 08:33:28.007 226310 DEBUG nova.compute.manager [req-a7062fee-b976-4f95-b119-1bb9c57bad94 req-ca55f7da-9ae7-4d4d-b2ae-c9ce9b96c70c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Refreshing instance network info cache due to event network-changed-8a130a46-1e4c-4c18-8d1f-c60c770a5f49. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:33:28 np0005539564 nova_compute[226295]: 2025-11-29 08:33:28.007 226310 DEBUG oslo_concurrency.lockutils [req-a7062fee-b976-4f95-b119-1bb9c57bad94 req-ca55f7da-9ae7-4d4d-b2ae-c9ce9b96c70c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:33:28 np0005539564 nova_compute[226295]: 2025-11-29 08:33:28.008 226310 DEBUG oslo_concurrency.lockutils [req-a7062fee-b976-4f95-b119-1bb9c57bad94 req-ca55f7da-9ae7-4d4d-b2ae-c9ce9b96c70c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:33:28 np0005539564 nova_compute[226295]: 2025-11-29 08:33:28.008 226310 DEBUG nova.network.neutron [req-a7062fee-b976-4f95-b119-1bb9c57bad94 req-ca55f7da-9ae7-4d4d-b2ae-c9ce9b96c70c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Refreshing network info cache for port 8a130a46-1e4c-4c18-8d1f-c60c770a5f49 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:33:28 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:33:28 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:33:28 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:33:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:28.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:28 np0005539564 nova_compute[226295]: 2025-11-29 08:33:28.689 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:28 np0005539564 nova_compute[226295]: 2025-11-29 08:33:28.767 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:29.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:29 np0005539564 nova_compute[226295]: 2025-11-29 08:33:29.639 226310 DEBUG nova.network.neutron [req-a7062fee-b976-4f95-b119-1bb9c57bad94 req-ca55f7da-9ae7-4d4d-b2ae-c9ce9b96c70c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Updated VIF entry in instance network info cache for port 8a130a46-1e4c-4c18-8d1f-c60c770a5f49. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:33:29 np0005539564 nova_compute[226295]: 2025-11-29 08:33:29.640 226310 DEBUG nova.network.neutron [req-a7062fee-b976-4f95-b119-1bb9c57bad94 req-ca55f7da-9ae7-4d4d-b2ae-c9ce9b96c70c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Updating instance_info_cache with network_info: [{"id": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "address": "fa:16:3e:c0:6e:5a", "network": {"id": "f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c", "bridge": "br-int", "label": "tempest-network-smoke--1013637972", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a130a46-1e", "ovs_interfaceid": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:33:29 np0005539564 nova_compute[226295]: 2025-11-29 08:33:29.661 226310 DEBUG oslo_concurrency.lockutils [req-a7062fee-b976-4f95-b119-1bb9c57bad94 req-ca55f7da-9ae7-4d4d-b2ae-c9ce9b96c70c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:33:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:30.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:31.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:32.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:33.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:33 np0005539564 nova_compute[226295]: 2025-11-29 08:33:33.694 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:33 np0005539564 nova_compute[226295]: 2025-11-29 08:33:33.769 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:34.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:34 np0005539564 podman[291213]: 2025-11-29 08:33:34.888773977 +0000 UTC m=+0.105367842 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:33:34 np0005539564 podman[291212]: 2025-11-29 08:33:34.897050641 +0000 UTC m=+0.115332121 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:33:34 np0005539564 podman[291211]: 2025-11-29 08:33:34.904301986 +0000 UTC m=+0.124099298 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 29 03:33:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:35.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:35 np0005539564 ovn_controller[130591]: 2025-11-29T08:33:35Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c0:6e:5a 10.100.0.4
Nov 29 03:33:35 np0005539564 ovn_controller[130591]: 2025-11-29T08:33:35Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c0:6e:5a 10.100.0.4
Nov 29 03:33:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:36.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:36 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:33:36 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:33:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:37.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:38.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:38 np0005539564 nova_compute[226295]: 2025-11-29 08:33:38.699 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:38 np0005539564 nova_compute[226295]: 2025-11-29 08:33:38.770 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e387 e387: 3 total, 3 up, 3 in
Nov 29 03:33:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:39.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:40.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e388 e388: 3 total, 3 up, 3 in
Nov 29 03:33:41 np0005539564 nova_compute[226295]: 2025-11-29 08:33:41.521 226310 INFO nova.compute.manager [None req-85f98681-9ff5-498f-90f2-315d3f270b03 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Get console output#033[00m
Nov 29 03:33:41 np0005539564 nova_compute[226295]: 2025-11-29 08:33:41.531 270504 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:33:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:41.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:42.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e389 e389: 3 total, 3 up, 3 in
Nov 29 03:33:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:43.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:43 np0005539564 nova_compute[226295]: 2025-11-29 08:33:43.705 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:43 np0005539564 nova_compute[226295]: 2025-11-29 08:33:43.774 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e390 e390: 3 total, 3 up, 3 in
Nov 29 03:33:44 np0005539564 nova_compute[226295]: 2025-11-29 08:33:44.191 226310 INFO nova.compute.manager [None req-0bbb451b-f9ba-40f7-a7b5-211b1b7e7474 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Get console output#033[00m
Nov 29 03:33:44 np0005539564 nova_compute[226295]: 2025-11-29 08:33:44.206 270504 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:33:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:44.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:45.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:46.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:47 np0005539564 nova_compute[226295]: 2025-11-29 08:33:47.127 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:47.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:48 np0005539564 nova_compute[226295]: 2025-11-29 08:33:48.143 226310 DEBUG oslo_concurrency.lockutils [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Acquiring lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:33:48 np0005539564 nova_compute[226295]: 2025-11-29 08:33:48.144 226310 DEBUG oslo_concurrency.lockutils [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Acquired lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:33:48 np0005539564 nova_compute[226295]: 2025-11-29 08:33:48.144 226310 DEBUG nova.network.neutron [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:33:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:48.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:48 np0005539564 nova_compute[226295]: 2025-11-29 08:33:48.709 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:48 np0005539564 nova_compute[226295]: 2025-11-29 08:33:48.776 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e391 e391: 3 total, 3 up, 3 in
Nov 29 03:33:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:49.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:49 np0005539564 nova_compute[226295]: 2025-11-29 08:33:49.869 226310 DEBUG nova.network.neutron [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Updating instance_info_cache with network_info: [{"id": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "address": "fa:16:3e:c0:6e:5a", "network": {"id": "f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c", "bridge": "br-int", "label": "tempest-network-smoke--1013637972", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a130a46-1e", "ovs_interfaceid": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:33:49 np0005539564 nova_compute[226295]: 2025-11-29 08:33:49.899 226310 DEBUG oslo_concurrency.lockutils [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Releasing lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:33:50 np0005539564 nova_compute[226295]: 2025-11-29 08:33:50.136 226310 DEBUG nova.virt.libvirt.driver [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Nov 29 03:33:50 np0005539564 nova_compute[226295]: 2025-11-29 08:33:50.137 226310 DEBUG nova.virt.libvirt.volume.remotefs [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Creating file /var/lib/nova/instances/782511b8-9841-4558-bc21-9a81d3913b54/b4c78acd11454cb39cf90eb5bfaef13d.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Nov 29 03:33:50 np0005539564 nova_compute[226295]: 2025-11-29 08:33:50.137 226310 DEBUG oslo_concurrency.processutils [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/782511b8-9841-4558-bc21-9a81d3913b54/b4c78acd11454cb39cf90eb5bfaef13d.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:50.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:50 np0005539564 nova_compute[226295]: 2025-11-29 08:33:50.690 226310 DEBUG oslo_concurrency.processutils [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/782511b8-9841-4558-bc21-9a81d3913b54/b4c78acd11454cb39cf90eb5bfaef13d.tmp" returned: 1 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:50 np0005539564 nova_compute[226295]: 2025-11-29 08:33:50.691 226310 DEBUG oslo_concurrency.processutils [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/782511b8-9841-4558-bc21-9a81d3913b54/b4c78acd11454cb39cf90eb5bfaef13d.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 03:33:50 np0005539564 nova_compute[226295]: 2025-11-29 08:33:50.692 226310 DEBUG nova.virt.libvirt.volume.remotefs [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Creating directory /var/lib/nova/instances/782511b8-9841-4558-bc21-9a81d3913b54 on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Nov 29 03:33:50 np0005539564 nova_compute[226295]: 2025-11-29 08:33:50.692 226310 DEBUG oslo_concurrency.processutils [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/782511b8-9841-4558-bc21-9a81d3913b54 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:50 np0005539564 nova_compute[226295]: 2025-11-29 08:33:50.920 226310 DEBUG oslo_concurrency.processutils [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/782511b8-9841-4558-bc21-9a81d3913b54" returned: 0 in 0.228s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:50 np0005539564 nova_compute[226295]: 2025-11-29 08:33:50.926 226310 DEBUG nova.virt.libvirt.driver [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:33:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:51.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:51 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e392 e392: 3 total, 3 up, 3 in
Nov 29 03:33:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:52.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e393 e393: 3 total, 3 up, 3 in
Nov 29 03:33:53 np0005539564 kernel: tap8a130a46-1e (unregistering): left promiscuous mode
Nov 29 03:33:53 np0005539564 NetworkManager[48997]: <info>  [1764405233.6564] device (tap8a130a46-1e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:33:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:53.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:53 np0005539564 ovn_controller[130591]: 2025-11-29T08:33:53Z|00688|binding|INFO|Releasing lport 8a130a46-1e4c-4c18-8d1f-c60c770a5f49 from this chassis (sb_readonly=0)
Nov 29 03:33:53 np0005539564 ovn_controller[130591]: 2025-11-29T08:33:53Z|00689|binding|INFO|Setting lport 8a130a46-1e4c-4c18-8d1f-c60c770a5f49 down in Southbound
Nov 29 03:33:53 np0005539564 ovn_controller[130591]: 2025-11-29T08:33:53Z|00690|binding|INFO|Removing iface tap8a130a46-1e ovn-installed in OVS
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.665 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.672 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:53.677 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:6e:5a 10.100.0.4'], port_security=['fa:16:3e:c0:6e:5a 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '782511b8-9841-4558-bc21-9a81d3913b54', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bfa4c1a9-d993-4c80-84c8-af76e286907f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.196'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d02d2157-b362-405a-8753-8c1be0d0ef4c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=8a130a46-1e4c-4c18-8d1f-c60c770a5f49) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:33:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:53.679 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 8a130a46-1e4c-4c18-8d1f-c60c770a5f49 in datapath f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c unbound from our chassis#033[00m
Nov 29 03:33:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:53.681 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:33:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:53.682 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1067fe39-e959-4ede-9daa-4551854b19f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:53.683 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c namespace which is not needed anymore#033[00m
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.688 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.711 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:53 np0005539564 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000af.scope: Deactivated successfully.
Nov 29 03:33:53 np0005539564 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000af.scope: Consumed 15.616s CPU time.
Nov 29 03:33:53 np0005539564 systemd-machined[190128]: Machine qemu-82-instance-000000af terminated.
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.779 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:53 np0005539564 neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c[291037]: [NOTICE]   (291041) : haproxy version is 2.8.14-c23fe91
Nov 29 03:33:53 np0005539564 neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c[291037]: [NOTICE]   (291041) : path to executable is /usr/sbin/haproxy
Nov 29 03:33:53 np0005539564 neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c[291037]: [WARNING]  (291041) : Exiting Master process...
Nov 29 03:33:53 np0005539564 neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c[291037]: [ALERT]    (291041) : Current worker (291043) exited with code 143 (Terminated)
Nov 29 03:33:53 np0005539564 neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c[291037]: [WARNING]  (291041) : All workers exited. Exiting... (0)
Nov 29 03:33:53 np0005539564 systemd[1]: libpod-d9196cdc7b4c1ed4a86fdbd5426fb60b2ec03490af065d08918a554a835cc7bd.scope: Deactivated successfully.
Nov 29 03:33:53 np0005539564 podman[291329]: 2025-11-29 08:33:53.82771679 +0000 UTC m=+0.050063364 container died d9196cdc7b4c1ed4a86fdbd5426fb60b2ec03490af065d08918a554a835cc7bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:33:53 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d9196cdc7b4c1ed4a86fdbd5426fb60b2ec03490af065d08918a554a835cc7bd-userdata-shm.mount: Deactivated successfully.
Nov 29 03:33:53 np0005539564 systemd[1]: var-lib-containers-storage-overlay-ada6b0f0e055a59f533cd382677ef17f49516cccb9bf181bb168df837e52a153-merged.mount: Deactivated successfully.
Nov 29 03:33:53 np0005539564 podman[291329]: 2025-11-29 08:33:53.873662924 +0000 UTC m=+0.096009498 container cleanup d9196cdc7b4c1ed4a86fdbd5426fb60b2ec03490af065d08918a554a835cc7bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:33:53 np0005539564 systemd[1]: libpod-conmon-d9196cdc7b4c1ed4a86fdbd5426fb60b2ec03490af065d08918a554a835cc7bd.scope: Deactivated successfully.
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.894 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.902 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.946 226310 DEBUG nova.compute.manager [req-3b58d15d-3c57-496c-b25e-baa9d37c6092 req-a0bbe84c-0a81-440b-821d-98ff4721a951 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received event network-vif-unplugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.947 226310 DEBUG oslo_concurrency.lockutils [req-3b58d15d-3c57-496c-b25e-baa9d37c6092 req-a0bbe84c-0a81-440b-821d-98ff4721a951 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "782511b8-9841-4558-bc21-9a81d3913b54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.947 226310 DEBUG oslo_concurrency.lockutils [req-3b58d15d-3c57-496c-b25e-baa9d37c6092 req-a0bbe84c-0a81-440b-821d-98ff4721a951 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.948 226310 DEBUG oslo_concurrency.lockutils [req-3b58d15d-3c57-496c-b25e-baa9d37c6092 req-a0bbe84c-0a81-440b-821d-98ff4721a951 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.948 226310 DEBUG nova.compute.manager [req-3b58d15d-3c57-496c-b25e-baa9d37c6092 req-a0bbe84c-0a81-440b-821d-98ff4721a951 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] No waiting events found dispatching network-vif-unplugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.948 226310 WARNING nova.compute.manager [req-3b58d15d-3c57-496c-b25e-baa9d37c6092 req-a0bbe84c-0a81-440b-821d-98ff4721a951 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received unexpected event network-vif-unplugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.950 226310 INFO nova.virt.libvirt.driver [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:33:53 np0005539564 podman[291358]: 2025-11-29 08:33:53.953228296 +0000 UTC m=+0.057764113 container remove d9196cdc7b4c1ed4a86fdbd5426fb60b2ec03490af065d08918a554a835cc7bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.956 226310 INFO nova.virt.libvirt.driver [-] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Instance destroyed successfully.#033[00m
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.957 226310 DEBUG nova.virt.libvirt.vif [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:33:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1326828449',display_name='tempest-TestNetworkAdvancedServerOps-server-1326828449',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1326828449',id=175,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGSsTatB0ntZvgpT1iFQOjTdjEe6U2LspHqhHVlH5yZ8EV93LX7uxMrpvCyJRoDivS5erw2JcnGjpRKngF+GjO4y0hQO2CgxrKJ2TL+ibBoOMIlLXbWea/NN/kfP4yKXpw==',key_name='tempest-TestNetworkAdvancedServerOps-1995448580',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:33:21Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-d1ar0502',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:33:47Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=782511b8-9841-4558-bc21-9a81d3913b54,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "address": "fa:16:3e:c0:6e:5a", "network": {"id": "f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c", "bridge": "br-int", "label": "tempest-network-smoke--1013637972", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1013637972", "vif_mac": "fa:16:3e:c0:6e:5a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a130a46-1e", "ovs_interfaceid": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.957 226310 DEBUG nova.network.os_vif_util [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Converting VIF {"id": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "address": "fa:16:3e:c0:6e:5a", "network": {"id": "f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c", "bridge": "br-int", "label": "tempest-network-smoke--1013637972", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1013637972", "vif_mac": "fa:16:3e:c0:6e:5a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a130a46-1e", "ovs_interfaceid": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.958 226310 DEBUG nova.network.os_vif_util [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c0:6e:5a,bridge_name='br-int',has_traffic_filtering=True,id=8a130a46-1e4c-4c18-8d1f-c60c770a5f49,network=Network(f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a130a46-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.959 226310 DEBUG os_vif [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:6e:5a,bridge_name='br-int',has_traffic_filtering=True,id=8a130a46-1e4c-4c18-8d1f-c60c770a5f49,network=Network(f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a130a46-1e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.961 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.961 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a130a46-1e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:53.960 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[cdbbcf6a-fc1d-4d93-ac22-1cfceaa742cd]: (4, ('Sat Nov 29 08:33:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c (d9196cdc7b4c1ed4a86fdbd5426fb60b2ec03490af065d08918a554a835cc7bd)\nd9196cdc7b4c1ed4a86fdbd5426fb60b2ec03490af065d08918a554a835cc7bd\nSat Nov 29 08:33:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c (d9196cdc7b4c1ed4a86fdbd5426fb60b2ec03490af065d08918a554a835cc7bd)\nd9196cdc7b4c1ed4a86fdbd5426fb60b2ec03490af065d08918a554a835cc7bd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.962 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:53.962 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7d04f81a-fb87-4d4f-b3bf-5fbe67910e51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:53.964 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4afd5c3-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.964 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.966 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:53 np0005539564 kernel: tapf4afd5c3-f0: left promiscuous mode
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.968 226310 INFO os_vif [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:6e:5a,bridge_name='br-int',has_traffic_filtering=True,id=8a130a46-1e4c-4c18-8d1f-c60c770a5f49,network=Network(f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a130a46-1e')#033[00m
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.972 226310 DEBUG nova.virt.libvirt.driver [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] skipping disk for instance-000000af as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.972 226310 DEBUG nova.virt.libvirt.driver [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] skipping disk for instance-000000af as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:33:53 np0005539564 nova_compute[226295]: 2025-11-29 08:33:53.984 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:53.987 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4fcf72e0-21f6-4d2f-9628-f8a0d727b819]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:54.011 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ece22f-042f-485e-825f-b92cb4c3d177]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:54.012 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2bb6827b-e75f-4c00-9d26-c871f1c506a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:54.030 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bbc45c2a-3775-4b9e-93bb-8b8dae6d5f86]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817514, 'reachable_time': 15418, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291381, 'error': None, 'target': 'ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:54.033 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:33:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:54.034 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[7e0d1c58-12b9-452d-9363-afb3339703bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:54 np0005539564 systemd[1]: run-netns-ovnmeta\x2df4afd5c3\x2df1f9\x2d4e62\x2d9e1b\x2dd55edb60d97c.mount: Deactivated successfully.
Nov 29 03:33:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:54.102 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:33:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:33:54.103 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:33:54 np0005539564 nova_compute[226295]: 2025-11-29 08:33:54.104 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:54 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Nov 29 03:33:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:54.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e394 e394: 3 total, 3 up, 3 in
Nov 29 03:33:54 np0005539564 nova_compute[226295]: 2025-11-29 08:33:54.635 226310 DEBUG neutronclient.v2_0.client [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 8a130a46-1e4c-4c18-8d1f-c60c770a5f49 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:33:54 np0005539564 nova_compute[226295]: 2025-11-29 08:33:54.724 226310 DEBUG oslo_concurrency.lockutils [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Acquiring lock "782511b8-9841-4558-bc21-9a81d3913b54-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:54 np0005539564 nova_compute[226295]: 2025-11-29 08:33:54.724 226310 DEBUG oslo_concurrency.lockutils [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:54 np0005539564 nova_compute[226295]: 2025-11-29 08:33:54.725 226310 DEBUG oslo_concurrency.lockutils [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e395 e395: 3 total, 3 up, 3 in
Nov 29 03:33:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:55.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:56 np0005539564 nova_compute[226295]: 2025-11-29 08:33:56.055 226310 DEBUG nova.compute.manager [req-004ed692-b413-48fc-b4e1-e66696b92d63 req-167ee234-4aa1-42c9-aac0-38b8ca758840 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received event network-vif-plugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:56 np0005539564 nova_compute[226295]: 2025-11-29 08:33:56.055 226310 DEBUG oslo_concurrency.lockutils [req-004ed692-b413-48fc-b4e1-e66696b92d63 req-167ee234-4aa1-42c9-aac0-38b8ca758840 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "782511b8-9841-4558-bc21-9a81d3913b54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:56 np0005539564 nova_compute[226295]: 2025-11-29 08:33:56.056 226310 DEBUG oslo_concurrency.lockutils [req-004ed692-b413-48fc-b4e1-e66696b92d63 req-167ee234-4aa1-42c9-aac0-38b8ca758840 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:56 np0005539564 nova_compute[226295]: 2025-11-29 08:33:56.056 226310 DEBUG oslo_concurrency.lockutils [req-004ed692-b413-48fc-b4e1-e66696b92d63 req-167ee234-4aa1-42c9-aac0-38b8ca758840 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:56 np0005539564 nova_compute[226295]: 2025-11-29 08:33:56.056 226310 DEBUG nova.compute.manager [req-004ed692-b413-48fc-b4e1-e66696b92d63 req-167ee234-4aa1-42c9-aac0-38b8ca758840 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] No waiting events found dispatching network-vif-plugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:33:56 np0005539564 nova_compute[226295]: 2025-11-29 08:33:56.056 226310 WARNING nova.compute.manager [req-004ed692-b413-48fc-b4e1-e66696b92d63 req-167ee234-4aa1-42c9-aac0-38b8ca758840 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received unexpected event network-vif-plugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:33:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:33:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:56.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:33:56 np0005539564 nova_compute[226295]: 2025-11-29 08:33:56.553 226310 DEBUG nova.compute.manager [req-866eab19-e059-4ff9-b853-ff98ccab44c2 req-e5bfc02c-3e8e-4498-83d2-37d627360c5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received event network-changed-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:56 np0005539564 nova_compute[226295]: 2025-11-29 08:33:56.554 226310 DEBUG nova.compute.manager [req-866eab19-e059-4ff9-b853-ff98ccab44c2 req-e5bfc02c-3e8e-4498-83d2-37d627360c5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Refreshing instance network info cache due to event network-changed-8a130a46-1e4c-4c18-8d1f-c60c770a5f49. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:33:56 np0005539564 nova_compute[226295]: 2025-11-29 08:33:56.554 226310 DEBUG oslo_concurrency.lockutils [req-866eab19-e059-4ff9-b853-ff98ccab44c2 req-e5bfc02c-3e8e-4498-83d2-37d627360c5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:33:56 np0005539564 nova_compute[226295]: 2025-11-29 08:33:56.554 226310 DEBUG oslo_concurrency.lockutils [req-866eab19-e059-4ff9-b853-ff98ccab44c2 req-e5bfc02c-3e8e-4498-83d2-37d627360c5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:33:56 np0005539564 nova_compute[226295]: 2025-11-29 08:33:56.554 226310 DEBUG nova.network.neutron [req-866eab19-e059-4ff9-b853-ff98ccab44c2 req-e5bfc02c-3e8e-4498-83d2-37d627360c5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Refreshing network info cache for port 8a130a46-1e4c-4c18-8d1f-c60c770a5f49 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:33:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:57.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:58.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:58 np0005539564 nova_compute[226295]: 2025-11-29 08:33:58.645 226310 DEBUG nova.network.neutron [req-866eab19-e059-4ff9-b853-ff98ccab44c2 req-e5bfc02c-3e8e-4498-83d2-37d627360c5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Updated VIF entry in instance network info cache for port 8a130a46-1e4c-4c18-8d1f-c60c770a5f49. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:33:58 np0005539564 nova_compute[226295]: 2025-11-29 08:33:58.645 226310 DEBUG nova.network.neutron [req-866eab19-e059-4ff9-b853-ff98ccab44c2 req-e5bfc02c-3e8e-4498-83d2-37d627360c5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Updating instance_info_cache with network_info: [{"id": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "address": "fa:16:3e:c0:6e:5a", "network": {"id": "f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c", "bridge": "br-int", "label": "tempest-network-smoke--1013637972", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a130a46-1e", "ovs_interfaceid": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:33:58 np0005539564 nova_compute[226295]: 2025-11-29 08:33:58.678 226310 DEBUG oslo_concurrency.lockutils [req-866eab19-e059-4ff9-b853-ff98ccab44c2 req-e5bfc02c-3e8e-4498-83d2-37d627360c5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:33:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e396 e396: 3 total, 3 up, 3 in
Nov 29 03:33:58 np0005539564 nova_compute[226295]: 2025-11-29 08:33:58.783 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:58 np0005539564 nova_compute[226295]: 2025-11-29 08:33:58.963 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:33:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:59.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:00.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:00 np0005539564 nova_compute[226295]: 2025-11-29 08:34:00.806 226310 DEBUG nova.compute.manager [req-d040e18d-2472-4806-922a-4ce4b8ea9d29 req-40a8923b-0f64-408e-b572-1a530d86bcbd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received event network-vif-plugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:00 np0005539564 nova_compute[226295]: 2025-11-29 08:34:00.807 226310 DEBUG oslo_concurrency.lockutils [req-d040e18d-2472-4806-922a-4ce4b8ea9d29 req-40a8923b-0f64-408e-b572-1a530d86bcbd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "782511b8-9841-4558-bc21-9a81d3913b54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:00 np0005539564 nova_compute[226295]: 2025-11-29 08:34:00.807 226310 DEBUG oslo_concurrency.lockutils [req-d040e18d-2472-4806-922a-4ce4b8ea9d29 req-40a8923b-0f64-408e-b572-1a530d86bcbd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:00 np0005539564 nova_compute[226295]: 2025-11-29 08:34:00.808 226310 DEBUG oslo_concurrency.lockutils [req-d040e18d-2472-4806-922a-4ce4b8ea9d29 req-40a8923b-0f64-408e-b572-1a530d86bcbd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:00 np0005539564 nova_compute[226295]: 2025-11-29 08:34:00.808 226310 DEBUG nova.compute.manager [req-d040e18d-2472-4806-922a-4ce4b8ea9d29 req-40a8923b-0f64-408e-b572-1a530d86bcbd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] No waiting events found dispatching network-vif-plugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:34:00 np0005539564 nova_compute[226295]: 2025-11-29 08:34:00.809 226310 WARNING nova.compute.manager [req-d040e18d-2472-4806-922a-4ce4b8ea9d29 req-40a8923b-0f64-408e-b572-1a530d86bcbd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received unexpected event network-vif-plugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 for instance with vm_state active and task_state resize_finish.#033[00m
Nov 29 03:34:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:01.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:02 np0005539564 nova_compute[226295]: 2025-11-29 08:34:02.283 226310 DEBUG oslo_concurrency.lockutils [None req-7fbaeab2-767c-430e-a2bd-07f9a0998074 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "782511b8-9841-4558-bc21-9a81d3913b54" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:02 np0005539564 nova_compute[226295]: 2025-11-29 08:34:02.284 226310 DEBUG oslo_concurrency.lockutils [None req-7fbaeab2-767c-430e-a2bd-07f9a0998074 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:02 np0005539564 nova_compute[226295]: 2025-11-29 08:34:02.284 226310 DEBUG nova.compute.manager [None req-7fbaeab2-767c-430e-a2bd-07f9a0998074 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Going to confirm migration 21 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Nov 29 03:34:02 np0005539564 nova_compute[226295]: 2025-11-29 08:34:02.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:02 np0005539564 nova_compute[226295]: 2025-11-29 08:34:02.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:34:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:02.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:02 np0005539564 nova_compute[226295]: 2025-11-29 08:34:02.731 226310 DEBUG neutronclient.v2_0.client [None req-7fbaeab2-767c-430e-a2bd-07f9a0998074 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 8a130a46-1e4c-4c18-8d1f-c60c770a5f49 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:34:02 np0005539564 nova_compute[226295]: 2025-11-29 08:34:02.732 226310 DEBUG oslo_concurrency.lockutils [None req-7fbaeab2-767c-430e-a2bd-07f9a0998074 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:34:02 np0005539564 nova_compute[226295]: 2025-11-29 08:34:02.733 226310 DEBUG oslo_concurrency.lockutils [None req-7fbaeab2-767c-430e-a2bd-07f9a0998074 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquired lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:34:02 np0005539564 nova_compute[226295]: 2025-11-29 08:34:02.733 226310 DEBUG nova.network.neutron [None req-7fbaeab2-767c-430e-a2bd-07f9a0998074 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:34:02 np0005539564 nova_compute[226295]: 2025-11-29 08:34:02.733 226310 DEBUG nova.objects.instance [None req-7fbaeab2-767c-430e-a2bd-07f9a0998074 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'info_cache' on Instance uuid 782511b8-9841-4558-bc21-9a81d3913b54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:34:02 np0005539564 nova_compute[226295]: 2025-11-29 08:34:02.966 226310 DEBUG nova.compute.manager [req-49e6286c-7cb6-47b5-8b74-42da4e56c345 req-8377d59c-151d-48be-a42a-78ea4ad42684 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received event network-vif-plugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:02 np0005539564 nova_compute[226295]: 2025-11-29 08:34:02.967 226310 DEBUG oslo_concurrency.lockutils [req-49e6286c-7cb6-47b5-8b74-42da4e56c345 req-8377d59c-151d-48be-a42a-78ea4ad42684 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "782511b8-9841-4558-bc21-9a81d3913b54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:02 np0005539564 nova_compute[226295]: 2025-11-29 08:34:02.967 226310 DEBUG oslo_concurrency.lockutils [req-49e6286c-7cb6-47b5-8b74-42da4e56c345 req-8377d59c-151d-48be-a42a-78ea4ad42684 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:02 np0005539564 nova_compute[226295]: 2025-11-29 08:34:02.968 226310 DEBUG oslo_concurrency.lockutils [req-49e6286c-7cb6-47b5-8b74-42da4e56c345 req-8377d59c-151d-48be-a42a-78ea4ad42684 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:02 np0005539564 nova_compute[226295]: 2025-11-29 08:34:02.968 226310 DEBUG nova.compute.manager [req-49e6286c-7cb6-47b5-8b74-42da4e56c345 req-8377d59c-151d-48be-a42a-78ea4ad42684 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] No waiting events found dispatching network-vif-plugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:34:02 np0005539564 nova_compute[226295]: 2025-11-29 08:34:02.969 226310 WARNING nova.compute.manager [req-49e6286c-7cb6-47b5-8b74-42da4e56c345 req-8377d59c-151d-48be-a42a-78ea4ad42684 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received unexpected event network-vif-plugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:34:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:34:03.105 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:03 np0005539564 nova_compute[226295]: 2025-11-29 08:34:03.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:03.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:34:03.748 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:34:03.748 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:34:03.748 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:03 np0005539564 nova_compute[226295]: 2025-11-29 08:34:03.815 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:03 np0005539564 nova_compute[226295]: 2025-11-29 08:34:03.964 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:04 np0005539564 nova_compute[226295]: 2025-11-29 08:34:04.100 226310 DEBUG nova.network.neutron [None req-7fbaeab2-767c-430e-a2bd-07f9a0998074 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Updating instance_info_cache with network_info: [{"id": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "address": "fa:16:3e:c0:6e:5a", "network": {"id": "f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c", "bridge": "br-int", "label": "tempest-network-smoke--1013637972", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a130a46-1e", "ovs_interfaceid": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:04 np0005539564 nova_compute[226295]: 2025-11-29 08:34:04.127 226310 DEBUG oslo_concurrency.lockutils [None req-7fbaeab2-767c-430e-a2bd-07f9a0998074 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Releasing lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:04 np0005539564 nova_compute[226295]: 2025-11-29 08:34:04.127 226310 DEBUG nova.objects.instance [None req-7fbaeab2-767c-430e-a2bd-07f9a0998074 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'migration_context' on Instance uuid 782511b8-9841-4558-bc21-9a81d3913b54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:34:04 np0005539564 nova_compute[226295]: 2025-11-29 08:34:04.216 226310 DEBUG nova.storage.rbd_utils [None req-7fbaeab2-767c-430e-a2bd-07f9a0998074 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] removing snapshot(nova-resize) on rbd image(782511b8-9841-4558-bc21-9a81d3913b54_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:34:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:04 np0005539564 nova_compute[226295]: 2025-11-29 08:34:04.335 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:04.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e397 e397: 3 total, 3 up, 3 in
Nov 29 03:34:04 np0005539564 nova_compute[226295]: 2025-11-29 08:34:04.433 226310 DEBUG nova.virt.libvirt.vif [None req-7fbaeab2-767c-430e-a2bd-07f9a0998074 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:33:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1326828449',display_name='tempest-TestNetworkAdvancedServerOps-server-1326828449',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1326828449',id=175,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGSsTatB0ntZvgpT1iFQOjTdjEe6U2LspHqhHVlH5yZ8EV93LX7uxMrpvCyJRoDivS5erw2JcnGjpRKngF+GjO4y0hQO2CgxrKJ2TL+ibBoOMIlLXbWea/NN/kfP4yKXpw==',key_name='tempest-TestNetworkAdvancedServerOps-1995448580',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:34:01Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-d1ar0502',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:34:01Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=782511b8-9841-4558-bc21-9a81d3913b54,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "address": "fa:16:3e:c0:6e:5a", "network": {"id": "f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c", "bridge": "br-int", "label": "tempest-network-smoke--1013637972", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a130a46-1e", "ovs_interfaceid": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:34:04 np0005539564 nova_compute[226295]: 2025-11-29 08:34:04.434 226310 DEBUG nova.network.os_vif_util [None req-7fbaeab2-767c-430e-a2bd-07f9a0998074 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "address": "fa:16:3e:c0:6e:5a", "network": {"id": "f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c", "bridge": "br-int", "label": "tempest-network-smoke--1013637972", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a130a46-1e", "ovs_interfaceid": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:34:04 np0005539564 nova_compute[226295]: 2025-11-29 08:34:04.435 226310 DEBUG nova.network.os_vif_util [None req-7fbaeab2-767c-430e-a2bd-07f9a0998074 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c0:6e:5a,bridge_name='br-int',has_traffic_filtering=True,id=8a130a46-1e4c-4c18-8d1f-c60c770a5f49,network=Network(f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a130a46-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:34:04 np0005539564 nova_compute[226295]: 2025-11-29 08:34:04.435 226310 DEBUG os_vif [None req-7fbaeab2-767c-430e-a2bd-07f9a0998074 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:6e:5a,bridge_name='br-int',has_traffic_filtering=True,id=8a130a46-1e4c-4c18-8d1f-c60c770a5f49,network=Network(f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a130a46-1e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:34:04 np0005539564 nova_compute[226295]: 2025-11-29 08:34:04.437 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:04 np0005539564 nova_compute[226295]: 2025-11-29 08:34:04.437 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a130a46-1e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:04 np0005539564 nova_compute[226295]: 2025-11-29 08:34:04.438 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:34:04 np0005539564 nova_compute[226295]: 2025-11-29 08:34:04.440 226310 INFO os_vif [None req-7fbaeab2-767c-430e-a2bd-07f9a0998074 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:6e:5a,bridge_name='br-int',has_traffic_filtering=True,id=8a130a46-1e4c-4c18-8d1f-c60c770a5f49,network=Network(f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a130a46-1e')#033[00m
Nov 29 03:34:04 np0005539564 nova_compute[226295]: 2025-11-29 08:34:04.441 226310 DEBUG oslo_concurrency.lockutils [None req-7fbaeab2-767c-430e-a2bd-07f9a0998074 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:04 np0005539564 nova_compute[226295]: 2025-11-29 08:34:04.441 226310 DEBUG oslo_concurrency.lockutils [None req-7fbaeab2-767c-430e-a2bd-07f9a0998074 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:04 np0005539564 nova_compute[226295]: 2025-11-29 08:34:04.530 226310 DEBUG oslo_concurrency.processutils [None req-7fbaeab2-767c-430e-a2bd-07f9a0998074 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:34:05 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3284355851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:34:05 np0005539564 nova_compute[226295]: 2025-11-29 08:34:05.027 226310 DEBUG oslo_concurrency.processutils [None req-7fbaeab2-767c-430e-a2bd-07f9a0998074 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:05 np0005539564 nova_compute[226295]: 2025-11-29 08:34:05.035 226310 DEBUG nova.compute.provider_tree [None req-7fbaeab2-767c-430e-a2bd-07f9a0998074 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:34:05 np0005539564 nova_compute[226295]: 2025-11-29 08:34:05.057 226310 DEBUG nova.scheduler.client.report [None req-7fbaeab2-767c-430e-a2bd-07f9a0998074 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:34:05 np0005539564 nova_compute[226295]: 2025-11-29 08:34:05.103 226310 DEBUG oslo_concurrency.lockutils [None req-7fbaeab2-767c-430e-a2bd-07f9a0998074 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:05 np0005539564 nova_compute[226295]: 2025-11-29 08:34:05.237 226310 INFO nova.scheduler.client.report [None req-7fbaeab2-767c-430e-a2bd-07f9a0998074 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Deleted allocation for migration 29af94fb-b970-4b57-a643-370ba5509b9a#033[00m
Nov 29 03:34:05 np0005539564 podman[291443]: 2025-11-29 08:34:05.271948083 +0000 UTC m=+0.072566004 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:34:05 np0005539564 podman[291444]: 2025-11-29 08:34:05.277400411 +0000 UTC m=+0.062606886 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 03:34:05 np0005539564 nova_compute[226295]: 2025-11-29 08:34:05.290 226310 DEBUG oslo_concurrency.lockutils [None req-7fbaeab2-767c-430e-a2bd-07f9a0998074 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 3.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:05 np0005539564 podman[291442]: 2025-11-29 08:34:05.308323187 +0000 UTC m=+0.108885557 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:34:05 np0005539564 nova_compute[226295]: 2025-11-29 08:34:05.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:34:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:05.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:34:06 np0005539564 nova_compute[226295]: 2025-11-29 08:34:06.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:34:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:06.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:34:07 np0005539564 nova_compute[226295]: 2025-11-29 08:34:07.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:07 np0005539564 nova_compute[226295]: 2025-11-29 08:34:07.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:34:07 np0005539564 nova_compute[226295]: 2025-11-29 08:34:07.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:34:07 np0005539564 nova_compute[226295]: 2025-11-29 08:34:07.397 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:34:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:07.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:08.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:08 np0005539564 nova_compute[226295]: 2025-11-29 08:34:08.818 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:08 np0005539564 nova_compute[226295]: 2025-11-29 08:34:08.903 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405233.9004033, 782511b8-9841-4558-bc21-9a81d3913b54 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:34:08 np0005539564 nova_compute[226295]: 2025-11-29 08:34:08.903 226310 INFO nova.compute.manager [-] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:34:08 np0005539564 nova_compute[226295]: 2025-11-29 08:34:08.928 226310 DEBUG nova.compute.manager [None req-b10f8399-871e-4744-9b97-bff52086c80a - - - - - -] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:34:08 np0005539564 nova_compute[226295]: 2025-11-29 08:34:08.966 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:09 np0005539564 nova_compute[226295]: 2025-11-29 08:34:09.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:09.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:10.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:11.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:12.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:13.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:13 np0005539564 nova_compute[226295]: 2025-11-29 08:34:13.821 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:13 np0005539564 nova_compute[226295]: 2025-11-29 08:34:13.992 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e398 e398: 3 total, 3 up, 3 in
Nov 29 03:34:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:14.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:15 np0005539564 nova_compute[226295]: 2025-11-29 08:34:15.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:15.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:16.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e399 e399: 3 total, 3 up, 3 in
Nov 29 03:34:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:34:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:17.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:34:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:18.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:18 np0005539564 nova_compute[226295]: 2025-11-29 08:34:18.822 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:18 np0005539564 nova_compute[226295]: 2025-11-29 08:34:18.994 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:19.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:20 np0005539564 nova_compute[226295]: 2025-11-29 08:34:20.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:20.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:20 np0005539564 nova_compute[226295]: 2025-11-29 08:34:20.411 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:20 np0005539564 nova_compute[226295]: 2025-11-29 08:34:20.411 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:20 np0005539564 nova_compute[226295]: 2025-11-29 08:34:20.412 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:20 np0005539564 nova_compute[226295]: 2025-11-29 08:34:20.412 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:34:20 np0005539564 nova_compute[226295]: 2025-11-29 08:34:20.412 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:34:20 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/293133485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:34:20 np0005539564 nova_compute[226295]: 2025-11-29 08:34:20.931 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:21 np0005539564 nova_compute[226295]: 2025-11-29 08:34:21.189 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:34:21 np0005539564 nova_compute[226295]: 2025-11-29 08:34:21.191 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4261MB free_disk=20.896869659423828GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:34:21 np0005539564 nova_compute[226295]: 2025-11-29 08:34:21.192 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:21 np0005539564 nova_compute[226295]: 2025-11-29 08:34:21.192 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:21 np0005539564 nova_compute[226295]: 2025-11-29 08:34:21.701 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:34:21 np0005539564 nova_compute[226295]: 2025-11-29 08:34:21.702 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:34:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:21.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:21 np0005539564 nova_compute[226295]: 2025-11-29 08:34:21.796 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:34:22 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/518622181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:34:22 np0005539564 nova_compute[226295]: 2025-11-29 08:34:22.242 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:22 np0005539564 nova_compute[226295]: 2025-11-29 08:34:22.253 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:34:22 np0005539564 nova_compute[226295]: 2025-11-29 08:34:22.278 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:34:22 np0005539564 nova_compute[226295]: 2025-11-29 08:34:22.308 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:34:22 np0005539564 nova_compute[226295]: 2025-11-29 08:34:22.309 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:22.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:23.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:23 np0005539564 nova_compute[226295]: 2025-11-29 08:34:23.866 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:23 np0005539564 nova_compute[226295]: 2025-11-29 08:34:23.995 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:24.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:25.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:26.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:27.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:34:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:28.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:34:28 np0005539564 nova_compute[226295]: 2025-11-29 08:34:28.873 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:28 np0005539564 nova_compute[226295]: 2025-11-29 08:34:28.996 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e400 e400: 3 total, 3 up, 3 in
Nov 29 03:34:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:29.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:30.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:31.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:32.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:33.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:33 np0005539564 nova_compute[226295]: 2025-11-29 08:34:33.876 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:33 np0005539564 nova_compute[226295]: 2025-11-29 08:34:33.998 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:34:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:34.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:34:35 np0005539564 podman[291626]: 2025-11-29 08:34:35.439057445 +0000 UTC m=+0.061847315 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:34:35 np0005539564 podman[291625]: 2025-11-29 08:34:35.475236073 +0000 UTC m=+0.100047497 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:34:35 np0005539564 podman[291624]: 2025-11-29 08:34:35.504002971 +0000 UTC m=+0.129208666 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 03:34:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:35.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:36.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:34:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:34:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:34:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:37.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:38.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:38 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:34:38 np0005539564 nova_compute[226295]: 2025-11-29 08:34:38.880 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:39 np0005539564 nova_compute[226295]: 2025-11-29 08:34:39.000 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #121. Immutable memtables: 0.
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:34:39.424770) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 121
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405279424853, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 2058, "num_deletes": 262, "total_data_size": 4524385, "memory_usage": 4583632, "flush_reason": "Manual Compaction"}
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #122: started
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405279458021, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 122, "file_size": 2950223, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61626, "largest_seqno": 63679, "table_properties": {"data_size": 2941679, "index_size": 5230, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17721, "raw_average_key_size": 19, "raw_value_size": 2924274, "raw_average_value_size": 3274, "num_data_blocks": 226, "num_entries": 893, "num_filter_entries": 893, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405134, "oldest_key_time": 1764405134, "file_creation_time": 1764405279, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 33310 microseconds, and 11826 cpu microseconds.
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:34:39.458089) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #122: 2950223 bytes OK
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:34:39.458118) [db/memtable_list.cc:519] [default] Level-0 commit table #122 started
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:34:39.460054) [db/memtable_list.cc:722] [default] Level-0 commit table #122: memtable #1 done
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:34:39.460084) EVENT_LOG_v1 {"time_micros": 1764405279460075, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:34:39.460114) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 4515057, prev total WAL file size 4515057, number of live WAL files 2.
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000118.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:34:39.462602) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323532' seq:72057594037927935, type:22 .. '6B7600353038' seq:0, type:0; will stop at (end)
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [122(2881KB)], [120(12MB)]
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405279462684, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [122], "files_L6": [120], "score": -1, "input_data_size": 15931422, "oldest_snapshot_seqno": -1}
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #123: 9494 keys, 14787928 bytes, temperature: kUnknown
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405279657050, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 123, "file_size": 14787928, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14722643, "index_size": 40469, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23749, "raw_key_size": 248182, "raw_average_key_size": 26, "raw_value_size": 14551957, "raw_average_value_size": 1532, "num_data_blocks": 1565, "num_entries": 9494, "num_filter_entries": 9494, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764405279, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 123, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:34:39.657514) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 14787928 bytes
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:34:39.659161) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 81.9 rd, 76.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 12.4 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(10.4) write-amplify(5.0) OK, records in: 10036, records dropped: 542 output_compression: NoCompression
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:34:39.659185) EVENT_LOG_v1 {"time_micros": 1764405279659174, "job": 76, "event": "compaction_finished", "compaction_time_micros": 194528, "compaction_time_cpu_micros": 44599, "output_level": 6, "num_output_files": 1, "total_output_size": 14787928, "num_input_records": 10036, "num_output_records": 9494, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405279660014, "job": 76, "event": "table_file_deletion", "file_number": 122}
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000120.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405279663036, "job": 76, "event": "table_file_deletion", "file_number": 120}
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:34:39.462301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:34:39.663160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:34:39.663170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:34:39.663173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:34:39.663176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:34:39.663178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:39.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:40 np0005539564 nova_compute[226295]: 2025-11-29 08:34:40.040 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:40 np0005539564 nova_compute[226295]: 2025-11-29 08:34:40.289 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:34:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:34:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:34:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:40.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:41 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:34:41 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:34:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:41.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:42.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:43.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:43 np0005539564 nova_compute[226295]: 2025-11-29 08:34:43.915 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:44 np0005539564 nova_compute[226295]: 2025-11-29 08:34:44.001 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:44.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e401 e401: 3 total, 3 up, 3 in
Nov 29 03:34:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e402 e402: 3 total, 3 up, 3 in
Nov 29 03:34:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:45.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:46.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:46 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:34:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:34:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:47.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:34:47 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:34:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:48.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:48 np0005539564 nova_compute[226295]: 2025-11-29 08:34:48.954 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:49 np0005539564 nova_compute[226295]: 2025-11-29 08:34:49.003 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e403 e403: 3 total, 3 up, 3 in
Nov 29 03:34:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:49.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:50.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:51.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:52.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:53.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:53 np0005539564 nova_compute[226295]: 2025-11-29 08:34:53.958 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:54 np0005539564 nova_compute[226295]: 2025-11-29 08:34:54.005 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:54.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e404 e404: 3 total, 3 up, 3 in
Nov 29 03:34:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:34:55.143 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:34:55 np0005539564 nova_compute[226295]: 2025-11-29 08:34:55.144 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:34:55.145 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:34:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:34:55.146 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:34:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:55.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:34:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:56.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:57.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:58.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:58 np0005539564 nova_compute[226295]: 2025-11-29 08:34:58.981 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:59 np0005539564 nova_compute[226295]: 2025-11-29 08:34:59.006 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:34:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:59.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:00.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:35:01 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3260366582' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:35:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:35:01 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3260366582' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:35:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:01.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:02.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:35:03.749 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:35:03.750 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:35:03.750 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:35:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:03.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:35:03 np0005539564 nova_compute[226295]: 2025-11-29 08:35:03.984 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:04 np0005539564 nova_compute[226295]: 2025-11-29 08:35:04.008 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:04.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:05 np0005539564 nova_compute[226295]: 2025-11-29 08:35:05.310 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:05 np0005539564 nova_compute[226295]: 2025-11-29 08:35:05.311 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:05 np0005539564 nova_compute[226295]: 2025-11-29 08:35:05.312 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:35:05 np0005539564 nova_compute[226295]: 2025-11-29 08:35:05.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:05.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:06 np0005539564 nova_compute[226295]: 2025-11-29 08:35:06.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:35:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:06.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:35:06 np0005539564 podman[291920]: 2025-11-29 08:35:06.551726048 +0000 UTC m=+0.092060601 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:35:06 np0005539564 podman[291919]: 2025-11-29 08:35:06.558890952 +0000 UTC m=+0.103247284 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:35:06 np0005539564 podman[291918]: 2025-11-29 08:35:06.590442626 +0000 UTC m=+0.140179964 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:35:07 np0005539564 nova_compute[226295]: 2025-11-29 08:35:07.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:07 np0005539564 nova_compute[226295]: 2025-11-29 08:35:07.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:35:07 np0005539564 nova_compute[226295]: 2025-11-29 08:35:07.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:35:07 np0005539564 nova_compute[226295]: 2025-11-29 08:35:07.396 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:35:07 np0005539564 nova_compute[226295]: 2025-11-29 08:35:07.399 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:07.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:08.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:08 np0005539564 nova_compute[226295]: 2025-11-29 08:35:08.986 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:09 np0005539564 nova_compute[226295]: 2025-11-29 08:35:09.009 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:09 np0005539564 nova_compute[226295]: 2025-11-29 08:35:09.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #124. Immutable memtables: 0.
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:09.621786) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 124
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405309621835, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 643, "num_deletes": 253, "total_data_size": 996856, "memory_usage": 1009816, "flush_reason": "Manual Compaction"}
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #125: started
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405309629435, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 125, "file_size": 503251, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63684, "largest_seqno": 64322, "table_properties": {"data_size": 500227, "index_size": 932, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8401, "raw_average_key_size": 21, "raw_value_size": 493784, "raw_average_value_size": 1253, "num_data_blocks": 40, "num_entries": 394, "num_filter_entries": 394, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405279, "oldest_key_time": 1764405279, "file_creation_time": 1764405309, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 7830 microseconds, and 3374 cpu microseconds.
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:09.629614) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #125: 503251 bytes OK
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:09.629703) [db/memtable_list.cc:519] [default] Level-0 commit table #125 started
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:09.631194) [db/memtable_list.cc:722] [default] Level-0 commit table #125: memtable #1 done
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:09.631216) EVENT_LOG_v1 {"time_micros": 1764405309631209, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:09.631236) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 993254, prev total WAL file size 993254, number of live WAL files 2.
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000121.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:09.632504) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303134' seq:72057594037927935, type:22 .. '6D6772737461740032323636' seq:0, type:0; will stop at (end)
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [125(491KB)], [123(14MB)]
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405309632548, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [125], "files_L6": [123], "score": -1, "input_data_size": 15291179, "oldest_snapshot_seqno": -1}
Nov 29 03:35:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:09.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #126: 9376 keys, 11574922 bytes, temperature: kUnknown
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405309781649, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 126, "file_size": 11574922, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11514724, "index_size": 35638, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23493, "raw_key_size": 246043, "raw_average_key_size": 26, "raw_value_size": 11350267, "raw_average_value_size": 1210, "num_data_blocks": 1363, "num_entries": 9376, "num_filter_entries": 9376, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764405309, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 126, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:09.782048) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 11574922 bytes
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:09.783334) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 102.5 rd, 77.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 14.1 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(53.4) write-amplify(23.0) OK, records in: 9888, records dropped: 512 output_compression: NoCompression
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:09.783363) EVENT_LOG_v1 {"time_micros": 1764405309783350, "job": 78, "event": "compaction_finished", "compaction_time_micros": 149195, "compaction_time_cpu_micros": 37251, "output_level": 6, "num_output_files": 1, "total_output_size": 11574922, "num_input_records": 9888, "num_output_records": 9376, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405309783743, "job": 78, "event": "table_file_deletion", "file_number": 125}
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000123.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405309788810, "job": 78, "event": "table_file_deletion", "file_number": 123}
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:09.632384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:09.788878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:09.788884) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:09.788887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:09.788889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:09 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:09.788892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:10.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:11.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:12.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:13.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:13 np0005539564 nova_compute[226295]: 2025-11-29 08:35:13.990 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:14 np0005539564 nova_compute[226295]: 2025-11-29 08:35:14.010 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:14.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:15.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #127. Immutable memtables: 0.
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:16.331498) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 127
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405316331537, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 333, "num_deletes": 251, "total_data_size": 176130, "memory_usage": 182904, "flush_reason": "Manual Compaction"}
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #128: started
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405316334344, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 128, "file_size": 115345, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64328, "largest_seqno": 64655, "table_properties": {"data_size": 113274, "index_size": 234, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5302, "raw_average_key_size": 18, "raw_value_size": 109211, "raw_average_value_size": 381, "num_data_blocks": 10, "num_entries": 286, "num_filter_entries": 286, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405309, "oldest_key_time": 1764405309, "file_creation_time": 1764405316, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 2891 microseconds, and 1138 cpu microseconds.
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:16.334389) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #128: 115345 bytes OK
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:16.334412) [db/memtable_list.cc:519] [default] Level-0 commit table #128 started
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:16.336905) [db/memtable_list.cc:722] [default] Level-0 commit table #128: memtable #1 done
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:16.336963) EVENT_LOG_v1 {"time_micros": 1764405316336954, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:16.336985) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 173812, prev total WAL file size 173812, number of live WAL files 2.
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000124.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:16.337398) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [128(112KB)], [126(11MB)]
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405316337432, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [128], "files_L6": [126], "score": -1, "input_data_size": 11690267, "oldest_snapshot_seqno": -1}
Nov 29 03:35:16 np0005539564 nova_compute[226295]: 2025-11-29 08:35:16.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #129: 9152 keys, 9788206 bytes, temperature: kUnknown
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405316430288, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 129, "file_size": 9788206, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9731175, "index_size": 33046, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22917, "raw_key_size": 242091, "raw_average_key_size": 26, "raw_value_size": 9572240, "raw_average_value_size": 1045, "num_data_blocks": 1248, "num_entries": 9152, "num_filter_entries": 9152, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764405316, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 129, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:16.430612) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 9788206 bytes
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:16.432726) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.7 rd, 105.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 11.0 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(186.2) write-amplify(84.9) OK, records in: 9662, records dropped: 510 output_compression: NoCompression
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:16.432757) EVENT_LOG_v1 {"time_micros": 1764405316432743, "job": 80, "event": "compaction_finished", "compaction_time_micros": 92965, "compaction_time_cpu_micros": 25679, "output_level": 6, "num_output_files": 1, "total_output_size": 9788206, "num_input_records": 9662, "num_output_records": 9152, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405316433016, "job": 80, "event": "table_file_deletion", "file_number": 128}
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000126.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405316436595, "job": 80, "event": "table_file_deletion", "file_number": 126}
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:16.337343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:16.436731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:16.436741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:16.436744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:16.436747) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:16 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:35:16.436750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:16.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:17.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:18.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:19 np0005539564 nova_compute[226295]: 2025-11-29 08:35:18.999 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:19 np0005539564 nova_compute[226295]: 2025-11-29 08:35:19.012 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:19.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:20.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:21 np0005539564 nova_compute[226295]: 2025-11-29 08:35:21.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:21.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:22.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:22 np0005539564 nova_compute[226295]: 2025-11-29 08:35:22.729 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:22 np0005539564 nova_compute[226295]: 2025-11-29 08:35:22.730 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:22 np0005539564 nova_compute[226295]: 2025-11-29 08:35:22.731 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:22 np0005539564 nova_compute[226295]: 2025-11-29 08:35:22.731 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:35:22 np0005539564 nova_compute[226295]: 2025-11-29 08:35:22.731 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:35:23 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/178896073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:35:23 np0005539564 nova_compute[226295]: 2025-11-29 08:35:23.214 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:23 np0005539564 nova_compute[226295]: 2025-11-29 08:35:23.357 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:35:23 np0005539564 nova_compute[226295]: 2025-11-29 08:35:23.358 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4275MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:35:23 np0005539564 nova_compute[226295]: 2025-11-29 08:35:23.358 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:23 np0005539564 nova_compute[226295]: 2025-11-29 08:35:23.359 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:23.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:24 np0005539564 nova_compute[226295]: 2025-11-29 08:35:24.002 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:24 np0005539564 nova_compute[226295]: 2025-11-29 08:35:24.013 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:24.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:35:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:25.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:35:25 np0005539564 nova_compute[226295]: 2025-11-29 08:35:25.937 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:35:25 np0005539564 nova_compute[226295]: 2025-11-29 08:35:25.938 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:35:26 np0005539564 nova_compute[226295]: 2025-11-29 08:35:26.000 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:35:26 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1290390513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:35:26 np0005539564 nova_compute[226295]: 2025-11-29 08:35:26.470 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:26 np0005539564 nova_compute[226295]: 2025-11-29 08:35:26.482 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:35:26 np0005539564 nova_compute[226295]: 2025-11-29 08:35:26.502 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:35:26 np0005539564 nova_compute[226295]: 2025-11-29 08:35:26.506 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:35:26 np0005539564 nova_compute[226295]: 2025-11-29 08:35:26.507 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:26.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:27.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:35:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/993834800' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:35:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:35:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/993834800' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:35:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:28.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:29 np0005539564 nova_compute[226295]: 2025-11-29 08:35:29.005 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:29 np0005539564 nova_compute[226295]: 2025-11-29 08:35:29.014 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:29.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:30.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:31.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:32.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:33.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:34 np0005539564 nova_compute[226295]: 2025-11-29 08:35:34.016 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:35:34 np0005539564 nova_compute[226295]: 2025-11-29 08:35:34.018 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:35:34 np0005539564 nova_compute[226295]: 2025-11-29 08:35:34.018 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 03:35:34 np0005539564 nova_compute[226295]: 2025-11-29 08:35:34.018 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 03:35:34 np0005539564 nova_compute[226295]: 2025-11-29 08:35:34.047 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:34 np0005539564 nova_compute[226295]: 2025-11-29 08:35:34.048 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 03:35:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:34.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:35:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:35.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:35:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:36.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:37 np0005539564 podman[292030]: 2025-11-29 08:35:37.554435937 +0000 UTC m=+0.080891608 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 29 03:35:37 np0005539564 podman[292029]: 2025-11-29 08:35:37.558448677 +0000 UTC m=+0.095473095 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:35:37 np0005539564 podman[292028]: 2025-11-29 08:35:37.59222688 +0000 UTC m=+0.142070245 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:35:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:37.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:38.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:39 np0005539564 nova_compute[226295]: 2025-11-29 08:35:39.048 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:35:39 np0005539564 nova_compute[226295]: 2025-11-29 08:35:39.050 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:35:39 np0005539564 nova_compute[226295]: 2025-11-29 08:35:39.050 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 03:35:39 np0005539564 nova_compute[226295]: 2025-11-29 08:35:39.050 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 03:35:39 np0005539564 nova_compute[226295]: 2025-11-29 08:35:39.051 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:39 np0005539564 nova_compute[226295]: 2025-11-29 08:35:39.052 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 03:35:39 np0005539564 nova_compute[226295]: 2025-11-29 08:35:39.053 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:39 np0005539564 ovn_controller[130591]: 2025-11-29T08:35:39Z|00691|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 29 03:35:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:39.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:40.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:41.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:42.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:43.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:44 np0005539564 nova_compute[226295]: 2025-11-29 08:35:44.053 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:44.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:45.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:46.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:47.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:48.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:49 np0005539564 nova_compute[226295]: 2025-11-29 08:35:49.057 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:35:49 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:35:49 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:35:49 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:35:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:49.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:50.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:51 np0005539564 nova_compute[226295]: 2025-11-29 08:35:51.510 226310 DEBUG nova.compute.manager [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Nov 29 03:35:51 np0005539564 nova_compute[226295]: 2025-11-29 08:35:51.603 226310 DEBUG oslo_concurrency.lockutils [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:51 np0005539564 nova_compute[226295]: 2025-11-29 08:35:51.604 226310 DEBUG oslo_concurrency.lockutils [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:51 np0005539564 nova_compute[226295]: 2025-11-29 08:35:51.640 226310 DEBUG nova.objects.instance [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lazy-loading 'pci_requests' on Instance uuid e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:51 np0005539564 nova_compute[226295]: 2025-11-29 08:35:51.656 226310 DEBUG nova.virt.hardware [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:35:51 np0005539564 nova_compute[226295]: 2025-11-29 08:35:51.657 226310 INFO nova.compute.claims [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:35:51 np0005539564 nova_compute[226295]: 2025-11-29 08:35:51.657 226310 DEBUG nova.objects.instance [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lazy-loading 'resources' on Instance uuid e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:51 np0005539564 nova_compute[226295]: 2025-11-29 08:35:51.672 226310 DEBUG nova.objects.instance [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lazy-loading 'numa_topology' on Instance uuid e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:51 np0005539564 nova_compute[226295]: 2025-11-29 08:35:51.687 226310 DEBUG nova.objects.instance [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lazy-loading 'pci_devices' on Instance uuid e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:51 np0005539564 nova_compute[226295]: 2025-11-29 08:35:51.737 226310 INFO nova.compute.resource_tracker [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updating resource usage from migration c2890240-6091-4f4d-923d-07b9818675b5#033[00m
Nov 29 03:35:51 np0005539564 nova_compute[226295]: 2025-11-29 08:35:51.738 226310 DEBUG nova.compute.resource_tracker [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Starting to track incoming migration c2890240-6091-4f4d-923d-07b9818675b5 with flavor b3f6a6d1-4abb-4332-8391-2e39c8fa168a _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 03:35:51 np0005539564 nova_compute[226295]: 2025-11-29 08:35:51.816 226310 DEBUG oslo_concurrency.processutils [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:51.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:35:52 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2946036246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:35:52 np0005539564 nova_compute[226295]: 2025-11-29 08:35:52.286 226310 DEBUG oslo_concurrency.processutils [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:52 np0005539564 nova_compute[226295]: 2025-11-29 08:35:52.297 226310 DEBUG nova.compute.provider_tree [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:35:52 np0005539564 nova_compute[226295]: 2025-11-29 08:35:52.321 226310 DEBUG nova.scheduler.client.report [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:35:52 np0005539564 nova_compute[226295]: 2025-11-29 08:35:52.378 226310 DEBUG oslo_concurrency.lockutils [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:52 np0005539564 nova_compute[226295]: 2025-11-29 08:35:52.379 226310 INFO nova.compute.manager [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Migrating#033[00m
Nov 29 03:35:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:52.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:53.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:54 np0005539564 nova_compute[226295]: 2025-11-29 08:35:54.060 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:35:54 np0005539564 nova_compute[226295]: 2025-11-29 08:35:54.062 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:35:54 np0005539564 nova_compute[226295]: 2025-11-29 08:35:54.062 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 03:35:54 np0005539564 nova_compute[226295]: 2025-11-29 08:35:54.063 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 03:35:54 np0005539564 nova_compute[226295]: 2025-11-29 08:35:54.090 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:54 np0005539564 nova_compute[226295]: 2025-11-29 08:35:54.092 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 03:35:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:54.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:55 np0005539564 systemd[1]: Created slice User Slice of UID 42436.
Nov 29 03:35:55 np0005539564 systemd[1]: Starting User Runtime Directory /run/user/42436...
Nov 29 03:35:55 np0005539564 systemd-logind[785]: New session 57 of user nova.
Nov 29 03:35:55 np0005539564 systemd[1]: Finished User Runtime Directory /run/user/42436.
Nov 29 03:35:55 np0005539564 systemd[1]: Starting User Manager for UID 42436...
Nov 29 03:35:55 np0005539564 systemd[292249]: Queued start job for default target Main User Target.
Nov 29 03:35:55 np0005539564 systemd[292249]: Created slice User Application Slice.
Nov 29 03:35:55 np0005539564 systemd[292249]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:35:55 np0005539564 systemd[292249]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 03:35:55 np0005539564 systemd[292249]: Reached target Paths.
Nov 29 03:35:55 np0005539564 systemd[292249]: Reached target Timers.
Nov 29 03:35:55 np0005539564 systemd[292249]: Starting D-Bus User Message Bus Socket...
Nov 29 03:35:55 np0005539564 systemd[292249]: Starting Create User's Volatile Files and Directories...
Nov 29 03:35:55 np0005539564 systemd[292249]: Finished Create User's Volatile Files and Directories.
Nov 29 03:35:55 np0005539564 systemd[292249]: Listening on D-Bus User Message Bus Socket.
Nov 29 03:35:55 np0005539564 systemd[292249]: Reached target Sockets.
Nov 29 03:35:55 np0005539564 systemd[292249]: Reached target Basic System.
Nov 29 03:35:55 np0005539564 systemd[292249]: Reached target Main User Target.
Nov 29 03:35:55 np0005539564 systemd[292249]: Startup finished in 160ms.
Nov 29 03:35:55 np0005539564 systemd[1]: Started User Manager for UID 42436.
Nov 29 03:35:55 np0005539564 systemd[1]: Started Session 57 of User nova.
Nov 29 03:35:55 np0005539564 systemd[1]: session-57.scope: Deactivated successfully.
Nov 29 03:35:55 np0005539564 systemd-logind[785]: Session 57 logged out. Waiting for processes to exit.
Nov 29 03:35:55 np0005539564 systemd-logind[785]: Removed session 57.
Nov 29 03:35:55 np0005539564 nova_compute[226295]: 2025-11-29 08:35:55.499 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:55 np0005539564 systemd-logind[785]: New session 59 of user nova.
Nov 29 03:35:55 np0005539564 systemd[1]: Started Session 59 of User nova.
Nov 29 03:35:55 np0005539564 systemd[1]: session-59.scope: Deactivated successfully.
Nov 29 03:35:55 np0005539564 systemd-logind[785]: Session 59 logged out. Waiting for processes to exit.
Nov 29 03:35:55 np0005539564 systemd-logind[785]: Removed session 59.
Nov 29 03:35:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:55.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:56 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:35:56 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:35:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:56.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:35:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:57.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:35:58 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:35:58.467 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:35:58 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:35:58.468 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:35:58 np0005539564 nova_compute[226295]: 2025-11-29 08:35:58.470 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:58.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:59 np0005539564 nova_compute[226295]: 2025-11-29 08:35:59.092 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:35:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:59.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:00 np0005539564 nova_compute[226295]: 2025-11-29 08:36:00.321 226310 INFO nova.network.neutron [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updating port 56a99c82-c7f3-45ce-8952-bb1fdd178381 with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 29 03:36:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:00.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:01 np0005539564 nova_compute[226295]: 2025-11-29 08:36:01.086 226310 DEBUG nova.compute.manager [req-5da8448f-80ae-44ea-ac11-203de8e6e93a req-eb368cb7-c7c7-46f0-a38b-f99b4cc2d4db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-vif-unplugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:01 np0005539564 nova_compute[226295]: 2025-11-29 08:36:01.087 226310 DEBUG oslo_concurrency.lockutils [req-5da8448f-80ae-44ea-ac11-203de8e6e93a req-eb368cb7-c7c7-46f0-a38b-f99b4cc2d4db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:01 np0005539564 nova_compute[226295]: 2025-11-29 08:36:01.087 226310 DEBUG oslo_concurrency.lockutils [req-5da8448f-80ae-44ea-ac11-203de8e6e93a req-eb368cb7-c7c7-46f0-a38b-f99b4cc2d4db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:01 np0005539564 nova_compute[226295]: 2025-11-29 08:36:01.088 226310 DEBUG oslo_concurrency.lockutils [req-5da8448f-80ae-44ea-ac11-203de8e6e93a req-eb368cb7-c7c7-46f0-a38b-f99b4cc2d4db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:01 np0005539564 nova_compute[226295]: 2025-11-29 08:36:01.088 226310 DEBUG nova.compute.manager [req-5da8448f-80ae-44ea-ac11-203de8e6e93a req-eb368cb7-c7c7-46f0-a38b-f99b4cc2d4db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] No waiting events found dispatching network-vif-unplugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:01 np0005539564 nova_compute[226295]: 2025-11-29 08:36:01.089 226310 WARNING nova.compute.manager [req-5da8448f-80ae-44ea-ac11-203de8e6e93a req-eb368cb7-c7c7-46f0-a38b-f99b4cc2d4db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received unexpected event network-vif-unplugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:36:01 np0005539564 nova_compute[226295]: 2025-11-29 08:36:01.702 226310 DEBUG oslo_concurrency.lockutils [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Acquiring lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:36:01 np0005539564 nova_compute[226295]: 2025-11-29 08:36:01.703 226310 DEBUG oslo_concurrency.lockutils [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Acquired lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:36:01 np0005539564 nova_compute[226295]: 2025-11-29 08:36:01.704 226310 DEBUG nova.network.neutron [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:36:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:01.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:36:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:02.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:36:03 np0005539564 nova_compute[226295]: 2025-11-29 08:36:03.272 226310 DEBUG nova.compute.manager [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:03 np0005539564 nova_compute[226295]: 2025-11-29 08:36:03.273 226310 DEBUG oslo_concurrency.lockutils [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:03 np0005539564 nova_compute[226295]: 2025-11-29 08:36:03.274 226310 DEBUG oslo_concurrency.lockutils [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:03 np0005539564 nova_compute[226295]: 2025-11-29 08:36:03.274 226310 DEBUG oslo_concurrency.lockutils [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:03 np0005539564 nova_compute[226295]: 2025-11-29 08:36:03.275 226310 DEBUG nova.compute.manager [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] No waiting events found dispatching network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:03 np0005539564 nova_compute[226295]: 2025-11-29 08:36:03.275 226310 WARNING nova.compute.manager [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received unexpected event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:36:03 np0005539564 nova_compute[226295]: 2025-11-29 08:36:03.276 226310 DEBUG nova.compute.manager [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-changed-56a99c82-c7f3-45ce-8952-bb1fdd178381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:03 np0005539564 nova_compute[226295]: 2025-11-29 08:36:03.276 226310 DEBUG nova.compute.manager [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Refreshing instance network info cache due to event network-changed-56a99c82-c7f3-45ce-8952-bb1fdd178381. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:36:03 np0005539564 nova_compute[226295]: 2025-11-29 08:36:03.277 226310 DEBUG oslo_concurrency.lockutils [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:36:03 np0005539564 nova_compute[226295]: 2025-11-29 08:36:03.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:03 np0005539564 nova_compute[226295]: 2025-11-29 08:36:03.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:36:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:03.471 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:03 np0005539564 nova_compute[226295]: 2025-11-29 08:36:03.540 226310 DEBUG nova.network.neutron [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updating instance_info_cache with network_info: [{"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:03 np0005539564 nova_compute[226295]: 2025-11-29 08:36:03.565 226310 DEBUG oslo_concurrency.lockutils [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Releasing lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:36:03 np0005539564 nova_compute[226295]: 2025-11-29 08:36:03.569 226310 DEBUG oslo_concurrency.lockutils [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:36:03 np0005539564 nova_compute[226295]: 2025-11-29 08:36:03.569 226310 DEBUG nova.network.neutron [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Refreshing network info cache for port 56a99c82-c7f3-45ce-8952-bb1fdd178381 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:36:03 np0005539564 nova_compute[226295]: 2025-11-29 08:36:03.671 226310 DEBUG nova.virt.libvirt.driver [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Nov 29 03:36:03 np0005539564 nova_compute[226295]: 2025-11-29 08:36:03.673 226310 DEBUG nova.virt.libvirt.driver [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:36:03 np0005539564 nova_compute[226295]: 2025-11-29 08:36:03.673 226310 INFO nova.virt.libvirt.driver [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Creating image(s)#033[00m
Nov 29 03:36:03 np0005539564 nova_compute[226295]: 2025-11-29 08:36:03.714 226310 DEBUG nova.storage.rbd_utils [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] creating snapshot(nova-resize) on rbd image(e2ac4a3e-8e9f-481b-9493-37a7fcdddec0_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:36:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:03.751 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:03.751 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:03.751 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:03.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.095 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:36:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.336 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e405 e405: 3 total, 3 up, 3 in
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.465 226310 DEBUG nova.objects.instance [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lazy-loading 'trusted_certs' on Instance uuid e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:04.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.614 226310 DEBUG nova.virt.libvirt.driver [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.615 226310 DEBUG nova.virt.libvirt.driver [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Ensure instance console log exists: /var/lib/nova/instances/e2ac4a3e-8e9f-481b-9493-37a7fcdddec0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.616 226310 DEBUG oslo_concurrency.lockutils [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.616 226310 DEBUG oslo_concurrency.lockutils [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.617 226310 DEBUG oslo_concurrency.lockutils [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.620 226310 DEBUG nova.virt.libvirt.driver [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Start _get_guest_xml network_info=[{"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--413829212", "vif_mac": "fa:16:3e:dc:ee:a4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.633 226310 WARNING nova.virt.libvirt.driver [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.642 226310 DEBUG nova.virt.libvirt.host [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.645 226310 DEBUG nova.virt.libvirt.host [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.664 226310 DEBUG nova.virt.libvirt.host [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.666 226310 DEBUG nova.virt.libvirt.host [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.668 226310 DEBUG nova.virt.libvirt.driver [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.668 226310 DEBUG nova.virt.hardware [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.669 226310 DEBUG nova.virt.hardware [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.670 226310 DEBUG nova.virt.hardware [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.670 226310 DEBUG nova.virt.hardware [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.671 226310 DEBUG nova.virt.hardware [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.671 226310 DEBUG nova.virt.hardware [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.672 226310 DEBUG nova.virt.hardware [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.673 226310 DEBUG nova.virt.hardware [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.673 226310 DEBUG nova.virt.hardware [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.674 226310 DEBUG nova.virt.hardware [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.674 226310 DEBUG nova.virt.hardware [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.675 226310 DEBUG nova.objects.instance [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lazy-loading 'vcpu_model' on Instance uuid e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:04 np0005539564 nova_compute[226295]: 2025-11-29 08:36:04.705 226310 DEBUG oslo_concurrency.processutils [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:36:05 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3269295202' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.184 226310 DEBUG oslo_concurrency.processutils [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.251 226310 DEBUG oslo_concurrency.processutils [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:05 np0005539564 systemd[1]: Stopping User Manager for UID 42436...
Nov 29 03:36:05 np0005539564 systemd[292249]: Activating special unit Exit the Session...
Nov 29 03:36:05 np0005539564 systemd[292249]: Stopped target Main User Target.
Nov 29 03:36:05 np0005539564 systemd[292249]: Stopped target Basic System.
Nov 29 03:36:05 np0005539564 systemd[292249]: Stopped target Paths.
Nov 29 03:36:05 np0005539564 systemd[292249]: Stopped target Sockets.
Nov 29 03:36:05 np0005539564 systemd[292249]: Stopped target Timers.
Nov 29 03:36:05 np0005539564 systemd[292249]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:36:05 np0005539564 systemd[292249]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 03:36:05 np0005539564 systemd[292249]: Closed D-Bus User Message Bus Socket.
Nov 29 03:36:05 np0005539564 systemd[292249]: Stopped Create User's Volatile Files and Directories.
Nov 29 03:36:05 np0005539564 systemd[292249]: Removed slice User Application Slice.
Nov 29 03:36:05 np0005539564 systemd[292249]: Reached target Shutdown.
Nov 29 03:36:05 np0005539564 systemd[292249]: Finished Exit the Session.
Nov 29 03:36:05 np0005539564 systemd[292249]: Reached target Exit the Session.
Nov 29 03:36:05 np0005539564 systemd[1]: user@42436.service: Deactivated successfully.
Nov 29 03:36:05 np0005539564 systemd[1]: Stopped User Manager for UID 42436.
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.687 226310 DEBUG nova.network.neutron [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updated VIF entry in instance network info cache for port 56a99c82-c7f3-45ce-8952-bb1fdd178381. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:36:05 np0005539564 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.688 226310 DEBUG nova.network.neutron [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updating instance_info_cache with network_info: [{"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:05 np0005539564 systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 29 03:36:05 np0005539564 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 29 03:36:05 np0005539564 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.720 226310 DEBUG oslo_concurrency.lockutils [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:36:05 np0005539564 systemd[1]: Removed slice User Slice of UID 42436.
Nov 29 03:36:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:36:05 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2560117873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.741 226310 DEBUG oslo_concurrency.processutils [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.744 226310 DEBUG nova.virt.libvirt.vif [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:35:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1486764034',display_name='tempest-TestNetworkAdvancedServerOps-server-1486764034',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1486764034',id=177,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJlZsMiAKOr+1mOJ9gsr3FgWBE+mKwRnJkRBUHqhee24xo71b8dlrKwXDFbukNzcIWmQZvBI4Ju6SAH+rRZvrJVzvxQlKC2PN7cQRHMeK9LWhS/kLn4nic2/QWwXvrAG3A==',key_name='tempest-TestNetworkAdvancedServerOps-1855444884',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:35:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-wqp7y19b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:35:59Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=e2ac4a3e-8e9f-481b-9493-37a7fcdddec0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--413829212", "vif_mac": "fa:16:3e:dc:ee:a4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.745 226310 DEBUG nova.network.os_vif_util [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Converting VIF {"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--413829212", "vif_mac": "fa:16:3e:dc:ee:a4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.746 226310 DEBUG nova.network.os_vif_util [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=56a99c82-c7f3-45ce-8952-bb1fdd178381,network=Network(e259a30d-7e3f-48b9-abdf-dc7aa571c14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56a99c82-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.755 226310 DEBUG nova.virt.libvirt.driver [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:36:05 np0005539564 nova_compute[226295]:  <uuid>e2ac4a3e-8e9f-481b-9493-37a7fcdddec0</uuid>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:  <name>instance-000000b1</name>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1486764034</nova:name>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:36:04</nova:creationTime>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:36:05 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:        <nova:user uuid="686f527a5723407b85ed34c8a312583f">tempest-TestNetworkAdvancedServerOps-382266774-project-member</nova:user>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:        <nova:project uuid="c4ca87a38a19497f84b6d2c170c4fe75">tempest-TestNetworkAdvancedServerOps-382266774</nova:project>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:        <nova:port uuid="56a99c82-c7f3-45ce-8952-bb1fdd178381">
Nov 29 03:36:05 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <entry name="serial">e2ac4a3e-8e9f-481b-9493-37a7fcdddec0</entry>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <entry name="uuid">e2ac4a3e-8e9f-481b-9493-37a7fcdddec0</entry>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/e2ac4a3e-8e9f-481b-9493-37a7fcdddec0_disk">
Nov 29 03:36:05 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:36:05 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/e2ac4a3e-8e9f-481b-9493-37a7fcdddec0_disk.config">
Nov 29 03:36:05 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:36:05 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:dc:ee:a4"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <target dev="tap56a99c82-c7"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/e2ac4a3e-8e9f-481b-9493-37a7fcdddec0/console.log" append="off"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:36:05 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:36:05 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:36:05 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:36:05 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.758 226310 DEBUG nova.virt.libvirt.vif [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:35:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1486764034',display_name='tempest-TestNetworkAdvancedServerOps-server-1486764034',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1486764034',id=177,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJlZsMiAKOr+1mOJ9gsr3FgWBE+mKwRnJkRBUHqhee24xo71b8dlrKwXDFbukNzcIWmQZvBI4Ju6SAH+rRZvrJVzvxQlKC2PN7cQRHMeK9LWhS/kLn4nic2/QWwXvrAG3A==',key_name='tempest-TestNetworkAdvancedServerOps-1855444884',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:35:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-wqp7y19b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:35:59Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=e2ac4a3e-8e9f-481b-9493-37a7fcdddec0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--413829212", "vif_mac": "fa:16:3e:dc:ee:a4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.759 226310 DEBUG nova.network.os_vif_util [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Converting VIF {"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--413829212", "vif_mac": "fa:16:3e:dc:ee:a4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.760 226310 DEBUG nova.network.os_vif_util [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=56a99c82-c7f3-45ce-8952-bb1fdd178381,network=Network(e259a30d-7e3f-48b9-abdf-dc7aa571c14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56a99c82-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.761 226310 DEBUG os_vif [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=56a99c82-c7f3-45ce-8952-bb1fdd178381,network=Network(e259a30d-7e3f-48b9-abdf-dc7aa571c14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56a99c82-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.762 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.763 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.763 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.768 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.768 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap56a99c82-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.769 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap56a99c82-c7, col_values=(('external_ids', {'iface-id': '56a99c82-c7f3-45ce-8952-bb1fdd178381', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dc:ee:a4', 'vm-uuid': 'e2ac4a3e-8e9f-481b-9493-37a7fcdddec0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:05 np0005539564 NetworkManager[48997]: <info>  [1764405365.7734] manager: (tap56a99c82-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/315)
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.774 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.786 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.788 226310 INFO os_vif [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=56a99c82-c7f3-45ce-8952-bb1fdd178381,network=Network(e259a30d-7e3f-48b9-abdf-dc7aa571c14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56a99c82-c7')#033[00m
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.851 226310 DEBUG nova.virt.libvirt.driver [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.852 226310 DEBUG nova.virt.libvirt.driver [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.853 226310 DEBUG nova.virt.libvirt.driver [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] No VIF found with MAC fa:16:3e:dc:ee:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:36:05 np0005539564 nova_compute[226295]: 2025-11-29 08:36:05.854 226310 INFO nova.virt.libvirt.driver [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Using config drive#033[00m
Nov 29 03:36:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:05.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:06 np0005539564 kernel: tap56a99c82-c7: entered promiscuous mode
Nov 29 03:36:06 np0005539564 NetworkManager[48997]: <info>  [1764405366.0052] manager: (tap56a99c82-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/316)
Nov 29 03:36:06 np0005539564 ovn_controller[130591]: 2025-11-29T08:36:06Z|00692|binding|INFO|Claiming lport 56a99c82-c7f3-45ce-8952-bb1fdd178381 for this chassis.
Nov 29 03:36:06 np0005539564 ovn_controller[130591]: 2025-11-29T08:36:06Z|00693|binding|INFO|56a99c82-c7f3-45ce-8952-bb1fdd178381: Claiming fa:16:3e:dc:ee:a4 10.100.0.11
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.006 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.014 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.025 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.030 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.043 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:06 np0005539564 NetworkManager[48997]: <info>  [1764405366.0442] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/317)
Nov 29 03:36:06 np0005539564 NetworkManager[48997]: <info>  [1764405366.0457] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/318)
Nov 29 03:36:06 np0005539564 systemd-udevd[292489]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:36:06 np0005539564 systemd-machined[190128]: New machine qemu-83-instance-000000b1.
Nov 29 03:36:06 np0005539564 NetworkManager[48997]: <info>  [1764405366.0875] device (tap56a99c82-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:36:06 np0005539564 NetworkManager[48997]: <info>  [1764405366.0889] device (tap56a99c82-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:36:06 np0005539564 systemd[1]: Started Virtual Machine qemu-83-instance-000000b1.
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.109 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:ee:a4 10.100.0.11'], port_security=['fa:16:3e:dc:ee:a4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e2ac4a3e-8e9f-481b-9493-37a7fcdddec0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e259a30d-7e3f-48b9-abdf-dc7aa571c14c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'd444d77c-01c4-4fb8-ba04-a10761695979', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0167e73-34b5-4b34-9484-783b07e45b22, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=56a99c82-c7f3-45ce-8952-bb1fdd178381) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.112 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 56a99c82-c7f3-45ce-8952-bb1fdd178381 in datapath e259a30d-7e3f-48b9-abdf-dc7aa571c14c bound to our chassis#033[00m
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.114 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e259a30d-7e3f-48b9-abdf-dc7aa571c14c#033[00m
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.135 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[cf386b59-c83b-4002-830d-9a6dd5cc4c24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.137 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape259a30d-71 in ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.139 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape259a30d-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.140 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6feb67a5-5e6d-447f-88c8-1ede42df4971]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.141 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3ce6ec02-12b9-481b-a5df-a7b054915d25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.162 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[576c826c-911a-41ec-acad-14606416fc91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.201 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a699951b-4ac7-4eec-a7d7-646987cdd03a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.209 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.227 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:06 np0005539564 ovn_controller[130591]: 2025-11-29T08:36:06Z|00694|binding|INFO|Setting lport 56a99c82-c7f3-45ce-8952-bb1fdd178381 ovn-installed in OVS
Nov 29 03:36:06 np0005539564 ovn_controller[130591]: 2025-11-29T08:36:06Z|00695|binding|INFO|Setting lport 56a99c82-c7f3-45ce-8952-bb1fdd178381 up in Southbound
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.237 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.245 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[fb5240cd-67c6-4205-b6b3-9950b73c7740]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.252 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1d690afa-b90e-4d2f-85ff-234a4450783f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:06 np0005539564 NetworkManager[48997]: <info>  [1764405366.2535] manager: (tape259a30d-70): new Veth device (/org/freedesktop/NetworkManager/Devices/319)
Nov 29 03:36:06 np0005539564 systemd-udevd[292492]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.300 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[312cbb1c-8cfc-4bf9-9c3d-85a61f69d3f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.305 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[b15e325e-a322-438c-a490-8c74d51b5ada]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:06 np0005539564 NetworkManager[48997]: <info>  [1764405366.3392] device (tape259a30d-70): carrier: link connected
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.352 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[c497ac53-65ba-443f-ba8e-b32d682d55c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.383 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c115e8-2226-48da-9222-13d6b13577a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape259a30d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:ff:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 834102, 'reachable_time': 29557, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292523, 'error': None, 'target': 'ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.410 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8418faed-8394-4eb5-b637-441ef4d5856f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:ff36'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 834102, 'tstamp': 834102}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292524, 'error': None, 'target': 'ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.444 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5e51bf02-2ed0-4c35-9586-7f14d3e0dcef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape259a30d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:ff:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 834102, 'reachable_time': 29557, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292525, 'error': None, 'target': 'ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.494 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a65a68-9607-4298-b887-38c94a231f8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.559 226310 DEBUG nova.compute.manager [req-f9a21f27-aee5-4d0e-aa5b-7b9f01728cc6 req-5952340b-947f-46c5-93a0-e008fecd9ca4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.561 226310 DEBUG oslo_concurrency.lockutils [req-f9a21f27-aee5-4d0e-aa5b-7b9f01728cc6 req-5952340b-947f-46c5-93a0-e008fecd9ca4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.561 226310 DEBUG oslo_concurrency.lockutils [req-f9a21f27-aee5-4d0e-aa5b-7b9f01728cc6 req-5952340b-947f-46c5-93a0-e008fecd9ca4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.562 226310 DEBUG oslo_concurrency.lockutils [req-f9a21f27-aee5-4d0e-aa5b-7b9f01728cc6 req-5952340b-947f-46c5-93a0-e008fecd9ca4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.563 226310 DEBUG nova.compute.manager [req-f9a21f27-aee5-4d0e-aa5b-7b9f01728cc6 req-5952340b-947f-46c5-93a0-e008fecd9ca4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] No waiting events found dispatching network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.563 226310 WARNING nova.compute.manager [req-f9a21f27-aee5-4d0e-aa5b-7b9f01728cc6 req-5952340b-947f-46c5-93a0-e008fecd9ca4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received unexpected event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 for instance with vm_state active and task_state resize_finish.#033[00m
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.596 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[37385977-ff99-4ae6-bb1d-b672931ec575]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.598 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape259a30d-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.598 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.599 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape259a30d-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.601 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:06 np0005539564 kernel: tape259a30d-70: entered promiscuous mode
Nov 29 03:36:06 np0005539564 NetworkManager[48997]: <info>  [1764405366.6028] manager: (tape259a30d-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/320)
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.607 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape259a30d-70, col_values=(('external_ids', {'iface-id': '26e698e1-ae82-4653-b3c1-2c8f8d7f1139'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:06 np0005539564 ovn_controller[130591]: 2025-11-29T08:36:06Z|00696|binding|INFO|Releasing lport 26e698e1-ae82-4653-b3c1-2c8f8d7f1139 from this chassis (sb_readonly=0)
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.608 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.611 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e259a30d-7e3f-48b9-abdf-dc7aa571c14c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e259a30d-7e3f-48b9-abdf-dc7aa571c14c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.612 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0e6199e3-daa0-4a89-929e-42347df4cad6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.613 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-e259a30d-7e3f-48b9-abdf-dc7aa571c14c
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/e259a30d-7e3f-48b9-abdf-dc7aa571c14c.pid.haproxy
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID e259a30d-7e3f-48b9-abdf-dc7aa571c14c
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:36:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:06.614 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c', 'env', 'PROCESS_TAG=haproxy-e259a30d-7e3f-48b9-abdf-dc7aa571c14c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e259a30d-7e3f-48b9-abdf-dc7aa571c14c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:36:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:36:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:06.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.641 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.713 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405366.7133703, e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.714 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.716 226310 DEBUG nova.compute.manager [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.721 226310 INFO nova.virt.libvirt.driver [-] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Instance running successfully.#033[00m
Nov 29 03:36:06 np0005539564 virtqemud[225880]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.724 226310 DEBUG nova.virt.libvirt.guest [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.725 226310 DEBUG nova.virt.libvirt.driver [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.732 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.737 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.759 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.760 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405366.7140977, e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.761 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] VM Started (Lifecycle Event)#033[00m
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.800 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.805 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:36:06 np0005539564 nova_compute[226295]: 2025-11-29 08:36:06.827 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:36:07 np0005539564 podman[292599]: 2025-11-29 08:36:07.060980237 +0000 UTC m=+0.086418639 container create df977f004e781d76787ce7ef664e0e047220243604dc503aa86697768e9d83d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 03:36:07 np0005539564 podman[292599]: 2025-11-29 08:36:07.013678908 +0000 UTC m=+0.039117350 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:36:07 np0005539564 systemd[1]: Started libpod-conmon-df977f004e781d76787ce7ef664e0e047220243604dc503aa86697768e9d83d3.scope.
Nov 29 03:36:07 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:36:07 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4524857db462d437b5482d68e993a227b86acdf1da5bb74ddc50144b53075b9c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:36:07 np0005539564 podman[292599]: 2025-11-29 08:36:07.173734888 +0000 UTC m=+0.199173300 container init df977f004e781d76787ce7ef664e0e047220243604dc503aa86697768e9d83d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:36:07 np0005539564 podman[292599]: 2025-11-29 08:36:07.181899029 +0000 UTC m=+0.207337421 container start df977f004e781d76787ce7ef664e0e047220243604dc503aa86697768e9d83d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:36:07 np0005539564 neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c[292614]: [NOTICE]   (292618) : New worker (292620) forked
Nov 29 03:36:07 np0005539564 neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c[292614]: [NOTICE]   (292618) : Loading success.
Nov 29 03:36:07 np0005539564 nova_compute[226295]: 2025-11-29 08:36:07.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:07 np0005539564 nova_compute[226295]: 2025-11-29 08:36:07.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:36:07 np0005539564 nova_compute[226295]: 2025-11-29 08:36:07.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:36:07 np0005539564 nova_compute[226295]: 2025-11-29 08:36:07.852 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:36:07 np0005539564 nova_compute[226295]: 2025-11-29 08:36:07.854 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:36:07 np0005539564 nova_compute[226295]: 2025-11-29 08:36:07.854 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:36:07 np0005539564 nova_compute[226295]: 2025-11-29 08:36:07.855 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:07.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:08 np0005539564 podman[292631]: 2025-11-29 08:36:08.497322047 +0000 UTC m=+0.055298748 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 03:36:08 np0005539564 podman[292630]: 2025-11-29 08:36:08.500734349 +0000 UTC m=+0.059358357 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:36:08 np0005539564 podman[292629]: 2025-11-29 08:36:08.531658765 +0000 UTC m=+0.083829779 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 03:36:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:08.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:08 np0005539564 nova_compute[226295]: 2025-11-29 08:36:08.730 226310 DEBUG nova.compute.manager [req-462184b7-f9e8-49d9-856f-430fdbf72d02 req-f8ae818f-0745-4c97-958e-1bad7ef3c5ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:08 np0005539564 nova_compute[226295]: 2025-11-29 08:36:08.731 226310 DEBUG oslo_concurrency.lockutils [req-462184b7-f9e8-49d9-856f-430fdbf72d02 req-f8ae818f-0745-4c97-958e-1bad7ef3c5ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:08 np0005539564 nova_compute[226295]: 2025-11-29 08:36:08.731 226310 DEBUG oslo_concurrency.lockutils [req-462184b7-f9e8-49d9-856f-430fdbf72d02 req-f8ae818f-0745-4c97-958e-1bad7ef3c5ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:08 np0005539564 nova_compute[226295]: 2025-11-29 08:36:08.731 226310 DEBUG oslo_concurrency.lockutils [req-462184b7-f9e8-49d9-856f-430fdbf72d02 req-f8ae818f-0745-4c97-958e-1bad7ef3c5ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:08 np0005539564 nova_compute[226295]: 2025-11-29 08:36:08.732 226310 DEBUG nova.compute.manager [req-462184b7-f9e8-49d9-856f-430fdbf72d02 req-f8ae818f-0745-4c97-958e-1bad7ef3c5ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] No waiting events found dispatching network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:08 np0005539564 nova_compute[226295]: 2025-11-29 08:36:08.732 226310 WARNING nova.compute.manager [req-462184b7-f9e8-49d9-856f-430fdbf72d02 req-f8ae818f-0745-4c97-958e-1bad7ef3c5ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received unexpected event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:36:09 np0005539564 nova_compute[226295]: 2025-11-29 08:36:09.098 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:09.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:09 np0005539564 nova_compute[226295]: 2025-11-29 08:36:09.872 226310 DEBUG nova.network.neutron [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Port 56a99c82-c7f3-45ce-8952-bb1fdd178381 binding to destination host compute-1.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171#033[00m
Nov 29 03:36:09 np0005539564 nova_compute[226295]: 2025-11-29 08:36:09.872 226310 DEBUG oslo_concurrency.lockutils [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:36:10 np0005539564 nova_compute[226295]: 2025-11-29 08:36:10.204 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updating instance_info_cache with network_info: [{"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:10 np0005539564 nova_compute[226295]: 2025-11-29 08:36:10.233 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:36:10 np0005539564 nova_compute[226295]: 2025-11-29 08:36:10.234 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:36:10 np0005539564 nova_compute[226295]: 2025-11-29 08:36:10.234 226310 DEBUG oslo_concurrency.lockutils [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquired lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:36:10 np0005539564 nova_compute[226295]: 2025-11-29 08:36:10.234 226310 DEBUG nova.network.neutron [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:36:10 np0005539564 nova_compute[226295]: 2025-11-29 08:36:10.236 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:10.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:10 np0005539564 nova_compute[226295]: 2025-11-29 08:36:10.773 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:11 np0005539564 nova_compute[226295]: 2025-11-29 08:36:11.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:36:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:11.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.307 226310 DEBUG nova.network.neutron [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updating instance_info_cache with network_info: [{"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.327 226310 DEBUG oslo_concurrency.lockutils [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Releasing lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:36:12 np0005539564 kernel: tap56a99c82-c7 (unregistering): left promiscuous mode
Nov 29 03:36:12 np0005539564 NetworkManager[48997]: <info>  [1764405372.4381] device (tap56a99c82-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:36:12 np0005539564 ovn_controller[130591]: 2025-11-29T08:36:12Z|00697|binding|INFO|Releasing lport 56a99c82-c7f3-45ce-8952-bb1fdd178381 from this chassis (sb_readonly=0)
Nov 29 03:36:12 np0005539564 ovn_controller[130591]: 2025-11-29T08:36:12Z|00698|binding|INFO|Setting lport 56a99c82-c7f3-45ce-8952-bb1fdd178381 down in Southbound
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.451 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:12 np0005539564 ovn_controller[130591]: 2025-11-29T08:36:12Z|00699|binding|INFO|Removing iface tap56a99c82-c7 ovn-installed in OVS
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.453 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:12.460 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:ee:a4 10.100.0.11'], port_security=['fa:16:3e:dc:ee:a4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e2ac4a3e-8e9f-481b-9493-37a7fcdddec0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e259a30d-7e3f-48b9-abdf-dc7aa571c14c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'd444d77c-01c4-4fb8-ba04-a10761695979', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.219', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0167e73-34b5-4b34-9484-783b07e45b22, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=56a99c82-c7f3-45ce-8952-bb1fdd178381) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:36:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:12.462 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 56a99c82-c7f3-45ce-8952-bb1fdd178381 in datapath e259a30d-7e3f-48b9-abdf-dc7aa571c14c unbound from our chassis#033[00m
Nov 29 03:36:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:12.463 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e259a30d-7e3f-48b9-abdf-dc7aa571c14c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:36:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:12.464 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[cc52c93d-3434-4de0-8b0e-22e7b2c465fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:12.464 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c namespace which is not needed anymore#033[00m
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.470 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:12 np0005539564 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000b1.scope: Deactivated successfully.
Nov 29 03:36:12 np0005539564 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000b1.scope: Consumed 6.463s CPU time.
Nov 29 03:36:12 np0005539564 systemd-machined[190128]: Machine qemu-83-instance-000000b1 terminated.
Nov 29 03:36:12 np0005539564 neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c[292614]: [NOTICE]   (292618) : haproxy version is 2.8.14-c23fe91
Nov 29 03:36:12 np0005539564 neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c[292614]: [NOTICE]   (292618) : path to executable is /usr/sbin/haproxy
Nov 29 03:36:12 np0005539564 neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c[292614]: [WARNING]  (292618) : Exiting Master process...
Nov 29 03:36:12 np0005539564 neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c[292614]: [ALERT]    (292618) : Current worker (292620) exited with code 143 (Terminated)
Nov 29 03:36:12 np0005539564 neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c[292614]: [WARNING]  (292618) : All workers exited. Exiting... (0)
Nov 29 03:36:12 np0005539564 systemd[1]: libpod-df977f004e781d76787ce7ef664e0e047220243604dc503aa86697768e9d83d3.scope: Deactivated successfully.
Nov 29 03:36:12 np0005539564 podman[292717]: 2025-11-29 08:36:12.622546091 +0000 UTC m=+0.046443528 container died df977f004e781d76787ce7ef664e0e047220243604dc503aa86697768e9d83d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:36:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:12.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.668 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.683 226310 INFO nova.virt.libvirt.driver [-] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Instance destroyed successfully.#033[00m
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.684 226310 DEBUG nova.objects.instance [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'resources' on Instance uuid e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:12 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-df977f004e781d76787ce7ef664e0e047220243604dc503aa86697768e9d83d3-userdata-shm.mount: Deactivated successfully.
Nov 29 03:36:12 np0005539564 systemd[1]: var-lib-containers-storage-overlay-4524857db462d437b5482d68e993a227b86acdf1da5bb74ddc50144b53075b9c-merged.mount: Deactivated successfully.
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.698 226310 DEBUG nova.virt.libvirt.vif [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:35:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1486764034',display_name='tempest-TestNetworkAdvancedServerOps-server-1486764034',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1486764034',id=177,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJlZsMiAKOr+1mOJ9gsr3FgWBE+mKwRnJkRBUHqhee24xo71b8dlrKwXDFbukNzcIWmQZvBI4Ju6SAH+rRZvrJVzvxQlKC2PN7cQRHMeK9LWhS/kLn4nic2/QWwXvrAG3A==',key_name='tempest-TestNetworkAdvancedServerOps-1855444884',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:36:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-wqp7y19b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:36:06Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=e2ac4a3e-8e9f-481b-9493-37a7fcdddec0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.698 226310 DEBUG nova.network.os_vif_util [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.699 226310 DEBUG nova.network.os_vif_util [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dc:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=56a99c82-c7f3-45ce-8952-bb1fdd178381,network=Network(e259a30d-7e3f-48b9-abdf-dc7aa571c14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56a99c82-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.700 226310 DEBUG os_vif [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=56a99c82-c7f3-45ce-8952-bb1fdd178381,network=Network(e259a30d-7e3f-48b9-abdf-dc7aa571c14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56a99c82-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.702 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.702 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap56a99c82-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.705 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.709 226310 DEBUG nova.compute.manager [req-6e9fa8d7-726b-4bf6-8107-8fedff1d2171 req-bbafa856-aa0c-4f25-9b3b-b550c7aa5ddc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-vif-unplugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:12 np0005539564 podman[292717]: 2025-11-29 08:36:12.709428121 +0000 UTC m=+0.133325568 container cleanup df977f004e781d76787ce7ef664e0e047220243604dc503aa86697768e9d83d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.710 226310 DEBUG oslo_concurrency.lockutils [req-6e9fa8d7-726b-4bf6-8107-8fedff1d2171 req-bbafa856-aa0c-4f25-9b3b-b550c7aa5ddc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.711 226310 DEBUG oslo_concurrency.lockutils [req-6e9fa8d7-726b-4bf6-8107-8fedff1d2171 req-bbafa856-aa0c-4f25-9b3b-b550c7aa5ddc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.711 226310 DEBUG oslo_concurrency.lockutils [req-6e9fa8d7-726b-4bf6-8107-8fedff1d2171 req-bbafa856-aa0c-4f25-9b3b-b550c7aa5ddc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.712 226310 DEBUG nova.compute.manager [req-6e9fa8d7-726b-4bf6-8107-8fedff1d2171 req-bbafa856-aa0c-4f25-9b3b-b550c7aa5ddc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] No waiting events found dispatching network-vif-unplugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.712 226310 WARNING nova.compute.manager [req-6e9fa8d7-726b-4bf6-8107-8fedff1d2171 req-bbafa856-aa0c-4f25-9b3b-b550c7aa5ddc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received unexpected event network-vif-unplugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 for instance with vm_state resized and task_state resize_reverting.#033[00m
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.712 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.715 226310 INFO os_vif [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=56a99c82-c7f3-45ce-8952-bb1fdd178381,network=Network(e259a30d-7e3f-48b9-abdf-dc7aa571c14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56a99c82-c7')#033[00m
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.719 226310 DEBUG oslo_concurrency.lockutils [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.720 226310 DEBUG oslo_concurrency.lockutils [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:12 np0005539564 systemd[1]: libpod-conmon-df977f004e781d76787ce7ef664e0e047220243604dc503aa86697768e9d83d3.scope: Deactivated successfully.
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.741 226310 DEBUG nova.objects.instance [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'migration_context' on Instance uuid e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:12 np0005539564 podman[292758]: 2025-11-29 08:36:12.789525868 +0000 UTC m=+0.050316873 container remove df977f004e781d76787ce7ef664e0e047220243604dc503aa86697768e9d83d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:36:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:12.805 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ae5e82b9-aa7b-4730-862c-c8b39513820e]: (4, ('Sat Nov 29 08:36:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c (df977f004e781d76787ce7ef664e0e047220243604dc503aa86697768e9d83d3)\ndf977f004e781d76787ce7ef664e0e047220243604dc503aa86697768e9d83d3\nSat Nov 29 08:36:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c (df977f004e781d76787ce7ef664e0e047220243604dc503aa86697768e9d83d3)\ndf977f004e781d76787ce7ef664e0e047220243604dc503aa86697768e9d83d3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:12.807 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[780b72ba-7e9e-475d-ac2e-5ec7e11c4a29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:12.808 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape259a30d-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.810 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:12 np0005539564 kernel: tape259a30d-70: left promiscuous mode
Nov 29 03:36:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:12.816 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ca656536-a71e-4e11-b005-f4b6a5718254]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.823 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:12.833 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e57acc25-23a5-4fcc-a1a2-fe90c6a991fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:12.835 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5c0a7369-43d2-47c7-af7e-674ddf972ee9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:12 np0005539564 nova_compute[226295]: 2025-11-29 08:36:12.839 226310 DEBUG oslo_concurrency.processutils [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:12.853 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[cf21b79f-fccb-447c-be5a-50598ea09ae3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 834091, 'reachable_time': 33661, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292773, 'error': None, 'target': 'ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:12.856 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:36:12 np0005539564 systemd[1]: run-netns-ovnmeta\x2de259a30d\x2d7e3f\x2d48b9\x2dabdf\x2ddc7aa571c14c.mount: Deactivated successfully.
Nov 29 03:36:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:12.856 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[0b5d14c9-131e-452b-ad84-345910fa541b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:13 np0005539564 nova_compute[226295]: 2025-11-29 08:36:13.298 226310 DEBUG oslo_concurrency.processutils [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:13 np0005539564 nova_compute[226295]: 2025-11-29 08:36:13.305 226310 DEBUG nova.compute.provider_tree [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:36:13 np0005539564 nova_compute[226295]: 2025-11-29 08:36:13.340 226310 DEBUG nova.scheduler.client.report [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:36:13 np0005539564 nova_compute[226295]: 2025-11-29 08:36:13.510 226310 DEBUG oslo_concurrency.lockutils [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:13.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:14 np0005539564 nova_compute[226295]: 2025-11-29 08:36:14.100 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:14 np0005539564 nova_compute[226295]: 2025-11-29 08:36:14.138 226310 DEBUG oslo_concurrency.lockutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Acquiring lock "474b1011-d98c-4f65-b0c1-a27fa5964442" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:14 np0005539564 nova_compute[226295]: 2025-11-29 08:36:14.139 226310 DEBUG oslo_concurrency.lockutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lock "474b1011-d98c-4f65-b0c1-a27fa5964442" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:14 np0005539564 nova_compute[226295]: 2025-11-29 08:36:14.157 226310 DEBUG nova.compute.manager [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:36:14 np0005539564 nova_compute[226295]: 2025-11-29 08:36:14.260 226310 DEBUG oslo_concurrency.lockutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:14 np0005539564 nova_compute[226295]: 2025-11-29 08:36:14.261 226310 DEBUG oslo_concurrency.lockutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:14 np0005539564 nova_compute[226295]: 2025-11-29 08:36:14.275 226310 DEBUG nova.virt.hardware [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:36:14 np0005539564 nova_compute[226295]: 2025-11-29 08:36:14.275 226310 INFO nova.compute.claims [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:36:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:14 np0005539564 nova_compute[226295]: 2025-11-29 08:36:14.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:14 np0005539564 nova_compute[226295]: 2025-11-29 08:36:14.449 226310 DEBUG oslo_concurrency.processutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:14.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:14 np0005539564 nova_compute[226295]: 2025-11-29 08:36:14.911 226310 DEBUG nova.compute.manager [req-44628e13-5b38-42e9-9251-58f284ce2f58 req-29dcbbc5-8a22-49b7-9664-a8453d88b5cc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:14 np0005539564 nova_compute[226295]: 2025-11-29 08:36:14.912 226310 DEBUG oslo_concurrency.lockutils [req-44628e13-5b38-42e9-9251-58f284ce2f58 req-29dcbbc5-8a22-49b7-9664-a8453d88b5cc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:14 np0005539564 nova_compute[226295]: 2025-11-29 08:36:14.913 226310 DEBUG oslo_concurrency.lockutils [req-44628e13-5b38-42e9-9251-58f284ce2f58 req-29dcbbc5-8a22-49b7-9664-a8453d88b5cc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:14 np0005539564 nova_compute[226295]: 2025-11-29 08:36:14.914 226310 DEBUG oslo_concurrency.lockutils [req-44628e13-5b38-42e9-9251-58f284ce2f58 req-29dcbbc5-8a22-49b7-9664-a8453d88b5cc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:14 np0005539564 nova_compute[226295]: 2025-11-29 08:36:14.914 226310 DEBUG nova.compute.manager [req-44628e13-5b38-42e9-9251-58f284ce2f58 req-29dcbbc5-8a22-49b7-9664-a8453d88b5cc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] No waiting events found dispatching network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:14 np0005539564 nova_compute[226295]: 2025-11-29 08:36:14.915 226310 WARNING nova.compute.manager [req-44628e13-5b38-42e9-9251-58f284ce2f58 req-29dcbbc5-8a22-49b7-9664-a8453d88b5cc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received unexpected event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 for instance with vm_state resized and task_state resize_reverting.#033[00m
Nov 29 03:36:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:36:14 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1735074097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:36:14 np0005539564 nova_compute[226295]: 2025-11-29 08:36:14.935 226310 DEBUG oslo_concurrency.processutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:14 np0005539564 nova_compute[226295]: 2025-11-29 08:36:14.945 226310 DEBUG nova.compute.provider_tree [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:36:14 np0005539564 nova_compute[226295]: 2025-11-29 08:36:14.968 226310 DEBUG nova.scheduler.client.report [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:36:15 np0005539564 nova_compute[226295]: 2025-11-29 08:36:15.011 226310 DEBUG oslo_concurrency.lockutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:15 np0005539564 nova_compute[226295]: 2025-11-29 08:36:15.012 226310 DEBUG nova.compute.manager [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:36:15 np0005539564 nova_compute[226295]: 2025-11-29 08:36:15.131 226310 DEBUG nova.compute.manager [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:36:15 np0005539564 nova_compute[226295]: 2025-11-29 08:36:15.132 226310 DEBUG nova.network.neutron [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:36:15 np0005539564 nova_compute[226295]: 2025-11-29 08:36:15.197 226310 INFO nova.virt.libvirt.driver [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:36:15 np0005539564 nova_compute[226295]: 2025-11-29 08:36:15.221 226310 DEBUG nova.compute.manager [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:36:15 np0005539564 nova_compute[226295]: 2025-11-29 08:36:15.390 226310 DEBUG nova.compute.manager [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:36:15 np0005539564 nova_compute[226295]: 2025-11-29 08:36:15.393 226310 DEBUG nova.virt.libvirt.driver [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:36:15 np0005539564 nova_compute[226295]: 2025-11-29 08:36:15.394 226310 INFO nova.virt.libvirt.driver [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Creating image(s)#033[00m
Nov 29 03:36:15 np0005539564 nova_compute[226295]: 2025-11-29 08:36:15.441 226310 DEBUG nova.storage.rbd_utils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] rbd image 474b1011-d98c-4f65-b0c1-a27fa5964442_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:15 np0005539564 nova_compute[226295]: 2025-11-29 08:36:15.481 226310 DEBUG nova.storage.rbd_utils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] rbd image 474b1011-d98c-4f65-b0c1-a27fa5964442_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:15 np0005539564 nova_compute[226295]: 2025-11-29 08:36:15.517 226310 DEBUG nova.storage.rbd_utils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] rbd image 474b1011-d98c-4f65-b0c1-a27fa5964442_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:15 np0005539564 nova_compute[226295]: 2025-11-29 08:36:15.523 226310 DEBUG oslo_concurrency.processutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:15 np0005539564 nova_compute[226295]: 2025-11-29 08:36:15.560 226310 DEBUG nova.policy [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6126044d5f7d49d19e3feffbc3034024', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3827b5eea76e4810b48ea1733ae5edc4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:36:15 np0005539564 nova_compute[226295]: 2025-11-29 08:36:15.603 226310 DEBUG oslo_concurrency.processutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:15 np0005539564 nova_compute[226295]: 2025-11-29 08:36:15.604 226310 DEBUG oslo_concurrency.lockutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:15 np0005539564 nova_compute[226295]: 2025-11-29 08:36:15.605 226310 DEBUG oslo_concurrency.lockutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:15 np0005539564 nova_compute[226295]: 2025-11-29 08:36:15.605 226310 DEBUG oslo_concurrency.lockutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:15 np0005539564 nova_compute[226295]: 2025-11-29 08:36:15.638 226310 DEBUG nova.storage.rbd_utils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] rbd image 474b1011-d98c-4f65-b0c1-a27fa5964442_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:15 np0005539564 nova_compute[226295]: 2025-11-29 08:36:15.642 226310 DEBUG oslo_concurrency.processutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 474b1011-d98c-4f65-b0c1-a27fa5964442_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:15.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:15 np0005539564 nova_compute[226295]: 2025-11-29 08:36:15.912 226310 DEBUG nova.compute.manager [req-54e54a49-ed37-4973-bb61-7091ff87ad3b req-d274f428-80e4-4c39-a2ff-80af3e3bada5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-changed-56a99c82-c7f3-45ce-8952-bb1fdd178381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:15 np0005539564 nova_compute[226295]: 2025-11-29 08:36:15.912 226310 DEBUG nova.compute.manager [req-54e54a49-ed37-4973-bb61-7091ff87ad3b req-d274f428-80e4-4c39-a2ff-80af3e3bada5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Refreshing instance network info cache due to event network-changed-56a99c82-c7f3-45ce-8952-bb1fdd178381. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:36:15 np0005539564 nova_compute[226295]: 2025-11-29 08:36:15.913 226310 DEBUG oslo_concurrency.lockutils [req-54e54a49-ed37-4973-bb61-7091ff87ad3b req-d274f428-80e4-4c39-a2ff-80af3e3bada5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:36:15 np0005539564 nova_compute[226295]: 2025-11-29 08:36:15.915 226310 DEBUG oslo_concurrency.lockutils [req-54e54a49-ed37-4973-bb61-7091ff87ad3b req-d274f428-80e4-4c39-a2ff-80af3e3bada5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:36:15 np0005539564 nova_compute[226295]: 2025-11-29 08:36:15.920 226310 DEBUG nova.network.neutron [req-54e54a49-ed37-4973-bb61-7091ff87ad3b req-d274f428-80e4-4c39-a2ff-80af3e3bada5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Refreshing network info cache for port 56a99c82-c7f3-45ce-8952-bb1fdd178381 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:36:15 np0005539564 nova_compute[226295]: 2025-11-29 08:36:15.995 226310 DEBUG oslo_concurrency.processutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 474b1011-d98c-4f65-b0c1-a27fa5964442_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:16 np0005539564 nova_compute[226295]: 2025-11-29 08:36:16.111 226310 DEBUG nova.storage.rbd_utils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] resizing rbd image 474b1011-d98c-4f65-b0c1-a27fa5964442_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:36:16 np0005539564 nova_compute[226295]: 2025-11-29 08:36:16.257 226310 DEBUG nova.objects.instance [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lazy-loading 'migration_context' on Instance uuid 474b1011-d98c-4f65-b0c1-a27fa5964442 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:16 np0005539564 nova_compute[226295]: 2025-11-29 08:36:16.298 226310 DEBUG nova.virt.libvirt.driver [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:36:16 np0005539564 nova_compute[226295]: 2025-11-29 08:36:16.299 226310 DEBUG nova.virt.libvirt.driver [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Ensure instance console log exists: /var/lib/nova/instances/474b1011-d98c-4f65-b0c1-a27fa5964442/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:36:16 np0005539564 nova_compute[226295]: 2025-11-29 08:36:16.299 226310 DEBUG oslo_concurrency.lockutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:16 np0005539564 nova_compute[226295]: 2025-11-29 08:36:16.300 226310 DEBUG oslo_concurrency.lockutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:16 np0005539564 nova_compute[226295]: 2025-11-29 08:36:16.300 226310 DEBUG oslo_concurrency.lockutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:16.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:17 np0005539564 nova_compute[226295]: 2025-11-29 08:36:17.695 226310 DEBUG nova.network.neutron [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Successfully created port: 2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:36:17 np0005539564 nova_compute[226295]: 2025-11-29 08:36:17.707 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:17.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:18 np0005539564 nova_compute[226295]: 2025-11-29 08:36:18.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:18.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:18 np0005539564 nova_compute[226295]: 2025-11-29 08:36:18.873 226310 DEBUG nova.network.neutron [req-54e54a49-ed37-4973-bb61-7091ff87ad3b req-d274f428-80e4-4c39-a2ff-80af3e3bada5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updated VIF entry in instance network info cache for port 56a99c82-c7f3-45ce-8952-bb1fdd178381. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:36:18 np0005539564 nova_compute[226295]: 2025-11-29 08:36:18.875 226310 DEBUG nova.network.neutron [req-54e54a49-ed37-4973-bb61-7091ff87ad3b req-d274f428-80e4-4c39-a2ff-80af3e3bada5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updating instance_info_cache with network_info: [{"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:18 np0005539564 nova_compute[226295]: 2025-11-29 08:36:18.907 226310 DEBUG oslo_concurrency.lockutils [req-54e54a49-ed37-4973-bb61-7091ff87ad3b req-d274f428-80e4-4c39-a2ff-80af3e3bada5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:36:19 np0005539564 nova_compute[226295]: 2025-11-29 08:36:19.103 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:19 np0005539564 nova_compute[226295]: 2025-11-29 08:36:19.217 226310 DEBUG nova.network.neutron [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Successfully updated port: 2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:36:19 np0005539564 nova_compute[226295]: 2025-11-29 08:36:19.252 226310 DEBUG oslo_concurrency.lockutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Acquiring lock "refresh_cache-474b1011-d98c-4f65-b0c1-a27fa5964442" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:36:19 np0005539564 nova_compute[226295]: 2025-11-29 08:36:19.252 226310 DEBUG oslo_concurrency.lockutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Acquired lock "refresh_cache-474b1011-d98c-4f65-b0c1-a27fa5964442" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:36:19 np0005539564 nova_compute[226295]: 2025-11-29 08:36:19.253 226310 DEBUG nova.network.neutron [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:36:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:19 np0005539564 nova_compute[226295]: 2025-11-29 08:36:19.320 226310 DEBUG nova.compute.manager [req-b0d48736-dc57-4cbf-920a-8f54d3d1761c req-0ad846be-249c-4f1d-b4c3-2729caf08a0c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Received event network-changed-2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:19 np0005539564 nova_compute[226295]: 2025-11-29 08:36:19.320 226310 DEBUG nova.compute.manager [req-b0d48736-dc57-4cbf-920a-8f54d3d1761c req-0ad846be-249c-4f1d-b4c3-2729caf08a0c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Refreshing instance network info cache due to event network-changed-2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:36:19 np0005539564 nova_compute[226295]: 2025-11-29 08:36:19.321 226310 DEBUG oslo_concurrency.lockutils [req-b0d48736-dc57-4cbf-920a-8f54d3d1761c req-0ad846be-249c-4f1d-b4c3-2729caf08a0c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-474b1011-d98c-4f65-b0c1-a27fa5964442" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:36:19 np0005539564 nova_compute[226295]: 2025-11-29 08:36:19.869 226310 DEBUG nova.network.neutron [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:36:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:19.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:20.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e406 e406: 3 total, 3 up, 3 in
Nov 29 03:36:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:36:21 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1326930679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:36:21 np0005539564 nova_compute[226295]: 2025-11-29 08:36:21.272 226310 DEBUG nova.network.neutron [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Updating instance_info_cache with network_info: [{"id": "2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf", "address": "fa:16:3e:d1:15:c7", "network": {"id": "fbe8f8c4-9ee0-49cc-ba19-f04282093b22", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1964942420-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3827b5eea76e4810b48ea1733ae5edc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eb5eadd-55", "ovs_interfaceid": "2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:21 np0005539564 nova_compute[226295]: 2025-11-29 08:36:21.305 226310 DEBUG oslo_concurrency.lockutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Releasing lock "refresh_cache-474b1011-d98c-4f65-b0c1-a27fa5964442" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:36:21 np0005539564 nova_compute[226295]: 2025-11-29 08:36:21.306 226310 DEBUG nova.compute.manager [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Instance network_info: |[{"id": "2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf", "address": "fa:16:3e:d1:15:c7", "network": {"id": "fbe8f8c4-9ee0-49cc-ba19-f04282093b22", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1964942420-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3827b5eea76e4810b48ea1733ae5edc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eb5eadd-55", "ovs_interfaceid": "2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:36:21 np0005539564 nova_compute[226295]: 2025-11-29 08:36:21.307 226310 DEBUG oslo_concurrency.lockutils [req-b0d48736-dc57-4cbf-920a-8f54d3d1761c req-0ad846be-249c-4f1d-b4c3-2729caf08a0c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-474b1011-d98c-4f65-b0c1-a27fa5964442" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:36:21 np0005539564 nova_compute[226295]: 2025-11-29 08:36:21.308 226310 DEBUG nova.network.neutron [req-b0d48736-dc57-4cbf-920a-8f54d3d1761c req-0ad846be-249c-4f1d-b4c3-2729caf08a0c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Refreshing network info cache for port 2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:36:21 np0005539564 nova_compute[226295]: 2025-11-29 08:36:21.313 226310 DEBUG nova.virt.libvirt.driver [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Start _get_guest_xml network_info=[{"id": "2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf", "address": "fa:16:3e:d1:15:c7", "network": {"id": "fbe8f8c4-9ee0-49cc-ba19-f04282093b22", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1964942420-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3827b5eea76e4810b48ea1733ae5edc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eb5eadd-55", "ovs_interfaceid": "2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:36:21 np0005539564 nova_compute[226295]: 2025-11-29 08:36:21.320 226310 WARNING nova.virt.libvirt.driver [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:36:21 np0005539564 nova_compute[226295]: 2025-11-29 08:36:21.326 226310 DEBUG nova.virt.libvirt.host [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:36:21 np0005539564 nova_compute[226295]: 2025-11-29 08:36:21.327 226310 DEBUG nova.virt.libvirt.host [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:36:21 np0005539564 nova_compute[226295]: 2025-11-29 08:36:21.340 226310 DEBUG nova.virt.libvirt.host [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:36:21 np0005539564 nova_compute[226295]: 2025-11-29 08:36:21.341 226310 DEBUG nova.virt.libvirt.host [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:36:21 np0005539564 nova_compute[226295]: 2025-11-29 08:36:21.344 226310 DEBUG nova.virt.libvirt.driver [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:36:21 np0005539564 nova_compute[226295]: 2025-11-29 08:36:21.344 226310 DEBUG nova.virt.hardware [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:36:21 np0005539564 nova_compute[226295]: 2025-11-29 08:36:21.345 226310 DEBUG nova.virt.hardware [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:36:21 np0005539564 nova_compute[226295]: 2025-11-29 08:36:21.345 226310 DEBUG nova.virt.hardware [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:36:21 np0005539564 nova_compute[226295]: 2025-11-29 08:36:21.346 226310 DEBUG nova.virt.hardware [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:36:21 np0005539564 nova_compute[226295]: 2025-11-29 08:36:21.346 226310 DEBUG nova.virt.hardware [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:36:21 np0005539564 nova_compute[226295]: 2025-11-29 08:36:21.347 226310 DEBUG nova.virt.hardware [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:36:21 np0005539564 nova_compute[226295]: 2025-11-29 08:36:21.347 226310 DEBUG nova.virt.hardware [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:36:21 np0005539564 nova_compute[226295]: 2025-11-29 08:36:21.348 226310 DEBUG nova.virt.hardware [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:36:21 np0005539564 nova_compute[226295]: 2025-11-29 08:36:21.348 226310 DEBUG nova.virt.hardware [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:36:21 np0005539564 nova_compute[226295]: 2025-11-29 08:36:21.348 226310 DEBUG nova.virt.hardware [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:36:21 np0005539564 nova_compute[226295]: 2025-11-29 08:36:21.349 226310 DEBUG nova.virt.hardware [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:36:21 np0005539564 nova_compute[226295]: 2025-11-29 08:36:21.354 226310 DEBUG oslo_concurrency.processutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:36:21 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1283798638' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:36:21 np0005539564 nova_compute[226295]: 2025-11-29 08:36:21.799 226310 DEBUG oslo_concurrency.processutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:21 np0005539564 nova_compute[226295]: 2025-11-29 08:36:21.841 226310 DEBUG nova.storage.rbd_utils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] rbd image 474b1011-d98c-4f65-b0c1-a27fa5964442_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:21 np0005539564 nova_compute[226295]: 2025-11-29 08:36:21.847 226310 DEBUG oslo_concurrency.processutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:21.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.192 226310 DEBUG nova.compute.manager [req-0a40aa66-01ca-4725-a579-fb8d21ed4cdf req-3173d7ab-076e-4877-90b6-e0afd477bb3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.192 226310 DEBUG oslo_concurrency.lockutils [req-0a40aa66-01ca-4725-a579-fb8d21ed4cdf req-3173d7ab-076e-4877-90b6-e0afd477bb3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.193 226310 DEBUG oslo_concurrency.lockutils [req-0a40aa66-01ca-4725-a579-fb8d21ed4cdf req-3173d7ab-076e-4877-90b6-e0afd477bb3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.193 226310 DEBUG oslo_concurrency.lockutils [req-0a40aa66-01ca-4725-a579-fb8d21ed4cdf req-3173d7ab-076e-4877-90b6-e0afd477bb3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.193 226310 DEBUG nova.compute.manager [req-0a40aa66-01ca-4725-a579-fb8d21ed4cdf req-3173d7ab-076e-4877-90b6-e0afd477bb3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] No waiting events found dispatching network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.194 226310 WARNING nova.compute.manager [req-0a40aa66-01ca-4725-a579-fb8d21ed4cdf req-3173d7ab-076e-4877-90b6-e0afd477bb3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received unexpected event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 for instance with vm_state resized and task_state resize_reverting.#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:36:22 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1127947561' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.377 226310 DEBUG oslo_concurrency.processutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.378 226310 DEBUG nova.virt.libvirt.vif [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:36:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1116560805',display_name='tempest-TestEncryptedCinderVolumes-server-1116560805',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1116560805',id=178,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF6IfcPxXBtbc/f284zGMaia5ypyAl0PsSlPhQNW797gSuX/YRD00VVTcoCqhNkp5uyH86FHfw+PvgtuVGRajrl4XgqYAfvp97CAy6jL7xigmon1ta3aSuIZvA6EeQGZQg==',key_name='tempest-keypair-1106333239',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3827b5eea76e4810b48ea1733ae5edc4',ramdisk_id='',reservation_id='r-0q9ztlc0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1380543897',owner_user_name='tempest-TestEncryptedCinderVolumes-1380543897-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:36:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6126044d5f7d49d19e3feffbc3034024',uuid=474b1011-d98c-4f65-b0c1-a27fa5964442,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf", "address": "fa:16:3e:d1:15:c7", "network": {"id": "fbe8f8c4-9ee0-49cc-ba19-f04282093b22", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1964942420-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3827b5eea76e4810b48ea1733ae5edc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eb5eadd-55", "ovs_interfaceid": "2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.378 226310 DEBUG nova.network.os_vif_util [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Converting VIF {"id": "2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf", "address": "fa:16:3e:d1:15:c7", "network": {"id": "fbe8f8c4-9ee0-49cc-ba19-f04282093b22", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1964942420-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3827b5eea76e4810b48ea1733ae5edc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eb5eadd-55", "ovs_interfaceid": "2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.379 226310 DEBUG nova.network.os_vif_util [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:15:c7,bridge_name='br-int',has_traffic_filtering=True,id=2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf,network=Network(fbe8f8c4-9ee0-49cc-ba19-f04282093b22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eb5eadd-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.380 226310 DEBUG nova.objects.instance [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 474b1011-d98c-4f65-b0c1-a27fa5964442 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.434 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.435 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.435 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.436 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.437 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.492 226310 DEBUG nova.virt.libvirt.driver [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:36:22 np0005539564 nova_compute[226295]:  <uuid>474b1011-d98c-4f65-b0c1-a27fa5964442</uuid>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:  <name>instance-000000b2</name>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-1116560805</nova:name>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:36:21</nova:creationTime>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:36:22 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:        <nova:user uuid="6126044d5f7d49d19e3feffbc3034024">tempest-TestEncryptedCinderVolumes-1380543897-project-member</nova:user>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:        <nova:project uuid="3827b5eea76e4810b48ea1733ae5edc4">tempest-TestEncryptedCinderVolumes-1380543897</nova:project>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:        <nova:port uuid="2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf">
Nov 29 03:36:22 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <entry name="serial">474b1011-d98c-4f65-b0c1-a27fa5964442</entry>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <entry name="uuid">474b1011-d98c-4f65-b0c1-a27fa5964442</entry>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/474b1011-d98c-4f65-b0c1-a27fa5964442_disk">
Nov 29 03:36:22 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:36:22 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/474b1011-d98c-4f65-b0c1-a27fa5964442_disk.config">
Nov 29 03:36:22 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:36:22 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:d1:15:c7"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <target dev="tap2eb5eadd-55"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/474b1011-d98c-4f65-b0c1-a27fa5964442/console.log" append="off"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:36:22 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:36:22 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:36:22 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:36:22 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.494 226310 DEBUG nova.compute.manager [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Preparing to wait for external event network-vif-plugged-2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.495 226310 DEBUG oslo_concurrency.lockutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Acquiring lock "474b1011-d98c-4f65-b0c1-a27fa5964442-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.496 226310 DEBUG oslo_concurrency.lockutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lock "474b1011-d98c-4f65-b0c1-a27fa5964442-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.496 226310 DEBUG oslo_concurrency.lockutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lock "474b1011-d98c-4f65-b0c1-a27fa5964442-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.498 226310 DEBUG nova.virt.libvirt.vif [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:36:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1116560805',display_name='tempest-TestEncryptedCinderVolumes-server-1116560805',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1116560805',id=178,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF6IfcPxXBtbc/f284zGMaia5ypyAl0PsSlPhQNW797gSuX/YRD00VVTcoCqhNkp5uyH86FHfw+PvgtuVGRajrl4XgqYAfvp97CAy6jL7xigmon1ta3aSuIZvA6EeQGZQg==',key_name='tempest-keypair-1106333239',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3827b5eea76e4810b48ea1733ae5edc4',ramdisk_id='',reservation_id='r-0q9ztlc0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1380543897',owner_user_name='tempest-TestEncryptedCinderVolumes-1380543897-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:36:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6126044d5f7d49d19e3feffbc3034024',uuid=474b1011-d98c-4f65-b0c1-a27fa5964442,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf", "address": "fa:16:3e:d1:15:c7", "network": {"id": "fbe8f8c4-9ee0-49cc-ba19-f04282093b22", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1964942420-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3827b5eea76e4810b48ea1733ae5edc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eb5eadd-55", "ovs_interfaceid": "2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.498 226310 DEBUG nova.network.os_vif_util [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Converting VIF {"id": "2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf", "address": "fa:16:3e:d1:15:c7", "network": {"id": "fbe8f8c4-9ee0-49cc-ba19-f04282093b22", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1964942420-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3827b5eea76e4810b48ea1733ae5edc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eb5eadd-55", "ovs_interfaceid": "2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.500 226310 DEBUG nova.network.os_vif_util [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:15:c7,bridge_name='br-int',has_traffic_filtering=True,id=2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf,network=Network(fbe8f8c4-9ee0-49cc-ba19-f04282093b22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eb5eadd-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.501 226310 DEBUG os_vif [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:15:c7,bridge_name='br-int',has_traffic_filtering=True,id=2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf,network=Network(fbe8f8c4-9ee0-49cc-ba19-f04282093b22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eb5eadd-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.502 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.503 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.504 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.510 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.510 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2eb5eadd-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.511 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2eb5eadd-55, col_values=(('external_ids', {'iface-id': '2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d1:15:c7', 'vm-uuid': '474b1011-d98c-4f65-b0c1-a27fa5964442'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.514 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:22 np0005539564 NetworkManager[48997]: <info>  [1764405382.5150] manager: (tap2eb5eadd-55): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/321)
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.518 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.523 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.524 226310 INFO os_vif [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:15:c7,bridge_name='br-int',has_traffic_filtering=True,id=2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf,network=Network(fbe8f8c4-9ee0-49cc-ba19-f04282093b22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eb5eadd-55')#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.611 226310 DEBUG nova.virt.libvirt.driver [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.612 226310 DEBUG nova.virt.libvirt.driver [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.612 226310 DEBUG nova.virt.libvirt.driver [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] No VIF found with MAC fa:16:3e:d1:15:c7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.613 226310 INFO nova.virt.libvirt.driver [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Using config drive#033[00m
Nov 29 03:36:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:22.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:22 np0005539564 nova_compute[226295]: 2025-11-29 08:36:22.648 226310 DEBUG nova.storage.rbd_utils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] rbd image 474b1011-d98c-4f65-b0c1-a27fa5964442_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:36:22 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/55526148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:36:23 np0005539564 nova_compute[226295]: 2025-11-29 08:36:23.002 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:23 np0005539564 nova_compute[226295]: 2025-11-29 08:36:23.149 226310 DEBUG nova.network.neutron [req-b0d48736-dc57-4cbf-920a-8f54d3d1761c req-0ad846be-249c-4f1d-b4c3-2729caf08a0c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Updated VIF entry in instance network info cache for port 2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:36:23 np0005539564 nova_compute[226295]: 2025-11-29 08:36:23.150 226310 DEBUG nova.network.neutron [req-b0d48736-dc57-4cbf-920a-8f54d3d1761c req-0ad846be-249c-4f1d-b4c3-2729caf08a0c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Updating instance_info_cache with network_info: [{"id": "2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf", "address": "fa:16:3e:d1:15:c7", "network": {"id": "fbe8f8c4-9ee0-49cc-ba19-f04282093b22", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1964942420-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3827b5eea76e4810b48ea1733ae5edc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eb5eadd-55", "ovs_interfaceid": "2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:23 np0005539564 nova_compute[226295]: 2025-11-29 08:36:23.168 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:36:23 np0005539564 nova_compute[226295]: 2025-11-29 08:36:23.168 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:36:23 np0005539564 nova_compute[226295]: 2025-11-29 08:36:23.239 226310 INFO nova.virt.libvirt.driver [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Creating config drive at /var/lib/nova/instances/474b1011-d98c-4f65-b0c1-a27fa5964442/disk.config#033[00m
Nov 29 03:36:23 np0005539564 nova_compute[226295]: 2025-11-29 08:36:23.252 226310 DEBUG oslo_concurrency.processutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/474b1011-d98c-4f65-b0c1-a27fa5964442/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdow4id9x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:23 np0005539564 nova_compute[226295]: 2025-11-29 08:36:23.303 226310 DEBUG oslo_concurrency.lockutils [req-b0d48736-dc57-4cbf-920a-8f54d3d1761c req-0ad846be-249c-4f1d-b4c3-2729caf08a0c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-474b1011-d98c-4f65-b0c1-a27fa5964442" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:36:23 np0005539564 nova_compute[226295]: 2025-11-29 08:36:23.414 226310 DEBUG oslo_concurrency.processutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/474b1011-d98c-4f65-b0c1-a27fa5964442/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdow4id9x" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:23 np0005539564 nova_compute[226295]: 2025-11-29 08:36:23.461 226310 DEBUG nova.storage.rbd_utils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] rbd image 474b1011-d98c-4f65-b0c1-a27fa5964442_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:23 np0005539564 nova_compute[226295]: 2025-11-29 08:36:23.467 226310 DEBUG oslo_concurrency.processutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/474b1011-d98c-4f65-b0c1-a27fa5964442/disk.config 474b1011-d98c-4f65-b0c1-a27fa5964442_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:23 np0005539564 nova_compute[226295]: 2025-11-29 08:36:23.557 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:36:23 np0005539564 nova_compute[226295]: 2025-11-29 08:36:23.558 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4284MB free_disk=20.921905517578125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:36:23 np0005539564 nova_compute[226295]: 2025-11-29 08:36:23.559 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:23 np0005539564 nova_compute[226295]: 2025-11-29 08:36:23.559 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:23 np0005539564 nova_compute[226295]: 2025-11-29 08:36:23.731 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 474b1011-d98c-4f65-b0c1-a27fa5964442 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:36:23 np0005539564 nova_compute[226295]: 2025-11-29 08:36:23.732 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:36:23 np0005539564 nova_compute[226295]: 2025-11-29 08:36:23.732 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:36:23 np0005539564 nova_compute[226295]: 2025-11-29 08:36:23.782 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:23.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:24 np0005539564 nova_compute[226295]: 2025-11-29 08:36:24.147 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:36:24 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/224494438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:36:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:24 np0005539564 nova_compute[226295]: 2025-11-29 08:36:24.526 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.745s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:24 np0005539564 nova_compute[226295]: 2025-11-29 08:36:24.534 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:36:24 np0005539564 nova_compute[226295]: 2025-11-29 08:36:24.565 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:36:24 np0005539564 nova_compute[226295]: 2025-11-29 08:36:24.604 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:36:24 np0005539564 nova_compute[226295]: 2025-11-29 08:36:24.604 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:24 np0005539564 nova_compute[226295]: 2025-11-29 08:36:24.607 226310 DEBUG oslo_concurrency.processutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/474b1011-d98c-4f65-b0c1-a27fa5964442/disk.config 474b1011-d98c-4f65-b0c1-a27fa5964442_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:24 np0005539564 nova_compute[226295]: 2025-11-29 08:36:24.608 226310 INFO nova.virt.libvirt.driver [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Deleting local config drive /var/lib/nova/instances/474b1011-d98c-4f65-b0c1-a27fa5964442/disk.config because it was imported into RBD.#033[00m
Nov 29 03:36:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:24.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:24 np0005539564 kernel: tap2eb5eadd-55: entered promiscuous mode
Nov 29 03:36:24 np0005539564 ovn_controller[130591]: 2025-11-29T08:36:24Z|00700|binding|INFO|Claiming lport 2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf for this chassis.
Nov 29 03:36:24 np0005539564 ovn_controller[130591]: 2025-11-29T08:36:24Z|00701|binding|INFO|2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf: Claiming fa:16:3e:d1:15:c7 10.100.0.7
Nov 29 03:36:24 np0005539564 nova_compute[226295]: 2025-11-29 08:36:24.686 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:24 np0005539564 NetworkManager[48997]: <info>  [1764405384.6891] manager: (tap2eb5eadd-55): new Tun device (/org/freedesktop/NetworkManager/Devices/322)
Nov 29 03:36:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:24.695 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:15:c7 10.100.0.7'], port_security=['fa:16:3e:d1:15:c7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '474b1011-d98c-4f65-b0c1-a27fa5964442', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbe8f8c4-9ee0-49cc-ba19-f04282093b22', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3827b5eea76e4810b48ea1733ae5edc4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd93ebbd6-e5fd-45d5-b566-bb2376e0445f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3285817d-ffa0-40db-ad02-760e149d061a, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:36:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:24.697 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf in datapath fbe8f8c4-9ee0-49cc-ba19-f04282093b22 bound to our chassis#033[00m
Nov 29 03:36:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:24.698 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbe8f8c4-9ee0-49cc-ba19-f04282093b22#033[00m
Nov 29 03:36:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:24.715 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2333e6a3-889e-43b6-9081-f0a733843458]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:24 np0005539564 ovn_controller[130591]: 2025-11-29T08:36:24Z|00702|binding|INFO|Setting lport 2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf ovn-installed in OVS
Nov 29 03:36:24 np0005539564 ovn_controller[130591]: 2025-11-29T08:36:24Z|00703|binding|INFO|Setting lport 2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf up in Southbound
Nov 29 03:36:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:24.717 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfbe8f8c4-91 in ovnmeta-fbe8f8c4-9ee0-49cc-ba19-f04282093b22 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:36:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:24.720 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfbe8f8c4-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:36:24 np0005539564 nova_compute[226295]: 2025-11-29 08:36:24.720 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:24.720 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f9ac2882-cb12-44a0-8e8e-078663b6a2f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:24.722 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2c6e8cdb-69d6-4443-9429-bf01e4dde8df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:24 np0005539564 systemd-udevd[293168]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:36:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:24.739 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[56d07b7a-4cbd-42f9-a9b8-2999b074ae38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:24 np0005539564 systemd-machined[190128]: New machine qemu-84-instance-000000b2.
Nov 29 03:36:24 np0005539564 NetworkManager[48997]: <info>  [1764405384.7447] device (tap2eb5eadd-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:36:24 np0005539564 NetworkManager[48997]: <info>  [1764405384.7461] device (tap2eb5eadd-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:36:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:24.756 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[05ce7b49-d1b4-4a17-bda6-39a7aff37a0c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:24 np0005539564 systemd[1]: Started Virtual Machine qemu-84-instance-000000b2.
Nov 29 03:36:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:24.792 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[45af75f9-73fc-4653-b19a-9593c585ad98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:24 np0005539564 NetworkManager[48997]: <info>  [1764405384.7990] manager: (tapfbe8f8c4-90): new Veth device (/org/freedesktop/NetworkManager/Devices/323)
Nov 29 03:36:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:24.798 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4dcd7329-4e2c-47e8-a9fb-171af1422e92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:24 np0005539564 systemd-udevd[293172]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:36:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:24.833 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[426e9475-cabf-46db-b0fc-c129b9806fad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:24.837 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[17c25317-c794-4110-922e-f7a2e62ebe22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:24 np0005539564 NetworkManager[48997]: <info>  [1764405384.8591] device (tapfbe8f8c4-90): carrier: link connected
Nov 29 03:36:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:24.862 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[bc83eb65-ba16-44ed-8fce-06451015bc8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:24.890 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d56fb88c-eeed-4bfc-9a7e-b3e18fa7a96a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbe8f8c4-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:4b:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 214], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 835954, 'reachable_time': 33730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293201, 'error': None, 'target': 'ovnmeta-fbe8f8c4-9ee0-49cc-ba19-f04282093b22', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:24.914 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d3cb0c74-0ec5-458e-9830-50da6555130c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef6:4b62'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 835954, 'tstamp': 835954}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293202, 'error': None, 'target': 'ovnmeta-fbe8f8c4-9ee0-49cc-ba19-f04282093b22', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:24.935 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ebc0d45b-b093-4256-9179-b3e950eeaf54]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbe8f8c4-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:4b:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 214], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 835954, 'reachable_time': 33730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293203, 'error': None, 'target': 'ovnmeta-fbe8f8c4-9ee0-49cc-ba19-f04282093b22', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:24.972 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[593de9fe-f056-43e7-a5b5-bd47500d53c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:25.045 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6cdf97c9-8ddf-4370-99d8-1c4818cb8c4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:25.048 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbe8f8c4-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:25.049 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:25.050 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbe8f8c4-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:25 np0005539564 NetworkManager[48997]: <info>  [1764405385.0538] manager: (tapfbe8f8c4-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/324)
Nov 29 03:36:25 np0005539564 kernel: tapfbe8f8c4-90: entered promiscuous mode
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.053 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:25.063 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbe8f8c4-90, col_values=(('external_ids', {'iface-id': '7d6286ea-1676-407e-9c7d-942018c1227a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.065 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:25 np0005539564 ovn_controller[130591]: 2025-11-29T08:36:25Z|00704|binding|INFO|Releasing lport 7d6286ea-1676-407e-9c7d-942018c1227a from this chassis (sb_readonly=0)
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.067 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:25.080 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fbe8f8c4-9ee0-49cc-ba19-f04282093b22.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fbe8f8c4-9ee0-49cc-ba19-f04282093b22.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.082 226310 DEBUG nova.compute.manager [req-391912df-668d-4c09-8086-752a32f7d60d req-78f3bfae-078a-4688-a5b8-6ef84e75f516 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Received event network-vif-plugged-2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.082 226310 DEBUG oslo_concurrency.lockutils [req-391912df-668d-4c09-8086-752a32f7d60d req-78f3bfae-078a-4688-a5b8-6ef84e75f516 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "474b1011-d98c-4f65-b0c1-a27fa5964442-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.083 226310 DEBUG oslo_concurrency.lockutils [req-391912df-668d-4c09-8086-752a32f7d60d req-78f3bfae-078a-4688-a5b8-6ef84e75f516 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "474b1011-d98c-4f65-b0c1-a27fa5964442-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.084 226310 DEBUG oslo_concurrency.lockutils [req-391912df-668d-4c09-8086-752a32f7d60d req-78f3bfae-078a-4688-a5b8-6ef84e75f516 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "474b1011-d98c-4f65-b0c1-a27fa5964442-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.084 226310 DEBUG nova.compute.manager [req-391912df-668d-4c09-8086-752a32f7d60d req-78f3bfae-078a-4688-a5b8-6ef84e75f516 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Processing event network-vif-plugged-2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.085 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:25.087 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[75d2f3f0-57b2-4ed5-84cb-977d3f7c8fe0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:25.088 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-fbe8f8c4-9ee0-49cc-ba19-f04282093b22
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/fbe8f8c4-9ee0-49cc-ba19-f04282093b22.pid.haproxy
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID fbe8f8c4-9ee0-49cc-ba19-f04282093b22
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 03:36:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:25.090 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fbe8f8c4-9ee0-49cc-ba19-f04282093b22', 'env', 'PROCESS_TAG=haproxy-fbe8f8c4-9ee0-49cc-ba19-f04282093b22', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fbe8f8c4-9ee0-49cc-ba19-f04282093b22.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.553 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405385.5524623, 474b1011-d98c-4f65-b0c1-a27fa5964442 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.554 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] VM Started (Lifecycle Event)
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.555 226310 DEBUG nova.compute.manager [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.560 226310 DEBUG nova.virt.libvirt.driver [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.563 226310 INFO nova.virt.libvirt.driver [-] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Instance spawned successfully.
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.564 226310 DEBUG nova.virt.libvirt.driver [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.576 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:36:25 np0005539564 podman[293271]: 2025-11-29 08:36:25.482392429 +0000 UTC m=+0.023338662 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.583 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.588 226310 DEBUG nova.virt.libvirt.driver [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.588 226310 DEBUG nova.virt.libvirt.driver [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.589 226310 DEBUG nova.virt.libvirt.driver [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.589 226310 DEBUG nova.virt.libvirt.driver [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.590 226310 DEBUG nova.virt.libvirt.driver [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.590 226310 DEBUG nova.virt.libvirt.driver [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.617 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.618 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405385.552747, 474b1011-d98c-4f65-b0c1-a27fa5964442 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.618 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] VM Paused (Lifecycle Event)
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.644 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.648 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405385.559281, 474b1011-d98c-4f65-b0c1-a27fa5964442 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.648 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] VM Resumed (Lifecycle Event)
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.654 226310 INFO nova.compute.manager [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Took 10.26 seconds to spawn the instance on the hypervisor.
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.655 226310 DEBUG nova.compute.manager [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.666 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.669 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.695 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.718 226310 INFO nova.compute.manager [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Took 11.49 seconds to build instance.
Nov 29 03:36:25 np0005539564 nova_compute[226295]: 2025-11-29 08:36:25.738 226310 DEBUG oslo_concurrency.lockutils [None req-5a43eaed-0934-4ea3-aceb-bc87b3cd3876 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lock "474b1011-d98c-4f65-b0c1-a27fa5964442" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:36:25 np0005539564 podman[293271]: 2025-11-29 08:36:25.762955389 +0000 UTC m=+0.303901592 container create bffafd2a94e2648d6e68655eca4cfd333e2c6c591bd6a64584860aaab53a4696 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbe8f8c4-9ee0-49cc-ba19-f04282093b22, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:36:25 np0005539564 systemd[1]: Started libpod-conmon-bffafd2a94e2648d6e68655eca4cfd333e2c6c591bd6a64584860aaab53a4696.scope.
Nov 29 03:36:25 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:36:25 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aa51e3efbfc3d758a44216ad84b753b2a462e3a54a022e824cc811489ae6692/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:36:25 np0005539564 podman[293271]: 2025-11-29 08:36:25.857755394 +0000 UTC m=+0.398701617 container init bffafd2a94e2648d6e68655eca4cfd333e2c6c591bd6a64584860aaab53a4696 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbe8f8c4-9ee0-49cc-ba19-f04282093b22, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:36:25 np0005539564 podman[293271]: 2025-11-29 08:36:25.86830596 +0000 UTC m=+0.409252173 container start bffafd2a94e2648d6e68655eca4cfd333e2c6c591bd6a64584860aaab53a4696 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbe8f8c4-9ee0-49cc-ba19-f04282093b22, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:36:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:36:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:25.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:36:25 np0005539564 neutron-haproxy-ovnmeta-fbe8f8c4-9ee0-49cc-ba19-f04282093b22[293292]: [NOTICE]   (293296) : New worker (293298) forked
Nov 29 03:36:25 np0005539564 neutron-haproxy-ovnmeta-fbe8f8c4-9ee0-49cc-ba19-f04282093b22[293292]: [NOTICE]   (293296) : Loading success.
Nov 29 03:36:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:26.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:27 np0005539564 nova_compute[226295]: 2025-11-29 08:36:27.188 226310 DEBUG nova.compute.manager [req-5d6d44d6-44b5-404d-bf8c-59df6d4f2b1a req-423a878b-7fa8-4920-b427-861c5561a652 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Received event network-vif-plugged-2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:36:27 np0005539564 nova_compute[226295]: 2025-11-29 08:36:27.190 226310 DEBUG oslo_concurrency.lockutils [req-5d6d44d6-44b5-404d-bf8c-59df6d4f2b1a req-423a878b-7fa8-4920-b427-861c5561a652 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "474b1011-d98c-4f65-b0c1-a27fa5964442-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:36:27 np0005539564 nova_compute[226295]: 2025-11-29 08:36:27.191 226310 DEBUG oslo_concurrency.lockutils [req-5d6d44d6-44b5-404d-bf8c-59df6d4f2b1a req-423a878b-7fa8-4920-b427-861c5561a652 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "474b1011-d98c-4f65-b0c1-a27fa5964442-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:36:27 np0005539564 nova_compute[226295]: 2025-11-29 08:36:27.191 226310 DEBUG oslo_concurrency.lockutils [req-5d6d44d6-44b5-404d-bf8c-59df6d4f2b1a req-423a878b-7fa8-4920-b427-861c5561a652 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "474b1011-d98c-4f65-b0c1-a27fa5964442-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:36:27 np0005539564 nova_compute[226295]: 2025-11-29 08:36:27.192 226310 DEBUG nova.compute.manager [req-5d6d44d6-44b5-404d-bf8c-59df6d4f2b1a req-423a878b-7fa8-4920-b427-861c5561a652 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] No waiting events found dispatching network-vif-plugged-2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:36:27 np0005539564 nova_compute[226295]: 2025-11-29 08:36:27.193 226310 WARNING nova.compute.manager [req-5d6d44d6-44b5-404d-bf8c-59df6d4f2b1a req-423a878b-7fa8-4920-b427-861c5561a652 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Received unexpected event network-vif-plugged-2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf for instance with vm_state active and task_state None.
Nov 29 03:36:27 np0005539564 nova_compute[226295]: 2025-11-29 08:36:27.515 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:27 np0005539564 nova_compute[226295]: 2025-11-29 08:36:27.681 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405372.6800957, e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:36:27 np0005539564 nova_compute[226295]: 2025-11-29 08:36:27.682 226310 INFO nova.compute.manager [-] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] VM Stopped (Lifecycle Event)
Nov 29 03:36:27 np0005539564 nova_compute[226295]: 2025-11-29 08:36:27.723 226310 DEBUG nova.compute.manager [None req-8a0b641f-672c-45b5-980b-ac952837d51d - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:36:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:27.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:36:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1570808786' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:36:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:36:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1570808786' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:36:28 np0005539564 nova_compute[226295]: 2025-11-29 08:36:28.655 226310 DEBUG nova.compute.manager [req-bf13e71c-6335-4998-920e-d4c526047553 req-10e44946-cc16-474f-9ff8-f0175c25d14d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Received event network-changed-2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:36:28 np0005539564 nova_compute[226295]: 2025-11-29 08:36:28.657 226310 DEBUG nova.compute.manager [req-bf13e71c-6335-4998-920e-d4c526047553 req-10e44946-cc16-474f-9ff8-f0175c25d14d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Refreshing instance network info cache due to event network-changed-2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:36:28 np0005539564 nova_compute[226295]: 2025-11-29 08:36:28.658 226310 DEBUG oslo_concurrency.lockutils [req-bf13e71c-6335-4998-920e-d4c526047553 req-10e44946-cc16-474f-9ff8-f0175c25d14d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-474b1011-d98c-4f65-b0c1-a27fa5964442" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:36:28 np0005539564 nova_compute[226295]: 2025-11-29 08:36:28.659 226310 DEBUG oslo_concurrency.lockutils [req-bf13e71c-6335-4998-920e-d4c526047553 req-10e44946-cc16-474f-9ff8-f0175c25d14d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-474b1011-d98c-4f65-b0c1-a27fa5964442" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:36:28 np0005539564 nova_compute[226295]: 2025-11-29 08:36:28.659 226310 DEBUG nova.network.neutron [req-bf13e71c-6335-4998-920e-d4c526047553 req-10e44946-cc16-474f-9ff8-f0175c25d14d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Refreshing network info cache for port 2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:36:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:28.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:29 np0005539564 nova_compute[226295]: 2025-11-29 08:36:29.149 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 e407: 3 total, 3 up, 3 in
Nov 29 03:36:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:29.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:30.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:31.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:32 np0005539564 nova_compute[226295]: 2025-11-29 08:36:32.042 226310 DEBUG nova.network.neutron [req-bf13e71c-6335-4998-920e-d4c526047553 req-10e44946-cc16-474f-9ff8-f0175c25d14d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Updated VIF entry in instance network info cache for port 2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:36:32 np0005539564 nova_compute[226295]: 2025-11-29 08:36:32.043 226310 DEBUG nova.network.neutron [req-bf13e71c-6335-4998-920e-d4c526047553 req-10e44946-cc16-474f-9ff8-f0175c25d14d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Updating instance_info_cache with network_info: [{"id": "2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf", "address": "fa:16:3e:d1:15:c7", "network": {"id": "fbe8f8c4-9ee0-49cc-ba19-f04282093b22", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1964942420-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3827b5eea76e4810b48ea1733ae5edc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eb5eadd-55", "ovs_interfaceid": "2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:36:32 np0005539564 nova_compute[226295]: 2025-11-29 08:36:32.064 226310 DEBUG oslo_concurrency.lockutils [req-bf13e71c-6335-4998-920e-d4c526047553 req-10e44946-cc16-474f-9ff8-f0175c25d14d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-474b1011-d98c-4f65-b0c1-a27fa5964442" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:36:32 np0005539564 nova_compute[226295]: 2025-11-29 08:36:32.519 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:32.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:33.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:34 np0005539564 nova_compute[226295]: 2025-11-29 08:36:34.156 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:34.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:35.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:36.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:37 np0005539564 nova_compute[226295]: 2025-11-29 08:36:37.523 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:37.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:38.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:39 np0005539564 nova_compute[226295]: 2025-11-29 08:36:39.159 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:39 np0005539564 podman[293309]: 2025-11-29 08:36:39.50365621 +0000 UTC m=+0.059436028 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, config_id=multipathd)
Nov 29 03:36:39 np0005539564 podman[293310]: 2025-11-29 08:36:39.524834564 +0000 UTC m=+0.074542978 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:36:39 np0005539564 podman[293308]: 2025-11-29 08:36:39.544950778 +0000 UTC m=+0.100605663 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:36:39 np0005539564 ovn_controller[130591]: 2025-11-29T08:36:39Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d1:15:c7 10.100.0.7
Nov 29 03:36:39 np0005539564 ovn_controller[130591]: 2025-11-29T08:36:39Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d1:15:c7 10.100.0.7
Nov 29 03:36:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:39.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:36:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:40.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:36:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:41.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:42 np0005539564 nova_compute[226295]: 2025-11-29 08:36:42.526 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:36:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:42.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:36:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:43.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:44 np0005539564 nova_compute[226295]: 2025-11-29 08:36:44.161 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:44.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:45.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:46.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:47 np0005539564 nova_compute[226295]: 2025-11-29 08:36:47.530 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:47.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:48.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:49 np0005539564 nova_compute[226295]: 2025-11-29 08:36:49.164 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:49.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:36:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:50.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:36:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:51.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:52 np0005539564 nova_compute[226295]: 2025-11-29 08:36:52.534 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:52.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:53.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:54 np0005539564 nova_compute[226295]: 2025-11-29 08:36:54.167 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:54.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:55 np0005539564 ovn_controller[130591]: 2025-11-29T08:36:55Z|00705|binding|INFO|Releasing lport 7d6286ea-1676-407e-9c7d-942018c1227a from this chassis (sb_readonly=0)
Nov 29 03:36:55 np0005539564 nova_compute[226295]: 2025-11-29 08:36:55.093 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:55.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:56.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:57 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:36:57 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:36:57 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:36:57 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:36:57 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:36:57 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:36:57 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:36:57 np0005539564 nova_compute[226295]: 2025-11-29 08:36:57.538 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:57.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:58.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:59 np0005539564 nova_compute[226295]: 2025-11-29 08:36:59.169 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:59.298 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:36:59 np0005539564 nova_compute[226295]: 2025-11-29 08:36:59.299 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:36:59.301 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:36:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:36:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:59.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:00.305 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:00.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:01.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:02 np0005539564 nova_compute[226295]: 2025-11-29 08:37:02.053 226310 DEBUG oslo_concurrency.lockutils [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Acquiring lock "474b1011-d98c-4f65-b0c1-a27fa5964442" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:02 np0005539564 nova_compute[226295]: 2025-11-29 08:37:02.054 226310 DEBUG oslo_concurrency.lockutils [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lock "474b1011-d98c-4f65-b0c1-a27fa5964442" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:02 np0005539564 nova_compute[226295]: 2025-11-29 08:37:02.071 226310 DEBUG nova.objects.instance [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lazy-loading 'flavor' on Instance uuid 474b1011-d98c-4f65-b0c1-a27fa5964442 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:37:02 np0005539564 nova_compute[226295]: 2025-11-29 08:37:02.108 226310 DEBUG oslo_concurrency.lockutils [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lock "474b1011-d98c-4f65-b0c1-a27fa5964442" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:02 np0005539564 nova_compute[226295]: 2025-11-29 08:37:02.584 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:02 np0005539564 nova_compute[226295]: 2025-11-29 08:37:02.670 226310 DEBUG oslo_concurrency.lockutils [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Acquiring lock "474b1011-d98c-4f65-b0c1-a27fa5964442" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:02 np0005539564 nova_compute[226295]: 2025-11-29 08:37:02.671 226310 DEBUG oslo_concurrency.lockutils [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lock "474b1011-d98c-4f65-b0c1-a27fa5964442" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:02 np0005539564 nova_compute[226295]: 2025-11-29 08:37:02.672 226310 INFO nova.compute.manager [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Attaching volume c7b25c31-f722-44d5-bbae-250821bb79df to /dev/vdb#033[00m
Nov 29 03:37:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:02.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:02 np0005539564 nova_compute[226295]: 2025-11-29 08:37:02.820 226310 DEBUG os_brick.utils [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:37:02 np0005539564 nova_compute[226295]: 2025-11-29 08:37:02.823 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:02 np0005539564 nova_compute[226295]: 2025-11-29 08:37:02.842 231810 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:02 np0005539564 nova_compute[226295]: 2025-11-29 08:37:02.843 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[2d4499ff-4692-4f43-9817-89bc3681586f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:02 np0005539564 nova_compute[226295]: 2025-11-29 08:37:02.845 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:02 np0005539564 nova_compute[226295]: 2025-11-29 08:37:02.857 231810 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:02 np0005539564 nova_compute[226295]: 2025-11-29 08:37:02.858 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[21b73f22-975e-4d17-9e34-58f1d34d47bd]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d3b4384eec44', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:02 np0005539564 nova_compute[226295]: 2025-11-29 08:37:02.860 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:02 np0005539564 nova_compute[226295]: 2025-11-29 08:37:02.873 231810 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:02 np0005539564 nova_compute[226295]: 2025-11-29 08:37:02.874 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[2d5ceb48-b580-4064-b675-e4b99e848b21]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:02 np0005539564 nova_compute[226295]: 2025-11-29 08:37:02.876 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[1d805e70-ebf5-4871-a915-19623f7e7832]: (4, '2e858761-3292-4a17-b38f-a169c3064289') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:02 np0005539564 nova_compute[226295]: 2025-11-29 08:37:02.876 226310 DEBUG oslo_concurrency.processutils [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:02 np0005539564 nova_compute[226295]: 2025-11-29 08:37:02.904 226310 DEBUG oslo_concurrency.processutils [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:02 np0005539564 nova_compute[226295]: 2025-11-29 08:37:02.907 226310 DEBUG os_brick.initiator.connectors.lightos [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:37:02 np0005539564 nova_compute[226295]: 2025-11-29 08:37:02.907 226310 DEBUG os_brick.initiator.connectors.lightos [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 03:37:02 np0005539564 nova_compute[226295]: 2025-11-29 08:37:02.908 226310 DEBUG os_brick.initiator.connectors.lightos [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 03:37:02 np0005539564 nova_compute[226295]: 2025-11-29 08:37:02.908 226310 DEBUG os_brick.utils [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] <== get_connector_properties: return (86ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d3b4384eec44', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2e858761-3292-4a17-b38f-a169c3064289', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 03:37:02 np0005539564 nova_compute[226295]: 2025-11-29 08:37:02.908 226310 DEBUG nova.virt.block_device [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Updating existing volume attachment record: 277dba60-c316-4d1d-8959-92dc5d0dcc4b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 03:37:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:03.752 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:37:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:03.752 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:37:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:03.753 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:37:03 np0005539564 nova_compute[226295]: 2025-11-29 08:37:03.950 226310 DEBUG os_brick.encryptors [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Using volume encryption metadata '{'encryption_key_id': 'bdf77128-7c64-47d2-ba44-8cc8070a1d1f', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-c7b25c31-f722-44d5-bbae-250821bb79df', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'c7b25c31-f722-44d5-bbae-250821bb79df', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '474b1011-d98c-4f65-b0c1-a27fa5964442', 'attached_at': '', 'detached_at': '', 'volume_id': 'c7b25c31-f722-44d5-bbae-250821bb79df', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 03:37:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:03.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:03 np0005539564 nova_compute[226295]: 2025-11-29 08:37:03.961 226310 DEBUG barbicanclient.client [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 29 03:37:03 np0005539564 nova_compute[226295]: 2025-11-29 08:37:03.994 226310 DEBUG barbicanclient.v1.secrets [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/bdf77128-7c64-47d2-ba44-8cc8070a1d1f get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 29 03:37:03 np0005539564 nova_compute[226295]: 2025-11-29 08:37:03.995 226310 INFO barbicanclient.base [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Calculated Secrets uuid ref: secrets/bdf77128-7c64-47d2-ba44-8cc8070a1d1f
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.029 226310 DEBUG barbicanclient.client [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.030 226310 INFO barbicanclient.base [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Calculated Secrets uuid ref: secrets/bdf77128-7c64-47d2-ba44-8cc8070a1d1f
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.053 226310 DEBUG barbicanclient.client [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.054 226310 INFO barbicanclient.base [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Calculated Secrets uuid ref: secrets/bdf77128-7c64-47d2-ba44-8cc8070a1d1f
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.088 226310 DEBUG barbicanclient.client [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.088 226310 INFO barbicanclient.base [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Calculated Secrets uuid ref: secrets/bdf77128-7c64-47d2-ba44-8cc8070a1d1f
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.126 226310 DEBUG barbicanclient.client [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.127 226310 INFO barbicanclient.base [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Calculated Secrets uuid ref: secrets/bdf77128-7c64-47d2-ba44-8cc8070a1d1f
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.156 226310 DEBUG barbicanclient.client [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.157 226310 INFO barbicanclient.base [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Calculated Secrets uuid ref: secrets/bdf77128-7c64-47d2-ba44-8cc8070a1d1f
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.172 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.202 226310 DEBUG barbicanclient.client [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.203 226310 INFO barbicanclient.base [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Calculated Secrets uuid ref: secrets/bdf77128-7c64-47d2-ba44-8cc8070a1d1f
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.233 226310 DEBUG barbicanclient.client [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.234 226310 INFO barbicanclient.base [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Calculated Secrets uuid ref: secrets/bdf77128-7c64-47d2-ba44-8cc8070a1d1f
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.256 226310 DEBUG barbicanclient.client [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.257 226310 INFO barbicanclient.base [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Calculated Secrets uuid ref: secrets/bdf77128-7c64-47d2-ba44-8cc8070a1d1f
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.277 226310 DEBUG barbicanclient.client [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.277 226310 INFO barbicanclient.base [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Calculated Secrets uuid ref: secrets/bdf77128-7c64-47d2-ba44-8cc8070a1d1f
Nov 29 03:37:04 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:37:04 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.347 226310 DEBUG barbicanclient.client [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.348 226310 INFO barbicanclient.base [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Calculated Secrets uuid ref: secrets/bdf77128-7c64-47d2-ba44-8cc8070a1d1f
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.399 226310 DEBUG barbicanclient.client [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.400 226310 INFO barbicanclient.base [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Calculated Secrets uuid ref: secrets/bdf77128-7c64-47d2-ba44-8cc8070a1d1f
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.441 226310 DEBUG barbicanclient.client [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.442 226310 INFO barbicanclient.base [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Calculated Secrets uuid ref: secrets/bdf77128-7c64-47d2-ba44-8cc8070a1d1f
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.469 226310 DEBUG barbicanclient.client [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.470 226310 INFO barbicanclient.base [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Calculated Secrets uuid ref: secrets/bdf77128-7c64-47d2-ba44-8cc8070a1d1f
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.494 226310 DEBUG barbicanclient.client [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.495 226310 INFO barbicanclient.base [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Calculated Secrets uuid ref: secrets/bdf77128-7c64-47d2-ba44-8cc8070a1d1f
Nov 29 03:37:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.519 226310 DEBUG barbicanclient.client [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.520 226310 DEBUG nova.virt.libvirt.host [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 29 03:37:04 np0005539564 nova_compute[226295]:  <usage type="volume">
Nov 29 03:37:04 np0005539564 nova_compute[226295]:    <volume>c7b25c31-f722-44d5-bbae-250821bb79df</volume>
Nov 29 03:37:04 np0005539564 nova_compute[226295]:  </usage>
Nov 29 03:37:04 np0005539564 nova_compute[226295]: </secret>
Nov 29 03:37:04 np0005539564 nova_compute[226295]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.536 226310 DEBUG nova.objects.instance [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lazy-loading 'flavor' on Instance uuid 474b1011-d98c-4f65-b0c1-a27fa5964442 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.559 226310 DEBUG nova.virt.libvirt.driver [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Attempting to attach volume c7b25c31-f722-44d5-bbae-250821bb79df with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 03:37:04 np0005539564 nova_compute[226295]: 2025-11-29 08:37:04.562 226310 DEBUG nova.virt.libvirt.guest [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:37:04 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:37:04 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-c7b25c31-f722-44d5-bbae-250821bb79df">
Nov 29 03:37:04 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:37:04 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:37:04 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:37:04 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:37:04 np0005539564 nova_compute[226295]:  <auth username="openstack">
Nov 29 03:37:04 np0005539564 nova_compute[226295]:    <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:37:04 np0005539564 nova_compute[226295]:  </auth>
Nov 29 03:37:04 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:37:04 np0005539564 nova_compute[226295]:  <serial>c7b25c31-f722-44d5-bbae-250821bb79df</serial>
Nov 29 03:37:04 np0005539564 nova_compute[226295]:  <encryption format="luks">
Nov 29 03:37:04 np0005539564 nova_compute[226295]:    <secret type="passphrase" uuid="e1de5e66-2936-4e5b-b11b-89146f97532d"/>
Nov 29 03:37:04 np0005539564 nova_compute[226295]:  </encryption>
Nov 29 03:37:04 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:37:04 np0005539564 nova_compute[226295]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 03:37:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:04.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:05.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:06 np0005539564 nova_compute[226295]: 2025-11-29 08:37:06.599 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:37:06 np0005539564 nova_compute[226295]: 2025-11-29 08:37:06.601 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:37:06 np0005539564 nova_compute[226295]: 2025-11-29 08:37:06.601 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:37:06 np0005539564 nova_compute[226295]: 2025-11-29 08:37:06.602 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:37:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:06.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:07 np0005539564 nova_compute[226295]: 2025-11-29 08:37:07.149 226310 DEBUG nova.virt.libvirt.driver [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:37:07 np0005539564 nova_compute[226295]: 2025-11-29 08:37:07.149 226310 DEBUG nova.virt.libvirt.driver [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:37:07 np0005539564 nova_compute[226295]: 2025-11-29 08:37:07.150 226310 DEBUG nova.virt.libvirt.driver [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:37:07 np0005539564 nova_compute[226295]: 2025-11-29 08:37:07.150 226310 DEBUG nova.virt.libvirt.driver [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] No VIF found with MAC fa:16:3e:d1:15:c7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:37:07 np0005539564 nova_compute[226295]: 2025-11-29 08:37:07.230 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:07 np0005539564 nova_compute[226295]: 2025-11-29 08:37:07.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:37:07 np0005539564 nova_compute[226295]: 2025-11-29 08:37:07.587 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:07.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:07 np0005539564 nova_compute[226295]: 2025-11-29 08:37:07.986 226310 DEBUG oslo_concurrency.lockutils [None req-3b663b74-619d-4019-89ba-41a96ca75b20 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lock "474b1011-d98c-4f65-b0c1-a27fa5964442" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 5.315s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:37:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:08.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:09 np0005539564 nova_compute[226295]: 2025-11-29 08:37:09.174 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:09 np0005539564 nova_compute[226295]: 2025-11-29 08:37:09.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:37:09 np0005539564 nova_compute[226295]: 2025-11-29 08:37:09.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:37:09 np0005539564 nova_compute[226295]: 2025-11-29 08:37:09.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:37:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:09.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:10 np0005539564 nova_compute[226295]: 2025-11-29 08:37:10.257 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-474b1011-d98c-4f65-b0c1-a27fa5964442" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:37:10 np0005539564 nova_compute[226295]: 2025-11-29 08:37:10.258 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-474b1011-d98c-4f65-b0c1-a27fa5964442" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:37:10 np0005539564 nova_compute[226295]: 2025-11-29 08:37:10.259 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:37:10 np0005539564 nova_compute[226295]: 2025-11-29 08:37:10.259 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 474b1011-d98c-4f65-b0c1-a27fa5964442 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:37:10 np0005539564 podman[293581]: 2025-11-29 08:37:10.546171874 +0000 UTC m=+0.090957321 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:37:10 np0005539564 podman[293580]: 2025-11-29 08:37:10.552672211 +0000 UTC m=+0.099342579 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:37:10 np0005539564 podman[293579]: 2025-11-29 08:37:10.598592383 +0000 UTC m=+0.145944039 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:37:10 np0005539564 nova_compute[226295]: 2025-11-29 08:37:10.600 226310 DEBUG oslo_concurrency.lockutils [None req-1d571320-9057-4dfb-9624-72b4877c6699 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Acquiring lock "474b1011-d98c-4f65-b0c1-a27fa5964442" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:10 np0005539564 nova_compute[226295]: 2025-11-29 08:37:10.601 226310 DEBUG oslo_concurrency.lockutils [None req-1d571320-9057-4dfb-9624-72b4877c6699 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lock "474b1011-d98c-4f65-b0c1-a27fa5964442" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:10 np0005539564 nova_compute[226295]: 2025-11-29 08:37:10.767 226310 INFO nova.compute.manager [None req-1d571320-9057-4dfb-9624-72b4877c6699 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Detaching volume c7b25c31-f722-44d5-bbae-250821bb79df#033[00m
Nov 29 03:37:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:10.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:11 np0005539564 nova_compute[226295]: 2025-11-29 08:37:11.251 226310 INFO nova.virt.block_device [None req-1d571320-9057-4dfb-9624-72b4877c6699 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Attempting to driver detach volume c7b25c31-f722-44d5-bbae-250821bb79df from mountpoint /dev/vdb#033[00m
Nov 29 03:37:11 np0005539564 nova_compute[226295]: 2025-11-29 08:37:11.414 226310 DEBUG os_brick.encryptors [None req-1d571320-9057-4dfb-9624-72b4877c6699 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Using volume encryption metadata '{'encryption_key_id': 'bdf77128-7c64-47d2-ba44-8cc8070a1d1f', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-c7b25c31-f722-44d5-bbae-250821bb79df', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'c7b25c31-f722-44d5-bbae-250821bb79df', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '474b1011-d98c-4f65-b0c1-a27fa5964442', 'attached_at': '', 'detached_at': '', 'volume_id': 'c7b25c31-f722-44d5-bbae-250821bb79df', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Nov 29 03:37:11 np0005539564 nova_compute[226295]: 2025-11-29 08:37:11.427 226310 DEBUG nova.virt.libvirt.driver [None req-1d571320-9057-4dfb-9624-72b4877c6699 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Attempting to detach device vdb from instance 474b1011-d98c-4f65-b0c1-a27fa5964442 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:37:11 np0005539564 nova_compute[226295]: 2025-11-29 08:37:11.429 226310 DEBUG nova.virt.libvirt.guest [None req-1d571320-9057-4dfb-9624-72b4877c6699 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:37:11 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:37:11 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-c7b25c31-f722-44d5-bbae-250821bb79df">
Nov 29 03:37:11 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:37:11 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:37:11 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:37:11 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:37:11 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:37:11 np0005539564 nova_compute[226295]:  <serial>c7b25c31-f722-44d5-bbae-250821bb79df</serial>
Nov 29 03:37:11 np0005539564 nova_compute[226295]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:37:11 np0005539564 nova_compute[226295]:  <encryption format="luks">
Nov 29 03:37:11 np0005539564 nova_compute[226295]:    <secret type="passphrase" uuid="e1de5e66-2936-4e5b-b11b-89146f97532d"/>
Nov 29 03:37:11 np0005539564 nova_compute[226295]:  </encryption>
Nov 29 03:37:11 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:37:11 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:37:11 np0005539564 nova_compute[226295]: 2025-11-29 08:37:11.441 226310 INFO nova.virt.libvirt.driver [None req-1d571320-9057-4dfb-9624-72b4877c6699 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Successfully detached device vdb from instance 474b1011-d98c-4f65-b0c1-a27fa5964442 from the persistent domain config.#033[00m
Nov 29 03:37:11 np0005539564 nova_compute[226295]: 2025-11-29 08:37:11.442 226310 DEBUG nova.virt.libvirt.driver [None req-1d571320-9057-4dfb-9624-72b4877c6699 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 474b1011-d98c-4f65-b0c1-a27fa5964442 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:37:11 np0005539564 nova_compute[226295]: 2025-11-29 08:37:11.442 226310 DEBUG nova.virt.libvirt.guest [None req-1d571320-9057-4dfb-9624-72b4877c6699 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:37:11 np0005539564 nova_compute[226295]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:37:11 np0005539564 nova_compute[226295]:  <source protocol="rbd" name="volumes/volume-c7b25c31-f722-44d5-bbae-250821bb79df">
Nov 29 03:37:11 np0005539564 nova_compute[226295]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:37:11 np0005539564 nova_compute[226295]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:37:11 np0005539564 nova_compute[226295]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:37:11 np0005539564 nova_compute[226295]:  </source>
Nov 29 03:37:11 np0005539564 nova_compute[226295]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:37:11 np0005539564 nova_compute[226295]:  <serial>c7b25c31-f722-44d5-bbae-250821bb79df</serial>
Nov 29 03:37:11 np0005539564 nova_compute[226295]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:37:11 np0005539564 nova_compute[226295]:  <encryption format="luks">
Nov 29 03:37:11 np0005539564 nova_compute[226295]:    <secret type="passphrase" uuid="e1de5e66-2936-4e5b-b11b-89146f97532d"/>
Nov 29 03:37:11 np0005539564 nova_compute[226295]:  </encryption>
Nov 29 03:37:11 np0005539564 nova_compute[226295]: </disk>
Nov 29 03:37:11 np0005539564 nova_compute[226295]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:37:11 np0005539564 nova_compute[226295]: 2025-11-29 08:37:11.849 226310 DEBUG nova.virt.libvirt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Received event <DeviceRemovedEvent: 1764405431.84879, 474b1011-d98c-4f65-b0c1-a27fa5964442 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:37:11 np0005539564 nova_compute[226295]: 2025-11-29 08:37:11.852 226310 DEBUG nova.virt.libvirt.driver [None req-1d571320-9057-4dfb-9624-72b4877c6699 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 474b1011-d98c-4f65-b0c1-a27fa5964442 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:37:11 np0005539564 nova_compute[226295]: 2025-11-29 08:37:11.856 226310 INFO nova.virt.libvirt.driver [None req-1d571320-9057-4dfb-9624-72b4877c6699 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Successfully detached device vdb from instance 474b1011-d98c-4f65-b0c1-a27fa5964442 from the live domain config.#033[00m
Nov 29 03:37:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:11.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:12 np0005539564 nova_compute[226295]: 2025-11-29 08:37:12.590 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:12.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:13.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:14 np0005539564 nova_compute[226295]: 2025-11-29 08:37:14.217 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:14 np0005539564 nova_compute[226295]: 2025-11-29 08:37:14.272 226310 DEBUG nova.objects.instance [None req-1d571320-9057-4dfb-9624-72b4877c6699 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lazy-loading 'flavor' on Instance uuid 474b1011-d98c-4f65-b0c1-a27fa5964442 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:37:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:14.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:15 np0005539564 nova_compute[226295]: 2025-11-29 08:37:15.491 226310 DEBUG oslo_concurrency.lockutils [None req-1d571320-9057-4dfb-9624-72b4877c6699 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lock "474b1011-d98c-4f65-b0c1-a27fa5964442" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 4.890s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:15 np0005539564 nova_compute[226295]: 2025-11-29 08:37:15.957 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Updating instance_info_cache with network_info: [{"id": "2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf", "address": "fa:16:3e:d1:15:c7", "network": {"id": "fbe8f8c4-9ee0-49cc-ba19-f04282093b22", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1964942420-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3827b5eea76e4810b48ea1733ae5edc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eb5eadd-55", "ovs_interfaceid": "2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:37:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:37:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:15.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:37:16 np0005539564 nova_compute[226295]: 2025-11-29 08:37:16.044 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-474b1011-d98c-4f65-b0c1-a27fa5964442" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:37:16 np0005539564 nova_compute[226295]: 2025-11-29 08:37:16.044 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:37:16 np0005539564 nova_compute[226295]: 2025-11-29 08:37:16.045 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:16 np0005539564 nova_compute[226295]: 2025-11-29 08:37:16.046 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:16.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:16 np0005539564 nova_compute[226295]: 2025-11-29 08:37:16.952 226310 DEBUG oslo_concurrency.lockutils [None req-6c86de8a-e509-47ac-88b2-99957cbfab3a 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Acquiring lock "474b1011-d98c-4f65-b0c1-a27fa5964442" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:16 np0005539564 nova_compute[226295]: 2025-11-29 08:37:16.954 226310 DEBUG oslo_concurrency.lockutils [None req-6c86de8a-e509-47ac-88b2-99957cbfab3a 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lock "474b1011-d98c-4f65-b0c1-a27fa5964442" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:16 np0005539564 nova_compute[226295]: 2025-11-29 08:37:16.954 226310 DEBUG oslo_concurrency.lockutils [None req-6c86de8a-e509-47ac-88b2-99957cbfab3a 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Acquiring lock "474b1011-d98c-4f65-b0c1-a27fa5964442-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:16 np0005539564 nova_compute[226295]: 2025-11-29 08:37:16.954 226310 DEBUG oslo_concurrency.lockutils [None req-6c86de8a-e509-47ac-88b2-99957cbfab3a 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lock "474b1011-d98c-4f65-b0c1-a27fa5964442-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:16 np0005539564 nova_compute[226295]: 2025-11-29 08:37:16.954 226310 DEBUG oslo_concurrency.lockutils [None req-6c86de8a-e509-47ac-88b2-99957cbfab3a 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lock "474b1011-d98c-4f65-b0c1-a27fa5964442-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:16 np0005539564 nova_compute[226295]: 2025-11-29 08:37:16.955 226310 INFO nova.compute.manager [None req-6c86de8a-e509-47ac-88b2-99957cbfab3a 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Terminating instance#033[00m
Nov 29 03:37:16 np0005539564 nova_compute[226295]: 2025-11-29 08:37:16.956 226310 DEBUG nova.compute.manager [None req-6c86de8a-e509-47ac-88b2-99957cbfab3a 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:37:17 np0005539564 kernel: tap2eb5eadd-55 (unregistering): left promiscuous mode
Nov 29 03:37:17 np0005539564 NetworkManager[48997]: <info>  [1764405437.2427] device (tap2eb5eadd-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:37:17 np0005539564 nova_compute[226295]: 2025-11-29 08:37:17.256 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:17 np0005539564 ovn_controller[130591]: 2025-11-29T08:37:17Z|00706|binding|INFO|Releasing lport 2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf from this chassis (sb_readonly=0)
Nov 29 03:37:17 np0005539564 ovn_controller[130591]: 2025-11-29T08:37:17Z|00707|binding|INFO|Setting lport 2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf down in Southbound
Nov 29 03:37:17 np0005539564 ovn_controller[130591]: 2025-11-29T08:37:17Z|00708|binding|INFO|Removing iface tap2eb5eadd-55 ovn-installed in OVS
Nov 29 03:37:17 np0005539564 nova_compute[226295]: 2025-11-29 08:37:17.261 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:17 np0005539564 nova_compute[226295]: 2025-11-29 08:37:17.280 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:17.306 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:15:c7 10.100.0.7'], port_security=['fa:16:3e:d1:15:c7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '474b1011-d98c-4f65-b0c1-a27fa5964442', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbe8f8c4-9ee0-49cc-ba19-f04282093b22', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3827b5eea76e4810b48ea1733ae5edc4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd93ebbd6-e5fd-45d5-b566-bb2376e0445f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.231'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3285817d-ffa0-40db-ad02-760e149d061a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:37:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:17.308 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf in datapath fbe8f8c4-9ee0-49cc-ba19-f04282093b22 unbound from our chassis#033[00m
Nov 29 03:37:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:17.309 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fbe8f8c4-9ee0-49cc-ba19-f04282093b22, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:37:17 np0005539564 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000b2.scope: Deactivated successfully.
Nov 29 03:37:17 np0005539564 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000b2.scope: Consumed 19.077s CPU time.
Nov 29 03:37:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:17.311 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[94a0420c-9b77-4347-aa50-885726253835]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:17.312 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fbe8f8c4-9ee0-49cc-ba19-f04282093b22 namespace which is not needed anymore#033[00m
Nov 29 03:37:17 np0005539564 systemd-machined[190128]: Machine qemu-84-instance-000000b2 terminated.
Nov 29 03:37:17 np0005539564 kernel: tap2eb5eadd-55: entered promiscuous mode
Nov 29 03:37:17 np0005539564 systemd-udevd[293646]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:37:17 np0005539564 NetworkManager[48997]: <info>  [1764405437.3841] manager: (tap2eb5eadd-55): new Tun device (/org/freedesktop/NetworkManager/Devices/325)
Nov 29 03:37:17 np0005539564 ovn_controller[130591]: 2025-11-29T08:37:17Z|00709|binding|INFO|Claiming lport 2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf for this chassis.
Nov 29 03:37:17 np0005539564 nova_compute[226295]: 2025-11-29 08:37:17.385 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:17 np0005539564 kernel: tap2eb5eadd-55 (unregistering): left promiscuous mode
Nov 29 03:37:17 np0005539564 ovn_controller[130591]: 2025-11-29T08:37:17Z|00710|binding|INFO|2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf: Claiming fa:16:3e:d1:15:c7 10.100.0.7
Nov 29 03:37:17 np0005539564 ovn_controller[130591]: 2025-11-29T08:37:17Z|00711|binding|INFO|Setting lport 2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf ovn-installed in OVS
Nov 29 03:37:17 np0005539564 nova_compute[226295]: 2025-11-29 08:37:17.409 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:17 np0005539564 ovn_controller[130591]: 2025-11-29T08:37:17Z|00712|if_status|INFO|Dropped 2 log messages in last 1056 seconds (most recently, 1056 seconds ago) due to excessive rate
Nov 29 03:37:17 np0005539564 ovn_controller[130591]: 2025-11-29T08:37:17Z|00713|if_status|INFO|Not setting lport 2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf down as sb is readonly
Nov 29 03:37:17 np0005539564 nova_compute[226295]: 2025-11-29 08:37:17.414 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:17 np0005539564 ovn_controller[130591]: 2025-11-29T08:37:17Z|00714|binding|INFO|Releasing lport 2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf from this chassis (sb_readonly=0)
Nov 29 03:37:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:17.417 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:15:c7 10.100.0.7'], port_security=['fa:16:3e:d1:15:c7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '474b1011-d98c-4f65-b0c1-a27fa5964442', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbe8f8c4-9ee0-49cc-ba19-f04282093b22', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3827b5eea76e4810b48ea1733ae5edc4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd93ebbd6-e5fd-45d5-b566-bb2376e0445f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.231'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3285817d-ffa0-40db-ad02-760e149d061a, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:37:17 np0005539564 nova_compute[226295]: 2025-11-29 08:37:17.433 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:17 np0005539564 nova_compute[226295]: 2025-11-29 08:37:17.433 226310 INFO nova.virt.libvirt.driver [-] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Instance destroyed successfully.#033[00m
Nov 29 03:37:17 np0005539564 nova_compute[226295]: 2025-11-29 08:37:17.434 226310 DEBUG nova.objects.instance [None req-6c86de8a-e509-47ac-88b2-99957cbfab3a 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lazy-loading 'resources' on Instance uuid 474b1011-d98c-4f65-b0c1-a27fa5964442 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:37:17 np0005539564 neutron-haproxy-ovnmeta-fbe8f8c4-9ee0-49cc-ba19-f04282093b22[293292]: [NOTICE]   (293296) : haproxy version is 2.8.14-c23fe91
Nov 29 03:37:17 np0005539564 neutron-haproxy-ovnmeta-fbe8f8c4-9ee0-49cc-ba19-f04282093b22[293292]: [NOTICE]   (293296) : path to executable is /usr/sbin/haproxy
Nov 29 03:37:17 np0005539564 neutron-haproxy-ovnmeta-fbe8f8c4-9ee0-49cc-ba19-f04282093b22[293292]: [WARNING]  (293296) : Exiting Master process...
Nov 29 03:37:17 np0005539564 neutron-haproxy-ovnmeta-fbe8f8c4-9ee0-49cc-ba19-f04282093b22[293292]: [ALERT]    (293296) : Current worker (293298) exited with code 143 (Terminated)
Nov 29 03:37:17 np0005539564 neutron-haproxy-ovnmeta-fbe8f8c4-9ee0-49cc-ba19-f04282093b22[293292]: [WARNING]  (293296) : All workers exited. Exiting... (0)
Nov 29 03:37:17 np0005539564 nova_compute[226295]: 2025-11-29 08:37:17.494 226310 DEBUG nova.virt.libvirt.vif [None req-6c86de8a-e509-47ac-88b2-99957cbfab3a 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:36:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1116560805',display_name='tempest-TestEncryptedCinderVolumes-server-1116560805',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1116560805',id=178,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF6IfcPxXBtbc/f284zGMaia5ypyAl0PsSlPhQNW797gSuX/YRD00VVTcoCqhNkp5uyH86FHfw+PvgtuVGRajrl4XgqYAfvp97CAy6jL7xigmon1ta3aSuIZvA6EeQGZQg==',key_name='tempest-keypair-1106333239',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:36:25Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3827b5eea76e4810b48ea1733ae5edc4',ramdisk_id='',reservation_id='r-0q9ztlc0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-1380543897',owner_user_name='tempest-TestEncryptedCinderVolumes-1380543897-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:36:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6126044d5f7d49d19e3feffbc3034024',uuid=474b1011-d98c-4f65-b0c1-a27fa5964442,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf", "address": "fa:16:3e:d1:15:c7", "network": {"id": "fbe8f8c4-9ee0-49cc-ba19-f04282093b22", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1964942420-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3827b5eea76e4810b48ea1733ae5edc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eb5eadd-55", "ovs_interfaceid": "2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:37:17 np0005539564 nova_compute[226295]: 2025-11-29 08:37:17.494 226310 DEBUG nova.network.os_vif_util [None req-6c86de8a-e509-47ac-88b2-99957cbfab3a 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Converting VIF {"id": "2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf", "address": "fa:16:3e:d1:15:c7", "network": {"id": "fbe8f8c4-9ee0-49cc-ba19-f04282093b22", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1964942420-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3827b5eea76e4810b48ea1733ae5edc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eb5eadd-55", "ovs_interfaceid": "2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:37:17 np0005539564 systemd[1]: libpod-bffafd2a94e2648d6e68655eca4cfd333e2c6c591bd6a64584860aaab53a4696.scope: Deactivated successfully.
Nov 29 03:37:17 np0005539564 nova_compute[226295]: 2025-11-29 08:37:17.495 226310 DEBUG nova.network.os_vif_util [None req-6c86de8a-e509-47ac-88b2-99957cbfab3a 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d1:15:c7,bridge_name='br-int',has_traffic_filtering=True,id=2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf,network=Network(fbe8f8c4-9ee0-49cc-ba19-f04282093b22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eb5eadd-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:37:17 np0005539564 nova_compute[226295]: 2025-11-29 08:37:17.496 226310 DEBUG os_vif [None req-6c86de8a-e509-47ac-88b2-99957cbfab3a 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d1:15:c7,bridge_name='br-int',has_traffic_filtering=True,id=2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf,network=Network(fbe8f8c4-9ee0-49cc-ba19-f04282093b22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eb5eadd-55') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:37:17 np0005539564 nova_compute[226295]: 2025-11-29 08:37:17.498 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:17 np0005539564 nova_compute[226295]: 2025-11-29 08:37:17.498 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2eb5eadd-55, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:17 np0005539564 nova_compute[226295]: 2025-11-29 08:37:17.499 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:17 np0005539564 nova_compute[226295]: 2025-11-29 08:37:17.501 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:17 np0005539564 podman[293671]: 2025-11-29 08:37:17.502715687 +0000 UTC m=+0.055208995 container died bffafd2a94e2648d6e68655eca4cfd333e2c6c591bd6a64584860aaab53a4696 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbe8f8c4-9ee0-49cc-ba19-f04282093b22, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 03:37:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:17.503 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:15:c7 10.100.0.7'], port_security=['fa:16:3e:d1:15:c7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '474b1011-d98c-4f65-b0c1-a27fa5964442', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbe8f8c4-9ee0-49cc-ba19-f04282093b22', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3827b5eea76e4810b48ea1733ae5edc4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd93ebbd6-e5fd-45d5-b566-bb2376e0445f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.231'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3285817d-ffa0-40db-ad02-760e149d061a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:37:17 np0005539564 nova_compute[226295]: 2025-11-29 08:37:17.504 226310 INFO os_vif [None req-6c86de8a-e509-47ac-88b2-99957cbfab3a 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d1:15:c7,bridge_name='br-int',has_traffic_filtering=True,id=2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf,network=Network(fbe8f8c4-9ee0-49cc-ba19-f04282093b22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eb5eadd-55')#033[00m
Nov 29 03:37:17 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bffafd2a94e2648d6e68655eca4cfd333e2c6c591bd6a64584860aaab53a4696-userdata-shm.mount: Deactivated successfully.
Nov 29 03:37:17 np0005539564 systemd[1]: var-lib-containers-storage-overlay-5aa51e3efbfc3d758a44216ad84b753b2a462e3a54a022e824cc811489ae6692-merged.mount: Deactivated successfully.
Nov 29 03:37:17 np0005539564 podman[293671]: 2025-11-29 08:37:17.552209756 +0000 UTC m=+0.104703054 container cleanup bffafd2a94e2648d6e68655eca4cfd333e2c6c591bd6a64584860aaab53a4696 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbe8f8c4-9ee0-49cc-ba19-f04282093b22, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:37:17 np0005539564 systemd[1]: libpod-conmon-bffafd2a94e2648d6e68655eca4cfd333e2c6c591bd6a64584860aaab53a4696.scope: Deactivated successfully.
Nov 29 03:37:17 np0005539564 podman[293721]: 2025-11-29 08:37:17.626806594 +0000 UTC m=+0.049487500 container remove bffafd2a94e2648d6e68655eca4cfd333e2c6c591bd6a64584860aaab53a4696 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbe8f8c4-9ee0-49cc-ba19-f04282093b22, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:37:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:17.633 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b9862ea8-e177-4b63-8f99-7aa5c5520546]: (4, ('Sat Nov 29 08:37:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fbe8f8c4-9ee0-49cc-ba19-f04282093b22 (bffafd2a94e2648d6e68655eca4cfd333e2c6c591bd6a64584860aaab53a4696)\nbffafd2a94e2648d6e68655eca4cfd333e2c6c591bd6a64584860aaab53a4696\nSat Nov 29 08:37:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fbe8f8c4-9ee0-49cc-ba19-f04282093b22 (bffafd2a94e2648d6e68655eca4cfd333e2c6c591bd6a64584860aaab53a4696)\nbffafd2a94e2648d6e68655eca4cfd333e2c6c591bd6a64584860aaab53a4696\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:17.637 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bdf61d95-1ffa-4556-8726-4958d28f65b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:17.638 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbe8f8c4-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:17 np0005539564 nova_compute[226295]: 2025-11-29 08:37:17.640 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:17 np0005539564 kernel: tapfbe8f8c4-90: left promiscuous mode
Nov 29 03:37:17 np0005539564 nova_compute[226295]: 2025-11-29 08:37:17.656 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:17.661 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[552b7b2d-b7d9-4f9b-b444-dde4bde64640]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:17.678 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2ebc8484-cbe7-4712-a269-9dbdc9da2be9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:17.680 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f7214784-71fa-41b5-ad33-7fd7e4b84600]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:17.702 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[10f5777c-0db5-452b-b976-c3524b50dcec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 835946, 'reachable_time': 22585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293736, 'error': None, 'target': 'ovnmeta-fbe8f8c4-9ee0-49cc-ba19-f04282093b22', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:17 np0005539564 systemd[1]: run-netns-ovnmeta\x2dfbe8f8c4\x2d9ee0\x2d49cc\x2dba19\x2df04282093b22.mount: Deactivated successfully.
Nov 29 03:37:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:17.707 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fbe8f8c4-9ee0-49cc-ba19-f04282093b22 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:37:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:17.708 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[e10a4e96-0e4f-47c8-bc9c-5a4fb5837107]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:17.710 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf in datapath fbe8f8c4-9ee0-49cc-ba19-f04282093b22 unbound from our chassis#033[00m
Nov 29 03:37:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:17.712 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fbe8f8c4-9ee0-49cc-ba19-f04282093b22, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:37:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:17.713 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[672f26f5-4b53-4dcc-83f1-52d52d21d40b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:17.714 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf in datapath fbe8f8c4-9ee0-49cc-ba19-f04282093b22 unbound from our chassis#033[00m
Nov 29 03:37:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:17.716 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fbe8f8c4-9ee0-49cc-ba19-f04282093b22, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:37:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:17.717 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[67576bd6-2978-484e-953a-d83e7779019d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:17.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:18 np0005539564 nova_compute[226295]: 2025-11-29 08:37:18.633 226310 DEBUG nova.compute.manager [req-2bd31d21-6d93-43aa-bcaf-47c91f281aad req-38c8e130-7b32-4b66-a49b-31dbe048ecac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Received event network-vif-unplugged-2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:37:18 np0005539564 nova_compute[226295]: 2025-11-29 08:37:18.634 226310 DEBUG oslo_concurrency.lockutils [req-2bd31d21-6d93-43aa-bcaf-47c91f281aad req-38c8e130-7b32-4b66-a49b-31dbe048ecac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "474b1011-d98c-4f65-b0c1-a27fa5964442-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:18 np0005539564 nova_compute[226295]: 2025-11-29 08:37:18.635 226310 DEBUG oslo_concurrency.lockutils [req-2bd31d21-6d93-43aa-bcaf-47c91f281aad req-38c8e130-7b32-4b66-a49b-31dbe048ecac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "474b1011-d98c-4f65-b0c1-a27fa5964442-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:18 np0005539564 nova_compute[226295]: 2025-11-29 08:37:18.635 226310 DEBUG oslo_concurrency.lockutils [req-2bd31d21-6d93-43aa-bcaf-47c91f281aad req-38c8e130-7b32-4b66-a49b-31dbe048ecac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "474b1011-d98c-4f65-b0c1-a27fa5964442-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:18 np0005539564 nova_compute[226295]: 2025-11-29 08:37:18.636 226310 DEBUG nova.compute.manager [req-2bd31d21-6d93-43aa-bcaf-47c91f281aad req-38c8e130-7b32-4b66-a49b-31dbe048ecac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] No waiting events found dispatching network-vif-unplugged-2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:37:18 np0005539564 nova_compute[226295]: 2025-11-29 08:37:18.636 226310 DEBUG nova.compute.manager [req-2bd31d21-6d93-43aa-bcaf-47c91f281aad req-38c8e130-7b32-4b66-a49b-31dbe048ecac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Received event network-vif-unplugged-2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:37:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:18.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:19 np0005539564 nova_compute[226295]: 2025-11-29 08:37:19.220 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:19 np0005539564 nova_compute[226295]: 2025-11-29 08:37:19.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:37:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:19.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:37:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:20.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:21 np0005539564 nova_compute[226295]: 2025-11-29 08:37:21.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:21 np0005539564 nova_compute[226295]: 2025-11-29 08:37:21.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:37:21 np0005539564 nova_compute[226295]: 2025-11-29 08:37:21.362 226310 INFO nova.virt.libvirt.driver [None req-6c86de8a-e509-47ac-88b2-99957cbfab3a 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Deleting instance files /var/lib/nova/instances/474b1011-d98c-4f65-b0c1-a27fa5964442_del#033[00m
Nov 29 03:37:21 np0005539564 nova_compute[226295]: 2025-11-29 08:37:21.363 226310 INFO nova.virt.libvirt.driver [None req-6c86de8a-e509-47ac-88b2-99957cbfab3a 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Deletion of /var/lib/nova/instances/474b1011-d98c-4f65-b0c1-a27fa5964442_del complete#033[00m
Nov 29 03:37:21 np0005539564 nova_compute[226295]: 2025-11-29 08:37:21.384 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:37:21 np0005539564 nova_compute[226295]: 2025-11-29 08:37:21.661 226310 DEBUG nova.compute.manager [req-d1f452a0-ea81-4d94-a6e8-03025cb5a5c2 req-8730ebed-7d9c-4808-ab89-76389ca965b8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Received event network-vif-plugged-2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:37:21 np0005539564 nova_compute[226295]: 2025-11-29 08:37:21.662 226310 DEBUG oslo_concurrency.lockutils [req-d1f452a0-ea81-4d94-a6e8-03025cb5a5c2 req-8730ebed-7d9c-4808-ab89-76389ca965b8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "474b1011-d98c-4f65-b0c1-a27fa5964442-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:21 np0005539564 nova_compute[226295]: 2025-11-29 08:37:21.662 226310 DEBUG oslo_concurrency.lockutils [req-d1f452a0-ea81-4d94-a6e8-03025cb5a5c2 req-8730ebed-7d9c-4808-ab89-76389ca965b8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "474b1011-d98c-4f65-b0c1-a27fa5964442-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:21 np0005539564 nova_compute[226295]: 2025-11-29 08:37:21.662 226310 DEBUG oslo_concurrency.lockutils [req-d1f452a0-ea81-4d94-a6e8-03025cb5a5c2 req-8730ebed-7d9c-4808-ab89-76389ca965b8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "474b1011-d98c-4f65-b0c1-a27fa5964442-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:21 np0005539564 nova_compute[226295]: 2025-11-29 08:37:21.663 226310 DEBUG nova.compute.manager [req-d1f452a0-ea81-4d94-a6e8-03025cb5a5c2 req-8730ebed-7d9c-4808-ab89-76389ca965b8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] No waiting events found dispatching network-vif-plugged-2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:37:21 np0005539564 nova_compute[226295]: 2025-11-29 08:37:21.663 226310 WARNING nova.compute.manager [req-d1f452a0-ea81-4d94-a6e8-03025cb5a5c2 req-8730ebed-7d9c-4808-ab89-76389ca965b8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Received unexpected event network-vif-plugged-2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:37:21 np0005539564 nova_compute[226295]: 2025-11-29 08:37:21.682 226310 INFO nova.compute.manager [None req-6c86de8a-e509-47ac-88b2-99957cbfab3a 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Took 4.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:37:21 np0005539564 nova_compute[226295]: 2025-11-29 08:37:21.682 226310 DEBUG oslo.service.loopingcall [None req-6c86de8a-e509-47ac-88b2-99957cbfab3a 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:37:21 np0005539564 nova_compute[226295]: 2025-11-29 08:37:21.683 226310 DEBUG nova.compute.manager [-] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:37:21 np0005539564 nova_compute[226295]: 2025-11-29 08:37:21.683 226310 DEBUG nova.network.neutron [-] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:37:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:21.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:22 np0005539564 nova_compute[226295]: 2025-11-29 08:37:22.384 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:22 np0005539564 nova_compute[226295]: 2025-11-29 08:37:22.449 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:22 np0005539564 nova_compute[226295]: 2025-11-29 08:37:22.450 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:22 np0005539564 nova_compute[226295]: 2025-11-29 08:37:22.450 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:22 np0005539564 nova_compute[226295]: 2025-11-29 08:37:22.450 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:37:22 np0005539564 nova_compute[226295]: 2025-11-29 08:37:22.450 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:22 np0005539564 nova_compute[226295]: 2025-11-29 08:37:22.500 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:22.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:37:22 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/303534460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:37:22 np0005539564 nova_compute[226295]: 2025-11-29 08:37:22.927 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:23 np0005539564 nova_compute[226295]: 2025-11-29 08:37:23.104 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:37:23 np0005539564 nova_compute[226295]: 2025-11-29 08:37:23.105 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4295MB free_disk=20.965614318847656GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:37:23 np0005539564 nova_compute[226295]: 2025-11-29 08:37:23.105 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:23 np0005539564 nova_compute[226295]: 2025-11-29 08:37:23.106 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:23 np0005539564 nova_compute[226295]: 2025-11-29 08:37:23.274 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 474b1011-d98c-4f65-b0c1-a27fa5964442 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:37:23 np0005539564 nova_compute[226295]: 2025-11-29 08:37:23.275 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:37:23 np0005539564 nova_compute[226295]: 2025-11-29 08:37:23.275 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:37:23 np0005539564 nova_compute[226295]: 2025-11-29 08:37:23.374 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:37:23 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1220308300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:37:23 np0005539564 nova_compute[226295]: 2025-11-29 08:37:23.863 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:23 np0005539564 nova_compute[226295]: 2025-11-29 08:37:23.871 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:37:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:23.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:24 np0005539564 nova_compute[226295]: 2025-11-29 08:37:24.089 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:37:24 np0005539564 nova_compute[226295]: 2025-11-29 08:37:24.119 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:37:24 np0005539564 nova_compute[226295]: 2025-11-29 08:37:24.119 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.014s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:24 np0005539564 nova_compute[226295]: 2025-11-29 08:37:24.223 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:24.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:25.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:26 np0005539564 nova_compute[226295]: 2025-11-29 08:37:26.048 226310 DEBUG nova.compute.manager [req-8b6b5458-402b-4039-a718-f393f96fdc2a req-b385b3db-ec07-4164-a8a0-de3bf181c31b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Received event network-vif-deleted-2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:37:26 np0005539564 nova_compute[226295]: 2025-11-29 08:37:26.049 226310 INFO nova.compute.manager [req-8b6b5458-402b-4039-a718-f393f96fdc2a req-b385b3db-ec07-4164-a8a0-de3bf181c31b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Neutron deleted interface 2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:37:26 np0005539564 nova_compute[226295]: 2025-11-29 08:37:26.049 226310 DEBUG nova.network.neutron [req-8b6b5458-402b-4039-a718-f393f96fdc2a req-b385b3db-ec07-4164-a8a0-de3bf181c31b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:37:26 np0005539564 nova_compute[226295]: 2025-11-29 08:37:26.056 226310 DEBUG nova.network.neutron [-] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:37:26 np0005539564 nova_compute[226295]: 2025-11-29 08:37:26.221 226310 INFO nova.compute.manager [-] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Took 4.54 seconds to deallocate network for instance.#033[00m
Nov 29 03:37:26 np0005539564 nova_compute[226295]: 2025-11-29 08:37:26.228 226310 DEBUG nova.compute.manager [req-8b6b5458-402b-4039-a718-f393f96fdc2a req-b385b3db-ec07-4164-a8a0-de3bf181c31b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Detach interface failed, port_id=2eb5eadd-55fb-43e5-80c6-a9d8d7d6cedf, reason: Instance 474b1011-d98c-4f65-b0c1-a27fa5964442 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:37:26 np0005539564 nova_compute[226295]: 2025-11-29 08:37:26.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:26 np0005539564 nova_compute[226295]: 2025-11-29 08:37:26.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:37:26 np0005539564 nova_compute[226295]: 2025-11-29 08:37:26.702 226310 DEBUG oslo_concurrency.lockutils [None req-6c86de8a-e509-47ac-88b2-99957cbfab3a 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:26 np0005539564 nova_compute[226295]: 2025-11-29 08:37:26.703 226310 DEBUG oslo_concurrency.lockutils [None req-6c86de8a-e509-47ac-88b2-99957cbfab3a 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:26 np0005539564 nova_compute[226295]: 2025-11-29 08:37:26.753 226310 DEBUG oslo_concurrency.processutils [None req-6c86de8a-e509-47ac-88b2-99957cbfab3a 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:26.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:37:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/512438549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:37:27 np0005539564 nova_compute[226295]: 2025-11-29 08:37:27.243 226310 DEBUG oslo_concurrency.processutils [None req-6c86de8a-e509-47ac-88b2-99957cbfab3a 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:27 np0005539564 nova_compute[226295]: 2025-11-29 08:37:27.253 226310 DEBUG nova.compute.provider_tree [None req-6c86de8a-e509-47ac-88b2-99957cbfab3a 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:37:27 np0005539564 nova_compute[226295]: 2025-11-29 08:37:27.303 226310 DEBUG nova.scheduler.client.report [None req-6c86de8a-e509-47ac-88b2-99957cbfab3a 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:37:27 np0005539564 nova_compute[226295]: 2025-11-29 08:37:27.504 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:27 np0005539564 nova_compute[226295]: 2025-11-29 08:37:27.626 226310 DEBUG oslo_concurrency.lockutils [None req-6c86de8a-e509-47ac-88b2-99957cbfab3a 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:27 np0005539564 nova_compute[226295]: 2025-11-29 08:37:27.757 226310 INFO nova.scheduler.client.report [None req-6c86de8a-e509-47ac-88b2-99957cbfab3a 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Deleted allocations for instance 474b1011-d98c-4f65-b0c1-a27fa5964442#033[00m
Nov 29 03:37:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:37:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:27.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:37:28 np0005539564 nova_compute[226295]: 2025-11-29 08:37:28.401 226310 DEBUG oslo_concurrency.lockutils [None req-6c86de8a-e509-47ac-88b2-99957cbfab3a 6126044d5f7d49d19e3feffbc3034024 3827b5eea76e4810b48ea1733ae5edc4 - - default default] Lock "474b1011-d98c-4f65-b0c1-a27fa5964442" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.447s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:28.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:29 np0005539564 nova_compute[226295]: 2025-11-29 08:37:29.261 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:29.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:30.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:31.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:32 np0005539564 nova_compute[226295]: 2025-11-29 08:37:32.433 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405437.4313662, 474b1011-d98c-4f65-b0c1-a27fa5964442 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:37:32 np0005539564 nova_compute[226295]: 2025-11-29 08:37:32.434 226310 INFO nova.compute.manager [-] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:37:32 np0005539564 nova_compute[226295]: 2025-11-29 08:37:32.507 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:32.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:32 np0005539564 nova_compute[226295]: 2025-11-29 08:37:32.891 226310 DEBUG nova.compute.manager [None req-4c3125ec-9f2d-48b5-bea6-144f7038b755 - - - - - -] [instance: 474b1011-d98c-4f65-b0c1-a27fa5964442] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:37:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:33.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:34 np0005539564 nova_compute[226295]: 2025-11-29 08:37:34.263 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:34.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #130. Immutable memtables: 0.
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:37:35.020755) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 130
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405455020805, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 1655, "num_deletes": 257, "total_data_size": 3737465, "memory_usage": 3797584, "flush_reason": "Manual Compaction"}
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #131: started
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405455050260, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 131, "file_size": 2453955, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64660, "largest_seqno": 66310, "table_properties": {"data_size": 2447023, "index_size": 4002, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14745, "raw_average_key_size": 20, "raw_value_size": 2432972, "raw_average_value_size": 3301, "num_data_blocks": 176, "num_entries": 737, "num_filter_entries": 737, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405317, "oldest_key_time": 1764405317, "file_creation_time": 1764405455, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 29567 microseconds, and 12229 cpu microseconds.
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:37:35.050319) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #131: 2453955 bytes OK
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:37:35.050352) [db/memtable_list.cc:519] [default] Level-0 commit table #131 started
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:37:35.052864) [db/memtable_list.cc:722] [default] Level-0 commit table #131: memtable #1 done
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:37:35.052889) EVENT_LOG_v1 {"time_micros": 1764405455052881, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:37:35.052945) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 3729931, prev total WAL file size 3729931, number of live WAL files 2.
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000127.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:37:35.054888) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323635' seq:72057594037927935, type:22 .. '6C6F676D0032353137' seq:0, type:0; will stop at (end)
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [131(2396KB)], [129(9558KB)]
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405455055002, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [131], "files_L6": [129], "score": -1, "input_data_size": 12242161, "oldest_snapshot_seqno": -1}
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #132: 9356 keys, 12081135 bytes, temperature: kUnknown
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405455188727, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 132, "file_size": 12081135, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12020225, "index_size": 36413, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23429, "raw_key_size": 247347, "raw_average_key_size": 26, "raw_value_size": 11855401, "raw_average_value_size": 1267, "num_data_blocks": 1389, "num_entries": 9356, "num_filter_entries": 9356, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764405455, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 132, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:37:35.189331) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 12081135 bytes
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:37:35.191979) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 91.4 rd, 90.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 9.3 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(9.9) write-amplify(4.9) OK, records in: 9889, records dropped: 533 output_compression: NoCompression
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:37:35.192014) EVENT_LOG_v1 {"time_micros": 1764405455191997, "job": 82, "event": "compaction_finished", "compaction_time_micros": 133987, "compaction_time_cpu_micros": 46852, "output_level": 6, "num_output_files": 1, "total_output_size": 12081135, "num_input_records": 9889, "num_output_records": 9356, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405455193000, "job": 82, "event": "table_file_deletion", "file_number": 131}
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000129.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405455196730, "job": 82, "event": "table_file_deletion", "file_number": 129}
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:37:35.054709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:37:35.196784) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:37:35.196792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:37:35.196797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:37:35.196800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:37:35 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:37:35.196803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:37:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:35.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:36.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:37 np0005539564 nova_compute[226295]: 2025-11-29 08:37:37.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:37 np0005539564 nova_compute[226295]: 2025-11-29 08:37:37.510 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:37.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:37:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:38.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:37:39 np0005539564 nova_compute[226295]: 2025-11-29 08:37:39.265 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:39 np0005539564 nova_compute[226295]: 2025-11-29 08:37:39.870 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:40.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:40.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:41 np0005539564 podman[293813]: 2025-11-29 08:37:41.556368375 +0000 UTC m=+0.096813671 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.license=GPLv2)
Nov 29 03:37:41 np0005539564 podman[293815]: 2025-11-29 08:37:41.580464117 +0000 UTC m=+0.118517198 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:37:41 np0005539564 podman[293811]: 2025-11-29 08:37:41.589780469 +0000 UTC m=+0.134466879 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 29 03:37:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:42.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:42 np0005539564 nova_compute[226295]: 2025-11-29 08:37:42.513 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:42.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:42.968 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:37:42 np0005539564 nova_compute[226295]: 2025-11-29 08:37:42.968 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:42.969 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:37:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:44.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:44 np0005539564 nova_compute[226295]: 2025-11-29 08:37:44.268 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:44.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:46.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:46.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:47 np0005539564 nova_compute[226295]: 2025-11-29 08:37:47.516 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:48.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:48.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:48 np0005539564 nova_compute[226295]: 2025-11-29 08:37:48.867 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:49 np0005539564 nova_compute[226295]: 2025-11-29 08:37:49.061 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:49 np0005539564 nova_compute[226295]: 2025-11-29 08:37:49.271 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:50.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:37:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:50.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:37:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:37:50.971 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:52.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:52 np0005539564 nova_compute[226295]: 2025-11-29 08:37:52.339 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:52 np0005539564 nova_compute[226295]: 2025-11-29 08:37:52.518 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:52.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:54.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:54 np0005539564 nova_compute[226295]: 2025-11-29 08:37:54.273 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:54.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:56.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:56.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:57 np0005539564 nova_compute[226295]: 2025-11-29 08:37:57.520 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:58.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:37:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:58.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:59 np0005539564 nova_compute[226295]: 2025-11-29 08:37:59.275 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:38:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:00.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:38:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:00.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:02.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:02 np0005539564 nova_compute[226295]: 2025-11-29 08:38:02.522 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:02.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:03.753 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:03.753 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:03.753 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:04.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:04 np0005539564 nova_compute[226295]: 2025-11-29 08:38:04.276 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:04.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:05 np0005539564 nova_compute[226295]: 2025-11-29 08:38:05.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:05 np0005539564 nova_compute[226295]: 2025-11-29 08:38:05.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:05 np0005539564 nova_compute[226295]: 2025-11-29 08:38:05.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:38:05 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:38:05 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:38:05 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:38:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:06.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:06 np0005539564 nova_compute[226295]: 2025-11-29 08:38:06.337 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:06.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:07 np0005539564 nova_compute[226295]: 2025-11-29 08:38:07.571 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:08.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:08.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:09 np0005539564 nova_compute[226295]: 2025-11-29 08:38:09.279 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:09 np0005539564 nova_compute[226295]: 2025-11-29 08:38:09.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:10 np0005539564 nova_compute[226295]: 2025-11-29 08:38:10.037 226310 DEBUG nova.virt.libvirt.driver [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Creating tmpfile /var/lib/nova/instances/tmpmdmpjaw5 to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Nov 29 03:38:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:10.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:10 np0005539564 nova_compute[226295]: 2025-11-29 08:38:10.220 226310 DEBUG nova.compute.manager [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpmdmpjaw5',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Nov 29 03:38:10 np0005539564 nova_compute[226295]: 2025-11-29 08:38:10.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:10 np0005539564 nova_compute[226295]: 2025-11-29 08:38:10.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:38:10 np0005539564 nova_compute[226295]: 2025-11-29 08:38:10.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:38:10 np0005539564 nova_compute[226295]: 2025-11-29 08:38:10.437 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:38:10 np0005539564 nova_compute[226295]: 2025-11-29 08:38:10.438 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:10.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:11 np0005539564 podman[294036]: 2025-11-29 08:38:11.69297683 +0000 UTC m=+0.061831073 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:38:11 np0005539564 podman[294035]: 2025-11-29 08:38:11.694320806 +0000 UTC m=+0.069204523 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd)
Nov 29 03:38:11 np0005539564 podman[294034]: 2025-11-29 08:38:11.717297728 +0000 UTC m=+0.092633747 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 03:38:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:12.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:12 np0005539564 nova_compute[226295]: 2025-11-29 08:38:12.229 226310 DEBUG nova.compute.manager [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpmdmpjaw5',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='0fa70e65-aaae-493a-9c8c-db89fe6658e6',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Nov 29 03:38:12 np0005539564 nova_compute[226295]: 2025-11-29 08:38:12.254 226310 DEBUG oslo_concurrency.lockutils [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Acquiring lock "refresh_cache-0fa70e65-aaae-493a-9c8c-db89fe6658e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:38:12 np0005539564 nova_compute[226295]: 2025-11-29 08:38:12.254 226310 DEBUG oslo_concurrency.lockutils [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Acquired lock "refresh_cache-0fa70e65-aaae-493a-9c8c-db89fe6658e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:38:12 np0005539564 nova_compute[226295]: 2025-11-29 08:38:12.254 226310 DEBUG nova.network.neutron [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:38:12 np0005539564 nova_compute[226295]: 2025-11-29 08:38:12.511 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:12 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:38:12 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:38:12 np0005539564 nova_compute[226295]: 2025-11-29 08:38:12.575 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:12.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:13 np0005539564 nova_compute[226295]: 2025-11-29 08:38:13.925 226310 DEBUG nova.network.neutron [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Updating instance_info_cache with network_info: [{"id": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "address": "fa:16:3e:7a:9d:09", "network": {"id": "3c01a89c-f496-44c3-afa3-4720950528b6", "bridge": "br-int", "label": "tempest-network-smoke--466586832", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebf4feb2-02", "ovs_interfaceid": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:38:13 np0005539564 nova_compute[226295]: 2025-11-29 08:38:13.946 226310 DEBUG oslo_concurrency.lockutils [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Releasing lock "refresh_cache-0fa70e65-aaae-493a-9c8c-db89fe6658e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:38:13 np0005539564 nova_compute[226295]: 2025-11-29 08:38:13.949 226310 DEBUG nova.virt.libvirt.driver [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpmdmpjaw5',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='0fa70e65-aaae-493a-9c8c-db89fe6658e6',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Nov 29 03:38:13 np0005539564 nova_compute[226295]: 2025-11-29 08:38:13.950 226310 DEBUG nova.virt.libvirt.driver [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Creating instance directory: /var/lib/nova/instances/0fa70e65-aaae-493a-9c8c-db89fe6658e6 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Nov 29 03:38:13 np0005539564 nova_compute[226295]: 2025-11-29 08:38:13.951 226310 DEBUG nova.virt.libvirt.driver [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Ensure instance console log exists: /var/lib/nova/instances/0fa70e65-aaae-493a-9c8c-db89fe6658e6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:38:13 np0005539564 nova_compute[226295]: 2025-11-29 08:38:13.951 226310 DEBUG nova.virt.libvirt.driver [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Nov 29 03:38:13 np0005539564 nova_compute[226295]: 2025-11-29 08:38:13.953 226310 DEBUG nova.virt.libvirt.vif [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:37:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-523561219',display_name='tempest-TestNetworkAdvancedServerOps-server-523561219',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-523561219',id=179,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCaqYXq9qkUFXGf1qpvHhcz47gUzuZhq+Jcnq6cylAKf+87//oLJuPUr6JJfsrawgC9NlTQR6RzaDMo9jIsaNOtIuAwNCS169ddXogsTd4Ncy9Th61lYBRoaZpDFVcDEOA==',key_name='tempest-TestNetworkAdvancedServerOps-66857871',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:37:42Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-uamaf3kr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:37:42Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=0fa70e65-aaae-493a-9c8c-db89fe6658e6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "address": "fa:16:3e:7a:9d:09", "network": {"id": "3c01a89c-f496-44c3-afa3-4720950528b6", "bridge": "br-int", "label": "tempest-network-smoke--466586832", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapebf4feb2-02", "ovs_interfaceid": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:38:13 np0005539564 nova_compute[226295]: 2025-11-29 08:38:13.953 226310 DEBUG nova.network.os_vif_util [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Converting VIF {"id": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "address": "fa:16:3e:7a:9d:09", "network": {"id": "3c01a89c-f496-44c3-afa3-4720950528b6", "bridge": "br-int", "label": "tempest-network-smoke--466586832", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapebf4feb2-02", "ovs_interfaceid": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:38:13 np0005539564 nova_compute[226295]: 2025-11-29 08:38:13.954 226310 DEBUG nova.network.os_vif_util [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:9d:09,bridge_name='br-int',has_traffic_filtering=True,id=ebf4feb2-0247-40b6-a431-2f55b2f4c237,network=Network(3c01a89c-f496-44c3-afa3-4720950528b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebf4feb2-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:38:13 np0005539564 nova_compute[226295]: 2025-11-29 08:38:13.955 226310 DEBUG os_vif [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:9d:09,bridge_name='br-int',has_traffic_filtering=True,id=ebf4feb2-0247-40b6-a431-2f55b2f4c237,network=Network(3c01a89c-f496-44c3-afa3-4720950528b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebf4feb2-02') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:38:13 np0005539564 nova_compute[226295]: 2025-11-29 08:38:13.955 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:13 np0005539564 nova_compute[226295]: 2025-11-29 08:38:13.956 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:13 np0005539564 nova_compute[226295]: 2025-11-29 08:38:13.956 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:38:13 np0005539564 nova_compute[226295]: 2025-11-29 08:38:13.961 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:13 np0005539564 nova_compute[226295]: 2025-11-29 08:38:13.962 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebf4feb2-02, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:13 np0005539564 nova_compute[226295]: 2025-11-29 08:38:13.962 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapebf4feb2-02, col_values=(('external_ids', {'iface-id': 'ebf4feb2-0247-40b6-a431-2f55b2f4c237', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:9d:09', 'vm-uuid': '0fa70e65-aaae-493a-9c8c-db89fe6658e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:13 np0005539564 nova_compute[226295]: 2025-11-29 08:38:13.964 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:13 np0005539564 NetworkManager[48997]: <info>  [1764405493.9656] manager: (tapebf4feb2-02): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/326)
Nov 29 03:38:13 np0005539564 nova_compute[226295]: 2025-11-29 08:38:13.966 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:38:13 np0005539564 nova_compute[226295]: 2025-11-29 08:38:13.974 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:13 np0005539564 nova_compute[226295]: 2025-11-29 08:38:13.976 226310 INFO os_vif [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:9d:09,bridge_name='br-int',has_traffic_filtering=True,id=ebf4feb2-0247-40b6-a431-2f55b2f4c237,network=Network(3c01a89c-f496-44c3-afa3-4720950528b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebf4feb2-02')#033[00m
Nov 29 03:38:13 np0005539564 nova_compute[226295]: 2025-11-29 08:38:13.977 226310 DEBUG nova.virt.libvirt.driver [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Nov 29 03:38:13 np0005539564 nova_compute[226295]: 2025-11-29 08:38:13.977 226310 DEBUG nova.compute.manager [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpmdmpjaw5',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='0fa70e65-aaae-493a-9c8c-db89fe6658e6',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Nov 29 03:38:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:14.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:14 np0005539564 nova_compute[226295]: 2025-11-29 08:38:14.281 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:14 np0005539564 nova_compute[226295]: 2025-11-29 08:38:14.367 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:14.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:15 np0005539564 nova_compute[226295]: 2025-11-29 08:38:15.260 226310 DEBUG nova.network.neutron [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Port ebf4feb2-0247-40b6-a431-2f55b2f4c237 updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Nov 29 03:38:15 np0005539564 nova_compute[226295]: 2025-11-29 08:38:15.262 226310 DEBUG nova.compute.manager [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpmdmpjaw5',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='0fa70e65-aaae-493a-9c8c-db89fe6658e6',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Nov 29 03:38:15 np0005539564 systemd[1]: Starting libvirt proxy daemon...
Nov 29 03:38:15 np0005539564 systemd[1]: Started libvirt proxy daemon.
Nov 29 03:38:15 np0005539564 kernel: tapebf4feb2-02: entered promiscuous mode
Nov 29 03:38:15 np0005539564 NetworkManager[48997]: <info>  [1764405495.6044] manager: (tapebf4feb2-02): new Tun device (/org/freedesktop/NetworkManager/Devices/327)
Nov 29 03:38:15 np0005539564 ovn_controller[130591]: 2025-11-29T08:38:15Z|00715|binding|INFO|Claiming lport ebf4feb2-0247-40b6-a431-2f55b2f4c237 for this additional chassis.
Nov 29 03:38:15 np0005539564 ovn_controller[130591]: 2025-11-29T08:38:15Z|00716|binding|INFO|ebf4feb2-0247-40b6-a431-2f55b2f4c237: Claiming fa:16:3e:7a:9d:09 10.100.0.14
Nov 29 03:38:15 np0005539564 nova_compute[226295]: 2025-11-29 08:38:15.604 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:15 np0005539564 nova_compute[226295]: 2025-11-29 08:38:15.612 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:15 np0005539564 nova_compute[226295]: 2025-11-29 08:38:15.617 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:15 np0005539564 NetworkManager[48997]: <info>  [1764405495.6178] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/328)
Nov 29 03:38:15 np0005539564 NetworkManager[48997]: <info>  [1764405495.6194] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/329)
Nov 29 03:38:15 np0005539564 systemd-machined[190128]: New machine qemu-85-instance-000000b3.
Nov 29 03:38:15 np0005539564 systemd-udevd[294156]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:38:15 np0005539564 NetworkManager[48997]: <info>  [1764405495.6580] device (tapebf4feb2-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:38:15 np0005539564 NetworkManager[48997]: <info>  [1764405495.6594] device (tapebf4feb2-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:38:15 np0005539564 systemd[1]: Started Virtual Machine qemu-85-instance-000000b3.
Nov 29 03:38:15 np0005539564 nova_compute[226295]: 2025-11-29 08:38:15.708 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:15 np0005539564 nova_compute[226295]: 2025-11-29 08:38:15.712 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:15 np0005539564 nova_compute[226295]: 2025-11-29 08:38:15.714 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:15 np0005539564 nova_compute[226295]: 2025-11-29 08:38:15.736 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:15 np0005539564 ovn_controller[130591]: 2025-11-29T08:38:15Z|00717|binding|INFO|Setting lport ebf4feb2-0247-40b6-a431-2f55b2f4c237 ovn-installed in OVS
Nov 29 03:38:15 np0005539564 nova_compute[226295]: 2025-11-29 08:38:15.767 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:16.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:16 np0005539564 nova_compute[226295]: 2025-11-29 08:38:16.591 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405496.590388, 0fa70e65-aaae-493a-9c8c-db89fe6658e6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:38:16 np0005539564 nova_compute[226295]: 2025-11-29 08:38:16.592 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] VM Started (Lifecycle Event)#033[00m
Nov 29 03:38:16 np0005539564 nova_compute[226295]: 2025-11-29 08:38:16.691 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:16.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:17 np0005539564 nova_compute[226295]: 2025-11-29 08:38:17.145 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405497.1451347, 0fa70e65-aaae-493a-9c8c-db89fe6658e6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:38:17 np0005539564 nova_compute[226295]: 2025-11-29 08:38:17.146 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:38:17 np0005539564 nova_compute[226295]: 2025-11-29 08:38:17.165 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:17 np0005539564 nova_compute[226295]: 2025-11-29 08:38:17.171 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:38:17 np0005539564 nova_compute[226295]: 2025-11-29 08:38:17.190 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Nov 29 03:38:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:18.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:38:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:18.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:38:18 np0005539564 nova_compute[226295]: 2025-11-29 08:38:18.990 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:19 np0005539564 ovn_controller[130591]: 2025-11-29T08:38:19Z|00718|binding|INFO|Claiming lport ebf4feb2-0247-40b6-a431-2f55b2f4c237 for this chassis.
Nov 29 03:38:19 np0005539564 ovn_controller[130591]: 2025-11-29T08:38:19Z|00719|binding|INFO|ebf4feb2-0247-40b6-a431-2f55b2f4c237: Claiming fa:16:3e:7a:9d:09 10.100.0.14
Nov 29 03:38:19 np0005539564 ovn_controller[130591]: 2025-11-29T08:38:19Z|00720|binding|INFO|Setting lport ebf4feb2-0247-40b6-a431-2f55b2f4c237 up in Southbound
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.173 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:9d:09 10.100.0.14'], port_security=['fa:16:3e:7a:9d:09 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0fa70e65-aaae-493a-9c8c-db89fe6658e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c01a89c-f496-44c3-afa3-4720950528b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'c61e493b-5131-4681-b607-cad8a707cfcb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a4ed0086-1dab-4f89-9d5b-dbd6a6a8243e, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=ebf4feb2-0247-40b6-a431-2f55b2f4c237) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.175 139780 INFO neutron.agent.ovn.metadata.agent [-] Port ebf4feb2-0247-40b6-a431-2f55b2f4c237 in datapath 3c01a89c-f496-44c3-afa3-4720950528b6 bound to our chassis#033[00m
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.176 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c01a89c-f496-44c3-afa3-4720950528b6#033[00m
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.192 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[66058871-6900-41b6-aaa1-67aeffc84220]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.194 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3c01a89c-f1 in ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.197 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3c01a89c-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.197 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4c869d30-fd24-42fd-af51-b601954a1fb2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.198 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[32291874-2bad-43f9-be93-549b6bd2d8e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.222 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[eae72a12-a724-4547-8730-424982de3713]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.242 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9d44f6c9-e994-453d-a1c6-ef1bba03bd94]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:19 np0005539564 nova_compute[226295]: 2025-11-29 08:38:19.283 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.288 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[5e401884-006a-4597-89f7-c1028946a641]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:19 np0005539564 NetworkManager[48997]: <info>  [1764405499.2969] manager: (tap3c01a89c-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/330)
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.294 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[417e4852-5888-4193-b593-d4a05425c037]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:19 np0005539564 systemd-udevd[294214]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.342 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[be6e45db-612f-4d82-89e9-ab1340ee0cc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.347 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[5f6a574c-3482-4b53-b890-19006ff68716]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:19 np0005539564 NetworkManager[48997]: <info>  [1764405499.3753] device (tap3c01a89c-f0): carrier: link connected
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.383 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[98ba6c1f-b792-40b2-b6a5-02435cce414a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.398 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[01629139-97d4-4994-a36e-ac6688bb02bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c01a89c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:9d:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 217], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 847405, 'reachable_time': 24419, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294233, 'error': None, 'target': 'ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.417 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3ac3f3a7-bc81-4f16-b8c7-642d57bda409]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb6:9d5d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 847405, 'tstamp': 847405}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294234, 'error': None, 'target': 'ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.437 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[77eec829-48fc-4c0c-8ab9-81c1c30eb695]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c01a89c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:9d:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 217], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 847405, 'reachable_time': 24419, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294235, 'error': None, 'target': 'ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.477 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2b6e945a-793e-460a-9802-6f2643104b00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.561 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[75787dee-53e1-4e58-982e-035abd89e02d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.563 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c01a89c-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.564 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.565 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c01a89c-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:19 np0005539564 nova_compute[226295]: 2025-11-29 08:38:19.567 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:19 np0005539564 NetworkManager[48997]: <info>  [1764405499.5686] manager: (tap3c01a89c-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/331)
Nov 29 03:38:19 np0005539564 kernel: tap3c01a89c-f0: entered promiscuous mode
Nov 29 03:38:19 np0005539564 nova_compute[226295]: 2025-11-29 08:38:19.571 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.574 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c01a89c-f0, col_values=(('external_ids', {'iface-id': '2ae168e9-4618-4303-8f12-978250c78d38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:19 np0005539564 nova_compute[226295]: 2025-11-29 08:38:19.576 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:19 np0005539564 ovn_controller[130591]: 2025-11-29T08:38:19Z|00721|binding|INFO|Releasing lport 2ae168e9-4618-4303-8f12-978250c78d38 from this chassis (sb_readonly=0)
Nov 29 03:38:19 np0005539564 nova_compute[226295]: 2025-11-29 08:38:19.593 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.595 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3c01a89c-f496-44c3-afa3-4720950528b6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3c01a89c-f496-44c3-afa3-4720950528b6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.597 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f20479b2-9de1-4595-afb7-8506a049c621]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.598 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-3c01a89c-f496-44c3-afa3-4720950528b6
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/3c01a89c-f496-44c3-afa3-4720950528b6.pid.haproxy
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 3c01a89c-f496-44c3-afa3-4720950528b6
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 03:38:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:19.599 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6', 'env', 'PROCESS_TAG=haproxy-3c01a89c-f496-44c3-afa3-4720950528b6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3c01a89c-f496-44c3-afa3-4720950528b6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 03:38:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:20.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:20 np0005539564 podman[294268]: 2025-11-29 08:38:20.068050899 +0000 UTC m=+0.067197189 container create b456898bcad36e82781238e28e193de2660a11a8d95d34e13a7ba8fa203fd99b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:38:20 np0005539564 systemd[1]: Started libpod-conmon-b456898bcad36e82781238e28e193de2660a11a8d95d34e13a7ba8fa203fd99b.scope.
Nov 29 03:38:20 np0005539564 podman[294268]: 2025-11-29 08:38:20.030280857 +0000 UTC m=+0.029427207 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:38:20 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:38:20 np0005539564 nova_compute[226295]: 2025-11-29 08:38:20.142 226310 INFO nova.compute.manager [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Post operation of migration started
Nov 29 03:38:20 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fadb1adab0f35e98a661fc4eb597037a3c60ec11075240f0eb7a1223c0201540/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:20 np0005539564 podman[294268]: 2025-11-29 08:38:20.162435782 +0000 UTC m=+0.161582062 container init b456898bcad36e82781238e28e193de2660a11a8d95d34e13a7ba8fa203fd99b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:38:20 np0005539564 podman[294268]: 2025-11-29 08:38:20.171524998 +0000 UTC m=+0.170671268 container start b456898bcad36e82781238e28e193de2660a11a8d95d34e13a7ba8fa203fd99b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:38:20 np0005539564 neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6[294284]: [NOTICE]   (294288) : New worker (294290) forked
Nov 29 03:38:20 np0005539564 neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6[294284]: [NOTICE]   (294288) : Loading success.
Nov 29 03:38:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:20.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:21 np0005539564 nova_compute[226295]: 2025-11-29 08:38:21.246 226310 DEBUG oslo_concurrency.lockutils [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Acquiring lock "refresh_cache-0fa70e65-aaae-493a-9c8c-db89fe6658e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:38:21 np0005539564 nova_compute[226295]: 2025-11-29 08:38:21.246 226310 DEBUG oslo_concurrency.lockutils [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Acquired lock "refresh_cache-0fa70e65-aaae-493a-9c8c-db89fe6658e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:38:21 np0005539564 nova_compute[226295]: 2025-11-29 08:38:21.247 226310 DEBUG nova.network.neutron [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:38:21 np0005539564 nova_compute[226295]: 2025-11-29 08:38:21.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:38:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:22.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:22.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:23 np0005539564 nova_compute[226295]: 2025-11-29 08:38:23.260 226310 DEBUG nova.network.neutron [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Updating instance_info_cache with network_info: [{"id": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "address": "fa:16:3e:7a:9d:09", "network": {"id": "3c01a89c-f496-44c3-afa3-4720950528b6", "bridge": "br-int", "label": "tempest-network-smoke--466586832", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebf4feb2-02", "ovs_interfaceid": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:38:23 np0005539564 nova_compute[226295]: 2025-11-29 08:38:23.284 226310 DEBUG oslo_concurrency.lockutils [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Releasing lock "refresh_cache-0fa70e65-aaae-493a-9c8c-db89fe6658e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:38:23 np0005539564 nova_compute[226295]: 2025-11-29 08:38:23.303 226310 DEBUG oslo_concurrency.lockutils [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:38:23 np0005539564 nova_compute[226295]: 2025-11-29 08:38:23.304 226310 DEBUG oslo_concurrency.lockutils [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:38:23 np0005539564 nova_compute[226295]: 2025-11-29 08:38:23.304 226310 DEBUG oslo_concurrency.lockutils [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:38:23 np0005539564 nova_compute[226295]: 2025-11-29 08:38:23.309 226310 INFO nova.virt.libvirt.driver [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Nov 29 03:38:23 np0005539564 virtqemud[225880]: Domain id=85 name='instance-000000b3' uuid=0fa70e65-aaae-493a-9c8c-db89fe6658e6 is tainted: custom-monitor
Nov 29 03:38:23 np0005539564 nova_compute[226295]: 2025-11-29 08:38:23.348 226310 DEBUG oslo_concurrency.lockutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "39a5933a-7591-4dd5-9113-5291a3eab7df" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:38:23 np0005539564 nova_compute[226295]: 2025-11-29 08:38:23.349 226310 DEBUG oslo_concurrency.lockutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "39a5933a-7591-4dd5-9113-5291a3eab7df" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:38:23 np0005539564 nova_compute[226295]: 2025-11-29 08:38:23.378 226310 DEBUG nova.compute.manager [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:38:23 np0005539564 nova_compute[226295]: 2025-11-29 08:38:23.452 226310 DEBUG oslo_concurrency.lockutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:38:23 np0005539564 nova_compute[226295]: 2025-11-29 08:38:23.453 226310 DEBUG oslo_concurrency.lockutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:38:23 np0005539564 nova_compute[226295]: 2025-11-29 08:38:23.460 226310 DEBUG nova.virt.hardware [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:38:23 np0005539564 nova_compute[226295]: 2025-11-29 08:38:23.460 226310 INFO nova.compute.claims [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Claim successful on node compute-1.ctlplane.example.com
Nov 29 03:38:23 np0005539564 nova_compute[226295]: 2025-11-29 08:38:23.535 226310 DEBUG nova.scheduler.client.report [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Refreshing inventories for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 03:38:23 np0005539564 nova_compute[226295]: 2025-11-29 08:38:23.562 226310 DEBUG nova.scheduler.client.report [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Updating ProviderTree inventory for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 03:38:23 np0005539564 nova_compute[226295]: 2025-11-29 08:38:23.562 226310 DEBUG nova.compute.provider_tree [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Updating inventory in ProviderTree for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 03:38:23 np0005539564 nova_compute[226295]: 2025-11-29 08:38:23.589 226310 DEBUG nova.scheduler.client.report [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Refreshing aggregate associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 03:38:23 np0005539564 nova_compute[226295]: 2025-11-29 08:38:23.619 226310 DEBUG nova.scheduler.client.report [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Refreshing trait associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 03:38:23 np0005539564 nova_compute[226295]: 2025-11-29 08:38:23.705 226310 DEBUG oslo_concurrency.processutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:38:23 np0005539564 nova_compute[226295]: 2025-11-29 08:38:23.995 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:38:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:38:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:24.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:38:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:38:24 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3132769282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.144 226310 DEBUG oslo_concurrency.processutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.152 226310 DEBUG nova.compute.provider_tree [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.176 226310 DEBUG nova.scheduler.client.report [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.212 226310 DEBUG oslo_concurrency.lockutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.213 226310 DEBUG nova.compute.manager [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.261 226310 DEBUG nova.compute.manager [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.262 226310 DEBUG nova.network.neutron [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.285 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.292 226310 INFO nova.virt.libvirt.driver [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.319 226310 INFO nova.virt.libvirt.driver [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.322 226310 DEBUG nova.compute.manager [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.377 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.378 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.378 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.378 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.379 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.466 226310 DEBUG nova.compute.manager [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.469 226310 DEBUG nova.virt.libvirt.driver [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.469 226310 INFO nova.virt.libvirt.driver [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Creating image(s)
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.510 226310 DEBUG nova.storage.rbd_utils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 39a5933a-7591-4dd5-9113-5291a3eab7df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:38:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.564 226310 DEBUG nova.storage.rbd_utils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 39a5933a-7591-4dd5-9113-5291a3eab7df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.602 226310 DEBUG nova.storage.rbd_utils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 39a5933a-7591-4dd5-9113-5291a3eab7df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.609 226310 DEBUG oslo_concurrency.processutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.650 226310 DEBUG nova.policy [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a45da8ed818144f8bd6e00d233fcb5d2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '03858b11000d4b57bd3659c3083eed47', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.691 226310 DEBUG oslo_concurrency.processutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.692 226310 DEBUG oslo_concurrency.lockutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.693 226310 DEBUG oslo_concurrency.lockutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.694 226310 DEBUG oslo_concurrency.lockutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.729 226310 DEBUG nova.storage.rbd_utils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 39a5933a-7591-4dd5-9113-5291a3eab7df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.736 226310 DEBUG oslo_concurrency.processutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 39a5933a-7591-4dd5-9113-5291a3eab7df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:38:24 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4100923533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.827 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.903 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000b3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:38:24 np0005539564 nova_compute[226295]: 2025-11-29 08:38:24.904 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000b3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:38:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:24.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:25 np0005539564 nova_compute[226295]: 2025-11-29 08:38:25.154 226310 DEBUG oslo_concurrency.processutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 39a5933a-7591-4dd5-9113-5291a3eab7df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:25 np0005539564 nova_compute[226295]: 2025-11-29 08:38:25.230 226310 DEBUG nova.storage.rbd_utils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] resizing rbd image 39a5933a-7591-4dd5-9113-5291a3eab7df_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:38:25 np0005539564 nova_compute[226295]: 2025-11-29 08:38:25.297 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:38:25 np0005539564 nova_compute[226295]: 2025-11-29 08:38:25.298 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4062MB free_disk=20.942729949951172GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:38:25 np0005539564 nova_compute[226295]: 2025-11-29 08:38:25.299 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:25 np0005539564 nova_compute[226295]: 2025-11-29 08:38:25.299 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:25 np0005539564 nova_compute[226295]: 2025-11-29 08:38:25.358 226310 INFO nova.virt.libvirt.driver [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Nov 29 03:38:25 np0005539564 nova_compute[226295]: 2025-11-29 08:38:25.364 226310 DEBUG nova.compute.manager [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:25 np0005539564 nova_compute[226295]: 2025-11-29 08:38:25.371 226310 DEBUG nova.objects.instance [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lazy-loading 'migration_context' on Instance uuid 39a5933a-7591-4dd5-9113-5291a3eab7df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:25 np0005539564 nova_compute[226295]: 2025-11-29 08:38:25.435 226310 DEBUG nova.virt.libvirt.driver [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:38:25 np0005539564 nova_compute[226295]: 2025-11-29 08:38:25.436 226310 DEBUG nova.virt.libvirt.driver [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Ensure instance console log exists: /var/lib/nova/instances/39a5933a-7591-4dd5-9113-5291a3eab7df/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:38:25 np0005539564 nova_compute[226295]: 2025-11-29 08:38:25.437 226310 DEBUG oslo_concurrency.lockutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:25 np0005539564 nova_compute[226295]: 2025-11-29 08:38:25.437 226310 DEBUG oslo_concurrency.lockutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:25 np0005539564 nova_compute[226295]: 2025-11-29 08:38:25.438 226310 DEBUG oslo_concurrency.lockutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:25 np0005539564 nova_compute[226295]: 2025-11-29 08:38:25.450 226310 DEBUG nova.objects.instance [None req-f8032d85-1777-4d90-9f2e-d17497d6a18b 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:38:25 np0005539564 nova_compute[226295]: 2025-11-29 08:38:25.476 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Migration for instance 0fa70e65-aaae-493a-9c8c-db89fe6658e6 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Nov 29 03:38:25 np0005539564 nova_compute[226295]: 2025-11-29 08:38:25.497 226310 INFO nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Updating resource usage from migration 95ccb129-d2dd-4ef1-89d6-364eb16926cc#033[00m
Nov 29 03:38:25 np0005539564 nova_compute[226295]: 2025-11-29 08:38:25.497 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Starting to track incoming migration 95ccb129-d2dd-4ef1-89d6-364eb16926cc with flavor b3f6a6d1-4abb-4332-8391-2e39c8fa168a _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 03:38:25 np0005539564 nova_compute[226295]: 2025-11-29 08:38:25.707 226310 WARNING nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 0fa70e65-aaae-493a-9c8c-db89fe6658e6 is not being actively managed by this compute host but has allocations referencing this compute host: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. Skipping heal of allocation because we do not know what to do.#033[00m
Nov 29 03:38:25 np0005539564 nova_compute[226295]: 2025-11-29 08:38:25.707 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 39a5933a-7591-4dd5-9113-5291a3eab7df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:38:25 np0005539564 nova_compute[226295]: 2025-11-29 08:38:25.708 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:38:25 np0005539564 nova_compute[226295]: 2025-11-29 08:38:25.708 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:38:25 np0005539564 nova_compute[226295]: 2025-11-29 08:38:25.768 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:26.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:38:26 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3097264608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:38:26 np0005539564 nova_compute[226295]: 2025-11-29 08:38:26.284 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:26 np0005539564 nova_compute[226295]: 2025-11-29 08:38:26.294 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:38:26 np0005539564 nova_compute[226295]: 2025-11-29 08:38:26.393 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:38:26 np0005539564 nova_compute[226295]: 2025-11-29 08:38:26.431 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:38:26 np0005539564 nova_compute[226295]: 2025-11-29 08:38:26.432 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:26 np0005539564 nova_compute[226295]: 2025-11-29 08:38:26.452 226310 DEBUG nova.network.neutron [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Successfully created port: 1547f1e7-48a2-41c4-9536-2f17f8b068aa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:38:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:26.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:27 np0005539564 nova_compute[226295]: 2025-11-29 08:38:27.472 226310 DEBUG nova.network.neutron [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Successfully updated port: 1547f1e7-48a2-41c4-9536-2f17f8b068aa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:38:27 np0005539564 nova_compute[226295]: 2025-11-29 08:38:27.488 226310 DEBUG oslo_concurrency.lockutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "refresh_cache-39a5933a-7591-4dd5-9113-5291a3eab7df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:38:27 np0005539564 nova_compute[226295]: 2025-11-29 08:38:27.488 226310 DEBUG oslo_concurrency.lockutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquired lock "refresh_cache-39a5933a-7591-4dd5-9113-5291a3eab7df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:38:27 np0005539564 nova_compute[226295]: 2025-11-29 08:38:27.488 226310 DEBUG nova.network.neutron [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:38:27 np0005539564 nova_compute[226295]: 2025-11-29 08:38:27.542 226310 DEBUG nova.compute.manager [req-03aff678-1ea6-4bec-b429-38ffcd4cdd23 req-2f42f777-c3a0-4997-8c24-7ace31d814cd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Received event network-changed-1547f1e7-48a2-41c4-9536-2f17f8b068aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:27 np0005539564 nova_compute[226295]: 2025-11-29 08:38:27.542 226310 DEBUG nova.compute.manager [req-03aff678-1ea6-4bec-b429-38ffcd4cdd23 req-2f42f777-c3a0-4997-8c24-7ace31d814cd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Refreshing instance network info cache due to event network-changed-1547f1e7-48a2-41c4-9536-2f17f8b068aa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:38:27 np0005539564 nova_compute[226295]: 2025-11-29 08:38:27.543 226310 DEBUG oslo_concurrency.lockutils [req-03aff678-1ea6-4bec-b429-38ffcd4cdd23 req-2f42f777-c3a0-4997-8c24-7ace31d814cd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-39a5933a-7591-4dd5-9113-5291a3eab7df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:38:27 np0005539564 nova_compute[226295]: 2025-11-29 08:38:27.625 226310 DEBUG nova.network.neutron [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:38:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:38:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:28.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:38:28 np0005539564 nova_compute[226295]: 2025-11-29 08:38:28.604 226310 DEBUG nova.network.neutron [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Updating instance_info_cache with network_info: [{"id": "1547f1e7-48a2-41c4-9536-2f17f8b068aa", "address": "fa:16:3e:fc:96:d5", "network": {"id": "4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab", "bridge": "br-int", "label": "tempest-network-smoke--342297603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1547f1e7-48", "ovs_interfaceid": "1547f1e7-48a2-41c4-9536-2f17f8b068aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:38:28 np0005539564 nova_compute[226295]: 2025-11-29 08:38:28.628 226310 DEBUG oslo_concurrency.lockutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Releasing lock "refresh_cache-39a5933a-7591-4dd5-9113-5291a3eab7df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:38:28 np0005539564 nova_compute[226295]: 2025-11-29 08:38:28.628 226310 DEBUG nova.compute.manager [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Instance network_info: |[{"id": "1547f1e7-48a2-41c4-9536-2f17f8b068aa", "address": "fa:16:3e:fc:96:d5", "network": {"id": "4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab", "bridge": "br-int", "label": "tempest-network-smoke--342297603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1547f1e7-48", "ovs_interfaceid": "1547f1e7-48a2-41c4-9536-2f17f8b068aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:38:28 np0005539564 nova_compute[226295]: 2025-11-29 08:38:28.629 226310 DEBUG oslo_concurrency.lockutils [req-03aff678-1ea6-4bec-b429-38ffcd4cdd23 req-2f42f777-c3a0-4997-8c24-7ace31d814cd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-39a5933a-7591-4dd5-9113-5291a3eab7df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:38:28 np0005539564 nova_compute[226295]: 2025-11-29 08:38:28.629 226310 DEBUG nova.network.neutron [req-03aff678-1ea6-4bec-b429-38ffcd4cdd23 req-2f42f777-c3a0-4997-8c24-7ace31d814cd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Refreshing network info cache for port 1547f1e7-48a2-41c4-9536-2f17f8b068aa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:38:28 np0005539564 nova_compute[226295]: 2025-11-29 08:38:28.633 226310 DEBUG nova.virt.libvirt.driver [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Start _get_guest_xml network_info=[{"id": "1547f1e7-48a2-41c4-9536-2f17f8b068aa", "address": "fa:16:3e:fc:96:d5", "network": {"id": "4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab", "bridge": "br-int", "label": "tempest-network-smoke--342297603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1547f1e7-48", "ovs_interfaceid": "1547f1e7-48a2-41c4-9536-2f17f8b068aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:38:28 np0005539564 nova_compute[226295]: 2025-11-29 08:38:28.638 226310 WARNING nova.virt.libvirt.driver [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:38:28 np0005539564 nova_compute[226295]: 2025-11-29 08:38:28.644 226310 DEBUG nova.virt.libvirt.host [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:38:28 np0005539564 nova_compute[226295]: 2025-11-29 08:38:28.645 226310 DEBUG nova.virt.libvirt.host [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:38:28 np0005539564 nova_compute[226295]: 2025-11-29 08:38:28.657 226310 DEBUG nova.virt.libvirt.host [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:38:28 np0005539564 nova_compute[226295]: 2025-11-29 08:38:28.658 226310 DEBUG nova.virt.libvirt.host [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:38:28 np0005539564 nova_compute[226295]: 2025-11-29 08:38:28.661 226310 DEBUG nova.virt.libvirt.driver [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:38:28 np0005539564 nova_compute[226295]: 2025-11-29 08:38:28.661 226310 DEBUG nova.virt.hardware [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:38:28 np0005539564 nova_compute[226295]: 2025-11-29 08:38:28.662 226310 DEBUG nova.virt.hardware [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:38:28 np0005539564 nova_compute[226295]: 2025-11-29 08:38:28.662 226310 DEBUG nova.virt.hardware [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:38:28 np0005539564 nova_compute[226295]: 2025-11-29 08:38:28.663 226310 DEBUG nova.virt.hardware [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:38:28 np0005539564 nova_compute[226295]: 2025-11-29 08:38:28.663 226310 DEBUG nova.virt.hardware [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:38:28 np0005539564 nova_compute[226295]: 2025-11-29 08:38:28.664 226310 DEBUG nova.virt.hardware [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:38:28 np0005539564 nova_compute[226295]: 2025-11-29 08:38:28.664 226310 DEBUG nova.virt.hardware [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:38:28 np0005539564 nova_compute[226295]: 2025-11-29 08:38:28.665 226310 DEBUG nova.virt.hardware [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:38:28 np0005539564 nova_compute[226295]: 2025-11-29 08:38:28.665 226310 DEBUG nova.virt.hardware [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:38:28 np0005539564 nova_compute[226295]: 2025-11-29 08:38:28.666 226310 DEBUG nova.virt.hardware [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:38:28 np0005539564 nova_compute[226295]: 2025-11-29 08:38:28.666 226310 DEBUG nova.virt.hardware [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:38:28 np0005539564 nova_compute[226295]: 2025-11-29 08:38:28.673 226310 DEBUG oslo_concurrency.processutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:38:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:28.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:29 np0005539564 nova_compute[226295]: 2025-11-29 08:38:29.001 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:38:29 np0005539564 nova_compute[226295]: 2025-11-29 08:38:29.024 226310 INFO nova.compute.manager [None req-2242cd22-bf98-4f95-bc7f-8a37b701c4f7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Get console output
Nov 29 03:38:29 np0005539564 nova_compute[226295]: 2025-11-29 08:38:29.033 270504 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 29 03:38:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:38:29 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3756476679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:38:29 np0005539564 nova_compute[226295]: 2025-11-29 08:38:29.179 226310 DEBUG oslo_concurrency.processutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:38:29 np0005539564 nova_compute[226295]: 2025-11-29 08:38:29.217 226310 DEBUG nova.storage.rbd_utils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 39a5933a-7591-4dd5-9113-5291a3eab7df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:38:29 np0005539564 nova_compute[226295]: 2025-11-29 08:38:29.224 226310 DEBUG oslo_concurrency.processutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:38:29 np0005539564 nova_compute[226295]: 2025-11-29 08:38:29.288 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:38:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:30.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:38:30 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3583594181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.585 226310 DEBUG oslo_concurrency.processutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.586 226310 DEBUG nova.virt.libvirt.vif [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:38:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-2043804821',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-2043804821',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1086021155-ac',id=180,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPxqsP30oMrIL19dxV/ftYBu1JGXJbuz4zZMrSZ4IjdTrg5Jb0FRbiWvcE6tyNZZeY3AXThQKAmSrp4ocWrP93aiYeHzuh43w5nwuixKOkiv+IESVY1e8+VwoPFfftd2bA==',key_name='tempest-TestSecurityGroupsBasicOps-836616474',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='03858b11000d4b57bd3659c3083eed47',ramdisk_id='',reservation_id='r-8s7vtk22',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1086021155',owner_user_name='tempest-TestSecurityGroupsBasicOps-1086021155-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:38:24Z,user_data=None,user_id='a45da8ed818144f8bd6e00d233fcb5d2',uuid=39a5933a-7591-4dd5-9113-5291a3eab7df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1547f1e7-48a2-41c4-9536-2f17f8b068aa", "address": "fa:16:3e:fc:96:d5", "network": {"id": "4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab", "bridge": "br-int", "label": "tempest-network-smoke--342297603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1547f1e7-48", "ovs_interfaceid": "1547f1e7-48a2-41c4-9536-2f17f8b068aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.587 226310 DEBUG nova.network.os_vif_util [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converting VIF {"id": "1547f1e7-48a2-41c4-9536-2f17f8b068aa", "address": "fa:16:3e:fc:96:d5", "network": {"id": "4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab", "bridge": "br-int", "label": "tempest-network-smoke--342297603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1547f1e7-48", "ovs_interfaceid": "1547f1e7-48a2-41c4-9536-2f17f8b068aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.588 226310 DEBUG nova.network.os_vif_util [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:96:d5,bridge_name='br-int',has_traffic_filtering=True,id=1547f1e7-48a2-41c4-9536-2f17f8b068aa,network=Network(4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1547f1e7-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.589 226310 DEBUG nova.objects.instance [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lazy-loading 'pci_devices' on Instance uuid 39a5933a-7591-4dd5-9113-5291a3eab7df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.603 226310 DEBUG nova.virt.libvirt.driver [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:38:30 np0005539564 nova_compute[226295]:  <uuid>39a5933a-7591-4dd5-9113-5291a3eab7df</uuid>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:  <name>instance-000000b4</name>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-2043804821</nova:name>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:38:28</nova:creationTime>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:38:30 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:        <nova:user uuid="a45da8ed818144f8bd6e00d233fcb5d2">tempest-TestSecurityGroupsBasicOps-1086021155-project-member</nova:user>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:        <nova:project uuid="03858b11000d4b57bd3659c3083eed47">tempest-TestSecurityGroupsBasicOps-1086021155</nova:project>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:        <nova:port uuid="1547f1e7-48a2-41c4-9536-2f17f8b068aa">
Nov 29 03:38:30 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <entry name="serial">39a5933a-7591-4dd5-9113-5291a3eab7df</entry>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <entry name="uuid">39a5933a-7591-4dd5-9113-5291a3eab7df</entry>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/39a5933a-7591-4dd5-9113-5291a3eab7df_disk">
Nov 29 03:38:30 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:38:30 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/39a5933a-7591-4dd5-9113-5291a3eab7df_disk.config">
Nov 29 03:38:30 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:38:30 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:fc:96:d5"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <target dev="tap1547f1e7-48"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/39a5933a-7591-4dd5-9113-5291a3eab7df/console.log" append="off"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:38:30 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:38:30 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:38:30 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:38:30 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.605 226310 DEBUG nova.compute.manager [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Preparing to wait for external event network-vif-plugged-1547f1e7-48a2-41c4-9536-2f17f8b068aa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.605 226310 DEBUG oslo_concurrency.lockutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "39a5933a-7591-4dd5-9113-5291a3eab7df-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.605 226310 DEBUG oslo_concurrency.lockutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "39a5933a-7591-4dd5-9113-5291a3eab7df-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.605 226310 DEBUG oslo_concurrency.lockutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "39a5933a-7591-4dd5-9113-5291a3eab7df-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.606 226310 DEBUG nova.virt.libvirt.vif [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:38:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-2043804821',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-2043804821',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1086021155-ac',id=180,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPxqsP30oMrIL19dxV/ftYBu1JGXJbuz4zZMrSZ4IjdTrg5Jb0FRbiWvcE6tyNZZeY3AXThQKAmSrp4ocWrP93aiYeHzuh43w5nwuixKOkiv+IESVY1e8+VwoPFfftd2bA==',key_name='tempest-TestSecurityGroupsBasicOps-836616474',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='03858b11000d4b57bd3659c3083eed47',ramdisk_id='',reservation_id='r-8s7vtk22',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1086021155',owner_user_name='tempest-TestSecurityGroupsBasicOps-1086021155-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:38:24Z,user_data=None,user_id='a45da8ed818144f8bd6e00d233fcb5d2',uuid=39a5933a-7591-4dd5-9113-5291a3eab7df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1547f1e7-48a2-41c4-9536-2f17f8b068aa", "address": "fa:16:3e:fc:96:d5", "network": {"id": "4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab", "bridge": "br-int", "label": "tempest-network-smoke--342297603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1547f1e7-48", "ovs_interfaceid": "1547f1e7-48a2-41c4-9536-2f17f8b068aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.606 226310 DEBUG nova.network.os_vif_util [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converting VIF {"id": "1547f1e7-48a2-41c4-9536-2f17f8b068aa", "address": "fa:16:3e:fc:96:d5", "network": {"id": "4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab", "bridge": "br-int", "label": "tempest-network-smoke--342297603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1547f1e7-48", "ovs_interfaceid": "1547f1e7-48a2-41c4-9536-2f17f8b068aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.607 226310 DEBUG nova.network.os_vif_util [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:96:d5,bridge_name='br-int',has_traffic_filtering=True,id=1547f1e7-48a2-41c4-9536-2f17f8b068aa,network=Network(4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1547f1e7-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.607 226310 DEBUG os_vif [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:96:d5,bridge_name='br-int',has_traffic_filtering=True,id=1547f1e7-48a2-41c4-9536-2f17f8b068aa,network=Network(4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1547f1e7-48') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.608 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.608 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.608 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.612 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.612 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1547f1e7-48, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.613 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1547f1e7-48, col_values=(('external_ids', {'iface-id': '1547f1e7-48a2-41c4-9536-2f17f8b068aa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:96:d5', 'vm-uuid': '39a5933a-7591-4dd5-9113-5291a3eab7df'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.615 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:30 np0005539564 NetworkManager[48997]: <info>  [1764405510.6171] manager: (tap1547f1e7-48): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/332)
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.618 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.626 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.629 226310 INFO os_vif [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:96:d5,bridge_name='br-int',has_traffic_filtering=True,id=1547f1e7-48a2-41c4-9536-2f17f8b068aa,network=Network(4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1547f1e7-48')#033[00m
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.695 226310 DEBUG nova.virt.libvirt.driver [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.695 226310 DEBUG nova.virt.libvirt.driver [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.696 226310 DEBUG nova.virt.libvirt.driver [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] No VIF found with MAC fa:16:3e:fc:96:d5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.696 226310 INFO nova.virt.libvirt.driver [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Using config drive#033[00m
Nov 29 03:38:30 np0005539564 nova_compute[226295]: 2025-11-29 08:38:30.733 226310 DEBUG nova.storage.rbd_utils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 39a5933a-7591-4dd5-9113-5291a3eab7df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:38:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:38:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:30.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.077 226310 DEBUG nova.network.neutron [req-03aff678-1ea6-4bec-b429-38ffcd4cdd23 req-2f42f777-c3a0-4997-8c24-7ace31d814cd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Updated VIF entry in instance network info cache for port 1547f1e7-48a2-41c4-9536-2f17f8b068aa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.078 226310 DEBUG nova.network.neutron [req-03aff678-1ea6-4bec-b429-38ffcd4cdd23 req-2f42f777-c3a0-4997-8c24-7ace31d814cd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Updating instance_info_cache with network_info: [{"id": "1547f1e7-48a2-41c4-9536-2f17f8b068aa", "address": "fa:16:3e:fc:96:d5", "network": {"id": "4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab", "bridge": "br-int", "label": "tempest-network-smoke--342297603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1547f1e7-48", "ovs_interfaceid": "1547f1e7-48a2-41c4-9536-2f17f8b068aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.139 226310 DEBUG nova.compute.manager [req-d2cb2d1b-4960-47ce-a596-8d33ecad91a8 req-49516620-c9d0-4234-9093-00c7103e2dca 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Received event network-changed-ebf4feb2-0247-40b6-a431-2f55b2f4c237 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.139 226310 DEBUG nova.compute.manager [req-d2cb2d1b-4960-47ce-a596-8d33ecad91a8 req-49516620-c9d0-4234-9093-00c7103e2dca 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Refreshing instance network info cache due to event network-changed-ebf4feb2-0247-40b6-a431-2f55b2f4c237. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.139 226310 DEBUG oslo_concurrency.lockutils [req-d2cb2d1b-4960-47ce-a596-8d33ecad91a8 req-49516620-c9d0-4234-9093-00c7103e2dca 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-0fa70e65-aaae-493a-9c8c-db89fe6658e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.140 226310 DEBUG oslo_concurrency.lockutils [req-d2cb2d1b-4960-47ce-a596-8d33ecad91a8 req-49516620-c9d0-4234-9093-00c7103e2dca 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-0fa70e65-aaae-493a-9c8c-db89fe6658e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.140 226310 DEBUG nova.network.neutron [req-d2cb2d1b-4960-47ce-a596-8d33ecad91a8 req-49516620-c9d0-4234-9093-00c7103e2dca 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Refreshing network info cache for port ebf4feb2-0247-40b6-a431-2f55b2f4c237 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.153 226310 DEBUG oslo_concurrency.lockutils [req-03aff678-1ea6-4bec-b429-38ffcd4cdd23 req-2f42f777-c3a0-4997-8c24-7ace31d814cd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-39a5933a-7591-4dd5-9113-5291a3eab7df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.193 226310 DEBUG oslo_concurrency.lockutils [None req-27b9fd41-5d6c-431e-8222-a06d62972012 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.194 226310 DEBUG oslo_concurrency.lockutils [None req-27b9fd41-5d6c-431e-8222-a06d62972012 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.194 226310 DEBUG oslo_concurrency.lockutils [None req-27b9fd41-5d6c-431e-8222-a06d62972012 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.195 226310 DEBUG oslo_concurrency.lockutils [None req-27b9fd41-5d6c-431e-8222-a06d62972012 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.195 226310 DEBUG oslo_concurrency.lockutils [None req-27b9fd41-5d6c-431e-8222-a06d62972012 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.197 226310 INFO nova.compute.manager [None req-27b9fd41-5d6c-431e-8222-a06d62972012 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Terminating instance#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.199 226310 DEBUG nova.compute.manager [None req-27b9fd41-5d6c-431e-8222-a06d62972012 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.204 226310 INFO nova.virt.libvirt.driver [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Creating config drive at /var/lib/nova/instances/39a5933a-7591-4dd5-9113-5291a3eab7df/disk.config#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.211 226310 DEBUG oslo_concurrency.processutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/39a5933a-7591-4dd5-9113-5291a3eab7df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp092x_6jy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:31 np0005539564 kernel: tapebf4feb2-02 (unregistering): left promiscuous mode
Nov 29 03:38:31 np0005539564 NetworkManager[48997]: <info>  [1764405511.3402] device (tapebf4feb2-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.376 226310 DEBUG oslo_concurrency.processutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/39a5933a-7591-4dd5-9113-5291a3eab7df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp092x_6jy" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:31 np0005539564 ovn_controller[130591]: 2025-11-29T08:38:31Z|00722|binding|INFO|Releasing lport ebf4feb2-0247-40b6-a431-2f55b2f4c237 from this chassis (sb_readonly=0)
Nov 29 03:38:31 np0005539564 ovn_controller[130591]: 2025-11-29T08:38:31Z|00723|binding|INFO|Setting lport ebf4feb2-0247-40b6-a431-2f55b2f4c237 down in Southbound
Nov 29 03:38:31 np0005539564 ovn_controller[130591]: 2025-11-29T08:38:31Z|00724|binding|INFO|Removing iface tapebf4feb2-02 ovn-installed in OVS
Nov 29 03:38:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:31.398 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:9d:09 10.100.0.14'], port_security=['fa:16:3e:7a:9d:09 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0fa70e65-aaae-493a-9c8c-db89fe6658e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c01a89c-f496-44c3-afa3-4720950528b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'c61e493b-5131-4681-b607-cad8a707cfcb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a4ed0086-1dab-4f89-9d5b-dbd6a6a8243e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=ebf4feb2-0247-40b6-a431-2f55b2f4c237) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:38:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:31.400 139780 INFO neutron.agent.ovn.metadata.agent [-] Port ebf4feb2-0247-40b6-a431-2f55b2f4c237 in datapath 3c01a89c-f496-44c3-afa3-4720950528b6 unbound from our chassis#033[00m
Nov 29 03:38:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:31.402 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3c01a89c-f496-44c3-afa3-4720950528b6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:38:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:31.403 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6c64a882-1164-4522-b607-37c9aa2fc6af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:31 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:31.404 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6 namespace which is not needed anymore#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.424 226310 DEBUG nova.storage.rbd_utils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 39a5933a-7591-4dd5-9113-5291a3eab7df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:38:31 np0005539564 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000b3.scope: Deactivated successfully.
Nov 29 03:38:31 np0005539564 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000b3.scope: Consumed 2.049s CPU time.
Nov 29 03:38:31 np0005539564 systemd-machined[190128]: Machine qemu-85-instance-000000b3 terminated.
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.437 226310 DEBUG oslo_concurrency.processutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/39a5933a-7591-4dd5-9113-5291a3eab7df/disk.config 39a5933a-7591-4dd5-9113-5291a3eab7df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.480 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.505 226310 INFO nova.virt.libvirt.driver [-] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Instance destroyed successfully.#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.506 226310 DEBUG nova.objects.instance [None req-27b9fd41-5d6c-431e-8222-a06d62972012 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'resources' on Instance uuid 0fa70e65-aaae-493a-9c8c-db89fe6658e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.535 226310 DEBUG nova.virt.libvirt.vif [None req-27b9fd41-5d6c-431e-8222-a06d62972012 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:37:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-523561219',display_name='tempest-TestNetworkAdvancedServerOps-server-523561219',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-523561219',id=179,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCaqYXq9qkUFXGf1qpvHhcz47gUzuZhq+Jcnq6cylAKf+87//oLJuPUr6JJfsrawgC9NlTQR6RzaDMo9jIsaNOtIuAwNCS169ddXogsTd4Ncy9Th61lYBRoaZpDFVcDEOA==',key_name='tempest-TestNetworkAdvancedServerOps-66857871',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:37:42Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-uamaf3kr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:38:25Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=0fa70e65-aaae-493a-9c8c-db89fe6658e6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "address": "fa:16:3e:7a:9d:09", "network": {"id": "3c01a89c-f496-44c3-afa3-4720950528b6", "bridge": "br-int", "label": "tempest-network-smoke--466586832", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebf4feb2-02", "ovs_interfaceid": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.536 226310 DEBUG nova.network.os_vif_util [None req-27b9fd41-5d6c-431e-8222-a06d62972012 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "address": "fa:16:3e:7a:9d:09", "network": {"id": "3c01a89c-f496-44c3-afa3-4720950528b6", "bridge": "br-int", "label": "tempest-network-smoke--466586832", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebf4feb2-02", "ovs_interfaceid": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.537 226310 DEBUG nova.network.os_vif_util [None req-27b9fd41-5d6c-431e-8222-a06d62972012 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:9d:09,bridge_name='br-int',has_traffic_filtering=True,id=ebf4feb2-0247-40b6-a431-2f55b2f4c237,network=Network(3c01a89c-f496-44c3-afa3-4720950528b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebf4feb2-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.538 226310 DEBUG os_vif [None req-27b9fd41-5d6c-431e-8222-a06d62972012 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:9d:09,bridge_name='br-int',has_traffic_filtering=True,id=ebf4feb2-0247-40b6-a431-2f55b2f4c237,network=Network(3c01a89c-f496-44c3-afa3-4720950528b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebf4feb2-02') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.540 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.541 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebf4feb2-02, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.547 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.551 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.557 226310 INFO os_vif [None req-27b9fd41-5d6c-431e-8222-a06d62972012 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:9d:09,bridge_name='br-int',has_traffic_filtering=True,id=ebf4feb2-0247-40b6-a431-2f55b2f4c237,network=Network(3c01a89c-f496-44c3-afa3-4720950528b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebf4feb2-02')#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.593 226310 DEBUG nova.compute.manager [req-256affbe-7b46-45c8-9ad5-f002f431a9a7 req-99830fab-6ec1-46ec-ae42-4dcfef223c69 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Received event network-vif-unplugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.593 226310 DEBUG oslo_concurrency.lockutils [req-256affbe-7b46-45c8-9ad5-f002f431a9a7 req-99830fab-6ec1-46ec-ae42-4dcfef223c69 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.593 226310 DEBUG oslo_concurrency.lockutils [req-256affbe-7b46-45c8-9ad5-f002f431a9a7 req-99830fab-6ec1-46ec-ae42-4dcfef223c69 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.594 226310 DEBUG oslo_concurrency.lockutils [req-256affbe-7b46-45c8-9ad5-f002f431a9a7 req-99830fab-6ec1-46ec-ae42-4dcfef223c69 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.594 226310 DEBUG nova.compute.manager [req-256affbe-7b46-45c8-9ad5-f002f431a9a7 req-99830fab-6ec1-46ec-ae42-4dcfef223c69 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] No waiting events found dispatching network-vif-unplugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.594 226310 DEBUG nova.compute.manager [req-256affbe-7b46-45c8-9ad5-f002f431a9a7 req-99830fab-6ec1-46ec-ae42-4dcfef223c69 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Received event network-vif-unplugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:38:31 np0005539564 neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6[294284]: [NOTICE]   (294288) : haproxy version is 2.8.14-c23fe91
Nov 29 03:38:31 np0005539564 neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6[294284]: [NOTICE]   (294288) : path to executable is /usr/sbin/haproxy
Nov 29 03:38:31 np0005539564 neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6[294284]: [ALERT]    (294288) : Current worker (294290) exited with code 143 (Terminated)
Nov 29 03:38:31 np0005539564 neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6[294284]: [WARNING]  (294288) : All workers exited. Exiting... (0)
Nov 29 03:38:31 np0005539564 systemd[1]: libpod-b456898bcad36e82781238e28e193de2660a11a8d95d34e13a7ba8fa203fd99b.scope: Deactivated successfully.
Nov 29 03:38:31 np0005539564 podman[294680]: 2025-11-29 08:38:31.624476217 +0000 UTC m=+0.073764908 container died b456898bcad36e82781238e28e193de2660a11a8d95d34e13a7ba8fa203fd99b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:38:31 np0005539564 systemd[1]: var-lib-containers-storage-overlay-fadb1adab0f35e98a661fc4eb597037a3c60ec11075240f0eb7a1223c0201540-merged.mount: Deactivated successfully.
Nov 29 03:38:31 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b456898bcad36e82781238e28e193de2660a11a8d95d34e13a7ba8fa203fd99b-userdata-shm.mount: Deactivated successfully.
Nov 29 03:38:31 np0005539564 podman[294680]: 2025-11-29 08:38:31.905638972 +0000 UTC m=+0.354927663 container cleanup b456898bcad36e82781238e28e193de2660a11a8d95d34e13a7ba8fa203fd99b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:38:31 np0005539564 systemd[1]: libpod-conmon-b456898bcad36e82781238e28e193de2660a11a8d95d34e13a7ba8fa203fd99b.scope: Deactivated successfully.
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.975 226310 DEBUG oslo_concurrency.processutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/39a5933a-7591-4dd5-9113-5291a3eab7df/disk.config 39a5933a-7591-4dd5-9113-5291a3eab7df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:31 np0005539564 nova_compute[226295]: 2025-11-29 08:38:31.976 226310 INFO nova.virt.libvirt.driver [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Deleting local config drive /var/lib/nova/instances/39a5933a-7591-4dd5-9113-5291a3eab7df/disk.config because it was imported into RBD.#033[00m
Nov 29 03:38:32 np0005539564 kernel: tap1547f1e7-48: entered promiscuous mode
Nov 29 03:38:32 np0005539564 ovn_controller[130591]: 2025-11-29T08:38:32Z|00725|binding|INFO|Claiming lport 1547f1e7-48a2-41c4-9536-2f17f8b068aa for this chassis.
Nov 29 03:38:32 np0005539564 ovn_controller[130591]: 2025-11-29T08:38:32Z|00726|binding|INFO|1547f1e7-48a2-41c4-9536-2f17f8b068aa: Claiming fa:16:3e:fc:96:d5 10.100.0.9
Nov 29 03:38:32 np0005539564 NetworkManager[48997]: <info>  [1764405512.0434] manager: (tap1547f1e7-48): new Tun device (/org/freedesktop/NetworkManager/Devices/333)
Nov 29 03:38:32 np0005539564 systemd-udevd[294622]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:38:32 np0005539564 nova_compute[226295]: 2025-11-29 08:38:32.044 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.052 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:96:d5 10.100.0.9'], port_security=['fa:16:3e:fc:96:d5 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '39a5933a-7591-4dd5-9113-5291a3eab7df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '03858b11000d4b57bd3659c3083eed47', 'neutron:revision_number': '2', 'neutron:security_group_ids': '23aeff46-2491-4fb0-b831-fbf29f8b9c55 9bcfb367-a867-45a3-bf8c-2cc8b153db20', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6d532b12-290e-4d71-bc8b-a61adcdcbe20, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=1547f1e7-48a2-41c4-9536-2f17f8b068aa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:38:32 np0005539564 NetworkManager[48997]: <info>  [1764405512.0561] device (tap1547f1e7-48): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:38:32 np0005539564 NetworkManager[48997]: <info>  [1764405512.0568] device (tap1547f1e7-48): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:38:32 np0005539564 ovn_controller[130591]: 2025-11-29T08:38:32Z|00727|binding|INFO|Setting lport 1547f1e7-48a2-41c4-9536-2f17f8b068aa ovn-installed in OVS
Nov 29 03:38:32 np0005539564 ovn_controller[130591]: 2025-11-29T08:38:32Z|00728|binding|INFO|Setting lport 1547f1e7-48a2-41c4-9536-2f17f8b068aa up in Southbound
Nov 29 03:38:32 np0005539564 nova_compute[226295]: 2025-11-29 08:38:32.064 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:32 np0005539564 nova_compute[226295]: 2025-11-29 08:38:32.066 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:32.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:32 np0005539564 systemd-machined[190128]: New machine qemu-86-instance-000000b4.
Nov 29 03:38:32 np0005539564 systemd[1]: Started Virtual Machine qemu-86-instance-000000b4.
Nov 29 03:38:32 np0005539564 podman[294740]: 2025-11-29 08:38:32.164751642 +0000 UTC m=+0.223353613 container remove b456898bcad36e82781238e28e193de2660a11a8d95d34e13a7ba8fa203fd99b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.172 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[050a5164-b617-4773-9cb2-61518755ae87]: (4, ('Sat Nov 29 08:38:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6 (b456898bcad36e82781238e28e193de2660a11a8d95d34e13a7ba8fa203fd99b)\nb456898bcad36e82781238e28e193de2660a11a8d95d34e13a7ba8fa203fd99b\nSat Nov 29 08:38:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6 (b456898bcad36e82781238e28e193de2660a11a8d95d34e13a7ba8fa203fd99b)\nb456898bcad36e82781238e28e193de2660a11a8d95d34e13a7ba8fa203fd99b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.174 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7af7b797-a3d3-406e-8f0b-52e96fb572dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.176 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c01a89c-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:32 np0005539564 nova_compute[226295]: 2025-11-29 08:38:32.178 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:32 np0005539564 kernel: tap3c01a89c-f0: left promiscuous mode
Nov 29 03:38:32 np0005539564 nova_compute[226295]: 2025-11-29 08:38:32.195 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.198 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f35ff8da-483c-4d0c-9a95-ea54d39d0475]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.216 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5bcf55a2-d816-4e35-b8cc-e84972c99d80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.218 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c80a49ea-ce77-4c85-a2e6-379a456cbc4d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:32 np0005539564 nova_compute[226295]: 2025-11-29 08:38:32.229 226310 DEBUG nova.network.neutron [req-d2cb2d1b-4960-47ce-a596-8d33ecad91a8 req-49516620-c9d0-4234-9093-00c7103e2dca 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Updated VIF entry in instance network info cache for port ebf4feb2-0247-40b6-a431-2f55b2f4c237. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:38:32 np0005539564 nova_compute[226295]: 2025-11-29 08:38:32.229 226310 DEBUG nova.network.neutron [req-d2cb2d1b-4960-47ce-a596-8d33ecad91a8 req-49516620-c9d0-4234-9093-00c7103e2dca 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Updating instance_info_cache with network_info: [{"id": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "address": "fa:16:3e:7a:9d:09", "network": {"id": "3c01a89c-f496-44c3-afa3-4720950528b6", "bridge": "br-int", "label": "tempest-network-smoke--466586832", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebf4feb2-02", "ovs_interfaceid": "ebf4feb2-0247-40b6-a431-2f55b2f4c237", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.235 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f7da3a31-0086-47b1-92f4-b3cc3d5990a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 847396, 'reachable_time': 24928, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294774, 'error': None, 'target': 'ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:32 np0005539564 systemd[1]: run-netns-ovnmeta\x2d3c01a89c\x2df496\x2d44c3\x2dafa3\x2d4720950528b6.mount: Deactivated successfully.
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.239 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3c01a89c-f496-44c3-afa3-4720950528b6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.239 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[5428d095-d82c-4e30-ae6c-00c353172ead]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.244 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 1547f1e7-48a2-41c4-9536-2f17f8b068aa in datapath 4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab unbound from our chassis#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.245 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.258 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ac0a766c-aaf1-49dc-9fa6-b041058d097e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.259 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d70dafc-01 in ovnmeta-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:38:32 np0005539564 nova_compute[226295]: 2025-11-29 08:38:32.264 226310 DEBUG oslo_concurrency.lockutils [req-d2cb2d1b-4960-47ce-a596-8d33ecad91a8 req-49516620-c9d0-4234-9093-00c7103e2dca 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-0fa70e65-aaae-493a-9c8c-db89fe6658e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.264 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d70dafc-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.265 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[514ec738-c18b-48fe-9932-c5cdebf0620c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.266 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b253ec97-ec51-4b9e-a84b-0a7e9c403ef6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.279 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[d3381b9e-9b4d-451d-a85b-84110dbe9e9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.295 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ec2bcdd8-68f2-47b5-baa3-779a18af97c2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.323 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[078d5665-d331-470c-ad95-00d40e63454a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.332 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[61478463-2c01-40ef-9f45-249106c81179]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:32 np0005539564 NetworkManager[48997]: <info>  [1764405512.3348] manager: (tap4d70dafc-00): new Veth device (/org/freedesktop/NetworkManager/Devices/334)
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.374 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[1f71e2d7-3265-441f-b978-3e8ee570c51e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.378 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[420af444-eb33-404f-aca3-d137b96957d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:32 np0005539564 NetworkManager[48997]: <info>  [1764405512.4132] device (tap4d70dafc-00): carrier: link connected
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.420 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[5334c13a-28ec-4826-b078-4e4a153ed236]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.440 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8a3b57c1-7861-4c44-b0f8-93f1955aeaee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d70dafc-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:e2:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 220], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 848709, 'reachable_time': 25093, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294819, 'error': None, 'target': 'ovnmeta-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.460 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[841c3e7f-8aba-4c06-a906-cb4d3fbfcbb7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe76:e279'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 848709, 'tstamp': 848709}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294820, 'error': None, 'target': 'ovnmeta-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.479 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[736a791c-d658-4a9e-8efd-0f557fa8c0ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d70dafc-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:e2:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 220], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 848709, 'reachable_time': 25093, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294821, 'error': None, 'target': 'ovnmeta-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:32 np0005539564 nova_compute[226295]: 2025-11-29 08:38:32.509 226310 DEBUG nova.compute.manager [req-4bea3438-31a1-4567-a55a-e52a1f558149 req-81b6b3b8-b9f1-4c09-ab5b-290bf2a2ad66 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Received event network-vif-plugged-1547f1e7-48a2-41c4-9536-2f17f8b068aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:32 np0005539564 nova_compute[226295]: 2025-11-29 08:38:32.509 226310 DEBUG oslo_concurrency.lockutils [req-4bea3438-31a1-4567-a55a-e52a1f558149 req-81b6b3b8-b9f1-4c09-ab5b-290bf2a2ad66 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "39a5933a-7591-4dd5-9113-5291a3eab7df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:32 np0005539564 nova_compute[226295]: 2025-11-29 08:38:32.510 226310 DEBUG oslo_concurrency.lockutils [req-4bea3438-31a1-4567-a55a-e52a1f558149 req-81b6b3b8-b9f1-4c09-ab5b-290bf2a2ad66 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "39a5933a-7591-4dd5-9113-5291a3eab7df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:32 np0005539564 nova_compute[226295]: 2025-11-29 08:38:32.510 226310 DEBUG oslo_concurrency.lockutils [req-4bea3438-31a1-4567-a55a-e52a1f558149 req-81b6b3b8-b9f1-4c09-ab5b-290bf2a2ad66 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "39a5933a-7591-4dd5-9113-5291a3eab7df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:32 np0005539564 nova_compute[226295]: 2025-11-29 08:38:32.510 226310 DEBUG nova.compute.manager [req-4bea3438-31a1-4567-a55a-e52a1f558149 req-81b6b3b8-b9f1-4c09-ab5b-290bf2a2ad66 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Processing event network-vif-plugged-1547f1e7-48a2-41c4-9536-2f17f8b068aa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.522 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8e3ae152-b281-4d6a-a342-2f1532dbabdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.604 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[cb6c5f60-31f1-4d39-bab5-a0a08bcec0c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.606 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d70dafc-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.607 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.608 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d70dafc-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:32 np0005539564 kernel: tap4d70dafc-00: entered promiscuous mode
Nov 29 03:38:32 np0005539564 NetworkManager[48997]: <info>  [1764405512.6120] manager: (tap4d70dafc-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/335)
Nov 29 03:38:32 np0005539564 nova_compute[226295]: 2025-11-29 08:38:32.610 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.614 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d70dafc-00, col_values=(('external_ids', {'iface-id': 'b4deeba8-1d32-4538-b5ba-b0f05ed37acb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:32 np0005539564 nova_compute[226295]: 2025-11-29 08:38:32.615 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:32 np0005539564 ovn_controller[130591]: 2025-11-29T08:38:32Z|00729|binding|INFO|Releasing lport b4deeba8-1d32-4538-b5ba-b0f05ed37acb from this chassis (sb_readonly=0)
Nov 29 03:38:32 np0005539564 nova_compute[226295]: 2025-11-29 08:38:32.633 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.634 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.635 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8faf9638-6dc5-4918-8a17-36643ce2199c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.636 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab.pid.haproxy
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:38:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:32.637 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab', 'env', 'PROCESS_TAG=haproxy-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:38:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:32.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.065 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405513.0645182, 39a5933a-7591-4dd5-9113-5291a3eab7df => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.065 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] VM Started (Lifecycle Event)#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.068 226310 DEBUG nova.compute.manager [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.073 226310 DEBUG nova.virt.libvirt.driver [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.079 226310 INFO nova.virt.libvirt.driver [-] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Instance spawned successfully.#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.080 226310 DEBUG nova.virt.libvirt.driver [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.092 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.096 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.108 226310 DEBUG nova.virt.libvirt.driver [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.109 226310 DEBUG nova.virt.libvirt.driver [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.110 226310 DEBUG nova.virt.libvirt.driver [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.110 226310 DEBUG nova.virt.libvirt.driver [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.111 226310 DEBUG nova.virt.libvirt.driver [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.112 226310 DEBUG nova.virt.libvirt.driver [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:38:33 np0005539564 podman[294878]: 2025-11-29 08:38:33.118876706 +0000 UTC m=+0.064396343 container create c4a568a30dd39dc6bc7d953bac30e432df351fc5fc1051c80a7805485bec833a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.120 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.121 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405513.065816, 39a5933a-7591-4dd5-9113-5291a3eab7df => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.121 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.136 226310 INFO nova.virt.libvirt.driver [None req-27b9fd41-5d6c-431e-8222-a06d62972012 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Deleting instance files /var/lib/nova/instances/0fa70e65-aaae-493a-9c8c-db89fe6658e6_del#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.137 226310 INFO nova.virt.libvirt.driver [None req-27b9fd41-5d6c-431e-8222-a06d62972012 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Deletion of /var/lib/nova/instances/0fa70e65-aaae-493a-9c8c-db89fe6658e6_del complete#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.161 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.165 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405513.0709906, 39a5933a-7591-4dd5-9113-5291a3eab7df => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.165 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:38:33 np0005539564 systemd[1]: Started libpod-conmon-c4a568a30dd39dc6bc7d953bac30e432df351fc5fc1051c80a7805485bec833a.scope.
Nov 29 03:38:33 np0005539564 podman[294878]: 2025-11-29 08:38:33.087632421 +0000 UTC m=+0.033152078 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:38:33 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.203 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:33 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d5a8b6ec10f9facc4fb03c6bc2b2d72d95d48135939d33bebc6faac4279c6b2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.207 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.215 226310 INFO nova.compute.manager [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Took 8.75 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.216 226310 DEBUG nova.compute.manager [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.217 226310 INFO nova.compute.manager [None req-27b9fd41-5d6c-431e-8222-a06d62972012 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Took 2.02 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.217 226310 DEBUG oslo.service.loopingcall [None req-27b9fd41-5d6c-431e-8222-a06d62972012 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.217 226310 DEBUG nova.compute.manager [-] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.218 226310 DEBUG nova.network.neutron [-] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:38:33 np0005539564 podman[294878]: 2025-11-29 08:38:33.223709311 +0000 UTC m=+0.169229028 container init c4a568a30dd39dc6bc7d953bac30e432df351fc5fc1051c80a7805485bec833a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.224 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:38:33 np0005539564 podman[294878]: 2025-11-29 08:38:33.232594693 +0000 UTC m=+0.178114370 container start c4a568a30dd39dc6bc7d953bac30e432df351fc5fc1051c80a7805485bec833a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:38:33 np0005539564 neutron-haproxy-ovnmeta-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab[294892]: [NOTICE]   (294896) : New worker (294898) forked
Nov 29 03:38:33 np0005539564 neutron-haproxy-ovnmeta-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab[294892]: [NOTICE]   (294896) : Loading success.
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.285 226310 INFO nova.compute.manager [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Took 9.86 seconds to build instance.#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.312 226310 DEBUG oslo_concurrency.lockutils [None req-53ec2aae-dd55-47c8-a054-e05aee75a53f a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "39a5933a-7591-4dd5-9113-5291a3eab7df" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.768 226310 DEBUG nova.compute.manager [req-43c5acfb-b28e-4310-88e5-d3931a8caaed req-b5233b74-45ca-4de0-88cd-831c8517953a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Received event network-vif-plugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.769 226310 DEBUG oslo_concurrency.lockutils [req-43c5acfb-b28e-4310-88e5-d3931a8caaed req-b5233b74-45ca-4de0-88cd-831c8517953a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.769 226310 DEBUG oslo_concurrency.lockutils [req-43c5acfb-b28e-4310-88e5-d3931a8caaed req-b5233b74-45ca-4de0-88cd-831c8517953a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.769 226310 DEBUG oslo_concurrency.lockutils [req-43c5acfb-b28e-4310-88e5-d3931a8caaed req-b5233b74-45ca-4de0-88cd-831c8517953a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.769 226310 DEBUG nova.compute.manager [req-43c5acfb-b28e-4310-88e5-d3931a8caaed req-b5233b74-45ca-4de0-88cd-831c8517953a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] No waiting events found dispatching network-vif-plugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:38:33 np0005539564 nova_compute[226295]: 2025-11-29 08:38:33.770 226310 WARNING nova.compute.manager [req-43c5acfb-b28e-4310-88e5-d3931a8caaed req-b5233b74-45ca-4de0-88cd-831c8517953a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Received unexpected event network-vif-plugged-ebf4feb2-0247-40b6-a431-2f55b2f4c237 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:38:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:34.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:34 np0005539564 nova_compute[226295]: 2025-11-29 08:38:34.290 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:34 np0005539564 nova_compute[226295]: 2025-11-29 08:38:34.472 226310 DEBUG nova.compute.manager [req-7673c476-59d6-4da8-b2ce-c93eeb7d3d8b req-014b68ac-bc93-428a-b84a-247d472cdbfa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Received event network-vif-plugged-1547f1e7-48a2-41c4-9536-2f17f8b068aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:34 np0005539564 nova_compute[226295]: 2025-11-29 08:38:34.472 226310 DEBUG oslo_concurrency.lockutils [req-7673c476-59d6-4da8-b2ce-c93eeb7d3d8b req-014b68ac-bc93-428a-b84a-247d472cdbfa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "39a5933a-7591-4dd5-9113-5291a3eab7df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:34 np0005539564 nova_compute[226295]: 2025-11-29 08:38:34.473 226310 DEBUG oslo_concurrency.lockutils [req-7673c476-59d6-4da8-b2ce-c93eeb7d3d8b req-014b68ac-bc93-428a-b84a-247d472cdbfa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "39a5933a-7591-4dd5-9113-5291a3eab7df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:34 np0005539564 nova_compute[226295]: 2025-11-29 08:38:34.473 226310 DEBUG oslo_concurrency.lockutils [req-7673c476-59d6-4da8-b2ce-c93eeb7d3d8b req-014b68ac-bc93-428a-b84a-247d472cdbfa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "39a5933a-7591-4dd5-9113-5291a3eab7df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:34 np0005539564 nova_compute[226295]: 2025-11-29 08:38:34.473 226310 DEBUG nova.compute.manager [req-7673c476-59d6-4da8-b2ce-c93eeb7d3d8b req-014b68ac-bc93-428a-b84a-247d472cdbfa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] No waiting events found dispatching network-vif-plugged-1547f1e7-48a2-41c4-9536-2f17f8b068aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:38:34 np0005539564 nova_compute[226295]: 2025-11-29 08:38:34.474 226310 WARNING nova.compute.manager [req-7673c476-59d6-4da8-b2ce-c93eeb7d3d8b req-014b68ac-bc93-428a-b84a-247d472cdbfa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Received unexpected event network-vif-plugged-1547f1e7-48a2-41c4-9536-2f17f8b068aa for instance with vm_state active and task_state None.#033[00m
Nov 29 03:38:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:34.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:35 np0005539564 nova_compute[226295]: 2025-11-29 08:38:35.691 226310 DEBUG nova.network.neutron [-] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:38:35 np0005539564 nova_compute[226295]: 2025-11-29 08:38:35.717 226310 INFO nova.compute.manager [-] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Took 2.50 seconds to deallocate network for instance.#033[00m
Nov 29 03:38:35 np0005539564 nova_compute[226295]: 2025-11-29 08:38:35.786 226310 DEBUG nova.compute.manager [req-b650d81a-e971-4c66-b028-427154226534 req-5f1374f6-e151-4398-8002-867184519466 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Received event network-vif-deleted-ebf4feb2-0247-40b6-a431-2f55b2f4c237 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:35 np0005539564 nova_compute[226295]: 2025-11-29 08:38:35.795 226310 DEBUG oslo_concurrency.lockutils [None req-27b9fd41-5d6c-431e-8222-a06d62972012 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:35 np0005539564 nova_compute[226295]: 2025-11-29 08:38:35.796 226310 DEBUG oslo_concurrency.lockutils [None req-27b9fd41-5d6c-431e-8222-a06d62972012 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:35 np0005539564 nova_compute[226295]: 2025-11-29 08:38:35.802 226310 DEBUG oslo_concurrency.lockutils [None req-27b9fd41-5d6c-431e-8222-a06d62972012 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:35 np0005539564 nova_compute[226295]: 2025-11-29 08:38:35.854 226310 INFO nova.scheduler.client.report [None req-27b9fd41-5d6c-431e-8222-a06d62972012 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Deleted allocations for instance 0fa70e65-aaae-493a-9c8c-db89fe6658e6#033[00m
Nov 29 03:38:35 np0005539564 nova_compute[226295]: 2025-11-29 08:38:35.927 226310 DEBUG oslo_concurrency.lockutils [None req-27b9fd41-5d6c-431e-8222-a06d62972012 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "0fa70e65-aaae-493a-9c8c-db89fe6658e6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:36.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:36 np0005539564 nova_compute[226295]: 2025-11-29 08:38:36.545 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:36.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:38.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:38.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:39 np0005539564 nova_compute[226295]: 2025-11-29 08:38:39.293 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:40.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:40 np0005539564 ovn_controller[130591]: 2025-11-29T08:38:40Z|00730|binding|INFO|Releasing lport b4deeba8-1d32-4538-b5ba-b0f05ed37acb from this chassis (sb_readonly=0)
Nov 29 03:38:40 np0005539564 nova_compute[226295]: 2025-11-29 08:38:40.772 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:40 np0005539564 nova_compute[226295]: 2025-11-29 08:38:40.856 226310 DEBUG nova.compute.manager [req-367add98-bd38-4263-ae6f-a8cdf18ab455 req-05d13dae-532e-4519-88a8-86c167c4afc4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Received event network-changed-1547f1e7-48a2-41c4-9536-2f17f8b068aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:40 np0005539564 nova_compute[226295]: 2025-11-29 08:38:40.857 226310 DEBUG nova.compute.manager [req-367add98-bd38-4263-ae6f-a8cdf18ab455 req-05d13dae-532e-4519-88a8-86c167c4afc4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Refreshing instance network info cache due to event network-changed-1547f1e7-48a2-41c4-9536-2f17f8b068aa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:38:40 np0005539564 nova_compute[226295]: 2025-11-29 08:38:40.857 226310 DEBUG oslo_concurrency.lockutils [req-367add98-bd38-4263-ae6f-a8cdf18ab455 req-05d13dae-532e-4519-88a8-86c167c4afc4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-39a5933a-7591-4dd5-9113-5291a3eab7df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:38:40 np0005539564 nova_compute[226295]: 2025-11-29 08:38:40.858 226310 DEBUG oslo_concurrency.lockutils [req-367add98-bd38-4263-ae6f-a8cdf18ab455 req-05d13dae-532e-4519-88a8-86c167c4afc4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-39a5933a-7591-4dd5-9113-5291a3eab7df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:38:40 np0005539564 nova_compute[226295]: 2025-11-29 08:38:40.859 226310 DEBUG nova.network.neutron [req-367add98-bd38-4263-ae6f-a8cdf18ab455 req-05d13dae-532e-4519-88a8-86c167c4afc4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Refreshing network info cache for port 1547f1e7-48a2-41c4-9536-2f17f8b068aa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:38:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:40.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:41 np0005539564 nova_compute[226295]: 2025-11-29 08:38:41.548 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:42.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:42 np0005539564 podman[294908]: 2025-11-29 08:38:42.531895106 +0000 UTC m=+0.081836075 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:38:42 np0005539564 podman[294909]: 2025-11-29 08:38:42.547815647 +0000 UTC m=+0.098186218 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:38:42 np0005539564 podman[294907]: 2025-11-29 08:38:42.613953626 +0000 UTC m=+0.165315934 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 03:38:42 np0005539564 nova_compute[226295]: 2025-11-29 08:38:42.713 226310 DEBUG nova.network.neutron [req-367add98-bd38-4263-ae6f-a8cdf18ab455 req-05d13dae-532e-4519-88a8-86c167c4afc4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Updated VIF entry in instance network info cache for port 1547f1e7-48a2-41c4-9536-2f17f8b068aa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:38:42 np0005539564 nova_compute[226295]: 2025-11-29 08:38:42.714 226310 DEBUG nova.network.neutron [req-367add98-bd38-4263-ae6f-a8cdf18ab455 req-05d13dae-532e-4519-88a8-86c167c4afc4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Updating instance_info_cache with network_info: [{"id": "1547f1e7-48a2-41c4-9536-2f17f8b068aa", "address": "fa:16:3e:fc:96:d5", "network": {"id": "4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab", "bridge": "br-int", "label": "tempest-network-smoke--342297603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1547f1e7-48", "ovs_interfaceid": "1547f1e7-48a2-41c4-9536-2f17f8b068aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:38:42 np0005539564 nova_compute[226295]: 2025-11-29 08:38:42.730 226310 DEBUG oslo_concurrency.lockutils [req-367add98-bd38-4263-ae6f-a8cdf18ab455 req-05d13dae-532e-4519-88a8-86c167c4afc4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-39a5933a-7591-4dd5-9113-5291a3eab7df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:38:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:42.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:43 np0005539564 nova_compute[226295]: 2025-11-29 08:38:43.588 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:43 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:43.589 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:38:43 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:43.590 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:38:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:44.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:44 np0005539564 nova_compute[226295]: 2025-11-29 08:38:44.296 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:44.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:46.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:46 np0005539564 nova_compute[226295]: 2025-11-29 08:38:46.503 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405511.5018754, 0fa70e65-aaae-493a-9c8c-db89fe6658e6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:38:46 np0005539564 nova_compute[226295]: 2025-11-29 08:38:46.504 226310 INFO nova.compute.manager [-] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:38:46 np0005539564 nova_compute[226295]: 2025-11-29 08:38:46.528 226310 DEBUG nova.compute.manager [None req-1c6ebbed-990c-47a8-a085-f6c40be5f264 - - - - - -] [instance: 0fa70e65-aaae-493a-9c8c-db89fe6658e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:46 np0005539564 nova_compute[226295]: 2025-11-29 08:38:46.552 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:46.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:46 np0005539564 ovn_controller[130591]: 2025-11-29T08:38:46Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fc:96:d5 10.100.0.9
Nov 29 03:38:46 np0005539564 ovn_controller[130591]: 2025-11-29T08:38:46Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fc:96:d5 10.100.0.9
Nov 29 03:38:47 np0005539564 nova_compute[226295]: 2025-11-29 08:38:47.442 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:48.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:48.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:49 np0005539564 nova_compute[226295]: 2025-11-29 08:38:49.299 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:38:49.592 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:50.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:50.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #133. Immutable memtables: 0.
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:38:51.397053) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 133
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405531397150, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 981, "num_deletes": 251, "total_data_size": 2019558, "memory_usage": 2043600, "flush_reason": "Manual Compaction"}
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #134: started
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405531413298, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 134, "file_size": 1332192, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66315, "largest_seqno": 67291, "table_properties": {"data_size": 1327755, "index_size": 2088, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9879, "raw_average_key_size": 19, "raw_value_size": 1318860, "raw_average_value_size": 2632, "num_data_blocks": 93, "num_entries": 501, "num_filter_entries": 501, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405455, "oldest_key_time": 1764405455, "file_creation_time": 1764405531, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 16326 microseconds, and 7760 cpu microseconds.
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:38:51.413380) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #134: 1332192 bytes OK
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:38:51.413420) [db/memtable_list.cc:519] [default] Level-0 commit table #134 started
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:38:51.415519) [db/memtable_list.cc:722] [default] Level-0 commit table #134: memtable #1 done
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:38:51.415548) EVENT_LOG_v1 {"time_micros": 1764405531415539, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:38:51.415580) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 2014708, prev total WAL file size 2014708, number of live WAL files 2.
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000130.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:38:51.416924) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [134(1300KB)], [132(11MB)]
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405531417015, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [134], "files_L6": [132], "score": -1, "input_data_size": 13413327, "oldest_snapshot_seqno": -1}
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #135: 9342 keys, 11503570 bytes, temperature: kUnknown
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405531501141, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 135, "file_size": 11503570, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11443444, "index_size": 35693, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23365, "raw_key_size": 247764, "raw_average_key_size": 26, "raw_value_size": 11279443, "raw_average_value_size": 1207, "num_data_blocks": 1354, "num_entries": 9342, "num_filter_entries": 9342, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764405531, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 135, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:38:51.501554) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 11503570 bytes
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:38:51.514667) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.2 rd, 136.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 11.5 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(18.7) write-amplify(8.6) OK, records in: 9857, records dropped: 515 output_compression: NoCompression
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:38:51.514700) EVENT_LOG_v1 {"time_micros": 1764405531514686, "job": 84, "event": "compaction_finished", "compaction_time_micros": 84231, "compaction_time_cpu_micros": 35343, "output_level": 6, "num_output_files": 1, "total_output_size": 11503570, "num_input_records": 9857, "num_output_records": 9342, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405531515286, "job": 84, "event": "table_file_deletion", "file_number": 134}
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000132.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405531519315, "job": 84, "event": "table_file_deletion", "file_number": 132}
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:38:51.416763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:38:51.519422) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:38:51.519428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:38:51.519431) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:38:51.519434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:38:51.519436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:51 np0005539564 nova_compute[226295]: 2025-11-29 08:38:51.555 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:52 np0005539564 nova_compute[226295]: 2025-11-29 08:38:52.091 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:52.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:52.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:54.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:54 np0005539564 nova_compute[226295]: 2025-11-29 08:38:54.346 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:54.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:56.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:56 np0005539564 nova_compute[226295]: 2025-11-29 08:38:56.595 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:56.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:58.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:38:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:58.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:59 np0005539564 nova_compute[226295]: 2025-11-29 08:38:59.349 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:59 np0005539564 nova_compute[226295]: 2025-11-29 08:38:59.959 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:39:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:00.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:39:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:00.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:01 np0005539564 nova_compute[226295]: 2025-11-29 08:39:01.600 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:02.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:02.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:39:03.754 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:39:03.754 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:39:03.755 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:39:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:04.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:39:04 np0005539564 nova_compute[226295]: 2025-11-29 08:39:04.262 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:04 np0005539564 nova_compute[226295]: 2025-11-29 08:39:04.396 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:04.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:06.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:06 np0005539564 nova_compute[226295]: 2025-11-29 08:39:06.602 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:06.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:08.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:08.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:09 np0005539564 nova_compute[226295]: 2025-11-29 08:39:09.401 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:09 np0005539564 nova_compute[226295]: 2025-11-29 08:39:09.425 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:09 np0005539564 nova_compute[226295]: 2025-11-29 08:39:09.425 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:09 np0005539564 nova_compute[226295]: 2025-11-29 08:39:09.425 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:09 np0005539564 nova_compute[226295]: 2025-11-29 08:39:09.426 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:39:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:10.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:10.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:11 np0005539564 nova_compute[226295]: 2025-11-29 08:39:11.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:11 np0005539564 nova_compute[226295]: 2025-11-29 08:39:11.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:11 np0005539564 nova_compute[226295]: 2025-11-29 08:39:11.606 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:12.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:12 np0005539564 nova_compute[226295]: 2025-11-29 08:39:12.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:12 np0005539564 nova_compute[226295]: 2025-11-29 08:39:12.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:39:12 np0005539564 nova_compute[226295]: 2025-11-29 08:39:12.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:39:12 np0005539564 nova_compute[226295]: 2025-11-29 08:39:12.555 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-39a5933a-7591-4dd5-9113-5291a3eab7df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:12 np0005539564 nova_compute[226295]: 2025-11-29 08:39:12.556 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-39a5933a-7591-4dd5-9113-5291a3eab7df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:12 np0005539564 nova_compute[226295]: 2025-11-29 08:39:12.556 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:39:12 np0005539564 nova_compute[226295]: 2025-11-29 08:39:12.556 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 39a5933a-7591-4dd5-9113-5291a3eab7df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:39:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:12.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:13 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:39:13 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:39:13 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:39:13 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:39:13 np0005539564 podman[295102]: 2025-11-29 08:39:13.505371981 +0000 UTC m=+0.055850312 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible)
Nov 29 03:39:13 np0005539564 podman[295101]: 2025-11-29 08:39:13.514539369 +0000 UTC m=+0.063980942 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 03:39:13 np0005539564 podman[295100]: 2025-11-29 08:39:13.537671685 +0000 UTC m=+0.088219258 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 03:39:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:14.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:14 np0005539564 nova_compute[226295]: 2025-11-29 08:39:14.177 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Updating instance_info_cache with network_info: [{"id": "1547f1e7-48a2-41c4-9536-2f17f8b068aa", "address": "fa:16:3e:fc:96:d5", "network": {"id": "4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab", "bridge": "br-int", "label": "tempest-network-smoke--342297603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1547f1e7-48", "ovs_interfaceid": "1547f1e7-48a2-41c4-9536-2f17f8b068aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:14 np0005539564 nova_compute[226295]: 2025-11-29 08:39:14.196 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-39a5933a-7591-4dd5-9113-5291a3eab7df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:39:14 np0005539564 nova_compute[226295]: 2025-11-29 08:39:14.196 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:39:14 np0005539564 nova_compute[226295]: 2025-11-29 08:39:14.445 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:14.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:16.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:16 np0005539564 nova_compute[226295]: 2025-11-29 08:39:16.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:16 np0005539564 nova_compute[226295]: 2025-11-29 08:39:16.641 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:16.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:18.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:18.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:19 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:39:19 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:39:19 np0005539564 nova_compute[226295]: 2025-11-29 08:39:19.448 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:20.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:20.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:21 np0005539564 nova_compute[226295]: 2025-11-29 08:39:21.643 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:22.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:22 np0005539564 nova_compute[226295]: 2025-11-29 08:39:22.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:39:22.503 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:39:22 np0005539564 nova_compute[226295]: 2025-11-29 08:39:22.504 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:39:22.504 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:39:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:39:22.505 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:22.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:24.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:24 np0005539564 nova_compute[226295]: 2025-11-29 08:39:24.484 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:25.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:26.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:26 np0005539564 nova_compute[226295]: 2025-11-29 08:39:26.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:26 np0005539564 nova_compute[226295]: 2025-11-29 08:39:26.380 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:26 np0005539564 nova_compute[226295]: 2025-11-29 08:39:26.381 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:26 np0005539564 nova_compute[226295]: 2025-11-29 08:39:26.381 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:26 np0005539564 nova_compute[226295]: 2025-11-29 08:39:26.381 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:39:26 np0005539564 nova_compute[226295]: 2025-11-29 08:39:26.381 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:26 np0005539564 nova_compute[226295]: 2025-11-29 08:39:26.693 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:39:26 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2715970553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:39:26 np0005539564 nova_compute[226295]: 2025-11-29 08:39:26.866 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:26 np0005539564 nova_compute[226295]: 2025-11-29 08:39:26.945 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000b4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:39:26 np0005539564 nova_compute[226295]: 2025-11-29 08:39:26.946 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000b4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:39:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:27.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:27 np0005539564 nova_compute[226295]: 2025-11-29 08:39:27.107 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:39:27 np0005539564 nova_compute[226295]: 2025-11-29 08:39:27.108 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4014MB free_disk=20.88628387451172GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:39:27 np0005539564 nova_compute[226295]: 2025-11-29 08:39:27.108 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:27 np0005539564 nova_compute[226295]: 2025-11-29 08:39:27.109 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:27 np0005539564 nova_compute[226295]: 2025-11-29 08:39:27.204 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 39a5933a-7591-4dd5-9113-5291a3eab7df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:39:27 np0005539564 nova_compute[226295]: 2025-11-29 08:39:27.204 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:39:27 np0005539564 nova_compute[226295]: 2025-11-29 08:39:27.205 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:39:27 np0005539564 nova_compute[226295]: 2025-11-29 08:39:27.357 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:39:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/707917604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:39:27 np0005539564 nova_compute[226295]: 2025-11-29 08:39:27.857 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:27 np0005539564 nova_compute[226295]: 2025-11-29 08:39:27.866 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:39:27 np0005539564 nova_compute[226295]: 2025-11-29 08:39:27.906 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:39:27 np0005539564 nova_compute[226295]: 2025-11-29 08:39:27.940 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:39:27 np0005539564 nova_compute[226295]: 2025-11-29 08:39:27.941 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:28.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:29.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:29 np0005539564 nova_compute[226295]: 2025-11-29 08:39:29.486 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:30.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:31.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:31 np0005539564 nova_compute[226295]: 2025-11-29 08:39:31.698 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:32.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:33.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:34.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:34 np0005539564 nova_compute[226295]: 2025-11-29 08:39:34.535 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:35.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:39:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:36.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:39:36 np0005539564 nova_compute[226295]: 2025-11-29 08:39:36.701 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:37.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:38.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:39.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:39 np0005539564 nova_compute[226295]: 2025-11-29 08:39:39.537 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:40.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:41.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:41 np0005539564 nova_compute[226295]: 2025-11-29 08:39:41.705 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:42.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:43.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:44.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:44 np0005539564 podman[295263]: 2025-11-29 08:39:44.513748613 +0000 UTC m=+0.058085953 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:39:44 np0005539564 podman[295261]: 2025-11-29 08:39:44.538673896 +0000 UTC m=+0.096841660 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:39:44 np0005539564 nova_compute[226295]: 2025-11-29 08:39:44.539 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:44 np0005539564 podman[295262]: 2025-11-29 08:39:44.546060946 +0000 UTC m=+0.096170682 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:39:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:45.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:46.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:46 np0005539564 nova_compute[226295]: 2025-11-29 08:39:46.710 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:46 np0005539564 ovn_controller[130591]: 2025-11-29T08:39:46Z|00731|binding|INFO|Releasing lport b4deeba8-1d32-4538-b5ba-b0f05ed37acb from this chassis (sb_readonly=0)
Nov 29 03:39:46 np0005539564 nova_compute[226295]: 2025-11-29 08:39:46.889 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:47.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:48.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:49.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:49 np0005539564 nova_compute[226295]: 2025-11-29 08:39:49.581 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:50.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:51.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:51 np0005539564 nova_compute[226295]: 2025-11-29 08:39:51.714 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:52.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:53.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:54.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:54 np0005539564 nova_compute[226295]: 2025-11-29 08:39:54.632 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:55.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:55 np0005539564 ovn_controller[130591]: 2025-11-29T08:39:55Z|00732|binding|INFO|Releasing lport b4deeba8-1d32-4538-b5ba-b0f05ed37acb from this chassis (sb_readonly=0)
Nov 29 03:39:55 np0005539564 nova_compute[226295]: 2025-11-29 08:39:55.578 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:56.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:56 np0005539564 nova_compute[226295]: 2025-11-29 08:39:56.701 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:56 np0005539564 nova_compute[226295]: 2025-11-29 08:39:56.716 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:57.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:57 np0005539564 nova_compute[226295]: 2025-11-29 08:39:57.757 226310 DEBUG nova.compute.manager [req-934ba086-b6da-4f1b-87db-9348f0d1b787 req-1eb127e7-55e9-4eb1-99b6-8fcaa97f80e8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Received event network-changed-1547f1e7-48a2-41c4-9536-2f17f8b068aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:57 np0005539564 nova_compute[226295]: 2025-11-29 08:39:57.757 226310 DEBUG nova.compute.manager [req-934ba086-b6da-4f1b-87db-9348f0d1b787 req-1eb127e7-55e9-4eb1-99b6-8fcaa97f80e8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Refreshing instance network info cache due to event network-changed-1547f1e7-48a2-41c4-9536-2f17f8b068aa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:39:57 np0005539564 nova_compute[226295]: 2025-11-29 08:39:57.758 226310 DEBUG oslo_concurrency.lockutils [req-934ba086-b6da-4f1b-87db-9348f0d1b787 req-1eb127e7-55e9-4eb1-99b6-8fcaa97f80e8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-39a5933a-7591-4dd5-9113-5291a3eab7df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:57 np0005539564 nova_compute[226295]: 2025-11-29 08:39:57.758 226310 DEBUG oslo_concurrency.lockutils [req-934ba086-b6da-4f1b-87db-9348f0d1b787 req-1eb127e7-55e9-4eb1-99b6-8fcaa97f80e8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-39a5933a-7591-4dd5-9113-5291a3eab7df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:57 np0005539564 nova_compute[226295]: 2025-11-29 08:39:57.758 226310 DEBUG nova.network.neutron [req-934ba086-b6da-4f1b-87db-9348f0d1b787 req-1eb127e7-55e9-4eb1-99b6-8fcaa97f80e8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Refreshing network info cache for port 1547f1e7-48a2-41c4-9536-2f17f8b068aa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:39:57 np0005539564 nova_compute[226295]: 2025-11-29 08:39:57.805 226310 DEBUG oslo_concurrency.lockutils [None req-169b76a2-ac02-4823-80e9-e24273bba800 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "39a5933a-7591-4dd5-9113-5291a3eab7df" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:57 np0005539564 nova_compute[226295]: 2025-11-29 08:39:57.806 226310 DEBUG oslo_concurrency.lockutils [None req-169b76a2-ac02-4823-80e9-e24273bba800 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "39a5933a-7591-4dd5-9113-5291a3eab7df" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:57 np0005539564 nova_compute[226295]: 2025-11-29 08:39:57.806 226310 DEBUG oslo_concurrency.lockutils [None req-169b76a2-ac02-4823-80e9-e24273bba800 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "39a5933a-7591-4dd5-9113-5291a3eab7df-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:57 np0005539564 nova_compute[226295]: 2025-11-29 08:39:57.807 226310 DEBUG oslo_concurrency.lockutils [None req-169b76a2-ac02-4823-80e9-e24273bba800 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "39a5933a-7591-4dd5-9113-5291a3eab7df-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:57 np0005539564 nova_compute[226295]: 2025-11-29 08:39:57.807 226310 DEBUG oslo_concurrency.lockutils [None req-169b76a2-ac02-4823-80e9-e24273bba800 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "39a5933a-7591-4dd5-9113-5291a3eab7df-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:57 np0005539564 nova_compute[226295]: 2025-11-29 08:39:57.809 226310 INFO nova.compute.manager [None req-169b76a2-ac02-4823-80e9-e24273bba800 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Terminating instance#033[00m
Nov 29 03:39:57 np0005539564 nova_compute[226295]: 2025-11-29 08:39:57.810 226310 DEBUG nova.compute.manager [None req-169b76a2-ac02-4823-80e9-e24273bba800 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:39:57 np0005539564 kernel: tap1547f1e7-48 (unregistering): left promiscuous mode
Nov 29 03:39:57 np0005539564 NetworkManager[48997]: <info>  [1764405597.8695] device (tap1547f1e7-48): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:39:57 np0005539564 ovn_controller[130591]: 2025-11-29T08:39:57Z|00733|binding|INFO|Releasing lport 1547f1e7-48a2-41c4-9536-2f17f8b068aa from this chassis (sb_readonly=0)
Nov 29 03:39:57 np0005539564 ovn_controller[130591]: 2025-11-29T08:39:57Z|00734|binding|INFO|Setting lport 1547f1e7-48a2-41c4-9536-2f17f8b068aa down in Southbound
Nov 29 03:39:57 np0005539564 nova_compute[226295]: 2025-11-29 08:39:57.877 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:57 np0005539564 ovn_controller[130591]: 2025-11-29T08:39:57Z|00735|binding|INFO|Removing iface tap1547f1e7-48 ovn-installed in OVS
Nov 29 03:39:57 np0005539564 nova_compute[226295]: 2025-11-29 08:39:57.879 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:57 np0005539564 nova_compute[226295]: 2025-11-29 08:39:57.893 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:39:57.895 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:96:d5 10.100.0.9'], port_security=['fa:16:3e:fc:96:d5 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '39a5933a-7591-4dd5-9113-5291a3eab7df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '03858b11000d4b57bd3659c3083eed47', 'neutron:revision_number': '4', 'neutron:security_group_ids': '23aeff46-2491-4fb0-b831-fbf29f8b9c55 9bcfb367-a867-45a3-bf8c-2cc8b153db20', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6d532b12-290e-4d71-bc8b-a61adcdcbe20, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=1547f1e7-48a2-41c4-9536-2f17f8b068aa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:39:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:39:57.896 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 1547f1e7-48a2-41c4-9536-2f17f8b068aa in datapath 4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab unbound from our chassis#033[00m
Nov 29 03:39:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:39:57.897 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:39:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:39:57.899 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc18abc-ee59-4722-be65-502c033ee97f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:39:57.899 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab namespace which is not needed anymore#033[00m
Nov 29 03:39:57 np0005539564 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000b4.scope: Deactivated successfully.
Nov 29 03:39:57 np0005539564 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000b4.scope: Consumed 17.730s CPU time.
Nov 29 03:39:57 np0005539564 systemd-machined[190128]: Machine qemu-86-instance-000000b4 terminated.
Nov 29 03:39:58 np0005539564 neutron-haproxy-ovnmeta-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab[294892]: [NOTICE]   (294896) : haproxy version is 2.8.14-c23fe91
Nov 29 03:39:58 np0005539564 neutron-haproxy-ovnmeta-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab[294892]: [NOTICE]   (294896) : path to executable is /usr/sbin/haproxy
Nov 29 03:39:58 np0005539564 neutron-haproxy-ovnmeta-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab[294892]: [WARNING]  (294896) : Exiting Master process...
Nov 29 03:39:58 np0005539564 neutron-haproxy-ovnmeta-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab[294892]: [WARNING]  (294896) : Exiting Master process...
Nov 29 03:39:58 np0005539564 neutron-haproxy-ovnmeta-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab[294892]: [ALERT]    (294896) : Current worker (294898) exited with code 143 (Terminated)
Nov 29 03:39:58 np0005539564 neutron-haproxy-ovnmeta-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab[294892]: [WARNING]  (294896) : All workers exited. Exiting... (0)
Nov 29 03:39:58 np0005539564 systemd[1]: libpod-c4a568a30dd39dc6bc7d953bac30e432df351fc5fc1051c80a7805485bec833a.scope: Deactivated successfully.
Nov 29 03:39:58 np0005539564 podman[295353]: 2025-11-29 08:39:58.051186834 +0000 UTC m=+0.056730585 container died c4a568a30dd39dc6bc7d953bac30e432df351fc5fc1051c80a7805485bec833a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:39:58 np0005539564 nova_compute[226295]: 2025-11-29 08:39:58.053 226310 INFO nova.virt.libvirt.driver [-] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Instance destroyed successfully.#033[00m
Nov 29 03:39:58 np0005539564 nova_compute[226295]: 2025-11-29 08:39:58.054 226310 DEBUG nova.objects.instance [None req-169b76a2-ac02-4823-80e9-e24273bba800 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lazy-loading 'resources' on Instance uuid 39a5933a-7591-4dd5-9113-5291a3eab7df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:39:58 np0005539564 nova_compute[226295]: 2025-11-29 08:39:58.069 226310 DEBUG nova.virt.libvirt.vif [None req-169b76a2-ac02-4823-80e9-e24273bba800 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:38:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-2043804821',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-2043804821',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1086021155-ac',id=180,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPxqsP30oMrIL19dxV/ftYBu1JGXJbuz4zZMrSZ4IjdTrg5Jb0FRbiWvcE6tyNZZeY3AXThQKAmSrp4ocWrP93aiYeHzuh43w5nwuixKOkiv+IESVY1e8+VwoPFfftd2bA==',key_name='tempest-TestSecurityGroupsBasicOps-836616474',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:38:33Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='03858b11000d4b57bd3659c3083eed47',ramdisk_id='',reservation_id='r-8s7vtk22',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1086021155',owner_user_name='tempest-TestSecurityGroupsBasicOps-1086021155-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:38:33Z,user_data=None,user_id='a45da8ed818144f8bd6e00d233fcb5d2',uuid=39a5933a-7591-4dd5-9113-5291a3eab7df,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1547f1e7-48a2-41c4-9536-2f17f8b068aa", "address": "fa:16:3e:fc:96:d5", "network": {"id": "4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab", "bridge": "br-int", "label": "tempest-network-smoke--342297603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1547f1e7-48", "ovs_interfaceid": "1547f1e7-48a2-41c4-9536-2f17f8b068aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:39:58 np0005539564 nova_compute[226295]: 2025-11-29 08:39:58.070 226310 DEBUG nova.network.os_vif_util [None req-169b76a2-ac02-4823-80e9-e24273bba800 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converting VIF {"id": "1547f1e7-48a2-41c4-9536-2f17f8b068aa", "address": "fa:16:3e:fc:96:d5", "network": {"id": "4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab", "bridge": "br-int", "label": "tempest-network-smoke--342297603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1547f1e7-48", "ovs_interfaceid": "1547f1e7-48a2-41c4-9536-2f17f8b068aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:39:58 np0005539564 nova_compute[226295]: 2025-11-29 08:39:58.071 226310 DEBUG nova.network.os_vif_util [None req-169b76a2-ac02-4823-80e9-e24273bba800 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fc:96:d5,bridge_name='br-int',has_traffic_filtering=True,id=1547f1e7-48a2-41c4-9536-2f17f8b068aa,network=Network(4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1547f1e7-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:39:58 np0005539564 nova_compute[226295]: 2025-11-29 08:39:58.072 226310 DEBUG os_vif [None req-169b76a2-ac02-4823-80e9-e24273bba800 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:96:d5,bridge_name='br-int',has_traffic_filtering=True,id=1547f1e7-48a2-41c4-9536-2f17f8b068aa,network=Network(4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1547f1e7-48') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:39:58 np0005539564 nova_compute[226295]: 2025-11-29 08:39:58.078 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:58 np0005539564 nova_compute[226295]: 2025-11-29 08:39:58.080 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1547f1e7-48, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:58 np0005539564 nova_compute[226295]: 2025-11-29 08:39:58.082 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:58 np0005539564 nova_compute[226295]: 2025-11-29 08:39:58.084 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:58 np0005539564 nova_compute[226295]: 2025-11-29 08:39:58.088 226310 INFO os_vif [None req-169b76a2-ac02-4823-80e9-e24273bba800 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:96:d5,bridge_name='br-int',has_traffic_filtering=True,id=1547f1e7-48a2-41c4-9536-2f17f8b068aa,network=Network(4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1547f1e7-48')#033[00m
Nov 29 03:39:58 np0005539564 systemd[1]: var-lib-containers-storage-overlay-5d5a8b6ec10f9facc4fb03c6bc2b2d72d95d48135939d33bebc6faac4279c6b2-merged.mount: Deactivated successfully.
Nov 29 03:39:58 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c4a568a30dd39dc6bc7d953bac30e432df351fc5fc1051c80a7805485bec833a-userdata-shm.mount: Deactivated successfully.
Nov 29 03:39:58 np0005539564 podman[295353]: 2025-11-29 08:39:58.104225089 +0000 UTC m=+0.109768840 container cleanup c4a568a30dd39dc6bc7d953bac30e432df351fc5fc1051c80a7805485bec833a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:39:58 np0005539564 systemd[1]: libpod-conmon-c4a568a30dd39dc6bc7d953bac30e432df351fc5fc1051c80a7805485bec833a.scope: Deactivated successfully.
Nov 29 03:39:58 np0005539564 podman[295404]: 2025-11-29 08:39:58.171970402 +0000 UTC m=+0.045695697 container remove c4a568a30dd39dc6bc7d953bac30e432df351fc5fc1051c80a7805485bec833a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:39:58 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:39:58.179 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5a6aa3a7-aab2-4f4f-af10-fc67e9075132]: (4, ('Sat Nov 29 08:39:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab (c4a568a30dd39dc6bc7d953bac30e432df351fc5fc1051c80a7805485bec833a)\nc4a568a30dd39dc6bc7d953bac30e432df351fc5fc1051c80a7805485bec833a\nSat Nov 29 08:39:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab (c4a568a30dd39dc6bc7d953bac30e432df351fc5fc1051c80a7805485bec833a)\nc4a568a30dd39dc6bc7d953bac30e432df351fc5fc1051c80a7805485bec833a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:58 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:39:58.182 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[57fe9cbe-b9ad-410c-8eb6-b034941349ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:58 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:39:58.183 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d70dafc-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:58 np0005539564 kernel: tap4d70dafc-00: left promiscuous mode
Nov 29 03:39:58 np0005539564 nova_compute[226295]: 2025-11-29 08:39:58.184 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:58.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:58 np0005539564 nova_compute[226295]: 2025-11-29 08:39:58.245 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:58 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:39:58.248 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d95a0e95-19fe-4b36-b33c-a84dfb835362]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:58 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:39:58.268 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[16003eef-71fb-4bd3-8623-4056a9d94e47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:58 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:39:58.270 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2e23d791-044f-4a00-9982-2a47b5b15d31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:58 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:39:58.293 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[84ca187b-83ad-4a09-b9ed-52f3273e6b86]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 848700, 'reachable_time': 36899, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295423, 'error': None, 'target': 'ovnmeta-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:58 np0005539564 systemd[1]: run-netns-ovnmeta\x2d4d70dafc\x2d0a6e\x2d49a8\x2da3c6\x2dba97b66b0bab.mount: Deactivated successfully.
Nov 29 03:39:58 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:39:58.299 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:39:58 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:39:58.299 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[425e7be5-8388-464c-a46c-0d02b0f1d308]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:58 np0005539564 nova_compute[226295]: 2025-11-29 08:39:58.408 226310 DEBUG nova.compute.manager [req-cc8eb747-2651-44ba-b7d5-bf45e3422206 req-6ed6280b-2e97-40f6-8973-1f8c9a9519f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Received event network-vif-unplugged-1547f1e7-48a2-41c4-9536-2f17f8b068aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:58 np0005539564 nova_compute[226295]: 2025-11-29 08:39:58.408 226310 DEBUG oslo_concurrency.lockutils [req-cc8eb747-2651-44ba-b7d5-bf45e3422206 req-6ed6280b-2e97-40f6-8973-1f8c9a9519f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "39a5933a-7591-4dd5-9113-5291a3eab7df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:58 np0005539564 nova_compute[226295]: 2025-11-29 08:39:58.408 226310 DEBUG oslo_concurrency.lockutils [req-cc8eb747-2651-44ba-b7d5-bf45e3422206 req-6ed6280b-2e97-40f6-8973-1f8c9a9519f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "39a5933a-7591-4dd5-9113-5291a3eab7df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:58 np0005539564 nova_compute[226295]: 2025-11-29 08:39:58.409 226310 DEBUG oslo_concurrency.lockutils [req-cc8eb747-2651-44ba-b7d5-bf45e3422206 req-6ed6280b-2e97-40f6-8973-1f8c9a9519f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "39a5933a-7591-4dd5-9113-5291a3eab7df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:58 np0005539564 nova_compute[226295]: 2025-11-29 08:39:58.409 226310 DEBUG nova.compute.manager [req-cc8eb747-2651-44ba-b7d5-bf45e3422206 req-6ed6280b-2e97-40f6-8973-1f8c9a9519f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] No waiting events found dispatching network-vif-unplugged-1547f1e7-48a2-41c4-9536-2f17f8b068aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:39:58 np0005539564 nova_compute[226295]: 2025-11-29 08:39:58.409 226310 DEBUG nova.compute.manager [req-cc8eb747-2651-44ba-b7d5-bf45e3422206 req-6ed6280b-2e97-40f6-8973-1f8c9a9519f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Received event network-vif-unplugged-1547f1e7-48a2-41c4-9536-2f17f8b068aa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:39:58 np0005539564 nova_compute[226295]: 2025-11-29 08:39:58.933 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:39:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:59.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:59 np0005539564 nova_compute[226295]: 2025-11-29 08:39:59.289 226310 INFO nova.virt.libvirt.driver [None req-169b76a2-ac02-4823-80e9-e24273bba800 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Deleting instance files /var/lib/nova/instances/39a5933a-7591-4dd5-9113-5291a3eab7df_del#033[00m
Nov 29 03:39:59 np0005539564 nova_compute[226295]: 2025-11-29 08:39:59.290 226310 INFO nova.virt.libvirt.driver [None req-169b76a2-ac02-4823-80e9-e24273bba800 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Deletion of /var/lib/nova/instances/39a5933a-7591-4dd5-9113-5291a3eab7df_del complete#033[00m
Nov 29 03:39:59 np0005539564 nova_compute[226295]: 2025-11-29 08:39:59.370 226310 INFO nova.compute.manager [None req-169b76a2-ac02-4823-80e9-e24273bba800 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Took 1.56 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:39:59 np0005539564 nova_compute[226295]: 2025-11-29 08:39:59.371 226310 DEBUG oslo.service.loopingcall [None req-169b76a2-ac02-4823-80e9-e24273bba800 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:39:59 np0005539564 nova_compute[226295]: 2025-11-29 08:39:59.374 226310 DEBUG nova.compute.manager [-] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:39:59 np0005539564 nova_compute[226295]: 2025-11-29 08:39:59.375 226310 DEBUG nova.network.neutron [-] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:39:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:59 np0005539564 nova_compute[226295]: 2025-11-29 08:39:59.635 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:00.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:00 np0005539564 nova_compute[226295]: 2025-11-29 08:40:00.379 226310 DEBUG nova.compute.manager [req-53aa2e89-38dd-4666-a4ce-5b05080477f1 req-d4952360-fbd6-464a-9d62-9fdefe8775f2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Received event network-vif-deleted-1547f1e7-48a2-41c4-9536-2f17f8b068aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:00 np0005539564 nova_compute[226295]: 2025-11-29 08:40:00.380 226310 INFO nova.compute.manager [req-53aa2e89-38dd-4666-a4ce-5b05080477f1 req-d4952360-fbd6-464a-9d62-9fdefe8775f2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Neutron deleted interface 1547f1e7-48a2-41c4-9536-2f17f8b068aa; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:40:00 np0005539564 nova_compute[226295]: 2025-11-29 08:40:00.380 226310 DEBUG nova.network.neutron [req-53aa2e89-38dd-4666-a4ce-5b05080477f1 req-d4952360-fbd6-464a-9d62-9fdefe8775f2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:40:00 np0005539564 nova_compute[226295]: 2025-11-29 08:40:00.402 226310 DEBUG nova.network.neutron [-] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:40:00 np0005539564 nova_compute[226295]: 2025-11-29 08:40:00.465 226310 DEBUG nova.compute.manager [req-53aa2e89-38dd-4666-a4ce-5b05080477f1 req-d4952360-fbd6-464a-9d62-9fdefe8775f2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Detach interface failed, port_id=1547f1e7-48a2-41c4-9536-2f17f8b068aa, reason: Instance 39a5933a-7591-4dd5-9113-5291a3eab7df could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:40:00 np0005539564 nova_compute[226295]: 2025-11-29 08:40:00.466 226310 INFO nova.compute.manager [-] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Took 1.09 seconds to deallocate network for instance.#033[00m
Nov 29 03:40:00 np0005539564 nova_compute[226295]: 2025-11-29 08:40:00.478 226310 DEBUG nova.network.neutron [req-934ba086-b6da-4f1b-87db-9348f0d1b787 req-1eb127e7-55e9-4eb1-99b6-8fcaa97f80e8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Updated VIF entry in instance network info cache for port 1547f1e7-48a2-41c4-9536-2f17f8b068aa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:40:00 np0005539564 nova_compute[226295]: 2025-11-29 08:40:00.478 226310 DEBUG nova.network.neutron [req-934ba086-b6da-4f1b-87db-9348f0d1b787 req-1eb127e7-55e9-4eb1-99b6-8fcaa97f80e8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Updating instance_info_cache with network_info: [{"id": "1547f1e7-48a2-41c4-9536-2f17f8b068aa", "address": "fa:16:3e:fc:96:d5", "network": {"id": "4d70dafc-0a6e-49a8-a3c6-ba97b66b0bab", "bridge": "br-int", "label": "tempest-network-smoke--342297603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1547f1e7-48", "ovs_interfaceid": "1547f1e7-48a2-41c4-9536-2f17f8b068aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:40:00 np0005539564 nova_compute[226295]: 2025-11-29 08:40:00.574 226310 DEBUG nova.compute.manager [req-9ec611be-9d43-4046-a21c-9a139a3b3904 req-2ca14287-e3a7-4376-9148-6e560a833422 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Received event network-vif-plugged-1547f1e7-48a2-41c4-9536-2f17f8b068aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:00 np0005539564 nova_compute[226295]: 2025-11-29 08:40:00.574 226310 DEBUG oslo_concurrency.lockutils [req-9ec611be-9d43-4046-a21c-9a139a3b3904 req-2ca14287-e3a7-4376-9148-6e560a833422 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "39a5933a-7591-4dd5-9113-5291a3eab7df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:00 np0005539564 nova_compute[226295]: 2025-11-29 08:40:00.574 226310 DEBUG oslo_concurrency.lockutils [req-9ec611be-9d43-4046-a21c-9a139a3b3904 req-2ca14287-e3a7-4376-9148-6e560a833422 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "39a5933a-7591-4dd5-9113-5291a3eab7df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:00 np0005539564 nova_compute[226295]: 2025-11-29 08:40:00.575 226310 DEBUG oslo_concurrency.lockutils [req-9ec611be-9d43-4046-a21c-9a139a3b3904 req-2ca14287-e3a7-4376-9148-6e560a833422 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "39a5933a-7591-4dd5-9113-5291a3eab7df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:00 np0005539564 nova_compute[226295]: 2025-11-29 08:40:00.575 226310 DEBUG nova.compute.manager [req-9ec611be-9d43-4046-a21c-9a139a3b3904 req-2ca14287-e3a7-4376-9148-6e560a833422 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] No waiting events found dispatching network-vif-plugged-1547f1e7-48a2-41c4-9536-2f17f8b068aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:40:00 np0005539564 nova_compute[226295]: 2025-11-29 08:40:00.575 226310 WARNING nova.compute.manager [req-9ec611be-9d43-4046-a21c-9a139a3b3904 req-2ca14287-e3a7-4376-9148-6e560a833422 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Received unexpected event network-vif-plugged-1547f1e7-48a2-41c4-9536-2f17f8b068aa for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:40:00 np0005539564 nova_compute[226295]: 2025-11-29 08:40:00.690 226310 DEBUG oslo_concurrency.lockutils [req-934ba086-b6da-4f1b-87db-9348f0d1b787 req-1eb127e7-55e9-4eb1-99b6-8fcaa97f80e8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-39a5933a-7591-4dd5-9113-5291a3eab7df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:40:00 np0005539564 nova_compute[226295]: 2025-11-29 08:40:00.694 226310 DEBUG oslo_concurrency.lockutils [None req-169b76a2-ac02-4823-80e9-e24273bba800 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:00 np0005539564 nova_compute[226295]: 2025-11-29 08:40:00.694 226310 DEBUG oslo_concurrency.lockutils [None req-169b76a2-ac02-4823-80e9-e24273bba800 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:00 np0005539564 nova_compute[226295]: 2025-11-29 08:40:00.766 226310 DEBUG oslo_concurrency.processutils [None req-169b76a2-ac02-4823-80e9-e24273bba800 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:01.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:01 np0005539564 ceph-mon[81769]: overall HEALTH_OK
Nov 29 03:40:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:40:01 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2634890051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:40:01 np0005539564 nova_compute[226295]: 2025-11-29 08:40:01.251 226310 DEBUG oslo_concurrency.processutils [None req-169b76a2-ac02-4823-80e9-e24273bba800 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:01 np0005539564 nova_compute[226295]: 2025-11-29 08:40:01.259 226310 DEBUG nova.compute.provider_tree [None req-169b76a2-ac02-4823-80e9-e24273bba800 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:40:01 np0005539564 nova_compute[226295]: 2025-11-29 08:40:01.346 226310 DEBUG nova.scheduler.client.report [None req-169b76a2-ac02-4823-80e9-e24273bba800 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:40:01 np0005539564 nova_compute[226295]: 2025-11-29 08:40:01.465 226310 DEBUG oslo_concurrency.lockutils [None req-169b76a2-ac02-4823-80e9-e24273bba800 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:01 np0005539564 nova_compute[226295]: 2025-11-29 08:40:01.520 226310 INFO nova.scheduler.client.report [None req-169b76a2-ac02-4823-80e9-e24273bba800 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Deleted allocations for instance 39a5933a-7591-4dd5-9113-5291a3eab7df#033[00m
Nov 29 03:40:01 np0005539564 nova_compute[226295]: 2025-11-29 08:40:01.699 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:01 np0005539564 nova_compute[226295]: 2025-11-29 08:40:01.705 226310 DEBUG oslo_concurrency.lockutils [None req-169b76a2-ac02-4823-80e9-e24273bba800 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "39a5933a-7591-4dd5-9113-5291a3eab7df" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:02.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:03.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:03 np0005539564 nova_compute[226295]: 2025-11-29 08:40:03.083 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:40:03.755 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:40:03.755 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:40:03.756 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:04.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:04 np0005539564 nova_compute[226295]: 2025-11-29 08:40:04.637 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:05.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:06.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:07.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:07 np0005539564 nova_compute[226295]: 2025-11-29 08:40:07.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:07 np0005539564 nova_compute[226295]: 2025-11-29 08:40:07.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:40:07 np0005539564 nova_compute[226295]: 2025-11-29 08:40:07.694 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:07 np0005539564 nova_compute[226295]: 2025-11-29 08:40:07.811 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:08 np0005539564 nova_compute[226295]: 2025-11-29 08:40:08.085 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:08.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:08 np0005539564 nova_compute[226295]: 2025-11-29 08:40:08.336 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:09.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:09 np0005539564 nova_compute[226295]: 2025-11-29 08:40:09.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:09 np0005539564 nova_compute[226295]: 2025-11-29 08:40:09.641 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:10.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:11.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:12.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:12 np0005539564 nova_compute[226295]: 2025-11-29 08:40:12.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:12 np0005539564 nova_compute[226295]: 2025-11-29 08:40:12.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:40:12 np0005539564 nova_compute[226295]: 2025-11-29 08:40:12.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:40:12 np0005539564 nova_compute[226295]: 2025-11-29 08:40:12.414 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:40:12 np0005539564 nova_compute[226295]: 2025-11-29 08:40:12.415 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:13 np0005539564 nova_compute[226295]: 2025-11-29 08:40:13.051 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405598.048963, 39a5933a-7591-4dd5-9113-5291a3eab7df => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:40:13 np0005539564 nova_compute[226295]: 2025-11-29 08:40:13.051 226310 INFO nova.compute.manager [-] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:40:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:13.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:13 np0005539564 nova_compute[226295]: 2025-11-29 08:40:13.086 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:13 np0005539564 nova_compute[226295]: 2025-11-29 08:40:13.134 226310 DEBUG nova.compute.manager [None req-46c10de4-295e-4224-b9cd-6638efd5f0dc - - - - - -] [instance: 39a5933a-7591-4dd5-9113-5291a3eab7df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:40:13 np0005539564 nova_compute[226295]: 2025-11-29 08:40:13.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:14.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:14 np0005539564 nova_compute[226295]: 2025-11-29 08:40:14.680 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:15.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:15 np0005539564 podman[295452]: 2025-11-29 08:40:15.535479793 +0000 UTC m=+0.086400589 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 29 03:40:15 np0005539564 podman[295451]: 2025-11-29 08:40:15.546121251 +0000 UTC m=+0.101083126 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:40:15 np0005539564 podman[295450]: 2025-11-29 08:40:15.571028865 +0000 UTC m=+0.119199817 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller)
Nov 29 03:40:15 np0005539564 nova_compute[226295]: 2025-11-29 08:40:15.987 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:40:15.987 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:40:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:40:15.988 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:40:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:40:15.989 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:16.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:17.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:17 np0005539564 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 29 03:40:18 np0005539564 nova_compute[226295]: 2025-11-29 08:40:18.089 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:18.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:18 np0005539564 nova_compute[226295]: 2025-11-29 08:40:18.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:19.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:19 np0005539564 nova_compute[226295]: 2025-11-29 08:40:19.682 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:20.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:21 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:40:21 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:40:21 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:40:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:21.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:22.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:23 np0005539564 nova_compute[226295]: 2025-11-29 08:40:23.092 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:23.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:23 np0005539564 nova_compute[226295]: 2025-11-29 08:40:23.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:24.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:24 np0005539564 nova_compute[226295]: 2025-11-29 08:40:24.684 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:25.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:26.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:26 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:40:26 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:40:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:27.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:28 np0005539564 nova_compute[226295]: 2025-11-29 08:40:28.094 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:28.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:28 np0005539564 nova_compute[226295]: 2025-11-29 08:40:28.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:28 np0005539564 nova_compute[226295]: 2025-11-29 08:40:28.378 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:28 np0005539564 nova_compute[226295]: 2025-11-29 08:40:28.378 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:28 np0005539564 nova_compute[226295]: 2025-11-29 08:40:28.378 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:28 np0005539564 nova_compute[226295]: 2025-11-29 08:40:28.379 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:40:28 np0005539564 nova_compute[226295]: 2025-11-29 08:40:28.379 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:40:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2920682199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:40:28 np0005539564 nova_compute[226295]: 2025-11-29 08:40:28.834 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:29 np0005539564 nova_compute[226295]: 2025-11-29 08:40:29.025 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:40:29 np0005539564 nova_compute[226295]: 2025-11-29 08:40:29.027 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4246MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:40:29 np0005539564 nova_compute[226295]: 2025-11-29 08:40:29.027 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:29 np0005539564 nova_compute[226295]: 2025-11-29 08:40:29.028 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:29 np0005539564 nova_compute[226295]: 2025-11-29 08:40:29.108 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:40:29 np0005539564 nova_compute[226295]: 2025-11-29 08:40:29.108 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:40:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:40:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:29.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:40:29 np0005539564 nova_compute[226295]: 2025-11-29 08:40:29.123 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:40:29 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3367454599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:40:29 np0005539564 nova_compute[226295]: 2025-11-29 08:40:29.582 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:29 np0005539564 nova_compute[226295]: 2025-11-29 08:40:29.590 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:40:29 np0005539564 nova_compute[226295]: 2025-11-29 08:40:29.613 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:40:29 np0005539564 nova_compute[226295]: 2025-11-29 08:40:29.641 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:40:29 np0005539564 nova_compute[226295]: 2025-11-29 08:40:29.641 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:29 np0005539564 nova_compute[226295]: 2025-11-29 08:40:29.740 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:30.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:31.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:40:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:32.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:40:33 np0005539564 nova_compute[226295]: 2025-11-29 08:40:33.097 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:33.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:34.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:34 np0005539564 nova_compute[226295]: 2025-11-29 08:40:34.743 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:40:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:35.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:40:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:36.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:37.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:38 np0005539564 nova_compute[226295]: 2025-11-29 08:40:38.100 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:38.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:39.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:39 np0005539564 nova_compute[226295]: 2025-11-29 08:40:39.809 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:40.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:41.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:42.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:43 np0005539564 nova_compute[226295]: 2025-11-29 08:40:43.103 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:40:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:43.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:40:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:44.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:44 np0005539564 nova_compute[226295]: 2025-11-29 08:40:44.811 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:40:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:45.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:40:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:46.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:40:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 59K writes, 233K keys, 59K commit groups, 1.0 writes per commit group, ingest: 0.23 GB, 0.04 MB/s#012Cumulative WAL: 59K writes, 21K syncs, 2.74 writes per sync, written: 0.23 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5655 writes, 21K keys, 5655 commit groups, 1.0 writes per commit group, ingest: 21.79 MB, 0.04 MB/s#012Interval WAL: 5656 writes, 2297 syncs, 2.46 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:40:46 np0005539564 podman[295743]: 2025-11-29 08:40:46.536985986 +0000 UTC m=+0.074217140 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 
9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:40:46 np0005539564 podman[295742]: 2025-11-29 08:40:46.541993972 +0000 UTC m=+0.098636761 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:40:46 np0005539564 podman[295741]: 2025-11-29 08:40:46.561149709 +0000 UTC m=+0.119617397 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:40:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:47.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:48 np0005539564 nova_compute[226295]: 2025-11-29 08:40:48.105 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:48.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:49.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:49 np0005539564 nova_compute[226295]: 2025-11-29 08:40:49.816 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:40:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:50.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:40:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:51.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:52.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:53 np0005539564 nova_compute[226295]: 2025-11-29 08:40:53.108 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:40:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:53.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:40:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:54.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:54 np0005539564 nova_compute[226295]: 2025-11-29 08:40:54.862 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:40:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:55.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:40:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:56.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:57.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:57 np0005539564 ovn_controller[130591]: 2025-11-29T08:40:57Z|00736|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Nov 29 03:40:58 np0005539564 nova_compute[226295]: 2025-11-29 08:40:58.110 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:40:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:58.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:40:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:59.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:59 np0005539564 nova_compute[226295]: 2025-11-29 08:40:59.864 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:41:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:00.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:41:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:01.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:02.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:03 np0005539564 nova_compute[226295]: 2025-11-29 08:41:03.113 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:03.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:03.756 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:41:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:03.757 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:41:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:03.757 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:41:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:04.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:04 np0005539564 nova_compute[226295]: 2025-11-29 08:41:04.866 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:05.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:06.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:07.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:08 np0005539564 nova_compute[226295]: 2025-11-29 08:41:08.115 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:08.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:08 np0005539564 nova_compute[226295]: 2025-11-29 08:41:08.996 226310 DEBUG oslo_concurrency.lockutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "65caab7f-cddf-4a5b-a375-b4061715d559" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:41:08 np0005539564 nova_compute[226295]: 2025-11-29 08:41:08.997 226310 DEBUG oslo_concurrency.lockutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "65caab7f-cddf-4a5b-a375-b4061715d559" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:41:09 np0005539564 nova_compute[226295]: 2025-11-29 08:41:09.015 226310 DEBUG nova.compute.manager [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:41:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:09.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:09 np0005539564 nova_compute[226295]: 2025-11-29 08:41:09.208 226310 DEBUG oslo_concurrency.lockutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:41:09 np0005539564 nova_compute[226295]: 2025-11-29 08:41:09.209 226310 DEBUG oslo_concurrency.lockutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:41:09 np0005539564 nova_compute[226295]: 2025-11-29 08:41:09.220 226310 DEBUG nova.virt.hardware [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:41:09 np0005539564 nova_compute[226295]: 2025-11-29 08:41:09.220 226310 INFO nova.compute.claims [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Claim successful on node compute-1.ctlplane.example.com
Nov 29 03:41:09 np0005539564 nova_compute[226295]: 2025-11-29 08:41:09.355 226310 DEBUG oslo_concurrency.processutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:41:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:41:09 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/934405617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:41:09 np0005539564 nova_compute[226295]: 2025-11-29 08:41:09.842 226310 DEBUG oslo_concurrency.processutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:41:09 np0005539564 nova_compute[226295]: 2025-11-29 08:41:09.849 226310 DEBUG nova.compute.provider_tree [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:41:09 np0005539564 nova_compute[226295]: 2025-11-29 08:41:09.867 226310 DEBUG nova.scheduler.client.report [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:41:09 np0005539564 nova_compute[226295]: 2025-11-29 08:41:09.871 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:09 np0005539564 nova_compute[226295]: 2025-11-29 08:41:09.897 226310 DEBUG oslo_concurrency.lockutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:41:09 np0005539564 nova_compute[226295]: 2025-11-29 08:41:09.898 226310 DEBUG nova.compute.manager [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:41:09 np0005539564 nova_compute[226295]: 2025-11-29 08:41:09.974 226310 DEBUG nova.compute.manager [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:41:09 np0005539564 nova_compute[226295]: 2025-11-29 08:41:09.974 226310 DEBUG nova.network.neutron [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:41:10 np0005539564 nova_compute[226295]: 2025-11-29 08:41:10.000 226310 INFO nova.virt.libvirt.driver [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:41:10 np0005539564 nova_compute[226295]: 2025-11-29 08:41:10.022 226310 DEBUG nova.compute.manager [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:41:10 np0005539564 nova_compute[226295]: 2025-11-29 08:41:10.114 226310 DEBUG nova.compute.manager [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:41:10 np0005539564 nova_compute[226295]: 2025-11-29 08:41:10.116 226310 DEBUG nova.virt.libvirt.driver [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:41:10 np0005539564 nova_compute[226295]: 2025-11-29 08:41:10.117 226310 INFO nova.virt.libvirt.driver [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Creating image(s)
Nov 29 03:41:10 np0005539564 nova_compute[226295]: 2025-11-29 08:41:10.150 226310 DEBUG nova.storage.rbd_utils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 65caab7f-cddf-4a5b-a375-b4061715d559_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:41:10 np0005539564 nova_compute[226295]: 2025-11-29 08:41:10.183 226310 DEBUG nova.storage.rbd_utils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 65caab7f-cddf-4a5b-a375-b4061715d559_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:41:10 np0005539564 nova_compute[226295]: 2025-11-29 08:41:10.220 226310 DEBUG nova.storage.rbd_utils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 65caab7f-cddf-4a5b-a375-b4061715d559_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:41:10 np0005539564 nova_compute[226295]: 2025-11-29 08:41:10.225 226310 DEBUG oslo_concurrency.processutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:41:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:10.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:10 np0005539564 nova_compute[226295]: 2025-11-29 08:41:10.306 226310 DEBUG oslo_concurrency.processutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:41:10 np0005539564 nova_compute[226295]: 2025-11-29 08:41:10.307 226310 DEBUG oslo_concurrency.lockutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:41:10 np0005539564 nova_compute[226295]: 2025-11-29 08:41:10.308 226310 DEBUG oslo_concurrency.lockutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:41:10 np0005539564 nova_compute[226295]: 2025-11-29 08:41:10.308 226310 DEBUG oslo_concurrency.lockutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:41:10 np0005539564 nova_compute[226295]: 2025-11-29 08:41:10.347 226310 DEBUG nova.storage.rbd_utils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 65caab7f-cddf-4a5b-a375-b4061715d559_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:41:10 np0005539564 nova_compute[226295]: 2025-11-29 08:41:10.352 226310 DEBUG oslo_concurrency.processutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 65caab7f-cddf-4a5b-a375-b4061715d559_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:41:10 np0005539564 nova_compute[226295]: 2025-11-29 08:41:10.641 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:41:10 np0005539564 nova_compute[226295]: 2025-11-29 08:41:10.642 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:41:10 np0005539564 nova_compute[226295]: 2025-11-29 08:41:10.642 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:41:10 np0005539564 nova_compute[226295]: 2025-11-29 08:41:10.642 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:41:10 np0005539564 nova_compute[226295]: 2025-11-29 08:41:10.668 226310 DEBUG oslo_concurrency.processutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 65caab7f-cddf-4a5b-a375-b4061715d559_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:41:10 np0005539564 nova_compute[226295]: 2025-11-29 08:41:10.743 226310 DEBUG nova.storage.rbd_utils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] resizing rbd image 65caab7f-cddf-4a5b-a375-b4061715d559_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:41:10 np0005539564 nova_compute[226295]: 2025-11-29 08:41:10.859 226310 DEBUG nova.objects.instance [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lazy-loading 'migration_context' on Instance uuid 65caab7f-cddf-4a5b-a375-b4061715d559 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:41:10 np0005539564 nova_compute[226295]: 2025-11-29 08:41:10.883 226310 DEBUG nova.virt.libvirt.driver [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:41:10 np0005539564 nova_compute[226295]: 2025-11-29 08:41:10.884 226310 DEBUG nova.virt.libvirt.driver [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Ensure instance console log exists: /var/lib/nova/instances/65caab7f-cddf-4a5b-a375-b4061715d559/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:41:10 np0005539564 nova_compute[226295]: 2025-11-29 08:41:10.884 226310 DEBUG oslo_concurrency.lockutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:41:10 np0005539564 nova_compute[226295]: 2025-11-29 08:41:10.885 226310 DEBUG oslo_concurrency.lockutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:41:10 np0005539564 nova_compute[226295]: 2025-11-29 08:41:10.885 226310 DEBUG oslo_concurrency.lockutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:41:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:11.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:11 np0005539564 nova_compute[226295]: 2025-11-29 08:41:11.378 226310 DEBUG nova.policy [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a45da8ed818144f8bd6e00d233fcb5d2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '03858b11000d4b57bd3659c3083eed47', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:41:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:12.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:13.021 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=70, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=69) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:41:13 np0005539564 nova_compute[226295]: 2025-11-29 08:41:13.022 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:13.023 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:41:13 np0005539564 nova_compute[226295]: 2025-11-29 08:41:13.117 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:13.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:13 np0005539564 nova_compute[226295]: 2025-11-29 08:41:13.199 226310 DEBUG nova.network.neutron [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Successfully created port: 42c94438-186e-452b-9cac-ae8a01b2d281 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:41:13 np0005539564 nova_compute[226295]: 2025-11-29 08:41:13.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:41:13 np0005539564 nova_compute[226295]: 2025-11-29 08:41:13.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:41:13 np0005539564 nova_compute[226295]: 2025-11-29 08:41:13.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:41:13 np0005539564 nova_compute[226295]: 2025-11-29 08:41:13.411 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 03:41:13 np0005539564 nova_compute[226295]: 2025-11-29 08:41:13.411 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 03:41:13 np0005539564 nova_compute[226295]: 2025-11-29 08:41:13.412 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:41:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:14.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:14 np0005539564 nova_compute[226295]: 2025-11-29 08:41:14.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:14 np0005539564 nova_compute[226295]: 2025-11-29 08:41:14.872 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:15.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:15 np0005539564 nova_compute[226295]: 2025-11-29 08:41:15.921 226310 DEBUG nova.network.neutron [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Successfully updated port: 42c94438-186e-452b-9cac-ae8a01b2d281 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:41:15 np0005539564 nova_compute[226295]: 2025-11-29 08:41:15.950 226310 DEBUG oslo_concurrency.lockutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "refresh_cache-65caab7f-cddf-4a5b-a375-b4061715d559" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:41:15 np0005539564 nova_compute[226295]: 2025-11-29 08:41:15.951 226310 DEBUG oslo_concurrency.lockutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquired lock "refresh_cache-65caab7f-cddf-4a5b-a375-b4061715d559" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:41:15 np0005539564 nova_compute[226295]: 2025-11-29 08:41:15.951 226310 DEBUG nova.network.neutron [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:41:16 np0005539564 nova_compute[226295]: 2025-11-29 08:41:16.241 226310 DEBUG nova.network.neutron [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:41:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:16.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:16 np0005539564 nova_compute[226295]: 2025-11-29 08:41:16.307 226310 DEBUG nova.compute.manager [req-3d05e24e-8d77-4c91-8516-5042c0c13bfe req-40fbc96e-e7dd-4516-af2a-85101c5b8462 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Received event network-changed-42c94438-186e-452b-9cac-ae8a01b2d281 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:16 np0005539564 nova_compute[226295]: 2025-11-29 08:41:16.307 226310 DEBUG nova.compute.manager [req-3d05e24e-8d77-4c91-8516-5042c0c13bfe req-40fbc96e-e7dd-4516-af2a-85101c5b8462 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Refreshing instance network info cache due to event network-changed-42c94438-186e-452b-9cac-ae8a01b2d281. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:41:16 np0005539564 nova_compute[226295]: 2025-11-29 08:41:16.308 226310 DEBUG oslo_concurrency.lockutils [req-3d05e24e-8d77-4c91-8516-5042c0c13bfe req-40fbc96e-e7dd-4516-af2a-85101c5b8462 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-65caab7f-cddf-4a5b-a375-b4061715d559" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:41:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:41:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:17.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:41:17 np0005539564 nova_compute[226295]: 2025-11-29 08:41:17.261 226310 DEBUG nova.network.neutron [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Updating instance_info_cache with network_info: [{"id": "42c94438-186e-452b-9cac-ae8a01b2d281", "address": "fa:16:3e:78:0e:46", "network": {"id": "827003dc-22a3-46f9-a129-d0a62483494f", "bridge": "br-int", "label": "tempest-network-smoke--1818078440", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42c94438-18", "ovs_interfaceid": "42c94438-186e-452b-9cac-ae8a01b2d281", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:17 np0005539564 nova_compute[226295]: 2025-11-29 08:41:17.288 226310 DEBUG oslo_concurrency.lockutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Releasing lock "refresh_cache-65caab7f-cddf-4a5b-a375-b4061715d559" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:41:17 np0005539564 nova_compute[226295]: 2025-11-29 08:41:17.288 226310 DEBUG nova.compute.manager [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Instance network_info: |[{"id": "42c94438-186e-452b-9cac-ae8a01b2d281", "address": "fa:16:3e:78:0e:46", "network": {"id": "827003dc-22a3-46f9-a129-d0a62483494f", "bridge": "br-int", "label": "tempest-network-smoke--1818078440", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42c94438-18", "ovs_interfaceid": "42c94438-186e-452b-9cac-ae8a01b2d281", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:41:17 np0005539564 nova_compute[226295]: 2025-11-29 08:41:17.289 226310 DEBUG oslo_concurrency.lockutils [req-3d05e24e-8d77-4c91-8516-5042c0c13bfe req-40fbc96e-e7dd-4516-af2a-85101c5b8462 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-65caab7f-cddf-4a5b-a375-b4061715d559" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:41:17 np0005539564 nova_compute[226295]: 2025-11-29 08:41:17.289 226310 DEBUG nova.network.neutron [req-3d05e24e-8d77-4c91-8516-5042c0c13bfe req-40fbc96e-e7dd-4516-af2a-85101c5b8462 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Refreshing network info cache for port 42c94438-186e-452b-9cac-ae8a01b2d281 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:41:17 np0005539564 nova_compute[226295]: 2025-11-29 08:41:17.294 226310 DEBUG nova.virt.libvirt.driver [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Start _get_guest_xml network_info=[{"id": "42c94438-186e-452b-9cac-ae8a01b2d281", "address": "fa:16:3e:78:0e:46", "network": {"id": "827003dc-22a3-46f9-a129-d0a62483494f", "bridge": "br-int", "label": "tempest-network-smoke--1818078440", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42c94438-18", "ovs_interfaceid": "42c94438-186e-452b-9cac-ae8a01b2d281", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:41:17 np0005539564 nova_compute[226295]: 2025-11-29 08:41:17.300 226310 WARNING nova.virt.libvirt.driver [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:41:17 np0005539564 nova_compute[226295]: 2025-11-29 08:41:17.305 226310 DEBUG nova.virt.libvirt.host [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:41:17 np0005539564 nova_compute[226295]: 2025-11-29 08:41:17.307 226310 DEBUG nova.virt.libvirt.host [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:41:17 np0005539564 nova_compute[226295]: 2025-11-29 08:41:17.320 226310 DEBUG nova.virt.libvirt.host [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:41:17 np0005539564 nova_compute[226295]: 2025-11-29 08:41:17.320 226310 DEBUG nova.virt.libvirt.host [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:41:17 np0005539564 nova_compute[226295]: 2025-11-29 08:41:17.323 226310 DEBUG nova.virt.libvirt.driver [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:41:17 np0005539564 nova_compute[226295]: 2025-11-29 08:41:17.323 226310 DEBUG nova.virt.hardware [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:41:17 np0005539564 nova_compute[226295]: 2025-11-29 08:41:17.324 226310 DEBUG nova.virt.hardware [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:41:17 np0005539564 nova_compute[226295]: 2025-11-29 08:41:17.325 226310 DEBUG nova.virt.hardware [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:41:17 np0005539564 nova_compute[226295]: 2025-11-29 08:41:17.325 226310 DEBUG nova.virt.hardware [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:41:17 np0005539564 nova_compute[226295]: 2025-11-29 08:41:17.326 226310 DEBUG nova.virt.hardware [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:41:17 np0005539564 nova_compute[226295]: 2025-11-29 08:41:17.326 226310 DEBUG nova.virt.hardware [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:41:17 np0005539564 nova_compute[226295]: 2025-11-29 08:41:17.327 226310 DEBUG nova.virt.hardware [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:41:17 np0005539564 nova_compute[226295]: 2025-11-29 08:41:17.327 226310 DEBUG nova.virt.hardware [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:41:17 np0005539564 nova_compute[226295]: 2025-11-29 08:41:17.328 226310 DEBUG nova.virt.hardware [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:41:17 np0005539564 nova_compute[226295]: 2025-11-29 08:41:17.328 226310 DEBUG nova.virt.hardware [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:41:17 np0005539564 nova_compute[226295]: 2025-11-29 08:41:17.329 226310 DEBUG nova.virt.hardware [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:41:17 np0005539564 nova_compute[226295]: 2025-11-29 08:41:17.333 226310 DEBUG oslo_concurrency.processutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:17 np0005539564 podman[295992]: 2025-11-29 08:41:17.527702651 +0000 UTC m=+0.061943087 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent)
Nov 29 03:41:17 np0005539564 podman[295991]: 2025-11-29 08:41:17.571137396 +0000 UTC m=+0.110566892 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:41:17 np0005539564 podman[295990]: 2025-11-29 08:41:17.58939709 +0000 UTC m=+0.143013840 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 03:41:17 np0005539564 nova_compute[226295]: 2025-11-29 08:41:17.802 226310 DEBUG oslo_concurrency.processutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:17 np0005539564 nova_compute[226295]: 2025-11-29 08:41:17.842 226310 DEBUG nova.storage.rbd_utils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 65caab7f-cddf-4a5b-a375-b4061715d559_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:17 np0005539564 nova_compute[226295]: 2025-11-29 08:41:17.848 226310 DEBUG oslo_concurrency.processutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.119 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:18.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:41:18 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4041124736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.364 226310 DEBUG oslo_concurrency.processutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.366 226310 DEBUG nova.virt.libvirt.vif [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:41:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-gen-0-421909352',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-gen-0-421909352',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1086021155-ge',id=185,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFQ0IGLlmWlZCle9EddGxHLVG7ufF3edQYwErIQ1TJ96Vu81GarffsTWuyneOiDobC15RVtiledCIVuVcZfzO9HG6ZatPDqO+3jJaa7cGLLMtn1tpBiBIHt3tTH8PwZIOw==',key_name='tempest-TestSecurityGroupsBasicOps-2076105880',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='03858b11000d4b57bd3659c3083eed47',ramdisk_id='',reservation_id='r-jmgex6nv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1086021155',owner_user_name='tempest-TestSecurityGroupsBasicOps-1086021155-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:41:10Z,user_data=None,user_id='a45da8ed818144f8bd6e00d233fcb5d2',uuid=65caab7f-cddf-4a5b-a375-b4061715d559,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "42c94438-186e-452b-9cac-ae8a01b2d281", "address": "fa:16:3e:78:0e:46", "network": {"id": "827003dc-22a3-46f9-a129-d0a62483494f", "bridge": "br-int", "label": "tempest-network-smoke--1818078440", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42c94438-18", "ovs_interfaceid": "42c94438-186e-452b-9cac-ae8a01b2d281", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.366 226310 DEBUG nova.network.os_vif_util [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converting VIF {"id": "42c94438-186e-452b-9cac-ae8a01b2d281", "address": "fa:16:3e:78:0e:46", "network": {"id": "827003dc-22a3-46f9-a129-d0a62483494f", "bridge": "br-int", "label": "tempest-network-smoke--1818078440", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42c94438-18", "ovs_interfaceid": "42c94438-186e-452b-9cac-ae8a01b2d281", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.367 226310 DEBUG nova.network.os_vif_util [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:0e:46,bridge_name='br-int',has_traffic_filtering=True,id=42c94438-186e-452b-9cac-ae8a01b2d281,network=Network(827003dc-22a3-46f9-a129-d0a62483494f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42c94438-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.369 226310 DEBUG nova.objects.instance [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lazy-loading 'pci_devices' on Instance uuid 65caab7f-cddf-4a5b-a375-b4061715d559 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.390 226310 DEBUG nova.virt.libvirt.driver [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:41:18 np0005539564 nova_compute[226295]:  <uuid>65caab7f-cddf-4a5b-a375-b4061715d559</uuid>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:  <name>instance-000000b9</name>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-gen-0-421909352</nova:name>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:41:17</nova:creationTime>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:41:18 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:        <nova:user uuid="a45da8ed818144f8bd6e00d233fcb5d2">tempest-TestSecurityGroupsBasicOps-1086021155-project-member</nova:user>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:        <nova:project uuid="03858b11000d4b57bd3659c3083eed47">tempest-TestSecurityGroupsBasicOps-1086021155</nova:project>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:        <nova:port uuid="42c94438-186e-452b-9cac-ae8a01b2d281">
Nov 29 03:41:18 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <entry name="serial">65caab7f-cddf-4a5b-a375-b4061715d559</entry>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <entry name="uuid">65caab7f-cddf-4a5b-a375-b4061715d559</entry>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/65caab7f-cddf-4a5b-a375-b4061715d559_disk">
Nov 29 03:41:18 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:41:18 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/65caab7f-cddf-4a5b-a375-b4061715d559_disk.config">
Nov 29 03:41:18 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:41:18 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:78:0e:46"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <target dev="tap42c94438-18"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/65caab7f-cddf-4a5b-a375-b4061715d559/console.log" append="off"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:41:18 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:41:18 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:41:18 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:41:18 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.392 226310 DEBUG nova.compute.manager [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Preparing to wait for external event network-vif-plugged-42c94438-186e-452b-9cac-ae8a01b2d281 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.392 226310 DEBUG oslo_concurrency.lockutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "65caab7f-cddf-4a5b-a375-b4061715d559-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.393 226310 DEBUG oslo_concurrency.lockutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "65caab7f-cddf-4a5b-a375-b4061715d559-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.393 226310 DEBUG oslo_concurrency.lockutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "65caab7f-cddf-4a5b-a375-b4061715d559-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.394 226310 DEBUG nova.virt.libvirt.vif [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:41:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-gen-0-421909352',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-gen-0-421909352',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1086021155-ge',id=185,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFQ0IGLlmWlZCle9EddGxHLVG7ufF3edQYwErIQ1TJ96Vu81GarffsTWuyneOiDobC15RVtiledCIVuVcZfzO9HG6ZatPDqO+3jJaa7cGLLMtn1tpBiBIHt3tTH8PwZIOw==',key_name='tempest-TestSecurityGroupsBasicOps-2076105880',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='03858b11000d4b57bd3659c3083eed47',ramdisk_id='',reservation_id='r-jmgex6nv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1086021155',owner_user_name='tempest-TestSecurityGroupsBasicOps-1086021155-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:41:10Z,user_data=None,user_id='a45da8ed818144f8bd6e00d233fcb5d2',uuid=65caab7f-cddf-4a5b-a375-b4061715d559,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "42c94438-186e-452b-9cac-ae8a01b2d281", "address": "fa:16:3e:78:0e:46", "network": {"id": "827003dc-22a3-46f9-a129-d0a62483494f", "bridge": "br-int", "label": "tempest-network-smoke--1818078440", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42c94438-18", "ovs_interfaceid": "42c94438-186e-452b-9cac-ae8a01b2d281", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.395 226310 DEBUG nova.network.os_vif_util [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converting VIF {"id": "42c94438-186e-452b-9cac-ae8a01b2d281", "address": "fa:16:3e:78:0e:46", "network": {"id": "827003dc-22a3-46f9-a129-d0a62483494f", "bridge": "br-int", "label": "tempest-network-smoke--1818078440", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42c94438-18", "ovs_interfaceid": "42c94438-186e-452b-9cac-ae8a01b2d281", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.395 226310 DEBUG nova.network.os_vif_util [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:0e:46,bridge_name='br-int',has_traffic_filtering=True,id=42c94438-186e-452b-9cac-ae8a01b2d281,network=Network(827003dc-22a3-46f9-a129-d0a62483494f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42c94438-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.396 226310 DEBUG os_vif [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:0e:46,bridge_name='br-int',has_traffic_filtering=True,id=42c94438-186e-452b-9cac-ae8a01b2d281,network=Network(827003dc-22a3-46f9-a129-d0a62483494f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42c94438-18') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.397 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.397 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.398 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.403 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.403 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap42c94438-18, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.404 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap42c94438-18, col_values=(('external_ids', {'iface-id': '42c94438-186e-452b-9cac-ae8a01b2d281', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:0e:46', 'vm-uuid': '65caab7f-cddf-4a5b-a375-b4061715d559'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.406 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:18 np0005539564 NetworkManager[48997]: <info>  [1764405678.4069] manager: (tap42c94438-18): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/336)
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.409 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.415 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.416 226310 INFO os_vif [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:0e:46,bridge_name='br-int',has_traffic_filtering=True,id=42c94438-186e-452b-9cac-ae8a01b2d281,network=Network(827003dc-22a3-46f9-a129-d0a62483494f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42c94438-18')#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.481 226310 DEBUG nova.virt.libvirt.driver [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.481 226310 DEBUG nova.virt.libvirt.driver [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.482 226310 DEBUG nova.virt.libvirt.driver [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] No VIF found with MAC fa:16:3e:78:0e:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.482 226310 INFO nova.virt.libvirt.driver [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Using config drive#033[00m
Nov 29 03:41:18 np0005539564 nova_compute[226295]: 2025-11-29 08:41:18.510 226310 DEBUG nova.storage.rbd_utils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 65caab7f-cddf-4a5b-a375-b4061715d559_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:19.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:19 np0005539564 nova_compute[226295]: 2025-11-29 08:41:19.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:19 np0005539564 nova_compute[226295]: 2025-11-29 08:41:19.873 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:20.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:20 np0005539564 nova_compute[226295]: 2025-11-29 08:41:20.476 226310 INFO nova.virt.libvirt.driver [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Creating config drive at /var/lib/nova/instances/65caab7f-cddf-4a5b-a375-b4061715d559/disk.config#033[00m
Nov 29 03:41:20 np0005539564 nova_compute[226295]: 2025-11-29 08:41:20.483 226310 DEBUG oslo_concurrency.processutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/65caab7f-cddf-4a5b-a375-b4061715d559/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_l99ulqi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:20 np0005539564 nova_compute[226295]: 2025-11-29 08:41:20.642 226310 DEBUG oslo_concurrency.processutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/65caab7f-cddf-4a5b-a375-b4061715d559/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_l99ulqi" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:20 np0005539564 nova_compute[226295]: 2025-11-29 08:41:20.672 226310 DEBUG nova.storage.rbd_utils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 65caab7f-cddf-4a5b-a375-b4061715d559_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:20 np0005539564 nova_compute[226295]: 2025-11-29 08:41:20.677 226310 DEBUG oslo_concurrency.processutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/65caab7f-cddf-4a5b-a375-b4061715d559/disk.config 65caab7f-cddf-4a5b-a375-b4061715d559_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:20 np0005539564 nova_compute[226295]: 2025-11-29 08:41:20.904 226310 DEBUG oslo_concurrency.processutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/65caab7f-cddf-4a5b-a375-b4061715d559/disk.config 65caab7f-cddf-4a5b-a375-b4061715d559_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.228s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:20 np0005539564 nova_compute[226295]: 2025-11-29 08:41:20.905 226310 INFO nova.virt.libvirt.driver [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Deleting local config drive /var/lib/nova/instances/65caab7f-cddf-4a5b-a375-b4061715d559/disk.config because it was imported into RBD.#033[00m
Nov 29 03:41:20 np0005539564 kernel: tap42c94438-18: entered promiscuous mode
Nov 29 03:41:21 np0005539564 NetworkManager[48997]: <info>  [1764405680.9751] manager: (tap42c94438-18): new Tun device (/org/freedesktop/NetworkManager/Devices/337)
Nov 29 03:41:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:41:21Z|00737|binding|INFO|Claiming lport 42c94438-186e-452b-9cac-ae8a01b2d281 for this chassis.
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.041 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '70'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:41:21Z|00738|binding|INFO|42c94438-186e-452b-9cac-ae8a01b2d281: Claiming fa:16:3e:78:0e:46 10.100.0.14
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.042 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.043 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.049 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.053 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.055 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:21 np0005539564 NetworkManager[48997]: <info>  [1764405681.0566] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/338)
Nov 29 03:41:21 np0005539564 NetworkManager[48997]: <info>  [1764405681.0578] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/339)
Nov 29 03:41:21 np0005539564 systemd-udevd[296186]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.072 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:0e:46 10.100.0.14'], port_security=['fa:16:3e:78:0e:46 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '65caab7f-cddf-4a5b-a375-b4061715d559', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-827003dc-22a3-46f9-a129-d0a62483494f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '03858b11000d4b57bd3659c3083eed47', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd927c03d-544f-4cb2-a70a-354249bd42e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21c3b4de-2976-48eb-9ac4-77655b9836b0, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=42c94438-186e-452b-9cac-ae8a01b2d281) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:41:21 np0005539564 systemd-machined[190128]: New machine qemu-87-instance-000000b9.
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.073 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 42c94438-186e-452b-9cac-ae8a01b2d281 in datapath 827003dc-22a3-46f9-a129-d0a62483494f bound to our chassis#033[00m
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.074 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 827003dc-22a3-46f9-a129-d0a62483494f#033[00m
Nov 29 03:41:21 np0005539564 NetworkManager[48997]: <info>  [1764405681.0854] device (tap42c94438-18): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.084 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[09795381-ed3b-497a-ab74-03b23169c864]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.085 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap827003dc-21 in ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:41:21 np0005539564 NetworkManager[48997]: <info>  [1764405681.0861] device (tap42c94438-18): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.087 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap827003dc-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.087 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6e9678cc-8253-4079-8d6a-df222950bcfe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.088 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[96b4c3b2-63a9-4f7e-b135-2d3480ef47ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.101 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[362497a3-1ad9-497d-b0c8-a544369110c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:21 np0005539564 systemd[1]: Started Virtual Machine qemu-87-instance-000000b9.
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.126 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a8b3e7d9-16fc-4b8f-83f9-a7d750c26a5d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.171 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.160 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[b0927be2-9487-47a0-8f03-3dfe1ad5757a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.181 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:21 np0005539564 systemd-udevd[296190]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:41:21 np0005539564 NetworkManager[48997]: <info>  [1764405681.1913] manager: (tap827003dc-20): new Veth device (/org/freedesktop/NetworkManager/Devices/340)
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.191 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[709bdaba-c9fe-4220-ba96-7d1210640f84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:41:21Z|00739|binding|INFO|Setting lport 42c94438-186e-452b-9cac-ae8a01b2d281 ovn-installed in OVS
Nov 29 03:41:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:41:21Z|00740|binding|INFO|Setting lport 42c94438-186e-452b-9cac-ae8a01b2d281 up in Southbound
Nov 29 03:41:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:21.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.194 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.229 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[bb6a5999-3a14-4964-92d8-14dbfa48a751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.232 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[1efed11b-1bbc-4039-9480-32afb2a0d1ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:21 np0005539564 NetworkManager[48997]: <info>  [1764405681.2621] device (tap827003dc-20): carrier: link connected
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.268 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[b132e0d9-fb4b-4e4d-9d02-a6389b92fe4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.285 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1788b72f-3a8a-44a3-abd0-50ff50bd891d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap827003dc-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:29:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 223], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 865594, 'reachable_time': 26948, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296220, 'error': None, 'target': 'ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.310 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ce400e5c-2c9a-42f6-b912-3336cd6bf727]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea9:29a1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 865594, 'tstamp': 865594}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296221, 'error': None, 'target': 'ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.333 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c3a5dd59-d222-49f8-beca-de4012a0d169]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap827003dc-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:29:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 223], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 865594, 'reachable_time': 26948, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296222, 'error': None, 'target': 'ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.368 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ac609973-9032-4cd9-96ee-1db08a6848ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.445 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f53a7538-2762-4360-94c8-13b4e2e168ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.449 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap827003dc-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.450 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.451 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap827003dc-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:21 np0005539564 NetworkManager[48997]: <info>  [1764405681.4539] manager: (tap827003dc-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/341)
Nov 29 03:41:21 np0005539564 kernel: tap827003dc-20: entered promiscuous mode
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.453 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.458 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap827003dc-20, col_values=(('external_ids', {'iface-id': 'f6d5504f-44ca-4e58-bb7a-73fad975c4be'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:41:21Z|00741|binding|INFO|Releasing lport f6d5504f-44ca-4e58-bb7a-73fad975c4be from this chassis (sb_readonly=0)
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.460 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.461 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/827003dc-22a3-46f9-a129-d0a62483494f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/827003dc-22a3-46f9-a129-d0a62483494f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.462 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bb5fca01-ae15-4981-a0c3-ecba99c5ec4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.463 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-827003dc-22a3-46f9-a129-d0a62483494f
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/827003dc-22a3-46f9-a129-d0a62483494f.pid.haproxy
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 827003dc-22a3-46f9-a129-d0a62483494f
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:41:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:21.464 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f', 'env', 'PROCESS_TAG=haproxy-827003dc-22a3-46f9-a129-d0a62483494f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/827003dc-22a3-46f9-a129-d0a62483494f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.473 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.575 226310 DEBUG nova.network.neutron [req-3d05e24e-8d77-4c91-8516-5042c0c13bfe req-40fbc96e-e7dd-4516-af2a-85101c5b8462 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Updated VIF entry in instance network info cache for port 42c94438-186e-452b-9cac-ae8a01b2d281. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.577 226310 DEBUG nova.network.neutron [req-3d05e24e-8d77-4c91-8516-5042c0c13bfe req-40fbc96e-e7dd-4516-af2a-85101c5b8462 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Updating instance_info_cache with network_info: [{"id": "42c94438-186e-452b-9cac-ae8a01b2d281", "address": "fa:16:3e:78:0e:46", "network": {"id": "827003dc-22a3-46f9-a129-d0a62483494f", "bridge": "br-int", "label": "tempest-network-smoke--1818078440", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42c94438-18", "ovs_interfaceid": "42c94438-186e-452b-9cac-ae8a01b2d281", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.605 226310 DEBUG oslo_concurrency.lockutils [req-3d05e24e-8d77-4c91-8516-5042c0c13bfe req-40fbc96e-e7dd-4516-af2a-85101c5b8462 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-65caab7f-cddf-4a5b-a375-b4061715d559" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.763 226310 DEBUG nova.compute.manager [req-3532a528-ff15-47c1-b5c7-b560995dff0c req-be08ad4a-4f77-4708-a455-0fff6290fdf1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Received event network-vif-plugged-42c94438-186e-452b-9cac-ae8a01b2d281 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.764 226310 DEBUG oslo_concurrency.lockutils [req-3532a528-ff15-47c1-b5c7-b560995dff0c req-be08ad4a-4f77-4708-a455-0fff6290fdf1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "65caab7f-cddf-4a5b-a375-b4061715d559-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.765 226310 DEBUG oslo_concurrency.lockutils [req-3532a528-ff15-47c1-b5c7-b560995dff0c req-be08ad4a-4f77-4708-a455-0fff6290fdf1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "65caab7f-cddf-4a5b-a375-b4061715d559-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.766 226310 DEBUG oslo_concurrency.lockutils [req-3532a528-ff15-47c1-b5c7-b560995dff0c req-be08ad4a-4f77-4708-a455-0fff6290fdf1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "65caab7f-cddf-4a5b-a375-b4061715d559-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.766 226310 DEBUG nova.compute.manager [req-3532a528-ff15-47c1-b5c7-b560995dff0c req-be08ad4a-4f77-4708-a455-0fff6290fdf1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Processing event network-vif-plugged-42c94438-186e-452b-9cac-ae8a01b2d281 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.778 226310 DEBUG nova.compute.manager [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.779 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405681.7775826, 65caab7f-cddf-4a5b-a375-b4061715d559 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.780 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] VM Started (Lifecycle Event)#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.786 226310 DEBUG nova.virt.libvirt.driver [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.791 226310 INFO nova.virt.libvirt.driver [-] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Instance spawned successfully.#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.791 226310 DEBUG nova.virt.libvirt.driver [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.812 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.821 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.825 226310 DEBUG nova.virt.libvirt.driver [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.826 226310 DEBUG nova.virt.libvirt.driver [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.826 226310 DEBUG nova.virt.libvirt.driver [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.826 226310 DEBUG nova.virt.libvirt.driver [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.827 226310 DEBUG nova.virt.libvirt.driver [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.827 226310 DEBUG nova.virt.libvirt.driver [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.854 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.854 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405681.7790291, 65caab7f-cddf-4a5b-a375-b4061715d559 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.854 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.882 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.886 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405681.785491, 65caab7f-cddf-4a5b-a375-b4061715d559 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.886 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.894 226310 INFO nova.compute.manager [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Took 11.78 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.894 226310 DEBUG nova.compute.manager [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.906 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:41:21 np0005539564 podman[296296]: 2025-11-29 08:41:21.908248513 +0000 UTC m=+0.055407481 container create 6e23246616b4605f79a59afd4f9858810411d747fddccd1af28ecc27ec41d16b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.908 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.945 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:41:21 np0005539564 systemd[1]: Started libpod-conmon-6e23246616b4605f79a59afd4f9858810411d747fddccd1af28ecc27ec41d16b.scope.
Nov 29 03:41:21 np0005539564 podman[296296]: 2025-11-29 08:41:21.880668976 +0000 UTC m=+0.027827964 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:41:21 np0005539564 nova_compute[226295]: 2025-11-29 08:41:21.979 226310 INFO nova.compute.manager [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Took 12.90 seconds to build instance.#033[00m
Nov 29 03:41:21 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:41:21 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d65306cd20db7a866417e48df0c3363dff518c5e7137ef76822b897bbd0e73/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:22 np0005539564 nova_compute[226295]: 2025-11-29 08:41:22.002 226310 DEBUG oslo_concurrency.lockutils [None req-776a5cb0-b71c-4aa1-af18-2623ae3f616b a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "65caab7f-cddf-4a5b-a375-b4061715d559" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:22 np0005539564 podman[296296]: 2025-11-29 08:41:22.007876028 +0000 UTC m=+0.155034996 container init 6e23246616b4605f79a59afd4f9858810411d747fddccd1af28ecc27ec41d16b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:41:22 np0005539564 podman[296296]: 2025-11-29 08:41:22.014507217 +0000 UTC m=+0.161666185 container start 6e23246616b4605f79a59afd4f9858810411d747fddccd1af28ecc27ec41d16b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:41:22 np0005539564 neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f[296312]: [NOTICE]   (296316) : New worker (296318) forked
Nov 29 03:41:22 np0005539564 neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f[296312]: [NOTICE]   (296316) : Loading success.
Nov 29 03:41:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:22.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:23.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:23 np0005539564 nova_compute[226295]: 2025-11-29 08:41:23.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:23 np0005539564 nova_compute[226295]: 2025-11-29 08:41:23.456 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:23 np0005539564 nova_compute[226295]: 2025-11-29 08:41:23.921 226310 DEBUG nova.compute.manager [req-38966d7a-6bcb-443d-8bf3-e30e70ec6af2 req-97961c7b-9161-4ff1-b870-7fb807e1f089 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Received event network-vif-plugged-42c94438-186e-452b-9cac-ae8a01b2d281 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:23 np0005539564 nova_compute[226295]: 2025-11-29 08:41:23.922 226310 DEBUG oslo_concurrency.lockutils [req-38966d7a-6bcb-443d-8bf3-e30e70ec6af2 req-97961c7b-9161-4ff1-b870-7fb807e1f089 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "65caab7f-cddf-4a5b-a375-b4061715d559-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:23 np0005539564 nova_compute[226295]: 2025-11-29 08:41:23.924 226310 DEBUG oslo_concurrency.lockutils [req-38966d7a-6bcb-443d-8bf3-e30e70ec6af2 req-97961c7b-9161-4ff1-b870-7fb807e1f089 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "65caab7f-cddf-4a5b-a375-b4061715d559-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:23 np0005539564 nova_compute[226295]: 2025-11-29 08:41:23.924 226310 DEBUG oslo_concurrency.lockutils [req-38966d7a-6bcb-443d-8bf3-e30e70ec6af2 req-97961c7b-9161-4ff1-b870-7fb807e1f089 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "65caab7f-cddf-4a5b-a375-b4061715d559-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:23 np0005539564 nova_compute[226295]: 2025-11-29 08:41:23.924 226310 DEBUG nova.compute.manager [req-38966d7a-6bcb-443d-8bf3-e30e70ec6af2 req-97961c7b-9161-4ff1-b870-7fb807e1f089 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] No waiting events found dispatching network-vif-plugged-42c94438-186e-452b-9cac-ae8a01b2d281 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:41:23 np0005539564 nova_compute[226295]: 2025-11-29 08:41:23.925 226310 WARNING nova.compute.manager [req-38966d7a-6bcb-443d-8bf3-e30e70ec6af2 req-97961c7b-9161-4ff1-b870-7fb807e1f089 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Received unexpected event network-vif-plugged-42c94438-186e-452b-9cac-ae8a01b2d281 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:41:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:24.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:24 np0005539564 nova_compute[226295]: 2025-11-29 08:41:24.874 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:25.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:25 np0005539564 ovn_controller[130591]: 2025-11-29T08:41:25Z|00742|binding|INFO|Releasing lport f6d5504f-44ca-4e58-bb7a-73fad975c4be from this chassis (sb_readonly=0)
Nov 29 03:41:25 np0005539564 nova_compute[226295]: 2025-11-29 08:41:25.650 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:41:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:26.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:41:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:27.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:28 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:41:28 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:41:28 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:41:28 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:41:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:28.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:28 np0005539564 nova_compute[226295]: 2025-11-29 08:41:28.460 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:29.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:29 np0005539564 nova_compute[226295]: 2025-11-29 08:41:29.876 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:30 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:41:30 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:41:30 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:41:30 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:41:30 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:41:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:30.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:30 np0005539564 nova_compute[226295]: 2025-11-29 08:41:30.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:30 np0005539564 nova_compute[226295]: 2025-11-29 08:41:30.372 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:30 np0005539564 nova_compute[226295]: 2025-11-29 08:41:30.373 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:30 np0005539564 nova_compute[226295]: 2025-11-29 08:41:30.373 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:30 np0005539564 nova_compute[226295]: 2025-11-29 08:41:30.374 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:41:30 np0005539564 nova_compute[226295]: 2025-11-29 08:41:30.374 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:41:30 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/707660244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:41:30 np0005539564 nova_compute[226295]: 2025-11-29 08:41:30.823 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:30 np0005539564 nova_compute[226295]: 2025-11-29 08:41:30.929 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000b9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:41:30 np0005539564 nova_compute[226295]: 2025-11-29 08:41:30.929 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000b9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:41:31 np0005539564 nova_compute[226295]: 2025-11-29 08:41:31.107 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:41:31 np0005539564 nova_compute[226295]: 2025-11-29 08:41:31.109 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3995MB free_disk=20.921829223632812GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:41:31 np0005539564 nova_compute[226295]: 2025-11-29 08:41:31.110 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:31 np0005539564 nova_compute[226295]: 2025-11-29 08:41:31.111 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:31 np0005539564 nova_compute[226295]: 2025-11-29 08:41:31.180 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 65caab7f-cddf-4a5b-a375-b4061715d559 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:41:31 np0005539564 nova_compute[226295]: 2025-11-29 08:41:31.181 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:41:31 np0005539564 nova_compute[226295]: 2025-11-29 08:41:31.181 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:41:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:31.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:31 np0005539564 nova_compute[226295]: 2025-11-29 08:41:31.221 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:41:31 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/772360905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:41:31 np0005539564 nova_compute[226295]: 2025-11-29 08:41:31.653 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:31 np0005539564 nova_compute[226295]: 2025-11-29 08:41:31.658 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:41:31 np0005539564 nova_compute[226295]: 2025-11-29 08:41:31.674 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:41:31 np0005539564 nova_compute[226295]: 2025-11-29 08:41:31.703 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:41:31 np0005539564 nova_compute[226295]: 2025-11-29 08:41:31.703 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:32.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:33.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:33 np0005539564 nova_compute[226295]: 2025-11-29 08:41:33.463 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:33 np0005539564 nova_compute[226295]: 2025-11-29 08:41:33.527 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:34.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:41:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 5400.0 total, 600.0 interval
Cumulative writes: 13K writes, 68K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.14 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1633 writes, 7971 keys, 1633 commit groups, 1.0 writes per commit group, ingest: 16.10 MB, 0.03 MB/s
Interval WAL: 1634 writes, 1634 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     21.7      3.78              0.33        42    0.090       0      0       0.0       0.0
  L6      1/0   10.97 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.0     50.2     42.8      9.67              1.42        41    0.236    299K    22K       0.0       0.0
 Sum      1/0   10.97 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.0     36.1     36.9     13.46              1.75        83    0.162    299K    22K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   9.4     72.5     70.7      1.10              0.27        12    0.091     59K   3137       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0     50.2     42.8      9.67              1.42        41    0.236    299K    22K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     21.7      3.78              0.33        41    0.092       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 5400.0 total, 600.0 interval
Flush(GB): cumulative 0.080, interval 0.008
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.48 GB write, 0.09 MB/s write, 0.47 GB read, 0.09 MB/s read, 13.5 seconds
Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 1.1 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x558dc73191f0#2 capacity: 304.00 MB usage: 55.68 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000446 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3052,53.51 MB,17.6032%) FilterBlock(83,833.92 KB,0.267887%) IndexBlock(83,1.35 MB,0.443554%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 29 03:41:34 np0005539564 nova_compute[226295]: 2025-11-29 08:41:34.880 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:35 np0005539564 ovn_controller[130591]: 2025-11-29T08:41:35Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:78:0e:46 10.100.0.14
Nov 29 03:41:35 np0005539564 ovn_controller[130591]: 2025-11-29T08:41:35Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:78:0e:46 10.100.0.14
Nov 29 03:41:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:35.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:35 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:41:35 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:41:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:36.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:37.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:37 np0005539564 nova_compute[226295]: 2025-11-29 08:41:37.933 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:38.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:38 np0005539564 nova_compute[226295]: 2025-11-29 08:41:38.465 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:39.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:39 np0005539564 nova_compute[226295]: 2025-11-29 08:41:39.884 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:40.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:41.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:41 np0005539564 nova_compute[226295]: 2025-11-29 08:41:41.927 226310 DEBUG oslo_concurrency.lockutils [None req-d34b71e3-8a31-4d81-af91-3c6d44e3e412 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "65caab7f-cddf-4a5b-a375-b4061715d559" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:41 np0005539564 nova_compute[226295]: 2025-11-29 08:41:41.928 226310 DEBUG oslo_concurrency.lockutils [None req-d34b71e3-8a31-4d81-af91-3c6d44e3e412 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "65caab7f-cddf-4a5b-a375-b4061715d559" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:41 np0005539564 nova_compute[226295]: 2025-11-29 08:41:41.928 226310 DEBUG oslo_concurrency.lockutils [None req-d34b71e3-8a31-4d81-af91-3c6d44e3e412 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "65caab7f-cddf-4a5b-a375-b4061715d559-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:41 np0005539564 nova_compute[226295]: 2025-11-29 08:41:41.929 226310 DEBUG oslo_concurrency.lockutils [None req-d34b71e3-8a31-4d81-af91-3c6d44e3e412 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "65caab7f-cddf-4a5b-a375-b4061715d559-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:41 np0005539564 nova_compute[226295]: 2025-11-29 08:41:41.929 226310 DEBUG oslo_concurrency.lockutils [None req-d34b71e3-8a31-4d81-af91-3c6d44e3e412 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "65caab7f-cddf-4a5b-a375-b4061715d559-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:41 np0005539564 nova_compute[226295]: 2025-11-29 08:41:41.931 226310 INFO nova.compute.manager [None req-d34b71e3-8a31-4d81-af91-3c6d44e3e412 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Terminating instance#033[00m
Nov 29 03:41:41 np0005539564 nova_compute[226295]: 2025-11-29 08:41:41.932 226310 DEBUG nova.compute.manager [None req-d34b71e3-8a31-4d81-af91-3c6d44e3e412 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:41:41 np0005539564 kernel: tap42c94438-18 (unregistering): left promiscuous mode
Nov 29 03:41:42 np0005539564 NetworkManager[48997]: <info>  [1764405702.0002] device (tap42c94438-18): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:41:42 np0005539564 ovn_controller[130591]: 2025-11-29T08:41:42Z|00743|binding|INFO|Releasing lport 42c94438-186e-452b-9cac-ae8a01b2d281 from this chassis (sb_readonly=0)
Nov 29 03:41:42 np0005539564 nova_compute[226295]: 2025-11-29 08:41:42.020 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:42 np0005539564 ovn_controller[130591]: 2025-11-29T08:41:42Z|00744|binding|INFO|Setting lport 42c94438-186e-452b-9cac-ae8a01b2d281 down in Southbound
Nov 29 03:41:42 np0005539564 ovn_controller[130591]: 2025-11-29T08:41:42Z|00745|binding|INFO|Removing iface tap42c94438-18 ovn-installed in OVS
Nov 29 03:41:42 np0005539564 nova_compute[226295]: 2025-11-29 08:41:42.024 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:42.030 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:0e:46 10.100.0.14'], port_security=['fa:16:3e:78:0e:46 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '65caab7f-cddf-4a5b-a375-b4061715d559', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-827003dc-22a3-46f9-a129-d0a62483494f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '03858b11000d4b57bd3659c3083eed47', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd927c03d-544f-4cb2-a70a-354249bd42e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21c3b4de-2976-48eb-9ac4-77655b9836b0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=42c94438-186e-452b-9cac-ae8a01b2d281) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:41:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:42.032 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 42c94438-186e-452b-9cac-ae8a01b2d281 in datapath 827003dc-22a3-46f9-a129-d0a62483494f unbound from our chassis#033[00m
Nov 29 03:41:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:42.034 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 827003dc-22a3-46f9-a129-d0a62483494f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:41:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:42.036 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a7979185-1985-45b5-97e7-95419b001363]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:42.037 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f namespace which is not needed anymore#033[00m
Nov 29 03:41:42 np0005539564 nova_compute[226295]: 2025-11-29 08:41:42.043 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:42 np0005539564 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000b9.scope: Deactivated successfully.
Nov 29 03:41:42 np0005539564 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000b9.scope: Consumed 14.627s CPU time.
Nov 29 03:41:42 np0005539564 systemd-machined[190128]: Machine qemu-87-instance-000000b9 terminated.
Nov 29 03:41:42 np0005539564 nova_compute[226295]: 2025-11-29 08:41:42.173 226310 INFO nova.virt.libvirt.driver [-] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Instance destroyed successfully.#033[00m
Nov 29 03:41:42 np0005539564 nova_compute[226295]: 2025-11-29 08:41:42.173 226310 DEBUG nova.objects.instance [None req-d34b71e3-8a31-4d81-af91-3c6d44e3e412 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lazy-loading 'resources' on Instance uuid 65caab7f-cddf-4a5b-a375-b4061715d559 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:41:42 np0005539564 nova_compute[226295]: 2025-11-29 08:41:42.186 226310 DEBUG nova.virt.libvirt.vif [None req-d34b71e3-8a31-4d81-af91-3c6d44e3e412 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:41:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-gen-0-421909352',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-gen-0-421909352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1086021155-ge',id=185,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFQ0IGLlmWlZCle9EddGxHLVG7ufF3edQYwErIQ1TJ96Vu81GarffsTWuyneOiDobC15RVtiledCIVuVcZfzO9HG6ZatPDqO+3jJaa7cGLLMtn1tpBiBIHt3tTH8PwZIOw==',key_name='tempest-TestSecurityGroupsBasicOps-2076105880',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:41:21Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='03858b11000d4b57bd3659c3083eed47',ramdisk_id='',reservation_id='r-jmgex6nv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1086021155',owner_user_name='tempest-TestSecurityGroupsBasicOps-1086021155-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:41:21Z,user_data=None,user_id='a45da8ed818144f8bd6e00d233fcb5d2',uuid=65caab7f-cddf-4a5b-a375-b4061715d559,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "42c94438-186e-452b-9cac-ae8a01b2d281", "address": "fa:16:3e:78:0e:46", "network": {"id": "827003dc-22a3-46f9-a129-d0a62483494f", "bridge": "br-int", "label": "tempest-network-smoke--1818078440", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42c94438-18", "ovs_interfaceid": "42c94438-186e-452b-9cac-ae8a01b2d281", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:41:42 np0005539564 nova_compute[226295]: 2025-11-29 08:41:42.187 226310 DEBUG nova.network.os_vif_util [None req-d34b71e3-8a31-4d81-af91-3c6d44e3e412 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converting VIF {"id": "42c94438-186e-452b-9cac-ae8a01b2d281", "address": "fa:16:3e:78:0e:46", "network": {"id": "827003dc-22a3-46f9-a129-d0a62483494f", "bridge": "br-int", "label": "tempest-network-smoke--1818078440", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42c94438-18", "ovs_interfaceid": "42c94438-186e-452b-9cac-ae8a01b2d281", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:41:42 np0005539564 nova_compute[226295]: 2025-11-29 08:41:42.187 226310 DEBUG nova.network.os_vif_util [None req-d34b71e3-8a31-4d81-af91-3c6d44e3e412 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:0e:46,bridge_name='br-int',has_traffic_filtering=True,id=42c94438-186e-452b-9cac-ae8a01b2d281,network=Network(827003dc-22a3-46f9-a129-d0a62483494f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42c94438-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:41:42 np0005539564 nova_compute[226295]: 2025-11-29 08:41:42.188 226310 DEBUG os_vif [None req-d34b71e3-8a31-4d81-af91-3c6d44e3e412 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:0e:46,bridge_name='br-int',has_traffic_filtering=True,id=42c94438-186e-452b-9cac-ae8a01b2d281,network=Network(827003dc-22a3-46f9-a129-d0a62483494f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42c94438-18') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:41:42 np0005539564 nova_compute[226295]: 2025-11-29 08:41:42.190 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:42 np0005539564 nova_compute[226295]: 2025-11-29 08:41:42.190 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap42c94438-18, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:42 np0005539564 nova_compute[226295]: 2025-11-29 08:41:42.194 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:42 np0005539564 nova_compute[226295]: 2025-11-29 08:41:42.197 226310 INFO os_vif [None req-d34b71e3-8a31-4d81-af91-3c6d44e3e412 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:0e:46,bridge_name='br-int',has_traffic_filtering=True,id=42c94438-186e-452b-9cac-ae8a01b2d281,network=Network(827003dc-22a3-46f9-a129-d0a62483494f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42c94438-18')#033[00m
Nov 29 03:41:42 np0005539564 neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f[296312]: [NOTICE]   (296316) : haproxy version is 2.8.14-c23fe91
Nov 29 03:41:42 np0005539564 neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f[296312]: [NOTICE]   (296316) : path to executable is /usr/sbin/haproxy
Nov 29 03:41:42 np0005539564 neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f[296312]: [WARNING]  (296316) : Exiting Master process...
Nov 29 03:41:42 np0005539564 neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f[296312]: [ALERT]    (296316) : Current worker (296318) exited with code 143 (Terminated)
Nov 29 03:41:42 np0005539564 neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f[296312]: [WARNING]  (296316) : All workers exited. Exiting... (0)
Nov 29 03:41:42 np0005539564 systemd[1]: libpod-6e23246616b4605f79a59afd4f9858810411d747fddccd1af28ecc27ec41d16b.scope: Deactivated successfully.
Nov 29 03:41:42 np0005539564 conmon[296312]: conmon 6e23246616b4605f79a5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6e23246616b4605f79a59afd4f9858810411d747fddccd1af28ecc27ec41d16b.scope/container/memory.events
Nov 29 03:41:42 np0005539564 podman[296702]: 2025-11-29 08:41:42.230372037 +0000 UTC m=+0.050888737 container died 6e23246616b4605f79a59afd4f9858810411d747fddccd1af28ecc27ec41d16b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:41:42 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6e23246616b4605f79a59afd4f9858810411d747fddccd1af28ecc27ec41d16b-userdata-shm.mount: Deactivated successfully.
Nov 29 03:41:42 np0005539564 systemd[1]: var-lib-containers-storage-overlay-b5d65306cd20db7a866417e48df0c3363dff518c5e7137ef76822b897bbd0e73-merged.mount: Deactivated successfully.
Nov 29 03:41:42 np0005539564 podman[296702]: 2025-11-29 08:41:42.275116528 +0000 UTC m=+0.095633228 container cleanup 6e23246616b4605f79a59afd4f9858810411d747fddccd1af28ecc27ec41d16b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:41:42 np0005539564 systemd[1]: libpod-conmon-6e23246616b4605f79a59afd4f9858810411d747fddccd1af28ecc27ec41d16b.scope: Deactivated successfully.
Nov 29 03:41:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:42.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:42 np0005539564 podman[296753]: 2025-11-29 08:41:42.337659591 +0000 UTC m=+0.037825515 container remove 6e23246616b4605f79a59afd4f9858810411d747fddccd1af28ecc27ec41d16b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:41:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:42.344 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[884ad16b-ad15-403c-9689-91b56e67cf75]: (4, ('Sat Nov 29 08:41:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f (6e23246616b4605f79a59afd4f9858810411d747fddccd1af28ecc27ec41d16b)\n6e23246616b4605f79a59afd4f9858810411d747fddccd1af28ecc27ec41d16b\nSat Nov 29 08:41:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f (6e23246616b4605f79a59afd4f9858810411d747fddccd1af28ecc27ec41d16b)\n6e23246616b4605f79a59afd4f9858810411d747fddccd1af28ecc27ec41d16b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:42.347 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c1fe1842-87a9-4a2a-88b3-0336002bc348]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:42.348 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap827003dc-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:42 np0005539564 nova_compute[226295]: 2025-11-29 08:41:42.352 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:42 np0005539564 kernel: tap827003dc-20: left promiscuous mode
Nov 29 03:41:42 np0005539564 nova_compute[226295]: 2025-11-29 08:41:42.364 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:42.370 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[222607af-106e-44dd-84cb-158d537b4af7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:42.392 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2b82ac87-b704-47c7-a2ae-2dd9a9ec842d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:42.394 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[85b1db2f-12dc-461a-ab17-5d7c94cd3833]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:42.419 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[717cc41a-b328-4d6b-98d7-2f23a6c730c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 865583, 'reachable_time': 34566, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296768, 'error': None, 'target': 'ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:42.422 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-827003dc-22a3-46f9-a129-d0a62483494f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:41:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:41:42.423 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[9235c09d-9a46-4754-8f47-db48b3ca6ccd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:42 np0005539564 systemd[1]: run-netns-ovnmeta\x2d827003dc\x2d22a3\x2d46f9\x2da129\x2dd0a62483494f.mount: Deactivated successfully.
Nov 29 03:41:42 np0005539564 nova_compute[226295]: 2025-11-29 08:41:42.670 226310 INFO nova.virt.libvirt.driver [None req-d34b71e3-8a31-4d81-af91-3c6d44e3e412 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Deleting instance files /var/lib/nova/instances/65caab7f-cddf-4a5b-a375-b4061715d559_del#033[00m
Nov 29 03:41:42 np0005539564 nova_compute[226295]: 2025-11-29 08:41:42.672 226310 INFO nova.virt.libvirt.driver [None req-d34b71e3-8a31-4d81-af91-3c6d44e3e412 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Deletion of /var/lib/nova/instances/65caab7f-cddf-4a5b-a375-b4061715d559_del complete#033[00m
Nov 29 03:41:42 np0005539564 nova_compute[226295]: 2025-11-29 08:41:42.779 226310 INFO nova.compute.manager [None req-d34b71e3-8a31-4d81-af91-3c6d44e3e412 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Took 0.85 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:41:42 np0005539564 nova_compute[226295]: 2025-11-29 08:41:42.780 226310 DEBUG oslo.service.loopingcall [None req-d34b71e3-8a31-4d81-af91-3c6d44e3e412 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:41:42 np0005539564 nova_compute[226295]: 2025-11-29 08:41:42.781 226310 DEBUG nova.compute.manager [-] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:41:42 np0005539564 nova_compute[226295]: 2025-11-29 08:41:42.781 226310 DEBUG nova.network.neutron [-] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:41:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:41:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:43.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:41:43 np0005539564 nova_compute[226295]: 2025-11-29 08:41:43.958 226310 DEBUG nova.network.neutron [-] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:43 np0005539564 nova_compute[226295]: 2025-11-29 08:41:43.990 226310 INFO nova.compute.manager [-] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Took 1.21 seconds to deallocate network for instance.#033[00m
Nov 29 03:41:44 np0005539564 nova_compute[226295]: 2025-11-29 08:41:44.033 226310 DEBUG nova.compute.manager [req-12daeaae-5fda-4e18-9221-da0c562682b8 req-3da13b0c-ad33-4a57-8d99-867ff14ebe75 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Received event network-vif-deleted-42c94438-186e-452b-9cac-ae8a01b2d281 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:44 np0005539564 nova_compute[226295]: 2025-11-29 08:41:44.047 226310 DEBUG oslo_concurrency.lockutils [None req-d34b71e3-8a31-4d81-af91-3c6d44e3e412 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:44 np0005539564 nova_compute[226295]: 2025-11-29 08:41:44.048 226310 DEBUG oslo_concurrency.lockutils [None req-d34b71e3-8a31-4d81-af91-3c6d44e3e412 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:44 np0005539564 nova_compute[226295]: 2025-11-29 08:41:44.091 226310 DEBUG oslo_concurrency.processutils [None req-d34b71e3-8a31-4d81-af91-3c6d44e3e412 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:44.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:41:44 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4173765393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:41:44 np0005539564 nova_compute[226295]: 2025-11-29 08:41:44.546 226310 DEBUG oslo_concurrency.processutils [None req-d34b71e3-8a31-4d81-af91-3c6d44e3e412 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:44 np0005539564 nova_compute[226295]: 2025-11-29 08:41:44.556 226310 DEBUG nova.compute.provider_tree [None req-d34b71e3-8a31-4d81-af91-3c6d44e3e412 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:41:44 np0005539564 nova_compute[226295]: 2025-11-29 08:41:44.575 226310 DEBUG nova.scheduler.client.report [None req-d34b71e3-8a31-4d81-af91-3c6d44e3e412 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:41:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:44 np0005539564 nova_compute[226295]: 2025-11-29 08:41:44.605 226310 DEBUG oslo_concurrency.lockutils [None req-d34b71e3-8a31-4d81-af91-3c6d44e3e412 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:44 np0005539564 nova_compute[226295]: 2025-11-29 08:41:44.647 226310 INFO nova.scheduler.client.report [None req-d34b71e3-8a31-4d81-af91-3c6d44e3e412 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Deleted allocations for instance 65caab7f-cddf-4a5b-a375-b4061715d559#033[00m
Nov 29 03:41:44 np0005539564 nova_compute[226295]: 2025-11-29 08:41:44.722 226310 DEBUG oslo_concurrency.lockutils [None req-d34b71e3-8a31-4d81-af91-3c6d44e3e412 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "65caab7f-cddf-4a5b-a375-b4061715d559" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:44 np0005539564 nova_compute[226295]: 2025-11-29 08:41:44.886 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:45.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:46.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:47 np0005539564 nova_compute[226295]: 2025-11-29 08:41:47.194 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:47.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:48.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:48 np0005539564 podman[296794]: 2025-11-29 08:41:48.535991211 +0000 UTC m=+0.068987477 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:41:48 np0005539564 podman[296793]: 2025-11-29 08:41:48.545728795 +0000 UTC m=+0.085324590 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:41:48 np0005539564 podman[296792]: 2025-11-29 08:41:48.556706532 +0000 UTC m=+0.107892700 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:41:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:49.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:49 np0005539564 nova_compute[226295]: 2025-11-29 08:41:49.889 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:50.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:51.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:52 np0005539564 nova_compute[226295]: 2025-11-29 08:41:52.198 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:52.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:52 np0005539564 nova_compute[226295]: 2025-11-29 08:41:52.611 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:52 np0005539564 nova_compute[226295]: 2025-11-29 08:41:52.770 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:53.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:54.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:54 np0005539564 nova_compute[226295]: 2025-11-29 08:41:54.892 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:55.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:56.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:57 np0005539564 nova_compute[226295]: 2025-11-29 08:41:57.171 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405702.1700652, 65caab7f-cddf-4a5b-a375-b4061715d559 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:41:57 np0005539564 nova_compute[226295]: 2025-11-29 08:41:57.171 226310 INFO nova.compute.manager [-] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:41:57 np0005539564 nova_compute[226295]: 2025-11-29 08:41:57.195 226310 DEBUG nova.compute.manager [None req-e27292a8-acb2-467b-9180-2055e6884fae - - - - - -] [instance: 65caab7f-cddf-4a5b-a375-b4061715d559] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:41:57 np0005539564 nova_compute[226295]: 2025-11-29 08:41:57.201 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:57.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:58.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:41:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:59.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:59 np0005539564 nova_compute[226295]: 2025-11-29 08:41:59.893 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:00.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:01.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:01 np0005539564 nova_compute[226295]: 2025-11-29 08:42:01.697 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:02 np0005539564 nova_compute[226295]: 2025-11-29 08:42:02.205 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:02.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:03.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:42:03.758 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:42:03.758 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:42:03.758 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:04.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:04 np0005539564 nova_compute[226295]: 2025-11-29 08:42:04.895 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:05.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:42:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:06.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:42:07 np0005539564 nova_compute[226295]: 2025-11-29 08:42:07.209 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:07.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:07 np0005539564 ceph-mgr[82125]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2945860420
Nov 29 03:42:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:08.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:09.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:09 np0005539564 nova_compute[226295]: 2025-11-29 08:42:09.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:09 np0005539564 nova_compute[226295]: 2025-11-29 08:42:09.897 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:10.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:11.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:11 np0005539564 nova_compute[226295]: 2025-11-29 08:42:11.336 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:11 np0005539564 nova_compute[226295]: 2025-11-29 08:42:11.341 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:11 np0005539564 nova_compute[226295]: 2025-11-29 08:42:11.342 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:42:12 np0005539564 nova_compute[226295]: 2025-11-29 08:42:12.242 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:12.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:13.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:14 np0005539564 nova_compute[226295]: 2025-11-29 08:42:14.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:14.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:14 np0005539564 nova_compute[226295]: 2025-11-29 08:42:14.899 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:15.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:15 np0005539564 nova_compute[226295]: 2025-11-29 08:42:15.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:15 np0005539564 nova_compute[226295]: 2025-11-29 08:42:15.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:42:15 np0005539564 nova_compute[226295]: 2025-11-29 08:42:15.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:42:15 np0005539564 nova_compute[226295]: 2025-11-29 08:42:15.376 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:42:15 np0005539564 nova_compute[226295]: 2025-11-29 08:42:15.376 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:42:15.553 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=71, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=70) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:42:15 np0005539564 nova_compute[226295]: 2025-11-29 08:42:15.555 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:42:15.555 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:42:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:16.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:17 np0005539564 nova_compute[226295]: 2025-11-29 08:42:17.245 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:42:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:17.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:42:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:18.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:19.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:19 np0005539564 podman[296858]: 2025-11-29 08:42:19.540325162 +0000 UTC m=+0.082160554 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:42:19 np0005539564 podman[296859]: 2025-11-29 08:42:19.540702012 +0000 UTC m=+0.081408273 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:42:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:19 np0005539564 podman[296857]: 2025-11-29 08:42:19.612329631 +0000 UTC m=+0.162131998 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:42:19 np0005539564 nova_compute[226295]: 2025-11-29 08:42:19.902 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:20 np0005539564 nova_compute[226295]: 2025-11-29 08:42:20.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:20.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:21.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:22 np0005539564 nova_compute[226295]: 2025-11-29 08:42:22.249 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:22.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:23.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:24 np0005539564 nova_compute[226295]: 2025-11-29 08:42:24.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:24.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:42:24.557 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '71'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:24 np0005539564 nova_compute[226295]: 2025-11-29 08:42:24.903 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:25.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #136. Immutable memtables: 0.
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:42:26.334079) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 136
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405746334122, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2373, "num_deletes": 251, "total_data_size": 5758876, "memory_usage": 5828672, "flush_reason": "Manual Compaction"}
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #137: started
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405746362493, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 137, "file_size": 3754644, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 67296, "largest_seqno": 69664, "table_properties": {"data_size": 3745151, "index_size": 5986, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19677, "raw_average_key_size": 20, "raw_value_size": 3726216, "raw_average_value_size": 3857, "num_data_blocks": 262, "num_entries": 966, "num_filter_entries": 966, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405532, "oldest_key_time": 1764405532, "file_creation_time": 1764405746, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 28456 microseconds, and 14070 cpu microseconds.
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:42:26.362536) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #137: 3754644 bytes OK
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:42:26.362556) [db/memtable_list.cc:519] [default] Level-0 commit table #137 started
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:42:26.365012) [db/memtable_list.cc:722] [default] Level-0 commit table #137: memtable #1 done
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:42:26.365041) EVENT_LOG_v1 {"time_micros": 1764405746365031, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:42:26.365067) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 5748554, prev total WAL file size 5748554, number of live WAL files 2.
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000133.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:42:26.367280) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [137(3666KB)], [135(10MB)]
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405746367335, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [137], "files_L6": [135], "score": -1, "input_data_size": 15258214, "oldest_snapshot_seqno": -1}
Nov 29 03:42:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:26.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #138: 9791 keys, 13330972 bytes, temperature: kUnknown
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405746477198, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 138, "file_size": 13330972, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13266409, "index_size": 38978, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24517, "raw_key_size": 257871, "raw_average_key_size": 26, "raw_value_size": 13093317, "raw_average_value_size": 1337, "num_data_blocks": 1489, "num_entries": 9791, "num_filter_entries": 9791, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764405746, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 138, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:42:26.477483) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 13330972 bytes
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:42:26.478938) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.8 rd, 121.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 11.0 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(7.6) write-amplify(3.6) OK, records in: 10308, records dropped: 517 output_compression: NoCompression
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:42:26.478958) EVENT_LOG_v1 {"time_micros": 1764405746478950, "job": 86, "event": "compaction_finished", "compaction_time_micros": 109940, "compaction_time_cpu_micros": 57117, "output_level": 6, "num_output_files": 1, "total_output_size": 13330972, "num_input_records": 10308, "num_output_records": 9791, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405746479745, "job": 86, "event": "table_file_deletion", "file_number": 137}
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000135.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405746482644, "job": 86, "event": "table_file_deletion", "file_number": 135}
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:42:26.367140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:42:26.482740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:42:26.482749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:42:26.482753) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:42:26.482757) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:26 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:42:26.482761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:27 np0005539564 nova_compute[226295]: 2025-11-29 08:42:27.253 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:27.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:27 np0005539564 nova_compute[226295]: 2025-11-29 08:42:27.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:27 np0005539564 nova_compute[226295]: 2025-11-29 08:42:27.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:42:27 np0005539564 nova_compute[226295]: 2025-11-29 08:42:27.371 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:42:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:28.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:29.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:29 np0005539564 nova_compute[226295]: 2025-11-29 08:42:29.906 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:30.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:31.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:31 np0005539564 nova_compute[226295]: 2025-11-29 08:42:31.372 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:31 np0005539564 nova_compute[226295]: 2025-11-29 08:42:31.407 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:31 np0005539564 nova_compute[226295]: 2025-11-29 08:42:31.408 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:31 np0005539564 nova_compute[226295]: 2025-11-29 08:42:31.408 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:31 np0005539564 nova_compute[226295]: 2025-11-29 08:42:31.408 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:42:31 np0005539564 nova_compute[226295]: 2025-11-29 08:42:31.409 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:42:31 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3157097088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:42:31 np0005539564 nova_compute[226295]: 2025-11-29 08:42:31.856 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:32 np0005539564 nova_compute[226295]: 2025-11-29 08:42:32.023 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:42:32 np0005539564 nova_compute[226295]: 2025-11-29 08:42:32.025 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4230MB free_disk=20.93596649169922GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:42:32 np0005539564 nova_compute[226295]: 2025-11-29 08:42:32.025 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:32 np0005539564 nova_compute[226295]: 2025-11-29 08:42:32.025 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:32 np0005539564 nova_compute[226295]: 2025-11-29 08:42:32.100 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:42:32 np0005539564 nova_compute[226295]: 2025-11-29 08:42:32.100 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:42:32 np0005539564 nova_compute[226295]: 2025-11-29 08:42:32.124 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:32 np0005539564 nova_compute[226295]: 2025-11-29 08:42:32.258 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:32.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:42:32 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3395023748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:42:32 np0005539564 nova_compute[226295]: 2025-11-29 08:42:32.648 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:32 np0005539564 nova_compute[226295]: 2025-11-29 08:42:32.655 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:42:32 np0005539564 nova_compute[226295]: 2025-11-29 08:42:32.679 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:42:32 np0005539564 nova_compute[226295]: 2025-11-29 08:42:32.721 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:42:32 np0005539564 nova_compute[226295]: 2025-11-29 08:42:32.722 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:33.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:42:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:34.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:42:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:34 np0005539564 nova_compute[226295]: 2025-11-29 08:42:34.951 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:35.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:35 np0005539564 nova_compute[226295]: 2025-11-29 08:42:35.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:35 np0005539564 nova_compute[226295]: 2025-11-29 08:42:35.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:42:36 np0005539564 podman[297145]: 2025-11-29 08:42:36.268758786 +0000 UTC m=+0.079477821 container exec 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:42:36 np0005539564 podman[297145]: 2025-11-29 08:42:36.378360351 +0000 UTC m=+0.189079286 container exec_died 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 03:42:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:36.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:37 np0005539564 nova_compute[226295]: 2025-11-29 08:42:37.263 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:37.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:42:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:42:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:42:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:42:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:42:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:42:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:42:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:38.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:38 np0005539564 ovn_controller[130591]: 2025-11-29T08:42:38Z|00746|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 03:42:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:39.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:39 np0005539564 nova_compute[226295]: 2025-11-29 08:42:39.953 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:42:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:40.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:42:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:41.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:41 np0005539564 nova_compute[226295]: 2025-11-29 08:42:41.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:42 np0005539564 nova_compute[226295]: 2025-11-29 08:42:42.269 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:42.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:43.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:44 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:42:44 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:42:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:44.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:44 np0005539564 nova_compute[226295]: 2025-11-29 08:42:44.955 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:45.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:42:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:46.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:42:47 np0005539564 nova_compute[226295]: 2025-11-29 08:42:47.273 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:42:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:47.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:42:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:48.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:49.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:49 np0005539564 nova_compute[226295]: 2025-11-29 08:42:49.957 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:50.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:50 np0005539564 podman[297449]: 2025-11-29 08:42:50.554845312 +0000 UTC m=+0.096744409 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:42:50 np0005539564 podman[297450]: 2025-11-29 08:42:50.555396977 +0000 UTC m=+0.087433707 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 03:42:50 np0005539564 podman[297448]: 2025-11-29 08:42:50.607805185 +0000 UTC m=+0.151074058 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 03:42:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:51.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:52 np0005539564 nova_compute[226295]: 2025-11-29 08:42:52.278 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:42:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:52.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:42:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Nov 29 03:42:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 29 03:42:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 29 03:42:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Nov 29 03:42:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 29 03:42:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Nov 29 03:42:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Nov 29 03:42:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Nov 29 03:42:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:53.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:42:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:54.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:42:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:54 np0005539564 nova_compute[226295]: 2025-11-29 08:42:54.997 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:55.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:56.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:57 np0005539564 nova_compute[226295]: 2025-11-29 08:42:57.282 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:57.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:58.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:42:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:59.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:00 np0005539564 nova_compute[226295]: 2025-11-29 08:43:00.001 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:43:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:00.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:43:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.003000080s ======
Nov 29 03:43:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:01.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Nov 29 03:43:02 np0005539564 nova_compute[226295]: 2025-11-29 08:43:02.287 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:43:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:02.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:43:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:03.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:03.759 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:03.760 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:03.760 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:04.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:05 np0005539564 nova_compute[226295]: 2025-11-29 08:43:05.002 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:43:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:05.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:43:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:06.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:07 np0005539564 nova_compute[226295]: 2025-11-29 08:43:07.185 226310 DEBUG oslo_concurrency.lockutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "32b7350a-e995-40b6-89fe-5c543ecdd0c8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:07 np0005539564 nova_compute[226295]: 2025-11-29 08:43:07.186 226310 DEBUG oslo_concurrency.lockutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "32b7350a-e995-40b6-89fe-5c543ecdd0c8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:07 np0005539564 nova_compute[226295]: 2025-11-29 08:43:07.216 226310 DEBUG nova.compute.manager [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:43:07 np0005539564 nova_compute[226295]: 2025-11-29 08:43:07.294 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:07 np0005539564 nova_compute[226295]: 2025-11-29 08:43:07.346 226310 DEBUG oslo_concurrency.lockutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:07 np0005539564 nova_compute[226295]: 2025-11-29 08:43:07.347 226310 DEBUG oslo_concurrency.lockutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:07 np0005539564 nova_compute[226295]: 2025-11-29 08:43:07.356 226310 DEBUG nova.virt.hardware [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:43:07 np0005539564 nova_compute[226295]: 2025-11-29 08:43:07.357 226310 INFO nova.compute.claims [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:43:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:07.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:07 np0005539564 nova_compute[226295]: 2025-11-29 08:43:07.672 226310 DEBUG oslo_concurrency.processutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:43:08 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3416482593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:43:08 np0005539564 nova_compute[226295]: 2025-11-29 08:43:08.183 226310 DEBUG oslo_concurrency.processutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:08 np0005539564 nova_compute[226295]: 2025-11-29 08:43:08.193 226310 DEBUG nova.compute.provider_tree [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:43:08 np0005539564 nova_compute[226295]: 2025-11-29 08:43:08.221 226310 DEBUG nova.scheduler.client.report [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:43:08 np0005539564 nova_compute[226295]: 2025-11-29 08:43:08.258 226310 DEBUG oslo_concurrency.lockutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:08 np0005539564 nova_compute[226295]: 2025-11-29 08:43:08.260 226310 DEBUG nova.compute.manager [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:43:08 np0005539564 nova_compute[226295]: 2025-11-29 08:43:08.305 226310 DEBUG nova.compute.manager [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:43:08 np0005539564 nova_compute[226295]: 2025-11-29 08:43:08.306 226310 DEBUG nova.network.neutron [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:43:08 np0005539564 nova_compute[226295]: 2025-11-29 08:43:08.345 226310 INFO nova.virt.libvirt.driver [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:43:08 np0005539564 nova_compute[226295]: 2025-11-29 08:43:08.366 226310 DEBUG nova.compute.manager [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:43:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:08.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:08 np0005539564 nova_compute[226295]: 2025-11-29 08:43:08.518 226310 DEBUG nova.compute.manager [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:43:08 np0005539564 nova_compute[226295]: 2025-11-29 08:43:08.520 226310 DEBUG nova.virt.libvirt.driver [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:43:08 np0005539564 nova_compute[226295]: 2025-11-29 08:43:08.521 226310 INFO nova.virt.libvirt.driver [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Creating image(s)
Nov 29 03:43:08 np0005539564 nova_compute[226295]: 2025-11-29 08:43:08.564 226310 DEBUG nova.storage.rbd_utils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 32b7350a-e995-40b6-89fe-5c543ecdd0c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:43:08 np0005539564 nova_compute[226295]: 2025-11-29 08:43:08.609 226310 DEBUG nova.storage.rbd_utils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 32b7350a-e995-40b6-89fe-5c543ecdd0c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:43:08 np0005539564 nova_compute[226295]: 2025-11-29 08:43:08.643 226310 DEBUG nova.storage.rbd_utils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 32b7350a-e995-40b6-89fe-5c543ecdd0c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:43:08 np0005539564 nova_compute[226295]: 2025-11-29 08:43:08.648 226310 DEBUG oslo_concurrency.processutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:43:08 np0005539564 nova_compute[226295]: 2025-11-29 08:43:08.690 226310 DEBUG nova.policy [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a45da8ed818144f8bd6e00d233fcb5d2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '03858b11000d4b57bd3659c3083eed47', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:43:08 np0005539564 nova_compute[226295]: 2025-11-29 08:43:08.741 226310 DEBUG oslo_concurrency.processutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:43:08 np0005539564 nova_compute[226295]: 2025-11-29 08:43:08.742 226310 DEBUG oslo_concurrency.lockutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:43:08 np0005539564 nova_compute[226295]: 2025-11-29 08:43:08.744 226310 DEBUG oslo_concurrency.lockutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:43:08 np0005539564 nova_compute[226295]: 2025-11-29 08:43:08.744 226310 DEBUG oslo_concurrency.lockutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:43:08 np0005539564 nova_compute[226295]: 2025-11-29 08:43:08.783 226310 DEBUG nova.storage.rbd_utils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 32b7350a-e995-40b6-89fe-5c543ecdd0c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:43:08 np0005539564 nova_compute[226295]: 2025-11-29 08:43:08.788 226310 DEBUG oslo_concurrency.processutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 32b7350a-e995-40b6-89fe-5c543ecdd0c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:43:09 np0005539564 nova_compute[226295]: 2025-11-29 08:43:09.111 226310 DEBUG oslo_concurrency.processutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 32b7350a-e995-40b6-89fe-5c543ecdd0c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:43:09 np0005539564 nova_compute[226295]: 2025-11-29 08:43:09.195 226310 DEBUG nova.storage.rbd_utils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] resizing rbd image 32b7350a-e995-40b6-89fe-5c543ecdd0c8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:43:09 np0005539564 nova_compute[226295]: 2025-11-29 08:43:09.313 226310 DEBUG nova.objects.instance [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lazy-loading 'migration_context' on Instance uuid 32b7350a-e995-40b6-89fe-5c543ecdd0c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:43:09 np0005539564 nova_compute[226295]: 2025-11-29 08:43:09.343 226310 DEBUG nova.virt.libvirt.driver [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:43:09 np0005539564 nova_compute[226295]: 2025-11-29 08:43:09.344 226310 DEBUG nova.virt.libvirt.driver [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Ensure instance console log exists: /var/lib/nova/instances/32b7350a-e995-40b6-89fe-5c543ecdd0c8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:43:09 np0005539564 nova_compute[226295]: 2025-11-29 08:43:09.345 226310 DEBUG oslo_concurrency.lockutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:43:09 np0005539564 nova_compute[226295]: 2025-11-29 08:43:09.346 226310 DEBUG oslo_concurrency.lockutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:43:09 np0005539564 nova_compute[226295]: 2025-11-29 08:43:09.347 226310 DEBUG oslo_concurrency.lockutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:43:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:09.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:10 np0005539564 nova_compute[226295]: 2025-11-29 08:43:10.005 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:10.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:10 np0005539564 nova_compute[226295]: 2025-11-29 08:43:10.627 226310 DEBUG nova.network.neutron [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Successfully created port: 424720b1-be69-4064-90d3-6c8625c811b8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:43:11 np0005539564 nova_compute[226295]: 2025-11-29 08:43:11.365 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:43:11 np0005539564 nova_compute[226295]: 2025-11-29 08:43:11.366 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:43:11 np0005539564 nova_compute[226295]: 2025-11-29 08:43:11.366 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:43:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:11.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:11 np0005539564 nova_compute[226295]: 2025-11-29 08:43:11.928 226310 DEBUG nova.network.neutron [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Successfully updated port: 424720b1-be69-4064-90d3-6c8625c811b8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:43:11 np0005539564 nova_compute[226295]: 2025-11-29 08:43:11.993 226310 DEBUG oslo_concurrency.lockutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "refresh_cache-32b7350a-e995-40b6-89fe-5c543ecdd0c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:43:11 np0005539564 nova_compute[226295]: 2025-11-29 08:43:11.994 226310 DEBUG oslo_concurrency.lockutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquired lock "refresh_cache-32b7350a-e995-40b6-89fe-5c543ecdd0c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:43:11 np0005539564 nova_compute[226295]: 2025-11-29 08:43:11.994 226310 DEBUG nova.network.neutron [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:43:12 np0005539564 nova_compute[226295]: 2025-11-29 08:43:12.039 226310 DEBUG nova.compute.manager [req-15e4f647-26ac-4348-b5a2-b5ec8519e796 req-53990223-870c-473e-943c-4f04244e7f38 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Received event network-changed-424720b1-be69-4064-90d3-6c8625c811b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:43:12 np0005539564 nova_compute[226295]: 2025-11-29 08:43:12.040 226310 DEBUG nova.compute.manager [req-15e4f647-26ac-4348-b5a2-b5ec8519e796 req-53990223-870c-473e-943c-4f04244e7f38 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Refreshing instance network info cache due to event network-changed-424720b1-be69-4064-90d3-6c8625c811b8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:43:12 np0005539564 nova_compute[226295]: 2025-11-29 08:43:12.041 226310 DEBUG oslo_concurrency.lockutils [req-15e4f647-26ac-4348-b5a2-b5ec8519e796 req-53990223-870c-473e-943c-4f04244e7f38 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-32b7350a-e995-40b6-89fe-5c543ecdd0c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:43:12 np0005539564 nova_compute[226295]: 2025-11-29 08:43:12.167 226310 DEBUG nova.network.neutron [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:43:12 np0005539564 nova_compute[226295]: 2025-11-29 08:43:12.298 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:12 np0005539564 nova_compute[226295]: 2025-11-29 08:43:12.336 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:43:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:12.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:13.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:13 np0005539564 nova_compute[226295]: 2025-11-29 08:43:13.645 226310 DEBUG nova.network.neutron [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Updating instance_info_cache with network_info: [{"id": "424720b1-be69-4064-90d3-6c8625c811b8", "address": "fa:16:3e:1e:36:5b", "network": {"id": "826294d5-f5eb-469a-9ec9-f18a05fdaa3c", "bridge": "br-int", "label": "tempest-network-smoke--487870044", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424720b1-be", "ovs_interfaceid": "424720b1-be69-4064-90d3-6c8625c811b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:43:13 np0005539564 nova_compute[226295]: 2025-11-29 08:43:13.677 226310 DEBUG oslo_concurrency.lockutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Releasing lock "refresh_cache-32b7350a-e995-40b6-89fe-5c543ecdd0c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:43:13 np0005539564 nova_compute[226295]: 2025-11-29 08:43:13.677 226310 DEBUG nova.compute.manager [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Instance network_info: |[{"id": "424720b1-be69-4064-90d3-6c8625c811b8", "address": "fa:16:3e:1e:36:5b", "network": {"id": "826294d5-f5eb-469a-9ec9-f18a05fdaa3c", "bridge": "br-int", "label": "tempest-network-smoke--487870044", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424720b1-be", "ovs_interfaceid": "424720b1-be69-4064-90d3-6c8625c811b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:43:13 np0005539564 nova_compute[226295]: 2025-11-29 08:43:13.678 226310 DEBUG oslo_concurrency.lockutils [req-15e4f647-26ac-4348-b5a2-b5ec8519e796 req-53990223-870c-473e-943c-4f04244e7f38 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-32b7350a-e995-40b6-89fe-5c543ecdd0c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:43:13 np0005539564 nova_compute[226295]: 2025-11-29 08:43:13.678 226310 DEBUG nova.network.neutron [req-15e4f647-26ac-4348-b5a2-b5ec8519e796 req-53990223-870c-473e-943c-4f04244e7f38 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Refreshing network info cache for port 424720b1-be69-4064-90d3-6c8625c811b8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:43:13 np0005539564 nova_compute[226295]: 2025-11-29 08:43:13.681 226310 DEBUG nova.virt.libvirt.driver [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Start _get_guest_xml network_info=[{"id": "424720b1-be69-4064-90d3-6c8625c811b8", "address": "fa:16:3e:1e:36:5b", "network": {"id": "826294d5-f5eb-469a-9ec9-f18a05fdaa3c", "bridge": "br-int", "label": "tempest-network-smoke--487870044", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424720b1-be", "ovs_interfaceid": "424720b1-be69-4064-90d3-6c8625c811b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}}
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:43:13 np0005539564 nova_compute[226295]: 2025-11-29 08:43:13.684 226310 WARNING nova.virt.libvirt.driver [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:43:13 np0005539564 nova_compute[226295]: 2025-11-29 08:43:13.688 226310 DEBUG nova.virt.libvirt.host [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:43:13 np0005539564 nova_compute[226295]: 2025-11-29 08:43:13.689 226310 DEBUG nova.virt.libvirt.host [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:43:13 np0005539564 nova_compute[226295]: 2025-11-29 08:43:13.692 226310 DEBUG nova.virt.libvirt.host [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:43:13 np0005539564 nova_compute[226295]: 2025-11-29 08:43:13.692 226310 DEBUG nova.virt.libvirt.host [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:43:13 np0005539564 nova_compute[226295]: 2025-11-29 08:43:13.693 226310 DEBUG nova.virt.libvirt.driver [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:43:13 np0005539564 nova_compute[226295]: 2025-11-29 08:43:13.693 226310 DEBUG nova.virt.hardware [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:43:13 np0005539564 nova_compute[226295]: 2025-11-29 08:43:13.694 226310 DEBUG nova.virt.hardware [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:43:13 np0005539564 nova_compute[226295]: 2025-11-29 08:43:13.694 226310 DEBUG nova.virt.hardware [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:43:13 np0005539564 nova_compute[226295]: 2025-11-29 08:43:13.694 226310 DEBUG nova.virt.hardware [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:43:13 np0005539564 nova_compute[226295]: 2025-11-29 08:43:13.694 226310 DEBUG nova.virt.hardware [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:43:13 np0005539564 nova_compute[226295]: 2025-11-29 08:43:13.694 226310 DEBUG nova.virt.hardware [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:43:13 np0005539564 nova_compute[226295]: 2025-11-29 08:43:13.695 226310 DEBUG nova.virt.hardware [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:43:13 np0005539564 nova_compute[226295]: 2025-11-29 08:43:13.695 226310 DEBUG nova.virt.hardware [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:43:13 np0005539564 nova_compute[226295]: 2025-11-29 08:43:13.695 226310 DEBUG nova.virt.hardware [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:43:13 np0005539564 nova_compute[226295]: 2025-11-29 08:43:13.695 226310 DEBUG nova.virt.hardware [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:43:13 np0005539564 nova_compute[226295]: 2025-11-29 08:43:13.695 226310 DEBUG nova.virt.hardware [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:43:13 np0005539564 nova_compute[226295]: 2025-11-29 08:43:13.698 226310 DEBUG oslo_concurrency.processutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:43:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:43:14 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2297182486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:43:14 np0005539564 nova_compute[226295]: 2025-11-29 08:43:14.131 226310 DEBUG oslo_concurrency.processutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:14 np0005539564 nova_compute[226295]: 2025-11-29 08:43:14.158 226310 DEBUG nova.storage.rbd_utils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 32b7350a-e995-40b6-89fe-5c543ecdd0c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:14 np0005539564 nova_compute[226295]: 2025-11-29 08:43:14.162 226310 DEBUG oslo_concurrency.processutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:14.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:43:14 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1698676140' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:43:14 np0005539564 nova_compute[226295]: 2025-11-29 08:43:14.590 226310 DEBUG oslo_concurrency.processutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:14 np0005539564 nova_compute[226295]: 2025-11-29 08:43:14.592 226310 DEBUG nova.virt.libvirt.vif [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:43:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-1291068993',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-1291068993',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1086021155-ac',id=189,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGwpUqO5gzZ3Kiu9Q9kE3XcG8UjwJDSKNvGi4SJG6g2Btnk9SXkBhw2wnT5/sd4LSjXZexDSd+ENYEJXfD2i6ueU6jk14FmGOgrEhWzS31tOPvfl4SVZAco45HP7sMpJnw==',key_name='tempest-TestSecurityGroupsBasicOps-581417246',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='03858b11000d4b57bd3659c3083eed47',ramdisk_id='',reservation_id='r-b0av42hb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1086021155',owner_user_name='tempest-TestSecurityGroupsBasicOps-1086021155-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:43:08Z,user_data=None,user_id='a45da8ed818144f8bd6e00d233fcb5d2',uuid=32b7350a-e995-40b6-89fe-5c543ecdd0c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "424720b1-be69-4064-90d3-6c8625c811b8", "address": "fa:16:3e:1e:36:5b", "network": {"id": "826294d5-f5eb-469a-9ec9-f18a05fdaa3c", "bridge": "br-int", "label": "tempest-network-smoke--487870044", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424720b1-be", "ovs_interfaceid": "424720b1-be69-4064-90d3-6c8625c811b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:43:14 np0005539564 nova_compute[226295]: 2025-11-29 08:43:14.592 226310 DEBUG nova.network.os_vif_util [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converting VIF {"id": "424720b1-be69-4064-90d3-6c8625c811b8", "address": "fa:16:3e:1e:36:5b", "network": {"id": "826294d5-f5eb-469a-9ec9-f18a05fdaa3c", "bridge": "br-int", "label": "tempest-network-smoke--487870044", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424720b1-be", "ovs_interfaceid": "424720b1-be69-4064-90d3-6c8625c811b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:43:14 np0005539564 nova_compute[226295]: 2025-11-29 08:43:14.593 226310 DEBUG nova.network.os_vif_util [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:36:5b,bridge_name='br-int',has_traffic_filtering=True,id=424720b1-be69-4064-90d3-6c8625c811b8,network=Network(826294d5-f5eb-469a-9ec9-f18a05fdaa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424720b1-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:43:14 np0005539564 nova_compute[226295]: 2025-11-29 08:43:14.594 226310 DEBUG nova.objects.instance [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lazy-loading 'pci_devices' on Instance uuid 32b7350a-e995-40b6-89fe-5c543ecdd0c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:43:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:15 np0005539564 nova_compute[226295]: 2025-11-29 08:43:15.006 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:15.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:15 np0005539564 nova_compute[226295]: 2025-11-29 08:43:15.465 226310 DEBUG nova.virt.libvirt.driver [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:43:15 np0005539564 nova_compute[226295]:  <uuid>32b7350a-e995-40b6-89fe-5c543ecdd0c8</uuid>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:  <name>instance-000000bd</name>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-1291068993</nova:name>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:43:13</nova:creationTime>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:43:15 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:        <nova:user uuid="a45da8ed818144f8bd6e00d233fcb5d2">tempest-TestSecurityGroupsBasicOps-1086021155-project-member</nova:user>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:        <nova:project uuid="03858b11000d4b57bd3659c3083eed47">tempest-TestSecurityGroupsBasicOps-1086021155</nova:project>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:        <nova:port uuid="424720b1-be69-4064-90d3-6c8625c811b8">
Nov 29 03:43:15 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <entry name="serial">32b7350a-e995-40b6-89fe-5c543ecdd0c8</entry>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <entry name="uuid">32b7350a-e995-40b6-89fe-5c543ecdd0c8</entry>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/32b7350a-e995-40b6-89fe-5c543ecdd0c8_disk">
Nov 29 03:43:15 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:43:15 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/32b7350a-e995-40b6-89fe-5c543ecdd0c8_disk.config">
Nov 29 03:43:15 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:43:15 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:1e:36:5b"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <target dev="tap424720b1-be"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/32b7350a-e995-40b6-89fe-5c543ecdd0c8/console.log" append="off"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:43:15 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:43:15 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:43:15 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:43:15 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:43:15 np0005539564 nova_compute[226295]: 2025-11-29 08:43:15.466 226310 DEBUG nova.compute.manager [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Preparing to wait for external event network-vif-plugged-424720b1-be69-4064-90d3-6c8625c811b8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:43:15 np0005539564 nova_compute[226295]: 2025-11-29 08:43:15.467 226310 DEBUG oslo_concurrency.lockutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "32b7350a-e995-40b6-89fe-5c543ecdd0c8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:15 np0005539564 nova_compute[226295]: 2025-11-29 08:43:15.467 226310 DEBUG oslo_concurrency.lockutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "32b7350a-e995-40b6-89fe-5c543ecdd0c8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:15 np0005539564 nova_compute[226295]: 2025-11-29 08:43:15.467 226310 DEBUG oslo_concurrency.lockutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "32b7350a-e995-40b6-89fe-5c543ecdd0c8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:15 np0005539564 nova_compute[226295]: 2025-11-29 08:43:15.468 226310 DEBUG nova.virt.libvirt.vif [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:43:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-1291068993',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-1291068993',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1086021155-ac',id=189,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGwpUqO5gzZ3Kiu9Q9kE3XcG8UjwJDSKNvGi4SJG6g2Btnk9SXkBhw2wnT5/sd4LSjXZexDSd+ENYEJXfD2i6ueU6jk14FmGOgrEhWzS31tOPvfl4SVZAco45HP7sMpJnw==',key_name='tempest-TestSecurityGroupsBasicOps-581417246',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='03858b11000d4b57bd3659c3083eed47',ramdisk_id='',reservation_id='r-b0av42hb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1086021155',owner_user_name='tempest-TestSecurityGroupsBasicOps-1086021155-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:43:08Z,user_data=None,user_id='a45da8ed818144f8bd6e00d233fcb5d2',uuid=32b7350a-e995-40b6-89fe-5c543ecdd0c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "424720b1-be69-4064-90d3-6c8625c811b8", "address": "fa:16:3e:1e:36:5b", "network": {"id": "826294d5-f5eb-469a-9ec9-f18a05fdaa3c", "bridge": "br-int", "label": "tempest-network-smoke--487870044", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424720b1-be", "ovs_interfaceid": "424720b1-be69-4064-90d3-6c8625c811b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:43:15 np0005539564 nova_compute[226295]: 2025-11-29 08:43:15.468 226310 DEBUG nova.network.os_vif_util [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converting VIF {"id": "424720b1-be69-4064-90d3-6c8625c811b8", "address": "fa:16:3e:1e:36:5b", "network": {"id": "826294d5-f5eb-469a-9ec9-f18a05fdaa3c", "bridge": "br-int", "label": "tempest-network-smoke--487870044", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424720b1-be", "ovs_interfaceid": "424720b1-be69-4064-90d3-6c8625c811b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:43:15 np0005539564 nova_compute[226295]: 2025-11-29 08:43:15.469 226310 DEBUG nova.network.os_vif_util [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:36:5b,bridge_name='br-int',has_traffic_filtering=True,id=424720b1-be69-4064-90d3-6c8625c811b8,network=Network(826294d5-f5eb-469a-9ec9-f18a05fdaa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424720b1-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:43:15 np0005539564 nova_compute[226295]: 2025-11-29 08:43:15.469 226310 DEBUG os_vif [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:36:5b,bridge_name='br-int',has_traffic_filtering=True,id=424720b1-be69-4064-90d3-6c8625c811b8,network=Network(826294d5-f5eb-469a-9ec9-f18a05fdaa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424720b1-be') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 03:43:15 np0005539564 nova_compute[226295]: 2025-11-29 08:43:15.470 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:15 np0005539564 nova_compute[226295]: 2025-11-29 08:43:15.470 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:43:15 np0005539564 nova_compute[226295]: 2025-11-29 08:43:15.471 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:43:15 np0005539564 nova_compute[226295]: 2025-11-29 08:43:15.475 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:15 np0005539564 nova_compute[226295]: 2025-11-29 08:43:15.475 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap424720b1-be, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:43:15 np0005539564 nova_compute[226295]: 2025-11-29 08:43:15.476 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap424720b1-be, col_values=(('external_ids', {'iface-id': '424720b1-be69-4064-90d3-6c8625c811b8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:36:5b', 'vm-uuid': '32b7350a-e995-40b6-89fe-5c543ecdd0c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:43:15 np0005539564 nova_compute[226295]: 2025-11-29 08:43:15.478 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:15 np0005539564 NetworkManager[48997]: <info>  [1764405795.4799] manager: (tap424720b1-be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/342)
Nov 29 03:43:15 np0005539564 nova_compute[226295]: 2025-11-29 08:43:15.482 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:43:15 np0005539564 nova_compute[226295]: 2025-11-29 08:43:15.490 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:15 np0005539564 nova_compute[226295]: 2025-11-29 08:43:15.491 226310 INFO os_vif [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:36:5b,bridge_name='br-int',has_traffic_filtering=True,id=424720b1-be69-4064-90d3-6c8625c811b8,network=Network(826294d5-f5eb-469a-9ec9-f18a05fdaa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424720b1-be')
Nov 29 03:43:15 np0005539564 nova_compute[226295]: 2025-11-29 08:43:15.559 226310 DEBUG nova.virt.libvirt.driver [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:43:15 np0005539564 nova_compute[226295]: 2025-11-29 08:43:15.560 226310 DEBUG nova.virt.libvirt.driver [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:43:15 np0005539564 nova_compute[226295]: 2025-11-29 08:43:15.562 226310 DEBUG nova.virt.libvirt.driver [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] No VIF found with MAC fa:16:3e:1e:36:5b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:43:15 np0005539564 nova_compute[226295]: 2025-11-29 08:43:15.562 226310 INFO nova.virt.libvirt.driver [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Using config drive
Nov 29 03:43:15 np0005539564 nova_compute[226295]: 2025-11-29 08:43:15.593 226310 DEBUG nova.storage.rbd_utils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 32b7350a-e995-40b6-89fe-5c543ecdd0c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:43:16 np0005539564 nova_compute[226295]: 2025-11-29 08:43:16.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:43:16 np0005539564 nova_compute[226295]: 2025-11-29 08:43:16.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:43:16 np0005539564 nova_compute[226295]: 2025-11-29 08:43:16.432 226310 INFO nova.virt.libvirt.driver [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Creating config drive at /var/lib/nova/instances/32b7350a-e995-40b6-89fe-5c543ecdd0c8/disk.config
Nov 29 03:43:16 np0005539564 nova_compute[226295]: 2025-11-29 08:43:16.438 226310 DEBUG oslo_concurrency.processutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/32b7350a-e995-40b6-89fe-5c543ecdd0c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0w3bavfg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:43:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:43:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:16.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:43:16 np0005539564 nova_compute[226295]: 2025-11-29 08:43:16.607 226310 DEBUG oslo_concurrency.processutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/32b7350a-e995-40b6-89fe-5c543ecdd0c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0w3bavfg" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:43:16 np0005539564 nova_compute[226295]: 2025-11-29 08:43:16.644 226310 DEBUG nova.storage.rbd_utils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image 32b7350a-e995-40b6-89fe-5c543ecdd0c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:43:16 np0005539564 nova_compute[226295]: 2025-11-29 08:43:16.649 226310 DEBUG oslo_concurrency.processutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/32b7350a-e995-40b6-89fe-5c543ecdd0c8/disk.config 32b7350a-e995-40b6-89fe-5c543ecdd0c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:43:16 np0005539564 nova_compute[226295]: 2025-11-29 08:43:16.749 226310 DEBUG nova.network.neutron [req-15e4f647-26ac-4348-b5a2-b5ec8519e796 req-53990223-870c-473e-943c-4f04244e7f38 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Updated VIF entry in instance network info cache for port 424720b1-be69-4064-90d3-6c8625c811b8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:43:16 np0005539564 nova_compute[226295]: 2025-11-29 08:43:16.750 226310 DEBUG nova.network.neutron [req-15e4f647-26ac-4348-b5a2-b5ec8519e796 req-53990223-870c-473e-943c-4f04244e7f38 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Updating instance_info_cache with network_info: [{"id": "424720b1-be69-4064-90d3-6c8625c811b8", "address": "fa:16:3e:1e:36:5b", "network": {"id": "826294d5-f5eb-469a-9ec9-f18a05fdaa3c", "bridge": "br-int", "label": "tempest-network-smoke--487870044", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424720b1-be", "ovs_interfaceid": "424720b1-be69-4064-90d3-6c8625c811b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:43:16 np0005539564 nova_compute[226295]: 2025-11-29 08:43:16.814 226310 DEBUG oslo_concurrency.lockutils [req-15e4f647-26ac-4348-b5a2-b5ec8519e796 req-53990223-870c-473e-943c-4f04244e7f38 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-32b7350a-e995-40b6-89fe-5c543ecdd0c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:43:16 np0005539564 nova_compute[226295]: 2025-11-29 08:43:16.891 226310 DEBUG oslo_concurrency.processutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/32b7350a-e995-40b6-89fe-5c543ecdd0c8/disk.config 32b7350a-e995-40b6-89fe-5c543ecdd0c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.243s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:43:16 np0005539564 nova_compute[226295]: 2025-11-29 08:43:16.894 226310 INFO nova.virt.libvirt.driver [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Deleting local config drive /var/lib/nova/instances/32b7350a-e995-40b6-89fe-5c543ecdd0c8/disk.config because it was imported into RBD.
Nov 29 03:43:16 np0005539564 kernel: tap424720b1-be: entered promiscuous mode
Nov 29 03:43:16 np0005539564 NetworkManager[48997]: <info>  [1764405796.9483] manager: (tap424720b1-be): new Tun device (/org/freedesktop/NetworkManager/Devices/343)
Nov 29 03:43:16 np0005539564 ovn_controller[130591]: 2025-11-29T08:43:16Z|00747|binding|INFO|Claiming lport 424720b1-be69-4064-90d3-6c8625c811b8 for this chassis.
Nov 29 03:43:16 np0005539564 ovn_controller[130591]: 2025-11-29T08:43:16Z|00748|binding|INFO|424720b1-be69-4064-90d3-6c8625c811b8: Claiming fa:16:3e:1e:36:5b 10.100.0.14
Nov 29 03:43:16 np0005539564 nova_compute[226295]: 2025-11-29 08:43:16.984 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:16 np0005539564 nova_compute[226295]: 2025-11-29 08:43:16.990 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.001 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:36:5b 10.100.0.14'], port_security=['fa:16:3e:1e:36:5b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '32b7350a-e995-40b6-89fe-5c543ecdd0c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-826294d5-f5eb-469a-9ec9-f18a05fdaa3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '03858b11000d4b57bd3659c3083eed47', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1a125bd8-3063-451d-9def-2dc2c28d61df 2feff04b-6141-4213-8893-fb38d7924b9d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7cdd894-e2ae-4700-83cb-f8f82b6152b9, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=424720b1-be69-4064-90d3-6c8625c811b8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.003 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 424720b1-be69-4064-90d3-6c8625c811b8 in datapath 826294d5-f5eb-469a-9ec9-f18a05fdaa3c bound to our chassis
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.004 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 826294d5-f5eb-469a-9ec9-f18a05fdaa3c
Nov 29 03:43:17 np0005539564 systemd-udevd[297835]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.016 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bc9a98e6-1c55-4a2d-8fa3-ae173e8f840f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.017 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap826294d5-f1 in ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 03:43:17 np0005539564 systemd-machined[190128]: New machine qemu-88-instance-000000bd.
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.021 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap826294d5-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.021 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3a1f2176-0aea-4d54-8cd1-ddab2cb05680]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.022 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc0a96d-6a39-4498-8ced-4c92f24ee021]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:43:17 np0005539564 NetworkManager[48997]: <info>  [1764405797.0299] device (tap424720b1-be): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:43:17 np0005539564 NetworkManager[48997]: <info>  [1764405797.0310] device (tap424720b1-be): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.034 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[3cad7bf3-8856-4903-b321-ae22e15578ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:43:17 np0005539564 ovn_controller[130591]: 2025-11-29T08:43:17Z|00749|binding|INFO|Setting lport 424720b1-be69-4064-90d3-6c8625c811b8 ovn-installed in OVS
Nov 29 03:43:17 np0005539564 ovn_controller[130591]: 2025-11-29T08:43:17Z|00750|binding|INFO|Setting lport 424720b1-be69-4064-90d3-6c8625c811b8 up in Southbound
Nov 29 03:43:17 np0005539564 nova_compute[226295]: 2025-11-29 08:43:17.050 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.051 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e96e08c8-2561-4359-8c71-195949535db9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:43:17 np0005539564 systemd[1]: Started Virtual Machine qemu-88-instance-000000bd.
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.077 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[9caa6ae6-0315-4772-954d-904cfa845e20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.082 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8b84ff74-aeec-471b-b670-eab7be0c4c09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:43:17 np0005539564 NetworkManager[48997]: <info>  [1764405797.0838] manager: (tap826294d5-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/344)
Nov 29 03:43:17 np0005539564 systemd-udevd[297839]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.115 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[45e04e21-9c92-49c1-97c2-e285c7df84b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.119 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[eba53dae-68c7-4405-a540-f49a04e015cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:43:17 np0005539564 NetworkManager[48997]: <info>  [1764405797.1415] device (tap826294d5-f0): carrier: link connected
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.146 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[cb34ef8e-8491-408c-ace3-afc9df8e0504]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.163 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[98bc2efa-7ee4-494e-b863-65f3232c3a84]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap826294d5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:a9:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 226], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 877182, 'reachable_time': 30421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297868, 'error': None, 'target': 'ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.180 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[252daf10-d1bc-4aa1-837e-36a2fce98ce5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe35:a908'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 877182, 'tstamp': 877182}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297869, 'error': None, 'target': 'ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.202 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[09af3ec1-fc61-4845-8f77-8f544d7cf362]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap826294d5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:a9:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 226], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 877182, 'reachable_time': 30421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297870, 'error': None, 'target': 'ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.236 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d266837d-3c37-47dd-b6e1-13160f82858d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.318 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4d4f2457-ef79-4891-ad59-5f28a1b10b7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.320 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap826294d5-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.320 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.321 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap826294d5-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:43:17 np0005539564 nova_compute[226295]: 2025-11-29 08:43:17.322 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:17 np0005539564 NetworkManager[48997]: <info>  [1764405797.3230] manager: (tap826294d5-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/345)
Nov 29 03:43:17 np0005539564 kernel: tap826294d5-f0: entered promiscuous mode
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.325 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap826294d5-f0, col_values=(('external_ids', {'iface-id': '1e48b477-303e-412f-b368-d958453e1fe0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:17 np0005539564 nova_compute[226295]: 2025-11-29 08:43:17.326 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:17 np0005539564 ovn_controller[130591]: 2025-11-29T08:43:17Z|00751|binding|INFO|Releasing lport 1e48b477-303e-412f-b368-d958453e1fe0 from this chassis (sb_readonly=1)
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.339 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/826294d5-f5eb-469a-9ec9-f18a05fdaa3c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/826294d5-f5eb-469a-9ec9-f18a05fdaa3c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:43:17 np0005539564 nova_compute[226295]: 2025-11-29 08:43:17.339 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.341 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6280c094-2e7e-4fe5-af7a-a8776551ec07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.342 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-826294d5-f5eb-469a-9ec9-f18a05fdaa3c
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/826294d5-f5eb-469a-9ec9-f18a05fdaa3c.pid.haproxy
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 826294d5-f5eb-469a-9ec9-f18a05fdaa3c
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:43:17 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:17.343 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c', 'env', 'PROCESS_TAG=haproxy-826294d5-f5eb-469a-9ec9-f18a05fdaa3c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/826294d5-f5eb-469a-9ec9-f18a05fdaa3c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:43:17 np0005539564 nova_compute[226295]: 2025-11-29 08:43:17.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:17 np0005539564 nova_compute[226295]: 2025-11-29 08:43:17.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:43:17 np0005539564 nova_compute[226295]: 2025-11-29 08:43:17.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:43:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:17.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:17 np0005539564 nova_compute[226295]: 2025-11-29 08:43:17.496 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405797.4955065, 32b7350a-e995-40b6-89fe-5c543ecdd0c8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:43:17 np0005539564 nova_compute[226295]: 2025-11-29 08:43:17.496 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] VM Started (Lifecycle Event)#033[00m
Nov 29 03:43:17 np0005539564 podman[297945]: 2025-11-29 08:43:17.774158825 +0000 UTC m=+0.056966292 container create a8a856aa30d9b8d060e2df8314d54cb2c860c2815b7483231d3b009b289e2e80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 03:43:17 np0005539564 systemd[1]: Started libpod-conmon-a8a856aa30d9b8d060e2df8314d54cb2c860c2815b7483231d3b009b289e2e80.scope.
Nov 29 03:43:17 np0005539564 podman[297945]: 2025-11-29 08:43:17.748065679 +0000 UTC m=+0.030873176 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:43:17 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:43:17 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/382c97ff292133921d83716681ba4b204ed7c89cfb3e3f038b0cc31e7b79466b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:17 np0005539564 podman[297945]: 2025-11-29 08:43:17.877802868 +0000 UTC m=+0.160610355 container init a8a856aa30d9b8d060e2df8314d54cb2c860c2815b7483231d3b009b289e2e80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 03:43:17 np0005539564 podman[297945]: 2025-11-29 08:43:17.88447847 +0000 UTC m=+0.167285937 container start a8a856aa30d9b8d060e2df8314d54cb2c860c2815b7483231d3b009b289e2e80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:43:17 np0005539564 neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c[297961]: [NOTICE]   (297965) : New worker (297967) forked
Nov 29 03:43:17 np0005539564 neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c[297961]: [NOTICE]   (297965) : Loading success.
Nov 29 03:43:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:18.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:18 np0005539564 nova_compute[226295]: 2025-11-29 08:43:18.549 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:43:18 np0005539564 nova_compute[226295]: 2025-11-29 08:43:18.550 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:43:18 np0005539564 nova_compute[226295]: 2025-11-29 08:43:18.551 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:18 np0005539564 nova_compute[226295]: 2025-11-29 08:43:18.556 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405797.5002813, 32b7350a-e995-40b6-89fe-5c543ecdd0c8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:43:18 np0005539564 nova_compute[226295]: 2025-11-29 08:43:18.556 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:43:18 np0005539564 nova_compute[226295]: 2025-11-29 08:43:18.589 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:18 np0005539564 nova_compute[226295]: 2025-11-29 08:43:18.594 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:43:18 np0005539564 nova_compute[226295]: 2025-11-29 08:43:18.629 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:43:19 np0005539564 nova_compute[226295]: 2025-11-29 08:43:19.105 226310 DEBUG nova.compute.manager [req-edda32b6-b866-4541-aa16-95d46bf94993 req-330f9ca3-024f-4ab6-b436-2cb43a488691 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Received event network-vif-plugged-424720b1-be69-4064-90d3-6c8625c811b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:19 np0005539564 nova_compute[226295]: 2025-11-29 08:43:19.106 226310 DEBUG oslo_concurrency.lockutils [req-edda32b6-b866-4541-aa16-95d46bf94993 req-330f9ca3-024f-4ab6-b436-2cb43a488691 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "32b7350a-e995-40b6-89fe-5c543ecdd0c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:19 np0005539564 nova_compute[226295]: 2025-11-29 08:43:19.106 226310 DEBUG oslo_concurrency.lockutils [req-edda32b6-b866-4541-aa16-95d46bf94993 req-330f9ca3-024f-4ab6-b436-2cb43a488691 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "32b7350a-e995-40b6-89fe-5c543ecdd0c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:19 np0005539564 nova_compute[226295]: 2025-11-29 08:43:19.107 226310 DEBUG oslo_concurrency.lockutils [req-edda32b6-b866-4541-aa16-95d46bf94993 req-330f9ca3-024f-4ab6-b436-2cb43a488691 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "32b7350a-e995-40b6-89fe-5c543ecdd0c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:19 np0005539564 nova_compute[226295]: 2025-11-29 08:43:19.107 226310 DEBUG nova.compute.manager [req-edda32b6-b866-4541-aa16-95d46bf94993 req-330f9ca3-024f-4ab6-b436-2cb43a488691 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Processing event network-vif-plugged-424720b1-be69-4064-90d3-6c8625c811b8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:43:19 np0005539564 nova_compute[226295]: 2025-11-29 08:43:19.108 226310 DEBUG nova.compute.manager [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:43:19 np0005539564 nova_compute[226295]: 2025-11-29 08:43:19.112 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405799.1125934, 32b7350a-e995-40b6-89fe-5c543ecdd0c8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:43:19 np0005539564 nova_compute[226295]: 2025-11-29 08:43:19.113 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:43:19 np0005539564 nova_compute[226295]: 2025-11-29 08:43:19.115 226310 DEBUG nova.virt.libvirt.driver [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:43:19 np0005539564 nova_compute[226295]: 2025-11-29 08:43:19.120 226310 INFO nova.virt.libvirt.driver [-] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Instance spawned successfully.#033[00m
Nov 29 03:43:19 np0005539564 nova_compute[226295]: 2025-11-29 08:43:19.120 226310 DEBUG nova.virt.libvirt.driver [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:43:19 np0005539564 nova_compute[226295]: 2025-11-29 08:43:19.148 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:19 np0005539564 nova_compute[226295]: 2025-11-29 08:43:19.155 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:43:19 np0005539564 nova_compute[226295]: 2025-11-29 08:43:19.159 226310 DEBUG nova.virt.libvirt.driver [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:19 np0005539564 nova_compute[226295]: 2025-11-29 08:43:19.160 226310 DEBUG nova.virt.libvirt.driver [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:19 np0005539564 nova_compute[226295]: 2025-11-29 08:43:19.161 226310 DEBUG nova.virt.libvirt.driver [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:19 np0005539564 nova_compute[226295]: 2025-11-29 08:43:19.162 226310 DEBUG nova.virt.libvirt.driver [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:19 np0005539564 nova_compute[226295]: 2025-11-29 08:43:19.162 226310 DEBUG nova.virt.libvirt.driver [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:19 np0005539564 nova_compute[226295]: 2025-11-29 08:43:19.163 226310 DEBUG nova.virt.libvirt.driver [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:19 np0005539564 nova_compute[226295]: 2025-11-29 08:43:19.191 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:43:19 np0005539564 nova_compute[226295]: 2025-11-29 08:43:19.227 226310 INFO nova.compute.manager [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Took 10.71 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:43:19 np0005539564 nova_compute[226295]: 2025-11-29 08:43:19.228 226310 DEBUG nova.compute.manager [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:19 np0005539564 nova_compute[226295]: 2025-11-29 08:43:19.315 226310 INFO nova.compute.manager [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Took 12.03 seconds to build instance.#033[00m
Nov 29 03:43:19 np0005539564 nova_compute[226295]: 2025-11-29 08:43:19.382 226310 DEBUG oslo_concurrency.lockutils [None req-ba5ba0e2-440f-460c-ab08-c8fd0c4f849d a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "32b7350a-e995-40b6-89fe-5c543ecdd0c8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.196s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:19.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:20 np0005539564 nova_compute[226295]: 2025-11-29 08:43:20.007 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:20.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:20 np0005539564 nova_compute[226295]: 2025-11-29 08:43:20.478 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:21 np0005539564 nova_compute[226295]: 2025-11-29 08:43:21.220 226310 DEBUG nova.compute.manager [req-38d81d53-d6ef-4f83-b1c7-44681def2a44 req-8e2966a0-5e6d-4d84-94d5-34f3220a8b8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Received event network-vif-plugged-424720b1-be69-4064-90d3-6c8625c811b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:21 np0005539564 nova_compute[226295]: 2025-11-29 08:43:21.220 226310 DEBUG oslo_concurrency.lockutils [req-38d81d53-d6ef-4f83-b1c7-44681def2a44 req-8e2966a0-5e6d-4d84-94d5-34f3220a8b8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "32b7350a-e995-40b6-89fe-5c543ecdd0c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:21 np0005539564 nova_compute[226295]: 2025-11-29 08:43:21.221 226310 DEBUG oslo_concurrency.lockutils [req-38d81d53-d6ef-4f83-b1c7-44681def2a44 req-8e2966a0-5e6d-4d84-94d5-34f3220a8b8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "32b7350a-e995-40b6-89fe-5c543ecdd0c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:21 np0005539564 nova_compute[226295]: 2025-11-29 08:43:21.221 226310 DEBUG oslo_concurrency.lockutils [req-38d81d53-d6ef-4f83-b1c7-44681def2a44 req-8e2966a0-5e6d-4d84-94d5-34f3220a8b8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "32b7350a-e995-40b6-89fe-5c543ecdd0c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:21 np0005539564 nova_compute[226295]: 2025-11-29 08:43:21.221 226310 DEBUG nova.compute.manager [req-38d81d53-d6ef-4f83-b1c7-44681def2a44 req-8e2966a0-5e6d-4d84-94d5-34f3220a8b8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] No waiting events found dispatching network-vif-plugged-424720b1-be69-4064-90d3-6c8625c811b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:43:21 np0005539564 nova_compute[226295]: 2025-11-29 08:43:21.221 226310 WARNING nova.compute.manager [req-38d81d53-d6ef-4f83-b1c7-44681def2a44 req-8e2966a0-5e6d-4d84-94d5-34f3220a8b8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Received unexpected event network-vif-plugged-424720b1-be69-4064-90d3-6c8625c811b8 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:43:21 np0005539564 nova_compute[226295]: 2025-11-29 08:43:21.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:21.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:21 np0005539564 podman[297978]: 2025-11-29 08:43:21.527816066 +0000 UTC m=+0.079985024 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 03:43:21 np0005539564 podman[297977]: 2025-11-29 08:43:21.565581628 +0000 UTC m=+0.118452146 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:43:21 np0005539564 podman[297976]: 2025-11-29 08:43:21.573020199 +0000 UTC m=+0.122933176 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Nov 29 03:43:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:22.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:22 np0005539564 nova_compute[226295]: 2025-11-29 08:43:22.850 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:22.850 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=72, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=71) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:43:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:22.852 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:43:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:43:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:23.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:43:23 np0005539564 NetworkManager[48997]: <info>  [1764405803.7030] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/346)
Nov 29 03:43:23 np0005539564 NetworkManager[48997]: <info>  [1764405803.7040] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/347)
Nov 29 03:43:23 np0005539564 nova_compute[226295]: 2025-11-29 08:43:23.706 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:23 np0005539564 nova_compute[226295]: 2025-11-29 08:43:23.773 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:23 np0005539564 ovn_controller[130591]: 2025-11-29T08:43:23Z|00752|binding|INFO|Releasing lport 1e48b477-303e-412f-b368-d958453e1fe0 from this chassis (sb_readonly=0)
Nov 29 03:43:23 np0005539564 nova_compute[226295]: 2025-11-29 08:43:23.791 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:24 np0005539564 nova_compute[226295]: 2025-11-29 08:43:24.421 226310 DEBUG nova.compute.manager [req-547f4c1d-5425-4a6a-a9ea-49b8892a79e0 req-bc7d20c0-0c7d-4e79-98ac-0a9167c98d3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Received event network-changed-424720b1-be69-4064-90d3-6c8625c811b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:24 np0005539564 nova_compute[226295]: 2025-11-29 08:43:24.421 226310 DEBUG nova.compute.manager [req-547f4c1d-5425-4a6a-a9ea-49b8892a79e0 req-bc7d20c0-0c7d-4e79-98ac-0a9167c98d3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Refreshing instance network info cache due to event network-changed-424720b1-be69-4064-90d3-6c8625c811b8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:43:24 np0005539564 nova_compute[226295]: 2025-11-29 08:43:24.422 226310 DEBUG oslo_concurrency.lockutils [req-547f4c1d-5425-4a6a-a9ea-49b8892a79e0 req-bc7d20c0-0c7d-4e79-98ac-0a9167c98d3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-32b7350a-e995-40b6-89fe-5c543ecdd0c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:43:24 np0005539564 nova_compute[226295]: 2025-11-29 08:43:24.422 226310 DEBUG oslo_concurrency.lockutils [req-547f4c1d-5425-4a6a-a9ea-49b8892a79e0 req-bc7d20c0-0c7d-4e79-98ac-0a9167c98d3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-32b7350a-e995-40b6-89fe-5c543ecdd0c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:43:24 np0005539564 nova_compute[226295]: 2025-11-29 08:43:24.422 226310 DEBUG nova.network.neutron [req-547f4c1d-5425-4a6a-a9ea-49b8892a79e0 req-bc7d20c0-0c7d-4e79-98ac-0a9167c98d3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Refreshing network info cache for port 424720b1-be69-4064-90d3-6c8625c811b8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:43:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:24.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:43:24.854 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '72'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:25 np0005539564 nova_compute[226295]: 2025-11-29 08:43:25.011 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:25.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:25 np0005539564 nova_compute[226295]: 2025-11-29 08:43:25.481 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:25 np0005539564 nova_compute[226295]: 2025-11-29 08:43:25.852 226310 DEBUG nova.network.neutron [req-547f4c1d-5425-4a6a-a9ea-49b8892a79e0 req-bc7d20c0-0c7d-4e79-98ac-0a9167c98d3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Updated VIF entry in instance network info cache for port 424720b1-be69-4064-90d3-6c8625c811b8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:43:25 np0005539564 nova_compute[226295]: 2025-11-29 08:43:25.853 226310 DEBUG nova.network.neutron [req-547f4c1d-5425-4a6a-a9ea-49b8892a79e0 req-bc7d20c0-0c7d-4e79-98ac-0a9167c98d3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Updating instance_info_cache with network_info: [{"id": "424720b1-be69-4064-90d3-6c8625c811b8", "address": "fa:16:3e:1e:36:5b", "network": {"id": "826294d5-f5eb-469a-9ec9-f18a05fdaa3c", "bridge": "br-int", "label": "tempest-network-smoke--487870044", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424720b1-be", "ovs_interfaceid": "424720b1-be69-4064-90d3-6c8625c811b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:43:25 np0005539564 nova_compute[226295]: 2025-11-29 08:43:25.875 226310 DEBUG oslo_concurrency.lockutils [req-547f4c1d-5425-4a6a-a9ea-49b8892a79e0 req-bc7d20c0-0c7d-4e79-98ac-0a9167c98d3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-32b7350a-e995-40b6-89fe-5c543ecdd0c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:43:26 np0005539564 nova_compute[226295]: 2025-11-29 08:43:26.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:26.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:27.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:28.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:29.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:30 np0005539564 nova_compute[226295]: 2025-11-29 08:43:30.015 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:30 np0005539564 nova_compute[226295]: 2025-11-29 08:43:30.483 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:30.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:31.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:32 np0005539564 nova_compute[226295]: 2025-11-29 08:43:32.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:32 np0005539564 nova_compute[226295]: 2025-11-29 08:43:32.397 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:32 np0005539564 nova_compute[226295]: 2025-11-29 08:43:32.398 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:32 np0005539564 nova_compute[226295]: 2025-11-29 08:43:32.399 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:32 np0005539564 nova_compute[226295]: 2025-11-29 08:43:32.399 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:43:32 np0005539564 nova_compute[226295]: 2025-11-29 08:43:32.399 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:32.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:43:32 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3387081859' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:43:32 np0005539564 nova_compute[226295]: 2025-11-29 08:43:32.866 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:32 np0005539564 nova_compute[226295]: 2025-11-29 08:43:32.972 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000bd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:43:32 np0005539564 nova_compute[226295]: 2025-11-29 08:43:32.973 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000bd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:43:33 np0005539564 nova_compute[226295]: 2025-11-29 08:43:33.163 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:43:33 np0005539564 nova_compute[226295]: 2025-11-29 08:43:33.165 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3988MB free_disk=20.921993255615234GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:43:33 np0005539564 nova_compute[226295]: 2025-11-29 08:43:33.166 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:33 np0005539564 nova_compute[226295]: 2025-11-29 08:43:33.166 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:33 np0005539564 nova_compute[226295]: 2025-11-29 08:43:33.297 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 32b7350a-e995-40b6-89fe-5c543ecdd0c8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:43:33 np0005539564 nova_compute[226295]: 2025-11-29 08:43:33.297 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:43:33 np0005539564 nova_compute[226295]: 2025-11-29 08:43:33.298 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:43:33 np0005539564 nova_compute[226295]: 2025-11-29 08:43:33.349 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing inventories for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:43:33 np0005539564 nova_compute[226295]: 2025-11-29 08:43:33.377 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating ProviderTree inventory for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:43:33 np0005539564 nova_compute[226295]: 2025-11-29 08:43:33.378 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating inventory in ProviderTree for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:43:33 np0005539564 nova_compute[226295]: 2025-11-29 08:43:33.396 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing aggregate associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:43:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:33.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:33 np0005539564 nova_compute[226295]: 2025-11-29 08:43:33.450 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing trait associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:43:33 np0005539564 nova_compute[226295]: 2025-11-29 08:43:33.502 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:33 np0005539564 ovn_controller[130591]: 2025-11-29T08:43:33Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1e:36:5b 10.100.0.14
Nov 29 03:43:33 np0005539564 ovn_controller[130591]: 2025-11-29T08:43:33Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1e:36:5b 10.100.0.14
Nov 29 03:43:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:43:34 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3845253938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:43:34 np0005539564 nova_compute[226295]: 2025-11-29 08:43:34.285 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.783s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:34 np0005539564 nova_compute[226295]: 2025-11-29 08:43:34.293 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:43:34 np0005539564 nova_compute[226295]: 2025-11-29 08:43:34.348 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:43:34 np0005539564 nova_compute[226295]: 2025-11-29 08:43:34.397 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:43:34 np0005539564 nova_compute[226295]: 2025-11-29 08:43:34.397 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.231s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:34.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #139. Immutable memtables: 0.
Nov 29 03:43:34 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:43:34.756256) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:43:34 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 139
Nov 29 03:43:34 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405814756299, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 952, "num_deletes": 252, "total_data_size": 1955294, "memory_usage": 1988792, "flush_reason": "Manual Compaction"}
Nov 29 03:43:34 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #140: started
Nov 29 03:43:34 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405814764471, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 140, "file_size": 851253, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69669, "largest_seqno": 70616, "table_properties": {"data_size": 847495, "index_size": 1473, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9909, "raw_average_key_size": 21, "raw_value_size": 839594, "raw_average_value_size": 1782, "num_data_blocks": 62, "num_entries": 471, "num_filter_entries": 471, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405747, "oldest_key_time": 1764405747, "file_creation_time": 1764405814, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:43:34 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 8255 microseconds, and 3565 cpu microseconds.
Nov 29 03:43:34 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:43:34 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:43:34.764510) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #140: 851253 bytes OK
Nov 29 03:43:34 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:43:34.764540) [db/memtable_list.cc:519] [default] Level-0 commit table #140 started
Nov 29 03:43:34 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:43:34.766432) [db/memtable_list.cc:722] [default] Level-0 commit table #140: memtable #1 done
Nov 29 03:43:34 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:43:34.766447) EVENT_LOG_v1 {"time_micros": 1764405814766442, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:43:34 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:43:34.766472) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:43:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 1950526, prev total WAL file size 1950526, number of live WAL files 2.
Nov 29 03:43:34 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000136.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:43:34 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:43:34.767281) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323635' seq:72057594037927935, type:22 .. '6D6772737461740032353138' seq:0, type:0; will stop at (end)
Nov 29 03:43:34 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:43:34 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [140(831KB)], [138(12MB)]
Nov 29 03:43:34 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405814767324, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [140], "files_L6": [138], "score": -1, "input_data_size": 14182225, "oldest_snapshot_seqno": -1}
Nov 29 03:43:35 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #141: 9769 keys, 10720298 bytes, temperature: kUnknown
Nov 29 03:43:35 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405815007349, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 141, "file_size": 10720298, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10659817, "index_size": 34960, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24453, "raw_key_size": 257616, "raw_average_key_size": 26, "raw_value_size": 10490815, "raw_average_value_size": 1073, "num_data_blocks": 1322, "num_entries": 9769, "num_filter_entries": 9769, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764405814, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 141, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:43:35 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:43:35 np0005539564 nova_compute[226295]: 2025-11-29 08:43:35.019 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:35 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:43:35.007778) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10720298 bytes
Nov 29 03:43:35 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:43:35.259738) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 59.1 rd, 44.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 12.7 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(29.3) write-amplify(12.6) OK, records in: 10262, records dropped: 493 output_compression: NoCompression
Nov 29 03:43:35 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:43:35.259796) EVENT_LOG_v1 {"time_micros": 1764405815259775, "job": 88, "event": "compaction_finished", "compaction_time_micros": 240154, "compaction_time_cpu_micros": 31299, "output_level": 6, "num_output_files": 1, "total_output_size": 10720298, "num_input_records": 10262, "num_output_records": 9769, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:43:35 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:43:35 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405815260906, "job": 88, "event": "table_file_deletion", "file_number": 140}
Nov 29 03:43:35 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000138.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:43:35 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405815264452, "job": 88, "event": "table_file_deletion", "file_number": 138}
Nov 29 03:43:35 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:43:34.767171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:43:35 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:43:35.264662) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:43:35 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:43:35.264670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:43:35 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:43:35.264672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:43:35 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:43:35.264675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:43:35 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:43:35.264677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:43:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:35.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:35 np0005539564 nova_compute[226295]: 2025-11-29 08:43:35.486 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:36.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:37.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:38.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:43:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:39.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:43:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:40 np0005539564 nova_compute[226295]: 2025-11-29 08:43:40.021 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:40 np0005539564 nova_compute[226295]: 2025-11-29 08:43:40.488 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:40.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:41.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:42.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:43.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:44.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:45 np0005539564 nova_compute[226295]: 2025-11-29 08:43:45.023 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:45.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:45 np0005539564 nova_compute[226295]: 2025-11-29 08:43:45.490 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:46 np0005539564 podman[298360]: 2025-11-29 08:43:46.224454419 +0000 UTC m=+0.031584965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:43:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:43:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:46.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:43:46 np0005539564 podman[298360]: 2025-11-29 08:43:46.688712609 +0000 UTC m=+0.495843105 container create 8e80743d76e2921944699875068ece8e9aed2b7a73580609fcf026f37df9f199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 03:43:46 np0005539564 systemd[1]: Started libpod-conmon-8e80743d76e2921944699875068ece8e9aed2b7a73580609fcf026f37df9f199.scope.
Nov 29 03:43:46 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:43:46 np0005539564 podman[298360]: 2025-11-29 08:43:46.795232991 +0000 UTC m=+0.602363477 container init 8e80743d76e2921944699875068ece8e9aed2b7a73580609fcf026f37df9f199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:43:46 np0005539564 podman[298360]: 2025-11-29 08:43:46.804692107 +0000 UTC m=+0.611822573 container start 8e80743d76e2921944699875068ece8e9aed2b7a73580609fcf026f37df9f199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:43:46 np0005539564 jovial_brahmagupta[298376]: 167 167
Nov 29 03:43:46 np0005539564 systemd[1]: libpod-8e80743d76e2921944699875068ece8e9aed2b7a73580609fcf026f37df9f199.scope: Deactivated successfully.
Nov 29 03:43:46 np0005539564 podman[298360]: 2025-11-29 08:43:46.948482868 +0000 UTC m=+0.755613404 container attach 8e80743d76e2921944699875068ece8e9aed2b7a73580609fcf026f37df9f199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:43:46 np0005539564 podman[298360]: 2025-11-29 08:43:46.949630598 +0000 UTC m=+0.756761074 container died 8e80743d76e2921944699875068ece8e9aed2b7a73580609fcf026f37df9f199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 03:43:47 np0005539564 systemd[1]: var-lib-containers-storage-overlay-05adfc54a920ae9af6e19abd65c434b7b201d1d652c6d89a38100453fc476612-merged.mount: Deactivated successfully.
Nov 29 03:43:47 np0005539564 podman[298360]: 2025-11-29 08:43:47.103352847 +0000 UTC m=+0.910483303 container remove 8e80743d76e2921944699875068ece8e9aed2b7a73580609fcf026f37df9f199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_brahmagupta, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 03:43:47 np0005539564 systemd[1]: libpod-conmon-8e80743d76e2921944699875068ece8e9aed2b7a73580609fcf026f37df9f199.scope: Deactivated successfully.
Nov 29 03:43:47 np0005539564 podman[298402]: 2025-11-29 08:43:47.27754569 +0000 UTC m=+0.039679995 container create be9017c743078fe15386a9b960400be28f3cb96d4207bcc6974452ffb71f9e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shannon, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:43:47 np0005539564 systemd[1]: Started libpod-conmon-be9017c743078fe15386a9b960400be28f3cb96d4207bcc6974452ffb71f9e0c.scope.
Nov 29 03:43:47 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:43:47 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d340f4ceef766ae5936d231405b322ecc143a1ee324478e4afaa1bfdc685cc3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:47 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d340f4ceef766ae5936d231405b322ecc143a1ee324478e4afaa1bfdc685cc3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:47 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d340f4ceef766ae5936d231405b322ecc143a1ee324478e4afaa1bfdc685cc3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:47 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d340f4ceef766ae5936d231405b322ecc143a1ee324478e4afaa1bfdc685cc3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:47 np0005539564 podman[298402]: 2025-11-29 08:43:47.262135763 +0000 UTC m=+0.024270088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:43:47 np0005539564 podman[298402]: 2025-11-29 08:43:47.358569822 +0000 UTC m=+0.120704147 container init be9017c743078fe15386a9b960400be28f3cb96d4207bcc6974452ffb71f9e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:43:47 np0005539564 podman[298402]: 2025-11-29 08:43:47.370514115 +0000 UTC m=+0.132648420 container start be9017c743078fe15386a9b960400be28f3cb96d4207bcc6974452ffb71f9e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:43:47 np0005539564 podman[298402]: 2025-11-29 08:43:47.37474743 +0000 UTC m=+0.136881765 container attach be9017c743078fe15386a9b960400be28f3cb96d4207bcc6974452ffb71f9e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shannon, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:43:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:47.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:48.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]: [
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:    {
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:        "available": false,
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:        "ceph_device": false,
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:        "lsm_data": {},
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:        "lvs": [],
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:        "path": "/dev/sr0",
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:        "rejected_reasons": [
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:            "Insufficient space (<5GB)",
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:            "Has a FileSystem"
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:        ],
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:        "sys_api": {
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:            "actuators": null,
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:            "device_nodes": "sr0",
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:            "devname": "sr0",
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:            "human_readable_size": "482.00 KB",
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:            "id_bus": "ata",
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:            "model": "QEMU DVD-ROM",
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:            "nr_requests": "2",
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:            "parent": "/dev/sr0",
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:            "partitions": {},
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:            "path": "/dev/sr0",
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:            "removable": "1",
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:            "rev": "2.5+",
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:            "ro": "0",
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:            "rotational": "1",
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:            "sas_address": "",
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:            "sas_device_handle": "",
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:            "scheduler_mode": "mq-deadline",
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:            "sectors": 0,
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:            "sectorsize": "2048",
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:            "size": 493568.0,
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:            "support_discard": "2048",
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:            "type": "disk",
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:            "vendor": "QEMU"
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:        }
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]:    }
Nov 29 03:43:48 np0005539564 naughty_shannon[298418]: ]
Nov 29 03:43:48 np0005539564 systemd[1]: libpod-be9017c743078fe15386a9b960400be28f3cb96d4207bcc6974452ffb71f9e0c.scope: Deactivated successfully.
Nov 29 03:43:48 np0005539564 systemd[1]: libpod-be9017c743078fe15386a9b960400be28f3cb96d4207bcc6974452ffb71f9e0c.scope: Consumed 1.235s CPU time.
Nov 29 03:43:48 np0005539564 conmon[298418]: conmon be9017c743078fe15386 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-be9017c743078fe15386a9b960400be28f3cb96d4207bcc6974452ffb71f9e0c.scope/container/memory.events
Nov 29 03:43:48 np0005539564 podman[298402]: 2025-11-29 08:43:48.669883048 +0000 UTC m=+1.432017363 container died be9017c743078fe15386a9b960400be28f3cb96d4207bcc6974452ffb71f9e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shannon, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:43:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:49.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:50 np0005539564 nova_compute[226295]: 2025-11-29 08:43:50.026 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e408 e408: 3 total, 3 up, 3 in
Nov 29 03:43:50 np0005539564 systemd[1]: var-lib-containers-storage-overlay-d340f4ceef766ae5936d231405b322ecc143a1ee324478e4afaa1bfdc685cc3c-merged.mount: Deactivated successfully.
Nov 29 03:43:50 np0005539564 podman[298402]: 2025-11-29 08:43:50.081363324 +0000 UTC m=+2.843497669 container remove be9017c743078fe15386a9b960400be28f3cb96d4207bcc6974452ffb71f9e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shannon, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 03:43:50 np0005539564 systemd[1]: libpod-conmon-be9017c743078fe15386a9b960400be28f3cb96d4207bcc6974452ffb71f9e0c.scope: Deactivated successfully.
Nov 29 03:43:50 np0005539564 nova_compute[226295]: 2025-11-29 08:43:50.491 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:43:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:50.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:43:51 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:43:51 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:43:51 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:43:51 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:43:51 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:43:51 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:43:51 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:43:51 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:43:51 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:43:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:51.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:52 np0005539564 podman[299558]: 2025-11-29 08:43:52.509106015 +0000 UTC m=+0.068701420 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:43:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:52.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:52 np0005539564 podman[299559]: 2025-11-29 08:43:52.557140664 +0000 UTC m=+0.104974541 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:43:52 np0005539564 podman[299557]: 2025-11-29 08:43:52.567747491 +0000 UTC m=+0.120146821 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:43:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:53.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:54.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:55 np0005539564 nova_compute[226295]: 2025-11-29 08:43:55.029 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:55.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:55 np0005539564 nova_compute[226295]: 2025-11-29 08:43:55.493 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:56.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:57.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:58.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e409 e409: 3 total, 3 up, 3 in
Nov 29 03:43:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:43:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:59.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:00 np0005539564 nova_compute[226295]: 2025-11-29 08:44:00.032 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:00 np0005539564 nova_compute[226295]: 2025-11-29 08:44:00.496 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:00.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:01 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:44:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:01.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:02.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:03 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:44:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:03.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:03.760 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:03.761 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:03.762 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:04.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:05 np0005539564 nova_compute[226295]: 2025-11-29 08:44:05.036 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:05.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:05 np0005539564 nova_compute[226295]: 2025-11-29 08:44:05.498 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:06 np0005539564 nova_compute[226295]: 2025-11-29 08:44:06.390 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:06.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:07.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 e410: 3 total, 3 up, 3 in
Nov 29 03:44:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:08.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:09.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:10 np0005539564 nova_compute[226295]: 2025-11-29 08:44:10.039 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:10 np0005539564 nova_compute[226295]: 2025-11-29 08:44:10.501 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:10.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:11 np0005539564 nova_compute[226295]: 2025-11-29 08:44:11.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:11 np0005539564 nova_compute[226295]: 2025-11-29 08:44:11.342 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:44:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:11.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:12 np0005539564 nova_compute[226295]: 2025-11-29 08:44:12.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:12.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:13 np0005539564 nova_compute[226295]: 2025-11-29 08:44:13.335 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:13.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:14.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:15 np0005539564 nova_compute[226295]: 2025-11-29 08:44:15.041 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:15.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:15 np0005539564 nova_compute[226295]: 2025-11-29 08:44:15.503 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:16 np0005539564 nova_compute[226295]: 2025-11-29 08:44:16.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:16.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:17 np0005539564 nova_compute[226295]: 2025-11-29 08:44:17.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:17.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:18.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:19 np0005539564 nova_compute[226295]: 2025-11-29 08:44:19.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:19 np0005539564 nova_compute[226295]: 2025-11-29 08:44:19.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:44:19 np0005539564 nova_compute[226295]: 2025-11-29 08:44:19.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:44:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:19.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:19 np0005539564 nova_compute[226295]: 2025-11-29 08:44:19.636 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-32b7350a-e995-40b6-89fe-5c543ecdd0c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:44:19 np0005539564 nova_compute[226295]: 2025-11-29 08:44:19.637 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-32b7350a-e995-40b6-89fe-5c543ecdd0c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:44:19 np0005539564 nova_compute[226295]: 2025-11-29 08:44:19.638 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:44:19 np0005539564 nova_compute[226295]: 2025-11-29 08:44:19.638 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 32b7350a-e995-40b6-89fe-5c543ecdd0c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:44:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:20 np0005539564 nova_compute[226295]: 2025-11-29 08:44:20.043 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:20 np0005539564 nova_compute[226295]: 2025-11-29 08:44:20.505 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:20.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.435 226310 DEBUG nova.compute.manager [req-2465d30d-2031-42ce-a2a2-2c4979b4235b req-b7c71501-a443-446a-83ad-7e85bc84ec84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Received event network-changed-424720b1-be69-4064-90d3-6c8625c811b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.436 226310 DEBUG nova.compute.manager [req-2465d30d-2031-42ce-a2a2-2c4979b4235b req-b7c71501-a443-446a-83ad-7e85bc84ec84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Refreshing instance network info cache due to event network-changed-424720b1-be69-4064-90d3-6c8625c811b8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.437 226310 DEBUG oslo_concurrency.lockutils [req-2465d30d-2031-42ce-a2a2-2c4979b4235b req-b7c71501-a443-446a-83ad-7e85bc84ec84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-32b7350a-e995-40b6-89fe-5c543ecdd0c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:44:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:21.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.497 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Updating instance_info_cache with network_info: [{"id": "424720b1-be69-4064-90d3-6c8625c811b8", "address": "fa:16:3e:1e:36:5b", "network": {"id": "826294d5-f5eb-469a-9ec9-f18a05fdaa3c", "bridge": "br-int", "label": "tempest-network-smoke--487870044", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424720b1-be", "ovs_interfaceid": "424720b1-be69-4064-90d3-6c8625c811b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.519 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-32b7350a-e995-40b6-89fe-5c543ecdd0c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.519 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.520 226310 DEBUG oslo_concurrency.lockutils [req-2465d30d-2031-42ce-a2a2-2c4979b4235b req-b7c71501-a443-446a-83ad-7e85bc84ec84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-32b7350a-e995-40b6-89fe-5c543ecdd0c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.520 226310 DEBUG nova.network.neutron [req-2465d30d-2031-42ce-a2a2-2c4979b4235b req-b7c71501-a443-446a-83ad-7e85bc84ec84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Refreshing network info cache for port 424720b1-be69-4064-90d3-6c8625c811b8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.523 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.524 226310 DEBUG oslo_concurrency.lockutils [None req-f46ac994-a46a-46c5-bcf8-f37b9ea75f4a a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "32b7350a-e995-40b6-89fe-5c543ecdd0c8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.524 226310 DEBUG oslo_concurrency.lockutils [None req-f46ac994-a46a-46c5-bcf8-f37b9ea75f4a a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "32b7350a-e995-40b6-89fe-5c543ecdd0c8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.525 226310 DEBUG oslo_concurrency.lockutils [None req-f46ac994-a46a-46c5-bcf8-f37b9ea75f4a a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "32b7350a-e995-40b6-89fe-5c543ecdd0c8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.525 226310 DEBUG oslo_concurrency.lockutils [None req-f46ac994-a46a-46c5-bcf8-f37b9ea75f4a a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "32b7350a-e995-40b6-89fe-5c543ecdd0c8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.526 226310 DEBUG oslo_concurrency.lockutils [None req-f46ac994-a46a-46c5-bcf8-f37b9ea75f4a a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "32b7350a-e995-40b6-89fe-5c543ecdd0c8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.528 226310 INFO nova.compute.manager [None req-f46ac994-a46a-46c5-bcf8-f37b9ea75f4a a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Terminating instance#033[00m
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.529 226310 DEBUG nova.compute.manager [None req-f46ac994-a46a-46c5-bcf8-f37b9ea75f4a a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:44:21 np0005539564 kernel: tap424720b1-be (unregistering): left promiscuous mode
Nov 29 03:44:21 np0005539564 NetworkManager[48997]: <info>  [1764405861.5943] device (tap424720b1-be): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:44:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:44:21Z|00753|binding|INFO|Releasing lport 424720b1-be69-4064-90d3-6c8625c811b8 from this chassis (sb_readonly=0)
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.605 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:44:21Z|00754|binding|INFO|Setting lport 424720b1-be69-4064-90d3-6c8625c811b8 down in Southbound
Nov 29 03:44:21 np0005539564 ovn_controller[130591]: 2025-11-29T08:44:21Z|00755|binding|INFO|Removing iface tap424720b1-be ovn-installed in OVS
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.611 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:21.624 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:36:5b 10.100.0.14'], port_security=['fa:16:3e:1e:36:5b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '32b7350a-e995-40b6-89fe-5c543ecdd0c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-826294d5-f5eb-469a-9ec9-f18a05fdaa3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '03858b11000d4b57bd3659c3083eed47', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1a125bd8-3063-451d-9def-2dc2c28d61df 2feff04b-6141-4213-8893-fb38d7924b9d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7cdd894-e2ae-4700-83cb-f8f82b6152b9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=424720b1-be69-4064-90d3-6c8625c811b8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:44:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:21.626 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 424720b1-be69-4064-90d3-6c8625c811b8 in datapath 826294d5-f5eb-469a-9ec9-f18a05fdaa3c unbound from our chassis#033[00m
Nov 29 03:44:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:21.627 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 826294d5-f5eb-469a-9ec9-f18a05fdaa3c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:44:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:21.629 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3fdd8262-7e04-432b-8ca4-bf9cfb37fe32]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:21.630 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c namespace which is not needed anymore#033[00m
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.643 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:21 np0005539564 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000bd.scope: Deactivated successfully.
Nov 29 03:44:21 np0005539564 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000bd.scope: Consumed 17.369s CPU time.
Nov 29 03:44:21 np0005539564 systemd-machined[190128]: Machine qemu-88-instance-000000bd terminated.
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.755 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.761 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.774 226310 INFO nova.virt.libvirt.driver [-] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Instance destroyed successfully.#033[00m
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.775 226310 DEBUG nova.objects.instance [None req-f46ac994-a46a-46c5-bcf8-f37b9ea75f4a a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lazy-loading 'resources' on Instance uuid 32b7350a-e995-40b6-89fe-5c543ecdd0c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:44:21 np0005539564 neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c[297961]: [NOTICE]   (297965) : haproxy version is 2.8.14-c23fe91
Nov 29 03:44:21 np0005539564 neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c[297961]: [NOTICE]   (297965) : path to executable is /usr/sbin/haproxy
Nov 29 03:44:21 np0005539564 neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c[297961]: [WARNING]  (297965) : Exiting Master process...
Nov 29 03:44:21 np0005539564 neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c[297961]: [WARNING]  (297965) : Exiting Master process...
Nov 29 03:44:21 np0005539564 neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c[297961]: [ALERT]    (297965) : Current worker (297967) exited with code 143 (Terminated)
Nov 29 03:44:21 np0005539564 neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c[297961]: [WARNING]  (297965) : All workers exited. Exiting... (0)
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.806 226310 DEBUG nova.virt.libvirt.vif [None req-f46ac994-a46a-46c5-bcf8-f37b9ea75f4a a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:43:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-1291068993',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-1291068993',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1086021155-ac',id=189,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGwpUqO5gzZ3Kiu9Q9kE3XcG8UjwJDSKNvGi4SJG6g2Btnk9SXkBhw2wnT5/sd4LSjXZexDSd+ENYEJXfD2i6ueU6jk14FmGOgrEhWzS31tOPvfl4SVZAco45HP7sMpJnw==',key_name='tempest-TestSecurityGroupsBasicOps-581417246',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:43:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='03858b11000d4b57bd3659c3083eed47',ramdisk_id='',reservation_id='r-b0av42hb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1086021155',owner_user_name='tempest-TestSecurityGroupsBasicOps-1086021155-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:43:19Z,user_data=None,user_id='a45da8ed818144f8bd6e00d233fcb5d2',uuid=32b7350a-e995-40b6-89fe-5c543ecdd0c8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "424720b1-be69-4064-90d3-6c8625c811b8", "address": "fa:16:3e:1e:36:5b", "network": {"id": "826294d5-f5eb-469a-9ec9-f18a05fdaa3c", "bridge": "br-int", "label": "tempest-network-smoke--487870044", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424720b1-be", "ovs_interfaceid": "424720b1-be69-4064-90d3-6c8625c811b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:44:21 np0005539564 systemd[1]: libpod-a8a856aa30d9b8d060e2df8314d54cb2c860c2815b7483231d3b009b289e2e80.scope: Deactivated successfully.
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.807 226310 DEBUG nova.network.os_vif_util [None req-f46ac994-a46a-46c5-bcf8-f37b9ea75f4a a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converting VIF {"id": "424720b1-be69-4064-90d3-6c8625c811b8", "address": "fa:16:3e:1e:36:5b", "network": {"id": "826294d5-f5eb-469a-9ec9-f18a05fdaa3c", "bridge": "br-int", "label": "tempest-network-smoke--487870044", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424720b1-be", "ovs_interfaceid": "424720b1-be69-4064-90d3-6c8625c811b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.808 226310 DEBUG nova.network.os_vif_util [None req-f46ac994-a46a-46c5-bcf8-f37b9ea75f4a a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1e:36:5b,bridge_name='br-int',has_traffic_filtering=True,id=424720b1-be69-4064-90d3-6c8625c811b8,network=Network(826294d5-f5eb-469a-9ec9-f18a05fdaa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424720b1-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.809 226310 DEBUG os_vif [None req-f46ac994-a46a-46c5-bcf8-f37b9ea75f4a a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:36:5b,bridge_name='br-int',has_traffic_filtering=True,id=424720b1-be69-4064-90d3-6c8625c811b8,network=Network(826294d5-f5eb-469a-9ec9-f18a05fdaa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424720b1-be') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.810 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.810 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap424720b1-be, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.812 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.813 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:21 np0005539564 podman[299695]: 2025-11-29 08:44:21.814087832 +0000 UTC m=+0.057231769 container died a8a856aa30d9b8d060e2df8314d54cb2c860c2815b7483231d3b009b289e2e80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.817 226310 INFO os_vif [None req-f46ac994-a46a-46c5-bcf8-f37b9ea75f4a a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:36:5b,bridge_name='br-int',has_traffic_filtering=True,id=424720b1-be69-4064-90d3-6c8625c811b8,network=Network(826294d5-f5eb-469a-9ec9-f18a05fdaa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424720b1-be')#033[00m
Nov 29 03:44:21 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a8a856aa30d9b8d060e2df8314d54cb2c860c2815b7483231d3b009b289e2e80-userdata-shm.mount: Deactivated successfully.
Nov 29 03:44:21 np0005539564 systemd[1]: var-lib-containers-storage-overlay-382c97ff292133921d83716681ba4b204ed7c89cfb3e3f038b0cc31e7b79466b-merged.mount: Deactivated successfully.
Nov 29 03:44:21 np0005539564 podman[299695]: 2025-11-29 08:44:21.867726324 +0000 UTC m=+0.110870291 container cleanup a8a856aa30d9b8d060e2df8314d54cb2c860c2815b7483231d3b009b289e2e80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:44:21 np0005539564 systemd[1]: libpod-conmon-a8a856aa30d9b8d060e2df8314d54cb2c860c2815b7483231d3b009b289e2e80.scope: Deactivated successfully.
Nov 29 03:44:21 np0005539564 podman[299751]: 2025-11-29 08:44:21.939702361 +0000 UTC m=+0.047048604 container remove a8a856aa30d9b8d060e2df8314d54cb2c860c2815b7483231d3b009b289e2e80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:44:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:21.948 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d0ba77ef-0f60-4a35-aac4-22999b5ad991]: (4, ('Sat Nov 29 08:44:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c (a8a856aa30d9b8d060e2df8314d54cb2c860c2815b7483231d3b009b289e2e80)\na8a856aa30d9b8d060e2df8314d54cb2c860c2815b7483231d3b009b289e2e80\nSat Nov 29 08:44:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c (a8a856aa30d9b8d060e2df8314d54cb2c860c2815b7483231d3b009b289e2e80)\na8a856aa30d9b8d060e2df8314d54cb2c860c2815b7483231d3b009b289e2e80\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:21.951 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d20c9cbc-54db-45dc-ba23-e5e7b05e41e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:21.952 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap826294d5-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.954 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:21 np0005539564 kernel: tap826294d5-f0: left promiscuous mode
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.966 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:21 np0005539564 nova_compute[226295]: 2025-11-29 08:44:21.968 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:21.970 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[262a2d6d-4a49-4e59-8f95-c5a25991db17]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:21.985 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2aab36ea-8310-4d50-983e-1ce8990f9b26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:21.987 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bb1b1e16-2230-4772-be6f-797b75238fa8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:22.010 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[939fd7f5-2df3-4ea8-9f97-247f484260f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 877175, 'reachable_time': 36067, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299766, 'error': None, 'target': 'ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:22 np0005539564 systemd[1]: run-netns-ovnmeta\x2d826294d5\x2df5eb\x2d469a\x2d9ec9\x2df18a05fdaa3c.mount: Deactivated successfully.
Nov 29 03:44:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:22.013 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-826294d5-f5eb-469a-9ec9-f18a05fdaa3c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:44:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:22.014 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[14e335e4-6267-4a36-a740-1fe3fa50620f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:22 np0005539564 nova_compute[226295]: 2025-11-29 08:44:22.152 226310 DEBUG nova.compute.manager [req-2e569011-78b1-4b1b-93d7-7cca4877d902 req-79d0650c-6b96-454c-8baa-a5e03bd263e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Received event network-vif-unplugged-424720b1-be69-4064-90d3-6c8625c811b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:22 np0005539564 nova_compute[226295]: 2025-11-29 08:44:22.153 226310 DEBUG oslo_concurrency.lockutils [req-2e569011-78b1-4b1b-93d7-7cca4877d902 req-79d0650c-6b96-454c-8baa-a5e03bd263e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "32b7350a-e995-40b6-89fe-5c543ecdd0c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:22 np0005539564 nova_compute[226295]: 2025-11-29 08:44:22.154 226310 DEBUG oslo_concurrency.lockutils [req-2e569011-78b1-4b1b-93d7-7cca4877d902 req-79d0650c-6b96-454c-8baa-a5e03bd263e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "32b7350a-e995-40b6-89fe-5c543ecdd0c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:22 np0005539564 nova_compute[226295]: 2025-11-29 08:44:22.154 226310 DEBUG oslo_concurrency.lockutils [req-2e569011-78b1-4b1b-93d7-7cca4877d902 req-79d0650c-6b96-454c-8baa-a5e03bd263e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "32b7350a-e995-40b6-89fe-5c543ecdd0c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:22 np0005539564 nova_compute[226295]: 2025-11-29 08:44:22.155 226310 DEBUG nova.compute.manager [req-2e569011-78b1-4b1b-93d7-7cca4877d902 req-79d0650c-6b96-454c-8baa-a5e03bd263e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] No waiting events found dispatching network-vif-unplugged-424720b1-be69-4064-90d3-6c8625c811b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:44:22 np0005539564 nova_compute[226295]: 2025-11-29 08:44:22.155 226310 DEBUG nova.compute.manager [req-2e569011-78b1-4b1b-93d7-7cca4877d902 req-79d0650c-6b96-454c-8baa-a5e03bd263e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Received event network-vif-unplugged-424720b1-be69-4064-90d3-6c8625c811b8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:44:22 np0005539564 nova_compute[226295]: 2025-11-29 08:44:22.233 226310 INFO nova.virt.libvirt.driver [None req-f46ac994-a46a-46c5-bcf8-f37b9ea75f4a a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Deleting instance files /var/lib/nova/instances/32b7350a-e995-40b6-89fe-5c543ecdd0c8_del#033[00m
Nov 29 03:44:22 np0005539564 nova_compute[226295]: 2025-11-29 08:44:22.234 226310 INFO nova.virt.libvirt.driver [None req-f46ac994-a46a-46c5-bcf8-f37b9ea75f4a a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Deletion of /var/lib/nova/instances/32b7350a-e995-40b6-89fe-5c543ecdd0c8_del complete#033[00m
Nov 29 03:44:22 np0005539564 nova_compute[226295]: 2025-11-29 08:44:22.318 226310 INFO nova.compute.manager [None req-f46ac994-a46a-46c5-bcf8-f37b9ea75f4a a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:44:22 np0005539564 nova_compute[226295]: 2025-11-29 08:44:22.319 226310 DEBUG oslo.service.loopingcall [None req-f46ac994-a46a-46c5-bcf8-f37b9ea75f4a a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:44:22 np0005539564 nova_compute[226295]: 2025-11-29 08:44:22.319 226310 DEBUG nova.compute.manager [-] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:44:22 np0005539564 nova_compute[226295]: 2025-11-29 08:44:22.319 226310 DEBUG nova.network.neutron [-] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:44:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:22.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:22 np0005539564 nova_compute[226295]: 2025-11-29 08:44:22.673 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:22 np0005539564 nova_compute[226295]: 2025-11-29 08:44:22.820 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:22 np0005539564 nova_compute[226295]: 2025-11-29 08:44:22.832 226310 DEBUG nova.network.neutron [req-2465d30d-2031-42ce-a2a2-2c4979b4235b req-b7c71501-a443-446a-83ad-7e85bc84ec84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Updated VIF entry in instance network info cache for port 424720b1-be69-4064-90d3-6c8625c811b8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:44:22 np0005539564 nova_compute[226295]: 2025-11-29 08:44:22.833 226310 DEBUG nova.network.neutron [req-2465d30d-2031-42ce-a2a2-2c4979b4235b req-b7c71501-a443-446a-83ad-7e85bc84ec84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Updating instance_info_cache with network_info: [{"id": "424720b1-be69-4064-90d3-6c8625c811b8", "address": "fa:16:3e:1e:36:5b", "network": {"id": "826294d5-f5eb-469a-9ec9-f18a05fdaa3c", "bridge": "br-int", "label": "tempest-network-smoke--487870044", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424720b1-be", "ovs_interfaceid": "424720b1-be69-4064-90d3-6c8625c811b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:22 np0005539564 nova_compute[226295]: 2025-11-29 08:44:22.854 226310 DEBUG oslo_concurrency.lockutils [req-2465d30d-2031-42ce-a2a2-2c4979b4235b req-b7c71501-a443-446a-83ad-7e85bc84ec84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-32b7350a-e995-40b6-89fe-5c543ecdd0c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:44:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:23.093 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=73, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=72) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:44:23 np0005539564 nova_compute[226295]: 2025-11-29 08:44:23.094 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:23 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:23.094 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:44:23 np0005539564 nova_compute[226295]: 2025-11-29 08:44:23.119 226310 DEBUG nova.network.neutron [-] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:23 np0005539564 nova_compute[226295]: 2025-11-29 08:44:23.141 226310 INFO nova.compute.manager [-] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Took 0.82 seconds to deallocate network for instance.#033[00m
Nov 29 03:44:23 np0005539564 nova_compute[226295]: 2025-11-29 08:44:23.202 226310 DEBUG oslo_concurrency.lockutils [None req-f46ac994-a46a-46c5-bcf8-f37b9ea75f4a a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:23 np0005539564 nova_compute[226295]: 2025-11-29 08:44:23.203 226310 DEBUG oslo_concurrency.lockutils [None req-f46ac994-a46a-46c5-bcf8-f37b9ea75f4a a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:23 np0005539564 nova_compute[226295]: 2025-11-29 08:44:23.269 226310 DEBUG oslo_concurrency.processutils [None req-f46ac994-a46a-46c5-bcf8-f37b9ea75f4a a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:23.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:23 np0005539564 podman[299790]: 2025-11-29 08:44:23.502599494 +0000 UTC m=+0.058482143 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:44:23 np0005539564 nova_compute[226295]: 2025-11-29 08:44:23.517 226310 DEBUG nova.compute.manager [req-82018809-2553-4554-b9c8-f9c2b58e392c req-a8a71a85-5393-4782-9a12-51822533205c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Received event network-vif-deleted-424720b1-be69-4064-90d3-6c8625c811b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:23 np0005539564 podman[299789]: 2025-11-29 08:44:23.514887507 +0000 UTC m=+0.070768416 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:44:23 np0005539564 podman[299779]: 2025-11-29 08:44:23.591843968 +0000 UTC m=+0.148486318 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:44:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:44:23 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/778055859' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:44:23 np0005539564 nova_compute[226295]: 2025-11-29 08:44:23.806 226310 DEBUG oslo_concurrency.processutils [None req-f46ac994-a46a-46c5-bcf8-f37b9ea75f4a a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:23 np0005539564 nova_compute[226295]: 2025-11-29 08:44:23.816 226310 DEBUG nova.compute.provider_tree [None req-f46ac994-a46a-46c5-bcf8-f37b9ea75f4a a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:44:23 np0005539564 nova_compute[226295]: 2025-11-29 08:44:23.834 226310 DEBUG nova.scheduler.client.report [None req-f46ac994-a46a-46c5-bcf8-f37b9ea75f4a a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:44:23 np0005539564 nova_compute[226295]: 2025-11-29 08:44:23.890 226310 DEBUG oslo_concurrency.lockutils [None req-f46ac994-a46a-46c5-bcf8-f37b9ea75f4a a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:23 np0005539564 nova_compute[226295]: 2025-11-29 08:44:23.945 226310 INFO nova.scheduler.client.report [None req-f46ac994-a46a-46c5-bcf8-f37b9ea75f4a a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Deleted allocations for instance 32b7350a-e995-40b6-89fe-5c543ecdd0c8#033[00m
Nov 29 03:44:24 np0005539564 nova_compute[226295]: 2025-11-29 08:44:24.019 226310 DEBUG oslo_concurrency.lockutils [None req-f46ac994-a46a-46c5-bcf8-f37b9ea75f4a a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "32b7350a-e995-40b6-89fe-5c543ecdd0c8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.495s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:24.097 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '73'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:24 np0005539564 nova_compute[226295]: 2025-11-29 08:44:24.248 226310 DEBUG nova.compute.manager [req-7b118a2e-0252-497d-96c1-67104cdf7ee5 req-a940bd4a-114b-4f07-a56b-e4b1757bf6ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Received event network-vif-plugged-424720b1-be69-4064-90d3-6c8625c811b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:24 np0005539564 nova_compute[226295]: 2025-11-29 08:44:24.249 226310 DEBUG oslo_concurrency.lockutils [req-7b118a2e-0252-497d-96c1-67104cdf7ee5 req-a940bd4a-114b-4f07-a56b-e4b1757bf6ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "32b7350a-e995-40b6-89fe-5c543ecdd0c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:24 np0005539564 nova_compute[226295]: 2025-11-29 08:44:24.250 226310 DEBUG oslo_concurrency.lockutils [req-7b118a2e-0252-497d-96c1-67104cdf7ee5 req-a940bd4a-114b-4f07-a56b-e4b1757bf6ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "32b7350a-e995-40b6-89fe-5c543ecdd0c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:24 np0005539564 nova_compute[226295]: 2025-11-29 08:44:24.250 226310 DEBUG oslo_concurrency.lockutils [req-7b118a2e-0252-497d-96c1-67104cdf7ee5 req-a940bd4a-114b-4f07-a56b-e4b1757bf6ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "32b7350a-e995-40b6-89fe-5c543ecdd0c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:24 np0005539564 nova_compute[226295]: 2025-11-29 08:44:24.250 226310 DEBUG nova.compute.manager [req-7b118a2e-0252-497d-96c1-67104cdf7ee5 req-a940bd4a-114b-4f07-a56b-e4b1757bf6ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] No waiting events found dispatching network-vif-plugged-424720b1-be69-4064-90d3-6c8625c811b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:44:24 np0005539564 nova_compute[226295]: 2025-11-29 08:44:24.251 226310 WARNING nova.compute.manager [req-7b118a2e-0252-497d-96c1-67104cdf7ee5 req-a940bd4a-114b-4f07-a56b-e4b1757bf6ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Received unexpected event network-vif-plugged-424720b1-be69-4064-90d3-6c8625c811b8 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:44:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:24.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:25 np0005539564 nova_compute[226295]: 2025-11-29 08:44:25.045 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:25.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:26 np0005539564 nova_compute[226295]: 2025-11-29 08:44:26.350 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:26.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:26 np0005539564 nova_compute[226295]: 2025-11-29 08:44:26.814 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:27.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:28.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:29.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:30 np0005539564 nova_compute[226295]: 2025-11-29 08:44:30.048 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:30.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:31.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:31 np0005539564 nova_compute[226295]: 2025-11-29 08:44:31.817 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:32 np0005539564 nova_compute[226295]: 2025-11-29 08:44:32.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:32 np0005539564 nova_compute[226295]: 2025-11-29 08:44:32.370 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:32 np0005539564 nova_compute[226295]: 2025-11-29 08:44:32.371 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:32 np0005539564 nova_compute[226295]: 2025-11-29 08:44:32.371 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:32 np0005539564 nova_compute[226295]: 2025-11-29 08:44:32.371 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:44:32 np0005539564 nova_compute[226295]: 2025-11-29 08:44:32.371 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:44:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:32.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:44:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:44:32 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/816626238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:44:32 np0005539564 nova_compute[226295]: 2025-11-29 08:44:32.864 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:33 np0005539564 nova_compute[226295]: 2025-11-29 08:44:33.073 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:44:33 np0005539564 nova_compute[226295]: 2025-11-29 08:44:33.075 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4225MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:44:33 np0005539564 nova_compute[226295]: 2025-11-29 08:44:33.075 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:33 np0005539564 nova_compute[226295]: 2025-11-29 08:44:33.075 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:33 np0005539564 nova_compute[226295]: 2025-11-29 08:44:33.236 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:44:33 np0005539564 nova_compute[226295]: 2025-11-29 08:44:33.236 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:44:33 np0005539564 nova_compute[226295]: 2025-11-29 08:44:33.296 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:33.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:44:33 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4282406034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:44:33 np0005539564 nova_compute[226295]: 2025-11-29 08:44:33.759 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:33 np0005539564 nova_compute[226295]: 2025-11-29 08:44:33.767 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:44:33 np0005539564 nova_compute[226295]: 2025-11-29 08:44:33.794 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:44:33 np0005539564 nova_compute[226295]: 2025-11-29 08:44:33.828 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:44:33 np0005539564 nova_compute[226295]: 2025-11-29 08:44:33.828 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:44:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:34.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:44:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:35 np0005539564 nova_compute[226295]: 2025-11-29 08:44:35.052 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:35.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:36.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:36 np0005539564 nova_compute[226295]: 2025-11-29 08:44:36.772 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405861.7707355, 32b7350a-e995-40b6-89fe-5c543ecdd0c8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:44:36 np0005539564 nova_compute[226295]: 2025-11-29 08:44:36.772 226310 INFO nova.compute.manager [-] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:44:36 np0005539564 nova_compute[226295]: 2025-11-29 08:44:36.792 226310 DEBUG nova.compute.manager [None req-f6c8de33-6563-4ca3-b2d2-e9cd9cf98bf7 - - - - - -] [instance: 32b7350a-e995-40b6-89fe-5c543ecdd0c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:44:36 np0005539564 nova_compute[226295]: 2025-11-29 08:44:36.819 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:37.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:44:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:38.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:44:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:39.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:40 np0005539564 nova_compute[226295]: 2025-11-29 08:44:40.054 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:40.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:41.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:41 np0005539564 nova_compute[226295]: 2025-11-29 08:44:41.823 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:42.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:42 np0005539564 nova_compute[226295]: 2025-11-29 08:44:42.929 226310 DEBUG oslo_concurrency.lockutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "000f2b4b-91a7-461e-8695-5285bfe53cc3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:42 np0005539564 nova_compute[226295]: 2025-11-29 08:44:42.930 226310 DEBUG oslo_concurrency.lockutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:42 np0005539564 nova_compute[226295]: 2025-11-29 08:44:42.954 226310 DEBUG nova.compute.manager [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:44:43 np0005539564 nova_compute[226295]: 2025-11-29 08:44:43.080 226310 DEBUG oslo_concurrency.lockutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:43 np0005539564 nova_compute[226295]: 2025-11-29 08:44:43.081 226310 DEBUG oslo_concurrency.lockutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:43 np0005539564 nova_compute[226295]: 2025-11-29 08:44:43.088 226310 DEBUG nova.virt.hardware [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:44:43 np0005539564 nova_compute[226295]: 2025-11-29 08:44:43.088 226310 INFO nova.compute.claims [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:44:43 np0005539564 nova_compute[226295]: 2025-11-29 08:44:43.238 226310 DEBUG oslo_concurrency.processutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:43.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:44:43 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2053551327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:44:43 np0005539564 nova_compute[226295]: 2025-11-29 08:44:43.727 226310 DEBUG oslo_concurrency.processutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:43 np0005539564 nova_compute[226295]: 2025-11-29 08:44:43.736 226310 DEBUG nova.compute.provider_tree [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:44:43 np0005539564 nova_compute[226295]: 2025-11-29 08:44:43.815 226310 DEBUG nova.scheduler.client.report [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:44:43 np0005539564 nova_compute[226295]: 2025-11-29 08:44:43.997 226310 DEBUG oslo_concurrency.lockutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:43 np0005539564 nova_compute[226295]: 2025-11-29 08:44:43.998 226310 DEBUG nova.compute.manager [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:44:44 np0005539564 nova_compute[226295]: 2025-11-29 08:44:44.104 226310 DEBUG nova.compute.manager [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:44:44 np0005539564 nova_compute[226295]: 2025-11-29 08:44:44.105 226310 DEBUG nova.network.neutron [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:44:44 np0005539564 nova_compute[226295]: 2025-11-29 08:44:44.136 226310 INFO nova.virt.libvirt.driver [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:44:44 np0005539564 nova_compute[226295]: 2025-11-29 08:44:44.205 226310 DEBUG nova.compute.manager [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:44:44 np0005539564 nova_compute[226295]: 2025-11-29 08:44:44.315 226310 DEBUG nova.compute.manager [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:44:44 np0005539564 nova_compute[226295]: 2025-11-29 08:44:44.316 226310 DEBUG nova.virt.libvirt.driver [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:44:44 np0005539564 nova_compute[226295]: 2025-11-29 08:44:44.316 226310 INFO nova.virt.libvirt.driver [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Creating image(s)#033[00m
Nov 29 03:44:44 np0005539564 nova_compute[226295]: 2025-11-29 08:44:44.349 226310 DEBUG nova.storage.rbd_utils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 000f2b4b-91a7-461e-8695-5285bfe53cc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:44 np0005539564 nova_compute[226295]: 2025-11-29 08:44:44.381 226310 DEBUG nova.storage.rbd_utils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 000f2b4b-91a7-461e-8695-5285bfe53cc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:44 np0005539564 nova_compute[226295]: 2025-11-29 08:44:44.425 226310 DEBUG nova.storage.rbd_utils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 000f2b4b-91a7-461e-8695-5285bfe53cc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:44 np0005539564 nova_compute[226295]: 2025-11-29 08:44:44.431 226310 DEBUG oslo_concurrency.processutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:44 np0005539564 nova_compute[226295]: 2025-11-29 08:44:44.480 226310 DEBUG nova.policy [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '686f527a5723407b85ed34c8a312583f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:44:44 np0005539564 nova_compute[226295]: 2025-11-29 08:44:44.516 226310 DEBUG oslo_concurrency.processutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:44 np0005539564 nova_compute[226295]: 2025-11-29 08:44:44.518 226310 DEBUG oslo_concurrency.lockutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:44 np0005539564 nova_compute[226295]: 2025-11-29 08:44:44.519 226310 DEBUG oslo_concurrency.lockutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:44 np0005539564 nova_compute[226295]: 2025-11-29 08:44:44.519 226310 DEBUG oslo_concurrency.lockutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:44 np0005539564 nova_compute[226295]: 2025-11-29 08:44:44.557 226310 DEBUG nova.storage.rbd_utils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 000f2b4b-91a7-461e-8695-5285bfe53cc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:44 np0005539564 nova_compute[226295]: 2025-11-29 08:44:44.564 226310 DEBUG oslo_concurrency.processutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 000f2b4b-91a7-461e-8695-5285bfe53cc3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:44.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:44 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #55. Immutable memtables: 11.
Nov 29 03:44:44 np0005539564 nova_compute[226295]: 2025-11-29 08:44:44.901 226310 DEBUG oslo_concurrency.processutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 000f2b4b-91a7-461e-8695-5285bfe53cc3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:44 np0005539564 nova_compute[226295]: 2025-11-29 08:44:44.999 226310 DEBUG nova.storage.rbd_utils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] resizing rbd image 000f2b4b-91a7-461e-8695-5285bfe53cc3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:44:45 np0005539564 nova_compute[226295]: 2025-11-29 08:44:45.056 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:45 np0005539564 nova_compute[226295]: 2025-11-29 08:44:45.116 226310 DEBUG nova.objects.instance [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'migration_context' on Instance uuid 000f2b4b-91a7-461e-8695-5285bfe53cc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:44:45 np0005539564 nova_compute[226295]: 2025-11-29 08:44:45.137 226310 DEBUG nova.virt.libvirt.driver [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:44:45 np0005539564 nova_compute[226295]: 2025-11-29 08:44:45.138 226310 DEBUG nova.virt.libvirt.driver [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Ensure instance console log exists: /var/lib/nova/instances/000f2b4b-91a7-461e-8695-5285bfe53cc3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:44:45 np0005539564 nova_compute[226295]: 2025-11-29 08:44:45.138 226310 DEBUG oslo_concurrency.lockutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:45 np0005539564 nova_compute[226295]: 2025-11-29 08:44:45.138 226310 DEBUG oslo_concurrency.lockutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:45 np0005539564 nova_compute[226295]: 2025-11-29 08:44:45.139 226310 DEBUG oslo_concurrency.lockutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:45 np0005539564 nova_compute[226295]: 2025-11-29 08:44:45.339 226310 DEBUG nova.network.neutron [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Successfully created port: 05cc7810-837e-49c5-98f5-e14ac1ff5796 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:44:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:45.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:46 np0005539564 nova_compute[226295]: 2025-11-29 08:44:46.513 226310 DEBUG nova.network.neutron [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Successfully updated port: 05cc7810-837e-49c5-98f5-e14ac1ff5796 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:44:46 np0005539564 nova_compute[226295]: 2025-11-29 08:44:46.533 226310 DEBUG oslo_concurrency.lockutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "refresh_cache-000f2b4b-91a7-461e-8695-5285bfe53cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:44:46 np0005539564 nova_compute[226295]: 2025-11-29 08:44:46.533 226310 DEBUG oslo_concurrency.lockutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquired lock "refresh_cache-000f2b4b-91a7-461e-8695-5285bfe53cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:44:46 np0005539564 nova_compute[226295]: 2025-11-29 08:44:46.534 226310 DEBUG nova.network.neutron [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:44:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:46.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:46 np0005539564 nova_compute[226295]: 2025-11-29 08:44:46.609 226310 DEBUG nova.compute.manager [req-712d698b-99be-4dfa-982d-400330d81038 req-465c44b2-9f3e-4fe8-afa3-e096c1accddd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Received event network-changed-05cc7810-837e-49c5-98f5-e14ac1ff5796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:46 np0005539564 nova_compute[226295]: 2025-11-29 08:44:46.610 226310 DEBUG nova.compute.manager [req-712d698b-99be-4dfa-982d-400330d81038 req-465c44b2-9f3e-4fe8-afa3-e096c1accddd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Refreshing instance network info cache due to event network-changed-05cc7810-837e-49c5-98f5-e14ac1ff5796. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:44:46 np0005539564 nova_compute[226295]: 2025-11-29 08:44:46.610 226310 DEBUG oslo_concurrency.lockutils [req-712d698b-99be-4dfa-982d-400330d81038 req-465c44b2-9f3e-4fe8-afa3-e096c1accddd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-000f2b4b-91a7-461e-8695-5285bfe53cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:44:46 np0005539564 nova_compute[226295]: 2025-11-29 08:44:46.736 226310 DEBUG nova.network.neutron [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:44:46 np0005539564 nova_compute[226295]: 2025-11-29 08:44:46.828 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:46 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #142. Immutable memtables: 0.
Nov 29 03:44:46 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:44:46.957054) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:44:46 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 142
Nov 29 03:44:46 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405886957101, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 998, "num_deletes": 258, "total_data_size": 2158649, "memory_usage": 2192912, "flush_reason": "Manual Compaction"}
Nov 29 03:44:46 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #143: started
Nov 29 03:44:46 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405886975774, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 143, "file_size": 1414103, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70622, "largest_seqno": 71614, "table_properties": {"data_size": 1409327, "index_size": 2363, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10474, "raw_average_key_size": 19, "raw_value_size": 1399693, "raw_average_value_size": 2650, "num_data_blocks": 102, "num_entries": 528, "num_filter_entries": 528, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405815, "oldest_key_time": 1764405815, "file_creation_time": 1764405886, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:44:46 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 18774 microseconds, and 5229 cpu microseconds.
Nov 29 03:44:46 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:44:46 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:44:46.975832) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #143: 1414103 bytes OK
Nov 29 03:44:46 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:44:46.975855) [db/memtable_list.cc:519] [default] Level-0 commit table #143 started
Nov 29 03:44:46 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:44:46.977426) [db/memtable_list.cc:722] [default] Level-0 commit table #143: memtable #1 done
Nov 29 03:44:46 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:44:46.977439) EVENT_LOG_v1 {"time_micros": 1764405886977434, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:44:46 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:44:46.977457) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:44:46 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 2153613, prev total WAL file size 2153613, number of live WAL files 2.
Nov 29 03:44:46 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000139.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:44:46 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:44:46.978388) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353136' seq:72057594037927935, type:22 .. '6C6F676D0032373639' seq:0, type:0; will stop at (end)
Nov 29 03:44:46 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:44:46 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [143(1380KB)], [141(10MB)]
Nov 29 03:44:46 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405886978480, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [143], "files_L6": [141], "score": -1, "input_data_size": 12134401, "oldest_snapshot_seqno": -1}
Nov 29 03:44:47 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #144: 9760 keys, 11969684 bytes, temperature: kUnknown
Nov 29 03:44:47 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405887122125, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 144, "file_size": 11969684, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11907561, "index_size": 36602, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24453, "raw_key_size": 258459, "raw_average_key_size": 26, "raw_value_size": 11737287, "raw_average_value_size": 1202, "num_data_blocks": 1389, "num_entries": 9760, "num_filter_entries": 9760, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764405886, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 144, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:44:47 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:44:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:44:47.122505) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 11969684 bytes
Nov 29 03:44:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:44:47.124761) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 84.4 rd, 83.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 10.2 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(17.0) write-amplify(8.5) OK, records in: 10297, records dropped: 537 output_compression: NoCompression
Nov 29 03:44:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:44:47.124784) EVENT_LOG_v1 {"time_micros": 1764405887124773, "job": 90, "event": "compaction_finished", "compaction_time_micros": 143735, "compaction_time_cpu_micros": 60523, "output_level": 6, "num_output_files": 1, "total_output_size": 11969684, "num_input_records": 10297, "num_output_records": 9760, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:44:47 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:44:47 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405887125363, "job": 90, "event": "table_file_deletion", "file_number": 143}
Nov 29 03:44:47 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000141.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:44:47 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405887127785, "job": 90, "event": "table_file_deletion", "file_number": 141}
Nov 29 03:44:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:44:46.978222) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:44:47.127852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:44:47.127859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:44:47.127861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:44:47.127862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:44:47.127863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:47.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:47 np0005539564 nova_compute[226295]: 2025-11-29 08:44:47.680 226310 DEBUG nova.network.neutron [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Updating instance_info_cache with network_info: [{"id": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "address": "fa:16:3e:a7:0e:d6", "network": {"id": "14047ff8-a7c2-4411-bbc5-cc7ce1023d2a", "bridge": "br-int", "label": "tempest-network-smoke--1066451220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cc7810-83", "ovs_interfaceid": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:47 np0005539564 nova_compute[226295]: 2025-11-29 08:44:47.702 226310 DEBUG oslo_concurrency.lockutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Releasing lock "refresh_cache-000f2b4b-91a7-461e-8695-5285bfe53cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:44:47 np0005539564 nova_compute[226295]: 2025-11-29 08:44:47.703 226310 DEBUG nova.compute.manager [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Instance network_info: |[{"id": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "address": "fa:16:3e:a7:0e:d6", "network": {"id": "14047ff8-a7c2-4411-bbc5-cc7ce1023d2a", "bridge": "br-int", "label": "tempest-network-smoke--1066451220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cc7810-83", "ovs_interfaceid": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:44:47 np0005539564 nova_compute[226295]: 2025-11-29 08:44:47.704 226310 DEBUG oslo_concurrency.lockutils [req-712d698b-99be-4dfa-982d-400330d81038 req-465c44b2-9f3e-4fe8-afa3-e096c1accddd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-000f2b4b-91a7-461e-8695-5285bfe53cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:44:47 np0005539564 nova_compute[226295]: 2025-11-29 08:44:47.705 226310 DEBUG nova.network.neutron [req-712d698b-99be-4dfa-982d-400330d81038 req-465c44b2-9f3e-4fe8-afa3-e096c1accddd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Refreshing network info cache for port 05cc7810-837e-49c5-98f5-e14ac1ff5796 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:44:47 np0005539564 nova_compute[226295]: 2025-11-29 08:44:47.710 226310 DEBUG nova.virt.libvirt.driver [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Start _get_guest_xml network_info=[{"id": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "address": "fa:16:3e:a7:0e:d6", "network": {"id": "14047ff8-a7c2-4411-bbc5-cc7ce1023d2a", "bridge": "br-int", "label": "tempest-network-smoke--1066451220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cc7810-83", "ovs_interfaceid": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:44:47 np0005539564 nova_compute[226295]: 2025-11-29 08:44:47.719 226310 WARNING nova.virt.libvirt.driver [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:44:47 np0005539564 nova_compute[226295]: 2025-11-29 08:44:47.728 226310 DEBUG nova.virt.libvirt.host [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:44:47 np0005539564 nova_compute[226295]: 2025-11-29 08:44:47.729 226310 DEBUG nova.virt.libvirt.host [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:44:47 np0005539564 nova_compute[226295]: 2025-11-29 08:44:47.733 226310 DEBUG nova.virt.libvirt.host [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:44:47 np0005539564 nova_compute[226295]: 2025-11-29 08:44:47.734 226310 DEBUG nova.virt.libvirt.host [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:44:47 np0005539564 nova_compute[226295]: 2025-11-29 08:44:47.735 226310 DEBUG nova.virt.libvirt.driver [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:44:47 np0005539564 nova_compute[226295]: 2025-11-29 08:44:47.735 226310 DEBUG nova.virt.hardware [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:44:47 np0005539564 nova_compute[226295]: 2025-11-29 08:44:47.736 226310 DEBUG nova.virt.hardware [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:44:47 np0005539564 nova_compute[226295]: 2025-11-29 08:44:47.736 226310 DEBUG nova.virt.hardware [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:44:47 np0005539564 nova_compute[226295]: 2025-11-29 08:44:47.736 226310 DEBUG nova.virt.hardware [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:44:47 np0005539564 nova_compute[226295]: 2025-11-29 08:44:47.736 226310 DEBUG nova.virt.hardware [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:44:47 np0005539564 nova_compute[226295]: 2025-11-29 08:44:47.737 226310 DEBUG nova.virt.hardware [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:44:47 np0005539564 nova_compute[226295]: 2025-11-29 08:44:47.737 226310 DEBUG nova.virt.hardware [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:44:47 np0005539564 nova_compute[226295]: 2025-11-29 08:44:47.737 226310 DEBUG nova.virt.hardware [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:44:47 np0005539564 nova_compute[226295]: 2025-11-29 08:44:47.737 226310 DEBUG nova.virt.hardware [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:44:47 np0005539564 nova_compute[226295]: 2025-11-29 08:44:47.737 226310 DEBUG nova.virt.hardware [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:44:47 np0005539564 nova_compute[226295]: 2025-11-29 08:44:47.738 226310 DEBUG nova.virt.hardware [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:44:47 np0005539564 nova_compute[226295]: 2025-11-29 08:44:47.741 226310 DEBUG oslo_concurrency.processutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:44:48 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/557507739' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:44:48 np0005539564 nova_compute[226295]: 2025-11-29 08:44:48.259 226310 DEBUG oslo_concurrency.processutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:48 np0005539564 nova_compute[226295]: 2025-11-29 08:44:48.303 226310 DEBUG nova.storage.rbd_utils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 000f2b4b-91a7-461e-8695-5285bfe53cc3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:48 np0005539564 nova_compute[226295]: 2025-11-29 08:44:48.309 226310 DEBUG oslo_concurrency.processutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:48.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:44:48 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3282842492' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:44:48 np0005539564 nova_compute[226295]: 2025-11-29 08:44:48.808 226310 DEBUG oslo_concurrency.processutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:48 np0005539564 nova_compute[226295]: 2025-11-29 08:44:48.811 226310 DEBUG nova.virt.libvirt.vif [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:44:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1268542128',display_name='tempest-TestNetworkAdvancedServerOps-server-1268542128',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1268542128',id=191,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJhLf6FaCEQlgtGJ466Elx/ZqN2KwkcnjEwf65gRkVvvzRnDJxusTh8/iP5gP4ElqFttcg/fOI7oZlNwuCxP0D5jnDsVjqIml/cam53sPjjzN011MM0KRngsWjRo3EjYfQ==',key_name='tempest-TestNetworkAdvancedServerOps-508554027',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-2mgqivi5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:44:44Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=000f2b4b-91a7-461e-8695-5285bfe53cc3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "address": "fa:16:3e:a7:0e:d6", "network": {"id": "14047ff8-a7c2-4411-bbc5-cc7ce1023d2a", "bridge": "br-int", "label": "tempest-network-smoke--1066451220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cc7810-83", "ovs_interfaceid": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:44:48 np0005539564 nova_compute[226295]: 2025-11-29 08:44:48.812 226310 DEBUG nova.network.os_vif_util [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "address": "fa:16:3e:a7:0e:d6", "network": {"id": "14047ff8-a7c2-4411-bbc5-cc7ce1023d2a", "bridge": "br-int", "label": "tempest-network-smoke--1066451220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cc7810-83", "ovs_interfaceid": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:44:48 np0005539564 nova_compute[226295]: 2025-11-29 08:44:48.813 226310 DEBUG nova.network.os_vif_util [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:0e:d6,bridge_name='br-int',has_traffic_filtering=True,id=05cc7810-837e-49c5-98f5-e14ac1ff5796,network=Network(14047ff8-a7c2-4411-bbc5-cc7ce1023d2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cc7810-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:44:48 np0005539564 nova_compute[226295]: 2025-11-29 08:44:48.815 226310 DEBUG nova.objects.instance [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'pci_devices' on Instance uuid 000f2b4b-91a7-461e-8695-5285bfe53cc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:44:48 np0005539564 nova_compute[226295]: 2025-11-29 08:44:48.991 226310 DEBUG nova.virt.libvirt.driver [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:44:48 np0005539564 nova_compute[226295]:  <uuid>000f2b4b-91a7-461e-8695-5285bfe53cc3</uuid>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:  <name>instance-000000bf</name>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1268542128</nova:name>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:44:47</nova:creationTime>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:44:48 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:        <nova:user uuid="686f527a5723407b85ed34c8a312583f">tempest-TestNetworkAdvancedServerOps-382266774-project-member</nova:user>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:        <nova:project uuid="c4ca87a38a19497f84b6d2c170c4fe75">tempest-TestNetworkAdvancedServerOps-382266774</nova:project>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:        <nova:port uuid="05cc7810-837e-49c5-98f5-e14ac1ff5796">
Nov 29 03:44:48 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <entry name="serial">000f2b4b-91a7-461e-8695-5285bfe53cc3</entry>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <entry name="uuid">000f2b4b-91a7-461e-8695-5285bfe53cc3</entry>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/000f2b4b-91a7-461e-8695-5285bfe53cc3_disk">
Nov 29 03:44:48 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:44:48 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/000f2b4b-91a7-461e-8695-5285bfe53cc3_disk.config">
Nov 29 03:44:48 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:44:48 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:a7:0e:d6"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <target dev="tap05cc7810-83"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/000f2b4b-91a7-461e-8695-5285bfe53cc3/console.log" append="off"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:44:48 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:44:48 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:44:48 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:44:48 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:44:48 np0005539564 nova_compute[226295]: 2025-11-29 08:44:48.993 226310 DEBUG nova.compute.manager [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Preparing to wait for external event network-vif-plugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:44:48 np0005539564 nova_compute[226295]: 2025-11-29 08:44:48.995 226310 DEBUG oslo_concurrency.lockutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:48 np0005539564 nova_compute[226295]: 2025-11-29 08:44:48.996 226310 DEBUG oslo_concurrency.lockutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:48 np0005539564 nova_compute[226295]: 2025-11-29 08:44:48.996 226310 DEBUG oslo_concurrency.lockutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:48 np0005539564 nova_compute[226295]: 2025-11-29 08:44:48.997 226310 DEBUG nova.virt.libvirt.vif [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:44:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1268542128',display_name='tempest-TestNetworkAdvancedServerOps-server-1268542128',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1268542128',id=191,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJhLf6FaCEQlgtGJ466Elx/ZqN2KwkcnjEwf65gRkVvvzRnDJxusTh8/iP5gP4ElqFttcg/fOI7oZlNwuCxP0D5jnDsVjqIml/cam53sPjjzN011MM0KRngsWjRo3EjYfQ==',key_name='tempest-TestNetworkAdvancedServerOps-508554027',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-2mgqivi5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:44:44Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=000f2b4b-91a7-461e-8695-5285bfe53cc3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "address": "fa:16:3e:a7:0e:d6", "network": {"id": "14047ff8-a7c2-4411-bbc5-cc7ce1023d2a", "bridge": "br-int", "label": "tempest-network-smoke--1066451220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cc7810-83", "ovs_interfaceid": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:44:48 np0005539564 nova_compute[226295]: 2025-11-29 08:44:48.998 226310 DEBUG nova.network.os_vif_util [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "address": "fa:16:3e:a7:0e:d6", "network": {"id": "14047ff8-a7c2-4411-bbc5-cc7ce1023d2a", "bridge": "br-int", "label": "tempest-network-smoke--1066451220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cc7810-83", "ovs_interfaceid": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:44:48 np0005539564 nova_compute[226295]: 2025-11-29 08:44:48.999 226310 DEBUG nova.network.os_vif_util [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:0e:d6,bridge_name='br-int',has_traffic_filtering=True,id=05cc7810-837e-49c5-98f5-e14ac1ff5796,network=Network(14047ff8-a7c2-4411-bbc5-cc7ce1023d2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cc7810-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:44:48 np0005539564 nova_compute[226295]: 2025-11-29 08:44:48.999 226310 DEBUG os_vif [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:0e:d6,bridge_name='br-int',has_traffic_filtering=True,id=05cc7810-837e-49c5-98f5-e14ac1ff5796,network=Network(14047ff8-a7c2-4411-bbc5-cc7ce1023d2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cc7810-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:44:49 np0005539564 nova_compute[226295]: 2025-11-29 08:44:49.000 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:49 np0005539564 nova_compute[226295]: 2025-11-29 08:44:49.000 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:49 np0005539564 nova_compute[226295]: 2025-11-29 08:44:49.001 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:44:49 np0005539564 nova_compute[226295]: 2025-11-29 08:44:49.005 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:49 np0005539564 nova_compute[226295]: 2025-11-29 08:44:49.006 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05cc7810-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:49 np0005539564 nova_compute[226295]: 2025-11-29 08:44:49.007 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05cc7810-83, col_values=(('external_ids', {'iface-id': '05cc7810-837e-49c5-98f5-e14ac1ff5796', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:0e:d6', 'vm-uuid': '000f2b4b-91a7-461e-8695-5285bfe53cc3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:49 np0005539564 nova_compute[226295]: 2025-11-29 08:44:49.008 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:49 np0005539564 NetworkManager[48997]: <info>  [1764405889.0099] manager: (tap05cc7810-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/348)
Nov 29 03:44:49 np0005539564 nova_compute[226295]: 2025-11-29 08:44:49.010 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:44:49 np0005539564 nova_compute[226295]: 2025-11-29 08:44:49.016 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:49 np0005539564 nova_compute[226295]: 2025-11-29 08:44:49.017 226310 INFO os_vif [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:0e:d6,bridge_name='br-int',has_traffic_filtering=True,id=05cc7810-837e-49c5-98f5-e14ac1ff5796,network=Network(14047ff8-a7c2-4411-bbc5-cc7ce1023d2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cc7810-83')#033[00m
Nov 29 03:44:49 np0005539564 nova_compute[226295]: 2025-11-29 08:44:49.056 226310 DEBUG nova.network.neutron [req-712d698b-99be-4dfa-982d-400330d81038 req-465c44b2-9f3e-4fe8-afa3-e096c1accddd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Updated VIF entry in instance network info cache for port 05cc7810-837e-49c5-98f5-e14ac1ff5796. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:44:49 np0005539564 nova_compute[226295]: 2025-11-29 08:44:49.057 226310 DEBUG nova.network.neutron [req-712d698b-99be-4dfa-982d-400330d81038 req-465c44b2-9f3e-4fe8-afa3-e096c1accddd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Updating instance_info_cache with network_info: [{"id": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "address": "fa:16:3e:a7:0e:d6", "network": {"id": "14047ff8-a7c2-4411-bbc5-cc7ce1023d2a", "bridge": "br-int", "label": "tempest-network-smoke--1066451220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cc7810-83", "ovs_interfaceid": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:49 np0005539564 nova_compute[226295]: 2025-11-29 08:44:49.093 226310 DEBUG oslo_concurrency.lockutils [req-712d698b-99be-4dfa-982d-400330d81038 req-465c44b2-9f3e-4fe8-afa3-e096c1accddd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-000f2b4b-91a7-461e-8695-5285bfe53cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:44:49 np0005539564 nova_compute[226295]: 2025-11-29 08:44:49.108 226310 DEBUG nova.virt.libvirt.driver [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:44:49 np0005539564 nova_compute[226295]: 2025-11-29 08:44:49.109 226310 DEBUG nova.virt.libvirt.driver [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:44:49 np0005539564 nova_compute[226295]: 2025-11-29 08:44:49.109 226310 DEBUG nova.virt.libvirt.driver [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No VIF found with MAC fa:16:3e:a7:0e:d6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:44:49 np0005539564 nova_compute[226295]: 2025-11-29 08:44:49.110 226310 INFO nova.virt.libvirt.driver [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Using config drive#033[00m
Nov 29 03:44:49 np0005539564 nova_compute[226295]: 2025-11-29 08:44:49.140 226310 DEBUG nova.storage.rbd_utils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 000f2b4b-91a7-461e-8695-5285bfe53cc3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:49.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:49 np0005539564 nova_compute[226295]: 2025-11-29 08:44:49.627 226310 INFO nova.virt.libvirt.driver [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Creating config drive at /var/lib/nova/instances/000f2b4b-91a7-461e-8695-5285bfe53cc3/disk.config#033[00m
Nov 29 03:44:49 np0005539564 nova_compute[226295]: 2025-11-29 08:44:49.633 226310 DEBUG oslo_concurrency.processutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/000f2b4b-91a7-461e-8695-5285bfe53cc3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp44_3jxwc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:49 np0005539564 nova_compute[226295]: 2025-11-29 08:44:49.795 226310 DEBUG oslo_concurrency.processutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/000f2b4b-91a7-461e-8695-5285bfe53cc3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp44_3jxwc" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:49 np0005539564 nova_compute[226295]: 2025-11-29 08:44:49.830 226310 DEBUG nova.storage.rbd_utils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 000f2b4b-91a7-461e-8695-5285bfe53cc3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:49 np0005539564 nova_compute[226295]: 2025-11-29 08:44:49.834 226310 DEBUG oslo_concurrency.processutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/000f2b4b-91a7-461e-8695-5285bfe53cc3/disk.config 000f2b4b-91a7-461e-8695-5285bfe53cc3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.015 226310 DEBUG oslo_concurrency.processutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/000f2b4b-91a7-461e-8695-5285bfe53cc3/disk.config 000f2b4b-91a7-461e-8695-5285bfe53cc3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.016 226310 INFO nova.virt.libvirt.driver [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Deleting local config drive /var/lib/nova/instances/000f2b4b-91a7-461e-8695-5285bfe53cc3/disk.config because it was imported into RBD.#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.069 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:50 np0005539564 kernel: tap05cc7810-83: entered promiscuous mode
Nov 29 03:44:50 np0005539564 NetworkManager[48997]: <info>  [1764405890.1003] manager: (tap05cc7810-83): new Tun device (/org/freedesktop/NetworkManager/Devices/349)
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.099 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:50 np0005539564 ovn_controller[130591]: 2025-11-29T08:44:50Z|00756|binding|INFO|Claiming lport 05cc7810-837e-49c5-98f5-e14ac1ff5796 for this chassis.
Nov 29 03:44:50 np0005539564 ovn_controller[130591]: 2025-11-29T08:44:50Z|00757|binding|INFO|05cc7810-837e-49c5-98f5-e14ac1ff5796: Claiming fa:16:3e:a7:0e:d6 10.100.0.14
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.106 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.114 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:0e:d6 10.100.0.14'], port_security=['fa:16:3e:a7:0e:d6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '000f2b4b-91a7-461e-8695-5285bfe53cc3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3c09da3c-4224-4197-b2fb-72b9d807c1ab', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1fe4c91d-8afb-43ca-a608-778d25bb54c3, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=05cc7810-837e-49c5-98f5-e14ac1ff5796) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.117 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 05cc7810-837e-49c5-98f5-e14ac1ff5796 in datapath 14047ff8-a7c2-4411-bbc5-cc7ce1023d2a bound to our chassis#033[00m
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.119 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14047ff8-a7c2-4411-bbc5-cc7ce1023d2a#033[00m
Nov 29 03:44:50 np0005539564 systemd-udevd[300222]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.135 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[772b2738-65c4-44dd-819d-939e5fab147a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.136 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14047ff8-a1 in ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.139 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14047ff8-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.140 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[beccddda-1533-4f89-8d93-47f76ab4f7c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.141 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4b32a688-0ab6-4a2b-ab21-cd920e917f36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:50 np0005539564 systemd-machined[190128]: New machine qemu-89-instance-000000bf.
Nov 29 03:44:50 np0005539564 NetworkManager[48997]: <info>  [1764405890.1562] device (tap05cc7810-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:44:50 np0005539564 NetworkManager[48997]: <info>  [1764405890.1574] device (tap05cc7810-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.158 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[230e6928-414b-49c5-89ef-976fa410bf7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:50 np0005539564 systemd[1]: Started Virtual Machine qemu-89-instance-000000bf.
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.189 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7b56ed70-f8d8-4bb4-b9d5-2de865955565]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.206 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:50 np0005539564 ovn_controller[130591]: 2025-11-29T08:44:50Z|00758|binding|INFO|Setting lport 05cc7810-837e-49c5-98f5-e14ac1ff5796 ovn-installed in OVS
Nov 29 03:44:50 np0005539564 ovn_controller[130591]: 2025-11-29T08:44:50Z|00759|binding|INFO|Setting lport 05cc7810-837e-49c5-98f5-e14ac1ff5796 up in Southbound
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.213 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.234 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[42b18a8e-9b27-41fb-add5-e1d269266e23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:50 np0005539564 systemd-udevd[300226]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:44:50 np0005539564 NetworkManager[48997]: <info>  [1764405890.2423] manager: (tap14047ff8-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/350)
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.241 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e1a31231-bf78-4b32-bc95-f1e7551cd7fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.282 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[866e066a-5a9b-4371-89ca-d1718abc309a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.285 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[821fe773-5cce-401e-9afb-9c10022bc357]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:50 np0005539564 NetworkManager[48997]: <info>  [1764405890.3046] device (tap14047ff8-a0): carrier: link connected
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.308 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[ea78f361-11a0-43f2-a8f0-2cc44e4021ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.323 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d2401a5f-9f60-4660-a033-b7c7211d04d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14047ff8-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:0b:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 229], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 886498, 'reachable_time': 17693, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300255, 'error': None, 'target': 'ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.337 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d085e3-4750-4c51-a425-ad4704d3016e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:ba0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 886498, 'tstamp': 886498}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300256, 'error': None, 'target': 'ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.361 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6a59bcfc-db50-4fe0-b546-eee76c2e9140]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14047ff8-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:0b:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 229], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 886498, 'reachable_time': 17693, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300257, 'error': None, 'target': 'ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.399 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[26120ce3-0365-4edc-b0ae-0a46dc74833c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.484 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[30924db9-1563-4671-b469-5d6ae5a486ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.487 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14047ff8-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.488 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.489 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14047ff8-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.491 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:50 np0005539564 NetworkManager[48997]: <info>  [1764405890.4925] manager: (tap14047ff8-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/351)
Nov 29 03:44:50 np0005539564 kernel: tap14047ff8-a0: entered promiscuous mode
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.494 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.497 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14047ff8-a0, col_values=(('external_ids', {'iface-id': '6f19bd34-c24e-47b5-aa3a-35a26f253f43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:50 np0005539564 ovn_controller[130591]: 2025-11-29T08:44:50Z|00760|binding|INFO|Releasing lport 6f19bd34-c24e-47b5-aa3a-35a26f253f43 from this chassis (sb_readonly=0)
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.498 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.500 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14047ff8-a7c2-4411-bbc5-cc7ce1023d2a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14047ff8-a7c2-4411-bbc5-cc7ce1023d2a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.501 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e9645ca7-44f2-4207-98f6-acb7d8349207]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.502 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/14047ff8-a7c2-4411-bbc5-cc7ce1023d2a.pid.haproxy
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:44:50 np0005539564 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:44:50 np0005539564 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 14047ff8-a7c2-4411-bbc5-cc7ce1023d2a
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:44:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:44:50.503 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a', 'env', 'PROCESS_TAG=haproxy-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14047ff8-a7c2-4411-bbc5-cc7ce1023d2a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.518 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.531 226310 DEBUG nova.compute.manager [req-ea49e463-4682-40ba-b2e4-4dc8b7e5c033 req-6b370144-27dc-4e2d-be5d-d8656f9210d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Received event network-vif-plugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.532 226310 DEBUG oslo_concurrency.lockutils [req-ea49e463-4682-40ba-b2e4-4dc8b7e5c033 req-6b370144-27dc-4e2d-be5d-d8656f9210d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.532 226310 DEBUG oslo_concurrency.lockutils [req-ea49e463-4682-40ba-b2e4-4dc8b7e5c033 req-6b370144-27dc-4e2d-be5d-d8656f9210d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.532 226310 DEBUG oslo_concurrency.lockutils [req-ea49e463-4682-40ba-b2e4-4dc8b7e5c033 req-6b370144-27dc-4e2d-be5d-d8656f9210d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.532 226310 DEBUG nova.compute.manager [req-ea49e463-4682-40ba-b2e4-4dc8b7e5c033 req-6b370144-27dc-4e2d-be5d-d8656f9210d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Processing event network-vif-plugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:44:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:50.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.747 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405890.7464488, 000f2b4b-91a7-461e-8695-5285bfe53cc3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.747 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] VM Started (Lifecycle Event)#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.751 226310 DEBUG nova.compute.manager [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.756 226310 DEBUG nova.virt.libvirt.driver [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.760 226310 INFO nova.virt.libvirt.driver [-] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Instance spawned successfully.#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.760 226310 DEBUG nova.virt.libvirt.driver [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.776 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.784 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.787 226310 DEBUG nova.virt.libvirt.driver [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.788 226310 DEBUG nova.virt.libvirt.driver [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.788 226310 DEBUG nova.virt.libvirt.driver [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.788 226310 DEBUG nova.virt.libvirt.driver [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.789 226310 DEBUG nova.virt.libvirt.driver [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.789 226310 DEBUG nova.virt.libvirt.driver [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.820 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.821 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405890.750315, 000f2b4b-91a7-461e-8695-5285bfe53cc3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.821 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.849 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.852 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405890.754452, 000f2b4b-91a7-461e-8695-5285bfe53cc3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.852 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.858 226310 INFO nova.compute.manager [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Took 6.54 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.858 226310 DEBUG nova.compute.manager [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.883 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.886 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:44:50 np0005539564 podman[300332]: 2025-11-29 08:44:50.89961699 +0000 UTC m=+0.052869251 container create ec433435a086d88570b6423dacfeba3656be41a6e750be954f5f9b4df5ab2f4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.912 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.939 226310 INFO nova.compute.manager [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Took 7.90 seconds to build instance.#033[00m
Nov 29 03:44:50 np0005539564 systemd[1]: Started libpod-conmon-ec433435a086d88570b6423dacfeba3656be41a6e750be954f5f9b4df5ab2f4a.scope.
Nov 29 03:44:50 np0005539564 nova_compute[226295]: 2025-11-29 08:44:50.957 226310 DEBUG oslo_concurrency.lockutils [None req-65f0b100-cef4-4779-871d-c9ef621e4afb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:50 np0005539564 podman[300332]: 2025-11-29 08:44:50.87302404 +0000 UTC m=+0.026276311 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:44:50 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:44:50 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de1fdb6b637a1749f743943804cc87337f57bf20ec39e14d2d37bd37831bd70f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:44:50 np0005539564 podman[300332]: 2025-11-29 08:44:50.996058879 +0000 UTC m=+0.149311160 container init ec433435a086d88570b6423dacfeba3656be41a6e750be954f5f9b4df5ab2f4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:44:51 np0005539564 podman[300332]: 2025-11-29 08:44:51.003221723 +0000 UTC m=+0.156473994 container start ec433435a086d88570b6423dacfeba3656be41a6e750be954f5f9b4df5ab2f4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:44:51 np0005539564 neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a[300347]: [NOTICE]   (300351) : New worker (300353) forked
Nov 29 03:44:51 np0005539564 neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a[300347]: [NOTICE]   (300351) : Loading success.
Nov 29 03:44:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:51.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:52 np0005539564 nova_compute[226295]: 2025-11-29 08:44:52.595 226310 DEBUG nova.compute.manager [req-9f1d97e7-e31a-4e73-b47b-e60a54792cf7 req-5b1ad0d6-58ff-4dac-a8ca-a9f1c1621c54 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Received event network-vif-plugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:52 np0005539564 nova_compute[226295]: 2025-11-29 08:44:52.596 226310 DEBUG oslo_concurrency.lockutils [req-9f1d97e7-e31a-4e73-b47b-e60a54792cf7 req-5b1ad0d6-58ff-4dac-a8ca-a9f1c1621c54 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:52 np0005539564 nova_compute[226295]: 2025-11-29 08:44:52.596 226310 DEBUG oslo_concurrency.lockutils [req-9f1d97e7-e31a-4e73-b47b-e60a54792cf7 req-5b1ad0d6-58ff-4dac-a8ca-a9f1c1621c54 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:52 np0005539564 nova_compute[226295]: 2025-11-29 08:44:52.596 226310 DEBUG oslo_concurrency.lockutils [req-9f1d97e7-e31a-4e73-b47b-e60a54792cf7 req-5b1ad0d6-58ff-4dac-a8ca-a9f1c1621c54 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:52 np0005539564 nova_compute[226295]: 2025-11-29 08:44:52.596 226310 DEBUG nova.compute.manager [req-9f1d97e7-e31a-4e73-b47b-e60a54792cf7 req-5b1ad0d6-58ff-4dac-a8ca-a9f1c1621c54 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] No waiting events found dispatching network-vif-plugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:44:52 np0005539564 nova_compute[226295]: 2025-11-29 08:44:52.596 226310 WARNING nova.compute.manager [req-9f1d97e7-e31a-4e73-b47b-e60a54792cf7 req-5b1ad0d6-58ff-4dac-a8ca-a9f1c1621c54 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Received unexpected event network-vif-plugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:44:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:52.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:53.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:54 np0005539564 nova_compute[226295]: 2025-11-29 08:44:54.010 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:54 np0005539564 podman[300364]: 2025-11-29 08:44:54.51823549 +0000 UTC m=+0.059792540 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, 
tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:44:54 np0005539564 podman[300363]: 2025-11-29 08:44:54.54708865 +0000 UTC m=+0.085325689 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 29 03:44:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:54.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:54 np0005539564 podman[300362]: 2025-11-29 08:44:54.615677215 +0000 UTC m=+0.163895235 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:44:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:55 np0005539564 nova_compute[226295]: 2025-11-29 08:44:55.071 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:55 np0005539564 nova_compute[226295]: 2025-11-29 08:44:55.484 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:55 np0005539564 NetworkManager[48997]: <info>  [1764405895.4850] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/352)
Nov 29 03:44:55 np0005539564 NetworkManager[48997]: <info>  [1764405895.4871] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/353)
Nov 29 03:44:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:55.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:55 np0005539564 nova_compute[226295]: 2025-11-29 08:44:55.547 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:55 np0005539564 ovn_controller[130591]: 2025-11-29T08:44:55Z|00761|binding|INFO|Releasing lport 6f19bd34-c24e-47b5-aa3a-35a26f253f43 from this chassis (sb_readonly=0)
Nov 29 03:44:55 np0005539564 nova_compute[226295]: 2025-11-29 08:44:55.559 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:55 np0005539564 nova_compute[226295]: 2025-11-29 08:44:55.814 226310 DEBUG nova.compute.manager [req-3a6f2a5f-4562-4fe3-8f3e-b5aefc25147f req-751833b0-c0b7-426c-a6ca-b30be5358d83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Received event network-changed-05cc7810-837e-49c5-98f5-e14ac1ff5796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:55 np0005539564 nova_compute[226295]: 2025-11-29 08:44:55.815 226310 DEBUG nova.compute.manager [req-3a6f2a5f-4562-4fe3-8f3e-b5aefc25147f req-751833b0-c0b7-426c-a6ca-b30be5358d83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Refreshing instance network info cache due to event network-changed-05cc7810-837e-49c5-98f5-e14ac1ff5796. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:44:55 np0005539564 nova_compute[226295]: 2025-11-29 08:44:55.816 226310 DEBUG oslo_concurrency.lockutils [req-3a6f2a5f-4562-4fe3-8f3e-b5aefc25147f req-751833b0-c0b7-426c-a6ca-b30be5358d83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-000f2b4b-91a7-461e-8695-5285bfe53cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:44:55 np0005539564 nova_compute[226295]: 2025-11-29 08:44:55.817 226310 DEBUG oslo_concurrency.lockutils [req-3a6f2a5f-4562-4fe3-8f3e-b5aefc25147f req-751833b0-c0b7-426c-a6ca-b30be5358d83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-000f2b4b-91a7-461e-8695-5285bfe53cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:44:55 np0005539564 nova_compute[226295]: 2025-11-29 08:44:55.817 226310 DEBUG nova.network.neutron [req-3a6f2a5f-4562-4fe3-8f3e-b5aefc25147f req-751833b0-c0b7-426c-a6ca-b30be5358d83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Refreshing network info cache for port 05cc7810-837e-49c5-98f5-e14ac1ff5796 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:44:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:56.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:44:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:57.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:44:58 np0005539564 nova_compute[226295]: 2025-11-29 08:44:58.562 226310 DEBUG nova.network.neutron [req-3a6f2a5f-4562-4fe3-8f3e-b5aefc25147f req-751833b0-c0b7-426c-a6ca-b30be5358d83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Updated VIF entry in instance network info cache for port 05cc7810-837e-49c5-98f5-e14ac1ff5796. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:44:58 np0005539564 nova_compute[226295]: 2025-11-29 08:44:58.563 226310 DEBUG nova.network.neutron [req-3a6f2a5f-4562-4fe3-8f3e-b5aefc25147f req-751833b0-c0b7-426c-a6ca-b30be5358d83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Updating instance_info_cache with network_info: [{"id": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "address": "fa:16:3e:a7:0e:d6", "network": {"id": "14047ff8-a7c2-4411-bbc5-cc7ce1023d2a", "bridge": "br-int", "label": "tempest-network-smoke--1066451220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cc7810-83", "ovs_interfaceid": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:58 np0005539564 nova_compute[226295]: 2025-11-29 08:44:58.590 226310 DEBUG oslo_concurrency.lockutils [req-3a6f2a5f-4562-4fe3-8f3e-b5aefc25147f req-751833b0-c0b7-426c-a6ca-b30be5358d83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-000f2b4b-91a7-461e-8695-5285bfe53cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:44:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:58.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:59 np0005539564 nova_compute[226295]: 2025-11-29 08:44:59.060 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:44:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:59.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:00 np0005539564 nova_compute[226295]: 2025-11-29 08:45:00.075 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:00.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:01.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:45:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:45:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:45:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:45:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:02.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:03.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:03.761 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:03.763 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:03.765 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:04 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:45:04 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:45:04 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:45:04 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:45:04 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:45:04 np0005539564 nova_compute[226295]: 2025-11-29 08:45:04.064 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:04.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:05 np0005539564 nova_compute[226295]: 2025-11-29 08:45:05.077 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:05.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:05 np0005539564 ovn_controller[130591]: 2025-11-29T08:45:05Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a7:0e:d6 10.100.0.14
Nov 29 03:45:05 np0005539564 ovn_controller[130591]: 2025-11-29T08:45:05Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a7:0e:d6 10.100.0.14
Nov 29 03:45:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:06.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:07.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:08.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:09 np0005539564 nova_compute[226295]: 2025-11-29 08:45:09.067 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:09.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:10 np0005539564 nova_compute[226295]: 2025-11-29 08:45:10.081 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:10.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:11 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:45:11 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:45:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:11.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:12 np0005539564 nova_compute[226295]: 2025-11-29 08:45:12.550 226310 INFO nova.compute.manager [None req-e5cddc7e-0027-4df4-94bc-759182e25b53 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Get console output#033[00m
Nov 29 03:45:12 np0005539564 nova_compute[226295]: 2025-11-29 08:45:12.558 270504 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:45:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:45:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:12.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:45:12 np0005539564 nova_compute[226295]: 2025-11-29 08:45:12.829 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:12 np0005539564 nova_compute[226295]: 2025-11-29 08:45:12.830 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:12 np0005539564 nova_compute[226295]: 2025-11-29 08:45:12.831 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:45:12 np0005539564 nova_compute[226295]: 2025-11-29 08:45:12.964 226310 DEBUG oslo_concurrency.lockutils [None req-7718a375-36a4-4c5a-83be-119e3de809d0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "000f2b4b-91a7-461e-8695-5285bfe53cc3" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:12 np0005539564 nova_compute[226295]: 2025-11-29 08:45:12.964 226310 DEBUG oslo_concurrency.lockutils [None req-7718a375-36a4-4c5a-83be-119e3de809d0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:12 np0005539564 nova_compute[226295]: 2025-11-29 08:45:12.965 226310 DEBUG nova.compute.manager [None req-7718a375-36a4-4c5a-83be-119e3de809d0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:45:12 np0005539564 nova_compute[226295]: 2025-11-29 08:45:12.970 226310 DEBUG nova.compute.manager [None req-7718a375-36a4-4c5a-83be-119e3de809d0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 29 03:45:12 np0005539564 nova_compute[226295]: 2025-11-29 08:45:12.972 226310 DEBUG nova.objects.instance [None req-7718a375-36a4-4c5a-83be-119e3de809d0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'flavor' on Instance uuid 000f2b4b-91a7-461e-8695-5285bfe53cc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:45:13 np0005539564 nova_compute[226295]: 2025-11-29 08:45:13.081 226310 DEBUG nova.virt.libvirt.driver [None req-7718a375-36a4-4c5a-83be-119e3de809d0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:45:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:13.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:14 np0005539564 nova_compute[226295]: 2025-11-29 08:45:14.104 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:14 np0005539564 nova_compute[226295]: 2025-11-29 08:45:14.337 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:14.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:15 np0005539564 nova_compute[226295]: 2025-11-29 08:45:15.083 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:15 np0005539564 kernel: tap05cc7810-83 (unregistering): left promiscuous mode
Nov 29 03:45:15 np0005539564 NetworkManager[48997]: <info>  [1764405915.3644] device (tap05cc7810-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:45:15 np0005539564 nova_compute[226295]: 2025-11-29 08:45:15.374 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:15 np0005539564 ovn_controller[130591]: 2025-11-29T08:45:15Z|00762|binding|INFO|Releasing lport 05cc7810-837e-49c5-98f5-e14ac1ff5796 from this chassis (sb_readonly=0)
Nov 29 03:45:15 np0005539564 ovn_controller[130591]: 2025-11-29T08:45:15Z|00763|binding|INFO|Setting lport 05cc7810-837e-49c5-98f5-e14ac1ff5796 down in Southbound
Nov 29 03:45:15 np0005539564 ovn_controller[130591]: 2025-11-29T08:45:15Z|00764|binding|INFO|Removing iface tap05cc7810-83 ovn-installed in OVS
Nov 29 03:45:15 np0005539564 nova_compute[226295]: 2025-11-29 08:45:15.379 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:15.397 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:0e:d6 10.100.0.14'], port_security=['fa:16:3e:a7:0e:d6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '000f2b4b-91a7-461e-8695-5285bfe53cc3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3c09da3c-4224-4197-b2fb-72b9d807c1ab', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1fe4c91d-8afb-43ca-a608-778d25bb54c3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=05cc7810-837e-49c5-98f5-e14ac1ff5796) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:45:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:15.398 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 05cc7810-837e-49c5-98f5-e14ac1ff5796 in datapath 14047ff8-a7c2-4411-bbc5-cc7ce1023d2a unbound from our chassis#033[00m
Nov 29 03:45:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:15.399 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14047ff8-a7c2-4411-bbc5-cc7ce1023d2a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:45:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:15.400 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d1c6e54a-5120-426d-b6e6-ea46fe84ddc2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:15.401 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a namespace which is not needed anymore#033[00m
Nov 29 03:45:15 np0005539564 nova_compute[226295]: 2025-11-29 08:45:15.401 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:15 np0005539564 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000bf.scope: Deactivated successfully.
Nov 29 03:45:15 np0005539564 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000bf.scope: Consumed 15.251s CPU time.
Nov 29 03:45:15 np0005539564 systemd-machined[190128]: Machine qemu-89-instance-000000bf terminated.
Nov 29 03:45:15 np0005539564 neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a[300347]: [NOTICE]   (300351) : haproxy version is 2.8.14-c23fe91
Nov 29 03:45:15 np0005539564 neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a[300347]: [NOTICE]   (300351) : path to executable is /usr/sbin/haproxy
Nov 29 03:45:15 np0005539564 neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a[300347]: [WARNING]  (300351) : Exiting Master process...
Nov 29 03:45:15 np0005539564 neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a[300347]: [WARNING]  (300351) : Exiting Master process...
Nov 29 03:45:15 np0005539564 neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a[300347]: [ALERT]    (300351) : Current worker (300353) exited with code 143 (Terminated)
Nov 29 03:45:15 np0005539564 neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a[300347]: [WARNING]  (300351) : All workers exited. Exiting... (0)
Nov 29 03:45:15 np0005539564 systemd[1]: libpod-ec433435a086d88570b6423dacfeba3656be41a6e750be954f5f9b4df5ab2f4a.scope: Deactivated successfully.
Nov 29 03:45:15 np0005539564 podman[300751]: 2025-11-29 08:45:15.57893402 +0000 UTC m=+0.068047463 container died ec433435a086d88570b6423dacfeba3656be41a6e750be954f5f9b4df5ab2f4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:45:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:15.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:15 np0005539564 kernel: tap05cc7810-83: entered promiscuous mode
Nov 29 03:45:15 np0005539564 NetworkManager[48997]: <info>  [1764405915.6013] manager: (tap05cc7810-83): new Tun device (/org/freedesktop/NetworkManager/Devices/354)
Nov 29 03:45:15 np0005539564 ovn_controller[130591]: 2025-11-29T08:45:15Z|00765|binding|INFO|Claiming lport 05cc7810-837e-49c5-98f5-e14ac1ff5796 for this chassis.
Nov 29 03:45:15 np0005539564 systemd-udevd[300732]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:45:15 np0005539564 ovn_controller[130591]: 2025-11-29T08:45:15Z|00766|binding|INFO|05cc7810-837e-49c5-98f5-e14ac1ff5796: Claiming fa:16:3e:a7:0e:d6 10.100.0.14
Nov 29 03:45:15 np0005539564 nova_compute[226295]: 2025-11-29 08:45:15.604 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:15 np0005539564 kernel: tap05cc7810-83 (unregistering): left promiscuous mode
Nov 29 03:45:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:15.614 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:0e:d6 10.100.0.14'], port_security=['fa:16:3e:a7:0e:d6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '000f2b4b-91a7-461e-8695-5285bfe53cc3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3c09da3c-4224-4197-b2fb-72b9d807c1ab', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1fe4c91d-8afb-43ca-a608-778d25bb54c3, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=05cc7810-837e-49c5-98f5-e14ac1ff5796) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:45:15 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ec433435a086d88570b6423dacfeba3656be41a6e750be954f5f9b4df5ab2f4a-userdata-shm.mount: Deactivated successfully.
Nov 29 03:45:15 np0005539564 systemd[1]: var-lib-containers-storage-overlay-de1fdb6b637a1749f743943804cc87337f57bf20ec39e14d2d37bd37831bd70f-merged.mount: Deactivated successfully.
Nov 29 03:45:15 np0005539564 podman[300751]: 2025-11-29 08:45:15.653157168 +0000 UTC m=+0.142270641 container cleanup ec433435a086d88570b6423dacfeba3656be41a6e750be954f5f9b4df5ab2f4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:45:15 np0005539564 nova_compute[226295]: 2025-11-29 08:45:15.651 226310 DEBUG nova.compute.manager [req-e86f406c-e230-4594-8e18-6de24b7520b4 req-4aaa52ff-618f-4c3e-baa9-78f6eba7504e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Received event network-vif-unplugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:15 np0005539564 nova_compute[226295]: 2025-11-29 08:45:15.652 226310 DEBUG oslo_concurrency.lockutils [req-e86f406c-e230-4594-8e18-6de24b7520b4 req-4aaa52ff-618f-4c3e-baa9-78f6eba7504e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:15 np0005539564 nova_compute[226295]: 2025-11-29 08:45:15.652 226310 DEBUG oslo_concurrency.lockutils [req-e86f406c-e230-4594-8e18-6de24b7520b4 req-4aaa52ff-618f-4c3e-baa9-78f6eba7504e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:15 np0005539564 nova_compute[226295]: 2025-11-29 08:45:15.652 226310 DEBUG oslo_concurrency.lockutils [req-e86f406c-e230-4594-8e18-6de24b7520b4 req-4aaa52ff-618f-4c3e-baa9-78f6eba7504e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:15 np0005539564 nova_compute[226295]: 2025-11-29 08:45:15.652 226310 DEBUG nova.compute.manager [req-e86f406c-e230-4594-8e18-6de24b7520b4 req-4aaa52ff-618f-4c3e-baa9-78f6eba7504e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] No waiting events found dispatching network-vif-unplugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:45:15 np0005539564 nova_compute[226295]: 2025-11-29 08:45:15.653 226310 WARNING nova.compute.manager [req-e86f406c-e230-4594-8e18-6de24b7520b4 req-4aaa52ff-618f-4c3e-baa9-78f6eba7504e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Received unexpected event network-vif-unplugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 for instance with vm_state active and task_state powering-off.#033[00m
Nov 29 03:45:15 np0005539564 ovn_controller[130591]: 2025-11-29T08:45:15Z|00767|binding|INFO|Setting lport 05cc7810-837e-49c5-98f5-e14ac1ff5796 ovn-installed in OVS
Nov 29 03:45:15 np0005539564 ovn_controller[130591]: 2025-11-29T08:45:15Z|00768|binding|INFO|Setting lport 05cc7810-837e-49c5-98f5-e14ac1ff5796 up in Southbound
Nov 29 03:45:15 np0005539564 nova_compute[226295]: 2025-11-29 08:45:15.655 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:15 np0005539564 ovn_controller[130591]: 2025-11-29T08:45:15Z|00769|binding|INFO|Releasing lport 05cc7810-837e-49c5-98f5-e14ac1ff5796 from this chassis (sb_readonly=1)
Nov 29 03:45:15 np0005539564 nova_compute[226295]: 2025-11-29 08:45:15.657 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:15 np0005539564 ovn_controller[130591]: 2025-11-29T08:45:15Z|00770|if_status|INFO|Not setting lport 05cc7810-837e-49c5-98f5-e14ac1ff5796 down as sb is readonly
Nov 29 03:45:15 np0005539564 ovn_controller[130591]: 2025-11-29T08:45:15Z|00771|binding|INFO|Releasing lport 05cc7810-837e-49c5-98f5-e14ac1ff5796 from this chassis (sb_readonly=0)
Nov 29 03:45:15 np0005539564 ovn_controller[130591]: 2025-11-29T08:45:15Z|00772|binding|INFO|Removing iface tap05cc7810-83 ovn-installed in OVS
Nov 29 03:45:15 np0005539564 ovn_controller[130591]: 2025-11-29T08:45:15Z|00773|binding|INFO|Setting lport 05cc7810-837e-49c5-98f5-e14ac1ff5796 down in Southbound
Nov 29 03:45:15 np0005539564 nova_compute[226295]: 2025-11-29 08:45:15.664 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:15.667 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:0e:d6 10.100.0.14'], port_security=['fa:16:3e:a7:0e:d6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '000f2b4b-91a7-461e-8695-5285bfe53cc3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '5', 'neutron:security_group_ids': '3c09da3c-4224-4197-b2fb-72b9d807c1ab', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1fe4c91d-8afb-43ca-a608-778d25bb54c3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=05cc7810-837e-49c5-98f5-e14ac1ff5796) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:45:15 np0005539564 systemd[1]: libpod-conmon-ec433435a086d88570b6423dacfeba3656be41a6e750be954f5f9b4df5ab2f4a.scope: Deactivated successfully.
Nov 29 03:45:15 np0005539564 nova_compute[226295]: 2025-11-29 08:45:15.675 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:15 np0005539564 podman[300785]: 2025-11-29 08:45:15.725030293 +0000 UTC m=+0.044658420 container remove ec433435a086d88570b6423dacfeba3656be41a6e750be954f5f9b4df5ab2f4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:45:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:15.732 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[72703bcb-bb60-4afa-926d-19779c9fc8b0]: (4, ('Sat Nov 29 08:45:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a (ec433435a086d88570b6423dacfeba3656be41a6e750be954f5f9b4df5ab2f4a)\nec433435a086d88570b6423dacfeba3656be41a6e750be954f5f9b4df5ab2f4a\nSat Nov 29 08:45:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a (ec433435a086d88570b6423dacfeba3656be41a6e750be954f5f9b4df5ab2f4a)\nec433435a086d88570b6423dacfeba3656be41a6e750be954f5f9b4df5ab2f4a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:15.734 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a5d55981-7f16-4f65-9fec-f193ef1f9bb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:15.735 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14047ff8-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:15 np0005539564 nova_compute[226295]: 2025-11-29 08:45:15.736 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:15 np0005539564 kernel: tap14047ff8-a0: left promiscuous mode
Nov 29 03:45:15 np0005539564 nova_compute[226295]: 2025-11-29 08:45:15.758 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:15.761 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6e922f3a-9d26-490c-903e-007f9b286a02]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:15.782 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ee0ebbd0-e281-4db5-9b81-c1b972e20c7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:15.784 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[90d9b2c2-663e-4289-a12d-d39129820cd4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:15.800 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b012d78e-f85f-43b6-88c3-acccc733b1e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 886490, 'reachable_time': 43645, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300803, 'error': None, 'target': 'ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:15.804 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:45:15 np0005539564 systemd[1]: run-netns-ovnmeta\x2d14047ff8\x2da7c2\x2d4411\x2dbbc5\x2dcc7ce1023d2a.mount: Deactivated successfully.
Nov 29 03:45:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:15.804 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[fc89b532-b00b-42df-88f1-5c3d81190e9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:15.805 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 05cc7810-837e-49c5-98f5-e14ac1ff5796 in datapath 14047ff8-a7c2-4411-bbc5-cc7ce1023d2a unbound from our chassis#033[00m
Nov 29 03:45:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:15.807 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14047ff8-a7c2-4411-bbc5-cc7ce1023d2a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:45:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:15.807 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[21183f8d-a5f3-4651-a515-74338dbe328c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:15.808 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 05cc7810-837e-49c5-98f5-e14ac1ff5796 in datapath 14047ff8-a7c2-4411-bbc5-cc7ce1023d2a unbound from our chassis#033[00m
Nov 29 03:45:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:15.809 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14047ff8-a7c2-4411-bbc5-cc7ce1023d2a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:45:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:15.810 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9098fb11-eab1-4d8a-b3d5-8ba772ade935]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:16 np0005539564 nova_compute[226295]: 2025-11-29 08:45:16.116 226310 INFO nova.virt.libvirt.driver [None req-7718a375-36a4-4c5a-83be-119e3de809d0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:45:16 np0005539564 nova_compute[226295]: 2025-11-29 08:45:16.124 226310 INFO nova.virt.libvirt.driver [-] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Instance destroyed successfully.#033[00m
Nov 29 03:45:16 np0005539564 nova_compute[226295]: 2025-11-29 08:45:16.125 226310 DEBUG nova.objects.instance [None req-7718a375-36a4-4c5a-83be-119e3de809d0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'numa_topology' on Instance uuid 000f2b4b-91a7-461e-8695-5285bfe53cc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:45:16 np0005539564 nova_compute[226295]: 2025-11-29 08:45:16.143 226310 DEBUG nova.compute.manager [None req-7718a375-36a4-4c5a-83be-119e3de809d0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:45:16 np0005539564 nova_compute[226295]: 2025-11-29 08:45:16.226 226310 DEBUG oslo_concurrency.lockutils [None req-7718a375-36a4-4c5a-83be-119e3de809d0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.261s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:16.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:17 np0005539564 nova_compute[226295]: 2025-11-29 08:45:17.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:17 np0005539564 nova_compute[226295]: 2025-11-29 08:45:17.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:17.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:17 np0005539564 nova_compute[226295]: 2025-11-29 08:45:17.805 226310 DEBUG nova.compute.manager [req-21c635fc-66ce-40bc-873e-89ef483e57bf req-34a0970f-e18c-4d05-a499-666d16851aef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Received event network-vif-plugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:17 np0005539564 nova_compute[226295]: 2025-11-29 08:45:17.805 226310 DEBUG oslo_concurrency.lockutils [req-21c635fc-66ce-40bc-873e-89ef483e57bf req-34a0970f-e18c-4d05-a499-666d16851aef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:17 np0005539564 nova_compute[226295]: 2025-11-29 08:45:17.806 226310 DEBUG oslo_concurrency.lockutils [req-21c635fc-66ce-40bc-873e-89ef483e57bf req-34a0970f-e18c-4d05-a499-666d16851aef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:17 np0005539564 nova_compute[226295]: 2025-11-29 08:45:17.806 226310 DEBUG oslo_concurrency.lockutils [req-21c635fc-66ce-40bc-873e-89ef483e57bf req-34a0970f-e18c-4d05-a499-666d16851aef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:17 np0005539564 nova_compute[226295]: 2025-11-29 08:45:17.806 226310 DEBUG nova.compute.manager [req-21c635fc-66ce-40bc-873e-89ef483e57bf req-34a0970f-e18c-4d05-a499-666d16851aef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] No waiting events found dispatching network-vif-plugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:45:17 np0005539564 nova_compute[226295]: 2025-11-29 08:45:17.807 226310 WARNING nova.compute.manager [req-21c635fc-66ce-40bc-873e-89ef483e57bf req-34a0970f-e18c-4d05-a499-666d16851aef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Received unexpected event network-vif-plugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 for instance with vm_state stopped and task_state None.#033[00m
Nov 29 03:45:17 np0005539564 nova_compute[226295]: 2025-11-29 08:45:17.807 226310 DEBUG nova.compute.manager [req-21c635fc-66ce-40bc-873e-89ef483e57bf req-34a0970f-e18c-4d05-a499-666d16851aef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Received event network-vif-plugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:17 np0005539564 nova_compute[226295]: 2025-11-29 08:45:17.807 226310 DEBUG oslo_concurrency.lockutils [req-21c635fc-66ce-40bc-873e-89ef483e57bf req-34a0970f-e18c-4d05-a499-666d16851aef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:17 np0005539564 nova_compute[226295]: 2025-11-29 08:45:17.807 226310 DEBUG oslo_concurrency.lockutils [req-21c635fc-66ce-40bc-873e-89ef483e57bf req-34a0970f-e18c-4d05-a499-666d16851aef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:17 np0005539564 nova_compute[226295]: 2025-11-29 08:45:17.808 226310 DEBUG oslo_concurrency.lockutils [req-21c635fc-66ce-40bc-873e-89ef483e57bf req-34a0970f-e18c-4d05-a499-666d16851aef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:17 np0005539564 nova_compute[226295]: 2025-11-29 08:45:17.808 226310 DEBUG nova.compute.manager [req-21c635fc-66ce-40bc-873e-89ef483e57bf req-34a0970f-e18c-4d05-a499-666d16851aef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] No waiting events found dispatching network-vif-plugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:45:17 np0005539564 nova_compute[226295]: 2025-11-29 08:45:17.808 226310 WARNING nova.compute.manager [req-21c635fc-66ce-40bc-873e-89ef483e57bf req-34a0970f-e18c-4d05-a499-666d16851aef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Received unexpected event network-vif-plugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 for instance with vm_state stopped and task_state None.#033[00m
Nov 29 03:45:17 np0005539564 nova_compute[226295]: 2025-11-29 08:45:17.809 226310 DEBUG nova.compute.manager [req-21c635fc-66ce-40bc-873e-89ef483e57bf req-34a0970f-e18c-4d05-a499-666d16851aef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Received event network-vif-plugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:17 np0005539564 nova_compute[226295]: 2025-11-29 08:45:17.809 226310 DEBUG oslo_concurrency.lockutils [req-21c635fc-66ce-40bc-873e-89ef483e57bf req-34a0970f-e18c-4d05-a499-666d16851aef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:17 np0005539564 nova_compute[226295]: 2025-11-29 08:45:17.809 226310 DEBUG oslo_concurrency.lockutils [req-21c635fc-66ce-40bc-873e-89ef483e57bf req-34a0970f-e18c-4d05-a499-666d16851aef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:17 np0005539564 nova_compute[226295]: 2025-11-29 08:45:17.809 226310 DEBUG oslo_concurrency.lockutils [req-21c635fc-66ce-40bc-873e-89ef483e57bf req-34a0970f-e18c-4d05-a499-666d16851aef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:17 np0005539564 nova_compute[226295]: 2025-11-29 08:45:17.810 226310 DEBUG nova.compute.manager [req-21c635fc-66ce-40bc-873e-89ef483e57bf req-34a0970f-e18c-4d05-a499-666d16851aef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] No waiting events found dispatching network-vif-plugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:45:17 np0005539564 nova_compute[226295]: 2025-11-29 08:45:17.810 226310 WARNING nova.compute.manager [req-21c635fc-66ce-40bc-873e-89ef483e57bf req-34a0970f-e18c-4d05-a499-666d16851aef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Received unexpected event network-vif-plugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 for instance with vm_state stopped and task_state None.#033[00m
Nov 29 03:45:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:18.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:18 np0005539564 nova_compute[226295]: 2025-11-29 08:45:18.872 226310 INFO nova.compute.manager [None req-8719eddc-d02c-45ed-8bdb-55fb227ba73a 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Get console output#033[00m
Nov 29 03:45:19 np0005539564 nova_compute[226295]: 2025-11-29 08:45:19.107 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:19 np0005539564 nova_compute[226295]: 2025-11-29 08:45:19.141 226310 DEBUG nova.objects.instance [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'flavor' on Instance uuid 000f2b4b-91a7-461e-8695-5285bfe53cc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:45:19 np0005539564 nova_compute[226295]: 2025-11-29 08:45:19.167 226310 DEBUG oslo_concurrency.lockutils [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "refresh_cache-000f2b4b-91a7-461e-8695-5285bfe53cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:45:19 np0005539564 nova_compute[226295]: 2025-11-29 08:45:19.167 226310 DEBUG oslo_concurrency.lockutils [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquired lock "refresh_cache-000f2b4b-91a7-461e-8695-5285bfe53cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:45:19 np0005539564 nova_compute[226295]: 2025-11-29 08:45:19.168 226310 DEBUG nova.network.neutron [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:45:19 np0005539564 nova_compute[226295]: 2025-11-29 08:45:19.168 226310 DEBUG nova.objects.instance [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'info_cache' on Instance uuid 000f2b4b-91a7-461e-8695-5285bfe53cc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:45:19 np0005539564 nova_compute[226295]: 2025-11-29 08:45:19.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:19 np0005539564 nova_compute[226295]: 2025-11-29 08:45:19.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:45:19 np0005539564 nova_compute[226295]: 2025-11-29 08:45:19.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:45:19 np0005539564 nova_compute[226295]: 2025-11-29 08:45:19.369 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-000f2b4b-91a7-461e-8695-5285bfe53cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:45:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:45:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:19.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:45:19 np0005539564 nova_compute[226295]: 2025-11-29 08:45:19.949 226310 DEBUG nova.compute.manager [req-15fdc28c-9d3b-48cf-a9dc-0a4deb16e1dc req-1af5a856-d55d-4627-bbdd-e35b125c8bdc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Received event network-vif-unplugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:19 np0005539564 nova_compute[226295]: 2025-11-29 08:45:19.949 226310 DEBUG oslo_concurrency.lockutils [req-15fdc28c-9d3b-48cf-a9dc-0a4deb16e1dc req-1af5a856-d55d-4627-bbdd-e35b125c8bdc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:19 np0005539564 nova_compute[226295]: 2025-11-29 08:45:19.950 226310 DEBUG oslo_concurrency.lockutils [req-15fdc28c-9d3b-48cf-a9dc-0a4deb16e1dc req-1af5a856-d55d-4627-bbdd-e35b125c8bdc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:19 np0005539564 nova_compute[226295]: 2025-11-29 08:45:19.950 226310 DEBUG oslo_concurrency.lockutils [req-15fdc28c-9d3b-48cf-a9dc-0a4deb16e1dc req-1af5a856-d55d-4627-bbdd-e35b125c8bdc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:19 np0005539564 nova_compute[226295]: 2025-11-29 08:45:19.950 226310 DEBUG nova.compute.manager [req-15fdc28c-9d3b-48cf-a9dc-0a4deb16e1dc req-1af5a856-d55d-4627-bbdd-e35b125c8bdc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] No waiting events found dispatching network-vif-unplugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:45:19 np0005539564 nova_compute[226295]: 2025-11-29 08:45:19.951 226310 WARNING nova.compute.manager [req-15fdc28c-9d3b-48cf-a9dc-0a4deb16e1dc req-1af5a856-d55d-4627-bbdd-e35b125c8bdc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Received unexpected event network-vif-unplugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 for instance with vm_state stopped and task_state powering-on.#033[00m
Nov 29 03:45:19 np0005539564 nova_compute[226295]: 2025-11-29 08:45:19.951 226310 DEBUG nova.compute.manager [req-15fdc28c-9d3b-48cf-a9dc-0a4deb16e1dc req-1af5a856-d55d-4627-bbdd-e35b125c8bdc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Received event network-vif-plugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:19 np0005539564 nova_compute[226295]: 2025-11-29 08:45:19.951 226310 DEBUG oslo_concurrency.lockutils [req-15fdc28c-9d3b-48cf-a9dc-0a4deb16e1dc req-1af5a856-d55d-4627-bbdd-e35b125c8bdc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:19 np0005539564 nova_compute[226295]: 2025-11-29 08:45:19.952 226310 DEBUG oslo_concurrency.lockutils [req-15fdc28c-9d3b-48cf-a9dc-0a4deb16e1dc req-1af5a856-d55d-4627-bbdd-e35b125c8bdc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:19 np0005539564 nova_compute[226295]: 2025-11-29 08:45:19.952 226310 DEBUG oslo_concurrency.lockutils [req-15fdc28c-9d3b-48cf-a9dc-0a4deb16e1dc req-1af5a856-d55d-4627-bbdd-e35b125c8bdc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:19 np0005539564 nova_compute[226295]: 2025-11-29 08:45:19.952 226310 DEBUG nova.compute.manager [req-15fdc28c-9d3b-48cf-a9dc-0a4deb16e1dc req-1af5a856-d55d-4627-bbdd-e35b125c8bdc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] No waiting events found dispatching network-vif-plugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:45:19 np0005539564 nova_compute[226295]: 2025-11-29 08:45:19.953 226310 WARNING nova.compute.manager [req-15fdc28c-9d3b-48cf-a9dc-0a4deb16e1dc req-1af5a856-d55d-4627-bbdd-e35b125c8bdc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Received unexpected event network-vif-plugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 for instance with vm_state stopped and task_state powering-on.#033[00m
Nov 29 03:45:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:20 np0005539564 nova_compute[226295]: 2025-11-29 08:45:20.130 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:20.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:20.774 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=74, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=73) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:45:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:20.775 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:45:20 np0005539564 nova_compute[226295]: 2025-11-29 08:45:20.775 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:21.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:22.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:22 np0005539564 nova_compute[226295]: 2025-11-29 08:45:22.725 226310 DEBUG nova.network.neutron [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Updating instance_info_cache with network_info: [{"id": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "address": "fa:16:3e:a7:0e:d6", "network": {"id": "14047ff8-a7c2-4411-bbc5-cc7ce1023d2a", "bridge": "br-int", "label": "tempest-network-smoke--1066451220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cc7810-83", "ovs_interfaceid": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:45:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:23.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:24 np0005539564 nova_compute[226295]: 2025-11-29 08:45:24.159 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:45:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:24.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:45:24 np0005539564 nova_compute[226295]: 2025-11-29 08:45:24.776 226310 DEBUG oslo_concurrency.lockutils [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Releasing lock "refresh_cache-000f2b4b-91a7-461e-8695-5285bfe53cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:45:24 np0005539564 nova_compute[226295]: 2025-11-29 08:45:24.778 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-000f2b4b-91a7-461e-8695-5285bfe53cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:45:24 np0005539564 nova_compute[226295]: 2025-11-29 08:45:24.778 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:45:24 np0005539564 nova_compute[226295]: 2025-11-29 08:45:24.779 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 000f2b4b-91a7-461e-8695-5285bfe53cc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:45:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:25 np0005539564 nova_compute[226295]: 2025-11-29 08:45:25.131 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:25 np0005539564 podman[300806]: 2025-11-29 08:45:25.544485021 +0000 UTC m=+0.085851724 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 29 03:45:25 np0005539564 podman[300805]: 2025-11-29 08:45:25.555943781 +0000 UTC m=+0.108798324 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 03:45:25 np0005539564 podman[300804]: 2025-11-29 08:45:25.571986035 +0000 UTC m=+0.120846820 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 03:45:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:25.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.024 226310 INFO nova.virt.libvirt.driver [-] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Instance destroyed successfully.#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.024 226310 DEBUG nova.objects.instance [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'numa_topology' on Instance uuid 000f2b4b-91a7-461e-8695-5285bfe53cc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.071 226310 DEBUG nova.objects.instance [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'resources' on Instance uuid 000f2b4b-91a7-461e-8695-5285bfe53cc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.095 226310 DEBUG nova.virt.libvirt.vif [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:44:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1268542128',display_name='tempest-TestNetworkAdvancedServerOps-server-1268542128',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1268542128',id=191,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJhLf6FaCEQlgtGJ466Elx/ZqN2KwkcnjEwf65gRkVvvzRnDJxusTh8/iP5gP4ElqFttcg/fOI7oZlNwuCxP0D5jnDsVjqIml/cam53sPjjzN011MM0KRngsWjRo3EjYfQ==',key_name='tempest-TestNetworkAdvancedServerOps-508554027',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:44:50Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-2mgqivi5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:45:16Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=000f2b4b-91a7-461e-8695-5285bfe53cc3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "address": "fa:16:3e:a7:0e:d6", "network": {"id": "14047ff8-a7c2-4411-bbc5-cc7ce1023d2a", "bridge": "br-int", "label": "tempest-network-smoke--1066451220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cc7810-83", "ovs_interfaceid": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.095 226310 DEBUG nova.network.os_vif_util [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "address": "fa:16:3e:a7:0e:d6", "network": {"id": "14047ff8-a7c2-4411-bbc5-cc7ce1023d2a", "bridge": "br-int", "label": "tempest-network-smoke--1066451220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cc7810-83", "ovs_interfaceid": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.096 226310 DEBUG nova.network.os_vif_util [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:0e:d6,bridge_name='br-int',has_traffic_filtering=True,id=05cc7810-837e-49c5-98f5-e14ac1ff5796,network=Network(14047ff8-a7c2-4411-bbc5-cc7ce1023d2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cc7810-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.096 226310 DEBUG os_vif [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:0e:d6,bridge_name='br-int',has_traffic_filtering=True,id=05cc7810-837e-49c5-98f5-e14ac1ff5796,network=Network(14047ff8-a7c2-4411-bbc5-cc7ce1023d2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cc7810-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.098 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.098 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05cc7810-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.099 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.101 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.103 226310 INFO os_vif [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:0e:d6,bridge_name='br-int',has_traffic_filtering=True,id=05cc7810-837e-49c5-98f5-e14ac1ff5796,network=Network(14047ff8-a7c2-4411-bbc5-cc7ce1023d2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cc7810-83')#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.110 226310 DEBUG nova.virt.libvirt.driver [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Start _get_guest_xml network_info=[{"id": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "address": "fa:16:3e:a7:0e:d6", "network": {"id": "14047ff8-a7c2-4411-bbc5-cc7ce1023d2a", "bridge": "br-int", "label": "tempest-network-smoke--1066451220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cc7810-83", "ovs_interfaceid": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.114 226310 WARNING nova.virt.libvirt.driver [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.123 226310 DEBUG nova.virt.libvirt.host [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.124 226310 DEBUG nova.virt.libvirt.host [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.128 226310 DEBUG nova.virt.libvirt.host [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.128 226310 DEBUG nova.virt.libvirt.host [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.129 226310 DEBUG nova.virt.libvirt.driver [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.129 226310 DEBUG nova.virt.hardware [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.130 226310 DEBUG nova.virt.hardware [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.130 226310 DEBUG nova.virt.hardware [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.130 226310 DEBUG nova.virt.hardware [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.131 226310 DEBUG nova.virt.hardware [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.131 226310 DEBUG nova.virt.hardware [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.131 226310 DEBUG nova.virt.hardware [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.131 226310 DEBUG nova.virt.hardware [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.132 226310 DEBUG nova.virt.hardware [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.132 226310 DEBUG nova.virt.hardware [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.132 226310 DEBUG nova.virt.hardware [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.132 226310 DEBUG nova.objects.instance [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 000f2b4b-91a7-461e-8695-5285bfe53cc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.197 226310 DEBUG oslo_concurrency.processutils [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:45:26 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2972397803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.627 226310 DEBUG oslo_concurrency.processutils [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:45:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:26.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:45:26 np0005539564 nova_compute[226295]: 2025-11-29 08:45:26.671 226310 DEBUG oslo_concurrency.processutils [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:26.777 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '74'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:45:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3790370612' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:45:27 np0005539564 nova_compute[226295]: 2025-11-29 08:45:27.130 226310 DEBUG oslo_concurrency.processutils [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:27 np0005539564 nova_compute[226295]: 2025-11-29 08:45:27.132 226310 DEBUG nova.virt.libvirt.vif [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:44:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1268542128',display_name='tempest-TestNetworkAdvancedServerOps-server-1268542128',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1268542128',id=191,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJhLf6FaCEQlgtGJ466Elx/ZqN2KwkcnjEwf65gRkVvvzRnDJxusTh8/iP5gP4ElqFttcg/fOI7oZlNwuCxP0D5jnDsVjqIml/cam53sPjjzN011MM0KRngsWjRo3EjYfQ==',key_name='tempest-TestNetworkAdvancedServerOps-508554027',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:44:50Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-2mgqivi5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:45:16Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=000f2b4b-91a7-461e-8695-5285bfe53cc3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "address": "fa:16:3e:a7:0e:d6", "network": {"id": "14047ff8-a7c2-4411-bbc5-cc7ce1023d2a", "bridge": "br-int", "label": "tempest-network-smoke--1066451220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cc7810-83", "ovs_interfaceid": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:45:27 np0005539564 nova_compute[226295]: 2025-11-29 08:45:27.133 226310 DEBUG nova.network.os_vif_util [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "address": "fa:16:3e:a7:0e:d6", "network": {"id": "14047ff8-a7c2-4411-bbc5-cc7ce1023d2a", "bridge": "br-int", "label": "tempest-network-smoke--1066451220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cc7810-83", "ovs_interfaceid": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:45:27 np0005539564 nova_compute[226295]: 2025-11-29 08:45:27.134 226310 DEBUG nova.network.os_vif_util [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:0e:d6,bridge_name='br-int',has_traffic_filtering=True,id=05cc7810-837e-49c5-98f5-e14ac1ff5796,network=Network(14047ff8-a7c2-4411-bbc5-cc7ce1023d2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cc7810-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:45:27 np0005539564 nova_compute[226295]: 2025-11-29 08:45:27.136 226310 DEBUG nova.objects.instance [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'pci_devices' on Instance uuid 000f2b4b-91a7-461e-8695-5285bfe53cc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:45:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:27.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:28.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:29 np0005539564 nova_compute[226295]: 2025-11-29 08:45:29.481 226310 DEBUG nova.virt.libvirt.driver [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:45:29 np0005539564 nova_compute[226295]:  <uuid>000f2b4b-91a7-461e-8695-5285bfe53cc3</uuid>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:  <name>instance-000000bf</name>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1268542128</nova:name>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:45:26</nova:creationTime>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:45:29 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:        <nova:user uuid="686f527a5723407b85ed34c8a312583f">tempest-TestNetworkAdvancedServerOps-382266774-project-member</nova:user>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:        <nova:project uuid="c4ca87a38a19497f84b6d2c170c4fe75">tempest-TestNetworkAdvancedServerOps-382266774</nova:project>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:        <nova:port uuid="05cc7810-837e-49c5-98f5-e14ac1ff5796">
Nov 29 03:45:29 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <entry name="serial">000f2b4b-91a7-461e-8695-5285bfe53cc3</entry>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <entry name="uuid">000f2b4b-91a7-461e-8695-5285bfe53cc3</entry>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/000f2b4b-91a7-461e-8695-5285bfe53cc3_disk">
Nov 29 03:45:29 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:45:29 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/000f2b4b-91a7-461e-8695-5285bfe53cc3_disk.config">
Nov 29 03:45:29 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:45:29 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:a7:0e:d6"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <target dev="tap05cc7810-83"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/000f2b4b-91a7-461e-8695-5285bfe53cc3/console.log" append="off"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <input type="keyboard" bus="usb"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:45:29 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:45:29 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:45:29 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:45:29 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:45:29 np0005539564 nova_compute[226295]: 2025-11-29 08:45:29.484 226310 DEBUG nova.virt.libvirt.driver [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] skipping disk for instance-000000bf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:45:29 np0005539564 nova_compute[226295]: 2025-11-29 08:45:29.485 226310 DEBUG nova.virt.libvirt.driver [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] skipping disk for instance-000000bf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:45:29 np0005539564 nova_compute[226295]: 2025-11-29 08:45:29.487 226310 DEBUG nova.virt.libvirt.vif [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:44:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1268542128',display_name='tempest-TestNetworkAdvancedServerOps-server-1268542128',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1268542128',id=191,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJhLf6FaCEQlgtGJ466Elx/ZqN2KwkcnjEwf65gRkVvvzRnDJxusTh8/iP5gP4ElqFttcg/fOI7oZlNwuCxP0D5jnDsVjqIml/cam53sPjjzN011MM0KRngsWjRo3EjYfQ==',key_name='tempest-TestNetworkAdvancedServerOps-508554027',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:44:50Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-2mgqivi5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:45:16Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=000f2b4b-91a7-461e-8695-5285bfe53cc3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "address": "fa:16:3e:a7:0e:d6", "network": {"id": "14047ff8-a7c2-4411-bbc5-cc7ce1023d2a", "bridge": "br-int", "label": "tempest-network-smoke--1066451220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cc7810-83", "ovs_interfaceid": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:45:29 np0005539564 nova_compute[226295]: 2025-11-29 08:45:29.488 226310 DEBUG nova.network.os_vif_util [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "address": "fa:16:3e:a7:0e:d6", "network": {"id": "14047ff8-a7c2-4411-bbc5-cc7ce1023d2a", "bridge": "br-int", "label": "tempest-network-smoke--1066451220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cc7810-83", "ovs_interfaceid": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:45:29 np0005539564 nova_compute[226295]: 2025-11-29 08:45:29.489 226310 DEBUG nova.network.os_vif_util [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:0e:d6,bridge_name='br-int',has_traffic_filtering=True,id=05cc7810-837e-49c5-98f5-e14ac1ff5796,network=Network(14047ff8-a7c2-4411-bbc5-cc7ce1023d2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cc7810-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:45:29 np0005539564 nova_compute[226295]: 2025-11-29 08:45:29.490 226310 DEBUG os_vif [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:0e:d6,bridge_name='br-int',has_traffic_filtering=True,id=05cc7810-837e-49c5-98f5-e14ac1ff5796,network=Network(14047ff8-a7c2-4411-bbc5-cc7ce1023d2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cc7810-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:45:29 np0005539564 nova_compute[226295]: 2025-11-29 08:45:29.491 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:29 np0005539564 nova_compute[226295]: 2025-11-29 08:45:29.492 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:29 np0005539564 nova_compute[226295]: 2025-11-29 08:45:29.492 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:45:29 np0005539564 nova_compute[226295]: 2025-11-29 08:45:29.497 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:29 np0005539564 nova_compute[226295]: 2025-11-29 08:45:29.497 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05cc7810-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:29 np0005539564 nova_compute[226295]: 2025-11-29 08:45:29.498 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05cc7810-83, col_values=(('external_ids', {'iface-id': '05cc7810-837e-49c5-98f5-e14ac1ff5796', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:0e:d6', 'vm-uuid': '000f2b4b-91a7-461e-8695-5285bfe53cc3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:29 np0005539564 nova_compute[226295]: 2025-11-29 08:45:29.500 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:29 np0005539564 NetworkManager[48997]: <info>  [1764405929.5034] manager: (tap05cc7810-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/355)
Nov 29 03:45:29 np0005539564 nova_compute[226295]: 2025-11-29 08:45:29.504 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:45:29 np0005539564 nova_compute[226295]: 2025-11-29 08:45:29.508 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:29 np0005539564 nova_compute[226295]: 2025-11-29 08:45:29.510 226310 INFO os_vif [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:0e:d6,bridge_name='br-int',has_traffic_filtering=True,id=05cc7810-837e-49c5-98f5-e14ac1ff5796,network=Network(14047ff8-a7c2-4411-bbc5-cc7ce1023d2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cc7810-83')#033[00m
Nov 29 03:45:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:29.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:29 np0005539564 kernel: tap05cc7810-83: entered promiscuous mode
Nov 29 03:45:29 np0005539564 NetworkManager[48997]: <info>  [1764405929.6138] manager: (tap05cc7810-83): new Tun device (/org/freedesktop/NetworkManager/Devices/356)
Nov 29 03:45:29 np0005539564 nova_compute[226295]: 2025-11-29 08:45:29.615 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:29 np0005539564 ovn_controller[130591]: 2025-11-29T08:45:29Z|00774|binding|INFO|Claiming lport 05cc7810-837e-49c5-98f5-e14ac1ff5796 for this chassis.
Nov 29 03:45:29 np0005539564 ovn_controller[130591]: 2025-11-29T08:45:29Z|00775|binding|INFO|05cc7810-837e-49c5-98f5-e14ac1ff5796: Claiming fa:16:3e:a7:0e:d6 10.100.0.14
Nov 29 03:45:29 np0005539564 ovn_controller[130591]: 2025-11-29T08:45:29Z|00776|binding|INFO|Setting lport 05cc7810-837e-49c5-98f5-e14ac1ff5796 ovn-installed in OVS
Nov 29 03:45:29 np0005539564 nova_compute[226295]: 2025-11-29 08:45:29.643 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:29 np0005539564 nova_compute[226295]: 2025-11-29 08:45:29.648 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:29 np0005539564 systemd-machined[190128]: New machine qemu-90-instance-000000bf.
Nov 29 03:45:29 np0005539564 systemd[1]: Started Virtual Machine qemu-90-instance-000000bf.
Nov 29 03:45:29 np0005539564 systemd-udevd[300947]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:45:29 np0005539564 NetworkManager[48997]: <info>  [1764405929.7074] device (tap05cc7810-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:45:29 np0005539564 NetworkManager[48997]: <info>  [1764405929.7101] device (tap05cc7810-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:45:29 np0005539564 ovn_controller[130591]: 2025-11-29T08:45:29Z|00777|binding|INFO|Setting lport 05cc7810-837e-49c5-98f5-e14ac1ff5796 up in Southbound
Nov 29 03:45:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:29.928 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:0e:d6 10.100.0.14'], port_security=['fa:16:3e:a7:0e:d6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '000f2b4b-91a7-461e-8695-5285bfe53cc3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '7', 'neutron:security_group_ids': '3c09da3c-4224-4197-b2fb-72b9d807c1ab', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1fe4c91d-8afb-43ca-a608-778d25bb54c3, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=05cc7810-837e-49c5-98f5-e14ac1ff5796) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:45:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:29.929 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 05cc7810-837e-49c5-98f5-e14ac1ff5796 in datapath 14047ff8-a7c2-4411-bbc5-cc7ce1023d2a bound to our chassis#033[00m
Nov 29 03:45:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:29.930 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14047ff8-a7c2-4411-bbc5-cc7ce1023d2a#033[00m
Nov 29 03:45:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:29.943 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c655ce2d-2591-48af-a1a7-f85ee0f7505c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:29.945 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14047ff8-a1 in ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:45:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:29.948 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14047ff8-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:45:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:29.948 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f5699b4d-0057-4e59-bc25-43d5aeac2b2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:29.949 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[fd2dface-0eeb-4a2c-b9cc-f4ad3d42b385]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:29.967 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[245ea585-d5b0-40db-812e-63acadc045b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:29.994 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[171aa2d9-fbbc-4959-a1f0-a0edd5ce4e25]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:30.040 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[91c52275-3a3e-4f2a-8d60-6efceb69f2c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:30 np0005539564 NetworkManager[48997]: <info>  [1764405930.0477] manager: (tap14047ff8-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/357)
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:30.047 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9dc86123-68ca-4b44-873e-ff594002b239]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:30.090 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[3a42af63-2e3f-49dd-a432-6946ccf24f6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:30.093 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[162668ad-bd57-4a49-80d9-18fa5e905402]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:30 np0005539564 NetworkManager[48997]: <info>  [1764405930.1252] device (tap14047ff8-a0): carrier: link connected
Nov 29 03:45:30 np0005539564 nova_compute[226295]: 2025-11-29 08:45:30.133 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:30.135 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[4493cfa6-4c9d-4fc9-9e11-6558ae9ca2b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:30.164 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6ea2882e-bf7a-478b-9cff-25b6ab9d4d1d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14047ff8-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:0b:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 232], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 890480, 'reachable_time': 16136, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301021, 'error': None, 'target': 'ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:30.187 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[aa7a13b8-d38d-4636-9fb0-bc2396dcf840]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:ba0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 890480, 'tstamp': 890480}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301022, 'error': None, 'target': 'ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:30.215 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[305d715e-2aef-4b54-a3aa-8a77bd6e704c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14047ff8-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:0b:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 232], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 890480, 'reachable_time': 16136, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301024, 'error': None, 'target': 'ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:30 np0005539564 nova_compute[226295]: 2025-11-29 08:45:30.230 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Removed pending event for 000f2b4b-91a7-461e-8695-5285bfe53cc3 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:45:30 np0005539564 nova_compute[226295]: 2025-11-29 08:45:30.231 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405930.2302544, 000f2b4b-91a7-461e-8695-5285bfe53cc3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:45:30 np0005539564 nova_compute[226295]: 2025-11-29 08:45:30.232 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:45:30 np0005539564 nova_compute[226295]: 2025-11-29 08:45:30.234 226310 DEBUG nova.compute.manager [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:45:30 np0005539564 nova_compute[226295]: 2025-11-29 08:45:30.239 226310 INFO nova.virt.libvirt.driver [-] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Instance rebooted successfully.#033[00m
Nov 29 03:45:30 np0005539564 nova_compute[226295]: 2025-11-29 08:45:30.240 226310 DEBUG nova.compute.manager [None req-79a28a23-f848-450d-9caf-f47dc3ff4009 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:30.269 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3a6d4902-b181-4ea7-b60a-540d261a9bc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:30.345 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d5557950-62e3-4288-9cde-dd35d6e745dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:30.347 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14047ff8-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:30.347 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:30.348 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14047ff8-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:30 np0005539564 NetworkManager[48997]: <info>  [1764405930.3508] manager: (tap14047ff8-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/358)
Nov 29 03:45:30 np0005539564 kernel: tap14047ff8-a0: entered promiscuous mode
Nov 29 03:45:30 np0005539564 nova_compute[226295]: 2025-11-29 08:45:30.349 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:30.354 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14047ff8-a0, col_values=(('external_ids', {'iface-id': '6f19bd34-c24e-47b5-aa3a-35a26f253f43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:30 np0005539564 ovn_controller[130591]: 2025-11-29T08:45:30Z|00778|binding|INFO|Releasing lport 6f19bd34-c24e-47b5-aa3a-35a26f253f43 from this chassis (sb_readonly=1)
Nov 29 03:45:30 np0005539564 nova_compute[226295]: 2025-11-29 08:45:30.375 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:30.376 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14047ff8-a7c2-4411-bbc5-cc7ce1023d2a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14047ff8-a7c2-4411-bbc5-cc7ce1023d2a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:30.377 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8e69fc50-5d35-4c4b-8315-4328709cbf8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:30.378 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/14047ff8-a7c2-4411-bbc5-cc7ce1023d2a.pid.haproxy
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 14047ff8-a7c2-4411-bbc5-cc7ce1023d2a
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:45:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:30.379 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a', 'env', 'PROCESS_TAG=haproxy-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14047ff8-a7c2-4411-bbc5-cc7ce1023d2a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:45:30 np0005539564 nova_compute[226295]: 2025-11-29 08:45:30.594 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:45:30 np0005539564 nova_compute[226295]: 2025-11-29 08:45:30.605 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:45:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:30.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:30 np0005539564 podman[301055]: 2025-11-29 08:45:30.826995026 +0000 UTC m=+0.089487182 container create bdb273da8b173e079eb2b207bba1bdd250032339ded5391f21df34d675bf7551 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:45:30 np0005539564 nova_compute[226295]: 2025-11-29 08:45:30.858 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764405930.2344656, 000f2b4b-91a7-461e-8695-5285bfe53cc3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:45:30 np0005539564 nova_compute[226295]: 2025-11-29 08:45:30.860 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] VM Started (Lifecycle Event)#033[00m
Nov 29 03:45:30 np0005539564 systemd[1]: Started libpod-conmon-bdb273da8b173e079eb2b207bba1bdd250032339ded5391f21df34d675bf7551.scope.
Nov 29 03:45:30 np0005539564 podman[301055]: 2025-11-29 08:45:30.791782493 +0000 UTC m=+0.054274689 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:45:30 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:45:30 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/379fed45ea14e0473a06184aa8738130279733c7339babb5ecbdb25d7cc78b24/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:30 np0005539564 podman[301055]: 2025-11-29 08:45:30.93361256 +0000 UTC m=+0.196104726 container init bdb273da8b173e079eb2b207bba1bdd250032339ded5391f21df34d675bf7551 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:45:30 np0005539564 podman[301055]: 2025-11-29 08:45:30.941315249 +0000 UTC m=+0.203807385 container start bdb273da8b173e079eb2b207bba1bdd250032339ded5391f21df34d675bf7551 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:45:30 np0005539564 neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a[301070]: [NOTICE]   (301074) : New worker (301076) forked
Nov 29 03:45:30 np0005539564 neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a[301070]: [NOTICE]   (301074) : Loading success.
Nov 29 03:45:30 np0005539564 nova_compute[226295]: 2025-11-29 08:45:30.988 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:45:30 np0005539564 nova_compute[226295]: 2025-11-29 08:45:30.994 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:45:31 np0005539564 nova_compute[226295]: 2025-11-29 08:45:31.258 226310 DEBUG nova.compute.manager [req-34f27768-b055-42f5-96ed-20bfe0478d66 req-f04641fa-07ef-4509-8979-775f1f5b7bd0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Received event network-vif-plugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:31 np0005539564 nova_compute[226295]: 2025-11-29 08:45:31.258 226310 DEBUG oslo_concurrency.lockutils [req-34f27768-b055-42f5-96ed-20bfe0478d66 req-f04641fa-07ef-4509-8979-775f1f5b7bd0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:31 np0005539564 nova_compute[226295]: 2025-11-29 08:45:31.258 226310 DEBUG oslo_concurrency.lockutils [req-34f27768-b055-42f5-96ed-20bfe0478d66 req-f04641fa-07ef-4509-8979-775f1f5b7bd0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:31 np0005539564 nova_compute[226295]: 2025-11-29 08:45:31.259 226310 DEBUG oslo_concurrency.lockutils [req-34f27768-b055-42f5-96ed-20bfe0478d66 req-f04641fa-07ef-4509-8979-775f1f5b7bd0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:31 np0005539564 nova_compute[226295]: 2025-11-29 08:45:31.259 226310 DEBUG nova.compute.manager [req-34f27768-b055-42f5-96ed-20bfe0478d66 req-f04641fa-07ef-4509-8979-775f1f5b7bd0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] No waiting events found dispatching network-vif-plugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:45:31 np0005539564 nova_compute[226295]: 2025-11-29 08:45:31.259 226310 WARNING nova.compute.manager [req-34f27768-b055-42f5-96ed-20bfe0478d66 req-f04641fa-07ef-4509-8979-775f1f5b7bd0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Received unexpected event network-vif-plugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:45:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:31.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:32 np0005539564 nova_compute[226295]: 2025-11-29 08:45:32.256 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Updating instance_info_cache with network_info: [{"id": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "address": "fa:16:3e:a7:0e:d6", "network": {"id": "14047ff8-a7c2-4411-bbc5-cc7ce1023d2a", "bridge": "br-int", "label": "tempest-network-smoke--1066451220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cc7810-83", "ovs_interfaceid": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:45:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:32.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:33.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:34 np0005539564 nova_compute[226295]: 2025-11-29 08:45:34.362 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-000f2b4b-91a7-461e-8695-5285bfe53cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:45:34 np0005539564 nova_compute[226295]: 2025-11-29 08:45:34.362 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:45:34 np0005539564 nova_compute[226295]: 2025-11-29 08:45:34.362 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:34 np0005539564 nova_compute[226295]: 2025-11-29 08:45:34.363 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:34 np0005539564 nova_compute[226295]: 2025-11-29 08:45:34.363 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:34 np0005539564 nova_compute[226295]: 2025-11-29 08:45:34.479 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:34 np0005539564 nova_compute[226295]: 2025-11-29 08:45:34.480 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:34 np0005539564 nova_compute[226295]: 2025-11-29 08:45:34.481 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:34 np0005539564 nova_compute[226295]: 2025-11-29 08:45:34.482 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:45:34 np0005539564 nova_compute[226295]: 2025-11-29 08:45:34.482 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:34 np0005539564 nova_compute[226295]: 2025-11-29 08:45:34.519 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:34.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:34 np0005539564 nova_compute[226295]: 2025-11-29 08:45:34.671 226310 DEBUG nova.compute.manager [req-8085bc70-951a-470d-ae59-4b59454fecbb req-623d05f1-a1b9-4ac6-8d8b-6590ba6edc2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Received event network-vif-plugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:34 np0005539564 nova_compute[226295]: 2025-11-29 08:45:34.672 226310 DEBUG oslo_concurrency.lockutils [req-8085bc70-951a-470d-ae59-4b59454fecbb req-623d05f1-a1b9-4ac6-8d8b-6590ba6edc2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:34 np0005539564 nova_compute[226295]: 2025-11-29 08:45:34.673 226310 DEBUG oslo_concurrency.lockutils [req-8085bc70-951a-470d-ae59-4b59454fecbb req-623d05f1-a1b9-4ac6-8d8b-6590ba6edc2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:34 np0005539564 nova_compute[226295]: 2025-11-29 08:45:34.673 226310 DEBUG oslo_concurrency.lockutils [req-8085bc70-951a-470d-ae59-4b59454fecbb req-623d05f1-a1b9-4ac6-8d8b-6590ba6edc2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:34 np0005539564 nova_compute[226295]: 2025-11-29 08:45:34.674 226310 DEBUG nova.compute.manager [req-8085bc70-951a-470d-ae59-4b59454fecbb req-623d05f1-a1b9-4ac6-8d8b-6590ba6edc2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] No waiting events found dispatching network-vif-plugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:45:34 np0005539564 nova_compute[226295]: 2025-11-29 08:45:34.674 226310 WARNING nova.compute.manager [req-8085bc70-951a-470d-ae59-4b59454fecbb req-623d05f1-a1b9-4ac6-8d8b-6590ba6edc2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Received unexpected event network-vif-plugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:45:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:45:34 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1698163957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:45:34 np0005539564 nova_compute[226295]: 2025-11-29 08:45:34.962 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:35 np0005539564 nova_compute[226295]: 2025-11-29 08:45:35.136 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:35 np0005539564 nova_compute[226295]: 2025-11-29 08:45:35.187 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000bf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:45:35 np0005539564 nova_compute[226295]: 2025-11-29 08:45:35.188 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000bf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:45:35 np0005539564 nova_compute[226295]: 2025-11-29 08:45:35.407 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:45:35 np0005539564 nova_compute[226295]: 2025-11-29 08:45:35.408 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4031MB free_disk=20.897125244140625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:45:35 np0005539564 nova_compute[226295]: 2025-11-29 08:45:35.409 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:35 np0005539564 nova_compute[226295]: 2025-11-29 08:45:35.409 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:35 np0005539564 nova_compute[226295]: 2025-11-29 08:45:35.588 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 000f2b4b-91a7-461e-8695-5285bfe53cc3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:45:35 np0005539564 nova_compute[226295]: 2025-11-29 08:45:35.589 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:45:35 np0005539564 nova_compute[226295]: 2025-11-29 08:45:35.589 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:45:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:35.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:35 np0005539564 nova_compute[226295]: 2025-11-29 08:45:35.625 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:45:36 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3519615254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:45:36 np0005539564 nova_compute[226295]: 2025-11-29 08:45:36.141 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:36 np0005539564 nova_compute[226295]: 2025-11-29 08:45:36.146 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:45:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:36.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:37.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:38.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:39 np0005539564 nova_compute[226295]: 2025-11-29 08:45:39.523 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:39.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:39 np0005539564 nova_compute[226295]: 2025-11-29 08:45:39.981 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:45:40 np0005539564 nova_compute[226295]: 2025-11-29 08:45:40.137 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:45:40 np0005539564 nova_compute[226295]: 2025-11-29 08:45:40.137 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:40 np0005539564 nova_compute[226295]: 2025-11-29 08:45:40.139 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:40.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:41.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:42.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:43 np0005539564 ovn_controller[130591]: 2025-11-29T08:45:43Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a7:0e:d6 10.100.0.14
Nov 29 03:45:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:43.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:44 np0005539564 nova_compute[226295]: 2025-11-29 08:45:44.526 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:44.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:45 np0005539564 nova_compute[226295]: 2025-11-29 08:45:45.142 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:45.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:45:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:46.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:45:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:47.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:48.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:49 np0005539564 nova_compute[226295]: 2025-11-29 08:45:49.529 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:49.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:49 np0005539564 nova_compute[226295]: 2025-11-29 08:45:49.922 226310 INFO nova.compute.manager [None req-fba3a041-4e4c-42b8-b1b7-c39820bdca7d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Get console output#033[00m
Nov 29 03:45:49 np0005539564 nova_compute[226295]: 2025-11-29 08:45:49.936 270504 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:45:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:50 np0005539564 nova_compute[226295]: 2025-11-29 08:45:50.147 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:50.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:51.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:45:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:52.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.375 226310 DEBUG nova.compute.manager [req-49e08bee-5b94-4475-aaa8-abb67b5df716 req-2014bd7e-5054-47d7-ba6a-ab5a3ad5938d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Received event network-changed-05cc7810-837e-49c5-98f5-e14ac1ff5796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.375 226310 DEBUG nova.compute.manager [req-49e08bee-5b94-4475-aaa8-abb67b5df716 req-2014bd7e-5054-47d7-ba6a-ab5a3ad5938d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Refreshing instance network info cache due to event network-changed-05cc7810-837e-49c5-98f5-e14ac1ff5796. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.376 226310 DEBUG oslo_concurrency.lockutils [req-49e08bee-5b94-4475-aaa8-abb67b5df716 req-2014bd7e-5054-47d7-ba6a-ab5a3ad5938d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-000f2b4b-91a7-461e-8695-5285bfe53cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.376 226310 DEBUG oslo_concurrency.lockutils [req-49e08bee-5b94-4475-aaa8-abb67b5df716 req-2014bd7e-5054-47d7-ba6a-ab5a3ad5938d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-000f2b4b-91a7-461e-8695-5285bfe53cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.376 226310 DEBUG nova.network.neutron [req-49e08bee-5b94-4475-aaa8-abb67b5df716 req-2014bd7e-5054-47d7-ba6a-ab5a3ad5938d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Refreshing network info cache for port 05cc7810-837e-49c5-98f5-e14ac1ff5796 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.468 226310 DEBUG oslo_concurrency.lockutils [None req-7aa5f054-f3b5-414d-8c16-9a8e2bff487d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "000f2b4b-91a7-461e-8695-5285bfe53cc3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.469 226310 DEBUG oslo_concurrency.lockutils [None req-7aa5f054-f3b5-414d-8c16-9a8e2bff487d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.469 226310 DEBUG oslo_concurrency.lockutils [None req-7aa5f054-f3b5-414d-8c16-9a8e2bff487d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.469 226310 DEBUG oslo_concurrency.lockutils [None req-7aa5f054-f3b5-414d-8c16-9a8e2bff487d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.469 226310 DEBUG oslo_concurrency.lockutils [None req-7aa5f054-f3b5-414d-8c16-9a8e2bff487d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.470 226310 INFO nova.compute.manager [None req-7aa5f054-f3b5-414d-8c16-9a8e2bff487d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Terminating instance#033[00m
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.471 226310 DEBUG nova.compute.manager [None req-7aa5f054-f3b5-414d-8c16-9a8e2bff487d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:45:53 np0005539564 kernel: tap05cc7810-83 (unregistering): left promiscuous mode
Nov 29 03:45:53 np0005539564 NetworkManager[48997]: <info>  [1764405953.5377] device (tap05cc7810-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:45:53 np0005539564 ovn_controller[130591]: 2025-11-29T08:45:53Z|00779|binding|INFO|Releasing lport 05cc7810-837e-49c5-98f5-e14ac1ff5796 from this chassis (sb_readonly=0)
Nov 29 03:45:53 np0005539564 ovn_controller[130591]: 2025-11-29T08:45:53Z|00780|binding|INFO|Setting lport 05cc7810-837e-49c5-98f5-e14ac1ff5796 down in Southbound
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.591 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:53 np0005539564 ovn_controller[130591]: 2025-11-29T08:45:53Z|00781|binding|INFO|Removing iface tap05cc7810-83 ovn-installed in OVS
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.594 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:53.609 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:0e:d6 10.100.0.14'], port_security=['fa:16:3e:a7:0e:d6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '000f2b4b-91a7-461e-8695-5285bfe53cc3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '8', 'neutron:security_group_ids': '3c09da3c-4224-4197-b2fb-72b9d807c1ab', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1fe4c91d-8afb-43ca-a608-778d25bb54c3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=05cc7810-837e-49c5-98f5-e14ac1ff5796) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:45:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:53.612 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 05cc7810-837e-49c5-98f5-e14ac1ff5796 in datapath 14047ff8-a7c2-4411-bbc5-cc7ce1023d2a unbound from our chassis#033[00m
Nov 29 03:45:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:53.614 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14047ff8-a7c2-4411-bbc5-cc7ce1023d2a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:45:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:53.615 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ec624815-4009-49b5-a427-a01916cf3f24]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:53.616 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a namespace which is not needed anymore#033[00m
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.620 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:53.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:53 np0005539564 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000bf.scope: Deactivated successfully.
Nov 29 03:45:53 np0005539564 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000bf.scope: Consumed 15.385s CPU time.
Nov 29 03:45:53 np0005539564 systemd-machined[190128]: Machine qemu-90-instance-000000bf terminated.
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.697 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.704 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.720 226310 INFO nova.virt.libvirt.driver [-] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Instance destroyed successfully.#033[00m
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.721 226310 DEBUG nova.objects.instance [None req-7aa5f054-f3b5-414d-8c16-9a8e2bff487d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'resources' on Instance uuid 000f2b4b-91a7-461e-8695-5285bfe53cc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.737 226310 DEBUG nova.virt.libvirt.vif [None req-7aa5f054-f3b5-414d-8c16-9a8e2bff487d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:44:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1268542128',display_name='tempest-TestNetworkAdvancedServerOps-server-1268542128',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1268542128',id=191,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJhLf6FaCEQlgtGJ466Elx/ZqN2KwkcnjEwf65gRkVvvzRnDJxusTh8/iP5gP4ElqFttcg/fOI7oZlNwuCxP0D5jnDsVjqIml/cam53sPjjzN011MM0KRngsWjRo3EjYfQ==',key_name='tempest-TestNetworkAdvancedServerOps-508554027',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:44:50Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-2mgqivi5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:45:30Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=000f2b4b-91a7-461e-8695-5285bfe53cc3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "address": "fa:16:3e:a7:0e:d6", "network": {"id": "14047ff8-a7c2-4411-bbc5-cc7ce1023d2a", "bridge": "br-int", "label": "tempest-network-smoke--1066451220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cc7810-83", "ovs_interfaceid": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.737 226310 DEBUG nova.network.os_vif_util [None req-7aa5f054-f3b5-414d-8c16-9a8e2bff487d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "address": "fa:16:3e:a7:0e:d6", "network": {"id": "14047ff8-a7c2-4411-bbc5-cc7ce1023d2a", "bridge": "br-int", "label": "tempest-network-smoke--1066451220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cc7810-83", "ovs_interfaceid": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.738 226310 DEBUG nova.network.os_vif_util [None req-7aa5f054-f3b5-414d-8c16-9a8e2bff487d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:0e:d6,bridge_name='br-int',has_traffic_filtering=True,id=05cc7810-837e-49c5-98f5-e14ac1ff5796,network=Network(14047ff8-a7c2-4411-bbc5-cc7ce1023d2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cc7810-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.738 226310 DEBUG os_vif [None req-7aa5f054-f3b5-414d-8c16-9a8e2bff487d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:0e:d6,bridge_name='br-int',has_traffic_filtering=True,id=05cc7810-837e-49c5-98f5-e14ac1ff5796,network=Network(14047ff8-a7c2-4411-bbc5-cc7ce1023d2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cc7810-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.741 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.742 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05cc7810-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.748 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.751 226310 INFO os_vif [None req-7aa5f054-f3b5-414d-8c16-9a8e2bff487d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:0e:d6,bridge_name='br-int',has_traffic_filtering=True,id=05cc7810-837e-49c5-98f5-e14ac1ff5796,network=Network(14047ff8-a7c2-4411-bbc5-cc7ce1023d2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cc7810-83')#033[00m
Nov 29 03:45:53 np0005539564 neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a[301070]: [NOTICE]   (301074) : haproxy version is 2.8.14-c23fe91
Nov 29 03:45:53 np0005539564 neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a[301070]: [NOTICE]   (301074) : path to executable is /usr/sbin/haproxy
Nov 29 03:45:53 np0005539564 neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a[301070]: [WARNING]  (301074) : Exiting Master process...
Nov 29 03:45:53 np0005539564 neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a[301070]: [WARNING]  (301074) : Exiting Master process...
Nov 29 03:45:53 np0005539564 neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a[301070]: [ALERT]    (301074) : Current worker (301076) exited with code 143 (Terminated)
Nov 29 03:45:53 np0005539564 neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a[301070]: [WARNING]  (301074) : All workers exited. Exiting... (0)
Nov 29 03:45:53 np0005539564 systemd[1]: libpod-bdb273da8b173e079eb2b207bba1bdd250032339ded5391f21df34d675bf7551.scope: Deactivated successfully.
Nov 29 03:45:53 np0005539564 podman[301163]: 2025-11-29 08:45:53.815292078 +0000 UTC m=+0.057462937 container died bdb273da8b173e079eb2b207bba1bdd250032339ded5391f21df34d675bf7551 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:45:53 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bdb273da8b173e079eb2b207bba1bdd250032339ded5391f21df34d675bf7551-userdata-shm.mount: Deactivated successfully.
Nov 29 03:45:53 np0005539564 systemd[1]: var-lib-containers-storage-overlay-379fed45ea14e0473a06184aa8738130279733c7339babb5ecbdb25d7cc78b24-merged.mount: Deactivated successfully.
Nov 29 03:45:53 np0005539564 podman[301163]: 2025-11-29 08:45:53.852907805 +0000 UTC m=+0.095078674 container cleanup bdb273da8b173e079eb2b207bba1bdd250032339ded5391f21df34d675bf7551 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:45:53 np0005539564 systemd[1]: libpod-conmon-bdb273da8b173e079eb2b207bba1bdd250032339ded5391f21df34d675bf7551.scope: Deactivated successfully.
Nov 29 03:45:53 np0005539564 podman[301208]: 2025-11-29 08:45:53.939006984 +0000 UTC m=+0.062129081 container remove bdb273da8b173e079eb2b207bba1bdd250032339ded5391f21df34d675bf7551 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:45:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:53.946 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8bf00d6c-f778-43a0-be20-8211be64e1f0]: (4, ('Sat Nov 29 08:45:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a (bdb273da8b173e079eb2b207bba1bdd250032339ded5391f21df34d675bf7551)\nbdb273da8b173e079eb2b207bba1bdd250032339ded5391f21df34d675bf7551\nSat Nov 29 08:45:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a (bdb273da8b173e079eb2b207bba1bdd250032339ded5391f21df34d675bf7551)\nbdb273da8b173e079eb2b207bba1bdd250032339ded5391f21df34d675bf7551\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:53.949 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[69188fe5-f827-478d-9b0b-c4211a6de8d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:53.951 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14047ff8-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:53 np0005539564 kernel: tap14047ff8-a0: left promiscuous mode
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.954 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:53 np0005539564 nova_compute[226295]: 2025-11-29 08:45:53.972 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:53.976 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[23ba0e27-678a-4c60-b3bd-7a000242fbb9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:53.998 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ad0d0749-431b-42e3-bbcb-616b19de8edb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:54.000 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2fa1169c-b796-48e6-a804-bf3bbf6661ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:54.019 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0149a6cf-49e2-4c70-96d7-bd9749334f15]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 890471, 'reachable_time': 16997, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301222, 'error': None, 'target': 'ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:54 np0005539564 systemd[1]: run-netns-ovnmeta\x2d14047ff8\x2da7c2\x2d4411\x2dbbc5\x2dcc7ce1023d2a.mount: Deactivated successfully.
Nov 29 03:45:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:54.024 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14047ff8-a7c2-4411-bbc5-cc7ce1023d2a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:45:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:45:54.025 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[937bb58c-a159-4da0-bf7e-0276f3c9bdab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:54 np0005539564 nova_compute[226295]: 2025-11-29 08:45:54.202 226310 INFO nova.virt.libvirt.driver [None req-7aa5f054-f3b5-414d-8c16-9a8e2bff487d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Deleting instance files /var/lib/nova/instances/000f2b4b-91a7-461e-8695-5285bfe53cc3_del#033[00m
Nov 29 03:45:54 np0005539564 nova_compute[226295]: 2025-11-29 08:45:54.203 226310 INFO nova.virt.libvirt.driver [None req-7aa5f054-f3b5-414d-8c16-9a8e2bff487d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Deletion of /var/lib/nova/instances/000f2b4b-91a7-461e-8695-5285bfe53cc3_del complete#033[00m
Nov 29 03:45:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:54.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:55 np0005539564 nova_compute[226295]: 2025-11-29 08:45:55.149 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #145. Immutable memtables: 0.
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:45:55.225506) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 145
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405955225583, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 967, "num_deletes": 251, "total_data_size": 1977229, "memory_usage": 2007632, "flush_reason": "Manual Compaction"}
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #146: started
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405955239360, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 146, "file_size": 1293643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71619, "largest_seqno": 72581, "table_properties": {"data_size": 1289175, "index_size": 2119, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10019, "raw_average_key_size": 19, "raw_value_size": 1280227, "raw_average_value_size": 2550, "num_data_blocks": 92, "num_entries": 502, "num_filter_entries": 502, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405887, "oldest_key_time": 1764405887, "file_creation_time": 1764405955, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 13934 microseconds, and 7752 cpu microseconds.
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:45:55.239440) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #146: 1293643 bytes OK
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:45:55.239474) [db/memtable_list.cc:519] [default] Level-0 commit table #146 started
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:45:55.241871) [db/memtable_list.cc:722] [default] Level-0 commit table #146: memtable #1 done
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:45:55.241971) EVENT_LOG_v1 {"time_micros": 1764405955241953, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:45:55.242012) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 1972405, prev total WAL file size 1972405, number of live WAL files 2.
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000142.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:45:55.243266) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [146(1263KB)], [144(11MB)]
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405955243315, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [146], "files_L6": [144], "score": -1, "input_data_size": 13263327, "oldest_snapshot_seqno": -1}
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #147: 9744 keys, 11306693 bytes, temperature: kUnknown
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405955324789, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 147, "file_size": 11306693, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11245309, "index_size": 35927, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24389, "raw_key_size": 258824, "raw_average_key_size": 26, "raw_value_size": 11075927, "raw_average_value_size": 1136, "num_data_blocks": 1355, "num_entries": 9744, "num_filter_entries": 9744, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764405955, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 147, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:45:55.325048) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 11306693 bytes
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:45:55.326302) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.7 rd, 138.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 11.4 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(19.0) write-amplify(8.7) OK, records in: 10262, records dropped: 518 output_compression: NoCompression
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:45:55.326320) EVENT_LOG_v1 {"time_micros": 1764405955326312, "job": 92, "event": "compaction_finished", "compaction_time_micros": 81535, "compaction_time_cpu_micros": 34758, "output_level": 6, "num_output_files": 1, "total_output_size": 11306693, "num_input_records": 10262, "num_output_records": 9744, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405955326659, "job": 92, "event": "table_file_deletion", "file_number": 146}
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000144.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405955328747, "job": 92, "event": "table_file_deletion", "file_number": 144}
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:45:55.243073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:45:55.328781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:45:55.328786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:45:55.328788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:45:55.328790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:45:55 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:45:55.328792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:45:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:55.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:55 np0005539564 nova_compute[226295]: 2025-11-29 08:45:55.727 226310 DEBUG nova.compute.manager [req-43dea38b-6c60-4d02-9200-563bb925a93d req-f4c5f770-f184-44d3-ab2b-f60297539643 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Received event network-vif-unplugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:55 np0005539564 nova_compute[226295]: 2025-11-29 08:45:55.728 226310 DEBUG oslo_concurrency.lockutils [req-43dea38b-6c60-4d02-9200-563bb925a93d req-f4c5f770-f184-44d3-ab2b-f60297539643 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:55 np0005539564 nova_compute[226295]: 2025-11-29 08:45:55.728 226310 DEBUG oslo_concurrency.lockutils [req-43dea38b-6c60-4d02-9200-563bb925a93d req-f4c5f770-f184-44d3-ab2b-f60297539643 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:55 np0005539564 nova_compute[226295]: 2025-11-29 08:45:55.729 226310 DEBUG oslo_concurrency.lockutils [req-43dea38b-6c60-4d02-9200-563bb925a93d req-f4c5f770-f184-44d3-ab2b-f60297539643 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:55 np0005539564 nova_compute[226295]: 2025-11-29 08:45:55.729 226310 DEBUG nova.compute.manager [req-43dea38b-6c60-4d02-9200-563bb925a93d req-f4c5f770-f184-44d3-ab2b-f60297539643 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] No waiting events found dispatching network-vif-unplugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:45:55 np0005539564 nova_compute[226295]: 2025-11-29 08:45:55.730 226310 DEBUG nova.compute.manager [req-43dea38b-6c60-4d02-9200-563bb925a93d req-f4c5f770-f184-44d3-ab2b-f60297539643 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Received event network-vif-unplugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:45:55 np0005539564 nova_compute[226295]: 2025-11-29 08:45:55.831 226310 INFO nova.compute.manager [None req-7aa5f054-f3b5-414d-8c16-9a8e2bff487d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Took 2.36 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:45:55 np0005539564 nova_compute[226295]: 2025-11-29 08:45:55.831 226310 DEBUG oslo.service.loopingcall [None req-7aa5f054-f3b5-414d-8c16-9a8e2bff487d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:45:55 np0005539564 nova_compute[226295]: 2025-11-29 08:45:55.832 226310 DEBUG nova.compute.manager [-] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:45:55 np0005539564 nova_compute[226295]: 2025-11-29 08:45:55.832 226310 DEBUG nova.network.neutron [-] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:45:56 np0005539564 podman[301224]: 2025-11-29 08:45:56.55436292 +0000 UTC m=+0.099867622 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:45:56 np0005539564 podman[301225]: 2025-11-29 08:45:56.589168513 +0000 UTC m=+0.126435332 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:45:56 np0005539564 podman[301223]: 2025-11-29 08:45:56.597908489 +0000 UTC m=+0.144471411 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:45:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:45:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:56.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:45:57 np0005539564 nova_compute[226295]: 2025-11-29 08:45:57.250 226310 DEBUG nova.network.neutron [req-49e08bee-5b94-4475-aaa8-abb67b5df716 req-2014bd7e-5054-47d7-ba6a-ab5a3ad5938d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Updated VIF entry in instance network info cache for port 05cc7810-837e-49c5-98f5-e14ac1ff5796. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:45:57 np0005539564 nova_compute[226295]: 2025-11-29 08:45:57.251 226310 DEBUG nova.network.neutron [req-49e08bee-5b94-4475-aaa8-abb67b5df716 req-2014bd7e-5054-47d7-ba6a-ab5a3ad5938d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Updating instance_info_cache with network_info: [{"id": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "address": "fa:16:3e:a7:0e:d6", "network": {"id": "14047ff8-a7c2-4411-bbc5-cc7ce1023d2a", "bridge": "br-int", "label": "tempest-network-smoke--1066451220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cc7810-83", "ovs_interfaceid": "05cc7810-837e-49c5-98f5-e14ac1ff5796", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:45:57 np0005539564 nova_compute[226295]: 2025-11-29 08:45:57.301 226310 DEBUG oslo_concurrency.lockutils [req-49e08bee-5b94-4475-aaa8-abb67b5df716 req-2014bd7e-5054-47d7-ba6a-ab5a3ad5938d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-000f2b4b-91a7-461e-8695-5285bfe53cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:45:57 np0005539564 nova_compute[226295]: 2025-11-29 08:45:57.369 226310 DEBUG nova.network.neutron [-] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:45:57 np0005539564 nova_compute[226295]: 2025-11-29 08:45:57.426 226310 INFO nova.compute.manager [-] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Took 1.59 seconds to deallocate network for instance.#033[00m
Nov 29 03:45:57 np0005539564 nova_compute[226295]: 2025-11-29 08:45:57.488 226310 DEBUG oslo_concurrency.lockutils [None req-7aa5f054-f3b5-414d-8c16-9a8e2bff487d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:57 np0005539564 nova_compute[226295]: 2025-11-29 08:45:57.489 226310 DEBUG oslo_concurrency.lockutils [None req-7aa5f054-f3b5-414d-8c16-9a8e2bff487d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:57 np0005539564 nova_compute[226295]: 2025-11-29 08:45:57.549 226310 DEBUG oslo_concurrency.processutils [None req-7aa5f054-f3b5-414d-8c16-9a8e2bff487d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:57.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:57 np0005539564 nova_compute[226295]: 2025-11-29 08:45:57.848 226310 DEBUG nova.compute.manager [req-dfe7fa49-f577-405e-ae15-0dfbea36ea8b req-5dd792da-01b8-4b41-a10a-4d9bf7ea6f10 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Received event network-vif-plugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:57 np0005539564 nova_compute[226295]: 2025-11-29 08:45:57.850 226310 DEBUG oslo_concurrency.lockutils [req-dfe7fa49-f577-405e-ae15-0dfbea36ea8b req-5dd792da-01b8-4b41-a10a-4d9bf7ea6f10 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:57 np0005539564 nova_compute[226295]: 2025-11-29 08:45:57.850 226310 DEBUG oslo_concurrency.lockutils [req-dfe7fa49-f577-405e-ae15-0dfbea36ea8b req-5dd792da-01b8-4b41-a10a-4d9bf7ea6f10 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:57 np0005539564 nova_compute[226295]: 2025-11-29 08:45:57.851 226310 DEBUG oslo_concurrency.lockutils [req-dfe7fa49-f577-405e-ae15-0dfbea36ea8b req-5dd792da-01b8-4b41-a10a-4d9bf7ea6f10 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:57 np0005539564 nova_compute[226295]: 2025-11-29 08:45:57.851 226310 DEBUG nova.compute.manager [req-dfe7fa49-f577-405e-ae15-0dfbea36ea8b req-5dd792da-01b8-4b41-a10a-4d9bf7ea6f10 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] No waiting events found dispatching network-vif-plugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:45:57 np0005539564 nova_compute[226295]: 2025-11-29 08:45:57.852 226310 WARNING nova.compute.manager [req-dfe7fa49-f577-405e-ae15-0dfbea36ea8b req-5dd792da-01b8-4b41-a10a-4d9bf7ea6f10 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Received unexpected event network-vif-plugged-05cc7810-837e-49c5-98f5-e14ac1ff5796 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:45:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:45:58 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2420364000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:45:58 np0005539564 nova_compute[226295]: 2025-11-29 08:45:58.048 226310 DEBUG oslo_concurrency.processutils [None req-7aa5f054-f3b5-414d-8c16-9a8e2bff487d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:58 np0005539564 nova_compute[226295]: 2025-11-29 08:45:58.054 226310 DEBUG nova.compute.provider_tree [None req-7aa5f054-f3b5-414d-8c16-9a8e2bff487d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:45:58 np0005539564 nova_compute[226295]: 2025-11-29 08:45:58.060 226310 DEBUG nova.compute.manager [req-55a41bde-a01c-4098-a276-d77ad49f3bcd req-95ce6134-cb81-4d1a-a64e-72c015c09a4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Received event network-vif-deleted-05cc7810-837e-49c5-98f5-e14ac1ff5796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:58 np0005539564 nova_compute[226295]: 2025-11-29 08:45:58.079 226310 DEBUG nova.scheduler.client.report [None req-7aa5f054-f3b5-414d-8c16-9a8e2bff487d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:45:58 np0005539564 nova_compute[226295]: 2025-11-29 08:45:58.142 226310 DEBUG oslo_concurrency.lockutils [None req-7aa5f054-f3b5-414d-8c16-9a8e2bff487d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:58 np0005539564 nova_compute[226295]: 2025-11-29 08:45:58.166 226310 INFO nova.scheduler.client.report [None req-7aa5f054-f3b5-414d-8c16-9a8e2bff487d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Deleted allocations for instance 000f2b4b-91a7-461e-8695-5285bfe53cc3#033[00m
Nov 29 03:45:58 np0005539564 nova_compute[226295]: 2025-11-29 08:45:58.305 226310 DEBUG oslo_concurrency.lockutils [None req-7aa5f054-f3b5-414d-8c16-9a8e2bff487d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "000f2b4b-91a7-461e-8695-5285bfe53cc3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:58.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:58 np0005539564 nova_compute[226295]: 2025-11-29 08:45:58.746 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:45:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:59.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:00 np0005539564 nova_compute[226295]: 2025-11-29 08:46:00.151 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:00.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:01.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:02.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:03.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:46:03.762 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:46:03.763 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:46:03.763 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:03 np0005539564 nova_compute[226295]: 2025-11-29 08:46:03.788 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:04.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:05 np0005539564 nova_compute[226295]: 2025-11-29 08:46:05.154 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:05.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:06.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:07.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:08 np0005539564 nova_compute[226295]: 2025-11-29 08:46:08.322 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:08 np0005539564 nova_compute[226295]: 2025-11-29 08:46:08.457 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:46:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:08.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:46:08 np0005539564 nova_compute[226295]: 2025-11-29 08:46:08.716 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405953.7146895, 000f2b4b-91a7-461e-8695-5285bfe53cc3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:46:08 np0005539564 nova_compute[226295]: 2025-11-29 08:46:08.717 226310 INFO nova.compute.manager [-] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:46:08 np0005539564 nova_compute[226295]: 2025-11-29 08:46:08.746 226310 DEBUG nova.compute.manager [None req-0ab372ba-1269-4ea4-a6fc-98ac61222889 - - - - - -] [instance: 000f2b4b-91a7-461e-8695-5285bfe53cc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:46:08 np0005539564 nova_compute[226295]: 2025-11-29 08:46:08.790 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:09.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:10 np0005539564 nova_compute[226295]: 2025-11-29 08:46:10.155 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:46:10.218 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=75, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=74) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:46:10 np0005539564 nova_compute[226295]: 2025-11-29 08:46:10.218 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:10 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:46:10.221 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:46:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:10.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:11.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:12 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:46:12 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:46:12 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:46:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:12.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:13.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:13 np0005539564 nova_compute[226295]: 2025-11-29 08:46:13.832 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:14.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:15 np0005539564 nova_compute[226295]: 2025-11-29 08:46:15.158 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:15 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:46:15.224 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '75'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:15.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:16.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:17.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:18.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:18 np0005539564 nova_compute[226295]: 2025-11-29 08:46:18.835 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:19 np0005539564 nova_compute[226295]: 2025-11-29 08:46:19.117 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:19 np0005539564 nova_compute[226295]: 2025-11-29 08:46:19.118 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:19 np0005539564 nova_compute[226295]: 2025-11-29 08:46:19.167 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:19 np0005539564 nova_compute[226295]: 2025-11-29 08:46:19.168 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:19 np0005539564 nova_compute[226295]: 2025-11-29 08:46:19.169 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:19 np0005539564 nova_compute[226295]: 2025-11-29 08:46:19.169 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:46:19 np0005539564 nova_compute[226295]: 2025-11-29 08:46:19.345 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:19 np0005539564 nova_compute[226295]: 2025-11-29 08:46:19.346 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:46:19 np0005539564 nova_compute[226295]: 2025-11-29 08:46:19.346 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:46:19 np0005539564 nova_compute[226295]: 2025-11-29 08:46:19.384 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:46:19 np0005539564 nova_compute[226295]: 2025-11-29 08:46:19.385 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:19 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:46:19 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:46:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:19.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:20 np0005539564 nova_compute[226295]: 2025-11-29 08:46:20.161 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:20.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:21.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:22.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:23.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:23 np0005539564 nova_compute[226295]: 2025-11-29 08:46:23.838 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:24.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:25 np0005539564 nova_compute[226295]: 2025-11-29 08:46:25.164 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:25 np0005539564 nova_compute[226295]: 2025-11-29 08:46:25.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:25.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:26.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:27 np0005539564 podman[301496]: 2025-11-29 08:46:27.543189902 +0000 UTC m=+0.077751374 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 03:46:27 np0005539564 podman[301495]: 2025-11-29 08:46:27.554122068 +0000 UTC m=+0.091738372 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:46:27 np0005539564 podman[301494]: 2025-11-29 08:46:27.620174676 +0000 UTC m=+0.157787690 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:46:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:27.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:28 np0005539564 nova_compute[226295]: 2025-11-29 08:46:28.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:28.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:28 np0005539564 nova_compute[226295]: 2025-11-29 08:46:28.840 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:29.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:30 np0005539564 nova_compute[226295]: 2025-11-29 08:46:30.165 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:30.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:31.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:32 np0005539564 nova_compute[226295]: 2025-11-29 08:46:32.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:32 np0005539564 nova_compute[226295]: 2025-11-29 08:46:32.399 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:32 np0005539564 nova_compute[226295]: 2025-11-29 08:46:32.400 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:32 np0005539564 nova_compute[226295]: 2025-11-29 08:46:32.400 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:32 np0005539564 nova_compute[226295]: 2025-11-29 08:46:32.401 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:46:32 np0005539564 nova_compute[226295]: 2025-11-29 08:46:32.401 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:32.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:46:32 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4231914523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:46:32 np0005539564 nova_compute[226295]: 2025-11-29 08:46:32.910 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:46:33 np0005539564 nova_compute[226295]: 2025-11-29 08:46:33.082 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:46:33 np0005539564 nova_compute[226295]: 2025-11-29 08:46:33.085 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4254MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:46:33 np0005539564 nova_compute[226295]: 2025-11-29 08:46:33.086 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:33 np0005539564 nova_compute[226295]: 2025-11-29 08:46:33.086 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:33 np0005539564 nova_compute[226295]: 2025-11-29 08:46:33.215 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:46:33 np0005539564 nova_compute[226295]: 2025-11-29 08:46:33.216 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:46:33 np0005539564 nova_compute[226295]: 2025-11-29 08:46:33.323 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:33.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:46:33 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/238002989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:46:33 np0005539564 nova_compute[226295]: 2025-11-29 08:46:33.810 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:46:33 np0005539564 nova_compute[226295]: 2025-11-29 08:46:33.819 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:46:33 np0005539564 nova_compute[226295]: 2025-11-29 08:46:33.869 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:46:33 np0005539564 nova_compute[226295]: 2025-11-29 08:46:33.894 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:33 np0005539564 nova_compute[226295]: 2025-11-29 08:46:33.931 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:46:33 np0005539564 nova_compute[226295]: 2025-11-29 08:46:33.932 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:34.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:35 np0005539564 nova_compute[226295]: 2025-11-29 08:46:35.167 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:46:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:46:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:35.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:46:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:36.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:37.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:46:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:38.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:46:38 np0005539564 nova_compute[226295]: 2025-11-29 08:46:38.897 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:46:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:46:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:39.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:46:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:40 np0005539564 nova_compute[226295]: 2025-11-29 08:46:40.168 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:46:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:46:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:40.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:46:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:41.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:42.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:43.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:43 np0005539564 nova_compute[226295]: 2025-11-29 08:46:43.957 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:46:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:44.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:45 np0005539564 nova_compute[226295]: 2025-11-29 08:46:45.171 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:46:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:45.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:46.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:47.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:48.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:48 np0005539564 nova_compute[226295]: 2025-11-29 08:46:48.960 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:46:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:49.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:50 np0005539564 nova_compute[226295]: 2025-11-29 08:46:50.173 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:46:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:50.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:51.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:52.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:53.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:54 np0005539564 nova_compute[226295]: 2025-11-29 08:46:54.001 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:46:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:46:54.267 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=76, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=75) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:46:54 np0005539564 nova_compute[226295]: 2025-11-29 08:46:54.269 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:46:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:46:54.270 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:46:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:54.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:55 np0005539564 nova_compute[226295]: 2025-11-29 08:46:55.176 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:46:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:55.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:56 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:46:56.274 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '76'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:46:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:56.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:57.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:58 np0005539564 podman[301605]: 2025-11-29 08:46:58.511006033 +0000 UTC m=+0.062973355 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true)
Nov 29 03:46:58 np0005539564 podman[301604]: 2025-11-29 08:46:58.540769649 +0000 UTC m=+0.087823478 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 29 03:46:58 np0005539564 podman[301603]: 2025-11-29 08:46:58.549816873 +0000 UTC m=+0.101918229 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:46:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:58.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:59 np0005539564 nova_compute[226295]: 2025-11-29 08:46:59.003 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:46:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:46:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:59.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:00 np0005539564 nova_compute[226295]: 2025-11-29 08:47:00.178 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:00.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:01.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:02 np0005539564 ovn_controller[130591]: 2025-11-29T08:47:02Z|00782|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Nov 29 03:47:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:02.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:03.763 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:47:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:03.764 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:47:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:03.764 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:47:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:03.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:04 np0005539564 nova_compute[226295]: 2025-11-29 08:47:04.005 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:04.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:05 np0005539564 nova_compute[226295]: 2025-11-29 08:47:05.180 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:05.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:06.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:07.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:08.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:09 np0005539564 nova_compute[226295]: 2025-11-29 08:47:09.065 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:09.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:10 np0005539564 nova_compute[226295]: 2025-11-29 08:47:10.206 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:10.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:11.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:12.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:13.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:14 np0005539564 nova_compute[226295]: 2025-11-29 08:47:14.067 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:14.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:14 np0005539564 nova_compute[226295]: 2025-11-29 08:47:14.932 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:47:14 np0005539564 nova_compute[226295]: 2025-11-29 08:47:14.933 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:47:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:15 np0005539564 nova_compute[226295]: 2025-11-29 08:47:15.207 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:15 np0005539564 nova_compute[226295]: 2025-11-29 08:47:15.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:47:15 np0005539564 nova_compute[226295]: 2025-11-29 08:47:15.342 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:47:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:15.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:16.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:17.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:18.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:19 np0005539564 nova_compute[226295]: 2025-11-29 08:47:19.069 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:19 np0005539564 nova_compute[226295]: 2025-11-29 08:47:19.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:19 np0005539564 nova_compute[226295]: 2025-11-29 08:47:19.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:19.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:19 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:20 np0005539564 nova_compute[226295]: 2025-11-29 08:47:20.210 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:47:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:47:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:47:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:47:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:20.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:21 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:47:21 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:47:21 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:47:21 np0005539564 nova_compute[226295]: 2025-11-29 08:47:21.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:21 np0005539564 nova_compute[226295]: 2025-11-29 08:47:21.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:47:21 np0005539564 nova_compute[226295]: 2025-11-29 08:47:21.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:47:21 np0005539564 nova_compute[226295]: 2025-11-29 08:47:21.363 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:47:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:21.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:22.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:23.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:24 np0005539564 nova_compute[226295]: 2025-11-29 08:47:24.071 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:24.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:25 np0005539564 nova_compute[226295]: 2025-11-29 08:47:25.211 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:25 np0005539564 nova_compute[226295]: 2025-11-29 08:47:25.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:25.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:26 np0005539564 nova_compute[226295]: 2025-11-29 08:47:26.264 226310 DEBUG oslo_concurrency.lockutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:26 np0005539564 nova_compute[226295]: 2025-11-29 08:47:26.264 226310 DEBUG oslo_concurrency.lockutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:26 np0005539564 nova_compute[226295]: 2025-11-29 08:47:26.293 226310 DEBUG nova.compute.manager [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:47:26 np0005539564 nova_compute[226295]: 2025-11-29 08:47:26.425 226310 DEBUG oslo_concurrency.lockutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:26 np0005539564 nova_compute[226295]: 2025-11-29 08:47:26.426 226310 DEBUG oslo_concurrency.lockutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:26 np0005539564 nova_compute[226295]: 2025-11-29 08:47:26.435 226310 DEBUG nova.virt.hardware [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:47:26 np0005539564 nova_compute[226295]: 2025-11-29 08:47:26.436 226310 INFO nova.compute.claims [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:47:26 np0005539564 nova_compute[226295]: 2025-11-29 08:47:26.551 226310 DEBUG oslo_concurrency.processutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:47:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:26.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:47:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:47:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3330788449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:47:27 np0005539564 nova_compute[226295]: 2025-11-29 08:47:27.074 226310 DEBUG oslo_concurrency.processutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:27 np0005539564 nova_compute[226295]: 2025-11-29 08:47:27.084 226310 DEBUG nova.compute.provider_tree [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:47:27 np0005539564 nova_compute[226295]: 2025-11-29 08:47:27.110 226310 DEBUG nova.scheduler.client.report [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:47:27 np0005539564 nova_compute[226295]: 2025-11-29 08:47:27.146 226310 DEBUG oslo_concurrency.lockutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:27 np0005539564 nova_compute[226295]: 2025-11-29 08:47:27.147 226310 DEBUG nova.compute.manager [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:47:27 np0005539564 nova_compute[226295]: 2025-11-29 08:47:27.206 226310 DEBUG nova.compute.manager [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:47:27 np0005539564 nova_compute[226295]: 2025-11-29 08:47:27.207 226310 DEBUG nova.network.neutron [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:47:27 np0005539564 nova_compute[226295]: 2025-11-29 08:47:27.227 226310 INFO nova.virt.libvirt.driver [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:47:27 np0005539564 nova_compute[226295]: 2025-11-29 08:47:27.248 226310 DEBUG nova.compute.manager [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:47:27 np0005539564 nova_compute[226295]: 2025-11-29 08:47:27.411 226310 DEBUG nova.compute.manager [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:47:27 np0005539564 nova_compute[226295]: 2025-11-29 08:47:27.413 226310 DEBUG nova.virt.libvirt.driver [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:47:27 np0005539564 nova_compute[226295]: 2025-11-29 08:47:27.414 226310 INFO nova.virt.libvirt.driver [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Creating image(s)#033[00m
Nov 29 03:47:27 np0005539564 nova_compute[226295]: 2025-11-29 08:47:27.458 226310 DEBUG nova.storage.rbd_utils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] rbd image 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:47:27 np0005539564 nova_compute[226295]: 2025-11-29 08:47:27.503 226310 DEBUG nova.storage.rbd_utils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] rbd image 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:47:27 np0005539564 nova_compute[226295]: 2025-11-29 08:47:27.532 226310 DEBUG nova.storage.rbd_utils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] rbd image 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:47:27 np0005539564 nova_compute[226295]: 2025-11-29 08:47:27.536 226310 DEBUG oslo_concurrency.processutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:27 np0005539564 nova_compute[226295]: 2025-11-29 08:47:27.568 226310 DEBUG nova.policy [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5dbbf4fd34004538ad08aa4aa6ab8096', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c5e836f8387a492c8119be72f1fb9980', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:47:27 np0005539564 nova_compute[226295]: 2025-11-29 08:47:27.619 226310 DEBUG oslo_concurrency.processutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:27 np0005539564 nova_compute[226295]: 2025-11-29 08:47:27.620 226310 DEBUG oslo_concurrency.lockutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:27 np0005539564 nova_compute[226295]: 2025-11-29 08:47:27.620 226310 DEBUG oslo_concurrency.lockutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:27 np0005539564 nova_compute[226295]: 2025-11-29 08:47:27.620 226310 DEBUG oslo_concurrency.lockutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:27 np0005539564 nova_compute[226295]: 2025-11-29 08:47:27.652 226310 DEBUG nova.storage.rbd_utils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] rbd image 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:47:27 np0005539564 nova_compute[226295]: 2025-11-29 08:47:27.656 226310 DEBUG oslo_concurrency.processutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:27.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:27 np0005539564 nova_compute[226295]: 2025-11-29 08:47:27.926 226310 DEBUG oslo_concurrency.processutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:28 np0005539564 nova_compute[226295]: 2025-11-29 08:47:28.019 226310 DEBUG nova.storage.rbd_utils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] resizing rbd image 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:47:28 np0005539564 nova_compute[226295]: 2025-11-29 08:47:28.139 226310 DEBUG nova.objects.instance [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lazy-loading 'migration_context' on Instance uuid 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:47:28 np0005539564 nova_compute[226295]: 2025-11-29 08:47:28.154 226310 DEBUG nova.virt.libvirt.driver [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:47:28 np0005539564 nova_compute[226295]: 2025-11-29 08:47:28.154 226310 DEBUG nova.virt.libvirt.driver [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Ensure instance console log exists: /var/lib/nova/instances/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:47:28 np0005539564 nova_compute[226295]: 2025-11-29 08:47:28.155 226310 DEBUG oslo_concurrency.lockutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:28 np0005539564 nova_compute[226295]: 2025-11-29 08:47:28.155 226310 DEBUG oslo_concurrency.lockutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:28 np0005539564 nova_compute[226295]: 2025-11-29 08:47:28.156 226310 DEBUG oslo_concurrency.lockutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:28 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:47:28 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:47:28 np0005539564 nova_compute[226295]: 2025-11-29 08:47:28.788 226310 DEBUG nova.network.neutron [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Successfully created port: 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:47:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:28.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:29 np0005539564 nova_compute[226295]: 2025-11-29 08:47:29.120 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:29 np0005539564 nova_compute[226295]: 2025-11-29 08:47:29.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:29 np0005539564 podman[302035]: 2025-11-29 08:47:29.534632094 +0000 UTC m=+0.067746514 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:47:29 np0005539564 podman[302034]: 2025-11-29 08:47:29.550483443 +0000 UTC m=+0.084659281 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd)
Nov 29 03:47:29 np0005539564 podman[302033]: 2025-11-29 08:47:29.551257364 +0000 UTC m=+0.095812233 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 29 03:47:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:29.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:30 np0005539564 nova_compute[226295]: 2025-11-29 08:47:30.152 226310 DEBUG nova.network.neutron [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Successfully updated port: 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:47:30 np0005539564 nova_compute[226295]: 2025-11-29 08:47:30.173 226310 DEBUG oslo_concurrency.lockutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "refresh_cache-6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:47:30 np0005539564 nova_compute[226295]: 2025-11-29 08:47:30.174 226310 DEBUG oslo_concurrency.lockutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquired lock "refresh_cache-6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:47:30 np0005539564 nova_compute[226295]: 2025-11-29 08:47:30.174 226310 DEBUG nova.network.neutron [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:47:30 np0005539564 nova_compute[226295]: 2025-11-29 08:47:30.256 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:30 np0005539564 nova_compute[226295]: 2025-11-29 08:47:30.272 226310 DEBUG nova.compute.manager [req-617c35ea-4ea3-4418-8318-4b32817896ff req-cde7a745-34de-400d-911d-3e0f1a9b9328 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Received event network-changed-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:47:30 np0005539564 nova_compute[226295]: 2025-11-29 08:47:30.274 226310 DEBUG nova.compute.manager [req-617c35ea-4ea3-4418-8318-4b32817896ff req-cde7a745-34de-400d-911d-3e0f1a9b9328 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Refreshing instance network info cache due to event network-changed-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:47:30 np0005539564 nova_compute[226295]: 2025-11-29 08:47:30.274 226310 DEBUG oslo_concurrency.lockutils [req-617c35ea-4ea3-4418-8318-4b32817896ff req-cde7a745-34de-400d-911d-3e0f1a9b9328 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:47:30 np0005539564 nova_compute[226295]: 2025-11-29 08:47:30.364 226310 DEBUG nova.network.neutron [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:47:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:30.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:31 np0005539564 nova_compute[226295]: 2025-11-29 08:47:31.552 226310 DEBUG nova.network.neutron [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Updating instance_info_cache with network_info: [{"id": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "address": "fa:16:3e:03:3b:02", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bfd1c43-1b", "ovs_interfaceid": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:47:31 np0005539564 nova_compute[226295]: 2025-11-29 08:47:31.582 226310 DEBUG oslo_concurrency.lockutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Releasing lock "refresh_cache-6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:47:31 np0005539564 nova_compute[226295]: 2025-11-29 08:47:31.582 226310 DEBUG nova.compute.manager [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Instance network_info: |[{"id": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "address": "fa:16:3e:03:3b:02", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bfd1c43-1b", "ovs_interfaceid": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:47:31 np0005539564 nova_compute[226295]: 2025-11-29 08:47:31.583 226310 DEBUG oslo_concurrency.lockutils [req-617c35ea-4ea3-4418-8318-4b32817896ff req-cde7a745-34de-400d-911d-3e0f1a9b9328 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:47:31 np0005539564 nova_compute[226295]: 2025-11-29 08:47:31.583 226310 DEBUG nova.network.neutron [req-617c35ea-4ea3-4418-8318-4b32817896ff req-cde7a745-34de-400d-911d-3e0f1a9b9328 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Refreshing network info cache for port 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:47:31 np0005539564 nova_compute[226295]: 2025-11-29 08:47:31.586 226310 DEBUG nova.virt.libvirt.driver [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Start _get_guest_xml network_info=[{"id": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "address": "fa:16:3e:03:3b:02", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bfd1c43-1b", "ovs_interfaceid": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:47:31 np0005539564 nova_compute[226295]: 2025-11-29 08:47:31.602 226310 WARNING nova.virt.libvirt.driver [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:47:31 np0005539564 nova_compute[226295]: 2025-11-29 08:47:31.607 226310 DEBUG nova.virt.libvirt.host [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:47:31 np0005539564 nova_compute[226295]: 2025-11-29 08:47:31.607 226310 DEBUG nova.virt.libvirt.host [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:47:31 np0005539564 nova_compute[226295]: 2025-11-29 08:47:31.612 226310 DEBUG nova.virt.libvirt.host [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:47:31 np0005539564 nova_compute[226295]: 2025-11-29 08:47:31.612 226310 DEBUG nova.virt.libvirt.host [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:47:31 np0005539564 nova_compute[226295]: 2025-11-29 08:47:31.614 226310 DEBUG nova.virt.libvirt.driver [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:47:31 np0005539564 nova_compute[226295]: 2025-11-29 08:47:31.614 226310 DEBUG nova.virt.hardware [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:47:31 np0005539564 nova_compute[226295]: 2025-11-29 08:47:31.614 226310 DEBUG nova.virt.hardware [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:47:31 np0005539564 nova_compute[226295]: 2025-11-29 08:47:31.615 226310 DEBUG nova.virt.hardware [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:47:31 np0005539564 nova_compute[226295]: 2025-11-29 08:47:31.615 226310 DEBUG nova.virt.hardware [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:47:31 np0005539564 nova_compute[226295]: 2025-11-29 08:47:31.615 226310 DEBUG nova.virt.hardware [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:47:31 np0005539564 nova_compute[226295]: 2025-11-29 08:47:31.615 226310 DEBUG nova.virt.hardware [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:47:31 np0005539564 nova_compute[226295]: 2025-11-29 08:47:31.615 226310 DEBUG nova.virt.hardware [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:47:31 np0005539564 nova_compute[226295]: 2025-11-29 08:47:31.616 226310 DEBUG nova.virt.hardware [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:47:31 np0005539564 nova_compute[226295]: 2025-11-29 08:47:31.616 226310 DEBUG nova.virt.hardware [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:47:31 np0005539564 nova_compute[226295]: 2025-11-29 08:47:31.616 226310 DEBUG nova.virt.hardware [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:47:31 np0005539564 nova_compute[226295]: 2025-11-29 08:47:31.616 226310 DEBUG nova.virt.hardware [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:47:31 np0005539564 nova_compute[226295]: 2025-11-29 08:47:31.619 226310 DEBUG oslo_concurrency.processutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:31.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:47:32 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1619977189' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.100 226310 DEBUG oslo_concurrency.processutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.135 226310 DEBUG nova.storage.rbd_utils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] rbd image 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.141 226310 DEBUG oslo_concurrency.processutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:47:32 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/654561874' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.606 226310 DEBUG oslo_concurrency.processutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.609 226310 DEBUG nova.virt.libvirt.vif [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:47:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-74860485',display_name='tempest-TestShelveInstance-server-74860485',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-74860485',id=195,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKT08Tnkn1UqV0bF/J8wpnzMCF6Zkhpmk/usL6YnI+Le5YAvtauWosLF4Kvj259R/59WcHeLG4Cqd2MmjrgXGd9Nw0BxGgZcDldkgLq1Xl0jjL8yBMwXntpEhSzBHi8sNQ==',key_name='tempest-TestShelveInstance-876097080',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5e836f8387a492c8119be72f1fb9980',ramdisk_id='',reservation_id='r-4rgnqe84',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1715482181',owner_user_name='tempest-TestShelveInstance-1715482181-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:47:27Z,user_data=None,user_id='5dbbf4fd34004538ad08aa4aa6ab8096',uuid=6eda2a5e-bf92-4d34-b21e-ca4eaf01728b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "address": "fa:16:3e:03:3b:02", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bfd1c43-1b", "ovs_interfaceid": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.610 226310 DEBUG nova.network.os_vif_util [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Converting VIF {"id": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "address": "fa:16:3e:03:3b:02", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bfd1c43-1b", "ovs_interfaceid": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.611 226310 DEBUG nova.network.os_vif_util [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:3b:02,bridge_name='br-int',has_traffic_filtering=True,id=3bfd1c43-1b2b-4fa1-8eb4-e366844ea174,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bfd1c43-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.614 226310 DEBUG nova.objects.instance [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.644 226310 DEBUG nova.virt.libvirt.driver [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:47:32 np0005539564 nova_compute[226295]:  <uuid>6eda2a5e-bf92-4d34-b21e-ca4eaf01728b</uuid>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:  <name>instance-000000c3</name>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <nova:name>tempest-TestShelveInstance-server-74860485</nova:name>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:47:31</nova:creationTime>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:47:32 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:        <nova:user uuid="5dbbf4fd34004538ad08aa4aa6ab8096">tempest-TestShelveInstance-1715482181-project-member</nova:user>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:        <nova:project uuid="c5e836f8387a492c8119be72f1fb9980">tempest-TestShelveInstance-1715482181</nova:project>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:        <nova:port uuid="3bfd1c43-1b2b-4fa1-8eb4-e366844ea174">
Nov 29 03:47:32 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <entry name="serial">6eda2a5e-bf92-4d34-b21e-ca4eaf01728b</entry>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <entry name="uuid">6eda2a5e-bf92-4d34-b21e-ca4eaf01728b</entry>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk">
Nov 29 03:47:32 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:47:32 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk.config">
Nov 29 03:47:32 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:47:32 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:03:3b:02"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <target dev="tap3bfd1c43-1b"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b/console.log" append="off"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:47:32 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:47:32 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:47:32 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:47:32 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.646 226310 DEBUG nova.compute.manager [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Preparing to wait for external event network-vif-plugged-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.647 226310 DEBUG oslo_concurrency.lockutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.648 226310 DEBUG oslo_concurrency.lockutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.649 226310 DEBUG oslo_concurrency.lockutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.652 226310 DEBUG nova.virt.libvirt.vif [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:47:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-74860485',display_name='tempest-TestShelveInstance-server-74860485',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-74860485',id=195,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKT08Tnkn1UqV0bF/J8wpnzMCF6Zkhpmk/usL6YnI+Le5YAvtauWosLF4Kvj259R/59WcHeLG4Cqd2MmjrgXGd9Nw0BxGgZcDldkgLq1Xl0jjL8yBMwXntpEhSzBHi8sNQ==',key_name='tempest-TestShelveInstance-876097080',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5e836f8387a492c8119be72f1fb9980',ramdisk_id='',reservation_id='r-4rgnqe84',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1715482181',owner_user_name='tempest-TestShelveInstance-1715482181-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:47:27Z,user_data=None,user_id='5dbbf4fd34004538ad08aa4aa6ab8096',uuid=6eda2a5e-bf92-4d34-b21e-ca4eaf01728b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "address": "fa:16:3e:03:3b:02", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bfd1c43-1b", "ovs_interfaceid": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.653 226310 DEBUG nova.network.os_vif_util [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Converting VIF {"id": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "address": "fa:16:3e:03:3b:02", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bfd1c43-1b", "ovs_interfaceid": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.654 226310 DEBUG nova.network.os_vif_util [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:3b:02,bridge_name='br-int',has_traffic_filtering=True,id=3bfd1c43-1b2b-4fa1-8eb4-e366844ea174,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bfd1c43-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.654 226310 DEBUG os_vif [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:3b:02,bridge_name='br-int',has_traffic_filtering=True,id=3bfd1c43-1b2b-4fa1-8eb4-e366844ea174,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bfd1c43-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.656 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.656 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.657 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.662 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.662 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3bfd1c43-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.663 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3bfd1c43-1b, col_values=(('external_ids', {'iface-id': '3bfd1c43-1b2b-4fa1-8eb4-e366844ea174', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:3b:02', 'vm-uuid': '6eda2a5e-bf92-4d34-b21e-ca4eaf01728b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.665 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:32 np0005539564 NetworkManager[48997]: <info>  [1764406052.6683] manager: (tap3bfd1c43-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/359)
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.671 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.678 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.680 226310 INFO os_vif [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:3b:02,bridge_name='br-int',has_traffic_filtering=True,id=3bfd1c43-1b2b-4fa1-8eb4-e366844ea174,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bfd1c43-1b')#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.750 226310 DEBUG nova.virt.libvirt.driver [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.751 226310 DEBUG nova.virt.libvirt.driver [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.751 226310 DEBUG nova.virt.libvirt.driver [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] No VIF found with MAC fa:16:3e:03:3b:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.752 226310 INFO nova.virt.libvirt.driver [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Using config drive#033[00m
Nov 29 03:47:32 np0005539564 nova_compute[226295]: 2025-11-29 08:47:32.793 226310 DEBUG nova.storage.rbd_utils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] rbd image 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:47:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:32.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:33 np0005539564 nova_compute[226295]: 2025-11-29 08:47:33.152 226310 DEBUG nova.network.neutron [req-617c35ea-4ea3-4418-8318-4b32817896ff req-cde7a745-34de-400d-911d-3e0f1a9b9328 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Updated VIF entry in instance network info cache for port 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:47:33 np0005539564 nova_compute[226295]: 2025-11-29 08:47:33.152 226310 DEBUG nova.network.neutron [req-617c35ea-4ea3-4418-8318-4b32817896ff req-cde7a745-34de-400d-911d-3e0f1a9b9328 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Updating instance_info_cache with network_info: [{"id": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "address": "fa:16:3e:03:3b:02", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bfd1c43-1b", "ovs_interfaceid": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:47:33 np0005539564 nova_compute[226295]: 2025-11-29 08:47:33.177 226310 DEBUG oslo_concurrency.lockutils [req-617c35ea-4ea3-4418-8318-4b32817896ff req-cde7a745-34de-400d-911d-3e0f1a9b9328 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:47:33 np0005539564 nova_compute[226295]: 2025-11-29 08:47:33.465 226310 INFO nova.virt.libvirt.driver [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Creating config drive at /var/lib/nova/instances/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b/disk.config#033[00m
Nov 29 03:47:33 np0005539564 nova_compute[226295]: 2025-11-29 08:47:33.476 226310 DEBUG oslo_concurrency.processutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt2q9yrdy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:33 np0005539564 nova_compute[226295]: 2025-11-29 08:47:33.623 226310 DEBUG oslo_concurrency.processutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt2q9yrdy" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:33 np0005539564 nova_compute[226295]: 2025-11-29 08:47:33.661 226310 DEBUG nova.storage.rbd_utils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] rbd image 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:47:33 np0005539564 nova_compute[226295]: 2025-11-29 08:47:33.665 226310 DEBUG oslo_concurrency.processutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b/disk.config 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:33.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:33 np0005539564 nova_compute[226295]: 2025-11-29 08:47:33.900 226310 DEBUG oslo_concurrency.processutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b/disk.config 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:33 np0005539564 nova_compute[226295]: 2025-11-29 08:47:33.901 226310 INFO nova.virt.libvirt.driver [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Deleting local config drive /var/lib/nova/instances/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b/disk.config because it was imported into RBD.#033[00m
Nov 29 03:47:33 np0005539564 kernel: tap3bfd1c43-1b: entered promiscuous mode
Nov 29 03:47:33 np0005539564 NetworkManager[48997]: <info>  [1764406053.9827] manager: (tap3bfd1c43-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/360)
Nov 29 03:47:34 np0005539564 ovn_controller[130591]: 2025-11-29T08:47:34Z|00783|binding|INFO|Claiming lport 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 for this chassis.
Nov 29 03:47:34 np0005539564 ovn_controller[130591]: 2025-11-29T08:47:34Z|00784|binding|INFO|3bfd1c43-1b2b-4fa1-8eb4-e366844ea174: Claiming fa:16:3e:03:3b:02 10.100.0.10
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.018 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.027 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.038 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:3b:02 10.100.0.10'], port_security=['fa:16:3e:03:3b:02 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6eda2a5e-bf92-4d34-b21e-ca4eaf01728b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0636028a-96d5-4ad7-aa6e-9129edd44385', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5e836f8387a492c8119be72f1fb9980', 'neutron:revision_number': '2', 'neutron:security_group_ids': '36d56553-7b52-4135-ab01-9fd93eb2713f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf231669-438b-4750-8f96-dc7fed049a6a, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=3bfd1c43-1b2b-4fa1-8eb4-e366844ea174) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.040 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 in datapath 0636028a-96d5-4ad7-aa6e-9129edd44385 bound to our chassis#033[00m
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.042 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0636028a-96d5-4ad7-aa6e-9129edd44385#033[00m
Nov 29 03:47:34 np0005539564 systemd-udevd[302228]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.055 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a21e962d-aeb7-4822-bbc6-3e8282ecb65e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.057 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0636028a-91 in ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.059 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0636028a-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.059 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6ec5e4b4-429e-417c-ba56-19a7a90ec2b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.061 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[722fa8a8-59ca-483f-b594-706ad3772b50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:34 np0005539564 systemd-machined[190128]: New machine qemu-91-instance-000000c3.
Nov 29 03:47:34 np0005539564 NetworkManager[48997]: <info>  [1764406054.0685] device (tap3bfd1c43-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:47:34 np0005539564 NetworkManager[48997]: <info>  [1764406054.0696] device (tap3bfd1c43-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.078 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[90b5d2ba-1c77-4e45-a778-20441bee9181]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:34 np0005539564 systemd[1]: Started Virtual Machine qemu-91-instance-000000c3.
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.094 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.095 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[439b4180-ad80-4e56-9b41-49352746e1fc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:34 np0005539564 ovn_controller[130591]: 2025-11-29T08:47:34Z|00785|binding|INFO|Setting lport 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 ovn-installed in OVS
Nov 29 03:47:34 np0005539564 ovn_controller[130591]: 2025-11-29T08:47:34Z|00786|binding|INFO|Setting lport 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 up in Southbound
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.102 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.127 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[c15901d4-4519-4911-813a-a2e34d989cb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.132 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3b2c19cd-3c84-4668-aa81-1caf38b02e9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:34 np0005539564 NetworkManager[48997]: <info>  [1764406054.1345] manager: (tap0636028a-90): new Veth device (/org/freedesktop/NetworkManager/Devices/361)
Nov 29 03:47:34 np0005539564 systemd-udevd[302232]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.174 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[b04a1e76-9e84-476f-b029-31e3c8141934]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.178 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[f7b7e6db-1ebe-4243-a373-217e9ccd3cb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:34 np0005539564 NetworkManager[48997]: <info>  [1764406054.2102] device (tap0636028a-90): carrier: link connected
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.217 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[0eeb544b-77b0-427a-a006-a1f8ecdb8377]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.240 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[20ac7d56-e3fa-4fb2-b23b-8d5525d14458]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0636028a-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:11:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 902889, 'reachable_time': 27612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302261, 'error': None, 'target': 'ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.259 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4d2ae24d-efe2-4ae1-b1a5-3d59eac23c17]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7f:1119'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 902889, 'tstamp': 902889}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302262, 'error': None, 'target': 'ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.282 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2294b392-3637-4165-aff6-1313ec19f719]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0636028a-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:11:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 902889, 'reachable_time': 27612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302263, 'error': None, 'target': 'ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.325 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4998e3b2-c1d9-4d1d-bef8-f848ab20ae1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.384 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.385 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.385 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.385 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.386 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.424 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[26884a18-bde6-41d2-a371-0e678926f26a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.426 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0636028a-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.427 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.428 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0636028a-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:47:34 np0005539564 NetworkManager[48997]: <info>  [1764406054.4324] manager: (tap0636028a-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/362)
Nov 29 03:47:34 np0005539564 kernel: tap0636028a-90: entered promiscuous mode
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.432 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.435 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.437 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0636028a-90, col_values=(('external_ids', {'iface-id': '58043efe-c991-4914-9f0a-2bba8af4c408'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.439 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:34 np0005539564 ovn_controller[130591]: 2025-11-29T08:47:34Z|00787|binding|INFO|Releasing lport 58043efe-c991-4914-9f0a-2bba8af4c408 from this chassis (sb_readonly=0)
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.465 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.471 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0636028a-96d5-4ad7-aa6e-9129edd44385.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0636028a-96d5-4ad7-aa6e-9129edd44385.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.473 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c9f39240-f7d1-4986-90ef-2856429ec7e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.477 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-0636028a-96d5-4ad7-aa6e-9129edd44385
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/0636028a-96d5-4ad7-aa6e-9129edd44385.pid.haproxy
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 0636028a-96d5-4ad7-aa6e-9129edd44385
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:47:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:34.479 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385', 'env', 'PROCESS_TAG=haproxy-0636028a-96d5-4ad7-aa6e-9129edd44385', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0636028a-96d5-4ad7-aa6e-9129edd44385.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.578 226310 DEBUG nova.compute.manager [req-629430c3-3717-4e82-9972-571a78fe13f4 req-683edcff-99dc-4397-8c38-8b6147ce8034 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Received event network-vif-plugged-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.579 226310 DEBUG oslo_concurrency.lockutils [req-629430c3-3717-4e82-9972-571a78fe13f4 req-683edcff-99dc-4397-8c38-8b6147ce8034 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.580 226310 DEBUG oslo_concurrency.lockutils [req-629430c3-3717-4e82-9972-571a78fe13f4 req-683edcff-99dc-4397-8c38-8b6147ce8034 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.580 226310 DEBUG oslo_concurrency.lockutils [req-629430c3-3717-4e82-9972-571a78fe13f4 req-683edcff-99dc-4397-8c38-8b6147ce8034 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.580 226310 DEBUG nova.compute.manager [req-629430c3-3717-4e82-9972-571a78fe13f4 req-683edcff-99dc-4397-8c38-8b6147ce8034 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Processing event network-vif-plugged-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:47:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:34.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.878 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406054.8778446, 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.880 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] VM Started (Lifecycle Event)#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.884 226310 DEBUG nova.compute.manager [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.904 226310 DEBUG nova.virt.libvirt.driver [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.908 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:47:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:47:34 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/954613710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.918 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.923 226310 INFO nova.virt.libvirt.driver [-] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Instance spawned successfully.#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.924 226310 DEBUG nova.virt.libvirt.driver [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.931 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.953 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:47:34 np0005539564 podman[302356]: 2025-11-29 08:47:34.955938995 +0000 UTC m=+0.078688511 container create 159f50175e68da1ecb32c6554f10b2efd1e5aa38250ced35d2021da6e36b4dd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.955 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406054.878012, 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.956 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.973 226310 DEBUG nova.virt.libvirt.driver [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.974 226310 DEBUG nova.virt.libvirt.driver [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.975 226310 DEBUG nova.virt.libvirt.driver [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.976 226310 DEBUG nova.virt.libvirt.driver [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.977 226310 DEBUG nova.virt.libvirt.driver [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:47:34 np0005539564 nova_compute[226295]: 2025-11-29 08:47:34.978 226310 DEBUG nova.virt.libvirt.driver [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:47:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:35 np0005539564 systemd[1]: Started libpod-conmon-159f50175e68da1ecb32c6554f10b2efd1e5aa38250ced35d2021da6e36b4dd9.scope.
Nov 29 03:47:35 np0005539564 podman[302356]: 2025-11-29 08:47:34.923114296 +0000 UTC m=+0.045863852 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:47:35 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:47:35 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/495d0c4f0d2d42c2267e91374f73a4e3d17fde8512c5aecf8b7d54ec8099263f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:35 np0005539564 podman[302356]: 2025-11-29 08:47:35.069807705 +0000 UTC m=+0.192557261 container init 159f50175e68da1ecb32c6554f10b2efd1e5aa38250ced35d2021da6e36b4dd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:47:35 np0005539564 podman[302356]: 2025-11-29 08:47:35.076340782 +0000 UTC m=+0.199090328 container start 159f50175e68da1ecb32c6554f10b2efd1e5aa38250ced35d2021da6e36b4dd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:47:35 np0005539564 neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385[302374]: [NOTICE]   (302378) : New worker (302380) forked
Nov 29 03:47:35 np0005539564 neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385[302374]: [NOTICE]   (302378) : Loading success.
Nov 29 03:47:35 np0005539564 nova_compute[226295]: 2025-11-29 08:47:35.157 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:47:35 np0005539564 nova_compute[226295]: 2025-11-29 08:47:35.161 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406054.8897166, 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:47:35 np0005539564 nova_compute[226295]: 2025-11-29 08:47:35.161 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:47:35 np0005539564 nova_compute[226295]: 2025-11-29 08:47:35.187 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:47:35 np0005539564 nova_compute[226295]: 2025-11-29 08:47:35.191 226310 INFO nova.compute.manager [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Took 7.78 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:47:35 np0005539564 nova_compute[226295]: 2025-11-29 08:47:35.191 226310 DEBUG nova.compute.manager [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:47:35 np0005539564 nova_compute[226295]: 2025-11-29 08:47:35.194 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:47:35 np0005539564 nova_compute[226295]: 2025-11-29 08:47:35.201 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000c3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:47:35 np0005539564 nova_compute[226295]: 2025-11-29 08:47:35.201 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000c3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:47:35 np0005539564 nova_compute[226295]: 2025-11-29 08:47:35.236 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:47:35 np0005539564 nova_compute[226295]: 2025-11-29 08:47:35.265 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:35 np0005539564 nova_compute[226295]: 2025-11-29 08:47:35.271 226310 INFO nova.compute.manager [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Took 8.89 seconds to build instance.#033[00m
Nov 29 03:47:35 np0005539564 nova_compute[226295]: 2025-11-29 08:47:35.296 226310 DEBUG oslo_concurrency.lockutils [None req-5d9cf05b-ae8a-43ca-bd17-54c042caa9fd 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:35 np0005539564 nova_compute[226295]: 2025-11-29 08:47:35.403 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:47:35 np0005539564 nova_compute[226295]: 2025-11-29 08:47:35.404 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4059MB free_disk=20.96752166748047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:47:35 np0005539564 nova_compute[226295]: 2025-11-29 08:47:35.405 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:35 np0005539564 nova_compute[226295]: 2025-11-29 08:47:35.405 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:35 np0005539564 nova_compute[226295]: 2025-11-29 08:47:35.488 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:47:35 np0005539564 nova_compute[226295]: 2025-11-29 08:47:35.489 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:47:35 np0005539564 nova_compute[226295]: 2025-11-29 08:47:35.489 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:47:35 np0005539564 nova_compute[226295]: 2025-11-29 08:47:35.526 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:35.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:47:35 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3787107875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:47:35 np0005539564 nova_compute[226295]: 2025-11-29 08:47:35.976 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:35 np0005539564 nova_compute[226295]: 2025-11-29 08:47:35.983 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:47:36 np0005539564 nova_compute[226295]: 2025-11-29 08:47:36.003 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:47:36 np0005539564 nova_compute[226295]: 2025-11-29 08:47:36.025 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:47:36 np0005539564 nova_compute[226295]: 2025-11-29 08:47:36.026 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:36 np0005539564 nova_compute[226295]: 2025-11-29 08:47:36.818 226310 DEBUG nova.compute.manager [req-e15345df-db8e-4482-b607-aed8dc784cc4 req-beec74c4-4dd3-4379-914e-fb2faa48f49f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Received event network-vif-plugged-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:47:36 np0005539564 nova_compute[226295]: 2025-11-29 08:47:36.819 226310 DEBUG oslo_concurrency.lockutils [req-e15345df-db8e-4482-b607-aed8dc784cc4 req-beec74c4-4dd3-4379-914e-fb2faa48f49f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:36 np0005539564 nova_compute[226295]: 2025-11-29 08:47:36.819 226310 DEBUG oslo_concurrency.lockutils [req-e15345df-db8e-4482-b607-aed8dc784cc4 req-beec74c4-4dd3-4379-914e-fb2faa48f49f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:36 np0005539564 nova_compute[226295]: 2025-11-29 08:47:36.820 226310 DEBUG oslo_concurrency.lockutils [req-e15345df-db8e-4482-b607-aed8dc784cc4 req-beec74c4-4dd3-4379-914e-fb2faa48f49f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:36 np0005539564 nova_compute[226295]: 2025-11-29 08:47:36.820 226310 DEBUG nova.compute.manager [req-e15345df-db8e-4482-b607-aed8dc784cc4 req-beec74c4-4dd3-4379-914e-fb2faa48f49f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] No waiting events found dispatching network-vif-plugged-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:47:36 np0005539564 nova_compute[226295]: 2025-11-29 08:47:36.821 226310 WARNING nova.compute.manager [req-e15345df-db8e-4482-b607-aed8dc784cc4 req-beec74c4-4dd3-4379-914e-fb2faa48f49f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Received unexpected event network-vif-plugged-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:47:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:36.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:37 np0005539564 nova_compute[226295]: 2025-11-29 08:47:37.669 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:37.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:38.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:39.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:40 np0005539564 nova_compute[226295]: 2025-11-29 08:47:40.265 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:40 np0005539564 NetworkManager[48997]: <info>  [1764406060.4225] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/363)
Nov 29 03:47:40 np0005539564 nova_compute[226295]: 2025-11-29 08:47:40.420 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:40 np0005539564 NetworkManager[48997]: <info>  [1764406060.4245] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/364)
Nov 29 03:47:40 np0005539564 ovn_controller[130591]: 2025-11-29T08:47:40Z|00788|binding|INFO|Releasing lport 58043efe-c991-4914-9f0a-2bba8af4c408 from this chassis (sb_readonly=0)
Nov 29 03:47:40 np0005539564 nova_compute[226295]: 2025-11-29 08:47:40.543 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:40 np0005539564 ovn_controller[130591]: 2025-11-29T08:47:40Z|00789|binding|INFO|Releasing lport 58043efe-c991-4914-9f0a-2bba8af4c408 from this chassis (sb_readonly=0)
Nov 29 03:47:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:40.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:40 np0005539564 nova_compute[226295]: 2025-11-29 08:47:40.941 226310 DEBUG nova.compute.manager [req-66cf84f1-2fbd-46ed-9d5d-ddd9c3b20d29 req-a7361086-29d9-40ff-aba7-fcc4746f502c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Received event network-changed-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:47:40 np0005539564 nova_compute[226295]: 2025-11-29 08:47:40.941 226310 DEBUG nova.compute.manager [req-66cf84f1-2fbd-46ed-9d5d-ddd9c3b20d29 req-a7361086-29d9-40ff-aba7-fcc4746f502c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Refreshing instance network info cache due to event network-changed-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:47:40 np0005539564 nova_compute[226295]: 2025-11-29 08:47:40.942 226310 DEBUG oslo_concurrency.lockutils [req-66cf84f1-2fbd-46ed-9d5d-ddd9c3b20d29 req-a7361086-29d9-40ff-aba7-fcc4746f502c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:47:40 np0005539564 nova_compute[226295]: 2025-11-29 08:47:40.942 226310 DEBUG oslo_concurrency.lockutils [req-66cf84f1-2fbd-46ed-9d5d-ddd9c3b20d29 req-a7361086-29d9-40ff-aba7-fcc4746f502c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:47:40 np0005539564 nova_compute[226295]: 2025-11-29 08:47:40.942 226310 DEBUG nova.network.neutron [req-66cf84f1-2fbd-46ed-9d5d-ddd9c3b20d29 req-a7361086-29d9-40ff-aba7-fcc4746f502c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Refreshing network info cache for port 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:47:41 np0005539564 nova_compute[226295]: 2025-11-29 08:47:41.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:41 np0005539564 nova_compute[226295]: 2025-11-29 08:47:41.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:47:41 np0005539564 nova_compute[226295]: 2025-11-29 08:47:41.364 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:47:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:41.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:42 np0005539564 nova_compute[226295]: 2025-11-29 08:47:42.725 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:42.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:43.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:47:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:44.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:47:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:45 np0005539564 nova_compute[226295]: 2025-11-29 08:47:45.212 226310 DEBUG nova.network.neutron [req-66cf84f1-2fbd-46ed-9d5d-ddd9c3b20d29 req-a7361086-29d9-40ff-aba7-fcc4746f502c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Updated VIF entry in instance network info cache for port 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:47:45 np0005539564 nova_compute[226295]: 2025-11-29 08:47:45.213 226310 DEBUG nova.network.neutron [req-66cf84f1-2fbd-46ed-9d5d-ddd9c3b20d29 req-a7361086-29d9-40ff-aba7-fcc4746f502c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Updating instance_info_cache with network_info: [{"id": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "address": "fa:16:3e:03:3b:02", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bfd1c43-1b", "ovs_interfaceid": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:47:45 np0005539564 nova_compute[226295]: 2025-11-29 08:47:45.245 226310 DEBUG oslo_concurrency.lockutils [req-66cf84f1-2fbd-46ed-9d5d-ddd9c3b20d29 req-a7361086-29d9-40ff-aba7-fcc4746f502c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:47:45 np0005539564 nova_compute[226295]: 2025-11-29 08:47:45.267 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:45.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:47:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:46.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:47:47 np0005539564 nova_compute[226295]: 2025-11-29 08:47:47.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:47 np0005539564 nova_compute[226295]: 2025-11-29 08:47:47.730 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:47.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:47 np0005539564 nova_compute[226295]: 2025-11-29 08:47:47.931 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:48 np0005539564 nova_compute[226295]: 2025-11-29 08:47:48.382 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:48 np0005539564 nova_compute[226295]: 2025-11-29 08:47:48.383 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:47:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:48.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:48 np0005539564 ovn_controller[130591]: 2025-11-29T08:47:48Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:03:3b:02 10.100.0.10
Nov 29 03:47:48 np0005539564 ovn_controller[130591]: 2025-11-29T08:47:48Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:03:3b:02 10.100.0.10
Nov 29 03:47:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:49.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:50 np0005539564 nova_compute[226295]: 2025-11-29 08:47:50.268 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:50.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:51.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:52 np0005539564 nova_compute[226295]: 2025-11-29 08:47:52.736 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:52.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:53.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:53 np0005539564 nova_compute[226295]: 2025-11-29 08:47:53.997 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:54.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:55 np0005539564 nova_compute[226295]: 2025-11-29 08:47:55.274 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:47:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:55.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:47:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:56.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:57 np0005539564 nova_compute[226295]: 2025-11-29 08:47:57.465 226310 DEBUG oslo_concurrency.lockutils [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:57 np0005539564 nova_compute[226295]: 2025-11-29 08:47:57.466 226310 DEBUG oslo_concurrency.lockutils [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:57 np0005539564 nova_compute[226295]: 2025-11-29 08:47:57.466 226310 INFO nova.compute.manager [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Shelving#033[00m
Nov 29 03:47:57 np0005539564 nova_compute[226295]: 2025-11-29 08:47:57.494 226310 DEBUG nova.virt.libvirt.driver [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:47:57 np0005539564 nova_compute[226295]: 2025-11-29 08:47:57.740 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:57.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:58.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:59 np0005539564 kernel: tap3bfd1c43-1b (unregistering): left promiscuous mode
Nov 29 03:47:59 np0005539564 NetworkManager[48997]: <info>  [1764406079.8327] device (tap3bfd1c43-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:47:59 np0005539564 ovn_controller[130591]: 2025-11-29T08:47:59Z|00790|binding|INFO|Releasing lport 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 from this chassis (sb_readonly=0)
Nov 29 03:47:59 np0005539564 ovn_controller[130591]: 2025-11-29T08:47:59Z|00791|binding|INFO|Setting lport 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 down in Southbound
Nov 29 03:47:59 np0005539564 ovn_controller[130591]: 2025-11-29T08:47:59Z|00792|binding|INFO|Removing iface tap3bfd1c43-1b ovn-installed in OVS
Nov 29 03:47:59 np0005539564 nova_compute[226295]: 2025-11-29 08:47:59.851 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:47:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:59.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:59.866 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:3b:02 10.100.0.10'], port_security=['fa:16:3e:03:3b:02 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6eda2a5e-bf92-4d34-b21e-ca4eaf01728b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0636028a-96d5-4ad7-aa6e-9129edd44385', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5e836f8387a492c8119be72f1fb9980', 'neutron:revision_number': '4', 'neutron:security_group_ids': '36d56553-7b52-4135-ab01-9fd93eb2713f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf231669-438b-4750-8f96-dc7fed049a6a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=3bfd1c43-1b2b-4fa1-8eb4-e366844ea174) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:47:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:59.868 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 in datapath 0636028a-96d5-4ad7-aa6e-9129edd44385 unbound from our chassis#033[00m
Nov 29 03:47:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:59.870 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0636028a-96d5-4ad7-aa6e-9129edd44385, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:47:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:59.872 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2471beba-d01f-4306-9f8d-7aa19adcf53d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:47:59.873 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385 namespace which is not needed anymore#033[00m
Nov 29 03:47:59 np0005539564 nova_compute[226295]: 2025-11-29 08:47:59.879 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:59 np0005539564 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000c3.scope: Deactivated successfully.
Nov 29 03:47:59 np0005539564 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000c3.scope: Consumed 14.595s CPU time.
Nov 29 03:47:59 np0005539564 systemd-machined[190128]: Machine qemu-91-instance-000000c3 terminated.
Nov 29 03:47:59 np0005539564 podman[302416]: 2025-11-29 08:47:59.949707693 +0000 UTC m=+0.083413197 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd)
Nov 29 03:47:59 np0005539564 podman[302417]: 2025-11-29 08:47:59.967778932 +0000 UTC m=+0.102421822 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:47:59 np0005539564 podman[302413]: 2025-11-29 08:47:59.992660526 +0000 UTC m=+0.125317572 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible)
Nov 29 03:48:00 np0005539564 neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385[302374]: [NOTICE]   (302378) : haproxy version is 2.8.14-c23fe91
Nov 29 03:48:00 np0005539564 neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385[302374]: [NOTICE]   (302378) : path to executable is /usr/sbin/haproxy
Nov 29 03:48:00 np0005539564 neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385[302374]: [ALERT]    (302378) : Current worker (302380) exited with code 143 (Terminated)
Nov 29 03:48:00 np0005539564 neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385[302374]: [WARNING]  (302378) : All workers exited. Exiting... (0)
Nov 29 03:48:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:00 np0005539564 systemd[1]: libpod-159f50175e68da1ecb32c6554f10b2efd1e5aa38250ced35d2021da6e36b4dd9.scope: Deactivated successfully.
Nov 29 03:48:00 np0005539564 podman[302490]: 2025-11-29 08:48:00.007681772 +0000 UTC m=+0.044625599 container died 159f50175e68da1ecb32c6554f10b2efd1e5aa38250ced35d2021da6e36b4dd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:48:00 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-159f50175e68da1ecb32c6554f10b2efd1e5aa38250ced35d2021da6e36b4dd9-userdata-shm.mount: Deactivated successfully.
Nov 29 03:48:00 np0005539564 systemd[1]: var-lib-containers-storage-overlay-495d0c4f0d2d42c2267e91374f73a4e3d17fde8512c5aecf8b7d54ec8099263f-merged.mount: Deactivated successfully.
Nov 29 03:48:00 np0005539564 podman[302490]: 2025-11-29 08:48:00.042749571 +0000 UTC m=+0.079693398 container cleanup 159f50175e68da1ecb32c6554f10b2efd1e5aa38250ced35d2021da6e36b4dd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:48:00 np0005539564 systemd[1]: libpod-conmon-159f50175e68da1ecb32c6554f10b2efd1e5aa38250ced35d2021da6e36b4dd9.scope: Deactivated successfully.
Nov 29 03:48:00 np0005539564 podman[302526]: 2025-11-29 08:48:00.120121544 +0000 UTC m=+0.055593796 container remove 159f50175e68da1ecb32c6554f10b2efd1e5aa38250ced35d2021da6e36b4dd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 03:48:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:00.127 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e1e14fc3-4dc2-4bd6-ba7f-b65690b19156]: (4, ('Sat Nov 29 08:47:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385 (159f50175e68da1ecb32c6554f10b2efd1e5aa38250ced35d2021da6e36b4dd9)\n159f50175e68da1ecb32c6554f10b2efd1e5aa38250ced35d2021da6e36b4dd9\nSat Nov 29 08:48:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385 (159f50175e68da1ecb32c6554f10b2efd1e5aa38250ced35d2021da6e36b4dd9)\n159f50175e68da1ecb32c6554f10b2efd1e5aa38250ced35d2021da6e36b4dd9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:00.129 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e36144be-5af1-4690-a42c-8543cbb9a6a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:00.130 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0636028a-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:00 np0005539564 nova_compute[226295]: 2025-11-29 08:48:00.132 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:00 np0005539564 kernel: tap0636028a-90: left promiscuous mode
Nov 29 03:48:00 np0005539564 nova_compute[226295]: 2025-11-29 08:48:00.193 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:00.198 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[08972b14-cea1-4e1e-bbad-8f4300b2d367]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:00.211 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7389d494-ccff-49e2-ba18-83f652e07f58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:00.213 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0c8fc45b-f225-4401-8116-476f4b4f3c9a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:00.237 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1bcd7490-6013-4129-b9d2-fb48767b98fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 902880, 'reachable_time': 36685, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302554, 'error': None, 'target': 'ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:00.241 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:48:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:00.242 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[c6ca9f3c-2683-4256-95ba-2b0424823bbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:00 np0005539564 systemd[1]: run-netns-ovnmeta\x2d0636028a\x2d96d5\x2d4ad7\x2daa6e\x2d9129edd44385.mount: Deactivated successfully.
Nov 29 03:48:00 np0005539564 nova_compute[226295]: 2025-11-29 08:48:00.276 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:00 np0005539564 nova_compute[226295]: 2025-11-29 08:48:00.518 226310 INFO nova.virt.libvirt.driver [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:48:00 np0005539564 nova_compute[226295]: 2025-11-29 08:48:00.525 226310 INFO nova.virt.libvirt.driver [-] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Instance destroyed successfully.#033[00m
Nov 29 03:48:00 np0005539564 nova_compute[226295]: 2025-11-29 08:48:00.526 226310 DEBUG nova.objects.instance [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lazy-loading 'numa_topology' on Instance uuid 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:00 np0005539564 nova_compute[226295]: 2025-11-29 08:48:00.853 226310 INFO nova.virt.libvirt.driver [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Beginning cold snapshot process#033[00m
Nov 29 03:48:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:00.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:01 np0005539564 nova_compute[226295]: 2025-11-29 08:48:01.072 226310 DEBUG nova.virt.libvirt.imagebackend [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:48:01 np0005539564 nova_compute[226295]: 2025-11-29 08:48:01.335 226310 DEBUG nova.compute.manager [req-e29c2db6-c629-42db-850f-0b65b1388249 req-e7de537f-15ff-4fc9-adaf-3e9eafdae360 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Received event network-vif-unplugged-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:01 np0005539564 nova_compute[226295]: 2025-11-29 08:48:01.336 226310 DEBUG oslo_concurrency.lockutils [req-e29c2db6-c629-42db-850f-0b65b1388249 req-e7de537f-15ff-4fc9-adaf-3e9eafdae360 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:01 np0005539564 nova_compute[226295]: 2025-11-29 08:48:01.337 226310 DEBUG oslo_concurrency.lockutils [req-e29c2db6-c629-42db-850f-0b65b1388249 req-e7de537f-15ff-4fc9-adaf-3e9eafdae360 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:01 np0005539564 nova_compute[226295]: 2025-11-29 08:48:01.337 226310 DEBUG oslo_concurrency.lockutils [req-e29c2db6-c629-42db-850f-0b65b1388249 req-e7de537f-15ff-4fc9-adaf-3e9eafdae360 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:01 np0005539564 nova_compute[226295]: 2025-11-29 08:48:01.337 226310 DEBUG nova.compute.manager [req-e29c2db6-c629-42db-850f-0b65b1388249 req-e7de537f-15ff-4fc9-adaf-3e9eafdae360 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] No waiting events found dispatching network-vif-unplugged-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:48:01 np0005539564 nova_compute[226295]: 2025-11-29 08:48:01.338 226310 WARNING nova.compute.manager [req-e29c2db6-c629-42db-850f-0b65b1388249 req-e7de537f-15ff-4fc9-adaf-3e9eafdae360 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Received unexpected event network-vif-unplugged-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Nov 29 03:48:01 np0005539564 nova_compute[226295]: 2025-11-29 08:48:01.411 226310 DEBUG nova.storage.rbd_utils [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] creating snapshot(e9ddc3ddd8134fd4874c7538871c27f2) on rbd image(6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:48:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:01.423 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=77, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=76) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:48:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:01.426 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:48:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:01.428 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '77'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:01 np0005539564 nova_compute[226295]: 2025-11-29 08:48:01.465 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:01.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e411 e411: 3 total, 3 up, 3 in
Nov 29 03:48:02 np0005539564 nova_compute[226295]: 2025-11-29 08:48:02.326 226310 DEBUG nova.storage.rbd_utils [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] cloning vms/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk@e9ddc3ddd8134fd4874c7538871c27f2 to images/73ec6614-8649-4526-8040-59b3499a752c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:48:02 np0005539564 nova_compute[226295]: 2025-11-29 08:48:02.473 226310 DEBUG nova.storage.rbd_utils [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] flattening images/73ec6614-8649-4526-8040-59b3499a752c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:48:02 np0005539564 nova_compute[226295]: 2025-11-29 08:48:02.744 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:02 np0005539564 nova_compute[226295]: 2025-11-29 08:48:02.861 226310 DEBUG nova.storage.rbd_utils [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] removing snapshot(e9ddc3ddd8134fd4874c7538871c27f2) on rbd image(6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:48:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:02.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e412 e412: 3 total, 3 up, 3 in
Nov 29 03:48:03 np0005539564 nova_compute[226295]: 2025-11-29 08:48:03.395 226310 DEBUG nova.storage.rbd_utils [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] creating snapshot(snap) on rbd image(73ec6614-8649-4526-8040-59b3499a752c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:48:03 np0005539564 nova_compute[226295]: 2025-11-29 08:48:03.492 226310 DEBUG nova.compute.manager [req-77be3857-757f-47f6-8e9a-c0cee560c801 req-c63682d3-29ba-4c26-8279-3d0b27ef0b2d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Received event network-vif-plugged-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:03 np0005539564 nova_compute[226295]: 2025-11-29 08:48:03.493 226310 DEBUG oslo_concurrency.lockutils [req-77be3857-757f-47f6-8e9a-c0cee560c801 req-c63682d3-29ba-4c26-8279-3d0b27ef0b2d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:03 np0005539564 nova_compute[226295]: 2025-11-29 08:48:03.493 226310 DEBUG oslo_concurrency.lockutils [req-77be3857-757f-47f6-8e9a-c0cee560c801 req-c63682d3-29ba-4c26-8279-3d0b27ef0b2d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:03 np0005539564 nova_compute[226295]: 2025-11-29 08:48:03.493 226310 DEBUG oslo_concurrency.lockutils [req-77be3857-757f-47f6-8e9a-c0cee560c801 req-c63682d3-29ba-4c26-8279-3d0b27ef0b2d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:03 np0005539564 nova_compute[226295]: 2025-11-29 08:48:03.493 226310 DEBUG nova.compute.manager [req-77be3857-757f-47f6-8e9a-c0cee560c801 req-c63682d3-29ba-4c26-8279-3d0b27ef0b2d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] No waiting events found dispatching network-vif-plugged-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:48:03 np0005539564 nova_compute[226295]: 2025-11-29 08:48:03.494 226310 WARNING nova.compute.manager [req-77be3857-757f-47f6-8e9a-c0cee560c801 req-c63682d3-29ba-4c26-8279-3d0b27ef0b2d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Received unexpected event network-vif-plugged-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Nov 29 03:48:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:03.764 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:03.764 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:03.764 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:03.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e413 e413: 3 total, 3 up, 3 in
Nov 29 03:48:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:04.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:05 np0005539564 nova_compute[226295]: 2025-11-29 08:48:05.278 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:48:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:05.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:48:06 np0005539564 nova_compute[226295]: 2025-11-29 08:48:06.464 226310 INFO nova.virt.libvirt.driver [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Snapshot image upload complete#033[00m
Nov 29 03:48:06 np0005539564 nova_compute[226295]: 2025-11-29 08:48:06.465 226310 DEBUG nova.compute.manager [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:48:06 np0005539564 nova_compute[226295]: 2025-11-29 08:48:06.582 226310 INFO nova.compute.manager [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Shelve offloading#033[00m
Nov 29 03:48:06 np0005539564 nova_compute[226295]: 2025-11-29 08:48:06.590 226310 INFO nova.virt.libvirt.driver [-] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Instance destroyed successfully.#033[00m
Nov 29 03:48:06 np0005539564 nova_compute[226295]: 2025-11-29 08:48:06.591 226310 DEBUG nova.compute.manager [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:48:06 np0005539564 nova_compute[226295]: 2025-11-29 08:48:06.593 226310 DEBUG oslo_concurrency.lockutils [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "refresh_cache-6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:48:06 np0005539564 nova_compute[226295]: 2025-11-29 08:48:06.593 226310 DEBUG oslo_concurrency.lockutils [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquired lock "refresh_cache-6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:48:06 np0005539564 nova_compute[226295]: 2025-11-29 08:48:06.593 226310 DEBUG nova.network.neutron [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:48:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:06.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:07 np0005539564 nova_compute[226295]: 2025-11-29 08:48:07.748 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:07.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:08 np0005539564 nova_compute[226295]: 2025-11-29 08:48:08.655 226310 DEBUG nova.network.neutron [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Updating instance_info_cache with network_info: [{"id": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "address": "fa:16:3e:03:3b:02", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bfd1c43-1b", "ovs_interfaceid": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:48:08 np0005539564 nova_compute[226295]: 2025-11-29 08:48:08.693 226310 DEBUG oslo_concurrency.lockutils [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Releasing lock "refresh_cache-6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:48:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:08.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:09 np0005539564 nova_compute[226295]: 2025-11-29 08:48:09.355 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:09 np0005539564 nova_compute[226295]: 2025-11-29 08:48:09.543 226310 INFO nova.virt.libvirt.driver [-] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Instance destroyed successfully.#033[00m
Nov 29 03:48:09 np0005539564 nova_compute[226295]: 2025-11-29 08:48:09.543 226310 DEBUG nova.objects.instance [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lazy-loading 'resources' on Instance uuid 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:09 np0005539564 nova_compute[226295]: 2025-11-29 08:48:09.567 226310 DEBUG nova.virt.libvirt.vif [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:47:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-74860485',display_name='tempest-TestShelveInstance-server-74860485',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-74860485',id=195,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKT08Tnkn1UqV0bF/J8wpnzMCF6Zkhpmk/usL6YnI+Le5YAvtauWosLF4Kvj259R/59WcHeLG4Cqd2MmjrgXGd9Nw0BxGgZcDldkgLq1Xl0jjL8yBMwXntpEhSzBHi8sNQ==',key_name='tempest-TestShelveInstance-876097080',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:47:35Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c5e836f8387a492c8119be72f1fb9980',ramdisk_id='',reservation_id='r-4rgnqe84',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1715482181',owner_user_name='tempest-TestShelveInstance-1715482181-project-member',shelved_at='2025-11-29T08:48:06.464838',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='73ec6614-8649-4526-8040-59b3499a752c'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:48:00Z,user_data=None,user_id='5dbbf4fd34004538ad08aa4aa6ab8096',uuid=6eda2a5e-bf92-4d34-b21e-ca4eaf01728b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "address": "fa:16:3e:03:3b:02", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bfd1c43-1b", "ovs_interfaceid": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:48:09 np0005539564 nova_compute[226295]: 2025-11-29 08:48:09.567 226310 DEBUG nova.network.os_vif_util [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Converting VIF {"id": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "address": "fa:16:3e:03:3b:02", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bfd1c43-1b", "ovs_interfaceid": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:48:09 np0005539564 nova_compute[226295]: 2025-11-29 08:48:09.568 226310 DEBUG nova.network.os_vif_util [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:3b:02,bridge_name='br-int',has_traffic_filtering=True,id=3bfd1c43-1b2b-4fa1-8eb4-e366844ea174,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bfd1c43-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:48:09 np0005539564 nova_compute[226295]: 2025-11-29 08:48:09.568 226310 DEBUG os_vif [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:3b:02,bridge_name='br-int',has_traffic_filtering=True,id=3bfd1c43-1b2b-4fa1-8eb4-e366844ea174,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bfd1c43-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:48:09 np0005539564 nova_compute[226295]: 2025-11-29 08:48:09.569 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:09 np0005539564 nova_compute[226295]: 2025-11-29 08:48:09.570 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bfd1c43-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:09 np0005539564 nova_compute[226295]: 2025-11-29 08:48:09.571 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:09 np0005539564 nova_compute[226295]: 2025-11-29 08:48:09.573 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:48:09 np0005539564 nova_compute[226295]: 2025-11-29 08:48:09.574 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:09 np0005539564 nova_compute[226295]: 2025-11-29 08:48:09.578 226310 INFO os_vif [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:3b:02,bridge_name='br-int',has_traffic_filtering=True,id=3bfd1c43-1b2b-4fa1-8eb4-e366844ea174,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bfd1c43-1b')#033[00m
Nov 29 03:48:09 np0005539564 nova_compute[226295]: 2025-11-29 08:48:09.654 226310 DEBUG nova.compute.manager [req-15497e35-0b35-41bd-a339-d3065ac140d1 req-84e9978f-6929-4976-9c26-8dcd95f6091c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Received event network-changed-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:09 np0005539564 nova_compute[226295]: 2025-11-29 08:48:09.654 226310 DEBUG nova.compute.manager [req-15497e35-0b35-41bd-a339-d3065ac140d1 req-84e9978f-6929-4976-9c26-8dcd95f6091c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Refreshing instance network info cache due to event network-changed-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:48:09 np0005539564 nova_compute[226295]: 2025-11-29 08:48:09.655 226310 DEBUG oslo_concurrency.lockutils [req-15497e35-0b35-41bd-a339-d3065ac140d1 req-84e9978f-6929-4976-9c26-8dcd95f6091c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:48:09 np0005539564 nova_compute[226295]: 2025-11-29 08:48:09.655 226310 DEBUG oslo_concurrency.lockutils [req-15497e35-0b35-41bd-a339-d3065ac140d1 req-84e9978f-6929-4976-9c26-8dcd95f6091c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:48:09 np0005539564 nova_compute[226295]: 2025-11-29 08:48:09.655 226310 DEBUG nova.network.neutron [req-15497e35-0b35-41bd-a339-d3065ac140d1 req-84e9978f-6929-4976-9c26-8dcd95f6091c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Refreshing network info cache for port 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:48:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:09.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:10 np0005539564 nova_compute[226295]: 2025-11-29 08:48:10.280 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:10 np0005539564 nova_compute[226295]: 2025-11-29 08:48:10.763 226310 INFO nova.virt.libvirt.driver [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Deleting instance files /var/lib/nova/instances/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_del#033[00m
Nov 29 03:48:10 np0005539564 nova_compute[226295]: 2025-11-29 08:48:10.765 226310 INFO nova.virt.libvirt.driver [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Deletion of /var/lib/nova/instances/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_del complete#033[00m
Nov 29 03:48:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:10.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:10 np0005539564 nova_compute[226295]: 2025-11-29 08:48:10.909 226310 INFO nova.scheduler.client.report [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Deleted allocations for instance 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b#033[00m
Nov 29 03:48:11 np0005539564 nova_compute[226295]: 2025-11-29 08:48:11.075 226310 DEBUG oslo_concurrency.lockutils [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:11 np0005539564 nova_compute[226295]: 2025-11-29 08:48:11.076 226310 DEBUG oslo_concurrency.lockutils [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:11 np0005539564 nova_compute[226295]: 2025-11-29 08:48:11.106 226310 DEBUG oslo_concurrency.processutils [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:48:11 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2815288591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:48:11 np0005539564 nova_compute[226295]: 2025-11-29 08:48:11.584 226310 DEBUG oslo_concurrency.processutils [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:11 np0005539564 nova_compute[226295]: 2025-11-29 08:48:11.594 226310 DEBUG nova.compute.provider_tree [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:48:11 np0005539564 nova_compute[226295]: 2025-11-29 08:48:11.625 226310 DEBUG nova.scheduler.client.report [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:48:11 np0005539564 nova_compute[226295]: 2025-11-29 08:48:11.667 226310 DEBUG oslo_concurrency.lockutils [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:11 np0005539564 nova_compute[226295]: 2025-11-29 08:48:11.755 226310 DEBUG oslo_concurrency.lockutils [None req-ddeaca02-d420-476a-a59e-edfcf59fe27d 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 14.289s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:11.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e414 e414: 3 total, 3 up, 3 in
Nov 29 03:48:12 np0005539564 nova_compute[226295]: 2025-11-29 08:48:12.183 226310 DEBUG nova.network.neutron [req-15497e35-0b35-41bd-a339-d3065ac140d1 req-84e9978f-6929-4976-9c26-8dcd95f6091c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Updated VIF entry in instance network info cache for port 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:48:12 np0005539564 nova_compute[226295]: 2025-11-29 08:48:12.184 226310 DEBUG nova.network.neutron [req-15497e35-0b35-41bd-a339-d3065ac140d1 req-84e9978f-6929-4976-9c26-8dcd95f6091c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Updating instance_info_cache with network_info: [{"id": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "address": "fa:16:3e:03:3b:02", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": null, "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap3bfd1c43-1b", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:48:12 np0005539564 nova_compute[226295]: 2025-11-29 08:48:12.223 226310 DEBUG oslo_concurrency.lockutils [req-15497e35-0b35-41bd-a339-d3065ac140d1 req-84e9978f-6929-4976-9c26-8dcd95f6091c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:48:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:12.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:13.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:14 np0005539564 nova_compute[226295]: 2025-11-29 08:48:14.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:14 np0005539564 nova_compute[226295]: 2025-11-29 08:48:14.574 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:14.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:15 np0005539564 nova_compute[226295]: 2025-11-29 08:48:15.078 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406080.077168, 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:48:15 np0005539564 nova_compute[226295]: 2025-11-29 08:48:15.079 226310 INFO nova.compute.manager [-] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:48:15 np0005539564 nova_compute[226295]: 2025-11-29 08:48:15.120 226310 DEBUG nova.compute.manager [None req-4d308308-2d94-4608-8942-74637925a330 - - - - - -] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:48:15 np0005539564 nova_compute[226295]: 2025-11-29 08:48:15.283 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:15 np0005539564 nova_compute[226295]: 2025-11-29 08:48:15.335 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:15.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:16.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:17 np0005539564 nova_compute[226295]: 2025-11-29 08:48:17.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:17 np0005539564 nova_compute[226295]: 2025-11-29 08:48:17.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:48:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:17.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:48:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:18.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:48:19 np0005539564 nova_compute[226295]: 2025-11-29 08:48:19.577 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:19.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:20 np0005539564 nova_compute[226295]: 2025-11-29 08:48:20.285 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:20.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:21 np0005539564 nova_compute[226295]: 2025-11-29 08:48:21.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:21 np0005539564 nova_compute[226295]: 2025-11-29 08:48:21.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:21.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:22 np0005539564 nova_compute[226295]: 2025-11-29 08:48:22.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:22 np0005539564 nova_compute[226295]: 2025-11-29 08:48:22.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:48:22 np0005539564 nova_compute[226295]: 2025-11-29 08:48:22.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:48:22 np0005539564 nova_compute[226295]: 2025-11-29 08:48:22.378 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:48:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:22.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:23.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:24 np0005539564 nova_compute[226295]: 2025-11-29 08:48:24.581 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:24.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:25 np0005539564 nova_compute[226295]: 2025-11-29 08:48:25.287 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e415 e415: 3 total, 3 up, 3 in
Nov 29 03:48:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:25.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:26.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:27 np0005539564 nova_compute[226295]: 2025-11-29 08:48:27.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:27.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:28.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:29 np0005539564 nova_compute[226295]: 2025-11-29 08:48:29.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:29 np0005539564 nova_compute[226295]: 2025-11-29 08:48:29.621 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:48:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:48:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:48:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:29.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:30 np0005539564 nova_compute[226295]: 2025-11-29 08:48:30.291 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:30 np0005539564 podman[302870]: 2025-11-29 08:48:30.546759614 +0000 UTC m=+0.090601123 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0)
Nov 29 03:48:30 np0005539564 podman[302871]: 2025-11-29 08:48:30.551191143 +0000 UTC m=+0.092526164 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 03:48:30 np0005539564 podman[302869]: 2025-11-29 08:48:30.573726083 +0000 UTC m=+0.123123052 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:48:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:30.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:31.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:32.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:33.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:34 np0005539564 nova_compute[226295]: 2025-11-29 08:48:34.686 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:34.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:35 np0005539564 nova_compute[226295]: 2025-11-29 08:48:35.292 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:35 np0005539564 nova_compute[226295]: 2025-11-29 08:48:35.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:35 np0005539564 nova_compute[226295]: 2025-11-29 08:48:35.553 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:35 np0005539564 nova_compute[226295]: 2025-11-29 08:48:35.554 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:35 np0005539564 nova_compute[226295]: 2025-11-29 08:48:35.554 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:35 np0005539564 nova_compute[226295]: 2025-11-29 08:48:35.555 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:48:35 np0005539564 nova_compute[226295]: 2025-11-29 08:48:35.556 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:35.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:48:36 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1461582722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:48:36 np0005539564 nova_compute[226295]: 2025-11-29 08:48:36.068 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:36 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:48:36 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:48:36 np0005539564 nova_compute[226295]: 2025-11-29 08:48:36.241 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:48:36 np0005539564 nova_compute[226295]: 2025-11-29 08:48:36.242 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4196MB free_disk=20.897106170654297GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:48:36 np0005539564 nova_compute[226295]: 2025-11-29 08:48:36.242 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:36 np0005539564 nova_compute[226295]: 2025-11-29 08:48:36.242 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:36 np0005539564 nova_compute[226295]: 2025-11-29 08:48:36.338 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:48:36 np0005539564 nova_compute[226295]: 2025-11-29 08:48:36.338 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:48:36 np0005539564 nova_compute[226295]: 2025-11-29 08:48:36.365 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing inventories for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:48:36 np0005539564 nova_compute[226295]: 2025-11-29 08:48:36.381 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating ProviderTree inventory for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:48:36 np0005539564 nova_compute[226295]: 2025-11-29 08:48:36.381 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating inventory in ProviderTree for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:48:36 np0005539564 nova_compute[226295]: 2025-11-29 08:48:36.400 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing aggregate associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:48:36 np0005539564 nova_compute[226295]: 2025-11-29 08:48:36.426 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing trait associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:48:36 np0005539564 nova_compute[226295]: 2025-11-29 08:48:36.443 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:48:36 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1038626851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:48:36 np0005539564 nova_compute[226295]: 2025-11-29 08:48:36.931 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:36.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:36 np0005539564 nova_compute[226295]: 2025-11-29 08:48:36.939 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:48:36 np0005539564 nova_compute[226295]: 2025-11-29 08:48:36.960 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:48:36 np0005539564 nova_compute[226295]: 2025-11-29 08:48:36.999 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:48:37 np0005539564 nova_compute[226295]: 2025-11-29 08:48:36.999 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 e416: 3 total, 3 up, 3 in
Nov 29 03:48:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:37.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:48:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:38.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:48:39 np0005539564 nova_compute[226295]: 2025-11-29 08:48:39.689 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:39.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:40 np0005539564 nova_compute[226295]: 2025-11-29 08:48:40.295 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:40.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:41.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:42.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:43 np0005539564 nova_compute[226295]: 2025-11-29 08:48:43.043 226310 DEBUG oslo_concurrency.lockutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:43 np0005539564 nova_compute[226295]: 2025-11-29 08:48:43.044 226310 DEBUG oslo_concurrency.lockutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:43 np0005539564 nova_compute[226295]: 2025-11-29 08:48:43.066 226310 DEBUG nova.compute.manager [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:48:43 np0005539564 nova_compute[226295]: 2025-11-29 08:48:43.152 226310 DEBUG oslo_concurrency.lockutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:43 np0005539564 nova_compute[226295]: 2025-11-29 08:48:43.153 226310 DEBUG oslo_concurrency.lockutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:43 np0005539564 nova_compute[226295]: 2025-11-29 08:48:43.160 226310 DEBUG nova.virt.hardware [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:48:43 np0005539564 nova_compute[226295]: 2025-11-29 08:48:43.161 226310 INFO nova.compute.claims [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:48:43 np0005539564 nova_compute[226295]: 2025-11-29 08:48:43.166 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:43 np0005539564 nova_compute[226295]: 2025-11-29 08:48:43.266 226310 DEBUG oslo_concurrency.processutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:48:43 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/512683767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:48:43 np0005539564 nova_compute[226295]: 2025-11-29 08:48:43.750 226310 DEBUG oslo_concurrency.processutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:43 np0005539564 nova_compute[226295]: 2025-11-29 08:48:43.757 226310 DEBUG nova.compute.provider_tree [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:48:43 np0005539564 nova_compute[226295]: 2025-11-29 08:48:43.784 226310 DEBUG nova.scheduler.client.report [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:48:43 np0005539564 nova_compute[226295]: 2025-11-29 08:48:43.810 226310 DEBUG oslo_concurrency.lockutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:43 np0005539564 nova_compute[226295]: 2025-11-29 08:48:43.811 226310 DEBUG nova.compute.manager [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:48:43 np0005539564 nova_compute[226295]: 2025-11-29 08:48:43.848 226310 DEBUG nova.compute.manager [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:48:43 np0005539564 nova_compute[226295]: 2025-11-29 08:48:43.849 226310 DEBUG nova.network.neutron [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:48:43 np0005539564 nova_compute[226295]: 2025-11-29 08:48:43.865 226310 INFO nova.virt.libvirt.driver [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:48:43 np0005539564 nova_compute[226295]: 2025-11-29 08:48:43.879 226310 DEBUG nova.compute.manager [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:48:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:43.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:43 np0005539564 nova_compute[226295]: 2025-11-29 08:48:43.970 226310 DEBUG nova.compute.manager [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:48:43 np0005539564 nova_compute[226295]: 2025-11-29 08:48:43.972 226310 DEBUG nova.virt.libvirt.driver [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:48:43 np0005539564 nova_compute[226295]: 2025-11-29 08:48:43.973 226310 INFO nova.virt.libvirt.driver [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Creating image(s)#033[00m
Nov 29 03:48:44 np0005539564 nova_compute[226295]: 2025-11-29 08:48:44.011 226310 DEBUG nova.storage.rbd_utils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:44 np0005539564 nova_compute[226295]: 2025-11-29 08:48:44.054 226310 DEBUG nova.storage.rbd_utils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:44 np0005539564 nova_compute[226295]: 2025-11-29 08:48:44.096 226310 DEBUG nova.storage.rbd_utils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:44 np0005539564 nova_compute[226295]: 2025-11-29 08:48:44.102 226310 DEBUG oslo_concurrency.processutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:44 np0005539564 nova_compute[226295]: 2025-11-29 08:48:44.182 226310 DEBUG oslo_concurrency.processutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:44 np0005539564 nova_compute[226295]: 2025-11-29 08:48:44.183 226310 DEBUG oslo_concurrency.lockutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:44 np0005539564 nova_compute[226295]: 2025-11-29 08:48:44.184 226310 DEBUG oslo_concurrency.lockutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:44 np0005539564 nova_compute[226295]: 2025-11-29 08:48:44.184 226310 DEBUG oslo_concurrency.lockutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:44 np0005539564 nova_compute[226295]: 2025-11-29 08:48:44.211 226310 DEBUG nova.storage.rbd_utils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:44 np0005539564 nova_compute[226295]: 2025-11-29 08:48:44.215 226310 DEBUG oslo_concurrency.processutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:44 np0005539564 nova_compute[226295]: 2025-11-29 08:48:44.603 226310 DEBUG oslo_concurrency.processutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.388s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:44 np0005539564 nova_compute[226295]: 2025-11-29 08:48:44.704 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:44 np0005539564 nova_compute[226295]: 2025-11-29 08:48:44.713 226310 DEBUG nova.storage.rbd_utils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] resizing rbd image 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:48:44 np0005539564 nova_compute[226295]: 2025-11-29 08:48:44.869 226310 DEBUG nova.objects.instance [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'migration_context' on Instance uuid 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:44 np0005539564 nova_compute[226295]: 2025-11-29 08:48:44.924 226310 DEBUG nova.virt.libvirt.driver [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:48:44 np0005539564 nova_compute[226295]: 2025-11-29 08:48:44.925 226310 DEBUG nova.virt.libvirt.driver [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Ensure instance console log exists: /var/lib/nova/instances/81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:48:44 np0005539564 nova_compute[226295]: 2025-11-29 08:48:44.926 226310 DEBUG oslo_concurrency.lockutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:44 np0005539564 nova_compute[226295]: 2025-11-29 08:48:44.926 226310 DEBUG oslo_concurrency.lockutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:44 np0005539564 nova_compute[226295]: 2025-11-29 08:48:44.927 226310 DEBUG oslo_concurrency.lockutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:44.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:45 np0005539564 nova_compute[226295]: 2025-11-29 08:48:45.020 226310 DEBUG nova.policy [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a9ba73ff05b4529ad104362a5a57cc7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca5878248147453baabf40a90f9feb19', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:48:45 np0005539564 nova_compute[226295]: 2025-11-29 08:48:45.297 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:45.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:46.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:47 np0005539564 nova_compute[226295]: 2025-11-29 08:48:47.662 226310 DEBUG nova.network.neutron [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Successfully created port: e0f5752b-83ef-40d0-87cd-2dc09f977b2a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:48:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:47.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:48 np0005539564 nova_compute[226295]: 2025-11-29 08:48:48.584 226310 DEBUG nova.network.neutron [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Successfully updated port: e0f5752b-83ef-40d0-87cd-2dc09f977b2a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:48:48 np0005539564 nova_compute[226295]: 2025-11-29 08:48:48.603 226310 DEBUG oslo_concurrency.lockutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "refresh_cache-81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:48:48 np0005539564 nova_compute[226295]: 2025-11-29 08:48:48.604 226310 DEBUG oslo_concurrency.lockutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquired lock "refresh_cache-81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:48:48 np0005539564 nova_compute[226295]: 2025-11-29 08:48:48.604 226310 DEBUG nova.network.neutron [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:48:48 np0005539564 nova_compute[226295]: 2025-11-29 08:48:48.778 226310 DEBUG nova.compute.manager [req-da45f6f8-da6e-4ba6-9cf1-8ee05bc521ab req-4a0d4517-255d-4f81-9060-38616f5a5475 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Received event network-changed-e0f5752b-83ef-40d0-87cd-2dc09f977b2a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:48 np0005539564 nova_compute[226295]: 2025-11-29 08:48:48.778 226310 DEBUG nova.compute.manager [req-da45f6f8-da6e-4ba6-9cf1-8ee05bc521ab req-4a0d4517-255d-4f81-9060-38616f5a5475 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Refreshing instance network info cache due to event network-changed-e0f5752b-83ef-40d0-87cd-2dc09f977b2a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:48:48 np0005539564 nova_compute[226295]: 2025-11-29 08:48:48.779 226310 DEBUG oslo_concurrency.lockutils [req-da45f6f8-da6e-4ba6-9cf1-8ee05bc521ab req-4a0d4517-255d-4f81-9060-38616f5a5475 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:48:48 np0005539564 nova_compute[226295]: 2025-11-29 08:48:48.845 226310 DEBUG nova.network.neutron [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:48:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:48.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:49 np0005539564 nova_compute[226295]: 2025-11-29 08:48:49.595 226310 DEBUG nova.network.neutron [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Updating instance_info_cache with network_info: [{"id": "e0f5752b-83ef-40d0-87cd-2dc09f977b2a", "address": "fa:16:3e:6b:da:39", "network": {"id": "5538092b-9d5e-42d1-ad96-28bbf92ceb71", "bridge": "br-int", "label": "tempest-network-smoke--521452968", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0f5752b-83", "ovs_interfaceid": "e0f5752b-83ef-40d0-87cd-2dc09f977b2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:48:49 np0005539564 nova_compute[226295]: 2025-11-29 08:48:49.617 226310 DEBUG oslo_concurrency.lockutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Releasing lock "refresh_cache-81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:48:49 np0005539564 nova_compute[226295]: 2025-11-29 08:48:49.617 226310 DEBUG nova.compute.manager [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Instance network_info: |[{"id": "e0f5752b-83ef-40d0-87cd-2dc09f977b2a", "address": "fa:16:3e:6b:da:39", "network": {"id": "5538092b-9d5e-42d1-ad96-28bbf92ceb71", "bridge": "br-int", "label": "tempest-network-smoke--521452968", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0f5752b-83", "ovs_interfaceid": "e0f5752b-83ef-40d0-87cd-2dc09f977b2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:48:49 np0005539564 nova_compute[226295]: 2025-11-29 08:48:49.618 226310 DEBUG oslo_concurrency.lockutils [req-da45f6f8-da6e-4ba6-9cf1-8ee05bc521ab req-4a0d4517-255d-4f81-9060-38616f5a5475 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:48:49 np0005539564 nova_compute[226295]: 2025-11-29 08:48:49.618 226310 DEBUG nova.network.neutron [req-da45f6f8-da6e-4ba6-9cf1-8ee05bc521ab req-4a0d4517-255d-4f81-9060-38616f5a5475 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Refreshing network info cache for port e0f5752b-83ef-40d0-87cd-2dc09f977b2a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:48:49 np0005539564 nova_compute[226295]: 2025-11-29 08:48:49.620 226310 DEBUG nova.virt.libvirt.driver [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Start _get_guest_xml network_info=[{"id": "e0f5752b-83ef-40d0-87cd-2dc09f977b2a", "address": "fa:16:3e:6b:da:39", "network": {"id": "5538092b-9d5e-42d1-ad96-28bbf92ceb71", "bridge": "br-int", "label": "tempest-network-smoke--521452968", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0f5752b-83", "ovs_interfaceid": "e0f5752b-83ef-40d0-87cd-2dc09f977b2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:48:49 np0005539564 nova_compute[226295]: 2025-11-29 08:48:49.625 226310 WARNING nova.virt.libvirt.driver [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:48:49 np0005539564 nova_compute[226295]: 2025-11-29 08:48:49.630 226310 DEBUG nova.virt.libvirt.host [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:48:49 np0005539564 nova_compute[226295]: 2025-11-29 08:48:49.630 226310 DEBUG nova.virt.libvirt.host [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:48:49 np0005539564 nova_compute[226295]: 2025-11-29 08:48:49.633 226310 DEBUG nova.virt.libvirt.host [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:48:49 np0005539564 nova_compute[226295]: 2025-11-29 08:48:49.634 226310 DEBUG nova.virt.libvirt.host [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:48:49 np0005539564 nova_compute[226295]: 2025-11-29 08:48:49.635 226310 DEBUG nova.virt.libvirt.driver [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:48:49 np0005539564 nova_compute[226295]: 2025-11-29 08:48:49.635 226310 DEBUG nova.virt.hardware [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:48:49 np0005539564 nova_compute[226295]: 2025-11-29 08:48:49.636 226310 DEBUG nova.virt.hardware [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:48:49 np0005539564 nova_compute[226295]: 2025-11-29 08:48:49.636 226310 DEBUG nova.virt.hardware [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:48:49 np0005539564 nova_compute[226295]: 2025-11-29 08:48:49.636 226310 DEBUG nova.virt.hardware [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:48:49 np0005539564 nova_compute[226295]: 2025-11-29 08:48:49.636 226310 DEBUG nova.virt.hardware [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:48:49 np0005539564 nova_compute[226295]: 2025-11-29 08:48:49.636 226310 DEBUG nova.virt.hardware [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:48:49 np0005539564 nova_compute[226295]: 2025-11-29 08:48:49.637 226310 DEBUG nova.virt.hardware [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:48:49 np0005539564 nova_compute[226295]: 2025-11-29 08:48:49.637 226310 DEBUG nova.virt.hardware [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:48:49 np0005539564 nova_compute[226295]: 2025-11-29 08:48:49.637 226310 DEBUG nova.virt.hardware [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:48:49 np0005539564 nova_compute[226295]: 2025-11-29 08:48:49.637 226310 DEBUG nova.virt.hardware [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:48:49 np0005539564 nova_compute[226295]: 2025-11-29 08:48:49.638 226310 DEBUG nova.virt.hardware [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:48:49 np0005539564 nova_compute[226295]: 2025-11-29 08:48:49.641 226310 DEBUG oslo_concurrency.processutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:49 np0005539564 nova_compute[226295]: 2025-11-29 08:48:49.709 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:49.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:48:50 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2972507368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.080 226310 DEBUG oslo_concurrency.processutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.121 226310 DEBUG nova.storage.rbd_utils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.127 226310 DEBUG oslo_concurrency.processutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.300 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:48:50 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1410571772' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.622 226310 DEBUG oslo_concurrency.processutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.624 226310 DEBUG nova.virt.libvirt.vif [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:48:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-75894674',display_name='tempest-TestNetworkBasicOps-server-75894674',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-75894674',id=197,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJCaDnTi79Wq19uM+DWaH3n/tMkS3PAoJon6CoNpttKybR1a81wFLftIiUNj2GUJ/I9F20Mf5UejpBfiEzQ6/3FXstK/VK0MoMMwzFrp1DKtYpOATwtNVF0oFaENxDfWAw==',key_name='tempest-TestNetworkBasicOps-144939467',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-ruv2mbdd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:48:43Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0f5752b-83ef-40d0-87cd-2dc09f977b2a", "address": "fa:16:3e:6b:da:39", "network": {"id": "5538092b-9d5e-42d1-ad96-28bbf92ceb71", "bridge": "br-int", "label": "tempest-network-smoke--521452968", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0f5752b-83", "ovs_interfaceid": "e0f5752b-83ef-40d0-87cd-2dc09f977b2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.624 226310 DEBUG nova.network.os_vif_util [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "e0f5752b-83ef-40d0-87cd-2dc09f977b2a", "address": "fa:16:3e:6b:da:39", "network": {"id": "5538092b-9d5e-42d1-ad96-28bbf92ceb71", "bridge": "br-int", "label": "tempest-network-smoke--521452968", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0f5752b-83", "ovs_interfaceid": "e0f5752b-83ef-40d0-87cd-2dc09f977b2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.625 226310 DEBUG nova.network.os_vif_util [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:da:39,bridge_name='br-int',has_traffic_filtering=True,id=e0f5752b-83ef-40d0-87cd-2dc09f977b2a,network=Network(5538092b-9d5e-42d1-ad96-28bbf92ceb71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0f5752b-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.626 226310 DEBUG nova.objects.instance [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'pci_devices' on Instance uuid 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.680 226310 DEBUG nova.virt.libvirt.driver [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:48:50 np0005539564 nova_compute[226295]:  <uuid>81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3</uuid>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:  <name>instance-000000c5</name>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <nova:name>tempest-TestNetworkBasicOps-server-75894674</nova:name>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:48:49</nova:creationTime>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:48:50 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:        <nova:user uuid="3a9ba73ff05b4529ad104362a5a57cc7">tempest-TestNetworkBasicOps-488786542-project-member</nova:user>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:        <nova:project uuid="ca5878248147453baabf40a90f9feb19">tempest-TestNetworkBasicOps-488786542</nova:project>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:        <nova:port uuid="e0f5752b-83ef-40d0-87cd-2dc09f977b2a">
Nov 29 03:48:50 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <entry name="serial">81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3</entry>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <entry name="uuid">81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3</entry>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3_disk">
Nov 29 03:48:50 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:48:50 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3_disk.config">
Nov 29 03:48:50 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:48:50 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:6b:da:39"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <target dev="tape0f5752b-83"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3/console.log" append="off"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:48:50 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:48:50 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:48:50 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:48:50 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.682 226310 DEBUG nova.compute.manager [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Preparing to wait for external event network-vif-plugged-e0f5752b-83ef-40d0-87cd-2dc09f977b2a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.683 226310 DEBUG oslo_concurrency.lockutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.683 226310 DEBUG oslo_concurrency.lockutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.684 226310 DEBUG oslo_concurrency.lockutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.685 226310 DEBUG nova.virt.libvirt.vif [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:48:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-75894674',display_name='tempest-TestNetworkBasicOps-server-75894674',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-75894674',id=197,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJCaDnTi79Wq19uM+DWaH3n/tMkS3PAoJon6CoNpttKybR1a81wFLftIiUNj2GUJ/I9F20Mf5UejpBfiEzQ6/3FXstK/VK0MoMMwzFrp1DKtYpOATwtNVF0oFaENxDfWAw==',key_name='tempest-TestNetworkBasicOps-144939467',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-ruv2mbdd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:48:43Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0f5752b-83ef-40d0-87cd-2dc09f977b2a", "address": "fa:16:3e:6b:da:39", "network": {"id": "5538092b-9d5e-42d1-ad96-28bbf92ceb71", "bridge": "br-int", "label": "tempest-network-smoke--521452968", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0f5752b-83", "ovs_interfaceid": "e0f5752b-83ef-40d0-87cd-2dc09f977b2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.686 226310 DEBUG nova.network.os_vif_util [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "e0f5752b-83ef-40d0-87cd-2dc09f977b2a", "address": "fa:16:3e:6b:da:39", "network": {"id": "5538092b-9d5e-42d1-ad96-28bbf92ceb71", "bridge": "br-int", "label": "tempest-network-smoke--521452968", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0f5752b-83", "ovs_interfaceid": "e0f5752b-83ef-40d0-87cd-2dc09f977b2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.687 226310 DEBUG nova.network.os_vif_util [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:da:39,bridge_name='br-int',has_traffic_filtering=True,id=e0f5752b-83ef-40d0-87cd-2dc09f977b2a,network=Network(5538092b-9d5e-42d1-ad96-28bbf92ceb71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0f5752b-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.688 226310 DEBUG os_vif [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:da:39,bridge_name='br-int',has_traffic_filtering=True,id=e0f5752b-83ef-40d0-87cd-2dc09f977b2a,network=Network(5538092b-9d5e-42d1-ad96-28bbf92ceb71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0f5752b-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.689 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.690 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.690 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.694 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.695 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape0f5752b-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.695 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape0f5752b-83, col_values=(('external_ids', {'iface-id': 'e0f5752b-83ef-40d0-87cd-2dc09f977b2a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:da:39', 'vm-uuid': '81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.730 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:50 np0005539564 NetworkManager[48997]: <info>  [1764406130.7321] manager: (tape0f5752b-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/365)
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.735 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.740 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.742 226310 INFO os_vif [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:da:39,bridge_name='br-int',has_traffic_filtering=True,id=e0f5752b-83ef-40d0-87cd-2dc09f977b2a,network=Network(5538092b-9d5e-42d1-ad96-28bbf92ceb71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0f5752b-83')#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.822 226310 DEBUG nova.virt.libvirt.driver [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.822 226310 DEBUG nova.virt.libvirt.driver [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.822 226310 DEBUG nova.virt.libvirt.driver [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No VIF found with MAC fa:16:3e:6b:da:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.823 226310 INFO nova.virt.libvirt.driver [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Using config drive#033[00m
Nov 29 03:48:50 np0005539564 nova_compute[226295]: 2025-11-29 08:48:50.863 226310 DEBUG nova.storage.rbd_utils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:50.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:51.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:52 np0005539564 nova_compute[226295]: 2025-11-29 08:48:52.034 226310 INFO nova.virt.libvirt.driver [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Creating config drive at /var/lib/nova/instances/81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3/disk.config#033[00m
Nov 29 03:48:52 np0005539564 nova_compute[226295]: 2025-11-29 08:48:52.039 226310 DEBUG oslo_concurrency.processutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv72j7mc9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.183 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=78, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=77) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.184 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:48:52 np0005539564 nova_compute[226295]: 2025-11-29 08:48:52.184 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:52 np0005539564 nova_compute[226295]: 2025-11-29 08:48:52.187 226310 DEBUG oslo_concurrency.processutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv72j7mc9" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:52 np0005539564 nova_compute[226295]: 2025-11-29 08:48:52.223 226310 DEBUG nova.storage.rbd_utils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:52 np0005539564 nova_compute[226295]: 2025-11-29 08:48:52.230 226310 DEBUG oslo_concurrency.processutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3/disk.config 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:52 np0005539564 nova_compute[226295]: 2025-11-29 08:48:52.310 226310 DEBUG nova.network.neutron [req-da45f6f8-da6e-4ba6-9cf1-8ee05bc521ab req-4a0d4517-255d-4f81-9060-38616f5a5475 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Updated VIF entry in instance network info cache for port e0f5752b-83ef-40d0-87cd-2dc09f977b2a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:48:52 np0005539564 nova_compute[226295]: 2025-11-29 08:48:52.311 226310 DEBUG nova.network.neutron [req-da45f6f8-da6e-4ba6-9cf1-8ee05bc521ab req-4a0d4517-255d-4f81-9060-38616f5a5475 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Updating instance_info_cache with network_info: [{"id": "e0f5752b-83ef-40d0-87cd-2dc09f977b2a", "address": "fa:16:3e:6b:da:39", "network": {"id": "5538092b-9d5e-42d1-ad96-28bbf92ceb71", "bridge": "br-int", "label": "tempest-network-smoke--521452968", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0f5752b-83", "ovs_interfaceid": "e0f5752b-83ef-40d0-87cd-2dc09f977b2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:48:52 np0005539564 nova_compute[226295]: 2025-11-29 08:48:52.380 226310 DEBUG oslo_concurrency.lockutils [req-da45f6f8-da6e-4ba6-9cf1-8ee05bc521ab req-4a0d4517-255d-4f81-9060-38616f5a5475 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:48:52 np0005539564 nova_compute[226295]: 2025-11-29 08:48:52.460 226310 DEBUG oslo_concurrency.processutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3/disk.config 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.229s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:52 np0005539564 nova_compute[226295]: 2025-11-29 08:48:52.461 226310 INFO nova.virt.libvirt.driver [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Deleting local config drive /var/lib/nova/instances/81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3/disk.config because it was imported into RBD.#033[00m
Nov 29 03:48:52 np0005539564 kernel: tape0f5752b-83: entered promiscuous mode
Nov 29 03:48:52 np0005539564 NetworkManager[48997]: <info>  [1764406132.5325] manager: (tape0f5752b-83): new Tun device (/org/freedesktop/NetworkManager/Devices/366)
Nov 29 03:48:52 np0005539564 nova_compute[226295]: 2025-11-29 08:48:52.533 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:52 np0005539564 ovn_controller[130591]: 2025-11-29T08:48:52Z|00793|binding|INFO|Claiming lport e0f5752b-83ef-40d0-87cd-2dc09f977b2a for this chassis.
Nov 29 03:48:52 np0005539564 ovn_controller[130591]: 2025-11-29T08:48:52Z|00794|binding|INFO|e0f5752b-83ef-40d0-87cd-2dc09f977b2a: Claiming fa:16:3e:6b:da:39 10.100.0.27
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.567 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:da:39 10.100.0.27'], port_security=['fa:16:3e:6b:da:39 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': '81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5538092b-9d5e-42d1-ad96-28bbf92ceb71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd0b5333e-fd29-40d6-a7bc-1d0acaf9861e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a39b396-be6b-4918-82c4-b6ccfe99227a, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=e0f5752b-83ef-40d0-87cd-2dc09f977b2a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.569 139780 INFO neutron.agent.ovn.metadata.agent [-] Port e0f5752b-83ef-40d0-87cd-2dc09f977b2a in datapath 5538092b-9d5e-42d1-ad96-28bbf92ceb71 bound to our chassis#033[00m
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.571 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5538092b-9d5e-42d1-ad96-28bbf92ceb71#033[00m
Nov 29 03:48:52 np0005539564 systemd-machined[190128]: New machine qemu-92-instance-000000c5.
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.593 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c3ce8747-63d6-4da7-83d2-adfc1012e36a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.594 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5538092b-91 in ovnmeta-5538092b-9d5e-42d1-ad96-28bbf92ceb71 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.596 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5538092b-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.597 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0fe1ace6-d182-4d1b-9149-3aca8e260902]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.597 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7fee8eef-daf0-4789-95e1-3792d5d43b07]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:52 np0005539564 nova_compute[226295]: 2025-11-29 08:48:52.600 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:52 np0005539564 nova_compute[226295]: 2025-11-29 08:48:52.607 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:52 np0005539564 ovn_controller[130591]: 2025-11-29T08:48:52Z|00795|binding|INFO|Setting lport e0f5752b-83ef-40d0-87cd-2dc09f977b2a ovn-installed in OVS
Nov 29 03:48:52 np0005539564 ovn_controller[130591]: 2025-11-29T08:48:52Z|00796|binding|INFO|Setting lport e0f5752b-83ef-40d0-87cd-2dc09f977b2a up in Southbound
Nov 29 03:48:52 np0005539564 nova_compute[226295]: 2025-11-29 08:48:52.610 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:52 np0005539564 systemd[1]: Started Virtual Machine qemu-92-instance-000000c5.
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.618 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[8581bf1e-425e-4cc0-85f0-2e194a3cc101]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:52 np0005539564 systemd-udevd[303353]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:48:52 np0005539564 NetworkManager[48997]: <info>  [1764406132.6417] device (tape0f5752b-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:48:52 np0005539564 NetworkManager[48997]: <info>  [1764406132.6426] device (tape0f5752b-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.647 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[24a9f883-e885-403c-ae6b-f3625008b5ad]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.679 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[3f1430b2-d640-4fe0-b51e-e5bf59c0258b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.686 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d3899727-9b7b-49e0-b4e2-8d1911f0a94f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:52 np0005539564 NetworkManager[48997]: <info>  [1764406132.6868] manager: (tap5538092b-90): new Veth device (/org/freedesktop/NetworkManager/Devices/367)
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.724 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[2a071b71-31b5-4c4d-8f39-1e8335285984]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.728 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[22df76e8-4623-48e6-a251-229d86b11d21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:52 np0005539564 NetworkManager[48997]: <info>  [1764406132.7567] device (tap5538092b-90): carrier: link connected
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.762 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[06498027-8860-4d13-9587-31ffda34d991]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.786 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[818cee8a-a3a2-4b00-8176-4dd592e4aaaa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5538092b-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:62:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 910743, 'reachable_time': 18927, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303383, 'error': None, 'target': 'ovnmeta-5538092b-9d5e-42d1-ad96-28bbf92ceb71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.807 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1f021c80-398c-4611-900f-6d4e66309c16]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5c:621a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 910743, 'tstamp': 910743}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303384, 'error': None, 'target': 'ovnmeta-5538092b-9d5e-42d1-ad96-28bbf92ceb71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.826 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1a503cb4-418e-4555-8127-2c8f900bfe85]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5538092b-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:62:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 910743, 'reachable_time': 18927, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303385, 'error': None, 'target': 'ovnmeta-5538092b-9d5e-42d1-ad96-28bbf92ceb71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.867 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[25e1bfca-a6fa-47cf-ac5b-f84543ad3637]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.951 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e7292d72-5ddf-4f60-9ab0-00317ff07b36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.953 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5538092b-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.954 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.954 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5538092b-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:52 np0005539564 nova_compute[226295]: 2025-11-29 08:48:52.957 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:52 np0005539564 kernel: tap5538092b-90: entered promiscuous mode
Nov 29 03:48:52 np0005539564 nova_compute[226295]: 2025-11-29 08:48:52.958 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:52 np0005539564 NetworkManager[48997]: <info>  [1764406132.9589] manager: (tap5538092b-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/368)
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.963 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5538092b-90, col_values=(('external_ids', {'iface-id': 'bb5680d0-5a15-4490-8c99-a116e16e3a33'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:52.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:52 np0005539564 nova_compute[226295]: 2025-11-29 08:48:52.965 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:52 np0005539564 ovn_controller[130591]: 2025-11-29T08:48:52Z|00797|binding|INFO|Releasing lport bb5680d0-5a15-4490-8c99-a116e16e3a33 from this chassis (sb_readonly=0)
Nov 29 03:48:52 np0005539564 nova_compute[226295]: 2025-11-29 08:48:52.979 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.980 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5538092b-9d5e-42d1-ad96-28bbf92ceb71.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5538092b-9d5e-42d1-ad96-28bbf92ceb71.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.981 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6a241103-1447-4275-b966-c112d44d968d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.982 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-5538092b-9d5e-42d1-ad96-28bbf92ceb71
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/5538092b-9d5e-42d1-ad96-28bbf92ceb71.pid.haproxy
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 5538092b-9d5e-42d1-ad96-28bbf92ceb71
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:48:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:52.984 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5538092b-9d5e-42d1-ad96-28bbf92ceb71', 'env', 'PROCESS_TAG=haproxy-5538092b-9d5e-42d1-ad96-28bbf92ceb71', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5538092b-9d5e-42d1-ad96-28bbf92ceb71.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.082 226310 DEBUG nova.compute.manager [req-b2552fe4-2a33-4ec0-98db-2fe5c35f8eb7 req-c9f8f61d-f4f6-4c51-858b-a91d2dc09209 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Received event network-vif-plugged-e0f5752b-83ef-40d0-87cd-2dc09f977b2a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.083 226310 DEBUG oslo_concurrency.lockutils [req-b2552fe4-2a33-4ec0-98db-2fe5c35f8eb7 req-c9f8f61d-f4f6-4c51-858b-a91d2dc09209 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.083 226310 DEBUG oslo_concurrency.lockutils [req-b2552fe4-2a33-4ec0-98db-2fe5c35f8eb7 req-c9f8f61d-f4f6-4c51-858b-a91d2dc09209 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.083 226310 DEBUG oslo_concurrency.lockutils [req-b2552fe4-2a33-4ec0-98db-2fe5c35f8eb7 req-c9f8f61d-f4f6-4c51-858b-a91d2dc09209 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.084 226310 DEBUG nova.compute.manager [req-b2552fe4-2a33-4ec0-98db-2fe5c35f8eb7 req-c9f8f61d-f4f6-4c51-858b-a91d2dc09209 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Processing event network-vif-plugged-e0f5752b-83ef-40d0-87cd-2dc09f977b2a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.202 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406133.2020602, 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.202 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] VM Started (Lifecycle Event)#033[00m
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.205 226310 DEBUG nova.compute.manager [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.208 226310 DEBUG nova.virt.libvirt.driver [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.212 226310 INFO nova.virt.libvirt.driver [-] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Instance spawned successfully.#033[00m
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.212 226310 DEBUG nova.virt.libvirt.driver [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.238 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.244 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.249 226310 DEBUG nova.virt.libvirt.driver [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.249 226310 DEBUG nova.virt.libvirt.driver [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.250 226310 DEBUG nova.virt.libvirt.driver [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.250 226310 DEBUG nova.virt.libvirt.driver [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.250 226310 DEBUG nova.virt.libvirt.driver [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.251 226310 DEBUG nova.virt.libvirt.driver [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.285 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.285 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406133.2047791, 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.285 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:48:53 np0005539564 podman[303459]: 2025-11-29 08:48:53.341390086 +0000 UTC m=+0.050492838 container create 76d02d878eb955e90d767ac2a9348ad1b539ca45894ac7f722da328884319896 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5538092b-9d5e-42d1-ad96-28bbf92ceb71, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.345 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.351 226310 INFO nova.compute.manager [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Took 9.38 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.352 226310 DEBUG nova.compute.manager [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.356 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406133.207777, 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.357 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:48:53 np0005539564 systemd[1]: Started libpod-conmon-76d02d878eb955e90d767ac2a9348ad1b539ca45894ac7f722da328884319896.scope.
Nov 29 03:48:53 np0005539564 podman[303459]: 2025-11-29 08:48:53.312704689 +0000 UTC m=+0.021807471 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.408 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.416 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:48:53 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:48:53 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38246fe4dfd37605376ff54bb9d0a6999ab9f96d21a8cb1a44e0ef1f13b0907/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:53 np0005539564 podman[303459]: 2025-11-29 08:48:53.449253123 +0000 UTC m=+0.158355925 container init 76d02d878eb955e90d767ac2a9348ad1b539ca45894ac7f722da328884319896 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5538092b-9d5e-42d1-ad96-28bbf92ceb71, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.454 226310 INFO nova.compute.manager [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Took 10.34 seconds to build instance.#033[00m
Nov 29 03:48:53 np0005539564 podman[303459]: 2025-11-29 08:48:53.455845061 +0000 UTC m=+0.164947843 container start 76d02d878eb955e90d767ac2a9348ad1b539ca45894ac7f722da328884319896 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5538092b-9d5e-42d1-ad96-28bbf92ceb71, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:48:53 np0005539564 neutron-haproxy-ovnmeta-5538092b-9d5e-42d1-ad96-28bbf92ceb71[303474]: [NOTICE]   (303478) : New worker (303480) forked
Nov 29 03:48:53 np0005539564 neutron-haproxy-ovnmeta-5538092b-9d5e-42d1-ad96-28bbf92ceb71[303474]: [NOTICE]   (303478) : Loading success.
Nov 29 03:48:53 np0005539564 nova_compute[226295]: 2025-11-29 08:48:53.487 226310 DEBUG oslo_concurrency.lockutils [None req-c855111e-8b7b-4da9-a8a6-7263024bb01a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.443s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:53.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:54.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:55 np0005539564 nova_compute[226295]: 2025-11-29 08:48:55.171 226310 DEBUG nova.compute.manager [req-89ca9f62-dc2a-46eb-bad1-f05562bedcfa req-0946417d-337c-498f-9b9f-83f8b63c0dfb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Received event network-vif-plugged-e0f5752b-83ef-40d0-87cd-2dc09f977b2a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:55 np0005539564 nova_compute[226295]: 2025-11-29 08:48:55.171 226310 DEBUG oslo_concurrency.lockutils [req-89ca9f62-dc2a-46eb-bad1-f05562bedcfa req-0946417d-337c-498f-9b9f-83f8b63c0dfb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:55 np0005539564 nova_compute[226295]: 2025-11-29 08:48:55.172 226310 DEBUG oslo_concurrency.lockutils [req-89ca9f62-dc2a-46eb-bad1-f05562bedcfa req-0946417d-337c-498f-9b9f-83f8b63c0dfb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:55 np0005539564 nova_compute[226295]: 2025-11-29 08:48:55.172 226310 DEBUG oslo_concurrency.lockutils [req-89ca9f62-dc2a-46eb-bad1-f05562bedcfa req-0946417d-337c-498f-9b9f-83f8b63c0dfb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:55 np0005539564 nova_compute[226295]: 2025-11-29 08:48:55.172 226310 DEBUG nova.compute.manager [req-89ca9f62-dc2a-46eb-bad1-f05562bedcfa req-0946417d-337c-498f-9b9f-83f8b63c0dfb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] No waiting events found dispatching network-vif-plugged-e0f5752b-83ef-40d0-87cd-2dc09f977b2a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:48:55 np0005539564 nova_compute[226295]: 2025-11-29 08:48:55.173 226310 WARNING nova.compute.manager [req-89ca9f62-dc2a-46eb-bad1-f05562bedcfa req-0946417d-337c-498f-9b9f-83f8b63c0dfb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Received unexpected event network-vif-plugged-e0f5752b-83ef-40d0-87cd-2dc09f977b2a for instance with vm_state active and task_state None.#033[00m
Nov 29 03:48:55 np0005539564 nova_compute[226295]: 2025-11-29 08:48:55.305 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:55 np0005539564 nova_compute[226295]: 2025-11-29 08:48:55.731 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:55.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:56.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:48:57.186 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '78'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:57.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:58.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:48:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:48:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:59.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:49:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:00 np0005539564 nova_compute[226295]: 2025-11-29 08:49:00.331 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:00 np0005539564 nova_compute[226295]: 2025-11-29 08:49:00.733 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:49:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:00.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:49:01 np0005539564 podman[303491]: 2025-11-29 08:49:01.551262767 +0000 UTC m=+0.083495440 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:49:01 np0005539564 podman[303490]: 2025-11-29 08:49:01.560827777 +0000 UTC m=+0.101284363 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible)
Nov 29 03:49:01 np0005539564 podman[303489]: 2025-11-29 08:49:01.603784298 +0000 UTC m=+0.143318978 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 03:49:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:01.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:02.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:03.765 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:03.766 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:03.767 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:03.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:49:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:04.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:49:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:05 np0005539564 nova_compute[226295]: 2025-11-29 08:49:05.334 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:05 np0005539564 nova_compute[226295]: 2025-11-29 08:49:05.735 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:05.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:06.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:06 np0005539564 ovn_controller[130591]: 2025-11-29T08:49:06Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6b:da:39 10.100.0.27
Nov 29 03:49:06 np0005539564 ovn_controller[130591]: 2025-11-29T08:49:06Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6b:da:39 10.100.0.27
Nov 29 03:49:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:07.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:08.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:49:09 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1668438124' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:49:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:09.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:10 np0005539564 nova_compute[226295]: 2025-11-29 08:49:10.337 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:10 np0005539564 nova_compute[226295]: 2025-11-29 08:49:10.737 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:10.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:11.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:12.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:13.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.241 226310 DEBUG oslo_concurrency.lockutils [None req-82bdff6d-1d91-458e-be2f-43f8bedaea17 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.241 226310 DEBUG oslo_concurrency.lockutils [None req-82bdff6d-1d91-458e-be2f-43f8bedaea17 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.242 226310 DEBUG oslo_concurrency.lockutils [None req-82bdff6d-1d91-458e-be2f-43f8bedaea17 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.242 226310 DEBUG oslo_concurrency.lockutils [None req-82bdff6d-1d91-458e-be2f-43f8bedaea17 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.243 226310 DEBUG oslo_concurrency.lockutils [None req-82bdff6d-1d91-458e-be2f-43f8bedaea17 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.245 226310 INFO nova.compute.manager [None req-82bdff6d-1d91-458e-be2f-43f8bedaea17 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Terminating instance#033[00m
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.247 226310 DEBUG nova.compute.manager [None req-82bdff6d-1d91-458e-be2f-43f8bedaea17 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:49:14 np0005539564 kernel: tape0f5752b-83 (unregistering): left promiscuous mode
Nov 29 03:49:14 np0005539564 NetworkManager[48997]: <info>  [1764406154.3114] device (tape0f5752b-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:49:14 np0005539564 ovn_controller[130591]: 2025-11-29T08:49:14Z|00798|binding|INFO|Releasing lport e0f5752b-83ef-40d0-87cd-2dc09f977b2a from this chassis (sb_readonly=0)
Nov 29 03:49:14 np0005539564 ovn_controller[130591]: 2025-11-29T08:49:14Z|00799|binding|INFO|Setting lport e0f5752b-83ef-40d0-87cd-2dc09f977b2a down in Southbound
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.353 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:14 np0005539564 ovn_controller[130591]: 2025-11-29T08:49:14Z|00800|binding|INFO|Removing iface tape0f5752b-83 ovn-installed in OVS
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.357 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:14.363 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:da:39 10.100.0.27'], port_security=['fa:16:3e:6b:da:39 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': '81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5538092b-9d5e-42d1-ad96-28bbf92ceb71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd0b5333e-fd29-40d6-a7bc-1d0acaf9861e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a39b396-be6b-4918-82c4-b6ccfe99227a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=e0f5752b-83ef-40d0-87cd-2dc09f977b2a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:49:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:14.365 139780 INFO neutron.agent.ovn.metadata.agent [-] Port e0f5752b-83ef-40d0-87cd-2dc09f977b2a in datapath 5538092b-9d5e-42d1-ad96-28bbf92ceb71 unbound from our chassis#033[00m
Nov 29 03:49:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:14.367 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5538092b-9d5e-42d1-ad96-28bbf92ceb71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.369 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:14.368 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8316db63-944b-44b4-94e4-ef3f765cf47a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:14.369 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5538092b-9d5e-42d1-ad96-28bbf92ceb71 namespace which is not needed anymore#033[00m
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.370 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:14 np0005539564 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000c5.scope: Deactivated successfully.
Nov 29 03:49:14 np0005539564 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000c5.scope: Consumed 14.703s CPU time.
Nov 29 03:49:14 np0005539564 systemd-machined[190128]: Machine qemu-92-instance-000000c5 terminated.
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.507 226310 INFO nova.virt.libvirt.driver [-] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Instance destroyed successfully.#033[00m
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.507 226310 DEBUG nova.objects.instance [None req-82bdff6d-1d91-458e-be2f-43f8bedaea17 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'resources' on Instance uuid 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.527 226310 DEBUG nova.virt.libvirt.vif [None req-82bdff6d-1d91-458e-be2f-43f8bedaea17 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:48:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-75894674',display_name='tempest-TestNetworkBasicOps-server-75894674',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-75894674',id=197,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJCaDnTi79Wq19uM+DWaH3n/tMkS3PAoJon6CoNpttKybR1a81wFLftIiUNj2GUJ/I9F20Mf5UejpBfiEzQ6/3FXstK/VK0MoMMwzFrp1DKtYpOATwtNVF0oFaENxDfWAw==',key_name='tempest-TestNetworkBasicOps-144939467',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:48:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-ruv2mbdd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:48:53Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e0f5752b-83ef-40d0-87cd-2dc09f977b2a", "address": "fa:16:3e:6b:da:39", "network": {"id": "5538092b-9d5e-42d1-ad96-28bbf92ceb71", "bridge": "br-int", "label": "tempest-network-smoke--521452968", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0f5752b-83", "ovs_interfaceid": "e0f5752b-83ef-40d0-87cd-2dc09f977b2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.528 226310 DEBUG nova.network.os_vif_util [None req-82bdff6d-1d91-458e-be2f-43f8bedaea17 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "e0f5752b-83ef-40d0-87cd-2dc09f977b2a", "address": "fa:16:3e:6b:da:39", "network": {"id": "5538092b-9d5e-42d1-ad96-28bbf92ceb71", "bridge": "br-int", "label": "tempest-network-smoke--521452968", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0f5752b-83", "ovs_interfaceid": "e0f5752b-83ef-40d0-87cd-2dc09f977b2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.528 226310 DEBUG nova.network.os_vif_util [None req-82bdff6d-1d91-458e-be2f-43f8bedaea17 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:da:39,bridge_name='br-int',has_traffic_filtering=True,id=e0f5752b-83ef-40d0-87cd-2dc09f977b2a,network=Network(5538092b-9d5e-42d1-ad96-28bbf92ceb71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0f5752b-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.529 226310 DEBUG os_vif [None req-82bdff6d-1d91-458e-be2f-43f8bedaea17 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:da:39,bridge_name='br-int',has_traffic_filtering=True,id=e0f5752b-83ef-40d0-87cd-2dc09f977b2a,network=Network(5538092b-9d5e-42d1-ad96-28bbf92ceb71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0f5752b-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.531 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.531 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0f5752b-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:14 np0005539564 neutron-haproxy-ovnmeta-5538092b-9d5e-42d1-ad96-28bbf92ceb71[303474]: [NOTICE]   (303478) : haproxy version is 2.8.14-c23fe91
Nov 29 03:49:14 np0005539564 neutron-haproxy-ovnmeta-5538092b-9d5e-42d1-ad96-28bbf92ceb71[303474]: [NOTICE]   (303478) : path to executable is /usr/sbin/haproxy
Nov 29 03:49:14 np0005539564 neutron-haproxy-ovnmeta-5538092b-9d5e-42d1-ad96-28bbf92ceb71[303474]: [WARNING]  (303478) : Exiting Master process...
Nov 29 03:49:14 np0005539564 neutron-haproxy-ovnmeta-5538092b-9d5e-42d1-ad96-28bbf92ceb71[303474]: [ALERT]    (303478) : Current worker (303480) exited with code 143 (Terminated)
Nov 29 03:49:14 np0005539564 neutron-haproxy-ovnmeta-5538092b-9d5e-42d1-ad96-28bbf92ceb71[303474]: [WARNING]  (303478) : All workers exited. Exiting... (0)
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.535 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:49:14 np0005539564 systemd[1]: libpod-76d02d878eb955e90d767ac2a9348ad1b539ca45894ac7f722da328884319896.scope: Deactivated successfully.
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.538 226310 INFO os_vif [None req-82bdff6d-1d91-458e-be2f-43f8bedaea17 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:da:39,bridge_name='br-int',has_traffic_filtering=True,id=e0f5752b-83ef-40d0-87cd-2dc09f977b2a,network=Network(5538092b-9d5e-42d1-ad96-28bbf92ceb71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0f5752b-83')#033[00m
Nov 29 03:49:14 np0005539564 podman[303576]: 2025-11-29 08:49:14.544205381 +0000 UTC m=+0.065233436 container died 76d02d878eb955e90d767ac2a9348ad1b539ca45894ac7f722da328884319896 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5538092b-9d5e-42d1-ad96-28bbf92ceb71, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:49:14 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-76d02d878eb955e90d767ac2a9348ad1b539ca45894ac7f722da328884319896-userdata-shm.mount: Deactivated successfully.
Nov 29 03:49:14 np0005539564 systemd[1]: var-lib-containers-storage-overlay-e38246fe4dfd37605376ff54bb9d0a6999ab9f96d21a8cb1a44e0ef1f13b0907-merged.mount: Deactivated successfully.
Nov 29 03:49:14 np0005539564 podman[303576]: 2025-11-29 08:49:14.586629619 +0000 UTC m=+0.107657694 container cleanup 76d02d878eb955e90d767ac2a9348ad1b539ca45894ac7f722da328884319896 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5538092b-9d5e-42d1-ad96-28bbf92ceb71, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 03:49:14 np0005539564 systemd[1]: libpod-conmon-76d02d878eb955e90d767ac2a9348ad1b539ca45894ac7f722da328884319896.scope: Deactivated successfully.
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.605 226310 DEBUG nova.compute.manager [req-8143d8ae-6484-4191-bf5c-7e3357e45d5d req-281b16af-b283-4250-b8ae-2029a374c83d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Received event network-vif-unplugged-e0f5752b-83ef-40d0-87cd-2dc09f977b2a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.606 226310 DEBUG oslo_concurrency.lockutils [req-8143d8ae-6484-4191-bf5c-7e3357e45d5d req-281b16af-b283-4250-b8ae-2029a374c83d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.606 226310 DEBUG oslo_concurrency.lockutils [req-8143d8ae-6484-4191-bf5c-7e3357e45d5d req-281b16af-b283-4250-b8ae-2029a374c83d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.606 226310 DEBUG oslo_concurrency.lockutils [req-8143d8ae-6484-4191-bf5c-7e3357e45d5d req-281b16af-b283-4250-b8ae-2029a374c83d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.607 226310 DEBUG nova.compute.manager [req-8143d8ae-6484-4191-bf5c-7e3357e45d5d req-281b16af-b283-4250-b8ae-2029a374c83d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] No waiting events found dispatching network-vif-unplugged-e0f5752b-83ef-40d0-87cd-2dc09f977b2a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.607 226310 DEBUG nova.compute.manager [req-8143d8ae-6484-4191-bf5c-7e3357e45d5d req-281b16af-b283-4250-b8ae-2029a374c83d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Received event network-vif-unplugged-e0f5752b-83ef-40d0-87cd-2dc09f977b2a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:49:14 np0005539564 podman[303636]: 2025-11-29 08:49:14.657495606 +0000 UTC m=+0.042042968 container remove 76d02d878eb955e90d767ac2a9348ad1b539ca45894ac7f722da328884319896 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5538092b-9d5e-42d1-ad96-28bbf92ceb71, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:49:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:14.664 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[713447d5-ce74-4f83-8eac-970eaf4566b8]: (4, ('Sat Nov 29 08:49:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5538092b-9d5e-42d1-ad96-28bbf92ceb71 (76d02d878eb955e90d767ac2a9348ad1b539ca45894ac7f722da328884319896)\n76d02d878eb955e90d767ac2a9348ad1b539ca45894ac7f722da328884319896\nSat Nov 29 08:49:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5538092b-9d5e-42d1-ad96-28bbf92ceb71 (76d02d878eb955e90d767ac2a9348ad1b539ca45894ac7f722da328884319896)\n76d02d878eb955e90d767ac2a9348ad1b539ca45894ac7f722da328884319896\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:14.666 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[66662779-3045-46f4-b349-9120bc59ef4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:14.667 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5538092b-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:14 np0005539564 kernel: tap5538092b-90: left promiscuous mode
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.669 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:14 np0005539564 nova_compute[226295]: 2025-11-29 08:49:14.692 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:14.695 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0cf77341-6a20-479d-8873-f390aa2b47c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:14.708 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3ece8180-417b-4ad9-ac60-1e0f4158d7c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:14.709 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5eafddba-9fcd-4963-b0aa-2b18d3f8f0ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:14.730 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d46d5033-4aa3-4b14-af22-ef854211215d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 910735, 'reachable_time': 22352, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303650, 'error': None, 'target': 'ovnmeta-5538092b-9d5e-42d1-ad96-28bbf92ceb71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:14 np0005539564 systemd[1]: run-netns-ovnmeta\x2d5538092b\x2d9d5e\x2d42d1\x2dad96\x2d28bbf92ceb71.mount: Deactivated successfully.
Nov 29 03:49:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:14.736 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5538092b-9d5e-42d1-ad96-28bbf92ceb71 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:49:14 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:14.736 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[8f31659f-bc0f-4ee9-b33e-e50bb0a98bfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:14.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:15 np0005539564 nova_compute[226295]: 2025-11-29 08:49:15.038 226310 INFO nova.virt.libvirt.driver [None req-82bdff6d-1d91-458e-be2f-43f8bedaea17 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Deleting instance files /var/lib/nova/instances/81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3_del#033[00m
Nov 29 03:49:15 np0005539564 nova_compute[226295]: 2025-11-29 08:49:15.039 226310 INFO nova.virt.libvirt.driver [None req-82bdff6d-1d91-458e-be2f-43f8bedaea17 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Deletion of /var/lib/nova/instances/81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3_del complete#033[00m
Nov 29 03:49:15 np0005539564 nova_compute[226295]: 2025-11-29 08:49:15.107 226310 INFO nova.compute.manager [None req-82bdff6d-1d91-458e-be2f-43f8bedaea17 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:49:15 np0005539564 nova_compute[226295]: 2025-11-29 08:49:15.107 226310 DEBUG oslo.service.loopingcall [None req-82bdff6d-1d91-458e-be2f-43f8bedaea17 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:49:15 np0005539564 nova_compute[226295]: 2025-11-29 08:49:15.108 226310 DEBUG nova.compute.manager [-] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:49:15 np0005539564 nova_compute[226295]: 2025-11-29 08:49:15.108 226310 DEBUG nova.network.neutron [-] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:49:15 np0005539564 nova_compute[226295]: 2025-11-29 08:49:15.339 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:15 np0005539564 nova_compute[226295]: 2025-11-29 08:49:15.911 226310 DEBUG nova.network.neutron [-] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:49:15 np0005539564 nova_compute[226295]: 2025-11-29 08:49:15.930 226310 INFO nova.compute.manager [-] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Took 0.82 seconds to deallocate network for instance.#033[00m
Nov 29 03:49:15 np0005539564 nova_compute[226295]: 2025-11-29 08:49:15.979 226310 DEBUG oslo_concurrency.lockutils [None req-82bdff6d-1d91-458e-be2f-43f8bedaea17 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:15 np0005539564 nova_compute[226295]: 2025-11-29 08:49:15.980 226310 DEBUG oslo_concurrency.lockutils [None req-82bdff6d-1d91-458e-be2f-43f8bedaea17 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:15.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:15 np0005539564 nova_compute[226295]: 2025-11-29 08:49:15.996 226310 DEBUG nova.compute.manager [req-b43afeab-d0e1-416b-89a2-eb5b5841f7c9 req-2b5ac95b-b20f-4b5f-85ef-22dbc876be14 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Received event network-vif-deleted-e0f5752b-83ef-40d0-87cd-2dc09f977b2a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:16 np0005539564 nova_compute[226295]: 2025-11-29 08:49:16.039 226310 DEBUG oslo_concurrency.processutils [None req-82bdff6d-1d91-458e-be2f-43f8bedaea17 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:16 np0005539564 nova_compute[226295]: 2025-11-29 08:49:16.335 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:49:16 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3549323233' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:49:16 np0005539564 nova_compute[226295]: 2025-11-29 08:49:16.496 226310 DEBUG oslo_concurrency.processutils [None req-82bdff6d-1d91-458e-be2f-43f8bedaea17 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:16 np0005539564 nova_compute[226295]: 2025-11-29 08:49:16.504 226310 DEBUG nova.compute.provider_tree [None req-82bdff6d-1d91-458e-be2f-43f8bedaea17 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:49:16 np0005539564 nova_compute[226295]: 2025-11-29 08:49:16.527 226310 DEBUG nova.scheduler.client.report [None req-82bdff6d-1d91-458e-be2f-43f8bedaea17 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:49:16 np0005539564 nova_compute[226295]: 2025-11-29 08:49:16.559 226310 DEBUG oslo_concurrency.lockutils [None req-82bdff6d-1d91-458e-be2f-43f8bedaea17 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:16 np0005539564 nova_compute[226295]: 2025-11-29 08:49:16.603 226310 INFO nova.scheduler.client.report [None req-82bdff6d-1d91-458e-be2f-43f8bedaea17 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Deleted allocations for instance 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3#033[00m
Nov 29 03:49:16 np0005539564 nova_compute[226295]: 2025-11-29 08:49:16.672 226310 DEBUG oslo_concurrency.lockutils [None req-82bdff6d-1d91-458e-be2f-43f8bedaea17 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:16 np0005539564 nova_compute[226295]: 2025-11-29 08:49:16.699 226310 DEBUG nova.compute.manager [req-eafb5df1-e0b6-4b0c-ba55-52c228dad9aa req-e1e416ad-d19f-4b3f-aae6-a0e0dd3debcb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Received event network-vif-plugged-e0f5752b-83ef-40d0-87cd-2dc09f977b2a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:16 np0005539564 nova_compute[226295]: 2025-11-29 08:49:16.699 226310 DEBUG oslo_concurrency.lockutils [req-eafb5df1-e0b6-4b0c-ba55-52c228dad9aa req-e1e416ad-d19f-4b3f-aae6-a0e0dd3debcb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:16 np0005539564 nova_compute[226295]: 2025-11-29 08:49:16.700 226310 DEBUG oslo_concurrency.lockutils [req-eafb5df1-e0b6-4b0c-ba55-52c228dad9aa req-e1e416ad-d19f-4b3f-aae6-a0e0dd3debcb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:16 np0005539564 nova_compute[226295]: 2025-11-29 08:49:16.700 226310 DEBUG oslo_concurrency.lockutils [req-eafb5df1-e0b6-4b0c-ba55-52c228dad9aa req-e1e416ad-d19f-4b3f-aae6-a0e0dd3debcb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:16 np0005539564 nova_compute[226295]: 2025-11-29 08:49:16.701 226310 DEBUG nova.compute.manager [req-eafb5df1-e0b6-4b0c-ba55-52c228dad9aa req-e1e416ad-d19f-4b3f-aae6-a0e0dd3debcb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] No waiting events found dispatching network-vif-plugged-e0f5752b-83ef-40d0-87cd-2dc09f977b2a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:49:16 np0005539564 nova_compute[226295]: 2025-11-29 08:49:16.701 226310 WARNING nova.compute.manager [req-eafb5df1-e0b6-4b0c-ba55-52c228dad9aa req-e1e416ad-d19f-4b3f-aae6-a0e0dd3debcb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Received unexpected event network-vif-plugged-e0f5752b-83ef-40d0-87cd-2dc09f977b2a for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:49:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:17.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:17.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:19.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:19 np0005539564 nova_compute[226295]: 2025-11-29 08:49:19.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:19 np0005539564 nova_compute[226295]: 2025-11-29 08:49:19.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:49:19 np0005539564 nova_compute[226295]: 2025-11-29 08:49:19.451 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:19 np0005539564 nova_compute[226295]: 2025-11-29 08:49:19.534 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:19 np0005539564 nova_compute[226295]: 2025-11-29 08:49:19.677 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:20.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:20 np0005539564 nova_compute[226295]: 2025-11-29 08:49:20.342 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:21.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:21 np0005539564 nova_compute[226295]: 2025-11-29 08:49:21.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:22.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:22 np0005539564 nova_compute[226295]: 2025-11-29 08:49:22.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:23.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:23 np0005539564 nova_compute[226295]: 2025-11-29 08:49:23.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:23 np0005539564 nova_compute[226295]: 2025-11-29 08:49:23.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:49:23 np0005539564 nova_compute[226295]: 2025-11-29 08:49:23.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:49:23 np0005539564 nova_compute[226295]: 2025-11-29 08:49:23.380 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:49:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:24.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:24 np0005539564 nova_compute[226295]: 2025-11-29 08:49:24.536 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:25.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:25 np0005539564 nova_compute[226295]: 2025-11-29 08:49:25.343 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #148. Immutable memtables: 0.
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:49:25.444008) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 148
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406165444073, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 2395, "num_deletes": 253, "total_data_size": 5708713, "memory_usage": 5779968, "flush_reason": "Manual Compaction"}
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #149: started
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406165488305, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 149, "file_size": 3742524, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72586, "largest_seqno": 74976, "table_properties": {"data_size": 3732857, "index_size": 6096, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20029, "raw_average_key_size": 20, "raw_value_size": 3713546, "raw_average_value_size": 3804, "num_data_blocks": 265, "num_entries": 976, "num_filter_entries": 976, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405955, "oldest_key_time": 1764405955, "file_creation_time": 1764406165, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 44334 microseconds, and 15868 cpu microseconds.
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:49:25.488346) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #149: 3742524 bytes OK
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:49:25.488365) [db/memtable_list.cc:519] [default] Level-0 commit table #149 started
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:49:25.491032) [db/memtable_list.cc:722] [default] Level-0 commit table #149: memtable #1 done
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:49:25.491047) EVENT_LOG_v1 {"time_micros": 1764406165491042, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:49:25.491064) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 5698295, prev total WAL file size 5698295, number of live WAL files 2.
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000145.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:49:25.492774) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [149(3654KB)], [147(10MB)]
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406165492968, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [149], "files_L6": [147], "score": -1, "input_data_size": 15049217, "oldest_snapshot_seqno": -1}
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #150: 10195 keys, 13102178 bytes, temperature: kUnknown
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406165633217, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 150, "file_size": 13102178, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13036132, "index_size": 39437, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25541, "raw_key_size": 268987, "raw_average_key_size": 26, "raw_value_size": 12857108, "raw_average_value_size": 1261, "num_data_blocks": 1499, "num_entries": 10195, "num_filter_entries": 10195, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764406165, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 150, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:49:25.633497) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 13102178 bytes
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:49:25.634660) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 107.2 rd, 93.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 10.8 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(7.5) write-amplify(3.5) OK, records in: 10720, records dropped: 525 output_compression: NoCompression
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:49:25.634679) EVENT_LOG_v1 {"time_micros": 1764406165634669, "job": 94, "event": "compaction_finished", "compaction_time_micros": 140342, "compaction_time_cpu_micros": 41894, "output_level": 6, "num_output_files": 1, "total_output_size": 13102178, "num_input_records": 10720, "num_output_records": 10195, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406165635760, "job": 94, "event": "table_file_deletion", "file_number": 149}
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000147.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406165638315, "job": 94, "event": "table_file_deletion", "file_number": 147}
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:49:25.492600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:49:25.638510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:49:25.638518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:49:25.638523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:49:25.638527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:25 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:49:25.638531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:26.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:27.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:27 np0005539564 nova_compute[226295]: 2025-11-29 08:49:27.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:28.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:29.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:29 np0005539564 nova_compute[226295]: 2025-11-29 08:49:29.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:29 np0005539564 nova_compute[226295]: 2025-11-29 08:49:29.502 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406154.5017512, 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:49:29 np0005539564 nova_compute[226295]: 2025-11-29 08:49:29.503 226310 INFO nova.compute.manager [-] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:49:29 np0005539564 nova_compute[226295]: 2025-11-29 08:49:29.522 226310 DEBUG nova.compute.manager [None req-d3292b8b-8bfc-43c9-bc89-d32ed47eda2d - - - - - -] [instance: 81b48f2c-0c4a-4cad-bdf9-ee0a6f6b8bf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:49:29 np0005539564 nova_compute[226295]: 2025-11-29 08:49:29.592 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:30.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:30 np0005539564 nova_compute[226295]: 2025-11-29 08:49:30.346 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:31.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:32.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:32 np0005539564 podman[303677]: 2025-11-29 08:49:32.535021679 +0000 UTC m=+0.075605056 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:49:32 np0005539564 podman[303676]: 2025-11-29 08:49:32.544926027 +0000 UTC m=+0.085233457 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:49:32 np0005539564 podman[303675]: 2025-11-29 08:49:32.567158199 +0000 UTC m=+0.113459431 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:49:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:33.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:34.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:34 np0005539564 nova_compute[226295]: 2025-11-29 08:49:34.596 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:35.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:35 np0005539564 nova_compute[226295]: 2025-11-29 08:49:35.348 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:36.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:36 np0005539564 nova_compute[226295]: 2025-11-29 08:49:36.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:36 np0005539564 nova_compute[226295]: 2025-11-29 08:49:36.375 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:36 np0005539564 nova_compute[226295]: 2025-11-29 08:49:36.376 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:36 np0005539564 nova_compute[226295]: 2025-11-29 08:49:36.376 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:36 np0005539564 nova_compute[226295]: 2025-11-29 08:49:36.376 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:49:36 np0005539564 nova_compute[226295]: 2025-11-29 08:49:36.377 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:36 np0005539564 nova_compute[226295]: 2025-11-29 08:49:36.842 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:37 np0005539564 nova_compute[226295]: 2025-11-29 08:49:37.011 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:49:37 np0005539564 nova_compute[226295]: 2025-11-29 08:49:37.012 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4171MB free_disk=20.98813247680664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:49:37 np0005539564 nova_compute[226295]: 2025-11-29 08:49:37.012 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:37 np0005539564 nova_compute[226295]: 2025-11-29 08:49:37.013 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:37.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:37 np0005539564 nova_compute[226295]: 2025-11-29 08:49:37.122 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:49:37 np0005539564 nova_compute[226295]: 2025-11-29 08:49:37.122 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:49:37 np0005539564 nova_compute[226295]: 2025-11-29 08:49:37.207 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:49:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:49:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:49:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:49:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:49:37 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3467271190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:49:37 np0005539564 nova_compute[226295]: 2025-11-29 08:49:37.632 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:37 np0005539564 nova_compute[226295]: 2025-11-29 08:49:37.639 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:49:37 np0005539564 nova_compute[226295]: 2025-11-29 08:49:37.659 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:49:37 np0005539564 nova_compute[226295]: 2025-11-29 08:49:37.680 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:49:37 np0005539564 nova_compute[226295]: 2025-11-29 08:49:37.681 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:38.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:39.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:39 np0005539564 nova_compute[226295]: 2025-11-29 08:49:39.599 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:40.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:40 np0005539564 nova_compute[226295]: 2025-11-29 08:49:40.371 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:41.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:42.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:42 np0005539564 nova_compute[226295]: 2025-11-29 08:49:42.574 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:42.574 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=79, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=78) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:49:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:42.576 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:49:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:43.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:43 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:49:43 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:49:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:44.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:44 np0005539564 nova_compute[226295]: 2025-11-29 08:49:44.603 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:49:44 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3366238603' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:49:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:45.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:45 np0005539564 nova_compute[226295]: 2025-11-29 08:49:45.375 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:46.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:47.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:47 np0005539564 nova_compute[226295]: 2025-11-29 08:49:47.568 226310 DEBUG oslo_concurrency.lockutils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "8e875192-3bcb-45b5-b98e-ed3fcce55779" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:47 np0005539564 nova_compute[226295]: 2025-11-29 08:49:47.568 226310 DEBUG oslo_concurrency.lockutils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:47 np0005539564 nova_compute[226295]: 2025-11-29 08:49:47.568 226310 INFO nova.compute.manager [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Unshelving#033[00m
Nov 29 03:49:47 np0005539564 nova_compute[226295]: 2025-11-29 08:49:47.621 226310 INFO nova.virt.block_device [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Booting with volume 60d0f33b-7946-4e21-ac67-19a83123d623 at /dev/vda#033[00m
Nov 29 03:49:47 np0005539564 nova_compute[226295]: 2025-11-29 08:49:47.787 226310 DEBUG os_brick.utils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:49:47 np0005539564 nova_compute[226295]: 2025-11-29 08:49:47.790 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:47 np0005539564 nova_compute[226295]: 2025-11-29 08:49:47.809 231810 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:47 np0005539564 nova_compute[226295]: 2025-11-29 08:49:47.809 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[57c79f12-9a2a-4d5e-af9f-0dd4ed2dd381]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:47 np0005539564 nova_compute[226295]: 2025-11-29 08:49:47.813 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:47 np0005539564 nova_compute[226295]: 2025-11-29 08:49:47.827 231810 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:47 np0005539564 nova_compute[226295]: 2025-11-29 08:49:47.828 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[d6f5ba4c-edd0-4d18-8aba-19c007982a37]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d3b4384eec44', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:47 np0005539564 nova_compute[226295]: 2025-11-29 08:49:47.831 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:47 np0005539564 nova_compute[226295]: 2025-11-29 08:49:47.846 231810 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:47 np0005539564 nova_compute[226295]: 2025-11-29 08:49:47.846 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[218932b7-8ad5-4d88-a84d-62ce226210cc]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:47 np0005539564 nova_compute[226295]: 2025-11-29 08:49:47.849 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[87a73c5e-b160-486a-9634-39cbd20b480a]: (4, '2e858761-3292-4a17-b38f-a169c3064289') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:47 np0005539564 nova_compute[226295]: 2025-11-29 08:49:47.850 226310 DEBUG oslo_concurrency.processutils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:47 np0005539564 nova_compute[226295]: 2025-11-29 08:49:47.902 226310 DEBUG oslo_concurrency.processutils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CMD "nvme version" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:47 np0005539564 nova_compute[226295]: 2025-11-29 08:49:47.904 226310 DEBUG os_brick.initiator.connectors.lightos [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:49:47 np0005539564 nova_compute[226295]: 2025-11-29 08:49:47.905 226310 DEBUG os_brick.initiator.connectors.lightos [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:49:47 np0005539564 nova_compute[226295]: 2025-11-29 08:49:47.905 226310 DEBUG os_brick.initiator.connectors.lightos [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:49:47 np0005539564 nova_compute[226295]: 2025-11-29 08:49:47.905 226310 DEBUG os_brick.utils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] <== get_connector_properties: return (117ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d3b4384eec44', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2e858761-3292-4a17-b38f-a169c3064289', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:49:47 np0005539564 nova_compute[226295]: 2025-11-29 08:49:47.906 226310 DEBUG nova.virt.block_device [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Updating existing volume attachment record: 239afcdc-17d9-42ca-a2d9-92f2249f8377 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:49:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:48.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:48 np0005539564 nova_compute[226295]: 2025-11-29 08:49:48.843 226310 DEBUG oslo_concurrency.lockutils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:48 np0005539564 nova_compute[226295]: 2025-11-29 08:49:48.844 226310 DEBUG oslo_concurrency.lockutils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:48 np0005539564 nova_compute[226295]: 2025-11-29 08:49:48.849 226310 DEBUG nova.objects.instance [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lazy-loading 'pci_requests' on Instance uuid 8e875192-3bcb-45b5-b98e-ed3fcce55779 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:49:48 np0005539564 nova_compute[226295]: 2025-11-29 08:49:48.864 226310 DEBUG nova.objects.instance [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lazy-loading 'numa_topology' on Instance uuid 8e875192-3bcb-45b5-b98e-ed3fcce55779 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:49:48 np0005539564 nova_compute[226295]: 2025-11-29 08:49:48.877 226310 DEBUG nova.virt.hardware [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:49:48 np0005539564 nova_compute[226295]: 2025-11-29 08:49:48.878 226310 INFO nova.compute.claims [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:49:49 np0005539564 nova_compute[226295]: 2025-11-29 08:49:49.025 226310 DEBUG oslo_concurrency.processutils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:49.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:49 np0005539564 nova_compute[226295]: 2025-11-29 08:49:49.511 226310 DEBUG oslo_concurrency.processutils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:49 np0005539564 nova_compute[226295]: 2025-11-29 08:49:49.518 226310 DEBUG nova.compute.provider_tree [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:49:49 np0005539564 nova_compute[226295]: 2025-11-29 08:49:49.535 226310 DEBUG nova.scheduler.client.report [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:49:49 np0005539564 nova_compute[226295]: 2025-11-29 08:49:49.556 226310 DEBUG oslo_concurrency.lockutils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:49 np0005539564 nova_compute[226295]: 2025-11-29 08:49:49.605 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:50.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:50 np0005539564 nova_compute[226295]: 2025-11-29 08:49:50.181 226310 INFO nova.network.neutron [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Updating port 02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 29 03:49:50 np0005539564 nova_compute[226295]: 2025-11-29 08:49:50.377 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:51.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:52.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:52.578 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '79'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:52 np0005539564 nova_compute[226295]: 2025-11-29 08:49:52.866 226310 DEBUG oslo_concurrency.lockutils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:49:52 np0005539564 nova_compute[226295]: 2025-11-29 08:49:52.866 226310 DEBUG oslo_concurrency.lockutils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquired lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:49:52 np0005539564 nova_compute[226295]: 2025-11-29 08:49:52.866 226310 DEBUG nova.network.neutron [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:49:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:49:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:54.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:54 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:54.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:54 np0005539564 nova_compute[226295]: 2025-11-29 08:49:54.221 226310 DEBUG nova.compute.manager [req-79079024-3de5-4366-b3f0-84d1e779c338 req-db039b50-573f-46f2-b3e2-593880bc529a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Received event network-changed-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:54 np0005539564 nova_compute[226295]: 2025-11-29 08:49:54.221 226310 DEBUG nova.compute.manager [req-79079024-3de5-4366-b3f0-84d1e779c338 req-db039b50-573f-46f2-b3e2-593880bc529a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Refreshing instance network info cache due to event network-changed-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:49:54 np0005539564 nova_compute[226295]: 2025-11-29 08:49:54.222 226310 DEBUG oslo_concurrency.lockutils [req-79079024-3de5-4366-b3f0-84d1e779c338 req-db039b50-573f-46f2-b3e2-593880bc529a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:49:54 np0005539564 nova_compute[226295]: 2025-11-29 08:49:54.607 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:55 np0005539564 nova_compute[226295]: 2025-11-29 08:49:55.417 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:49:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:56 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:56.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:56.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.566 226310 DEBUG nova.network.neutron [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Updating instance_info_cache with network_info: [{"id": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "address": "fa:16:3e:4d:e7:09", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02cfe8ea-9c", "ovs_interfaceid": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.590 226310 DEBUG oslo_concurrency.lockutils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Releasing lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.593 226310 DEBUG nova.virt.libvirt.driver [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.594 226310 INFO nova.virt.libvirt.driver [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Creating image(s)#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.595 226310 DEBUG nova.virt.libvirt.driver [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.596 226310 DEBUG nova.virt.libvirt.driver [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Ensure instance console log exists: /var/lib/nova/instances/8e875192-3bcb-45b5-b98e-ed3fcce55779/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.597 226310 DEBUG oslo_concurrency.lockutils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.598 226310 DEBUG oslo_concurrency.lockutils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.598 226310 DEBUG oslo_concurrency.lockutils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.606 226310 DEBUG nova.virt.libvirt.driver [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Start _get_guest_xml network_info=[{"id": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "address": "fa:16:3e:4d:e7:09", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02cfe8ea-9c", "ovs_interfaceid": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-60d0f33b-7946-4e21-ac67-19a83123d623', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '60d0f33b-7946-4e21-ac67-19a83123d623', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '8e875192-3bcb-45b5-b98e-ed3fcce55779', 'attached_at': '', 'detached_at': '', 'volume_id': '60d0f33b-7946-4e21-ac67-19a83123d623', 'serial': '60d0f33b-7946-4e21-ac67-19a83123d623'}, 'guest_format': None, 'delete_on_termination': True, 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': '239afcdc-17d9-42ca-a2d9-92f2249f8377', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.608 226310 DEBUG oslo_concurrency.lockutils [req-79079024-3de5-4366-b3f0-84d1e779c338 req-db039b50-573f-46f2-b3e2-593880bc529a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.608 226310 DEBUG nova.network.neutron [req-79079024-3de5-4366-b3f0-84d1e779c338 req-db039b50-573f-46f2-b3e2-593880bc529a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Refreshing network info cache for port 02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.617 226310 WARNING nova.virt.libvirt.driver [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.623 226310 DEBUG nova.virt.libvirt.host [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.624 226310 DEBUG nova.virt.libvirt.host [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.634 226310 DEBUG nova.virt.libvirt.host [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.634 226310 DEBUG nova.virt.libvirt.host [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.636 226310 DEBUG nova.virt.libvirt.driver [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.637 226310 DEBUG nova.virt.hardware [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.637 226310 DEBUG nova.virt.hardware [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.638 226310 DEBUG nova.virt.hardware [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.638 226310 DEBUG nova.virt.hardware [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.639 226310 DEBUG nova.virt.hardware [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.639 226310 DEBUG nova.virt.hardware [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.640 226310 DEBUG nova.virt.hardware [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.640 226310 DEBUG nova.virt.hardware [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.641 226310 DEBUG nova.virt.hardware [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.641 226310 DEBUG nova.virt.hardware [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.642 226310 DEBUG nova.virt.hardware [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.642 226310 DEBUG nova.objects.instance [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8e875192-3bcb-45b5-b98e-ed3fcce55779 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.706 226310 DEBUG nova.storage.rbd_utils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] rbd image 8e875192-3bcb-45b5-b98e-ed3fcce55779_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:49:57 np0005539564 nova_compute[226295]: 2025-11-29 08:49:57.712 226310 DEBUG oslo_concurrency.processutils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:49:58 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/200399157' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.210 226310 DEBUG oslo_concurrency.processutils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:49:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:49:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:58 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:58.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:58.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.231 226310 DEBUG nova.virt.libvirt.vif [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:49:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1554816171',display_name='tempest-TestShelveInstance-server-1554816171',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1554816171',id=198,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1032744962',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:49:15Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='c5e836f8387a492c8119be72f1fb9980',ramdisk_id='',reservation_id='r-zqh0kbjv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestShelveInstance-1715482181',owner_user_name='tempest-TestShelveInstance-1715482181-project-member'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:49:47Z,user_data=None,user_id='5dbbf4fd34004538ad08aa4aa6ab8096',uuid=8e875192-3bcb-45b5-b98e-ed3fcce55779,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "address": "fa:16:3e:4d:e7:09", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02cfe8ea-9c", "ovs_interfaceid": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.231 226310 DEBUG nova.network.os_vif_util [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Converting VIF {"id": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "address": "fa:16:3e:4d:e7:09", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02cfe8ea-9c", "ovs_interfaceid": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.232 226310 DEBUG nova.network.os_vif_util [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:e7:09,bridge_name='br-int',has_traffic_filtering=True,id=02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02cfe8ea-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.233 226310 DEBUG nova.objects.instance [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e875192-3bcb-45b5-b98e-ed3fcce55779 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.243 226310 DEBUG nova.virt.libvirt.driver [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:49:58 np0005539564 nova_compute[226295]:  <uuid>8e875192-3bcb-45b5-b98e-ed3fcce55779</uuid>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:  <name>instance-000000c6</name>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <nova:name>tempest-TestShelveInstance-server-1554816171</nova:name>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:49:57</nova:creationTime>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:49:58 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:        <nova:user uuid="5dbbf4fd34004538ad08aa4aa6ab8096">tempest-TestShelveInstance-1715482181-project-member</nova:user>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:        <nova:project uuid="c5e836f8387a492c8119be72f1fb9980">tempest-TestShelveInstance-1715482181</nova:project>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:        <nova:port uuid="02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137">
Nov 29 03:49:58 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <entry name="serial">8e875192-3bcb-45b5-b98e-ed3fcce55779</entry>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <entry name="uuid">8e875192-3bcb-45b5-b98e-ed3fcce55779</entry>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/8e875192-3bcb-45b5-b98e-ed3fcce55779_disk.config">
Nov 29 03:49:58 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:49:58 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="volumes/volume-60d0f33b-7946-4e21-ac67-19a83123d623">
Nov 29 03:49:58 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:49:58 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <serial>60d0f33b-7946-4e21-ac67-19a83123d623</serial>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:4d:e7:09"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <target dev="tap02cfe8ea-9c"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/8e875192-3bcb-45b5-b98e-ed3fcce55779/console.log" append="off"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <input type="keyboard" bus="usb"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:49:58 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:49:58 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:49:58 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:49:58 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.245 226310 DEBUG nova.compute.manager [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Preparing to wait for external event network-vif-plugged-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.245 226310 DEBUG oslo_concurrency.lockutils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.245 226310 DEBUG oslo_concurrency.lockutils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.245 226310 DEBUG oslo_concurrency.lockutils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.246 226310 DEBUG nova.virt.libvirt.vif [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:49:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1554816171',display_name='tempest-TestShelveInstance-server-1554816171',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1554816171',id=198,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1032744962',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:49:15Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='c5e836f8387a492c8119be72f1fb9980',ramdisk_id='',reservation_id='r-zqh0kbjv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='
False',owner_project_name='tempest-TestShelveInstance-1715482181',owner_user_name='tempest-TestShelveInstance-1715482181-project-member'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:49:47Z,user_data=None,user_id='5dbbf4fd34004538ad08aa4aa6ab8096',uuid=8e875192-3bcb-45b5-b98e-ed3fcce55779,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "address": "fa:16:3e:4d:e7:09", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02cfe8ea-9c", "ovs_interfaceid": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.246 226310 DEBUG nova.network.os_vif_util [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Converting VIF {"id": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "address": "fa:16:3e:4d:e7:09", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02cfe8ea-9c", "ovs_interfaceid": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.247 226310 DEBUG nova.network.os_vif_util [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:e7:09,bridge_name='br-int',has_traffic_filtering=True,id=02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02cfe8ea-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.247 226310 DEBUG os_vif [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:e7:09,bridge_name='br-int',has_traffic_filtering=True,id=02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02cfe8ea-9c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.248 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.248 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.248 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.251 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.251 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02cfe8ea-9c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.251 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap02cfe8ea-9c, col_values=(('external_ids', {'iface-id': '02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:e7:09', 'vm-uuid': '8e875192-3bcb-45b5-b98e-ed3fcce55779'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.253 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:58 np0005539564 NetworkManager[48997]: <info>  [1764406198.2546] manager: (tap02cfe8ea-9c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/369)
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.256 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.259 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.260 226310 INFO os_vif [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:e7:09,bridge_name='br-int',has_traffic_filtering=True,id=02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02cfe8ea-9c')#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.309 226310 DEBUG nova.virt.libvirt.driver [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.310 226310 DEBUG nova.virt.libvirt.driver [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.310 226310 DEBUG nova.virt.libvirt.driver [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] No VIF found with MAC fa:16:3e:4d:e7:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.310 226310 INFO nova.virt.libvirt.driver [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Using config drive#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.336 226310 DEBUG nova.storage.rbd_utils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] rbd image 8e875192-3bcb-45b5-b98e-ed3fcce55779_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.397 226310 DEBUG nova.objects.instance [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 8e875192-3bcb-45b5-b98e-ed3fcce55779 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.462 226310 DEBUG nova.objects.instance [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lazy-loading 'keypairs' on Instance uuid 8e875192-3bcb-45b5-b98e-ed3fcce55779 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.909 226310 INFO nova.virt.libvirt.driver [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Creating config drive at /var/lib/nova/instances/8e875192-3bcb-45b5-b98e-ed3fcce55779/disk.config#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.914 226310 DEBUG oslo_concurrency.processutils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e875192-3bcb-45b5-b98e-ed3fcce55779/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr5oio57c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.963 226310 DEBUG nova.network.neutron [req-79079024-3de5-4366-b3f0-84d1e779c338 req-db039b50-573f-46f2-b3e2-593880bc529a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Updated VIF entry in instance network info cache for port 02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.965 226310 DEBUG nova.network.neutron [req-79079024-3de5-4366-b3f0-84d1e779c338 req-db039b50-573f-46f2-b3e2-593880bc529a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Updating instance_info_cache with network_info: [{"id": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "address": "fa:16:3e:4d:e7:09", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02cfe8ea-9c", "ovs_interfaceid": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:49:58 np0005539564 nova_compute[226295]: 2025-11-29 08:49:58.992 226310 DEBUG oslo_concurrency.lockutils [req-79079024-3de5-4366-b3f0-84d1e779c338 req-db039b50-573f-46f2-b3e2-593880bc529a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:49:59 np0005539564 nova_compute[226295]: 2025-11-29 08:49:59.078 226310 DEBUG oslo_concurrency.processutils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e875192-3bcb-45b5-b98e-ed3fcce55779/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr5oio57c" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:59 np0005539564 nova_compute[226295]: 2025-11-29 08:49:59.117 226310 DEBUG nova.storage.rbd_utils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] rbd image 8e875192-3bcb-45b5-b98e-ed3fcce55779_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:49:59 np0005539564 nova_compute[226295]: 2025-11-29 08:49:59.124 226310 DEBUG oslo_concurrency.processutils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e875192-3bcb-45b5-b98e-ed3fcce55779/disk.config 8e875192-3bcb-45b5-b98e-ed3fcce55779_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:59 np0005539564 nova_compute[226295]: 2025-11-29 08:49:59.487 226310 DEBUG oslo_concurrency.processutils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e875192-3bcb-45b5-b98e-ed3fcce55779/disk.config 8e875192-3bcb-45b5-b98e-ed3fcce55779_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.364s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:59 np0005539564 nova_compute[226295]: 2025-11-29 08:49:59.489 226310 INFO nova.virt.libvirt.driver [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Deleting local config drive /var/lib/nova/instances/8e875192-3bcb-45b5-b98e-ed3fcce55779/disk.config because it was imported into RBD.#033[00m
Nov 29 03:49:59 np0005539564 kernel: tap02cfe8ea-9c: entered promiscuous mode
Nov 29 03:49:59 np0005539564 NetworkManager[48997]: <info>  [1764406199.5744] manager: (tap02cfe8ea-9c): new Tun device (/org/freedesktop/NetworkManager/Devices/370)
Nov 29 03:49:59 np0005539564 ovn_controller[130591]: 2025-11-29T08:49:59Z|00801|binding|INFO|Claiming lport 02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 for this chassis.
Nov 29 03:49:59 np0005539564 ovn_controller[130591]: 2025-11-29T08:49:59Z|00802|binding|INFO|02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137: Claiming fa:16:3e:4d:e7:09 10.100.0.8
Nov 29 03:49:59 np0005539564 nova_compute[226295]: 2025-11-29 08:49:59.575 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:59 np0005539564 nova_compute[226295]: 2025-11-29 08:49:59.582 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:59 np0005539564 nova_compute[226295]: 2025-11-29 08:49:59.591 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:59 np0005539564 nova_compute[226295]: 2025-11-29 08:49:59.597 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:59 np0005539564 NetworkManager[48997]: <info>  [1764406199.6081] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/371)
Nov 29 03:49:59 np0005539564 NetworkManager[48997]: <info>  [1764406199.6099] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/372)
Nov 29 03:49:59 np0005539564 nova_compute[226295]: 2025-11-29 08:49:59.606 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:59.612 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:e7:09 10.100.0.8'], port_security=['fa:16:3e:4d:e7:09 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '8e875192-3bcb-45b5-b98e-ed3fcce55779', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0636028a-96d5-4ad7-aa6e-9129edd44385', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5e836f8387a492c8119be72f1fb9980', 'neutron:revision_number': '7', 'neutron:security_group_ids': '9a7a05da-d569-4f0e-9366-7d699d1285bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.242'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf231669-438b-4750-8f96-dc7fed049a6a, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:49:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:59.614 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 in datapath 0636028a-96d5-4ad7-aa6e-9129edd44385 bound to our chassis#033[00m
Nov 29 03:49:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:59.616 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0636028a-96d5-4ad7-aa6e-9129edd44385#033[00m
Nov 29 03:49:59 np0005539564 systemd-machined[190128]: New machine qemu-93-instance-000000c6.
Nov 29 03:49:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:59.632 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bd086d58-2daf-4aba-bccc-0e260dd13010]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:59.634 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0636028a-91 in ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:49:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:59.637 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0636028a-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:49:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:59.637 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3af0ce01-7fcf-430f-bbcb-078b95308818]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:59.639 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[38466be7-cadd-461c-abe5-63a2bb484137]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:59 np0005539564 systemd[1]: Started Virtual Machine qemu-93-instance-000000c6.
Nov 29 03:49:59 np0005539564 systemd-udevd[304110]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:49:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:59.663 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[bd740a06-3e60-4b72-aea5-a78cb7be9bfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:59 np0005539564 NetworkManager[48997]: <info>  [1764406199.6789] device (tap02cfe8ea-9c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:49:59 np0005539564 NetworkManager[48997]: <info>  [1764406199.6798] device (tap02cfe8ea-9c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:49:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:59.699 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7b57379f-1d07-42b5-9c6a-1c2c76df4031]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:59.742 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[cf21c83f-0d98-4a54-aa0b-9ef27dbe5beb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:59 np0005539564 NetworkManager[48997]: <info>  [1764406199.7544] manager: (tap0636028a-90): new Veth device (/org/freedesktop/NetworkManager/Devices/373)
Nov 29 03:49:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:59.755 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[faf2a638-b302-4e65-acdb-1d169d86eeda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:59 np0005539564 nova_compute[226295]: 2025-11-29 08:49:59.791 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:59.792 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[bf920138-5bfd-4c9e-9196-7833a4a1ce8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:59.797 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[3d186e5b-7c31-4710-9705-648fadb4e591]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:59 np0005539564 nova_compute[226295]: 2025-11-29 08:49:59.810 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:59 np0005539564 ovn_controller[130591]: 2025-11-29T08:49:59Z|00803|binding|INFO|Setting lport 02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 ovn-installed in OVS
Nov 29 03:49:59 np0005539564 ovn_controller[130591]: 2025-11-29T08:49:59Z|00804|binding|INFO|Setting lport 02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 up in Southbound
Nov 29 03:49:59 np0005539564 nova_compute[226295]: 2025-11-29 08:49:59.820 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:59 np0005539564 NetworkManager[48997]: <info>  [1764406199.8237] device (tap0636028a-90): carrier: link connected
Nov 29 03:49:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:59.829 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[3e1c6b5a-21e2-4913-9287-e637dab00546]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:59.844 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[037aee31-0181-446f-88d1-ce61d9f3e6a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0636028a-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:11:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 917450, 'reachable_time': 32261, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304141, 'error': None, 'target': 'ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:59.860 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6c796faf-6acc-41e8-8984-1b9836437e21]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7f:1119'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 917450, 'tstamp': 917450}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304142, 'error': None, 'target': 'ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:59.877 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[33e900b0-17a7-41b3-b6ca-143cbd04aff2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0636028a-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:11:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 917450, 'reachable_time': 32261, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304143, 'error': None, 'target': 'ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:49:59.915 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2dc37fd2-7396-48e4-9dda-567c7452cfd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:00.001 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1e0c8ce5-61a8-4df2-8c39-7d2091a1f474]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:00.005 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0636028a-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:00.005 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:00.006 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0636028a-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:50:00 np0005539564 NetworkManager[48997]: <info>  [1764406200.0095] manager: (tap0636028a-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/374)
Nov 29 03:50:00 np0005539564 nova_compute[226295]: 2025-11-29 08:50:00.008 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:00 np0005539564 kernel: tap0636028a-90: entered promiscuous mode
Nov 29 03:50:00 np0005539564 nova_compute[226295]: 2025-11-29 08:50:00.011 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:00.015 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0636028a-90, col_values=(('external_ids', {'iface-id': '58043efe-c991-4914-9f0a-2bba8af4c408'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:50:00 np0005539564 ovn_controller[130591]: 2025-11-29T08:50:00Z|00805|binding|INFO|Releasing lport 58043efe-c991-4914-9f0a-2bba8af4c408 from this chassis (sb_readonly=0)
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:00.021 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0636028a-96d5-4ad7-aa6e-9129edd44385.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0636028a-96d5-4ad7-aa6e-9129edd44385.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:50:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:00.022 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c6ee4ee4-93a1-4f2b-adbf-f7d608f85d89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:00.023 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-0636028a-96d5-4ad7-aa6e-9129edd44385
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/0636028a-96d5-4ad7-aa6e-9129edd44385.pid.haproxy
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 0636028a-96d5-4ad7-aa6e-9129edd44385
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:50:00 np0005539564 ceph-mon[81769]: overall HEALTH_OK
Nov 29 03:50:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:00.024 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385', 'env', 'PROCESS_TAG=haproxy-0636028a-96d5-4ad7-aa6e-9129edd44385', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0636028a-96d5-4ad7-aa6e-9129edd44385.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:50:00 np0005539564 nova_compute[226295]: 2025-11-29 08:50:00.046 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:00.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:00.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:00 np0005539564 nova_compute[226295]: 2025-11-29 08:50:00.462 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:00 np0005539564 podman[304173]: 2025-11-29 08:50:00.472533949 +0000 UTC m=+0.068346170 container create b482a1666564dc55dba1afcf9127d5b58cf7fb83bbd4cb180181379f9283c5f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:50:00 np0005539564 systemd[1]: Started libpod-conmon-b482a1666564dc55dba1afcf9127d5b58cf7fb83bbd4cb180181379f9283c5f5.scope.
Nov 29 03:50:00 np0005539564 podman[304173]: 2025-11-29 08:50:00.429466304 +0000 UTC m=+0.025278525 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:50:00 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:50:00 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05abd162903a98ba09e0ffbe81322df15694af0e26fdab69312a9c25f8f772ed/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:50:00 np0005539564 podman[304173]: 2025-11-29 08:50:00.589476493 +0000 UTC m=+0.185288734 container init b482a1666564dc55dba1afcf9127d5b58cf7fb83bbd4cb180181379f9283c5f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:50:00 np0005539564 podman[304173]: 2025-11-29 08:50:00.597823869 +0000 UTC m=+0.193636130 container start b482a1666564dc55dba1afcf9127d5b58cf7fb83bbd4cb180181379f9283c5f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:50:00 np0005539564 neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385[304189]: [NOTICE]   (304193) : New worker (304195) forked
Nov 29 03:50:00 np0005539564 neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385[304189]: [NOTICE]   (304193) : Loading success.
Nov 29 03:50:00 np0005539564 nova_compute[226295]: 2025-11-29 08:50:00.682 226310 DEBUG nova.compute.manager [req-a2d3df1a-3f9d-494d-8e4e-4bafae5dc1a6 req-b5908d41-fff7-4f2d-81dc-210a8ccd3231 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Received event network-vif-plugged-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:50:00 np0005539564 nova_compute[226295]: 2025-11-29 08:50:00.682 226310 DEBUG oslo_concurrency.lockutils [req-a2d3df1a-3f9d-494d-8e4e-4bafae5dc1a6 req-b5908d41-fff7-4f2d-81dc-210a8ccd3231 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:00 np0005539564 nova_compute[226295]: 2025-11-29 08:50:00.683 226310 DEBUG oslo_concurrency.lockutils [req-a2d3df1a-3f9d-494d-8e4e-4bafae5dc1a6 req-b5908d41-fff7-4f2d-81dc-210a8ccd3231 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:00 np0005539564 nova_compute[226295]: 2025-11-29 08:50:00.683 226310 DEBUG oslo_concurrency.lockutils [req-a2d3df1a-3f9d-494d-8e4e-4bafae5dc1a6 req-b5908d41-fff7-4f2d-81dc-210a8ccd3231 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:00 np0005539564 nova_compute[226295]: 2025-11-29 08:50:00.683 226310 DEBUG nova.compute.manager [req-a2d3df1a-3f9d-494d-8e4e-4bafae5dc1a6 req-b5908d41-fff7-4f2d-81dc-210a8ccd3231 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Processing event network-vif-plugged-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:50:01 np0005539564 nova_compute[226295]: 2025-11-29 08:50:01.513 226310 DEBUG nova.compute.manager [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:50:01 np0005539564 nova_compute[226295]: 2025-11-29 08:50:01.514 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406201.5131192, 8e875192-3bcb-45b5-b98e-ed3fcce55779 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:50:01 np0005539564 nova_compute[226295]: 2025-11-29 08:50:01.514 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] VM Started (Lifecycle Event)#033[00m
Nov 29 03:50:01 np0005539564 nova_compute[226295]: 2025-11-29 08:50:01.521 226310 DEBUG nova.virt.libvirt.driver [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:50:01 np0005539564 nova_compute[226295]: 2025-11-29 08:50:01.526 226310 INFO nova.virt.libvirt.driver [-] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Instance spawned successfully.#033[00m
Nov 29 03:50:01 np0005539564 nova_compute[226295]: 2025-11-29 08:50:01.527 226310 DEBUG nova.compute.manager [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:50:01 np0005539564 nova_compute[226295]: 2025-11-29 08:50:01.542 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:50:01 np0005539564 nova_compute[226295]: 2025-11-29 08:50:01.548 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:50:01 np0005539564 nova_compute[226295]: 2025-11-29 08:50:01.582 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:50:01 np0005539564 nova_compute[226295]: 2025-11-29 08:50:01.583 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406201.5140126, 8e875192-3bcb-45b5-b98e-ed3fcce55779 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:50:01 np0005539564 nova_compute[226295]: 2025-11-29 08:50:01.584 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:50:01 np0005539564 nova_compute[226295]: 2025-11-29 08:50:01.611 226310 DEBUG oslo_concurrency.lockutils [None req-6a950996-fcc1-455a-b310-4a50a058b479 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 14.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:01 np0005539564 nova_compute[226295]: 2025-11-29 08:50:01.616 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:50:01 np0005539564 nova_compute[226295]: 2025-11-29 08:50:01.621 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406201.518507, 8e875192-3bcb-45b5-b98e-ed3fcce55779 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:50:01 np0005539564 nova_compute[226295]: 2025-11-29 08:50:01.621 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:50:01 np0005539564 nova_compute[226295]: 2025-11-29 08:50:01.640 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:50:01 np0005539564 nova_compute[226295]: 2025-11-29 08:50:01.645 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:50:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:50:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:02.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:50:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:50:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:02 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:02.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:02 np0005539564 nova_compute[226295]: 2025-11-29 08:50:02.793 226310 DEBUG nova.compute.manager [req-8a93ab85-9643-405a-bf3b-4b3c3879e392 req-1fe011d8-a3a9-4a31-92ed-57b4fa35b8f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Received event network-vif-plugged-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:50:02 np0005539564 nova_compute[226295]: 2025-11-29 08:50:02.793 226310 DEBUG oslo_concurrency.lockutils [req-8a93ab85-9643-405a-bf3b-4b3c3879e392 req-1fe011d8-a3a9-4a31-92ed-57b4fa35b8f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:02 np0005539564 nova_compute[226295]: 2025-11-29 08:50:02.793 226310 DEBUG oslo_concurrency.lockutils [req-8a93ab85-9643-405a-bf3b-4b3c3879e392 req-1fe011d8-a3a9-4a31-92ed-57b4fa35b8f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:02 np0005539564 nova_compute[226295]: 2025-11-29 08:50:02.794 226310 DEBUG oslo_concurrency.lockutils [req-8a93ab85-9643-405a-bf3b-4b3c3879e392 req-1fe011d8-a3a9-4a31-92ed-57b4fa35b8f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:02 np0005539564 nova_compute[226295]: 2025-11-29 08:50:02.794 226310 DEBUG nova.compute.manager [req-8a93ab85-9643-405a-bf3b-4b3c3879e392 req-1fe011d8-a3a9-4a31-92ed-57b4fa35b8f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] No waiting events found dispatching network-vif-plugged-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:50:02 np0005539564 nova_compute[226295]: 2025-11-29 08:50:02.794 226310 WARNING nova.compute.manager [req-8a93ab85-9643-405a-bf3b-4b3c3879e392 req-1fe011d8-a3a9-4a31-92ed-57b4fa35b8f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Received unexpected event network-vif-plugged-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:50:03 np0005539564 nova_compute[226295]: 2025-11-29 08:50:03.255 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:03 np0005539564 podman[304248]: 2025-11-29 08:50:03.52200242 +0000 UTC m=+0.065284967 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:50:03 np0005539564 podman[304247]: 2025-11-29 08:50:03.547326966 +0000 UTC m=+0.088092865 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 03:50:03 np0005539564 podman[304246]: 2025-11-29 08:50:03.567185153 +0000 UTC m=+0.114677154 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 29 03:50:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:03.767 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:03.768 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:03.769 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:50:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:04.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:50:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:50:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:04 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:04.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:05 np0005539564 nova_compute[226295]: 2025-11-29 08:50:05.465 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:50:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:06.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:50:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:50:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:06 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:06.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:50:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:08.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:08 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:08.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:08 np0005539564 nova_compute[226295]: 2025-11-29 08:50:08.258 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:10.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:10.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:10 np0005539564 nova_compute[226295]: 2025-11-29 08:50:10.469 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:50:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:12.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:12 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:12.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:13 np0005539564 nova_compute[226295]: 2025-11-29 08:50:13.264 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:50:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:14.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:14 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:14.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:14 np0005539564 nova_compute[226295]: 2025-11-29 08:50:14.673 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:15 np0005539564 nova_compute[226295]: 2025-11-29 08:50:15.471 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:16 np0005539564 ovn_controller[130591]: 2025-11-29T08:50:16Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4d:e7:09 10.100.0.8
Nov 29 03:50:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:50:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:16 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:16.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:16.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:16 np0005539564 nova_compute[226295]: 2025-11-29 08:50:16.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:17 np0005539564 nova_compute[226295]: 2025-11-29 08:50:17.336 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:50:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:18.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:50:18 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:18.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:50:18 np0005539564 nova_compute[226295]: 2025-11-29 08:50:18.279 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:50:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:20.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:20 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:20.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:20 np0005539564 nova_compute[226295]: 2025-11-29 08:50:20.508 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:21 np0005539564 nova_compute[226295]: 2025-11-29 08:50:21.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:21 np0005539564 nova_compute[226295]: 2025-11-29 08:50:21.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:50:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:50:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:22 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:22.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:22.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:22.548 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=80, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=79) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:50:22 np0005539564 nova_compute[226295]: 2025-11-29 08:50:22.548 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:22 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:22.550 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:50:23 np0005539564 nova_compute[226295]: 2025-11-29 08:50:23.282 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:23 np0005539564 nova_compute[226295]: 2025-11-29 08:50:23.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:50:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:24.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:50:24 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:24.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:50:24 np0005539564 nova_compute[226295]: 2025-11-29 08:50:24.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:24 np0005539564 nova_compute[226295]: 2025-11-29 08:50:24.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:50:24 np0005539564 nova_compute[226295]: 2025-11-29 08:50:24.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:50:24 np0005539564 nova_compute[226295]: 2025-11-29 08:50:24.874 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:50:24 np0005539564 nova_compute[226295]: 2025-11-29 08:50:24.874 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:50:24 np0005539564 nova_compute[226295]: 2025-11-29 08:50:24.875 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:50:24 np0005539564 nova_compute[226295]: 2025-11-29 08:50:24.875 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e875192-3bcb-45b5-b98e-ed3fcce55779 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:50:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:25 np0005539564 nova_compute[226295]: 2025-11-29 08:50:25.511 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:26 np0005539564 nova_compute[226295]: 2025-11-29 08:50:26.161 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Updating instance_info_cache with network_info: [{"id": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "address": "fa:16:3e:4d:e7:09", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02cfe8ea-9c", "ovs_interfaceid": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:50:26 np0005539564 nova_compute[226295]: 2025-11-29 08:50:26.190 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:50:26 np0005539564 nova_compute[226295]: 2025-11-29 08:50:26.190 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:50:26 np0005539564 nova_compute[226295]: 2025-11-29 08:50:26.190 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:50:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:50:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:50:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:26.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:50:26 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:26.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:50:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:50:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:28.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:28 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:28.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:28 np0005539564 nova_compute[226295]: 2025-11-29 08:50:28.323 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:28 np0005539564 nova_compute[226295]: 2025-11-29 08:50:28.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:30.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:50:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:30 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:30.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:30 np0005539564 nova_compute[226295]: 2025-11-29 08:50:30.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:30.552 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '80'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:50:30 np0005539564 nova_compute[226295]: 2025-11-29 08:50:30.565 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:32.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:32.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:33 np0005539564 nova_compute[226295]: 2025-11-29 08:50:33.327 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:34.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:34.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:34 np0005539564 podman[304307]: 2025-11-29 08:50:34.569734565 +0000 UTC m=+0.103066840 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS)
Nov 29 03:50:34 np0005539564 podman[304313]: 2025-11-29 08:50:34.57068073 +0000 UTC m=+0.097011626 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:50:34 np0005539564 podman[304306]: 2025-11-29 08:50:34.591240106 +0000 UTC m=+0.142268190 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 03:50:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.366 226310 DEBUG nova.compute.manager [req-6eaeb3db-6718-4257-afc3-14d802578e5a req-73e03a5a-d03c-49dd-b6ea-decf5867a54b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Received event network-changed-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.367 226310 DEBUG nova.compute.manager [req-6eaeb3db-6718-4257-afc3-14d802578e5a req-73e03a5a-d03c-49dd-b6ea-decf5867a54b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Refreshing instance network info cache due to event network-changed-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.367 226310 DEBUG oslo_concurrency.lockutils [req-6eaeb3db-6718-4257-afc3-14d802578e5a req-73e03a5a-d03c-49dd-b6ea-decf5867a54b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.367 226310 DEBUG oslo_concurrency.lockutils [req-6eaeb3db-6718-4257-afc3-14d802578e5a req-73e03a5a-d03c-49dd-b6ea-decf5867a54b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.368 226310 DEBUG nova.network.neutron [req-6eaeb3db-6718-4257-afc3-14d802578e5a req-73e03a5a-d03c-49dd-b6ea-decf5867a54b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Refreshing network info cache for port 02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.373 226310 DEBUG oslo_concurrency.lockutils [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "8e875192-3bcb-45b5-b98e-ed3fcce55779" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.373 226310 DEBUG oslo_concurrency.lockutils [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.374 226310 DEBUG oslo_concurrency.lockutils [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.374 226310 DEBUG oslo_concurrency.lockutils [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.374 226310 DEBUG oslo_concurrency.lockutils [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.376 226310 INFO nova.compute.manager [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Terminating instance#033[00m
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.377 226310 DEBUG nova.compute.manager [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:50:35 np0005539564 kernel: tap02cfe8ea-9c (unregistering): left promiscuous mode
Nov 29 03:50:35 np0005539564 NetworkManager[48997]: <info>  [1764406235.4362] device (tap02cfe8ea-9c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.457 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:35 np0005539564 ovn_controller[130591]: 2025-11-29T08:50:35Z|00806|binding|INFO|Releasing lport 02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 from this chassis (sb_readonly=0)
Nov 29 03:50:35 np0005539564 ovn_controller[130591]: 2025-11-29T08:50:35Z|00807|binding|INFO|Setting lport 02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 down in Southbound
Nov 29 03:50:35 np0005539564 ovn_controller[130591]: 2025-11-29T08:50:35Z|00808|binding|INFO|Removing iface tap02cfe8ea-9c ovn-installed in OVS
Nov 29 03:50:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:35.463 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:e7:09 10.100.0.8'], port_security=['fa:16:3e:4d:e7:09 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '8e875192-3bcb-45b5-b98e-ed3fcce55779', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0636028a-96d5-4ad7-aa6e-9129edd44385', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5e836f8387a492c8119be72f1fb9980', 'neutron:revision_number': '9', 'neutron:security_group_ids': '9a7a05da-d569-4f0e-9366-7d699d1285bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf231669-438b-4750-8f96-dc7fed049a6a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:50:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:35.464 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 in datapath 0636028a-96d5-4ad7-aa6e-9129edd44385 unbound from our chassis#033[00m
Nov 29 03:50:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:35.465 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0636028a-96d5-4ad7-aa6e-9129edd44385, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:50:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:35.466 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8a576cdd-a745-4886-8da7-16b5c2a37fa9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:35.467 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385 namespace which is not needed anymore#033[00m
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.504 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:35 np0005539564 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000c6.scope: Deactivated successfully.
Nov 29 03:50:35 np0005539564 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000c6.scope: Consumed 17.601s CPU time.
Nov 29 03:50:35 np0005539564 systemd-machined[190128]: Machine qemu-93-instance-000000c6 terminated.
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.566 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:35 np0005539564 NetworkManager[48997]: <info>  [1764406235.6048] manager: (tap02cfe8ea-9c): new Tun device (/org/freedesktop/NetworkManager/Devices/375)
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.605 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.612 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.621 226310 INFO nova.virt.libvirt.driver [-] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Instance destroyed successfully.#033[00m
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.621 226310 DEBUG nova.objects.instance [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lazy-loading 'resources' on Instance uuid 8e875192-3bcb-45b5-b98e-ed3fcce55779 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:50:35 np0005539564 neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385[304189]: [NOTICE]   (304193) : haproxy version is 2.8.14-c23fe91
Nov 29 03:50:35 np0005539564 neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385[304189]: [NOTICE]   (304193) : path to executable is /usr/sbin/haproxy
Nov 29 03:50:35 np0005539564 neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385[304189]: [WARNING]  (304193) : Exiting Master process...
Nov 29 03:50:35 np0005539564 neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385[304189]: [ALERT]    (304193) : Current worker (304195) exited with code 143 (Terminated)
Nov 29 03:50:35 np0005539564 neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385[304189]: [WARNING]  (304193) : All workers exited. Exiting... (0)
Nov 29 03:50:35 np0005539564 systemd[1]: libpod-b482a1666564dc55dba1afcf9127d5b58cf7fb83bbd4cb180181379f9283c5f5.scope: Deactivated successfully.
Nov 29 03:50:35 np0005539564 podman[304395]: 2025-11-29 08:50:35.635100517 +0000 UTC m=+0.054870936 container died b482a1666564dc55dba1afcf9127d5b58cf7fb83bbd4cb180181379f9283c5f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.652 226310 DEBUG nova.virt.libvirt.vif [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:49:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1554816171',display_name='tempest-TestShelveInstance-server-1554816171',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1554816171',id=198,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGtvikMAmWyKtz3G3oHOmTiaNE9UQ1Ju0e0lx2pz3ihtev7i/wsJX3O3ljU9qYZfHQILbh0YI0gMgFhLFsRZmDRrEreGW4wntvuPAkftPwbOEG8U0ceDBmuI6Y+BB4Dm+g==',key_name='tempest-TestShelveInstance-1032744962',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:50:01Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c5e836f8387a492c8119be72f1fb9980',ramdisk_id='',reservation_id='r-zqh0kbjv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestShelveInstance-1715482181',owner_user_name='tempest-TestShelveInstance-1715482181-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:50:01Z,user_data=None,user_id='5dbbf4fd34004538ad08aa4aa6ab8096',uuid=8e875192-3bcb-45b5-b98e-ed3fcce55779,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "address": "fa:16:3e:4d:e7:09", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02cfe8ea-9c", "ovs_interfaceid": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.653 226310 DEBUG nova.network.os_vif_util [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Converting VIF {"id": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "address": "fa:16:3e:4d:e7:09", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02cfe8ea-9c", "ovs_interfaceid": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.654 226310 DEBUG nova.network.os_vif_util [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4d:e7:09,bridge_name='br-int',has_traffic_filtering=True,id=02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02cfe8ea-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.654 226310 DEBUG os_vif [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:e7:09,bridge_name='br-int',has_traffic_filtering=True,id=02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02cfe8ea-9c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.656 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.656 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02cfe8ea-9c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.659 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.661 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.663 226310 INFO os_vif [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:e7:09,bridge_name='br-int',has_traffic_filtering=True,id=02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02cfe8ea-9c')#033[00m
Nov 29 03:50:35 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b482a1666564dc55dba1afcf9127d5b58cf7fb83bbd4cb180181379f9283c5f5-userdata-shm.mount: Deactivated successfully.
Nov 29 03:50:35 np0005539564 systemd[1]: var-lib-containers-storage-overlay-05abd162903a98ba09e0ffbe81322df15694af0e26fdab69312a9c25f8f772ed-merged.mount: Deactivated successfully.
Nov 29 03:50:35 np0005539564 podman[304395]: 2025-11-29 08:50:35.684162334 +0000 UTC m=+0.103932733 container cleanup b482a1666564dc55dba1afcf9127d5b58cf7fb83bbd4cb180181379f9283c5f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:50:35 np0005539564 systemd[1]: libpod-conmon-b482a1666564dc55dba1afcf9127d5b58cf7fb83bbd4cb180181379f9283c5f5.scope: Deactivated successfully.
Nov 29 03:50:35 np0005539564 podman[304446]: 2025-11-29 08:50:35.757156699 +0000 UTC m=+0.047878927 container remove b482a1666564dc55dba1afcf9127d5b58cf7fb83bbd4cb180181379f9283c5f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:50:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:35.763 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[99c39aeb-c8d4-41a2-a423-628a7dd7ec99]: (4, ('Sat Nov 29 08:50:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385 (b482a1666564dc55dba1afcf9127d5b58cf7fb83bbd4cb180181379f9283c5f5)\nb482a1666564dc55dba1afcf9127d5b58cf7fb83bbd4cb180181379f9283c5f5\nSat Nov 29 08:50:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385 (b482a1666564dc55dba1afcf9127d5b58cf7fb83bbd4cb180181379f9283c5f5)\nb482a1666564dc55dba1afcf9127d5b58cf7fb83bbd4cb180181379f9283c5f5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:35.764 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[46adf98b-77a3-422d-892d-18e026466fcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:35.765 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0636028a-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.767 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:35 np0005539564 kernel: tap0636028a-90: left promiscuous mode
Nov 29 03:50:35 np0005539564 nova_compute[226295]: 2025-11-29 08:50:35.781 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:35.783 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8dd491fd-052d-4865-a156-0967cf2e018b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:35.796 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[de51b2dc-8c24-408f-b301-db89f66b24e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:35.797 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1b562c5b-45c3-47c4-b1e4-8dadece613a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:35.813 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[dccdc0db-415b-4267-a78e-453996a86186]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 917441, 'reachable_time': 41174, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304462, 'error': None, 'target': 'ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:35 np0005539564 systemd[1]: run-netns-ovnmeta\x2d0636028a\x2d96d5\x2d4ad7\x2daa6e\x2d9129edd44385.mount: Deactivated successfully.
Nov 29 03:50:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:35.819 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:50:35 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:50:35.819 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[2a0b8033-55b1-4d99-9296-2bbd662f3272]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:36 np0005539564 nova_compute[226295]: 2025-11-29 08:50:36.007 226310 INFO nova.virt.libvirt.driver [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Deleting instance files /var/lib/nova/instances/8e875192-3bcb-45b5-b98e-ed3fcce55779_del#033[00m
Nov 29 03:50:36 np0005539564 nova_compute[226295]: 2025-11-29 08:50:36.008 226310 INFO nova.virt.libvirt.driver [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Deletion of /var/lib/nova/instances/8e875192-3bcb-45b5-b98e-ed3fcce55779_del complete#033[00m
Nov 29 03:50:36 np0005539564 nova_compute[226295]: 2025-11-29 08:50:36.104 226310 INFO nova.compute.manager [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:50:36 np0005539564 nova_compute[226295]: 2025-11-29 08:50:36.105 226310 DEBUG oslo.service.loopingcall [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:50:36 np0005539564 nova_compute[226295]: 2025-11-29 08:50:36.106 226310 DEBUG nova.compute.manager [-] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:50:36 np0005539564 nova_compute[226295]: 2025-11-29 08:50:36.106 226310 DEBUG nova.network.neutron [-] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:50:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:36.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:36.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:36 np0005539564 nova_compute[226295]: 2025-11-29 08:50:36.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:36 np0005539564 nova_compute[226295]: 2025-11-29 08:50:36.376 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:36 np0005539564 nova_compute[226295]: 2025-11-29 08:50:36.377 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:36 np0005539564 nova_compute[226295]: 2025-11-29 08:50:36.377 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:36 np0005539564 nova_compute[226295]: 2025-11-29 08:50:36.378 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:50:36 np0005539564 nova_compute[226295]: 2025-11-29 08:50:36.378 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:50:36 np0005539564 nova_compute[226295]: 2025-11-29 08:50:36.786 226310 DEBUG nova.network.neutron [req-6eaeb3db-6718-4257-afc3-14d802578e5a req-73e03a5a-d03c-49dd-b6ea-decf5867a54b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Updated VIF entry in instance network info cache for port 02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:50:36 np0005539564 nova_compute[226295]: 2025-11-29 08:50:36.787 226310 DEBUG nova.network.neutron [req-6eaeb3db-6718-4257-afc3-14d802578e5a req-73e03a5a-d03c-49dd-b6ea-decf5867a54b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Updating instance_info_cache with network_info: [{"id": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "address": "fa:16:3e:4d:e7:09", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02cfe8ea-9c", "ovs_interfaceid": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:50:36 np0005539564 nova_compute[226295]: 2025-11-29 08:50:36.795 226310 DEBUG nova.network.neutron [-] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:50:36 np0005539564 nova_compute[226295]: 2025-11-29 08:50:36.821 226310 DEBUG oslo_concurrency.lockutils [req-6eaeb3db-6718-4257-afc3-14d802578e5a req-73e03a5a-d03c-49dd-b6ea-decf5867a54b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:50:36 np0005539564 nova_compute[226295]: 2025-11-29 08:50:36.823 226310 INFO nova.compute.manager [-] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Took 0.72 seconds to deallocate network for instance.#033[00m
Nov 29 03:50:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:50:36 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2854528638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:50:36 np0005539564 nova_compute[226295]: 2025-11-29 08:50:36.920 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.097 226310 INFO nova.compute.manager [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Took 0.27 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.100 226310 DEBUG nova.compute.manager [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Deleting volume: 60d0f33b-7946-4e21-ac67-19a83123d623 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.143 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.145 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4184MB free_disk=20.9425048828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.145 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.145 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.250 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 8e875192-3bcb-45b5-b98e-ed3fcce55779 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.250 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.251 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.319 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.478 226310 DEBUG oslo_concurrency.lockutils [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:50:37 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3791662641' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.781 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.790 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.797 226310 DEBUG nova.compute.manager [req-9af3371e-66c1-46bc-9caf-a43be96bada1 req-1ec1d7f8-6018-4c30-83bf-856d40e49111 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Received event network-vif-unplugged-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.798 226310 DEBUG oslo_concurrency.lockutils [req-9af3371e-66c1-46bc-9caf-a43be96bada1 req-1ec1d7f8-6018-4c30-83bf-856d40e49111 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.798 226310 DEBUG oslo_concurrency.lockutils [req-9af3371e-66c1-46bc-9caf-a43be96bada1 req-1ec1d7f8-6018-4c30-83bf-856d40e49111 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.799 226310 DEBUG oslo_concurrency.lockutils [req-9af3371e-66c1-46bc-9caf-a43be96bada1 req-1ec1d7f8-6018-4c30-83bf-856d40e49111 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.799 226310 DEBUG nova.compute.manager [req-9af3371e-66c1-46bc-9caf-a43be96bada1 req-1ec1d7f8-6018-4c30-83bf-856d40e49111 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] No waiting events found dispatching network-vif-unplugged-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.800 226310 WARNING nova.compute.manager [req-9af3371e-66c1-46bc-9caf-a43be96bada1 req-1ec1d7f8-6018-4c30-83bf-856d40e49111 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Received unexpected event network-vif-unplugged-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.801 226310 DEBUG nova.compute.manager [req-9af3371e-66c1-46bc-9caf-a43be96bada1 req-1ec1d7f8-6018-4c30-83bf-856d40e49111 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Received event network-vif-plugged-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.801 226310 DEBUG oslo_concurrency.lockutils [req-9af3371e-66c1-46bc-9caf-a43be96bada1 req-1ec1d7f8-6018-4c30-83bf-856d40e49111 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.802 226310 DEBUG oslo_concurrency.lockutils [req-9af3371e-66c1-46bc-9caf-a43be96bada1 req-1ec1d7f8-6018-4c30-83bf-856d40e49111 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.803 226310 DEBUG oslo_concurrency.lockutils [req-9af3371e-66c1-46bc-9caf-a43be96bada1 req-1ec1d7f8-6018-4c30-83bf-856d40e49111 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.803 226310 DEBUG nova.compute.manager [req-9af3371e-66c1-46bc-9caf-a43be96bada1 req-1ec1d7f8-6018-4c30-83bf-856d40e49111 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] No waiting events found dispatching network-vif-plugged-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.804 226310 WARNING nova.compute.manager [req-9af3371e-66c1-46bc-9caf-a43be96bada1 req-1ec1d7f8-6018-4c30-83bf-856d40e49111 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Received unexpected event network-vif-plugged-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.804 226310 DEBUG nova.compute.manager [req-9af3371e-66c1-46bc-9caf-a43be96bada1 req-1ec1d7f8-6018-4c30-83bf-856d40e49111 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Received event network-vif-deleted-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.810 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.848 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.849 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.850 226310 DEBUG oslo_concurrency.lockutils [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.372s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:37 np0005539564 nova_compute[226295]: 2025-11-29 08:50:37.953 226310 DEBUG oslo_concurrency.processutils [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:50:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:38.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:38.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:50:38 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1581305535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:50:38 np0005539564 nova_compute[226295]: 2025-11-29 08:50:38.475 226310 DEBUG oslo_concurrency.processutils [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:50:38 np0005539564 nova_compute[226295]: 2025-11-29 08:50:38.486 226310 DEBUG nova.compute.provider_tree [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:50:38 np0005539564 nova_compute[226295]: 2025-11-29 08:50:38.510 226310 DEBUG nova.scheduler.client.report [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:50:38 np0005539564 nova_compute[226295]: 2025-11-29 08:50:38.536 226310 DEBUG oslo_concurrency.lockutils [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:38 np0005539564 nova_compute[226295]: 2025-11-29 08:50:38.567 226310 INFO nova.scheduler.client.report [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Deleted allocations for instance 8e875192-3bcb-45b5-b98e-ed3fcce55779#033[00m
Nov 29 03:50:38 np0005539564 nova_compute[226295]: 2025-11-29 08:50:38.661 226310 DEBUG oslo_concurrency.lockutils [None req-0ff509ca-c99c-4124-8ce9-7e3d8da627af 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.288s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:50:38 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1245714797' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:50:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:50:38 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1245714797' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:50:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:50:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:40.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:50:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:40.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:40 np0005539564 nova_compute[226295]: 2025-11-29 08:50:40.569 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:40 np0005539564 nova_compute[226295]: 2025-11-29 08:50:40.659 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:42.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:42.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:44.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:50:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:44.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:50:44 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:50:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:45 np0005539564 nova_compute[226295]: 2025-11-29 08:50:45.573 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:45 np0005539564 nova_compute[226295]: 2025-11-29 08:50:45.661 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:45 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:50:45 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:50:46 np0005539564 nova_compute[226295]: 2025-11-29 08:50:46.095 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:46 np0005539564 nova_compute[226295]: 2025-11-29 08:50:46.279 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:46.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:46.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:50:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 64K writes, 253K keys, 64K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.04 MB/s#012Cumulative WAL: 64K writes, 23K syncs, 2.71 writes per sync, written: 0.25 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4901 writes, 19K keys, 4901 commit groups, 1.0 writes per commit group, ingest: 20.92 MB, 0.03 MB/s#012Interval WAL: 4901 writes, 1995 syncs, 2.46 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ba4d07f610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ba4d07f610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
Nov 29 03:50:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:50:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:48.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:48 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:48.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:50.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:50:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:50 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:50.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:50 np0005539564 nova_compute[226295]: 2025-11-29 08:50:50.592 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:50 np0005539564 nova_compute[226295]: 2025-11-29 08:50:50.619 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406235.6181731, 8e875192-3bcb-45b5-b98e-ed3fcce55779 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:50:50 np0005539564 nova_compute[226295]: 2025-11-29 08:50:50.619 226310 INFO nova.compute.manager [-] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:50:50 np0005539564 nova_compute[226295]: 2025-11-29 08:50:50.643 226310 DEBUG nova.compute.manager [None req-6d9a670a-9631-44fc-a96c-f2fb04f5b815 - - - - - -] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:50:50 np0005539564 nova_compute[226295]: 2025-11-29 08:50:50.663 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:50:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:52.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:50:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:50:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:52 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:52.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:52 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:50:52 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:50:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:50:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:54.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:50:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:54.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:55 np0005539564 nova_compute[226295]: 2025-11-29 08:50:55.595 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:55 np0005539564 nova_compute[226295]: 2025-11-29 08:50:55.666 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:50:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:50:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:56.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:50:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:56 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:56.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:50:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:58.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:50:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:58 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:58.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:00.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:51:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:00 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:00.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:00 np0005539564 nova_compute[226295]: 2025-11-29 08:51:00.597 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:00 np0005539564 nova_compute[226295]: 2025-11-29 08:51:00.668 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:51:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:02.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:02 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:02.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:02.360 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=81, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=80) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:51:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:02.361 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:51:02 np0005539564 nova_compute[226295]: 2025-11-29 08:51:02.406 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:03.767 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:51:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:03.768 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:51:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:03.768 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:51:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:51:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:04.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:04 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:04.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:05 np0005539564 podman[304715]: 2025-11-29 08:51:05.548823581 +0000 UTC m=+0.094514989 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 29 03:51:05 np0005539564 podman[304716]: 2025-11-29 08:51:05.557063394 +0000 UTC m=+0.107050237 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 03:51:05 np0005539564 podman[304714]: 2025-11-29 08:51:05.574701341 +0000 UTC m=+0.120421759 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:51:05 np0005539564 nova_compute[226295]: 2025-11-29 08:51:05.598 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:05 np0005539564 nova_compute[226295]: 2025-11-29 08:51:05.670 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:06.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:51:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:06.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:51:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:06.363 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '81'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:51:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:08.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:08.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:10.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:51:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:10 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:10.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:10 np0005539564 nova_compute[226295]: 2025-11-29 08:51:10.601 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:10 np0005539564 nova_compute[226295]: 2025-11-29 08:51:10.671 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:12.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:12.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:51:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:14.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:51:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:51:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:14.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:51:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:15 np0005539564 nova_compute[226295]: 2025-11-29 08:51:15.603 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:15 np0005539564 nova_compute[226295]: 2025-11-29 08:51:15.674 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:16.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:16.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:18.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:18.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:19 np0005539564 nova_compute[226295]: 2025-11-29 08:51:19.844 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:19 np0005539564 nova_compute[226295]: 2025-11-29 08:51:19.845 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:20.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:20.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:20 np0005539564 nova_compute[226295]: 2025-11-29 08:51:20.606 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:20 np0005539564 nova_compute[226295]: 2025-11-29 08:51:20.628 226310 DEBUG oslo_concurrency.lockutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Acquiring lock "defc87c3-85a5-47bb-8d50-3121d5d780c1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:51:20 np0005539564 nova_compute[226295]: 2025-11-29 08:51:20.629 226310 DEBUG oslo_concurrency.lockutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "defc87c3-85a5-47bb-8d50-3121d5d780c1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:51:20 np0005539564 nova_compute[226295]: 2025-11-29 08:51:20.650 226310 DEBUG nova.compute.manager [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:51:20 np0005539564 nova_compute[226295]: 2025-11-29 08:51:20.677 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:20 np0005539564 nova_compute[226295]: 2025-11-29 08:51:20.759 226310 DEBUG oslo_concurrency.lockutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:51:20 np0005539564 nova_compute[226295]: 2025-11-29 08:51:20.759 226310 DEBUG oslo_concurrency.lockutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:51:20 np0005539564 nova_compute[226295]: 2025-11-29 08:51:20.768 226310 DEBUG nova.virt.hardware [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:51:20 np0005539564 nova_compute[226295]: 2025-11-29 08:51:20.769 226310 INFO nova.compute.claims [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:51:20 np0005539564 nova_compute[226295]: 2025-11-29 08:51:20.892 226310 DEBUG oslo_concurrency.processutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:51:21 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:51:21 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1206935863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:51:21 np0005539564 nova_compute[226295]: 2025-11-29 08:51:21.344 226310 DEBUG oslo_concurrency.processutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:51:21 np0005539564 nova_compute[226295]: 2025-11-29 08:51:21.354 226310 DEBUG nova.compute.provider_tree [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:51:21 np0005539564 nova_compute[226295]: 2025-11-29 08:51:21.384 226310 DEBUG nova.scheduler.client.report [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:51:21 np0005539564 nova_compute[226295]: 2025-11-29 08:51:21.422 226310 DEBUG oslo_concurrency.lockutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:51:21 np0005539564 nova_compute[226295]: 2025-11-29 08:51:21.423 226310 DEBUG nova.compute.manager [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:51:21 np0005539564 nova_compute[226295]: 2025-11-29 08:51:21.483 226310 DEBUG nova.compute.manager [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:51:21 np0005539564 nova_compute[226295]: 2025-11-29 08:51:21.484 226310 DEBUG nova.network.neutron [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:51:21 np0005539564 nova_compute[226295]: 2025-11-29 08:51:21.513 226310 INFO nova.virt.libvirt.driver [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:51:21 np0005539564 nova_compute[226295]: 2025-11-29 08:51:21.536 226310 DEBUG nova.compute.manager [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:51:21 np0005539564 nova_compute[226295]: 2025-11-29 08:51:21.625 226310 DEBUG nova.compute.manager [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:51:21 np0005539564 nova_compute[226295]: 2025-11-29 08:51:21.627 226310 DEBUG nova.virt.libvirt.driver [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:51:21 np0005539564 nova_compute[226295]: 2025-11-29 08:51:21.628 226310 INFO nova.virt.libvirt.driver [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Creating image(s)#033[00m
Nov 29 03:51:21 np0005539564 nova_compute[226295]: 2025-11-29 08:51:21.672 226310 DEBUG nova.storage.rbd_utils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] rbd image defc87c3-85a5-47bb-8d50-3121d5d780c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:51:21 np0005539564 nova_compute[226295]: 2025-11-29 08:51:21.711 226310 DEBUG nova.storage.rbd_utils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] rbd image defc87c3-85a5-47bb-8d50-3121d5d780c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:51:21 np0005539564 nova_compute[226295]: 2025-11-29 08:51:21.747 226310 DEBUG nova.storage.rbd_utils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] rbd image defc87c3-85a5-47bb-8d50-3121d5d780c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:51:21 np0005539564 nova_compute[226295]: 2025-11-29 08:51:21.753 226310 DEBUG oslo_concurrency.processutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:51:21 np0005539564 nova_compute[226295]: 2025-11-29 08:51:21.850 226310 DEBUG oslo_concurrency.processutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:51:21 np0005539564 nova_compute[226295]: 2025-11-29 08:51:21.852 226310 DEBUG oslo_concurrency.lockutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:51:21 np0005539564 nova_compute[226295]: 2025-11-29 08:51:21.852 226310 DEBUG oslo_concurrency.lockutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:51:21 np0005539564 nova_compute[226295]: 2025-11-29 08:51:21.853 226310 DEBUG oslo_concurrency.lockutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:51:21 np0005539564 nova_compute[226295]: 2025-11-29 08:51:21.887 226310 DEBUG nova.storage.rbd_utils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] rbd image defc87c3-85a5-47bb-8d50-3121d5d780c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:51:21 np0005539564 nova_compute[226295]: 2025-11-29 08:51:21.892 226310 DEBUG oslo_concurrency.processutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf defc87c3-85a5-47bb-8d50-3121d5d780c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:51:22 np0005539564 nova_compute[226295]: 2025-11-29 08:51:22.231 226310 DEBUG nova.policy [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7b36e3f2406043c2a741c24fb14de7df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0596f9d1e5a5444ca2640f6e8244d53f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:51:22 np0005539564 nova_compute[226295]: 2025-11-29 08:51:22.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:22 np0005539564 nova_compute[226295]: 2025-11-29 08:51:22.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:51:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:22.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:22.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:22 np0005539564 nova_compute[226295]: 2025-11-29 08:51:22.513 226310 DEBUG oslo_concurrency.processutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf defc87c3-85a5-47bb-8d50-3121d5d780c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:51:22 np0005539564 nova_compute[226295]: 2025-11-29 08:51:22.609 226310 DEBUG nova.storage.rbd_utils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] resizing rbd image defc87c3-85a5-47bb-8d50-3121d5d780c1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:51:22 np0005539564 nova_compute[226295]: 2025-11-29 08:51:22.765 226310 DEBUG nova.objects.instance [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lazy-loading 'migration_context' on Instance uuid defc87c3-85a5-47bb-8d50-3121d5d780c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:51:22 np0005539564 nova_compute[226295]: 2025-11-29 08:51:22.799 226310 DEBUG nova.virt.libvirt.driver [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:51:22 np0005539564 nova_compute[226295]: 2025-11-29 08:51:22.799 226310 DEBUG nova.virt.libvirt.driver [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Ensure instance console log exists: /var/lib/nova/instances/defc87c3-85a5-47bb-8d50-3121d5d780c1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:51:22 np0005539564 nova_compute[226295]: 2025-11-29 08:51:22.800 226310 DEBUG oslo_concurrency.lockutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:51:22 np0005539564 nova_compute[226295]: 2025-11-29 08:51:22.801 226310 DEBUG oslo_concurrency.lockutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:51:22 np0005539564 nova_compute[226295]: 2025-11-29 08:51:22.801 226310 DEBUG oslo_concurrency.lockutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:51:22 np0005539564 ovn_controller[130591]: 2025-11-29T08:51:22Z|00809|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Nov 29 03:51:23 np0005539564 nova_compute[226295]: 2025-11-29 08:51:23.845 226310 DEBUG nova.network.neutron [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Successfully created port: fecb7ef7-1d7d-446e-a531-15713ec4c8ce _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:51:24 np0005539564 nova_compute[226295]: 2025-11-29 08:51:24.345 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:24 np0005539564 nova_compute[226295]: 2025-11-29 08:51:24.345 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:24.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:24.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:25 np0005539564 nova_compute[226295]: 2025-11-29 08:51:25.609 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:25 np0005539564 nova_compute[226295]: 2025-11-29 08:51:25.679 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:25 np0005539564 nova_compute[226295]: 2025-11-29 08:51:25.954 226310 DEBUG nova.network.neutron [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Successfully updated port: fecb7ef7-1d7d-446e-a531-15713ec4c8ce _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:51:25 np0005539564 nova_compute[226295]: 2025-11-29 08:51:25.978 226310 DEBUG oslo_concurrency.lockutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Acquiring lock "refresh_cache-defc87c3-85a5-47bb-8d50-3121d5d780c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:51:25 np0005539564 nova_compute[226295]: 2025-11-29 08:51:25.978 226310 DEBUG oslo_concurrency.lockutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Acquired lock "refresh_cache-defc87c3-85a5-47bb-8d50-3121d5d780c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:51:25 np0005539564 nova_compute[226295]: 2025-11-29 08:51:25.978 226310 DEBUG nova.network.neutron [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:51:26 np0005539564 nova_compute[226295]: 2025-11-29 08:51:26.133 226310 DEBUG nova.compute.manager [req-55850a4a-fa57-4791-a345-af0980522319 req-1120f06b-84a9-4aa1-8988-2840765c7c84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Received event network-changed-fecb7ef7-1d7d-446e-a531-15713ec4c8ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:51:26 np0005539564 nova_compute[226295]: 2025-11-29 08:51:26.134 226310 DEBUG nova.compute.manager [req-55850a4a-fa57-4791-a345-af0980522319 req-1120f06b-84a9-4aa1-8988-2840765c7c84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Refreshing instance network info cache due to event network-changed-fecb7ef7-1d7d-446e-a531-15713ec4c8ce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:51:26 np0005539564 nova_compute[226295]: 2025-11-29 08:51:26.134 226310 DEBUG oslo_concurrency.lockutils [req-55850a4a-fa57-4791-a345-af0980522319 req-1120f06b-84a9-4aa1-8988-2840765c7c84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-defc87c3-85a5-47bb-8d50-3121d5d780c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:51:26 np0005539564 nova_compute[226295]: 2025-11-29 08:51:26.239 226310 DEBUG nova.network.neutron [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:51:26 np0005539564 nova_compute[226295]: 2025-11-29 08:51:26.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:26 np0005539564 nova_compute[226295]: 2025-11-29 08:51:26.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:51:26 np0005539564 nova_compute[226295]: 2025-11-29 08:51:26.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:51:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:26.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:26.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:26 np0005539564 nova_compute[226295]: 2025-11-29 08:51:26.366 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:51:26 np0005539564 nova_compute[226295]: 2025-11-29 08:51:26.367 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:51:27 np0005539564 nova_compute[226295]: 2025-11-29 08:51:27.443 226310 DEBUG nova.network.neutron [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Updating instance_info_cache with network_info: [{"id": "fecb7ef7-1d7d-446e-a531-15713ec4c8ce", "address": "fa:16:3e:5b:79:4a", "network": {"id": "9094c67b-5d6f-4130-9ec6-7da5c871a564", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1108775138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0596f9d1e5a5444ca2640f6e8244d53f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfecb7ef7-1d", "ovs_interfaceid": "fecb7ef7-1d7d-446e-a531-15713ec4c8ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:51:27 np0005539564 nova_compute[226295]: 2025-11-29 08:51:27.478 226310 DEBUG oslo_concurrency.lockutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Releasing lock "refresh_cache-defc87c3-85a5-47bb-8d50-3121d5d780c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:51:27 np0005539564 nova_compute[226295]: 2025-11-29 08:51:27.479 226310 DEBUG nova.compute.manager [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Instance network_info: |[{"id": "fecb7ef7-1d7d-446e-a531-15713ec4c8ce", "address": "fa:16:3e:5b:79:4a", "network": {"id": "9094c67b-5d6f-4130-9ec6-7da5c871a564", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1108775138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0596f9d1e5a5444ca2640f6e8244d53f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfecb7ef7-1d", "ovs_interfaceid": "fecb7ef7-1d7d-446e-a531-15713ec4c8ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:51:27 np0005539564 nova_compute[226295]: 2025-11-29 08:51:27.480 226310 DEBUG oslo_concurrency.lockutils [req-55850a4a-fa57-4791-a345-af0980522319 req-1120f06b-84a9-4aa1-8988-2840765c7c84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-defc87c3-85a5-47bb-8d50-3121d5d780c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:51:27 np0005539564 nova_compute[226295]: 2025-11-29 08:51:27.480 226310 DEBUG nova.network.neutron [req-55850a4a-fa57-4791-a345-af0980522319 req-1120f06b-84a9-4aa1-8988-2840765c7c84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Refreshing network info cache for port fecb7ef7-1d7d-446e-a531-15713ec4c8ce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:51:27 np0005539564 nova_compute[226295]: 2025-11-29 08:51:27.485 226310 DEBUG nova.virt.libvirt.driver [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Start _get_guest_xml network_info=[{"id": "fecb7ef7-1d7d-446e-a531-15713ec4c8ce", "address": "fa:16:3e:5b:79:4a", "network": {"id": "9094c67b-5d6f-4130-9ec6-7da5c871a564", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1108775138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0596f9d1e5a5444ca2640f6e8244d53f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfecb7ef7-1d", "ovs_interfaceid": "fecb7ef7-1d7d-446e-a531-15713ec4c8ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:51:27 np0005539564 nova_compute[226295]: 2025-11-29 08:51:27.492 226310 WARNING nova.virt.libvirt.driver [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:51:27 np0005539564 nova_compute[226295]: 2025-11-29 08:51:27.499 226310 DEBUG nova.virt.libvirt.host [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:51:27 np0005539564 nova_compute[226295]: 2025-11-29 08:51:27.500 226310 DEBUG nova.virt.libvirt.host [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:51:27 np0005539564 nova_compute[226295]: 2025-11-29 08:51:27.512 226310 DEBUG nova.virt.libvirt.host [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:51:27 np0005539564 nova_compute[226295]: 2025-11-29 08:51:27.513 226310 DEBUG nova.virt.libvirt.host [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:51:27 np0005539564 nova_compute[226295]: 2025-11-29 08:51:27.515 226310 DEBUG nova.virt.libvirt.driver [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:51:27 np0005539564 nova_compute[226295]: 2025-11-29 08:51:27.515 226310 DEBUG nova.virt.hardware [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:51:27 np0005539564 nova_compute[226295]: 2025-11-29 08:51:27.516 226310 DEBUG nova.virt.hardware [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:51:27 np0005539564 nova_compute[226295]: 2025-11-29 08:51:27.517 226310 DEBUG nova.virt.hardware [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:51:27 np0005539564 nova_compute[226295]: 2025-11-29 08:51:27.518 226310 DEBUG nova.virt.hardware [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:51:27 np0005539564 nova_compute[226295]: 2025-11-29 08:51:27.518 226310 DEBUG nova.virt.hardware [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:51:27 np0005539564 nova_compute[226295]: 2025-11-29 08:51:27.519 226310 DEBUG nova.virt.hardware [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:51:27 np0005539564 nova_compute[226295]: 2025-11-29 08:51:27.519 226310 DEBUG nova.virt.hardware [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:51:27 np0005539564 nova_compute[226295]: 2025-11-29 08:51:27.520 226310 DEBUG nova.virt.hardware [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:51:27 np0005539564 nova_compute[226295]: 2025-11-29 08:51:27.520 226310 DEBUG nova.virt.hardware [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:51:27 np0005539564 nova_compute[226295]: 2025-11-29 08:51:27.520 226310 DEBUG nova.virt.hardware [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:51:27 np0005539564 nova_compute[226295]: 2025-11-29 08:51:27.521 226310 DEBUG nova.virt.hardware [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:51:27 np0005539564 nova_compute[226295]: 2025-11-29 08:51:27.526 226310 DEBUG oslo_concurrency.processutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:51:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:51:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1202105207' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:51:27 np0005539564 nova_compute[226295]: 2025-11-29 08:51:27.992 226310 DEBUG oslo_concurrency.processutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.035 226310 DEBUG nova.storage.rbd_utils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] rbd image defc87c3-85a5-47bb-8d50-3121d5d780c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.042 226310 DEBUG oslo_concurrency.processutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:51:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:28.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.008000213s ======
Nov 29 03:51:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:28.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.008000213s
Nov 29 03:51:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:51:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2013506896' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.518 226310 DEBUG oslo_concurrency.processutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.521 226310 DEBUG nova.virt.libvirt.vif [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:51:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-254958693',display_name='tempest-TestSnapshotPattern-server-254958693',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-254958693',id=201,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBImEx8+jhsxRFNI/zXiqCIp6lKyzrmzXueICkOx8YGb02aphTL5Mlw1+YiMaTW8XLhYmBtqvqII/hnTIhC95ctb8YpefMaS6Qv1/vv9QrNRmuoy5csFiSCQsYM34gKdoxw==',key_name='tempest-TestSnapshotPattern-299175359',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0596f9d1e5a5444ca2640f6e8244d53f',ramdisk_id='',reservation_id='r-whquhgc5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-32695225',owner_user_name='tempest-TestSnapshotPattern-32695225-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:51:21Z,user_data=None,user_id='7b36e3f2406043c2a741c24fb14de7df',uuid=defc87c3-85a5-47bb-8d50-3121d5d780c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fecb7ef7-1d7d-446e-a531-15713ec4c8ce", "address": "fa:16:3e:5b:79:4a", "network": {"id": "9094c67b-5d6f-4130-9ec6-7da5c871a564", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1108775138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0596f9d1e5a5444ca2640f6e8244d53f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfecb7ef7-1d", "ovs_interfaceid": "fecb7ef7-1d7d-446e-a531-15713ec4c8ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.521 226310 DEBUG nova.network.os_vif_util [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Converting VIF {"id": "fecb7ef7-1d7d-446e-a531-15713ec4c8ce", "address": "fa:16:3e:5b:79:4a", "network": {"id": "9094c67b-5d6f-4130-9ec6-7da5c871a564", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1108775138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0596f9d1e5a5444ca2640f6e8244d53f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfecb7ef7-1d", "ovs_interfaceid": "fecb7ef7-1d7d-446e-a531-15713ec4c8ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.523 226310 DEBUG nova.network.os_vif_util [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:79:4a,bridge_name='br-int',has_traffic_filtering=True,id=fecb7ef7-1d7d-446e-a531-15713ec4c8ce,network=Network(9094c67b-5d6f-4130-9ec6-7da5c871a564),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfecb7ef7-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.526 226310 DEBUG nova.objects.instance [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lazy-loading 'pci_devices' on Instance uuid defc87c3-85a5-47bb-8d50-3121d5d780c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.547 226310 DEBUG nova.virt.libvirt.driver [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:51:28 np0005539564 nova_compute[226295]:  <uuid>defc87c3-85a5-47bb-8d50-3121d5d780c1</uuid>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:  <name>instance-000000c9</name>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <nova:name>tempest-TestSnapshotPattern-server-254958693</nova:name>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:51:27</nova:creationTime>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:51:28 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:        <nova:user uuid="7b36e3f2406043c2a741c24fb14de7df">tempest-TestSnapshotPattern-32695225-project-member</nova:user>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:        <nova:project uuid="0596f9d1e5a5444ca2640f6e8244d53f">tempest-TestSnapshotPattern-32695225</nova:project>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:        <nova:port uuid="fecb7ef7-1d7d-446e-a531-15713ec4c8ce">
Nov 29 03:51:28 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <entry name="serial">defc87c3-85a5-47bb-8d50-3121d5d780c1</entry>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <entry name="uuid">defc87c3-85a5-47bb-8d50-3121d5d780c1</entry>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/defc87c3-85a5-47bb-8d50-3121d5d780c1_disk">
Nov 29 03:51:28 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:51:28 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/defc87c3-85a5-47bb-8d50-3121d5d780c1_disk.config">
Nov 29 03:51:28 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:51:28 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:5b:79:4a"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <target dev="tapfecb7ef7-1d"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/defc87c3-85a5-47bb-8d50-3121d5d780c1/console.log" append="off"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:51:28 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:51:28 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:51:28 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:51:28 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.549 226310 DEBUG nova.compute.manager [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Preparing to wait for external event network-vif-plugged-fecb7ef7-1d7d-446e-a531-15713ec4c8ce prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.550 226310 DEBUG oslo_concurrency.lockutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Acquiring lock "defc87c3-85a5-47bb-8d50-3121d5d780c1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.551 226310 DEBUG oslo_concurrency.lockutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "defc87c3-85a5-47bb-8d50-3121d5d780c1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.551 226310 DEBUG oslo_concurrency.lockutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "defc87c3-85a5-47bb-8d50-3121d5d780c1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.553 226310 DEBUG nova.virt.libvirt.vif [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:51:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-254958693',display_name='tempest-TestSnapshotPattern-server-254958693',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-254958693',id=201,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBImEx8+jhsxRFNI/zXiqCIp6lKyzrmzXueICkOx8YGb02aphTL5Mlw1+YiMaTW8XLhYmBtqvqII/hnTIhC95ctb8YpefMaS6Qv1/vv9QrNRmuoy5csFiSCQsYM34gKdoxw==',key_name='tempest-TestSnapshotPattern-299175359',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0596f9d1e5a5444ca2640f6e8244d53f',ramdisk_id='',reservation_id='r-whquhgc5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-32695225',owner_user_name='tempest-TestSnapshotPattern-32695225-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:51:21Z,user_data=None,user_id='7b36e3f2406043c2a741c24fb14de7df',uuid=defc87c3-85a5-47bb-8d50-3121d5d780c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fecb7ef7-1d7d-446e-a531-15713ec4c8ce", "address": "fa:16:3e:5b:79:4a", "network": {"id": "9094c67b-5d6f-4130-9ec6-7da5c871a564", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1108775138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "0596f9d1e5a5444ca2640f6e8244d53f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfecb7ef7-1d", "ovs_interfaceid": "fecb7ef7-1d7d-446e-a531-15713ec4c8ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.553 226310 DEBUG nova.network.os_vif_util [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Converting VIF {"id": "fecb7ef7-1d7d-446e-a531-15713ec4c8ce", "address": "fa:16:3e:5b:79:4a", "network": {"id": "9094c67b-5d6f-4130-9ec6-7da5c871a564", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1108775138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0596f9d1e5a5444ca2640f6e8244d53f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfecb7ef7-1d", "ovs_interfaceid": "fecb7ef7-1d7d-446e-a531-15713ec4c8ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.555 226310 DEBUG nova.network.os_vif_util [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:79:4a,bridge_name='br-int',has_traffic_filtering=True,id=fecb7ef7-1d7d-446e-a531-15713ec4c8ce,network=Network(9094c67b-5d6f-4130-9ec6-7da5c871a564),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfecb7ef7-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.555 226310 DEBUG os_vif [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:79:4a,bridge_name='br-int',has_traffic_filtering=True,id=fecb7ef7-1d7d-446e-a531-15713ec4c8ce,network=Network(9094c67b-5d6f-4130-9ec6-7da5c871a564),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfecb7ef7-1d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.556 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.557 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.558 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.567 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.568 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfecb7ef7-1d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.569 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfecb7ef7-1d, col_values=(('external_ids', {'iface-id': 'fecb7ef7-1d7d-446e-a531-15713ec4c8ce', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:79:4a', 'vm-uuid': 'defc87c3-85a5-47bb-8d50-3121d5d780c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.572 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:28 np0005539564 NetworkManager[48997]: <info>  [1764406288.5736] manager: (tapfecb7ef7-1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/376)
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.576 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.583 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.586 226310 INFO os_vif [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:79:4a,bridge_name='br-int',has_traffic_filtering=True,id=fecb7ef7-1d7d-446e-a531-15713ec4c8ce,network=Network(9094c67b-5d6f-4130-9ec6-7da5c871a564),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfecb7ef7-1d')#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.653 226310 DEBUG nova.virt.libvirt.driver [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.655 226310 DEBUG nova.virt.libvirt.driver [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.655 226310 DEBUG nova.virt.libvirt.driver [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] No VIF found with MAC fa:16:3e:5b:79:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.656 226310 INFO nova.virt.libvirt.driver [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Using config drive#033[00m
Nov 29 03:51:28 np0005539564 nova_compute[226295]: 2025-11-29 08:51:28.697 226310 DEBUG nova.storage.rbd_utils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] rbd image defc87c3-85a5-47bb-8d50-3121d5d780c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:51:29 np0005539564 nova_compute[226295]: 2025-11-29 08:51:29.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:29 np0005539564 nova_compute[226295]: 2025-11-29 08:51:29.347 226310 INFO nova.virt.libvirt.driver [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Creating config drive at /var/lib/nova/instances/defc87c3-85a5-47bb-8d50-3121d5d780c1/disk.config#033[00m
Nov 29 03:51:29 np0005539564 nova_compute[226295]: 2025-11-29 08:51:29.354 226310 DEBUG oslo_concurrency.processutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/defc87c3-85a5-47bb-8d50-3121d5d780c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplo9ih0kg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:51:29 np0005539564 nova_compute[226295]: 2025-11-29 08:51:29.416 226310 DEBUG nova.network.neutron [req-55850a4a-fa57-4791-a345-af0980522319 req-1120f06b-84a9-4aa1-8988-2840765c7c84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Updated VIF entry in instance network info cache for port fecb7ef7-1d7d-446e-a531-15713ec4c8ce. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:51:29 np0005539564 nova_compute[226295]: 2025-11-29 08:51:29.418 226310 DEBUG nova.network.neutron [req-55850a4a-fa57-4791-a345-af0980522319 req-1120f06b-84a9-4aa1-8988-2840765c7c84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Updating instance_info_cache with network_info: [{"id": "fecb7ef7-1d7d-446e-a531-15713ec4c8ce", "address": "fa:16:3e:5b:79:4a", "network": {"id": "9094c67b-5d6f-4130-9ec6-7da5c871a564", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1108775138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0596f9d1e5a5444ca2640f6e8244d53f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfecb7ef7-1d", "ovs_interfaceid": "fecb7ef7-1d7d-446e-a531-15713ec4c8ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:51:29 np0005539564 nova_compute[226295]: 2025-11-29 08:51:29.441 226310 DEBUG oslo_concurrency.lockutils [req-55850a4a-fa57-4791-a345-af0980522319 req-1120f06b-84a9-4aa1-8988-2840765c7c84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-defc87c3-85a5-47bb-8d50-3121d5d780c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:51:29 np0005539564 nova_compute[226295]: 2025-11-29 08:51:29.518 226310 DEBUG oslo_concurrency.processutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/defc87c3-85a5-47bb-8d50-3121d5d780c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplo9ih0kg" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:51:29 np0005539564 nova_compute[226295]: 2025-11-29 08:51:29.555 226310 DEBUG nova.storage.rbd_utils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] rbd image defc87c3-85a5-47bb-8d50-3121d5d780c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:51:29 np0005539564 nova_compute[226295]: 2025-11-29 08:51:29.560 226310 DEBUG oslo_concurrency.processutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/defc87c3-85a5-47bb-8d50-3121d5d780c1/disk.config defc87c3-85a5-47bb-8d50-3121d5d780c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:51:29 np0005539564 nova_compute[226295]: 2025-11-29 08:51:29.750 226310 DEBUG oslo_concurrency.processutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/defc87c3-85a5-47bb-8d50-3121d5d780c1/disk.config defc87c3-85a5-47bb-8d50-3121d5d780c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:51:29 np0005539564 nova_compute[226295]: 2025-11-29 08:51:29.752 226310 INFO nova.virt.libvirt.driver [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Deleting local config drive /var/lib/nova/instances/defc87c3-85a5-47bb-8d50-3121d5d780c1/disk.config because it was imported into RBD.#033[00m
Nov 29 03:51:29 np0005539564 kernel: tapfecb7ef7-1d: entered promiscuous mode
Nov 29 03:51:29 np0005539564 ovn_controller[130591]: 2025-11-29T08:51:29Z|00810|binding|INFO|Claiming lport fecb7ef7-1d7d-446e-a531-15713ec4c8ce for this chassis.
Nov 29 03:51:29 np0005539564 ovn_controller[130591]: 2025-11-29T08:51:29Z|00811|binding|INFO|fecb7ef7-1d7d-446e-a531-15713ec4c8ce: Claiming fa:16:3e:5b:79:4a 10.100.0.8
Nov 29 03:51:29 np0005539564 NetworkManager[48997]: <info>  [1764406289.8246] manager: (tapfecb7ef7-1d): new Tun device (/org/freedesktop/NetworkManager/Devices/377)
Nov 29 03:51:29 np0005539564 nova_compute[226295]: 2025-11-29 08:51:29.823 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:29 np0005539564 nova_compute[226295]: 2025-11-29 08:51:29.833 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:29 np0005539564 nova_compute[226295]: 2025-11-29 08:51:29.838 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:29.844 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:79:4a 10.100.0.8'], port_security=['fa:16:3e:5b:79:4a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'defc87c3-85a5-47bb-8d50-3121d5d780c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9094c67b-5d6f-4130-9ec6-7da5c871a564', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0596f9d1e5a5444ca2640f6e8244d53f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '49368169-f673-45da-b454-bf6c8bb93b4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46809892-ffee-4015-b7f0-51515653f0e9, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=fecb7ef7-1d7d-446e-a531-15713ec4c8ce) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:51:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:29.846 139780 INFO neutron.agent.ovn.metadata.agent [-] Port fecb7ef7-1d7d-446e-a531-15713ec4c8ce in datapath 9094c67b-5d6f-4130-9ec6-7da5c871a564 bound to our chassis#033[00m
Nov 29 03:51:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:29.848 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9094c67b-5d6f-4130-9ec6-7da5c871a564#033[00m
Nov 29 03:51:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:29.860 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[32121d0a-7e90-4d38-9b8f-ee3ed4d29970]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:29.862 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9094c67b-51 in ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:51:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:29.864 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9094c67b-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:51:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:29.864 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0565bc3a-9d2e-4aca-9db6-fc92d3506e48]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:29.865 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9a7f2a54-642c-4505-9db4-cd04f1224835]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:29 np0005539564 systemd-udevd[305098]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:51:29 np0005539564 systemd-machined[190128]: New machine qemu-94-instance-000000c9.
Nov 29 03:51:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:29.882 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[a5be399e-af47-4ace-ad19-1991b9c48bea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:29 np0005539564 NetworkManager[48997]: <info>  [1764406289.8943] device (tapfecb7ef7-1d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:51:29 np0005539564 NetworkManager[48997]: <info>  [1764406289.8958] device (tapfecb7ef7-1d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:51:29 np0005539564 systemd[1]: Started Virtual Machine qemu-94-instance-000000c9.
Nov 29 03:51:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:29.917 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[85b24a01-026b-4b72-a842-f726583add02]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:29 np0005539564 ovn_controller[130591]: 2025-11-29T08:51:29Z|00812|binding|INFO|Setting lport fecb7ef7-1d7d-446e-a531-15713ec4c8ce ovn-installed in OVS
Nov 29 03:51:29 np0005539564 ovn_controller[130591]: 2025-11-29T08:51:29Z|00813|binding|INFO|Setting lport fecb7ef7-1d7d-446e-a531-15713ec4c8ce up in Southbound
Nov 29 03:51:29 np0005539564 nova_compute[226295]: 2025-11-29 08:51:29.923 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:29.953 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[3c6e0871-93fd-46a4-a403-333d4ed30977]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:29 np0005539564 NetworkManager[48997]: <info>  [1764406289.9624] manager: (tap9094c67b-50): new Veth device (/org/freedesktop/NetworkManager/Devices/378)
Nov 29 03:51:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:29.961 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8097d41f-83e3-41bb-9480-d16c6513ded1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:29.992 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[59475099-0b54-40b6-bb3a-d449f49a6ca7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:29.997 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[71a85f3c-cb29-4603-838b-05b49446116a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:30 np0005539564 NetworkManager[48997]: <info>  [1764406290.0280] device (tap9094c67b-50): carrier: link connected
Nov 29 03:51:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:30.038 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[446c0867-5259-4042-9a61-175ade94f036]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:30.062 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e1c97497-a37a-4d5d-999b-6ca1420cb546]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9094c67b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:fd:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 926470, 'reachable_time': 43485, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305130, 'error': None, 'target': 'ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:30.085 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1b09bc30-22f8-4a51-af0b-d6dde0661182]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:fd50'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 926470, 'tstamp': 926470}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305131, 'error': None, 'target': 'ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:30.113 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9c5dcf08-70dc-4d99-91fc-b687a0224da8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9094c67b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:fd:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 926470, 'reachable_time': 43485, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305132, 'error': None, 'target': 'ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:30.162 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[10de3ecd-0611-4169-b23f-36f5f2b3b672]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:30.247 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e0544fb4-5d80-4ed6-98ff-dafa7564d1a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:30.249 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9094c67b-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:30.249 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:30.249 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9094c67b-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:51:30 np0005539564 kernel: tap9094c67b-50: entered promiscuous mode
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.252 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:30 np0005539564 NetworkManager[48997]: <info>  [1764406290.2535] manager: (tap9094c67b-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/379)
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.255 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:30.259 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9094c67b-50, col_values=(('external_ids', {'iface-id': '52cea514-684d-4e12-87ec-eee5c187481b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.261 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:30 np0005539564 ovn_controller[130591]: 2025-11-29T08:51:30Z|00814|binding|INFO|Releasing lport 52cea514-684d-4e12-87ec-eee5c187481b from this chassis (sb_readonly=0)
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.279 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:30.281 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9094c67b-5d6f-4130-9ec6-7da5c871a564.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9094c67b-5d6f-4130-9ec6-7da5c871a564.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:30.282 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1e429a98-f421-452e-acfa-212dd8b3182b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:30.283 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-9094c67b-5d6f-4130-9ec6-7da5c871a564
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/9094c67b-5d6f-4130-9ec6-7da5c871a564.pid.haproxy
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 9094c67b-5d6f-4130-9ec6-7da5c871a564
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:51:30 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:30.284 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564', 'env', 'PROCESS_TAG=haproxy-9094c67b-5d6f-4130-9ec6-7da5c871a564', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9094c67b-5d6f-4130-9ec6-7da5c871a564.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:51:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:30.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:30.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.394 226310 DEBUG nova.compute.manager [req-169ebfd1-aae1-4734-921d-544d460eeae6 req-7edf4288-cbac-4be8-a629-a777d3c984ac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Received event network-vif-plugged-fecb7ef7-1d7d-446e-a531-15713ec4c8ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.394 226310 DEBUG oslo_concurrency.lockutils [req-169ebfd1-aae1-4734-921d-544d460eeae6 req-7edf4288-cbac-4be8-a629-a777d3c984ac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "defc87c3-85a5-47bb-8d50-3121d5d780c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.394 226310 DEBUG oslo_concurrency.lockutils [req-169ebfd1-aae1-4734-921d-544d460eeae6 req-7edf4288-cbac-4be8-a629-a777d3c984ac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "defc87c3-85a5-47bb-8d50-3121d5d780c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.395 226310 DEBUG oslo_concurrency.lockutils [req-169ebfd1-aae1-4734-921d-544d460eeae6 req-7edf4288-cbac-4be8-a629-a777d3c984ac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "defc87c3-85a5-47bb-8d50-3121d5d780c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.395 226310 DEBUG nova.compute.manager [req-169ebfd1-aae1-4734-921d-544d460eeae6 req-7edf4288-cbac-4be8-a629-a777d3c984ac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Processing event network-vif-plugged-fecb7ef7-1d7d-446e-a531-15713ec4c8ce _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.402 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.538 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406290.5375874, defc87c3-85a5-47bb-8d50-3121d5d780c1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.538 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] VM Started (Lifecycle Event)#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.542 226310 DEBUG nova.compute.manager [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.548 226310 DEBUG nova.virt.libvirt.driver [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.553 226310 INFO nova.virt.libvirt.driver [-] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Instance spawned successfully.#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.554 226310 DEBUG nova.virt.libvirt.driver [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.559 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.564 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.578 226310 DEBUG nova.virt.libvirt.driver [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.579 226310 DEBUG nova.virt.libvirt.driver [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.580 226310 DEBUG nova.virt.libvirt.driver [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.580 226310 DEBUG nova.virt.libvirt.driver [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.581 226310 DEBUG nova.virt.libvirt.driver [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.581 226310 DEBUG nova.virt.libvirt.driver [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.586 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.587 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406290.5377545, defc87c3-85a5-47bb-8d50-3121d5d780c1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.587 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.610 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.654 226310 INFO nova.compute.manager [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Took 9.03 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.655 226310 DEBUG nova.compute.manager [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.655 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.662 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406290.5480285, defc87c3-85a5-47bb-8d50-3121d5d780c1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.663 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.696 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.699 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.718 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.727 226310 INFO nova.compute.manager [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Took 10.00 seconds to build instance.#033[00m
Nov 29 03:51:30 np0005539564 nova_compute[226295]: 2025-11-29 08:51:30.739 226310 DEBUG oslo_concurrency.lockutils [None req-cbcf5857-d9a6-4676-b138-6a4f20696ba8 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "defc87c3-85a5-47bb-8d50-3121d5d780c1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.110s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:51:30 np0005539564 podman[305205]: 2025-11-29 08:51:30.778014438 +0000 UTC m=+0.046937851 container create af4d5cb584808f60f52264ea292833d1293337d46b29976dfe616c1015403e33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:51:30 np0005539564 systemd[1]: Started libpod-conmon-af4d5cb584808f60f52264ea292833d1293337d46b29976dfe616c1015403e33.scope.
Nov 29 03:51:30 np0005539564 podman[305205]: 2025-11-29 08:51:30.752611571 +0000 UTC m=+0.021535004 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:51:30 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:51:30 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/478af4f59d37dd5e6dcfa6abdfb8babf30b93a50d5ff5e4afb080c45c838890d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:30 np0005539564 podman[305205]: 2025-11-29 08:51:30.879582015 +0000 UTC m=+0.148505478 container init af4d5cb584808f60f52264ea292833d1293337d46b29976dfe616c1015403e33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:51:30 np0005539564 podman[305205]: 2025-11-29 08:51:30.886834962 +0000 UTC m=+0.155758395 container start af4d5cb584808f60f52264ea292833d1293337d46b29976dfe616c1015403e33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:51:30 np0005539564 neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564[305220]: [NOTICE]   (305224) : New worker (305226) forked
Nov 29 03:51:30 np0005539564 neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564[305220]: [NOTICE]   (305224) : Loading success.
Nov 29 03:51:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:32.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:32.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:32 np0005539564 nova_compute[226295]: 2025-11-29 08:51:32.478 226310 DEBUG nova.compute.manager [req-2da8eb87-1512-4e5f-ad4f-86a45df5301b req-9bf65021-b570-4e81-8ee3-ebc47af00acb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Received event network-vif-plugged-fecb7ef7-1d7d-446e-a531-15713ec4c8ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:51:32 np0005539564 nova_compute[226295]: 2025-11-29 08:51:32.478 226310 DEBUG oslo_concurrency.lockutils [req-2da8eb87-1512-4e5f-ad4f-86a45df5301b req-9bf65021-b570-4e81-8ee3-ebc47af00acb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "defc87c3-85a5-47bb-8d50-3121d5d780c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:51:32 np0005539564 nova_compute[226295]: 2025-11-29 08:51:32.479 226310 DEBUG oslo_concurrency.lockutils [req-2da8eb87-1512-4e5f-ad4f-86a45df5301b req-9bf65021-b570-4e81-8ee3-ebc47af00acb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "defc87c3-85a5-47bb-8d50-3121d5d780c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:51:32 np0005539564 nova_compute[226295]: 2025-11-29 08:51:32.479 226310 DEBUG oslo_concurrency.lockutils [req-2da8eb87-1512-4e5f-ad4f-86a45df5301b req-9bf65021-b570-4e81-8ee3-ebc47af00acb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "defc87c3-85a5-47bb-8d50-3121d5d780c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:51:32 np0005539564 nova_compute[226295]: 2025-11-29 08:51:32.479 226310 DEBUG nova.compute.manager [req-2da8eb87-1512-4e5f-ad4f-86a45df5301b req-9bf65021-b570-4e81-8ee3-ebc47af00acb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] No waiting events found dispatching network-vif-plugged-fecb7ef7-1d7d-446e-a531-15713ec4c8ce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:51:32 np0005539564 nova_compute[226295]: 2025-11-29 08:51:32.480 226310 WARNING nova.compute.manager [req-2da8eb87-1512-4e5f-ad4f-86a45df5301b req-9bf65021-b570-4e81-8ee3-ebc47af00acb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Received unexpected event network-vif-plugged-fecb7ef7-1d7d-446e-a531-15713ec4c8ce for instance with vm_state active and task_state None.#033[00m
Nov 29 03:51:33 np0005539564 nova_compute[226295]: 2025-11-29 08:51:33.574 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:34.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:34.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:51:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.0 total, 600.0 interval#012Cumulative writes: 15K writes, 76K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s#012Cumulative WAL: 15K writes, 15K syncs, 1.00 writes per sync, written: 0.15 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1547 writes, 7313 keys, 1547 commit groups, 1.0 writes per commit group, ingest: 15.72 MB, 0.03 MB/s#012Interval WAL: 1547 writes, 1547 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     23.8      3.90              0.38        47    0.083       0      0       0.0       0.0#012  L6      1/0   12.50 MB   0.0      0.5     0.1      0.4       0.5      0.0       0.0   5.1     53.1     45.4     10.39              1.64        46    0.226    351K    24K       0.0       0.0#012 Sum      1/0   12.50 MB   0.0      0.5     0.1      0.4       0.6      0.1       0.0   6.1     38.6     39.5     14.28              2.02        93    0.154    351K    24K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.5     80.4     82.2      0.83              0.27        10    0.083     51K   2590       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.5      0.0       0.0   0.0     53.1     45.4     10.39              1.64        46    0.226    351K    24K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     23.8      3.90              0.38        46    0.085       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.0 total, 600.0 interval#012Flush(GB): cumulative 0.091, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.55 GB write, 0.09 MB/s write, 0.54 GB read, 0.09 MB/s read, 14.3 seconds#012Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558dc73191f0#2 capacity: 304.00 MB usage: 63.59 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000568 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(3492,61.08 MB,20.0932%) FilterBlock(93,965.23 KB,0.31007%) IndexBlock(93,1.56 MB,0.513915%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 03:51:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:35 np0005539564 NetworkManager[48997]: <info>  [1764406295.1116] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/380)
Nov 29 03:51:35 np0005539564 NetworkManager[48997]: <info>  [1764406295.1138] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/381)
Nov 29 03:51:35 np0005539564 nova_compute[226295]: 2025-11-29 08:51:35.113 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:35 np0005539564 nova_compute[226295]: 2025-11-29 08:51:35.215 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:35 np0005539564 ovn_controller[130591]: 2025-11-29T08:51:35Z|00815|binding|INFO|Releasing lport 52cea514-684d-4e12-87ec-eee5c187481b from this chassis (sb_readonly=0)
Nov 29 03:51:35 np0005539564 nova_compute[226295]: 2025-11-29 08:51:35.231 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:35 np0005539564 nova_compute[226295]: 2025-11-29 08:51:35.658 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:36 np0005539564 nova_compute[226295]: 2025-11-29 08:51:36.119 226310 DEBUG nova.compute.manager [req-4e26dbfe-a0c0-429f-a4b1-88fc39de11ea req-4abd2894-a8c1-4bd1-8ffa-db5ac1e3283a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Received event network-changed-fecb7ef7-1d7d-446e-a531-15713ec4c8ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:51:36 np0005539564 nova_compute[226295]: 2025-11-29 08:51:36.120 226310 DEBUG nova.compute.manager [req-4e26dbfe-a0c0-429f-a4b1-88fc39de11ea req-4abd2894-a8c1-4bd1-8ffa-db5ac1e3283a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Refreshing instance network info cache due to event network-changed-fecb7ef7-1d7d-446e-a531-15713ec4c8ce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:51:36 np0005539564 nova_compute[226295]: 2025-11-29 08:51:36.121 226310 DEBUG oslo_concurrency.lockutils [req-4e26dbfe-a0c0-429f-a4b1-88fc39de11ea req-4abd2894-a8c1-4bd1-8ffa-db5ac1e3283a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-defc87c3-85a5-47bb-8d50-3121d5d780c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:51:36 np0005539564 nova_compute[226295]: 2025-11-29 08:51:36.121 226310 DEBUG oslo_concurrency.lockutils [req-4e26dbfe-a0c0-429f-a4b1-88fc39de11ea req-4abd2894-a8c1-4bd1-8ffa-db5ac1e3283a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-defc87c3-85a5-47bb-8d50-3121d5d780c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:51:36 np0005539564 nova_compute[226295]: 2025-11-29 08:51:36.122 226310 DEBUG nova.network.neutron [req-4e26dbfe-a0c0-429f-a4b1-88fc39de11ea req-4abd2894-a8c1-4bd1-8ffa-db5ac1e3283a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Refreshing network info cache for port fecb7ef7-1d7d-446e-a531-15713ec4c8ce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:51:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:36.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:36.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:36 np0005539564 podman[305238]: 2025-11-29 08:51:36.559641406 +0000 UTC m=+0.089583575 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:51:36 np0005539564 podman[305237]: 2025-11-29 08:51:36.5605347 +0000 UTC m=+0.097778447 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:51:36 np0005539564 podman[305236]: 2025-11-29 08:51:36.592894026 +0000 UTC m=+0.136775532 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:51:37 np0005539564 nova_compute[226295]: 2025-11-29 08:51:37.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:37 np0005539564 nova_compute[226295]: 2025-11-29 08:51:37.571 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:51:37 np0005539564 nova_compute[226295]: 2025-11-29 08:51:37.571 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:51:37 np0005539564 nova_compute[226295]: 2025-11-29 08:51:37.572 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:51:37 np0005539564 nova_compute[226295]: 2025-11-29 08:51:37.572 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:51:37 np0005539564 nova_compute[226295]: 2025-11-29 08:51:37.572 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:51:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:51:38 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3855196618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:51:38 np0005539564 nova_compute[226295]: 2025-11-29 08:51:38.141 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:51:38 np0005539564 nova_compute[226295]: 2025-11-29 08:51:38.230 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000c9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:51:38 np0005539564 nova_compute[226295]: 2025-11-29 08:51:38.231 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000c9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:51:38 np0005539564 nova_compute[226295]: 2025-11-29 08:51:38.289 226310 DEBUG nova.network.neutron [req-4e26dbfe-a0c0-429f-a4b1-88fc39de11ea req-4abd2894-a8c1-4bd1-8ffa-db5ac1e3283a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Updated VIF entry in instance network info cache for port fecb7ef7-1d7d-446e-a531-15713ec4c8ce. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:51:38 np0005539564 nova_compute[226295]: 2025-11-29 08:51:38.290 226310 DEBUG nova.network.neutron [req-4e26dbfe-a0c0-429f-a4b1-88fc39de11ea req-4abd2894-a8c1-4bd1-8ffa-db5ac1e3283a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Updating instance_info_cache with network_info: [{"id": "fecb7ef7-1d7d-446e-a531-15713ec4c8ce", "address": "fa:16:3e:5b:79:4a", "network": {"id": "9094c67b-5d6f-4130-9ec6-7da5c871a564", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1108775138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0596f9d1e5a5444ca2640f6e8244d53f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfecb7ef7-1d", "ovs_interfaceid": "fecb7ef7-1d7d-446e-a531-15713ec4c8ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:51:38 np0005539564 nova_compute[226295]: 2025-11-29 08:51:38.318 226310 DEBUG oslo_concurrency.lockutils [req-4e26dbfe-a0c0-429f-a4b1-88fc39de11ea req-4abd2894-a8c1-4bd1-8ffa-db5ac1e3283a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-defc87c3-85a5-47bb-8d50-3121d5d780c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:51:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:38.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:38.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:38 np0005539564 nova_compute[226295]: 2025-11-29 08:51:38.410 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:51:38 np0005539564 nova_compute[226295]: 2025-11-29 08:51:38.411 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4012MB free_disk=20.941429138183594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:51:38 np0005539564 nova_compute[226295]: 2025-11-29 08:51:38.411 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:51:38 np0005539564 nova_compute[226295]: 2025-11-29 08:51:38.412 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:51:38 np0005539564 nova_compute[226295]: 2025-11-29 08:51:38.557 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance defc87c3-85a5-47bb-8d50-3121d5d780c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:51:38 np0005539564 nova_compute[226295]: 2025-11-29 08:51:38.558 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:51:38 np0005539564 nova_compute[226295]: 2025-11-29 08:51:38.558 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:51:38 np0005539564 nova_compute[226295]: 2025-11-29 08:51:38.576 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:38 np0005539564 nova_compute[226295]: 2025-11-29 08:51:38.697 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:51:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:51:39 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/482374199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:51:39 np0005539564 nova_compute[226295]: 2025-11-29 08:51:39.147 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:51:39 np0005539564 nova_compute[226295]: 2025-11-29 08:51:39.155 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:51:39 np0005539564 nova_compute[226295]: 2025-11-29 08:51:39.277 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:51:39 np0005539564 nova_compute[226295]: 2025-11-29 08:51:39.409 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:51:39 np0005539564 nova_compute[226295]: 2025-11-29 08:51:39.410 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:51:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:40.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:40.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:40 np0005539564 nova_compute[226295]: 2025-11-29 08:51:40.663 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #151. Immutable memtables: 0.
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:42.181234) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 151
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406302181281, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 1596, "num_deletes": 257, "total_data_size": 3554966, "memory_usage": 3618984, "flush_reason": "Manual Compaction"}
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #152: started
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406302213084, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 152, "file_size": 2333108, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74982, "largest_seqno": 76572, "table_properties": {"data_size": 2326562, "index_size": 3680, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14156, "raw_average_key_size": 19, "raw_value_size": 2313252, "raw_average_value_size": 3235, "num_data_blocks": 162, "num_entries": 715, "num_filter_entries": 715, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406166, "oldest_key_time": 1764406166, "file_creation_time": 1764406302, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 31897 microseconds, and 7733 cpu microseconds.
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:42.213130) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #152: 2333108 bytes OK
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:42.213154) [db/memtable_list.cc:519] [default] Level-0 commit table #152 started
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:42.215848) [db/memtable_list.cc:722] [default] Level-0 commit table #152: memtable #1 done
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:42.215869) EVENT_LOG_v1 {"time_micros": 1764406302215862, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:42.215889) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 3547665, prev total WAL file size 3547665, number of live WAL files 2.
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000148.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:42.217114) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373638' seq:72057594037927935, type:22 .. '6C6F676D0033303231' seq:0, type:0; will stop at (end)
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [152(2278KB)], [150(12MB)]
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406302217147, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [152], "files_L6": [150], "score": -1, "input_data_size": 15435286, "oldest_snapshot_seqno": -1}
Nov 29 03:51:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:42.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:42.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #153: 10383 keys, 15300889 bytes, temperature: kUnknown
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406302448643, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 153, "file_size": 15300889, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15231297, "index_size": 42535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25989, "raw_key_size": 273930, "raw_average_key_size": 26, "raw_value_size": 15046856, "raw_average_value_size": 1449, "num_data_blocks": 1629, "num_entries": 10383, "num_filter_entries": 10383, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764406302, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 153, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:42.449000) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 15300889 bytes
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:42.453082) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 66.7 rd, 66.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 12.5 +0.0 blob) out(14.6 +0.0 blob), read-write-amplify(13.2) write-amplify(6.6) OK, records in: 10910, records dropped: 527 output_compression: NoCompression
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:42.453109) EVENT_LOG_v1 {"time_micros": 1764406302453097, "job": 96, "event": "compaction_finished", "compaction_time_micros": 231575, "compaction_time_cpu_micros": 39506, "output_level": 6, "num_output_files": 1, "total_output_size": 15300889, "num_input_records": 10910, "num_output_records": 10383, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406302453831, "job": 96, "event": "table_file_deletion", "file_number": 152}
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000150.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406302456810, "job": 96, "event": "table_file_deletion", "file_number": 150}
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:42.216980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:42.456931) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:42.456939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:42.456942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:42.456944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:51:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:42.456947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:51:43 np0005539564 nova_compute[226295]: 2025-11-29 08:51:43.582 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:44.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:44.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:45 np0005539564 ovn_controller[130591]: 2025-11-29T08:51:45Z|00816|binding|INFO|Releasing lport 52cea514-684d-4e12-87ec-eee5c187481b from this chassis (sb_readonly=0)
Nov 29 03:51:45 np0005539564 nova_compute[226295]: 2025-11-29 08:51:45.367 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:45 np0005539564 nova_compute[226295]: 2025-11-29 08:51:45.710 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:46 np0005539564 nova_compute[226295]: 2025-11-29 08:51:46.188 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:46.189 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=82, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=81) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:51:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:46.191 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:51:46 np0005539564 ovn_controller[130591]: 2025-11-29T08:51:46Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5b:79:4a 10.100.0.8
Nov 29 03:51:46 np0005539564 ovn_controller[130591]: 2025-11-29T08:51:46Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5b:79:4a 10.100.0.8
Nov 29 03:51:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:46.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:46.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:51:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:48.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:51:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:48.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:48 np0005539564 nova_compute[226295]: 2025-11-29 08:51:48.586 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:50.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:50.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:50 np0005539564 nova_compute[226295]: 2025-11-29 08:51:50.713 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:51 np0005539564 nova_compute[226295]: 2025-11-29 08:51:51.612 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:52.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:51:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:52.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:51:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:51:53.194 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '82'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:51:53 np0005539564 nova_compute[226295]: 2025-11-29 08:51:53.590 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:54 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:51:54 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:51:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:51:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:54.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:51:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:54.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:55 np0005539564 nova_compute[226295]: 2025-11-29 08:51:55.714 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:55 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:51:55 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:51:55 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:51:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:56.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:56.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:56 np0005539564 nova_compute[226295]: 2025-11-29 08:51:56.610 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #154. Immutable memtables: 0.
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:57.108168) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 154
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406317108221, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 419, "num_deletes": 251, "total_data_size": 444719, "memory_usage": 452680, "flush_reason": "Manual Compaction"}
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #155: started
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406317113970, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 155, "file_size": 277327, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76577, "largest_seqno": 76991, "table_properties": {"data_size": 274910, "index_size": 516, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6547, "raw_average_key_size": 20, "raw_value_size": 270037, "raw_average_value_size": 851, "num_data_blocks": 22, "num_entries": 317, "num_filter_entries": 317, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406302, "oldest_key_time": 1764406302, "file_creation_time": 1764406317, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 5856 microseconds, and 2782 cpu microseconds.
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:57.114023) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #155: 277327 bytes OK
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:57.114050) [db/memtable_list.cc:519] [default] Level-0 commit table #155 started
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:57.115720) [db/memtable_list.cc:722] [default] Level-0 commit table #155: memtable #1 done
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:57.115744) EVENT_LOG_v1 {"time_micros": 1764406317115735, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:57.115768) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 442051, prev total WAL file size 442051, number of live WAL files 2.
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000151.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:57.116548) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353137' seq:72057594037927935, type:22 .. '6D6772737461740032373639' seq:0, type:0; will stop at (end)
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [155(270KB)], [153(14MB)]
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406317116612, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [155], "files_L6": [153], "score": -1, "input_data_size": 15578216, "oldest_snapshot_seqno": -1}
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #156: 10187 keys, 11737524 bytes, temperature: kUnknown
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406317266788, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 156, "file_size": 11737524, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11673985, "index_size": 36994, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25477, "raw_key_size": 270053, "raw_average_key_size": 26, "raw_value_size": 11497551, "raw_average_value_size": 1128, "num_data_blocks": 1397, "num_entries": 10187, "num_filter_entries": 10187, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764406317, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 156, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:57.267577) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 11737524 bytes
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:57.269222) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.3 rd, 77.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 14.6 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(98.5) write-amplify(42.3) OK, records in: 10700, records dropped: 513 output_compression: NoCompression
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:57.269237) EVENT_LOG_v1 {"time_micros": 1764406317269231, "job": 98, "event": "compaction_finished", "compaction_time_micros": 150773, "compaction_time_cpu_micros": 60673, "output_level": 6, "num_output_files": 1, "total_output_size": 11737524, "num_input_records": 10700, "num_output_records": 10187, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406317269885, "job": 98, "event": "table_file_deletion", "file_number": 155}
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000153.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406317272387, "job": 98, "event": "table_file_deletion", "file_number": 153}
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:57.116383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:57.272539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:57.272549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:57.272553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:57.272557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:51:57 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:51:57.272562) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:51:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:58.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:51:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:58.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:58 np0005539564 nova_compute[226295]: 2025-11-29 08:51:58.594 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:00.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:00.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:00 np0005539564 nova_compute[226295]: 2025-11-29 08:52:00.716 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:01 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:52:01 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:52:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:02.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:02.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:03 np0005539564 nova_compute[226295]: 2025-11-29 08:52:03.598 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:03.768 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:03.770 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:03.771 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:52:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:04.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:52:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:04.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:05 np0005539564 nova_compute[226295]: 2025-11-29 08:52:05.756 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:06.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:52:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:06.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:52:07 np0005539564 podman[305530]: 2025-11-29 08:52:07.507743914 +0000 UTC m=+0.054355631 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:52:07 np0005539564 podman[305528]: 2025-11-29 08:52:07.530242503 +0000 UTC m=+0.087451807 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 03:52:07 np0005539564 podman[305529]: 2025-11-29 08:52:07.549664058 +0000 UTC m=+0.092945115 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:52:08 np0005539564 nova_compute[226295]: 2025-11-29 08:52:08.054 226310 DEBUG nova.compute.manager [None req-2b456dec-81be-4883-969e-ae826854cc5e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:52:08 np0005539564 nova_compute[226295]: 2025-11-29 08:52:08.118 226310 INFO nova.compute.manager [None req-2b456dec-81be-4883-969e-ae826854cc5e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] instance snapshotting#033[00m
Nov 29 03:52:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:08.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:08.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:08 np0005539564 nova_compute[226295]: 2025-11-29 08:52:08.455 226310 INFO nova.virt.libvirt.driver [None req-2b456dec-81be-4883-969e-ae826854cc5e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Beginning live snapshot process#033[00m
Nov 29 03:52:08 np0005539564 nova_compute[226295]: 2025-11-29 08:52:08.638 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:08 np0005539564 nova_compute[226295]: 2025-11-29 08:52:08.646 226310 DEBUG nova.virt.libvirt.imagebackend [None req-2b456dec-81be-4883-969e-ae826854cc5e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:52:08 np0005539564 nova_compute[226295]: 2025-11-29 08:52:08.861 226310 DEBUG nova.storage.rbd_utils [None req-2b456dec-81be-4883-969e-ae826854cc5e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] creating snapshot(935a04ef9fe74b379042ebf72312c820) on rbd image(defc87c3-85a5-47bb-8d50-3121d5d780c1_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:52:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e417 e417: 3 total, 3 up, 3 in
Nov 29 03:52:09 np0005539564 nova_compute[226295]: 2025-11-29 08:52:09.914 226310 DEBUG nova.storage.rbd_utils [None req-2b456dec-81be-4883-969e-ae826854cc5e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] cloning vms/defc87c3-85a5-47bb-8d50-3121d5d780c1_disk@935a04ef9fe74b379042ebf72312c820 to images/a7583cd4-d395-48e2-9f81-567bf2845ae0 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:52:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:10 np0005539564 nova_compute[226295]: 2025-11-29 08:52:10.082 226310 DEBUG nova.storage.rbd_utils [None req-2b456dec-81be-4883-969e-ae826854cc5e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] flattening images/a7583cd4-d395-48e2-9f81-567bf2845ae0 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:52:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:52:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:10.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:52:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:10.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:10 np0005539564 nova_compute[226295]: 2025-11-29 08:52:10.533 226310 DEBUG nova.storage.rbd_utils [None req-2b456dec-81be-4883-969e-ae826854cc5e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] removing snapshot(935a04ef9fe74b379042ebf72312c820) on rbd image(defc87c3-85a5-47bb-8d50-3121d5d780c1_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:52:10 np0005539564 nova_compute[226295]: 2025-11-29 08:52:10.760 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e418 e418: 3 total, 3 up, 3 in
Nov 29 03:52:10 np0005539564 nova_compute[226295]: 2025-11-29 08:52:10.928 226310 DEBUG nova.storage.rbd_utils [None req-2b456dec-81be-4883-969e-ae826854cc5e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] creating snapshot(snap) on rbd image(a7583cd4-d395-48e2-9f81-567bf2845ae0) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:52:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e419 e419: 3 total, 3 up, 3 in
Nov 29 03:52:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:12.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:52:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:12.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:52:13 np0005539564 nova_compute[226295]: 2025-11-29 08:52:13.531 226310 INFO nova.virt.libvirt.driver [None req-2b456dec-81be-4883-969e-ae826854cc5e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Snapshot image upload complete#033[00m
Nov 29 03:52:13 np0005539564 nova_compute[226295]: 2025-11-29 08:52:13.532 226310 INFO nova.compute.manager [None req-2b456dec-81be-4883-969e-ae826854cc5e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Took 5.41 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 29 03:52:13 np0005539564 nova_compute[226295]: 2025-11-29 08:52:13.642 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:14.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:14.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:15 np0005539564 nova_compute[226295]: 2025-11-29 08:52:15.763 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:52:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:16.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:52:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:16.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e420 e420: 3 total, 3 up, 3 in
Nov 29 03:52:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:52:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:18.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:52:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:18.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:18 np0005539564 nova_compute[226295]: 2025-11-29 08:52:18.647 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:19 np0005539564 nova_compute[226295]: 2025-11-29 08:52:19.404 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:19 np0005539564 nova_compute[226295]: 2025-11-29 08:52:19.404 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:19 np0005539564 nova_compute[226295]: 2025-11-29 08:52:19.448 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:20.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:20.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:20 np0005539564 nova_compute[226295]: 2025-11-29 08:52:20.765 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:22.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:22.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:23 np0005539564 nova_compute[226295]: 2025-11-29 08:52:23.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:23 np0005539564 nova_compute[226295]: 2025-11-29 08:52:23.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:52:23 np0005539564 nova_compute[226295]: 2025-11-29 08:52:23.651 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:24 np0005539564 nova_compute[226295]: 2025-11-29 08:52:24.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:24.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:24.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:25 np0005539564 nova_compute[226295]: 2025-11-29 08:52:25.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:25 np0005539564 nova_compute[226295]: 2025-11-29 08:52:25.806 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:26.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:26.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:27 np0005539564 nova_compute[226295]: 2025-11-29 08:52:27.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:27 np0005539564 nova_compute[226295]: 2025-11-29 08:52:27.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:52:27 np0005539564 nova_compute[226295]: 2025-11-29 08:52:27.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:52:27 np0005539564 nova_compute[226295]: 2025-11-29 08:52:27.668 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-defc87c3-85a5-47bb-8d50-3121d5d780c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:52:27 np0005539564 nova_compute[226295]: 2025-11-29 08:52:27.668 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-defc87c3-85a5-47bb-8d50-3121d5d780c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:52:27 np0005539564 nova_compute[226295]: 2025-11-29 08:52:27.669 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:52:27 np0005539564 nova_compute[226295]: 2025-11-29 08:52:27.669 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid defc87c3-85a5-47bb-8d50-3121d5d780c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:52:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:52:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:28.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:52:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:28.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:28 np0005539564 nova_compute[226295]: 2025-11-29 08:52:28.690 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:30.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:30.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:30 np0005539564 nova_compute[226295]: 2025-11-29 08:52:30.807 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:32 np0005539564 nova_compute[226295]: 2025-11-29 08:52:32.308 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Updating instance_info_cache with network_info: [{"id": "fecb7ef7-1d7d-446e-a531-15713ec4c8ce", "address": "fa:16:3e:5b:79:4a", "network": {"id": "9094c67b-5d6f-4130-9ec6-7da5c871a564", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1108775138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0596f9d1e5a5444ca2640f6e8244d53f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfecb7ef7-1d", "ovs_interfaceid": "fecb7ef7-1d7d-446e-a531-15713ec4c8ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:52:32 np0005539564 nova_compute[226295]: 2025-11-29 08:52:32.448 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-defc87c3-85a5-47bb-8d50-3121d5d780c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:52:32 np0005539564 nova_compute[226295]: 2025-11-29 08:52:32.449 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:52:32 np0005539564 nova_compute[226295]: 2025-11-29 08:52:32.450 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:32 np0005539564 nova_compute[226295]: 2025-11-29 08:52:32.450 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:32.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:52:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:32.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:52:33 np0005539564 nova_compute[226295]: 2025-11-29 08:52:33.695 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:34.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:52:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:34.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:52:34 np0005539564 nova_compute[226295]: 2025-11-29 08:52:34.738 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:34.737 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=83, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=82) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:52:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:34.739 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:52:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:35 np0005539564 nova_compute[226295]: 2025-11-29 08:52:35.855 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:36.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:36.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:38 np0005539564 nova_compute[226295]: 2025-11-29 08:52:38.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:38 np0005539564 nova_compute[226295]: 2025-11-29 08:52:38.371 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:38 np0005539564 nova_compute[226295]: 2025-11-29 08:52:38.372 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:38 np0005539564 nova_compute[226295]: 2025-11-29 08:52:38.372 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:38 np0005539564 nova_compute[226295]: 2025-11-29 08:52:38.372 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:52:38 np0005539564 nova_compute[226295]: 2025-11-29 08:52:38.373 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:38.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:38.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:38 np0005539564 podman[305733]: 2025-11-29 08:52:38.555730023 +0000 UTC m=+0.086323656 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:52:38 np0005539564 podman[305732]: 2025-11-29 08:52:38.564764168 +0000 UTC m=+0.097692994 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller)
Nov 29 03:52:38 np0005539564 podman[305734]: 2025-11-29 08:52:38.564817629 +0000 UTC m=+0.099512713 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:52:38 np0005539564 nova_compute[226295]: 2025-11-29 08:52:38.698 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:52:38 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2611578934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:52:38 np0005539564 nova_compute[226295]: 2025-11-29 08:52:38.849 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:38 np0005539564 nova_compute[226295]: 2025-11-29 08:52:38.922 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000c9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:52:38 np0005539564 nova_compute[226295]: 2025-11-29 08:52:38.923 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000c9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:52:39 np0005539564 nova_compute[226295]: 2025-11-29 08:52:39.133 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:52:39 np0005539564 nova_compute[226295]: 2025-11-29 08:52:39.134 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3999MB free_disk=20.896976470947266GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:52:39 np0005539564 nova_compute[226295]: 2025-11-29 08:52:39.135 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:39 np0005539564 nova_compute[226295]: 2025-11-29 08:52:39.135 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:39 np0005539564 nova_compute[226295]: 2025-11-29 08:52:39.219 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance defc87c3-85a5-47bb-8d50-3121d5d780c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:52:39 np0005539564 nova_compute[226295]: 2025-11-29 08:52:39.220 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:52:39 np0005539564 nova_compute[226295]: 2025-11-29 08:52:39.220 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:52:39 np0005539564 nova_compute[226295]: 2025-11-29 08:52:39.263 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:52:39 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/275755828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:52:39 np0005539564 nova_compute[226295]: 2025-11-29 08:52:39.753 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:39 np0005539564 nova_compute[226295]: 2025-11-29 08:52:39.760 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:52:39 np0005539564 nova_compute[226295]: 2025-11-29 08:52:39.779 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:52:39 np0005539564 nova_compute[226295]: 2025-11-29 08:52:39.780 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:52:39 np0005539564 nova_compute[226295]: 2025-11-29 08:52:39.780 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:40.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:40.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:40 np0005539564 nova_compute[226295]: 2025-11-29 08:52:40.906 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:42 np0005539564 nova_compute[226295]: 2025-11-29 08:52:42.275 226310 DEBUG oslo_concurrency.lockutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "2e1fe562-114c-4c94-99f0-592bfab32b88" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:42 np0005539564 nova_compute[226295]: 2025-11-29 08:52:42.275 226310 DEBUG oslo_concurrency.lockutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "2e1fe562-114c-4c94-99f0-592bfab32b88" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:42 np0005539564 nova_compute[226295]: 2025-11-29 08:52:42.292 226310 DEBUG nova.compute.manager [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:52:42 np0005539564 nova_compute[226295]: 2025-11-29 08:52:42.367 226310 DEBUG oslo_concurrency.lockutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:42 np0005539564 nova_compute[226295]: 2025-11-29 08:52:42.368 226310 DEBUG oslo_concurrency.lockutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:42 np0005539564 nova_compute[226295]: 2025-11-29 08:52:42.375 226310 DEBUG nova.virt.hardware [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:52:42 np0005539564 nova_compute[226295]: 2025-11-29 08:52:42.376 226310 INFO nova.compute.claims [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:52:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:42.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:42.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:42 np0005539564 nova_compute[226295]: 2025-11-29 08:52:42.592 226310 DEBUG oslo_concurrency.processutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:42 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:42.741 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '83'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:52:43 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3050799729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:52:43 np0005539564 nova_compute[226295]: 2025-11-29 08:52:43.094 226310 DEBUG oslo_concurrency.processutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:43 np0005539564 nova_compute[226295]: 2025-11-29 08:52:43.102 226310 DEBUG nova.compute.provider_tree [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:52:43 np0005539564 nova_compute[226295]: 2025-11-29 08:52:43.126 226310 DEBUG nova.scheduler.client.report [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:52:43 np0005539564 nova_compute[226295]: 2025-11-29 08:52:43.154 226310 DEBUG oslo_concurrency.lockutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:43 np0005539564 nova_compute[226295]: 2025-11-29 08:52:43.155 226310 DEBUG nova.compute.manager [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:52:43 np0005539564 nova_compute[226295]: 2025-11-29 08:52:43.211 226310 DEBUG nova.compute.manager [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:52:43 np0005539564 nova_compute[226295]: 2025-11-29 08:52:43.212 226310 DEBUG nova.network.neutron [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:52:43 np0005539564 nova_compute[226295]: 2025-11-29 08:52:43.235 226310 INFO nova.virt.libvirt.driver [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:52:43 np0005539564 nova_compute[226295]: 2025-11-29 08:52:43.265 226310 DEBUG nova.compute.manager [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:52:43 np0005539564 nova_compute[226295]: 2025-11-29 08:52:43.386 226310 DEBUG nova.compute.manager [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:52:43 np0005539564 nova_compute[226295]: 2025-11-29 08:52:43.388 226310 DEBUG nova.virt.libvirt.driver [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:52:43 np0005539564 nova_compute[226295]: 2025-11-29 08:52:43.388 226310 INFO nova.virt.libvirt.driver [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Creating image(s)#033[00m
Nov 29 03:52:43 np0005539564 nova_compute[226295]: 2025-11-29 08:52:43.422 226310 DEBUG nova.storage.rbd_utils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 2e1fe562-114c-4c94-99f0-592bfab32b88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:52:43 np0005539564 nova_compute[226295]: 2025-11-29 08:52:43.464 226310 DEBUG nova.storage.rbd_utils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 2e1fe562-114c-4c94-99f0-592bfab32b88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:52:43 np0005539564 nova_compute[226295]: 2025-11-29 08:52:43.499 226310 DEBUG nova.storage.rbd_utils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 2e1fe562-114c-4c94-99f0-592bfab32b88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:52:43 np0005539564 nova_compute[226295]: 2025-11-29 08:52:43.504 226310 DEBUG oslo_concurrency.processutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:43 np0005539564 nova_compute[226295]: 2025-11-29 08:52:43.595 226310 DEBUG oslo_concurrency.processutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:43 np0005539564 nova_compute[226295]: 2025-11-29 08:52:43.597 226310 DEBUG oslo_concurrency.lockutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:43 np0005539564 nova_compute[226295]: 2025-11-29 08:52:43.598 226310 DEBUG oslo_concurrency.lockutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:43 np0005539564 nova_compute[226295]: 2025-11-29 08:52:43.599 226310 DEBUG oslo_concurrency.lockutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:43 np0005539564 nova_compute[226295]: 2025-11-29 08:52:43.648 226310 DEBUG nova.storage.rbd_utils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 2e1fe562-114c-4c94-99f0-592bfab32b88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:52:43 np0005539564 nova_compute[226295]: 2025-11-29 08:52:43.654 226310 DEBUG oslo_concurrency.processutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 2e1fe562-114c-4c94-99f0-592bfab32b88_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:43 np0005539564 nova_compute[226295]: 2025-11-29 08:52:43.699 226310 DEBUG nova.policy [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a9ba73ff05b4529ad104362a5a57cc7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca5878248147453baabf40a90f9feb19', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:52:43 np0005539564 nova_compute[226295]: 2025-11-29 08:52:43.704 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:44 np0005539564 nova_compute[226295]: 2025-11-29 08:52:44.005 226310 DEBUG oslo_concurrency.processutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 2e1fe562-114c-4c94-99f0-592bfab32b88_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.352s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:44 np0005539564 nova_compute[226295]: 2025-11-29 08:52:44.089 226310 DEBUG nova.storage.rbd_utils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] resizing rbd image 2e1fe562-114c-4c94-99f0-592bfab32b88_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:52:44 np0005539564 nova_compute[226295]: 2025-11-29 08:52:44.199 226310 DEBUG nova.objects.instance [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'migration_context' on Instance uuid 2e1fe562-114c-4c94-99f0-592bfab32b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:52:44 np0005539564 nova_compute[226295]: 2025-11-29 08:52:44.224 226310 DEBUG nova.virt.libvirt.driver [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:52:44 np0005539564 nova_compute[226295]: 2025-11-29 08:52:44.225 226310 DEBUG nova.virt.libvirt.driver [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Ensure instance console log exists: /var/lib/nova/instances/2e1fe562-114c-4c94-99f0-592bfab32b88/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:52:44 np0005539564 nova_compute[226295]: 2025-11-29 08:52:44.225 226310 DEBUG oslo_concurrency.lockutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:44 np0005539564 nova_compute[226295]: 2025-11-29 08:52:44.225 226310 DEBUG oslo_concurrency.lockutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:44 np0005539564 nova_compute[226295]: 2025-11-29 08:52:44.226 226310 DEBUG oslo_concurrency.lockutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:52:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:44.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:52:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:44.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:44 np0005539564 nova_compute[226295]: 2025-11-29 08:52:44.884 226310 DEBUG nova.network.neutron [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Successfully created port: 96696820-c014-432e-b309-68908aae6b1f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:52:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:45 np0005539564 nova_compute[226295]: 2025-11-29 08:52:45.823 226310 DEBUG nova.network.neutron [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Successfully updated port: 96696820-c014-432e-b309-68908aae6b1f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:52:45 np0005539564 nova_compute[226295]: 2025-11-29 08:52:45.848 226310 DEBUG oslo_concurrency.lockutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "refresh_cache-2e1fe562-114c-4c94-99f0-592bfab32b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:52:45 np0005539564 nova_compute[226295]: 2025-11-29 08:52:45.848 226310 DEBUG oslo_concurrency.lockutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquired lock "refresh_cache-2e1fe562-114c-4c94-99f0-592bfab32b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:52:45 np0005539564 nova_compute[226295]: 2025-11-29 08:52:45.850 226310 DEBUG nova.network.neutron [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:52:45 np0005539564 nova_compute[226295]: 2025-11-29 08:52:45.969 226310 DEBUG nova.compute.manager [req-f02ce742-c909-45c5-ae2b-04ed844c0859 req-0cb40505-22c6-4e4c-99d1-cf7b9f24a455 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Received event network-changed-96696820-c014-432e-b309-68908aae6b1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:52:45 np0005539564 nova_compute[226295]: 2025-11-29 08:52:45.970 226310 DEBUG nova.compute.manager [req-f02ce742-c909-45c5-ae2b-04ed844c0859 req-0cb40505-22c6-4e4c-99d1-cf7b9f24a455 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Refreshing instance network info cache due to event network-changed-96696820-c014-432e-b309-68908aae6b1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:52:45 np0005539564 nova_compute[226295]: 2025-11-29 08:52:45.970 226310 DEBUG oslo_concurrency.lockutils [req-f02ce742-c909-45c5-ae2b-04ed844c0859 req-0cb40505-22c6-4e4c-99d1-cf7b9f24a455 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-2e1fe562-114c-4c94-99f0-592bfab32b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:52:45 np0005539564 nova_compute[226295]: 2025-11-29 08:52:45.971 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:46 np0005539564 nova_compute[226295]: 2025-11-29 08:52:46.051 226310 DEBUG nova.network.neutron [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:52:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:46.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:46.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:47 np0005539564 nova_compute[226295]: 2025-11-29 08:52:47.512 226310 DEBUG nova.network.neutron [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Updating instance_info_cache with network_info: [{"id": "96696820-c014-432e-b309-68908aae6b1f", "address": "fa:16:3e:37:90:57", "network": {"id": "d4229292-80a4-45ff-9b7c-102752939760", "bridge": "br-int", "label": "tempest-network-smoke--1641570643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96696820-c0", "ovs_interfaceid": "96696820-c014-432e-b309-68908aae6b1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:52:47 np0005539564 nova_compute[226295]: 2025-11-29 08:52:47.542 226310 DEBUG oslo_concurrency.lockutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Releasing lock "refresh_cache-2e1fe562-114c-4c94-99f0-592bfab32b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:52:47 np0005539564 nova_compute[226295]: 2025-11-29 08:52:47.543 226310 DEBUG nova.compute.manager [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Instance network_info: |[{"id": "96696820-c014-432e-b309-68908aae6b1f", "address": "fa:16:3e:37:90:57", "network": {"id": "d4229292-80a4-45ff-9b7c-102752939760", "bridge": "br-int", "label": "tempest-network-smoke--1641570643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96696820-c0", "ovs_interfaceid": "96696820-c014-432e-b309-68908aae6b1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:52:47 np0005539564 nova_compute[226295]: 2025-11-29 08:52:47.544 226310 DEBUG oslo_concurrency.lockutils [req-f02ce742-c909-45c5-ae2b-04ed844c0859 req-0cb40505-22c6-4e4c-99d1-cf7b9f24a455 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-2e1fe562-114c-4c94-99f0-592bfab32b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:52:47 np0005539564 nova_compute[226295]: 2025-11-29 08:52:47.545 226310 DEBUG nova.network.neutron [req-f02ce742-c909-45c5-ae2b-04ed844c0859 req-0cb40505-22c6-4e4c-99d1-cf7b9f24a455 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Refreshing network info cache for port 96696820-c014-432e-b309-68908aae6b1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:52:47 np0005539564 nova_compute[226295]: 2025-11-29 08:52:47.550 226310 DEBUG nova.virt.libvirt.driver [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Start _get_guest_xml network_info=[{"id": "96696820-c014-432e-b309-68908aae6b1f", "address": "fa:16:3e:37:90:57", "network": {"id": "d4229292-80a4-45ff-9b7c-102752939760", "bridge": "br-int", "label": "tempest-network-smoke--1641570643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96696820-c0", "ovs_interfaceid": "96696820-c014-432e-b309-68908aae6b1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:52:47 np0005539564 nova_compute[226295]: 2025-11-29 08:52:47.556 226310 WARNING nova.virt.libvirt.driver [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:52:47 np0005539564 nova_compute[226295]: 2025-11-29 08:52:47.564 226310 DEBUG nova.virt.libvirt.host [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:52:47 np0005539564 nova_compute[226295]: 2025-11-29 08:52:47.565 226310 DEBUG nova.virt.libvirt.host [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:52:47 np0005539564 nova_compute[226295]: 2025-11-29 08:52:47.569 226310 DEBUG nova.virt.libvirt.host [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:52:47 np0005539564 nova_compute[226295]: 2025-11-29 08:52:47.570 226310 DEBUG nova.virt.libvirt.host [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:52:47 np0005539564 nova_compute[226295]: 2025-11-29 08:52:47.571 226310 DEBUG nova.virt.libvirt.driver [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:52:47 np0005539564 nova_compute[226295]: 2025-11-29 08:52:47.572 226310 DEBUG nova.virt.hardware [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:52:47 np0005539564 nova_compute[226295]: 2025-11-29 08:52:47.572 226310 DEBUG nova.virt.hardware [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:52:47 np0005539564 nova_compute[226295]: 2025-11-29 08:52:47.572 226310 DEBUG nova.virt.hardware [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:52:47 np0005539564 nova_compute[226295]: 2025-11-29 08:52:47.573 226310 DEBUG nova.virt.hardware [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:52:47 np0005539564 nova_compute[226295]: 2025-11-29 08:52:47.573 226310 DEBUG nova.virt.hardware [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:52:47 np0005539564 nova_compute[226295]: 2025-11-29 08:52:47.573 226310 DEBUG nova.virt.hardware [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:52:47 np0005539564 nova_compute[226295]: 2025-11-29 08:52:47.574 226310 DEBUG nova.virt.hardware [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:52:47 np0005539564 nova_compute[226295]: 2025-11-29 08:52:47.574 226310 DEBUG nova.virt.hardware [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:52:47 np0005539564 nova_compute[226295]: 2025-11-29 08:52:47.574 226310 DEBUG nova.virt.hardware [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:52:47 np0005539564 nova_compute[226295]: 2025-11-29 08:52:47.574 226310 DEBUG nova.virt.hardware [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:52:47 np0005539564 nova_compute[226295]: 2025-11-29 08:52:47.575 226310 DEBUG nova.virt.hardware [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:52:47 np0005539564 nova_compute[226295]: 2025-11-29 08:52:47.578 226310 DEBUG oslo_concurrency.processutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:52:48 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/784458687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.052 226310 DEBUG oslo_concurrency.processutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.088 226310 DEBUG nova.storage.rbd_utils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 2e1fe562-114c-4c94-99f0-592bfab32b88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.092 226310 DEBUG oslo_concurrency.processutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:48.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:48.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:52:48 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1615450652' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.534 226310 DEBUG oslo_concurrency.processutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.536 226310 DEBUG nova.virt.libvirt.vif [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:52:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-818791151',display_name='tempest-TestNetworkBasicOps-server-818791151',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-818791151',id=204,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIy96kdU0U2vKT7AneAeYgbiAIwd/yp9NQoL5foiF54ThFbWCcFWMiTO0tCcxGmc97Oe3YZnbK8y6a0yspI7rRwgK3rllEqhoG+/6ZIQ9CWBNyVzmjoE4e73MGCznNkuZg==',key_name='tempest-TestNetworkBasicOps-59251133',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-8xyt2us5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:52:43Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=2e1fe562-114c-4c94-99f0-592bfab32b88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "96696820-c014-432e-b309-68908aae6b1f", "address": "fa:16:3e:37:90:57", "network": {"id": "d4229292-80a4-45ff-9b7c-102752939760", "bridge": "br-int", "label": "tempest-network-smoke--1641570643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96696820-c0", "ovs_interfaceid": "96696820-c014-432e-b309-68908aae6b1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.536 226310 DEBUG nova.network.os_vif_util [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "96696820-c014-432e-b309-68908aae6b1f", "address": "fa:16:3e:37:90:57", "network": {"id": "d4229292-80a4-45ff-9b7c-102752939760", "bridge": "br-int", "label": "tempest-network-smoke--1641570643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96696820-c0", "ovs_interfaceid": "96696820-c014-432e-b309-68908aae6b1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.537 226310 DEBUG nova.network.os_vif_util [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:90:57,bridge_name='br-int',has_traffic_filtering=True,id=96696820-c014-432e-b309-68908aae6b1f,network=Network(d4229292-80a4-45ff-9b7c-102752939760),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96696820-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.538 226310 DEBUG nova.objects.instance [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2e1fe562-114c-4c94-99f0-592bfab32b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.560 226310 DEBUG nova.virt.libvirt.driver [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:52:48 np0005539564 nova_compute[226295]:  <uuid>2e1fe562-114c-4c94-99f0-592bfab32b88</uuid>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:  <name>instance-000000cc</name>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <nova:name>tempest-TestNetworkBasicOps-server-818791151</nova:name>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:52:47</nova:creationTime>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:52:48 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:        <nova:user uuid="3a9ba73ff05b4529ad104362a5a57cc7">tempest-TestNetworkBasicOps-488786542-project-member</nova:user>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:        <nova:project uuid="ca5878248147453baabf40a90f9feb19">tempest-TestNetworkBasicOps-488786542</nova:project>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:        <nova:port uuid="96696820-c014-432e-b309-68908aae6b1f">
Nov 29 03:52:48 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <entry name="serial">2e1fe562-114c-4c94-99f0-592bfab32b88</entry>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <entry name="uuid">2e1fe562-114c-4c94-99f0-592bfab32b88</entry>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/2e1fe562-114c-4c94-99f0-592bfab32b88_disk">
Nov 29 03:52:48 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:52:48 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/2e1fe562-114c-4c94-99f0-592bfab32b88_disk.config">
Nov 29 03:52:48 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:52:48 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:37:90:57"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <target dev="tap96696820-c0"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/2e1fe562-114c-4c94-99f0-592bfab32b88/console.log" append="off"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:52:48 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:52:48 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:52:48 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:52:48 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.562 226310 DEBUG nova.compute.manager [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Preparing to wait for external event network-vif-plugged-96696820-c014-432e-b309-68908aae6b1f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.563 226310 DEBUG oslo_concurrency.lockutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "2e1fe562-114c-4c94-99f0-592bfab32b88-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.563 226310 DEBUG oslo_concurrency.lockutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "2e1fe562-114c-4c94-99f0-592bfab32b88-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.564 226310 DEBUG oslo_concurrency.lockutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "2e1fe562-114c-4c94-99f0-592bfab32b88-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.565 226310 DEBUG nova.virt.libvirt.vif [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:52:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-818791151',display_name='tempest-TestNetworkBasicOps-server-818791151',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-818791151',id=204,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIy96kdU0U2vKT7AneAeYgbiAIwd/yp9NQoL5foiF54ThFbWCcFWMiTO0tCcxGmc97Oe3YZnbK8y6a0yspI7rRwgK3rllEqhoG+/6ZIQ9CWBNyVzmjoE4e73MGCznNkuZg==',key_name='tempest-TestNetworkBasicOps-59251133',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-8xyt2us5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:52:43Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=2e1fe562-114c-4c94-99f0-592bfab32b88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "96696820-c014-432e-b309-68908aae6b1f", "address": "fa:16:3e:37:90:57", "network": {"id": "d4229292-80a4-45ff-9b7c-102752939760", "bridge": "br-int", "label": "tempest-network-smoke--1641570643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96696820-c0", "ovs_interfaceid": "96696820-c014-432e-b309-68908aae6b1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.566 226310 DEBUG nova.network.os_vif_util [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "96696820-c014-432e-b309-68908aae6b1f", "address": "fa:16:3e:37:90:57", "network": {"id": "d4229292-80a4-45ff-9b7c-102752939760", "bridge": "br-int", "label": "tempest-network-smoke--1641570643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96696820-c0", "ovs_interfaceid": "96696820-c014-432e-b309-68908aae6b1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.567 226310 DEBUG nova.network.os_vif_util [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:90:57,bridge_name='br-int',has_traffic_filtering=True,id=96696820-c014-432e-b309-68908aae6b1f,network=Network(d4229292-80a4-45ff-9b7c-102752939760),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96696820-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.567 226310 DEBUG os_vif [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:90:57,bridge_name='br-int',has_traffic_filtering=True,id=96696820-c014-432e-b309-68908aae6b1f,network=Network(d4229292-80a4-45ff-9b7c-102752939760),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96696820-c0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.568 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.569 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.570 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.574 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.574 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap96696820-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.575 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap96696820-c0, col_values=(('external_ids', {'iface-id': '96696820-c014-432e-b309-68908aae6b1f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:90:57', 'vm-uuid': '2e1fe562-114c-4c94-99f0-592bfab32b88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.576 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:48 np0005539564 NetworkManager[48997]: <info>  [1764406368.5777] manager: (tap96696820-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/382)
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.579 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.586 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.588 226310 INFO os_vif [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:90:57,bridge_name='br-int',has_traffic_filtering=True,id=96696820-c014-432e-b309-68908aae6b1f,network=Network(d4229292-80a4-45ff-9b7c-102752939760),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96696820-c0')#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.645 226310 DEBUG nova.virt.libvirt.driver [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.645 226310 DEBUG nova.virt.libvirt.driver [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.646 226310 DEBUG nova.virt.libvirt.driver [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No VIF found with MAC fa:16:3e:37:90:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.646 226310 INFO nova.virt.libvirt.driver [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Using config drive#033[00m
Nov 29 03:52:48 np0005539564 nova_compute[226295]: 2025-11-29 08:52:48.696 226310 DEBUG nova.storage.rbd_utils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 2e1fe562-114c-4c94-99f0-592bfab32b88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:52:49 np0005539564 nova_compute[226295]: 2025-11-29 08:52:49.270 226310 INFO nova.virt.libvirt.driver [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Creating config drive at /var/lib/nova/instances/2e1fe562-114c-4c94-99f0-592bfab32b88/disk.config#033[00m
Nov 29 03:52:49 np0005539564 nova_compute[226295]: 2025-11-29 08:52:49.276 226310 DEBUG oslo_concurrency.processutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2e1fe562-114c-4c94-99f0-592bfab32b88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp376k2ijp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:49 np0005539564 nova_compute[226295]: 2025-11-29 08:52:49.314 226310 DEBUG nova.network.neutron [req-f02ce742-c909-45c5-ae2b-04ed844c0859 req-0cb40505-22c6-4e4c-99d1-cf7b9f24a455 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Updated VIF entry in instance network info cache for port 96696820-c014-432e-b309-68908aae6b1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:52:49 np0005539564 nova_compute[226295]: 2025-11-29 08:52:49.315 226310 DEBUG nova.network.neutron [req-f02ce742-c909-45c5-ae2b-04ed844c0859 req-0cb40505-22c6-4e4c-99d1-cf7b9f24a455 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Updating instance_info_cache with network_info: [{"id": "96696820-c014-432e-b309-68908aae6b1f", "address": "fa:16:3e:37:90:57", "network": {"id": "d4229292-80a4-45ff-9b7c-102752939760", "bridge": "br-int", "label": "tempest-network-smoke--1641570643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96696820-c0", "ovs_interfaceid": "96696820-c014-432e-b309-68908aae6b1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:52:49 np0005539564 nova_compute[226295]: 2025-11-29 08:52:49.341 226310 DEBUG oslo_concurrency.lockutils [req-f02ce742-c909-45c5-ae2b-04ed844c0859 req-0cb40505-22c6-4e4c-99d1-cf7b9f24a455 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-2e1fe562-114c-4c94-99f0-592bfab32b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:52:49 np0005539564 nova_compute[226295]: 2025-11-29 08:52:49.424 226310 DEBUG oslo_concurrency.processutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2e1fe562-114c-4c94-99f0-592bfab32b88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp376k2ijp" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:49 np0005539564 nova_compute[226295]: 2025-11-29 08:52:49.457 226310 DEBUG nova.storage.rbd_utils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 2e1fe562-114c-4c94-99f0-592bfab32b88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:52:49 np0005539564 nova_compute[226295]: 2025-11-29 08:52:49.463 226310 DEBUG oslo_concurrency.processutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2e1fe562-114c-4c94-99f0-592bfab32b88/disk.config 2e1fe562-114c-4c94-99f0-592bfab32b88_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:49 np0005539564 nova_compute[226295]: 2025-11-29 08:52:49.662 226310 DEBUG oslo_concurrency.processutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2e1fe562-114c-4c94-99f0-592bfab32b88/disk.config 2e1fe562-114c-4c94-99f0-592bfab32b88_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:49 np0005539564 nova_compute[226295]: 2025-11-29 08:52:49.663 226310 INFO nova.virt.libvirt.driver [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Deleting local config drive /var/lib/nova/instances/2e1fe562-114c-4c94-99f0-592bfab32b88/disk.config because it was imported into RBD.#033[00m
Nov 29 03:52:49 np0005539564 kernel: tap96696820-c0: entered promiscuous mode
Nov 29 03:52:49 np0005539564 NetworkManager[48997]: <info>  [1764406369.7379] manager: (tap96696820-c0): new Tun device (/org/freedesktop/NetworkManager/Devices/383)
Nov 29 03:52:49 np0005539564 ovn_controller[130591]: 2025-11-29T08:52:49Z|00817|binding|INFO|Claiming lport 96696820-c014-432e-b309-68908aae6b1f for this chassis.
Nov 29 03:52:49 np0005539564 ovn_controller[130591]: 2025-11-29T08:52:49Z|00818|binding|INFO|96696820-c014-432e-b309-68908aae6b1f: Claiming fa:16:3e:37:90:57 10.100.0.13
Nov 29 03:52:49 np0005539564 nova_compute[226295]: 2025-11-29 08:52:49.742 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:49.752 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:90:57 10.100.0.13'], port_security=['fa:16:3e:37:90:57 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2e1fe562-114c-4c94-99f0-592bfab32b88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d4229292-80a4-45ff-9b7c-102752939760', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8f98e592-3a71-4abe-9790-9c2408b2a547', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88f9c966-7d41-46f1-8106-7095d470a6ef, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=96696820-c014-432e-b309-68908aae6b1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:52:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:49.754 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 96696820-c014-432e-b309-68908aae6b1f in datapath d4229292-80a4-45ff-9b7c-102752939760 bound to our chassis#033[00m
Nov 29 03:52:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:49.757 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d4229292-80a4-45ff-9b7c-102752939760#033[00m
Nov 29 03:52:49 np0005539564 ovn_controller[130591]: 2025-11-29T08:52:49Z|00819|binding|INFO|Setting lport 96696820-c014-432e-b309-68908aae6b1f ovn-installed in OVS
Nov 29 03:52:49 np0005539564 ovn_controller[130591]: 2025-11-29T08:52:49Z|00820|binding|INFO|Setting lport 96696820-c014-432e-b309-68908aae6b1f up in Southbound
Nov 29 03:52:49 np0005539564 nova_compute[226295]: 2025-11-29 08:52:49.763 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:49 np0005539564 nova_compute[226295]: 2025-11-29 08:52:49.765 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:49.774 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0b4d4f03-855f-4d8b-b24e-25f2bcff9fe9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:49.776 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd4229292-81 in ovnmeta-d4229292-80a4-45ff-9b7c-102752939760 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:52:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:49.778 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd4229292-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:52:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:49.778 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f114c784-d56a-439f-9bb6-bb31c19de807]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:49.781 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b080c964-6966-4099-a6de-31b3140d7107]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:49 np0005539564 systemd-udevd[306164]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:52:49 np0005539564 systemd-machined[190128]: New machine qemu-95-instance-000000cc.
Nov 29 03:52:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:49.801 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[d4691d05-e758-4583-b6ca-273c7a717c1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:49 np0005539564 NetworkManager[48997]: <info>  [1764406369.8080] device (tap96696820-c0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:52:49 np0005539564 NetworkManager[48997]: <info>  [1764406369.8095] device (tap96696820-c0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:52:49 np0005539564 systemd[1]: Started Virtual Machine qemu-95-instance-000000cc.
Nov 29 03:52:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:49.829 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[40f7f0b0-1805-4624-8d2f-d3bf4b518422]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:49.864 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[c8e0032a-6870-493b-8dd8-f06dfdfc8437]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:49 np0005539564 NetworkManager[48997]: <info>  [1764406369.8739] manager: (tapd4229292-80): new Veth device (/org/freedesktop/NetworkManager/Devices/384)
Nov 29 03:52:49 np0005539564 systemd-udevd[306167]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:52:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:49.873 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c75c73a5-db25-4a57-a432-63f34b000bad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:49.913 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[05f31fff-bd14-45b4-ba32-50c28447dc59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:49.916 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[02c7f3ac-f3cb-42c5-9e43-4f39e60621a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:49 np0005539564 NetworkManager[48997]: <info>  [1764406369.9428] device (tapd4229292-80): carrier: link connected
Nov 29 03:52:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:49.953 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[c508ee4d-04e3-4fbf-872a-5d57351185fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:49.974 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b78275-e98b-4bfe-a114-efe7b1ae12d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd4229292-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:00:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 934462, 'reachable_time': 38790, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306195, 'error': None, 'target': 'ovnmeta-d4229292-80a4-45ff-9b7c-102752939760', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:49.993 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c308a534-7a17-4be7-a460-61fe39136f64]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:90'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 934462, 'tstamp': 934462}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306196, 'error': None, 'target': 'ovnmeta-d4229292-80a4-45ff-9b7c-102752939760', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:50.013 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7b00e263-9f8c-4f29-8e72-cf43e4dd550b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd4229292-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:00:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 934462, 'reachable_time': 38790, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306197, 'error': None, 'target': 'ovnmeta-d4229292-80a4-45ff-9b7c-102752939760', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:50.056 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a8119262-380c-490f-8770-129f276585bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.068 226310 DEBUG nova.compute.manager [req-72f1e55e-6751-49de-ad66-cc463e25988a req-728d0faf-3431-430d-94f6-38037b23b983 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Received event network-vif-plugged-96696820-c014-432e-b309-68908aae6b1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.068 226310 DEBUG oslo_concurrency.lockutils [req-72f1e55e-6751-49de-ad66-cc463e25988a req-728d0faf-3431-430d-94f6-38037b23b983 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2e1fe562-114c-4c94-99f0-592bfab32b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.069 226310 DEBUG oslo_concurrency.lockutils [req-72f1e55e-6751-49de-ad66-cc463e25988a req-728d0faf-3431-430d-94f6-38037b23b983 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2e1fe562-114c-4c94-99f0-592bfab32b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.069 226310 DEBUG oslo_concurrency.lockutils [req-72f1e55e-6751-49de-ad66-cc463e25988a req-728d0faf-3431-430d-94f6-38037b23b983 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2e1fe562-114c-4c94-99f0-592bfab32b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.069 226310 DEBUG nova.compute.manager [req-72f1e55e-6751-49de-ad66-cc463e25988a req-728d0faf-3431-430d-94f6-38037b23b983 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Processing event network-vif-plugged-96696820-c014-432e-b309-68908aae6b1f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.284 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406370.2841551, 2e1fe562-114c-4c94-99f0-592bfab32b88 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.285 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] VM Started (Lifecycle Event)#033[00m
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.290 226310 DEBUG nova.compute.manager [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.296 226310 DEBUG nova.virt.libvirt.driver [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.302 226310 INFO nova.virt.libvirt.driver [-] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Instance spawned successfully.#033[00m
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.302 226310 DEBUG nova.virt.libvirt.driver [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.319 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.324 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.335 226310 DEBUG nova.virt.libvirt.driver [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.335 226310 DEBUG nova.virt.libvirt.driver [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.336 226310 DEBUG nova.virt.libvirt.driver [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.336 226310 DEBUG nova.virt.libvirt.driver [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.337 226310 DEBUG nova.virt.libvirt.driver [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.338 226310 DEBUG nova.virt.libvirt.driver [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.360 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.360 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406370.2865787, 2e1fe562-114c-4c94-99f0-592bfab32b88 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.361 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:50.380 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[94458272-b5c7-449a-b1cf-c325b54ef4d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:50.381 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4229292-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:50.381 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:50.382 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd4229292-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:50 np0005539564 kernel: tapd4229292-80: entered promiscuous mode
Nov 29 03:52:50 np0005539564 NetworkManager[48997]: <info>  [1764406370.3844] manager: (tapd4229292-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/385)
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.385 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:50.387 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd4229292-80, col_values=(('external_ids', {'iface-id': '6864f002-8984-4f83-802c-5987f0f90af9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.388 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:50 np0005539564 ovn_controller[130591]: 2025-11-29T08:52:50Z|00821|binding|INFO|Releasing lport 6864f002-8984-4f83-802c-5987f0f90af9 from this chassis (sb_readonly=0)
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.398 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.410 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:50.412 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d4229292-80a4-45ff-9b7c-102752939760.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d4229292-80a4-45ff-9b7c-102752939760.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:50.413 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[50c6ff1e-0099-4410-ac22-6d686de4d184]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:50.414 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-d4229292-80a4-45ff-9b7c-102752939760
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/d4229292-80a4-45ff-9b7c-102752939760.pid.haproxy
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID d4229292-80a4-45ff-9b7c-102752939760
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 03:52:50 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:52:50.415 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d4229292-80a4-45ff-9b7c-102752939760', 'env', 'PROCESS_TAG=haproxy-d4229292-80a4-45ff-9b7c-102752939760', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d4229292-80a4-45ff-9b7c-102752939760.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.416 226310 INFO nova.compute.manager [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Took 7.03 seconds to spawn the instance on the hypervisor.
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.417 226310 DEBUG nova.compute.manager [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.419 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406370.2948303, 2e1fe562-114c-4c94-99f0-592bfab32b88 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.419 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] VM Resumed (Lifecycle Event)
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.446 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.449 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.475 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:52:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:50.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.490 226310 INFO nova.compute.manager [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Took 8.14 seconds to build instance.
Nov 29 03:52:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:50.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.515 226310 DEBUG oslo_concurrency.lockutils [None req-781adff0-e198-4c7b-b557-0698646eaa60 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "2e1fe562-114c-4c94-99f0-592bfab32b88" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.240s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:52:50 np0005539564 podman[306271]: 2025-11-29 08:52:50.789401755 +0000 UTC m=+0.061891356 container create 09e8750c67de7aa49d90b6f157d3ee341cf58f3832a9e12207b8f9339d7c3a85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:52:50 np0005539564 systemd[1]: Started libpod-conmon-09e8750c67de7aa49d90b6f157d3ee341cf58f3832a9e12207b8f9339d7c3a85.scope.
Nov 29 03:52:50 np0005539564 podman[306271]: 2025-11-29 08:52:50.756764442 +0000 UTC m=+0.029254073 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:52:50 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:52:50 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48c20af9a9a7ebd9de6e3ab2551b11e13585ff5dc84f8449f189f7b68a852c4b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:52:50 np0005539564 podman[306271]: 2025-11-29 08:52:50.902865305 +0000 UTC m=+0.175354926 container init 09e8750c67de7aa49d90b6f157d3ee341cf58f3832a9e12207b8f9339d7c3a85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:52:50 np0005539564 podman[306271]: 2025-11-29 08:52:50.913247566 +0000 UTC m=+0.185737177 container start 09e8750c67de7aa49d90b6f157d3ee341cf58f3832a9e12207b8f9339d7c3a85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 03:52:50 np0005539564 neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760[306287]: [NOTICE]   (306291) : New worker (306293) forked
Nov 29 03:52:50 np0005539564 neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760[306287]: [NOTICE]   (306291) : Loading success.
Nov 29 03:52:50 np0005539564 nova_compute[226295]: 2025-11-29 08:52:50.967 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:52:52 np0005539564 nova_compute[226295]: 2025-11-29 08:52:52.200 226310 DEBUG nova.compute.manager [req-11cf1b8d-9c4c-4961-a202-96302a0c274c req-cf962fee-0992-47fe-9d28-c90b2b83ba2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Received event network-vif-plugged-96696820-c014-432e-b309-68908aae6b1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:52:52 np0005539564 nova_compute[226295]: 2025-11-29 08:52:52.201 226310 DEBUG oslo_concurrency.lockutils [req-11cf1b8d-9c4c-4961-a202-96302a0c274c req-cf962fee-0992-47fe-9d28-c90b2b83ba2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2e1fe562-114c-4c94-99f0-592bfab32b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:52:52 np0005539564 nova_compute[226295]: 2025-11-29 08:52:52.201 226310 DEBUG oslo_concurrency.lockutils [req-11cf1b8d-9c4c-4961-a202-96302a0c274c req-cf962fee-0992-47fe-9d28-c90b2b83ba2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2e1fe562-114c-4c94-99f0-592bfab32b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:52:52 np0005539564 nova_compute[226295]: 2025-11-29 08:52:52.202 226310 DEBUG oslo_concurrency.lockutils [req-11cf1b8d-9c4c-4961-a202-96302a0c274c req-cf962fee-0992-47fe-9d28-c90b2b83ba2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2e1fe562-114c-4c94-99f0-592bfab32b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:52:52 np0005539564 nova_compute[226295]: 2025-11-29 08:52:52.202 226310 DEBUG nova.compute.manager [req-11cf1b8d-9c4c-4961-a202-96302a0c274c req-cf962fee-0992-47fe-9d28-c90b2b83ba2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] No waiting events found dispatching network-vif-plugged-96696820-c014-432e-b309-68908aae6b1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:52:52 np0005539564 nova_compute[226295]: 2025-11-29 08:52:52.203 226310 WARNING nova.compute.manager [req-11cf1b8d-9c4c-4961-a202-96302a0c274c req-cf962fee-0992-47fe-9d28-c90b2b83ba2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Received unexpected event network-vif-plugged-96696820-c014-432e-b309-68908aae6b1f for instance with vm_state active and task_state None.
Nov 29 03:52:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:52.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:52.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Nov 29 03:52:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 29 03:52:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 29 03:52:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Nov 29 03:52:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 29 03:52:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Nov 29 03:52:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Nov 29 03:52:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Nov 29 03:52:53 np0005539564 nova_compute[226295]: 2025-11-29 08:52:53.579 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:52:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:54.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:54.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:54 np0005539564 nova_compute[226295]: 2025-11-29 08:52:54.765 226310 DEBUG nova.compute.manager [req-f919f4fa-375c-4192-b4e9-14645e8d08f0 req-69191ae5-f12b-4eb1-9c87-1eaf30e73244 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Received event network-changed-96696820-c014-432e-b309-68908aae6b1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:52:54 np0005539564 nova_compute[226295]: 2025-11-29 08:52:54.765 226310 DEBUG nova.compute.manager [req-f919f4fa-375c-4192-b4e9-14645e8d08f0 req-69191ae5-f12b-4eb1-9c87-1eaf30e73244 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Refreshing instance network info cache due to event network-changed-96696820-c014-432e-b309-68908aae6b1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:52:54 np0005539564 nova_compute[226295]: 2025-11-29 08:52:54.765 226310 DEBUG oslo_concurrency.lockutils [req-f919f4fa-375c-4192-b4e9-14645e8d08f0 req-69191ae5-f12b-4eb1-9c87-1eaf30e73244 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-2e1fe562-114c-4c94-99f0-592bfab32b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:52:54 np0005539564 nova_compute[226295]: 2025-11-29 08:52:54.765 226310 DEBUG oslo_concurrency.lockutils [req-f919f4fa-375c-4192-b4e9-14645e8d08f0 req-69191ae5-f12b-4eb1-9c87-1eaf30e73244 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-2e1fe562-114c-4c94-99f0-592bfab32b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:52:54 np0005539564 nova_compute[226295]: 2025-11-29 08:52:54.766 226310 DEBUG nova.network.neutron [req-f919f4fa-375c-4192-b4e9-14645e8d08f0 req-69191ae5-f12b-4eb1-9c87-1eaf30e73244 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Refreshing network info cache for port 96696820-c014-432e-b309-68908aae6b1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:52:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:55 np0005539564 nova_compute[226295]: 2025-11-29 08:52:55.388 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:52:55 np0005539564 nova_compute[226295]: 2025-11-29 08:52:55.389 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 03:52:55 np0005539564 nova_compute[226295]: 2025-11-29 08:52:55.435 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 03:52:56 np0005539564 nova_compute[226295]: 2025-11-29 08:52:56.008 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #157. Immutable memtables: 0.
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:52:56.385979) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 157
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406376386238, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 881, "num_deletes": 252, "total_data_size": 1668135, "memory_usage": 1693584, "flush_reason": "Manual Compaction"}
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #158: started
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406376394722, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 158, "file_size": 1099793, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76996, "largest_seqno": 77872, "table_properties": {"data_size": 1095705, "index_size": 1803, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9360, "raw_average_key_size": 19, "raw_value_size": 1087379, "raw_average_value_size": 2298, "num_data_blocks": 79, "num_entries": 473, "num_filter_entries": 473, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406318, "oldest_key_time": 1764406318, "file_creation_time": 1764406376, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 8762 microseconds, and 3443 cpu microseconds.
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:52:56.394759) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #158: 1099793 bytes OK
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:52:56.394773) [db/memtable_list.cc:519] [default] Level-0 commit table #158 started
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:52:56.399039) [db/memtable_list.cc:722] [default] Level-0 commit table #158: memtable #1 done
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:52:56.399053) EVENT_LOG_v1 {"time_micros": 1764406376399049, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:52:56.399083) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 1663648, prev total WAL file size 1663648, number of live WAL files 2.
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000154.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:52:56.400140) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [158(1074KB)], [156(11MB)]
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406376400186, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [158], "files_L6": [156], "score": -1, "input_data_size": 12837317, "oldest_snapshot_seqno": -1}
Nov 29 03:52:56 np0005539564 nova_compute[226295]: 2025-11-29 08:52:56.437 226310 DEBUG nova.network.neutron [req-f919f4fa-375c-4192-b4e9-14645e8d08f0 req-69191ae5-f12b-4eb1-9c87-1eaf30e73244 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Updated VIF entry in instance network info cache for port 96696820-c014-432e-b309-68908aae6b1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:52:56 np0005539564 nova_compute[226295]: 2025-11-29 08:52:56.437 226310 DEBUG nova.network.neutron [req-f919f4fa-375c-4192-b4e9-14645e8d08f0 req-69191ae5-f12b-4eb1-9c87-1eaf30e73244 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Updating instance_info_cache with network_info: [{"id": "96696820-c014-432e-b309-68908aae6b1f", "address": "fa:16:3e:37:90:57", "network": {"id": "d4229292-80a4-45ff-9b7c-102752939760", "bridge": "br-int", "label": "tempest-network-smoke--1641570643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96696820-c0", "ovs_interfaceid": "96696820-c014-432e-b309-68908aae6b1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:52:56 np0005539564 nova_compute[226295]: 2025-11-29 08:52:56.459 226310 DEBUG oslo_concurrency.lockutils [req-f919f4fa-375c-4192-b4e9-14645e8d08f0 req-69191ae5-f12b-4eb1-9c87-1eaf30e73244 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-2e1fe562-114c-4c94-99f0-592bfab32b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #159: 10141 keys, 10787872 bytes, temperature: kUnknown
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406376466582, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 159, "file_size": 10787872, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10725402, "index_size": 35989, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25413, "raw_key_size": 269824, "raw_average_key_size": 26, "raw_value_size": 10550666, "raw_average_value_size": 1040, "num_data_blocks": 1349, "num_entries": 10141, "num_filter_entries": 10141, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764406376, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 159, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:52:56.467060) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 10787872 bytes
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:52:56.469010) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 193.1 rd, 162.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.2 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(21.5) write-amplify(9.8) OK, records in: 10660, records dropped: 519 output_compression: NoCompression
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:52:56.469032) EVENT_LOG_v1 {"time_micros": 1764406376469022, "job": 100, "event": "compaction_finished", "compaction_time_micros": 66465, "compaction_time_cpu_micros": 35619, "output_level": 6, "num_output_files": 1, "total_output_size": 10787872, "num_input_records": 10660, "num_output_records": 10141, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406376469602, "job": 100, "event": "table_file_deletion", "file_number": 158}
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000156.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406376472740, "job": 100, "event": "table_file_deletion", "file_number": 156}
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:52:56.399879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:52:56.472842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:52:56.472848) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:52:56.472852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:52:56.472855) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:52:56 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:52:56.472857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:52:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:56.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:56.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:57 np0005539564 nova_compute[226295]: 2025-11-29 08:52:57.347 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:52:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:58.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:52:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:58.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:58 np0005539564 nova_compute[226295]: 2025-11-29 08:52:58.584 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:53:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:00.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:53:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:00.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:01 np0005539564 nova_compute[226295]: 2025-11-29 08:53:01.051 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:02 np0005539564 podman[306476]: 2025-11-29 08:53:02.173977456 +0000 UTC m=+0.083536720 container exec 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:53:02 np0005539564 podman[306476]: 2025-11-29 08:53:02.295112654 +0000 UTC m=+0.204671868 container exec_died 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 03:53:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:02.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:02.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:03 np0005539564 nova_compute[226295]: 2025-11-29 08:53:03.587 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:03.771 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:53:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:03.772 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:53:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:03.774 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:53:03 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:53:03 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:53:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:04.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:04.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:04 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:53:04 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:53:04 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:53:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:05 np0005539564 ovn_controller[130591]: 2025-11-29T08:53:05Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:37:90:57 10.100.0.13
Nov 29 03:53:05 np0005539564 ovn_controller[130591]: 2025-11-29T08:53:05Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:37:90:57 10.100.0.13
Nov 29 03:53:06 np0005539564 nova_compute[226295]: 2025-11-29 08:53:06.054 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:06.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:06.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:53:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:08.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:53:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:08.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:08 np0005539564 nova_compute[226295]: 2025-11-29 08:53:08.640 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e421 e421: 3 total, 3 up, 3 in
Nov 29 03:53:09 np0005539564 podman[306734]: 2025-11-29 08:53:09.519297718 +0000 UTC m=+0.070594050 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, tcib_managed=true)
Nov 29 03:53:09 np0005539564 podman[306735]: 2025-11-29 08:53:09.529522995 +0000 UTC m=+0.068938926 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 03:53:09 np0005539564 podman[306733]: 2025-11-29 08:53:09.557211504 +0000 UTC m=+0.104192719 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:53:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:10 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:53:10 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:53:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:10.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:10.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:11 np0005539564 nova_compute[226295]: 2025-11-29 08:53:11.121 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e422 e422: 3 total, 3 up, 3 in
Nov 29 03:53:12 np0005539564 nova_compute[226295]: 2025-11-29 08:53:12.132 226310 INFO nova.compute.manager [None req-4ab3568e-493a-4623-bb98-f2574a1642fa 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Get console output#033[00m
Nov 29 03:53:12 np0005539564 nova_compute[226295]: 2025-11-29 08:53:12.139 270504 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:53:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e423 e423: 3 total, 3 up, 3 in
Nov 29 03:53:12 np0005539564 nova_compute[226295]: 2025-11-29 08:53:12.479 226310 DEBUG oslo_concurrency.lockutils [None req-a28ceef6-0b5d-4aef-b6c7-39b7abce047f 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "2e1fe562-114c-4c94-99f0-592bfab32b88" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:53:12 np0005539564 nova_compute[226295]: 2025-11-29 08:53:12.480 226310 DEBUG oslo_concurrency.lockutils [None req-a28ceef6-0b5d-4aef-b6c7-39b7abce047f 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "2e1fe562-114c-4c94-99f0-592bfab32b88" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:53:12 np0005539564 nova_compute[226295]: 2025-11-29 08:53:12.481 226310 DEBUG oslo_concurrency.lockutils [None req-a28ceef6-0b5d-4aef-b6c7-39b7abce047f 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "2e1fe562-114c-4c94-99f0-592bfab32b88-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:53:12 np0005539564 nova_compute[226295]: 2025-11-29 08:53:12.481 226310 DEBUG oslo_concurrency.lockutils [None req-a28ceef6-0b5d-4aef-b6c7-39b7abce047f 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "2e1fe562-114c-4c94-99f0-592bfab32b88-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:53:12 np0005539564 nova_compute[226295]: 2025-11-29 08:53:12.482 226310 DEBUG oslo_concurrency.lockutils [None req-a28ceef6-0b5d-4aef-b6c7-39b7abce047f 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "2e1fe562-114c-4c94-99f0-592bfab32b88-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:53:12 np0005539564 nova_compute[226295]: 2025-11-29 08:53:12.483 226310 INFO nova.compute.manager [None req-a28ceef6-0b5d-4aef-b6c7-39b7abce047f 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Terminating instance#033[00m
Nov 29 03:53:12 np0005539564 nova_compute[226295]: 2025-11-29 08:53:12.485 226310 DEBUG nova.compute.manager [None req-a28ceef6-0b5d-4aef-b6c7-39b7abce047f 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:53:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:12.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:12.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:12 np0005539564 kernel: tap96696820-c0 (unregistering): left promiscuous mode
Nov 29 03:53:12 np0005539564 NetworkManager[48997]: <info>  [1764406392.5431] device (tap96696820-c0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:53:12 np0005539564 nova_compute[226295]: 2025-11-29 08:53:12.561 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:12 np0005539564 ovn_controller[130591]: 2025-11-29T08:53:12Z|00822|binding|INFO|Releasing lport 96696820-c014-432e-b309-68908aae6b1f from this chassis (sb_readonly=0)
Nov 29 03:53:12 np0005539564 ovn_controller[130591]: 2025-11-29T08:53:12Z|00823|binding|INFO|Setting lport 96696820-c014-432e-b309-68908aae6b1f down in Southbound
Nov 29 03:53:12 np0005539564 ovn_controller[130591]: 2025-11-29T08:53:12Z|00824|binding|INFO|Removing iface tap96696820-c0 ovn-installed in OVS
Nov 29 03:53:12 np0005539564 nova_compute[226295]: 2025-11-29 08:53:12.565 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:12.571 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:90:57 10.100.0.13'], port_security=['fa:16:3e:37:90:57 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2e1fe562-114c-4c94-99f0-592bfab32b88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d4229292-80a4-45ff-9b7c-102752939760', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8f98e592-3a71-4abe-9790-9c2408b2a547', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88f9c966-7d41-46f1-8106-7095d470a6ef, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=96696820-c014-432e-b309-68908aae6b1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:53:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:12.572 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 96696820-c014-432e-b309-68908aae6b1f in datapath d4229292-80a4-45ff-9b7c-102752939760 unbound from our chassis#033[00m
Nov 29 03:53:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:12.574 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d4229292-80a4-45ff-9b7c-102752939760, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:53:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:12.575 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[896f4c52-f7fa-4d04-a3f7-a82a077a29b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:53:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:12.576 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d4229292-80a4-45ff-9b7c-102752939760 namespace which is not needed anymore#033[00m
Nov 29 03:53:12 np0005539564 nova_compute[226295]: 2025-11-29 08:53:12.591 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:12 np0005539564 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000cc.scope: Deactivated successfully.
Nov 29 03:53:12 np0005539564 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000cc.scope: Consumed 14.802s CPU time.
Nov 29 03:53:12 np0005539564 systemd-machined[190128]: Machine qemu-95-instance-000000cc terminated.
Nov 29 03:53:12 np0005539564 nova_compute[226295]: 2025-11-29 08:53:12.731 226310 INFO nova.virt.libvirt.driver [-] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Instance destroyed successfully.#033[00m
Nov 29 03:53:12 np0005539564 nova_compute[226295]: 2025-11-29 08:53:12.733 226310 DEBUG nova.objects.instance [None req-a28ceef6-0b5d-4aef-b6c7-39b7abce047f 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'resources' on Instance uuid 2e1fe562-114c-4c94-99f0-592bfab32b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:53:12 np0005539564 neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760[306287]: [NOTICE]   (306291) : haproxy version is 2.8.14-c23fe91
Nov 29 03:53:12 np0005539564 neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760[306287]: [NOTICE]   (306291) : path to executable is /usr/sbin/haproxy
Nov 29 03:53:12 np0005539564 neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760[306287]: [WARNING]  (306291) : Exiting Master process...
Nov 29 03:53:12 np0005539564 neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760[306287]: [ALERT]    (306291) : Current worker (306293) exited with code 143 (Terminated)
Nov 29 03:53:12 np0005539564 neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760[306287]: [WARNING]  (306291) : All workers exited. Exiting... (0)
Nov 29 03:53:12 np0005539564 systemd[1]: libpod-09e8750c67de7aa49d90b6f157d3ee341cf58f3832a9e12207b8f9339d7c3a85.scope: Deactivated successfully.
Nov 29 03:53:12 np0005539564 podman[306868]: 2025-11-29 08:53:12.751751199 +0000 UTC m=+0.063966401 container died 09e8750c67de7aa49d90b6f157d3ee341cf58f3832a9e12207b8f9339d7c3a85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:53:12 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-09e8750c67de7aa49d90b6f157d3ee341cf58f3832a9e12207b8f9339d7c3a85-userdata-shm.mount: Deactivated successfully.
Nov 29 03:53:12 np0005539564 systemd[1]: var-lib-containers-storage-overlay-48c20af9a9a7ebd9de6e3ab2551b11e13585ff5dc84f8449f189f7b68a852c4b-merged.mount: Deactivated successfully.
Nov 29 03:53:12 np0005539564 podman[306868]: 2025-11-29 08:53:12.807369334 +0000 UTC m=+0.119584546 container cleanup 09e8750c67de7aa49d90b6f157d3ee341cf58f3832a9e12207b8f9339d7c3a85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:53:12 np0005539564 systemd[1]: libpod-conmon-09e8750c67de7aa49d90b6f157d3ee341cf58f3832a9e12207b8f9339d7c3a85.scope: Deactivated successfully.
Nov 29 03:53:12 np0005539564 nova_compute[226295]: 2025-11-29 08:53:12.825 226310 DEBUG nova.virt.libvirt.vif [None req-a28ceef6-0b5d-4aef-b6c7-39b7abce047f 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:52:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-818791151',display_name='tempest-TestNetworkBasicOps-server-818791151',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-818791151',id=204,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIy96kdU0U2vKT7AneAeYgbiAIwd/yp9NQoL5foiF54ThFbWCcFWMiTO0tCcxGmc97Oe3YZnbK8y6a0yspI7rRwgK3rllEqhoG+/6ZIQ9CWBNyVzmjoE4e73MGCznNkuZg==',key_name='tempest-TestNetworkBasicOps-59251133',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:52:50Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-8xyt2us5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:52:50Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=2e1fe562-114c-4c94-99f0-592bfab32b88,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "96696820-c014-432e-b309-68908aae6b1f", "address": "fa:16:3e:37:90:57", "network": {"id": "d4229292-80a4-45ff-9b7c-102752939760", "bridge": "br-int", "label": "tempest-network-smoke--1641570643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96696820-c0", "ovs_interfaceid": "96696820-c014-432e-b309-68908aae6b1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:53:12 np0005539564 nova_compute[226295]: 2025-11-29 08:53:12.825 226310 DEBUG nova.network.os_vif_util [None req-a28ceef6-0b5d-4aef-b6c7-39b7abce047f 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "96696820-c014-432e-b309-68908aae6b1f", "address": "fa:16:3e:37:90:57", "network": {"id": "d4229292-80a4-45ff-9b7c-102752939760", "bridge": "br-int", "label": "tempest-network-smoke--1641570643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96696820-c0", "ovs_interfaceid": "96696820-c014-432e-b309-68908aae6b1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:53:12 np0005539564 nova_compute[226295]: 2025-11-29 08:53:12.827 226310 DEBUG nova.network.os_vif_util [None req-a28ceef6-0b5d-4aef-b6c7-39b7abce047f 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:37:90:57,bridge_name='br-int',has_traffic_filtering=True,id=96696820-c014-432e-b309-68908aae6b1f,network=Network(d4229292-80a4-45ff-9b7c-102752939760),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96696820-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:53:12 np0005539564 nova_compute[226295]: 2025-11-29 08:53:12.827 226310 DEBUG os_vif [None req-a28ceef6-0b5d-4aef-b6c7-39b7abce047f 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:90:57,bridge_name='br-int',has_traffic_filtering=True,id=96696820-c014-432e-b309-68908aae6b1f,network=Network(d4229292-80a4-45ff-9b7c-102752939760),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96696820-c0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:53:12 np0005539564 nova_compute[226295]: 2025-11-29 08:53:12.830 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:12 np0005539564 nova_compute[226295]: 2025-11-29 08:53:12.830 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96696820-c0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:53:12 np0005539564 nova_compute[226295]: 2025-11-29 08:53:12.836 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:53:12 np0005539564 nova_compute[226295]: 2025-11-29 08:53:12.839 226310 INFO os_vif [None req-a28ceef6-0b5d-4aef-b6c7-39b7abce047f 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:90:57,bridge_name='br-int',has_traffic_filtering=True,id=96696820-c014-432e-b309-68908aae6b1f,network=Network(d4229292-80a4-45ff-9b7c-102752939760),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96696820-c0')#033[00m
Nov 29 03:53:12 np0005539564 podman[306910]: 2025-11-29 08:53:12.911636195 +0000 UTC m=+0.073570741 container remove 09e8750c67de7aa49d90b6f157d3ee341cf58f3832a9e12207b8f9339d7c3a85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:53:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:12.922 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0bb0ee15-e438-4d0e-bc59-3cc815163169]: (4, ('Sat Nov 29 08:53:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760 (09e8750c67de7aa49d90b6f157d3ee341cf58f3832a9e12207b8f9339d7c3a85)\n09e8750c67de7aa49d90b6f157d3ee341cf58f3832a9e12207b8f9339d7c3a85\nSat Nov 29 08:53:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d4229292-80a4-45ff-9b7c-102752939760 (09e8750c67de7aa49d90b6f157d3ee341cf58f3832a9e12207b8f9339d7c3a85)\n09e8750c67de7aa49d90b6f157d3ee341cf58f3832a9e12207b8f9339d7c3a85\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:53:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:12.924 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ee821dc8-1099-4057-bf0c-c23e6ef40ebe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:53:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:12.924 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4229292-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:53:12 np0005539564 nova_compute[226295]: 2025-11-29 08:53:12.926 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:12 np0005539564 kernel: tapd4229292-80: left promiscuous mode
Nov 29 03:53:12 np0005539564 nova_compute[226295]: 2025-11-29 08:53:12.944 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:12.949 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[fa7b0587-9fb0-4848-9fbb-4bf617260e46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:53:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:12.965 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1fc8886c-b5ad-4d77-abcc-fb3cdda06eb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:53:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:12.967 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[14125f6b-64c5-4443-a30b-f476b1da2c77]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:53:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:12.994 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[61f02ec9-d017-4d77-8ab3-0abda9dcbde6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 934453, 'reachable_time': 39171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306940, 'error': None, 'target': 'ovnmeta-d4229292-80a4-45ff-9b7c-102752939760', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:53:12 np0005539564 systemd[1]: run-netns-ovnmeta\x2dd4229292\x2d80a4\x2d45ff\x2d9b7c\x2d102752939760.mount: Deactivated successfully.
Nov 29 03:53:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:12.998 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d4229292-80a4-45ff-9b7c-102752939760 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:53:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:12.999 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[09a8b1a3-2324-4844-b54a-1ffd40ad37ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:53:13 np0005539564 nova_compute[226295]: 2025-11-29 08:53:13.146 226310 DEBUG nova.compute.manager [req-0a2615d2-4f90-4cab-99c3-85ca847939f0 req-b722e7ed-5f23-47af-bc23-db739dfb631d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Received event network-vif-unplugged-96696820-c014-432e-b309-68908aae6b1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:53:13 np0005539564 nova_compute[226295]: 2025-11-29 08:53:13.147 226310 DEBUG oslo_concurrency.lockutils [req-0a2615d2-4f90-4cab-99c3-85ca847939f0 req-b722e7ed-5f23-47af-bc23-db739dfb631d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2e1fe562-114c-4c94-99f0-592bfab32b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:53:13 np0005539564 nova_compute[226295]: 2025-11-29 08:53:13.147 226310 DEBUG oslo_concurrency.lockutils [req-0a2615d2-4f90-4cab-99c3-85ca847939f0 req-b722e7ed-5f23-47af-bc23-db739dfb631d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2e1fe562-114c-4c94-99f0-592bfab32b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:53:13 np0005539564 nova_compute[226295]: 2025-11-29 08:53:13.148 226310 DEBUG oslo_concurrency.lockutils [req-0a2615d2-4f90-4cab-99c3-85ca847939f0 req-b722e7ed-5f23-47af-bc23-db739dfb631d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2e1fe562-114c-4c94-99f0-592bfab32b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:53:13 np0005539564 nova_compute[226295]: 2025-11-29 08:53:13.148 226310 DEBUG nova.compute.manager [req-0a2615d2-4f90-4cab-99c3-85ca847939f0 req-b722e7ed-5f23-47af-bc23-db739dfb631d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] No waiting events found dispatching network-vif-unplugged-96696820-c014-432e-b309-68908aae6b1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:53:13 np0005539564 nova_compute[226295]: 2025-11-29 08:53:13.149 226310 DEBUG nova.compute.manager [req-0a2615d2-4f90-4cab-99c3-85ca847939f0 req-b722e7ed-5f23-47af-bc23-db739dfb631d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Received event network-vif-unplugged-96696820-c014-432e-b309-68908aae6b1f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:53:13 np0005539564 nova_compute[226295]: 2025-11-29 08:53:13.318 226310 INFO nova.virt.libvirt.driver [None req-a28ceef6-0b5d-4aef-b6c7-39b7abce047f 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Deleting instance files /var/lib/nova/instances/2e1fe562-114c-4c94-99f0-592bfab32b88_del#033[00m
Nov 29 03:53:13 np0005539564 nova_compute[226295]: 2025-11-29 08:53:13.319 226310 INFO nova.virt.libvirt.driver [None req-a28ceef6-0b5d-4aef-b6c7-39b7abce047f 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Deletion of /var/lib/nova/instances/2e1fe562-114c-4c94-99f0-592bfab32b88_del complete#033[00m
Nov 29 03:53:13 np0005539564 nova_compute[226295]: 2025-11-29 08:53:13.383 226310 INFO nova.compute.manager [None req-a28ceef6-0b5d-4aef-b6c7-39b7abce047f 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Took 0.90 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:53:13 np0005539564 nova_compute[226295]: 2025-11-29 08:53:13.384 226310 DEBUG oslo.service.loopingcall [None req-a28ceef6-0b5d-4aef-b6c7-39b7abce047f 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:53:13 np0005539564 nova_compute[226295]: 2025-11-29 08:53:13.384 226310 DEBUG nova.compute.manager [-] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:53:13 np0005539564 nova_compute[226295]: 2025-11-29 08:53:13.384 226310 DEBUG nova.network.neutron [-] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:53:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:14.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:14.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:14 np0005539564 nova_compute[226295]: 2025-11-29 08:53:14.768 226310 DEBUG nova.network.neutron [-] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:53:14 np0005539564 nova_compute[226295]: 2025-11-29 08:53:14.790 226310 INFO nova.compute.manager [-] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Took 1.41 seconds to deallocate network for instance.#033[00m
Nov 29 03:53:14 np0005539564 nova_compute[226295]: 2025-11-29 08:53:14.842 226310 DEBUG oslo_concurrency.lockutils [None req-a28ceef6-0b5d-4aef-b6c7-39b7abce047f 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:53:14 np0005539564 nova_compute[226295]: 2025-11-29 08:53:14.843 226310 DEBUG oslo_concurrency.lockutils [None req-a28ceef6-0b5d-4aef-b6c7-39b7abce047f 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:53:14 np0005539564 nova_compute[226295]: 2025-11-29 08:53:14.861 226310 DEBUG nova.compute.manager [req-e67b6cd8-adf6-41d8-a377-e6868d6efb0e req-61495467-a838-4d33-b1ac-08e1f5098320 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Received event network-vif-deleted-96696820-c014-432e-b309-68908aae6b1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:53:14 np0005539564 nova_compute[226295]: 2025-11-29 08:53:14.942 226310 DEBUG oslo_concurrency.processutils [None req-a28ceef6-0b5d-4aef-b6c7-39b7abce047f 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:53:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e424 e424: 3 total, 3 up, 3 in
Nov 29 03:53:15 np0005539564 nova_compute[226295]: 2025-11-29 08:53:15.268 226310 DEBUG nova.compute.manager [req-816cd5df-ec45-445d-8e15-fe8e336cebe5 req-6688b532-3ec6-4300-811c-9ceeb27b08a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Received event network-vif-plugged-96696820-c014-432e-b309-68908aae6b1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:53:15 np0005539564 nova_compute[226295]: 2025-11-29 08:53:15.269 226310 DEBUG oslo_concurrency.lockutils [req-816cd5df-ec45-445d-8e15-fe8e336cebe5 req-6688b532-3ec6-4300-811c-9ceeb27b08a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2e1fe562-114c-4c94-99f0-592bfab32b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:53:15 np0005539564 nova_compute[226295]: 2025-11-29 08:53:15.270 226310 DEBUG oslo_concurrency.lockutils [req-816cd5df-ec45-445d-8e15-fe8e336cebe5 req-6688b532-3ec6-4300-811c-9ceeb27b08a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2e1fe562-114c-4c94-99f0-592bfab32b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:53:15 np0005539564 nova_compute[226295]: 2025-11-29 08:53:15.270 226310 DEBUG oslo_concurrency.lockutils [req-816cd5df-ec45-445d-8e15-fe8e336cebe5 req-6688b532-3ec6-4300-811c-9ceeb27b08a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2e1fe562-114c-4c94-99f0-592bfab32b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:53:15 np0005539564 nova_compute[226295]: 2025-11-29 08:53:15.271 226310 DEBUG nova.compute.manager [req-816cd5df-ec45-445d-8e15-fe8e336cebe5 req-6688b532-3ec6-4300-811c-9ceeb27b08a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] No waiting events found dispatching network-vif-plugged-96696820-c014-432e-b309-68908aae6b1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:53:15 np0005539564 nova_compute[226295]: 2025-11-29 08:53:15.271 226310 WARNING nova.compute.manager [req-816cd5df-ec45-445d-8e15-fe8e336cebe5 req-6688b532-3ec6-4300-811c-9ceeb27b08a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Received unexpected event network-vif-plugged-96696820-c014-432e-b309-68908aae6b1f for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:53:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:53:15 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/42309310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:53:15 np0005539564 nova_compute[226295]: 2025-11-29 08:53:15.419 226310 DEBUG oslo_concurrency.processutils [None req-a28ceef6-0b5d-4aef-b6c7-39b7abce047f 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:53:15 np0005539564 nova_compute[226295]: 2025-11-29 08:53:15.426 226310 DEBUG nova.compute.provider_tree [None req-a28ceef6-0b5d-4aef-b6c7-39b7abce047f 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:53:15 np0005539564 nova_compute[226295]: 2025-11-29 08:53:15.450 226310 DEBUG nova.scheduler.client.report [None req-a28ceef6-0b5d-4aef-b6c7-39b7abce047f 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:53:15 np0005539564 nova_compute[226295]: 2025-11-29 08:53:15.490 226310 DEBUG oslo_concurrency.lockutils [None req-a28ceef6-0b5d-4aef-b6c7-39b7abce047f 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:53:15 np0005539564 nova_compute[226295]: 2025-11-29 08:53:15.527 226310 INFO nova.scheduler.client.report [None req-a28ceef6-0b5d-4aef-b6c7-39b7abce047f 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Deleted allocations for instance 2e1fe562-114c-4c94-99f0-592bfab32b88#033[00m
Nov 29 03:53:15 np0005539564 nova_compute[226295]: 2025-11-29 08:53:15.594 226310 DEBUG oslo_concurrency.lockutils [None req-a28ceef6-0b5d-4aef-b6c7-39b7abce047f 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "2e1fe562-114c-4c94-99f0-592bfab32b88" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:53:16 np0005539564 nova_compute[226295]: 2025-11-29 08:53:16.124 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:16.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:16.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e425 e425: 3 total, 3 up, 3 in
Nov 29 03:53:17 np0005539564 nova_compute[226295]: 2025-11-29 08:53:17.833 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:18.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:18.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:19 np0005539564 nova_compute[226295]: 2025-11-29 08:53:19.352 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:53:19 np0005539564 nova_compute[226295]: 2025-11-29 08:53:19.352 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:53:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:20.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:20.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:21 np0005539564 nova_compute[226295]: 2025-11-29 08:53:21.129 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e426 e426: 3 total, 3 up, 3 in
Nov 29 03:53:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:53:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:22.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:53:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:22.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:22 np0005539564 nova_compute[226295]: 2025-11-29 08:53:22.871 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e427 e427: 3 total, 3 up, 3 in
Nov 29 03:53:23 np0005539564 nova_compute[226295]: 2025-11-29 08:53:23.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:53:23 np0005539564 nova_compute[226295]: 2025-11-29 08:53:23.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:53:24 np0005539564 nova_compute[226295]: 2025-11-29 08:53:24.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:53:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:24.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:24.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:24 np0005539564 nova_compute[226295]: 2025-11-29 08:53:24.697 226310 DEBUG nova.compute.manager [req-a20d5749-a9cb-46ff-9bf8-f69fe76c662a req-b12cdf43-442b-42aa-ac52-1cbe4aa3ae7c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Received event network-changed-fecb7ef7-1d7d-446e-a531-15713ec4c8ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:53:24 np0005539564 nova_compute[226295]: 2025-11-29 08:53:24.698 226310 DEBUG nova.compute.manager [req-a20d5749-a9cb-46ff-9bf8-f69fe76c662a req-b12cdf43-442b-42aa-ac52-1cbe4aa3ae7c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Refreshing instance network info cache due to event network-changed-fecb7ef7-1d7d-446e-a531-15713ec4c8ce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:53:24 np0005539564 nova_compute[226295]: 2025-11-29 08:53:24.698 226310 DEBUG oslo_concurrency.lockutils [req-a20d5749-a9cb-46ff-9bf8-f69fe76c662a req-b12cdf43-442b-42aa-ac52-1cbe4aa3ae7c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-defc87c3-85a5-47bb-8d50-3121d5d780c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:53:24 np0005539564 nova_compute[226295]: 2025-11-29 08:53:24.698 226310 DEBUG oslo_concurrency.lockutils [req-a20d5749-a9cb-46ff-9bf8-f69fe76c662a req-b12cdf43-442b-42aa-ac52-1cbe4aa3ae7c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-defc87c3-85a5-47bb-8d50-3121d5d780c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:53:24 np0005539564 nova_compute[226295]: 2025-11-29 08:53:24.698 226310 DEBUG nova.network.neutron [req-a20d5749-a9cb-46ff-9bf8-f69fe76c662a req-b12cdf43-442b-42aa-ac52-1cbe4aa3ae7c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Refreshing network info cache for port fecb7ef7-1d7d-446e-a531-15713ec4c8ce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:53:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:25 np0005539564 nova_compute[226295]: 2025-11-29 08:53:25.174 226310 DEBUG oslo_concurrency.lockutils [None req-bfe63b60-0e11-4931-88b0-7fc44d01e009 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Acquiring lock "defc87c3-85a5-47bb-8d50-3121d5d780c1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:53:25 np0005539564 nova_compute[226295]: 2025-11-29 08:53:25.175 226310 DEBUG oslo_concurrency.lockutils [None req-bfe63b60-0e11-4931-88b0-7fc44d01e009 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "defc87c3-85a5-47bb-8d50-3121d5d780c1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:53:25 np0005539564 nova_compute[226295]: 2025-11-29 08:53:25.175 226310 DEBUG oslo_concurrency.lockutils [None req-bfe63b60-0e11-4931-88b0-7fc44d01e009 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Acquiring lock "defc87c3-85a5-47bb-8d50-3121d5d780c1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:53:25 np0005539564 nova_compute[226295]: 2025-11-29 08:53:25.176 226310 DEBUG oslo_concurrency.lockutils [None req-bfe63b60-0e11-4931-88b0-7fc44d01e009 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "defc87c3-85a5-47bb-8d50-3121d5d780c1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:53:25 np0005539564 nova_compute[226295]: 2025-11-29 08:53:25.176 226310 DEBUG oslo_concurrency.lockutils [None req-bfe63b60-0e11-4931-88b0-7fc44d01e009 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "defc87c3-85a5-47bb-8d50-3121d5d780c1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:53:25 np0005539564 nova_compute[226295]: 2025-11-29 08:53:25.178 226310 INFO nova.compute.manager [None req-bfe63b60-0e11-4931-88b0-7fc44d01e009 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Terminating instance#033[00m
Nov 29 03:53:25 np0005539564 nova_compute[226295]: 2025-11-29 08:53:25.180 226310 DEBUG nova.compute.manager [None req-bfe63b60-0e11-4931-88b0-7fc44d01e009 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:53:25 np0005539564 kernel: tapfecb7ef7-1d (unregistering): left promiscuous mode
Nov 29 03:53:25 np0005539564 NetworkManager[48997]: <info>  [1764406405.2543] device (tapfecb7ef7-1d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:53:25 np0005539564 ovn_controller[130591]: 2025-11-29T08:53:25Z|00825|binding|INFO|Releasing lport fecb7ef7-1d7d-446e-a531-15713ec4c8ce from this chassis (sb_readonly=0)
Nov 29 03:53:25 np0005539564 ovn_controller[130591]: 2025-11-29T08:53:25Z|00826|binding|INFO|Setting lport fecb7ef7-1d7d-446e-a531-15713ec4c8ce down in Southbound
Nov 29 03:53:25 np0005539564 nova_compute[226295]: 2025-11-29 08:53:25.268 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:25 np0005539564 ovn_controller[130591]: 2025-11-29T08:53:25Z|00827|binding|INFO|Removing iface tapfecb7ef7-1d ovn-installed in OVS
Nov 29 03:53:25 np0005539564 nova_compute[226295]: 2025-11-29 08:53:25.272 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:25.279 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:79:4a 10.100.0.8'], port_security=['fa:16:3e:5b:79:4a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'defc87c3-85a5-47bb-8d50-3121d5d780c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9094c67b-5d6f-4130-9ec6-7da5c871a564', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0596f9d1e5a5444ca2640f6e8244d53f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '49368169-f673-45da-b454-bf6c8bb93b4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46809892-ffee-4015-b7f0-51515653f0e9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=fecb7ef7-1d7d-446e-a531-15713ec4c8ce) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:53:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:25.280 139780 INFO neutron.agent.ovn.metadata.agent [-] Port fecb7ef7-1d7d-446e-a531-15713ec4c8ce in datapath 9094c67b-5d6f-4130-9ec6-7da5c871a564 unbound from our chassis#033[00m
Nov 29 03:53:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:25.281 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9094c67b-5d6f-4130-9ec6-7da5c871a564, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:53:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:25.283 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e4a077c6-8af7-443a-87b2-d982c9b0725f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:53:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:25.284 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564 namespace which is not needed anymore#033[00m
Nov 29 03:53:25 np0005539564 ovn_controller[130591]: 2025-11-29T08:53:25Z|00828|binding|INFO|Releasing lport 52cea514-684d-4e12-87ec-eee5c187481b from this chassis (sb_readonly=0)
Nov 29 03:53:25 np0005539564 nova_compute[226295]: 2025-11-29 08:53:25.384 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:25 np0005539564 nova_compute[226295]: 2025-11-29 08:53:25.386 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:25 np0005539564 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000c9.scope: Deactivated successfully.
Nov 29 03:53:25 np0005539564 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000c9.scope: Consumed 20.669s CPU time.
Nov 29 03:53:25 np0005539564 systemd-machined[190128]: Machine qemu-94-instance-000000c9 terminated.
Nov 29 03:53:25 np0005539564 neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564[305220]: [NOTICE]   (305224) : haproxy version is 2.8.14-c23fe91
Nov 29 03:53:25 np0005539564 neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564[305220]: [NOTICE]   (305224) : path to executable is /usr/sbin/haproxy
Nov 29 03:53:25 np0005539564 neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564[305220]: [WARNING]  (305224) : Exiting Master process...
Nov 29 03:53:25 np0005539564 neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564[305220]: [WARNING]  (305224) : Exiting Master process...
Nov 29 03:53:25 np0005539564 neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564[305220]: [ALERT]    (305224) : Current worker (305226) exited with code 143 (Terminated)
Nov 29 03:53:25 np0005539564 neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564[305220]: [WARNING]  (305224) : All workers exited. Exiting... (0)
Nov 29 03:53:25 np0005539564 systemd[1]: libpod-af4d5cb584808f60f52264ea292833d1293337d46b29976dfe616c1015403e33.scope: Deactivated successfully.
Nov 29 03:53:25 np0005539564 conmon[305220]: conmon af4d5cb584808f60f522 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-af4d5cb584808f60f52264ea292833d1293337d46b29976dfe616c1015403e33.scope/container/memory.events
Nov 29 03:53:25 np0005539564 podman[306989]: 2025-11-29 08:53:25.483735514 +0000 UTC m=+0.045436090 container died af4d5cb584808f60f52264ea292833d1293337d46b29976dfe616c1015403e33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:53:25 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-af4d5cb584808f60f52264ea292833d1293337d46b29976dfe616c1015403e33-userdata-shm.mount: Deactivated successfully.
Nov 29 03:53:25 np0005539564 ovn_controller[130591]: 2025-11-29T08:53:25Z|00829|binding|INFO|Releasing lport 52cea514-684d-4e12-87ec-eee5c187481b from this chassis (sb_readonly=0)
Nov 29 03:53:25 np0005539564 systemd[1]: var-lib-containers-storage-overlay-478af4f59d37dd5e6dcfa6abdfb8babf30b93a50d5ff5e4afb080c45c838890d-merged.mount: Deactivated successfully.
Nov 29 03:53:25 np0005539564 nova_compute[226295]: 2025-11-29 08:53:25.548 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:25 np0005539564 podman[306989]: 2025-11-29 08:53:25.550992454 +0000 UTC m=+0.112693030 container cleanup af4d5cb584808f60f52264ea292833d1293337d46b29976dfe616c1015403e33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:53:25 np0005539564 systemd[1]: libpod-conmon-af4d5cb584808f60f52264ea292833d1293337d46b29976dfe616c1015403e33.scope: Deactivated successfully.
Nov 29 03:53:25 np0005539564 nova_compute[226295]: 2025-11-29 08:53:25.616 226310 INFO nova.virt.libvirt.driver [-] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Instance destroyed successfully.#033[00m
Nov 29 03:53:25 np0005539564 nova_compute[226295]: 2025-11-29 08:53:25.618 226310 DEBUG nova.objects.instance [None req-bfe63b60-0e11-4931-88b0-7fc44d01e009 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lazy-loading 'resources' on Instance uuid defc87c3-85a5-47bb-8d50-3121d5d780c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:53:25 np0005539564 podman[307019]: 2025-11-29 08:53:25.634689288 +0000 UTC m=+0.056633903 container remove af4d5cb584808f60f52264ea292833d1293337d46b29976dfe616c1015403e33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:53:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:25.641 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[80570d1b-09d1-49bc-856a-33db5063b3e8]: (4, ('Sat Nov 29 08:53:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564 (af4d5cb584808f60f52264ea292833d1293337d46b29976dfe616c1015403e33)\naf4d5cb584808f60f52264ea292833d1293337d46b29976dfe616c1015403e33\nSat Nov 29 08:53:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564 (af4d5cb584808f60f52264ea292833d1293337d46b29976dfe616c1015403e33)\naf4d5cb584808f60f52264ea292833d1293337d46b29976dfe616c1015403e33\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:53:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:25.643 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[56dff328-9ffa-48ba-b257-c2a30d3d605d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:53:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:25.643 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9094c67b-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:53:25 np0005539564 nova_compute[226295]: 2025-11-29 08:53:25.645 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:25 np0005539564 kernel: tap9094c67b-50: left promiscuous mode
Nov 29 03:53:25 np0005539564 nova_compute[226295]: 2025-11-29 08:53:25.660 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:25.664 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[09fdeda3-3d11-495d-ad9c-08fbe4224b22]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:53:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:25.677 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[04f19696-d532-4408-aa6a-1d7bbf8d14f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:53:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:25.678 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f455a0d1-76bd-4ad7-81b3-f1a8bdd7fd7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:53:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:25.699 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[466a603d-e66a-4125-bcb2-db0af2c32e4c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 926462, 'reachable_time': 36795, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307049, 'error': None, 'target': 'ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:53:25 np0005539564 systemd[1]: run-netns-ovnmeta\x2d9094c67b\x2d5d6f\x2d4130\x2d9ec6\x2d7da5c871a564.mount: Deactivated successfully.
Nov 29 03:53:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:25.706 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:53:25 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:25.706 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b63aee-f0b9-4504-a4da-2e7367895bc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:53:25 np0005539564 nova_compute[226295]: 2025-11-29 08:53:25.708 226310 DEBUG nova.virt.libvirt.vif [None req-bfe63b60-0e11-4931-88b0-7fc44d01e009 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:51:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-254958693',display_name='tempest-TestSnapshotPattern-server-254958693',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-254958693',id=201,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBImEx8+jhsxRFNI/zXiqCIp6lKyzrmzXueICkOx8YGb02aphTL5Mlw1+YiMaTW8XLhYmBtqvqII/hnTIhC95ctb8YpefMaS6Qv1/vv9QrNRmuoy5csFiSCQsYM34gKdoxw==',key_name='tempest-TestSnapshotPattern-299175359',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:51:30Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0596f9d1e5a5444ca2640f6e8244d53f',ramdisk_id='',reservation_id='r-whquhgc5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-32695225',owner_user_name='tempest-TestSnapshotPattern-32695225-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:52:13Z,user_data=None,user_id='7b36e3f2406043c2a741c24fb14de7df',uuid=defc87c3-85a5-47bb-8d50-3121d5d780c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fecb7ef7-1d7d-446e-a531-15713ec4c8ce", "address": "fa:16:3e:5b:79:4a", "network": {"id": "9094c67b-5d6f-4130-9ec6-7da5c871a564", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1108775138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0596f9d1e5a5444ca2640f6e8244d53f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfecb7ef7-1d", "ovs_interfaceid": "fecb7ef7-1d7d-446e-a531-15713ec4c8ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:53:25 np0005539564 nova_compute[226295]: 2025-11-29 08:53:25.708 226310 DEBUG nova.network.os_vif_util [None req-bfe63b60-0e11-4931-88b0-7fc44d01e009 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Converting VIF {"id": "fecb7ef7-1d7d-446e-a531-15713ec4c8ce", "address": "fa:16:3e:5b:79:4a", "network": {"id": "9094c67b-5d6f-4130-9ec6-7da5c871a564", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1108775138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0596f9d1e5a5444ca2640f6e8244d53f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfecb7ef7-1d", "ovs_interfaceid": "fecb7ef7-1d7d-446e-a531-15713ec4c8ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:53:25 np0005539564 nova_compute[226295]: 2025-11-29 08:53:25.709 226310 DEBUG nova.network.os_vif_util [None req-bfe63b60-0e11-4931-88b0-7fc44d01e009 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5b:79:4a,bridge_name='br-int',has_traffic_filtering=True,id=fecb7ef7-1d7d-446e-a531-15713ec4c8ce,network=Network(9094c67b-5d6f-4130-9ec6-7da5c871a564),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfecb7ef7-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:53:25 np0005539564 nova_compute[226295]: 2025-11-29 08:53:25.709 226310 DEBUG os_vif [None req-bfe63b60-0e11-4931-88b0-7fc44d01e009 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:79:4a,bridge_name='br-int',has_traffic_filtering=True,id=fecb7ef7-1d7d-446e-a531-15713ec4c8ce,network=Network(9094c67b-5d6f-4130-9ec6-7da5c871a564),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfecb7ef7-1d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:53:25 np0005539564 nova_compute[226295]: 2025-11-29 08:53:25.711 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:25 np0005539564 nova_compute[226295]: 2025-11-29 08:53:25.711 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfecb7ef7-1d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:53:25 np0005539564 nova_compute[226295]: 2025-11-29 08:53:25.712 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:25 np0005539564 nova_compute[226295]: 2025-11-29 08:53:25.714 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:25 np0005539564 nova_compute[226295]: 2025-11-29 08:53:25.716 226310 INFO os_vif [None req-bfe63b60-0e11-4931-88b0-7fc44d01e009 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:79:4a,bridge_name='br-int',has_traffic_filtering=True,id=fecb7ef7-1d7d-446e-a531-15713ec4c8ce,network=Network(9094c67b-5d6f-4130-9ec6-7da5c871a564),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfecb7ef7-1d')#033[00m
Nov 29 03:53:26 np0005539564 nova_compute[226295]: 2025-11-29 08:53:26.131 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:26 np0005539564 nova_compute[226295]: 2025-11-29 08:53:26.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:53:26 np0005539564 nova_compute[226295]: 2025-11-29 08:53:26.404 226310 DEBUG nova.network.neutron [req-a20d5749-a9cb-46ff-9bf8-f69fe76c662a req-b12cdf43-442b-42aa-ac52-1cbe4aa3ae7c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Updated VIF entry in instance network info cache for port fecb7ef7-1d7d-446e-a531-15713ec4c8ce. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:53:26 np0005539564 nova_compute[226295]: 2025-11-29 08:53:26.405 226310 DEBUG nova.network.neutron [req-a20d5749-a9cb-46ff-9bf8-f69fe76c662a req-b12cdf43-442b-42aa-ac52-1cbe4aa3ae7c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Updating instance_info_cache with network_info: [{"id": "fecb7ef7-1d7d-446e-a531-15713ec4c8ce", "address": "fa:16:3e:5b:79:4a", "network": {"id": "9094c67b-5d6f-4130-9ec6-7da5c871a564", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1108775138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0596f9d1e5a5444ca2640f6e8244d53f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfecb7ef7-1d", "ovs_interfaceid": "fecb7ef7-1d7d-446e-a531-15713ec4c8ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:53:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:26.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:26.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:26 np0005539564 nova_compute[226295]: 2025-11-29 08:53:26.791 226310 DEBUG oslo_concurrency.lockutils [req-a20d5749-a9cb-46ff-9bf8-f69fe76c662a req-b12cdf43-442b-42aa-ac52-1cbe4aa3ae7c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-defc87c3-85a5-47bb-8d50-3121d5d780c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:53:27 np0005539564 nova_compute[226295]: 2025-11-29 08:53:27.331 226310 INFO nova.virt.libvirt.driver [None req-bfe63b60-0e11-4931-88b0-7fc44d01e009 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Deleting instance files /var/lib/nova/instances/defc87c3-85a5-47bb-8d50-3121d5d780c1_del#033[00m
Nov 29 03:53:27 np0005539564 nova_compute[226295]: 2025-11-29 08:53:27.332 226310 INFO nova.virt.libvirt.driver [None req-bfe63b60-0e11-4931-88b0-7fc44d01e009 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Deletion of /var/lib/nova/instances/defc87c3-85a5-47bb-8d50-3121d5d780c1_del complete#033[00m
Nov 29 03:53:27 np0005539564 nova_compute[226295]: 2025-11-29 08:53:27.346 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:53:27 np0005539564 nova_compute[226295]: 2025-11-29 08:53:27.346 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:53:27 np0005539564 nova_compute[226295]: 2025-11-29 08:53:27.347 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:53:27 np0005539564 nova_compute[226295]: 2025-11-29 08:53:27.514 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Nov 29 03:53:27 np0005539564 nova_compute[226295]: 2025-11-29 08:53:27.515 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:53:27 np0005539564 nova_compute[226295]: 2025-11-29 08:53:27.613 226310 INFO nova.compute.manager [None req-bfe63b60-0e11-4931-88b0-7fc44d01e009 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Took 2.43 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:53:27 np0005539564 nova_compute[226295]: 2025-11-29 08:53:27.614 226310 DEBUG oslo.service.loopingcall [None req-bfe63b60-0e11-4931-88b0-7fc44d01e009 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:53:27 np0005539564 nova_compute[226295]: 2025-11-29 08:53:27.615 226310 DEBUG nova.compute.manager [-] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:53:27 np0005539564 nova_compute[226295]: 2025-11-29 08:53:27.615 226310 DEBUG nova.network.neutron [-] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:53:27 np0005539564 nova_compute[226295]: 2025-11-29 08:53:27.728 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406392.7276695, 2e1fe562-114c-4c94-99f0-592bfab32b88 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:53:27 np0005539564 nova_compute[226295]: 2025-11-29 08:53:27.728 226310 INFO nova.compute.manager [-] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:53:27 np0005539564 nova_compute[226295]: 2025-11-29 08:53:27.803 226310 DEBUG nova.compute.manager [None req-8ad8af03-3b58-4a6b-b58e-471090261392 - - - - - -] [instance: 2e1fe562-114c-4c94-99f0-592bfab32b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:53:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:28.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:28.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:29 np0005539564 nova_compute[226295]: 2025-11-29 08:53:29.095 226310 DEBUG nova.compute.manager [req-bf9c901a-2930-40d3-a70d-9a29ac655da2 req-a5e0c318-e357-43c3-93f7-26cf91fb786b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Received event network-vif-unplugged-fecb7ef7-1d7d-446e-a531-15713ec4c8ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:53:29 np0005539564 nova_compute[226295]: 2025-11-29 08:53:29.096 226310 DEBUG oslo_concurrency.lockutils [req-bf9c901a-2930-40d3-a70d-9a29ac655da2 req-a5e0c318-e357-43c3-93f7-26cf91fb786b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "defc87c3-85a5-47bb-8d50-3121d5d780c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:53:29 np0005539564 nova_compute[226295]: 2025-11-29 08:53:29.097 226310 DEBUG oslo_concurrency.lockutils [req-bf9c901a-2930-40d3-a70d-9a29ac655da2 req-a5e0c318-e357-43c3-93f7-26cf91fb786b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "defc87c3-85a5-47bb-8d50-3121d5d780c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:53:29 np0005539564 nova_compute[226295]: 2025-11-29 08:53:29.097 226310 DEBUG oslo_concurrency.lockutils [req-bf9c901a-2930-40d3-a70d-9a29ac655da2 req-a5e0c318-e357-43c3-93f7-26cf91fb786b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "defc87c3-85a5-47bb-8d50-3121d5d780c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:53:29 np0005539564 nova_compute[226295]: 2025-11-29 08:53:29.098 226310 DEBUG nova.compute.manager [req-bf9c901a-2930-40d3-a70d-9a29ac655da2 req-a5e0c318-e357-43c3-93f7-26cf91fb786b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] No waiting events found dispatching network-vif-unplugged-fecb7ef7-1d7d-446e-a531-15713ec4c8ce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:53:29 np0005539564 nova_compute[226295]: 2025-11-29 08:53:29.098 226310 DEBUG nova.compute.manager [req-bf9c901a-2930-40d3-a70d-9a29ac655da2 req-a5e0c318-e357-43c3-93f7-26cf91fb786b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Received event network-vif-unplugged-fecb7ef7-1d7d-446e-a531-15713ec4c8ce for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:53:29 np0005539564 nova_compute[226295]: 2025-11-29 08:53:29.098 226310 DEBUG nova.compute.manager [req-bf9c901a-2930-40d3-a70d-9a29ac655da2 req-a5e0c318-e357-43c3-93f7-26cf91fb786b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Received event network-vif-plugged-fecb7ef7-1d7d-446e-a531-15713ec4c8ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:53:29 np0005539564 nova_compute[226295]: 2025-11-29 08:53:29.099 226310 DEBUG oslo_concurrency.lockutils [req-bf9c901a-2930-40d3-a70d-9a29ac655da2 req-a5e0c318-e357-43c3-93f7-26cf91fb786b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "defc87c3-85a5-47bb-8d50-3121d5d780c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:53:29 np0005539564 nova_compute[226295]: 2025-11-29 08:53:29.099 226310 DEBUG oslo_concurrency.lockutils [req-bf9c901a-2930-40d3-a70d-9a29ac655da2 req-a5e0c318-e357-43c3-93f7-26cf91fb786b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "defc87c3-85a5-47bb-8d50-3121d5d780c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:53:29 np0005539564 nova_compute[226295]: 2025-11-29 08:53:29.100 226310 DEBUG oslo_concurrency.lockutils [req-bf9c901a-2930-40d3-a70d-9a29ac655da2 req-a5e0c318-e357-43c3-93f7-26cf91fb786b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "defc87c3-85a5-47bb-8d50-3121d5d780c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:53:29 np0005539564 nova_compute[226295]: 2025-11-29 08:53:29.100 226310 DEBUG nova.compute.manager [req-bf9c901a-2930-40d3-a70d-9a29ac655da2 req-a5e0c318-e357-43c3-93f7-26cf91fb786b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] No waiting events found dispatching network-vif-plugged-fecb7ef7-1d7d-446e-a531-15713ec4c8ce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:53:29 np0005539564 nova_compute[226295]: 2025-11-29 08:53:29.100 226310 WARNING nova.compute.manager [req-bf9c901a-2930-40d3-a70d-9a29ac655da2 req-a5e0c318-e357-43c3-93f7-26cf91fb786b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Received unexpected event network-vif-plugged-fecb7ef7-1d7d-446e-a531-15713ec4c8ce for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:53:29 np0005539564 nova_compute[226295]: 2025-11-29 08:53:29.653 226310 DEBUG nova.network.neutron [-] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:53:29 np0005539564 nova_compute[226295]: 2025-11-29 08:53:29.681 226310 INFO nova.compute.manager [-] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Took 2.07 seconds to deallocate network for instance.#033[00m
Nov 29 03:53:29 np0005539564 nova_compute[226295]: 2025-11-29 08:53:29.723 226310 DEBUG oslo_concurrency.lockutils [None req-bfe63b60-0e11-4931-88b0-7fc44d01e009 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:53:29 np0005539564 nova_compute[226295]: 2025-11-29 08:53:29.724 226310 DEBUG oslo_concurrency.lockutils [None req-bfe63b60-0e11-4931-88b0-7fc44d01e009 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:53:29 np0005539564 nova_compute[226295]: 2025-11-29 08:53:29.786 226310 DEBUG oslo_concurrency.processutils [None req-bfe63b60-0e11-4931-88b0-7fc44d01e009 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:53:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:53:30 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4030366184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:53:30 np0005539564 nova_compute[226295]: 2025-11-29 08:53:30.303 226310 DEBUG oslo_concurrency.processutils [None req-bfe63b60-0e11-4931-88b0-7fc44d01e009 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:53:30 np0005539564 nova_compute[226295]: 2025-11-29 08:53:30.309 226310 DEBUG nova.compute.provider_tree [None req-bfe63b60-0e11-4931-88b0-7fc44d01e009 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:53:30 np0005539564 nova_compute[226295]: 2025-11-29 08:53:30.328 226310 DEBUG nova.scheduler.client.report [None req-bfe63b60-0e11-4931-88b0-7fc44d01e009 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:53:30 np0005539564 nova_compute[226295]: 2025-11-29 08:53:30.359 226310 DEBUG oslo_concurrency.lockutils [None req-bfe63b60-0e11-4931-88b0-7fc44d01e009 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:53:30 np0005539564 nova_compute[226295]: 2025-11-29 08:53:30.428 226310 INFO nova.scheduler.client.report [None req-bfe63b60-0e11-4931-88b0-7fc44d01e009 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Deleted allocations for instance defc87c3-85a5-47bb-8d50-3121d5d780c1#033[00m
Nov 29 03:53:30 np0005539564 nova_compute[226295]: 2025-11-29 08:53:30.514 226310 DEBUG oslo_concurrency.lockutils [None req-bfe63b60-0e11-4931-88b0-7fc44d01e009 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "defc87c3-85a5-47bb-8d50-3121d5d780c1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:53:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:53:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:30.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:53:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:30.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:30 np0005539564 nova_compute[226295]: 2025-11-29 08:53:30.713 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:31 np0005539564 nova_compute[226295]: 2025-11-29 08:53:31.134 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:31 np0005539564 nova_compute[226295]: 2025-11-29 08:53:31.181 226310 DEBUG nova.compute.manager [req-6f552a91-838d-4e6b-9ad0-dbd7673b5657 req-32052ebd-db77-45af-96ad-2b592979e2a1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Received event network-vif-deleted-fecb7ef7-1d7d-446e-a531-15713ec4c8ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:53:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 e428: 3 total, 3 up, 3 in
Nov 29 03:53:32 np0005539564 nova_compute[226295]: 2025-11-29 08:53:32.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:53:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:32.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:32.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:33 np0005539564 nova_compute[226295]: 2025-11-29 08:53:33.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:53:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:34.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:34.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:34.967 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=84, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=83) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:53:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:34.968 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:53:34 np0005539564 nova_compute[226295]: 2025-11-29 08:53:34.968 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:35 np0005539564 nova_compute[226295]: 2025-11-29 08:53:35.715 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:36 np0005539564 nova_compute[226295]: 2025-11-29 08:53:36.137 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:53:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:36.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:53:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:36.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:53:36.969 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '84'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:53:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:53:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:38.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:53:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:38.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:40 np0005539564 nova_compute[226295]: 2025-11-29 08:53:40.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:53:40 np0005539564 nova_compute[226295]: 2025-11-29 08:53:40.489 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:53:40 np0005539564 nova_compute[226295]: 2025-11-29 08:53:40.490 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:53:40 np0005539564 nova_compute[226295]: 2025-11-29 08:53:40.490 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:53:40 np0005539564 nova_compute[226295]: 2025-11-29 08:53:40.490 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:53:40 np0005539564 nova_compute[226295]: 2025-11-29 08:53:40.490 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:53:40 np0005539564 podman[307093]: 2025-11-29 08:53:40.511648353 +0000 UTC m=+0.062957184 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:53:40 np0005539564 podman[307092]: 2025-11-29 08:53:40.5329976 +0000 UTC m=+0.088957548 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 29 03:53:40 np0005539564 podman[307091]: 2025-11-29 08:53:40.548430628 +0000 UTC m=+0.102643518 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:53:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:40.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:40.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:40 np0005539564 nova_compute[226295]: 2025-11-29 08:53:40.615 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406405.6137974, defc87c3-85a5-47bb-8d50-3121d5d780c1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:53:40 np0005539564 nova_compute[226295]: 2025-11-29 08:53:40.615 226310 INFO nova.compute.manager [-] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:53:40 np0005539564 nova_compute[226295]: 2025-11-29 08:53:40.703 226310 DEBUG nova.compute.manager [None req-2b71f05b-2cb4-4d9a-a705-fbe5422c37d3 - - - - - -] [instance: defc87c3-85a5-47bb-8d50-3121d5d780c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:53:40 np0005539564 nova_compute[226295]: 2025-11-29 08:53:40.717 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:53:40 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2740549270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:53:40 np0005539564 nova_compute[226295]: 2025-11-29 08:53:40.965 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:53:41 np0005539564 nova_compute[226295]: 2025-11-29 08:53:41.138 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:41 np0005539564 nova_compute[226295]: 2025-11-29 08:53:41.249 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:53:41 np0005539564 nova_compute[226295]: 2025-11-29 08:53:41.251 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4145MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:53:41 np0005539564 nova_compute[226295]: 2025-11-29 08:53:41.252 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:53:41 np0005539564 nova_compute[226295]: 2025-11-29 08:53:41.252 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:53:41 np0005539564 nova_compute[226295]: 2025-11-29 08:53:41.926 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:53:41 np0005539564 nova_compute[226295]: 2025-11-29 08:53:41.927 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:53:41 np0005539564 nova_compute[226295]: 2025-11-29 08:53:41.959 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing inventories for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:53:41 np0005539564 nova_compute[226295]: 2025-11-29 08:53:41.991 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating ProviderTree inventory for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:53:41 np0005539564 nova_compute[226295]: 2025-11-29 08:53:41.992 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating inventory in ProviderTree for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:53:42 np0005539564 nova_compute[226295]: 2025-11-29 08:53:42.011 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing aggregate associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:53:42 np0005539564 nova_compute[226295]: 2025-11-29 08:53:42.039 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing trait associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:53:42 np0005539564 nova_compute[226295]: 2025-11-29 08:53:42.056 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:53:42 np0005539564 nova_compute[226295]: 2025-11-29 08:53:42.515 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:53:42 np0005539564 nova_compute[226295]: 2025-11-29 08:53:42.524 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:53:42 np0005539564 nova_compute[226295]: 2025-11-29 08:53:42.542 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:53:42 np0005539564 nova_compute[226295]: 2025-11-29 08:53:42.570 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:53:42 np0005539564 nova_compute[226295]: 2025-11-29 08:53:42.571 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.319s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:53:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:53:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:42.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:53:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:53:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:42.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:53:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:44.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:44.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:45 np0005539564 nova_compute[226295]: 2025-11-29 08:53:45.719 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:46 np0005539564 nova_compute[226295]: 2025-11-29 08:53:46.140 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:46.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:46.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:48.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:53:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:48.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:53:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:50.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:53:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:50.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:53:50 np0005539564 nova_compute[226295]: 2025-11-29 08:53:50.722 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:51 np0005539564 nova_compute[226295]: 2025-11-29 08:53:51.142 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:53:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:52.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:52 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:52.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:53:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:54.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:53:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:54.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:55 np0005539564 nova_compute[226295]: 2025-11-29 08:53:55.723 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:56 np0005539564 nova_compute[226295]: 2025-11-29 08:53:56.146 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:53:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:56 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:56.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:56.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:53:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:53:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:58.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:58 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:58.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:00 np0005539564 nova_compute[226295]: 2025-11-29 08:54:00.725 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:54:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:00 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:00.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:00.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:01 np0005539564 nova_compute[226295]: 2025-11-29 08:54:01.147 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:54:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:02.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:02 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:02.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:54:03.772 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:54:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:54:03.773 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:54:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:54:03.773 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:54:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:54:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:04.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:04 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:04.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:05 np0005539564 nova_compute[226295]: 2025-11-29 08:54:05.726 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:06 np0005539564 nova_compute[226295]: 2025-11-29 08:54:06.150 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:06.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:06.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:08 np0005539564 ovn_controller[130591]: 2025-11-29T08:54:08Z|00830|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Nov 29 03:54:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:08.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:08.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:10 np0005539564 nova_compute[226295]: 2025-11-29 08:54:10.729 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:10 np0005539564 podman[307225]: 2025-11-29 08:54:10.879811122 +0000 UTC m=+0.070369455 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:54:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:54:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:54:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:10.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:54:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:10 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:10.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:10 np0005539564 podman[307226]: 2025-11-29 08:54:10.939958238 +0000 UTC m=+0.113523801 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 03:54:10 np0005539564 podman[307224]: 2025-11-29 08:54:10.959984961 +0000 UTC m=+0.156348501 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:54:11 np0005539564 nova_compute[226295]: 2025-11-29 08:54:11.152 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:12 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:54:12 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:54:12 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:54:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:54:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:12.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:12 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:12.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:54:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:14 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:14.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:14.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:15 np0005539564 nova_compute[226295]: 2025-11-29 08:54:15.731 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:16 np0005539564 nova_compute[226295]: 2025-11-29 08:54:16.156 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:54:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:54:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:16.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:54:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:54:16 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:16.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:54:18 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:54:18 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:54:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:54:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:18 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:18.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:18.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:19 np0005539564 nova_compute[226295]: 2025-11-29 08:54:19.566 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:20 np0005539564 nova_compute[226295]: 2025-11-29 08:54:20.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:20 np0005539564 nova_compute[226295]: 2025-11-29 08:54:20.733 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:20.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:54:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:20 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:20.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:21 np0005539564 nova_compute[226295]: 2025-11-29 08:54:21.157 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:21 np0005539564 nova_compute[226295]: 2025-11-29 08:54:21.335 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:54:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:22.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:22 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:22.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:24 np0005539564 nova_compute[226295]: 2025-11-29 08:54:24.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:54:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:24.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:54:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:24.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:25 np0005539564 nova_compute[226295]: 2025-11-29 08:54:25.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:25 np0005539564 nova_compute[226295]: 2025-11-29 08:54:25.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:54:25 np0005539564 nova_compute[226295]: 2025-11-29 08:54:25.735 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:26 np0005539564 nova_compute[226295]: 2025-11-29 08:54:26.159 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:26.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:26.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:27 np0005539564 nova_compute[226295]: 2025-11-29 08:54:27.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:28 np0005539564 nova_compute[226295]: 2025-11-29 08:54:28.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:28 np0005539564 nova_compute[226295]: 2025-11-29 08:54:28.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:54:28 np0005539564 nova_compute[226295]: 2025-11-29 08:54:28.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:54:28 np0005539564 nova_compute[226295]: 2025-11-29 08:54:28.403 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:54:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:28.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:28.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:30 np0005539564 nova_compute[226295]: 2025-11-29 08:54:30.738 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:30.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:30.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:31 np0005539564 nova_compute[226295]: 2025-11-29 08:54:31.163 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:54:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:32.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:32 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:32.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:33 np0005539564 nova_compute[226295]: 2025-11-29 08:54:33.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:33 np0005539564 nova_compute[226295]: 2025-11-29 08:54:33.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:54:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:34.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:34 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:34.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:35 np0005539564 nova_compute[226295]: 2025-11-29 08:54:35.740 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:36 np0005539564 nova_compute[226295]: 2025-11-29 08:54:36.197 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:54:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:36.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:54:36 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:36.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:54:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:54:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:38 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:38.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:38.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:40 np0005539564 nova_compute[226295]: 2025-11-29 08:54:40.742 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:54:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:40 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:40.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:40.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:54:41.169 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=85, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=84) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:54:41 np0005539564 nova_compute[226295]: 2025-11-29 08:54:41.170 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:54:41.170 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:54:41 np0005539564 nova_compute[226295]: 2025-11-29 08:54:41.200 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:41 np0005539564 podman[307441]: 2025-11-29 08:54:41.567484584 +0000 UTC m=+0.107964722 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:54:41 np0005539564 podman[307440]: 2025-11-29 08:54:41.573690823 +0000 UTC m=+0.121180481 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 03:54:41 np0005539564 podman[307442]: 2025-11-29 08:54:41.577441264 +0000 UTC m=+0.113410520 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:54:42 np0005539564 nova_compute[226295]: 2025-11-29 08:54:42.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:42 np0005539564 nova_compute[226295]: 2025-11-29 08:54:42.379 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:54:42 np0005539564 nova_compute[226295]: 2025-11-29 08:54:42.379 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:54:42 np0005539564 nova_compute[226295]: 2025-11-29 08:54:42.380 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:54:42 np0005539564 nova_compute[226295]: 2025-11-29 08:54:42.380 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:54:42 np0005539564 nova_compute[226295]: 2025-11-29 08:54:42.381 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:54:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:54:42 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3527041133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:54:42 np0005539564 nova_compute[226295]: 2025-11-29 08:54:42.869 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:54:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:54:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:42.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:42 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:42.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:43 np0005539564 nova_compute[226295]: 2025-11-29 08:54:43.060 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:54:43 np0005539564 nova_compute[226295]: 2025-11-29 08:54:43.062 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4151MB free_disk=20.942718505859375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:54:43 np0005539564 nova_compute[226295]: 2025-11-29 08:54:43.063 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:54:43 np0005539564 nova_compute[226295]: 2025-11-29 08:54:43.063 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:54:43 np0005539564 nova_compute[226295]: 2025-11-29 08:54:43.259 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:54:43 np0005539564 nova_compute[226295]: 2025-11-29 08:54:43.260 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:54:43 np0005539564 nova_compute[226295]: 2025-11-29 08:54:43.347 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:54:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:54:43 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1187488511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:54:43 np0005539564 nova_compute[226295]: 2025-11-29 08:54:43.787 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:54:43 np0005539564 nova_compute[226295]: 2025-11-29 08:54:43.795 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:54:43 np0005539564 nova_compute[226295]: 2025-11-29 08:54:43.820 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:54:43 np0005539564 nova_compute[226295]: 2025-11-29 08:54:43.822 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:54:43 np0005539564 nova_compute[226295]: 2025-11-29 08:54:43.822 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:54:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:54:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:44.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:44 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:44.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:45 np0005539564 nova_compute[226295]: 2025-11-29 08:54:45.743 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:46 np0005539564 nova_compute[226295]: 2025-11-29 08:54:46.203 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:46.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:46.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:54:47.173 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '85'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:54:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:54:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:48.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:54:49 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:48.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:54:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:50 np0005539564 nova_compute[226295]: 2025-11-29 08:54:50.746 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:54:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:51 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:51.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:51.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:51 np0005539564 nova_compute[226295]: 2025-11-29 08:54:51.206 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:54:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:53.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:53 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:53.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:54:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:55 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:55.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:55.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:55 np0005539564 nova_compute[226295]: 2025-11-29 08:54:55.748 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:56 np0005539564 nova_compute[226295]: 2025-11-29 08:54:56.208 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:54:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:57.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:54:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:54:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:57 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:57.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:59.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:54:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:59.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:00 np0005539564 nova_compute[226295]: 2025-11-29 08:55:00.750 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:01.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:01.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:01 np0005539564 nova_compute[226295]: 2025-11-29 08:55:01.215 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:55:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:03.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:55:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:03.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:55:03.774 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:55:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:55:03.775 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:55:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:55:03.775 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:55:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:55:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:05.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:05 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:05.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:05 np0005539564 nova_compute[226295]: 2025-11-29 08:55:05.752 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:06 np0005539564 nova_compute[226295]: 2025-11-29 08:55:06.217 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:55:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:07.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:07 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:07.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:55:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:09 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:09.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:09.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:10 np0005539564 nova_compute[226295]: 2025-11-29 08:55:10.753 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:55:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:11.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:11 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:11.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:11 np0005539564 nova_compute[226295]: 2025-11-29 08:55:11.220 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:12 np0005539564 podman[307548]: 2025-11-29 08:55:12.52540863 +0000 UTC m=+0.078592617 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:55:12 np0005539564 podman[307549]: 2025-11-29 08:55:12.543556912 +0000 UTC m=+0.098822805 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:55:12 np0005539564 podman[307547]: 2025-11-29 08:55:12.569476163 +0000 UTC m=+0.123166004 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 03:55:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:13.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475161e6f0 =====
Nov 29 03:55:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475161e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:13 np0005539564 radosgw[83777]: beast: 0x7f475161e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:13.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:15.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:15.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:15 np0005539564 nova_compute[226295]: 2025-11-29 08:55:15.756 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:16 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #56. Immutable memtables: 12.
Nov 29 03:55:16 np0005539564 nova_compute[226295]: 2025-11-29 08:55:16.259 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:17.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:17.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:19.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:19.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:55:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:55:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:55:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:55:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:20 np0005539564 nova_compute[226295]: 2025-11-29 08:55:20.760 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:21.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:21.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:21 np0005539564 nova_compute[226295]: 2025-11-29 08:55:21.261 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:22 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:55:22 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:55:22 np0005539564 nova_compute[226295]: 2025-11-29 08:55:22.823 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:23.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:23.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:23 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:55:23 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:55:23 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:55:23 np0005539564 nova_compute[226295]: 2025-11-29 08:55:23.335 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:25.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:25.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:25 np0005539564 nova_compute[226295]: 2025-11-29 08:55:25.763 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:26 np0005539564 nova_compute[226295]: 2025-11-29 08:55:26.294 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:26 np0005539564 nova_compute[226295]: 2025-11-29 08:55:26.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:27.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:27.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:27 np0005539564 nova_compute[226295]: 2025-11-29 08:55:27.346 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:27 np0005539564 nova_compute[226295]: 2025-11-29 08:55:27.347 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:55:28 np0005539564 nova_compute[226295]: 2025-11-29 08:55:28.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:29.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:29.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:55:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:55:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:30 np0005539564 nova_compute[226295]: 2025-11-29 08:55:30.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:30 np0005539564 nova_compute[226295]: 2025-11-29 08:55:30.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:55:30 np0005539564 nova_compute[226295]: 2025-11-29 08:55:30.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:55:30 np0005539564 nova_compute[226295]: 2025-11-29 08:55:30.373 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:55:30 np0005539564 nova_compute[226295]: 2025-11-29 08:55:30.765 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:31.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:31.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:31 np0005539564 nova_compute[226295]: 2025-11-29 08:55:31.296 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:33.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:55:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:33.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:55:33 np0005539564 nova_compute[226295]: 2025-11-29 08:55:33.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:35.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:35.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:35 np0005539564 nova_compute[226295]: 2025-11-29 08:55:35.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:35 np0005539564 nova_compute[226295]: 2025-11-29 08:55:35.769 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:36 np0005539564 nova_compute[226295]: 2025-11-29 08:55:36.298 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:37.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:37.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:39.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:39.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:40 np0005539564 nova_compute[226295]: 2025-11-29 08:55:40.771 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:41.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:41.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:41 np0005539564 nova_compute[226295]: 2025-11-29 08:55:41.302 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:43.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:43.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:43 np0005539564 podman[307911]: 2025-11-29 08:55:43.542852846 +0000 UTC m=+0.093760808 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 03:55:43 np0005539564 podman[307912]: 2025-11-29 08:55:43.546437403 +0000 UTC m=+0.090169160 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible)
Nov 29 03:55:43 np0005539564 podman[307910]: 2025-11-29 08:55:43.631251818 +0000 UTC m=+0.182301684 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 29 03:55:44 np0005539564 nova_compute[226295]: 2025-11-29 08:55:44.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:44 np0005539564 nova_compute[226295]: 2025-11-29 08:55:44.378 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:55:44 np0005539564 nova_compute[226295]: 2025-11-29 08:55:44.378 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:55:44 np0005539564 nova_compute[226295]: 2025-11-29 08:55:44.379 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:55:44 np0005539564 nova_compute[226295]: 2025-11-29 08:55:44.379 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:55:44 np0005539564 nova_compute[226295]: 2025-11-29 08:55:44.379 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:55:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:55:44 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1078295006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:55:44 np0005539564 nova_compute[226295]: 2025-11-29 08:55:44.902 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:55:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:55:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:45.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:55:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:45.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:45 np0005539564 nova_compute[226295]: 2025-11-29 08:55:45.155 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:55:45 np0005539564 nova_compute[226295]: 2025-11-29 08:55:45.156 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4167MB free_disk=20.896976470947266GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:55:45 np0005539564 nova_compute[226295]: 2025-11-29 08:55:45.157 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:55:45 np0005539564 nova_compute[226295]: 2025-11-29 08:55:45.157 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:55:45 np0005539564 nova_compute[226295]: 2025-11-29 08:55:45.447 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:55:45 np0005539564 nova_compute[226295]: 2025-11-29 08:55:45.448 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:55:45 np0005539564 nova_compute[226295]: 2025-11-29 08:55:45.467 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:55:45 np0005539564 nova_compute[226295]: 2025-11-29 08:55:45.817 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:55:45 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1167076756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:55:45 np0005539564 nova_compute[226295]: 2025-11-29 08:55:45.964 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:55:45 np0005539564 nova_compute[226295]: 2025-11-29 08:55:45.972 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:55:46 np0005539564 nova_compute[226295]: 2025-11-29 08:55:46.304 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:46 np0005539564 nova_compute[226295]: 2025-11-29 08:55:46.334 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:55:46 np0005539564 nova_compute[226295]: 2025-11-29 08:55:46.335 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:55:46 np0005539564 nova_compute[226295]: 2025-11-29 08:55:46.336 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:55:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:47.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:47.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:49.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:49.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:50 np0005539564 nova_compute[226295]: 2025-11-29 08:55:50.819 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:51.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:51.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:51 np0005539564 nova_compute[226295]: 2025-11-29 08:55:51.308 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:55:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:53.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:55:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:53.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:55.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:55.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:55 np0005539564 nova_compute[226295]: 2025-11-29 08:55:55.823 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:56 np0005539564 nova_compute[226295]: 2025-11-29 08:55:56.309 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:57.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:57.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:59.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:55:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:59.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:00 np0005539564 nova_compute[226295]: 2025-11-29 08:56:00.827 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:01.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:01.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:01 np0005539564 nova_compute[226295]: 2025-11-29 08:56:01.344 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:03.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:03.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:56:03.776 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:56:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:56:03.776 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:56:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:56:03.776 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:56:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:05.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:05.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:05 np0005539564 nova_compute[226295]: 2025-11-29 08:56:05.830 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:06 np0005539564 nova_compute[226295]: 2025-11-29 08:56:06.347 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:07.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:07.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:09.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:09.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:56:09.787 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=86, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=85) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:56:09 np0005539564 nova_compute[226295]: 2025-11-29 08:56:09.788 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:56:09.789 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:56:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:10 np0005539564 nova_compute[226295]: 2025-11-29 08:56:10.833 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:11.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:11.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:11 np0005539564 nova_compute[226295]: 2025-11-29 08:56:11.384 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:13.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:13.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:13 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:56:13.792 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '86'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:56:14 np0005539564 podman[308020]: 2025-11-29 08:56:14.552552243 +0000 UTC m=+0.096504722 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:56:14 np0005539564 podman[308019]: 2025-11-29 08:56:14.574881467 +0000 UTC m=+0.124730966 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:56:14 np0005539564 podman[308021]: 2025-11-29 08:56:14.587044186 +0000 UTC m=+0.125605570 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:56:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:15.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:15.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:15 np0005539564 nova_compute[226295]: 2025-11-29 08:56:15.836 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:56:16 np0005539564 nova_compute[226295]: 2025-11-29 08:56:16.441 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:56:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:17.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:17.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:19.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:19.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:20 np0005539564 nova_compute[226295]: 2025-11-29 08:56:20.838 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:56:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:21.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:21.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:21 np0005539564 nova_compute[226295]: 2025-11-29 08:56:21.444 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:56:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:23.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:56:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:23.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:56:24 np0005539564 nova_compute[226295]: 2025-11-29 08:56:24.329 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:56:24 np0005539564 nova_compute[226295]: 2025-11-29 08:56:24.330 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:56:24 np0005539564 nova_compute[226295]: 2025-11-29 08:56:24.350 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:56:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:25.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:25.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:25 np0005539564 nova_compute[226295]: 2025-11-29 08:56:25.842 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:56:26 np0005539564 nova_compute[226295]: 2025-11-29 08:56:26.487 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:56:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:27.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:27.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:27 np0005539564 nova_compute[226295]: 2025-11-29 08:56:27.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:56:27 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #160. Immutable memtables: 0.
Nov 29 03:56:27 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:56:27.924741) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:56:27 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 160
Nov 29 03:56:27 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406587924820, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 2406, "num_deletes": 254, "total_data_size": 5912114, "memory_usage": 6006424, "flush_reason": "Manual Compaction"}
Nov 29 03:56:27 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #161: started
Nov 29 03:56:27 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406587951056, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 161, "file_size": 3857718, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77878, "largest_seqno": 80278, "table_properties": {"data_size": 3847777, "index_size": 6370, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20040, "raw_average_key_size": 20, "raw_value_size": 3828203, "raw_average_value_size": 3930, "num_data_blocks": 277, "num_entries": 974, "num_filter_entries": 974, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406376, "oldest_key_time": 1764406376, "file_creation_time": 1764406587, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:56:27 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 26366 microseconds, and 11618 cpu microseconds.
Nov 29 03:56:27 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:56:27 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:56:27.951112) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #161: 3857718 bytes OK
Nov 29 03:56:27 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:56:27.951136) [db/memtable_list.cc:519] [default] Level-0 commit table #161 started
Nov 29 03:56:27 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:56:27.953038) [db/memtable_list.cc:722] [default] Level-0 commit table #161: memtable #1 done
Nov 29 03:56:27 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:56:27.953055) EVENT_LOG_v1 {"time_micros": 1764406587953050, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:56:27 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:56:27.953075) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:56:27 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 5901654, prev total WAL file size 5901654, number of live WAL files 2.
Nov 29 03:56:27 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000157.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:56:27 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:56:27.954499) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Nov 29 03:56:27 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:56:27 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [161(3767KB)], [159(10MB)]
Nov 29 03:56:27 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406587954535, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [161], "files_L6": [159], "score": -1, "input_data_size": 14645590, "oldest_snapshot_seqno": -1}
Nov 29 03:56:28 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #162: 10587 keys, 12705808 bytes, temperature: kUnknown
Nov 29 03:56:28 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406588059122, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 162, "file_size": 12705808, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12638546, "index_size": 39693, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26501, "raw_key_size": 279843, "raw_average_key_size": 26, "raw_value_size": 12454145, "raw_average_value_size": 1176, "num_data_blocks": 1502, "num_entries": 10587, "num_filter_entries": 10587, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764406587, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 162, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:56:28 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:56:28 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:56:28.059405) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 12705808 bytes
Nov 29 03:56:28 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:56:28.061350) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.9 rd, 121.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 10.3 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(7.1) write-amplify(3.3) OK, records in: 11115, records dropped: 528 output_compression: NoCompression
Nov 29 03:56:28 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:56:28.061370) EVENT_LOG_v1 {"time_micros": 1764406588061360, "job": 102, "event": "compaction_finished", "compaction_time_micros": 104694, "compaction_time_cpu_micros": 30939, "output_level": 6, "num_output_files": 1, "total_output_size": 12705808, "num_input_records": 11115, "num_output_records": 10587, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:56:28 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:56:28 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406588062712, "job": 102, "event": "table_file_deletion", "file_number": 161}
Nov 29 03:56:28 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000159.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:56:28 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406588065549, "job": 102, "event": "table_file_deletion", "file_number": 159}
Nov 29 03:56:28 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:56:27.954385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:28 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:56:28.065844) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:28 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:56:28.065850) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:28 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:56:28.065853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:28 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:56:28.065856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:28 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:56:28.065859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:28 np0005539564 nova_compute[226295]: 2025-11-29 08:56:28.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:56:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:29.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:56:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:29.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:56:29 np0005539564 nova_compute[226295]: 2025-11-29 08:56:29.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:56:29 np0005539564 nova_compute[226295]: 2025-11-29 08:56:29.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:56:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:30 np0005539564 nova_compute[226295]: 2025-11-29 08:56:30.844 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:56:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:31.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:31.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:31 np0005539564 nova_compute[226295]: 2025-11-29 08:56:31.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:56:31 np0005539564 nova_compute[226295]: 2025-11-29 08:56:31.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:56:31 np0005539564 nova_compute[226295]: 2025-11-29 08:56:31.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:56:31 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:56:31 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:56:31 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:56:31 np0005539564 nova_compute[226295]: 2025-11-29 08:56:31.531 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:56:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:33.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:33.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:33 np0005539564 nova_compute[226295]: 2025-11-29 08:56:33.796 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 03:56:33 np0005539564 nova_compute[226295]: 2025-11-29 08:56:33.796 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:56:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:35.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:35.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:35 np0005539564 nova_compute[226295]: 2025-11-29 08:56:35.886 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:36 np0005539564 nova_compute[226295]: 2025-11-29 08:56:36.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:36 np0005539564 nova_compute[226295]: 2025-11-29 08:56:36.533 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:36 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:56:36 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:56:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:56:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:37.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:56:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:37.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:39.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:39.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:40 np0005539564 nova_compute[226295]: 2025-11-29 08:56:40.891 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:41.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:41.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:41 np0005539564 nova_compute[226295]: 2025-11-29 08:56:41.563 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:43.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:43.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:45.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:56:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:45.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:56:45 np0005539564 podman[308266]: 2025-11-29 08:56:45.555389551 +0000 UTC m=+0.091440865 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 03:56:45 np0005539564 podman[308265]: 2025-11-29 08:56:45.556353187 +0000 UTC m=+0.101386064 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:56:45 np0005539564 podman[308264]: 2025-11-29 08:56:45.601427027 +0000 UTC m=+0.152858756 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:56:45 np0005539564 nova_compute[226295]: 2025-11-29 08:56:45.892 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:46 np0005539564 nova_compute[226295]: 2025-11-29 08:56:46.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:46 np0005539564 nova_compute[226295]: 2025-11-29 08:56:46.381 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:56:46 np0005539564 nova_compute[226295]: 2025-11-29 08:56:46.382 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:56:46 np0005539564 nova_compute[226295]: 2025-11-29 08:56:46.382 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:56:46 np0005539564 nova_compute[226295]: 2025-11-29 08:56:46.383 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:56:46 np0005539564 nova_compute[226295]: 2025-11-29 08:56:46.383 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:56:46 np0005539564 nova_compute[226295]: 2025-11-29 08:56:46.598 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:56:46 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/560779530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:56:46 np0005539564 nova_compute[226295]: 2025-11-29 08:56:46.878 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:56:47 np0005539564 nova_compute[226295]: 2025-11-29 08:56:47.112 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:56:47 np0005539564 nova_compute[226295]: 2025-11-29 08:56:47.115 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4188MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:56:47 np0005539564 nova_compute[226295]: 2025-11-29 08:56:47.115 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:56:47 np0005539564 nova_compute[226295]: 2025-11-29 08:56:47.115 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:56:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:47.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:47.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:48 np0005539564 nova_compute[226295]: 2025-11-29 08:56:48.274 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:56:48 np0005539564 nova_compute[226295]: 2025-11-29 08:56:48.275 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:56:48 np0005539564 nova_compute[226295]: 2025-11-29 08:56:48.289 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:56:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:56:48 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/812855005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:56:48 np0005539564 nova_compute[226295]: 2025-11-29 08:56:48.735 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:56:48 np0005539564 nova_compute[226295]: 2025-11-29 08:56:48.742 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:56:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:49.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:49.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:50 np0005539564 nova_compute[226295]: 2025-11-29 08:56:50.764 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:56:50 np0005539564 nova_compute[226295]: 2025-11-29 08:56:50.768 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:56:50 np0005539564 nova_compute[226295]: 2025-11-29 08:56:50.768 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:56:50 np0005539564 nova_compute[226295]: 2025-11-29 08:56:50.770 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:50 np0005539564 nova_compute[226295]: 2025-11-29 08:56:50.770 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:56:50 np0005539564 nova_compute[226295]: 2025-11-29 08:56:50.771 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:56:50 np0005539564 nova_compute[226295]: 2025-11-29 08:56:50.772 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:56:50 np0005539564 nova_compute[226295]: 2025-11-29 08:56:50.773 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:56:50 np0005539564 nova_compute[226295]: 2025-11-29 08:56:50.773 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:56:50 np0005539564 nova_compute[226295]: 2025-11-29 08:56:50.773 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:56:50 np0005539564 nova_compute[226295]: 2025-11-29 08:56:50.895 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:51 np0005539564 nova_compute[226295]: 2025-11-29 08:56:51.119 226310 DEBUG nova.virt.libvirt.imagecache [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Nov 29 03:56:51 np0005539564 nova_compute[226295]: 2025-11-29 08:56:51.119 226310 WARNING nova.virt.libvirt.imagecache [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf#033[00m
Nov 29 03:56:51 np0005539564 nova_compute[226295]: 2025-11-29 08:56:51.119 226310 WARNING nova.virt.libvirt.imagecache [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242#033[00m
Nov 29 03:56:51 np0005539564 nova_compute[226295]: 2025-11-29 08:56:51.120 226310 WARNING nova.virt.libvirt.imagecache [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/fd30d21c6e0d4e83bf9e777a8ff91c9ac77f58c9#033[00m
Nov 29 03:56:51 np0005539564 nova_compute[226295]: 2025-11-29 08:56:51.120 226310 WARNING nova.virt.libvirt.imagecache [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/bf5c4d7c97f9d868dc1070f113a186600eb4ee72#033[00m
Nov 29 03:56:51 np0005539564 nova_compute[226295]: 2025-11-29 08:56:51.120 226310 INFO nova.virt.libvirt.imagecache [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Removable base files: /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 /var/lib/nova/instances/_base/fd30d21c6e0d4e83bf9e777a8ff91c9ac77f58c9 /var/lib/nova/instances/_base/bf5c4d7c97f9d868dc1070f113a186600eb4ee72#033[00m
Nov 29 03:56:51 np0005539564 nova_compute[226295]: 2025-11-29 08:56:51.121 226310 INFO nova.virt.libvirt.imagecache [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf#033[00m
Nov 29 03:56:51 np0005539564 nova_compute[226295]: 2025-11-29 08:56:51.121 226310 INFO nova.virt.libvirt.imagecache [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242#033[00m
Nov 29 03:56:51 np0005539564 nova_compute[226295]: 2025-11-29 08:56:51.121 226310 INFO nova.virt.libvirt.imagecache [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/fd30d21c6e0d4e83bf9e777a8ff91c9ac77f58c9#033[00m
Nov 29 03:56:51 np0005539564 nova_compute[226295]: 2025-11-29 08:56:51.121 226310 INFO nova.virt.libvirt.imagecache [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/bf5c4d7c97f9d868dc1070f113a186600eb4ee72#033[00m
Nov 29 03:56:51 np0005539564 nova_compute[226295]: 2025-11-29 08:56:51.121 226310 DEBUG nova.virt.libvirt.imagecache [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Nov 29 03:56:51 np0005539564 nova_compute[226295]: 2025-11-29 08:56:51.122 226310 DEBUG nova.virt.libvirt.imagecache [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Nov 29 03:56:51 np0005539564 nova_compute[226295]: 2025-11-29 08:56:51.122 226310 DEBUG nova.virt.libvirt.imagecache [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Nov 29 03:56:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:51.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:51.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:51 np0005539564 nova_compute[226295]: 2025-11-29 08:56:51.601 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:53.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:53.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:55.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:55.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:55 np0005539564 nova_compute[226295]: 2025-11-29 08:56:55.897 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:56 np0005539564 nova_compute[226295]: 2025-11-29 08:56:56.641 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:57.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:57.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:59.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:56:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:56:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:59.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:57:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:00 np0005539564 nova_compute[226295]: 2025-11-29 08:57:00.901 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:01.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:01.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:01 np0005539564 nova_compute[226295]: 2025-11-29 08:57:01.644 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:03.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:03.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:03.777 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:03.778 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:03.778 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:05.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:05.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:05 np0005539564 nova_compute[226295]: 2025-11-29 08:57:05.903 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:06 np0005539564 nova_compute[226295]: 2025-11-29 08:57:06.665 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:07.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:07.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:07 np0005539564 ceph-mgr[82125]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2945860420
Nov 29 03:57:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:09.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:09.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:10 np0005539564 nova_compute[226295]: 2025-11-29 08:57:10.906 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:57:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:11.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:57:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:11.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:11 np0005539564 nova_compute[226295]: 2025-11-29 08:57:11.668 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:13.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:13.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:14 np0005539564 nova_compute[226295]: 2025-11-29 08:57:14.771 226310 DEBUG oslo_concurrency.lockutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "95d548c1-cd0d-42c9-b769-4f5c5ae19415" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:14 np0005539564 nova_compute[226295]: 2025-11-29 08:57:14.771 226310 DEBUG oslo_concurrency.lockutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "95d548c1-cd0d-42c9-b769-4f5c5ae19415" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:14 np0005539564 nova_compute[226295]: 2025-11-29 08:57:14.800 226310 DEBUG nova.compute.manager [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:57:14 np0005539564 nova_compute[226295]: 2025-11-29 08:57:14.903 226310 DEBUG oslo_concurrency.lockutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:14 np0005539564 nova_compute[226295]: 2025-11-29 08:57:14.904 226310 DEBUG oslo_concurrency.lockutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:14 np0005539564 nova_compute[226295]: 2025-11-29 08:57:14.922 226310 DEBUG nova.virt.hardware [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:57:14 np0005539564 nova_compute[226295]: 2025-11-29 08:57:14.923 226310 INFO nova.compute.claims [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:57:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:15 np0005539564 nova_compute[226295]: 2025-11-29 08:57:15.166 226310 DEBUG oslo_concurrency.processutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:57:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:57:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:15.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:57:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:57:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:15.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:57:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:57:15 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2027208412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:57:15 np0005539564 nova_compute[226295]: 2025-11-29 08:57:15.732 226310 DEBUG oslo_concurrency.processutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:57:15 np0005539564 nova_compute[226295]: 2025-11-29 08:57:15.742 226310 DEBUG nova.compute.provider_tree [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:57:15 np0005539564 nova_compute[226295]: 2025-11-29 08:57:15.759 226310 DEBUG nova.scheduler.client.report [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:57:15 np0005539564 nova_compute[226295]: 2025-11-29 08:57:15.785 226310 DEBUG oslo_concurrency.lockutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:15 np0005539564 nova_compute[226295]: 2025-11-29 08:57:15.786 226310 DEBUG nova.compute.manager [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:57:16 np0005539564 nova_compute[226295]: 2025-11-29 08:57:15.871 226310 DEBUG nova.compute.manager [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:57:16 np0005539564 nova_compute[226295]: 2025-11-29 08:57:16.009 226310 DEBUG nova.network.neutron [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:57:16 np0005539564 nova_compute[226295]: 2025-11-29 08:57:16.011 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:16 np0005539564 nova_compute[226295]: 2025-11-29 08:57:16.159 226310 INFO nova.virt.libvirt.driver [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:57:16 np0005539564 nova_compute[226295]: 2025-11-29 08:57:16.189 226310 DEBUG nova.compute.manager [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:57:16 np0005539564 nova_compute[226295]: 2025-11-29 08:57:16.476 226310 DEBUG nova.compute.manager [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:57:16 np0005539564 nova_compute[226295]: 2025-11-29 08:57:16.478 226310 DEBUG nova.virt.libvirt.driver [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:57:16 np0005539564 nova_compute[226295]: 2025-11-29 08:57:16.478 226310 INFO nova.virt.libvirt.driver [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Creating image(s)#033[00m
Nov 29 03:57:16 np0005539564 nova_compute[226295]: 2025-11-29 08:57:16.526 226310 DEBUG nova.storage.rbd_utils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 95d548c1-cd0d-42c9-b769-4f5c5ae19415_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:57:16 np0005539564 podman[308395]: 2025-11-29 08:57:16.538416374 +0000 UTC m=+0.082360890 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 29 03:57:16 np0005539564 podman[308394]: 2025-11-29 08:57:16.557086469 +0000 UTC m=+0.098596599 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:57:16 np0005539564 nova_compute[226295]: 2025-11-29 08:57:16.573 226310 DEBUG nova.storage.rbd_utils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 95d548c1-cd0d-42c9-b769-4f5c5ae19415_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:57:16 np0005539564 podman[308393]: 2025-11-29 08:57:16.62179327 +0000 UTC m=+0.175829689 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:57:16 np0005539564 nova_compute[226295]: 2025-11-29 08:57:16.630 226310 DEBUG nova.storage.rbd_utils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 95d548c1-cd0d-42c9-b769-4f5c5ae19415_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:57:16 np0005539564 nova_compute[226295]: 2025-11-29 08:57:16.634 226310 DEBUG oslo_concurrency.processutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:57:16 np0005539564 nova_compute[226295]: 2025-11-29 08:57:16.669 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:16 np0005539564 nova_compute[226295]: 2025-11-29 08:57:16.711 226310 DEBUG oslo_concurrency.processutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:57:16 np0005539564 nova_compute[226295]: 2025-11-29 08:57:16.713 226310 DEBUG oslo_concurrency.lockutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:16 np0005539564 nova_compute[226295]: 2025-11-29 08:57:16.714 226310 DEBUG oslo_concurrency.lockutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:16 np0005539564 nova_compute[226295]: 2025-11-29 08:57:16.714 226310 DEBUG oslo_concurrency.lockutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:16 np0005539564 nova_compute[226295]: 2025-11-29 08:57:16.758 226310 DEBUG nova.storage.rbd_utils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 95d548c1-cd0d-42c9-b769-4f5c5ae19415_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:57:16 np0005539564 nova_compute[226295]: 2025-11-29 08:57:16.764 226310 DEBUG oslo_concurrency.processutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 95d548c1-cd0d-42c9-b769-4f5c5ae19415_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:57:17 np0005539564 nova_compute[226295]: 2025-11-29 08:57:17.127 226310 DEBUG oslo_concurrency.processutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 95d548c1-cd0d-42c9-b769-4f5c5ae19415_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:57:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:17.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:17 np0005539564 nova_compute[226295]: 2025-11-29 08:57:17.241 226310 DEBUG nova.storage.rbd_utils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] resizing rbd image 95d548c1-cd0d-42c9-b769-4f5c5ae19415_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:57:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:17.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:17 np0005539564 nova_compute[226295]: 2025-11-29 08:57:17.383 226310 DEBUG nova.objects.instance [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'migration_context' on Instance uuid 95d548c1-cd0d-42c9-b769-4f5c5ae19415 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:57:17 np0005539564 nova_compute[226295]: 2025-11-29 08:57:17.398 226310 DEBUG nova.virt.libvirt.driver [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:57:17 np0005539564 nova_compute[226295]: 2025-11-29 08:57:17.398 226310 DEBUG nova.virt.libvirt.driver [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Ensure instance console log exists: /var/lib/nova/instances/95d548c1-cd0d-42c9-b769-4f5c5ae19415/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:57:17 np0005539564 nova_compute[226295]: 2025-11-29 08:57:17.399 226310 DEBUG oslo_concurrency.lockutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:17 np0005539564 nova_compute[226295]: 2025-11-29 08:57:17.399 226310 DEBUG oslo_concurrency.lockutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:17 np0005539564 nova_compute[226295]: 2025-11-29 08:57:17.400 226310 DEBUG oslo_concurrency.lockutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:18 np0005539564 nova_compute[226295]: 2025-11-29 08:57:18.447 226310 DEBUG nova.policy [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a9ba73ff05b4529ad104362a5a57cc7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca5878248147453baabf40a90f9feb19', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:57:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:19.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:19.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:21 np0005539564 nova_compute[226295]: 2025-11-29 08:57:21.013 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:57:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:21.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:57:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:21.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:21 np0005539564 nova_compute[226295]: 2025-11-29 08:57:21.464 226310 DEBUG nova.network.neutron [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Successfully updated port: df6fdca5-d93b-4b2f-9836-ec2a95857ae7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:57:21 np0005539564 nova_compute[226295]: 2025-11-29 08:57:21.488 226310 DEBUG oslo_concurrency.lockutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "refresh_cache-95d548c1-cd0d-42c9-b769-4f5c5ae19415" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:57:21 np0005539564 nova_compute[226295]: 2025-11-29 08:57:21.488 226310 DEBUG oslo_concurrency.lockutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquired lock "refresh_cache-95d548c1-cd0d-42c9-b769-4f5c5ae19415" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:57:21 np0005539564 nova_compute[226295]: 2025-11-29 08:57:21.489 226310 DEBUG nova.network.neutron [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:57:21 np0005539564 nova_compute[226295]: 2025-11-29 08:57:21.570 226310 DEBUG nova.compute.manager [req-bcf7391c-ef15-4ec4-9f17-00395fd562a1 req-be037a12-f217-4b35-80cc-d91212974e95 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Received event network-changed-df6fdca5-d93b-4b2f-9836-ec2a95857ae7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:57:21 np0005539564 nova_compute[226295]: 2025-11-29 08:57:21.571 226310 DEBUG nova.compute.manager [req-bcf7391c-ef15-4ec4-9f17-00395fd562a1 req-be037a12-f217-4b35-80cc-d91212974e95 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Refreshing instance network info cache due to event network-changed-df6fdca5-d93b-4b2f-9836-ec2a95857ae7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:57:21 np0005539564 nova_compute[226295]: 2025-11-29 08:57:21.572 226310 DEBUG oslo_concurrency.lockutils [req-bcf7391c-ef15-4ec4-9f17-00395fd562a1 req-be037a12-f217-4b35-80cc-d91212974e95 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-95d548c1-cd0d-42c9-b769-4f5c5ae19415" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:57:21 np0005539564 nova_compute[226295]: 2025-11-29 08:57:21.674 226310 DEBUG nova.network.neutron [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:57:21 np0005539564 nova_compute[226295]: 2025-11-29 08:57:21.678 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:22 np0005539564 nova_compute[226295]: 2025-11-29 08:57:22.990 226310 DEBUG nova.network.neutron [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Updating instance_info_cache with network_info: [{"id": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "address": "fa:16:3e:61:3c:80", "network": {"id": "8601263e-32ba-44f8-aef7-66d3b518c9d4", "bridge": "br-int", "label": "tempest-network-smoke--1596277998", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf6fdca5-d9", "ovs_interfaceid": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:57:23 np0005539564 nova_compute[226295]: 2025-11-29 08:57:23.092 226310 DEBUG oslo_concurrency.lockutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Releasing lock "refresh_cache-95d548c1-cd0d-42c9-b769-4f5c5ae19415" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:57:23 np0005539564 nova_compute[226295]: 2025-11-29 08:57:23.093 226310 DEBUG nova.compute.manager [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Instance network_info: |[{"id": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "address": "fa:16:3e:61:3c:80", "network": {"id": "8601263e-32ba-44f8-aef7-66d3b518c9d4", "bridge": "br-int", "label": "tempest-network-smoke--1596277998", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf6fdca5-d9", "ovs_interfaceid": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:57:23 np0005539564 nova_compute[226295]: 2025-11-29 08:57:23.094 226310 DEBUG oslo_concurrency.lockutils [req-bcf7391c-ef15-4ec4-9f17-00395fd562a1 req-be037a12-f217-4b35-80cc-d91212974e95 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-95d548c1-cd0d-42c9-b769-4f5c5ae19415" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:57:23 np0005539564 nova_compute[226295]: 2025-11-29 08:57:23.094 226310 DEBUG nova.network.neutron [req-bcf7391c-ef15-4ec4-9f17-00395fd562a1 req-be037a12-f217-4b35-80cc-d91212974e95 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Refreshing network info cache for port df6fdca5-d93b-4b2f-9836-ec2a95857ae7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:57:23 np0005539564 nova_compute[226295]: 2025-11-29 08:57:23.098 226310 DEBUG nova.virt.libvirt.driver [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Start _get_guest_xml network_info=[{"id": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "address": "fa:16:3e:61:3c:80", "network": {"id": "8601263e-32ba-44f8-aef7-66d3b518c9d4", "bridge": "br-int", "label": "tempest-network-smoke--1596277998", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf6fdca5-d9", "ovs_interfaceid": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:57:23 np0005539564 nova_compute[226295]: 2025-11-29 08:57:23.106 226310 WARNING nova.virt.libvirt.driver [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:57:23 np0005539564 nova_compute[226295]: 2025-11-29 08:57:23.113 226310 DEBUG nova.virt.libvirt.host [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:57:23 np0005539564 nova_compute[226295]: 2025-11-29 08:57:23.113 226310 DEBUG nova.virt.libvirt.host [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:57:23 np0005539564 nova_compute[226295]: 2025-11-29 08:57:23.121 226310 DEBUG nova.virt.libvirt.host [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:57:23 np0005539564 nova_compute[226295]: 2025-11-29 08:57:23.122 226310 DEBUG nova.virt.libvirt.host [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:57:23 np0005539564 nova_compute[226295]: 2025-11-29 08:57:23.123 226310 DEBUG nova.virt.libvirt.driver [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:57:23 np0005539564 nova_compute[226295]: 2025-11-29 08:57:23.123 226310 DEBUG nova.virt.hardware [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:57:23 np0005539564 nova_compute[226295]: 2025-11-29 08:57:23.124 226310 DEBUG nova.virt.hardware [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:57:23 np0005539564 nova_compute[226295]: 2025-11-29 08:57:23.124 226310 DEBUG nova.virt.hardware [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:57:23 np0005539564 nova_compute[226295]: 2025-11-29 08:57:23.124 226310 DEBUG nova.virt.hardware [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:57:23 np0005539564 nova_compute[226295]: 2025-11-29 08:57:23.124 226310 DEBUG nova.virt.hardware [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:57:23 np0005539564 nova_compute[226295]: 2025-11-29 08:57:23.124 226310 DEBUG nova.virt.hardware [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:57:23 np0005539564 nova_compute[226295]: 2025-11-29 08:57:23.125 226310 DEBUG nova.virt.hardware [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:57:23 np0005539564 nova_compute[226295]: 2025-11-29 08:57:23.125 226310 DEBUG nova.virt.hardware [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:57:23 np0005539564 nova_compute[226295]: 2025-11-29 08:57:23.125 226310 DEBUG nova.virt.hardware [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:57:23 np0005539564 nova_compute[226295]: 2025-11-29 08:57:23.125 226310 DEBUG nova.virt.hardware [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:57:23 np0005539564 nova_compute[226295]: 2025-11-29 08:57:23.125 226310 DEBUG nova.virt.hardware [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:57:23 np0005539564 nova_compute[226295]: 2025-11-29 08:57:23.128 226310 DEBUG oslo_concurrency.processutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:57:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:23.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:23.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:57:23 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3857322970' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:57:23 np0005539564 nova_compute[226295]: 2025-11-29 08:57:23.597 226310 DEBUG oslo_concurrency.processutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:57:23 np0005539564 nova_compute[226295]: 2025-11-29 08:57:23.646 226310 DEBUG nova.storage.rbd_utils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 95d548c1-cd0d-42c9-b769-4f5c5ae19415_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:57:23 np0005539564 nova_compute[226295]: 2025-11-29 08:57:23.652 226310 DEBUG oslo_concurrency.processutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:57:24 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:57:24 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2076290668' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.186 226310 DEBUG oslo_concurrency.processutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.189 226310 DEBUG nova.virt.libvirt.vif [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:57:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-705219530',display_name='tempest-TestNetworkBasicOps-server-705219530',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-705219530',id=207,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD74Aqles6kC0TTgc2WzAea9cqnIZbz3M6pwPVrheMoiSZA+10GtItD+uQODkSUuS1qJnFA5Jk1UKZYA5zxvHCWJax3qiG3YFnBbPdhZ0N2QhIdQ2hOlRyvXmmBXaZ3bLw==',key_name='tempest-TestNetworkBasicOps-1695982850',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-08ch0775',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:57:16Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=95d548c1-cd0d-42c9-b769-4f5c5ae19415,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "address": "fa:16:3e:61:3c:80", "network": {"id": "8601263e-32ba-44f8-aef7-66d3b518c9d4", "bridge": "br-int", "label": "tempest-network-smoke--1596277998", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf6fdca5-d9", "ovs_interfaceid": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.189 226310 DEBUG nova.network.os_vif_util [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "address": "fa:16:3e:61:3c:80", "network": {"id": "8601263e-32ba-44f8-aef7-66d3b518c9d4", "bridge": "br-int", "label": "tempest-network-smoke--1596277998", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf6fdca5-d9", "ovs_interfaceid": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.190 226310 DEBUG nova.network.os_vif_util [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:3c:80,bridge_name='br-int',has_traffic_filtering=True,id=df6fdca5-d93b-4b2f-9836-ec2a95857ae7,network=Network(8601263e-32ba-44f8-aef7-66d3b518c9d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf6fdca5-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.192 226310 DEBUG nova.objects.instance [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'pci_devices' on Instance uuid 95d548c1-cd0d-42c9-b769-4f5c5ae19415 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.212 226310 DEBUG nova.virt.libvirt.driver [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:57:24 np0005539564 nova_compute[226295]:  <uuid>95d548c1-cd0d-42c9-b769-4f5c5ae19415</uuid>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:  <name>instance-000000cf</name>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <nova:name>tempest-TestNetworkBasicOps-server-705219530</nova:name>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:57:23</nova:creationTime>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:57:24 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:        <nova:user uuid="3a9ba73ff05b4529ad104362a5a57cc7">tempest-TestNetworkBasicOps-488786542-project-member</nova:user>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:        <nova:project uuid="ca5878248147453baabf40a90f9feb19">tempest-TestNetworkBasicOps-488786542</nova:project>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:        <nova:port uuid="df6fdca5-d93b-4b2f-9836-ec2a95857ae7">
Nov 29 03:57:24 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <entry name="serial">95d548c1-cd0d-42c9-b769-4f5c5ae19415</entry>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <entry name="uuid">95d548c1-cd0d-42c9-b769-4f5c5ae19415</entry>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/95d548c1-cd0d-42c9-b769-4f5c5ae19415_disk">
Nov 29 03:57:24 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:57:24 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/95d548c1-cd0d-42c9-b769-4f5c5ae19415_disk.config">
Nov 29 03:57:24 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:57:24 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:61:3c:80"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <target dev="tapdf6fdca5-d9"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/95d548c1-cd0d-42c9-b769-4f5c5ae19415/console.log" append="off"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:57:24 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:57:24 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:57:24 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:57:24 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.214 226310 DEBUG nova.compute.manager [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Preparing to wait for external event network-vif-plugged-df6fdca5-d93b-4b2f-9836-ec2a95857ae7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.216 226310 DEBUG oslo_concurrency.lockutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "95d548c1-cd0d-42c9-b769-4f5c5ae19415-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.216 226310 DEBUG oslo_concurrency.lockutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "95d548c1-cd0d-42c9-b769-4f5c5ae19415-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.217 226310 DEBUG oslo_concurrency.lockutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "95d548c1-cd0d-42c9-b769-4f5c5ae19415-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.218 226310 DEBUG nova.virt.libvirt.vif [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:57:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-705219530',display_name='tempest-TestNetworkBasicOps-server-705219530',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-705219530',id=207,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD74Aqles6kC0TTgc2WzAea9cqnIZbz3M6pwPVrheMoiSZA+10GtItD+uQODkSUuS1qJnFA5Jk1UKZYA5zxvHCWJax3qiG3YFnBbPdhZ0N2QhIdQ2hOlRyvXmmBXaZ3bLw==',key_name='tempest-TestNetworkBasicOps-1695982850',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-08ch0775',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:57:16Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=95d548c1-cd0d-42c9-b769-4f5c5ae19415,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "address": "fa:16:3e:61:3c:80", "network": {"id": "8601263e-32ba-44f8-aef7-66d3b518c9d4", "bridge": "br-int", "label": "tempest-network-smoke--1596277998", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf6fdca5-d9", "ovs_interfaceid": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.219 226310 DEBUG nova.network.os_vif_util [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "address": "fa:16:3e:61:3c:80", "network": {"id": "8601263e-32ba-44f8-aef7-66d3b518c9d4", "bridge": "br-int", "label": "tempest-network-smoke--1596277998", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf6fdca5-d9", "ovs_interfaceid": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.220 226310 DEBUG nova.network.os_vif_util [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:3c:80,bridge_name='br-int',has_traffic_filtering=True,id=df6fdca5-d93b-4b2f-9836-ec2a95857ae7,network=Network(8601263e-32ba-44f8-aef7-66d3b518c9d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf6fdca5-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.221 226310 DEBUG os_vif [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:3c:80,bridge_name='br-int',has_traffic_filtering=True,id=df6fdca5-d93b-4b2f-9836-ec2a95857ae7,network=Network(8601263e-32ba-44f8-aef7-66d3b518c9d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf6fdca5-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.222 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.223 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.224 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.229 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.229 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf6fdca5-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.230 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdf6fdca5-d9, col_values=(('external_ids', {'iface-id': 'df6fdca5-d93b-4b2f-9836-ec2a95857ae7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:3c:80', 'vm-uuid': '95d548c1-cd0d-42c9-b769-4f5c5ae19415'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.272 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:24 np0005539564 NetworkManager[48997]: <info>  [1764406644.2743] manager: (tapdf6fdca5-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/386)
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.276 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.284 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.285 226310 INFO os_vif [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:3c:80,bridge_name='br-int',has_traffic_filtering=True,id=df6fdca5-d93b-4b2f-9836-ec2a95857ae7,network=Network(8601263e-32ba-44f8-aef7-66d3b518c9d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf6fdca5-d9')#033[00m
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.370 226310 DEBUG nova.virt.libvirt.driver [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.371 226310 DEBUG nova.virt.libvirt.driver [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.371 226310 DEBUG nova.virt.libvirt.driver [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No VIF found with MAC fa:16:3e:61:3c:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.372 226310 INFO nova.virt.libvirt.driver [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Using config drive#033[00m
Nov 29 03:57:24 np0005539564 nova_compute[226295]: 2025-11-29 08:57:24.415 226310 DEBUG nova.storage.rbd_utils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 95d548c1-cd0d-42c9-b769-4f5c5ae19415_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:57:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:25.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:25.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:26 np0005539564 nova_compute[226295]: 2025-11-29 08:57:26.039 226310 INFO nova.virt.libvirt.driver [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Creating config drive at /var/lib/nova/instances/95d548c1-cd0d-42c9-b769-4f5c5ae19415/disk.config#033[00m
Nov 29 03:57:26 np0005539564 nova_compute[226295]: 2025-11-29 08:57:26.051 226310 DEBUG oslo_concurrency.processutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/95d548c1-cd0d-42c9-b769-4f5c5ae19415/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplmd2mh92 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:57:26 np0005539564 nova_compute[226295]: 2025-11-29 08:57:26.097 226310 DEBUG nova.network.neutron [req-bcf7391c-ef15-4ec4-9f17-00395fd562a1 req-be037a12-f217-4b35-80cc-d91212974e95 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Updated VIF entry in instance network info cache for port df6fdca5-d93b-4b2f-9836-ec2a95857ae7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:57:26 np0005539564 nova_compute[226295]: 2025-11-29 08:57:26.098 226310 DEBUG nova.network.neutron [req-bcf7391c-ef15-4ec4-9f17-00395fd562a1 req-be037a12-f217-4b35-80cc-d91212974e95 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Updating instance_info_cache with network_info: [{"id": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "address": "fa:16:3e:61:3c:80", "network": {"id": "8601263e-32ba-44f8-aef7-66d3b518c9d4", "bridge": "br-int", "label": "tempest-network-smoke--1596277998", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf6fdca5-d9", "ovs_interfaceid": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:57:26 np0005539564 nova_compute[226295]: 2025-11-29 08:57:26.141 226310 DEBUG oslo_concurrency.lockutils [req-bcf7391c-ef15-4ec4-9f17-00395fd562a1 req-be037a12-f217-4b35-80cc-d91212974e95 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-95d548c1-cd0d-42c9-b769-4f5c5ae19415" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:57:26 np0005539564 nova_compute[226295]: 2025-11-29 08:57:26.202 226310 DEBUG oslo_concurrency.processutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/95d548c1-cd0d-42c9-b769-4f5c5ae19415/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplmd2mh92" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:57:26 np0005539564 nova_compute[226295]: 2025-11-29 08:57:26.249 226310 DEBUG nova.storage.rbd_utils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 95d548c1-cd0d-42c9-b769-4f5c5ae19415_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:57:26 np0005539564 nova_compute[226295]: 2025-11-29 08:57:26.255 226310 DEBUG oslo_concurrency.processutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/95d548c1-cd0d-42c9-b769-4f5c5ae19415/disk.config 95d548c1-cd0d-42c9-b769-4f5c5ae19415_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:57:26 np0005539564 nova_compute[226295]: 2025-11-29 08:57:26.509 226310 DEBUG oslo_concurrency.processutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/95d548c1-cd0d-42c9-b769-4f5c5ae19415/disk.config 95d548c1-cd0d-42c9-b769-4f5c5ae19415_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.254s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:57:26 np0005539564 nova_compute[226295]: 2025-11-29 08:57:26.511 226310 INFO nova.virt.libvirt.driver [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Deleting local config drive /var/lib/nova/instances/95d548c1-cd0d-42c9-b769-4f5c5ae19415/disk.config because it was imported into RBD.#033[00m
Nov 29 03:57:26 np0005539564 kernel: tapdf6fdca5-d9: entered promiscuous mode
Nov 29 03:57:26 np0005539564 NetworkManager[48997]: <info>  [1764406646.5988] manager: (tapdf6fdca5-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/387)
Nov 29 03:57:26 np0005539564 nova_compute[226295]: 2025-11-29 08:57:26.640 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:26 np0005539564 ovn_controller[130591]: 2025-11-29T08:57:26Z|00831|binding|INFO|Claiming lport df6fdca5-d93b-4b2f-9836-ec2a95857ae7 for this chassis.
Nov 29 03:57:26 np0005539564 ovn_controller[130591]: 2025-11-29T08:57:26Z|00832|binding|INFO|df6fdca5-d93b-4b2f-9836-ec2a95857ae7: Claiming fa:16:3e:61:3c:80 10.100.0.9
Nov 29 03:57:26 np0005539564 nova_compute[226295]: 2025-11-29 08:57:26.651 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:26 np0005539564 systemd-udevd[308755]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:57:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:26.667 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:3c:80 10.100.0.9'], port_security=['fa:16:3e:61:3c:80 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1382759112', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '95d548c1-cd0d-42c9-b769-4f5c5ae19415', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8601263e-32ba-44f8-aef7-66d3b518c9d4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1382759112', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '2', 'neutron:security_group_ids': '609a93e2-6e8e-4542-856e-8879513dfb81', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17861533-9bce-4d6b-b9db-7033aae334db, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=df6fdca5-d93b-4b2f-9836-ec2a95857ae7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:57:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:26.668 139780 INFO neutron.agent.ovn.metadata.agent [-] Port df6fdca5-d93b-4b2f-9836-ec2a95857ae7 in datapath 8601263e-32ba-44f8-aef7-66d3b518c9d4 bound to our chassis#033[00m
Nov 29 03:57:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:26.670 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8601263e-32ba-44f8-aef7-66d3b518c9d4#033[00m
Nov 29 03:57:26 np0005539564 NetworkManager[48997]: <info>  [1764406646.6788] device (tapdf6fdca5-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:57:26 np0005539564 NetworkManager[48997]: <info>  [1764406646.6796] device (tapdf6fdca5-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:57:26 np0005539564 systemd-machined[190128]: New machine qemu-96-instance-000000cf.
Nov 29 03:57:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:26.689 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ba50109e-7642-4bd4-b7e2-e40610b49ead]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:26.690 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8601263e-31 in ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:57:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:26.692 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8601263e-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:57:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:26.693 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[dd566b35-b0bf-44c5-b217-a9fbc283e026]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:26.694 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4c9569da-ae82-4360-b412-9e950ac9a3cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:26.716 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[d3b11a2a-f693-446c-aa51-656a823846d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:26 np0005539564 nova_compute[226295]: 2025-11-29 08:57:26.721 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:26 np0005539564 ovn_controller[130591]: 2025-11-29T08:57:26Z|00833|binding|INFO|Setting lport df6fdca5-d93b-4b2f-9836-ec2a95857ae7 ovn-installed in OVS
Nov 29 03:57:26 np0005539564 ovn_controller[130591]: 2025-11-29T08:57:26Z|00834|binding|INFO|Setting lport df6fdca5-d93b-4b2f-9836-ec2a95857ae7 up in Southbound
Nov 29 03:57:26 np0005539564 nova_compute[226295]: 2025-11-29 08:57:26.725 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:26 np0005539564 systemd[1]: Started Virtual Machine qemu-96-instance-000000cf.
Nov 29 03:57:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:26.750 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a57a3429-d933-4a31-8f73-e7316884d40f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:26.798 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[bb723fa9-a0cf-4751-ad11-0fb72e6127ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:26.804 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2952e970-cdbf-46e8-a628-a7891f621142]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:26 np0005539564 systemd-udevd[308761]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:57:26 np0005539564 NetworkManager[48997]: <info>  [1764406646.8053] manager: (tap8601263e-30): new Veth device (/org/freedesktop/NetworkManager/Devices/388)
Nov 29 03:57:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:26.846 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[79526788-34e0-43c9-bf38-37dd2179e0c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:26.850 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[8b266735-0c3b-4797-8341-61d298100085]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:26 np0005539564 NetworkManager[48997]: <info>  [1764406646.8810] device (tap8601263e-30): carrier: link connected
Nov 29 03:57:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:26.887 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[5989f48d-0e9c-4c29-ba2c-cce3326143af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:26.908 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0f9c3a90-443a-4707-8786-2278f019a2bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8601263e-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:1e:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 250], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 962156, 'reachable_time': 28671, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308791, 'error': None, 'target': 'ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:26.929 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[21164f55-c0b8-4f29-a2a0-0316ff3042c9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:1e9b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 962156, 'tstamp': 962156}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308807, 'error': None, 'target': 'ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:26.950 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[354bc3da-4681-4fb9-8d27-9256924914b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8601263e-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:1e:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 266, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 266, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 250], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 962156, 'reachable_time': 28671, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308811, 'error': None, 'target': 'ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:26 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:26.993 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6e952913-f288-4bca-846e-9b779b8b2772]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:27.094 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f90e66aa-e83f-45ba-8825-b8eeddd140db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:27.096 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8601263e-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:27.096 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:27.097 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8601263e-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.098 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:27 np0005539564 kernel: tap8601263e-30: entered promiscuous mode
Nov 29 03:57:27 np0005539564 NetworkManager[48997]: <info>  [1764406647.0998] manager: (tap8601263e-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/389)
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.101 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:27.105 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8601263e-30, col_values=(('external_ids', {'iface-id': '7d2fd210-17a1-4ed9-a018-d28f3f167dc8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.107 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:27 np0005539564 ovn_controller[130591]: 2025-11-29T08:57:27Z|00835|binding|INFO|Releasing lport 7d2fd210-17a1-4ed9-a018-d28f3f167dc8 from this chassis (sb_readonly=0)
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.130 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406647.130017, 95d548c1-cd0d-42c9-b769-4f5c5ae19415 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.131 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] VM Started (Lifecycle Event)#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.136 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:27.137 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8601263e-32ba-44f8-aef7-66d3b518c9d4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8601263e-32ba-44f8-aef7-66d3b518c9d4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:27.138 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[361f710a-7e16-4da5-b4e1-e91edc9d02fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:27.140 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-8601263e-32ba-44f8-aef7-66d3b518c9d4
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/8601263e-32ba-44f8-aef7-66d3b518c9d4.pid.haproxy
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 8601263e-32ba-44f8-aef7-66d3b518c9d4
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:27.141 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4', 'env', 'PROCESS_TAG=haproxy-8601263e-32ba-44f8-aef7-66d3b518c9d4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8601263e-32ba-44f8-aef7-66d3b518c9d4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.180 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.187 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406647.1302207, 95d548c1-cd0d-42c9-b769-4f5c5ae19415 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.188 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:57:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:27.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.211 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.217 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.252 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:57:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:27.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:27 np0005539564 podman[308867]: 2025-11-29 08:57:27.593204113 +0000 UTC m=+0.065030300 container create 291b51306876aeede1eabe6c303b5f3ce2ce0268ae76c563b5d427708fd7f3e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:57:27 np0005539564 systemd[1]: Started libpod-conmon-291b51306876aeede1eabe6c303b5f3ce2ce0268ae76c563b5d427708fd7f3e0.scope.
Nov 29 03:57:27 np0005539564 podman[308867]: 2025-11-29 08:57:27.558248148 +0000 UTC m=+0.030074345 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.653 226310 DEBUG nova.compute.manager [req-5071d9cd-e4da-4903-9182-a4f4eb7839ab req-d2815974-7da9-4a3c-bbda-0d0a06120e4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Received event network-vif-plugged-df6fdca5-d93b-4b2f-9836-ec2a95857ae7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.654 226310 DEBUG oslo_concurrency.lockutils [req-5071d9cd-e4da-4903-9182-a4f4eb7839ab req-d2815974-7da9-4a3c-bbda-0d0a06120e4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "95d548c1-cd0d-42c9-b769-4f5c5ae19415-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.655 226310 DEBUG oslo_concurrency.lockutils [req-5071d9cd-e4da-4903-9182-a4f4eb7839ab req-d2815974-7da9-4a3c-bbda-0d0a06120e4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "95d548c1-cd0d-42c9-b769-4f5c5ae19415-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.655 226310 DEBUG oslo_concurrency.lockutils [req-5071d9cd-e4da-4903-9182-a4f4eb7839ab req-d2815974-7da9-4a3c-bbda-0d0a06120e4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "95d548c1-cd0d-42c9-b769-4f5c5ae19415-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.656 226310 DEBUG nova.compute.manager [req-5071d9cd-e4da-4903-9182-a4f4eb7839ab req-d2815974-7da9-4a3c-bbda-0d0a06120e4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Processing event network-vif-plugged-df6fdca5-d93b-4b2f-9836-ec2a95857ae7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.657 226310 DEBUG nova.compute.manager [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.662 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406647.6625748, 95d548c1-cd0d-42c9-b769-4f5c5ae19415 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.663 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.667 226310 DEBUG nova.virt.libvirt.driver [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.672 226310 INFO nova.virt.libvirt.driver [-] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Instance spawned successfully.#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.673 226310 DEBUG nova.virt.libvirt.driver [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:57:27 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:57:27 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8d1d3bd4fcc4451a3342384b38fba56f69948557ce30eea10679704d072cc7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.699 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.707 226310 DEBUG nova.virt.libvirt.driver [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.709 226310 DEBUG nova.virt.libvirt.driver [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.710 226310 DEBUG nova.virt.libvirt.driver [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.711 226310 DEBUG nova.virt.libvirt.driver [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:57:27 np0005539564 podman[308867]: 2025-11-29 08:57:27.711695908 +0000 UTC m=+0.183522125 container init 291b51306876aeede1eabe6c303b5f3ce2ce0268ae76c563b5d427708fd7f3e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.711 226310 DEBUG nova.virt.libvirt.driver [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.712 226310 DEBUG nova.virt.libvirt.driver [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.717 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:57:27 np0005539564 podman[308867]: 2025-11-29 08:57:27.723694824 +0000 UTC m=+0.195521011 container start 291b51306876aeede1eabe6c303b5f3ce2ce0268ae76c563b5d427708fd7f3e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:57:27 np0005539564 neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4[308882]: [NOTICE]   (308886) : New worker (308888) forked
Nov 29 03:57:27 np0005539564 neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4[308882]: [NOTICE]   (308886) : Loading success.
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.785 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.786 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:27.785 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=87, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=86) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.801 226310 INFO nova.compute.manager [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Took 11.32 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.802 226310 DEBUG nova.compute.manager [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:57:27 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:27.818 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.860 226310 INFO nova.compute.manager [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Took 13.00 seconds to build instance.#033[00m
Nov 29 03:57:27 np0005539564 nova_compute[226295]: 2025-11-29 08:57:27.881 226310 DEBUG oslo_concurrency.lockutils [None req-d197808a-23c8-447c-ad38-885a9fb2814d 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "95d548c1-cd0d-42c9-b769-4f5c5ae19415" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.110s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:29 np0005539564 nova_compute[226295]: 2025-11-29 08:57:29.116 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:29 np0005539564 nova_compute[226295]: 2025-11-29 08:57:29.117 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:29 np0005539564 nova_compute[226295]: 2025-11-29 08:57:29.117 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:29.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:29 np0005539564 nova_compute[226295]: 2025-11-29 08:57:29.273 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:29.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:29 np0005539564 nova_compute[226295]: 2025-11-29 08:57:29.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:29 np0005539564 nova_compute[226295]: 2025-11-29 08:57:29.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:57:29 np0005539564 nova_compute[226295]: 2025-11-29 08:57:29.810 226310 DEBUG nova.compute.manager [req-e70b5d29-e7e6-4a18-bd16-fe9626eb311f req-b744c878-e7f5-46d4-9b2d-7089d6d2e566 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Received event network-vif-plugged-df6fdca5-d93b-4b2f-9836-ec2a95857ae7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:57:29 np0005539564 nova_compute[226295]: 2025-11-29 08:57:29.810 226310 DEBUG oslo_concurrency.lockutils [req-e70b5d29-e7e6-4a18-bd16-fe9626eb311f req-b744c878-e7f5-46d4-9b2d-7089d6d2e566 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "95d548c1-cd0d-42c9-b769-4f5c5ae19415-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:29 np0005539564 nova_compute[226295]: 2025-11-29 08:57:29.810 226310 DEBUG oslo_concurrency.lockutils [req-e70b5d29-e7e6-4a18-bd16-fe9626eb311f req-b744c878-e7f5-46d4-9b2d-7089d6d2e566 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "95d548c1-cd0d-42c9-b769-4f5c5ae19415-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:29 np0005539564 nova_compute[226295]: 2025-11-29 08:57:29.811 226310 DEBUG oslo_concurrency.lockutils [req-e70b5d29-e7e6-4a18-bd16-fe9626eb311f req-b744c878-e7f5-46d4-9b2d-7089d6d2e566 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "95d548c1-cd0d-42c9-b769-4f5c5ae19415-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:29 np0005539564 nova_compute[226295]: 2025-11-29 08:57:29.811 226310 DEBUG nova.compute.manager [req-e70b5d29-e7e6-4a18-bd16-fe9626eb311f req-b744c878-e7f5-46d4-9b2d-7089d6d2e566 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] No waiting events found dispatching network-vif-plugged-df6fdca5-d93b-4b2f-9836-ec2a95857ae7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:57:29 np0005539564 nova_compute[226295]: 2025-11-29 08:57:29.811 226310 WARNING nova.compute.manager [req-e70b5d29-e7e6-4a18-bd16-fe9626eb311f req-b744c878-e7f5-46d4-9b2d-7089d6d2e566 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Received unexpected event network-vif-plugged-df6fdca5-d93b-4b2f-9836-ec2a95857ae7 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:57:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:30 np0005539564 nova_compute[226295]: 2025-11-29 08:57:30.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:31.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:57:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:31.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:57:31 np0005539564 nova_compute[226295]: 2025-11-29 08:57:31.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:31 np0005539564 nova_compute[226295]: 2025-11-29 08:57:31.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:57:31 np0005539564 nova_compute[226295]: 2025-11-29 08:57:31.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:57:31 np0005539564 nova_compute[226295]: 2025-11-29 08:57:31.531 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-95d548c1-cd0d-42c9-b769-4f5c5ae19415" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:57:31 np0005539564 nova_compute[226295]: 2025-11-29 08:57:31.531 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-95d548c1-cd0d-42c9-b769-4f5c5ae19415" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:57:31 np0005539564 nova_compute[226295]: 2025-11-29 08:57:31.532 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:57:31 np0005539564 nova_compute[226295]: 2025-11-29 08:57:31.532 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 95d548c1-cd0d-42c9-b769-4f5c5ae19415 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:57:31 np0005539564 nova_compute[226295]: 2025-11-29 08:57:31.722 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:32 np0005539564 NetworkManager[48997]: <info>  [1764406652.0284] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/390)
Nov 29 03:57:32 np0005539564 NetworkManager[48997]: <info>  [1764406652.0300] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/391)
Nov 29 03:57:32 np0005539564 nova_compute[226295]: 2025-11-29 08:57:32.031 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:32 np0005539564 ovn_controller[130591]: 2025-11-29T08:57:32Z|00836|binding|INFO|Releasing lport 7d2fd210-17a1-4ed9-a018-d28f3f167dc8 from this chassis (sb_readonly=0)
Nov 29 03:57:32 np0005539564 nova_compute[226295]: 2025-11-29 08:57:32.054 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:32 np0005539564 ovn_controller[130591]: 2025-11-29T08:57:32Z|00837|binding|INFO|Releasing lport 7d2fd210-17a1-4ed9-a018-d28f3f167dc8 from this chassis (sb_readonly=0)
Nov 29 03:57:32 np0005539564 nova_compute[226295]: 2025-11-29 08:57:32.062 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:32 np0005539564 nova_compute[226295]: 2025-11-29 08:57:32.615 226310 DEBUG nova.compute.manager [req-9df60e7e-eb92-4fcb-a5d7-93f706cf1036 req-dc79a05a-b72e-4ace-9b70-f00f19d430cd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Received event network-changed-df6fdca5-d93b-4b2f-9836-ec2a95857ae7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:57:32 np0005539564 nova_compute[226295]: 2025-11-29 08:57:32.616 226310 DEBUG nova.compute.manager [req-9df60e7e-eb92-4fcb-a5d7-93f706cf1036 req-dc79a05a-b72e-4ace-9b70-f00f19d430cd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Refreshing instance network info cache due to event network-changed-df6fdca5-d93b-4b2f-9836-ec2a95857ae7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:57:32 np0005539564 nova_compute[226295]: 2025-11-29 08:57:32.616 226310 DEBUG oslo_concurrency.lockutils [req-9df60e7e-eb92-4fcb-a5d7-93f706cf1036 req-dc79a05a-b72e-4ace-9b70-f00f19d430cd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-95d548c1-cd0d-42c9-b769-4f5c5ae19415" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:57:32 np0005539564 nova_compute[226295]: 2025-11-29 08:57:32.845 226310 DEBUG oslo_concurrency.lockutils [None req-39c7a2f1-f0c1-43bf-a082-0d3ddc86b479 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "95d548c1-cd0d-42c9-b769-4f5c5ae19415" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:32 np0005539564 nova_compute[226295]: 2025-11-29 08:57:32.848 226310 DEBUG oslo_concurrency.lockutils [None req-39c7a2f1-f0c1-43bf-a082-0d3ddc86b479 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "95d548c1-cd0d-42c9-b769-4f5c5ae19415" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:32 np0005539564 nova_compute[226295]: 2025-11-29 08:57:32.850 226310 DEBUG oslo_concurrency.lockutils [None req-39c7a2f1-f0c1-43bf-a082-0d3ddc86b479 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "95d548c1-cd0d-42c9-b769-4f5c5ae19415-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:32 np0005539564 nova_compute[226295]: 2025-11-29 08:57:32.850 226310 DEBUG oslo_concurrency.lockutils [None req-39c7a2f1-f0c1-43bf-a082-0d3ddc86b479 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "95d548c1-cd0d-42c9-b769-4f5c5ae19415-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:32 np0005539564 nova_compute[226295]: 2025-11-29 08:57:32.851 226310 DEBUG oslo_concurrency.lockutils [None req-39c7a2f1-f0c1-43bf-a082-0d3ddc86b479 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "95d548c1-cd0d-42c9-b769-4f5c5ae19415-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:32 np0005539564 nova_compute[226295]: 2025-11-29 08:57:32.852 226310 INFO nova.compute.manager [None req-39c7a2f1-f0c1-43bf-a082-0d3ddc86b479 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Terminating instance#033[00m
Nov 29 03:57:32 np0005539564 nova_compute[226295]: 2025-11-29 08:57:32.856 226310 DEBUG nova.compute.manager [None req-39c7a2f1-f0c1-43bf-a082-0d3ddc86b479 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:57:32 np0005539564 kernel: tapdf6fdca5-d9 (unregistering): left promiscuous mode
Nov 29 03:57:32 np0005539564 NetworkManager[48997]: <info>  [1764406652.9095] device (tapdf6fdca5-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:57:32 np0005539564 ovn_controller[130591]: 2025-11-29T08:57:32Z|00838|binding|INFO|Releasing lport df6fdca5-d93b-4b2f-9836-ec2a95857ae7 from this chassis (sb_readonly=0)
Nov 29 03:57:32 np0005539564 ovn_controller[130591]: 2025-11-29T08:57:32Z|00839|binding|INFO|Setting lport df6fdca5-d93b-4b2f-9836-ec2a95857ae7 down in Southbound
Nov 29 03:57:32 np0005539564 ovn_controller[130591]: 2025-11-29T08:57:32Z|00840|binding|INFO|Removing iface tapdf6fdca5-d9 ovn-installed in OVS
Nov 29 03:57:32 np0005539564 nova_compute[226295]: 2025-11-29 08:57:32.966 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:32 np0005539564 nova_compute[226295]: 2025-11-29 08:57:32.970 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:32.976 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:3c:80 10.100.0.9'], port_security=['fa:16:3e:61:3c:80 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1382759112', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '95d548c1-cd0d-42c9-b769-4f5c5ae19415', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8601263e-32ba-44f8-aef7-66d3b518c9d4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1382759112', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '4', 'neutron:security_group_ids': '609a93e2-6e8e-4542-856e-8879513dfb81', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17861533-9bce-4d6b-b9db-7033aae334db, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=df6fdca5-d93b-4b2f-9836-ec2a95857ae7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:57:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:32.979 139780 INFO neutron.agent.ovn.metadata.agent [-] Port df6fdca5-d93b-4b2f-9836-ec2a95857ae7 in datapath 8601263e-32ba-44f8-aef7-66d3b518c9d4 unbound from our chassis#033[00m
Nov 29 03:57:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:32.981 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8601263e-32ba-44f8-aef7-66d3b518c9d4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:57:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:32.983 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee0d8d4-0c25-4b46-bec7-466193c4d296]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:32 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:32.985 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4 namespace which is not needed anymore#033[00m
Nov 29 03:57:32 np0005539564 nova_compute[226295]: 2025-11-29 08:57:32.991 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.005 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Updating instance_info_cache with network_info: [{"id": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "address": "fa:16:3e:61:3c:80", "network": {"id": "8601263e-32ba-44f8-aef7-66d3b518c9d4", "bridge": "br-int", "label": "tempest-network-smoke--1596277998", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf6fdca5-d9", "ovs_interfaceid": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.029 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-95d548c1-cd0d-42c9-b769-4f5c5ae19415" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.029 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.030 226310 DEBUG oslo_concurrency.lockutils [req-9df60e7e-eb92-4fcb-a5d7-93f706cf1036 req-dc79a05a-b72e-4ace-9b70-f00f19d430cd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-95d548c1-cd0d-42c9-b769-4f5c5ae19415" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.030 226310 DEBUG nova.network.neutron [req-9df60e7e-eb92-4fcb-a5d7-93f706cf1036 req-dc79a05a-b72e-4ace-9b70-f00f19d430cd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Refreshing network info cache for port df6fdca5-d93b-4b2f-9836-ec2a95857ae7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:57:33 np0005539564 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000cf.scope: Deactivated successfully.
Nov 29 03:57:33 np0005539564 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000cf.scope: Consumed 5.846s CPU time.
Nov 29 03:57:33 np0005539564 systemd-machined[190128]: Machine qemu-96-instance-000000cf terminated.
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.103 226310 INFO nova.virt.libvirt.driver [-] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Instance destroyed successfully.#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.103 226310 DEBUG nova.objects.instance [None req-39c7a2f1-f0c1-43bf-a082-0d3ddc86b479 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'resources' on Instance uuid 95d548c1-cd0d-42c9-b769-4f5c5ae19415 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.124 226310 DEBUG nova.virt.libvirt.vif [None req-39c7a2f1-f0c1-43bf-a082-0d3ddc86b479 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:57:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-705219530',display_name='tempest-TestNetworkBasicOps-server-705219530',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-705219530',id=207,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD74Aqles6kC0TTgc2WzAea9cqnIZbz3M6pwPVrheMoiSZA+10GtItD+uQODkSUuS1qJnFA5Jk1UKZYA5zxvHCWJax3qiG3YFnBbPdhZ0N2QhIdQ2hOlRyvXmmBXaZ3bLw==',key_name='tempest-TestNetworkBasicOps-1695982850',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:57:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-08ch0775',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:57:27Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=95d548c1-cd0d-42c9-b769-4f5c5ae19415,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "address": "fa:16:3e:61:3c:80", "network": {"id": "8601263e-32ba-44f8-aef7-66d3b518c9d4", "bridge": "br-int", "label": "tempest-network-smoke--1596277998", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf6fdca5-d9", "ovs_interfaceid": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.125 226310 DEBUG nova.network.os_vif_util [None req-39c7a2f1-f0c1-43bf-a082-0d3ddc86b479 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "address": "fa:16:3e:61:3c:80", "network": {"id": "8601263e-32ba-44f8-aef7-66d3b518c9d4", "bridge": "br-int", "label": "tempest-network-smoke--1596277998", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf6fdca5-d9", "ovs_interfaceid": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.127 226310 DEBUG nova.network.os_vif_util [None req-39c7a2f1-f0c1-43bf-a082-0d3ddc86b479 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:3c:80,bridge_name='br-int',has_traffic_filtering=True,id=df6fdca5-d93b-4b2f-9836-ec2a95857ae7,network=Network(8601263e-32ba-44f8-aef7-66d3b518c9d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf6fdca5-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.128 226310 DEBUG os_vif [None req-39c7a2f1-f0c1-43bf-a082-0d3ddc86b479 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:3c:80,bridge_name='br-int',has_traffic_filtering=True,id=df6fdca5-d93b-4b2f-9836-ec2a95857ae7,network=Network(8601263e-32ba-44f8-aef7-66d3b518c9d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf6fdca5-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.132 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.133 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf6fdca5-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.137 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.139 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.142 226310 INFO os_vif [None req-39c7a2f1-f0c1-43bf-a082-0d3ddc86b479 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:3c:80,bridge_name='br-int',has_traffic_filtering=True,id=df6fdca5-d93b-4b2f-9836-ec2a95857ae7,network=Network(8601263e-32ba-44f8-aef7-66d3b518c9d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf6fdca5-d9')#033[00m
Nov 29 03:57:33 np0005539564 neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4[308882]: [NOTICE]   (308886) : haproxy version is 2.8.14-c23fe91
Nov 29 03:57:33 np0005539564 neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4[308882]: [NOTICE]   (308886) : path to executable is /usr/sbin/haproxy
Nov 29 03:57:33 np0005539564 neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4[308882]: [WARNING]  (308886) : Exiting Master process...
Nov 29 03:57:33 np0005539564 neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4[308882]: [WARNING]  (308886) : Exiting Master process...
Nov 29 03:57:33 np0005539564 neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4[308882]: [ALERT]    (308886) : Current worker (308888) exited with code 143 (Terminated)
Nov 29 03:57:33 np0005539564 neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4[308882]: [WARNING]  (308886) : All workers exited. Exiting... (0)
Nov 29 03:57:33 np0005539564 systemd[1]: libpod-291b51306876aeede1eabe6c303b5f3ce2ce0268ae76c563b5d427708fd7f3e0.scope: Deactivated successfully.
Nov 29 03:57:33 np0005539564 podman[308930]: 2025-11-29 08:57:33.188329485 +0000 UTC m=+0.064861286 container died 291b51306876aeede1eabe6c303b5f3ce2ce0268ae76c563b5d427708fd7f3e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:57:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:33.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:33 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-291b51306876aeede1eabe6c303b5f3ce2ce0268ae76c563b5d427708fd7f3e0-userdata-shm.mount: Deactivated successfully.
Nov 29 03:57:33 np0005539564 systemd[1]: var-lib-containers-storage-overlay-7c8d1d3bd4fcc4451a3342384b38fba56f69948557ce30eea10679704d072cc7-merged.mount: Deactivated successfully.
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.236 226310 DEBUG nova.compute.manager [req-92d6bb6d-43b8-4425-add1-c0487e31af17 req-ba06bfca-9fb4-4a90-a6cb-88cb8af50828 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Received event network-vif-unplugged-df6fdca5-d93b-4b2f-9836-ec2a95857ae7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.237 226310 DEBUG oslo_concurrency.lockutils [req-92d6bb6d-43b8-4425-add1-c0487e31af17 req-ba06bfca-9fb4-4a90-a6cb-88cb8af50828 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "95d548c1-cd0d-42c9-b769-4f5c5ae19415-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.238 226310 DEBUG oslo_concurrency.lockutils [req-92d6bb6d-43b8-4425-add1-c0487e31af17 req-ba06bfca-9fb4-4a90-a6cb-88cb8af50828 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "95d548c1-cd0d-42c9-b769-4f5c5ae19415-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.239 226310 DEBUG oslo_concurrency.lockutils [req-92d6bb6d-43b8-4425-add1-c0487e31af17 req-ba06bfca-9fb4-4a90-a6cb-88cb8af50828 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "95d548c1-cd0d-42c9-b769-4f5c5ae19415-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.240 226310 DEBUG nova.compute.manager [req-92d6bb6d-43b8-4425-add1-c0487e31af17 req-ba06bfca-9fb4-4a90-a6cb-88cb8af50828 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] No waiting events found dispatching network-vif-unplugged-df6fdca5-d93b-4b2f-9836-ec2a95857ae7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.240 226310 DEBUG nova.compute.manager [req-92d6bb6d-43b8-4425-add1-c0487e31af17 req-ba06bfca-9fb4-4a90-a6cb-88cb8af50828 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Received event network-vif-unplugged-df6fdca5-d93b-4b2f-9836-ec2a95857ae7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:57:33 np0005539564 podman[308930]: 2025-11-29 08:57:33.248287448 +0000 UTC m=+0.124819239 container cleanup 291b51306876aeede1eabe6c303b5f3ce2ce0268ae76c563b5d427708fd7f3e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:57:33 np0005539564 systemd[1]: libpod-conmon-291b51306876aeede1eabe6c303b5f3ce2ce0268ae76c563b5d427708fd7f3e0.scope: Deactivated successfully.
Nov 29 03:57:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:33.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:33 np0005539564 podman[308981]: 2025-11-29 08:57:33.335117497 +0000 UTC m=+0.047518267 container remove 291b51306876aeede1eabe6c303b5f3ce2ce0268ae76c563b5d427708fd7f3e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:57:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:33.345 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b673f187-62c1-4a71-9b7e-f0a95897f68b]: (4, ('Sat Nov 29 08:57:33 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4 (291b51306876aeede1eabe6c303b5f3ce2ce0268ae76c563b5d427708fd7f3e0)\n291b51306876aeede1eabe6c303b5f3ce2ce0268ae76c563b5d427708fd7f3e0\nSat Nov 29 08:57:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4 (291b51306876aeede1eabe6c303b5f3ce2ce0268ae76c563b5d427708fd7f3e0)\n291b51306876aeede1eabe6c303b5f3ce2ce0268ae76c563b5d427708fd7f3e0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:33.347 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[919def16-fc6a-4a4f-951a-bab67f11a0a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:33.348 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8601263e-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.349 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:33 np0005539564 kernel: tap8601263e-30: left promiscuous mode
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.365 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.367 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:33.373 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a4c59323-4299-492a-bd4c-d90112d10278]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:33.391 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a85d5ede-6c78-41de-927b-943d20166037]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:33.393 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ed2782e3-4ff0-4918-9fc2-e97ba81cae96]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:33.416 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[18f86dfe-a03b-4445-8c03-8117c56d16d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 962146, 'reachable_time': 42812, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308996, 'error': None, 'target': 'ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:33 np0005539564 systemd[1]: run-netns-ovnmeta\x2d8601263e\x2d32ba\x2d44f8\x2daef7\x2d66d3b518c9d4.mount: Deactivated successfully.
Nov 29 03:57:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:33.420 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:57:33 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:33.421 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[46dd8746-567d-454b-9748-90bff98d0cd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.668 226310 INFO nova.virt.libvirt.driver [None req-39c7a2f1-f0c1-43bf-a082-0d3ddc86b479 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Deleting instance files /var/lib/nova/instances/95d548c1-cd0d-42c9-b769-4f5c5ae19415_del#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.670 226310 INFO nova.virt.libvirt.driver [None req-39c7a2f1-f0c1-43bf-a082-0d3ddc86b479 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Deletion of /var/lib/nova/instances/95d548c1-cd0d-42c9-b769-4f5c5ae19415_del complete#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.753 226310 INFO nova.compute.manager [None req-39c7a2f1-f0c1-43bf-a082-0d3ddc86b479 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Took 0.90 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.754 226310 DEBUG oslo.service.loopingcall [None req-39c7a2f1-f0c1-43bf-a082-0d3ddc86b479 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.754 226310 DEBUG nova.compute.manager [-] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:57:33 np0005539564 nova_compute[226295]: 2025-11-29 08:57:33.755 226310 DEBUG nova.network.neutron [-] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:57:34 np0005539564 nova_compute[226295]: 2025-11-29 08:57:34.841 226310 DEBUG nova.network.neutron [req-9df60e7e-eb92-4fcb-a5d7-93f706cf1036 req-dc79a05a-b72e-4ace-9b70-f00f19d430cd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Updated VIF entry in instance network info cache for port df6fdca5-d93b-4b2f-9836-ec2a95857ae7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:57:34 np0005539564 nova_compute[226295]: 2025-11-29 08:57:34.842 226310 DEBUG nova.network.neutron [req-9df60e7e-eb92-4fcb-a5d7-93f706cf1036 req-dc79a05a-b72e-4ace-9b70-f00f19d430cd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Updating instance_info_cache with network_info: [{"id": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "address": "fa:16:3e:61:3c:80", "network": {"id": "8601263e-32ba-44f8-aef7-66d3b518c9d4", "bridge": "br-int", "label": "tempest-network-smoke--1596277998", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf6fdca5-d9", "ovs_interfaceid": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:57:34 np0005539564 nova_compute[226295]: 2025-11-29 08:57:34.875 226310 DEBUG oslo_concurrency.lockutils [req-9df60e7e-eb92-4fcb-a5d7-93f706cf1036 req-dc79a05a-b72e-4ace-9b70-f00f19d430cd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-95d548c1-cd0d-42c9-b769-4f5c5ae19415" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:57:34 np0005539564 nova_compute[226295]: 2025-11-29 08:57:34.978 226310 DEBUG nova.network.neutron [-] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:57:34 np0005539564 nova_compute[226295]: 2025-11-29 08:57:34.998 226310 INFO nova.compute.manager [-] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Took 1.24 seconds to deallocate network for instance.#033[00m
Nov 29 03:57:35 np0005539564 nova_compute[226295]: 2025-11-29 08:57:35.049 226310 DEBUG oslo_concurrency.lockutils [None req-39c7a2f1-f0c1-43bf-a082-0d3ddc86b479 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:35 np0005539564 nova_compute[226295]: 2025-11-29 08:57:35.049 226310 DEBUG oslo_concurrency.lockutils [None req-39c7a2f1-f0c1-43bf-a082-0d3ddc86b479 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:35 np0005539564 nova_compute[226295]: 2025-11-29 08:57:35.114 226310 DEBUG oslo_concurrency.processutils [None req-39c7a2f1-f0c1-43bf-a082-0d3ddc86b479 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:57:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:35.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:35.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:35 np0005539564 nova_compute[226295]: 2025-11-29 08:57:35.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:35 np0005539564 nova_compute[226295]: 2025-11-29 08:57:35.347 226310 DEBUG nova.compute.manager [req-c894ab48-5f36-44f0-9301-0cf55dd43adc req-14e52b85-6e6c-411f-9462-7799aba06533 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Received event network-vif-plugged-df6fdca5-d93b-4b2f-9836-ec2a95857ae7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:57:35 np0005539564 nova_compute[226295]: 2025-11-29 08:57:35.348 226310 DEBUG oslo_concurrency.lockutils [req-c894ab48-5f36-44f0-9301-0cf55dd43adc req-14e52b85-6e6c-411f-9462-7799aba06533 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "95d548c1-cd0d-42c9-b769-4f5c5ae19415-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:35 np0005539564 nova_compute[226295]: 2025-11-29 08:57:35.348 226310 DEBUG oslo_concurrency.lockutils [req-c894ab48-5f36-44f0-9301-0cf55dd43adc req-14e52b85-6e6c-411f-9462-7799aba06533 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "95d548c1-cd0d-42c9-b769-4f5c5ae19415-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:35 np0005539564 nova_compute[226295]: 2025-11-29 08:57:35.349 226310 DEBUG oslo_concurrency.lockutils [req-c894ab48-5f36-44f0-9301-0cf55dd43adc req-14e52b85-6e6c-411f-9462-7799aba06533 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "95d548c1-cd0d-42c9-b769-4f5c5ae19415-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:35 np0005539564 nova_compute[226295]: 2025-11-29 08:57:35.350 226310 DEBUG nova.compute.manager [req-c894ab48-5f36-44f0-9301-0cf55dd43adc req-14e52b85-6e6c-411f-9462-7799aba06533 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] No waiting events found dispatching network-vif-plugged-df6fdca5-d93b-4b2f-9836-ec2a95857ae7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:57:35 np0005539564 nova_compute[226295]: 2025-11-29 08:57:35.350 226310 WARNING nova.compute.manager [req-c894ab48-5f36-44f0-9301-0cf55dd43adc req-14e52b85-6e6c-411f-9462-7799aba06533 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Received unexpected event network-vif-plugged-df6fdca5-d93b-4b2f-9836-ec2a95857ae7 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:57:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:57:35 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/562004726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:57:35 np0005539564 nova_compute[226295]: 2025-11-29 08:57:35.600 226310 DEBUG oslo_concurrency.processutils [None req-39c7a2f1-f0c1-43bf-a082-0d3ddc86b479 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:57:35 np0005539564 nova_compute[226295]: 2025-11-29 08:57:35.612 226310 DEBUG nova.compute.provider_tree [None req-39c7a2f1-f0c1-43bf-a082-0d3ddc86b479 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:57:35 np0005539564 nova_compute[226295]: 2025-11-29 08:57:35.631 226310 DEBUG nova.scheduler.client.report [None req-39c7a2f1-f0c1-43bf-a082-0d3ddc86b479 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:57:35 np0005539564 nova_compute[226295]: 2025-11-29 08:57:35.656 226310 DEBUG oslo_concurrency.lockutils [None req-39c7a2f1-f0c1-43bf-a082-0d3ddc86b479 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:35 np0005539564 nova_compute[226295]: 2025-11-29 08:57:35.693 226310 INFO nova.scheduler.client.report [None req-39c7a2f1-f0c1-43bf-a082-0d3ddc86b479 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Deleted allocations for instance 95d548c1-cd0d-42c9-b769-4f5c5ae19415#033[00m
Nov 29 03:57:35 np0005539564 nova_compute[226295]: 2025-11-29 08:57:35.786 226310 DEBUG oslo_concurrency.lockutils [None req-39c7a2f1-f0c1-43bf-a082-0d3ddc86b479 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "95d548c1-cd0d-42c9-b769-4f5c5ae19415" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:36 np0005539564 nova_compute[226295]: 2025-11-29 08:57:36.726 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:57:36.821 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '87'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:57:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:37.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:57:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:37.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:57:38 np0005539564 nova_compute[226295]: 2025-11-29 08:57:38.138 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:38 np0005539564 nova_compute[226295]: 2025-11-29 08:57:38.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:38 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:57:38 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:57:38 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:57:38 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:57:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:57:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:39.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:57:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:39.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:39 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:57:39 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:57:39 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:57:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:41.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:41.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:41 np0005539564 nova_compute[226295]: 2025-11-29 08:57:41.729 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:43 np0005539564 nova_compute[226295]: 2025-11-29 08:57:43.141 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:57:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:43.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:57:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:43.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:45.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:45.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:45 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:57:45 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:57:46 np0005539564 nova_compute[226295]: 2025-11-29 08:57:46.741 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:47.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:47.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:47 np0005539564 nova_compute[226295]: 2025-11-29 08:57:47.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:47 np0005539564 nova_compute[226295]: 2025-11-29 08:57:47.376 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:47 np0005539564 nova_compute[226295]: 2025-11-29 08:57:47.377 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:47 np0005539564 nova_compute[226295]: 2025-11-29 08:57:47.377 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:47 np0005539564 nova_compute[226295]: 2025-11-29 08:57:47.377 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:57:47 np0005539564 nova_compute[226295]: 2025-11-29 08:57:47.378 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:57:47 np0005539564 podman[309203]: 2025-11-29 08:57:47.53299029 +0000 UTC m=+0.081263700 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 29 03:57:47 np0005539564 podman[309206]: 2025-11-29 08:57:47.551790699 +0000 UTC m=+0.086026419 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 03:57:47 np0005539564 podman[309202]: 2025-11-29 08:57:47.563122435 +0000 UTC m=+0.117321195 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Nov 29 03:57:47 np0005539564 nova_compute[226295]: 2025-11-29 08:57:47.857 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:57:48 np0005539564 nova_compute[226295]: 2025-11-29 08:57:48.101 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406653.1009796, 95d548c1-cd0d-42c9-b769-4f5c5ae19415 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:57:48 np0005539564 nova_compute[226295]: 2025-11-29 08:57:48.102 226310 INFO nova.compute.manager [-] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:57:48 np0005539564 nova_compute[226295]: 2025-11-29 08:57:48.127 226310 DEBUG nova.compute.manager [None req-7a751c0e-1b75-4f68-b58c-a49bcaff4011 - - - - - -] [instance: 95d548c1-cd0d-42c9-b769-4f5c5ae19415] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:57:48 np0005539564 nova_compute[226295]: 2025-11-29 08:57:48.144 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:48 np0005539564 nova_compute[226295]: 2025-11-29 08:57:48.147 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:57:48 np0005539564 nova_compute[226295]: 2025-11-29 08:57:48.148 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4168MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:57:48 np0005539564 nova_compute[226295]: 2025-11-29 08:57:48.148 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:48 np0005539564 nova_compute[226295]: 2025-11-29 08:57:48.148 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:48 np0005539564 nova_compute[226295]: 2025-11-29 08:57:48.233 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:57:48 np0005539564 nova_compute[226295]: 2025-11-29 08:57:48.233 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:57:48 np0005539564 nova_compute[226295]: 2025-11-29 08:57:48.253 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:57:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:57:48 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1213015958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:57:48 np0005539564 nova_compute[226295]: 2025-11-29 08:57:48.747 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:57:48 np0005539564 nova_compute[226295]: 2025-11-29 08:57:48.756 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:57:48 np0005539564 nova_compute[226295]: 2025-11-29 08:57:48.777 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:57:48 np0005539564 nova_compute[226295]: 2025-11-29 08:57:48.805 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:57:48 np0005539564 nova_compute[226295]: 2025-11-29 08:57:48.805 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:49.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:49.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:50 np0005539564 nova_compute[226295]: 2025-11-29 08:57:50.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:50 np0005539564 nova_compute[226295]: 2025-11-29 08:57:50.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:57:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:51.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:57:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:51.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:57:51 np0005539564 nova_compute[226295]: 2025-11-29 08:57:51.746 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:53 np0005539564 nova_compute[226295]: 2025-11-29 08:57:53.148 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:53.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:57:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:53.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:57:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:57:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:55.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:57:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:55.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:56 np0005539564 nova_compute[226295]: 2025-11-29 08:57:56.748 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:57.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:57.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:58 np0005539564 nova_compute[226295]: 2025-11-29 08:57:58.151 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:59.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:57:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:59.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:59 np0005539564 nova_compute[226295]: 2025-11-29 08:57:59.359 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:59 np0005539564 nova_compute[226295]: 2025-11-29 08:57:59.360 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:57:59 np0005539564 nova_compute[226295]: 2025-11-29 08:57:59.396 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:58:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:01.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:01.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:01 np0005539564 nova_compute[226295]: 2025-11-29 08:58:01.750 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:03 np0005539564 nova_compute[226295]: 2025-11-29 08:58:03.155 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.003000080s ======
Nov 29 03:58:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:03.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Nov 29 03:58:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:03.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:03.778 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:58:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:03.779 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:58:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:03.779 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:58:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:58:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:05.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:58:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:05.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:06.245 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=88, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=87) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:58:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:06.246 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:58:06 np0005539564 nova_compute[226295]: 2025-11-29 08:58:06.246 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:06 np0005539564 nova_compute[226295]: 2025-11-29 08:58:06.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:06 np0005539564 nova_compute[226295]: 2025-11-29 08:58:06.752 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:07 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:07.248 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '88'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:58:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:07.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:07.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:08 np0005539564 nova_compute[226295]: 2025-11-29 08:58:08.158 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:09.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:58:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:09.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:58:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:11.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:11.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:11 np0005539564 nova_compute[226295]: 2025-11-29 08:58:11.755 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:13 np0005539564 nova_compute[226295]: 2025-11-29 08:58:13.162 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:13.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:13.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:13 np0005539564 nova_compute[226295]: 2025-11-29 08:58:13.927 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:14 np0005539564 nova_compute[226295]: 2025-11-29 08:58:14.002 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:15.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:15.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:16 np0005539564 nova_compute[226295]: 2025-11-29 08:58:16.757 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:17.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:17.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:18 np0005539564 nova_compute[226295]: 2025-11-29 08:58:18.166 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:18 np0005539564 podman[309310]: 2025-11-29 08:58:18.546328954 +0000 UTC m=+0.086405299 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:58:18 np0005539564 podman[309316]: 2025-11-29 08:58:18.553377974 +0000 UTC m=+0.092606916 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:58:18 np0005539564 podman[309309]: 2025-11-29 08:58:18.576873561 +0000 UTC m=+0.131573291 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:58:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:19.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:19.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:21.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:21.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:21 np0005539564 nova_compute[226295]: 2025-11-29 08:58:21.760 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:23 np0005539564 nova_compute[226295]: 2025-11-29 08:58:23.171 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:23.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:23.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:24 np0005539564 nova_compute[226295]: 2025-11-29 08:58:24.349 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:25.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:25.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:26 np0005539564 nova_compute[226295]: 2025-11-29 08:58:26.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:26 np0005539564 nova_compute[226295]: 2025-11-29 08:58:26.762 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:27.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:27 np0005539564 nova_compute[226295]: 2025-11-29 08:58:27.336 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:27 np0005539564 nova_compute[226295]: 2025-11-29 08:58:27.359 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:58:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:27.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:58:28 np0005539564 nova_compute[226295]: 2025-11-29 08:58:28.174 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:29.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:29.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:31.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:31 np0005539564 nova_compute[226295]: 2025-11-29 08:58:31.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:31 np0005539564 nova_compute[226295]: 2025-11-29 08:58:31.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:58:31 np0005539564 nova_compute[226295]: 2025-11-29 08:58:31.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:58:31 np0005539564 nova_compute[226295]: 2025-11-29 08:58:31.363 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:58:31 np0005539564 nova_compute[226295]: 2025-11-29 08:58:31.364 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:31 np0005539564 nova_compute[226295]: 2025-11-29 08:58:31.365 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:31 np0005539564 nova_compute[226295]: 2025-11-29 08:58:31.366 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:58:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:31.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:31 np0005539564 nova_compute[226295]: 2025-11-29 08:58:31.765 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:33 np0005539564 nova_compute[226295]: 2025-11-29 08:58:33.177 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:33.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:33.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:33 np0005539564 nova_compute[226295]: 2025-11-29 08:58:33.504 226310 DEBUG oslo_concurrency.lockutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "248e436d-b395-42ee-b487-5d8336f62ebc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:58:33 np0005539564 nova_compute[226295]: 2025-11-29 08:58:33.504 226310 DEBUG oslo_concurrency.lockutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "248e436d-b395-42ee-b487-5d8336f62ebc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:58:33 np0005539564 nova_compute[226295]: 2025-11-29 08:58:33.525 226310 DEBUG nova.compute.manager [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:58:33 np0005539564 nova_compute[226295]: 2025-11-29 08:58:33.622 226310 DEBUG oslo_concurrency.lockutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:58:33 np0005539564 nova_compute[226295]: 2025-11-29 08:58:33.622 226310 DEBUG oslo_concurrency.lockutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:58:33 np0005539564 nova_compute[226295]: 2025-11-29 08:58:33.634 226310 DEBUG nova.virt.hardware [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:58:33 np0005539564 nova_compute[226295]: 2025-11-29 08:58:33.635 226310 INFO nova.compute.claims [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:58:33 np0005539564 nova_compute[226295]: 2025-11-29 08:58:33.798 226310 DEBUG oslo_concurrency.processutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:58:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:58:34 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/851681182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:58:34 np0005539564 nova_compute[226295]: 2025-11-29 08:58:34.259 226310 DEBUG oslo_concurrency.processutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:58:34 np0005539564 nova_compute[226295]: 2025-11-29 08:58:34.266 226310 DEBUG nova.compute.provider_tree [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:58:34 np0005539564 nova_compute[226295]: 2025-11-29 08:58:34.296 226310 DEBUG nova.scheduler.client.report [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:58:34 np0005539564 nova_compute[226295]: 2025-11-29 08:58:34.335 226310 DEBUG oslo_concurrency.lockutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:58:34 np0005539564 nova_compute[226295]: 2025-11-29 08:58:34.336 226310 DEBUG nova.compute.manager [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:58:34 np0005539564 nova_compute[226295]: 2025-11-29 08:58:34.397 226310 DEBUG nova.compute.manager [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:58:34 np0005539564 nova_compute[226295]: 2025-11-29 08:58:34.398 226310 DEBUG nova.network.neutron [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:58:34 np0005539564 nova_compute[226295]: 2025-11-29 08:58:34.429 226310 INFO nova.virt.libvirt.driver [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:58:34 np0005539564 nova_compute[226295]: 2025-11-29 08:58:34.449 226310 DEBUG nova.compute.manager [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:58:34 np0005539564 nova_compute[226295]: 2025-11-29 08:58:34.535 226310 DEBUG nova.compute.manager [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:58:34 np0005539564 nova_compute[226295]: 2025-11-29 08:58:34.537 226310 DEBUG nova.virt.libvirt.driver [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:58:34 np0005539564 nova_compute[226295]: 2025-11-29 08:58:34.537 226310 INFO nova.virt.libvirt.driver [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Creating image(s)#033[00m
Nov 29 03:58:34 np0005539564 nova_compute[226295]: 2025-11-29 08:58:34.573 226310 DEBUG nova.storage.rbd_utils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 248e436d-b395-42ee-b487-5d8336f62ebc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:58:34 np0005539564 nova_compute[226295]: 2025-11-29 08:58:34.611 226310 DEBUG nova.storage.rbd_utils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 248e436d-b395-42ee-b487-5d8336f62ebc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:58:34 np0005539564 nova_compute[226295]: 2025-11-29 08:58:34.649 226310 DEBUG nova.storage.rbd_utils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 248e436d-b395-42ee-b487-5d8336f62ebc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:58:34 np0005539564 nova_compute[226295]: 2025-11-29 08:58:34.655 226310 DEBUG oslo_concurrency.processutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:58:34 np0005539564 nova_compute[226295]: 2025-11-29 08:58:34.734 226310 DEBUG nova.policy [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a9ba73ff05b4529ad104362a5a57cc7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca5878248147453baabf40a90f9feb19', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:58:34 np0005539564 nova_compute[226295]: 2025-11-29 08:58:34.762 226310 DEBUG oslo_concurrency.processutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:58:34 np0005539564 nova_compute[226295]: 2025-11-29 08:58:34.764 226310 DEBUG oslo_concurrency.lockutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:58:34 np0005539564 nova_compute[226295]: 2025-11-29 08:58:34.764 226310 DEBUG oslo_concurrency.lockutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:58:34 np0005539564 nova_compute[226295]: 2025-11-29 08:58:34.765 226310 DEBUG oslo_concurrency.lockutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:58:34 np0005539564 nova_compute[226295]: 2025-11-29 08:58:34.804 226310 DEBUG nova.storage.rbd_utils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 248e436d-b395-42ee-b487-5d8336f62ebc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:58:34 np0005539564 nova_compute[226295]: 2025-11-29 08:58:34.811 226310 DEBUG oslo_concurrency.processutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 248e436d-b395-42ee-b487-5d8336f62ebc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:58:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:35 np0005539564 nova_compute[226295]: 2025-11-29 08:58:35.180 226310 DEBUG oslo_concurrency.processutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 248e436d-b395-42ee-b487-5d8336f62ebc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.369s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:58:35 np0005539564 nova_compute[226295]: 2025-11-29 08:58:35.284 226310 DEBUG nova.storage.rbd_utils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] resizing rbd image 248e436d-b395-42ee-b487-5d8336f62ebc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:58:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:35.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:35.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:35 np0005539564 nova_compute[226295]: 2025-11-29 08:58:35.422 226310 DEBUG nova.objects.instance [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'migration_context' on Instance uuid 248e436d-b395-42ee-b487-5d8336f62ebc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:58:35 np0005539564 nova_compute[226295]: 2025-11-29 08:58:35.438 226310 DEBUG nova.virt.libvirt.driver [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:58:35 np0005539564 nova_compute[226295]: 2025-11-29 08:58:35.438 226310 DEBUG nova.virt.libvirt.driver [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Ensure instance console log exists: /var/lib/nova/instances/248e436d-b395-42ee-b487-5d8336f62ebc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:58:35 np0005539564 nova_compute[226295]: 2025-11-29 08:58:35.439 226310 DEBUG oslo_concurrency.lockutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:58:35 np0005539564 nova_compute[226295]: 2025-11-29 08:58:35.439 226310 DEBUG oslo_concurrency.lockutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:58:35 np0005539564 nova_compute[226295]: 2025-11-29 08:58:35.440 226310 DEBUG oslo_concurrency.lockutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:58:36 np0005539564 nova_compute[226295]: 2025-11-29 08:58:36.170 226310 DEBUG nova.network.neutron [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Successfully created port: a22ea9cb-3479-4928-a0b9-996401fa64ca _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:58:36 np0005539564 nova_compute[226295]: 2025-11-29 08:58:36.767 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:36 np0005539564 nova_compute[226295]: 2025-11-29 08:58:36.985 226310 DEBUG nova.network.neutron [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Successfully updated port: a22ea9cb-3479-4928-a0b9-996401fa64ca _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:58:37 np0005539564 nova_compute[226295]: 2025-11-29 08:58:37.010 226310 DEBUG oslo_concurrency.lockutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "refresh_cache-248e436d-b395-42ee-b487-5d8336f62ebc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:58:37 np0005539564 nova_compute[226295]: 2025-11-29 08:58:37.011 226310 DEBUG oslo_concurrency.lockutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquired lock "refresh_cache-248e436d-b395-42ee-b487-5d8336f62ebc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:58:37 np0005539564 nova_compute[226295]: 2025-11-29 08:58:37.011 226310 DEBUG nova.network.neutron [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:58:37 np0005539564 nova_compute[226295]: 2025-11-29 08:58:37.160 226310 DEBUG nova.compute.manager [req-9188f231-a750-4815-8323-d45ee4201b9b req-59f2f9b9-4f99-4b3f-ac2e-33c58429bd06 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Received event network-changed-a22ea9cb-3479-4928-a0b9-996401fa64ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:58:37 np0005539564 nova_compute[226295]: 2025-11-29 08:58:37.160 226310 DEBUG nova.compute.manager [req-9188f231-a750-4815-8323-d45ee4201b9b req-59f2f9b9-4f99-4b3f-ac2e-33c58429bd06 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Refreshing instance network info cache due to event network-changed-a22ea9cb-3479-4928-a0b9-996401fa64ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:58:37 np0005539564 nova_compute[226295]: 2025-11-29 08:58:37.161 226310 DEBUG oslo_concurrency.lockutils [req-9188f231-a750-4815-8323-d45ee4201b9b req-59f2f9b9-4f99-4b3f-ac2e-33c58429bd06 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-248e436d-b395-42ee-b487-5d8336f62ebc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:58:37 np0005539564 nova_compute[226295]: 2025-11-29 08:58:37.234 226310 DEBUG nova.network.neutron [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:58:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:37.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:37 np0005539564 nova_compute[226295]: 2025-11-29 08:58:37.345 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:37.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.065 226310 DEBUG nova.network.neutron [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Updating instance_info_cache with network_info: [{"id": "a22ea9cb-3479-4928-a0b9-996401fa64ca", "address": "fa:16:3e:77:6b:0e", "network": {"id": "5495d9f7-028e-40a5-be0d-02dd42ede587", "bridge": "br-int", "label": "tempest-network-smoke--581542197", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa22ea9cb-34", "ovs_interfaceid": "a22ea9cb-3479-4928-a0b9-996401fa64ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.085 226310 DEBUG oslo_concurrency.lockutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Releasing lock "refresh_cache-248e436d-b395-42ee-b487-5d8336f62ebc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.085 226310 DEBUG nova.compute.manager [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Instance network_info: |[{"id": "a22ea9cb-3479-4928-a0b9-996401fa64ca", "address": "fa:16:3e:77:6b:0e", "network": {"id": "5495d9f7-028e-40a5-be0d-02dd42ede587", "bridge": "br-int", "label": "tempest-network-smoke--581542197", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa22ea9cb-34", "ovs_interfaceid": "a22ea9cb-3479-4928-a0b9-996401fa64ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.086 226310 DEBUG oslo_concurrency.lockutils [req-9188f231-a750-4815-8323-d45ee4201b9b req-59f2f9b9-4f99-4b3f-ac2e-33c58429bd06 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-248e436d-b395-42ee-b487-5d8336f62ebc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.087 226310 DEBUG nova.network.neutron [req-9188f231-a750-4815-8323-d45ee4201b9b req-59f2f9b9-4f99-4b3f-ac2e-33c58429bd06 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Refreshing network info cache for port a22ea9cb-3479-4928-a0b9-996401fa64ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.093 226310 DEBUG nova.virt.libvirt.driver [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Start _get_guest_xml network_info=[{"id": "a22ea9cb-3479-4928-a0b9-996401fa64ca", "address": "fa:16:3e:77:6b:0e", "network": {"id": "5495d9f7-028e-40a5-be0d-02dd42ede587", "bridge": "br-int", "label": "tempest-network-smoke--581542197", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa22ea9cb-34", "ovs_interfaceid": "a22ea9cb-3479-4928-a0b9-996401fa64ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.101 226310 WARNING nova.virt.libvirt.driver [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.110 226310 DEBUG nova.virt.libvirt.host [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.111 226310 DEBUG nova.virt.libvirt.host [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.117 226310 DEBUG nova.virt.libvirt.host [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.118 226310 DEBUG nova.virt.libvirt.host [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.119 226310 DEBUG nova.virt.libvirt.driver [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.120 226310 DEBUG nova.virt.hardware [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.120 226310 DEBUG nova.virt.hardware [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.120 226310 DEBUG nova.virt.hardware [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.121 226310 DEBUG nova.virt.hardware [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.121 226310 DEBUG nova.virt.hardware [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.121 226310 DEBUG nova.virt.hardware [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.122 226310 DEBUG nova.virt.hardware [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.122 226310 DEBUG nova.virt.hardware [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.122 226310 DEBUG nova.virt.hardware [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.122 226310 DEBUG nova.virt.hardware [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.123 226310 DEBUG nova.virt.hardware [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.126 226310 DEBUG oslo_concurrency.processutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.179 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:58:38 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/266316634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.611 226310 DEBUG oslo_concurrency.processutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.658 226310 DEBUG nova.storage.rbd_utils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 248e436d-b395-42ee-b487-5d8336f62ebc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:58:38 np0005539564 nova_compute[226295]: 2025-11-29 08:58:38.663 226310 DEBUG oslo_concurrency.processutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:58:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:58:39 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/798511832' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.107 226310 DEBUG oslo_concurrency.processutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.110 226310 DEBUG nova.virt.libvirt.vif [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:58:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1229925520',display_name='tempest-TestNetworkBasicOps-server-1229925520',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1229925520',id=209,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAkfIgw/ZaoBzZdpsYUr4kfiHc0ww/b5s1kmwWDka4qlUN88/1cleEmBLDimlpCx+jCEGylJ/s++JKE6YAYl1jd2OXD5hhhFOd7A4u5Na3bZ2unx1kWY4ybQHm3R36ZhXw==',key_name='tempest-TestNetworkBasicOps-1192430178',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-4kzs0lq6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:58:34Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=248e436d-b395-42ee-b487-5d8336f62ebc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a22ea9cb-3479-4928-a0b9-996401fa64ca", "address": "fa:16:3e:77:6b:0e", "network": {"id": "5495d9f7-028e-40a5-be0d-02dd42ede587", "bridge": "br-int", "label": "tempest-network-smoke--581542197", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa22ea9cb-34", "ovs_interfaceid": "a22ea9cb-3479-4928-a0b9-996401fa64ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.110 226310 DEBUG nova.network.os_vif_util [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "a22ea9cb-3479-4928-a0b9-996401fa64ca", "address": "fa:16:3e:77:6b:0e", "network": {"id": "5495d9f7-028e-40a5-be0d-02dd42ede587", "bridge": "br-int", "label": "tempest-network-smoke--581542197", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa22ea9cb-34", "ovs_interfaceid": "a22ea9cb-3479-4928-a0b9-996401fa64ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.111 226310 DEBUG nova.network.os_vif_util [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:6b:0e,bridge_name='br-int',has_traffic_filtering=True,id=a22ea9cb-3479-4928-a0b9-996401fa64ca,network=Network(5495d9f7-028e-40a5-be0d-02dd42ede587),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa22ea9cb-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.113 226310 DEBUG nova.objects.instance [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'pci_devices' on Instance uuid 248e436d-b395-42ee-b487-5d8336f62ebc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.140 226310 DEBUG nova.virt.libvirt.driver [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:58:39 np0005539564 nova_compute[226295]:  <uuid>248e436d-b395-42ee-b487-5d8336f62ebc</uuid>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:  <name>instance-000000d1</name>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <nova:name>tempest-TestNetworkBasicOps-server-1229925520</nova:name>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 08:58:38</nova:creationTime>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 03:58:39 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:        <nova:user uuid="3a9ba73ff05b4529ad104362a5a57cc7">tempest-TestNetworkBasicOps-488786542-project-member</nova:user>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:        <nova:project uuid="ca5878248147453baabf40a90f9feb19">tempest-TestNetworkBasicOps-488786542</nova:project>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:        <nova:port uuid="a22ea9cb-3479-4928-a0b9-996401fa64ca">
Nov 29 03:58:39 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <system>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <entry name="serial">248e436d-b395-42ee-b487-5d8336f62ebc</entry>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <entry name="uuid">248e436d-b395-42ee-b487-5d8336f62ebc</entry>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    </system>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:  <os>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:  </os>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:  <features>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:  </features>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:  </clock>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:  <devices>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/248e436d-b395-42ee-b487-5d8336f62ebc_disk">
Nov 29 03:58:39 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:58:39 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/248e436d-b395-42ee-b487-5d8336f62ebc_disk.config">
Nov 29 03:58:39 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      </source>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 03:58:39 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      </auth>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    </disk>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:77:6b:0e"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <target dev="tapa22ea9cb-34"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    </interface>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/248e436d-b395-42ee-b487-5d8336f62ebc/console.log" append="off"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    </serial>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <video>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    </video>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    </rng>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 03:58:39 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 03:58:39 np0005539564 nova_compute[226295]:  </devices>
Nov 29 03:58:39 np0005539564 nova_compute[226295]: </domain>
Nov 29 03:58:39 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.142 226310 DEBUG nova.compute.manager [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Preparing to wait for external event network-vif-plugged-a22ea9cb-3479-4928-a0b9-996401fa64ca prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.142 226310 DEBUG oslo_concurrency.lockutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "248e436d-b395-42ee-b487-5d8336f62ebc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.143 226310 DEBUG oslo_concurrency.lockutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "248e436d-b395-42ee-b487-5d8336f62ebc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.143 226310 DEBUG oslo_concurrency.lockutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "248e436d-b395-42ee-b487-5d8336f62ebc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.144 226310 DEBUG nova.virt.libvirt.vif [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:58:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1229925520',display_name='tempest-TestNetworkBasicOps-server-1229925520',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1229925520',id=209,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAkfIgw/ZaoBzZdpsYUr4kfiHc0ww/b5s1kmwWDka4qlUN88/1cleEmBLDimlpCx+jCEGylJ/s++JKE6YAYl1jd2OXD5hhhFOd7A4u5Na3bZ2unx1kWY4ybQHm3R36ZhXw==',key_name='tempest-TestNetworkBasicOps-1192430178',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-4kzs0lq6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:58:34Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=248e436d-b395-42ee-b487-5d8336f62ebc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a22ea9cb-3479-4928-a0b9-996401fa64ca", "address": "fa:16:3e:77:6b:0e", "network": {"id": "5495d9f7-028e-40a5-be0d-02dd42ede587", "bridge": "br-int", "label": "tempest-network-smoke--581542197", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": 
[], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa22ea9cb-34", "ovs_interfaceid": "a22ea9cb-3479-4928-a0b9-996401fa64ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.144 226310 DEBUG nova.network.os_vif_util [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "a22ea9cb-3479-4928-a0b9-996401fa64ca", "address": "fa:16:3e:77:6b:0e", "network": {"id": "5495d9f7-028e-40a5-be0d-02dd42ede587", "bridge": "br-int", "label": "tempest-network-smoke--581542197", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa22ea9cb-34", "ovs_interfaceid": "a22ea9cb-3479-4928-a0b9-996401fa64ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.145 226310 DEBUG nova.network.os_vif_util [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:6b:0e,bridge_name='br-int',has_traffic_filtering=True,id=a22ea9cb-3479-4928-a0b9-996401fa64ca,network=Network(5495d9f7-028e-40a5-be0d-02dd42ede587),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa22ea9cb-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.145 226310 DEBUG os_vif [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:6b:0e,bridge_name='br-int',has_traffic_filtering=True,id=a22ea9cb-3479-4928-a0b9-996401fa64ca,network=Network(5495d9f7-028e-40a5-be0d-02dd42ede587),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa22ea9cb-34') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.146 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.146 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.147 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.151 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.151 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa22ea9cb-34, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.152 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa22ea9cb-34, col_values=(('external_ids', {'iface-id': 'a22ea9cb-3479-4928-a0b9-996401fa64ca', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:6b:0e', 'vm-uuid': '248e436d-b395-42ee-b487-5d8336f62ebc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.154 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:39 np0005539564 NetworkManager[48997]: <info>  [1764406719.1552] manager: (tapa22ea9cb-34): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.157 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.161 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.162 226310 INFO os_vif [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:6b:0e,bridge_name='br-int',has_traffic_filtering=True,id=a22ea9cb-3479-4928-a0b9-996401fa64ca,network=Network(5495d9f7-028e-40a5-be0d-02dd42ede587),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa22ea9cb-34')#033[00m
Nov 29 03:58:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:39.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.359 226310 DEBUG nova.virt.libvirt.driver [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.359 226310 DEBUG nova.virt.libvirt.driver [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.360 226310 DEBUG nova.virt.libvirt.driver [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No VIF found with MAC fa:16:3e:77:6b:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.360 226310 INFO nova.virt.libvirt.driver [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Using config drive#033[00m
Nov 29 03:58:39 np0005539564 nova_compute[226295]: 2025-11-29 08:58:39.401 226310 DEBUG nova.storage.rbd_utils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 248e436d-b395-42ee-b487-5d8336f62ebc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:58:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:39.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:40 np0005539564 nova_compute[226295]: 2025-11-29 08:58:40.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:40 np0005539564 nova_compute[226295]: 2025-11-29 08:58:40.730 226310 INFO nova.virt.libvirt.driver [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Creating config drive at /var/lib/nova/instances/248e436d-b395-42ee-b487-5d8336f62ebc/disk.config#033[00m
Nov 29 03:58:40 np0005539564 nova_compute[226295]: 2025-11-29 08:58:40.738 226310 DEBUG oslo_concurrency.processutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/248e436d-b395-42ee-b487-5d8336f62ebc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi_j5son4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:58:40 np0005539564 nova_compute[226295]: 2025-11-29 08:58:40.901 226310 DEBUG oslo_concurrency.processutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/248e436d-b395-42ee-b487-5d8336f62ebc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi_j5son4" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:58:40 np0005539564 nova_compute[226295]: 2025-11-29 08:58:40.949 226310 DEBUG nova.storage.rbd_utils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 248e436d-b395-42ee-b487-5d8336f62ebc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:58:40 np0005539564 nova_compute[226295]: 2025-11-29 08:58:40.955 226310 DEBUG oslo_concurrency.processutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/248e436d-b395-42ee-b487-5d8336f62ebc/disk.config 248e436d-b395-42ee-b487-5d8336f62ebc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:58:41 np0005539564 nova_compute[226295]: 2025-11-29 08:58:41.174 226310 DEBUG oslo_concurrency.processutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/248e436d-b395-42ee-b487-5d8336f62ebc/disk.config 248e436d-b395-42ee-b487-5d8336f62ebc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:58:41 np0005539564 nova_compute[226295]: 2025-11-29 08:58:41.176 226310 INFO nova.virt.libvirt.driver [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Deleting local config drive /var/lib/nova/instances/248e436d-b395-42ee-b487-5d8336f62ebc/disk.config because it was imported into RBD.#033[00m
Nov 29 03:58:41 np0005539564 kernel: tapa22ea9cb-34: entered promiscuous mode
Nov 29 03:58:41 np0005539564 ovn_controller[130591]: 2025-11-29T08:58:41Z|00841|binding|INFO|Claiming lport a22ea9cb-3479-4928-a0b9-996401fa64ca for this chassis.
Nov 29 03:58:41 np0005539564 ovn_controller[130591]: 2025-11-29T08:58:41Z|00842|binding|INFO|a22ea9cb-3479-4928-a0b9-996401fa64ca: Claiming fa:16:3e:77:6b:0e 10.100.0.8
Nov 29 03:58:41 np0005539564 nova_compute[226295]: 2025-11-29 08:58:41.266 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:41 np0005539564 NetworkManager[48997]: <info>  [1764406721.2673] manager: (tapa22ea9cb-34): new Tun device (/org/freedesktop/NetworkManager/Devices/393)
Nov 29 03:58:41 np0005539564 nova_compute[226295]: 2025-11-29 08:58:41.276 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.289 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:6b:0e 10.100.0.8'], port_security=['fa:16:3e:77:6b:0e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '248e436d-b395-42ee-b487-5d8336f62ebc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5495d9f7-028e-40a5-be0d-02dd42ede587', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '2', 'neutron:security_group_ids': '17d0914d-837e-4b83-a856-fde79ecab611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49eb8326-9a87-4555-8a63-5d46a7abe8d9, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=a22ea9cb-3479-4928-a0b9-996401fa64ca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.291 139780 INFO neutron.agent.ovn.metadata.agent [-] Port a22ea9cb-3479-4928-a0b9-996401fa64ca in datapath 5495d9f7-028e-40a5-be0d-02dd42ede587 bound to our chassis#033[00m
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.293 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5495d9f7-028e-40a5-be0d-02dd42ede587#033[00m
Nov 29 03:58:41 np0005539564 systemd-udevd[309696]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:58:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:41.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.312 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[544e6589-79a2-41c2-8c14-e2670ea04a2b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.313 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5495d9f7-01 in ovnmeta-5495d9f7-028e-40a5-be0d-02dd42ede587 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.316 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5495d9f7-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.316 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[57c4fcd7-252a-446a-9c58-32d84ec907d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:58:41 np0005539564 systemd-machined[190128]: New machine qemu-97-instance-000000d1.
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.318 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5913fb87-e6f0-4a8b-a045-5ade9b332abf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:58:41 np0005539564 NetworkManager[48997]: <info>  [1764406721.3206] device (tapa22ea9cb-34): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:58:41 np0005539564 NetworkManager[48997]: <info>  [1764406721.3215] device (tapa22ea9cb-34): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:58:41 np0005539564 systemd[1]: Started Virtual Machine qemu-97-instance-000000d1.
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.336 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[4904f28d-27d4-40e3-854c-88a20c0d5ba6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:58:41 np0005539564 nova_compute[226295]: 2025-11-29 08:58:41.359 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:41 np0005539564 nova_compute[226295]: 2025-11-29 08:58:41.364 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:41 np0005539564 ovn_controller[130591]: 2025-11-29T08:58:41Z|00843|binding|INFO|Setting lport a22ea9cb-3479-4928-a0b9-996401fa64ca ovn-installed in OVS
Nov 29 03:58:41 np0005539564 ovn_controller[130591]: 2025-11-29T08:58:41Z|00844|binding|INFO|Setting lport a22ea9cb-3479-4928-a0b9-996401fa64ca up in Southbound
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.365 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f974baf6-7082-4e57-b37a-6f9fb5fb088d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:58:41 np0005539564 nova_compute[226295]: 2025-11-29 08:58:41.367 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.400 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[bf66c414-a86a-4669-b1ba-6057c0ae8028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:58:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:41.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:41 np0005539564 systemd-udevd[309700]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:58:41 np0005539564 NetworkManager[48997]: <info>  [1764406721.4113] manager: (tap5495d9f7-00): new Veth device (/org/freedesktop/NetworkManager/Devices/394)
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.412 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e865ac09-34a6-4a68-85ee-ed2c381b7bab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.453 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[029c4884-57ae-46fb-8f34-fd7e8f945923]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.458 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[eccf2a2c-871e-4b98-a42d-7d0a2a52570c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:58:41 np0005539564 NetworkManager[48997]: <info>  [1764406721.4885] device (tap5495d9f7-00): carrier: link connected
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.496 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[6121cc24-4454-4598-9755-99134a8a6e4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.520 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[17da2ad3-0027-4e32-b1dd-bcc3beead80d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5495d9f7-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:b9:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 253], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 969617, 'reachable_time': 20699, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309730, 'error': None, 'target': 'ovnmeta-5495d9f7-028e-40a5-be0d-02dd42ede587', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.549 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[071e7bcf-f943-4bd3-8f7a-d28a3713a7e7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe46:b9a9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 969617, 'tstamp': 969617}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309731, 'error': None, 'target': 'ovnmeta-5495d9f7-028e-40a5-be0d-02dd42ede587', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.573 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c8fa205f-5c43-46da-95d3-1faafc91d273]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5495d9f7-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:b9:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 253], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 969617, 'reachable_time': 20699, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309739, 'error': None, 'target': 'ovnmeta-5495d9f7-028e-40a5-be0d-02dd42ede587', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.612 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[89048aa7-5260-4199-8f8f-f2979c28eb22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.687 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[59390da9-f6ef-4b55-9123-0942c9e0eb55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.688 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5495d9f7-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.689 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.690 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5495d9f7-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:58:41 np0005539564 nova_compute[226295]: 2025-11-29 08:58:41.692 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:41 np0005539564 NetworkManager[48997]: <info>  [1764406721.6927] manager: (tap5495d9f7-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/395)
Nov 29 03:58:41 np0005539564 kernel: tap5495d9f7-00: entered promiscuous mode
Nov 29 03:58:41 np0005539564 nova_compute[226295]: 2025-11-29 08:58:41.695 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.696 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5495d9f7-00, col_values=(('external_ids', {'iface-id': '8d394f96-315e-4ade-9c22-28dc59ebaf77'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:58:41 np0005539564 nova_compute[226295]: 2025-11-29 08:58:41.697 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:41 np0005539564 ovn_controller[130591]: 2025-11-29T08:58:41Z|00845|binding|INFO|Releasing lport 8d394f96-315e-4ade-9c22-28dc59ebaf77 from this chassis (sb_readonly=0)
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.776 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5495d9f7-028e-40a5-be0d-02dd42ede587.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5495d9f7-028e-40a5-be0d-02dd42ede587.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.777 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[99f44170-542d-4b6c-a4f2-9ffe72a37e13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.778 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-5495d9f7-028e-40a5-be0d-02dd42ede587
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/5495d9f7-028e-40a5-be0d-02dd42ede587.pid.haproxy
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 5495d9f7-028e-40a5-be0d-02dd42ede587
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:58:41 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:58:41.778 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5495d9f7-028e-40a5-be0d-02dd42ede587', 'env', 'PROCESS_TAG=haproxy-5495d9f7-028e-40a5-be0d-02dd42ede587', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5495d9f7-028e-40a5-be0d-02dd42ede587.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:58:41 np0005539564 nova_compute[226295]: 2025-11-29 08:58:41.779 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:41 np0005539564 nova_compute[226295]: 2025-11-29 08:58:41.897 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406721.8972495, 248e436d-b395-42ee-b487-5d8336f62ebc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:58:41 np0005539564 nova_compute[226295]: 2025-11-29 08:58:41.898 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] VM Started (Lifecycle Event)#033[00m
Nov 29 03:58:41 np0005539564 nova_compute[226295]: 2025-11-29 08:58:41.983 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:58:41 np0005539564 nova_compute[226295]: 2025-11-29 08:58:41.993 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406721.8985648, 248e436d-b395-42ee-b487-5d8336f62ebc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:58:41 np0005539564 nova_compute[226295]: 2025-11-29 08:58:41.994 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.013 226310 DEBUG nova.compute.manager [req-fad2baed-7a13-48a9-9361-e47b7c0acae9 req-5bfc26fb-273a-48cd-ab5d-9a53f124d0c7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Received event network-vif-plugged-a22ea9cb-3479-4928-a0b9-996401fa64ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.014 226310 DEBUG oslo_concurrency.lockutils [req-fad2baed-7a13-48a9-9361-e47b7c0acae9 req-5bfc26fb-273a-48cd-ab5d-9a53f124d0c7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "248e436d-b395-42ee-b487-5d8336f62ebc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.015 226310 DEBUG oslo_concurrency.lockutils [req-fad2baed-7a13-48a9-9361-e47b7c0acae9 req-5bfc26fb-273a-48cd-ab5d-9a53f124d0c7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "248e436d-b395-42ee-b487-5d8336f62ebc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.015 226310 DEBUG oslo_concurrency.lockutils [req-fad2baed-7a13-48a9-9361-e47b7c0acae9 req-5bfc26fb-273a-48cd-ab5d-9a53f124d0c7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "248e436d-b395-42ee-b487-5d8336f62ebc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.016 226310 DEBUG nova.compute.manager [req-fad2baed-7a13-48a9-9361-e47b7c0acae9 req-5bfc26fb-273a-48cd-ab5d-9a53f124d0c7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Processing event network-vif-plugged-a22ea9cb-3479-4928-a0b9-996401fa64ca _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.017 226310 DEBUG nova.compute.manager [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.022 226310 DEBUG nova.network.neutron [req-9188f231-a750-4815-8323-d45ee4201b9b req-59f2f9b9-4f99-4b3f-ac2e-33c58429bd06 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Updated VIF entry in instance network info cache for port a22ea9cb-3479-4928-a0b9-996401fa64ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.023 226310 DEBUG nova.network.neutron [req-9188f231-a750-4815-8323-d45ee4201b9b req-59f2f9b9-4f99-4b3f-ac2e-33c58429bd06 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Updating instance_info_cache with network_info: [{"id": "a22ea9cb-3479-4928-a0b9-996401fa64ca", "address": "fa:16:3e:77:6b:0e", "network": {"id": "5495d9f7-028e-40a5-be0d-02dd42ede587", "bridge": "br-int", "label": "tempest-network-smoke--581542197", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa22ea9cb-34", "ovs_interfaceid": "a22ea9cb-3479-4928-a0b9-996401fa64ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.027 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.028 226310 DEBUG nova.virt.libvirt.driver [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.034 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406722.0236704, 248e436d-b395-42ee-b487-5d8336f62ebc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.035 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.039 226310 INFO nova.virt.libvirt.driver [-] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Instance spawned successfully.#033[00m
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.040 226310 DEBUG nova.virt.libvirt.driver [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.107 226310 DEBUG oslo_concurrency.lockutils [req-9188f231-a750-4815-8323-d45ee4201b9b req-59f2f9b9-4f99-4b3f-ac2e-33c58429bd06 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-248e436d-b395-42ee-b487-5d8336f62ebc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.110 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.117 226310 DEBUG nova.virt.libvirt.driver [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.118 226310 DEBUG nova.virt.libvirt.driver [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.119 226310 DEBUG nova.virt.libvirt.driver [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.120 226310 DEBUG nova.virt.libvirt.driver [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.121 226310 DEBUG nova.virt.libvirt.driver [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.121 226310 DEBUG nova.virt.libvirt.driver [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.129 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.216 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.286 226310 INFO nova.compute.manager [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Took 7.75 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.287 226310 DEBUG nova.compute.manager [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:58:42 np0005539564 podman[309806]: 2025-11-29 08:58:42.295682865 +0000 UTC m=+0.091172767 container create 1fd76852648426288a03f25595f882ffa57725a7c12f4fc4e4ef56648fff3125 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5495d9f7-028e-40a5-be0d-02dd42ede587, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:58:42 np0005539564 systemd[1]: Started libpod-conmon-1fd76852648426288a03f25595f882ffa57725a7c12f4fc4e4ef56648fff3125.scope.
Nov 29 03:58:42 np0005539564 podman[309806]: 2025-11-29 08:58:42.24522885 +0000 UTC m=+0.040718782 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:58:42 np0005539564 systemd[1]: Started libcrun container.
Nov 29 03:58:42 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf1b64cf5c12fff6b6dd2bcaef52a61087aad7605da55bb0c049cb0a8285d2b1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:58:42 np0005539564 podman[309806]: 2025-11-29 08:58:42.401252972 +0000 UTC m=+0.196743004 container init 1fd76852648426288a03f25595f882ffa57725a7c12f4fc4e4ef56648fff3125 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5495d9f7-028e-40a5-be0d-02dd42ede587, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:58:42 np0005539564 podman[309806]: 2025-11-29 08:58:42.408040865 +0000 UTC m=+0.203530797 container start 1fd76852648426288a03f25595f882ffa57725a7c12f4fc4e4ef56648fff3125 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5495d9f7-028e-40a5-be0d-02dd42ede587, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:58:42 np0005539564 neutron-haproxy-ovnmeta-5495d9f7-028e-40a5-be0d-02dd42ede587[309821]: [NOTICE]   (309825) : New worker (309827) forked
Nov 29 03:58:42 np0005539564 neutron-haproxy-ovnmeta-5495d9f7-028e-40a5-be0d-02dd42ede587[309821]: [NOTICE]   (309825) : Loading success.
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.463 226310 INFO nova.compute.manager [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Took 8.88 seconds to build instance.#033[00m
Nov 29 03:58:42 np0005539564 nova_compute[226295]: 2025-11-29 08:58:42.518 226310 DEBUG oslo_concurrency.lockutils [None req-74458e20-dfdb-494d-971f-1e0828d94e1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "248e436d-b395-42ee-b487-5d8336f62ebc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.014s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:58:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:43.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:43.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:44 np0005539564 nova_compute[226295]: 2025-11-29 08:58:44.170 226310 DEBUG nova.compute.manager [req-9c4dad39-9422-4409-b6a4-a1c02c6fc090 req-fd86e8e5-174a-46cf-891e-8f6c43ec7915 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Received event network-vif-plugged-a22ea9cb-3479-4928-a0b9-996401fa64ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:58:44 np0005539564 nova_compute[226295]: 2025-11-29 08:58:44.171 226310 DEBUG oslo_concurrency.lockutils [req-9c4dad39-9422-4409-b6a4-a1c02c6fc090 req-fd86e8e5-174a-46cf-891e-8f6c43ec7915 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "248e436d-b395-42ee-b487-5d8336f62ebc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:58:44 np0005539564 nova_compute[226295]: 2025-11-29 08:58:44.171 226310 DEBUG oslo_concurrency.lockutils [req-9c4dad39-9422-4409-b6a4-a1c02c6fc090 req-fd86e8e5-174a-46cf-891e-8f6c43ec7915 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "248e436d-b395-42ee-b487-5d8336f62ebc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:58:44 np0005539564 nova_compute[226295]: 2025-11-29 08:58:44.172 226310 DEBUG oslo_concurrency.lockutils [req-9c4dad39-9422-4409-b6a4-a1c02c6fc090 req-fd86e8e5-174a-46cf-891e-8f6c43ec7915 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "248e436d-b395-42ee-b487-5d8336f62ebc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:58:44 np0005539564 nova_compute[226295]: 2025-11-29 08:58:44.173 226310 DEBUG nova.compute.manager [req-9c4dad39-9422-4409-b6a4-a1c02c6fc090 req-fd86e8e5-174a-46cf-891e-8f6c43ec7915 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] No waiting events found dispatching network-vif-plugged-a22ea9cb-3479-4928-a0b9-996401fa64ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:58:44 np0005539564 nova_compute[226295]: 2025-11-29 08:58:44.173 226310 WARNING nova.compute.manager [req-9c4dad39-9422-4409-b6a4-a1c02c6fc090 req-fd86e8e5-174a-46cf-891e-8f6c43ec7915 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Received unexpected event network-vif-plugged-a22ea9cb-3479-4928-a0b9-996401fa64ca for instance with vm_state active and task_state None.#033[00m
Nov 29 03:58:44 np0005539564 nova_compute[226295]: 2025-11-29 08:58:44.191 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:45.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.003000080s ======
Nov 29 03:58:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:45.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Nov 29 03:58:46 np0005539564 nova_compute[226295]: 2025-11-29 08:58:46.507 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:46 np0005539564 nova_compute[226295]: 2025-11-29 08:58:46.531 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Triggering sync for uuid 248e436d-b395-42ee-b487-5d8336f62ebc _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 03:58:46 np0005539564 nova_compute[226295]: 2025-11-29 08:58:46.532 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "248e436d-b395-42ee-b487-5d8336f62ebc" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:58:46 np0005539564 nova_compute[226295]: 2025-11-29 08:58:46.532 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "248e436d-b395-42ee-b487-5d8336f62ebc" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:58:46 np0005539564 nova_compute[226295]: 2025-11-29 08:58:46.565 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "248e436d-b395-42ee-b487-5d8336f62ebc" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:58:46 np0005539564 nova_compute[226295]: 2025-11-29 08:58:46.778 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:46 np0005539564 nova_compute[226295]: 2025-11-29 08:58:46.905 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:46 np0005539564 NetworkManager[48997]: <info>  [1764406726.9059] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Nov 29 03:58:46 np0005539564 ovn_controller[130591]: 2025-11-29T08:58:46Z|00846|binding|INFO|Releasing lport 8d394f96-315e-4ade-9c22-28dc59ebaf77 from this chassis (sb_readonly=0)
Nov 29 03:58:46 np0005539564 NetworkManager[48997]: <info>  [1764406726.9071] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/397)
Nov 29 03:58:46 np0005539564 ovn_controller[130591]: 2025-11-29T08:58:46Z|00847|binding|INFO|Releasing lport 8d394f96-315e-4ade-9c22-28dc59ebaf77 from this chassis (sb_readonly=0)
Nov 29 03:58:46 np0005539564 nova_compute[226295]: 2025-11-29 08:58:46.967 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:46 np0005539564 nova_compute[226295]: 2025-11-29 08:58:46.973 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:47.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:47 np0005539564 nova_compute[226295]: 2025-11-29 08:58:47.314 226310 DEBUG nova.compute.manager [req-1a652b1b-bb15-43dc-a79c-cd136a4d14a5 req-566a10e9-ea0b-49c9-984a-11a4b9a98882 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Received event network-changed-a22ea9cb-3479-4928-a0b9-996401fa64ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:58:47 np0005539564 nova_compute[226295]: 2025-11-29 08:58:47.315 226310 DEBUG nova.compute.manager [req-1a652b1b-bb15-43dc-a79c-cd136a4d14a5 req-566a10e9-ea0b-49c9-984a-11a4b9a98882 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Refreshing instance network info cache due to event network-changed-a22ea9cb-3479-4928-a0b9-996401fa64ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:58:47 np0005539564 nova_compute[226295]: 2025-11-29 08:58:47.315 226310 DEBUG oslo_concurrency.lockutils [req-1a652b1b-bb15-43dc-a79c-cd136a4d14a5 req-566a10e9-ea0b-49c9-984a-11a4b9a98882 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-248e436d-b395-42ee-b487-5d8336f62ebc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:58:47 np0005539564 nova_compute[226295]: 2025-11-29 08:58:47.315 226310 DEBUG oslo_concurrency.lockutils [req-1a652b1b-bb15-43dc-a79c-cd136a4d14a5 req-566a10e9-ea0b-49c9-984a-11a4b9a98882 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-248e436d-b395-42ee-b487-5d8336f62ebc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:58:47 np0005539564 nova_compute[226295]: 2025-11-29 08:58:47.316 226310 DEBUG nova.network.neutron [req-1a652b1b-bb15-43dc-a79c-cd136a4d14a5 req-566a10e9-ea0b-49c9-984a-11a4b9a98882 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Refreshing network info cache for port a22ea9cb-3479-4928-a0b9-996401fa64ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:58:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:47.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:47 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:58:47 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:58:47 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:58:48 np0005539564 nova_compute[226295]: 2025-11-29 08:58:48.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:48 np0005539564 nova_compute[226295]: 2025-11-29 08:58:48.370 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:58:48 np0005539564 nova_compute[226295]: 2025-11-29 08:58:48.370 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:58:48 np0005539564 nova_compute[226295]: 2025-11-29 08:58:48.371 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:58:48 np0005539564 nova_compute[226295]: 2025-11-29 08:58:48.371 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:58:48 np0005539564 nova_compute[226295]: 2025-11-29 08:58:48.372 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:58:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:58:48 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/155518144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:58:48 np0005539564 nova_compute[226295]: 2025-11-29 08:58:48.828 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:58:48 np0005539564 nova_compute[226295]: 2025-11-29 08:58:48.919 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000d1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:58:48 np0005539564 nova_compute[226295]: 2025-11-29 08:58:48.920 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000d1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:58:48 np0005539564 podman[309992]: 2025-11-29 08:58:48.943038165 +0000 UTC m=+0.059324526 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:58:48 np0005539564 podman[309991]: 2025-11-29 08:58:48.94582956 +0000 UTC m=+0.062752698 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true)
Nov 29 03:58:48 np0005539564 podman[309990]: 2025-11-29 08:58:48.991231608 +0000 UTC m=+0.107581931 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:58:49 np0005539564 nova_compute[226295]: 2025-11-29 08:58:49.087 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:58:49 np0005539564 nova_compute[226295]: 2025-11-29 08:58:49.088 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4038MB free_disk=20.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:58:49 np0005539564 nova_compute[226295]: 2025-11-29 08:58:49.088 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:58:49 np0005539564 nova_compute[226295]: 2025-11-29 08:58:49.089 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:58:49 np0005539564 nova_compute[226295]: 2025-11-29 08:58:49.183 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 248e436d-b395-42ee-b487-5d8336f62ebc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:58:49 np0005539564 nova_compute[226295]: 2025-11-29 08:58:49.183 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:58:49 np0005539564 nova_compute[226295]: 2025-11-29 08:58:49.184 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:58:49 np0005539564 nova_compute[226295]: 2025-11-29 08:58:49.193 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:49 np0005539564 nova_compute[226295]: 2025-11-29 08:58:49.201 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing inventories for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:58:49 np0005539564 nova_compute[226295]: 2025-11-29 08:58:49.232 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating ProviderTree inventory for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:58:49 np0005539564 nova_compute[226295]: 2025-11-29 08:58:49.233 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating inventory in ProviderTree for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:58:49 np0005539564 nova_compute[226295]: 2025-11-29 08:58:49.247 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing aggregate associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:58:49 np0005539564 nova_compute[226295]: 2025-11-29 08:58:49.268 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing trait associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:58:49 np0005539564 nova_compute[226295]: 2025-11-29 08:58:49.310 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:58:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:49.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:49.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:58:49 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/695085484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:58:49 np0005539564 nova_compute[226295]: 2025-11-29 08:58:49.789 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:58:49 np0005539564 nova_compute[226295]: 2025-11-29 08:58:49.795 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:58:49 np0005539564 nova_compute[226295]: 2025-11-29 08:58:49.818 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:58:49 np0005539564 nova_compute[226295]: 2025-11-29 08:58:49.848 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:58:49 np0005539564 nova_compute[226295]: 2025-11-29 08:58:49.849 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:58:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:50 np0005539564 nova_compute[226295]: 2025-11-29 08:58:50.740 226310 DEBUG nova.network.neutron [req-1a652b1b-bb15-43dc-a79c-cd136a4d14a5 req-566a10e9-ea0b-49c9-984a-11a4b9a98882 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Updated VIF entry in instance network info cache for port a22ea9cb-3479-4928-a0b9-996401fa64ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:58:50 np0005539564 nova_compute[226295]: 2025-11-29 08:58:50.741 226310 DEBUG nova.network.neutron [req-1a652b1b-bb15-43dc-a79c-cd136a4d14a5 req-566a10e9-ea0b-49c9-984a-11a4b9a98882 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Updating instance_info_cache with network_info: [{"id": "a22ea9cb-3479-4928-a0b9-996401fa64ca", "address": "fa:16:3e:77:6b:0e", "network": {"id": "5495d9f7-028e-40a5-be0d-02dd42ede587", "bridge": "br-int", "label": "tempest-network-smoke--581542197", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa22ea9cb-34", "ovs_interfaceid": "a22ea9cb-3479-4928-a0b9-996401fa64ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:58:50 np0005539564 nova_compute[226295]: 2025-11-29 08:58:50.766 226310 DEBUG oslo_concurrency.lockutils [req-1a652b1b-bb15-43dc-a79c-cd136a4d14a5 req-566a10e9-ea0b-49c9-984a-11a4b9a98882 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-248e436d-b395-42ee-b487-5d8336f62ebc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:58:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:51.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:51.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:51 np0005539564 nova_compute[226295]: 2025-11-29 08:58:51.781 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:52 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #163. Immutable memtables: 0.
Nov 29 03:58:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:58:52.251859) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:58:52 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 163
Nov 29 03:58:52 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406732251889, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 1673, "num_deletes": 258, "total_data_size": 3859627, "memory_usage": 3928080, "flush_reason": "Manual Compaction"}
Nov 29 03:58:52 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #164: started
Nov 29 03:58:52 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406732270204, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 164, "file_size": 2535139, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80283, "largest_seqno": 81951, "table_properties": {"data_size": 2528172, "index_size": 4037, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14735, "raw_average_key_size": 19, "raw_value_size": 2514096, "raw_average_value_size": 3392, "num_data_blocks": 177, "num_entries": 741, "num_filter_entries": 741, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406588, "oldest_key_time": 1764406588, "file_creation_time": 1764406732, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:58:52 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 18393 microseconds, and 5838 cpu microseconds.
Nov 29 03:58:52 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:58:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:58:52.270252) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #164: 2535139 bytes OK
Nov 29 03:58:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:58:52.270267) [db/memtable_list.cc:519] [default] Level-0 commit table #164 started
Nov 29 03:58:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:58:52.272329) [db/memtable_list.cc:722] [default] Level-0 commit table #164: memtable #1 done
Nov 29 03:58:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:58:52.272339) EVENT_LOG_v1 {"time_micros": 1764406732272335, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:58:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:58:52.272355) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:58:52 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 3852014, prev total WAL file size 3852014, number of live WAL files 2.
Nov 29 03:58:52 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000160.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:58:52 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:58:52.273305) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303230' seq:72057594037927935, type:22 .. '6C6F676D0033323734' seq:0, type:0; will stop at (end)
Nov 29 03:58:52 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:58:52 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [164(2475KB)], [162(12MB)]
Nov 29 03:58:52 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406732273376, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [164], "files_L6": [162], "score": -1, "input_data_size": 15240947, "oldest_snapshot_seqno": -1}
Nov 29 03:58:53 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #165: 10797 keys, 15101067 bytes, temperature: kUnknown
Nov 29 03:58:53 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406733116098, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 165, "file_size": 15101067, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15029816, "index_size": 43170, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27013, "raw_key_size": 285223, "raw_average_key_size": 26, "raw_value_size": 14839169, "raw_average_value_size": 1374, "num_data_blocks": 1649, "num_entries": 10797, "num_filter_entries": 10797, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764406732, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 165, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:58:53 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:58:53 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:58:53.116615) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 15101067 bytes
Nov 29 03:58:53 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:58:53.118065) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 18.1 rd, 17.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 12.1 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(12.0) write-amplify(6.0) OK, records in: 11328, records dropped: 531 output_compression: NoCompression
Nov 29 03:58:53 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:58:53.118102) EVENT_LOG_v1 {"time_micros": 1764406733118084, "job": 104, "event": "compaction_finished", "compaction_time_micros": 842921, "compaction_time_cpu_micros": 47768, "output_level": 6, "num_output_files": 1, "total_output_size": 15101067, "num_input_records": 11328, "num_output_records": 10797, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:58:53 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:58:53 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406733119099, "job": 104, "event": "table_file_deletion", "file_number": 164}
Nov 29 03:58:53 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000162.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:58:53 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406733124091, "job": 104, "event": "table_file_deletion", "file_number": 162}
Nov 29 03:58:53 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:58:52.273103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:53 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:58:53.124183) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:53 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:58:53.124191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:53 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:58:53.124194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:53 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:58:53.124197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:53 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-08:58:53.124200) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:53.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:53.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:54 np0005539564 nova_compute[226295]: 2025-11-29 08:58:54.235 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:54 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:58:54 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:58:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:58:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:55.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:58:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:55.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:56 np0005539564 nova_compute[226295]: 2025-11-29 08:58:56.784 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:57.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:57.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:57 np0005539564 ovn_controller[130591]: 2025-11-29T08:58:57Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:77:6b:0e 10.100.0.8
Nov 29 03:58:57 np0005539564 ovn_controller[130591]: 2025-11-29T08:58:57Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:77:6b:0e 10.100.0.8
Nov 29 03:58:59 np0005539564 nova_compute[226295]: 2025-11-29 08:58:59.237 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:59.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:58:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:59.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:01.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:01.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:01 np0005539564 nova_compute[226295]: 2025-11-29 08:59:01.786 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:03.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:03.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:59:03.779 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:59:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:59:03.781 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:59:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:59:03.782 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:59:04 np0005539564 nova_compute[226295]: 2025-11-29 08:59:04.242 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:04 np0005539564 nova_compute[226295]: 2025-11-29 08:59:04.748 226310 INFO nova.compute.manager [None req-74081a42-d83d-443c-a029-c5929b012e2e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Get console output#033[00m
Nov 29 03:59:04 np0005539564 nova_compute[226295]: 2025-11-29 08:59:04.755 270504 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:59:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:05.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:05.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:06 np0005539564 nova_compute[226295]: 2025-11-29 08:59:06.130 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:59:06.130 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=89, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=88) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:59:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:59:06.131 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:59:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:59:06.132 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '89'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:59:06 np0005539564 nova_compute[226295]: 2025-11-29 08:59:06.788 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:07.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:07.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:07 np0005539564 ovn_controller[130591]: 2025-11-29T08:59:07Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:77:6b:0e 10.100.0.8
Nov 29 03:59:09 np0005539564 nova_compute[226295]: 2025-11-29 08:59:09.283 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:09.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:09.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:09 np0005539564 ovn_controller[130591]: 2025-11-29T08:59:09Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:77:6b:0e 10.100.0.8
Nov 29 03:59:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:11.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:11.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:11 np0005539564 nova_compute[226295]: 2025-11-29 08:59:11.790 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.163 226310 DEBUG nova.compute.manager [req-ff01edd4-f976-471e-877b-f69b3b622134 req-f7468b2e-ce44-4407-9e53-522af5c1dc29 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Received event network-changed-a22ea9cb-3479-4928-a0b9-996401fa64ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.163 226310 DEBUG nova.compute.manager [req-ff01edd4-f976-471e-877b-f69b3b622134 req-f7468b2e-ce44-4407-9e53-522af5c1dc29 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Refreshing instance network info cache due to event network-changed-a22ea9cb-3479-4928-a0b9-996401fa64ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.164 226310 DEBUG oslo_concurrency.lockutils [req-ff01edd4-f976-471e-877b-f69b3b622134 req-f7468b2e-ce44-4407-9e53-522af5c1dc29 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-248e436d-b395-42ee-b487-5d8336f62ebc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.164 226310 DEBUG oslo_concurrency.lockutils [req-ff01edd4-f976-471e-877b-f69b3b622134 req-f7468b2e-ce44-4407-9e53-522af5c1dc29 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-248e436d-b395-42ee-b487-5d8336f62ebc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.164 226310 DEBUG nova.network.neutron [req-ff01edd4-f976-471e-877b-f69b3b622134 req-f7468b2e-ce44-4407-9e53-522af5c1dc29 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Refreshing network info cache for port a22ea9cb-3479-4928-a0b9-996401fa64ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.250 226310 DEBUG oslo_concurrency.lockutils [None req-7d2e6e60-a89e-4f10-9135-101f62c2582b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "248e436d-b395-42ee-b487-5d8336f62ebc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.250 226310 DEBUG oslo_concurrency.lockutils [None req-7d2e6e60-a89e-4f10-9135-101f62c2582b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "248e436d-b395-42ee-b487-5d8336f62ebc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.251 226310 DEBUG oslo_concurrency.lockutils [None req-7d2e6e60-a89e-4f10-9135-101f62c2582b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "248e436d-b395-42ee-b487-5d8336f62ebc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.251 226310 DEBUG oslo_concurrency.lockutils [None req-7d2e6e60-a89e-4f10-9135-101f62c2582b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "248e436d-b395-42ee-b487-5d8336f62ebc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.251 226310 DEBUG oslo_concurrency.lockutils [None req-7d2e6e60-a89e-4f10-9135-101f62c2582b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "248e436d-b395-42ee-b487-5d8336f62ebc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.253 226310 INFO nova.compute.manager [None req-7d2e6e60-a89e-4f10-9135-101f62c2582b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Terminating instance#033[00m
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.254 226310 DEBUG nova.compute.manager [None req-7d2e6e60-a89e-4f10-9135-101f62c2582b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:59:12 np0005539564 kernel: tapa22ea9cb-34 (unregistering): left promiscuous mode
Nov 29 03:59:12 np0005539564 NetworkManager[48997]: <info>  [1764406752.3149] device (tapa22ea9cb-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:59:12 np0005539564 ovn_controller[130591]: 2025-11-29T08:59:12Z|00848|binding|INFO|Releasing lport a22ea9cb-3479-4928-a0b9-996401fa64ca from this chassis (sb_readonly=0)
Nov 29 03:59:12 np0005539564 ovn_controller[130591]: 2025-11-29T08:59:12Z|00849|binding|INFO|Setting lport a22ea9cb-3479-4928-a0b9-996401fa64ca down in Southbound
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.324 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:12 np0005539564 ovn_controller[130591]: 2025-11-29T08:59:12Z|00850|binding|INFO|Removing iface tapa22ea9cb-34 ovn-installed in OVS
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.327 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:59:12.336 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:6b:0e 10.100.0.8'], port_security=['fa:16:3e:77:6b:0e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '248e436d-b395-42ee-b487-5d8336f62ebc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5495d9f7-028e-40a5-be0d-02dd42ede587', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '4', 'neutron:security_group_ids': '17d0914d-837e-4b83-a856-fde79ecab611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49eb8326-9a87-4555-8a63-5d46a7abe8d9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=a22ea9cb-3479-4928-a0b9-996401fa64ca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:59:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:59:12.337 139780 INFO neutron.agent.ovn.metadata.agent [-] Port a22ea9cb-3479-4928-a0b9-996401fa64ca in datapath 5495d9f7-028e-40a5-be0d-02dd42ede587 unbound from our chassis#033[00m
Nov 29 03:59:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:59:12.338 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5495d9f7-028e-40a5-be0d-02dd42ede587, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:59:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:59:12.340 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4fcf6c51-a1c0-434f-a202-0a31e9ad28d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:59:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:59:12.341 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5495d9f7-028e-40a5-be0d-02dd42ede587 namespace which is not needed anymore#033[00m
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.357 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:12 np0005539564 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000d1.scope: Deactivated successfully.
Nov 29 03:59:12 np0005539564 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000d1.scope: Consumed 15.654s CPU time.
Nov 29 03:59:12 np0005539564 systemd-machined[190128]: Machine qemu-97-instance-000000d1 terminated.
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.478 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.488 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.499 226310 INFO nova.virt.libvirt.driver [-] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Instance destroyed successfully.#033[00m
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.500 226310 DEBUG nova.objects.instance [None req-7d2e6e60-a89e-4f10-9135-101f62c2582b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'resources' on Instance uuid 248e436d-b395-42ee-b487-5d8336f62ebc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:59:12 np0005539564 neutron-haproxy-ovnmeta-5495d9f7-028e-40a5-be0d-02dd42ede587[309821]: [NOTICE]   (309825) : haproxy version is 2.8.14-c23fe91
Nov 29 03:59:12 np0005539564 neutron-haproxy-ovnmeta-5495d9f7-028e-40a5-be0d-02dd42ede587[309821]: [NOTICE]   (309825) : path to executable is /usr/sbin/haproxy
Nov 29 03:59:12 np0005539564 neutron-haproxy-ovnmeta-5495d9f7-028e-40a5-be0d-02dd42ede587[309821]: [WARNING]  (309825) : Exiting Master process...
Nov 29 03:59:12 np0005539564 neutron-haproxy-ovnmeta-5495d9f7-028e-40a5-be0d-02dd42ede587[309821]: [ALERT]    (309825) : Current worker (309827) exited with code 143 (Terminated)
Nov 29 03:59:12 np0005539564 neutron-haproxy-ovnmeta-5495d9f7-028e-40a5-be0d-02dd42ede587[309821]: [WARNING]  (309825) : All workers exited. Exiting... (0)
Nov 29 03:59:12 np0005539564 systemd[1]: libpod-1fd76852648426288a03f25595f882ffa57725a7c12f4fc4e4ef56648fff3125.scope: Deactivated successfully.
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.517 226310 DEBUG nova.virt.libvirt.vif [None req-7d2e6e60-a89e-4f10-9135-101f62c2582b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:58:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1229925520',display_name='tempest-TestNetworkBasicOps-server-1229925520',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1229925520',id=209,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAkfIgw/ZaoBzZdpsYUr4kfiHc0ww/b5s1kmwWDka4qlUN88/1cleEmBLDimlpCx+jCEGylJ/s++JKE6YAYl1jd2OXD5hhhFOd7A4u5Na3bZ2unx1kWY4ybQHm3R36ZhXw==',key_name='tempest-TestNetworkBasicOps-1192430178',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:58:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-4kzs0lq6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:58:42Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=248e436d-b395-42ee-b487-5d8336f62ebc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a22ea9cb-3479-4928-a0b9-996401fa64ca", "address": "fa:16:3e:77:6b:0e", "network": {"id": "5495d9f7-028e-40a5-be0d-02dd42ede587", "bridge": "br-int", "label": "tempest-network-smoke--581542197", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa22ea9cb-34", "ovs_interfaceid": "a22ea9cb-3479-4928-a0b9-996401fa64ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.518 226310 DEBUG nova.network.os_vif_util [None req-7d2e6e60-a89e-4f10-9135-101f62c2582b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "a22ea9cb-3479-4928-a0b9-996401fa64ca", "address": "fa:16:3e:77:6b:0e", "network": {"id": "5495d9f7-028e-40a5-be0d-02dd42ede587", "bridge": "br-int", "label": "tempest-network-smoke--581542197", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa22ea9cb-34", "ovs_interfaceid": "a22ea9cb-3479-4928-a0b9-996401fa64ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.519 226310 DEBUG nova.network.os_vif_util [None req-7d2e6e60-a89e-4f10-9135-101f62c2582b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:77:6b:0e,bridge_name='br-int',has_traffic_filtering=True,id=a22ea9cb-3479-4928-a0b9-996401fa64ca,network=Network(5495d9f7-028e-40a5-be0d-02dd42ede587),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa22ea9cb-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.519 226310 DEBUG os_vif [None req-7d2e6e60-a89e-4f10-9135-101f62c2582b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:6b:0e,bridge_name='br-int',has_traffic_filtering=True,id=a22ea9cb-3479-4928-a0b9-996401fa64ca,network=Network(5495d9f7-028e-40a5-be0d-02dd42ede587),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa22ea9cb-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:59:12 np0005539564 podman[310149]: 2025-11-29 08:59:12.521122931 +0000 UTC m=+0.069237254 container died 1fd76852648426288a03f25595f882ffa57725a7c12f4fc4e4ef56648fff3125 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5495d9f7-028e-40a5-be0d-02dd42ede587, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.526 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.527 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa22ea9cb-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.530 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.531 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.535 226310 INFO os_vif [None req-7d2e6e60-a89e-4f10-9135-101f62c2582b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:6b:0e,bridge_name='br-int',has_traffic_filtering=True,id=a22ea9cb-3479-4928-a0b9-996401fa64ca,network=Network(5495d9f7-028e-40a5-be0d-02dd42ede587),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa22ea9cb-34')#033[00m
Nov 29 03:59:12 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1fd76852648426288a03f25595f882ffa57725a7c12f4fc4e4ef56648fff3125-userdata-shm.mount: Deactivated successfully.
Nov 29 03:59:12 np0005539564 systemd[1]: var-lib-containers-storage-overlay-bf1b64cf5c12fff6b6dd2bcaef52a61087aad7605da55bb0c049cb0a8285d2b1-merged.mount: Deactivated successfully.
Nov 29 03:59:12 np0005539564 podman[310149]: 2025-11-29 08:59:12.569409248 +0000 UTC m=+0.117523561 container cleanup 1fd76852648426288a03f25595f882ffa57725a7c12f4fc4e4ef56648fff3125 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5495d9f7-028e-40a5-be0d-02dd42ede587, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:59:12 np0005539564 systemd[1]: libpod-conmon-1fd76852648426288a03f25595f882ffa57725a7c12f4fc4e4ef56648fff3125.scope: Deactivated successfully.
Nov 29 03:59:12 np0005539564 podman[310196]: 2025-11-29 08:59:12.674518862 +0000 UTC m=+0.075693949 container remove 1fd76852648426288a03f25595f882ffa57725a7c12f4fc4e4ef56648fff3125 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5495d9f7-028e-40a5-be0d-02dd42ede587, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:59:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:59:12.681 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[049927c0-29ce-448b-9368-d76145cf5f2f]: (4, ('Sat Nov 29 08:59:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5495d9f7-028e-40a5-be0d-02dd42ede587 (1fd76852648426288a03f25595f882ffa57725a7c12f4fc4e4ef56648fff3125)\n1fd76852648426288a03f25595f882ffa57725a7c12f4fc4e4ef56648fff3125\nSat Nov 29 08:59:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5495d9f7-028e-40a5-be0d-02dd42ede587 (1fd76852648426288a03f25595f882ffa57725a7c12f4fc4e4ef56648fff3125)\n1fd76852648426288a03f25595f882ffa57725a7c12f4fc4e4ef56648fff3125\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:59:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:59:12.684 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4c394b8e-7554-48d1-b92a-3da1d06ac80f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:59:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:59:12.686 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5495d9f7-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.689 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:12 np0005539564 kernel: tap5495d9f7-00: left promiscuous mode
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.718 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:59:12.722 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[aa730379-428e-48a4-ace1-d32b4e398d0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:59:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:59:12.736 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1bb69abf-b34b-467c-9daa-06e56eb43665]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:59:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:59:12.738 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[64e3114e-6639-4791-86a9-a2a51e9a04e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:59:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:59:12.759 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[93ea33c7-32ad-426b-b497-a03f92acdff7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 969607, 'reachable_time': 32650, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310213, 'error': None, 'target': 'ovnmeta-5495d9f7-028e-40a5-be0d-02dd42ede587', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:59:12 np0005539564 systemd[1]: run-netns-ovnmeta\x2d5495d9f7\x2d028e\x2d40a5\x2dbe0d\x2d02dd42ede587.mount: Deactivated successfully.
Nov 29 03:59:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:59:12.763 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5495d9f7-028e-40a5-be0d-02dd42ede587 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:59:12 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 08:59:12.763 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[d156d6fe-4fb0-4cc6-9cc2-69e84b5e6e09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.980 226310 INFO nova.virt.libvirt.driver [None req-7d2e6e60-a89e-4f10-9135-101f62c2582b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Deleting instance files /var/lib/nova/instances/248e436d-b395-42ee-b487-5d8336f62ebc_del#033[00m
Nov 29 03:59:12 np0005539564 nova_compute[226295]: 2025-11-29 08:59:12.982 226310 INFO nova.virt.libvirt.driver [None req-7d2e6e60-a89e-4f10-9135-101f62c2582b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Deletion of /var/lib/nova/instances/248e436d-b395-42ee-b487-5d8336f62ebc_del complete#033[00m
Nov 29 03:59:13 np0005539564 nova_compute[226295]: 2025-11-29 08:59:13.069 226310 INFO nova.compute.manager [None req-7d2e6e60-a89e-4f10-9135-101f62c2582b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:59:13 np0005539564 nova_compute[226295]: 2025-11-29 08:59:13.070 226310 DEBUG oslo.service.loopingcall [None req-7d2e6e60-a89e-4f10-9135-101f62c2582b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:59:13 np0005539564 nova_compute[226295]: 2025-11-29 08:59:13.071 226310 DEBUG nova.compute.manager [-] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:59:13 np0005539564 nova_compute[226295]: 2025-11-29 08:59:13.071 226310 DEBUG nova.network.neutron [-] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:59:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:59:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:13.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:59:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:13.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:13 np0005539564 nova_compute[226295]: 2025-11-29 08:59:13.830 226310 DEBUG nova.network.neutron [req-ff01edd4-f976-471e-877b-f69b3b622134 req-f7468b2e-ce44-4407-9e53-522af5c1dc29 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Updated VIF entry in instance network info cache for port a22ea9cb-3479-4928-a0b9-996401fa64ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:59:13 np0005539564 nova_compute[226295]: 2025-11-29 08:59:13.831 226310 DEBUG nova.network.neutron [req-ff01edd4-f976-471e-877b-f69b3b622134 req-f7468b2e-ce44-4407-9e53-522af5c1dc29 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Updating instance_info_cache with network_info: [{"id": "a22ea9cb-3479-4928-a0b9-996401fa64ca", "address": "fa:16:3e:77:6b:0e", "network": {"id": "5495d9f7-028e-40a5-be0d-02dd42ede587", "bridge": "br-int", "label": "tempest-network-smoke--581542197", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa22ea9cb-34", "ovs_interfaceid": "a22ea9cb-3479-4928-a0b9-996401fa64ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:59:13 np0005539564 nova_compute[226295]: 2025-11-29 08:59:13.859 226310 DEBUG oslo_concurrency.lockutils [req-ff01edd4-f976-471e-877b-f69b3b622134 req-f7468b2e-ce44-4407-9e53-522af5c1dc29 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-248e436d-b395-42ee-b487-5d8336f62ebc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:59:14 np0005539564 nova_compute[226295]: 2025-11-29 08:59:14.252 226310 DEBUG nova.network.neutron [-] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:59:14 np0005539564 nova_compute[226295]: 2025-11-29 08:59:14.277 226310 INFO nova.compute.manager [-] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Took 1.21 seconds to deallocate network for instance.#033[00m
Nov 29 03:59:14 np0005539564 nova_compute[226295]: 2025-11-29 08:59:14.348 226310 DEBUG nova.compute.manager [req-5b96283b-9d47-4c79-bdb5-199b2827afe0 req-8ef8ef6e-bdba-4971-a205-922896c3186a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Received event network-vif-unplugged-a22ea9cb-3479-4928-a0b9-996401fa64ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:59:14 np0005539564 nova_compute[226295]: 2025-11-29 08:59:14.349 226310 DEBUG oslo_concurrency.lockutils [req-5b96283b-9d47-4c79-bdb5-199b2827afe0 req-8ef8ef6e-bdba-4971-a205-922896c3186a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "248e436d-b395-42ee-b487-5d8336f62ebc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:59:14 np0005539564 nova_compute[226295]: 2025-11-29 08:59:14.349 226310 DEBUG oslo_concurrency.lockutils [req-5b96283b-9d47-4c79-bdb5-199b2827afe0 req-8ef8ef6e-bdba-4971-a205-922896c3186a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "248e436d-b395-42ee-b487-5d8336f62ebc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:59:14 np0005539564 nova_compute[226295]: 2025-11-29 08:59:14.350 226310 DEBUG oslo_concurrency.lockutils [req-5b96283b-9d47-4c79-bdb5-199b2827afe0 req-8ef8ef6e-bdba-4971-a205-922896c3186a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "248e436d-b395-42ee-b487-5d8336f62ebc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:59:14 np0005539564 nova_compute[226295]: 2025-11-29 08:59:14.350 226310 DEBUG nova.compute.manager [req-5b96283b-9d47-4c79-bdb5-199b2827afe0 req-8ef8ef6e-bdba-4971-a205-922896c3186a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] No waiting events found dispatching network-vif-unplugged-a22ea9cb-3479-4928-a0b9-996401fa64ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:59:14 np0005539564 nova_compute[226295]: 2025-11-29 08:59:14.351 226310 DEBUG nova.compute.manager [req-5b96283b-9d47-4c79-bdb5-199b2827afe0 req-8ef8ef6e-bdba-4971-a205-922896c3186a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Received event network-vif-unplugged-a22ea9cb-3479-4928-a0b9-996401fa64ca for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:59:14 np0005539564 nova_compute[226295]: 2025-11-29 08:59:14.351 226310 DEBUG nova.compute.manager [req-5b96283b-9d47-4c79-bdb5-199b2827afe0 req-8ef8ef6e-bdba-4971-a205-922896c3186a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Received event network-vif-plugged-a22ea9cb-3479-4928-a0b9-996401fa64ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:59:14 np0005539564 nova_compute[226295]: 2025-11-29 08:59:14.352 226310 DEBUG oslo_concurrency.lockutils [req-5b96283b-9d47-4c79-bdb5-199b2827afe0 req-8ef8ef6e-bdba-4971-a205-922896c3186a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "248e436d-b395-42ee-b487-5d8336f62ebc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:59:14 np0005539564 nova_compute[226295]: 2025-11-29 08:59:14.352 226310 DEBUG oslo_concurrency.lockutils [req-5b96283b-9d47-4c79-bdb5-199b2827afe0 req-8ef8ef6e-bdba-4971-a205-922896c3186a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "248e436d-b395-42ee-b487-5d8336f62ebc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:59:14 np0005539564 nova_compute[226295]: 2025-11-29 08:59:14.353 226310 DEBUG oslo_concurrency.lockutils [req-5b96283b-9d47-4c79-bdb5-199b2827afe0 req-8ef8ef6e-bdba-4971-a205-922896c3186a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "248e436d-b395-42ee-b487-5d8336f62ebc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:59:14 np0005539564 nova_compute[226295]: 2025-11-29 08:59:14.353 226310 DEBUG nova.compute.manager [req-5b96283b-9d47-4c79-bdb5-199b2827afe0 req-8ef8ef6e-bdba-4971-a205-922896c3186a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] No waiting events found dispatching network-vif-plugged-a22ea9cb-3479-4928-a0b9-996401fa64ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:59:14 np0005539564 nova_compute[226295]: 2025-11-29 08:59:14.353 226310 WARNING nova.compute.manager [req-5b96283b-9d47-4c79-bdb5-199b2827afe0 req-8ef8ef6e-bdba-4971-a205-922896c3186a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Received unexpected event network-vif-plugged-a22ea9cb-3479-4928-a0b9-996401fa64ca for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:59:14 np0005539564 nova_compute[226295]: 2025-11-29 08:59:14.358 226310 DEBUG oslo_concurrency.lockutils [None req-7d2e6e60-a89e-4f10-9135-101f62c2582b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:59:14 np0005539564 nova_compute[226295]: 2025-11-29 08:59:14.358 226310 DEBUG oslo_concurrency.lockutils [None req-7d2e6e60-a89e-4f10-9135-101f62c2582b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:59:14 np0005539564 nova_compute[226295]: 2025-11-29 08:59:14.410 226310 DEBUG nova.compute.manager [req-9fe682d0-40d0-42b0-ae51-2dd51b304f1b req-e10e8a52-9609-4b47-beb7-6b4d49c25b05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Received event network-vif-deleted-a22ea9cb-3479-4928-a0b9-996401fa64ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:59:14 np0005539564 nova_compute[226295]: 2025-11-29 08:59:14.412 226310 DEBUG oslo_concurrency.processutils [None req-7d2e6e60-a89e-4f10-9135-101f62c2582b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:59:14 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:59:14 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/173704970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:59:14 np0005539564 nova_compute[226295]: 2025-11-29 08:59:14.867 226310 DEBUG oslo_concurrency.processutils [None req-7d2e6e60-a89e-4f10-9135-101f62c2582b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:59:14 np0005539564 nova_compute[226295]: 2025-11-29 08:59:14.876 226310 DEBUG nova.compute.provider_tree [None req-7d2e6e60-a89e-4f10-9135-101f62c2582b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:59:14 np0005539564 nova_compute[226295]: 2025-11-29 08:59:14.894 226310 DEBUG nova.scheduler.client.report [None req-7d2e6e60-a89e-4f10-9135-101f62c2582b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:59:14 np0005539564 nova_compute[226295]: 2025-11-29 08:59:14.921 226310 DEBUG oslo_concurrency.lockutils [None req-7d2e6e60-a89e-4f10-9135-101f62c2582b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:59:14 np0005539564 nova_compute[226295]: 2025-11-29 08:59:14.969 226310 INFO nova.scheduler.client.report [None req-7d2e6e60-a89e-4f10-9135-101f62c2582b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Deleted allocations for instance 248e436d-b395-42ee-b487-5d8336f62ebc#033[00m
Nov 29 03:59:15 np0005539564 nova_compute[226295]: 2025-11-29 08:59:15.032 226310 DEBUG oslo_concurrency.lockutils [None req-7d2e6e60-a89e-4f10-9135-101f62c2582b 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "248e436d-b395-42ee-b487-5d8336f62ebc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:59:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:15.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:59:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:15.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:59:16 np0005539564 nova_compute[226295]: 2025-11-29 08:59:16.793 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:17.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:59:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:17.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:59:17 np0005539564 nova_compute[226295]: 2025-11-29 08:59:17.530 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:59:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:19.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:59:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:19.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:19 np0005539564 podman[310241]: 2025-11-29 08:59:19.529023945 +0000 UTC m=+0.069262754 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 03:59:19 np0005539564 podman[310240]: 2025-11-29 08:59:19.539842198 +0000 UTC m=+0.091276310 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:59:19 np0005539564 podman[310239]: 2025-11-29 08:59:19.574250389 +0000 UTC m=+0.122241008 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Nov 29 03:59:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:21 np0005539564 nova_compute[226295]: 2025-11-29 08:59:21.030 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:21 np0005539564 nova_compute[226295]: 2025-11-29 08:59:21.141 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:21.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:21.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:21 np0005539564 nova_compute[226295]: 2025-11-29 08:59:21.796 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:22 np0005539564 nova_compute[226295]: 2025-11-29 08:59:22.532 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:23.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:23.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:25.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:25.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:26 np0005539564 nova_compute[226295]: 2025-11-29 08:59:26.798 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:26 np0005539564 nova_compute[226295]: 2025-11-29 08:59:26.842 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:27 np0005539564 nova_compute[226295]: 2025-11-29 08:59:27.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:27.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:59:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:27.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:59:27 np0005539564 nova_compute[226295]: 2025-11-29 08:59:27.496 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406752.494755, 248e436d-b395-42ee-b487-5d8336f62ebc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:59:27 np0005539564 nova_compute[226295]: 2025-11-29 08:59:27.496 226310 INFO nova.compute.manager [-] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:59:27 np0005539564 nova_compute[226295]: 2025-11-29 08:59:27.516 226310 DEBUG nova.compute.manager [None req-b3b08039-9e8e-4853-ae20-19ec5287211a - - - - - -] [instance: 248e436d-b395-42ee-b487-5d8336f62ebc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:59:27 np0005539564 nova_compute[226295]: 2025-11-29 08:59:27.534 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:28 np0005539564 nova_compute[226295]: 2025-11-29 08:59:28.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:29.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:29.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:31.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:31.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:31 np0005539564 nova_compute[226295]: 2025-11-29 08:59:31.801 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:32 np0005539564 nova_compute[226295]: 2025-11-29 08:59:32.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:32 np0005539564 nova_compute[226295]: 2025-11-29 08:59:32.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:59:32 np0005539564 nova_compute[226295]: 2025-11-29 08:59:32.536 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:33 np0005539564 nova_compute[226295]: 2025-11-29 08:59:33.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:33 np0005539564 nova_compute[226295]: 2025-11-29 08:59:33.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:59:33 np0005539564 nova_compute[226295]: 2025-11-29 08:59:33.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:59:33 np0005539564 nova_compute[226295]: 2025-11-29 08:59:33.364 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:59:33 np0005539564 nova_compute[226295]: 2025-11-29 08:59:33.364 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:33.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:33.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:35.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:35.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:36 np0005539564 nova_compute[226295]: 2025-11-29 08:59:36.802 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:37.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:37.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:37 np0005539564 nova_compute[226295]: 2025-11-29 08:59:37.538 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:38 np0005539564 nova_compute[226295]: 2025-11-29 08:59:38.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:39.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:39.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:41 np0005539564 nova_compute[226295]: 2025-11-29 08:59:41.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:41.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:59:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:41.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:59:41 np0005539564 nova_compute[226295]: 2025-11-29 08:59:41.804 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:42 np0005539564 nova_compute[226295]: 2025-11-29 08:59:42.541 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:43.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:59:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:43.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:59:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:45.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:59:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:45.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:59:46 np0005539564 nova_compute[226295]: 2025-11-29 08:59:46.806 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:47.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:47.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:47 np0005539564 nova_compute[226295]: 2025-11-29 08:59:47.543 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:59:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:49.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:59:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:49.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:50 np0005539564 nova_compute[226295]: 2025-11-29 08:59:50.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:50 np0005539564 nova_compute[226295]: 2025-11-29 08:59:50.424 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:59:50 np0005539564 nova_compute[226295]: 2025-11-29 08:59:50.424 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:59:50 np0005539564 nova_compute[226295]: 2025-11-29 08:59:50.425 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:59:50 np0005539564 nova_compute[226295]: 2025-11-29 08:59:50.425 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:59:50 np0005539564 nova_compute[226295]: 2025-11-29 08:59:50.425 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:59:50 np0005539564 podman[310310]: 2025-11-29 08:59:50.538881595 +0000 UTC m=+0.078078664 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 29 03:59:50 np0005539564 podman[310309]: 2025-11-29 08:59:50.551790513 +0000 UTC m=+0.102088222 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Nov 29 03:59:50 np0005539564 podman[310308]: 2025-11-29 08:59:50.566184594 +0000 UTC m=+0.114475939 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 03:59:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:59:50 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2471491815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:59:50 np0005539564 nova_compute[226295]: 2025-11-29 08:59:50.908 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:59:51 np0005539564 nova_compute[226295]: 2025-11-29 08:59:51.175 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:59:51 np0005539564 nova_compute[226295]: 2025-11-29 08:59:51.178 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4185MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:59:51 np0005539564 nova_compute[226295]: 2025-11-29 08:59:51.178 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:59:51 np0005539564 nova_compute[226295]: 2025-11-29 08:59:51.179 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:59:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:51.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:51.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:51 np0005539564 nova_compute[226295]: 2025-11-29 08:59:51.782 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:59:51 np0005539564 nova_compute[226295]: 2025-11-29 08:59:51.783 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:59:51 np0005539564 nova_compute[226295]: 2025-11-29 08:59:51.809 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:51 np0005539564 nova_compute[226295]: 2025-11-29 08:59:51.816 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:59:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:59:52 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3744677558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:59:52 np0005539564 nova_compute[226295]: 2025-11-29 08:59:52.302 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:59:52 np0005539564 nova_compute[226295]: 2025-11-29 08:59:52.314 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:59:52 np0005539564 nova_compute[226295]: 2025-11-29 08:59:52.346 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:59:52 np0005539564 nova_compute[226295]: 2025-11-29 08:59:52.426 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:59:52 np0005539564 nova_compute[226295]: 2025-11-29 08:59:52.427 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.248s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:59:52 np0005539564 nova_compute[226295]: 2025-11-29 08:59:52.545 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:59:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:53.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:59:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:53.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:55.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:55.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:55 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:59:55 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:59:55 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:59:55 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:59:56 np0005539564 nova_compute[226295]: 2025-11-29 08:59:56.811 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:57.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:57.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:57 np0005539564 nova_compute[226295]: 2025-11-29 08:59:57.548 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:58 np0005539564 ovn_controller[130591]: 2025-11-29T08:59:58Z|00851|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Nov 29 03:59:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:59.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 03:59:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:59.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:00 np0005539564 ceph-mon[81769]: overall HEALTH_OK
Nov 29 04:00:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:01.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:01.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:01 np0005539564 nova_compute[226295]: 2025-11-29 09:00:01.814 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:00:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:00:02 np0005539564 nova_compute[226295]: 2025-11-29 09:00:02.596 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:00:02.820 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=90, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=89) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:00:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:00:02.822 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:00:02 np0005539564 nova_compute[226295]: 2025-11-29 09:00:02.821 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:03.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:03.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:00:03.780 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:00:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:00:03.780 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:00:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:00:03.781 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:05.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:00:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:05.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #166. Immutable memtables: 0.
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:05.601204) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 166
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406805601291, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 957, "num_deletes": 251, "total_data_size": 1985735, "memory_usage": 2017888, "flush_reason": "Manual Compaction"}
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #167: started
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406805614793, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 167, "file_size": 1299245, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 81956, "largest_seqno": 82908, "table_properties": {"data_size": 1294851, "index_size": 2045, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9708, "raw_average_key_size": 19, "raw_value_size": 1286080, "raw_average_value_size": 2608, "num_data_blocks": 91, "num_entries": 493, "num_filter_entries": 493, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406733, "oldest_key_time": 1764406733, "file_creation_time": 1764406805, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 13631 microseconds, and 7428 cpu microseconds.
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:05.614850) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #167: 1299245 bytes OK
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:05.614875) [db/memtable_list.cc:519] [default] Level-0 commit table #167 started
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:05.617639) [db/memtable_list.cc:722] [default] Level-0 commit table #167: memtable #1 done
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:05.617661) EVENT_LOG_v1 {"time_micros": 1764406805617654, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:05.617684) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 1980969, prev total WAL file size 1980969, number of live WAL files 2.
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000163.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:05.619024) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [167(1268KB)], [165(14MB)]
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406805619093, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [167], "files_L6": [165], "score": -1, "input_data_size": 16400312, "oldest_snapshot_seqno": -1}
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #168: 10775 keys, 14492917 bytes, temperature: kUnknown
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406805791894, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 168, "file_size": 14492917, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14422279, "index_size": 42621, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26949, "raw_key_size": 285469, "raw_average_key_size": 26, "raw_value_size": 14232434, "raw_average_value_size": 1320, "num_data_blocks": 1620, "num_entries": 10775, "num_filter_entries": 10775, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764406805, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 168, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:05.792410) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 14492917 bytes
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:05.797253) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 94.8 rd, 83.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 14.4 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(23.8) write-amplify(11.2) OK, records in: 11290, records dropped: 515 output_compression: NoCompression
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:05.797288) EVENT_LOG_v1 {"time_micros": 1764406805797272, "job": 106, "event": "compaction_finished", "compaction_time_micros": 173071, "compaction_time_cpu_micros": 61812, "output_level": 6, "num_output_files": 1, "total_output_size": 14492917, "num_input_records": 11290, "num_output_records": 10775, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406805798121, "job": 106, "event": "table_file_deletion", "file_number": 167}
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000165.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406805803003, "job": 106, "event": "table_file_deletion", "file_number": 165}
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:05.618827) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:05.803187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:05.803195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:05.803200) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:05.803204) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:00:05 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:05.803208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:00:06 np0005539564 nova_compute[226295]: 2025-11-29 09:00:06.816 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:07.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:07.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:07 np0005539564 nova_compute[226295]: 2025-11-29 09:00:07.598 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:09.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:09.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:11.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:11.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:11 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:00:11.824 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '90'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:00:11 np0005539564 nova_compute[226295]: 2025-11-29 09:00:11.834 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:12 np0005539564 nova_compute[226295]: 2025-11-29 09:00:12.600 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:13.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:13.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:15.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:15.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:16 np0005539564 nova_compute[226295]: 2025-11-29 09:00:16.836 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #169. Immutable memtables: 0.
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:17.288420) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 169
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406817288469, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 357, "num_deletes": 250, "total_data_size": 272159, "memory_usage": 279360, "flush_reason": "Manual Compaction"}
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #170: started
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406817293202, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 170, "file_size": 178336, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82913, "largest_seqno": 83265, "table_properties": {"data_size": 176179, "index_size": 320, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5928, "raw_average_key_size": 20, "raw_value_size": 171912, "raw_average_value_size": 588, "num_data_blocks": 14, "num_entries": 292, "num_filter_entries": 292, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406806, "oldest_key_time": 1764406806, "file_creation_time": 1764406817, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 4836 microseconds, and 1845 cpu microseconds.
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:17.293257) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #170: 178336 bytes OK
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:17.293283) [db/memtable_list.cc:519] [default] Level-0 commit table #170 started
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:17.295586) [db/memtable_list.cc:722] [default] Level-0 commit table #170: memtable #1 done
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:17.295608) EVENT_LOG_v1 {"time_micros": 1764406817295601, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:17.295628) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 269758, prev total WAL file size 269758, number of live WAL files 2.
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000166.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:17.296207) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373638' seq:72057594037927935, type:22 .. '6D6772737461740033303139' seq:0, type:0; will stop at (end)
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [170(174KB)], [168(13MB)]
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406817296271, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [170], "files_L6": [168], "score": -1, "input_data_size": 14671253, "oldest_snapshot_seqno": -1}
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #171: 10559 keys, 10830038 bytes, temperature: kUnknown
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406817432232, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 171, "file_size": 10830038, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10765608, "index_size": 36925, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26437, "raw_key_size": 281177, "raw_average_key_size": 26, "raw_value_size": 10584486, "raw_average_value_size": 1002, "num_data_blocks": 1382, "num_entries": 10559, "num_filter_entries": 10559, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764406817, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 171, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:00:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:17.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:17.432517) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 10830038 bytes
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:17.453397) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 107.9 rd, 79.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 13.8 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(143.0) write-amplify(60.7) OK, records in: 11067, records dropped: 508 output_compression: NoCompression
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:17.453444) EVENT_LOG_v1 {"time_micros": 1764406817453426, "job": 108, "event": "compaction_finished", "compaction_time_micros": 136031, "compaction_time_cpu_micros": 57210, "output_level": 6, "num_output_files": 1, "total_output_size": 10830038, "num_input_records": 11067, "num_output_records": 10559, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406817453702, "job": 108, "event": "table_file_deletion", "file_number": 170}
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000168.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406817457306, "job": 108, "event": "table_file_deletion", "file_number": 168}
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:17.296137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:17.457395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:17.457400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:17.457403) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:17.457405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:00:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:00:17.457408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:00:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:17.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:17 np0005539564 nova_compute[226295]: 2025-11-29 09:00:17.602 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:00:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:19.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:00:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:19.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:21.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:21 np0005539564 podman[310597]: 2025-11-29 09:00:21.539497845 +0000 UTC m=+0.087846998 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:00:21 np0005539564 podman[310598]: 2025-11-29 09:00:21.540831051 +0000 UTC m=+0.075216046 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 04:00:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:21.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:21 np0005539564 podman[310596]: 2025-11-29 09:00:21.58958296 +0000 UTC m=+0.140373079 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:00:21 np0005539564 nova_compute[226295]: 2025-11-29 09:00:21.837 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:22 np0005539564 nova_compute[226295]: 2025-11-29 09:00:22.652 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:00:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:23.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:00:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:23.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:25.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:25.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:26 np0005539564 nova_compute[226295]: 2025-11-29 09:00:26.839 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:27 np0005539564 nova_compute[226295]: 2025-11-29 09:00:27.420 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:27.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:27.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:27 np0005539564 nova_compute[226295]: 2025-11-29 09:00:27.658 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:29 np0005539564 nova_compute[226295]: 2025-11-29 09:00:29.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:29 np0005539564 nova_compute[226295]: 2025-11-29 09:00:29.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:29.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:29.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:00:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:31.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:00:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:31.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:31 np0005539564 nova_compute[226295]: 2025-11-29 09:00:31.842 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:32 np0005539564 nova_compute[226295]: 2025-11-29 09:00:32.336 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:32 np0005539564 nova_compute[226295]: 2025-11-29 09:00:32.708 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:33.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:00:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:33.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:00:34 np0005539564 nova_compute[226295]: 2025-11-29 09:00:34.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:34 np0005539564 nova_compute[226295]: 2025-11-29 09:00:34.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:00:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:35 np0005539564 nova_compute[226295]: 2025-11-29 09:00:35.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:35 np0005539564 nova_compute[226295]: 2025-11-29 09:00:35.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:00:35 np0005539564 nova_compute[226295]: 2025-11-29 09:00:35.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:00:35 np0005539564 nova_compute[226295]: 2025-11-29 09:00:35.379 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:00:35 np0005539564 nova_compute[226295]: 2025-11-29 09:00:35.379 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:35.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:35.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:36 np0005539564 nova_compute[226295]: 2025-11-29 09:00:36.846 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:37.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:37.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:37 np0005539564 nova_compute[226295]: 2025-11-29 09:00:37.710 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:39 np0005539564 nova_compute[226295]: 2025-11-29 09:00:39.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:39.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:39.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:41.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:41.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:41 np0005539564 nova_compute[226295]: 2025-11-29 09:00:41.846 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:42 np0005539564 nova_compute[226295]: 2025-11-29 09:00:42.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:42 np0005539564 nova_compute[226295]: 2025-11-29 09:00:42.713 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:43.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:43.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:45.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:45.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:00:46.385 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=91, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=90) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:00:46 np0005539564 nova_compute[226295]: 2025-11-29 09:00:46.385 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:00:46.388 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:00:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:00:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 68K writes, 266K keys, 68K commit groups, 1.0 writes per commit group, ingest: 0.26 GB, 0.04 MB/s#012Cumulative WAL: 68K writes, 25K syncs, 2.70 writes per sync, written: 0.26 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3263 writes, 13K keys, 3263 commit groups, 1.0 writes per commit group, ingest: 13.75 MB, 0.02 MB/s#012Interval WAL: 3263 writes, 1301 syncs, 2.51 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 04:00:46 np0005539564 nova_compute[226295]: 2025-11-29 09:00:46.892 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:47.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:47.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:47 np0005539564 nova_compute[226295]: 2025-11-29 09:00:47.716 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:49.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:00:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:49.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:00:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:51.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:51.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:51 np0005539564 nova_compute[226295]: 2025-11-29 09:00:51.894 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:52 np0005539564 nova_compute[226295]: 2025-11-29 09:00:52.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:52 np0005539564 nova_compute[226295]: 2025-11-29 09:00:52.381 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:00:52 np0005539564 nova_compute[226295]: 2025-11-29 09:00:52.382 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:00:52 np0005539564 nova_compute[226295]: 2025-11-29 09:00:52.382 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:00:52 np0005539564 nova_compute[226295]: 2025-11-29 09:00:52.382 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:00:52 np0005539564 nova_compute[226295]: 2025-11-29 09:00:52.383 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:00:52 np0005539564 podman[310663]: 2025-11-29 09:00:52.538745837 +0000 UTC m=+0.067638351 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 04:00:52 np0005539564 podman[310662]: 2025-11-29 09:00:52.556396995 +0000 UTC m=+0.095129645 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 04:00:52 np0005539564 podman[310661]: 2025-11-29 09:00:52.597053055 +0000 UTC m=+0.134306435 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:00:52 np0005539564 nova_compute[226295]: 2025-11-29 09:00:52.765 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:52 np0005539564 nova_compute[226295]: 2025-11-29 09:00:52.852 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:00:53 np0005539564 nova_compute[226295]: 2025-11-29 09:00:53.014 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:00:53 np0005539564 nova_compute[226295]: 2025-11-29 09:00:53.015 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4180MB free_disk=20.94271469116211GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:00:53 np0005539564 nova_compute[226295]: 2025-11-29 09:00:53.016 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:00:53 np0005539564 nova_compute[226295]: 2025-11-29 09:00:53.016 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:00:53 np0005539564 nova_compute[226295]: 2025-11-29 09:00:53.084 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:00:53 np0005539564 nova_compute[226295]: 2025-11-29 09:00:53.084 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:00:53 np0005539564 nova_compute[226295]: 2025-11-29 09:00:53.101 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:00:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:53.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:00:53 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/661294477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:00:53 np0005539564 nova_compute[226295]: 2025-11-29 09:00:53.604 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:00:53 np0005539564 nova_compute[226295]: 2025-11-29 09:00:53.611 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:00:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:53.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:53 np0005539564 nova_compute[226295]: 2025-11-29 09:00:53.641 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:00:53 np0005539564 nova_compute[226295]: 2025-11-29 09:00:53.644 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:00:53 np0005539564 nova_compute[226295]: 2025-11-29 09:00:53.644 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:00:54 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:00:54.476 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '91'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:00:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:55.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:55.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:56 np0005539564 nova_compute[226295]: 2025-11-29 09:00:56.920 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:57.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:57.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:57 np0005539564 nova_compute[226295]: 2025-11-29 09:00:57.769 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:00:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:59.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:00:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:00:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:59.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:01.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:01.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:01 np0005539564 nova_compute[226295]: 2025-11-29 09:01:01.924 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:02 np0005539564 nova_compute[226295]: 2025-11-29 09:01:02.771 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:03.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:03.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:03 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:01:03 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:01:03 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:01:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:01:03.781 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:01:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:01:03.781 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:01:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:01:03.781 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:01:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:05.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:05.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:06 np0005539564 nova_compute[226295]: 2025-11-29 09:01:06.926 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:07.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:07.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:07 np0005539564 nova_compute[226295]: 2025-11-29 09:01:07.809 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 04:01:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:09.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 04:01:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:09.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:10 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:01:10 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:01:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:11.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:11.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:11 np0005539564 nova_compute[226295]: 2025-11-29 09:01:11.929 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:12 np0005539564 nova_compute[226295]: 2025-11-29 09:01:12.830 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:13.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:13.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:15.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 04:01:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:15.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 04:01:16 np0005539564 nova_compute[226295]: 2025-11-29 09:01:16.932 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:17.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:17.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:17 np0005539564 nova_compute[226295]: 2025-11-29 09:01:17.876 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:19.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:19.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:21.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:21.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:21 np0005539564 nova_compute[226295]: 2025-11-29 09:01:21.934 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:22 np0005539564 nova_compute[226295]: 2025-11-29 09:01:22.878 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:23 np0005539564 podman[310958]: 2025-11-29 09:01:23.533121007 +0000 UTC m=+0.076159283 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 04:01:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:01:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:23.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:01:23 np0005539564 podman[310959]: 2025-11-29 09:01:23.549736167 +0000 UTC m=+0.084933340 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:01:23 np0005539564 podman[310957]: 2025-11-29 09:01:23.567883587 +0000 UTC m=+0.117391467 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 04:01:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:01:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:23.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:01:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:25.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:25.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:26 np0005539564 nova_compute[226295]: 2025-11-29 09:01:26.937 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:27.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:27 np0005539564 nova_compute[226295]: 2025-11-29 09:01:27.638 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:27.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:27 np0005539564 nova_compute[226295]: 2025-11-29 09:01:27.920 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:29 np0005539564 nova_compute[226295]: 2025-11-29 09:01:29.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:29 np0005539564 nova_compute[226295]: 2025-11-29 09:01:29.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:29.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:29.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:31.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:31.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:31 np0005539564 nova_compute[226295]: 2025-11-29 09:01:31.939 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:32 np0005539564 nova_compute[226295]: 2025-11-29 09:01:32.955 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:33.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:33.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:01:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.0 total, 600.0 interval#012Cumulative writes: 16K writes, 83K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s#012Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.17 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1542 writes, 7764 keys, 1542 commit groups, 1.0 writes per commit group, ingest: 15.57 MB, 0.03 MB/s#012Interval WAL: 1542 writes, 1542 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     25.9      4.01              0.42        54    0.074       0      0       0.0       0.0#012  L6      1/0   10.33 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.4     53.9     46.2     12.09              1.98        53    0.228    428K    28K       0.0       0.0#012 Sum      1/0   10.33 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.4     40.5     41.1     16.10              2.39       107    0.150    428K    28K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.9     55.1     53.9      1.82              0.37        14    0.130     77K   3641       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0     53.9     46.2     12.09              1.98        53    0.228    428K    28K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     25.9      4.01              0.42        53    0.076       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6600.0 total, 600.0 interval#012Flush(GB): cumulative 0.101, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.65 GB write, 0.10 MB/s write, 0.64 GB read, 0.10 MB/s read, 16.1 seconds#012Interval compaction: 0.10 GB write, 0.16 MB/s write, 0.10 GB read, 0.17 MB/s read, 1.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558dc73191f0#2 capacity: 304.00 MB usage: 72.55 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000571 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3963,69.53 MB,22.8718%) FilterBlock(107,1.14 MB,0.376024%) IndexBlock(107,1.88 MB,0.618147%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 04:01:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:35.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:35.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:36 np0005539564 nova_compute[226295]: 2025-11-29 09:01:36.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:36 np0005539564 nova_compute[226295]: 2025-11-29 09:01:36.342 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:01:36 np0005539564 nova_compute[226295]: 2025-11-29 09:01:36.342 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:01:36 np0005539564 nova_compute[226295]: 2025-11-29 09:01:36.359 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:01:36 np0005539564 nova_compute[226295]: 2025-11-29 09:01:36.360 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:36 np0005539564 nova_compute[226295]: 2025-11-29 09:01:36.360 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:01:36 np0005539564 nova_compute[226295]: 2025-11-29 09:01:36.941 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:37 np0005539564 nova_compute[226295]: 2025-11-29 09:01:37.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:37.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:37.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:38 np0005539564 nova_compute[226295]: 2025-11-29 09:01:38.012 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:39.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:39.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:41 np0005539564 nova_compute[226295]: 2025-11-29 09:01:41.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:41.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:41.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:41 np0005539564 nova_compute[226295]: 2025-11-29 09:01:41.945 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:43 np0005539564 nova_compute[226295]: 2025-11-29 09:01:43.014 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:43.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:43.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:44 np0005539564 nova_compute[226295]: 2025-11-29 09:01:44.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:45.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:45.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:46 np0005539564 nova_compute[226295]: 2025-11-29 09:01:46.948 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:47.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:01:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:47.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:01:48 np0005539564 nova_compute[226295]: 2025-11-29 09:01:48.029 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:01:48.908 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=92, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=91) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:01:48 np0005539564 nova_compute[226295]: 2025-11-29 09:01:48.908 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:01:48.910 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:01:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:49.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:49.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:51.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:51.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:51 np0005539564 nova_compute[226295]: 2025-11-29 09:01:51.948 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:52 np0005539564 nova_compute[226295]: 2025-11-29 09:01:52.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:52 np0005539564 nova_compute[226295]: 2025-11-29 09:01:52.376 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:01:52 np0005539564 nova_compute[226295]: 2025-11-29 09:01:52.376 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:01:52 np0005539564 nova_compute[226295]: 2025-11-29 09:01:52.377 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:01:52 np0005539564 nova_compute[226295]: 2025-11-29 09:01:52.377 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:01:52 np0005539564 nova_compute[226295]: 2025-11-29 09:01:52.378 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:01:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:01:52.912 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '92'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:01:53 np0005539564 nova_compute[226295]: 2025-11-29 09:01:53.071 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:53 np0005539564 nova_compute[226295]: 2025-11-29 09:01:53.487 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:01:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:53.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:53 np0005539564 nova_compute[226295]: 2025-11-29 09:01:53.694 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:01:53 np0005539564 nova_compute[226295]: 2025-11-29 09:01:53.695 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4182MB free_disk=20.9427490234375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:01:53 np0005539564 nova_compute[226295]: 2025-11-29 09:01:53.695 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:01:53 np0005539564 nova_compute[226295]: 2025-11-29 09:01:53.696 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:01:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:01:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:53.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:01:53 np0005539564 nova_compute[226295]: 2025-11-29 09:01:53.844 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:01:53 np0005539564 nova_compute[226295]: 2025-11-29 09:01:53.844 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:01:53 np0005539564 nova_compute[226295]: 2025-11-29 09:01:53.950 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:01:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:01:54 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1600940834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:01:54 np0005539564 nova_compute[226295]: 2025-11-29 09:01:54.401 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:01:54 np0005539564 nova_compute[226295]: 2025-11-29 09:01:54.410 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:01:54 np0005539564 nova_compute[226295]: 2025-11-29 09:01:54.436 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:01:54 np0005539564 nova_compute[226295]: 2025-11-29 09:01:54.438 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:01:54 np0005539564 nova_compute[226295]: 2025-11-29 09:01:54.438 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:01:54 np0005539564 podman[311067]: 2025-11-29 09:01:54.528739892 +0000 UTC m=+0.066489782 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 04:01:54 np0005539564 podman[311066]: 2025-11-29 09:01:54.559630298 +0000 UTC m=+0.097752808 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 04:01:54 np0005539564 podman[311065]: 2025-11-29 09:01:54.607579077 +0000 UTC m=+0.157956579 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 04:01:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:55.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:55.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:56 np0005539564 nova_compute[226295]: 2025-11-29 09:01:56.950 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:57.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:57.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:58 np0005539564 nova_compute[226295]: 2025-11-29 09:01:58.074 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:59.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:01:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:59.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:02:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:01.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:02:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:02:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:01.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:02:01 np0005539564 nova_compute[226295]: 2025-11-29 09:02:01.953 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:03 np0005539564 nova_compute[226295]: 2025-11-29 09:02:03.109 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:03.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:03.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:03.782 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:03.782 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:03.782 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:02:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:05.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:05.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:07 np0005539564 nova_compute[226295]: 2025-11-29 09:02:07.000 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:07.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:07.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:08 np0005539564 nova_compute[226295]: 2025-11-29 09:02:08.112 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:09.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:09.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:11.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:11.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:12 np0005539564 nova_compute[226295]: 2025-11-29 09:02:12.041 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:12 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:02:12 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:02:12 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:02:12 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:02:12 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:02:13 np0005539564 nova_compute[226295]: 2025-11-29 09:02:13.157 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:02:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:13.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:02:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:13.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:15.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:15.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:17 np0005539564 nova_compute[226295]: 2025-11-29 09:02:17.044 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:17.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:17.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:18 np0005539564 nova_compute[226295]: 2025-11-29 09:02:18.159 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:19 np0005539564 nova_compute[226295]: 2025-11-29 09:02:19.480 226310 DEBUG oslo_concurrency.lockutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Acquiring lock "168e4a72-21ed-43cc-8551-86ce753fecde" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:19 np0005539564 nova_compute[226295]: 2025-11-29 09:02:19.480 226310 DEBUG oslo_concurrency.lockutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:19 np0005539564 nova_compute[226295]: 2025-11-29 09:02:19.495 226310 DEBUG nova.compute.manager [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 04:02:19 np0005539564 nova_compute[226295]: 2025-11-29 09:02:19.574 226310 DEBUG oslo_concurrency.lockutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:19 np0005539564 nova_compute[226295]: 2025-11-29 09:02:19.575 226310 DEBUG oslo_concurrency.lockutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:19 np0005539564 nova_compute[226295]: 2025-11-29 09:02:19.584 226310 DEBUG nova.virt.hardware [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 04:02:19 np0005539564 nova_compute[226295]: 2025-11-29 09:02:19.584 226310 INFO nova.compute.claims [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 04:02:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:19.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:19 np0005539564 nova_compute[226295]: 2025-11-29 09:02:19.687 226310 DEBUG oslo_concurrency.processutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:02:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:19.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:02:20 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2248713597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:02:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:20 np0005539564 nova_compute[226295]: 2025-11-29 09:02:20.143 226310 DEBUG oslo_concurrency.processutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:02:20 np0005539564 nova_compute[226295]: 2025-11-29 09:02:20.151 226310 DEBUG nova.compute.provider_tree [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:02:20 np0005539564 nova_compute[226295]: 2025-11-29 09:02:20.176 226310 DEBUG nova.scheduler.client.report [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:02:20 np0005539564 nova_compute[226295]: 2025-11-29 09:02:20.211 226310 DEBUG oslo_concurrency.lockutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:02:20 np0005539564 nova_compute[226295]: 2025-11-29 09:02:20.213 226310 DEBUG nova.compute.manager [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 04:02:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:02:20 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:02:20 np0005539564 nova_compute[226295]: 2025-11-29 09:02:20.284 226310 DEBUG nova.compute.manager [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 04:02:20 np0005539564 nova_compute[226295]: 2025-11-29 09:02:20.285 226310 DEBUG nova.network.neutron [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 04:02:20 np0005539564 nova_compute[226295]: 2025-11-29 09:02:20.310 226310 INFO nova.virt.libvirt.driver [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 04:02:20 np0005539564 nova_compute[226295]: 2025-11-29 09:02:20.331 226310 DEBUG nova.compute.manager [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 04:02:20 np0005539564 nova_compute[226295]: 2025-11-29 09:02:20.479 226310 DEBUG nova.compute.manager [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 04:02:20 np0005539564 nova_compute[226295]: 2025-11-29 09:02:20.481 226310 DEBUG nova.virt.libvirt.driver [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 04:02:20 np0005539564 nova_compute[226295]: 2025-11-29 09:02:20.481 226310 INFO nova.virt.libvirt.driver [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Creating image(s)#033[00m
Nov 29 04:02:20 np0005539564 nova_compute[226295]: 2025-11-29 09:02:20.518 226310 DEBUG nova.storage.rbd_utils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] rbd image 168e4a72-21ed-43cc-8551-86ce753fecde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:02:20 np0005539564 nova_compute[226295]: 2025-11-29 09:02:20.563 226310 DEBUG nova.storage.rbd_utils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] rbd image 168e4a72-21ed-43cc-8551-86ce753fecde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:02:20 np0005539564 nova_compute[226295]: 2025-11-29 09:02:20.605 226310 DEBUG nova.storage.rbd_utils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] rbd image 168e4a72-21ed-43cc-8551-86ce753fecde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:02:20 np0005539564 nova_compute[226295]: 2025-11-29 09:02:20.611 226310 DEBUG oslo_concurrency.processutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:02:20 np0005539564 nova_compute[226295]: 2025-11-29 09:02:20.712 226310 DEBUG oslo_concurrency.processutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:02:20 np0005539564 nova_compute[226295]: 2025-11-29 09:02:20.713 226310 DEBUG oslo_concurrency.lockutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:20 np0005539564 nova_compute[226295]: 2025-11-29 09:02:20.715 226310 DEBUG oslo_concurrency.lockutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:20 np0005539564 nova_compute[226295]: 2025-11-29 09:02:20.715 226310 DEBUG oslo_concurrency.lockutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:02:20 np0005539564 nova_compute[226295]: 2025-11-29 09:02:20.759 226310 DEBUG nova.storage.rbd_utils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] rbd image 168e4a72-21ed-43cc-8551-86ce753fecde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:02:20 np0005539564 nova_compute[226295]: 2025-11-29 09:02:20.765 226310 DEBUG oslo_concurrency.processutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 168e4a72-21ed-43cc-8551-86ce753fecde_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:02:21 np0005539564 nova_compute[226295]: 2025-11-29 09:02:21.126 226310 DEBUG oslo_concurrency.processutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 168e4a72-21ed-43cc-8551-86ce753fecde_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:02:21 np0005539564 nova_compute[226295]: 2025-11-29 09:02:21.220 226310 DEBUG nova.storage.rbd_utils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] resizing rbd image 168e4a72-21ed-43cc-8551-86ce753fecde_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 04:02:21 np0005539564 nova_compute[226295]: 2025-11-29 09:02:21.352 226310 DEBUG nova.objects.instance [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Lazy-loading 'migration_context' on Instance uuid 168e4a72-21ed-43cc-8551-86ce753fecde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:02:21 np0005539564 nova_compute[226295]: 2025-11-29 09:02:21.395 226310 DEBUG nova.virt.libvirt.driver [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 04:02:21 np0005539564 nova_compute[226295]: 2025-11-29 09:02:21.395 226310 DEBUG nova.virt.libvirt.driver [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Ensure instance console log exists: /var/lib/nova/instances/168e4a72-21ed-43cc-8551-86ce753fecde/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 04:02:21 np0005539564 nova_compute[226295]: 2025-11-29 09:02:21.396 226310 DEBUG oslo_concurrency.lockutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:21 np0005539564 nova_compute[226295]: 2025-11-29 09:02:21.397 226310 DEBUG oslo_concurrency.lockutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:21 np0005539564 nova_compute[226295]: 2025-11-29 09:02:21.398 226310 DEBUG oslo_concurrency.lockutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:02:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:21.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:21.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:22 np0005539564 nova_compute[226295]: 2025-11-29 09:02:22.046 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:22 np0005539564 nova_compute[226295]: 2025-11-29 09:02:22.076 226310 DEBUG nova.policy [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '72d4bc4447574563becfcc44047872c6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '68bb7f46e7ed430eaa1d724a2abe3a41', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 04:02:23 np0005539564 nova_compute[226295]: 2025-11-29 09:02:23.161 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:02:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:23.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:02:23 np0005539564 nova_compute[226295]: 2025-11-29 09:02:23.631 226310 DEBUG nova.network.neutron [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Successfully created port: 482dbf7c-1611-4a98-973e-b0b5d9581b96 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 04:02:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:23.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:25 np0005539564 podman[311503]: 2025-11-29 09:02:25.553168917 +0000 UTC m=+0.085399655 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 04:02:25 np0005539564 podman[311502]: 2025-11-29 09:02:25.560972608 +0000 UTC m=+0.108153600 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 04:02:25 np0005539564 nova_compute[226295]: 2025-11-29 09:02:25.562 226310 DEBUG nova.network.neutron [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Successfully updated port: 482dbf7c-1611-4a98-973e-b0b5d9581b96 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 04:02:25 np0005539564 podman[311504]: 2025-11-29 09:02:25.570946228 +0000 UTC m=+0.096692159 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:02:25 np0005539564 nova_compute[226295]: 2025-11-29 09:02:25.577 226310 DEBUG oslo_concurrency.lockutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Acquiring lock "refresh_cache-168e4a72-21ed-43cc-8551-86ce753fecde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:02:25 np0005539564 nova_compute[226295]: 2025-11-29 09:02:25.577 226310 DEBUG oslo_concurrency.lockutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Acquired lock "refresh_cache-168e4a72-21ed-43cc-8551-86ce753fecde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:02:25 np0005539564 nova_compute[226295]: 2025-11-29 09:02:25.577 226310 DEBUG nova.network.neutron [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 04:02:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:25.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:25 np0005539564 nova_compute[226295]: 2025-11-29 09:02:25.715 226310 DEBUG nova.compute.manager [req-fe376202-0607-477b-86f6-16cbcfc6a42d req-f04652c8-d92d-42b8-bf0d-11ad09fcee46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Received event network-changed-482dbf7c-1611-4a98-973e-b0b5d9581b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:02:25 np0005539564 nova_compute[226295]: 2025-11-29 09:02:25.716 226310 DEBUG nova.compute.manager [req-fe376202-0607-477b-86f6-16cbcfc6a42d req-f04652c8-d92d-42b8-bf0d-11ad09fcee46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Refreshing instance network info cache due to event network-changed-482dbf7c-1611-4a98-973e-b0b5d9581b96. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 04:02:25 np0005539564 nova_compute[226295]: 2025-11-29 09:02:25.717 226310 DEBUG oslo_concurrency.lockutils [req-fe376202-0607-477b-86f6-16cbcfc6a42d req-f04652c8-d92d-42b8-bf0d-11ad09fcee46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-168e4a72-21ed-43cc-8551-86ce753fecde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:02:25 np0005539564 nova_compute[226295]: 2025-11-29 09:02:25.748 226310 DEBUG nova.network.neutron [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 04:02:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:25.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:27 np0005539564 nova_compute[226295]: 2025-11-29 09:02:27.051 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:27 np0005539564 nova_compute[226295]: 2025-11-29 09:02:27.609 226310 DEBUG nova.network.neutron [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Updating instance_info_cache with network_info: [{"id": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "address": "fa:16:3e:c1:f5:00", "network": {"id": "5f15e24c-f34c-4322-ae3d-17c2166d8091", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-674651663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68bb7f46e7ed430eaa1d724a2abe3a41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482dbf7c-16", "ovs_interfaceid": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:02:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:27.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:27 np0005539564 nova_compute[226295]: 2025-11-29 09:02:27.634 226310 DEBUG oslo_concurrency.lockutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Releasing lock "refresh_cache-168e4a72-21ed-43cc-8551-86ce753fecde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:02:27 np0005539564 nova_compute[226295]: 2025-11-29 09:02:27.635 226310 DEBUG nova.compute.manager [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Instance network_info: |[{"id": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "address": "fa:16:3e:c1:f5:00", "network": {"id": "5f15e24c-f34c-4322-ae3d-17c2166d8091", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-674651663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68bb7f46e7ed430eaa1d724a2abe3a41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482dbf7c-16", "ovs_interfaceid": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 04:02:27 np0005539564 nova_compute[226295]: 2025-11-29 09:02:27.636 226310 DEBUG oslo_concurrency.lockutils [req-fe376202-0607-477b-86f6-16cbcfc6a42d req-f04652c8-d92d-42b8-bf0d-11ad09fcee46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-168e4a72-21ed-43cc-8551-86ce753fecde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:02:27 np0005539564 nova_compute[226295]: 2025-11-29 09:02:27.636 226310 DEBUG nova.network.neutron [req-fe376202-0607-477b-86f6-16cbcfc6a42d req-f04652c8-d92d-42b8-bf0d-11ad09fcee46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Refreshing network info cache for port 482dbf7c-1611-4a98-973e-b0b5d9581b96 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 04:02:27 np0005539564 nova_compute[226295]: 2025-11-29 09:02:27.640 226310 DEBUG nova.virt.libvirt.driver [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Start _get_guest_xml network_info=[{"id": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "address": "fa:16:3e:c1:f5:00", "network": {"id": "5f15e24c-f34c-4322-ae3d-17c2166d8091", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-674651663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68bb7f46e7ed430eaa1d724a2abe3a41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482dbf7c-16", "ovs_interfaceid": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 04:02:27 np0005539564 nova_compute[226295]: 2025-11-29 09:02:27.647 226310 WARNING nova.virt.libvirt.driver [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:02:27 np0005539564 nova_compute[226295]: 2025-11-29 09:02:27.654 226310 DEBUG nova.virt.libvirt.host [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 04:02:27 np0005539564 nova_compute[226295]: 2025-11-29 09:02:27.655 226310 DEBUG nova.virt.libvirt.host [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 04:02:27 np0005539564 nova_compute[226295]: 2025-11-29 09:02:27.668 226310 DEBUG nova.virt.libvirt.host [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 04:02:27 np0005539564 nova_compute[226295]: 2025-11-29 09:02:27.670 226310 DEBUG nova.virt.libvirt.host [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 04:02:27 np0005539564 nova_compute[226295]: 2025-11-29 09:02:27.672 226310 DEBUG nova.virt.libvirt.driver [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 04:02:27 np0005539564 nova_compute[226295]: 2025-11-29 09:02:27.673 226310 DEBUG nova.virt.hardware [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 04:02:27 np0005539564 nova_compute[226295]: 2025-11-29 09:02:27.674 226310 DEBUG nova.virt.hardware [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 04:02:27 np0005539564 nova_compute[226295]: 2025-11-29 09:02:27.674 226310 DEBUG nova.virt.hardware [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 04:02:27 np0005539564 nova_compute[226295]: 2025-11-29 09:02:27.675 226310 DEBUG nova.virt.hardware [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 04:02:27 np0005539564 nova_compute[226295]: 2025-11-29 09:02:27.675 226310 DEBUG nova.virt.hardware [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 04:02:27 np0005539564 nova_compute[226295]: 2025-11-29 09:02:27.676 226310 DEBUG nova.virt.hardware [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 04:02:27 np0005539564 nova_compute[226295]: 2025-11-29 09:02:27.676 226310 DEBUG nova.virt.hardware [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 04:02:27 np0005539564 nova_compute[226295]: 2025-11-29 09:02:27.677 226310 DEBUG nova.virt.hardware [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 04:02:27 np0005539564 nova_compute[226295]: 2025-11-29 09:02:27.677 226310 DEBUG nova.virt.hardware [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 04:02:27 np0005539564 nova_compute[226295]: 2025-11-29 09:02:27.678 226310 DEBUG nova.virt.hardware [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 04:02:27 np0005539564 nova_compute[226295]: 2025-11-29 09:02:27.678 226310 DEBUG nova.virt.hardware [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 04:02:27 np0005539564 nova_compute[226295]: 2025-11-29 09:02:27.684 226310 DEBUG oslo_concurrency.processutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:02:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:27.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 04:02:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1314036464' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.163 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.166 226310 DEBUG oslo_concurrency.processutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.202 226310 DEBUG nova.storage.rbd_utils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] rbd image 168e4a72-21ed-43cc-8551-86ce753fecde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.208 226310 DEBUG oslo_concurrency.processutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.431 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 04:02:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2061222742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.626 226310 DEBUG oslo_concurrency.processutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.629 226310 DEBUG nova.virt.libvirt.vif [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T09:02:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1611733281',display_name='tempest-TestServerAdvancedOps-server-1611733281',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1611733281',id=213,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='68bb7f46e7ed430eaa1d724a2abe3a41',ramdisk_id='',reservation_id='r-aamfen3j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-1996398872',owner_user_name='tempest-TestServerAdvancedOps-1996398872-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T09:02:20Z,user_data=None,user_id='72d4bc4447574563becfcc44047872c6',uuid=168e4a72-21ed-43cc-8551-86ce753fecde,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "address": "fa:16:3e:c1:f5:00", "network": {"id": "5f15e24c-f34c-4322-ae3d-17c2166d8091", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-674651663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68bb7f46e7ed430eaa1d724a2abe3a41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482dbf7c-16", "ovs_interfaceid": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.630 226310 DEBUG nova.network.os_vif_util [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Converting VIF {"id": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "address": "fa:16:3e:c1:f5:00", "network": {"id": "5f15e24c-f34c-4322-ae3d-17c2166d8091", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-674651663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68bb7f46e7ed430eaa1d724a2abe3a41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482dbf7c-16", "ovs_interfaceid": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.632 226310 DEBUG nova.network.os_vif_util [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:f5:00,bridge_name='br-int',has_traffic_filtering=True,id=482dbf7c-1611-4a98-973e-b0b5d9581b96,network=Network(5f15e24c-f34c-4322-ae3d-17c2166d8091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap482dbf7c-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.634 226310 DEBUG nova.objects.instance [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Lazy-loading 'pci_devices' on Instance uuid 168e4a72-21ed-43cc-8551-86ce753fecde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.661 226310 DEBUG nova.virt.libvirt.driver [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] End _get_guest_xml xml=<domain type="kvm">
Nov 29 04:02:28 np0005539564 nova_compute[226295]:  <uuid>168e4a72-21ed-43cc-8551-86ce753fecde</uuid>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:  <name>instance-000000d5</name>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <nova:name>tempest-TestServerAdvancedOps-server-1611733281</nova:name>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 09:02:27</nova:creationTime>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 04:02:28 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:        <nova:user uuid="72d4bc4447574563becfcc44047872c6">tempest-TestServerAdvancedOps-1996398872-project-member</nova:user>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:        <nova:project uuid="68bb7f46e7ed430eaa1d724a2abe3a41">tempest-TestServerAdvancedOps-1996398872</nova:project>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:        <nova:port uuid="482dbf7c-1611-4a98-973e-b0b5d9581b96">
Nov 29 04:02:28 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.2" ipVersion="4"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <system>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <entry name="serial">168e4a72-21ed-43cc-8551-86ce753fecde</entry>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <entry name="uuid">168e4a72-21ed-43cc-8551-86ce753fecde</entry>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    </system>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:  <os>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:  </os>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:  <features>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:  </features>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:  </clock>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:  <devices>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/168e4a72-21ed-43cc-8551-86ce753fecde_disk">
Nov 29 04:02:28 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      </source>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 04:02:28 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      </auth>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    </disk>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/168e4a72-21ed-43cc-8551-86ce753fecde_disk.config">
Nov 29 04:02:28 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      </source>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 04:02:28 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      </auth>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    </disk>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:c1:f5:00"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <target dev="tap482dbf7c-16"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    </interface>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/168e4a72-21ed-43cc-8551-86ce753fecde/console.log" append="off"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    </serial>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <video>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    </video>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    </rng>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 04:02:28 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 04:02:28 np0005539564 nova_compute[226295]:  </devices>
Nov 29 04:02:28 np0005539564 nova_compute[226295]: </domain>
Nov 29 04:02:28 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.663 226310 DEBUG nova.compute.manager [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Preparing to wait for external event network-vif-plugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.664 226310 DEBUG oslo_concurrency.lockutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Acquiring lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.664 226310 DEBUG oslo_concurrency.lockutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.664 226310 DEBUG oslo_concurrency.lockutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.665 226310 DEBUG nova.virt.libvirt.vif [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T09:02:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1611733281',display_name='tempest-TestServerAdvancedOps-server-1611733281',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1611733281',id=213,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='68bb7f46e7ed430eaa1d724a2abe3a41',ramdisk_id='',reservation_id='r-aamfen3j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-1996398872',owner_user_name='tempest-TestServerAdvancedOps-1996398872-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T09:02:20Z,user_data=None,user_id='72d4bc4447574563becfcc44047872c6',uuid=168e4a72-21ed-43cc-8551-86ce753fecde,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "address": "fa:16:3e:c1:f5:00", "network": {"id": "5f15e24c-f34c-4322-ae3d-17c2166d8091", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-674651663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68bb7f46e7ed430eaa1d724a2abe3a41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482dbf7c-16", "ovs_interfaceid": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.665 226310 DEBUG nova.network.os_vif_util [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Converting VIF {"id": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "address": "fa:16:3e:c1:f5:00", "network": {"id": "5f15e24c-f34c-4322-ae3d-17c2166d8091", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-674651663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68bb7f46e7ed430eaa1d724a2abe3a41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482dbf7c-16", "ovs_interfaceid": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.666 226310 DEBUG nova.network.os_vif_util [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:f5:00,bridge_name='br-int',has_traffic_filtering=True,id=482dbf7c-1611-4a98-973e-b0b5d9581b96,network=Network(5f15e24c-f34c-4322-ae3d-17c2166d8091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap482dbf7c-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.667 226310 DEBUG os_vif [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:f5:00,bridge_name='br-int',has_traffic_filtering=True,id=482dbf7c-1611-4a98-973e-b0b5d9581b96,network=Network(5f15e24c-f34c-4322-ae3d-17c2166d8091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap482dbf7c-16') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.667 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.668 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.668 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.673 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.673 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap482dbf7c-16, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.674 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap482dbf7c-16, col_values=(('external_ids', {'iface-id': '482dbf7c-1611-4a98-973e-b0b5d9581b96', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:f5:00', 'vm-uuid': '168e4a72-21ed-43cc-8551-86ce753fecde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.676 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:28 np0005539564 NetworkManager[48997]: <info>  [1764406948.6777] manager: (tap482dbf7c-16): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/398)
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.680 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.685 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.688 226310 INFO os_vif [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:f5:00,bridge_name='br-int',has_traffic_filtering=True,id=482dbf7c-1611-4a98-973e-b0b5d9581b96,network=Network(5f15e24c-f34c-4322-ae3d-17c2166d8091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap482dbf7c-16')#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.755 226310 DEBUG nova.virt.libvirt.driver [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.756 226310 DEBUG nova.virt.libvirt.driver [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.757 226310 DEBUG nova.virt.libvirt.driver [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] No VIF found with MAC fa:16:3e:c1:f5:00, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.758 226310 INFO nova.virt.libvirt.driver [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Using config drive#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.798 226310 DEBUG nova.storage.rbd_utils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] rbd image 168e4a72-21ed-43cc-8551-86ce753fecde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.916 226310 DEBUG nova.network.neutron [req-fe376202-0607-477b-86f6-16cbcfc6a42d req-f04652c8-d92d-42b8-bf0d-11ad09fcee46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Updated VIF entry in instance network info cache for port 482dbf7c-1611-4a98-973e-b0b5d9581b96. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.917 226310 DEBUG nova.network.neutron [req-fe376202-0607-477b-86f6-16cbcfc6a42d req-f04652c8-d92d-42b8-bf0d-11ad09fcee46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Updating instance_info_cache with network_info: [{"id": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "address": "fa:16:3e:c1:f5:00", "network": {"id": "5f15e24c-f34c-4322-ae3d-17c2166d8091", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-674651663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68bb7f46e7ed430eaa1d724a2abe3a41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482dbf7c-16", "ovs_interfaceid": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 04:02:28 np0005539564 nova_compute[226295]: 2025-11-29 09:02:28.936 226310 DEBUG oslo_concurrency.lockutils [req-fe376202-0607-477b-86f6-16cbcfc6a42d req-f04652c8-d92d-42b8-bf0d-11ad09fcee46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-168e4a72-21ed-43cc-8551-86ce753fecde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 04:02:29 np0005539564 nova_compute[226295]: 2025-11-29 09:02:29.360 226310 INFO nova.virt.libvirt.driver [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Creating config drive at /var/lib/nova/instances/168e4a72-21ed-43cc-8551-86ce753fecde/disk.config
Nov 29 04:02:29 np0005539564 nova_compute[226295]: 2025-11-29 09:02:29.369 226310 DEBUG oslo_concurrency.processutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/168e4a72-21ed-43cc-8551-86ce753fecde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptlw1ik9j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:02:29 np0005539564 nova_compute[226295]: 2025-11-29 09:02:29.530 226310 DEBUG oslo_concurrency.processutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/168e4a72-21ed-43cc-8551-86ce753fecde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptlw1ik9j" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 04:02:29 np0005539564 nova_compute[226295]: 2025-11-29 09:02:29.580 226310 DEBUG nova.storage.rbd_utils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] rbd image 168e4a72-21ed-43cc-8551-86ce753fecde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 04:02:29 np0005539564 nova_compute[226295]: 2025-11-29 09:02:29.586 226310 DEBUG oslo_concurrency.processutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/168e4a72-21ed-43cc-8551-86ce753fecde/disk.config 168e4a72-21ed-43cc-8551-86ce753fecde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:02:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:29.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:29 np0005539564 nova_compute[226295]: 2025-11-29 09:02:29.795 226310 DEBUG oslo_concurrency.processutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/168e4a72-21ed-43cc-8551-86ce753fecde/disk.config 168e4a72-21ed-43cc-8551-86ce753fecde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 04:02:29 np0005539564 nova_compute[226295]: 2025-11-29 09:02:29.796 226310 INFO nova.virt.libvirt.driver [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Deleting local config drive /var/lib/nova/instances/168e4a72-21ed-43cc-8551-86ce753fecde/disk.config because it was imported into RBD.
Nov 29 04:02:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:29.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:29 np0005539564 kernel: tap482dbf7c-16: entered promiscuous mode
Nov 29 04:02:29 np0005539564 NetworkManager[48997]: <info>  [1764406949.8708] manager: (tap482dbf7c-16): new Tun device (/org/freedesktop/NetworkManager/Devices/399)
Nov 29 04:02:29 np0005539564 ovn_controller[130591]: 2025-11-29T09:02:29Z|00852|binding|INFO|Claiming lport 482dbf7c-1611-4a98-973e-b0b5d9581b96 for this chassis.
Nov 29 04:02:29 np0005539564 nova_compute[226295]: 2025-11-29 09:02:29.871 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:29 np0005539564 ovn_controller[130591]: 2025-11-29T09:02:29Z|00853|binding|INFO|482dbf7c-1611-4a98-973e-b0b5d9581b96: Claiming fa:16:3e:c1:f5:00 10.100.0.2
Nov 29 04:02:29 np0005539564 nova_compute[226295]: 2025-11-29 09:02:29.878 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:29.887 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:f5:00 10.100.0.2'], port_security=['fa:16:3e:c1:f5:00 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '168e4a72-21ed-43cc-8551-86ce753fecde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f15e24c-f34c-4322-ae3d-17c2166d8091', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68bb7f46e7ed430eaa1d724a2abe3a41', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e750d1e0-89c8-47bc-ac0b-0b086208ee72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6dd1340-4781-4689-93b2-8c74ec464c49, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=482dbf7c-1611-4a98-973e-b0b5d9581b96) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 04:02:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:29.888 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 482dbf7c-1611-4a98-973e-b0b5d9581b96 in datapath 5f15e24c-f34c-4322-ae3d-17c2166d8091 bound to our chassis
Nov 29 04:02:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:29.889 139780 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5f15e24c-f34c-4322-ae3d-17c2166d8091 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 29 04:02:29 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:29.891 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9b2a70c0-8319-4c21-8b37-aa116a18af86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 04:02:29 np0005539564 systemd-machined[190128]: New machine qemu-98-instance-000000d5.
Nov 29 04:02:29 np0005539564 systemd-udevd[311700]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 04:02:29 np0005539564 NetworkManager[48997]: <info>  [1764406949.9340] device (tap482dbf7c-16): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 04:02:29 np0005539564 NetworkManager[48997]: <info>  [1764406949.9354] device (tap482dbf7c-16): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 04:02:29 np0005539564 systemd[1]: Started Virtual Machine qemu-98-instance-000000d5.
Nov 29 04:02:29 np0005539564 nova_compute[226295]: 2025-11-29 09:02:29.945 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:29 np0005539564 ovn_controller[130591]: 2025-11-29T09:02:29Z|00854|binding|INFO|Setting lport 482dbf7c-1611-4a98-973e-b0b5d9581b96 ovn-installed in OVS
Nov 29 04:02:29 np0005539564 ovn_controller[130591]: 2025-11-29T09:02:29Z|00855|binding|INFO|Setting lport 482dbf7c-1611-4a98-973e-b0b5d9581b96 up in Southbound
Nov 29 04:02:29 np0005539564 nova_compute[226295]: 2025-11-29 09:02:29.951 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:30 np0005539564 nova_compute[226295]: 2025-11-29 09:02:30.227 226310 DEBUG nova.compute.manager [req-6c329f11-c2ad-4214-8e59-1b51fc4b9f48 req-251df993-49ad-4ed3-94d8-abde7f297f14 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Received event network-vif-plugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 04:02:30 np0005539564 nova_compute[226295]: 2025-11-29 09:02:30.228 226310 DEBUG oslo_concurrency.lockutils [req-6c329f11-c2ad-4214-8e59-1b51fc4b9f48 req-251df993-49ad-4ed3-94d8-abde7f297f14 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:02:30 np0005539564 nova_compute[226295]: 2025-11-29 09:02:30.228 226310 DEBUG oslo_concurrency.lockutils [req-6c329f11-c2ad-4214-8e59-1b51fc4b9f48 req-251df993-49ad-4ed3-94d8-abde7f297f14 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:02:30 np0005539564 nova_compute[226295]: 2025-11-29 09:02:30.228 226310 DEBUG oslo_concurrency.lockutils [req-6c329f11-c2ad-4214-8e59-1b51fc4b9f48 req-251df993-49ad-4ed3-94d8-abde7f297f14 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:02:30 np0005539564 nova_compute[226295]: 2025-11-29 09:02:30.228 226310 DEBUG nova.compute.manager [req-6c329f11-c2ad-4214-8e59-1b51fc4b9f48 req-251df993-49ad-4ed3-94d8-abde7f297f14 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Processing event network-vif-plugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 04:02:30 np0005539564 nova_compute[226295]: 2025-11-29 09:02:30.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.033 226310 DEBUG nova.compute.manager [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.035 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406951.0327008, 168e4a72-21ed-43cc-8551-86ce753fecde => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.035 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] VM Started (Lifecycle Event)
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.040 226310 DEBUG nova.virt.libvirt.driver [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.045 226310 INFO nova.virt.libvirt.driver [-] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Instance spawned successfully.
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.046 226310 DEBUG nova.virt.libvirt.driver [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.066 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.075 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.083 226310 DEBUG nova.virt.libvirt.driver [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.084 226310 DEBUG nova.virt.libvirt.driver [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.085 226310 DEBUG nova.virt.libvirt.driver [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.086 226310 DEBUG nova.virt.libvirt.driver [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.087 226310 DEBUG nova.virt.libvirt.driver [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.088 226310 DEBUG nova.virt.libvirt.driver [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.101 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.102 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406951.033883, 168e4a72-21ed-43cc-8551-86ce753fecde => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.102 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] VM Paused (Lifecycle Event)
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.143 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.149 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406951.0385838, 168e4a72-21ed-43cc-8551-86ce753fecde => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.150 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] VM Resumed (Lifecycle Event)
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.156 226310 INFO nova.compute.manager [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Took 10.68 seconds to spawn the instance on the hypervisor.
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.157 226310 DEBUG nova.compute.manager [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.172 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.178 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.206 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.222 226310 INFO nova.compute.manager [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Took 11.68 seconds to build instance.
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.238 226310 DEBUG oslo_concurrency.lockutils [None req-c0bcdf1c-b021-4cb2-9523-fa59179e2fdf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:02:31 np0005539564 nova_compute[226295]: 2025-11-29 09:02:31.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:02:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:31.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:31.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:32 np0005539564 nova_compute[226295]: 2025-11-29 09:02:32.051 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:32 np0005539564 nova_compute[226295]: 2025-11-29 09:02:32.374 226310 DEBUG nova.compute.manager [req-66ecf4a1-c975-4b19-8ce3-953a6f58e204 req-1d1148fd-dbb8-421c-9a7f-37882c5e4e59 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Received event network-vif-plugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 04:02:32 np0005539564 nova_compute[226295]: 2025-11-29 09:02:32.374 226310 DEBUG oslo_concurrency.lockutils [req-66ecf4a1-c975-4b19-8ce3-953a6f58e204 req-1d1148fd-dbb8-421c-9a7f-37882c5e4e59 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:02:32 np0005539564 nova_compute[226295]: 2025-11-29 09:02:32.375 226310 DEBUG oslo_concurrency.lockutils [req-66ecf4a1-c975-4b19-8ce3-953a6f58e204 req-1d1148fd-dbb8-421c-9a7f-37882c5e4e59 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:02:32 np0005539564 nova_compute[226295]: 2025-11-29 09:02:32.375 226310 DEBUG oslo_concurrency.lockutils [req-66ecf4a1-c975-4b19-8ce3-953a6f58e204 req-1d1148fd-dbb8-421c-9a7f-37882c5e4e59 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:02:32 np0005539564 nova_compute[226295]: 2025-11-29 09:02:32.376 226310 DEBUG nova.compute.manager [req-66ecf4a1-c975-4b19-8ce3-953a6f58e204 req-1d1148fd-dbb8-421c-9a7f-37882c5e4e59 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] No waiting events found dispatching network-vif-plugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 04:02:32 np0005539564 nova_compute[226295]: 2025-11-29 09:02:32.376 226310 WARNING nova.compute.manager [req-66ecf4a1-c975-4b19-8ce3-953a6f58e204 req-1d1148fd-dbb8-421c-9a7f-37882c5e4e59 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Received unexpected event network-vif-plugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 for instance with vm_state active and task_state None.
Nov 29 04:02:33 np0005539564 nova_compute[226295]: 2025-11-29 09:02:33.257 226310 DEBUG nova.objects.instance [None req-956acedb-cbae-4ec2-a9ce-0b08ac27cfe1 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Lazy-loading 'pci_devices' on Instance uuid 168e4a72-21ed-43cc-8551-86ce753fecde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 04:02:33 np0005539564 nova_compute[226295]: 2025-11-29 09:02:33.286 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406953.2862902, 168e4a72-21ed-43cc-8551-86ce753fecde => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 04:02:33 np0005539564 nova_compute[226295]: 2025-11-29 09:02:33.287 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] VM Paused (Lifecycle Event)
Nov 29 04:02:33 np0005539564 nova_compute[226295]: 2025-11-29 09:02:33.304 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 04:02:33 np0005539564 nova_compute[226295]: 2025-11-29 09:02:33.309 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 04:02:33 np0005539564 nova_compute[226295]: 2025-11-29 09:02:33.327 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 29 04:02:33 np0005539564 nova_compute[226295]: 2025-11-29 09:02:33.335 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:02:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:33.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:33 np0005539564 nova_compute[226295]: 2025-11-29 09:02:33.723 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:02:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:33.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:02:34 np0005539564 kernel: tap482dbf7c-16 (unregistering): left promiscuous mode
Nov 29 04:02:34 np0005539564 NetworkManager[48997]: <info>  [1764406954.0346] device (tap482dbf7c-16): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 04:02:34 np0005539564 ovn_controller[130591]: 2025-11-29T09:02:34Z|00856|binding|INFO|Releasing lport 482dbf7c-1611-4a98-973e-b0b5d9581b96 from this chassis (sb_readonly=0)
Nov 29 04:02:34 np0005539564 ovn_controller[130591]: 2025-11-29T09:02:34Z|00857|binding|INFO|Setting lport 482dbf7c-1611-4a98-973e-b0b5d9581b96 down in Southbound
Nov 29 04:02:34 np0005539564 nova_compute[226295]: 2025-11-29 09:02:34.048 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:34 np0005539564 ovn_controller[130591]: 2025-11-29T09:02:34Z|00858|binding|INFO|Removing iface tap482dbf7c-16 ovn-installed in OVS
Nov 29 04:02:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:34.057 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:f5:00 10.100.0.2'], port_security=['fa:16:3e:c1:f5:00 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '168e4a72-21ed-43cc-8551-86ce753fecde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f15e24c-f34c-4322-ae3d-17c2166d8091', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68bb7f46e7ed430eaa1d724a2abe3a41', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e750d1e0-89c8-47bc-ac0b-0b086208ee72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6dd1340-4781-4689-93b2-8c74ec464c49, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=482dbf7c-1611-4a98-973e-b0b5d9581b96) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 04:02:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:34.059 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 482dbf7c-1611-4a98-973e-b0b5d9581b96 in datapath 5f15e24c-f34c-4322-ae3d-17c2166d8091 unbound from our chassis
Nov 29 04:02:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:34.060 139780 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5f15e24c-f34c-4322-ae3d-17c2166d8091 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 29 04:02:34 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:34.061 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[83fc498b-6fc5-4de7-8a43-86b923fa85dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 04:02:34 np0005539564 nova_compute[226295]: 2025-11-29 09:02:34.076 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:34 np0005539564 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000d5.scope: Deactivated successfully.
Nov 29 04:02:34 np0005539564 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000d5.scope: Consumed 3.697s CPU time.
Nov 29 04:02:34 np0005539564 systemd-machined[190128]: Machine qemu-98-instance-000000d5 terminated.
Nov 29 04:02:34 np0005539564 nova_compute[226295]: 2025-11-29 09:02:34.235 226310 DEBUG nova.compute.manager [None req-956acedb-cbae-4ec2-a9ce-0b08ac27cfe1 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:02:34 np0005539564 nova_compute[226295]: 2025-11-29 09:02:34.455 226310 DEBUG nova.compute.manager [req-4d788da6-ab6d-48b1-8cd9-d1e1f78aa359 req-0a4fec3b-fcf3-4921-8f13-fa79577f669b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Received event network-vif-unplugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:02:34 np0005539564 nova_compute[226295]: 2025-11-29 09:02:34.456 226310 DEBUG oslo_concurrency.lockutils [req-4d788da6-ab6d-48b1-8cd9-d1e1f78aa359 req-0a4fec3b-fcf3-4921-8f13-fa79577f669b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:34 np0005539564 nova_compute[226295]: 2025-11-29 09:02:34.457 226310 DEBUG oslo_concurrency.lockutils [req-4d788da6-ab6d-48b1-8cd9-d1e1f78aa359 req-0a4fec3b-fcf3-4921-8f13-fa79577f669b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:34 np0005539564 nova_compute[226295]: 2025-11-29 09:02:34.457 226310 DEBUG oslo_concurrency.lockutils [req-4d788da6-ab6d-48b1-8cd9-d1e1f78aa359 req-0a4fec3b-fcf3-4921-8f13-fa79577f669b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:02:34 np0005539564 nova_compute[226295]: 2025-11-29 09:02:34.458 226310 DEBUG nova.compute.manager [req-4d788da6-ab6d-48b1-8cd9-d1e1f78aa359 req-0a4fec3b-fcf3-4921-8f13-fa79577f669b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] No waiting events found dispatching network-vif-unplugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:02:34 np0005539564 nova_compute[226295]: 2025-11-29 09:02:34.458 226310 WARNING nova.compute.manager [req-4d788da6-ab6d-48b1-8cd9-d1e1f78aa359 req-0a4fec3b-fcf3-4921-8f13-fa79577f669b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Received unexpected event network-vif-unplugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 for instance with vm_state suspended and task_state None.#033[00m
Nov 29 04:02:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:35.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:02:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:35.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:02:36 np0005539564 nova_compute[226295]: 2025-11-29 09:02:36.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:36 np0005539564 nova_compute[226295]: 2025-11-29 09:02:36.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:02:36 np0005539564 nova_compute[226295]: 2025-11-29 09:02:36.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:02:36 np0005539564 nova_compute[226295]: 2025-11-29 09:02:36.366 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-168e4a72-21ed-43cc-8551-86ce753fecde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:02:36 np0005539564 nova_compute[226295]: 2025-11-29 09:02:36.367 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-168e4a72-21ed-43cc-8551-86ce753fecde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:02:36 np0005539564 nova_compute[226295]: 2025-11-29 09:02:36.367 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 04:02:36 np0005539564 nova_compute[226295]: 2025-11-29 09:02:36.368 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 168e4a72-21ed-43cc-8551-86ce753fecde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:02:36 np0005539564 nova_compute[226295]: 2025-11-29 09:02:36.389 226310 INFO nova.compute.manager [None req-3877958f-3742-402c-b886-06ccf05756b1 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Resuming#033[00m
Nov 29 04:02:36 np0005539564 nova_compute[226295]: 2025-11-29 09:02:36.391 226310 DEBUG nova.objects.instance [None req-3877958f-3742-402c-b886-06ccf05756b1 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Lazy-loading 'flavor' on Instance uuid 168e4a72-21ed-43cc-8551-86ce753fecde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:02:36 np0005539564 nova_compute[226295]: 2025-11-29 09:02:36.441 226310 DEBUG oslo_concurrency.lockutils [None req-3877958f-3742-402c-b886-06ccf05756b1 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Acquiring lock "refresh_cache-168e4a72-21ed-43cc-8551-86ce753fecde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:02:36 np0005539564 nova_compute[226295]: 2025-11-29 09:02:36.591 226310 DEBUG nova.compute.manager [req-43076ce3-2b18-485b-b2cf-d44145018abc req-ef8441f8-edcd-460d-ae3e-714e045fd2e7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Received event network-vif-plugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:02:36 np0005539564 nova_compute[226295]: 2025-11-29 09:02:36.592 226310 DEBUG oslo_concurrency.lockutils [req-43076ce3-2b18-485b-b2cf-d44145018abc req-ef8441f8-edcd-460d-ae3e-714e045fd2e7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:36 np0005539564 nova_compute[226295]: 2025-11-29 09:02:36.592 226310 DEBUG oslo_concurrency.lockutils [req-43076ce3-2b18-485b-b2cf-d44145018abc req-ef8441f8-edcd-460d-ae3e-714e045fd2e7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:36 np0005539564 nova_compute[226295]: 2025-11-29 09:02:36.593 226310 DEBUG oslo_concurrency.lockutils [req-43076ce3-2b18-485b-b2cf-d44145018abc req-ef8441f8-edcd-460d-ae3e-714e045fd2e7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:02:36 np0005539564 nova_compute[226295]: 2025-11-29 09:02:36.593 226310 DEBUG nova.compute.manager [req-43076ce3-2b18-485b-b2cf-d44145018abc req-ef8441f8-edcd-460d-ae3e-714e045fd2e7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] No waiting events found dispatching network-vif-plugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:02:36 np0005539564 nova_compute[226295]: 2025-11-29 09:02:36.594 226310 WARNING nova.compute.manager [req-43076ce3-2b18-485b-b2cf-d44145018abc req-ef8441f8-edcd-460d-ae3e-714e045fd2e7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Received unexpected event network-vif-plugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 for instance with vm_state suspended and task_state resuming.#033[00m
Nov 29 04:02:37 np0005539564 nova_compute[226295]: 2025-11-29 09:02:37.053 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:37.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:37.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:38 np0005539564 nova_compute[226295]: 2025-11-29 09:02:38.521 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Updating instance_info_cache with network_info: [{"id": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "address": "fa:16:3e:c1:f5:00", "network": {"id": "5f15e24c-f34c-4322-ae3d-17c2166d8091", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-674651663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68bb7f46e7ed430eaa1d724a2abe3a41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482dbf7c-16", "ovs_interfaceid": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:02:38 np0005539564 nova_compute[226295]: 2025-11-29 09:02:38.539 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-168e4a72-21ed-43cc-8551-86ce753fecde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:02:38 np0005539564 nova_compute[226295]: 2025-11-29 09:02:38.540 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 04:02:38 np0005539564 nova_compute[226295]: 2025-11-29 09:02:38.540 226310 DEBUG oslo_concurrency.lockutils [None req-3877958f-3742-402c-b886-06ccf05756b1 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Acquired lock "refresh_cache-168e4a72-21ed-43cc-8551-86ce753fecde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:02:38 np0005539564 nova_compute[226295]: 2025-11-29 09:02:38.541 226310 DEBUG nova.network.neutron [None req-3877958f-3742-402c-b886-06ccf05756b1 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 04:02:38 np0005539564 nova_compute[226295]: 2025-11-29 09:02:38.543 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:38 np0005539564 nova_compute[226295]: 2025-11-29 09:02:38.543 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:38 np0005539564 nova_compute[226295]: 2025-11-29 09:02:38.544 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:02:38 np0005539564 nova_compute[226295]: 2025-11-29 09:02:38.725 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:02:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:39.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:02:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:39.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:40 np0005539564 nova_compute[226295]: 2025-11-29 09:02:40.283 226310 DEBUG nova.network.neutron [None req-3877958f-3742-402c-b886-06ccf05756b1 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Updating instance_info_cache with network_info: [{"id": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "address": "fa:16:3e:c1:f5:00", "network": {"id": "5f15e24c-f34c-4322-ae3d-17c2166d8091", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-674651663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68bb7f46e7ed430eaa1d724a2abe3a41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482dbf7c-16", "ovs_interfaceid": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:02:40 np0005539564 nova_compute[226295]: 2025-11-29 09:02:40.299 226310 DEBUG oslo_concurrency.lockutils [None req-3877958f-3742-402c-b886-06ccf05756b1 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Releasing lock "refresh_cache-168e4a72-21ed-43cc-8551-86ce753fecde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:02:40 np0005539564 nova_compute[226295]: 2025-11-29 09:02:40.306 226310 DEBUG nova.virt.libvirt.vif [None req-3877958f-3742-402c-b886-06ccf05756b1 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T09:02:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1611733281',display_name='tempest-TestServerAdvancedOps-server-1611733281',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1611733281',id=213,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T09:02:31Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='68bb7f46e7ed430eaa1d724a2abe3a41',ramdisk_id='',reservation_id='r-aamfen3j',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-1996398872',owner_user_name='tempest-TestServerAdvancedOps-1996398872-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T09:02:34Z,user_data=None,user_id='72d4bc4447574563becfcc44047872c6',uuid=168e4a72-21ed-43cc-8551-86ce753fecde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "address": "fa:16:3e:c1:f5:00", "network": {"id": "5f15e24c-f34c-4322-ae3d-17c2166d8091", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-674651663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68bb7f46e7ed430eaa1d724a2abe3a41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482dbf7c-16", "ovs_interfaceid": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 04:02:40 np0005539564 nova_compute[226295]: 2025-11-29 09:02:40.307 226310 DEBUG nova.network.os_vif_util [None req-3877958f-3742-402c-b886-06ccf05756b1 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Converting VIF {"id": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "address": "fa:16:3e:c1:f5:00", "network": {"id": "5f15e24c-f34c-4322-ae3d-17c2166d8091", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-674651663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68bb7f46e7ed430eaa1d724a2abe3a41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482dbf7c-16", "ovs_interfaceid": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:02:40 np0005539564 nova_compute[226295]: 2025-11-29 09:02:40.308 226310 DEBUG nova.network.os_vif_util [None req-3877958f-3742-402c-b886-06ccf05756b1 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:f5:00,bridge_name='br-int',has_traffic_filtering=True,id=482dbf7c-1611-4a98-973e-b0b5d9581b96,network=Network(5f15e24c-f34c-4322-ae3d-17c2166d8091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap482dbf7c-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:02:40 np0005539564 nova_compute[226295]: 2025-11-29 09:02:40.309 226310 DEBUG os_vif [None req-3877958f-3742-402c-b886-06ccf05756b1 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:f5:00,bridge_name='br-int',has_traffic_filtering=True,id=482dbf7c-1611-4a98-973e-b0b5d9581b96,network=Network(5f15e24c-f34c-4322-ae3d-17c2166d8091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap482dbf7c-16') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 04:02:40 np0005539564 nova_compute[226295]: 2025-11-29 09:02:40.310 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:40 np0005539564 nova_compute[226295]: 2025-11-29 09:02:40.310 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:02:40 np0005539564 nova_compute[226295]: 2025-11-29 09:02:40.311 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 04:02:40 np0005539564 nova_compute[226295]: 2025-11-29 09:02:40.316 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:40 np0005539564 nova_compute[226295]: 2025-11-29 09:02:40.316 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap482dbf7c-16, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:02:40 np0005539564 nova_compute[226295]: 2025-11-29 09:02:40.317 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap482dbf7c-16, col_values=(('external_ids', {'iface-id': '482dbf7c-1611-4a98-973e-b0b5d9581b96', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:f5:00', 'vm-uuid': '168e4a72-21ed-43cc-8551-86ce753fecde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:02:40 np0005539564 nova_compute[226295]: 2025-11-29 09:02:40.318 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 04:02:40 np0005539564 nova_compute[226295]: 2025-11-29 09:02:40.319 226310 INFO os_vif [None req-3877958f-3742-402c-b886-06ccf05756b1 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:f5:00,bridge_name='br-int',has_traffic_filtering=True,id=482dbf7c-1611-4a98-973e-b0b5d9581b96,network=Network(5f15e24c-f34c-4322-ae3d-17c2166d8091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap482dbf7c-16')#033[00m
Nov 29 04:02:40 np0005539564 nova_compute[226295]: 2025-11-29 09:02:40.349 226310 DEBUG nova.objects.instance [None req-3877958f-3742-402c-b886-06ccf05756b1 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Lazy-loading 'numa_topology' on Instance uuid 168e4a72-21ed-43cc-8551-86ce753fecde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:02:40 np0005539564 kernel: tap482dbf7c-16: entered promiscuous mode
Nov 29 04:02:40 np0005539564 NetworkManager[48997]: <info>  [1764406960.4530] manager: (tap482dbf7c-16): new Tun device (/org/freedesktop/NetworkManager/Devices/400)
Nov 29 04:02:40 np0005539564 nova_compute[226295]: 2025-11-29 09:02:40.454 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:40 np0005539564 ovn_controller[130591]: 2025-11-29T09:02:40Z|00859|binding|INFO|Claiming lport 482dbf7c-1611-4a98-973e-b0b5d9581b96 for this chassis.
Nov 29 04:02:40 np0005539564 ovn_controller[130591]: 2025-11-29T09:02:40Z|00860|binding|INFO|482dbf7c-1611-4a98-973e-b0b5d9581b96: Claiming fa:16:3e:c1:f5:00 10.100.0.2
Nov 29 04:02:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:40.462 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:f5:00 10.100.0.2'], port_security=['fa:16:3e:c1:f5:00 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '168e4a72-21ed-43cc-8551-86ce753fecde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f15e24c-f34c-4322-ae3d-17c2166d8091', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68bb7f46e7ed430eaa1d724a2abe3a41', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e750d1e0-89c8-47bc-ac0b-0b086208ee72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6dd1340-4781-4689-93b2-8c74ec464c49, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=482dbf7c-1611-4a98-973e-b0b5d9581b96) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 04:02:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:40.465 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 482dbf7c-1611-4a98-973e-b0b5d9581b96 in datapath 5f15e24c-f34c-4322-ae3d-17c2166d8091 bound to our chassis
Nov 29 04:02:40 np0005539564 nova_compute[226295]: 2025-11-29 09:02:40.466 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:40.466 139780 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5f15e24c-f34c-4322-ae3d-17c2166d8091 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 29 04:02:40 np0005539564 ovn_controller[130591]: 2025-11-29T09:02:40Z|00861|binding|INFO|Setting lport 482dbf7c-1611-4a98-973e-b0b5d9581b96 ovn-installed in OVS
Nov 29 04:02:40 np0005539564 ovn_controller[130591]: 2025-11-29T09:02:40Z|00862|binding|INFO|Setting lport 482dbf7c-1611-4a98-973e-b0b5d9581b96 up in Southbound
Nov 29 04:02:40 np0005539564 nova_compute[226295]: 2025-11-29 09:02:40.468 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:40 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:40.469 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a90965ba-9287-42a4-8260-99fff5902ff3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 04:02:40 np0005539564 nova_compute[226295]: 2025-11-29 09:02:40.475 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:40 np0005539564 systemd-udevd[311785]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 04:02:40 np0005539564 systemd-machined[190128]: New machine qemu-99-instance-000000d5.
Nov 29 04:02:40 np0005539564 NetworkManager[48997]: <info>  [1764406960.5004] device (tap482dbf7c-16): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 04:02:40 np0005539564 NetworkManager[48997]: <info>  [1764406960.5014] device (tap482dbf7c-16): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 04:02:40 np0005539564 systemd[1]: Started Virtual Machine qemu-99-instance-000000d5.
Nov 29 04:02:40 np0005539564 nova_compute[226295]: 2025-11-29 09:02:40.669 226310 DEBUG nova.compute.manager [req-7de772fa-6fde-4a97-b32e-f849dca3239b req-ac5d9ee1-8689-4e2d-a850-c222cdcfa35d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Received event network-vif-plugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 04:02:40 np0005539564 nova_compute[226295]: 2025-11-29 09:02:40.670 226310 DEBUG oslo_concurrency.lockutils [req-7de772fa-6fde-4a97-b32e-f849dca3239b req-ac5d9ee1-8689-4e2d-a850-c222cdcfa35d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:02:40 np0005539564 nova_compute[226295]: 2025-11-29 09:02:40.670 226310 DEBUG oslo_concurrency.lockutils [req-7de772fa-6fde-4a97-b32e-f849dca3239b req-ac5d9ee1-8689-4e2d-a850-c222cdcfa35d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:02:40 np0005539564 nova_compute[226295]: 2025-11-29 09:02:40.671 226310 DEBUG oslo_concurrency.lockutils [req-7de772fa-6fde-4a97-b32e-f849dca3239b req-ac5d9ee1-8689-4e2d-a850-c222cdcfa35d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:02:40 np0005539564 nova_compute[226295]: 2025-11-29 09:02:40.671 226310 DEBUG nova.compute.manager [req-7de772fa-6fde-4a97-b32e-f849dca3239b req-ac5d9ee1-8689-4e2d-a850-c222cdcfa35d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] No waiting events found dispatching network-vif-plugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 04:02:40 np0005539564 nova_compute[226295]: 2025-11-29 09:02:40.671 226310 WARNING nova.compute.manager [req-7de772fa-6fde-4a97-b32e-f849dca3239b req-ac5d9ee1-8689-4e2d-a850-c222cdcfa35d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Received unexpected event network-vif-plugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 for instance with vm_state suspended and task_state resuming.
Nov 29 04:02:41 np0005539564 nova_compute[226295]: 2025-11-29 09:02:41.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:02:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:41.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:41.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:41 np0005539564 nova_compute[226295]: 2025-11-29 09:02:41.931 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Removed pending event for 168e4a72-21ed-43cc-8551-86ce753fecde due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 04:02:41 np0005539564 nova_compute[226295]: 2025-11-29 09:02:41.932 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406961.9310005, 168e4a72-21ed-43cc-8551-86ce753fecde => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 04:02:41 np0005539564 nova_compute[226295]: 2025-11-29 09:02:41.933 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] VM Started (Lifecycle Event)
Nov 29 04:02:41 np0005539564 nova_compute[226295]: 2025-11-29 09:02:41.952 226310 DEBUG nova.compute.manager [None req-3877958f-3742-402c-b886-06ccf05756b1 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 04:02:41 np0005539564 nova_compute[226295]: 2025-11-29 09:02:41.952 226310 DEBUG nova.objects.instance [None req-3877958f-3742-402c-b886-06ccf05756b1 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Lazy-loading 'pci_devices' on Instance uuid 168e4a72-21ed-43cc-8551-86ce753fecde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 04:02:41 np0005539564 nova_compute[226295]: 2025-11-29 09:02:41.980 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 04:02:41 np0005539564 nova_compute[226295]: 2025-11-29 09:02:41.989 226310 INFO nova.virt.libvirt.driver [-] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Instance running successfully.
Nov 29 04:02:41 np0005539564 nova_compute[226295]: 2025-11-29 09:02:41.989 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 04:02:41 np0005539564 virtqemud[225880]: argument unsupported: QEMU guest agent is not configured
Nov 29 04:02:41 np0005539564 nova_compute[226295]: 2025-11-29 09:02:41.993 226310 DEBUG nova.virt.libvirt.guest [None req-3877958f-3742-402c-b886-06ccf05756b1 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 29 04:02:41 np0005539564 nova_compute[226295]: 2025-11-29 09:02:41.994 226310 DEBUG nova.compute.manager [None req-3877958f-3742-402c-b886-06ccf05756b1 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 04:02:42 np0005539564 nova_compute[226295]: 2025-11-29 09:02:42.026 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 29 04:02:42 np0005539564 nova_compute[226295]: 2025-11-29 09:02:42.026 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406961.9365325, 168e4a72-21ed-43cc-8551-86ce753fecde => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 04:02:42 np0005539564 nova_compute[226295]: 2025-11-29 09:02:42.026 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] VM Resumed (Lifecycle Event)
Nov 29 04:02:42 np0005539564 nova_compute[226295]: 2025-11-29 09:02:42.056 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 04:02:42 np0005539564 nova_compute[226295]: 2025-11-29 09:02:42.057 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:42 np0005539564 nova_compute[226295]: 2025-11-29 09:02:42.061 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 04:02:42 np0005539564 nova_compute[226295]: 2025-11-29 09:02:42.813 226310 DEBUG nova.compute.manager [req-0b2ba4c6-eb2f-4f2e-8608-6c9490ddb984 req-42e8d105-5be2-4c83-9f63-3e825069065f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Received event network-vif-plugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 04:02:42 np0005539564 nova_compute[226295]: 2025-11-29 09:02:42.813 226310 DEBUG oslo_concurrency.lockutils [req-0b2ba4c6-eb2f-4f2e-8608-6c9490ddb984 req-42e8d105-5be2-4c83-9f63-3e825069065f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:02:42 np0005539564 nova_compute[226295]: 2025-11-29 09:02:42.814 226310 DEBUG oslo_concurrency.lockutils [req-0b2ba4c6-eb2f-4f2e-8608-6c9490ddb984 req-42e8d105-5be2-4c83-9f63-3e825069065f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:02:42 np0005539564 nova_compute[226295]: 2025-11-29 09:02:42.814 226310 DEBUG oslo_concurrency.lockutils [req-0b2ba4c6-eb2f-4f2e-8608-6c9490ddb984 req-42e8d105-5be2-4c83-9f63-3e825069065f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:02:42 np0005539564 nova_compute[226295]: 2025-11-29 09:02:42.815 226310 DEBUG nova.compute.manager [req-0b2ba4c6-eb2f-4f2e-8608-6c9490ddb984 req-42e8d105-5be2-4c83-9f63-3e825069065f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] No waiting events found dispatching network-vif-plugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 04:02:42 np0005539564 nova_compute[226295]: 2025-11-29 09:02:42.815 226310 WARNING nova.compute.manager [req-0b2ba4c6-eb2f-4f2e-8608-6c9490ddb984 req-42e8d105-5be2-4c83-9f63-3e825069065f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Received unexpected event network-vif-plugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 for instance with vm_state active and task_state None.
Nov 29 04:02:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:43.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:43 np0005539564 nova_compute[226295]: 2025-11-29 09:02:43.730 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:43.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:44 np0005539564 nova_compute[226295]: 2025-11-29 09:02:44.520 226310 DEBUG nova.objects.instance [None req-0f599898-7de3-43ed-aed3-2848eea44a4f 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Lazy-loading 'pci_devices' on Instance uuid 168e4a72-21ed-43cc-8551-86ce753fecde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 04:02:44 np0005539564 nova_compute[226295]: 2025-11-29 09:02:44.547 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406964.5466096, 168e4a72-21ed-43cc-8551-86ce753fecde => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 04:02:44 np0005539564 nova_compute[226295]: 2025-11-29 09:02:44.548 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] VM Paused (Lifecycle Event)
Nov 29 04:02:44 np0005539564 nova_compute[226295]: 2025-11-29 09:02:44.570 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 04:02:44 np0005539564 nova_compute[226295]: 2025-11-29 09:02:44.575 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 04:02:44 np0005539564 nova_compute[226295]: 2025-11-29 09:02:44.599 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 29 04:02:45 np0005539564 kernel: tap482dbf7c-16 (unregistering): left promiscuous mode
Nov 29 04:02:45 np0005539564 NetworkManager[48997]: <info>  [1764406965.1317] device (tap482dbf7c-16): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 04:02:45 np0005539564 ovn_controller[130591]: 2025-11-29T09:02:45Z|00863|binding|INFO|Releasing lport 482dbf7c-1611-4a98-973e-b0b5d9581b96 from this chassis (sb_readonly=0)
Nov 29 04:02:45 np0005539564 ovn_controller[130591]: 2025-11-29T09:02:45Z|00864|binding|INFO|Setting lport 482dbf7c-1611-4a98-973e-b0b5d9581b96 down in Southbound
Nov 29 04:02:45 np0005539564 ovn_controller[130591]: 2025-11-29T09:02:45Z|00865|binding|INFO|Removing iface tap482dbf7c-16 ovn-installed in OVS
Nov 29 04:02:45 np0005539564 nova_compute[226295]: 2025-11-29 09:02:45.142 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:45.155 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:f5:00 10.100.0.2'], port_security=['fa:16:3e:c1:f5:00 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '168e4a72-21ed-43cc-8551-86ce753fecde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f15e24c-f34c-4322-ae3d-17c2166d8091', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68bb7f46e7ed430eaa1d724a2abe3a41', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'e750d1e0-89c8-47bc-ac0b-0b086208ee72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6dd1340-4781-4689-93b2-8c74ec464c49, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=482dbf7c-1611-4a98-973e-b0b5d9581b96) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 04:02:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:45.156 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 482dbf7c-1611-4a98-973e-b0b5d9581b96 in datapath 5f15e24c-f34c-4322-ae3d-17c2166d8091 unbound from our chassis
Nov 29 04:02:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:45.157 139780 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5f15e24c-f34c-4322-ae3d-17c2166d8091 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 29 04:02:45 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:45.158 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3a85e9a8-5f31-420b-b9c2-e35c1d0941b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 04:02:45 np0005539564 nova_compute[226295]: 2025-11-29 09:02:45.174 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:45 np0005539564 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000d5.scope: Deactivated successfully.
Nov 29 04:02:45 np0005539564 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000d5.scope: Consumed 3.922s CPU time.
Nov 29 04:02:45 np0005539564 systemd-machined[190128]: Machine qemu-99-instance-000000d5 terminated.
Nov 29 04:02:45 np0005539564 nova_compute[226295]: 2025-11-29 09:02:45.322 226310 DEBUG nova.compute.manager [req-731edb62-1e41-474e-ba1e-5712fc267c4d req-9b883976-553e-4d71-82a3-1380e87bbb80 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Received event network-vif-unplugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 04:02:45 np0005539564 nova_compute[226295]: 2025-11-29 09:02:45.324 226310 DEBUG oslo_concurrency.lockutils [req-731edb62-1e41-474e-ba1e-5712fc267c4d req-9b883976-553e-4d71-82a3-1380e87bbb80 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:02:45 np0005539564 nova_compute[226295]: 2025-11-29 09:02:45.324 226310 DEBUG oslo_concurrency.lockutils [req-731edb62-1e41-474e-ba1e-5712fc267c4d req-9b883976-553e-4d71-82a3-1380e87bbb80 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:02:45 np0005539564 nova_compute[226295]: 2025-11-29 09:02:45.325 226310 DEBUG oslo_concurrency.lockutils [req-731edb62-1e41-474e-ba1e-5712fc267c4d req-9b883976-553e-4d71-82a3-1380e87bbb80 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:02:45 np0005539564 nova_compute[226295]: 2025-11-29 09:02:45.325 226310 DEBUG nova.compute.manager [req-731edb62-1e41-474e-ba1e-5712fc267c4d req-9b883976-553e-4d71-82a3-1380e87bbb80 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] No waiting events found dispatching network-vif-unplugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 04:02:45 np0005539564 nova_compute[226295]: 2025-11-29 09:02:45.325 226310 WARNING nova.compute.manager [req-731edb62-1e41-474e-ba1e-5712fc267c4d req-9b883976-553e-4d71-82a3-1380e87bbb80 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Received unexpected event network-vif-unplugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 for instance with vm_state active and task_state suspending.
Nov 29 04:02:45 np0005539564 nova_compute[226295]: 2025-11-29 09:02:45.326 226310 DEBUG nova.compute.manager [None req-0f599898-7de3-43ed-aed3-2848eea44a4f 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 04:02:45 np0005539564 nova_compute[226295]: 2025-11-29 09:02:45.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:02:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:45.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:45.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:46 np0005539564 nova_compute[226295]: 2025-11-29 09:02:46.862 226310 INFO nova.compute.manager [None req-e2a2f689-83d4-40bf-998b-74ed285237f2 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Resuming
Nov 29 04:02:46 np0005539564 nova_compute[226295]: 2025-11-29 09:02:46.864 226310 DEBUG nova.objects.instance [None req-e2a2f689-83d4-40bf-998b-74ed285237f2 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Lazy-loading 'flavor' on Instance uuid 168e4a72-21ed-43cc-8551-86ce753fecde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 04:02:46 np0005539564 nova_compute[226295]: 2025-11-29 09:02:46.907 226310 DEBUG oslo_concurrency.lockutils [None req-e2a2f689-83d4-40bf-998b-74ed285237f2 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Acquiring lock "refresh_cache-168e4a72-21ed-43cc-8551-86ce753fecde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 04:02:46 np0005539564 nova_compute[226295]: 2025-11-29 09:02:46.907 226310 DEBUG oslo_concurrency.lockutils [None req-e2a2f689-83d4-40bf-998b-74ed285237f2 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Acquired lock "refresh_cache-168e4a72-21ed-43cc-8551-86ce753fecde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 04:02:46 np0005539564 nova_compute[226295]: 2025-11-29 09:02:46.908 226310 DEBUG nova.network.neutron [None req-e2a2f689-83d4-40bf-998b-74ed285237f2 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 04:02:47 np0005539564 nova_compute[226295]: 2025-11-29 09:02:47.059 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:47 np0005539564 nova_compute[226295]: 2025-11-29 09:02:47.435 226310 DEBUG nova.compute.manager [req-a7b4461c-9697-47a2-8dbe-efe386768fdc req-a55ab012-6227-40bf-8dfe-37cc9bbf4871 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Received event network-vif-plugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 04:02:47 np0005539564 nova_compute[226295]: 2025-11-29 09:02:47.436 226310 DEBUG oslo_concurrency.lockutils [req-a7b4461c-9697-47a2-8dbe-efe386768fdc req-a55ab012-6227-40bf-8dfe-37cc9bbf4871 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:02:47 np0005539564 nova_compute[226295]: 2025-11-29 09:02:47.436 226310 DEBUG oslo_concurrency.lockutils [req-a7b4461c-9697-47a2-8dbe-efe386768fdc req-a55ab012-6227-40bf-8dfe-37cc9bbf4871 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:02:47 np0005539564 nova_compute[226295]: 2025-11-29 09:02:47.436 226310 DEBUG oslo_concurrency.lockutils [req-a7b4461c-9697-47a2-8dbe-efe386768fdc req-a55ab012-6227-40bf-8dfe-37cc9bbf4871 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:02:47 np0005539564 nova_compute[226295]: 2025-11-29 09:02:47.436 226310 DEBUG nova.compute.manager [req-a7b4461c-9697-47a2-8dbe-efe386768fdc req-a55ab012-6227-40bf-8dfe-37cc9bbf4871 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] No waiting events found dispatching network-vif-plugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 04:02:47 np0005539564 nova_compute[226295]: 2025-11-29 09:02:47.437 226310 WARNING nova.compute.manager [req-a7b4461c-9697-47a2-8dbe-efe386768fdc req-a55ab012-6227-40bf-8dfe-37cc9bbf4871 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Received unexpected event network-vif-plugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 for instance with vm_state suspended and task_state resuming.
Nov 29 04:02:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:47.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:47.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:48 np0005539564 nova_compute[226295]: 2025-11-29 09:02:48.732 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:49 np0005539564 nova_compute[226295]: 2025-11-29 09:02:49.246 226310 DEBUG nova.network.neutron [None req-e2a2f689-83d4-40bf-998b-74ed285237f2 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Updating instance_info_cache with network_info: [{"id": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "address": "fa:16:3e:c1:f5:00", "network": {"id": "5f15e24c-f34c-4322-ae3d-17c2166d8091", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-674651663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68bb7f46e7ed430eaa1d724a2abe3a41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482dbf7c-16", "ovs_interfaceid": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:02:49 np0005539564 nova_compute[226295]: 2025-11-29 09:02:49.272 226310 DEBUG oslo_concurrency.lockutils [None req-e2a2f689-83d4-40bf-998b-74ed285237f2 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Releasing lock "refresh_cache-168e4a72-21ed-43cc-8551-86ce753fecde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:02:49 np0005539564 nova_compute[226295]: 2025-11-29 09:02:49.279 226310 DEBUG nova.virt.libvirt.vif [None req-e2a2f689-83d4-40bf-998b-74ed285237f2 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T09:02:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1611733281',display_name='tempest-TestServerAdvancedOps-server-1611733281',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1611733281',id=213,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T09:02:31Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='68bb7f46e7ed430eaa1d724a2abe3a41',ramdisk_id='',reservation_id='r-aamfen3j',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_m
in_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-1996398872',owner_user_name='tempest-TestServerAdvancedOps-1996398872-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T09:02:45Z,user_data=None,user_id='72d4bc4447574563becfcc44047872c6',uuid=168e4a72-21ed-43cc-8551-86ce753fecde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "address": "fa:16:3e:c1:f5:00", "network": {"id": "5f15e24c-f34c-4322-ae3d-17c2166d8091", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-674651663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68bb7f46e7ed430eaa1d724a2abe3a41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482dbf7c-16", "ovs_interfaceid": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 04:02:49 np0005539564 nova_compute[226295]: 2025-11-29 09:02:49.280 226310 DEBUG nova.network.os_vif_util [None req-e2a2f689-83d4-40bf-998b-74ed285237f2 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Converting VIF {"id": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "address": "fa:16:3e:c1:f5:00", "network": {"id": "5f15e24c-f34c-4322-ae3d-17c2166d8091", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-674651663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68bb7f46e7ed430eaa1d724a2abe3a41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482dbf7c-16", "ovs_interfaceid": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:02:49 np0005539564 nova_compute[226295]: 2025-11-29 09:02:49.282 226310 DEBUG nova.network.os_vif_util [None req-e2a2f689-83d4-40bf-998b-74ed285237f2 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:f5:00,bridge_name='br-int',has_traffic_filtering=True,id=482dbf7c-1611-4a98-973e-b0b5d9581b96,network=Network(5f15e24c-f34c-4322-ae3d-17c2166d8091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap482dbf7c-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:02:49 np0005539564 nova_compute[226295]: 2025-11-29 09:02:49.283 226310 DEBUG os_vif [None req-e2a2f689-83d4-40bf-998b-74ed285237f2 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:f5:00,bridge_name='br-int',has_traffic_filtering=True,id=482dbf7c-1611-4a98-973e-b0b5d9581b96,network=Network(5f15e24c-f34c-4322-ae3d-17c2166d8091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap482dbf7c-16') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 04:02:49 np0005539564 nova_compute[226295]: 2025-11-29 09:02:49.284 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:49 np0005539564 nova_compute[226295]: 2025-11-29 09:02:49.284 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:02:49 np0005539564 nova_compute[226295]: 2025-11-29 09:02:49.285 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 04:02:49 np0005539564 nova_compute[226295]: 2025-11-29 09:02:49.289 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:49 np0005539564 nova_compute[226295]: 2025-11-29 09:02:49.289 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap482dbf7c-16, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:02:49 np0005539564 nova_compute[226295]: 2025-11-29 09:02:49.290 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap482dbf7c-16, col_values=(('external_ids', {'iface-id': '482dbf7c-1611-4a98-973e-b0b5d9581b96', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:f5:00', 'vm-uuid': '168e4a72-21ed-43cc-8551-86ce753fecde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:02:49 np0005539564 nova_compute[226295]: 2025-11-29 09:02:49.291 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 04:02:49 np0005539564 nova_compute[226295]: 2025-11-29 09:02:49.291 226310 INFO os_vif [None req-e2a2f689-83d4-40bf-998b-74ed285237f2 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:f5:00,bridge_name='br-int',has_traffic_filtering=True,id=482dbf7c-1611-4a98-973e-b0b5d9581b96,network=Network(5f15e24c-f34c-4322-ae3d-17c2166d8091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap482dbf7c-16')#033[00m
Nov 29 04:02:49 np0005539564 nova_compute[226295]: 2025-11-29 09:02:49.324 226310 DEBUG nova.objects.instance [None req-e2a2f689-83d4-40bf-998b-74ed285237f2 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Lazy-loading 'numa_topology' on Instance uuid 168e4a72-21ed-43cc-8551-86ce753fecde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:02:49 np0005539564 NetworkManager[48997]: <info>  [1764406969.4261] manager: (tap482dbf7c-16): new Tun device (/org/freedesktop/NetworkManager/Devices/401)
Nov 29 04:02:49 np0005539564 kernel: tap482dbf7c-16: entered promiscuous mode
Nov 29 04:02:49 np0005539564 systemd-udevd[311869]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 04:02:49 np0005539564 nova_compute[226295]: 2025-11-29 09:02:49.475 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:49 np0005539564 ovn_controller[130591]: 2025-11-29T09:02:49Z|00866|binding|INFO|Claiming lport 482dbf7c-1611-4a98-973e-b0b5d9581b96 for this chassis.
Nov 29 04:02:49 np0005539564 ovn_controller[130591]: 2025-11-29T09:02:49Z|00867|binding|INFO|482dbf7c-1611-4a98-973e-b0b5d9581b96: Claiming fa:16:3e:c1:f5:00 10.100.0.2
Nov 29 04:02:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:49.482 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:f5:00 10.100.0.2'], port_security=['fa:16:3e:c1:f5:00 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '168e4a72-21ed-43cc-8551-86ce753fecde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f15e24c-f34c-4322-ae3d-17c2166d8091', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68bb7f46e7ed430eaa1d724a2abe3a41', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'e750d1e0-89c8-47bc-ac0b-0b086208ee72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6dd1340-4781-4689-93b2-8c74ec464c49, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=482dbf7c-1611-4a98-973e-b0b5d9581b96) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:02:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:49.483 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 482dbf7c-1611-4a98-973e-b0b5d9581b96 in datapath 5f15e24c-f34c-4322-ae3d-17c2166d8091 bound to our chassis#033[00m
Nov 29 04:02:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:49.484 139780 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5f15e24c-f34c-4322-ae3d-17c2166d8091 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 04:02:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:49.484 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1fed2225-c77f-485b-af6c-5bf35b3c7e74]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:02:49 np0005539564 ovn_controller[130591]: 2025-11-29T09:02:49Z|00868|binding|INFO|Setting lport 482dbf7c-1611-4a98-973e-b0b5d9581b96 up in Southbound
Nov 29 04:02:49 np0005539564 ovn_controller[130591]: 2025-11-29T09:02:49Z|00869|binding|INFO|Setting lport 482dbf7c-1611-4a98-973e-b0b5d9581b96 ovn-installed in OVS
Nov 29 04:02:49 np0005539564 NetworkManager[48997]: <info>  [1764406969.4997] device (tap482dbf7c-16): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 04:02:49 np0005539564 nova_compute[226295]: 2025-11-29 09:02:49.498 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:49 np0005539564 NetworkManager[48997]: <info>  [1764406969.5016] device (tap482dbf7c-16): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 04:02:49 np0005539564 nova_compute[226295]: 2025-11-29 09:02:49.500 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:49 np0005539564 nova_compute[226295]: 2025-11-29 09:02:49.506 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:49 np0005539564 systemd-machined[190128]: New machine qemu-100-instance-000000d5.
Nov 29 04:02:49 np0005539564 systemd[1]: Started Virtual Machine qemu-100-instance-000000d5.
Nov 29 04:02:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:49.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:49.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:50 np0005539564 nova_compute[226295]: 2025-11-29 09:02:50.242 226310 DEBUG nova.compute.manager [req-731ba49b-9bde-4c5e-b06c-e4b8b9249199 req-a8655cdb-7b2b-4822-9b90-8cd02c4269e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Received event network-vif-plugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:02:50 np0005539564 nova_compute[226295]: 2025-11-29 09:02:50.245 226310 DEBUG oslo_concurrency.lockutils [req-731ba49b-9bde-4c5e-b06c-e4b8b9249199 req-a8655cdb-7b2b-4822-9b90-8cd02c4269e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:50 np0005539564 nova_compute[226295]: 2025-11-29 09:02:50.245 226310 DEBUG oslo_concurrency.lockutils [req-731ba49b-9bde-4c5e-b06c-e4b8b9249199 req-a8655cdb-7b2b-4822-9b90-8cd02c4269e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:50 np0005539564 nova_compute[226295]: 2025-11-29 09:02:50.246 226310 DEBUG oslo_concurrency.lockutils [req-731ba49b-9bde-4c5e-b06c-e4b8b9249199 req-a8655cdb-7b2b-4822-9b90-8cd02c4269e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:02:50 np0005539564 nova_compute[226295]: 2025-11-29 09:02:50.246 226310 DEBUG nova.compute.manager [req-731ba49b-9bde-4c5e-b06c-e4b8b9249199 req-a8655cdb-7b2b-4822-9b90-8cd02c4269e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] No waiting events found dispatching network-vif-plugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:02:50 np0005539564 nova_compute[226295]: 2025-11-29 09:02:50.247 226310 WARNING nova.compute.manager [req-731ba49b-9bde-4c5e-b06c-e4b8b9249199 req-a8655cdb-7b2b-4822-9b90-8cd02c4269e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Received unexpected event network-vif-plugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 for instance with vm_state suspended and task_state resuming.#033[00m
Nov 29 04:02:50 np0005539564 nova_compute[226295]: 2025-11-29 09:02:50.388 226310 DEBUG nova.virt.libvirt.host [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Removed pending event for 168e4a72-21ed-43cc-8551-86ce753fecde due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 04:02:50 np0005539564 nova_compute[226295]: 2025-11-29 09:02:50.389 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406970.3881323, 168e4a72-21ed-43cc-8551-86ce753fecde => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:02:50 np0005539564 nova_compute[226295]: 2025-11-29 09:02:50.389 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] VM Started (Lifecycle Event)#033[00m
Nov 29 04:02:50 np0005539564 nova_compute[226295]: 2025-11-29 09:02:50.407 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:02:50 np0005539564 nova_compute[226295]: 2025-11-29 09:02:50.427 226310 DEBUG nova.compute.manager [None req-e2a2f689-83d4-40bf-998b-74ed285237f2 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 04:02:50 np0005539564 nova_compute[226295]: 2025-11-29 09:02:50.427 226310 DEBUG nova.objects.instance [None req-e2a2f689-83d4-40bf-998b-74ed285237f2 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Lazy-loading 'pci_devices' on Instance uuid 168e4a72-21ed-43cc-8551-86ce753fecde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:02:50 np0005539564 nova_compute[226295]: 2025-11-29 09:02:50.431 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 04:02:50 np0005539564 nova_compute[226295]: 2025-11-29 09:02:50.447 226310 INFO nova.virt.libvirt.driver [-] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Instance running successfully.#033[00m
Nov 29 04:02:50 np0005539564 virtqemud[225880]: argument unsupported: QEMU guest agent is not configured
Nov 29 04:02:50 np0005539564 nova_compute[226295]: 2025-11-29 09:02:50.450 226310 DEBUG nova.virt.libvirt.guest [None req-e2a2f689-83d4-40bf-998b-74ed285237f2 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 29 04:02:50 np0005539564 nova_compute[226295]: 2025-11-29 09:02:50.451 226310 DEBUG nova.compute.manager [None req-e2a2f689-83d4-40bf-998b-74ed285237f2 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:02:50 np0005539564 nova_compute[226295]: 2025-11-29 09:02:50.453 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Nov 29 04:02:50 np0005539564 nova_compute[226295]: 2025-11-29 09:02:50.454 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764406970.3930025, 168e4a72-21ed-43cc-8551-86ce753fecde => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:02:50 np0005539564 nova_compute[226295]: 2025-11-29 09:02:50.454 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] VM Resumed (Lifecycle Event)#033[00m
Nov 29 04:02:50 np0005539564 nova_compute[226295]: 2025-11-29 09:02:50.481 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:02:50 np0005539564 nova_compute[226295]: 2025-11-29 09:02:50.485 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 04:02:50 np0005539564 nova_compute[226295]: 2025-11-29 09:02:50.508 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Nov 29 04:02:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:51.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:51.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.059 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.339 226310 DEBUG nova.compute.manager [req-816d9d5f-f3c2-4eb4-9844-b6b02fec3282 req-6e1f4263-c6d1-447d-ad1c-391df6108eb2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Received event network-vif-plugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.340 226310 DEBUG oslo_concurrency.lockutils [req-816d9d5f-f3c2-4eb4-9844-b6b02fec3282 req-6e1f4263-c6d1-447d-ad1c-391df6108eb2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.341 226310 DEBUG oslo_concurrency.lockutils [req-816d9d5f-f3c2-4eb4-9844-b6b02fec3282 req-6e1f4263-c6d1-447d-ad1c-391df6108eb2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.342 226310 DEBUG oslo_concurrency.lockutils [req-816d9d5f-f3c2-4eb4-9844-b6b02fec3282 req-6e1f4263-c6d1-447d-ad1c-391df6108eb2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.342 226310 DEBUG nova.compute.manager [req-816d9d5f-f3c2-4eb4-9844-b6b02fec3282 req-6e1f4263-c6d1-447d-ad1c-391df6108eb2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] No waiting events found dispatching network-vif-plugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.343 226310 WARNING nova.compute.manager [req-816d9d5f-f3c2-4eb4-9844-b6b02fec3282 req-6e1f4263-c6d1-447d-ad1c-391df6108eb2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Received unexpected event network-vif-plugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 for instance with vm_state active and task_state None.#033[00m
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.394 226310 DEBUG oslo_concurrency.lockutils [None req-6dd7d2dd-19c1-4bb8-9282-8e09861f9bcf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Acquiring lock "168e4a72-21ed-43cc-8551-86ce753fecde" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.395 226310 DEBUG oslo_concurrency.lockutils [None req-6dd7d2dd-19c1-4bb8-9282-8e09861f9bcf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.396 226310 DEBUG oslo_concurrency.lockutils [None req-6dd7d2dd-19c1-4bb8-9282-8e09861f9bcf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Acquiring lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.397 226310 DEBUG oslo_concurrency.lockutils [None req-6dd7d2dd-19c1-4bb8-9282-8e09861f9bcf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.397 226310 DEBUG oslo_concurrency.lockutils [None req-6dd7d2dd-19c1-4bb8-9282-8e09861f9bcf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.399 226310 INFO nova.compute.manager [None req-6dd7d2dd-19c1-4bb8-9282-8e09861f9bcf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Terminating instance#033[00m
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.401 226310 DEBUG nova.compute.manager [None req-6dd7d2dd-19c1-4bb8-9282-8e09861f9bcf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 04:02:52 np0005539564 kernel: tap482dbf7c-16 (unregistering): left promiscuous mode
Nov 29 04:02:52 np0005539564 NetworkManager[48997]: <info>  [1764406972.4603] device (tap482dbf7c-16): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 04:02:52 np0005539564 ovn_controller[130591]: 2025-11-29T09:02:52Z|00870|binding|INFO|Releasing lport 482dbf7c-1611-4a98-973e-b0b5d9581b96 from this chassis (sb_readonly=0)
Nov 29 04:02:52 np0005539564 ovn_controller[130591]: 2025-11-29T09:02:52Z|00871|binding|INFO|Setting lport 482dbf7c-1611-4a98-973e-b0b5d9581b96 down in Southbound
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.471 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.472 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:52 np0005539564 ovn_controller[130591]: 2025-11-29T09:02:52Z|00872|binding|INFO|Removing iface tap482dbf7c-16 ovn-installed in OVS
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.473 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:52.478 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:f5:00 10.100.0.2'], port_security=['fa:16:3e:c1:f5:00 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '168e4a72-21ed-43cc-8551-86ce753fecde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f15e24c-f34c-4322-ae3d-17c2166d8091', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68bb7f46e7ed430eaa1d724a2abe3a41', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'e750d1e0-89c8-47bc-ac0b-0b086208ee72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6dd1340-4781-4689-93b2-8c74ec464c49, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=482dbf7c-1611-4a98-973e-b0b5d9581b96) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:02:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:52.480 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 482dbf7c-1611-4a98-973e-b0b5d9581b96 in datapath 5f15e24c-f34c-4322-ae3d-17c2166d8091 unbound from our chassis#033[00m
Nov 29 04:02:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:52.481 139780 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5f15e24c-f34c-4322-ae3d-17c2166d8091 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 04:02:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:52.482 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[468fb547-36d7-4f1b-88f3-84bc570d1976]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.501 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:52 np0005539564 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000d5.scope: Deactivated successfully.
Nov 29 04:02:52 np0005539564 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000d5.scope: Consumed 2.897s CPU time.
Nov 29 04:02:52 np0005539564 systemd-machined[190128]: Machine qemu-100-instance-000000d5 terminated.
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.656 226310 INFO nova.virt.libvirt.driver [-] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Instance destroyed successfully.#033[00m
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.656 226310 DEBUG nova.objects.instance [None req-6dd7d2dd-19c1-4bb8-9282-8e09861f9bcf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Lazy-loading 'resources' on Instance uuid 168e4a72-21ed-43cc-8551-86ce753fecde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.674 226310 DEBUG nova.virt.libvirt.vif [None req-6dd7d2dd-19c1-4bb8-9282-8e09861f9bcf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T09:02:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1611733281',display_name='tempest-TestServerAdvancedOps-server-1611733281',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1611733281',id=213,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T09:02:31Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='68bb7f46e7ed430eaa1d724a2abe3a41',ramdisk_id='',reservation_id='r-aamfen3j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-1996398872',owner_user_name='tempest-TestServerAdvancedOps-1996398872-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T09:02:50Z,user_data=None,user_id='72d4bc4447574563becfcc44047872c6',uuid=168e4a72-21ed-43cc-8551-86ce753fecde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "address": "fa:16:3e:c1:f5:00", "network": {"id": "5f15e24c-f34c-4322-ae3d-17c2166d8091", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-674651663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68bb7f46e7ed430eaa1d724a2abe3a41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482dbf7c-16", "ovs_interfaceid": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.675 226310 DEBUG nova.network.os_vif_util [None req-6dd7d2dd-19c1-4bb8-9282-8e09861f9bcf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Converting VIF {"id": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "address": "fa:16:3e:c1:f5:00", "network": {"id": "5f15e24c-f34c-4322-ae3d-17c2166d8091", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-674651663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68bb7f46e7ed430eaa1d724a2abe3a41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482dbf7c-16", "ovs_interfaceid": "482dbf7c-1611-4a98-973e-b0b5d9581b96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.676 226310 DEBUG nova.network.os_vif_util [None req-6dd7d2dd-19c1-4bb8-9282-8e09861f9bcf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:f5:00,bridge_name='br-int',has_traffic_filtering=True,id=482dbf7c-1611-4a98-973e-b0b5d9581b96,network=Network(5f15e24c-f34c-4322-ae3d-17c2166d8091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap482dbf7c-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.677 226310 DEBUG os_vif [None req-6dd7d2dd-19c1-4bb8-9282-8e09861f9bcf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:f5:00,bridge_name='br-int',has_traffic_filtering=True,id=482dbf7c-1611-4a98-973e-b0b5d9581b96,network=Network(5f15e24c-f34c-4322-ae3d-17c2166d8091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap482dbf7c-16') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.680 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.680 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap482dbf7c-16, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.683 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.687 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:02:52 np0005539564 nova_compute[226295]: 2025-11-29 09:02:52.690 226310 INFO os_vif [None req-6dd7d2dd-19c1-4bb8-9282-8e09861f9bcf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:f5:00,bridge_name='br-int',has_traffic_filtering=True,id=482dbf7c-1611-4a98-973e-b0b5d9581b96,network=Network(5f15e24c-f34c-4322-ae3d-17c2166d8091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap482dbf7c-16')#033[00m
Nov 29 04:02:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Nov 29 04:02:52 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 29 04:02:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:53.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:53.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:54 np0005539564 nova_compute[226295]: 2025-11-29 09:02:54.079 226310 INFO nova.virt.libvirt.driver [None req-6dd7d2dd-19c1-4bb8-9282-8e09861f9bcf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Deleting instance files /var/lib/nova/instances/168e4a72-21ed-43cc-8551-86ce753fecde_del#033[00m
Nov 29 04:02:54 np0005539564 nova_compute[226295]: 2025-11-29 09:02:54.080 226310 INFO nova.virt.libvirt.driver [None req-6dd7d2dd-19c1-4bb8-9282-8e09861f9bcf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Deletion of /var/lib/nova/instances/168e4a72-21ed-43cc-8551-86ce753fecde_del complete#033[00m
Nov 29 04:02:54 np0005539564 nova_compute[226295]: 2025-11-29 09:02:54.205 226310 INFO nova.compute.manager [None req-6dd7d2dd-19c1-4bb8-9282-8e09861f9bcf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Took 1.80 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 04:02:54 np0005539564 nova_compute[226295]: 2025-11-29 09:02:54.206 226310 DEBUG oslo.service.loopingcall [None req-6dd7d2dd-19c1-4bb8-9282-8e09861f9bcf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 04:02:54 np0005539564 nova_compute[226295]: 2025-11-29 09:02:54.206 226310 DEBUG nova.compute.manager [-] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 04:02:54 np0005539564 nova_compute[226295]: 2025-11-29 09:02:54.207 226310 DEBUG nova.network.neutron [-] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 04:02:54 np0005539564 nova_compute[226295]: 2025-11-29 09:02:54.371 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:54 np0005539564 nova_compute[226295]: 2025-11-29 09:02:54.399 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:54 np0005539564 nova_compute[226295]: 2025-11-29 09:02:54.400 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:54 np0005539564 nova_compute[226295]: 2025-11-29 09:02:54.400 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:02:54 np0005539564 nova_compute[226295]: 2025-11-29 09:02:54.400 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:02:54 np0005539564 nova_compute[226295]: 2025-11-29 09:02:54.401 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:02:54 np0005539564 nova_compute[226295]: 2025-11-29 09:02:54.458 226310 DEBUG nova.compute.manager [req-1be3ac10-c274-40d3-815a-d7be4c77a792 req-54f82b9f-974a-4d53-bd24-1e2ab7ad91a6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Received event network-vif-unplugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:02:54 np0005539564 nova_compute[226295]: 2025-11-29 09:02:54.459 226310 DEBUG oslo_concurrency.lockutils [req-1be3ac10-c274-40d3-815a-d7be4c77a792 req-54f82b9f-974a-4d53-bd24-1e2ab7ad91a6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:54 np0005539564 nova_compute[226295]: 2025-11-29 09:02:54.460 226310 DEBUG oslo_concurrency.lockutils [req-1be3ac10-c274-40d3-815a-d7be4c77a792 req-54f82b9f-974a-4d53-bd24-1e2ab7ad91a6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:54 np0005539564 nova_compute[226295]: 2025-11-29 09:02:54.461 226310 DEBUG oslo_concurrency.lockutils [req-1be3ac10-c274-40d3-815a-d7be4c77a792 req-54f82b9f-974a-4d53-bd24-1e2ab7ad91a6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:02:54 np0005539564 nova_compute[226295]: 2025-11-29 09:02:54.462 226310 DEBUG nova.compute.manager [req-1be3ac10-c274-40d3-815a-d7be4c77a792 req-54f82b9f-974a-4d53-bd24-1e2ab7ad91a6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] No waiting events found dispatching network-vif-unplugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:02:54 np0005539564 nova_compute[226295]: 2025-11-29 09:02:54.462 226310 DEBUG nova.compute.manager [req-1be3ac10-c274-40d3-815a-d7be4c77a792 req-54f82b9f-974a-4d53-bd24-1e2ab7ad91a6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Received event network-vif-unplugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 04:02:54 np0005539564 nova_compute[226295]: 2025-11-29 09:02:54.463 226310 DEBUG nova.compute.manager [req-1be3ac10-c274-40d3-815a-d7be4c77a792 req-54f82b9f-974a-4d53-bd24-1e2ab7ad91a6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Received event network-vif-plugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:02:54 np0005539564 nova_compute[226295]: 2025-11-29 09:02:54.464 226310 DEBUG oslo_concurrency.lockutils [req-1be3ac10-c274-40d3-815a-d7be4c77a792 req-54f82b9f-974a-4d53-bd24-1e2ab7ad91a6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:54 np0005539564 nova_compute[226295]: 2025-11-29 09:02:54.465 226310 DEBUG oslo_concurrency.lockutils [req-1be3ac10-c274-40d3-815a-d7be4c77a792 req-54f82b9f-974a-4d53-bd24-1e2ab7ad91a6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:54 np0005539564 nova_compute[226295]: 2025-11-29 09:02:54.465 226310 DEBUG oslo_concurrency.lockutils [req-1be3ac10-c274-40d3-815a-d7be4c77a792 req-54f82b9f-974a-4d53-bd24-1e2ab7ad91a6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:02:54 np0005539564 nova_compute[226295]: 2025-11-29 09:02:54.466 226310 DEBUG nova.compute.manager [req-1be3ac10-c274-40d3-815a-d7be4c77a792 req-54f82b9f-974a-4d53-bd24-1e2ab7ad91a6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] No waiting events found dispatching network-vif-plugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:02:54 np0005539564 nova_compute[226295]: 2025-11-29 09:02:54.467 226310 WARNING nova.compute.manager [req-1be3ac10-c274-40d3-815a-d7be4c77a792 req-54f82b9f-974a-4d53-bd24-1e2ab7ad91a6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Received unexpected event network-vif-plugged-482dbf7c-1611-4a98-973e-b0b5d9581b96 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 04:02:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:02:54 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1041010412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:02:54 np0005539564 nova_compute[226295]: 2025-11-29 09:02:54.909 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:02:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:55 np0005539564 nova_compute[226295]: 2025-11-29 09:02:55.177 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:02:55 np0005539564 nova_compute[226295]: 2025-11-29 09:02:55.179 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4142MB free_disk=20.968711853027344GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:02:55 np0005539564 nova_compute[226295]: 2025-11-29 09:02:55.180 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:55 np0005539564 nova_compute[226295]: 2025-11-29 09:02:55.181 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:55 np0005539564 nova_compute[226295]: 2025-11-29 09:02:55.271 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 168e4a72-21ed-43cc-8551-86ce753fecde actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 04:02:55 np0005539564 nova_compute[226295]: 2025-11-29 09:02:55.272 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:02:55 np0005539564 nova_compute[226295]: 2025-11-29 09:02:55.272 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:02:55 np0005539564 nova_compute[226295]: 2025-11-29 09:02:55.364 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:02:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:55.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:02:55 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1928395851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:02:55 np0005539564 nova_compute[226295]: 2025-11-29 09:02:55.846 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:02:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:55.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:55 np0005539564 nova_compute[226295]: 2025-11-29 09:02:55.856 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:02:55 np0005539564 nova_compute[226295]: 2025-11-29 09:02:55.880 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:02:55 np0005539564 nova_compute[226295]: 2025-11-29 09:02:55.906 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:02:55 np0005539564 nova_compute[226295]: 2025-11-29 09:02:55.906 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:02:56 np0005539564 nova_compute[226295]: 2025-11-29 09:02:56.126 226310 DEBUG nova.network.neutron [-] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:02:56 np0005539564 nova_compute[226295]: 2025-11-29 09:02:56.163 226310 INFO nova.compute.manager [-] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Took 1.96 seconds to deallocate network for instance.#033[00m
Nov 29 04:02:56 np0005539564 nova_compute[226295]: 2025-11-29 09:02:56.225 226310 DEBUG oslo_concurrency.lockutils [None req-6dd7d2dd-19c1-4bb8-9282-8e09861f9bcf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:56 np0005539564 nova_compute[226295]: 2025-11-29 09:02:56.226 226310 DEBUG oslo_concurrency.lockutils [None req-6dd7d2dd-19c1-4bb8-9282-8e09861f9bcf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:56 np0005539564 nova_compute[226295]: 2025-11-29 09:02:56.277 226310 DEBUG oslo_concurrency.processutils [None req-6dd7d2dd-19c1-4bb8-9282-8e09861f9bcf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:02:56 np0005539564 nova_compute[226295]: 2025-11-29 09:02:56.513 226310 DEBUG nova.compute.manager [req-c308b7c3-1553-42f5-89d3-b9d9b731cd31 req-f2a2ca3e-c6f4-4bae-b50a-ab97a11da970 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Received event network-vif-deleted-482dbf7c-1611-4a98-973e-b0b5d9581b96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:02:56 np0005539564 podman[312017]: 2025-11-29 09:02:56.550902663 +0000 UTC m=+0.099583308 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:02:56 np0005539564 podman[312018]: 2025-11-29 09:02:56.567450212 +0000 UTC m=+0.114250926 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:02:56 np0005539564 podman[312008]: 2025-11-29 09:02:56.589134809 +0000 UTC m=+0.137337612 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 04:02:56 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:02:56 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/25869047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:02:56 np0005539564 nova_compute[226295]: 2025-11-29 09:02:56.783 226310 DEBUG oslo_concurrency.processutils [None req-6dd7d2dd-19c1-4bb8-9282-8e09861f9bcf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:02:56 np0005539564 nova_compute[226295]: 2025-11-29 09:02:56.792 226310 DEBUG nova.compute.provider_tree [None req-6dd7d2dd-19c1-4bb8-9282-8e09861f9bcf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:02:56 np0005539564 nova_compute[226295]: 2025-11-29 09:02:56.813 226310 DEBUG nova.scheduler.client.report [None req-6dd7d2dd-19c1-4bb8-9282-8e09861f9bcf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:02:56 np0005539564 nova_compute[226295]: 2025-11-29 09:02:56.843 226310 DEBUG oslo_concurrency.lockutils [None req-6dd7d2dd-19c1-4bb8-9282-8e09861f9bcf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:02:56 np0005539564 nova_compute[226295]: 2025-11-29 09:02:56.873 226310 INFO nova.scheduler.client.report [None req-6dd7d2dd-19c1-4bb8-9282-8e09861f9bcf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Deleted allocations for instance 168e4a72-21ed-43cc-8551-86ce753fecde#033[00m
Nov 29 04:02:56 np0005539564 nova_compute[226295]: 2025-11-29 09:02:56.944 226310 DEBUG oslo_concurrency.lockutils [None req-6dd7d2dd-19c1-4bb8-9282-8e09861f9bcf 72d4bc4447574563becfcc44047872c6 68bb7f46e7ed430eaa1d724a2abe3a41 - - default default] Lock "168e4a72-21ed-43cc-8551-86ce753fecde" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:02:57 np0005539564 nova_compute[226295]: 2025-11-29 09:02:57.091 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:57.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:57 np0005539564 nova_compute[226295]: 2025-11-29 09:02:57.684 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:02:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:57.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:02:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:59.160 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=93, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=92) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:02:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:02:59.162 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:02:59 np0005539564 nova_compute[226295]: 2025-11-29 09:02:59.215 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:02:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:59.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:02:59 np0005539564 nova_compute[226295]: 2025-11-29 09:02:59.776 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:02:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:59.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:01.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:01.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:02 np0005539564 nova_compute[226295]: 2025-11-29 09:03:02.094 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:02 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:03:02.163 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '93'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:03:02 np0005539564 nova_compute[226295]: 2025-11-29 09:03:02.686 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:03.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:03:03.783 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:03:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:03:03.783 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:03:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:03:03.784 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:03:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:03:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:03.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:03:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:05.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:05.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:07 np0005539564 nova_compute[226295]: 2025-11-29 09:03:07.157 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:07 np0005539564 nova_compute[226295]: 2025-11-29 09:03:07.654 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406972.6522446, 168e4a72-21ed-43cc-8551-86ce753fecde => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:03:07 np0005539564 nova_compute[226295]: 2025-11-29 09:03:07.655 226310 INFO nova.compute.manager [-] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] VM Stopped (Lifecycle Event)#033[00m
Nov 29 04:03:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:07.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:07 np0005539564 nova_compute[226295]: 2025-11-29 09:03:07.677 226310 DEBUG nova.compute.manager [None req-11215f88-16a7-41c8-83cf-d82ed2884095 - - - - - -] [instance: 168e4a72-21ed-43cc-8551-86ce753fecde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:03:07 np0005539564 nova_compute[226295]: 2025-11-29 09:03:07.688 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:07.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:09.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:09.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:11 np0005539564 nova_compute[226295]: 2025-11-29 09:03:11.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:11 np0005539564 nova_compute[226295]: 2025-11-29 09:03:11.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 04:03:11 np0005539564 nova_compute[226295]: 2025-11-29 09:03:11.374 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 04:03:11 np0005539564 nova_compute[226295]: 2025-11-29 09:03:11.375 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:11.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:11.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:12 np0005539564 nova_compute[226295]: 2025-11-29 09:03:12.161 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:12 np0005539564 nova_compute[226295]: 2025-11-29 09:03:12.690 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:13.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:03:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:13.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:03:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:15.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:15.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:17 np0005539564 nova_compute[226295]: 2025-11-29 09:03:17.165 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:17.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:17 np0005539564 nova_compute[226295]: 2025-11-29 09:03:17.693 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:17.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:19.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:19.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:20 np0005539564 podman[312260]: 2025-11-29 09:03:20.628572214 +0000 UTC m=+0.071450267 container exec 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:03:20 np0005539564 podman[312260]: 2025-11-29 09:03:20.746461316 +0000 UTC m=+0.189339359 container exec_died 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:03:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:21.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:03:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:21.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:03:22 np0005539564 nova_compute[226295]: 2025-11-29 09:03:22.167 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:22 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:03:22 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:03:22 np0005539564 nova_compute[226295]: 2025-11-29 09:03:22.697 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:23.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:23 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:03:23 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:03:23 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:03:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:03:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:23.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:03:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:25.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:25.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:26 np0005539564 nova_compute[226295]: 2025-11-29 09:03:26.388 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:27 np0005539564 nova_compute[226295]: 2025-11-29 09:03:27.191 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:27 np0005539564 podman[312517]: 2025-11-29 09:03:27.516163887 +0000 UTC m=+0.061347201 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 04:03:27 np0005539564 podman[312516]: 2025-11-29 09:03:27.531944295 +0000 UTC m=+0.081654472 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 04:03:27 np0005539564 podman[312515]: 2025-11-29 09:03:27.560781656 +0000 UTC m=+0.109521567 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 04:03:27 np0005539564 nova_compute[226295]: 2025-11-29 09:03:27.699 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:27.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:27.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:03:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:29.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:03:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:03:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:03:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:03:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:29.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:03:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:31 np0005539564 nova_compute[226295]: 2025-11-29 09:03:31.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:31 np0005539564 nova_compute[226295]: 2025-11-29 09:03:31.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:31.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:03:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:31.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:03:31 np0005539564 ovn_controller[130591]: 2025-11-29T09:03:31Z|00873|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Nov 29 04:03:32 np0005539564 nova_compute[226295]: 2025-11-29 09:03:32.194 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:32 np0005539564 nova_compute[226295]: 2025-11-29 09:03:32.701 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:33.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:33.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:35.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:03:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:35.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:03:36 np0005539564 nova_compute[226295]: 2025-11-29 09:03:36.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:36 np0005539564 nova_compute[226295]: 2025-11-29 09:03:36.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:03:36 np0005539564 nova_compute[226295]: 2025-11-29 09:03:36.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:03:36 np0005539564 nova_compute[226295]: 2025-11-29 09:03:36.544 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:03:37 np0005539564 nova_compute[226295]: 2025-11-29 09:03:37.197 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:37 np0005539564 nova_compute[226295]: 2025-11-29 09:03:37.703 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:37.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:37.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:39 np0005539564 nova_compute[226295]: 2025-11-29 09:03:39.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:39 np0005539564 nova_compute[226295]: 2025-11-29 09:03:39.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:39 np0005539564 nova_compute[226295]: 2025-11-29 09:03:39.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:03:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:03:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:39.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:03:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:39.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:41.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:41.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:42 np0005539564 nova_compute[226295]: 2025-11-29 09:03:42.199 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:42 np0005539564 nova_compute[226295]: 2025-11-29 09:03:42.705 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:43 np0005539564 nova_compute[226295]: 2025-11-29 09:03:43.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:43.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:43.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:45.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:45.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:46 np0005539564 nova_compute[226295]: 2025-11-29 09:03:46.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:47 np0005539564 nova_compute[226295]: 2025-11-29 09:03:47.201 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:47 np0005539564 nova_compute[226295]: 2025-11-29 09:03:47.707 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:47.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #172. Immutable memtables: 0.
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:03:47.733824) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 172
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407027733867, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 2261, "num_deletes": 251, "total_data_size": 5555916, "memory_usage": 5638256, "flush_reason": "Manual Compaction"}
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #173: started
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407027763796, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 173, "file_size": 3632560, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83270, "largest_seqno": 85526, "table_properties": {"data_size": 3623349, "index_size": 5768, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18720, "raw_average_key_size": 20, "raw_value_size": 3605074, "raw_average_value_size": 3905, "num_data_blocks": 253, "num_entries": 923, "num_filter_entries": 923, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406818, "oldest_key_time": 1764406818, "file_creation_time": 1764407027, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 30037 microseconds, and 15390 cpu microseconds.
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:03:47.763858) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #173: 3632560 bytes OK
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:03:47.763884) [db/memtable_list.cc:519] [default] Level-0 commit table #173 started
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:03:47.765649) [db/memtable_list.cc:722] [default] Level-0 commit table #173: memtable #1 done
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:03:47.765680) EVENT_LOG_v1 {"time_micros": 1764407027765669, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:03:47.765707) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 5546054, prev total WAL file size 5546054, number of live WAL files 2.
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000169.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:03:47.768654) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [173(3547KB)], [171(10MB)]
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407027768709, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [173], "files_L6": [171], "score": -1, "input_data_size": 14462598, "oldest_snapshot_seqno": -1}
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #174: 10961 keys, 12466498 bytes, temperature: kUnknown
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407027910542, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 174, "file_size": 12466498, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12398103, "index_size": 39866, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27461, "raw_key_size": 290230, "raw_average_key_size": 26, "raw_value_size": 12208465, "raw_average_value_size": 1113, "num_data_blocks": 1503, "num_entries": 10961, "num_filter_entries": 10961, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764407027, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 174, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:03:47.910899) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 12466498 bytes
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:03:47.912643) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 101.9 rd, 87.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 10.3 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(7.4) write-amplify(3.4) OK, records in: 11482, records dropped: 521 output_compression: NoCompression
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:03:47.912672) EVENT_LOG_v1 {"time_micros": 1764407027912659, "job": 110, "event": "compaction_finished", "compaction_time_micros": 141926, "compaction_time_cpu_micros": 56561, "output_level": 6, "num_output_files": 1, "total_output_size": 12466498, "num_input_records": 11482, "num_output_records": 10961, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407027913835, "job": 110, "event": "table_file_deletion", "file_number": 173}
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000171.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407027916571, "job": 110, "event": "table_file_deletion", "file_number": 171}
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:03:47.768526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:03:47.916660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:03:47.916664) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:03:47.916665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:03:47.916666) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:03:47.916668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:47.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:49.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:49.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:51.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:51.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:52 np0005539564 nova_compute[226295]: 2025-11-29 09:03:52.203 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:52 np0005539564 nova_compute[226295]: 2025-11-29 09:03:52.710 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:53.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:53.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:55.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:55.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:56 np0005539564 nova_compute[226295]: 2025-11-29 09:03:56.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:56 np0005539564 nova_compute[226295]: 2025-11-29 09:03:56.373 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:03:56 np0005539564 nova_compute[226295]: 2025-11-29 09:03:56.373 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:03:56 np0005539564 nova_compute[226295]: 2025-11-29 09:03:56.374 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:03:56 np0005539564 nova_compute[226295]: 2025-11-29 09:03:56.374 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:03:56 np0005539564 nova_compute[226295]: 2025-11-29 09:03:56.375 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:03:56 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:03:56 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1883244047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:03:56 np0005539564 nova_compute[226295]: 2025-11-29 09:03:56.867 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:03:57 np0005539564 nova_compute[226295]: 2025-11-29 09:03:57.077 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:03:57 np0005539564 nova_compute[226295]: 2025-11-29 09:03:57.079 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4180MB free_disk=20.967357635498047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:03:57 np0005539564 nova_compute[226295]: 2025-11-29 09:03:57.079 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:03:57 np0005539564 nova_compute[226295]: 2025-11-29 09:03:57.080 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:03:57 np0005539564 nova_compute[226295]: 2025-11-29 09:03:57.172 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:03:57 np0005539564 nova_compute[226295]: 2025-11-29 09:03:57.172 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:03:57 np0005539564 nova_compute[226295]: 2025-11-29 09:03:57.196 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing inventories for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 04:03:57 np0005539564 nova_compute[226295]: 2025-11-29 09:03:57.206 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:57 np0005539564 nova_compute[226295]: 2025-11-29 09:03:57.212 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating ProviderTree inventory for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 04:03:57 np0005539564 nova_compute[226295]: 2025-11-29 09:03:57.213 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating inventory in ProviderTree for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 04:03:57 np0005539564 nova_compute[226295]: 2025-11-29 09:03:57.231 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing aggregate associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 04:03:57 np0005539564 nova_compute[226295]: 2025-11-29 09:03:57.262 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing trait associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 04:03:57 np0005539564 nova_compute[226295]: 2025-11-29 09:03:57.294 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:03:57 np0005539564 nova_compute[226295]: 2025-11-29 09:03:57.712 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:57.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:03:57 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/620365454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:03:57 np0005539564 nova_compute[226295]: 2025-11-29 09:03:57.769 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:03:57 np0005539564 nova_compute[226295]: 2025-11-29 09:03:57.777 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:03:57 np0005539564 nova_compute[226295]: 2025-11-29 09:03:57.797 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:03:57 np0005539564 nova_compute[226295]: 2025-11-29 09:03:57.824 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:03:57 np0005539564 nova_compute[226295]: 2025-11-29 09:03:57.824 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:03:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:57.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:58 np0005539564 podman[312675]: 2025-11-29 09:03:58.524884962 +0000 UTC m=+0.069777570 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 04:03:58 np0005539564 podman[312674]: 2025-11-29 09:03:58.548885382 +0000 UTC m=+0.093538924 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 04:03:58 np0005539564 podman[312673]: 2025-11-29 09:03:58.565081651 +0000 UTC m=+0.120214477 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:03:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:59.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:03:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:59.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:01.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:01.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:02 np0005539564 nova_compute[226295]: 2025-11-29 09:04:02.209 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:02 np0005539564 nova_compute[226295]: 2025-11-29 09:04:02.716 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:03.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:04:03.784 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:04:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:04:03.784 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:04:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:04:03.784 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:04:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:03.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:05.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:05.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:07 np0005539564 nova_compute[226295]: 2025-11-29 09:04:07.210 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:07 np0005539564 nova_compute[226295]: 2025-11-29 09:04:07.718 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:04:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:07.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:04:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:04:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:07.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:04:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:09.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:09.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:11.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:11.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:12 np0005539564 nova_compute[226295]: 2025-11-29 09:04:12.213 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:12 np0005539564 nova_compute[226295]: 2025-11-29 09:04:12.720 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:04:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:13.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:04:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:13.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:15.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:15.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:17 np0005539564 nova_compute[226295]: 2025-11-29 09:04:17.216 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:17 np0005539564 nova_compute[226295]: 2025-11-29 09:04:17.723 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:17.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:17.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:19.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:19.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:21.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:21.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:22 np0005539564 nova_compute[226295]: 2025-11-29 09:04:22.218 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:22 np0005539564 nova_compute[226295]: 2025-11-29 09:04:22.727 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:23.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:23.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:25.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:26.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:27 np0005539564 nova_compute[226295]: 2025-11-29 09:04:27.221 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:27 np0005539564 nova_compute[226295]: 2025-11-29 09:04:27.729 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:27.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:27 np0005539564 nova_compute[226295]: 2025-11-29 09:04:27.818 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:28.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:29 np0005539564 podman[312760]: 2025-11-29 09:04:29.479809192 +0000 UTC m=+0.066173503 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 04:04:29 np0005539564 podman[312759]: 2025-11-29 09:04:29.508413097 +0000 UTC m=+0.094438119 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 04:04:29 np0005539564 podman[312758]: 2025-11-29 09:04:29.564777214 +0000 UTC m=+0.164142817 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 04:04:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:29.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:30.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:30 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:04:30 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:04:30 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:04:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:31.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:32.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:32 np0005539564 nova_compute[226295]: 2025-11-29 09:04:32.225 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:32 np0005539564 nova_compute[226295]: 2025-11-29 09:04:32.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:32 np0005539564 nova_compute[226295]: 2025-11-29 09:04:32.731 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:33 np0005539564 nova_compute[226295]: 2025-11-29 09:04:33.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:33.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:34.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:35.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:36.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:04:36.278 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=94, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=93) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:04:36 np0005539564 nova_compute[226295]: 2025-11-29 09:04:36.279 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:36 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:04:36.280 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:04:36 np0005539564 nova_compute[226295]: 2025-11-29 09:04:36.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:36 np0005539564 nova_compute[226295]: 2025-11-29 09:04:36.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:04:36 np0005539564 nova_compute[226295]: 2025-11-29 09:04:36.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:04:36 np0005539564 nova_compute[226295]: 2025-11-29 09:04:36.533 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:04:37 np0005539564 nova_compute[226295]: 2025-11-29 09:04:37.227 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:37 np0005539564 nova_compute[226295]: 2025-11-29 09:04:37.733 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:37.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:38.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:38 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:04:38 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:04:38 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:04:38.283 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '94'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:04:38 np0005539564 nova_compute[226295]: 2025-11-29 09:04:38.526 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:39 np0005539564 nova_compute[226295]: 2025-11-29 09:04:39.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:39 np0005539564 nova_compute[226295]: 2025-11-29 09:04:39.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:39 np0005539564 nova_compute[226295]: 2025-11-29 09:04:39.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:04:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:39.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:40.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:41.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:42.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:42 np0005539564 nova_compute[226295]: 2025-11-29 09:04:42.228 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:42 np0005539564 nova_compute[226295]: 2025-11-29 09:04:42.736 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:43.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:44.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:45 np0005539564 nova_compute[226295]: 2025-11-29 09:04:45.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:04:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:45.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:04:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:46.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:47 np0005539564 nova_compute[226295]: 2025-11-29 09:04:47.231 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:47 np0005539564 nova_compute[226295]: 2025-11-29 09:04:47.737 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:47.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:48.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:48 np0005539564 nova_compute[226295]: 2025-11-29 09:04:48.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:49.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:50.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:51.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:52.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:52 np0005539564 nova_compute[226295]: 2025-11-29 09:04:52.234 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:52 np0005539564 nova_compute[226295]: 2025-11-29 09:04:52.740 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:53.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:54.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:55.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:56.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:56 np0005539564 nova_compute[226295]: 2025-11-29 09:04:56.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:56 np0005539564 nova_compute[226295]: 2025-11-29 09:04:56.381 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:04:56 np0005539564 nova_compute[226295]: 2025-11-29 09:04:56.382 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:04:56 np0005539564 nova_compute[226295]: 2025-11-29 09:04:56.382 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:04:56 np0005539564 nova_compute[226295]: 2025-11-29 09:04:56.383 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:04:56 np0005539564 nova_compute[226295]: 2025-11-29 09:04:56.383 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:04:56 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:04:56 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2843743218' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:04:56 np0005539564 nova_compute[226295]: 2025-11-29 09:04:56.931 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:04:57 np0005539564 nova_compute[226295]: 2025-11-29 09:04:57.150 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:04:57 np0005539564 nova_compute[226295]: 2025-11-29 09:04:57.151 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4199MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:04:57 np0005539564 nova_compute[226295]: 2025-11-29 09:04:57.151 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:04:57 np0005539564 nova_compute[226295]: 2025-11-29 09:04:57.151 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:04:57 np0005539564 nova_compute[226295]: 2025-11-29 09:04:57.236 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:57 np0005539564 nova_compute[226295]: 2025-11-29 09:04:57.690 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:04:57 np0005539564 nova_compute[226295]: 2025-11-29 09:04:57.691 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:04:57 np0005539564 nova_compute[226295]: 2025-11-29 09:04:57.742 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:57 np0005539564 nova_compute[226295]: 2025-11-29 09:04:57.751 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:04:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:57.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:58.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:04:58 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2872714765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:04:58 np0005539564 nova_compute[226295]: 2025-11-29 09:04:58.206 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:04:58 np0005539564 nova_compute[226295]: 2025-11-29 09:04:58.215 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:04:58 np0005539564 nova_compute[226295]: 2025-11-29 09:04:58.235 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:04:58 np0005539564 nova_compute[226295]: 2025-11-29 09:04:58.238 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:04:58 np0005539564 nova_compute[226295]: 2025-11-29 09:04:58.239 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:04:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:04:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:59.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:00.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:00 np0005539564 podman[313025]: 2025-11-29 09:05:00.516769363 +0000 UTC m=+0.067862529 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:05:00 np0005539564 podman[313024]: 2025-11-29 09:05:00.524528693 +0000 UTC m=+0.075621309 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:05:00 np0005539564 podman[313023]: 2025-11-29 09:05:00.541351758 +0000 UTC m=+0.103048062 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 04:05:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:01.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:02.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:02 np0005539564 nova_compute[226295]: 2025-11-29 09:05:02.238 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:02 np0005539564 nova_compute[226295]: 2025-11-29 09:05:02.744 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:03.785 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:03.786 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:03.786 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:03.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:04.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:05.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:06.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:07 np0005539564 nova_compute[226295]: 2025-11-29 09:05:07.240 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:07 np0005539564 nova_compute[226295]: 2025-11-29 09:05:07.746 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:07.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:08.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:09.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:10.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:11.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:12.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:12 np0005539564 nova_compute[226295]: 2025-11-29 09:05:12.243 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:12 np0005539564 nova_compute[226295]: 2025-11-29 09:05:12.750 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:13.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:14.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:15.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:16.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:17 np0005539564 nova_compute[226295]: 2025-11-29 09:05:17.248 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:17 np0005539564 nova_compute[226295]: 2025-11-29 09:05:17.753 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:17.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:18.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:05:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:19.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:05:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:20.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:21.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:22.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:22 np0005539564 nova_compute[226295]: 2025-11-29 09:05:22.249 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:22 np0005539564 nova_compute[226295]: 2025-11-29 09:05:22.757 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:05:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:23.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:05:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:24.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:25.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:26.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:27 np0005539564 nova_compute[226295]: 2025-11-29 09:05:27.253 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:27 np0005539564 nova_compute[226295]: 2025-11-29 09:05:27.760 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:27.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:28.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:29 np0005539564 nova_compute[226295]: 2025-11-29 09:05:29.232 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:05:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:29.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:05:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:30.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:31 np0005539564 podman[313085]: 2025-11-29 09:05:31.542802696 +0000 UTC m=+0.089087374 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 04:05:31 np0005539564 podman[313086]: 2025-11-29 09:05:31.564299488 +0000 UTC m=+0.100662557 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 04:05:31 np0005539564 podman[313084]: 2025-11-29 09:05:31.592267155 +0000 UTC m=+0.144398971 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 04:05:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:31.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:32.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:32 np0005539564 nova_compute[226295]: 2025-11-29 09:05:32.256 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:32 np0005539564 nova_compute[226295]: 2025-11-29 09:05:32.762 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:33 np0005539564 nova_compute[226295]: 2025-11-29 09:05:33.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:33.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:34.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:34 np0005539564 nova_compute[226295]: 2025-11-29 09:05:34.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:35.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:36.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:36 np0005539564 nova_compute[226295]: 2025-11-29 09:05:36.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:36 np0005539564 nova_compute[226295]: 2025-11-29 09:05:36.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:05:36 np0005539564 nova_compute[226295]: 2025-11-29 09:05:36.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:05:36 np0005539564 nova_compute[226295]: 2025-11-29 09:05:36.363 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:05:37 np0005539564 nova_compute[226295]: 2025-11-29 09:05:37.274 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:37 np0005539564 nova_compute[226295]: 2025-11-29 09:05:37.778 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:37.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:38.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:38 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:05:38 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:05:39 np0005539564 nova_compute[226295]: 2025-11-29 09:05:39.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:39 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:05:39 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:05:39 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:05:39 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:05:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:39.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:05:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:40.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:05:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:40 np0005539564 nova_compute[226295]: 2025-11-29 09:05:40.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:40 np0005539564 nova_compute[226295]: 2025-11-29 09:05:40.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:05:41 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:05:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:41.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:42.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:42 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:05:42 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:05:42 np0005539564 nova_compute[226295]: 2025-11-29 09:05:42.328 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:42 np0005539564 nova_compute[226295]: 2025-11-29 09:05:42.781 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:43 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:43.471 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=95, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=94) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:05:43 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:43.472 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:05:43 np0005539564 nova_compute[226295]: 2025-11-29 09:05:43.525 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:43.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:44.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 04:05:45 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3710333703' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 04:05:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:45.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:45 np0005539564 nova_compute[226295]: 2025-11-29 09:05:45.925 226310 DEBUG oslo_concurrency.lockutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquiring lock "d9a8d4aa-59f5-4c17-9092-c1e0684f682e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:45 np0005539564 nova_compute[226295]: 2025-11-29 09:05:45.925 226310 DEBUG oslo_concurrency.lockutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "d9a8d4aa-59f5-4c17-9092-c1e0684f682e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:45 np0005539564 nova_compute[226295]: 2025-11-29 09:05:45.966 226310 DEBUG nova.compute.manager [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 04:05:46 np0005539564 nova_compute[226295]: 2025-11-29 09:05:46.065 226310 DEBUG oslo_concurrency.lockutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:46 np0005539564 nova_compute[226295]: 2025-11-29 09:05:46.066 226310 DEBUG oslo_concurrency.lockutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:46 np0005539564 nova_compute[226295]: 2025-11-29 09:05:46.074 226310 DEBUG nova.virt.hardware [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 04:05:46 np0005539564 nova_compute[226295]: 2025-11-29 09:05:46.075 226310 INFO nova.compute.claims [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 04:05:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:46.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:46 np0005539564 nova_compute[226295]: 2025-11-29 09:05:46.202 226310 DEBUG oslo_concurrency.processutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:05:46 np0005539564 nova_compute[226295]: 2025-11-29 09:05:46.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:05:46 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/429392080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:05:46 np0005539564 nova_compute[226295]: 2025-11-29 09:05:46.718 226310 DEBUG oslo_concurrency.processutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:05:46 np0005539564 nova_compute[226295]: 2025-11-29 09:05:46.731 226310 DEBUG nova.compute.provider_tree [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:05:46 np0005539564 nova_compute[226295]: 2025-11-29 09:05:46.759 226310 DEBUG nova.scheduler.client.report [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:05:46 np0005539564 nova_compute[226295]: 2025-11-29 09:05:46.785 226310 DEBUG oslo_concurrency.lockutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:46 np0005539564 nova_compute[226295]: 2025-11-29 09:05:46.786 226310 DEBUG nova.compute.manager [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 04:05:46 np0005539564 nova_compute[226295]: 2025-11-29 09:05:46.830 226310 DEBUG nova.compute.manager [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 04:05:46 np0005539564 nova_compute[226295]: 2025-11-29 09:05:46.831 226310 DEBUG nova.network.neutron [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 04:05:46 np0005539564 nova_compute[226295]: 2025-11-29 09:05:46.858 226310 INFO nova.virt.libvirt.driver [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 04:05:46 np0005539564 nova_compute[226295]: 2025-11-29 09:05:46.875 226310 DEBUG nova.compute.manager [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 04:05:46 np0005539564 nova_compute[226295]: 2025-11-29 09:05:46.985 226310 DEBUG nova.compute.manager [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 04:05:46 np0005539564 nova_compute[226295]: 2025-11-29 09:05:46.988 226310 DEBUG nova.virt.libvirt.driver [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 04:05:46 np0005539564 nova_compute[226295]: 2025-11-29 09:05:46.989 226310 INFO nova.virt.libvirt.driver [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Creating image(s)#033[00m
Nov 29 04:05:47 np0005539564 nova_compute[226295]: 2025-11-29 09:05:47.037 226310 DEBUG nova.storage.rbd_utils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] rbd image d9a8d4aa-59f5-4c17-9092-c1e0684f682e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:05:47 np0005539564 nova_compute[226295]: 2025-11-29 09:05:47.077 226310 DEBUG nova.storage.rbd_utils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] rbd image d9a8d4aa-59f5-4c17-9092-c1e0684f682e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:05:47 np0005539564 nova_compute[226295]: 2025-11-29 09:05:47.115 226310 DEBUG nova.storage.rbd_utils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] rbd image d9a8d4aa-59f5-4c17-9092-c1e0684f682e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:05:47 np0005539564 nova_compute[226295]: 2025-11-29 09:05:47.121 226310 DEBUG oslo_concurrency.processutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:05:47 np0005539564 nova_compute[226295]: 2025-11-29 09:05:47.231 226310 DEBUG oslo_concurrency.processutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:05:47 np0005539564 nova_compute[226295]: 2025-11-29 09:05:47.232 226310 DEBUG oslo_concurrency.lockutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:47 np0005539564 nova_compute[226295]: 2025-11-29 09:05:47.233 226310 DEBUG oslo_concurrency.lockutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:47 np0005539564 nova_compute[226295]: 2025-11-29 09:05:47.233 226310 DEBUG oslo_concurrency.lockutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:47 np0005539564 nova_compute[226295]: 2025-11-29 09:05:47.267 226310 DEBUG nova.storage.rbd_utils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] rbd image d9a8d4aa-59f5-4c17-9092-c1e0684f682e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:05:47 np0005539564 nova_compute[226295]: 2025-11-29 09:05:47.272 226310 DEBUG oslo_concurrency.processutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf d9a8d4aa-59f5-4c17-9092-c1e0684f682e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:05:47 np0005539564 nova_compute[226295]: 2025-11-29 09:05:47.331 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:47 np0005539564 nova_compute[226295]: 2025-11-29 09:05:47.784 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:47.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:48 np0005539564 nova_compute[226295]: 2025-11-29 09:05:48.059 226310 DEBUG oslo_concurrency.processutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf d9a8d4aa-59f5-4c17-9092-c1e0684f682e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.787s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:05:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:48.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:48 np0005539564 nova_compute[226295]: 2025-11-29 09:05:48.152 226310 DEBUG nova.storage.rbd_utils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] resizing rbd image d9a8d4aa-59f5-4c17-9092-c1e0684f682e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 04:05:48 np0005539564 nova_compute[226295]: 2025-11-29 09:05:48.316 226310 DEBUG nova.objects.instance [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lazy-loading 'migration_context' on Instance uuid d9a8d4aa-59f5-4c17-9092-c1e0684f682e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:05:48 np0005539564 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 04:05:48 np0005539564 nova_compute[226295]: 2025-11-29 09:05:48.347 226310 DEBUG nova.virt.libvirt.driver [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 04:05:48 np0005539564 nova_compute[226295]: 2025-11-29 09:05:48.347 226310 DEBUG nova.virt.libvirt.driver [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Ensure instance console log exists: /var/lib/nova/instances/d9a8d4aa-59f5-4c17-9092-c1e0684f682e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 04:05:48 np0005539564 nova_compute[226295]: 2025-11-29 09:05:48.348 226310 DEBUG oslo_concurrency.lockutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:48 np0005539564 nova_compute[226295]: 2025-11-29 09:05:48.349 226310 DEBUG oslo_concurrency.lockutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:48 np0005539564 nova_compute[226295]: 2025-11-29 09:05:48.349 226310 DEBUG oslo_concurrency.lockutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:48 np0005539564 nova_compute[226295]: 2025-11-29 09:05:48.697 226310 DEBUG nova.network.neutron [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Successfully created port: ce6d3947-7e05-4249-a344-883266a2f2cf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 04:05:49 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:05:49 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:05:49 np0005539564 nova_compute[226295]: 2025-11-29 09:05:49.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:49 np0005539564 nova_compute[226295]: 2025-11-29 09:05:49.409 226310 DEBUG nova.network.neutron [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Successfully updated port: ce6d3947-7e05-4249-a344-883266a2f2cf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 04:05:49 np0005539564 nova_compute[226295]: 2025-11-29 09:05:49.432 226310 DEBUG oslo_concurrency.lockutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquiring lock "refresh_cache-d9a8d4aa-59f5-4c17-9092-c1e0684f682e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:05:49 np0005539564 nova_compute[226295]: 2025-11-29 09:05:49.433 226310 DEBUG oslo_concurrency.lockutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquired lock "refresh_cache-d9a8d4aa-59f5-4c17-9092-c1e0684f682e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:05:49 np0005539564 nova_compute[226295]: 2025-11-29 09:05:49.433 226310 DEBUG nova.network.neutron [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 04:05:49 np0005539564 nova_compute[226295]: 2025-11-29 09:05:49.501 226310 DEBUG nova.compute.manager [req-c17fb673-a791-4616-bcb9-961dc107d94d req-15b922f5-1fed-4212-8d17-e7de8efa0a51 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Received event network-changed-ce6d3947-7e05-4249-a344-883266a2f2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:05:49 np0005539564 nova_compute[226295]: 2025-11-29 09:05:49.502 226310 DEBUG nova.compute.manager [req-c17fb673-a791-4616-bcb9-961dc107d94d req-15b922f5-1fed-4212-8d17-e7de8efa0a51 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Refreshing instance network info cache due to event network-changed-ce6d3947-7e05-4249-a344-883266a2f2cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 04:05:49 np0005539564 nova_compute[226295]: 2025-11-29 09:05:49.502 226310 DEBUG oslo_concurrency.lockutils [req-c17fb673-a791-4616-bcb9-961dc107d94d req-15b922f5-1fed-4212-8d17-e7de8efa0a51 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-d9a8d4aa-59f5-4c17-9092-c1e0684f682e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:05:49 np0005539564 nova_compute[226295]: 2025-11-29 09:05:49.591 226310 DEBUG nova.network.neutron [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 04:05:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:49.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:50.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:50 np0005539564 nova_compute[226295]: 2025-11-29 09:05:50.664 226310 DEBUG nova.network.neutron [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Updating instance_info_cache with network_info: [{"id": "ce6d3947-7e05-4249-a344-883266a2f2cf", "address": "fa:16:3e:2b:6a:9c", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce6d3947-7e", "ovs_interfaceid": "ce6d3947-7e05-4249-a344-883266a2f2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:05:50 np0005539564 nova_compute[226295]: 2025-11-29 09:05:50.686 226310 DEBUG oslo_concurrency.lockutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Releasing lock "refresh_cache-d9a8d4aa-59f5-4c17-9092-c1e0684f682e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:05:50 np0005539564 nova_compute[226295]: 2025-11-29 09:05:50.687 226310 DEBUG nova.compute.manager [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Instance network_info: |[{"id": "ce6d3947-7e05-4249-a344-883266a2f2cf", "address": "fa:16:3e:2b:6a:9c", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce6d3947-7e", "ovs_interfaceid": "ce6d3947-7e05-4249-a344-883266a2f2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 04:05:50 np0005539564 nova_compute[226295]: 2025-11-29 09:05:50.687 226310 DEBUG oslo_concurrency.lockutils [req-c17fb673-a791-4616-bcb9-961dc107d94d req-15b922f5-1fed-4212-8d17-e7de8efa0a51 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-d9a8d4aa-59f5-4c17-9092-c1e0684f682e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:05:50 np0005539564 nova_compute[226295]: 2025-11-29 09:05:50.688 226310 DEBUG nova.network.neutron [req-c17fb673-a791-4616-bcb9-961dc107d94d req-15b922f5-1fed-4212-8d17-e7de8efa0a51 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Refreshing network info cache for port ce6d3947-7e05-4249-a344-883266a2f2cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 04:05:50 np0005539564 nova_compute[226295]: 2025-11-29 09:05:50.693 226310 DEBUG nova.virt.libvirt.driver [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Start _get_guest_xml network_info=[{"id": "ce6d3947-7e05-4249-a344-883266a2f2cf", "address": "fa:16:3e:2b:6a:9c", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce6d3947-7e", "ovs_interfaceid": "ce6d3947-7e05-4249-a344-883266a2f2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'size': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 04:05:50 np0005539564 nova_compute[226295]: 2025-11-29 09:05:50.701 226310 WARNING nova.virt.libvirt.driver [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:05:50 np0005539564 nova_compute[226295]: 2025-11-29 09:05:50.711 226310 DEBUG nova.virt.libvirt.host [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 04:05:50 np0005539564 nova_compute[226295]: 2025-11-29 09:05:50.712 226310 DEBUG nova.virt.libvirt.host [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 04:05:50 np0005539564 nova_compute[226295]: 2025-11-29 09:05:50.716 226310 DEBUG nova.virt.libvirt.host [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 04:05:50 np0005539564 nova_compute[226295]: 2025-11-29 09:05:50.717 226310 DEBUG nova.virt.libvirt.host [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 04:05:50 np0005539564 nova_compute[226295]: 2025-11-29 09:05:50.719 226310 DEBUG nova.virt.libvirt.driver [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 04:05:50 np0005539564 nova_compute[226295]: 2025-11-29 09:05:50.720 226310 DEBUG nova.virt.hardware [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 04:05:50 np0005539564 nova_compute[226295]: 2025-11-29 09:05:50.721 226310 DEBUG nova.virt.hardware [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 04:05:50 np0005539564 nova_compute[226295]: 2025-11-29 09:05:50.721 226310 DEBUG nova.virt.hardware [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 04:05:50 np0005539564 nova_compute[226295]: 2025-11-29 09:05:50.721 226310 DEBUG nova.virt.hardware [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 04:05:50 np0005539564 nova_compute[226295]: 2025-11-29 09:05:50.722 226310 DEBUG nova.virt.hardware [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 04:05:50 np0005539564 nova_compute[226295]: 2025-11-29 09:05:50.722 226310 DEBUG nova.virt.hardware [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 04:05:50 np0005539564 nova_compute[226295]: 2025-11-29 09:05:50.723 226310 DEBUG nova.virt.hardware [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 04:05:50 np0005539564 nova_compute[226295]: 2025-11-29 09:05:50.723 226310 DEBUG nova.virt.hardware [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 04:05:50 np0005539564 nova_compute[226295]: 2025-11-29 09:05:50.724 226310 DEBUG nova.virt.hardware [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 04:05:50 np0005539564 nova_compute[226295]: 2025-11-29 09:05:50.724 226310 DEBUG nova.virt.hardware [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 04:05:50 np0005539564 nova_compute[226295]: 2025-11-29 09:05:50.725 226310 DEBUG nova.virt.hardware [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 04:05:50 np0005539564 nova_compute[226295]: 2025-11-29 09:05:50.731 226310 DEBUG oslo_concurrency.processutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:05:51 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 04:05:51 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1803930433' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.248 226310 DEBUG oslo_concurrency.processutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.275 226310 DEBUG nova.storage.rbd_utils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] rbd image d9a8d4aa-59f5-4c17-9092-c1e0684f682e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.279 226310 DEBUG oslo_concurrency.processutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:05:51 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 04:05:51 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1603083410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.723 226310 DEBUG oslo_concurrency.processutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.725 226310 DEBUG nova.virt.libvirt.vif [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T09:05:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1989922931',display_name='tempest-TestServerMultinode-server-1989922931',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testservermultinode-server-1989922931',id=217,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8c7919c45c334cfb95f0fdc69027c245',ramdisk_id='',reservation_id='r-lyw435v2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1741703404',owner_user_name='tempest-TestServerMultinode-
1741703404-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T09:05:46Z,user_data=None,user_id='1ef789b2d4084ff99c58ebaccf153280',uuid=d9a8d4aa-59f5-4c17-9092-c1e0684f682e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce6d3947-7e05-4249-a344-883266a2f2cf", "address": "fa:16:3e:2b:6a:9c", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce6d3947-7e", "ovs_interfaceid": "ce6d3947-7e05-4249-a344-883266a2f2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.726 226310 DEBUG nova.network.os_vif_util [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Converting VIF {"id": "ce6d3947-7e05-4249-a344-883266a2f2cf", "address": "fa:16:3e:2b:6a:9c", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce6d3947-7e", "ovs_interfaceid": "ce6d3947-7e05-4249-a344-883266a2f2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.727 226310 DEBUG nova.network.os_vif_util [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:6a:9c,bridge_name='br-int',has_traffic_filtering=True,id=ce6d3947-7e05-4249-a344-883266a2f2cf,network=Network(7f61907c-426d-40db-9f88-8bc5f33db1b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce6d3947-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.728 226310 DEBUG nova.objects.instance [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lazy-loading 'pci_devices' on Instance uuid d9a8d4aa-59f5-4c17-9092-c1e0684f682e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.758 226310 DEBUG nova.virt.libvirt.driver [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] End _get_guest_xml xml=<domain type="kvm">
Nov 29 04:05:51 np0005539564 nova_compute[226295]:  <uuid>d9a8d4aa-59f5-4c17-9092-c1e0684f682e</uuid>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:  <name>instance-000000d9</name>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <nova:name>tempest-TestServerMultinode-server-1989922931</nova:name>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 09:05:50</nova:creationTime>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 04:05:51 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:        <nova:user uuid="1ef789b2d4084ff99c58ebaccf153280">tempest-TestServerMultinode-1741703404-project-admin</nova:user>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:        <nova:project uuid="8c7919c45c334cfb95f0fdc69027c245">tempest-TestServerMultinode-1741703404</nova:project>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:        <nova:port uuid="ce6d3947-7e05-4249-a344-883266a2f2cf">
Nov 29 04:05:51 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <system>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <entry name="serial">d9a8d4aa-59f5-4c17-9092-c1e0684f682e</entry>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <entry name="uuid">d9a8d4aa-59f5-4c17-9092-c1e0684f682e</entry>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    </system>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:  <os>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:  </os>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:  <features>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:  </features>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:  </clock>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:  <devices>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/d9a8d4aa-59f5-4c17-9092-c1e0684f682e_disk">
Nov 29 04:05:51 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      </source>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 04:05:51 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      </auth>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    </disk>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/d9a8d4aa-59f5-4c17-9092-c1e0684f682e_disk.config">
Nov 29 04:05:51 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      </source>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 04:05:51 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      </auth>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    </disk>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:2b:6a:9c"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <target dev="tapce6d3947-7e"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    </interface>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/d9a8d4aa-59f5-4c17-9092-c1e0684f682e/console.log" append="off"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    </serial>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <video>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    </video>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    </rng>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 04:05:51 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 04:05:51 np0005539564 nova_compute[226295]:  </devices>
Nov 29 04:05:51 np0005539564 nova_compute[226295]: </domain>
Nov 29 04:05:51 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.760 226310 DEBUG nova.compute.manager [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Preparing to wait for external event network-vif-plugged-ce6d3947-7e05-4249-a344-883266a2f2cf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.760 226310 DEBUG oslo_concurrency.lockutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquiring lock "d9a8d4aa-59f5-4c17-9092-c1e0684f682e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.761 226310 DEBUG oslo_concurrency.lockutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "d9a8d4aa-59f5-4c17-9092-c1e0684f682e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.761 226310 DEBUG oslo_concurrency.lockutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "d9a8d4aa-59f5-4c17-9092-c1e0684f682e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.762 226310 DEBUG nova.virt.libvirt.vif [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T09:05:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1989922931',display_name='tempest-TestServerMultinode-server-1989922931',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testservermultinode-server-1989922931',id=217,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8c7919c45c334cfb95f0fdc69027c245',ramdisk_id='',reservation_id='r-lyw435v2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1741703404',owner_user_name='tempest-TestServer
Multinode-1741703404-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T09:05:46Z,user_data=None,user_id='1ef789b2d4084ff99c58ebaccf153280',uuid=d9a8d4aa-59f5-4c17-9092-c1e0684f682e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce6d3947-7e05-4249-a344-883266a2f2cf", "address": "fa:16:3e:2b:6a:9c", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce6d3947-7e", "ovs_interfaceid": "ce6d3947-7e05-4249-a344-883266a2f2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.762 226310 DEBUG nova.network.os_vif_util [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Converting VIF {"id": "ce6d3947-7e05-4249-a344-883266a2f2cf", "address": "fa:16:3e:2b:6a:9c", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce6d3947-7e", "ovs_interfaceid": "ce6d3947-7e05-4249-a344-883266a2f2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.763 226310 DEBUG nova.network.os_vif_util [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:6a:9c,bridge_name='br-int',has_traffic_filtering=True,id=ce6d3947-7e05-4249-a344-883266a2f2cf,network=Network(7f61907c-426d-40db-9f88-8bc5f33db1b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce6d3947-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.764 226310 DEBUG os_vif [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:6a:9c,bridge_name='br-int',has_traffic_filtering=True,id=ce6d3947-7e05-4249-a344-883266a2f2cf,network=Network(7f61907c-426d-40db-9f88-8bc5f33db1b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce6d3947-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.766 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.767 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.768 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.774 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.775 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce6d3947-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.776 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapce6d3947-7e, col_values=(('external_ids', {'iface-id': 'ce6d3947-7e05-4249-a344-883266a2f2cf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2b:6a:9c', 'vm-uuid': 'd9a8d4aa-59f5-4c17-9092-c1e0684f682e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.778 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:51 np0005539564 NetworkManager[48997]: <info>  [1764407151.7798] manager: (tapce6d3947-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/402)
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.782 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.790 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.791 226310 INFO os_vif [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:6a:9c,bridge_name='br-int',has_traffic_filtering=True,id=ce6d3947-7e05-4249-a344-883266a2f2cf,network=Network(7f61907c-426d-40db-9f88-8bc5f33db1b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce6d3947-7e')#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.839 226310 DEBUG nova.network.neutron [req-c17fb673-a791-4616-bcb9-961dc107d94d req-15b922f5-1fed-4212-8d17-e7de8efa0a51 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Updated VIF entry in instance network info cache for port ce6d3947-7e05-4249-a344-883266a2f2cf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.840 226310 DEBUG nova.network.neutron [req-c17fb673-a791-4616-bcb9-961dc107d94d req-15b922f5-1fed-4212-8d17-e7de8efa0a51 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Updating instance_info_cache with network_info: [{"id": "ce6d3947-7e05-4249-a344-883266a2f2cf", "address": "fa:16:3e:2b:6a:9c", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce6d3947-7e", "ovs_interfaceid": "ce6d3947-7e05-4249-a344-883266a2f2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.844 226310 DEBUG nova.virt.libvirt.driver [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.845 226310 DEBUG nova.virt.libvirt.driver [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.845 226310 DEBUG nova.virt.libvirt.driver [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] No VIF found with MAC fa:16:3e:2b:6a:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.845 226310 INFO nova.virt.libvirt.driver [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Using config drive#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.866 226310 DEBUG nova.storage.rbd_utils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] rbd image d9a8d4aa-59f5-4c17-9092-c1e0684f682e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:05:51 np0005539564 nova_compute[226295]: 2025-11-29 09:05:51.871 226310 DEBUG oslo_concurrency.lockutils [req-c17fb673-a791-4616-bcb9-961dc107d94d req-15b922f5-1fed-4212-8d17-e7de8efa0a51 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-d9a8d4aa-59f5-4c17-9092-c1e0684f682e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:05:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:51.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:52.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:52 np0005539564 nova_compute[226295]: 2025-11-29 09:05:52.158 226310 INFO nova.virt.libvirt.driver [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Creating config drive at /var/lib/nova/instances/d9a8d4aa-59f5-4c17-9092-c1e0684f682e/disk.config#033[00m
Nov 29 04:05:52 np0005539564 nova_compute[226295]: 2025-11-29 09:05:52.168 226310 DEBUG oslo_concurrency.processutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d9a8d4aa-59f5-4c17-9092-c1e0684f682e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp657ygdo0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:05:52 np0005539564 nova_compute[226295]: 2025-11-29 09:05:52.332 226310 DEBUG oslo_concurrency.processutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d9a8d4aa-59f5-4c17-9092-c1e0684f682e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp657ygdo0" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:05:52 np0005539564 nova_compute[226295]: 2025-11-29 09:05:52.381 226310 DEBUG nova.storage.rbd_utils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] rbd image d9a8d4aa-59f5-4c17-9092-c1e0684f682e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:05:52 np0005539564 nova_compute[226295]: 2025-11-29 09:05:52.386 226310 DEBUG oslo_concurrency.processutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d9a8d4aa-59f5-4c17-9092-c1e0684f682e/disk.config d9a8d4aa-59f5-4c17-9092-c1e0684f682e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:05:52 np0005539564 nova_compute[226295]: 2025-11-29 09:05:52.436 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:52 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:52.474 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '95'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:05:52 np0005539564 nova_compute[226295]: 2025-11-29 09:05:52.954 226310 DEBUG oslo_concurrency.processutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d9a8d4aa-59f5-4c17-9092-c1e0684f682e/disk.config d9a8d4aa-59f5-4c17-9092-c1e0684f682e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:05:52 np0005539564 nova_compute[226295]: 2025-11-29 09:05:52.958 226310 INFO nova.virt.libvirt.driver [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Deleting local config drive /var/lib/nova/instances/d9a8d4aa-59f5-4c17-9092-c1e0684f682e/disk.config because it was imported into RBD.#033[00m
Nov 29 04:05:53 np0005539564 kernel: tapce6d3947-7e: entered promiscuous mode
Nov 29 04:05:53 np0005539564 NetworkManager[48997]: <info>  [1764407153.0466] manager: (tapce6d3947-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/403)
Nov 29 04:05:53 np0005539564 ovn_controller[130591]: 2025-11-29T09:05:53Z|00874|binding|INFO|Claiming lport ce6d3947-7e05-4249-a344-883266a2f2cf for this chassis.
Nov 29 04:05:53 np0005539564 ovn_controller[130591]: 2025-11-29T09:05:53Z|00875|binding|INFO|ce6d3947-7e05-4249-a344-883266a2f2cf: Claiming fa:16:3e:2b:6a:9c 10.100.0.5
Nov 29 04:05:53 np0005539564 nova_compute[226295]: 2025-11-29 09:05:53.048 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:53 np0005539564 nova_compute[226295]: 2025-11-29 09:05:53.055 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:53 np0005539564 nova_compute[226295]: 2025-11-29 09:05:53.063 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:53 np0005539564 systemd-udevd[313770]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 04:05:53 np0005539564 systemd-machined[190128]: New machine qemu-101-instance-000000d9.
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.119 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:6a:9c 10.100.0.5'], port_security=['fa:16:3e:2b:6a:9c 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'd9a8d4aa-59f5-4c17-9092-c1e0684f682e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f61907c-426d-40db-9f88-8bc5f33db1b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c7919c45c334cfb95f0fdc69027c245', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64bf80fe-f6f5-45b2-bd8e-9bcbdb5e2a9d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=194d050b-f997-4b45-91e1-9c8d251911a1, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=ce6d3947-7e05-4249-a344-883266a2f2cf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.121 139780 INFO neutron.agent.ovn.metadata.agent [-] Port ce6d3947-7e05-4249-a344-883266a2f2cf in datapath 7f61907c-426d-40db-9f88-8bc5f33db1b9 bound to our chassis#033[00m
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.123 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7f61907c-426d-40db-9f88-8bc5f33db1b9#033[00m
Nov 29 04:05:53 np0005539564 NetworkManager[48997]: <info>  [1764407153.1325] device (tapce6d3947-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 04:05:53 np0005539564 NetworkManager[48997]: <info>  [1764407153.1355] device (tapce6d3947-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 04:05:53 np0005539564 systemd[1]: Started Virtual Machine qemu-101-instance-000000d9.
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.144 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b50990b1-13f6-4ed0-a460-e252e0ea0dd2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.145 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7f61907c-41 in ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 04:05:53 np0005539564 nova_compute[226295]: 2025-11-29 09:05:53.147 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.148 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7f61907c-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.149 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6ac571a1-774c-45b2-9528-2af1452e53ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.150 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5fde1ca9-a769-4976-b0f9-f30e8d9b081a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:53 np0005539564 ovn_controller[130591]: 2025-11-29T09:05:53Z|00876|binding|INFO|Setting lport ce6d3947-7e05-4249-a344-883266a2f2cf ovn-installed in OVS
Nov 29 04:05:53 np0005539564 ovn_controller[130591]: 2025-11-29T09:05:53Z|00877|binding|INFO|Setting lport ce6d3947-7e05-4249-a344-883266a2f2cf up in Southbound
Nov 29 04:05:53 np0005539564 nova_compute[226295]: 2025-11-29 09:05:53.153 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.171 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[ffe9b7bc-8dcd-43c3-9d0d-d09d74bd5023]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.206 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ca03d044-af8c-417b-8469-cee67bba7525]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.246 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b8e3fa-c816-48fe-b3b8-6c67e282392f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.259 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2e4b8882-1e12-44c8-9bd9-612fa52521b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:53 np0005539564 NetworkManager[48997]: <info>  [1764407153.2601] manager: (tap7f61907c-40): new Veth device (/org/freedesktop/NetworkManager/Devices/404)
Nov 29 04:05:53 np0005539564 systemd-udevd[313773]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.293 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[b9bf0409-6e38-4716-9351-7b91fa70a518]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.296 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[e0a13b6c-d6fa-4d33-ba8f-1ca2a8bad386]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:53 np0005539564 NetworkManager[48997]: <info>  [1764407153.3149] device (tap7f61907c-40): carrier: link connected
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.320 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[cdaad333-3ba3-4313-9626-37d2279f307d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.340 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f5e711f5-2f07-4bc9-91ed-42d6647d91b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7f61907c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:f8:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1012799, 'reachable_time': 40153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313805, 'error': None, 'target': 'ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.362 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[08067de0-9cca-445f-999c-cbfcdcb4ce9b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea9:f80e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1012799, 'tstamp': 1012799}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313806, 'error': None, 'target': 'ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.381 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0c63ad30-55ba-41a0-ace9-82a265b5509f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7f61907c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:f8:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1012799, 'reachable_time': 40153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313807, 'error': None, 'target': 'ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.413 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[31198e32-dd9f-4dde-b503-ba61f5738c58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.467 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[947eeac4-705d-4b8f-8c6c-a567ea735bec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.468 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f61907c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.469 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.469 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f61907c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:05:53 np0005539564 kernel: tap7f61907c-40: entered promiscuous mode
Nov 29 04:05:53 np0005539564 nova_compute[226295]: 2025-11-29 09:05:53.471 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:53 np0005539564 NetworkManager[48997]: <info>  [1764407153.4728] manager: (tap7f61907c-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.476 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7f61907c-40, col_values=(('external_ids', {'iface-id': 'f4d00aa1-326b-4003-b66e-9a8340a19429'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:05:53 np0005539564 ovn_controller[130591]: 2025-11-29T09:05:53Z|00878|binding|INFO|Releasing lport f4d00aa1-326b-4003-b66e-9a8340a19429 from this chassis (sb_readonly=0)
Nov 29 04:05:53 np0005539564 nova_compute[226295]: 2025-11-29 09:05:53.477 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.479 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7f61907c-426d-40db-9f88-8bc5f33db1b9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7f61907c-426d-40db-9f88-8bc5f33db1b9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.480 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1b8a9489-5b1d-48e1-aff4-8bb7318c7b0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.480 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-7f61907c-426d-40db-9f88-8bc5f33db1b9
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/7f61907c-426d-40db-9f88-8bc5f33db1b9.pid.haproxy
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 7f61907c-426d-40db-9f88-8bc5f33db1b9
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 04:05:53 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:53.481 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9', 'env', 'PROCESS_TAG=haproxy-7f61907c-426d-40db-9f88-8bc5f33db1b9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7f61907c-426d-40db-9f88-8bc5f33db1b9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 04:05:53 np0005539564 nova_compute[226295]: 2025-11-29 09:05:53.490 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:53 np0005539564 nova_compute[226295]: 2025-11-29 09:05:53.604 226310 DEBUG nova.compute.manager [req-52355e0b-7c0b-464c-aeb0-123731f186dc req-7d7a61f5-fc76-4d60-af4e-a2ac982db30c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Received event network-vif-plugged-ce6d3947-7e05-4249-a344-883266a2f2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:05:53 np0005539564 nova_compute[226295]: 2025-11-29 09:05:53.604 226310 DEBUG oslo_concurrency.lockutils [req-52355e0b-7c0b-464c-aeb0-123731f186dc req-7d7a61f5-fc76-4d60-af4e-a2ac982db30c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d9a8d4aa-59f5-4c17-9092-c1e0684f682e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:53 np0005539564 nova_compute[226295]: 2025-11-29 09:05:53.605 226310 DEBUG oslo_concurrency.lockutils [req-52355e0b-7c0b-464c-aeb0-123731f186dc req-7d7a61f5-fc76-4d60-af4e-a2ac982db30c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d9a8d4aa-59f5-4c17-9092-c1e0684f682e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:53 np0005539564 nova_compute[226295]: 2025-11-29 09:05:53.605 226310 DEBUG oslo_concurrency.lockutils [req-52355e0b-7c0b-464c-aeb0-123731f186dc req-7d7a61f5-fc76-4d60-af4e-a2ac982db30c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d9a8d4aa-59f5-4c17-9092-c1e0684f682e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:53 np0005539564 nova_compute[226295]: 2025-11-29 09:05:53.605 226310 DEBUG nova.compute.manager [req-52355e0b-7c0b-464c-aeb0-123731f186dc req-7d7a61f5-fc76-4d60-af4e-a2ac982db30c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Processing event network-vif-plugged-ce6d3947-7e05-4249-a344-883266a2f2cf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 04:05:53 np0005539564 podman[313856]: 2025-11-29 09:05:53.879794795 +0000 UTC m=+0.061908317 container create 3a8785b13a24102d70fc3767d302ffc589a5206e2acbb0b527838fc1a4bca230 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 04:05:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:53.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:53 np0005539564 systemd[1]: Started libpod-conmon-3a8785b13a24102d70fc3767d302ffc589a5206e2acbb0b527838fc1a4bca230.scope.
Nov 29 04:05:53 np0005539564 podman[313856]: 2025-11-29 09:05:53.847288265 +0000 UTC m=+0.029401787 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 04:05:53 np0005539564 systemd[1]: Started libcrun container.
Nov 29 04:05:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:05:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:54.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:05:54 np0005539564 nova_compute[226295]: 2025-11-29 09:05:54.309 226310 DEBUG nova.compute.manager [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 04:05:54 np0005539564 nova_compute[226295]: 2025-11-29 09:05:54.310 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764407154.3085597, d9a8d4aa-59f5-4c17-9092-c1e0684f682e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:05:54 np0005539564 nova_compute[226295]: 2025-11-29 09:05:54.310 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] VM Started (Lifecycle Event)#033[00m
Nov 29 04:05:54 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ba67f189fdf7fc2d8ed28a3a27dbf719c07cdfc8dce6098b18550ec14fd3b6d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 04:05:54 np0005539564 nova_compute[226295]: 2025-11-29 09:05:54.319 226310 DEBUG nova.virt.libvirt.driver [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 04:05:54 np0005539564 nova_compute[226295]: 2025-11-29 09:05:54.326 226310 INFO nova.virt.libvirt.driver [-] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Instance spawned successfully.#033[00m
Nov 29 04:05:54 np0005539564 nova_compute[226295]: 2025-11-29 09:05:54.327 226310 DEBUG nova.virt.libvirt.driver [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 04:05:54 np0005539564 podman[313856]: 2025-11-29 09:05:54.334246614 +0000 UTC m=+0.516360186 container init 3a8785b13a24102d70fc3767d302ffc589a5206e2acbb0b527838fc1a4bca230 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 04:05:54 np0005539564 podman[313856]: 2025-11-29 09:05:54.344677426 +0000 UTC m=+0.526790958 container start 3a8785b13a24102d70fc3767d302ffc589a5206e2acbb0b527838fc1a4bca230 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 04:05:54 np0005539564 nova_compute[226295]: 2025-11-29 09:05:54.358 226310 DEBUG nova.virt.libvirt.driver [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:05:54 np0005539564 nova_compute[226295]: 2025-11-29 09:05:54.358 226310 DEBUG nova.virt.libvirt.driver [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:05:54 np0005539564 nova_compute[226295]: 2025-11-29 09:05:54.359 226310 DEBUG nova.virt.libvirt.driver [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:05:54 np0005539564 nova_compute[226295]: 2025-11-29 09:05:54.360 226310 DEBUG nova.virt.libvirt.driver [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:05:54 np0005539564 nova_compute[226295]: 2025-11-29 09:05:54.361 226310 DEBUG nova.virt.libvirt.driver [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:05:54 np0005539564 nova_compute[226295]: 2025-11-29 09:05:54.361 226310 DEBUG nova.virt.libvirt.driver [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:05:54 np0005539564 nova_compute[226295]: 2025-11-29 09:05:54.368 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:05:54 np0005539564 nova_compute[226295]: 2025-11-29 09:05:54.374 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 04:05:54 np0005539564 neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9[313893]: [NOTICE]   (313899) : New worker (313901) forked
Nov 29 04:05:54 np0005539564 neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9[313893]: [NOTICE]   (313899) : Loading success.
Nov 29 04:05:54 np0005539564 nova_compute[226295]: 2025-11-29 09:05:54.393 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 04:05:54 np0005539564 nova_compute[226295]: 2025-11-29 09:05:54.394 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764407154.3137226, d9a8d4aa-59f5-4c17-9092-c1e0684f682e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:05:54 np0005539564 nova_compute[226295]: 2025-11-29 09:05:54.394 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] VM Paused (Lifecycle Event)#033[00m
Nov 29 04:05:54 np0005539564 nova_compute[226295]: 2025-11-29 09:05:54.421 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:05:54 np0005539564 nova_compute[226295]: 2025-11-29 09:05:54.425 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764407154.318774, d9a8d4aa-59f5-4c17-9092-c1e0684f682e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:05:54 np0005539564 nova_compute[226295]: 2025-11-29 09:05:54.425 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] VM Resumed (Lifecycle Event)#033[00m
Nov 29 04:05:54 np0005539564 nova_compute[226295]: 2025-11-29 09:05:54.429 226310 INFO nova.compute.manager [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Took 7.44 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 04:05:54 np0005539564 nova_compute[226295]: 2025-11-29 09:05:54.429 226310 DEBUG nova.compute.manager [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:05:54 np0005539564 nova_compute[226295]: 2025-11-29 09:05:54.441 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:05:54 np0005539564 nova_compute[226295]: 2025-11-29 09:05:54.445 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 04:05:54 np0005539564 nova_compute[226295]: 2025-11-29 09:05:54.474 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 04:05:54 np0005539564 nova_compute[226295]: 2025-11-29 09:05:54.494 226310 INFO nova.compute.manager [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Took 8.46 seconds to build instance.#033[00m
Nov 29 04:05:54 np0005539564 nova_compute[226295]: 2025-11-29 09:05:54.511 226310 DEBUG oslo_concurrency.lockutils [None req-89a65473-04ec-4a1c-aae6-011d056355a6 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "d9a8d4aa-59f5-4c17-9092-c1e0684f682e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:55 np0005539564 nova_compute[226295]: 2025-11-29 09:05:55.755 226310 DEBUG nova.compute.manager [req-9ad50240-2347-40f2-8836-316c46b2d2df req-ae8e8a7c-2566-4f4d-8456-062ae87b3644 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Received event network-vif-plugged-ce6d3947-7e05-4249-a344-883266a2f2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:05:55 np0005539564 nova_compute[226295]: 2025-11-29 09:05:55.755 226310 DEBUG oslo_concurrency.lockutils [req-9ad50240-2347-40f2-8836-316c46b2d2df req-ae8e8a7c-2566-4f4d-8456-062ae87b3644 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d9a8d4aa-59f5-4c17-9092-c1e0684f682e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:55 np0005539564 nova_compute[226295]: 2025-11-29 09:05:55.755 226310 DEBUG oslo_concurrency.lockutils [req-9ad50240-2347-40f2-8836-316c46b2d2df req-ae8e8a7c-2566-4f4d-8456-062ae87b3644 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d9a8d4aa-59f5-4c17-9092-c1e0684f682e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:55 np0005539564 nova_compute[226295]: 2025-11-29 09:05:55.756 226310 DEBUG oslo_concurrency.lockutils [req-9ad50240-2347-40f2-8836-316c46b2d2df req-ae8e8a7c-2566-4f4d-8456-062ae87b3644 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d9a8d4aa-59f5-4c17-9092-c1e0684f682e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:55 np0005539564 nova_compute[226295]: 2025-11-29 09:05:55.756 226310 DEBUG nova.compute.manager [req-9ad50240-2347-40f2-8836-316c46b2d2df req-ae8e8a7c-2566-4f4d-8456-062ae87b3644 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] No waiting events found dispatching network-vif-plugged-ce6d3947-7e05-4249-a344-883266a2f2cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:05:55 np0005539564 nova_compute[226295]: 2025-11-29 09:05:55.756 226310 WARNING nova.compute.manager [req-9ad50240-2347-40f2-8836-316c46b2d2df req-ae8e8a7c-2566-4f4d-8456-062ae87b3644 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Received unexpected event network-vif-plugged-ce6d3947-7e05-4249-a344-883266a2f2cf for instance with vm_state active and task_state None.#033[00m
Nov 29 04:05:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:55.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:56.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:56 np0005539564 nova_compute[226295]: 2025-11-29 09:05:56.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:56 np0005539564 nova_compute[226295]: 2025-11-29 09:05:56.507 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:56 np0005539564 nova_compute[226295]: 2025-11-29 09:05:56.507 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:56 np0005539564 nova_compute[226295]: 2025-11-29 09:05:56.507 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:56 np0005539564 nova_compute[226295]: 2025-11-29 09:05:56.508 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:05:56 np0005539564 nova_compute[226295]: 2025-11-29 09:05:56.508 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:05:56 np0005539564 nova_compute[226295]: 2025-11-29 09:05:56.780 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:56 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:05:56 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2396982345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:05:56 np0005539564 nova_compute[226295]: 2025-11-29 09:05:56.972 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.025 226310 DEBUG oslo_concurrency.lockutils [None req-30601cfa-758b-415c-a7bd-31e4f4c6b60d 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquiring lock "d9a8d4aa-59f5-4c17-9092-c1e0684f682e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.025 226310 DEBUG oslo_concurrency.lockutils [None req-30601cfa-758b-415c-a7bd-31e4f4c6b60d 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "d9a8d4aa-59f5-4c17-9092-c1e0684f682e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.026 226310 DEBUG oslo_concurrency.lockutils [None req-30601cfa-758b-415c-a7bd-31e4f4c6b60d 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquiring lock "d9a8d4aa-59f5-4c17-9092-c1e0684f682e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.026 226310 DEBUG oslo_concurrency.lockutils [None req-30601cfa-758b-415c-a7bd-31e4f4c6b60d 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "d9a8d4aa-59f5-4c17-9092-c1e0684f682e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.026 226310 DEBUG oslo_concurrency.lockutils [None req-30601cfa-758b-415c-a7bd-31e4f4c6b60d 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "d9a8d4aa-59f5-4c17-9092-c1e0684f682e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.027 226310 INFO nova.compute.manager [None req-30601cfa-758b-415c-a7bd-31e4f4c6b60d 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Terminating instance#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.028 226310 DEBUG nova.compute.manager [None req-30601cfa-758b-415c-a7bd-31e4f4c6b60d 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 04:05:57 np0005539564 kernel: tapce6d3947-7e (unregistering): left promiscuous mode
Nov 29 04:05:57 np0005539564 NetworkManager[48997]: <info>  [1764407157.0736] device (tapce6d3947-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.083 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.086 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:57 np0005539564 ovn_controller[130591]: 2025-11-29T09:05:57Z|00879|binding|INFO|Releasing lport ce6d3947-7e05-4249-a344-883266a2f2cf from this chassis (sb_readonly=0)
Nov 29 04:05:57 np0005539564 ovn_controller[130591]: 2025-11-29T09:05:57Z|00880|binding|INFO|Setting lport ce6d3947-7e05-4249-a344-883266a2f2cf down in Southbound
Nov 29 04:05:57 np0005539564 ovn_controller[130591]: 2025-11-29T09:05:57Z|00881|binding|INFO|Removing iface tapce6d3947-7e ovn-installed in OVS
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.108 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:57 np0005539564 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000d9.scope: Deactivated successfully.
Nov 29 04:05:57 np0005539564 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000d9.scope: Consumed 3.621s CPU time.
Nov 29 04:05:57 np0005539564 systemd-machined[190128]: Machine qemu-101-instance-000000d9 terminated.
Nov 29 04:05:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:57.132 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:6a:9c 10.100.0.5'], port_security=['fa:16:3e:2b:6a:9c 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'd9a8d4aa-59f5-4c17-9092-c1e0684f682e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f61907c-426d-40db-9f88-8bc5f33db1b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c7919c45c334cfb95f0fdc69027c245', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64bf80fe-f6f5-45b2-bd8e-9bcbdb5e2a9d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=194d050b-f997-4b45-91e1-9c8d251911a1, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=ce6d3947-7e05-4249-a344-883266a2f2cf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:05:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:57.135 139780 INFO neutron.agent.ovn.metadata.agent [-] Port ce6d3947-7e05-4249-a344-883266a2f2cf in datapath 7f61907c-426d-40db-9f88-8bc5f33db1b9 unbound from our chassis#033[00m
Nov 29 04:05:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:57.137 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7f61907c-426d-40db-9f88-8bc5f33db1b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 04:05:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:57.139 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ce465701-f60d-47d6-b21a-8bcaf81f06d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:57.139 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9 namespace which is not needed anymore#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.254 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.257 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.266 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000d9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.267 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000d9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.268 226310 INFO nova.virt.libvirt.driver [-] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Instance destroyed successfully.#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.269 226310 DEBUG nova.objects.instance [None req-30601cfa-758b-415c-a7bd-31e4f4c6b60d 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lazy-loading 'resources' on Instance uuid d9a8d4aa-59f5-4c17-9092-c1e0684f682e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:05:57 np0005539564 neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9[313893]: [NOTICE]   (313899) : haproxy version is 2.8.14-c23fe91
Nov 29 04:05:57 np0005539564 neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9[313893]: [NOTICE]   (313899) : path to executable is /usr/sbin/haproxy
Nov 29 04:05:57 np0005539564 neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9[313893]: [WARNING]  (313899) : Exiting Master process...
Nov 29 04:05:57 np0005539564 neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9[313893]: [WARNING]  (313899) : Exiting Master process...
Nov 29 04:05:57 np0005539564 neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9[313893]: [ALERT]    (313899) : Current worker (313901) exited with code 143 (Terminated)
Nov 29 04:05:57 np0005539564 neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9[313893]: [WARNING]  (313899) : All workers exited. Exiting... (0)
Nov 29 04:05:57 np0005539564 systemd[1]: libpod-3a8785b13a24102d70fc3767d302ffc589a5206e2acbb0b527838fc1a4bca230.scope: Deactivated successfully.
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.287 226310 DEBUG nova.virt.libvirt.vif [None req-30601cfa-758b-415c-a7bd-31e4f4c6b60d 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T09:05:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1989922931',display_name='tempest-TestServerMultinode-server-1989922931',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testservermultinode-server-1989922931',id=217,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T09:05:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8c7919c45c334cfb95f0fdc69027c245',ramdisk_id='',reservation_id='r-lyw435v2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerMultinode-1741703404',owner_user_name='tempest-TestServerMultinode-1741703404-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T09:05:54Z,user_data=None,user_id='1ef789b2d4084ff99c58ebaccf153280',uuid=d9a8d4aa-59f5-4c17-9092-c1e0684f682e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ce6d3947-7e05-4249-a344-883266a2f2cf", "address": "fa:16:3e:2b:6a:9c", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce6d3947-7e", "ovs_interfaceid": "ce6d3947-7e05-4249-a344-883266a2f2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.288 226310 DEBUG nova.network.os_vif_util [None req-30601cfa-758b-415c-a7bd-31e4f4c6b60d 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Converting VIF {"id": "ce6d3947-7e05-4249-a344-883266a2f2cf", "address": "fa:16:3e:2b:6a:9c", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce6d3947-7e", "ovs_interfaceid": "ce6d3947-7e05-4249-a344-883266a2f2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:05:57 np0005539564 podman[313959]: 2025-11-29 09:05:57.288684082 +0000 UTC m=+0.053999603 container died 3a8785b13a24102d70fc3767d302ffc589a5206e2acbb0b527838fc1a4bca230 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.289 226310 DEBUG nova.network.os_vif_util [None req-30601cfa-758b-415c-a7bd-31e4f4c6b60d 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:6a:9c,bridge_name='br-int',has_traffic_filtering=True,id=ce6d3947-7e05-4249-a344-883266a2f2cf,network=Network(7f61907c-426d-40db-9f88-8bc5f33db1b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce6d3947-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.289 226310 DEBUG os_vif [None req-30601cfa-758b-415c-a7bd-31e4f4c6b60d 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:6a:9c,bridge_name='br-int',has_traffic_filtering=True,id=ce6d3947-7e05-4249-a344-883266a2f2cf,network=Network(7f61907c-426d-40db-9f88-8bc5f33db1b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce6d3947-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.292 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.293 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce6d3947-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.295 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.298 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.300 226310 INFO os_vif [None req-30601cfa-758b-415c-a7bd-31e4f4c6b60d 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:6a:9c,bridge_name='br-int',has_traffic_filtering=True,id=ce6d3947-7e05-4249-a344-883266a2f2cf,network=Network(7f61907c-426d-40db-9f88-8bc5f33db1b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce6d3947-7e')#033[00m
Nov 29 04:05:57 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3a8785b13a24102d70fc3767d302ffc589a5206e2acbb0b527838fc1a4bca230-userdata-shm.mount: Deactivated successfully.
Nov 29 04:05:57 np0005539564 systemd[1]: var-lib-containers-storage-overlay-4ba67f189fdf7fc2d8ed28a3a27dbf719c07cdfc8dce6098b18550ec14fd3b6d-merged.mount: Deactivated successfully.
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.333 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:57 np0005539564 podman[313959]: 2025-11-29 09:05:57.335540811 +0000 UTC m=+0.100856312 container cleanup 3a8785b13a24102d70fc3767d302ffc589a5206e2acbb0b527838fc1a4bca230 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:05:57 np0005539564 systemd[1]: libpod-conmon-3a8785b13a24102d70fc3767d302ffc589a5206e2acbb0b527838fc1a4bca230.scope: Deactivated successfully.
Nov 29 04:05:57 np0005539564 podman[314012]: 2025-11-29 09:05:57.40602123 +0000 UTC m=+0.046155410 container remove 3a8785b13a24102d70fc3767d302ffc589a5206e2acbb0b527838fc1a4bca230 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 04:05:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:57.412 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a151c50f-3a7d-447c-a8cf-60facb104e9a]: (4, ('Sat Nov 29 09:05:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9 (3a8785b13a24102d70fc3767d302ffc589a5206e2acbb0b527838fc1a4bca230)\n3a8785b13a24102d70fc3767d302ffc589a5206e2acbb0b527838fc1a4bca230\nSat Nov 29 09:05:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9 (3a8785b13a24102d70fc3767d302ffc589a5206e2acbb0b527838fc1a4bca230)\n3a8785b13a24102d70fc3767d302ffc589a5206e2acbb0b527838fc1a4bca230\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:57.414 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4b5e5c17-f5b1-417f-8532-2b1635a1fd4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:57.415 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f61907c-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.416 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:57 np0005539564 kernel: tap7f61907c-40: left promiscuous mode
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.429 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:57.431 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[d063bacc-1c28-4a37-81ea-ee844391fc51]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:57.442 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2ef53b16-ecb8-4dc1-88d7-d19dbaab0eb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:57.443 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2563f19e-0711-4173-9f13-76ccb92ca1f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:57.457 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[be5a68e8-44b4-45df-ac21-3d695a5727a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1012791, 'reachable_time': 20278, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314029, 'error': None, 'target': 'ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:57.460 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 04:05:57 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:05:57.460 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[c148c9e5-681e-4750-bc1a-ead6780c56f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:57 np0005539564 systemd[1]: run-netns-ovnmeta\x2d7f61907c\x2d426d\x2d40db\x2d9f88\x2d8bc5f33db1b9.mount: Deactivated successfully.
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.511 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.512 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3971MB free_disk=20.925586700439453GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.513 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.513 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.603 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance d9a8d4aa-59f5-4c17-9092-c1e0684f682e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.604 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.605 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:05:57 np0005539564 nova_compute[226295]: 2025-11-29 09:05:57.641 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:05:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:57.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:05:58 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3593974134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:05:58 np0005539564 nova_compute[226295]: 2025-11-29 09:05:58.098 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:05:58 np0005539564 nova_compute[226295]: 2025-11-29 09:05:58.106 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:05:58 np0005539564 nova_compute[226295]: 2025-11-29 09:05:58.133 226310 DEBUG nova.compute.manager [req-6b1c14ae-247c-4d14-a787-bef4746883da req-b6cb77fc-32c2-4c6f-8dc9-eeafdd6a6dd5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Received event network-vif-unplugged-ce6d3947-7e05-4249-a344-883266a2f2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:05:58 np0005539564 nova_compute[226295]: 2025-11-29 09:05:58.133 226310 DEBUG oslo_concurrency.lockutils [req-6b1c14ae-247c-4d14-a787-bef4746883da req-b6cb77fc-32c2-4c6f-8dc9-eeafdd6a6dd5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d9a8d4aa-59f5-4c17-9092-c1e0684f682e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:58 np0005539564 nova_compute[226295]: 2025-11-29 09:05:58.134 226310 DEBUG oslo_concurrency.lockutils [req-6b1c14ae-247c-4d14-a787-bef4746883da req-b6cb77fc-32c2-4c6f-8dc9-eeafdd6a6dd5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d9a8d4aa-59f5-4c17-9092-c1e0684f682e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:58 np0005539564 nova_compute[226295]: 2025-11-29 09:05:58.134 226310 DEBUG oslo_concurrency.lockutils [req-6b1c14ae-247c-4d14-a787-bef4746883da req-b6cb77fc-32c2-4c6f-8dc9-eeafdd6a6dd5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d9a8d4aa-59f5-4c17-9092-c1e0684f682e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:58 np0005539564 nova_compute[226295]: 2025-11-29 09:05:58.134 226310 DEBUG nova.compute.manager [req-6b1c14ae-247c-4d14-a787-bef4746883da req-b6cb77fc-32c2-4c6f-8dc9-eeafdd6a6dd5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] No waiting events found dispatching network-vif-unplugged-ce6d3947-7e05-4249-a344-883266a2f2cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:05:58 np0005539564 nova_compute[226295]: 2025-11-29 09:05:58.135 226310 DEBUG nova.compute.manager [req-6b1c14ae-247c-4d14-a787-bef4746883da req-b6cb77fc-32c2-4c6f-8dc9-eeafdd6a6dd5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Received event network-vif-unplugged-ce6d3947-7e05-4249-a344-883266a2f2cf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 04:05:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:58.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:58 np0005539564 nova_compute[226295]: 2025-11-29 09:05:58.159 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:05:58 np0005539564 nova_compute[226295]: 2025-11-29 09:05:58.418 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:05:58 np0005539564 nova_compute[226295]: 2025-11-29 09:05:58.418 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:58 np0005539564 nova_compute[226295]: 2025-11-29 09:05:58.745 226310 INFO nova.virt.libvirt.driver [None req-30601cfa-758b-415c-a7bd-31e4f4c6b60d 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Deleting instance files /var/lib/nova/instances/d9a8d4aa-59f5-4c17-9092-c1e0684f682e_del#033[00m
Nov 29 04:05:58 np0005539564 nova_compute[226295]: 2025-11-29 09:05:58.746 226310 INFO nova.virt.libvirt.driver [None req-30601cfa-758b-415c-a7bd-31e4f4c6b60d 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Deletion of /var/lib/nova/instances/d9a8d4aa-59f5-4c17-9092-c1e0684f682e_del complete#033[00m
Nov 29 04:05:58 np0005539564 nova_compute[226295]: 2025-11-29 09:05:58.874 226310 INFO nova.compute.manager [None req-30601cfa-758b-415c-a7bd-31e4f4c6b60d 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Took 1.85 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 04:05:58 np0005539564 nova_compute[226295]: 2025-11-29 09:05:58.875 226310 DEBUG oslo.service.loopingcall [None req-30601cfa-758b-415c-a7bd-31e4f4c6b60d 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 04:05:58 np0005539564 nova_compute[226295]: 2025-11-29 09:05:58.875 226310 DEBUG nova.compute.manager [-] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 04:05:58 np0005539564 nova_compute[226295]: 2025-11-29 09:05:58.876 226310 DEBUG nova.network.neutron [-] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 04:05:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:05:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:59.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:00 np0005539564 nova_compute[226295]: 2025-11-29 09:06:00.009 226310 DEBUG nova.network.neutron [-] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:06:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:00.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:00 np0005539564 nova_compute[226295]: 2025-11-29 09:06:00.324 226310 DEBUG nova.compute.manager [req-93fb9da0-1c07-48d8-b46b-ee23b91bb7ed req-8395e07d-fb44-4385-b933-e20a02e0f38e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Received event network-vif-deleted-ce6d3947-7e05-4249-a344-883266a2f2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:06:00 np0005539564 nova_compute[226295]: 2025-11-29 09:06:00.324 226310 INFO nova.compute.manager [req-93fb9da0-1c07-48d8-b46b-ee23b91bb7ed req-8395e07d-fb44-4385-b933-e20a02e0f38e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Neutron deleted interface ce6d3947-7e05-4249-a344-883266a2f2cf; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 04:06:00 np0005539564 nova_compute[226295]: 2025-11-29 09:06:00.325 226310 DEBUG nova.network.neutron [req-93fb9da0-1c07-48d8-b46b-ee23b91bb7ed req-8395e07d-fb44-4385-b933-e20a02e0f38e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:06:00 np0005539564 nova_compute[226295]: 2025-11-29 09:06:00.327 226310 DEBUG nova.compute.manager [req-9ed634f1-7a1c-4431-a12e-588b0948af44 req-65a82b03-d135-4083-b35e-e4276629d23b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Received event network-vif-plugged-ce6d3947-7e05-4249-a344-883266a2f2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:06:00 np0005539564 nova_compute[226295]: 2025-11-29 09:06:00.328 226310 DEBUG oslo_concurrency.lockutils [req-9ed634f1-7a1c-4431-a12e-588b0948af44 req-65a82b03-d135-4083-b35e-e4276629d23b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d9a8d4aa-59f5-4c17-9092-c1e0684f682e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:06:00 np0005539564 nova_compute[226295]: 2025-11-29 09:06:00.328 226310 DEBUG oslo_concurrency.lockutils [req-9ed634f1-7a1c-4431-a12e-588b0948af44 req-65a82b03-d135-4083-b35e-e4276629d23b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d9a8d4aa-59f5-4c17-9092-c1e0684f682e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:06:00 np0005539564 nova_compute[226295]: 2025-11-29 09:06:00.329 226310 DEBUG oslo_concurrency.lockutils [req-9ed634f1-7a1c-4431-a12e-588b0948af44 req-65a82b03-d135-4083-b35e-e4276629d23b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d9a8d4aa-59f5-4c17-9092-c1e0684f682e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:06:00 np0005539564 nova_compute[226295]: 2025-11-29 09:06:00.329 226310 DEBUG nova.compute.manager [req-9ed634f1-7a1c-4431-a12e-588b0948af44 req-65a82b03-d135-4083-b35e-e4276629d23b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] No waiting events found dispatching network-vif-plugged-ce6d3947-7e05-4249-a344-883266a2f2cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:06:00 np0005539564 nova_compute[226295]: 2025-11-29 09:06:00.330 226310 WARNING nova.compute.manager [req-9ed634f1-7a1c-4431-a12e-588b0948af44 req-65a82b03-d135-4083-b35e-e4276629d23b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Received unexpected event network-vif-plugged-ce6d3947-7e05-4249-a344-883266a2f2cf for instance with vm_state active and task_state deleting.#033[00m
Nov 29 04:06:00 np0005539564 nova_compute[226295]: 2025-11-29 09:06:00.332 226310 INFO nova.compute.manager [-] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Took 1.46 seconds to deallocate network for instance.#033[00m
Nov 29 04:06:00 np0005539564 nova_compute[226295]: 2025-11-29 09:06:00.380 226310 DEBUG nova.compute.manager [req-93fb9da0-1c07-48d8-b46b-ee23b91bb7ed req-8395e07d-fb44-4385-b933-e20a02e0f38e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Detach interface failed, port_id=ce6d3947-7e05-4249-a344-883266a2f2cf, reason: Instance d9a8d4aa-59f5-4c17-9092-c1e0684f682e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 04:06:00 np0005539564 nova_compute[226295]: 2025-11-29 09:06:00.611 226310 DEBUG oslo_concurrency.lockutils [None req-30601cfa-758b-415c-a7bd-31e4f4c6b60d 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:06:00 np0005539564 nova_compute[226295]: 2025-11-29 09:06:00.611 226310 DEBUG oslo_concurrency.lockutils [None req-30601cfa-758b-415c-a7bd-31e4f4c6b60d 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:06:00 np0005539564 nova_compute[226295]: 2025-11-29 09:06:00.675 226310 DEBUG oslo_concurrency.processutils [None req-30601cfa-758b-415c-a7bd-31e4f4c6b60d 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:06:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:06:01 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2862961704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:06:01 np0005539564 nova_compute[226295]: 2025-11-29 09:06:01.163 226310 DEBUG oslo_concurrency.processutils [None req-30601cfa-758b-415c-a7bd-31e4f4c6b60d 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:06:01 np0005539564 nova_compute[226295]: 2025-11-29 09:06:01.172 226310 DEBUG nova.compute.provider_tree [None req-30601cfa-758b-415c-a7bd-31e4f4c6b60d 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:06:01 np0005539564 nova_compute[226295]: 2025-11-29 09:06:01.192 226310 DEBUG nova.scheduler.client.report [None req-30601cfa-758b-415c-a7bd-31e4f4c6b60d 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:06:01 np0005539564 nova_compute[226295]: 2025-11-29 09:06:01.227 226310 DEBUG oslo_concurrency.lockutils [None req-30601cfa-758b-415c-a7bd-31e4f4c6b60d 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:06:01 np0005539564 nova_compute[226295]: 2025-11-29 09:06:01.250 226310 INFO nova.scheduler.client.report [None req-30601cfa-758b-415c-a7bd-31e4f4c6b60d 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Deleted allocations for instance d9a8d4aa-59f5-4c17-9092-c1e0684f682e#033[00m
Nov 29 04:06:01 np0005539564 nova_compute[226295]: 2025-11-29 09:06:01.313 226310 DEBUG oslo_concurrency.lockutils [None req-30601cfa-758b-415c-a7bd-31e4f4c6b60d 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "d9a8d4aa-59f5-4c17-9092-c1e0684f682e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.288s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:06:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:01.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:02.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:02 np0005539564 nova_compute[226295]: 2025-11-29 09:06:02.335 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:02 np0005539564 podman[314078]: 2025-11-29 09:06:02.538021715 +0000 UTC m=+0.075721552 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 29 04:06:02 np0005539564 podman[314077]: 2025-11-29 09:06:02.546545376 +0000 UTC m=+0.087733487 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:06:02 np0005539564 podman[314076]: 2025-11-29 09:06:02.547583583 +0000 UTC m=+0.101921231 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:06:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:06:03.786 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:06:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:06:03.787 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:06:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:06:03.787 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:06:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:03.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:06:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:04.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:06:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:05.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:06.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:07 np0005539564 nova_compute[226295]: 2025-11-29 09:06:07.337 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:06:07 np0005539564 nova_compute[226295]: 2025-11-29 09:06:07.339 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:06:07 np0005539564 nova_compute[226295]: 2025-11-29 09:06:07.340 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 04:06:07 np0005539564 nova_compute[226295]: 2025-11-29 09:06:07.340 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:06:07 np0005539564 nova_compute[226295]: 2025-11-29 09:06:07.398 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:07 np0005539564 nova_compute[226295]: 2025-11-29 09:06:07.399 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:06:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:07.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:08.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:09.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:06:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:10.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:06:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:11 np0005539564 nova_compute[226295]: 2025-11-29 09:06:11.793 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:11.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:12.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:12 np0005539564 nova_compute[226295]: 2025-11-29 09:06:12.264 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764407157.26282, d9a8d4aa-59f5-4c17-9092-c1e0684f682e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:06:12 np0005539564 nova_compute[226295]: 2025-11-29 09:06:12.265 226310 INFO nova.compute.manager [-] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] VM Stopped (Lifecycle Event)#033[00m
Nov 29 04:06:12 np0005539564 nova_compute[226295]: 2025-11-29 09:06:12.293 226310 DEBUG nova.compute.manager [None req-4b601803-6c2d-4ec1-aa25-5f5c04ba8b63 - - - - - -] [instance: d9a8d4aa-59f5-4c17-9092-c1e0684f682e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:06:12 np0005539564 nova_compute[226295]: 2025-11-29 09:06:12.449 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:13.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:14.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:15.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:16.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #175. Immutable memtables: 0.
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:06:17.404988) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 175
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407177405080, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 1696, "num_deletes": 256, "total_data_size": 4062990, "memory_usage": 4116712, "flush_reason": "Manual Compaction"}
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #176: started
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407177429539, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 176, "file_size": 2659495, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 85531, "largest_seqno": 87222, "table_properties": {"data_size": 2652279, "index_size": 4222, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14829, "raw_average_key_size": 19, "raw_value_size": 2637990, "raw_average_value_size": 3545, "num_data_blocks": 184, "num_entries": 744, "num_filter_entries": 744, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407028, "oldest_key_time": 1764407028, "file_creation_time": 1764407177, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 24702 microseconds, and 9244 cpu microseconds.
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:06:17.429696) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #176: 2659495 bytes OK
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:06:17.429738) [db/memtable_list.cc:519] [default] Level-0 commit table #176 started
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:06:17.431965) [db/memtable_list.cc:722] [default] Level-0 commit table #176: memtable #1 done
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:06:17.431994) EVENT_LOG_v1 {"time_micros": 1764407177431983, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:06:17.432023) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 4055290, prev total WAL file size 4055290, number of live WAL files 2.
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000172.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:06:17.434760) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323733' seq:72057594037927935, type:22 .. '6C6F676D0033353235' seq:0, type:0; will stop at (end)
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [176(2597KB)], [174(11MB)]
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407177434814, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [176], "files_L6": [174], "score": -1, "input_data_size": 15125993, "oldest_snapshot_seqno": -1}
Nov 29 04:06:17 np0005539564 nova_compute[226295]: 2025-11-29 09:06:17.451 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #177: 11176 keys, 14981578 bytes, temperature: kUnknown
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407177594533, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 177, "file_size": 14981578, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14909014, "index_size": 43523, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27973, "raw_key_size": 295664, "raw_average_key_size": 26, "raw_value_size": 14713138, "raw_average_value_size": 1316, "num_data_blocks": 1657, "num_entries": 11176, "num_filter_entries": 11176, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764407177, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 177, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:06:17.594882) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 14981578 bytes
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:06:17.596683) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 94.6 rd, 93.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 11.9 +0.0 blob) out(14.3 +0.0 blob), read-write-amplify(11.3) write-amplify(5.6) OK, records in: 11705, records dropped: 529 output_compression: NoCompression
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:06:17.596703) EVENT_LOG_v1 {"time_micros": 1764407177596692, "job": 112, "event": "compaction_finished", "compaction_time_micros": 159828, "compaction_time_cpu_micros": 58618, "output_level": 6, "num_output_files": 1, "total_output_size": 14981578, "num_input_records": 11705, "num_output_records": 11176, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407177597275, "job": 112, "event": "table_file_deletion", "file_number": 176}
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000174.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407177599852, "job": 112, "event": "table_file_deletion", "file_number": 174}
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:06:17.434578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:06:17.600063) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:06:17.600073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:06:17.600074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:06:17.600076) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:06:17 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:06:17.600077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:06:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:17.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:18.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:19.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:20.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:21.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:22.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:22 np0005539564 nova_compute[226295]: 2025-11-29 09:06:22.454 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:06:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:23.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:24.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:25.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:26.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:27 np0005539564 nova_compute[226295]: 2025-11-29 09:06:27.457 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:06:27 np0005539564 nova_compute[226295]: 2025-11-29 09:06:27.458 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:27 np0005539564 nova_compute[226295]: 2025-11-29 09:06:27.458 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 04:06:27 np0005539564 nova_compute[226295]: 2025-11-29 09:06:27.458 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:06:27 np0005539564 nova_compute[226295]: 2025-11-29 09:06:27.459 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:06:27 np0005539564 nova_compute[226295]: 2025-11-29 09:06:27.460 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:27.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 04:06:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/475602414' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 04:06:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 04:06:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/475602414' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 04:06:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:28.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:29.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:30.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:31 np0005539564 nova_compute[226295]: 2025-11-29 09:06:31.413 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:31.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:32.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:32 np0005539564 nova_compute[226295]: 2025-11-29 09:06:32.461 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:33 np0005539564 podman[314142]: 2025-11-29 09:06:33.516605331 +0000 UTC m=+0.068039393 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 04:06:33 np0005539564 podman[314143]: 2025-11-29 09:06:33.52649474 +0000 UTC m=+0.068755884 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 04:06:33 np0005539564 podman[314141]: 2025-11-29 09:06:33.575857196 +0000 UTC m=+0.131830561 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:06:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:33.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:06:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:34.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:06:34 np0005539564 nova_compute[226295]: 2025-11-29 09:06:34.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:34 np0005539564 nova_compute[226295]: 2025-11-29 09:06:34.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:35.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:36.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:37 np0005539564 nova_compute[226295]: 2025-11-29 09:06:37.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:37 np0005539564 nova_compute[226295]: 2025-11-29 09:06:37.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:06:37 np0005539564 nova_compute[226295]: 2025-11-29 09:06:37.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:06:37 np0005539564 nova_compute[226295]: 2025-11-29 09:06:37.376 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:06:37 np0005539564 nova_compute[226295]: 2025-11-29 09:06:37.463 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:06:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:37.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:38.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:39.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:40.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:41 np0005539564 nova_compute[226295]: 2025-11-29 09:06:41.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:41 np0005539564 nova_compute[226295]: 2025-11-29 09:06:41.361 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:41 np0005539564 nova_compute[226295]: 2025-11-29 09:06:41.361 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:41 np0005539564 nova_compute[226295]: 2025-11-29 09:06:41.361 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:06:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:41.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:42.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:42 np0005539564 nova_compute[226295]: 2025-11-29 09:06:42.466 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:43.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:44.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:45.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:06:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:46.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:06:47 np0005539564 nova_compute[226295]: 2025-11-29 09:06:47.469 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:06:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:06:47.958 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=96, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=95) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:06:47 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:06:47.959 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:06:47 np0005539564 nova_compute[226295]: 2025-11-29 09:06:47.959 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:47.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:48.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:48 np0005539564 nova_compute[226295]: 2025-11-29 09:06:48.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:49 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:06:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:49.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:50.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:50 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:06:50 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:06:51 np0005539564 nova_compute[226295]: 2025-11-29 09:06:51.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:51 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:06:51.961 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '96'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:06:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:51.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:52.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:52 np0005539564 nova_compute[226295]: 2025-11-29 09:06:52.473 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:06:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:54.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:06:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:06:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:54.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:06:54 np0005539564 ovn_controller[130591]: 2025-11-29T09:06:54Z|00882|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 29 04:06:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:56.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:56.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:56 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:06:56 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:06:57 np0005539564 nova_compute[226295]: 2025-11-29 09:06:57.477 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:58.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:06:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:06:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:58.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:06:58 np0005539564 nova_compute[226295]: 2025-11-29 09:06:58.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:58 np0005539564 nova_compute[226295]: 2025-11-29 09:06:58.404 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:06:58 np0005539564 nova_compute[226295]: 2025-11-29 09:06:58.405 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:06:58 np0005539564 nova_compute[226295]: 2025-11-29 09:06:58.406 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:06:58 np0005539564 nova_compute[226295]: 2025-11-29 09:06:58.406 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:06:58 np0005539564 nova_compute[226295]: 2025-11-29 09:06:58.407 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:06:58 np0005539564 nova_compute[226295]: 2025-11-29 09:06:58.907 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:06:59 np0005539564 nova_compute[226295]: 2025-11-29 09:06:59.157 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:06:59 np0005539564 nova_compute[226295]: 2025-11-29 09:06:59.159 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4190MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:06:59 np0005539564 nova_compute[226295]: 2025-11-29 09:06:59.160 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:06:59 np0005539564 nova_compute[226295]: 2025-11-29 09:06:59.161 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:06:59 np0005539564 nova_compute[226295]: 2025-11-29 09:06:59.258 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:06:59 np0005539564 nova_compute[226295]: 2025-11-29 09:06:59.258 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:06:59 np0005539564 nova_compute[226295]: 2025-11-29 09:06:59.427 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:06:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:06:59 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2329302652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:06:59 np0005539564 nova_compute[226295]: 2025-11-29 09:06:59.933 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:06:59 np0005539564 nova_compute[226295]: 2025-11-29 09:06:59.943 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:06:59 np0005539564 nova_compute[226295]: 2025-11-29 09:06:59.977 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:07:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:00 np0005539564 nova_compute[226295]: 2025-11-29 09:07:00.012 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:07:00 np0005539564 nova_compute[226295]: 2025-11-29 09:07:00.012 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:07:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:07:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:00.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:07:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:00.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:02.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:02.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:02 np0005539564 nova_compute[226295]: 2025-11-29 09:07:02.480 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:07:02 np0005539564 nova_compute[226295]: 2025-11-29 09:07:02.482 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:07:02 np0005539564 nova_compute[226295]: 2025-11-29 09:07:02.483 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 04:07:02 np0005539564 nova_compute[226295]: 2025-11-29 09:07:02.483 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:07:02 np0005539564 nova_compute[226295]: 2025-11-29 09:07:02.532 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:02 np0005539564 nova_compute[226295]: 2025-11-29 09:07:02.533 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:07:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:07:03.787 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:07:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:07:03.788 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:07:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:07:03.789 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:07:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:04.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:04.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:04 np0005539564 podman[314432]: 2025-11-29 09:07:04.518492068 +0000 UTC m=+0.068416484 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 04:07:04 np0005539564 podman[314433]: 2025-11-29 09:07:04.54223171 +0000 UTC m=+0.080338496 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:07:04 np0005539564 podman[314431]: 2025-11-29 09:07:04.582763708 +0000 UTC m=+0.140310481 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 04:07:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:06.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:06.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:07 np0005539564 nova_compute[226295]: 2025-11-29 09:07:07.534 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:07:07 np0005539564 nova_compute[226295]: 2025-11-29 09:07:07.536 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:07:07 np0005539564 nova_compute[226295]: 2025-11-29 09:07:07.536 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 04:07:07 np0005539564 nova_compute[226295]: 2025-11-29 09:07:07.536 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:07:07 np0005539564 nova_compute[226295]: 2025-11-29 09:07:07.595 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:07 np0005539564 nova_compute[226295]: 2025-11-29 09:07:07.596 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:07:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:08.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:08.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:07:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:10.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:07:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:10.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:12.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:12.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:12 np0005539564 nova_compute[226295]: 2025-11-29 09:07:12.598 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:14.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:14.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:16.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:16.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:17 np0005539564 nova_compute[226295]: 2025-11-29 09:07:17.601 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:07:17 np0005539564 nova_compute[226295]: 2025-11-29 09:07:17.603 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:07:17 np0005539564 nova_compute[226295]: 2025-11-29 09:07:17.604 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 04:07:17 np0005539564 nova_compute[226295]: 2025-11-29 09:07:17.604 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:07:17 np0005539564 nova_compute[226295]: 2025-11-29 09:07:17.634 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:17 np0005539564 nova_compute[226295]: 2025-11-29 09:07:17.635 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:07:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:18.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:07:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:18.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:07:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:20.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:20.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:22.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:22.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:22 np0005539564 nova_compute[226295]: 2025-11-29 09:07:22.636 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:24.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:24.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:26.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:26.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:27 np0005539564 nova_compute[226295]: 2025-11-29 09:07:27.640 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:28.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:28.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:30.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:30.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:32 np0005539564 nova_compute[226295]: 2025-11-29 09:07:32.005 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 04:07:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:32.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #178. Immutable memtables: 0.
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:07:32.115187) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 178
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407252115288, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 930, "num_deletes": 251, "total_data_size": 1950577, "memory_usage": 1968368, "flush_reason": "Manual Compaction"}
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #179: started
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407252129180, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 179, "file_size": 1287117, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 87227, "largest_seqno": 88152, "table_properties": {"data_size": 1282808, "index_size": 2024, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9448, "raw_average_key_size": 19, "raw_value_size": 1274216, "raw_average_value_size": 2649, "num_data_blocks": 90, "num_entries": 481, "num_filter_entries": 481, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407178, "oldest_key_time": 1764407178, "file_creation_time": 1764407252, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 14056 microseconds, and 5300 cpu microseconds.
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:07:32.129249) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #179: 1287117 bytes OK
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:07:32.129301) [db/memtable_list.cc:519] [default] Level-0 commit table #179 started
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:07:32.131515) [db/memtable_list.cc:722] [default] Level-0 commit table #179: memtable #1 done
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:07:32.131542) EVENT_LOG_v1 {"time_micros": 1764407252131533, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:07:32.131568) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 1945940, prev total WAL file size 1945940, number of live WAL files 2.
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000175.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:07:32.132735) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [179(1256KB)], [177(14MB)]
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407252132784, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [179], "files_L6": [177], "score": -1, "input_data_size": 16268695, "oldest_snapshot_seqno": -1}
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #180: 11142 keys, 14303207 bytes, temperature: kUnknown
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407252251537, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 180, "file_size": 14303207, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14231466, "index_size": 42764, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27909, "raw_key_size": 295650, "raw_average_key_size": 26, "raw_value_size": 14036481, "raw_average_value_size": 1259, "num_data_blocks": 1620, "num_entries": 11142, "num_filter_entries": 11142, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764407252, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 180, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:07:32.251888) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 14303207 bytes
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:07:32.254088) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.9 rd, 120.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 14.3 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(23.8) write-amplify(11.1) OK, records in: 11657, records dropped: 515 output_compression: NoCompression
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:07:32.254117) EVENT_LOG_v1 {"time_micros": 1764407252254103, "job": 114, "event": "compaction_finished", "compaction_time_micros": 118857, "compaction_time_cpu_micros": 54598, "output_level": 6, "num_output_files": 1, "total_output_size": 14303207, "num_input_records": 11657, "num_output_records": 11142, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407252254627, "job": 114, "event": "table_file_deletion", "file_number": 179}
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000177.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407252259358, "job": 114, "event": "table_file_deletion", "file_number": 177}
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:07:32.132618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:07:32.259403) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:07:32.259409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:07:32.259412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:07:32.259415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:07:32 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:07:32.259418) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:07:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:32.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:32 np0005539564 nova_compute[226295]: 2025-11-29 09:07:32.642 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:07:32 np0005539564 nova_compute[226295]: 2025-11-29 09:07:32.644 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:07:32 np0005539564 nova_compute[226295]: 2025-11-29 09:07:32.644 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 04:07:32 np0005539564 nova_compute[226295]: 2025-11-29 09:07:32.644 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:07:32 np0005539564 nova_compute[226295]: 2025-11-29 09:07:32.676 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:32 np0005539564 nova_compute[226295]: 2025-11-29 09:07:32.677 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:07:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:07:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:34.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:07:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:34.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:34 np0005539564 nova_compute[226295]: 2025-11-29 09:07:34.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:35 np0005539564 podman[314501]: 2025-11-29 09:07:35.539561595 +0000 UTC m=+0.078050304 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 04:07:35 np0005539564 podman[314500]: 2025-11-29 09:07:35.539521084 +0000 UTC m=+0.084829218 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible)
Nov 29 04:07:35 np0005539564 podman[314499]: 2025-11-29 09:07:35.569190638 +0000 UTC m=+0.119473877 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller)
Nov 29 04:07:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:36.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:07:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:36.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:07:36 np0005539564 nova_compute[226295]: 2025-11-29 09:07:36.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:37 np0005539564 nova_compute[226295]: 2025-11-29 09:07:37.678 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:07:37 np0005539564 nova_compute[226295]: 2025-11-29 09:07:37.679 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:07:37 np0005539564 nova_compute[226295]: 2025-11-29 09:07:37.679 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 04:07:37 np0005539564 nova_compute[226295]: 2025-11-29 09:07:37.679 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:07:37 np0005539564 nova_compute[226295]: 2025-11-29 09:07:37.708 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:37 np0005539564 nova_compute[226295]: 2025-11-29 09:07:37.708 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:07:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:38.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 04:07:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:38.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 04:07:39 np0005539564 nova_compute[226295]: 2025-11-29 09:07:39.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:39 np0005539564 nova_compute[226295]: 2025-11-29 09:07:39.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:07:39 np0005539564 nova_compute[226295]: 2025-11-29 09:07:39.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:07:39 np0005539564 nova_compute[226295]: 2025-11-29 09:07:39.538 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:07:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:40.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:07:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:40.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:07:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:42.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:42.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:42 np0005539564 nova_compute[226295]: 2025-11-29 09:07:42.709 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:07:42 np0005539564 nova_compute[226295]: 2025-11-29 09:07:42.711 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:07:42 np0005539564 nova_compute[226295]: 2025-11-29 09:07:42.712 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 04:07:42 np0005539564 nova_compute[226295]: 2025-11-29 09:07:42.712 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:07:42 np0005539564 nova_compute[226295]: 2025-11-29 09:07:42.713 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:07:43 np0005539564 nova_compute[226295]: 2025-11-29 09:07:43.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:43 np0005539564 nova_compute[226295]: 2025-11-29 09:07:43.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:43 np0005539564 nova_compute[226295]: 2025-11-29 09:07:43.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:07:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:07:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:44.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:07:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:44.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:46.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:46.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:47 np0005539564 nova_compute[226295]: 2025-11-29 09:07:47.715 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:07:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:48.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:48 np0005539564 nova_compute[226295]: 2025-11-29 09:07:48.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:48.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:07:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:50.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:07:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:50.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:52.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:52.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:52 np0005539564 nova_compute[226295]: 2025-11-29 09:07:52.718 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:07:53 np0005539564 nova_compute[226295]: 2025-11-29 09:07:53.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:54.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:54.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:55 np0005539564 nova_compute[226295]: 2025-11-29 09:07:55.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:55 np0005539564 nova_compute[226295]: 2025-11-29 09:07:55.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 04:07:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 04:07:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:56.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 04:07:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:56.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:57 np0005539564 nova_compute[226295]: 2025-11-29 09:07:57.721 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:07:57 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 04:07:57 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 04:07:57 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:07:57 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:07:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:58.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:58 np0005539564 nova_compute[226295]: 2025-11-29 09:07:58.370 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:07:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:58.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:58 np0005539564 nova_compute[226295]: 2025-11-29 09:07:58.551 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:07:58 np0005539564 nova_compute[226295]: 2025-11-29 09:07:58.552 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:07:58 np0005539564 nova_compute[226295]: 2025-11-29 09:07:58.552 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:07:58 np0005539564 nova_compute[226295]: 2025-11-29 09:07:58.553 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:07:58 np0005539564 nova_compute[226295]: 2025-11-29 09:07:58.554 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:07:58 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:07:58.574 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=97, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=96) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:07:58 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:07:58.575 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:07:58 np0005539564 nova_compute[226295]: 2025-11-29 09:07:58.618 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:58 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:07:58 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:07:58 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:07:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:07:59 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/71947940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:07:59 np0005539564 nova_compute[226295]: 2025-11-29 09:07:59.052 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:07:59 np0005539564 nova_compute[226295]: 2025-11-29 09:07:59.296 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:07:59 np0005539564 nova_compute[226295]: 2025-11-29 09:07:59.297 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4184MB free_disk=20.98827362060547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:07:59 np0005539564 nova_compute[226295]: 2025-11-29 09:07:59.298 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:07:59 np0005539564 nova_compute[226295]: 2025-11-29 09:07:59.298 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:07:59 np0005539564 nova_compute[226295]: 2025-11-29 09:07:59.577 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:07:59 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:07:59.578 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '97'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:07:59 np0005539564 nova_compute[226295]: 2025-11-29 09:07:59.578 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:07:59 np0005539564 nova_compute[226295]: 2025-11-29 09:07:59.605 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:08:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:08:00 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/473711722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:08:00 np0005539564 nova_compute[226295]: 2025-11-29 09:08:00.090 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:08:00 np0005539564 nova_compute[226295]: 2025-11-29 09:08:00.098 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:08:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:00.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:00 np0005539564 nova_compute[226295]: 2025-11-29 09:08:00.354 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:08:00 np0005539564 nova_compute[226295]: 2025-11-29 09:08:00.356 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:08:00 np0005539564 nova_compute[226295]: 2025-11-29 09:08:00.356 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:08:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:00.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:02.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:02.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:02 np0005539564 nova_compute[226295]: 2025-11-29 09:08:02.727 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:08:03.789 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:08:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:08:03.790 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:08:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:08:03.790 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:08:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:04.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:04.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:05 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:08:05 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:08:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:06.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:08:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:06.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:08:06 np0005539564 podman[314788]: 2025-11-29 09:08:06.53137538 +0000 UTC m=+0.067209451 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 04:08:06 np0005539564 podman[314787]: 2025-11-29 09:08:06.541098184 +0000 UTC m=+0.087507401 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Nov 29 04:08:06 np0005539564 podman[314786]: 2025-11-29 09:08:06.58341561 +0000 UTC m=+0.128026139 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:08:07 np0005539564 nova_compute[226295]: 2025-11-29 09:08:07.731 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:08:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:08.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:08.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:10.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:10.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:08:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:12.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:08:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:12.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:12 np0005539564 nova_compute[226295]: 2025-11-29 09:08:12.733 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:08:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:14.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:14 np0005539564 nova_compute[226295]: 2025-11-29 09:08:14.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:14 np0005539564 nova_compute[226295]: 2025-11-29 09:08:14.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 04:08:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:14.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:16.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:16.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:17 np0005539564 nova_compute[226295]: 2025-11-29 09:08:17.735 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:18.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:18.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:08:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:20.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:08:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:20.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:22.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:22.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:22 np0005539564 nova_compute[226295]: 2025-11-29 09:08:22.737 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:08:22 np0005539564 nova_compute[226295]: 2025-11-29 09:08:22.739 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:08:22 np0005539564 nova_compute[226295]: 2025-11-29 09:08:22.739 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 04:08:22 np0005539564 nova_compute[226295]: 2025-11-29 09:08:22.739 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:08:22 np0005539564 nova_compute[226295]: 2025-11-29 09:08:22.763 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:22 np0005539564 nova_compute[226295]: 2025-11-29 09:08:22.764 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:08:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:24.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:24.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:26 np0005539564 nova_compute[226295]: 2025-11-29 09:08:26.007 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 04:08:26 np0005539564 nova_compute[226295]: 2025-11-29 09:08:26.008 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:08:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:26.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:08:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:26.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:27 np0005539564 nova_compute[226295]: 2025-11-29 09:08:27.269 226310 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 2.28 sec#033[00m
Nov 29 04:08:27 np0005539564 nova_compute[226295]: 2025-11-29 09:08:27.765 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:08:27 np0005539564 nova_compute[226295]: 2025-11-29 09:08:27.767 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:08:27 np0005539564 nova_compute[226295]: 2025-11-29 09:08:27.767 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 04:08:27 np0005539564 nova_compute[226295]: 2025-11-29 09:08:27.767 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:08:27 np0005539564 nova_compute[226295]: 2025-11-29 09:08:27.803 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:27 np0005539564 nova_compute[226295]: 2025-11-29 09:08:27.804 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:08:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:28.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:28.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:30.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:30.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:08:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:32.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:08:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:32.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:32 np0005539564 nova_compute[226295]: 2025-11-29 09:08:32.805 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:08:32 np0005539564 nova_compute[226295]: 2025-11-29 09:08:32.807 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:32 np0005539564 nova_compute[226295]: 2025-11-29 09:08:32.807 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 04:08:32 np0005539564 nova_compute[226295]: 2025-11-29 09:08:32.807 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:08:32 np0005539564 nova_compute[226295]: 2025-11-29 09:08:32.808 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:08:32 np0005539564 nova_compute[226295]: 2025-11-29 09:08:32.809 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:34.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:34.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:36.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:36.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:37 np0005539564 podman[314851]: 2025-11-29 09:08:37.506764105 +0000 UTC m=+0.057127009 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:08:37 np0005539564 podman[314852]: 2025-11-29 09:08:37.522554622 +0000 UTC m=+0.068096335 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 04:08:37 np0005539564 podman[314850]: 2025-11-29 09:08:37.569200106 +0000 UTC m=+0.117661658 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 04:08:37 np0005539564 nova_compute[226295]: 2025-11-29 09:08:37.811 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:08:37 np0005539564 nova_compute[226295]: 2025-11-29 09:08:37.813 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:08:37 np0005539564 nova_compute[226295]: 2025-11-29 09:08:37.813 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 04:08:37 np0005539564 nova_compute[226295]: 2025-11-29 09:08:37.813 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:08:37 np0005539564 nova_compute[226295]: 2025-11-29 09:08:37.853 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:37 np0005539564 nova_compute[226295]: 2025-11-29 09:08:37.854 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:08:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:38.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:38.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #181. Immutable memtables: 0.
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:08:39.229019) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 181
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407319229102, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 910, "num_deletes": 250, "total_data_size": 1758116, "memory_usage": 1778200, "flush_reason": "Manual Compaction"}
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #182: started
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407319343107, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 182, "file_size": 761991, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 88157, "largest_seqno": 89062, "table_properties": {"data_size": 758458, "index_size": 1312, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9579, "raw_average_key_size": 20, "raw_value_size": 750852, "raw_average_value_size": 1639, "num_data_blocks": 57, "num_entries": 458, "num_filter_entries": 458, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407253, "oldest_key_time": 1764407253, "file_creation_time": 1764407319, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 114160 microseconds, and 5564 cpu microseconds.
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:08:39.343182) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #182: 761991 bytes OK
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:08:39.343212) [db/memtable_list.cc:519] [default] Level-0 commit table #182 started
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:08:39.346585) [db/memtable_list.cc:722] [default] Level-0 commit table #182: memtable #1 done
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:08:39.346613) EVENT_LOG_v1 {"time_micros": 1764407319346603, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:08:39.346636) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 1753524, prev total WAL file size 1769567, number of live WAL files 2.
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000178.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:08:39.347736) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303138' seq:72057594037927935, type:22 .. '6D6772737461740033323639' seq:0, type:0; will stop at (end)
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [182(744KB)], [180(13MB)]
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407319347816, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [182], "files_L6": [180], "score": -1, "input_data_size": 15065198, "oldest_snapshot_seqno": -1}
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #183: 11108 keys, 11615523 bytes, temperature: kUnknown
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407319815171, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 183, "file_size": 11615523, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11547963, "index_size": 38711, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27781, "raw_key_size": 295140, "raw_average_key_size": 26, "raw_value_size": 11357691, "raw_average_value_size": 1022, "num_data_blocks": 1452, "num_entries": 11108, "num_filter_entries": 11108, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764407319, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 183, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:08:39.815504) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 11615523 bytes
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:08:39.818132) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 32.2 rd, 24.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 13.6 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(35.0) write-amplify(15.2) OK, records in: 11600, records dropped: 492 output_compression: NoCompression
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:08:39.818152) EVENT_LOG_v1 {"time_micros": 1764407319818142, "job": 116, "event": "compaction_finished", "compaction_time_micros": 467459, "compaction_time_cpu_micros": 34168, "output_level": 6, "num_output_files": 1, "total_output_size": 11615523, "num_input_records": 11600, "num_output_records": 11108, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407319818573, "job": 116, "event": "table_file_deletion", "file_number": 182}
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000180.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407319821838, "job": 116, "event": "table_file_deletion", "file_number": 180}
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:08:39.347641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:08:39.822010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:08:39.822018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:08:39.822023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:08:39.822026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:08:39 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:08:39.822029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:08:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:40.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:40.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:42.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:42.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:42 np0005539564 nova_compute[226295]: 2025-11-29 09:08:42.855 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:08:42 np0005539564 nova_compute[226295]: 2025-11-29 09:08:42.855 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:42 np0005539564 nova_compute[226295]: 2025-11-29 09:08:42.856 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 04:08:42 np0005539564 nova_compute[226295]: 2025-11-29 09:08:42.856 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:08:42 np0005539564 nova_compute[226295]: 2025-11-29 09:08:42.856 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:08:42 np0005539564 nova_compute[226295]: 2025-11-29 09:08:42.858 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:44.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:44.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:46.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:46.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:46 np0005539564 nova_compute[226295]: 2025-11-29 09:08:46.979 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:46 np0005539564 nova_compute[226295]: 2025-11-29 09:08:46.980 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:47 np0005539564 nova_compute[226295]: 2025-11-29 09:08:47.858 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:08:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:48.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:08:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:48.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:50.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:50 np0005539564 nova_compute[226295]: 2025-11-29 09:08:50.473 226310 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 3.20 sec#033[00m
Nov 29 04:08:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:50.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:52.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:52.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:52 np0005539564 nova_compute[226295]: 2025-11-29 09:08:52.854 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:52 np0005539564 nova_compute[226295]: 2025-11-29 09:08:52.855 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:08:52 np0005539564 nova_compute[226295]: 2025-11-29 09:08:52.855 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:08:52 np0005539564 nova_compute[226295]: 2025-11-29 09:08:52.859 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:08:52 np0005539564 nova_compute[226295]: 2025-11-29 09:08:52.861 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:08:52 np0005539564 nova_compute[226295]: 2025-11-29 09:08:52.862 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 04:08:52 np0005539564 nova_compute[226295]: 2025-11-29 09:08:52.862 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:08:52 np0005539564 nova_compute[226295]: 2025-11-29 09:08:52.913 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:52 np0005539564 nova_compute[226295]: 2025-11-29 09:08:52.915 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:08:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:54.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:54.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:56.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:56.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:56 np0005539564 nova_compute[226295]: 2025-11-29 09:08:56.758 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:08:56 np0005539564 nova_compute[226295]: 2025-11-29 09:08:56.759 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:56 np0005539564 nova_compute[226295]: 2025-11-29 09:08:56.760 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:56 np0005539564 nova_compute[226295]: 2025-11-29 09:08:56.760 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:56 np0005539564 nova_compute[226295]: 2025-11-29 09:08:56.761 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:56 np0005539564 nova_compute[226295]: 2025-11-29 09:08:56.761 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:56 np0005539564 nova_compute[226295]: 2025-11-29 09:08:56.762 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:57 np0005539564 nova_compute[226295]: 2025-11-29 09:08:57.916 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:08:57 np0005539564 nova_compute[226295]: 2025-11-29 09:08:57.918 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:08:57 np0005539564 nova_compute[226295]: 2025-11-29 09:08:57.918 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 04:08:57 np0005539564 nova_compute[226295]: 2025-11-29 09:08:57.919 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:08:57 np0005539564 nova_compute[226295]: 2025-11-29 09:08:57.946 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:57 np0005539564 nova_compute[226295]: 2025-11-29 09:08:57.947 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:08:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:58.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:08:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:58.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:00.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:00.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:02.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:02.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:02 np0005539564 nova_compute[226295]: 2025-11-29 09:09:02.948 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:09:02 np0005539564 nova_compute[226295]: 2025-11-29 09:09:02.950 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:09:02 np0005539564 nova_compute[226295]: 2025-11-29 09:09:02.950 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 04:09:02 np0005539564 nova_compute[226295]: 2025-11-29 09:09:02.951 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:09:03 np0005539564 nova_compute[226295]: 2025-11-29 09:09:02.998 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:03 np0005539564 nova_compute[226295]: 2025-11-29 09:09:02.999 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:09:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:09:03.791 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:09:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:09:03.792 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:09:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:09:03.792 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:09:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:04.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:04.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:05 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:06.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:06 np0005539564 nova_compute[226295]: 2025-11-29 09:09:06.490 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:09:06 np0005539564 nova_compute[226295]: 2025-11-29 09:09:06.490 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:09:06 np0005539564 nova_compute[226295]: 2025-11-29 09:09:06.491 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:09:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:06.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:07 np0005539564 nova_compute[226295]: 2025-11-29 09:09:07.871 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:09:07 np0005539564 nova_compute[226295]: 2025-11-29 09:09:07.871 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:09:07 np0005539564 nova_compute[226295]: 2025-11-29 09:09:07.872 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:09:07 np0005539564 nova_compute[226295]: 2025-11-29 09:09:07.872 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:09:07 np0005539564 nova_compute[226295]: 2025-11-29 09:09:07.873 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:09:08 np0005539564 nova_compute[226295]: 2025-11-29 09:09:07.999 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:08.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:09:08 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3160855647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:09:08 np0005539564 nova_compute[226295]: 2025-11-29 09:09:08.350 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:09:08 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:09:08 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:09:08 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:09:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:08.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:08 np0005539564 podman[315076]: 2025-11-29 09:09:08.514662709 +0000 UTC m=+0.054490347 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 04:09:08 np0005539564 podman[315071]: 2025-11-29 09:09:08.521667499 +0000 UTC m=+0.069437022 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 04:09:08 np0005539564 podman[315070]: 2025-11-29 09:09:08.533069588 +0000 UTC m=+0.085194098 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Nov 29 04:09:08 np0005539564 nova_compute[226295]: 2025-11-29 09:09:08.550 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:09:08 np0005539564 nova_compute[226295]: 2025-11-29 09:09:08.551 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4175MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:09:08 np0005539564 nova_compute[226295]: 2025-11-29 09:09:08.551 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:09:08 np0005539564 nova_compute[226295]: 2025-11-29 09:09:08.551 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:09:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:10.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:10.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:12.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:12.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:13 np0005539564 nova_compute[226295]: 2025-11-29 09:09:13.002 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:14.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:14.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:15 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:15 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:09:16 np0005539564 nova_compute[226295]: 2025-11-29 09:09:16.089 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:09:16 np0005539564 nova_compute[226295]: 2025-11-29 09:09:16.090 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:09:16 np0005539564 nova_compute[226295]: 2025-11-29 09:09:16.130 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing inventories for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 04:09:16 np0005539564 nova_compute[226295]: 2025-11-29 09:09:16.159 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating ProviderTree inventory for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 04:09:16 np0005539564 nova_compute[226295]: 2025-11-29 09:09:16.160 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating inventory in ProviderTree for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 04:09:16 np0005539564 nova_compute[226295]: 2025-11-29 09:09:16.177 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing aggregate associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 04:09:16 np0005539564 nova_compute[226295]: 2025-11-29 09:09:16.205 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing trait associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 04:09:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:16.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:16 np0005539564 nova_compute[226295]: 2025-11-29 09:09:16.237 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:09:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:16.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:16 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:09:16 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1702390274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:09:16 np0005539564 nova_compute[226295]: 2025-11-29 09:09:16.702 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:09:16 np0005539564 nova_compute[226295]: 2025-11-29 09:09:16.708 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:09:16 np0005539564 nova_compute[226295]: 2025-11-29 09:09:16.748 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:09:16 np0005539564 nova_compute[226295]: 2025-11-29 09:09:16.750 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:09:16 np0005539564 nova_compute[226295]: 2025-11-29 09:09:16.750 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 8.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:09:16 np0005539564 nova_compute[226295]: 2025-11-29 09:09:16.750 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:09:16 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:09:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e429 e429: 3 total, 3 up, 3 in
Nov 29 04:09:18 np0005539564 nova_compute[226295]: 2025-11-29 09:09:18.005 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:18.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:18.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:20.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:20.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:22.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:22.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:23 np0005539564 nova_compute[226295]: 2025-11-29 09:09:23.044 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:09:23 np0005539564 nova_compute[226295]: 2025-11-29 09:09:23.045 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:23 np0005539564 nova_compute[226295]: 2025-11-29 09:09:23.045 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5039 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 04:09:23 np0005539564 nova_compute[226295]: 2025-11-29 09:09:23.045 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:09:23 np0005539564 nova_compute[226295]: 2025-11-29 09:09:23.046 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:09:23 np0005539564 nova_compute[226295]: 2025-11-29 09:09:23.047 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:24.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:24.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:26.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:26.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:28 np0005539564 nova_compute[226295]: 2025-11-29 09:09:28.047 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:28.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:28.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:29 np0005539564 nova_compute[226295]: 2025-11-29 09:09:29.342 226310 DEBUG oslo_concurrency.lockutils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:09:29 np0005539564 nova_compute[226295]: 2025-11-29 09:09:29.343 226310 DEBUG oslo_concurrency.lockutils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:09:29 np0005539564 nova_compute[226295]: 2025-11-29 09:09:29.361 226310 DEBUG nova.compute.manager [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 04:09:29 np0005539564 nova_compute[226295]: 2025-11-29 09:09:29.466 226310 DEBUG oslo_concurrency.lockutils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:09:29 np0005539564 nova_compute[226295]: 2025-11-29 09:09:29.467 226310 DEBUG oslo_concurrency.lockutils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:09:29 np0005539564 nova_compute[226295]: 2025-11-29 09:09:29.477 226310 DEBUG nova.virt.hardware [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 04:09:29 np0005539564 nova_compute[226295]: 2025-11-29 09:09:29.477 226310 INFO nova.compute.claims [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 04:09:29 np0005539564 nova_compute[226295]: 2025-11-29 09:09:29.590 226310 DEBUG oslo_concurrency.processutils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:09:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:30.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:30.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:09:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:32.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:09:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:32.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:32 np0005539564 ceph-mds[84716]: mds.beacon.cephfs.compute-1.oeerwd missed beacon ack from the monitors
Nov 29 04:09:33 np0005539564 nova_compute[226295]: 2025-11-29 09:09:33.049 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:09:33 np0005539564 nova_compute[226295]: 2025-11-29 09:09:33.051 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:33 np0005539564 nova_compute[226295]: 2025-11-29 09:09:33.051 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 04:09:33 np0005539564 nova_compute[226295]: 2025-11-29 09:09:33.051 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:09:33 np0005539564 nova_compute[226295]: 2025-11-29 09:09:33.051 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:09:33 np0005539564 nova_compute[226295]: 2025-11-29 09:09:33.053 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:09:33 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1042361463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:09:33 np0005539564 nova_compute[226295]: 2025-11-29 09:09:33.375 226310 DEBUG oslo_concurrency.processutils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.785s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:09:33 np0005539564 nova_compute[226295]: 2025-11-29 09:09:33.380 226310 DEBUG nova.compute.provider_tree [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 04:09:33 np0005539564 nova_compute[226295]: 2025-11-29 09:09:33.547 226310 DEBUG nova.scheduler.client.report [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 04:09:33 np0005539564 nova_compute[226295]: 2025-11-29 09:09:33.611 226310 DEBUG oslo_concurrency.lockutils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 4.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:09:33 np0005539564 nova_compute[226295]: 2025-11-29 09:09:33.612 226310 DEBUG nova.compute.manager [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 04:09:33 np0005539564 nova_compute[226295]: 2025-11-29 09:09:33.668 226310 DEBUG nova.compute.manager [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 04:09:33 np0005539564 nova_compute[226295]: 2025-11-29 09:09:33.669 226310 DEBUG nova.network.neutron [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 04:09:33 np0005539564 nova_compute[226295]: 2025-11-29 09:09:33.704 226310 INFO nova.virt.libvirt.driver [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 04:09:33 np0005539564 nova_compute[226295]: 2025-11-29 09:09:33.734 226310 DEBUG nova.compute.manager [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 04:09:33 np0005539564 nova_compute[226295]: 2025-11-29 09:09:33.781 226310 INFO nova.virt.block_device [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Booting with volume snapshot 237d90f9-3af7-4b2e-a811-c8302a1ac7f7 at /dev/vda
Nov 29 04:09:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:34.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:34.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:34 np0005539564 nova_compute[226295]: 2025-11-29 09:09:34.557 226310 DEBUG nova.policy [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5ff561a95dc44b9fb9f7fd8fee80f589', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '51af0a2ee11a460ab825a484e5c6f4a3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 04:09:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:36.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:36.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:38 np0005539564 nova_compute[226295]: 2025-11-29 09:09:38.053 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:09:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:38.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:38.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:39 np0005539564 podman[315230]: 2025-11-29 09:09:39.539309465 +0000 UTC m=+0.082833794 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 29 04:09:39 np0005539564 podman[315229]: 2025-11-29 09:09:39.546391688 +0000 UTC m=+0.091120000 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 04:09:39 np0005539564 podman[315228]: 2025-11-29 09:09:39.564785875 +0000 UTC m=+0.114781150 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:09:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:40.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:40.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:42.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:42.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:43 np0005539564 nova_compute[226295]: 2025-11-29 09:09:43.055 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 04:09:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:44.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:44.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:46.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:46.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:48 np0005539564 nova_compute[226295]: 2025-11-29 09:09:48.056 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:09:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:48.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:48.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:09:48.643 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=98, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=97) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 04:09:48 np0005539564 nova_compute[226295]: 2025-11-29 09:09:48.643 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:09:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:09:48.645 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 04:09:48 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:09:48.645 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '98'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 04:09:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:50.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:50.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:50 np0005539564 nova_compute[226295]: 2025-11-29 09:09:50.699 226310 DEBUG os_brick.utils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 04:09:50 np0005539564 nova_compute[226295]: 2025-11-29 09:09:50.703 226310 DEBUG nova.network.neutron [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Successfully created port: 2d688aa8-e9a3-4129-b88d-a7ab81fca989 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 04:09:50 np0005539564 nova_compute[226295]: 2025-11-29 09:09:50.701 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:09:50 np0005539564 nova_compute[226295]: 2025-11-29 09:09:50.714 231810 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 04:09:50 np0005539564 nova_compute[226295]: 2025-11-29 09:09:50.715 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[53aaa10e-3e0d-4824-84b2-1511d22f5e00]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 04:09:50 np0005539564 nova_compute[226295]: 2025-11-29 09:09:50.717 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:09:50 np0005539564 nova_compute[226295]: 2025-11-29 09:09:50.726 231810 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 04:09:50 np0005539564 nova_compute[226295]: 2025-11-29 09:09:50.727 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[7f1b7ce0-f7d6-4e4e-a3e8-e042a8dfccba]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d3b4384eec44', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 04:09:50 np0005539564 nova_compute[226295]: 2025-11-29 09:09:50.729 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:09:50 np0005539564 nova_compute[226295]: 2025-11-29 09:09:50.739 231810 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 04:09:50 np0005539564 nova_compute[226295]: 2025-11-29 09:09:50.740 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[6531388a-44f6-4af1-973d-a1ab91347962]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 04:09:50 np0005539564 nova_compute[226295]: 2025-11-29 09:09:50.741 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[8c62baf7-5ca7-4ed5-9d64-0504e626937b]: (4, '2e858761-3292-4a17-b38f-a169c3064289') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 04:09:50 np0005539564 nova_compute[226295]: 2025-11-29 09:09:50.742 226310 DEBUG oslo_concurrency.processutils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:09:50 np0005539564 nova_compute[226295]: 2025-11-29 09:09:50.786 226310 DEBUG oslo_concurrency.processutils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "nvme version" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 04:09:50 np0005539564 nova_compute[226295]: 2025-11-29 09:09:50.790 226310 DEBUG os_brick.initiator.connectors.lightos [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 04:09:50 np0005539564 nova_compute[226295]: 2025-11-29 09:09:50.790 226310 DEBUG os_brick.initiator.connectors.lightos [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 04:09:50 np0005539564 nova_compute[226295]: 2025-11-29 09:09:50.791 226310 DEBUG os_brick.initiator.connectors.lightos [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 04:09:50 np0005539564 nova_compute[226295]: 2025-11-29 09:09:50.792 226310 DEBUG os_brick.utils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] <== get_connector_properties: return (91ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d3b4384eec44', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2e858761-3292-4a17-b38f-a169c3064289', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 04:09:50 np0005539564 nova_compute[226295]: 2025-11-29 09:09:50.792 226310 DEBUG nova.virt.block_device [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Updating existing volume attachment record: f2156a7c-75e9-4ffe-a015-a4347592bf1a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 04:09:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:09:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:52.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:09:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:52.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:53 np0005539564 nova_compute[226295]: 2025-11-29 09:09:53.059 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:09:54 np0005539564 nova_compute[226295]: 2025-11-29 09:09:54.158 226310 DEBUG nova.network.neutron [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Successfully updated port: 2d688aa8-e9a3-4129-b88d-a7ab81fca989 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 04:09:54 np0005539564 nova_compute[226295]: 2025-11-29 09:09:54.176 226310 DEBUG oslo_concurrency.lockutils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "refresh_cache-d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 04:09:54 np0005539564 nova_compute[226295]: 2025-11-29 09:09:54.177 226310 DEBUG oslo_concurrency.lockutils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquired lock "refresh_cache-d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 04:09:54 np0005539564 nova_compute[226295]: 2025-11-29 09:09:54.177 226310 DEBUG nova.network.neutron [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 04:09:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:54.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:54 np0005539564 nova_compute[226295]: 2025-11-29 09:09:54.508 226310 DEBUG nova.compute.manager [req-5d3701ad-715a-4811-bd8f-6fb2b09fd2c8 req-8a7fd961-1f0e-4aa6-bd8e-de62d5856b9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Received event network-changed-2d688aa8-e9a3-4129-b88d-a7ab81fca989 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 04:09:54 np0005539564 nova_compute[226295]: 2025-11-29 09:09:54.508 226310 DEBUG nova.compute.manager [req-5d3701ad-715a-4811-bd8f-6fb2b09fd2c8 req-8a7fd961-1f0e-4aa6-bd8e-de62d5856b9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Refreshing instance network info cache due to event network-changed-2d688aa8-e9a3-4129-b88d-a7ab81fca989. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 04:09:54 np0005539564 nova_compute[226295]: 2025-11-29 09:09:54.509 226310 DEBUG oslo_concurrency.lockutils [req-5d3701ad-715a-4811-bd8f-6fb2b09fd2c8 req-8a7fd961-1f0e-4aa6-bd8e-de62d5856b9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 04:09:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:54.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:54 np0005539564 nova_compute[226295]: 2025-11-29 09:09:54.644 226310 DEBUG nova.compute.manager [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 04:09:54 np0005539564 nova_compute[226295]: 2025-11-29 09:09:54.646 226310 DEBUG nova.virt.libvirt.driver [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 04:09:54 np0005539564 nova_compute[226295]: 2025-11-29 09:09:54.646 226310 INFO nova.virt.libvirt.driver [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Creating image(s)
Nov 29 04:09:54 np0005539564 nova_compute[226295]: 2025-11-29 09:09:54.647 226310 DEBUG nova.virt.libvirt.driver [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 04:09:54 np0005539564 nova_compute[226295]: 2025-11-29 09:09:54.647 226310 DEBUG nova.virt.libvirt.driver [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Ensure instance console log exists: /var/lib/nova/instances/d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 04:09:54 np0005539564 nova_compute[226295]: 2025-11-29 09:09:54.648 226310 DEBUG oslo_concurrency.lockutils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:09:54 np0005539564 nova_compute[226295]: 2025-11-29 09:09:54.648 226310 DEBUG oslo_concurrency.lockutils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:09:54 np0005539564 nova_compute[226295]: 2025-11-29 09:09:54.648 226310 DEBUG oslo_concurrency.lockutils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:09:55 np0005539564 nova_compute[226295]: 2025-11-29 09:09:55.541 226310 DEBUG nova.network.neutron [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 04:09:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:09:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:56.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:09:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:56.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:58 np0005539564 nova_compute[226295]: 2025-11-29 09:09:58.062 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:58.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:09:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:58.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:58 np0005539564 nova_compute[226295]: 2025-11-29 09:09:58.614 226310 DEBUG nova.network.neutron [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Updating instance_info_cache with network_info: [{"id": "2d688aa8-e9a3-4129-b88d-a7ab81fca989", "address": "fa:16:3e:d6:2c:42", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d688aa8-e9", "ovs_interfaceid": "2d688aa8-e9a3-4129-b88d-a7ab81fca989", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:09:58 np0005539564 nova_compute[226295]: 2025-11-29 09:09:58.662 226310 DEBUG oslo_concurrency.lockutils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Releasing lock "refresh_cache-d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:09:58 np0005539564 nova_compute[226295]: 2025-11-29 09:09:58.662 226310 DEBUG nova.compute.manager [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Instance network_info: |[{"id": "2d688aa8-e9a3-4129-b88d-a7ab81fca989", "address": "fa:16:3e:d6:2c:42", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d688aa8-e9", "ovs_interfaceid": "2d688aa8-e9a3-4129-b88d-a7ab81fca989", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 04:09:58 np0005539564 nova_compute[226295]: 2025-11-29 09:09:58.663 226310 DEBUG oslo_concurrency.lockutils [req-5d3701ad-715a-4811-bd8f-6fb2b09fd2c8 req-8a7fd961-1f0e-4aa6-bd8e-de62d5856b9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:09:58 np0005539564 nova_compute[226295]: 2025-11-29 09:09:58.663 226310 DEBUG nova.network.neutron [req-5d3701ad-715a-4811-bd8f-6fb2b09fd2c8 req-8a7fd961-1f0e-4aa6-bd8e-de62d5856b9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Refreshing network info cache for port 2d688aa8-e9a3-4129-b88d-a7ab81fca989 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 04:09:58 np0005539564 nova_compute[226295]: 2025-11-29 09:09:58.666 226310 DEBUG nova.virt.libvirt.driver [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Start _get_guest_xml network_info=[{"id": "2d688aa8-e9a3-4129-b88d-a7ab81fca989", "address": "fa:16:3e:d6:2c:42", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d688aa8-e9", "ovs_interfaceid": "2d688aa8-e9a3-4129-b88d-a7ab81fca989", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-42ac62ec-79ea-47f9-8105-4e6fd3447306', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '42ac62ec-79ea-47f9-8105-4e6fd3447306', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'd3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d', 'attached_at': '', 'detached_at': '', 'volume_id': '42ac62ec-79ea-47f9-8105-4e6fd3447306', 'serial': '42ac62ec-79ea-47f9-8105-4e6fd3447306'}, 'guest_format': None, 'delete_on_termination': True, 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': 'f2156a7c-75e9-4ffe-a015-a4347592bf1a', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 04:09:58 np0005539564 nova_compute[226295]: 2025-11-29 09:09:58.672 226310 WARNING nova.virt.libvirt.driver [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:09:58 np0005539564 nova_compute[226295]: 2025-11-29 09:09:58.682 226310 DEBUG nova.virt.libvirt.host [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 04:09:58 np0005539564 nova_compute[226295]: 2025-11-29 09:09:58.683 226310 DEBUG nova.virt.libvirt.host [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 04:09:58 np0005539564 nova_compute[226295]: 2025-11-29 09:09:58.697 226310 DEBUG nova.virt.libvirt.host [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 04:09:58 np0005539564 nova_compute[226295]: 2025-11-29 09:09:58.698 226310 DEBUG nova.virt.libvirt.host [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 04:09:58 np0005539564 nova_compute[226295]: 2025-11-29 09:09:58.700 226310 DEBUG nova.virt.libvirt.driver [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 04:09:58 np0005539564 nova_compute[226295]: 2025-11-29 09:09:58.701 226310 DEBUG nova.virt.hardware [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 04:09:58 np0005539564 nova_compute[226295]: 2025-11-29 09:09:58.702 226310 DEBUG nova.virt.hardware [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 04:09:58 np0005539564 nova_compute[226295]: 2025-11-29 09:09:58.703 226310 DEBUG nova.virt.hardware [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 04:09:58 np0005539564 nova_compute[226295]: 2025-11-29 09:09:58.703 226310 DEBUG nova.virt.hardware [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 04:09:58 np0005539564 nova_compute[226295]: 2025-11-29 09:09:58.704 226310 DEBUG nova.virt.hardware [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 04:09:58 np0005539564 nova_compute[226295]: 2025-11-29 09:09:58.704 226310 DEBUG nova.virt.hardware [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 04:09:58 np0005539564 nova_compute[226295]: 2025-11-29 09:09:58.705 226310 DEBUG nova.virt.hardware [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 04:09:58 np0005539564 nova_compute[226295]: 2025-11-29 09:09:58.705 226310 DEBUG nova.virt.hardware [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 04:09:58 np0005539564 nova_compute[226295]: 2025-11-29 09:09:58.706 226310 DEBUG nova.virt.hardware [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 04:09:58 np0005539564 nova_compute[226295]: 2025-11-29 09:09:58.706 226310 DEBUG nova.virt.hardware [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 04:09:58 np0005539564 nova_compute[226295]: 2025-11-29 09:09:58.706 226310 DEBUG nova.virt.hardware [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 04:09:58 np0005539564 nova_compute[226295]: 2025-11-29 09:09:58.755 226310 DEBUG nova.storage.rbd_utils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] rbd image d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:09:58 np0005539564 nova_compute[226295]: 2025-11-29 09:09:58.761 226310 DEBUG oslo_concurrency.processutils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:09:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 04:09:59 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1739398218' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.219 226310 DEBUG oslo_concurrency.processutils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.249 226310 DEBUG nova.virt.libvirt.vif [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T09:09:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1826346449',display_name='tempest-TestVolumeBootPattern-server-1826346449',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1826346449',id=219,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='51af0a2ee11a460ab825a484e5c6f4a3',ramdisk_id='',reservation_id='r-81vmtj70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-531976395',owner_user_name='tempest-TestVolumeBootPattern-531976395-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_cert
s=None,updated_at=2025-11-29T09:09:33Z,user_data=None,user_id='5ff561a95dc44b9fb9f7fd8fee80f589',uuid=d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d688aa8-e9a3-4129-b88d-a7ab81fca989", "address": "fa:16:3e:d6:2c:42", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d688aa8-e9", "ovs_interfaceid": "2d688aa8-e9a3-4129-b88d-a7ab81fca989", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.250 226310 DEBUG nova.network.os_vif_util [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converting VIF {"id": "2d688aa8-e9a3-4129-b88d-a7ab81fca989", "address": "fa:16:3e:d6:2c:42", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d688aa8-e9", "ovs_interfaceid": "2d688aa8-e9a3-4129-b88d-a7ab81fca989", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.251 226310 DEBUG nova.network.os_vif_util [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:2c:42,bridge_name='br-int',has_traffic_filtering=True,id=2d688aa8-e9a3-4129-b88d-a7ab81fca989,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d688aa8-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.252 226310 DEBUG nova.objects.instance [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lazy-loading 'pci_devices' on Instance uuid d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.271 226310 DEBUG nova.virt.libvirt.driver [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] End _get_guest_xml xml=<domain type="kvm">
Nov 29 04:09:59 np0005539564 nova_compute[226295]:  <uuid>d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d</uuid>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:  <name>instance-000000db</name>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <nova:name>tempest-TestVolumeBootPattern-server-1826346449</nova:name>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 09:09:58</nova:creationTime>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 04:09:59 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:        <nova:user uuid="5ff561a95dc44b9fb9f7fd8fee80f589">tempest-TestVolumeBootPattern-531976395-project-member</nova:user>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:        <nova:project uuid="51af0a2ee11a460ab825a484e5c6f4a3">tempest-TestVolumeBootPattern-531976395</nova:project>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:        <nova:port uuid="2d688aa8-e9a3-4129-b88d-a7ab81fca989">
Nov 29 04:09:59 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <system>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <entry name="serial">d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d</entry>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <entry name="uuid">d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d</entry>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    </system>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:  <os>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:  </os>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:  <features>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:  </features>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:  </clock>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:  <devices>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d_disk.config">
Nov 29 04:09:59 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      </source>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 04:09:59 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      </auth>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    </disk>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="volumes/volume-42ac62ec-79ea-47f9-8105-4e6fd3447306">
Nov 29 04:09:59 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      </source>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 04:09:59 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      </auth>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <serial>42ac62ec-79ea-47f9-8105-4e6fd3447306</serial>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    </disk>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:d6:2c:42"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <target dev="tap2d688aa8-e9"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    </interface>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d/console.log" append="off"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    </serial>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <video>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    </video>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    </rng>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 04:09:59 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 04:09:59 np0005539564 nova_compute[226295]:  </devices>
Nov 29 04:09:59 np0005539564 nova_compute[226295]: </domain>
Nov 29 04:09:59 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.272 226310 DEBUG nova.compute.manager [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Preparing to wait for external event network-vif-plugged-2d688aa8-e9a3-4129-b88d-a7ab81fca989 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.273 226310 DEBUG oslo_concurrency.lockutils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.273 226310 DEBUG oslo_concurrency.lockutils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.273 226310 DEBUG oslo_concurrency.lockutils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.274 226310 DEBUG nova.virt.libvirt.vif [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T09:09:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1826346449',display_name='tempest-TestVolumeBootPattern-server-1826346449',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1826346449',id=219,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='51af0a2ee11a460ab825a484e5c6f4a3',ramdisk_id='',reservation_id='r-81vmtj70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-531976395',owner_user_name='tempest-TestVolumeBootPattern-531976395-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T09:09:33Z,user_data=None,user_id='5ff561a95dc44b9fb9f7fd8fee80f589',uuid=d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d688aa8-e9a3-4129-b88d-a7ab81fca989", "address": "fa:16:3e:d6:2c:42", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d688aa8-e9", "ovs_interfaceid": "2d688aa8-e9a3-4129-b88d-a7ab81fca989", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.274 226310 DEBUG nova.network.os_vif_util [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converting VIF {"id": "2d688aa8-e9a3-4129-b88d-a7ab81fca989", "address": "fa:16:3e:d6:2c:42", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d688aa8-e9", "ovs_interfaceid": "2d688aa8-e9a3-4129-b88d-a7ab81fca989", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.275 226310 DEBUG nova.network.os_vif_util [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:2c:42,bridge_name='br-int',has_traffic_filtering=True,id=2d688aa8-e9a3-4129-b88d-a7ab81fca989,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d688aa8-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.276 226310 DEBUG os_vif [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:2c:42,bridge_name='br-int',has_traffic_filtering=True,id=2d688aa8-e9a3-4129-b88d-a7ab81fca989,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d688aa8-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.276 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.277 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.277 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.282 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.282 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d688aa8-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.282 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2d688aa8-e9, col_values=(('external_ids', {'iface-id': '2d688aa8-e9a3-4129-b88d-a7ab81fca989', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:2c:42', 'vm-uuid': 'd3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.284 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:59 np0005539564 NetworkManager[48997]: <info>  [1764407399.2850] manager: (tap2d688aa8-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/406)
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.288 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.293 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.295 226310 INFO os_vif [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:2c:42,bridge_name='br-int',has_traffic_filtering=True,id=2d688aa8-e9a3-4129-b88d-a7ab81fca989,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d688aa8-e9')#033[00m
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.362 226310 DEBUG nova.virt.libvirt.driver [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.363 226310 DEBUG nova.virt.libvirt.driver [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.363 226310 DEBUG nova.virt.libvirt.driver [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] No VIF found with MAC fa:16:3e:d6:2c:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.364 226310 INFO nova.virt.libvirt.driver [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Using config drive#033[00m
Nov 29 04:09:59 np0005539564 nova_compute[226295]: 2025-11-29 09:09:59.392 226310 DEBUG nova.storage.rbd_utils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] rbd image d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:10:00 np0005539564 nova_compute[226295]: 2025-11-29 09:10:00.145 226310 INFO nova.virt.libvirt.driver [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Creating config drive at /var/lib/nova/instances/d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d/disk.config#033[00m
Nov 29 04:10:00 np0005539564 nova_compute[226295]: 2025-11-29 09:10:00.155 226310 DEBUG oslo_concurrency.processutils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9g6ctba6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:10:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:00.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:00 np0005539564 nova_compute[226295]: 2025-11-29 09:10:00.317 226310 DEBUG oslo_concurrency.processutils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9g6ctba6" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:10:00 np0005539564 nova_compute[226295]: 2025-11-29 09:10:00.368 226310 DEBUG nova.storage.rbd_utils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] rbd image d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:10:00 np0005539564 nova_compute[226295]: 2025-11-29 09:10:00.373 226310 DEBUG oslo_concurrency.processutils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d/disk.config d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:10:00 np0005539564 nova_compute[226295]: 2025-11-29 09:10:00.578 226310 DEBUG oslo_concurrency.processutils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d/disk.config d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:10:00 np0005539564 nova_compute[226295]: 2025-11-29 09:10:00.579 226310 INFO nova.virt.libvirt.driver [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Deleting local config drive /var/lib/nova/instances/d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d/disk.config because it was imported into RBD.#033[00m
Nov 29 04:10:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:00.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:00 np0005539564 kernel: tap2d688aa8-e9: entered promiscuous mode
Nov 29 04:10:00 np0005539564 NetworkManager[48997]: <info>  [1764407400.6587] manager: (tap2d688aa8-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/407)
Nov 29 04:10:00 np0005539564 nova_compute[226295]: 2025-11-29 09:10:00.659 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:00 np0005539564 ovn_controller[130591]: 2025-11-29T09:10:00Z|00883|binding|INFO|Claiming lport 2d688aa8-e9a3-4129-b88d-a7ab81fca989 for this chassis.
Nov 29 04:10:00 np0005539564 ovn_controller[130591]: 2025-11-29T09:10:00Z|00884|binding|INFO|2d688aa8-e9a3-4129-b88d-a7ab81fca989: Claiming fa:16:3e:d6:2c:42 10.100.0.13
Nov 29 04:10:00 np0005539564 nova_compute[226295]: 2025-11-29 09:10:00.670 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:00.684 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:2c:42 10.100.0.13'], port_security=['fa:16:3e:d6:2c:42 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51af0a2ee11a460ab825a484e5c6f4a3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bbe29fc0-1435-473c-891a-fae6e52fd8dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26c70775-c49f-4c45-91d6-cdc9893e63eb, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=2d688aa8-e9a3-4129-b88d-a7ab81fca989) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:10:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:00.686 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 2d688aa8-e9a3-4129-b88d-a7ab81fca989 in datapath 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad bound to our chassis#033[00m
Nov 29 04:10:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:00.687 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad#033[00m
Nov 29 04:10:00 np0005539564 systemd-udevd[315407]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 04:10:00 np0005539564 NetworkManager[48997]: <info>  [1764407400.7072] device (tap2d688aa8-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 04:10:00 np0005539564 NetworkManager[48997]: <info>  [1764407400.7081] device (tap2d688aa8-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 04:10:00 np0005539564 systemd-machined[190128]: New machine qemu-102-instance-000000db.
Nov 29 04:10:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:00.706 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a1dc04c8-f8f3-48ea-8f5e-90c0876e96a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:10:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:00.708 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8aaf4606-91 in ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 04:10:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:00.711 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8aaf4606-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 04:10:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:00.712 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[c5faeaaf-4d67-4aa2-86f7-2d77866c8f72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:10:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:00.714 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e8f43a01-babc-4b4d-a093-d7b4cb785373]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:10:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:00.731 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[d7124973-127a-467c-bb25-9e420f149756]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:10:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:00.753 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7883c09b-5349-4fdb-84fd-c9a3e6f1aa36]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:10:00 np0005539564 systemd[1]: Started Virtual Machine qemu-102-instance-000000db.
Nov 29 04:10:00 np0005539564 nova_compute[226295]: 2025-11-29 09:10:00.756 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:00 np0005539564 ovn_controller[130591]: 2025-11-29T09:10:00Z|00885|binding|INFO|Setting lport 2d688aa8-e9a3-4129-b88d-a7ab81fca989 ovn-installed in OVS
Nov 29 04:10:00 np0005539564 ovn_controller[130591]: 2025-11-29T09:10:00Z|00886|binding|INFO|Setting lport 2d688aa8-e9a3-4129-b88d-a7ab81fca989 up in Southbound
Nov 29 04:10:00 np0005539564 nova_compute[226295]: 2025-11-29 09:10:00.760 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:00.787 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[b525bc93-4cf0-4594-95bc-afa59e197b70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:10:00 np0005539564 NetworkManager[48997]: <info>  [1764407400.7949] manager: (tap8aaf4606-90): new Veth device (/org/freedesktop/NetworkManager/Devices/408)
Nov 29 04:10:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:00.794 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[830ca324-8c67-4cf3-ac2b-80f3e39e6c64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:10:00 np0005539564 systemd-udevd[315411]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 04:10:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:00.838 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[2a0e93c5-eb46-4785-ae8b-a409cacac428]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:10:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:00.841 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[e51a1b4f-0fc5-4a0e-b1c7-6fbe512b0274]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:10:00 np0005539564 NetworkManager[48997]: <info>  [1764407400.8633] device (tap8aaf4606-90): carrier: link connected
Nov 29 04:10:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:00.868 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[0608a2cf-0263-48d8-a9af-be103ac7450e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:10:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:00.889 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[cc27c0d9-5d48-4d0e-9109-b29d5438849f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8aaf4606-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:88:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 265], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1037554, 'reachable_time': 31179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315441, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:10:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:00.906 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[45cf69ce-510a-4d3e-a972-d854d8871541]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feca:8863'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1037554, 'tstamp': 1037554}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315442, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:10:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:00.929 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[27bc12e6-b52d-4be7-aa61-be76d8e42115]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8aaf4606-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:88:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 265], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1037554, 'reachable_time': 31179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315443, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:10:00 np0005539564 ceph-mon[81769]: overall HEALTH_OK
Nov 29 04:10:00 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:00.965 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[4e3e1e64-89cf-41e6-a4df-7da1207d313f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:01.050 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[40839101-dcd7-4353-bb4e-c5cd38dab21c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:01.052 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8aaf4606-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:01.052 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:01.052 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8aaf4606-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.097 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:01 np0005539564 NetworkManager[48997]: <info>  [1764407401.0982] manager: (tap8aaf4606-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/409)
Nov 29 04:10:01 np0005539564 kernel: tap8aaf4606-90: entered promiscuous mode
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.102 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:01.103 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8aaf4606-90, col_values=(('external_ids', {'iface-id': 'dcea3b5a-c3c6-4ea4-8c47-8c2337a9ad5a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:10:01 np0005539564 ovn_controller[130591]: 2025-11-29T09:10:01Z|00887|binding|INFO|Releasing lport dcea3b5a-c3c6-4ea4-8c47-8c2337a9ad5a from this chassis (sb_readonly=0)
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.104 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.120 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.123 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:01.123 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:01.124 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[fd2b9d9b-ef13-432d-935f-b52f1346e533]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:01.124 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.pid.haproxy
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 04:10:01 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:01.125 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'env', 'PROCESS_TAG=haproxy-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.157 226310 DEBUG nova.network.neutron [req-5d3701ad-715a-4811-bd8f-6fb2b09fd2c8 req-8a7fd961-1f0e-4aa6-bd8e-de62d5856b9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Updated VIF entry in instance network info cache for port 2d688aa8-e9a3-4129-b88d-a7ab81fca989. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.158 226310 DEBUG nova.network.neutron [req-5d3701ad-715a-4811-bd8f-6fb2b09fd2c8 req-8a7fd961-1f0e-4aa6-bd8e-de62d5856b9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Updating instance_info_cache with network_info: [{"id": "2d688aa8-e9a3-4129-b88d-a7ab81fca989", "address": "fa:16:3e:d6:2c:42", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d688aa8-e9", "ovs_interfaceid": "2d688aa8-e9a3-4129-b88d-a7ab81fca989", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.175 226310 DEBUG oslo_concurrency.lockutils [req-5d3701ad-715a-4811-bd8f-6fb2b09fd2c8 req-8a7fd961-1f0e-4aa6-bd8e-de62d5856b9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.202 226310 DEBUG nova.compute.manager [req-d58c3514-2007-418c-a810-effdbd3afbe3 req-f61d7d69-3c4d-459c-bb60-950c28efa553 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Received event network-vif-plugged-2d688aa8-e9a3-4129-b88d-a7ab81fca989 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.203 226310 DEBUG oslo_concurrency.lockutils [req-d58c3514-2007-418c-a810-effdbd3afbe3 req-f61d7d69-3c4d-459c-bb60-950c28efa553 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.203 226310 DEBUG oslo_concurrency.lockutils [req-d58c3514-2007-418c-a810-effdbd3afbe3 req-f61d7d69-3c4d-459c-bb60-950c28efa553 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.204 226310 DEBUG oslo_concurrency.lockutils [req-d58c3514-2007-418c-a810-effdbd3afbe3 req-f61d7d69-3c4d-459c-bb60-950c28efa553 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.204 226310 DEBUG nova.compute.manager [req-d58c3514-2007-418c-a810-effdbd3afbe3 req-f61d7d69-3c4d-459c-bb60-950c28efa553 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Processing event network-vif-plugged-2d688aa8-e9a3-4129-b88d-a7ab81fca989 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.231 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764407401.2313783, d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.232 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] VM Started (Lifecycle Event)#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.233 226310 DEBUG nova.compute.manager [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.237 226310 DEBUG nova.virt.libvirt.driver [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.241 226310 INFO nova.virt.libvirt.driver [-] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Instance spawned successfully.#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.242 226310 DEBUG nova.virt.libvirt.driver [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.263 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.274 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.279 226310 DEBUG nova.virt.libvirt.driver [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.280 226310 DEBUG nova.virt.libvirt.driver [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.280 226310 DEBUG nova.virt.libvirt.driver [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.281 226310 DEBUG nova.virt.libvirt.driver [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.281 226310 DEBUG nova.virt.libvirt.driver [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.282 226310 DEBUG nova.virt.libvirt.driver [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.293 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.294 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764407401.232175, d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.294 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] VM Paused (Lifecycle Event)#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.318 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.322 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764407401.236715, d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.323 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] VM Resumed (Lifecycle Event)#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.365 226310 INFO nova.compute.manager [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Took 6.72 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.367 226310 DEBUG nova.compute.manager [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.368 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.382 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.421 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.455 226310 INFO nova.compute.manager [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Took 32.03 seconds to build instance.#033[00m
Nov 29 04:10:01 np0005539564 nova_compute[226295]: 2025-11-29 09:10:01.482 226310 DEBUG oslo_concurrency.lockutils [None req-1b2b9991-d843-49f6-a638-d0e334ba13ea 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 32.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:10:01 np0005539564 podman[315517]: 2025-11-29 09:10:01.525081801 +0000 UTC m=+0.065827274 container create 79a9440bb6f32a8e28f228dcaf584f401d9c03d8d069e1b2aa2d6dc3e6383346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 04:10:01 np0005539564 systemd[1]: Started libpod-conmon-79a9440bb6f32a8e28f228dcaf584f401d9c03d8d069e1b2aa2d6dc3e6383346.scope.
Nov 29 04:10:01 np0005539564 podman[315517]: 2025-11-29 09:10:01.494442091 +0000 UTC m=+0.035187544 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 04:10:01 np0005539564 systemd[1]: Started libcrun container.
Nov 29 04:10:01 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37020df634352d1b521d6d3c837feeefa1a87977f0e457439c990c0075799e9c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 04:10:01 np0005539564 podman[315517]: 2025-11-29 09:10:01.635394499 +0000 UTC m=+0.176139952 container init 79a9440bb6f32a8e28f228dcaf584f401d9c03d8d069e1b2aa2d6dc3e6383346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 04:10:01 np0005539564 podman[315517]: 2025-11-29 09:10:01.64611261 +0000 UTC m=+0.186858043 container start 79a9440bb6f32a8e28f228dcaf584f401d9c03d8d069e1b2aa2d6dc3e6383346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 04:10:01 np0005539564 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[315532]: [NOTICE]   (315536) : New worker (315538) forked
Nov 29 04:10:01 np0005539564 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[315532]: [NOTICE]   (315536) : Loading success.
Nov 29 04:10:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:10:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:02.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:10:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:02.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:10:03 np0005539564 nova_compute[226295]: 2025-11-29 09:10:03.065 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:03 np0005539564 nova_compute[226295]: 2025-11-29 09:10:03.330 226310 DEBUG nova.compute.manager [req-60a2d8ef-4422-49aa-bacb-db61acc77c40 req-0296f344-198a-4da6-9508-f88ebc7e35ca 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Received event network-vif-plugged-2d688aa8-e9a3-4129-b88d-a7ab81fca989 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:10:03 np0005539564 nova_compute[226295]: 2025-11-29 09:10:03.331 226310 DEBUG oslo_concurrency.lockutils [req-60a2d8ef-4422-49aa-bacb-db61acc77c40 req-0296f344-198a-4da6-9508-f88ebc7e35ca 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:10:03 np0005539564 nova_compute[226295]: 2025-11-29 09:10:03.332 226310 DEBUG oslo_concurrency.lockutils [req-60a2d8ef-4422-49aa-bacb-db61acc77c40 req-0296f344-198a-4da6-9508-f88ebc7e35ca 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:10:03 np0005539564 nova_compute[226295]: 2025-11-29 09:10:03.332 226310 DEBUG oslo_concurrency.lockutils [req-60a2d8ef-4422-49aa-bacb-db61acc77c40 req-0296f344-198a-4da6-9508-f88ebc7e35ca 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:10:03 np0005539564 nova_compute[226295]: 2025-11-29 09:10:03.333 226310 DEBUG nova.compute.manager [req-60a2d8ef-4422-49aa-bacb-db61acc77c40 req-0296f344-198a-4da6-9508-f88ebc7e35ca 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] No waiting events found dispatching network-vif-plugged-2d688aa8-e9a3-4129-b88d-a7ab81fca989 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:10:03 np0005539564 nova_compute[226295]: 2025-11-29 09:10:03.333 226310 WARNING nova.compute.manager [req-60a2d8ef-4422-49aa-bacb-db61acc77c40 req-0296f344-198a-4da6-9508-f88ebc7e35ca 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Received unexpected event network-vif-plugged-2d688aa8-e9a3-4129-b88d-a7ab81fca989 for instance with vm_state active and task_state None.#033[00m
Nov 29 04:10:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:03.792 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:10:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:03.793 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:10:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:03.794 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:10:04 np0005539564 nova_compute[226295]: 2025-11-29 09:10:04.286 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:04.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:04.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:06 np0005539564 nova_compute[226295]: 2025-11-29 09:10:06.216 226310 DEBUG oslo_concurrency.lockutils [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:10:06 np0005539564 nova_compute[226295]: 2025-11-29 09:10:06.218 226310 DEBUG oslo_concurrency.lockutils [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:10:06 np0005539564 nova_compute[226295]: 2025-11-29 09:10:06.218 226310 DEBUG oslo_concurrency.lockutils [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:10:06 np0005539564 nova_compute[226295]: 2025-11-29 09:10:06.219 226310 DEBUG oslo_concurrency.lockutils [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:10:06 np0005539564 nova_compute[226295]: 2025-11-29 09:10:06.219 226310 DEBUG oslo_concurrency.lockutils [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:10:06 np0005539564 nova_compute[226295]: 2025-11-29 09:10:06.222 226310 INFO nova.compute.manager [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Terminating instance#033[00m
Nov 29 04:10:06 np0005539564 nova_compute[226295]: 2025-11-29 09:10:06.225 226310 DEBUG nova.compute.manager [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 04:10:06 np0005539564 kernel: tap2d688aa8-e9 (unregistering): left promiscuous mode
Nov 29 04:10:06 np0005539564 NetworkManager[48997]: <info>  [1764407406.2844] device (tap2d688aa8-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 04:10:06 np0005539564 nova_compute[226295]: 2025-11-29 09:10:06.300 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:06 np0005539564 ovn_controller[130591]: 2025-11-29T09:10:06Z|00888|binding|INFO|Releasing lport 2d688aa8-e9a3-4129-b88d-a7ab81fca989 from this chassis (sb_readonly=0)
Nov 29 04:10:06 np0005539564 ovn_controller[130591]: 2025-11-29T09:10:06Z|00889|binding|INFO|Setting lport 2d688aa8-e9a3-4129-b88d-a7ab81fca989 down in Southbound
Nov 29 04:10:06 np0005539564 ovn_controller[130591]: 2025-11-29T09:10:06Z|00890|binding|INFO|Removing iface tap2d688aa8-e9 ovn-installed in OVS
Nov 29 04:10:06 np0005539564 nova_compute[226295]: 2025-11-29 09:10:06.303 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:06.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:06.312 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:2c:42 10.100.0.13'], port_security=['fa:16:3e:d6:2c:42 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51af0a2ee11a460ab825a484e5c6f4a3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bbe29fc0-1435-473c-891a-fae6e52fd8dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26c70775-c49f-4c45-91d6-cdc9893e63eb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=2d688aa8-e9a3-4129-b88d-a7ab81fca989) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:10:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:06.314 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 2d688aa8-e9a3-4129-b88d-a7ab81fca989 in datapath 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad unbound from our chassis#033[00m
Nov 29 04:10:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:06.316 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 04:10:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:06.317 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7e430bd1-47a1-460c-9ff0-8967a4d40673]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:10:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:06.318 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad namespace which is not needed anymore#033[00m
Nov 29 04:10:06 np0005539564 nova_compute[226295]: 2025-11-29 09:10:06.333 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:06 np0005539564 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d000000db.scope: Deactivated successfully.
Nov 29 04:10:06 np0005539564 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d000000db.scope: Consumed 5.757s CPU time.
Nov 29 04:10:06 np0005539564 systemd-machined[190128]: Machine qemu-102-instance-000000db terminated.
Nov 29 04:10:06 np0005539564 nova_compute[226295]: 2025-11-29 09:10:06.484 226310 INFO nova.virt.libvirt.driver [-] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Instance destroyed successfully.#033[00m
Nov 29 04:10:06 np0005539564 nova_compute[226295]: 2025-11-29 09:10:06.485 226310 DEBUG nova.objects.instance [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lazy-loading 'resources' on Instance uuid d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:10:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:10:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:06.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:10:07 np0005539564 nova_compute[226295]: 2025-11-29 09:10:07.009 226310 DEBUG nova.virt.libvirt.vif [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T09:09:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1826346449',display_name='tempest-TestVolumeBootPattern-server-1826346449',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1826346449',id=219,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T09:10:01Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='51af0a2ee11a460ab825a484e5c6f4a3',ramdisk_id='',reservation_id='r-81vmtj70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-531976395',owner_user_name='tempest-TestVolumeBootPattern-531976395-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T09:10:01Z,user_data=None,user_id='5ff561a95dc44b9fb9f7fd8fee80f589',uuid=d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2d688aa8-e9a3-4129-b88d-a7ab81fca989", "address": "fa:16:3e:d6:2c:42", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d688aa8-e9", "ovs_interfaceid": "2d688aa8-e9a3-4129-b88d-a7ab81fca989", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 04:10:07 np0005539564 nova_compute[226295]: 2025-11-29 09:10:07.011 226310 DEBUG nova.network.os_vif_util [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converting VIF {"id": "2d688aa8-e9a3-4129-b88d-a7ab81fca989", "address": "fa:16:3e:d6:2c:42", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d688aa8-e9", "ovs_interfaceid": "2d688aa8-e9a3-4129-b88d-a7ab81fca989", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:10:07 np0005539564 nova_compute[226295]: 2025-11-29 09:10:07.012 226310 DEBUG nova.network.os_vif_util [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:2c:42,bridge_name='br-int',has_traffic_filtering=True,id=2d688aa8-e9a3-4129-b88d-a7ab81fca989,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d688aa8-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:10:07 np0005539564 nova_compute[226295]: 2025-11-29 09:10:07.013 226310 DEBUG os_vif [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:2c:42,bridge_name='br-int',has_traffic_filtering=True,id=2d688aa8-e9a3-4129-b88d-a7ab81fca989,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d688aa8-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 04:10:07 np0005539564 nova_compute[226295]: 2025-11-29 09:10:07.018 226310 DEBUG nova.compute.manager [req-c70a2241-176d-4b50-8d35-fbcfc24fa8e6 req-3f15da8a-2c52-425e-99e1-41808143998d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Received event network-vif-unplugged-2d688aa8-e9a3-4129-b88d-a7ab81fca989 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:10:07 np0005539564 nova_compute[226295]: 2025-11-29 09:10:07.019 226310 DEBUG oslo_concurrency.lockutils [req-c70a2241-176d-4b50-8d35-fbcfc24fa8e6 req-3f15da8a-2c52-425e-99e1-41808143998d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:10:07 np0005539564 nova_compute[226295]: 2025-11-29 09:10:07.019 226310 DEBUG oslo_concurrency.lockutils [req-c70a2241-176d-4b50-8d35-fbcfc24fa8e6 req-3f15da8a-2c52-425e-99e1-41808143998d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:10:07 np0005539564 nova_compute[226295]: 2025-11-29 09:10:07.019 226310 DEBUG oslo_concurrency.lockutils [req-c70a2241-176d-4b50-8d35-fbcfc24fa8e6 req-3f15da8a-2c52-425e-99e1-41808143998d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:10:07 np0005539564 nova_compute[226295]: 2025-11-29 09:10:07.020 226310 DEBUG nova.compute.manager [req-c70a2241-176d-4b50-8d35-fbcfc24fa8e6 req-3f15da8a-2c52-425e-99e1-41808143998d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] No waiting events found dispatching network-vif-unplugged-2d688aa8-e9a3-4129-b88d-a7ab81fca989 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:10:07 np0005539564 nova_compute[226295]: 2025-11-29 09:10:07.020 226310 DEBUG nova.compute.manager [req-c70a2241-176d-4b50-8d35-fbcfc24fa8e6 req-3f15da8a-2c52-425e-99e1-41808143998d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Received event network-vif-unplugged-2d688aa8-e9a3-4129-b88d-a7ab81fca989 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 04:10:07 np0005539564 nova_compute[226295]: 2025-11-29 09:10:07.021 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:07 np0005539564 nova_compute[226295]: 2025-11-29 09:10:07.021 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d688aa8-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:10:07 np0005539564 nova_compute[226295]: 2025-11-29 09:10:07.024 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:07 np0005539564 nova_compute[226295]: 2025-11-29 09:10:07.026 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:07 np0005539564 nova_compute[226295]: 2025-11-29 09:10:07.029 226310 INFO os_vif [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:2c:42,bridge_name='br-int',has_traffic_filtering=True,id=2d688aa8-e9a3-4129-b88d-a7ab81fca989,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d688aa8-e9')#033[00m
Nov 29 04:10:07 np0005539564 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[315532]: [NOTICE]   (315536) : haproxy version is 2.8.14-c23fe91
Nov 29 04:10:07 np0005539564 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[315532]: [NOTICE]   (315536) : path to executable is /usr/sbin/haproxy
Nov 29 04:10:07 np0005539564 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[315532]: [WARNING]  (315536) : Exiting Master process...
Nov 29 04:10:07 np0005539564 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[315532]: [ALERT]    (315536) : Current worker (315538) exited with code 143 (Terminated)
Nov 29 04:10:07 np0005539564 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[315532]: [WARNING]  (315536) : All workers exited. Exiting... (0)
Nov 29 04:10:07 np0005539564 systemd[1]: libpod-79a9440bb6f32a8e28f228dcaf584f401d9c03d8d069e1b2aa2d6dc3e6383346.scope: Deactivated successfully.
Nov 29 04:10:07 np0005539564 podman[315572]: 2025-11-29 09:10:07.514512949 +0000 UTC m=+1.052674511 container died 79a9440bb6f32a8e28f228dcaf584f401d9c03d8d069e1b2aa2d6dc3e6383346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:10:07 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-79a9440bb6f32a8e28f228dcaf584f401d9c03d8d069e1b2aa2d6dc3e6383346-userdata-shm.mount: Deactivated successfully.
Nov 29 04:10:07 np0005539564 systemd[1]: var-lib-containers-storage-overlay-37020df634352d1b521d6d3c837feeefa1a87977f0e457439c990c0075799e9c-merged.mount: Deactivated successfully.
Nov 29 04:10:07 np0005539564 podman[315572]: 2025-11-29 09:10:07.666190597 +0000 UTC m=+1.204352149 container cleanup 79a9440bb6f32a8e28f228dcaf584f401d9c03d8d069e1b2aa2d6dc3e6383346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 04:10:07 np0005539564 systemd[1]: libpod-conmon-79a9440bb6f32a8e28f228dcaf584f401d9c03d8d069e1b2aa2d6dc3e6383346.scope: Deactivated successfully.
Nov 29 04:10:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:10:08 np0005539564 podman[315634]: 2025-11-29 09:10:08.054879324 +0000 UTC m=+0.362168529 container remove 79a9440bb6f32a8e28f228dcaf584f401d9c03d8d069e1b2aa2d6dc3e6383346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 04:10:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:08.066 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bf8dff95-938a-44f2-aab7-29a13d368b8e]: (4, ('Sat Nov 29 09:10:06 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad (79a9440bb6f32a8e28f228dcaf584f401d9c03d8d069e1b2aa2d6dc3e6383346)\n79a9440bb6f32a8e28f228dcaf584f401d9c03d8d069e1b2aa2d6dc3e6383346\nSat Nov 29 09:10:07 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad (79a9440bb6f32a8e28f228dcaf584f401d9c03d8d069e1b2aa2d6dc3e6383346)\n79a9440bb6f32a8e28f228dcaf584f401d9c03d8d069e1b2aa2d6dc3e6383346\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:10:08 np0005539564 nova_compute[226295]: 2025-11-29 09:10:08.069 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:08.069 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8c9ba759-3843-4588-8161-d1046fca4634]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:10:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:08.072 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8aaf4606-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:10:08 np0005539564 kernel: tap8aaf4606-90: left promiscuous mode
Nov 29 04:10:08 np0005539564 nova_compute[226295]: 2025-11-29 09:10:08.074 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:08 np0005539564 nova_compute[226295]: 2025-11-29 09:10:08.100 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:08.104 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[542f1088-8069-4337-b632-55fc108ea5bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:10:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:08.121 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc4473d-b16e-45cf-84bc-44d2e798c4f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:10:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:08.123 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[272277a7-bbef-4b1f-baba-5fa07e93ee2c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:10:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:08.150 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5609abce-861d-463c-b56d-cfc61c8952c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1037546, 'reachable_time': 18184, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315646, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:10:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:08.155 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 04:10:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:08.155 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[a3387bcf-9130-42c9-8659-d3a752f91e25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:10:08 np0005539564 systemd[1]: run-netns-ovnmeta\x2d8aaf4606\x2d9df9\x2d4ad5\x2d9ade\x2df48fdc6cfaad.mount: Deactivated successfully.
Nov 29 04:10:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:08.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:08.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:09 np0005539564 nova_compute[226295]: 2025-11-29 09:10:09.446 226310 DEBUG nova.compute.manager [req-cd4252fa-6c1d-4ed2-b502-f12528ad4563 req-2f83409d-957a-438a-9117-9ac5b98eca12 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Received event network-vif-plugged-2d688aa8-e9a3-4129-b88d-a7ab81fca989 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:10:09 np0005539564 nova_compute[226295]: 2025-11-29 09:10:09.447 226310 DEBUG oslo_concurrency.lockutils [req-cd4252fa-6c1d-4ed2-b502-f12528ad4563 req-2f83409d-957a-438a-9117-9ac5b98eca12 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:10:09 np0005539564 nova_compute[226295]: 2025-11-29 09:10:09.447 226310 DEBUG oslo_concurrency.lockutils [req-cd4252fa-6c1d-4ed2-b502-f12528ad4563 req-2f83409d-957a-438a-9117-9ac5b98eca12 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:10:09 np0005539564 nova_compute[226295]: 2025-11-29 09:10:09.447 226310 DEBUG oslo_concurrency.lockutils [req-cd4252fa-6c1d-4ed2-b502-f12528ad4563 req-2f83409d-957a-438a-9117-9ac5b98eca12 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:10:09 np0005539564 nova_compute[226295]: 2025-11-29 09:10:09.448 226310 DEBUG nova.compute.manager [req-cd4252fa-6c1d-4ed2-b502-f12528ad4563 req-2f83409d-957a-438a-9117-9ac5b98eca12 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] No waiting events found dispatching network-vif-plugged-2d688aa8-e9a3-4129-b88d-a7ab81fca989 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:10:09 np0005539564 nova_compute[226295]: 2025-11-29 09:10:09.448 226310 WARNING nova.compute.manager [req-cd4252fa-6c1d-4ed2-b502-f12528ad4563 req-2f83409d-957a-438a-9117-9ac5b98eca12 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Received unexpected event network-vif-plugged-2d688aa8-e9a3-4129-b88d-a7ab81fca989 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 04:10:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:10.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:10 np0005539564 podman[315649]: 2025-11-29 09:10:10.515294243 +0000 UTC m=+0.060165831 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:10:10 np0005539564 podman[315648]: 2025-11-29 09:10:10.543790344 +0000 UTC m=+0.090509373 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:10:10 np0005539564 podman[315647]: 2025-11-29 09:10:10.56614704 +0000 UTC m=+0.108976533 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 29 04:10:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:10.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:11 np0005539564 nova_compute[226295]: 2025-11-29 09:10:11.256 226310 INFO nova.virt.libvirt.driver [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Deleting instance files /var/lib/nova/instances/d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d_del#033[00m
Nov 29 04:10:11 np0005539564 nova_compute[226295]: 2025-11-29 09:10:11.258 226310 INFO nova.virt.libvirt.driver [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Deletion of /var/lib/nova/instances/d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d_del complete#033[00m
Nov 29 04:10:11 np0005539564 nova_compute[226295]: 2025-11-29 09:10:11.605 226310 INFO nova.compute.manager [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Took 5.38 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 04:10:11 np0005539564 nova_compute[226295]: 2025-11-29 09:10:11.606 226310 DEBUG oslo.service.loopingcall [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 04:10:11 np0005539564 nova_compute[226295]: 2025-11-29 09:10:11.607 226310 DEBUG nova.compute.manager [-] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 04:10:11 np0005539564 nova_compute[226295]: 2025-11-29 09:10:11.607 226310 DEBUG nova.network.neutron [-] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 04:10:12 np0005539564 nova_compute[226295]: 2025-11-29 09:10:12.025 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:10:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:12.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:10:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:12.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:10:13 np0005539564 nova_compute[226295]: 2025-11-29 09:10:13.072 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:14.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:14.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:15 np0005539564 nova_compute[226295]: 2025-11-29 09:10:15.883 226310 DEBUG nova.network.neutron [-] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:10:16 np0005539564 nova_compute[226295]: 2025-11-29 09:10:16.144 226310 DEBUG nova.compute.manager [req-94734ede-1a3f-484d-b2e4-0d87db871f44 req-c929cd40-d85a-413e-b1f6-e62dc1d35970 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Received event network-vif-deleted-2d688aa8-e9a3-4129-b88d-a7ab81fca989 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:10:16 np0005539564 nova_compute[226295]: 2025-11-29 09:10:16.144 226310 INFO nova.compute.manager [req-94734ede-1a3f-484d-b2e4-0d87db871f44 req-c929cd40-d85a-413e-b1f6-e62dc1d35970 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Neutron deleted interface 2d688aa8-e9a3-4129-b88d-a7ab81fca989; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 04:10:16 np0005539564 nova_compute[226295]: 2025-11-29 09:10:16.145 226310 DEBUG nova.network.neutron [req-94734ede-1a3f-484d-b2e4-0d87db871f44 req-c929cd40-d85a-413e-b1f6-e62dc1d35970 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:10:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:16.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:16 np0005539564 nova_compute[226295]: 2025-11-29 09:10:16.624 226310 INFO nova.compute.manager [-] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Took 5.02 seconds to deallocate network for instance.#033[00m
Nov 29 04:10:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:16.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:16 np0005539564 nova_compute[226295]: 2025-11-29 09:10:16.753 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:16 np0005539564 nova_compute[226295]: 2025-11-29 09:10:16.754 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:16 np0005539564 nova_compute[226295]: 2025-11-29 09:10:16.754 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:10:16 np0005539564 nova_compute[226295]: 2025-11-29 09:10:16.754 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:10:16 np0005539564 nova_compute[226295]: 2025-11-29 09:10:16.925 226310 DEBUG nova.compute.manager [req-94734ede-1a3f-484d-b2e4-0d87db871f44 req-c929cd40-d85a-413e-b1f6-e62dc1d35970 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Detach interface failed, port_id=2d688aa8-e9a3-4129-b88d-a7ab81fca989, reason: Instance d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 04:10:17 np0005539564 nova_compute[226295]: 2025-11-29 09:10:17.028 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:17 np0005539564 nova_compute[226295]: 2025-11-29 09:10:17.333 226310 INFO nova.compute.manager [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Took 0.71 seconds to detach 1 volumes for instance.#033[00m
Nov 29 04:10:17 np0005539564 nova_compute[226295]: 2025-11-29 09:10:17.335 226310 DEBUG nova.compute.manager [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Deleting volume: 42ac62ec-79ea-47f9-8105-4e6fd3447306 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Nov 29 04:10:17 np0005539564 nova_compute[226295]: 2025-11-29 09:10:17.340 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Nov 29 04:10:17 np0005539564 nova_compute[226295]: 2025-11-29 09:10:17.341 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:10:17 np0005539564 nova_compute[226295]: 2025-11-29 09:10:17.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:17 np0005539564 nova_compute[226295]: 2025-11-29 09:10:17.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:17 np0005539564 nova_compute[226295]: 2025-11-29 09:10:17.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:17 np0005539564 nova_compute[226295]: 2025-11-29 09:10:17.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:17 np0005539564 nova_compute[226295]: 2025-11-29 09:10:17.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:17 np0005539564 nova_compute[226295]: 2025-11-29 09:10:17.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:17 np0005539564 nova_compute[226295]: 2025-11-29 09:10:17.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:10:17 np0005539564 nova_compute[226295]: 2025-11-29 09:10:17.345 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:17 np0005539564 nova_compute[226295]: 2025-11-29 09:10:17.853 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:10:17 np0005539564 nova_compute[226295]: 2025-11-29 09:10:17.854 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:10:17 np0005539564 nova_compute[226295]: 2025-11-29 09:10:17.855 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:10:17 np0005539564 nova_compute[226295]: 2025-11-29 09:10:17.855 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:10:17 np0005539564 nova_compute[226295]: 2025-11-29 09:10:17.856 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:10:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:10:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 04:10:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:10:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:10:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:10:18 np0005539564 nova_compute[226295]: 2025-11-29 09:10:18.073 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:18.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:10:18 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3233531520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:10:18 np0005539564 nova_compute[226295]: 2025-11-29 09:10:18.351 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:10:18 np0005539564 nova_compute[226295]: 2025-11-29 09:10:18.598 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:10:18 np0005539564 nova_compute[226295]: 2025-11-29 09:10:18.599 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4134MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:10:18 np0005539564 nova_compute[226295]: 2025-11-29 09:10:18.599 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:10:18 np0005539564 nova_compute[226295]: 2025-11-29 09:10:18.599 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:10:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:18.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:19 np0005539564 nova_compute[226295]: 2025-11-29 09:10:19.975 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 04:10:19 np0005539564 nova_compute[226295]: 2025-11-29 09:10:19.976 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:10:19 np0005539564 nova_compute[226295]: 2025-11-29 09:10:19.976 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:10:20 np0005539564 nova_compute[226295]: 2025-11-29 09:10:20.183 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:10:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:20.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:10:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:20.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:10:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:10:20 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/513338051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:10:20 np0005539564 nova_compute[226295]: 2025-11-29 09:10:20.705 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:10:20 np0005539564 nova_compute[226295]: 2025-11-29 09:10:20.714 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:10:21 np0005539564 nova_compute[226295]: 2025-11-29 09:10:21.024 226310 DEBUG oslo_concurrency.lockutils [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:10:21 np0005539564 nova_compute[226295]: 2025-11-29 09:10:21.255 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:10:21 np0005539564 nova_compute[226295]: 2025-11-29 09:10:21.482 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764407406.4806387, d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:10:21 np0005539564 nova_compute[226295]: 2025-11-29 09:10:21.483 226310 INFO nova.compute.manager [-] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] VM Stopped (Lifecycle Event)#033[00m
Nov 29 04:10:21 np0005539564 nova_compute[226295]: 2025-11-29 09:10:21.617 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:10:21 np0005539564 nova_compute[226295]: 2025-11-29 09:10:21.618 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.018s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:10:21 np0005539564 nova_compute[226295]: 2025-11-29 09:10:21.619 226310 DEBUG oslo_concurrency.lockutils [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:10:21 np0005539564 nova_compute[226295]: 2025-11-29 09:10:21.660 226310 DEBUG nova.compute.manager [None req-2da513de-8e13-4f0c-bad9-16f8dd2bc8bc - - - - - -] [instance: d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:10:21 np0005539564 nova_compute[226295]: 2025-11-29 09:10:21.689 226310 DEBUG oslo_concurrency.processutils [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:10:22 np0005539564 nova_compute[226295]: 2025-11-29 09:10:22.030 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:10:22 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/992761591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:10:22 np0005539564 nova_compute[226295]: 2025-11-29 09:10:22.173 226310 DEBUG oslo_concurrency.processutils [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:10:22 np0005539564 nova_compute[226295]: 2025-11-29 09:10:22.181 226310 DEBUG nova.compute.provider_tree [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:10:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:22.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:22 np0005539564 nova_compute[226295]: 2025-11-29 09:10:22.347 226310 DEBUG nova.scheduler.client.report [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:10:22 np0005539564 nova_compute[226295]: 2025-11-29 09:10:22.489 226310 DEBUG oslo_concurrency.lockutils [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:10:22 np0005539564 nova_compute[226295]: 2025-11-29 09:10:22.599 226310 INFO nova.scheduler.client.report [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Deleted allocations for instance d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d#033[00m
Nov 29 04:10:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:22.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:10:23 np0005539564 nova_compute[226295]: 2025-11-29 09:10:23.060 226310 DEBUG oslo_concurrency.lockutils [None req-fa32860b-89e8-48ff-bf6f-228eeaf4be53 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "d3c8c6ba-d88b-4170-9c9f-0aa03a6b0b4d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 16.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:10:23 np0005539564 nova_compute[226295]: 2025-11-29 09:10:23.076 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:23 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:10:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:24.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:24.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:24 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:10:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e430 e430: 3 total, 3 up, 3 in
Nov 29 04:10:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:26.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:26.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:27 np0005539564 nova_compute[226295]: 2025-11-29 09:10:27.035 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:10:28 np0005539564 nova_compute[226295]: 2025-11-29 09:10:28.079 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:28.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 04:10:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4025583321' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 04:10:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 04:10:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4025583321' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 04:10:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:28.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:30.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:30.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:32 np0005539564 nova_compute[226295]: 2025-11-29 09:10:32.039 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:32.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:32.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e431 e431: 3 total, 3 up, 3 in
Nov 29 04:10:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:10:33 np0005539564 nova_compute[226295]: 2025-11-29 09:10:33.082 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:34.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:34.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:36.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:36.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:37 np0005539564 nova_compute[226295]: 2025-11-29 09:10:37.042 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:10:38 np0005539564 nova_compute[226295]: 2025-11-29 09:10:38.083 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:38.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:38.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:40.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:10:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:40.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:10:41 np0005539564 podman[315963]: 2025-11-29 09:10:41.569683512 +0000 UTC m=+0.105513949 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 04:10:41 np0005539564 podman[315962]: 2025-11-29 09:10:41.578353937 +0000 UTC m=+0.120730461 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Nov 29 04:10:41 np0005539564 podman[315961]: 2025-11-29 09:10:41.603542269 +0000 UTC m=+0.146034696 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller)
Nov 29 04:10:42 np0005539564 nova_compute[226295]: 2025-11-29 09:10:42.045 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:42.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:42.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:10:43 np0005539564 nova_compute[226295]: 2025-11-29 09:10:43.084 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:44.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:44.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:45 np0005539564 ovn_controller[130591]: 2025-11-29T09:10:45Z|00891|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Nov 29 04:10:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:10:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:46.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:10:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 7200.1 total, 600.0 interval
                                              Cumulative writes: 69K writes, 272K keys, 69K commit groups, 1.0 writes per commit group, ingest: 0.26 GB, 0.04 MB/s
                                              Cumulative WAL: 69K writes, 26K syncs, 2.69 writes per sync, written: 0.26 GB, 0.04 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 1858 writes, 6311 keys, 1858 commit groups, 1.0 writes per commit group, ingest: 5.23 MB, 0.01 MB/s
                                              Interval WAL: 1858 writes, 811 syncs, 2.29 writes per sync, written: 0.01 GB, 0.01 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 04:10:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:46.605 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=99, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=98) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:10:46 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:46.606 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:10:46 np0005539564 nova_compute[226295]: 2025-11-29 09:10:46.609 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:46.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:47 np0005539564 nova_compute[226295]: 2025-11-29 09:10:47.048 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:10:48 np0005539564 nova_compute[226295]: 2025-11-29 09:10:48.087 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:48 np0005539564 nova_compute[226295]: 2025-11-29 09:10:48.201 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:48 np0005539564 nova_compute[226295]: 2025-11-29 09:10:48.201 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:48.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:48 np0005539564 nova_compute[226295]: 2025-11-29 09:10:48.386 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:48 np0005539564 nova_compute[226295]: 2025-11-29 09:10:48.387 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:10:48 np0005539564 nova_compute[226295]: 2025-11-29 09:10:48.387 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:10:48 np0005539564 nova_compute[226295]: 2025-11-29 09:10:48.437 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:10:48 np0005539564 nova_compute[226295]: 2025-11-29 09:10:48.438 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:48 np0005539564 nova_compute[226295]: 2025-11-29 09:10:48.438 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:48 np0005539564 nova_compute[226295]: 2025-11-29 09:10:48.438 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:48 np0005539564 nova_compute[226295]: 2025-11-29 09:10:48.439 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:48 np0005539564 nova_compute[226295]: 2025-11-29 09:10:48.439 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:10:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:48.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:50.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:50.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:52 np0005539564 nova_compute[226295]: 2025-11-29 09:10:52.052 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:52 np0005539564 nova_compute[226295]: 2025-11-29 09:10:52.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:52.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:52.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:10:53 np0005539564 nova_compute[226295]: 2025-11-29 09:10:53.122 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:54.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:54.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:55 np0005539564 nova_compute[226295]: 2025-11-29 09:10:55.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:10:55.608 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '99'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:10:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:56.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:56.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:57 np0005539564 nova_compute[226295]: 2025-11-29 09:10:57.056 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:10:58 np0005539564 nova_compute[226295]: 2025-11-29 09:10:58.125 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:58 np0005539564 nova_compute[226295]: 2025-11-29 09:10:58.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:58.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:58 np0005539564 nova_compute[226295]: 2025-11-29 09:10:58.607 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:10:58 np0005539564 nova_compute[226295]: 2025-11-29 09:10:58.607 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:10:58 np0005539564 nova_compute[226295]: 2025-11-29 09:10:58.608 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:10:58 np0005539564 nova_compute[226295]: 2025-11-29 09:10:58.608 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:10:58 np0005539564 nova_compute[226295]: 2025-11-29 09:10:58.609 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:10:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:10:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:58.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:10:59 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1502372707' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:10:59 np0005539564 nova_compute[226295]: 2025-11-29 09:10:59.109 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:10:59 np0005539564 nova_compute[226295]: 2025-11-29 09:10:59.344 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:10:59 np0005539564 nova_compute[226295]: 2025-11-29 09:10:59.345 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4198MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:10:59 np0005539564 nova_compute[226295]: 2025-11-29 09:10:59.346 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:10:59 np0005539564 nova_compute[226295]: 2025-11-29 09:10:59.346 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:10:59 np0005539564 nova_compute[226295]: 2025-11-29 09:10:59.780 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:10:59 np0005539564 nova_compute[226295]: 2025-11-29 09:10:59.781 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:10:59 np0005539564 nova_compute[226295]: 2025-11-29 09:10:59.804 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:11:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:11:00 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3148641319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:11:00 np0005539564 nova_compute[226295]: 2025-11-29 09:11:00.348 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:11:00 np0005539564 nova_compute[226295]: 2025-11-29 09:11:00.354 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:11:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:00.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:00 np0005539564 nova_compute[226295]: 2025-11-29 09:11:00.486 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:11:00 np0005539564 nova_compute[226295]: 2025-11-29 09:11:00.570 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:11:00 np0005539564 nova_compute[226295]: 2025-11-29 09:11:00.572 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.225s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:11:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:00.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:02 np0005539564 nova_compute[226295]: 2025-11-29 09:11:02.059 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:02.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:11:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:02.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:11:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:11:03 np0005539564 nova_compute[226295]: 2025-11-29 09:11:03.128 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:11:03.793 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:11:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:11:03.794 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:11:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:11:03.794 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:11:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:04.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:11:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:04.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:11:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:06.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:11:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:06.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:11:07 np0005539564 nova_compute[226295]: 2025-11-29 09:11:07.062 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:11:08 np0005539564 nova_compute[226295]: 2025-11-29 09:11:08.130 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:11:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:08.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:11:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:08.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:11:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:10.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:11:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:10.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:12 np0005539564 nova_compute[226295]: 2025-11-29 09:11:12.067 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:11:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:12.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:11:12 np0005539564 podman[316069]: 2025-11-29 09:11:12.528552295 +0000 UTC m=+0.064675422 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 04:11:12 np0005539564 podman[316067]: 2025-11-29 09:11:12.56747251 +0000 UTC m=+0.116868647 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Nov 29 04:11:12 np0005539564 podman[316068]: 2025-11-29 09:11:12.568671582 +0000 UTC m=+0.109024033 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd)
Nov 29 04:11:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:12.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:11:13 np0005539564 nova_compute[226295]: 2025-11-29 09:11:13.132 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:14.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:14.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:16.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:16.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:17 np0005539564 nova_compute[226295]: 2025-11-29 09:11:17.070 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:18 np0005539564 nova_compute[226295]: 2025-11-29 09:11:18.135 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #184. Immutable memtables: 0.
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:11:18.170368) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 184
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407478170433, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 1737, "num_deletes": 252, "total_data_size": 4047994, "memory_usage": 4098992, "flush_reason": "Manual Compaction"}
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #185: started
Nov 29 04:11:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:11:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:18.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407478573410, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 185, "file_size": 2658815, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 89068, "largest_seqno": 90799, "table_properties": {"data_size": 2651603, "index_size": 4218, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15405, "raw_average_key_size": 20, "raw_value_size": 2637041, "raw_average_value_size": 3474, "num_data_blocks": 186, "num_entries": 759, "num_filter_entries": 759, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407319, "oldest_key_time": 1764407319, "file_creation_time": 1764407478, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 403161 microseconds, and 8431 cpu microseconds.
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:11:18.573533) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #185: 2658815 bytes OK
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:11:18.573566) [db/memtable_list.cc:519] [default] Level-0 commit table #185 started
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:11:18.580431) [db/memtable_list.cc:722] [default] Level-0 commit table #185: memtable #1 done
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:11:18.580458) EVENT_LOG_v1 {"time_micros": 1764407478580450, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:11:18.580482) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 4040143, prev total WAL file size 4040424, number of live WAL files 2.
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000181.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:11:18.586216) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [185(2596KB)], [183(11MB)]
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407478586289, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [185], "files_L6": [183], "score": -1, "input_data_size": 14274338, "oldest_snapshot_seqno": -1}
Nov 29 04:11:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:18.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #186: 11346 keys, 12314577 bytes, temperature: kUnknown
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407478900219, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 186, "file_size": 12314577, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12244710, "index_size": 40378, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28421, "raw_key_size": 300881, "raw_average_key_size": 26, "raw_value_size": 12049700, "raw_average_value_size": 1062, "num_data_blocks": 1517, "num_entries": 11346, "num_filter_entries": 11346, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764407478, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 186, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:11:18.900550) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 12314577 bytes
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:11:18.933893) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 45.5 rd, 39.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 11.1 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(10.0) write-amplify(4.6) OK, records in: 11867, records dropped: 521 output_compression: NoCompression
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:11:18.933956) EVENT_LOG_v1 {"time_micros": 1764407478933942, "job": 118, "event": "compaction_finished", "compaction_time_micros": 314011, "compaction_time_cpu_micros": 48268, "output_level": 6, "num_output_files": 1, "total_output_size": 12314577, "num_input_records": 11867, "num_output_records": 11346, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407478934709, "job": 118, "event": "table_file_deletion", "file_number": 185}
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000183.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407478939569, "job": 118, "event": "table_file_deletion", "file_number": 183}
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:11:18.586037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:11:18.939632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:11:18.939640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:11:18.939644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:11:18.939648) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:11:18 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:11:18.939652) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:11:19 np0005539564 nova_compute[226295]: 2025-11-29 09:11:19.801 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:19 np0005539564 NetworkManager[48997]: <info>  [1764407479.8029] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/410)
Nov 29 04:11:19 np0005539564 NetworkManager[48997]: <info>  [1764407479.8042] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/411)
Nov 29 04:11:19 np0005539564 nova_compute[226295]: 2025-11-29 09:11:19.906 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:19 np0005539564 nova_compute[226295]: 2025-11-29 09:11:19.910 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:20 np0005539564 nova_compute[226295]: 2025-11-29 09:11:20.036 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:20.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:20.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:22 np0005539564 nova_compute[226295]: 2025-11-29 09:11:22.075 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:22.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:22.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:11:23 np0005539564 nova_compute[226295]: 2025-11-29 09:11:23.169 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:24.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:11:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:24.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:11:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:11:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:26.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:11:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:11:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:26.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:11:26 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:11:27 np0005539564 nova_compute[226295]: 2025-11-29 09:11:27.078 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:11:28 np0005539564 nova_compute[226295]: 2025-11-29 09:11:28.171 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:28 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:11:28 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:11:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:11:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:28.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:11:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:28.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:30.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:11:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:30.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:11:32 np0005539564 nova_compute[226295]: 2025-11-29 09:11:32.081 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:32.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:32.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:11:33 np0005539564 nova_compute[226295]: 2025-11-29 09:11:33.172 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:11:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:34.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:11:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:11:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.0 total, 600.0 interval#012Cumulative writes: 18K writes, 90K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 18K writes, 18K syncs, 1.00 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1464 writes, 6962 keys, 1464 commit groups, 1.0 writes per commit group, ingest: 15.04 MB, 0.03 MB/s#012Interval WAL: 1464 writes, 1464 syncs, 1.00 writes per sync, written: 0.01 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     24.9      4.59              0.46        59    0.078       0      0       0.0       0.0#012  L6      1/0   11.74 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.4     54.4     46.7     13.29              2.23        58    0.229    487K    31K       0.0       0.0#012 Sum      1/0   11.74 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.4     40.4     41.1     17.89              2.69       117    0.153    487K    31K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.0     40.1     40.9      1.79              0.30        10    0.179     58K   2578       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0     54.4     46.7     13.29              2.23        58    0.229    487K    31K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     24.9      4.59              0.46        58    0.079       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 7200.0 total, 600.0 interval#012Flush(GB): cumulative 0.112, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.72 GB write, 0.10 MB/s write, 0.71 GB read, 0.10 MB/s read, 17.9 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 1.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558dc73191f0#2 capacity: 304.00 MB usage: 80.73 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000444 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(4415,77.31 MB,25.4321%) FilterBlock(117,1.31 MB,0.430413%) IndexBlock(117,2.11 MB,0.693688%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 04:11:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:11:34 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:11:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:34.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:11:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:36.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:11:36 np0005539564 nova_compute[226295]: 2025-11-29 09:11:36.566 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:11:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:36.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:37 np0005539564 nova_compute[226295]: 2025-11-29 09:11:37.084 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:11:38 np0005539564 nova_compute[226295]: 2025-11-29 09:11:38.175 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e432 e432: 3 total, 3 up, 3 in
Nov 29 04:11:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:11:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:38.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:11:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:38.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:39 np0005539564 nova_compute[226295]: 2025-11-29 09:11:39.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:11:39 np0005539564 nova_compute[226295]: 2025-11-29 09:11:39.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:11:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:40.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:11:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:40.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:11:42 np0005539564 nova_compute[226295]: 2025-11-29 09:11:42.088 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:42.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:11:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:42.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:11:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:11:43 np0005539564 nova_compute[226295]: 2025-11-29 09:11:43.177 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:43 np0005539564 podman[316316]: 2025-11-29 09:11:43.510684389 +0000 UTC m=+0.061139387 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 04:11:43 np0005539564 podman[316315]: 2025-11-29 09:11:43.520770291 +0000 UTC m=+0.075507856 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 04:11:43 np0005539564 podman[316314]: 2025-11-29 09:11:43.542421238 +0000 UTC m=+0.102709783 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 04:11:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:44.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:44.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:45 np0005539564 nova_compute[226295]: 2025-11-29 09:11:45.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:11:45 np0005539564 nova_compute[226295]: 2025-11-29 09:11:45.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:11:45 np0005539564 nova_compute[226295]: 2025-11-29 09:11:45.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:11:45 np0005539564 nova_compute[226295]: 2025-11-29 09:11:45.370 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:11:45 np0005539564 nova_compute[226295]: 2025-11-29 09:11:45.371 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:11:45 np0005539564 nova_compute[226295]: 2025-11-29 09:11:45.371 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:11:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:11:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:46.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:11:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:46.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:47 np0005539564 nova_compute[226295]: 2025-11-29 09:11:47.009 226310 DEBUG oslo_concurrency.lockutils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:11:47 np0005539564 nova_compute[226295]: 2025-11-29 09:11:47.010 226310 DEBUG oslo_concurrency.lockutils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:11:47 np0005539564 nova_compute[226295]: 2025-11-29 09:11:47.030 226310 DEBUG nova.compute.manager [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 04:11:47 np0005539564 nova_compute[226295]: 2025-11-29 09:11:47.092 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:47 np0005539564 nova_compute[226295]: 2025-11-29 09:11:47.101 226310 DEBUG oslo_concurrency.lockutils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:11:47 np0005539564 nova_compute[226295]: 2025-11-29 09:11:47.102 226310 DEBUG oslo_concurrency.lockutils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:11:47 np0005539564 nova_compute[226295]: 2025-11-29 09:11:47.111 226310 DEBUG nova.virt.hardware [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 04:11:47 np0005539564 nova_compute[226295]: 2025-11-29 09:11:47.111 226310 INFO nova.compute.claims [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 04:11:47 np0005539564 nova_compute[226295]: 2025-11-29 09:11:47.229 226310 DEBUG oslo_concurrency.processutils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:11:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:11:47 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3033889382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:11:47 np0005539564 nova_compute[226295]: 2025-11-29 09:11:47.829 226310 DEBUG oslo_concurrency.processutils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:11:47 np0005539564 nova_compute[226295]: 2025-11-29 09:11:47.837 226310 DEBUG nova.compute.provider_tree [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:11:47 np0005539564 nova_compute[226295]: 2025-11-29 09:11:47.852 226310 DEBUG nova.scheduler.client.report [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:11:47 np0005539564 nova_compute[226295]: 2025-11-29 09:11:47.875 226310 DEBUG oslo_concurrency.lockutils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:11:47 np0005539564 nova_compute[226295]: 2025-11-29 09:11:47.876 226310 DEBUG nova.compute.manager [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 04:11:47 np0005539564 nova_compute[226295]: 2025-11-29 09:11:47.933 226310 INFO nova.virt.libvirt.driver [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 04:11:47 np0005539564 nova_compute[226295]: 2025-11-29 09:11:47.936 226310 DEBUG nova.compute.manager [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 04:11:47 np0005539564 nova_compute[226295]: 2025-11-29 09:11:47.936 226310 DEBUG nova.network.neutron [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 04:11:47 np0005539564 nova_compute[226295]: 2025-11-29 09:11:47.968 226310 DEBUG nova.compute.manager [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 04:11:48 np0005539564 nova_compute[226295]: 2025-11-29 09:11:48.012 226310 INFO nova.virt.block_device [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Booting with volume snapshot 30784891-c32c-4cd3-8120-4b3acd34934a at /dev/vda#033[00m
Nov 29 04:11:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:11:48 np0005539564 nova_compute[226295]: 2025-11-29 09:11:48.178 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:48 np0005539564 nova_compute[226295]: 2025-11-29 09:11:48.268 226310 DEBUG nova.policy [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5ff561a95dc44b9fb9f7fd8fee80f589', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '51af0a2ee11a460ab825a484e5c6f4a3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 04:11:48 np0005539564 nova_compute[226295]: 2025-11-29 09:11:48.346 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:11:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:48.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:48.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:49 np0005539564 ovn_controller[130591]: 2025-11-29T09:11:49Z|00892|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 04:11:50 np0005539564 nova_compute[226295]: 2025-11-29 09:11:50.032 226310 DEBUG nova.network.neutron [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Successfully created port: 7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 04:11:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:50.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:11:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:50.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:11:52 np0005539564 nova_compute[226295]: 2025-11-29 09:11:52.094 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:52.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:52.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:53 np0005539564 nova_compute[226295]: 2025-11-29 09:11:53.181 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:11:54 np0005539564 nova_compute[226295]: 2025-11-29 09:11:54.208 226310 DEBUG nova.network.neutron [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Successfully updated port: 7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 04:11:54 np0005539564 nova_compute[226295]: 2025-11-29 09:11:54.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:11:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:54.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:54 np0005539564 nova_compute[226295]: 2025-11-29 09:11:54.515 226310 DEBUG oslo_concurrency.lockutils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "refresh_cache-06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:11:54 np0005539564 nova_compute[226295]: 2025-11-29 09:11:54.516 226310 DEBUG oslo_concurrency.lockutils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquired lock "refresh_cache-06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:11:54 np0005539564 nova_compute[226295]: 2025-11-29 09:11:54.516 226310 DEBUG nova.network.neutron [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 04:11:54 np0005539564 nova_compute[226295]: 2025-11-29 09:11:54.519 226310 DEBUG nova.compute.manager [req-aac2f9a0-512f-4535-9bf8-ab161f4a3757 req-1fea7571-79f1-4ea5-a74b-02b46dbec913 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Received event network-changed-7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:11:54 np0005539564 nova_compute[226295]: 2025-11-29 09:11:54.519 226310 DEBUG nova.compute.manager [req-aac2f9a0-512f-4535-9bf8-ab161f4a3757 req-1fea7571-79f1-4ea5-a74b-02b46dbec913 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Refreshing instance network info cache due to event network-changed-7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 04:11:54 np0005539564 nova_compute[226295]: 2025-11-29 09:11:54.519 226310 DEBUG oslo_concurrency.lockutils [req-aac2f9a0-512f-4535-9bf8-ab161f4a3757 req-1fea7571-79f1-4ea5-a74b-02b46dbec913 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:11:54 np0005539564 nova_compute[226295]: 2025-11-29 09:11:54.760 226310 DEBUG nova.network.neutron [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 04:11:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:54.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:55 np0005539564 nova_compute[226295]: 2025-11-29 09:11:55.588 226310 DEBUG nova.network.neutron [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Updating instance_info_cache with network_info: [{"id": "7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4", "address": "fa:16:3e:5c:17:01", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c54a8ab-26", "ovs_interfaceid": "7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:11:55 np0005539564 nova_compute[226295]: 2025-11-29 09:11:55.863 226310 DEBUG os_brick.utils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 04:11:55 np0005539564 nova_compute[226295]: 2025-11-29 09:11:55.864 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:11:55 np0005539564 nova_compute[226295]: 2025-11-29 09:11:55.877 231810 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:11:55 np0005539564 nova_compute[226295]: 2025-11-29 09:11:55.878 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d92574-bb39-46bc-b1b8-3ab779dbb882]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:11:55 np0005539564 nova_compute[226295]: 2025-11-29 09:11:55.879 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:11:55 np0005539564 nova_compute[226295]: 2025-11-29 09:11:55.888 231810 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:11:55 np0005539564 nova_compute[226295]: 2025-11-29 09:11:55.888 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[4e956a12-6fe4-421f-8007-ce889c84f8be]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d3b4384eec44', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:11:55 np0005539564 nova_compute[226295]: 2025-11-29 09:11:55.890 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:11:55 np0005539564 nova_compute[226295]: 2025-11-29 09:11:55.904 231810 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:11:55 np0005539564 nova_compute[226295]: 2025-11-29 09:11:55.904 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[44ec5b69-7c15-4c98-8119-8a064b6b1075]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:11:55 np0005539564 nova_compute[226295]: 2025-11-29 09:11:55.905 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[7e1af4b4-bb04-425c-9b45-7a0c766da746]: (4, '2e858761-3292-4a17-b38f-a169c3064289') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:11:55 np0005539564 nova_compute[226295]: 2025-11-29 09:11:55.906 226310 DEBUG oslo_concurrency.processutils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:11:55 np0005539564 nova_compute[226295]: 2025-11-29 09:11:55.941 226310 DEBUG oslo_concurrency.processutils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:11:55 np0005539564 nova_compute[226295]: 2025-11-29 09:11:55.945 226310 DEBUG os_brick.initiator.connectors.lightos [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 04:11:55 np0005539564 nova_compute[226295]: 2025-11-29 09:11:55.946 226310 DEBUG os_brick.initiator.connectors.lightos [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 04:11:55 np0005539564 nova_compute[226295]: 2025-11-29 09:11:55.947 226310 DEBUG os_brick.initiator.connectors.lightos [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 04:11:55 np0005539564 nova_compute[226295]: 2025-11-29 09:11:55.948 226310 DEBUG os_brick.utils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] <== get_connector_properties: return (84ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d3b4384eec44', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2e858761-3292-4a17-b38f-a169c3064289', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 04:11:55 np0005539564 nova_compute[226295]: 2025-11-29 09:11:55.948 226310 DEBUG nova.virt.block_device [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Updating existing volume attachment record: dd9f2485-806a-4942-a348-1d7f2440bec1 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 04:11:55 np0005539564 nova_compute[226295]: 2025-11-29 09:11:55.976 226310 DEBUG oslo_concurrency.lockutils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Releasing lock "refresh_cache-06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:11:55 np0005539564 nova_compute[226295]: 2025-11-29 09:11:55.977 226310 DEBUG nova.compute.manager [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Instance network_info: |[{"id": "7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4", "address": "fa:16:3e:5c:17:01", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c54a8ab-26", "ovs_interfaceid": "7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 04:11:55 np0005539564 nova_compute[226295]: 2025-11-29 09:11:55.978 226310 DEBUG oslo_concurrency.lockutils [req-aac2f9a0-512f-4535-9bf8-ab161f4a3757 req-1fea7571-79f1-4ea5-a74b-02b46dbec913 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:11:55 np0005539564 nova_compute[226295]: 2025-11-29 09:11:55.979 226310 DEBUG nova.network.neutron [req-aac2f9a0-512f-4535-9bf8-ab161f4a3757 req-1fea7571-79f1-4ea5-a74b-02b46dbec913 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Refreshing network info cache for port 7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 04:11:56 np0005539564 nova_compute[226295]: 2025-11-29 09:11:56.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:11:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:56.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:56.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:57 np0005539564 nova_compute[226295]: 2025-11-29 09:11:57.145 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:58 np0005539564 nova_compute[226295]: 2025-11-29 09:11:58.182 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:11:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:11:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:58.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:11:58 np0005539564 nova_compute[226295]: 2025-11-29 09:11:58.613 226310 DEBUG nova.network.neutron [req-aac2f9a0-512f-4535-9bf8-ab161f4a3757 req-1fea7571-79f1-4ea5-a74b-02b46dbec913 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Updated VIF entry in instance network info cache for port 7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 04:11:58 np0005539564 nova_compute[226295]: 2025-11-29 09:11:58.613 226310 DEBUG nova.network.neutron [req-aac2f9a0-512f-4535-9bf8-ab161f4a3757 req-1fea7571-79f1-4ea5-a74b-02b46dbec913 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Updating instance_info_cache with network_info: [{"id": "7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4", "address": "fa:16:3e:5c:17:01", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c54a8ab-26", "ovs_interfaceid": "7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:11:58 np0005539564 nova_compute[226295]: 2025-11-29 09:11:58.663 226310 DEBUG oslo_concurrency.lockutils [req-aac2f9a0-512f-4535-9bf8-ab161f4a3757 req-1fea7571-79f1-4ea5-a74b-02b46dbec913 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:11:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:11:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:58.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:59 np0005539564 nova_compute[226295]: 2025-11-29 09:11:59.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:11:59 np0005539564 nova_compute[226295]: 2025-11-29 09:11:59.664 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:11:59 np0005539564 nova_compute[226295]: 2025-11-29 09:11:59.665 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:11:59 np0005539564 nova_compute[226295]: 2025-11-29 09:11:59.666 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:11:59 np0005539564 nova_compute[226295]: 2025-11-29 09:11:59.666 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:11:59 np0005539564 nova_compute[226295]: 2025-11-29 09:11:59.667 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:12:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:12:00 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1244052288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:12:00 np0005539564 nova_compute[226295]: 2025-11-29 09:12:00.156 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:12:00 np0005539564 nova_compute[226295]: 2025-11-29 09:12:00.362 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:12:00 np0005539564 nova_compute[226295]: 2025-11-29 09:12:00.363 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4174MB free_disk=20.988109588623047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:12:00 np0005539564 nova_compute[226295]: 2025-11-29 09:12:00.363 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:12:00 np0005539564 nova_compute[226295]: 2025-11-29 09:12:00.364 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:12:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:12:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:00.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:12:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:00.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.141 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.142 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.143 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.218 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.443 226310 DEBUG nova.compute.manager [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.446 226310 DEBUG nova.virt.libvirt.driver [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.447 226310 INFO nova.virt.libvirt.driver [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Creating image(s)#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.447 226310 DEBUG nova.virt.libvirt.driver [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.448 226310 DEBUG nova.virt.libvirt.driver [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Ensure instance console log exists: /var/lib/nova/instances/06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.449 226310 DEBUG oslo_concurrency.lockutils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.449 226310 DEBUG oslo_concurrency.lockutils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.450 226310 DEBUG oslo_concurrency.lockutils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.454 226310 DEBUG nova.virt.libvirt.driver [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Start _get_guest_xml network_info=[{"id": "7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4", "address": "fa:16:3e:5c:17:01", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c54a8ab-26", "ovs_interfaceid": "7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-11-29T09:11:37Z,direct_url=<?>,disk_format='qcow2',id=9edb158f-7f78-4486-aa1c-e248407be0c7,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-892575640',owner='51af0a2ee11a460ab825a484e5c6f4a3',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-11-29T09:11:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-74e740e5-5b5d-4c86-ad39-4e66e4082491', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '74e740e5-5b5d-4c86-ad39-4e66e4082491', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9', 'attached_at': '', 'detached_at': '', 'volume_id': '74e740e5-5b5d-4c86-ad39-4e66e4082491', 'serial': '74e740e5-5b5d-4c86-ad39-4e66e4082491'}, 'guest_format': None, 'delete_on_termination': True, 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': 'dd9f2485-806a-4942-a348-1d7f2440bec1', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.458 226310 WARNING nova.virt.libvirt.driver [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.464 226310 DEBUG nova.virt.libvirt.host [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.465 226310 DEBUG nova.virt.libvirt.host [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.468 226310 DEBUG nova.virt.libvirt.host [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.469 226310 DEBUG nova.virt.libvirt.host [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.470 226310 DEBUG nova.virt.libvirt.driver [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.471 226310 DEBUG nova.virt.hardware [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-11-29T09:11:37Z,direct_url=<?>,disk_format='qcow2',id=9edb158f-7f78-4486-aa1c-e248407be0c7,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-892575640',owner='51af0a2ee11a460ab825a484e5c6f4a3',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-11-29T09:11:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.472 226310 DEBUG nova.virt.hardware [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.472 226310 DEBUG nova.virt.hardware [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.473 226310 DEBUG nova.virt.hardware [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.474 226310 DEBUG nova.virt.hardware [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.474 226310 DEBUG nova.virt.hardware [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.475 226310 DEBUG nova.virt.hardware [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.475 226310 DEBUG nova.virt.hardware [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.476 226310 DEBUG nova.virt.hardware [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.476 226310 DEBUG nova.virt.hardware [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.477 226310 DEBUG nova.virt.hardware [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.523 226310 DEBUG nova.storage.rbd_utils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] rbd image 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.528 226310 DEBUG oslo_concurrency.processutils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:12:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:12:01 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1491720927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.707 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.715 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.903 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:12:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 04:12:01 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/179801139' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 04:12:01 np0005539564 nova_compute[226295]: 2025-11-29 09:12:01.969 226310 DEBUG oslo_concurrency.processutils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:12:02 np0005539564 nova_compute[226295]: 2025-11-29 09:12:02.150 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:02.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:02 np0005539564 nova_compute[226295]: 2025-11-29 09:12:02.718 226310 DEBUG nova.virt.libvirt.vif [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T09:11:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-176205723',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-176205723',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-176205723',id=221,image_ref='9edb158f-7f78-4486-aa1c-e248407be0c7',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPlNv+UuArIm18l69A5HfxFG21WpyJW8ToVt4k7LsCCm4KdCfn7zzpB30MrFJcDQN8lOais3fDY7vdggDjEDoZEquToagz5NNpmyQCISG63suEZVNGLiZXnMZoQvz4f23Q==',key_name='tempest-keypair-1579184601',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='51af0a2ee11a460ab825a484e5c6f4a3',ramdisk_id='',reservation_id='r-1m4xm20p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-531976395',image_owner_user_name='tempest-TestVolumeBootPattern-531976395-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-531976395',owner_user_name='tempest-TestVolumeBootPattern-531976395-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T09:11:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5ff561a95dc44b9fb9f7fd8fee80f589',uuid=06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') 
vif={"id": "7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4", "address": "fa:16:3e:5c:17:01", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c54a8ab-26", "ovs_interfaceid": "7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 04:12:02 np0005539564 nova_compute[226295]: 2025-11-29 09:12:02.720 226310 DEBUG nova.network.os_vif_util [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converting VIF {"id": "7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4", "address": "fa:16:3e:5c:17:01", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c54a8ab-26", "ovs_interfaceid": "7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:12:02 np0005539564 nova_compute[226295]: 2025-11-29 09:12:02.722 226310 DEBUG nova.network.os_vif_util [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:17:01,bridge_name='br-int',has_traffic_filtering=True,id=7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c54a8ab-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:12:02 np0005539564 nova_compute[226295]: 2025-11-29 09:12:02.724 226310 DEBUG nova.objects.instance [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:12:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:02.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:03 np0005539564 nova_compute[226295]: 2025-11-29 09:12:03.039 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:12:03 np0005539564 nova_compute[226295]: 2025-11-29 09:12:03.040 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:12:03 np0005539564 nova_compute[226295]: 2025-11-29 09:12:03.185 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:12:03 np0005539564 nova_compute[226295]: 2025-11-29 09:12:03.224 226310 DEBUG nova.virt.libvirt.driver [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] End _get_guest_xml xml=<domain type="kvm">
Nov 29 04:12:03 np0005539564 nova_compute[226295]:  <uuid>06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9</uuid>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:  <name>instance-000000dd</name>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <nova:name>tempest-TestVolumeBootPattern-image-snapshot-server-176205723</nova:name>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 09:12:01</nova:creationTime>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 04:12:03 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:        <nova:user uuid="5ff561a95dc44b9fb9f7fd8fee80f589">tempest-TestVolumeBootPattern-531976395-project-member</nova:user>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:        <nova:project uuid="51af0a2ee11a460ab825a484e5c6f4a3">tempest-TestVolumeBootPattern-531976395</nova:project>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <nova:root type="image" uuid="9edb158f-7f78-4486-aa1c-e248407be0c7"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:        <nova:port uuid="7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4">
Nov 29 04:12:03 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <system>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <entry name="serial">06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9</entry>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <entry name="uuid">06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9</entry>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    </system>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:  <os>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:  </os>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:  <features>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:  </features>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:  </clock>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:  <devices>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9_disk.config">
Nov 29 04:12:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      </source>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 04:12:03 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      </auth>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    </disk>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="volumes/volume-74e740e5-5b5d-4c86-ad39-4e66e4082491">
Nov 29 04:12:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      </source>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 04:12:03 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      </auth>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <serial>74e740e5-5b5d-4c86-ad39-4e66e4082491</serial>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    </disk>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:5c:17:01"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <target dev="tap7c54a8ab-26"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    </interface>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9/console.log" append="off"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    </serial>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <video>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    </video>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <input type="keyboard" bus="usb"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    </rng>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 04:12:03 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 04:12:03 np0005539564 nova_compute[226295]:  </devices>
Nov 29 04:12:03 np0005539564 nova_compute[226295]: </domain>
Nov 29 04:12:03 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 04:12:03 np0005539564 nova_compute[226295]: 2025-11-29 09:12:03.225 226310 DEBUG nova.compute.manager [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Preparing to wait for external event network-vif-plugged-7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 04:12:03 np0005539564 nova_compute[226295]: 2025-11-29 09:12:03.225 226310 DEBUG oslo_concurrency.lockutils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:12:03 np0005539564 nova_compute[226295]: 2025-11-29 09:12:03.225 226310 DEBUG oslo_concurrency.lockutils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:12:03 np0005539564 nova_compute[226295]: 2025-11-29 09:12:03.226 226310 DEBUG oslo_concurrency.lockutils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:12:03 np0005539564 nova_compute[226295]: 2025-11-29 09:12:03.227 226310 DEBUG nova.virt.libvirt.vif [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T09:11:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-176205723',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-176205723',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-176205723',id=221,image_ref='9edb158f-7f78-4486-aa1c-e248407be0c7',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPlNv+UuArIm18l69A5HfxFG21WpyJW8ToVt4k7LsCCm4KdCfn7zzpB30MrFJcDQN8lOais3fDY7vdggDjEDoZEquToagz5NNpmyQCISG63suEZVNGLiZXnMZoQvz4f23Q==',key_name='tempest-keypair-1579184601',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='51af0a2ee11a460ab825a484e5c6f4a3',ramdisk_id='',reservation_id='r-1m4xm20p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-531976395',image_owner_user_name='tempest-TestVolumeBootPattern-531976395-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-531976395',owner_user_name='tempest-TestVolumeBootPattern-531976395-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T09:11:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5ff561a95dc44b9fb9f7fd8fee80f589',uuid=06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4", "address": "fa:16:3e:5c:17:01", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c54a8ab-26", "ovs_interfaceid": "7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 04:12:03 np0005539564 nova_compute[226295]: 2025-11-29 09:12:03.227 226310 DEBUG nova.network.os_vif_util [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converting VIF {"id": "7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4", "address": "fa:16:3e:5c:17:01", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c54a8ab-26", "ovs_interfaceid": "7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:12:03 np0005539564 nova_compute[226295]: 2025-11-29 09:12:03.228 226310 DEBUG nova.network.os_vif_util [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:17:01,bridge_name='br-int',has_traffic_filtering=True,id=7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c54a8ab-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:12:03 np0005539564 nova_compute[226295]: 2025-11-29 09:12:03.228 226310 DEBUG os_vif [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:17:01,bridge_name='br-int',has_traffic_filtering=True,id=7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c54a8ab-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 04:12:03 np0005539564 nova_compute[226295]: 2025-11-29 09:12:03.230 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:03 np0005539564 nova_compute[226295]: 2025-11-29 09:12:03.231 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:12:03 np0005539564 nova_compute[226295]: 2025-11-29 09:12:03.232 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 04:12:03 np0005539564 nova_compute[226295]: 2025-11-29 09:12:03.239 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:03 np0005539564 nova_compute[226295]: 2025-11-29 09:12:03.240 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c54a8ab-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:12:03 np0005539564 nova_compute[226295]: 2025-11-29 09:12:03.241 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7c54a8ab-26, col_values=(('external_ids', {'iface-id': '7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5c:17:01', 'vm-uuid': '06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:12:03 np0005539564 NetworkManager[48997]: <info>  [1764407523.2447] manager: (tap7c54a8ab-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/412)
Nov 29 04:12:03 np0005539564 nova_compute[226295]: 2025-11-29 09:12:03.243 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:03 np0005539564 nova_compute[226295]: 2025-11-29 09:12:03.248 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:12:03 np0005539564 nova_compute[226295]: 2025-11-29 09:12:03.254 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:03 np0005539564 nova_compute[226295]: 2025-11-29 09:12:03.256 226310 INFO os_vif [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:17:01,bridge_name='br-int',has_traffic_filtering=True,id=7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c54a8ab-26')#033[00m
Nov 29 04:12:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:03.795 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:12:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:03.796 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:12:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:03.796 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:12:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:04.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:04 np0005539564 nova_compute[226295]: 2025-11-29 09:12:04.596 226310 DEBUG nova.virt.libvirt.driver [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 04:12:04 np0005539564 nova_compute[226295]: 2025-11-29 09:12:04.596 226310 DEBUG nova.virt.libvirt.driver [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 04:12:04 np0005539564 nova_compute[226295]: 2025-11-29 09:12:04.597 226310 DEBUG nova.virt.libvirt.driver [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] No VIF found with MAC fa:16:3e:5c:17:01, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 04:12:04 np0005539564 nova_compute[226295]: 2025-11-29 09:12:04.598 226310 INFO nova.virt.libvirt.driver [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Using config drive#033[00m
Nov 29 04:12:04 np0005539564 nova_compute[226295]: 2025-11-29 09:12:04.628 226310 DEBUG nova.storage.rbd_utils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] rbd image 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:12:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:04.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:04 np0005539564 nova_compute[226295]: 2025-11-29 09:12:04.969 226310 INFO nova.virt.libvirt.driver [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Creating config drive at /var/lib/nova/instances/06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9/disk.config#033[00m
Nov 29 04:12:04 np0005539564 nova_compute[226295]: 2025-11-29 09:12:04.978 226310 DEBUG oslo_concurrency.processutils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfjoeu330 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:12:05 np0005539564 nova_compute[226295]: 2025-11-29 09:12:05.141 226310 DEBUG oslo_concurrency.processutils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfjoeu330" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:12:05 np0005539564 nova_compute[226295]: 2025-11-29 09:12:05.189 226310 DEBUG nova.storage.rbd_utils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] rbd image 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:12:05 np0005539564 nova_compute[226295]: 2025-11-29 09:12:05.196 226310 DEBUG oslo_concurrency.processutils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9/disk.config 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:12:06 np0005539564 nova_compute[226295]: 2025-11-29 09:12:06.224 226310 DEBUG oslo_concurrency.processutils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9/disk.config 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:12:06 np0005539564 nova_compute[226295]: 2025-11-29 09:12:06.225 226310 INFO nova.virt.libvirt.driver [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Deleting local config drive /var/lib/nova/instances/06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9/disk.config because it was imported into RBD.#033[00m
Nov 29 04:12:06 np0005539564 kernel: tap7c54a8ab-26: entered promiscuous mode
Nov 29 04:12:06 np0005539564 NetworkManager[48997]: <info>  [1764407526.3096] manager: (tap7c54a8ab-26): new Tun device (/org/freedesktop/NetworkManager/Devices/413)
Nov 29 04:12:06 np0005539564 ovn_controller[130591]: 2025-11-29T09:12:06Z|00893|binding|INFO|Claiming lport 7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4 for this chassis.
Nov 29 04:12:06 np0005539564 ovn_controller[130591]: 2025-11-29T09:12:06Z|00894|binding|INFO|7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4: Claiming fa:16:3e:5c:17:01 10.100.0.6
Nov 29 04:12:06 np0005539564 nova_compute[226295]: 2025-11-29 09:12:06.311 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:06 np0005539564 nova_compute[226295]: 2025-11-29 09:12:06.319 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:06 np0005539564 NetworkManager[48997]: <info>  [1764407526.3216] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/414)
Nov 29 04:12:06 np0005539564 NetworkManager[48997]: <info>  [1764407526.3226] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/415)
Nov 29 04:12:06 np0005539564 systemd-udevd[316556]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 04:12:06 np0005539564 NetworkManager[48997]: <info>  [1764407526.3577] device (tap7c54a8ab-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 04:12:06 np0005539564 NetworkManager[48997]: <info>  [1764407526.3583] device (tap7c54a8ab-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 04:12:06 np0005539564 systemd-machined[190128]: New machine qemu-103-instance-000000dd.
Nov 29 04:12:06 np0005539564 systemd[1]: Started Virtual Machine qemu-103-instance-000000dd.
Nov 29 04:12:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:06.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:06 np0005539564 nova_compute[226295]: 2025-11-29 09:12:06.514 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:06.718 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:17:01 10.100.0.6'], port_security=['fa:16:3e:5c:17:01 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51af0a2ee11a460ab825a484e5c6f4a3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '163ad31c-1a42-447f-bc30-3a8f2cc05a7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26c70775-c49f-4c45-91d6-cdc9893e63eb, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:12:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:06.720 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4 in datapath 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad bound to our chassis#033[00m
Nov 29 04:12:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:06.722 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad#033[00m
Nov 29 04:12:06 np0005539564 ovn_controller[130591]: 2025-11-29T09:12:06Z|00895|binding|INFO|Setting lport 7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4 ovn-installed in OVS
Nov 29 04:12:06 np0005539564 ovn_controller[130591]: 2025-11-29T09:12:06Z|00896|binding|INFO|Setting lport 7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4 up in Southbound
Nov 29 04:12:06 np0005539564 nova_compute[226295]: 2025-11-29 09:12:06.725 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:06.734 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[8ac1ffa1-a128-42d9-97d2-22b6b20568e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:12:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:06.735 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8aaf4606-91 in ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 04:12:06 np0005539564 nova_compute[226295]: 2025-11-29 09:12:06.739 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:06.742 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8aaf4606-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 04:12:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:06.742 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[22d84b16-09ac-44e2-b00a-cd47125278ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:12:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:06.743 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[13731d90-9115-4b4b-9740-7eced56bcf97]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:12:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:06.763 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[632668b4-aeb4-43c4-89fd-15b1bbd263dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:12:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:06.779 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[3f46a65d-9840-494f-8e89-7b8e6b8bc9ef]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:12:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:06.805 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[93f87589-30ca-44e3-9418-dbc0ccb4b455]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:12:06 np0005539564 NetworkManager[48997]: <info>  [1764407526.8136] manager: (tap8aaf4606-90): new Veth device (/org/freedesktop/NetworkManager/Devices/416)
Nov 29 04:12:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:06.811 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5b097bdd-0a1b-40b3-ba12-a7f963c5be4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:12:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:12:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:06.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:12:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:06.854 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[0fdbc568-ce57-4fdc-8873-fc88b08940d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:12:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:06.857 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[375a35ef-aabb-4ba4-95e3-50a8d4d8e385]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:12:06 np0005539564 NetworkManager[48997]: <info>  [1764407526.8827] device (tap8aaf4606-90): carrier: link connected
Nov 29 04:12:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:06.889 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[c6d84c4a-0c3f-42b1-8012-c0130f64f722]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:12:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:06.910 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[a8c54ecd-7a5a-4711-ba6e-7222d8d2522d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8aaf4606-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:88:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 268], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1050156, 'reachable_time': 18863, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316633, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:12:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:06.924 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[94cee7fb-c4f7-42e3-8449-ec68c0c9bb1f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feca:8863'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1050156, 'tstamp': 1050156}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316635, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:12:06 np0005539564 nova_compute[226295]: 2025-11-29 09:12:06.941 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764407526.9411256, 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:12:06 np0005539564 nova_compute[226295]: 2025-11-29 09:12:06.942 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] VM Started (Lifecycle Event)#033[00m
Nov 29 04:12:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:06.946 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b7dc69cc-fa3d-428b-8fae-48451fa08c1d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8aaf4606-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:88:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 268], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1050156, 'reachable_time': 18863, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316636, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:12:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:06.984 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f3895945-a9a9-4c5f-89d6-99376be436c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:07.058 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[b4086f59-9154-4a8c-99ba-eff479adc3ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:07.059 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8aaf4606-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:07.060 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:07.061 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8aaf4606-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:12:07 np0005539564 NetworkManager[48997]: <info>  [1764407527.0634] manager: (tap8aaf4606-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/417)
Nov 29 04:12:07 np0005539564 kernel: tap8aaf4606-90: entered promiscuous mode
Nov 29 04:12:07 np0005539564 nova_compute[226295]: 2025-11-29 09:12:07.062 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:07.066 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8aaf4606-90, col_values=(('external_ids', {'iface-id': 'dcea3b5a-c3c6-4ea4-8c47-8c2337a9ad5a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:12:07 np0005539564 ovn_controller[130591]: 2025-11-29T09:12:07Z|00897|binding|INFO|Releasing lport dcea3b5a-c3c6-4ea4-8c47-8c2337a9ad5a from this chassis (sb_readonly=0)
Nov 29 04:12:07 np0005539564 nova_compute[226295]: 2025-11-29 09:12:07.067 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:07 np0005539564 nova_compute[226295]: 2025-11-29 09:12:07.091 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:07.092 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:07.093 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[19adba41-dde4-4be4-924e-50234c5f3644]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:07.094 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.pid.haproxy
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 04:12:07 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:07.094 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'env', 'PROCESS_TAG=haproxy-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 04:12:07 np0005539564 nova_compute[226295]: 2025-11-29 09:12:07.406 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:12:07 np0005539564 nova_compute[226295]: 2025-11-29 09:12:07.412 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764407526.9423707, 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:12:07 np0005539564 nova_compute[226295]: 2025-11-29 09:12:07.413 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] VM Paused (Lifecycle Event)#033[00m
Nov 29 04:12:07 np0005539564 podman[316668]: 2025-11-29 09:12:07.518086415 +0000 UTC m=+0.090870813 container create 43e032e2efb3a92d3aa3271526f71d00651dead7378d935fbfef73e589c795c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 04:12:07 np0005539564 podman[316668]: 2025-11-29 09:12:07.466197539 +0000 UTC m=+0.038981977 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 04:12:07 np0005539564 systemd[1]: Started libpod-conmon-43e032e2efb3a92d3aa3271526f71d00651dead7378d935fbfef73e589c795c6.scope.
Nov 29 04:12:07 np0005539564 systemd[1]: Started libcrun container.
Nov 29 04:12:07 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79e47a00329907323475b83bb3bc65ed4919e21cd359a42f601e1b0e8b3cecef/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 04:12:07 np0005539564 nova_compute[226295]: 2025-11-29 09:12:07.615 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:12:07 np0005539564 nova_compute[226295]: 2025-11-29 09:12:07.622 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 04:12:07 np0005539564 podman[316668]: 2025-11-29 09:12:07.758938768 +0000 UTC m=+0.331723146 container init 43e032e2efb3a92d3aa3271526f71d00651dead7378d935fbfef73e589c795c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:12:07 np0005539564 podman[316668]: 2025-11-29 09:12:07.768399374 +0000 UTC m=+0.341183722 container start 43e032e2efb3a92d3aa3271526f71d00651dead7378d935fbfef73e589c795c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:12:07 np0005539564 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[316683]: [NOTICE]   (316687) : New worker (316689) forked
Nov 29 04:12:07 np0005539564 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[316683]: [NOTICE]   (316687) : Loading success.
Nov 29 04:12:07 np0005539564 ceph-mgr[82125]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2945860420
Nov 29 04:12:08 np0005539564 nova_compute[226295]: 2025-11-29 09:12:08.081 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 04:12:08 np0005539564 nova_compute[226295]: 2025-11-29 09:12:08.187 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:08 np0005539564 nova_compute[226295]: 2025-11-29 09:12:08.244 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:12:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:08.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:08 np0005539564 nova_compute[226295]: 2025-11-29 09:12:08.588 226310 DEBUG nova.compute.manager [req-2987ec46-ee34-41b8-b9b6-9ae06b21ac9e req-b006bb55-aeaf-430a-9582-eee8a553733e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Received event network-vif-plugged-7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:12:08 np0005539564 nova_compute[226295]: 2025-11-29 09:12:08.589 226310 DEBUG oslo_concurrency.lockutils [req-2987ec46-ee34-41b8-b9b6-9ae06b21ac9e req-b006bb55-aeaf-430a-9582-eee8a553733e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:12:08 np0005539564 nova_compute[226295]: 2025-11-29 09:12:08.589 226310 DEBUG oslo_concurrency.lockutils [req-2987ec46-ee34-41b8-b9b6-9ae06b21ac9e req-b006bb55-aeaf-430a-9582-eee8a553733e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:12:08 np0005539564 nova_compute[226295]: 2025-11-29 09:12:08.590 226310 DEBUG oslo_concurrency.lockutils [req-2987ec46-ee34-41b8-b9b6-9ae06b21ac9e req-b006bb55-aeaf-430a-9582-eee8a553733e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:12:08 np0005539564 nova_compute[226295]: 2025-11-29 09:12:08.590 226310 DEBUG nova.compute.manager [req-2987ec46-ee34-41b8-b9b6-9ae06b21ac9e req-b006bb55-aeaf-430a-9582-eee8a553733e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Processing event network-vif-plugged-7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 04:12:08 np0005539564 nova_compute[226295]: 2025-11-29 09:12:08.591 226310 DEBUG nova.compute.manager [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 04:12:08 np0005539564 nova_compute[226295]: 2025-11-29 09:12:08.595 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764407528.595465, 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:12:08 np0005539564 nova_compute[226295]: 2025-11-29 09:12:08.595 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] VM Resumed (Lifecycle Event)#033[00m
Nov 29 04:12:08 np0005539564 nova_compute[226295]: 2025-11-29 09:12:08.597 226310 DEBUG nova.virt.libvirt.driver [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 04:12:08 np0005539564 nova_compute[226295]: 2025-11-29 09:12:08.603 226310 INFO nova.virt.libvirt.driver [-] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Instance spawned successfully.#033[00m
Nov 29 04:12:08 np0005539564 nova_compute[226295]: 2025-11-29 09:12:08.604 226310 INFO nova.compute.manager [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Took 7.16 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 04:12:08 np0005539564 nova_compute[226295]: 2025-11-29 09:12:08.604 226310 DEBUG nova.compute.manager [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:12:08 np0005539564 nova_compute[226295]: 2025-11-29 09:12:08.631 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:12:08 np0005539564 nova_compute[226295]: 2025-11-29 09:12:08.634 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 04:12:08 np0005539564 nova_compute[226295]: 2025-11-29 09:12:08.723 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 04:12:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:08.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:08 np0005539564 nova_compute[226295]: 2025-11-29 09:12:08.862 226310 INFO nova.compute.manager [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Took 21.79 seconds to build instance.#033[00m
Nov 29 04:12:09 np0005539564 nova_compute[226295]: 2025-11-29 09:12:09.080 226310 DEBUG oslo_concurrency.lockutils [None req-729df378-ceb5-409d-a219-71d961c3e58d 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:12:09 np0005539564 nova_compute[226295]: 2025-11-29 09:12:09.182 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:09.184 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=100, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=99) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:12:09 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:09.184 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:12:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:10.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:10 np0005539564 nova_compute[226295]: 2025-11-29 09:12:10.789 226310 DEBUG nova.compute.manager [req-f0088b10-8fa3-4831-b2dd-a62f68982e6a req-0d32bb4b-d8ae-4d52-8795-1ee6536c6d17 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Received event network-vif-plugged-7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:12:10 np0005539564 nova_compute[226295]: 2025-11-29 09:12:10.790 226310 DEBUG oslo_concurrency.lockutils [req-f0088b10-8fa3-4831-b2dd-a62f68982e6a req-0d32bb4b-d8ae-4d52-8795-1ee6536c6d17 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:12:10 np0005539564 nova_compute[226295]: 2025-11-29 09:12:10.791 226310 DEBUG oslo_concurrency.lockutils [req-f0088b10-8fa3-4831-b2dd-a62f68982e6a req-0d32bb4b-d8ae-4d52-8795-1ee6536c6d17 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:12:10 np0005539564 nova_compute[226295]: 2025-11-29 09:12:10.791 226310 DEBUG oslo_concurrency.lockutils [req-f0088b10-8fa3-4831-b2dd-a62f68982e6a req-0d32bb4b-d8ae-4d52-8795-1ee6536c6d17 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:12:10 np0005539564 nova_compute[226295]: 2025-11-29 09:12:10.791 226310 DEBUG nova.compute.manager [req-f0088b10-8fa3-4831-b2dd-a62f68982e6a req-0d32bb4b-d8ae-4d52-8795-1ee6536c6d17 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] No waiting events found dispatching network-vif-plugged-7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:12:10 np0005539564 nova_compute[226295]: 2025-11-29 09:12:10.791 226310 WARNING nova.compute.manager [req-f0088b10-8fa3-4831-b2dd-a62f68982e6a req-0d32bb4b-d8ae-4d52-8795-1ee6536c6d17 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Received unexpected event network-vif-plugged-7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4 for instance with vm_state active and task_state None.#033[00m
Nov 29 04:12:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:10.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:12.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:12.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:13 np0005539564 nova_compute[226295]: 2025-11-29 09:12:13.188 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:13 np0005539564 nova_compute[226295]: 2025-11-29 09:12:13.246 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:12:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:14.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:14 np0005539564 podman[316700]: 2025-11-29 09:12:14.539940115 +0000 UTC m=+0.075655840 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:12:14 np0005539564 podman[316701]: 2025-11-29 09:12:14.564433419 +0000 UTC m=+0.091934152 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 04:12:14 np0005539564 podman[316699]: 2025-11-29 09:12:14.596496606 +0000 UTC m=+0.136218739 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:12:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:14.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:15 np0005539564 nova_compute[226295]: 2025-11-29 09:12:15.485 226310 DEBUG nova.compute.manager [req-75b47c02-abf8-4bcd-90b8-034476d3888a req-b5ba7ebd-b9ac-45f5-b30a-602f8d9ab20b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Received event network-changed-7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:12:15 np0005539564 nova_compute[226295]: 2025-11-29 09:12:15.485 226310 DEBUG nova.compute.manager [req-75b47c02-abf8-4bcd-90b8-034476d3888a req-b5ba7ebd-b9ac-45f5-b30a-602f8d9ab20b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Refreshing instance network info cache due to event network-changed-7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 04:12:15 np0005539564 nova_compute[226295]: 2025-11-29 09:12:15.486 226310 DEBUG oslo_concurrency.lockutils [req-75b47c02-abf8-4bcd-90b8-034476d3888a req-b5ba7ebd-b9ac-45f5-b30a-602f8d9ab20b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:12:15 np0005539564 nova_compute[226295]: 2025-11-29 09:12:15.486 226310 DEBUG oslo_concurrency.lockutils [req-75b47c02-abf8-4bcd-90b8-034476d3888a req-b5ba7ebd-b9ac-45f5-b30a-602f8d9ab20b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:12:15 np0005539564 nova_compute[226295]: 2025-11-29 09:12:15.487 226310 DEBUG nova.network.neutron [req-75b47c02-abf8-4bcd-90b8-034476d3888a req-b5ba7ebd-b9ac-45f5-b30a-602f8d9ab20b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Refreshing network info cache for port 7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 04:12:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:12:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:16.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:12:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:16.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:17 np0005539564 nova_compute[226295]: 2025-11-29 09:12:17.256 226310 DEBUG nova.network.neutron [req-75b47c02-abf8-4bcd-90b8-034476d3888a req-b5ba7ebd-b9ac-45f5-b30a-602f8d9ab20b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Updated VIF entry in instance network info cache for port 7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 04:12:17 np0005539564 nova_compute[226295]: 2025-11-29 09:12:17.256 226310 DEBUG nova.network.neutron [req-75b47c02-abf8-4bcd-90b8-034476d3888a req-b5ba7ebd-b9ac-45f5-b30a-602f8d9ab20b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Updating instance_info_cache with network_info: [{"id": "7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4", "address": "fa:16:3e:5c:17:01", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c54a8ab-26", "ovs_interfaceid": "7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:12:17 np0005539564 nova_compute[226295]: 2025-11-29 09:12:17.526 226310 DEBUG oslo_concurrency.lockutils [req-75b47c02-abf8-4bcd-90b8-034476d3888a req-b5ba7ebd-b9ac-45f5-b30a-602f8d9ab20b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:12:18 np0005539564 nova_compute[226295]: 2025-11-29 09:12:18.191 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:18 np0005539564 nova_compute[226295]: 2025-11-29 09:12:18.247 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:12:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:18.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:12:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:18.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:12:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:19.188 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '100'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:12:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:20.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:20.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:22.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:22.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:23 np0005539564 nova_compute[226295]: 2025-11-29 09:12:23.220 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:23 np0005539564 nova_compute[226295]: 2025-11-29 09:12:23.248 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:12:24 np0005539564 ovn_controller[130591]: 2025-11-29T09:12:24Z|00113|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.12 does not match offer 10.100.0.6
Nov 29 04:12:24 np0005539564 ovn_controller[130591]: 2025-11-29T09:12:24Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:5c:17:01 10.100.0.6
Nov 29 04:12:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:24.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:24.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:26.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:26.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:27 np0005539564 ovn_controller[130591]: 2025-11-29T09:12:27Z|00115|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.12 does not match offer 10.100.0.6
Nov 29 04:12:27 np0005539564 ovn_controller[130591]: 2025-11-29T09:12:27Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:5c:17:01 10.100.0.6
Nov 29 04:12:28 np0005539564 nova_compute[226295]: 2025-11-29 09:12:28.222 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:28 np0005539564 nova_compute[226295]: 2025-11-29 09:12:28.251 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:12:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:28.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:28.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:29 np0005539564 ovn_controller[130591]: 2025-11-29T09:12:29Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5c:17:01 10.100.0.6
Nov 29 04:12:29 np0005539564 ovn_controller[130591]: 2025-11-29T09:12:29Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5c:17:01 10.100.0.6
Nov 29 04:12:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:30.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:30.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:32.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:32.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:33 np0005539564 nova_compute[226295]: 2025-11-29 09:12:33.225 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:33 np0005539564 nova_compute[226295]: 2025-11-29 09:12:33.252 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:12:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:34.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:34.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:12:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:36.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:12:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:36.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:12:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:12:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:12:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:12:38 np0005539564 nova_compute[226295]: 2025-11-29 09:12:38.254 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:12:38 np0005539564 nova_compute[226295]: 2025-11-29 09:12:38.258 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:12:38 np0005539564 nova_compute[226295]: 2025-11-29 09:12:38.258 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5005 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 04:12:38 np0005539564 nova_compute[226295]: 2025-11-29 09:12:38.259 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:12:38 np0005539564 nova_compute[226295]: 2025-11-29 09:12:38.275 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:38 np0005539564 nova_compute[226295]: 2025-11-29 09:12:38.276 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:12:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:12:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:38.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:38.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:40 np0005539564 nova_compute[226295]: 2025-11-29 09:12:40.033 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:12:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:12:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:12:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:12:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:12:40 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:12:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:40.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:40.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:41 np0005539564 nova_compute[226295]: 2025-11-29 09:12:41.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:12:41 np0005539564 nova_compute[226295]: 2025-11-29 09:12:41.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:12:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:42.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:12:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:42.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:12:43 np0005539564 nova_compute[226295]: 2025-11-29 09:12:43.277 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:43 np0005539564 nova_compute[226295]: 2025-11-29 09:12:43.279 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:12:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:12:44 np0005539564 nova_compute[226295]: 2025-11-29 09:12:44.336 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:12:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:44.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:44 np0005539564 ovn_controller[130591]: 2025-11-29T09:12:44Z|00898|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Nov 29 04:12:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:44.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:45 np0005539564 nova_compute[226295]: 2025-11-29 09:12:45.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:12:45 np0005539564 nova_compute[226295]: 2025-11-29 09:12:45.342 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:12:45 np0005539564 podman[317011]: 2025-11-29 09:12:45.530504639 +0000 UTC m=+0.081163660 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:12:45 np0005539564 podman[317012]: 2025-11-29 09:12:45.564009526 +0000 UTC m=+0.101877291 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:12:45 np0005539564 podman[317010]: 2025-11-29 09:12:45.579145036 +0000 UTC m=+0.132361866 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller)
Nov 29 04:12:46 np0005539564 nova_compute[226295]: 2025-11-29 09:12:46.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:12:46 np0005539564 nova_compute[226295]: 2025-11-29 09:12:46.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:12:46 np0005539564 nova_compute[226295]: 2025-11-29 09:12:46.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:12:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:46.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:46 np0005539564 nova_compute[226295]: 2025-11-29 09:12:46.764 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:12:46 np0005539564 nova_compute[226295]: 2025-11-29 09:12:46.765 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:12:46 np0005539564 nova_compute[226295]: 2025-11-29 09:12:46.765 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 04:12:46 np0005539564 nova_compute[226295]: 2025-11-29 09:12:46.765 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:12:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:46.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:47 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:12:47 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:12:48 np0005539564 nova_compute[226295]: 2025-11-29 09:12:48.279 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:12:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:12:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:48.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:48 np0005539564 nova_compute[226295]: 2025-11-29 09:12:48.780 226310 DEBUG oslo_concurrency.lockutils [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:12:48 np0005539564 nova_compute[226295]: 2025-11-29 09:12:48.781 226310 DEBUG oslo_concurrency.lockutils [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:12:48 np0005539564 nova_compute[226295]: 2025-11-29 09:12:48.782 226310 DEBUG oslo_concurrency.lockutils [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:12:48 np0005539564 nova_compute[226295]: 2025-11-29 09:12:48.782 226310 DEBUG oslo_concurrency.lockutils [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:12:48 np0005539564 nova_compute[226295]: 2025-11-29 09:12:48.783 226310 DEBUG oslo_concurrency.lockutils [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:12:48 np0005539564 nova_compute[226295]: 2025-11-29 09:12:48.785 226310 INFO nova.compute.manager [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Terminating instance#033[00m
Nov 29 04:12:48 np0005539564 nova_compute[226295]: 2025-11-29 09:12:48.787 226310 DEBUG nova.compute.manager [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 04:12:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:48.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:49 np0005539564 kernel: tap7c54a8ab-26 (unregistering): left promiscuous mode
Nov 29 04:12:49 np0005539564 NetworkManager[48997]: <info>  [1764407569.2605] device (tap7c54a8ab-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 04:12:49 np0005539564 nova_compute[226295]: 2025-11-29 09:12:49.277 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:49 np0005539564 ovn_controller[130591]: 2025-11-29T09:12:49Z|00899|binding|INFO|Releasing lport 7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4 from this chassis (sb_readonly=0)
Nov 29 04:12:49 np0005539564 ovn_controller[130591]: 2025-11-29T09:12:49Z|00900|binding|INFO|Setting lport 7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4 down in Southbound
Nov 29 04:12:49 np0005539564 ovn_controller[130591]: 2025-11-29T09:12:49Z|00901|binding|INFO|Removing iface tap7c54a8ab-26 ovn-installed in OVS
Nov 29 04:12:49 np0005539564 nova_compute[226295]: 2025-11-29 09:12:49.281 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:49 np0005539564 nova_compute[226295]: 2025-11-29 09:12:49.304 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:49 np0005539564 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d000000dd.scope: Deactivated successfully.
Nov 29 04:12:49 np0005539564 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d000000dd.scope: Consumed 16.778s CPU time.
Nov 29 04:12:49 np0005539564 systemd-machined[190128]: Machine qemu-103-instance-000000dd terminated.
Nov 29 04:12:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:49.359 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:17:01 10.100.0.6'], port_security=['fa:16:3e:5c:17:01 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51af0a2ee11a460ab825a484e5c6f4a3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '163ad31c-1a42-447f-bc30-3a8f2cc05a7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26c70775-c49f-4c45-91d6-cdc9893e63eb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:12:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:49.360 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4 in datapath 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad unbound from our chassis#033[00m
Nov 29 04:12:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:49.361 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 04:12:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:49.363 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6da00989-b2ba-44c6-8815-bdaef0759f9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:12:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:49.363 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad namespace which is not needed anymore#033[00m
Nov 29 04:12:49 np0005539564 nova_compute[226295]: 2025-11-29 09:12:49.413 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:49 np0005539564 nova_compute[226295]: 2025-11-29 09:12:49.419 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:49 np0005539564 nova_compute[226295]: 2025-11-29 09:12:49.442 226310 INFO nova.virt.libvirt.driver [-] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Instance destroyed successfully.#033[00m
Nov 29 04:12:49 np0005539564 nova_compute[226295]: 2025-11-29 09:12:49.443 226310 DEBUG nova.objects.instance [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lazy-loading 'resources' on Instance uuid 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:12:49 np0005539564 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[316683]: [NOTICE]   (316687) : haproxy version is 2.8.14-c23fe91
Nov 29 04:12:49 np0005539564 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[316683]: [NOTICE]   (316687) : path to executable is /usr/sbin/haproxy
Nov 29 04:12:49 np0005539564 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[316683]: [WARNING]  (316687) : Exiting Master process...
Nov 29 04:12:49 np0005539564 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[316683]: [WARNING]  (316687) : Exiting Master process...
Nov 29 04:12:49 np0005539564 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[316683]: [ALERT]    (316687) : Current worker (316689) exited with code 143 (Terminated)
Nov 29 04:12:49 np0005539564 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[316683]: [WARNING]  (316687) : All workers exited. Exiting... (0)
Nov 29 04:12:49 np0005539564 systemd[1]: libpod-43e032e2efb3a92d3aa3271526f71d00651dead7378d935fbfef73e589c795c6.scope: Deactivated successfully.
Nov 29 04:12:49 np0005539564 podman[317156]: 2025-11-29 09:12:49.622738363 +0000 UTC m=+0.142597264 container died 43e032e2efb3a92d3aa3271526f71d00651dead7378d935fbfef73e589c795c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 04:12:49 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-43e032e2efb3a92d3aa3271526f71d00651dead7378d935fbfef73e589c795c6-userdata-shm.mount: Deactivated successfully.
Nov 29 04:12:49 np0005539564 systemd[1]: var-lib-containers-storage-overlay-79e47a00329907323475b83bb3bc65ed4919e21cd359a42f601e1b0e8b3cecef-merged.mount: Deactivated successfully.
Nov 29 04:12:49 np0005539564 podman[317156]: 2025-11-29 09:12:49.664753181 +0000 UTC m=+0.184612082 container cleanup 43e032e2efb3a92d3aa3271526f71d00651dead7378d935fbfef73e589c795c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 04:12:49 np0005539564 systemd[1]: libpod-conmon-43e032e2efb3a92d3aa3271526f71d00651dead7378d935fbfef73e589c795c6.scope: Deactivated successfully.
Nov 29 04:12:49 np0005539564 podman[317186]: 2025-11-29 09:12:49.741339924 +0000 UTC m=+0.048187785 container remove 43e032e2efb3a92d3aa3271526f71d00651dead7378d935fbfef73e589c795c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 04:12:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:49.747 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[48f9f55c-fc8c-4f94-abd0-8aed6d334448]: (4, ('Sat Nov 29 09:12:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad (43e032e2efb3a92d3aa3271526f71d00651dead7378d935fbfef73e589c795c6)\n43e032e2efb3a92d3aa3271526f71d00651dead7378d935fbfef73e589c795c6\nSat Nov 29 09:12:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad (43e032e2efb3a92d3aa3271526f71d00651dead7378d935fbfef73e589c795c6)\n43e032e2efb3a92d3aa3271526f71d00651dead7378d935fbfef73e589c795c6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:12:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:49.750 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[54c07e77-9b0d-4eac-9402-5a60c47b9305]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:12:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:49.752 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8aaf4606-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:12:49 np0005539564 nova_compute[226295]: 2025-11-29 09:12:49.754 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:49 np0005539564 kernel: tap8aaf4606-90: left promiscuous mode
Nov 29 04:12:49 np0005539564 nova_compute[226295]: 2025-11-29 09:12:49.772 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:49 np0005539564 nova_compute[226295]: 2025-11-29 09:12:49.774 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:49.776 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[01f1dcfe-b0c7-4d30-b706-ba0b3b309990]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:12:49 np0005539564 nova_compute[226295]: 2025-11-29 09:12:49.798 226310 DEBUG nova.virt.libvirt.vif [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T09:11:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-176205723',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-176205723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-176205723',id=221,image_ref='9edb158f-7f78-4486-aa1c-e248407be0c7',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPlNv+UuArIm18l69A5HfxFG21WpyJW8ToVt4k7LsCCm4KdCfn7zzpB30MrFJcDQN8lOais3fDY7vdggDjEDoZEquToagz5NNpmyQCISG63suEZVNGLiZXnMZoQvz4f23Q==',key_name='tempest-keypair-1579184601',keypairs=<?>,launch_index=0,launched_at=2025-11-29T09:12:08Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='51af0a2ee11a460ab825a484e5c6f4a3',ramdisk_id='',reservation_id='r-1m4xm20p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-531976395',image_owner_user_name='tempest-TestVolumeBootPattern-531976395-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-531976395',owner_user_name='tempest-TestVolumeBootPattern-531976395-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T09:12:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5ff561a95dc44b9fb9f7fd8fee80f589',uuid=06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4", 
"address": "fa:16:3e:5c:17:01", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c54a8ab-26", "ovs_interfaceid": "7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 04:12:49 np0005539564 nova_compute[226295]: 2025-11-29 09:12:49.799 226310 DEBUG nova.network.os_vif_util [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converting VIF {"id": "7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4", "address": "fa:16:3e:5c:17:01", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c54a8ab-26", "ovs_interfaceid": "7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:12:49 np0005539564 nova_compute[226295]: 2025-11-29 09:12:49.800 226310 DEBUG nova.network.os_vif_util [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5c:17:01,bridge_name='br-int',has_traffic_filtering=True,id=7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c54a8ab-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:12:49 np0005539564 nova_compute[226295]: 2025-11-29 09:12:49.800 226310 DEBUG os_vif [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5c:17:01,bridge_name='br-int',has_traffic_filtering=True,id=7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c54a8ab-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 04:12:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:49.800 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[f7bd6471-2ad8-4dfb-b897-0c4656936b08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:12:49 np0005539564 nova_compute[226295]: 2025-11-29 09:12:49.802 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:49 np0005539564 nova_compute[226295]: 2025-11-29 09:12:49.802 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c54a8ab-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:12:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:49.802 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[72e8bb2d-924a-4364-96c0-610d66363be4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:12:49 np0005539564 nova_compute[226295]: 2025-11-29 09:12:49.804 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:49 np0005539564 nova_compute[226295]: 2025-11-29 09:12:49.807 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:49 np0005539564 nova_compute[226295]: 2025-11-29 09:12:49.813 226310 INFO os_vif [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5c:17:01,bridge_name='br-int',has_traffic_filtering=True,id=7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c54a8ab-26')#033[00m
Nov 29 04:12:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:49.828 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[bf129c22-3394-47a9-b802-3e924798d74a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1050148, 'reachable_time': 41012, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317205, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:12:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:49.833 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 04:12:49 np0005539564 systemd[1]: run-netns-ovnmeta\x2d8aaf4606\x2d9df9\x2d4ad5\x2d9ade\x2df48fdc6cfaad.mount: Deactivated successfully.
Nov 29 04:12:49 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:49.834 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[ded6dff0-cd78-418a-a926-26870c456ed3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:12:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:50.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:50 np0005539564 nova_compute[226295]: 2025-11-29 09:12:50.675 226310 INFO nova.virt.libvirt.driver [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Deleting instance files /var/lib/nova/instances/06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9_del#033[00m
Nov 29 04:12:50 np0005539564 nova_compute[226295]: 2025-11-29 09:12:50.676 226310 INFO nova.virt.libvirt.driver [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Deletion of /var/lib/nova/instances/06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9_del complete#033[00m
Nov 29 04:12:50 np0005539564 nova_compute[226295]: 2025-11-29 09:12:50.760 226310 DEBUG nova.compute.manager [req-c78f1600-a8e4-4241-b851-fde408a1cf38 req-08695223-6d20-49d8-b630-27a07977f8d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Received event network-vif-unplugged-7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:12:50 np0005539564 nova_compute[226295]: 2025-11-29 09:12:50.760 226310 DEBUG oslo_concurrency.lockutils [req-c78f1600-a8e4-4241-b851-fde408a1cf38 req-08695223-6d20-49d8-b630-27a07977f8d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:12:50 np0005539564 nova_compute[226295]: 2025-11-29 09:12:50.760 226310 DEBUG oslo_concurrency.lockutils [req-c78f1600-a8e4-4241-b851-fde408a1cf38 req-08695223-6d20-49d8-b630-27a07977f8d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:12:50 np0005539564 nova_compute[226295]: 2025-11-29 09:12:50.760 226310 DEBUG oslo_concurrency.lockutils [req-c78f1600-a8e4-4241-b851-fde408a1cf38 req-08695223-6d20-49d8-b630-27a07977f8d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:12:50 np0005539564 nova_compute[226295]: 2025-11-29 09:12:50.761 226310 DEBUG nova.compute.manager [req-c78f1600-a8e4-4241-b851-fde408a1cf38 req-08695223-6d20-49d8-b630-27a07977f8d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] No waiting events found dispatching network-vif-unplugged-7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:12:50 np0005539564 nova_compute[226295]: 2025-11-29 09:12:50.761 226310 DEBUG nova.compute.manager [req-c78f1600-a8e4-4241-b851-fde408a1cf38 req-08695223-6d20-49d8-b630-27a07977f8d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Received event network-vif-unplugged-7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 04:12:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:12:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:50.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:12:51 np0005539564 nova_compute[226295]: 2025-11-29 09:12:51.055 226310 INFO nova.compute.manager [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Took 2.27 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 04:12:51 np0005539564 nova_compute[226295]: 2025-11-29 09:12:51.056 226310 DEBUG oslo.service.loopingcall [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 04:12:51 np0005539564 nova_compute[226295]: 2025-11-29 09:12:51.056 226310 DEBUG nova.compute.manager [-] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 04:12:51 np0005539564 nova_compute[226295]: 2025-11-29 09:12:51.057 226310 DEBUG nova.network.neutron [-] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 04:12:51 np0005539564 nova_compute[226295]: 2025-11-29 09:12:51.842 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Updating instance_info_cache with network_info: [{"id": "7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4", "address": "fa:16:3e:5c:17:01", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c54a8ab-26", "ovs_interfaceid": "7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:12:51 np0005539564 nova_compute[226295]: 2025-11-29 09:12:51.906 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:12:51 np0005539564 nova_compute[226295]: 2025-11-29 09:12:51.907 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 04:12:51 np0005539564 nova_compute[226295]: 2025-11-29 09:12:51.907 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:12:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:52.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:12:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:52.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:12:53 np0005539564 nova_compute[226295]: 2025-11-29 09:12:53.257 226310 DEBUG nova.compute.manager [req-7ed48b07-cf88-4c6b-90b8-6a48b0234c34 req-daa7441b-d452-43d9-a390-89bc6e832a44 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Received event network-vif-plugged-7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:12:53 np0005539564 nova_compute[226295]: 2025-11-29 09:12:53.257 226310 DEBUG oslo_concurrency.lockutils [req-7ed48b07-cf88-4c6b-90b8-6a48b0234c34 req-daa7441b-d452-43d9-a390-89bc6e832a44 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:12:53 np0005539564 nova_compute[226295]: 2025-11-29 09:12:53.258 226310 DEBUG oslo_concurrency.lockutils [req-7ed48b07-cf88-4c6b-90b8-6a48b0234c34 req-daa7441b-d452-43d9-a390-89bc6e832a44 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:12:53 np0005539564 nova_compute[226295]: 2025-11-29 09:12:53.258 226310 DEBUG oslo_concurrency.lockutils [req-7ed48b07-cf88-4c6b-90b8-6a48b0234c34 req-daa7441b-d452-43d9-a390-89bc6e832a44 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:12:53 np0005539564 nova_compute[226295]: 2025-11-29 09:12:53.258 226310 DEBUG nova.compute.manager [req-7ed48b07-cf88-4c6b-90b8-6a48b0234c34 req-daa7441b-d452-43d9-a390-89bc6e832a44 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] No waiting events found dispatching network-vif-plugged-7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:12:53 np0005539564 nova_compute[226295]: 2025-11-29 09:12:53.258 226310 WARNING nova.compute.manager [req-7ed48b07-cf88-4c6b-90b8-6a48b0234c34 req-daa7441b-d452-43d9-a390-89bc6e832a44 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Received unexpected event network-vif-plugged-7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 04:12:53 np0005539564 nova_compute[226295]: 2025-11-29 09:12:53.282 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:12:54 np0005539564 nova_compute[226295]: 2025-11-29 09:12:54.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:12:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:54.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:54 np0005539564 nova_compute[226295]: 2025-11-29 09:12:54.805 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:54.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:55 np0005539564 nova_compute[226295]: 2025-11-29 09:12:55.481 226310 DEBUG nova.network.neutron [-] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:12:55 np0005539564 nova_compute[226295]: 2025-11-29 09:12:55.499 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:55.500 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=101, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=100) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:12:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:55.501 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:12:55 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:12:55.502 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '101'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:12:55 np0005539564 nova_compute[226295]: 2025-11-29 09:12:55.592 226310 DEBUG nova.compute.manager [req-7c60f9c7-33c1-431d-8b64-b028f9d58152 req-2c8cb54d-2ba4-4e34-8c31-9573de2eed3d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Received event network-vif-deleted-7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:12:55 np0005539564 nova_compute[226295]: 2025-11-29 09:12:55.593 226310 INFO nova.compute.manager [req-7c60f9c7-33c1-431d-8b64-b028f9d58152 req-2c8cb54d-2ba4-4e34-8c31-9573de2eed3d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Neutron deleted interface 7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 04:12:55 np0005539564 nova_compute[226295]: 2025-11-29 09:12:55.593 226310 DEBUG nova.network.neutron [req-7c60f9c7-33c1-431d-8b64-b028f9d58152 req-2c8cb54d-2ba4-4e34-8c31-9573de2eed3d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:12:56 np0005539564 nova_compute[226295]: 2025-11-29 09:12:56.274 226310 INFO nova.compute.manager [-] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Took 5.22 seconds to deallocate network for instance.#033[00m
Nov 29 04:12:56 np0005539564 nova_compute[226295]: 2025-11-29 09:12:56.470 226310 DEBUG nova.compute.manager [req-7c60f9c7-33c1-431d-8b64-b028f9d58152 req-2c8cb54d-2ba4-4e34-8c31-9573de2eed3d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Detach interface failed, port_id=7c54a8ab-26bf-4245-9a31-cfc4e2a13ee4, reason: Instance 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 04:12:56 np0005539564 nova_compute[226295]: 2025-11-29 09:12:56.546 226310 INFO nova.compute.manager [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Took 0.27 seconds to detach 1 volumes for instance.#033[00m
Nov 29 04:12:56 np0005539564 nova_compute[226295]: 2025-11-29 09:12:56.548 226310 DEBUG nova.compute.manager [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Deleting volume: 74e740e5-5b5d-4c86-ad39-4e66e4082491 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Nov 29 04:12:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:56.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:56 np0005539564 nova_compute[226295]: 2025-11-29 09:12:56.853 226310 DEBUG oslo_concurrency.lockutils [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:12:56 np0005539564 nova_compute[226295]: 2025-11-29 09:12:56.855 226310 DEBUG oslo_concurrency.lockutils [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:12:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:56.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:56 np0005539564 nova_compute[226295]: 2025-11-29 09:12:56.924 226310 DEBUG oslo_concurrency.processutils [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:12:57 np0005539564 nova_compute[226295]: 2025-11-29 09:12:57.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:12:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:12:57 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2225647507' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:12:57 np0005539564 nova_compute[226295]: 2025-11-29 09:12:57.382 226310 DEBUG oslo_concurrency.processutils [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:12:57 np0005539564 nova_compute[226295]: 2025-11-29 09:12:57.392 226310 DEBUG nova.compute.provider_tree [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:12:57 np0005539564 nova_compute[226295]: 2025-11-29 09:12:57.413 226310 DEBUG nova.scheduler.client.report [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:12:57 np0005539564 nova_compute[226295]: 2025-11-29 09:12:57.436 226310 DEBUG oslo_concurrency.lockutils [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:12:57 np0005539564 nova_compute[226295]: 2025-11-29 09:12:57.461 226310 INFO nova.scheduler.client.report [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Deleted allocations for instance 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9#033[00m
Nov 29 04:12:57 np0005539564 nova_compute[226295]: 2025-11-29 09:12:57.530 226310 DEBUG oslo_concurrency.lockutils [None req-e61b2895-a2ff-4dc2-8e95-d5d49de25415 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:12:58 np0005539564 nova_compute[226295]: 2025-11-29 09:12:58.285 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:12:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:58.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e433 e433: 3 total, 3 up, 3 in
Nov 29 04:12:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:12:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:58.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:59 np0005539564 nova_compute[226295]: 2025-11-29 09:12:59.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:12:59 np0005539564 nova_compute[226295]: 2025-11-29 09:12:59.370 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:12:59 np0005539564 nova_compute[226295]: 2025-11-29 09:12:59.370 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:12:59 np0005539564 nova_compute[226295]: 2025-11-29 09:12:59.371 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:12:59 np0005539564 nova_compute[226295]: 2025-11-29 09:12:59.371 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:12:59 np0005539564 nova_compute[226295]: 2025-11-29 09:12:59.372 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:12:59 np0005539564 nova_compute[226295]: 2025-11-29 09:12:59.808 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:12:59 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3712192792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:12:59 np0005539564 nova_compute[226295]: 2025-11-29 09:12:59.858 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:13:00 np0005539564 nova_compute[226295]: 2025-11-29 09:13:00.057 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:13:00 np0005539564 nova_compute[226295]: 2025-11-29 09:13:00.059 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4157MB free_disk=20.988109588623047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:13:00 np0005539564 nova_compute[226295]: 2025-11-29 09:13:00.059 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:13:00 np0005539564 nova_compute[226295]: 2025-11-29 09:13:00.060 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:13:00 np0005539564 nova_compute[226295]: 2025-11-29 09:13:00.123 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:13:00 np0005539564 nova_compute[226295]: 2025-11-29 09:13:00.124 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:13:00 np0005539564 nova_compute[226295]: 2025-11-29 09:13:00.139 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:13:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:00.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:13:00 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4071930598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:13:00 np0005539564 nova_compute[226295]: 2025-11-29 09:13:00.616 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:13:00 np0005539564 nova_compute[226295]: 2025-11-29 09:13:00.623 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:13:00 np0005539564 nova_compute[226295]: 2025-11-29 09:13:00.639 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:13:00 np0005539564 nova_compute[226295]: 2025-11-29 09:13:00.661 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:13:00 np0005539564 nova_compute[226295]: 2025-11-29 09:13:00.661 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:13:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:00.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:02.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:02.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:03 np0005539564 nova_compute[226295]: 2025-11-29 09:13:03.287 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:13:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:03.796 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:13:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:03.796 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:13:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:03.797 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:13:04 np0005539564 nova_compute[226295]: 2025-11-29 09:13:04.440 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764407569.4391513, 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 04:13:04 np0005539564 nova_compute[226295]: 2025-11-29 09:13:04.441 226310 INFO nova.compute.manager [-] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] VM Stopped (Lifecycle Event)
Nov 29 04:13:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.004000106s ======
Nov 29 04:13:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:04.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000106s
Nov 29 04:13:04 np0005539564 nova_compute[226295]: 2025-11-29 09:13:04.736 226310 DEBUG nova.compute.manager [None req-2f84d88b-dee5-4305-9490-6f74eb234d63 - - - - - -] [instance: 06bed67f-1db5-4ea4-ad30-c92a6ac6d9c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 04:13:04 np0005539564 nova_compute[226295]: 2025-11-29 09:13:04.810 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:13:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:04.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:06.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:06.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e434 e434: 3 total, 3 up, 3 in
Nov 29 04:13:08 np0005539564 nova_compute[226295]: 2025-11-29 09:13:08.289 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:13:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:13:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:08.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:13:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:08.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:13:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e435 e435: 3 total, 3 up, 3 in
Nov 29 04:13:09 np0005539564 nova_compute[226295]: 2025-11-29 09:13:09.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:13:09 np0005539564 nova_compute[226295]: 2025-11-29 09:13:09.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 04:13:09 np0005539564 nova_compute[226295]: 2025-11-29 09:13:09.812 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:13:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:10.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:10.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:13:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:12.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:13:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:13:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:12.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:13:13 np0005539564 nova_compute[226295]: 2025-11-29 09:13:13.292 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:13:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:13:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:14.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:14 np0005539564 nova_compute[226295]: 2025-11-29 09:13:14.814 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:13:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:14.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:16 np0005539564 podman[317294]: 2025-11-29 09:13:16.512633429 +0000 UTC m=+0.064398945 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 04:13:16 np0005539564 podman[317295]: 2025-11-29 09:13:16.554093363 +0000 UTC m=+0.097042610 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:13:16 np0005539564 podman[317293]: 2025-11-29 09:13:16.568157493 +0000 UTC m=+0.122137688 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 04:13:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:16.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:16.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:17 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e436 e436: 3 total, 3 up, 3 in
Nov 29 04:13:18 np0005539564 nova_compute[226295]: 2025-11-29 09:13:18.294 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:13:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:13:18 np0005539564 nova_compute[226295]: 2025-11-29 09:13:18.420 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:13:18 np0005539564 nova_compute[226295]: 2025-11-29 09:13:18.420 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 04:13:18 np0005539564 nova_compute[226295]: 2025-11-29 09:13:18.481 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 04:13:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:13:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:18.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:13:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:13:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:18.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:13:19 np0005539564 nova_compute[226295]: 2025-11-29 09:13:19.817 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:13:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:13:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:20.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:13:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:20.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:13:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:22.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:13:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:22.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:23 np0005539564 nova_compute[226295]: 2025-11-29 09:13:23.343 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:13:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:13:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:24.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:24 np0005539564 nova_compute[226295]: 2025-11-29 09:13:24.819 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:13:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:24.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:26.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:13:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:26.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:13:28 np0005539564 nova_compute[226295]: 2025-11-29 09:13:28.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:13:28 np0005539564 nova_compute[226295]: 2025-11-29 09:13:28.345 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:13:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:13:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:13:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:28.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:13:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:28.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:29 np0005539564 nova_compute[226295]: 2025-11-29 09:13:29.821 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:13:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:30.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:30.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:13:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:32.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:13:32 np0005539564 ovn_controller[130591]: 2025-11-29T09:13:32Z|00902|memory_trim|INFO|Detected inactivity (last active 30018 ms ago): trimming memory
Nov 29 04:13:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:32.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:33 np0005539564 nova_compute[226295]: 2025-11-29 09:13:33.347 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:13:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:13:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:34.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:34 np0005539564 nova_compute[226295]: 2025-11-29 09:13:34.823 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:13:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:34.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:13:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:36.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:13:36 np0005539564 nova_compute[226295]: 2025-11-29 09:13:36.747 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:13:36 np0005539564 nova_compute[226295]: 2025-11-29 09:13:36.798 226310 DEBUG oslo_concurrency.lockutils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "0d2ca216-c02c-4a51-9997-85652cee8fe5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:13:36 np0005539564 nova_compute[226295]: 2025-11-29 09:13:36.799 226310 DEBUG oslo_concurrency.lockutils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "0d2ca216-c02c-4a51-9997-85652cee8fe5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:13:36 np0005539564 nova_compute[226295]: 2025-11-29 09:13:36.820 226310 DEBUG nova.compute.manager [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 04:13:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:36.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:36 np0005539564 nova_compute[226295]: 2025-11-29 09:13:36.991 226310 DEBUG oslo_concurrency.lockutils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:13:36 np0005539564 nova_compute[226295]: 2025-11-29 09:13:36.992 226310 DEBUG oslo_concurrency.lockutils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:13:37 np0005539564 nova_compute[226295]: 2025-11-29 09:13:37.004 226310 DEBUG nova.virt.hardware [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 04:13:37 np0005539564 nova_compute[226295]: 2025-11-29 09:13:37.004 226310 INFO nova.compute.claims [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Claim successful on node compute-1.ctlplane.example.com
Nov 29 04:13:37 np0005539564 nova_compute[226295]: 2025-11-29 09:13:37.113 226310 DEBUG oslo_concurrency.processutils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:13:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:13:37 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1794623106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:13:37 np0005539564 nova_compute[226295]: 2025-11-29 09:13:37.562 226310 DEBUG oslo_concurrency.processutils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 04:13:37 np0005539564 nova_compute[226295]: 2025-11-29 09:13:37.573 226310 DEBUG nova.compute.provider_tree [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 04:13:37 np0005539564 nova_compute[226295]: 2025-11-29 09:13:37.592 226310 DEBUG nova.scheduler.client.report [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 04:13:37 np0005539564 nova_compute[226295]: 2025-11-29 09:13:37.620 226310 DEBUG oslo_concurrency.lockutils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:13:37 np0005539564 nova_compute[226295]: 2025-11-29 09:13:37.620 226310 DEBUG nova.compute.manager [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 04:13:37 np0005539564 nova_compute[226295]: 2025-11-29 09:13:37.676 226310 DEBUG nova.compute.manager [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 04:13:37 np0005539564 nova_compute[226295]: 2025-11-29 09:13:37.677 226310 DEBUG nova.network.neutron [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 04:13:37 np0005539564 nova_compute[226295]: 2025-11-29 09:13:37.712 226310 INFO nova.virt.libvirt.driver [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 04:13:37 np0005539564 nova_compute[226295]: 2025-11-29 09:13:37.736 226310 DEBUG nova.compute.manager [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 04:13:37 np0005539564 nova_compute[226295]: 2025-11-29 09:13:37.790 226310 INFO nova.virt.block_device [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Booting with volume 5acb1bf0-7995-4ac2-84e2-62745b9cdce6 at /dev/vda#033[00m
Nov 29 04:13:37 np0005539564 nova_compute[226295]: 2025-11-29 09:13:37.928 226310 DEBUG nova.policy [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5ff561a95dc44b9fb9f7fd8fee80f589', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '51af0a2ee11a460ab825a484e5c6f4a3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 04:13:38 np0005539564 nova_compute[226295]: 2025-11-29 09:13:38.056 226310 DEBUG os_brick.utils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 04:13:38 np0005539564 nova_compute[226295]: 2025-11-29 09:13:38.058 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:13:38 np0005539564 nova_compute[226295]: 2025-11-29 09:13:38.074 231810 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:13:38 np0005539564 nova_compute[226295]: 2025-11-29 09:13:38.074 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[7d548279-3180-4b71-a81d-d29f6922fefb]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:13:38 np0005539564 nova_compute[226295]: 2025-11-29 09:13:38.076 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:13:38 np0005539564 nova_compute[226295]: 2025-11-29 09:13:38.087 231810 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:13:38 np0005539564 nova_compute[226295]: 2025-11-29 09:13:38.088 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[71e92cb1-825f-40c6-98e7-948734881306]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d3b4384eec44', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:13:38 np0005539564 nova_compute[226295]: 2025-11-29 09:13:38.089 231810 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:13:38 np0005539564 nova_compute[226295]: 2025-11-29 09:13:38.106 231810 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:13:38 np0005539564 nova_compute[226295]: 2025-11-29 09:13:38.106 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[204ed95c-44de-4712-9bca-ca4204a3dc4f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:13:38 np0005539564 nova_compute[226295]: 2025-11-29 09:13:38.108 231810 DEBUG oslo.privsep.daemon [-] privsep: reply[4384365e-e570-4f3c-b422-1b3f199c940d]: (4, '2e858761-3292-4a17-b38f-a169c3064289') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:13:38 np0005539564 nova_compute[226295]: 2025-11-29 09:13:38.108 226310 DEBUG oslo_concurrency.processutils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:13:38 np0005539564 nova_compute[226295]: 2025-11-29 09:13:38.152 226310 DEBUG oslo_concurrency.processutils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "nvme version" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:13:38 np0005539564 nova_compute[226295]: 2025-11-29 09:13:38.155 226310 DEBUG os_brick.initiator.connectors.lightos [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 04:13:38 np0005539564 nova_compute[226295]: 2025-11-29 09:13:38.156 226310 DEBUG os_brick.initiator.connectors.lightos [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 04:13:38 np0005539564 nova_compute[226295]: 2025-11-29 09:13:38.156 226310 DEBUG os_brick.initiator.connectors.lightos [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 04:13:38 np0005539564 nova_compute[226295]: 2025-11-29 09:13:38.156 226310 DEBUG os_brick.utils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] <== get_connector_properties: return (99ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d3b4384eec44', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2e858761-3292-4a17-b38f-a169c3064289', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 04:13:38 np0005539564 nova_compute[226295]: 2025-11-29 09:13:38.157 226310 DEBUG nova.virt.block_device [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Updating existing volume attachment record: 7d4168cd-a49d-4fc0-b0d0-6a6d60954648 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 04:13:38 np0005539564 nova_compute[226295]: 2025-11-29 09:13:38.350 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:13:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:13:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:38.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:13:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:38.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:39 np0005539564 nova_compute[226295]: 2025-11-29 09:13:39.824 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:39 np0005539564 nova_compute[226295]: 2025-11-29 09:13:39.853 226310 DEBUG nova.network.neutron [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Successfully created port: 7251ab30-4bf1-4c53-8064-90f21a5154e9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 04:13:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:13:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:40.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:13:40 np0005539564 nova_compute[226295]: 2025-11-29 09:13:40.845 226310 DEBUG nova.compute.manager [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 04:13:40 np0005539564 nova_compute[226295]: 2025-11-29 09:13:40.848 226310 DEBUG nova.virt.libvirt.driver [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 04:13:40 np0005539564 nova_compute[226295]: 2025-11-29 09:13:40.849 226310 INFO nova.virt.libvirt.driver [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Creating image(s)#033[00m
Nov 29 04:13:40 np0005539564 nova_compute[226295]: 2025-11-29 09:13:40.849 226310 DEBUG nova.virt.libvirt.driver [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 04:13:40 np0005539564 nova_compute[226295]: 2025-11-29 09:13:40.850 226310 DEBUG nova.virt.libvirt.driver [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Ensure instance console log exists: /var/lib/nova/instances/0d2ca216-c02c-4a51-9997-85652cee8fe5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 04:13:40 np0005539564 nova_compute[226295]: 2025-11-29 09:13:40.851 226310 DEBUG oslo_concurrency.lockutils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:13:40 np0005539564 nova_compute[226295]: 2025-11-29 09:13:40.851 226310 DEBUG oslo_concurrency.lockutils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:13:40 np0005539564 nova_compute[226295]: 2025-11-29 09:13:40.852 226310 DEBUG oslo_concurrency.lockutils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:13:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:40.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:41 np0005539564 nova_compute[226295]: 2025-11-29 09:13:41.154 226310 DEBUG nova.network.neutron [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Successfully updated port: 7251ab30-4bf1-4c53-8064-90f21a5154e9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 04:13:41 np0005539564 nova_compute[226295]: 2025-11-29 09:13:41.191 226310 DEBUG oslo_concurrency.lockutils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "refresh_cache-0d2ca216-c02c-4a51-9997-85652cee8fe5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:13:41 np0005539564 nova_compute[226295]: 2025-11-29 09:13:41.192 226310 DEBUG oslo_concurrency.lockutils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquired lock "refresh_cache-0d2ca216-c02c-4a51-9997-85652cee8fe5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:13:41 np0005539564 nova_compute[226295]: 2025-11-29 09:13:41.193 226310 DEBUG nova.network.neutron [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 04:13:41 np0005539564 nova_compute[226295]: 2025-11-29 09:13:41.273 226310 DEBUG nova.compute.manager [req-3150fd25-bc8c-484d-809b-1828e0394f54 req-7aa88a20-3d61-4d10-b15c-030ac80f258f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Received event network-changed-7251ab30-4bf1-4c53-8064-90f21a5154e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:13:41 np0005539564 nova_compute[226295]: 2025-11-29 09:13:41.273 226310 DEBUG nova.compute.manager [req-3150fd25-bc8c-484d-809b-1828e0394f54 req-7aa88a20-3d61-4d10-b15c-030ac80f258f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Refreshing instance network info cache due to event network-changed-7251ab30-4bf1-4c53-8064-90f21a5154e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 04:13:41 np0005539564 nova_compute[226295]: 2025-11-29 09:13:41.274 226310 DEBUG oslo_concurrency.lockutils [req-3150fd25-bc8c-484d-809b-1828e0394f54 req-7aa88a20-3d61-4d10-b15c-030ac80f258f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-0d2ca216-c02c-4a51-9997-85652cee8fe5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:13:41 np0005539564 nova_compute[226295]: 2025-11-29 09:13:41.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:13:41 np0005539564 nova_compute[226295]: 2025-11-29 09:13:41.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:13:41 np0005539564 nova_compute[226295]: 2025-11-29 09:13:41.830 226310 DEBUG nova.network.neutron [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #187. Immutable memtables: 0.
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:13:42.647400) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 187
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407622647436, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 1698, "num_deletes": 257, "total_data_size": 4085325, "memory_usage": 4130480, "flush_reason": "Manual Compaction"}
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #188: started
Nov 29 04:13:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:13:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:42.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407622687423, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 188, "file_size": 2662688, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 90805, "largest_seqno": 92497, "table_properties": {"data_size": 2655422, "index_size": 4272, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15104, "raw_average_key_size": 20, "raw_value_size": 2640926, "raw_average_value_size": 3516, "num_data_blocks": 187, "num_entries": 751, "num_filter_entries": 751, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407478, "oldest_key_time": 1764407478, "file_creation_time": 1764407622, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 40121 microseconds, and 7926 cpu microseconds.
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:13:42.687512) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #188: 2662688 bytes OK
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:13:42.687542) [db/memtable_list.cc:519] [default] Level-0 commit table #188 started
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:13:42.691139) [db/memtable_list.cc:722] [default] Level-0 commit table #188: memtable #1 done
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:13:42.691164) EVENT_LOG_v1 {"time_micros": 1764407622691155, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:13:42.691188) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 4077574, prev total WAL file size 4077574, number of live WAL files 2.
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000184.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:13:42.693034) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353234' seq:72057594037927935, type:22 .. '6C6F676D0033373735' seq:0, type:0; will stop at (end)
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [188(2600KB)], [186(11MB)]
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407622693108, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [188], "files_L6": [186], "score": -1, "input_data_size": 14977265, "oldest_snapshot_seqno": -1}
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #189: 11564 keys, 14845837 bytes, temperature: kUnknown
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407622893768, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 189, "file_size": 14845837, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14771747, "index_size": 44089, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28933, "raw_key_size": 306436, "raw_average_key_size": 26, "raw_value_size": 14569997, "raw_average_value_size": 1259, "num_data_blocks": 1674, "num_entries": 11564, "num_filter_entries": 11564, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764407622, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 189, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:13:42.894165) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 14845837 bytes
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:13:42.897072) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 74.6 rd, 74.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 11.7 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(11.2) write-amplify(5.6) OK, records in: 12097, records dropped: 533 output_compression: NoCompression
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:13:42.897091) EVENT_LOG_v1 {"time_micros": 1764407622897082, "job": 120, "event": "compaction_finished", "compaction_time_micros": 200728, "compaction_time_cpu_micros": 57746, "output_level": 6, "num_output_files": 1, "total_output_size": 14845837, "num_input_records": 12097, "num_output_records": 11564, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407622897675, "job": 120, "event": "table_file_deletion", "file_number": 188}
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000186.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407622900372, "job": 120, "event": "table_file_deletion", "file_number": 186}
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:13:42.692894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:13:42.900551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:13:42.900561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:13:42.900568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:13:42.900572) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:13:42 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:13:42.900577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:13:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:42.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.051 226310 DEBUG nova.network.neutron [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Updating instance_info_cache with network_info: [{"id": "7251ab30-4bf1-4c53-8064-90f21a5154e9", "address": "fa:16:3e:a7:6c:a0", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7251ab30-4b", "ovs_interfaceid": "7251ab30-4bf1-4c53-8064-90f21a5154e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.068 226310 DEBUG oslo_concurrency.lockutils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Releasing lock "refresh_cache-0d2ca216-c02c-4a51-9997-85652cee8fe5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.069 226310 DEBUG nova.compute.manager [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Instance network_info: |[{"id": "7251ab30-4bf1-4c53-8064-90f21a5154e9", "address": "fa:16:3e:a7:6c:a0", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7251ab30-4b", "ovs_interfaceid": "7251ab30-4bf1-4c53-8064-90f21a5154e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.069 226310 DEBUG oslo_concurrency.lockutils [req-3150fd25-bc8c-484d-809b-1828e0394f54 req-7aa88a20-3d61-4d10-b15c-030ac80f258f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-0d2ca216-c02c-4a51-9997-85652cee8fe5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.070 226310 DEBUG nova.network.neutron [req-3150fd25-bc8c-484d-809b-1828e0394f54 req-7aa88a20-3d61-4d10-b15c-030ac80f258f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Refreshing network info cache for port 7251ab30-4bf1-4c53-8064-90f21a5154e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.075 226310 DEBUG nova.virt.libvirt.driver [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Start _get_guest_xml network_info=[{"id": "7251ab30-4bf1-4c53-8064-90f21a5154e9", "address": "fa:16:3e:a7:6c:a0", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7251ab30-4b", "ovs_interfaceid": "7251ab30-4bf1-4c53-8064-90f21a5154e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-5acb1bf0-7995-4ac2-84e2-62745b9cdce6', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '5acb1bf0-7995-4ac2-84e2-62745b9cdce6', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '0d2ca216-c02c-4a51-9997-85652cee8fe5', 'attached_at': '', 'detached_at': '', 'volume_id': '5acb1bf0-7995-4ac2-84e2-62745b9cdce6', 'serial': '5acb1bf0-7995-4ac2-84e2-62745b9cdce6'}, 'guest_format': None, 'delete_on_termination': False, 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': '7d4168cd-a49d-4fc0-b0d0-6a6d60954648', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.084 226310 WARNING nova.virt.libvirt.driver [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.093 226310 DEBUG nova.virt.libvirt.host [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.094 226310 DEBUG nova.virt.libvirt.host [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.098 226310 DEBUG nova.virt.libvirt.host [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.099 226310 DEBUG nova.virt.libvirt.host [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.100 226310 DEBUG nova.virt.libvirt.driver [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.101 226310 DEBUG nova.virt.hardware [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.102 226310 DEBUG nova.virt.hardware [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.102 226310 DEBUG nova.virt.hardware [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.103 226310 DEBUG nova.virt.hardware [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.103 226310 DEBUG nova.virt.hardware [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.104 226310 DEBUG nova.virt.hardware [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.104 226310 DEBUG nova.virt.hardware [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.105 226310 DEBUG nova.virt.hardware [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.105 226310 DEBUG nova.virt.hardware [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.106 226310 DEBUG nova.virt.hardware [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.106 226310 DEBUG nova.virt.hardware [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.154 226310 DEBUG nova.storage.rbd_utils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] rbd image 0d2ca216-c02c-4a51-9997-85652cee8fe5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.162 226310 DEBUG oslo_concurrency.processutils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.354 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:13:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 04:13:43 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2045064442' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.619 226310 DEBUG oslo_concurrency.processutils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.650 226310 DEBUG nova.virt.libvirt.vif [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T09:13:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1674349468',display_name='tempest-TestVolumeBootPattern-server-1674349468',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1674349468',id=222,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbG23j9M5o6eHfsJFAWGmFr+V1OMrrFRyvdXC6aXkLfRb952sNiXaohq8D2hzBatQ6UrGgr+Il3V8996CyOSEBo0EV82vq7jHKwJvSwjMwvkl///TChhoI2G24vyXx6sw==',key_name='tempest-TestVolumeBootPattern-692880462',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='51af0a2ee11a460ab825a484e5c6f4a3',ramdisk_id='',reservation_id='r-g34jp9wl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-531976395',owner_user_name='tempest-TestVolumeBootPattern-531976395-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T09:13:37Z,user_data=None,user_id='5ff561a95dc44b9fb9f7fd8fee80f589',uuid=0d2ca216-c02c-4a51-9997-85652cee8fe5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7251ab30-4bf1-4c53-8064-90f21a5154e9", "address": "fa:16:3e:a7:6c:a0", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7251ab30-4b", "ovs_interfaceid": "7251ab30-4bf1-4c53-8064-90f21a5154e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.651 226310 DEBUG nova.network.os_vif_util [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converting VIF {"id": "7251ab30-4bf1-4c53-8064-90f21a5154e9", "address": "fa:16:3e:a7:6c:a0", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7251ab30-4b", "ovs_interfaceid": "7251ab30-4bf1-4c53-8064-90f21a5154e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.653 226310 DEBUG nova.network.os_vif_util [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:6c:a0,bridge_name='br-int',has_traffic_filtering=True,id=7251ab30-4bf1-4c53-8064-90f21a5154e9,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7251ab30-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.655 226310 DEBUG nova.objects.instance [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0d2ca216-c02c-4a51-9997-85652cee8fe5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.688 226310 DEBUG nova.virt.libvirt.driver [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] End _get_guest_xml xml=<domain type="kvm">
Nov 29 04:13:43 np0005539564 nova_compute[226295]:  <uuid>0d2ca216-c02c-4a51-9997-85652cee8fe5</uuid>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:  <name>instance-000000de</name>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:  <memory>131072</memory>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:  <vcpu>1</vcpu>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:  <metadata>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <nova:name>tempest-TestVolumeBootPattern-server-1674349468</nova:name>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <nova:creationTime>2025-11-29 09:13:43</nova:creationTime>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <nova:flavor name="m1.nano">
Nov 29 04:13:43 np0005539564 nova_compute[226295]:        <nova:memory>128</nova:memory>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:        <nova:disk>1</nova:disk>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:        <nova:swap>0</nova:swap>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:        <nova:vcpus>1</nova:vcpus>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      </nova:flavor>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <nova:owner>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:        <nova:user uuid="5ff561a95dc44b9fb9f7fd8fee80f589">tempest-TestVolumeBootPattern-531976395-project-member</nova:user>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:        <nova:project uuid="51af0a2ee11a460ab825a484e5c6f4a3">tempest-TestVolumeBootPattern-531976395</nova:project>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      </nova:owner>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <nova:ports>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:        <nova:port uuid="7251ab30-4bf1-4c53-8064-90f21a5154e9">
Nov 29 04:13:43 np0005539564 nova_compute[226295]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:        </nova:port>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      </nova:ports>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    </nova:instance>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:  </metadata>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:  <sysinfo type="smbios">
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <system>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <entry name="manufacturer">RDO</entry>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <entry name="product">OpenStack Compute</entry>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <entry name="serial">0d2ca216-c02c-4a51-9997-85652cee8fe5</entry>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <entry name="uuid">0d2ca216-c02c-4a51-9997-85652cee8fe5</entry>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <entry name="family">Virtual Machine</entry>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    </system>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:  </sysinfo>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:  <os>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <boot dev="hd"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <smbios mode="sysinfo"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:  </os>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:  <features>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <acpi/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <apic/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <vmcoreinfo/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:  </features>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:  <clock offset="utc">
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <timer name="hpet" present="no"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:  </clock>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:  <cpu mode="custom" match="exact">
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <model>Nehalem</model>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:  </cpu>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:  <devices>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <disk type="network" device="cdrom">
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <driver type="raw" cache="none"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="vms/0d2ca216-c02c-4a51-9997-85652cee8fe5_disk.config">
Nov 29 04:13:43 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      </source>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 04:13:43 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      </auth>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <target dev="sda" bus="sata"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    </disk>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <disk type="network" device="disk">
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <source protocol="rbd" name="volumes/volume-5acb1bf0-7995-4ac2-84e2-62745b9cdce6">
Nov 29 04:13:43 np0005539564 nova_compute[226295]:        <host name="192.168.122.100" port="6789"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:        <host name="192.168.122.102" port="6789"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:        <host name="192.168.122.101" port="6789"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      </source>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <auth username="openstack">
Nov 29 04:13:43 np0005539564 nova_compute[226295]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      </auth>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <target dev="vda" bus="virtio"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <serial>5acb1bf0-7995-4ac2-84e2-62745b9cdce6</serial>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    </disk>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <interface type="ethernet">
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <mac address="fa:16:3e:a7:6c:a0"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <mtu size="1442"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <target dev="tap7251ab30-4b"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    </interface>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <serial type="pty">
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <log file="/var/lib/nova/instances/0d2ca216-c02c-4a51-9997-85652cee8fe5/console.log" append="off"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    </serial>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <video>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <model type="virtio"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    </video>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <input type="tablet" bus="usb"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <rng model="virtio">
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <backend model="random">/dev/urandom</backend>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    </rng>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <controller type="usb" index="0"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    <memballoon model="virtio">
Nov 29 04:13:43 np0005539564 nova_compute[226295]:      <stats period="10"/>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:    </memballoon>
Nov 29 04:13:43 np0005539564 nova_compute[226295]:  </devices>
Nov 29 04:13:43 np0005539564 nova_compute[226295]: </domain>
Nov 29 04:13:43 np0005539564 nova_compute[226295]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.689 226310 DEBUG nova.compute.manager [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Preparing to wait for external event network-vif-plugged-7251ab30-4bf1-4c53-8064-90f21a5154e9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.690 226310 DEBUG oslo_concurrency.lockutils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "0d2ca216-c02c-4a51-9997-85652cee8fe5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.691 226310 DEBUG oslo_concurrency.lockutils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "0d2ca216-c02c-4a51-9997-85652cee8fe5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.691 226310 DEBUG oslo_concurrency.lockutils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "0d2ca216-c02c-4a51-9997-85652cee8fe5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.693 226310 DEBUG nova.virt.libvirt.vif [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T09:13:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1674349468',display_name='tempest-TestVolumeBootPattern-server-1674349468',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1674349468',id=222,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbG23j9M5o6eHfsJFAWGmFr+V1OMrrFRyvdXC6aXkLfRb952sNiXaohq8D2hzBatQ6UrGgr+Il3V8996CyOSEBo0EV82vq7jHKwJvSwjMwvkl///TChhoI2G24vyXx6sw==',key_name='tempest-TestVolumeBootPattern-692880462',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='51af0a2ee11a460ab825a484e5c6f4a3',ramdisk_id='',reservation_id='r-g34jp9wl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-531976395',owner_user_name='tempest-TestVolumeBootPattern-531976395-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T09:13:37Z,user_data=None,user_id='5ff561a95dc44b9fb9f7fd8fee80f589',uuid=0d2ca216-c02c-4a51-9997-85652cee8fe5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7251ab30-4bf1-4c53-8064-90f21a5154e9", "address": "fa:16:3e:a7:6c:a0", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7251ab30-4b", "ovs_interfaceid": "7251ab30-4bf1-4c53-8064-90f21a5154e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.694 226310 DEBUG nova.network.os_vif_util [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converting VIF {"id": "7251ab30-4bf1-4c53-8064-90f21a5154e9", "address": "fa:16:3e:a7:6c:a0", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7251ab30-4b", "ovs_interfaceid": "7251ab30-4bf1-4c53-8064-90f21a5154e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.695 226310 DEBUG nova.network.os_vif_util [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:6c:a0,bridge_name='br-int',has_traffic_filtering=True,id=7251ab30-4bf1-4c53-8064-90f21a5154e9,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7251ab30-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.696 226310 DEBUG os_vif [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:6c:a0,bridge_name='br-int',has_traffic_filtering=True,id=7251ab30-4bf1-4c53-8064-90f21a5154e9,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7251ab30-4b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.697 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.698 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.699 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.704 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.704 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7251ab30-4b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.705 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7251ab30-4b, col_values=(('external_ids', {'iface-id': '7251ab30-4bf1-4c53-8064-90f21a5154e9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:6c:a0', 'vm-uuid': '0d2ca216-c02c-4a51-9997-85652cee8fe5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.708 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:43 np0005539564 NetworkManager[48997]: <info>  [1764407623.7096] manager: (tap7251ab30-4b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/418)
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.711 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.716 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.718 226310 INFO os_vif [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:6c:a0,bridge_name='br-int',has_traffic_filtering=True,id=7251ab30-4bf1-4c53-8064-90f21a5154e9,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7251ab30-4b')#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.788 226310 DEBUG nova.virt.libvirt.driver [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.788 226310 DEBUG nova.virt.libvirt.driver [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.789 226310 DEBUG nova.virt.libvirt.driver [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] No VIF found with MAC fa:16:3e:a7:6c:a0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.789 226310 INFO nova.virt.libvirt.driver [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Using config drive#033[00m
Nov 29 04:13:43 np0005539564 nova_compute[226295]: 2025-11-29 09:13:43.825 226310 DEBUG nova.storage.rbd_utils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] rbd image 0d2ca216-c02c-4a51-9997-85652cee8fe5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:13:44 np0005539564 nova_compute[226295]: 2025-11-29 09:13:44.145 226310 INFO nova.virt.libvirt.driver [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Creating config drive at /var/lib/nova/instances/0d2ca216-c02c-4a51-9997-85652cee8fe5/disk.config#033[00m
Nov 29 04:13:44 np0005539564 nova_compute[226295]: 2025-11-29 09:13:44.151 226310 DEBUG oslo_concurrency.processutils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0d2ca216-c02c-4a51-9997-85652cee8fe5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphhcsa82_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:13:44 np0005539564 nova_compute[226295]: 2025-11-29 09:13:44.197 226310 DEBUG nova.network.neutron [req-3150fd25-bc8c-484d-809b-1828e0394f54 req-7aa88a20-3d61-4d10-b15c-030ac80f258f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Updated VIF entry in instance network info cache for port 7251ab30-4bf1-4c53-8064-90f21a5154e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 04:13:44 np0005539564 nova_compute[226295]: 2025-11-29 09:13:44.198 226310 DEBUG nova.network.neutron [req-3150fd25-bc8c-484d-809b-1828e0394f54 req-7aa88a20-3d61-4d10-b15c-030ac80f258f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Updating instance_info_cache with network_info: [{"id": "7251ab30-4bf1-4c53-8064-90f21a5154e9", "address": "fa:16:3e:a7:6c:a0", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7251ab30-4b", "ovs_interfaceid": "7251ab30-4bf1-4c53-8064-90f21a5154e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:13:44 np0005539564 nova_compute[226295]: 2025-11-29 09:13:44.221 226310 DEBUG oslo_concurrency.lockutils [req-3150fd25-bc8c-484d-809b-1828e0394f54 req-7aa88a20-3d61-4d10-b15c-030ac80f258f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-0d2ca216-c02c-4a51-9997-85652cee8fe5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:13:44 np0005539564 nova_compute[226295]: 2025-11-29 09:13:44.312 226310 DEBUG oslo_concurrency.processutils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0d2ca216-c02c-4a51-9997-85652cee8fe5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphhcsa82_" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:13:44 np0005539564 nova_compute[226295]: 2025-11-29 09:13:44.346 226310 DEBUG nova.storage.rbd_utils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] rbd image 0d2ca216-c02c-4a51-9997-85652cee8fe5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:13:44 np0005539564 nova_compute[226295]: 2025-11-29 09:13:44.350 226310 DEBUG oslo_concurrency.processutils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0d2ca216-c02c-4a51-9997-85652cee8fe5/disk.config 0d2ca216-c02c-4a51-9997-85652cee8fe5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:13:44 np0005539564 nova_compute[226295]: 2025-11-29 09:13:44.544 226310 DEBUG oslo_concurrency.processutils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0d2ca216-c02c-4a51-9997-85652cee8fe5/disk.config 0d2ca216-c02c-4a51-9997-85652cee8fe5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:13:44 np0005539564 nova_compute[226295]: 2025-11-29 09:13:44.546 226310 INFO nova.virt.libvirt.driver [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Deleting local config drive /var/lib/nova/instances/0d2ca216-c02c-4a51-9997-85652cee8fe5/disk.config because it was imported into RBD.#033[00m
Nov 29 04:13:44 np0005539564 kernel: tap7251ab30-4b: entered promiscuous mode
Nov 29 04:13:44 np0005539564 ovn_controller[130591]: 2025-11-29T09:13:44Z|00903|binding|INFO|Claiming lport 7251ab30-4bf1-4c53-8064-90f21a5154e9 for this chassis.
Nov 29 04:13:44 np0005539564 ovn_controller[130591]: 2025-11-29T09:13:44Z|00904|binding|INFO|7251ab30-4bf1-4c53-8064-90f21a5154e9: Claiming fa:16:3e:a7:6c:a0 10.100.0.4
Nov 29 04:13:44 np0005539564 NetworkManager[48997]: <info>  [1764407624.6209] manager: (tap7251ab30-4b): new Tun device (/org/freedesktop/NetworkManager/Devices/419)
Nov 29 04:13:44 np0005539564 nova_compute[226295]: 2025-11-29 09:13:44.619 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:44 np0005539564 nova_compute[226295]: 2025-11-29 09:13:44.633 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.634 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:6c:a0 10.100.0.4'], port_security=['fa:16:3e:a7:6c:a0 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '0d2ca216-c02c-4a51-9997-85652cee8fe5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51af0a2ee11a460ab825a484e5c6f4a3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f464a39e-170e-4271-8e3e-71cb609233aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26c70775-c49f-4c45-91d6-cdc9893e63eb, chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=7251ab30-4bf1-4c53-8064-90f21a5154e9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:13:44 np0005539564 ovn_controller[130591]: 2025-11-29T09:13:44Z|00905|binding|INFO|Setting lport 7251ab30-4bf1-4c53-8064-90f21a5154e9 ovn-installed in OVS
Nov 29 04:13:44 np0005539564 ovn_controller[130591]: 2025-11-29T09:13:44Z|00906|binding|INFO|Setting lport 7251ab30-4bf1-4c53-8064-90f21a5154e9 up in Southbound
Nov 29 04:13:44 np0005539564 nova_compute[226295]: 2025-11-29 09:13:44.636 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.636 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 7251ab30-4bf1-4c53-8064-90f21a5154e9 in datapath 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad bound to our chassis#033[00m
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.638 139780 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad#033[00m
Nov 29 04:13:44 np0005539564 nova_compute[226295]: 2025-11-29 09:13:44.639 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:44.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:44 np0005539564 systemd-udevd[317494]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.658 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae4d415-05d7-4aa8-9aa4-e2d5e62de9f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.659 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8aaf4606-91 in ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.662 231140 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8aaf4606-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.663 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ee5db64f-acda-4a55-b9f7-480360ee26d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:13:44 np0005539564 systemd-machined[190128]: New machine qemu-104-instance-000000de.
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.665 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7f654470-3a23-47ea-9dc6-5ac3f364c99e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.677 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[5d452d81-51a6-40b1-acb9-283b954fab11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:13:44 np0005539564 NetworkManager[48997]: <info>  [1764407624.6790] device (tap7251ab30-4b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 04:13:44 np0005539564 systemd[1]: Started Virtual Machine qemu-104-instance-000000de.
Nov 29 04:13:44 np0005539564 NetworkManager[48997]: <info>  [1764407624.6799] device (tap7251ab30-4b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.704 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[6ee4312d-2f47-4f09-9e3c-7b62bbf00a11]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.737 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[c0ccceb2-2acf-489e-a67d-b7fcb2089e8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.744 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[9141bfcd-04bc-4a0b-913e-6f10128fd2a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:13:44 np0005539564 NetworkManager[48997]: <info>  [1764407624.7470] manager: (tap8aaf4606-90): new Veth device (/org/freedesktop/NetworkManager/Devices/420)
Nov 29 04:13:44 np0005539564 systemd-udevd[317498]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.785 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[9cd98591-1e8d-4714-838f-67fa8a96f3dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.789 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[18c83394-1ab4-4311-aadb-bbab71437ea8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:13:44 np0005539564 NetworkManager[48997]: <info>  [1764407624.8123] device (tap8aaf4606-90): carrier: link connected
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.818 231279 DEBUG oslo.privsep.daemon [-] privsep: reply[9cdb9e74-9765-470c-a2dc-f76239552b99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.834 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[ddf95619-7649-424e-86be-6c110d6e5472]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8aaf4606-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:88:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 271], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1059949, 'reachable_time': 28641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317527, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.848 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[18065b20-bb23-4799-b5f5-e7e02f4d7095]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feca:8863'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1059949, 'tstamp': 1059949}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317528, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.863 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[49e4956f-c607-47b0-8031-7b6424520b2b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8aaf4606-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:88:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 271], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1059949, 'reachable_time': 28641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317529, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.893 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0e5330-713e-495b-96c4-2e04d5ab8790]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.960 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[e53bc9e3-bd59-47c6-b931-c490737baac8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.962 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8aaf4606-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.962 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.963 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8aaf4606-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 04:13:44 np0005539564 nova_compute[226295]: 2025-11-29 09:13:44.964 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:13:44 np0005539564 kernel: tap8aaf4606-90: entered promiscuous mode
Nov 29 04:13:44 np0005539564 NetworkManager[48997]: <info>  [1764407624.9661] manager: (tap8aaf4606-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/421)
Nov 29 04:13:44 np0005539564 nova_compute[226295]: 2025-11-29 09:13:44.967 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.967 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8aaf4606-90, col_values=(('external_ids', {'iface-id': 'dcea3b5a-c3c6-4ea4-8c47-8c2337a9ad5a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 04:13:44 np0005539564 nova_compute[226295]: 2025-11-29 09:13:44.968 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:13:44 np0005539564 ovn_controller[130591]: 2025-11-29T09:13:44Z|00907|binding|INFO|Releasing lport dcea3b5a-c3c6-4ea4-8c47-8c2337a9ad5a from this chassis (sb_readonly=0)
Nov 29 04:13:44 np0005539564 nova_compute[226295]: 2025-11-29 09:13:44.994 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:13:44 np0005539564 nova_compute[226295]: 2025-11-29 09:13:44.995 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.995 139780 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.996 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[db72acac-2db9-43a9-becc-60b2b3c6cb7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.997 139780 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: global
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]:    log         /dev/log local0 debug
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]:    log-tag     haproxy-metadata-proxy-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]:    user        root
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]:    group       root
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]:    maxconn     1024
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]:    pidfile     /var/lib/neutron/external/pids/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.pid.haproxy
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]:    daemon
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: defaults
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]:    log global
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]:    mode http
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]:    option httplog
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]:    option dontlognull
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]:    option http-server-close
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]:    option forwardfor
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]:    retries                 3
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]:    timeout http-request    30s
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]:    timeout connect         30s
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]:    timeout client          32s
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]:    timeout server          32s
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]:    timeout http-keep-alive 30s
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: listen listener
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]:    bind 169.254.169.254:80
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]:    http-request add-header X-OVN-Network-ID 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 04:13:44 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:13:44.998 139780 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'env', 'PROCESS_TAG=haproxy-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 04:13:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:44.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.055 226310 DEBUG nova.compute.manager [req-9203399f-90f9-4ddc-a2a8-4afddd0d23b6 req-0348aab3-8831-490a-95b9-d1ae8a4c9227 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Received event network-vif-plugged-7251ab30-4bf1-4c53-8064-90f21a5154e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.056 226310 DEBUG oslo_concurrency.lockutils [req-9203399f-90f9-4ddc-a2a8-4afddd0d23b6 req-0348aab3-8831-490a-95b9-d1ae8a4c9227 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0d2ca216-c02c-4a51-9997-85652cee8fe5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.056 226310 DEBUG oslo_concurrency.lockutils [req-9203399f-90f9-4ddc-a2a8-4afddd0d23b6 req-0348aab3-8831-490a-95b9-d1ae8a4c9227 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0d2ca216-c02c-4a51-9997-85652cee8fe5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.056 226310 DEBUG oslo_concurrency.lockutils [req-9203399f-90f9-4ddc-a2a8-4afddd0d23b6 req-0348aab3-8831-490a-95b9-d1ae8a4c9227 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0d2ca216-c02c-4a51-9997-85652cee8fe5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.056 226310 DEBUG nova.compute.manager [req-9203399f-90f9-4ddc-a2a8-4afddd0d23b6 req-0348aab3-8831-490a-95b9-d1ae8a4c9227 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Processing event network-vif-plugged-7251ab30-4bf1-4c53-8064-90f21a5154e9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.400 226310 DEBUG nova.compute.manager [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.401 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764407625.401045, 0d2ca216-c02c-4a51-9997-85652cee8fe5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.401 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] VM Started (Lifecycle Event)
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.415 226310 DEBUG nova.virt.libvirt.driver [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.418 226310 INFO nova.virt.libvirt.driver [-] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Instance spawned successfully.
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.419 226310 DEBUG nova.virt.libvirt.driver [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.438 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.442 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 04:13:45 np0005539564 podman[317602]: 2025-11-29 09:13:45.444158324 +0000 UTC m=+0.072054432 container create f5a270fe2006f632ca155d5eb1ab28d8efba7b2c5dc92e46a163a963c4fa488a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.451 226310 DEBUG nova.virt.libvirt.driver [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.451 226310 DEBUG nova.virt.libvirt.driver [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.452 226310 DEBUG nova.virt.libvirt.driver [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.452 226310 DEBUG nova.virt.libvirt.driver [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.453 226310 DEBUG nova.virt.libvirt.driver [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.453 226310 DEBUG nova.virt.libvirt.driver [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.477 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.477 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764407625.4044623, 0d2ca216-c02c-4a51-9997-85652cee8fe5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.478 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] VM Paused (Lifecycle Event)
Nov 29 04:13:45 np0005539564 systemd[1]: Started libpod-conmon-f5a270fe2006f632ca155d5eb1ab28d8efba7b2c5dc92e46a163a963c4fa488a.scope.
Nov 29 04:13:45 np0005539564 podman[317602]: 2025-11-29 09:13:45.407041969 +0000 UTC m=+0.034938437 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 04:13:45 np0005539564 systemd[1]: Started libcrun container.
Nov 29 04:13:45 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6033ce375c48631e8a5443e6dec2ebe88c244f2ab79968aa17ac2d3c1c4a4957/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 04:13:45 np0005539564 podman[317602]: 2025-11-29 09:13:45.531270684 +0000 UTC m=+0.159166832 container init f5a270fe2006f632ca155d5eb1ab28d8efba7b2c5dc92e46a163a963c4fa488a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:13:45 np0005539564 podman[317602]: 2025-11-29 09:13:45.537288116 +0000 UTC m=+0.165184234 container start f5a270fe2006f632ca155d5eb1ab28d8efba7b2c5dc92e46a163a963c4fa488a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:13:45 np0005539564 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[317618]: [NOTICE]   (317622) : New worker (317624) forked
Nov 29 04:13:45 np0005539564 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[317618]: [NOTICE]   (317622) : Loading success.
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.663 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.670 226310 DEBUG nova.virt.driver [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] Emitting event <LifecycleEvent: 1764407625.4066386, 0d2ca216-c02c-4a51-9997-85652cee8fe5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.671 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] VM Resumed (Lifecycle Event)
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.675 226310 INFO nova.compute.manager [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Took 4.83 seconds to spawn the instance on the hypervisor.
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.675 226310 DEBUG nova.compute.manager [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.685 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.687 226310 DEBUG nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.707 226310 INFO nova.compute.manager [None req-e49f4bad-989f-49c4-9b79-d471ba130cdc - - - - - -] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.729 226310 INFO nova.compute.manager [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Took 8.84 seconds to build instance.
Nov 29 04:13:45 np0005539564 nova_compute[226295]: 2025-11-29 09:13:45.746 226310 DEBUG oslo_concurrency.lockutils [None req-ced9ad4c-3f3c-43b9-9f31-d0b7340dd69c 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "0d2ca216-c02c-4a51-9997-85652cee8fe5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:13:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:46.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:47.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:47 np0005539564 nova_compute[226295]: 2025-11-29 09:13:47.169 226310 DEBUG nova.compute.manager [req-3d3c68eb-9b1d-4d7a-97a7-05292ae3ec2d req-8b390b68-2559-49b5-a8fc-844f5679e458 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Received event network-vif-plugged-7251ab30-4bf1-4c53-8064-90f21a5154e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:13:47 np0005539564 nova_compute[226295]: 2025-11-29 09:13:47.169 226310 DEBUG oslo_concurrency.lockutils [req-3d3c68eb-9b1d-4d7a-97a7-05292ae3ec2d req-8b390b68-2559-49b5-a8fc-844f5679e458 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0d2ca216-c02c-4a51-9997-85652cee8fe5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:13:47 np0005539564 nova_compute[226295]: 2025-11-29 09:13:47.170 226310 DEBUG oslo_concurrency.lockutils [req-3d3c68eb-9b1d-4d7a-97a7-05292ae3ec2d req-8b390b68-2559-49b5-a8fc-844f5679e458 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0d2ca216-c02c-4a51-9997-85652cee8fe5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:13:47 np0005539564 nova_compute[226295]: 2025-11-29 09:13:47.171 226310 DEBUG oslo_concurrency.lockutils [req-3d3c68eb-9b1d-4d7a-97a7-05292ae3ec2d req-8b390b68-2559-49b5-a8fc-844f5679e458 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0d2ca216-c02c-4a51-9997-85652cee8fe5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:13:47 np0005539564 nova_compute[226295]: 2025-11-29 09:13:47.171 226310 DEBUG nova.compute.manager [req-3d3c68eb-9b1d-4d7a-97a7-05292ae3ec2d req-8b390b68-2559-49b5-a8fc-844f5679e458 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] No waiting events found dispatching network-vif-plugged-7251ab30-4bf1-4c53-8064-90f21a5154e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:13:47 np0005539564 nova_compute[226295]: 2025-11-29 09:13:47.172 226310 WARNING nova.compute.manager [req-3d3c68eb-9b1d-4d7a-97a7-05292ae3ec2d req-8b390b68-2559-49b5-a8fc-844f5679e458 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Received unexpected event network-vif-plugged-7251ab30-4bf1-4c53-8064-90f21a5154e9 for instance with vm_state active and task_state None.#033[00m
Nov 29 04:13:47 np0005539564 nova_compute[226295]: 2025-11-29 09:13:47.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:13:47 np0005539564 nova_compute[226295]: 2025-11-29 09:13:47.342 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:13:47 np0005539564 podman[317660]: 2025-11-29 09:13:47.462245622 +0000 UTC m=+0.069061631 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 04:13:47 np0005539564 podman[317659]: 2025-11-29 09:13:47.474861804 +0000 UTC m=+0.085777314 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 04:13:47 np0005539564 podman[317658]: 2025-11-29 09:13:47.494812954 +0000 UTC m=+0.119181469 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 29 04:13:48 np0005539564 podman[317865]: 2025-11-29 09:13:48.225287639 +0000 UTC m=+0.105435007 container exec 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 04:13:48 np0005539564 podman[317865]: 2025-11-29 09:13:48.313345033 +0000 UTC m=+0.193492391 container exec_died 9ac4e9705f9938ab4e650793d25958c994e2998851d8c9a227258aa8de056303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-1, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 04:13:48 np0005539564 nova_compute[226295]: 2025-11-29 09:13:48.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:13:48 np0005539564 nova_compute[226295]: 2025-11-29 09:13:48.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:13:48 np0005539564 nova_compute[226295]: 2025-11-29 09:13:48.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:13:48 np0005539564 nova_compute[226295]: 2025-11-29 09:13:48.355 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:13:48 np0005539564 nova_compute[226295]: 2025-11-29 09:13:48.500 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "refresh_cache-0d2ca216-c02c-4a51-9997-85652cee8fe5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:13:48 np0005539564 nova_compute[226295]: 2025-11-29 09:13:48.502 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquired lock "refresh_cache-0d2ca216-c02c-4a51-9997-85652cee8fe5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:13:48 np0005539564 nova_compute[226295]: 2025-11-29 09:13:48.502 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 04:13:48 np0005539564 nova_compute[226295]: 2025-11-29 09:13:48.502 226310 DEBUG nova.objects.instance [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0d2ca216-c02c-4a51-9997-85652cee8fe5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:13:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:48.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:48 np0005539564 nova_compute[226295]: 2025-11-29 09:13:48.708 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:48 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:13:48 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:13:48 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:13:48 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:13:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:13:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:49.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:13:49 np0005539564 nova_compute[226295]: 2025-11-29 09:13:49.611 226310 DEBUG nova.compute.manager [req-b1b5b5ab-6973-4fc0-95bb-682028782b40 req-77a692e7-2ee1-44e8-a731-3a95dcf88bd0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Received event network-changed-7251ab30-4bf1-4c53-8064-90f21a5154e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:13:49 np0005539564 nova_compute[226295]: 2025-11-29 09:13:49.611 226310 DEBUG nova.compute.manager [req-b1b5b5ab-6973-4fc0-95bb-682028782b40 req-77a692e7-2ee1-44e8-a731-3a95dcf88bd0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Refreshing instance network info cache due to event network-changed-7251ab30-4bf1-4c53-8064-90f21a5154e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 04:13:49 np0005539564 nova_compute[226295]: 2025-11-29 09:13:49.611 226310 DEBUG oslo_concurrency.lockutils [req-b1b5b5ab-6973-4fc0-95bb-682028782b40 req-77a692e7-2ee1-44e8-a731-3a95dcf88bd0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-0d2ca216-c02c-4a51-9997-85652cee8fe5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:13:50 np0005539564 nova_compute[226295]: 2025-11-29 09:13:50.280 226310 DEBUG nova.network.neutron [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Updating instance_info_cache with network_info: [{"id": "7251ab30-4bf1-4c53-8064-90f21a5154e9", "address": "fa:16:3e:a7:6c:a0", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7251ab30-4b", "ovs_interfaceid": "7251ab30-4bf1-4c53-8064-90f21a5154e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:13:50 np0005539564 nova_compute[226295]: 2025-11-29 09:13:50.296 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Releasing lock "refresh_cache-0d2ca216-c02c-4a51-9997-85652cee8fe5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:13:50 np0005539564 nova_compute[226295]: 2025-11-29 09:13:50.297 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 04:13:50 np0005539564 nova_compute[226295]: 2025-11-29 09:13:50.297 226310 DEBUG oslo_concurrency.lockutils [req-b1b5b5ab-6973-4fc0-95bb-682028782b40 req-77a692e7-2ee1-44e8-a731-3a95dcf88bd0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-0d2ca216-c02c-4a51-9997-85652cee8fe5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:13:50 np0005539564 nova_compute[226295]: 2025-11-29 09:13:50.298 226310 DEBUG nova.network.neutron [req-b1b5b5ab-6973-4fc0-95bb-682028782b40 req-77a692e7-2ee1-44e8-a731-3a95dcf88bd0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Refreshing network info cache for port 7251ab30-4bf1-4c53-8064-90f21a5154e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 04:13:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:50.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 04:13:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:51.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 04:13:51 np0005539564 nova_compute[226295]: 2025-11-29 09:13:51.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:13:51 np0005539564 nova_compute[226295]: 2025-11-29 09:13:51.418 226310 DEBUG nova.network.neutron [req-b1b5b5ab-6973-4fc0-95bb-682028782b40 req-77a692e7-2ee1-44e8-a731-3a95dcf88bd0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Updated VIF entry in instance network info cache for port 7251ab30-4bf1-4c53-8064-90f21a5154e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 04:13:51 np0005539564 nova_compute[226295]: 2025-11-29 09:13:51.420 226310 DEBUG nova.network.neutron [req-b1b5b5ab-6973-4fc0-95bb-682028782b40 req-77a692e7-2ee1-44e8-a731-3a95dcf88bd0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Updating instance_info_cache with network_info: [{"id": "7251ab30-4bf1-4c53-8064-90f21a5154e9", "address": "fa:16:3e:a7:6c:a0", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7251ab30-4b", "ovs_interfaceid": "7251ab30-4bf1-4c53-8064-90f21a5154e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:13:51 np0005539564 nova_compute[226295]: 2025-11-29 09:13:51.446 226310 DEBUG oslo_concurrency.lockutils [req-b1b5b5ab-6973-4fc0-95bb-682028782b40 req-77a692e7-2ee1-44e8-a731-3a95dcf88bd0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-0d2ca216-c02c-4a51-9997-85652cee8fe5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:13:52 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:13:52 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:13:52 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:13:52 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:13:52 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:13:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:13:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:52.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:13:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:53.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:53 np0005539564 nova_compute[226295]: 2025-11-29 09:13:53.359 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:13:53 np0005539564 nova_compute[226295]: 2025-11-29 09:13:53.710 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:54 np0005539564 nova_compute[226295]: 2025-11-29 09:13:54.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:13:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:54.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:13:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:55.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:13:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:56.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:13:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:57.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:13:58 np0005539564 nova_compute[226295]: 2025-11-29 09:13:58.361 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:13:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:58.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:58 np0005539564 nova_compute[226295]: 2025-11-29 09:13:58.712 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:58 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:13:58 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:13:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:13:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:59.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:59 np0005539564 nova_compute[226295]: 2025-11-29 09:13:59.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:13:59 np0005539564 ovn_controller[130591]: 2025-11-29T09:13:59Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a7:6c:a0 10.100.0.4
Nov 29 04:13:59 np0005539564 ovn_controller[130591]: 2025-11-29T09:13:59Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a7:6c:a0 10.100.0.4
Nov 29 04:14:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:14:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:00.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:14:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:01.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:01 np0005539564 nova_compute[226295]: 2025-11-29 09:14:01.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:14:01 np0005539564 nova_compute[226295]: 2025-11-29 09:14:01.375 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:14:01 np0005539564 nova_compute[226295]: 2025-11-29 09:14:01.376 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:14:01 np0005539564 nova_compute[226295]: 2025-11-29 09:14:01.376 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:14:01 np0005539564 nova_compute[226295]: 2025-11-29 09:14:01.376 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:14:01 np0005539564 nova_compute[226295]: 2025-11-29 09:14:01.377 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:14:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:14:01 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4222022024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:14:01 np0005539564 nova_compute[226295]: 2025-11-29 09:14:01.880 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:14:01 np0005539564 nova_compute[226295]: 2025-11-29 09:14:01.978 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000de as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 04:14:01 np0005539564 nova_compute[226295]: 2025-11-29 09:14:01.978 226310 DEBUG nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] skipping disk for instance-000000de as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 04:14:02 np0005539564 nova_compute[226295]: 2025-11-29 09:14:02.178 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:14:02 np0005539564 nova_compute[226295]: 2025-11-29 09:14:02.179 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3930MB free_disk=20.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:14:02 np0005539564 nova_compute[226295]: 2025-11-29 09:14:02.179 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:14:02 np0005539564 nova_compute[226295]: 2025-11-29 09:14:02.180 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:14:02 np0005539564 nova_compute[226295]: 2025-11-29 09:14:02.269 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Instance 0d2ca216-c02c-4a51-9997-85652cee8fe5 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 04:14:02 np0005539564 nova_compute[226295]: 2025-11-29 09:14:02.270 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:14:02 np0005539564 nova_compute[226295]: 2025-11-29 09:14:02.270 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:14:02 np0005539564 nova_compute[226295]: 2025-11-29 09:14:02.306 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:14:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:02.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:14:02 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1345061501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:14:02 np0005539564 nova_compute[226295]: 2025-11-29 09:14:02.749 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:14:02 np0005539564 nova_compute[226295]: 2025-11-29 09:14:02.755 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:14:02 np0005539564 nova_compute[226295]: 2025-11-29 09:14:02.799 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:14:02 np0005539564 nova_compute[226295]: 2025-11-29 09:14:02.864 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:14:02 np0005539564 nova_compute[226295]: 2025-11-29 09:14:02.865 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:14:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:03.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:03 np0005539564 nova_compute[226295]: 2025-11-29 09:14:03.364 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:14:03 np0005539564 nova_compute[226295]: 2025-11-29 09:14:03.742 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:14:03.797 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:14:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:14:03.798 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:14:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:14:03.799 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:14:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:14:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:04.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:14:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:05.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:05 np0005539564 nova_compute[226295]: 2025-11-29 09:14:05.482 226310 DEBUG oslo_concurrency.lockutils [None req-530f84f6-4042-441a-8dfd-343c4d6917df 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "0d2ca216-c02c-4a51-9997-85652cee8fe5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:14:05 np0005539564 nova_compute[226295]: 2025-11-29 09:14:05.483 226310 DEBUG oslo_concurrency.lockutils [None req-530f84f6-4042-441a-8dfd-343c4d6917df 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "0d2ca216-c02c-4a51-9997-85652cee8fe5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:14:05 np0005539564 nova_compute[226295]: 2025-11-29 09:14:05.483 226310 DEBUG oslo_concurrency.lockutils [None req-530f84f6-4042-441a-8dfd-343c4d6917df 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "0d2ca216-c02c-4a51-9997-85652cee8fe5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:14:05 np0005539564 nova_compute[226295]: 2025-11-29 09:14:05.484 226310 DEBUG oslo_concurrency.lockutils [None req-530f84f6-4042-441a-8dfd-343c4d6917df 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "0d2ca216-c02c-4a51-9997-85652cee8fe5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:14:05 np0005539564 nova_compute[226295]: 2025-11-29 09:14:05.484 226310 DEBUG oslo_concurrency.lockutils [None req-530f84f6-4042-441a-8dfd-343c4d6917df 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "0d2ca216-c02c-4a51-9997-85652cee8fe5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:14:05 np0005539564 nova_compute[226295]: 2025-11-29 09:14:05.487 226310 INFO nova.compute.manager [None req-530f84f6-4042-441a-8dfd-343c4d6917df 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Terminating instance#033[00m
Nov 29 04:14:05 np0005539564 nova_compute[226295]: 2025-11-29 09:14:05.489 226310 DEBUG nova.compute.manager [None req-530f84f6-4042-441a-8dfd-343c4d6917df 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 04:14:05 np0005539564 kernel: tap7251ab30-4b (unregistering): left promiscuous mode
Nov 29 04:14:05 np0005539564 NetworkManager[48997]: <info>  [1764407645.5711] device (tap7251ab30-4b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 04:14:05 np0005539564 nova_compute[226295]: 2025-11-29 09:14:05.585 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:05 np0005539564 nova_compute[226295]: 2025-11-29 09:14:05.588 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:05 np0005539564 ovn_controller[130591]: 2025-11-29T09:14:05Z|00908|binding|INFO|Releasing lport 7251ab30-4bf1-4c53-8064-90f21a5154e9 from this chassis (sb_readonly=0)
Nov 29 04:14:05 np0005539564 ovn_controller[130591]: 2025-11-29T09:14:05Z|00909|binding|INFO|Setting lport 7251ab30-4bf1-4c53-8064-90f21a5154e9 down in Southbound
Nov 29 04:14:05 np0005539564 ovn_controller[130591]: 2025-11-29T09:14:05Z|00910|binding|INFO|Removing iface tap7251ab30-4b ovn-installed in OVS
Nov 29 04:14:05 np0005539564 nova_compute[226295]: 2025-11-29 09:14:05.590 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:14:05.596 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:6c:a0 10.100.0.4'], port_security=['fa:16:3e:a7:6c:a0 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '0d2ca216-c02c-4a51-9997-85652cee8fe5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51af0a2ee11a460ab825a484e5c6f4a3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f464a39e-170e-4271-8e3e-71cb609233aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26c70775-c49f-4c45-91d6-cdc9893e63eb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>], logical_port=7251ab30-4bf1-4c53-8064-90f21a5154e9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3bafe85a90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:14:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:14:05.598 139780 INFO neutron.agent.ovn.metadata.agent [-] Port 7251ab30-4bf1-4c53-8064-90f21a5154e9 in datapath 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad unbound from our chassis#033[00m
Nov 29 04:14:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:14:05.599 139780 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 04:14:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:14:05.601 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[5b84cafc-883d-430c-a6e3-d83a12adeb94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:14:05.602 139780 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad namespace which is not needed anymore#033[00m
Nov 29 04:14:05 np0005539564 nova_compute[226295]: 2025-11-29 09:14:05.609 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:05 np0005539564 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d000000de.scope: Deactivated successfully.
Nov 29 04:14:05 np0005539564 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d000000de.scope: Consumed 14.789s CPU time.
Nov 29 04:14:05 np0005539564 systemd-machined[190128]: Machine qemu-104-instance-000000de terminated.
Nov 29 04:14:05 np0005539564 nova_compute[226295]: 2025-11-29 09:14:05.739 226310 INFO nova.virt.libvirt.driver [-] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Instance destroyed successfully.#033[00m
Nov 29 04:14:05 np0005539564 nova_compute[226295]: 2025-11-29 09:14:05.740 226310 DEBUG nova.objects.instance [None req-530f84f6-4042-441a-8dfd-343c4d6917df 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lazy-loading 'resources' on Instance uuid 0d2ca216-c02c-4a51-9997-85652cee8fe5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:14:05 np0005539564 nova_compute[226295]: 2025-11-29 09:14:05.759 226310 DEBUG nova.virt.libvirt.vif [None req-530f84f6-4042-441a-8dfd-343c4d6917df 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T09:13:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1674349468',display_name='tempest-TestVolumeBootPattern-server-1674349468',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1674349468',id=222,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbG23j9M5o6eHfsJFAWGmFr+V1OMrrFRyvdXC6aXkLfRb952sNiXaohq8D2hzBatQ6UrGgr+Il3V8996CyOSEBo0EV82vq7jHKwJvSwjMwvkl///TChhoI2G24vyXx6sw==',key_name='tempest-TestVolumeBootPattern-692880462',keypairs=<?>,launch_index=0,launched_at=2025-11-29T09:13:45Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='51af0a2ee11a460ab825a484e5c6f4a3',ramdisk_id='',reservation_id='r-g34jp9wl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-531976395',owner_user_name='tempest-TestVolumeBootPattern-531976395-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T09:13:45Z,user_data=None,user_id='5ff561a95dc44b9fb9f7fd8fee80f589',uuid=0d2ca216-c02c-4a51-9997-85652cee8fe5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7251ab30-4bf1-4c53-8064-90f21a5154e9", "address": "fa:16:3e:a7:6c:a0", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7251ab30-4b", "ovs_interfaceid": "7251ab30-4bf1-4c53-8064-90f21a5154e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 04:14:05 np0005539564 nova_compute[226295]: 2025-11-29 09:14:05.759 226310 DEBUG nova.network.os_vif_util [None req-530f84f6-4042-441a-8dfd-343c4d6917df 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converting VIF {"id": "7251ab30-4bf1-4c53-8064-90f21a5154e9", "address": "fa:16:3e:a7:6c:a0", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7251ab30-4b", "ovs_interfaceid": "7251ab30-4bf1-4c53-8064-90f21a5154e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:14:05 np0005539564 nova_compute[226295]: 2025-11-29 09:14:05.761 226310 DEBUG nova.network.os_vif_util [None req-530f84f6-4042-441a-8dfd-343c4d6917df 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a7:6c:a0,bridge_name='br-int',has_traffic_filtering=True,id=7251ab30-4bf1-4c53-8064-90f21a5154e9,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7251ab30-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:14:05 np0005539564 nova_compute[226295]: 2025-11-29 09:14:05.761 226310 DEBUG os_vif [None req-530f84f6-4042-441a-8dfd-343c4d6917df 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a7:6c:a0,bridge_name='br-int',has_traffic_filtering=True,id=7251ab30-4bf1-4c53-8064-90f21a5154e9,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7251ab30-4b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 04:14:05 np0005539564 nova_compute[226295]: 2025-11-29 09:14:05.764 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:05 np0005539564 nova_compute[226295]: 2025-11-29 09:14:05.764 226310 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7251ab30-4b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:14:05 np0005539564 nova_compute[226295]: 2025-11-29 09:14:05.767 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:05 np0005539564 nova_compute[226295]: 2025-11-29 09:14:05.769 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:05 np0005539564 nova_compute[226295]: 2025-11-29 09:14:05.774 226310 INFO os_vif [None req-530f84f6-4042-441a-8dfd-343c4d6917df 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a7:6c:a0,bridge_name='br-int',has_traffic_filtering=True,id=7251ab30-4bf1-4c53-8064-90f21a5154e9,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7251ab30-4b')#033[00m
Nov 29 04:14:05 np0005539564 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[317618]: [NOTICE]   (317622) : haproxy version is 2.8.14-c23fe91
Nov 29 04:14:05 np0005539564 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[317618]: [NOTICE]   (317622) : path to executable is /usr/sbin/haproxy
Nov 29 04:14:05 np0005539564 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[317618]: [WARNING]  (317622) : Exiting Master process...
Nov 29 04:14:05 np0005539564 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[317618]: [WARNING]  (317622) : Exiting Master process...
Nov 29 04:14:05 np0005539564 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[317618]: [ALERT]    (317622) : Current worker (317624) exited with code 143 (Terminated)
Nov 29 04:14:05 np0005539564 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[317618]: [WARNING]  (317622) : All workers exited. Exiting... (0)
Nov 29 04:14:05 np0005539564 systemd[1]: libpod-f5a270fe2006f632ca155d5eb1ab28d8efba7b2c5dc92e46a163a963c4fa488a.scope: Deactivated successfully.
Nov 29 04:14:05 np0005539564 podman[318242]: 2025-11-29 09:14:05.824694413 +0000 UTC m=+0.078124277 container died f5a270fe2006f632ca155d5eb1ab28d8efba7b2c5dc92e46a163a963c4fa488a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 04:14:05 np0005539564 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f5a270fe2006f632ca155d5eb1ab28d8efba7b2c5dc92e46a163a963c4fa488a-userdata-shm.mount: Deactivated successfully.
Nov 29 04:14:05 np0005539564 systemd[1]: var-lib-containers-storage-overlay-6033ce375c48631e8a5443e6dec2ebe88c244f2ab79968aa17ac2d3c1c4a4957-merged.mount: Deactivated successfully.
Nov 29 04:14:05 np0005539564 podman[318242]: 2025-11-29 09:14:05.868239132 +0000 UTC m=+0.121668996 container cleanup f5a270fe2006f632ca155d5eb1ab28d8efba7b2c5dc92e46a163a963c4fa488a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 04:14:05 np0005539564 systemd[1]: libpod-conmon-f5a270fe2006f632ca155d5eb1ab28d8efba7b2c5dc92e46a163a963c4fa488a.scope: Deactivated successfully.
Nov 29 04:14:05 np0005539564 podman[318295]: 2025-11-29 09:14:05.980662467 +0000 UTC m=+0.051560238 container remove f5a270fe2006f632ca155d5eb1ab28d8efba7b2c5dc92e46a163a963c4fa488a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:14:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:14:05.986 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[95b6800c-4da8-4082-bc6d-3d68facbfc04]: (4, ('Sat Nov 29 09:14:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad (f5a270fe2006f632ca155d5eb1ab28d8efba7b2c5dc92e46a163a963c4fa488a)\nf5a270fe2006f632ca155d5eb1ab28d8efba7b2c5dc92e46a163a963c4fa488a\nSat Nov 29 09:14:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad (f5a270fe2006f632ca155d5eb1ab28d8efba7b2c5dc92e46a163a963c4fa488a)\nf5a270fe2006f632ca155d5eb1ab28d8efba7b2c5dc92e46a163a963c4fa488a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:14:05.988 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[7c30a3ef-f79f-4e1f-9840-449d5980b594]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:05 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:14:05.989 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8aaf4606-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:14:05 np0005539564 nova_compute[226295]: 2025-11-29 09:14:05.991 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:05 np0005539564 kernel: tap8aaf4606-90: left promiscuous mode
Nov 29 04:14:06 np0005539564 nova_compute[226295]: 2025-11-29 09:14:06.007 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:14:06.011 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[2e9f292a-cb4c-4491-8970-b729f97c8c12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:14:06.028 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[27de4b58-f632-430c-a88b-65ed47aa4246]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:14:06.029 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[76850a11-28e0-4fc7-8d8c-8ac5a9307524]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:14:06.057 231140 DEBUG oslo.privsep.daemon [-] privsep: reply[524f934a-5803-4779-9403-14ef364392e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1059941, 'reachable_time': 34343, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318310, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:06 np0005539564 systemd[1]: run-netns-ovnmeta\x2d8aaf4606\x2d9df9\x2d4ad5\x2d9ade\x2df48fdc6cfaad.mount: Deactivated successfully.
Nov 29 04:14:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:14:06.061 139895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 04:14:06 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:14:06.062 139895 DEBUG oslo.privsep.daemon [-] privsep: reply[ea490b98-631d-4c4b-bf47-61ac87b4f33c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:06 np0005539564 nova_compute[226295]: 2025-11-29 09:14:06.104 226310 INFO nova.virt.libvirt.driver [None req-530f84f6-4042-441a-8dfd-343c4d6917df 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Deleting instance files /var/lib/nova/instances/0d2ca216-c02c-4a51-9997-85652cee8fe5_del#033[00m
Nov 29 04:14:06 np0005539564 nova_compute[226295]: 2025-11-29 09:14:06.105 226310 INFO nova.virt.libvirt.driver [None req-530f84f6-4042-441a-8dfd-343c4d6917df 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Deletion of /var/lib/nova/instances/0d2ca216-c02c-4a51-9997-85652cee8fe5_del complete#033[00m
Nov 29 04:14:06 np0005539564 nova_compute[226295]: 2025-11-29 09:14:06.166 226310 INFO nova.compute.manager [None req-530f84f6-4042-441a-8dfd-343c4d6917df 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 04:14:06 np0005539564 nova_compute[226295]: 2025-11-29 09:14:06.167 226310 DEBUG oslo.service.loopingcall [None req-530f84f6-4042-441a-8dfd-343c4d6917df 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 04:14:06 np0005539564 nova_compute[226295]: 2025-11-29 09:14:06.167 226310 DEBUG nova.compute.manager [-] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 04:14:06 np0005539564 nova_compute[226295]: 2025-11-29 09:14:06.167 226310 DEBUG nova.network.neutron [-] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 04:14:06 np0005539564 nova_compute[226295]: 2025-11-29 09:14:06.191 226310 DEBUG nova.compute.manager [req-148c2e65-c4a0-49ab-83a5-23ba584b4992 req-6b021093-8d64-441c-8bf0-f4ec85455832 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Received event network-vif-unplugged-7251ab30-4bf1-4c53-8064-90f21a5154e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:14:06 np0005539564 nova_compute[226295]: 2025-11-29 09:14:06.191 226310 DEBUG oslo_concurrency.lockutils [req-148c2e65-c4a0-49ab-83a5-23ba584b4992 req-6b021093-8d64-441c-8bf0-f4ec85455832 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0d2ca216-c02c-4a51-9997-85652cee8fe5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:14:06 np0005539564 nova_compute[226295]: 2025-11-29 09:14:06.192 226310 DEBUG oslo_concurrency.lockutils [req-148c2e65-c4a0-49ab-83a5-23ba584b4992 req-6b021093-8d64-441c-8bf0-f4ec85455832 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0d2ca216-c02c-4a51-9997-85652cee8fe5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:14:06 np0005539564 nova_compute[226295]: 2025-11-29 09:14:06.192 226310 DEBUG oslo_concurrency.lockutils [req-148c2e65-c4a0-49ab-83a5-23ba584b4992 req-6b021093-8d64-441c-8bf0-f4ec85455832 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0d2ca216-c02c-4a51-9997-85652cee8fe5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:14:06 np0005539564 nova_compute[226295]: 2025-11-29 09:14:06.192 226310 DEBUG nova.compute.manager [req-148c2e65-c4a0-49ab-83a5-23ba584b4992 req-6b021093-8d64-441c-8bf0-f4ec85455832 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] No waiting events found dispatching network-vif-unplugged-7251ab30-4bf1-4c53-8064-90f21a5154e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:14:06 np0005539564 nova_compute[226295]: 2025-11-29 09:14:06.193 226310 DEBUG nova.compute.manager [req-148c2e65-c4a0-49ab-83a5-23ba584b4992 req-6b021093-8d64-441c-8bf0-f4ec85455832 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Received event network-vif-unplugged-7251ab30-4bf1-4c53-8064-90f21a5154e9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 04:14:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:14:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:06.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:14:06 np0005539564 nova_compute[226295]: 2025-11-29 09:14:06.992 226310 DEBUG nova.network.neutron [-] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:14:07 np0005539564 nova_compute[226295]: 2025-11-29 09:14:07.018 226310 INFO nova.compute.manager [-] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Took 0.85 seconds to deallocate network for instance.#033[00m
Nov 29 04:14:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:07.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:07 np0005539564 nova_compute[226295]: 2025-11-29 09:14:07.267 226310 INFO nova.compute.manager [None req-530f84f6-4042-441a-8dfd-343c4d6917df 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Took 0.25 seconds to detach 1 volumes for instance.#033[00m
Nov 29 04:14:07 np0005539564 nova_compute[226295]: 2025-11-29 09:14:07.327 226310 DEBUG oslo_concurrency.lockutils [None req-530f84f6-4042-441a-8dfd-343c4d6917df 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:14:07 np0005539564 nova_compute[226295]: 2025-11-29 09:14:07.328 226310 DEBUG oslo_concurrency.lockutils [None req-530f84f6-4042-441a-8dfd-343c4d6917df 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:14:07 np0005539564 nova_compute[226295]: 2025-11-29 09:14:07.412 226310 DEBUG oslo_concurrency.processutils [None req-530f84f6-4042-441a-8dfd-343c4d6917df 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:14:07 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:14:07.468 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=102, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=101) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:14:07 np0005539564 nova_compute[226295]: 2025-11-29 09:14:07.468 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:07 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:14:07.472 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:14:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:14:07 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3230745112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:14:07 np0005539564 nova_compute[226295]: 2025-11-29 09:14:07.959 226310 DEBUG oslo_concurrency.processutils [None req-530f84f6-4042-441a-8dfd-343c4d6917df 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:14:07 np0005539564 nova_compute[226295]: 2025-11-29 09:14:07.968 226310 DEBUG nova.compute.provider_tree [None req-530f84f6-4042-441a-8dfd-343c4d6917df 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:14:08 np0005539564 nova_compute[226295]: 2025-11-29 09:14:08.000 226310 DEBUG nova.scheduler.client.report [None req-530f84f6-4042-441a-8dfd-343c4d6917df 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:14:08 np0005539564 nova_compute[226295]: 2025-11-29 09:14:08.028 226310 DEBUG oslo_concurrency.lockutils [None req-530f84f6-4042-441a-8dfd-343c4d6917df 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:14:08 np0005539564 nova_compute[226295]: 2025-11-29 09:14:08.061 226310 INFO nova.scheduler.client.report [None req-530f84f6-4042-441a-8dfd-343c4d6917df 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Deleted allocations for instance 0d2ca216-c02c-4a51-9997-85652cee8fe5#033[00m
Nov 29 04:14:08 np0005539564 nova_compute[226295]: 2025-11-29 09:14:08.366 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:08 np0005539564 nova_compute[226295]: 2025-11-29 09:14:08.396 226310 DEBUG oslo_concurrency.lockutils [None req-530f84f6-4042-441a-8dfd-343c4d6917df 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "0d2ca216-c02c-4a51-9997-85652cee8fe5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:14:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:14:08 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:14:08.475 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '102'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:14:08 np0005539564 nova_compute[226295]: 2025-11-29 09:14:08.524 226310 DEBUG nova.compute.manager [req-54e1a7ff-0ec1-4f27-a496-46eb2d680a77 req-ea640ce0-9c8b-4afd-9b77-8048a498f72d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Received event network-vif-plugged-7251ab30-4bf1-4c53-8064-90f21a5154e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:14:08 np0005539564 nova_compute[226295]: 2025-11-29 09:14:08.525 226310 DEBUG oslo_concurrency.lockutils [req-54e1a7ff-0ec1-4f27-a496-46eb2d680a77 req-ea640ce0-9c8b-4afd-9b77-8048a498f72d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0d2ca216-c02c-4a51-9997-85652cee8fe5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:14:08 np0005539564 nova_compute[226295]: 2025-11-29 09:14:08.525 226310 DEBUG oslo_concurrency.lockutils [req-54e1a7ff-0ec1-4f27-a496-46eb2d680a77 req-ea640ce0-9c8b-4afd-9b77-8048a498f72d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0d2ca216-c02c-4a51-9997-85652cee8fe5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:14:08 np0005539564 nova_compute[226295]: 2025-11-29 09:14:08.525 226310 DEBUG oslo_concurrency.lockutils [req-54e1a7ff-0ec1-4f27-a496-46eb2d680a77 req-ea640ce0-9c8b-4afd-9b77-8048a498f72d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0d2ca216-c02c-4a51-9997-85652cee8fe5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:14:08 np0005539564 nova_compute[226295]: 2025-11-29 09:14:08.525 226310 DEBUG nova.compute.manager [req-54e1a7ff-0ec1-4f27-a496-46eb2d680a77 req-ea640ce0-9c8b-4afd-9b77-8048a498f72d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] No waiting events found dispatching network-vif-plugged-7251ab30-4bf1-4c53-8064-90f21a5154e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:14:08 np0005539564 nova_compute[226295]: 2025-11-29 09:14:08.526 226310 WARNING nova.compute.manager [req-54e1a7ff-0ec1-4f27-a496-46eb2d680a77 req-ea640ce0-9c8b-4afd-9b77-8048a498f72d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Received unexpected event network-vif-plugged-7251ab30-4bf1-4c53-8064-90f21a5154e9 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 04:14:08 np0005539564 nova_compute[226295]: 2025-11-29 09:14:08.526 226310 DEBUG nova.compute.manager [req-54e1a7ff-0ec1-4f27-a496-46eb2d680a77 req-ea640ce0-9c8b-4afd-9b77-8048a498f72d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Received event network-vif-deleted-7251ab30-4bf1-4c53-8064-90f21a5154e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:14:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:14:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:08.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:14:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:14:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:09.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:14:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:14:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:10.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:14:10 np0005539564 nova_compute[226295]: 2025-11-29 09:14:10.769 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:11.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:12.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:14:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:13.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:14:13 np0005539564 nova_compute[226295]: 2025-11-29 09:14:13.370 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:14:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:14.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:14:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:15.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:14:15 np0005539564 nova_compute[226295]: 2025-11-29 09:14:15.773 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:16.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:14:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:17.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:14:18 np0005539564 nova_compute[226295]: 2025-11-29 09:14:18.408 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:14:18 np0005539564 podman[318338]: 2025-11-29 09:14:18.543232734 +0000 UTC m=+0.079141264 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 04:14:18 np0005539564 podman[318337]: 2025-11-29 09:14:18.550214114 +0000 UTC m=+0.088369345 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 04:14:18 np0005539564 podman[318336]: 2025-11-29 09:14:18.581202253 +0000 UTC m=+0.120196476 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller)
Nov 29 04:14:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:18.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:19.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:20.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:20 np0005539564 nova_compute[226295]: 2025-11-29 09:14:20.737 226310 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764407645.7355843, 0d2ca216-c02c-4a51-9997-85652cee8fe5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:14:20 np0005539564 nova_compute[226295]: 2025-11-29 09:14:20.738 226310 INFO nova.compute.manager [-] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] VM Stopped (Lifecycle Event)#033[00m
Nov 29 04:14:20 np0005539564 nova_compute[226295]: 2025-11-29 09:14:20.762 226310 DEBUG nova.compute.manager [None req-0dbe12ad-bfde-4580-8bac-0bce40d6e754 - - - - - -] [instance: 0d2ca216-c02c-4a51-9997-85652cee8fe5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:14:20 np0005539564 nova_compute[226295]: 2025-11-29 09:14:20.777 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:21.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:14:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:22.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:14:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:23.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:23 np0005539564 nova_compute[226295]: 2025-11-29 09:14:23.411 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:14:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:24.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:25.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:25 np0005539564 nova_compute[226295]: 2025-11-29 09:14:25.781 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:26.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:27.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 04:14:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2632729154' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 04:14:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 04:14:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2632729154' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 04:14:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:14:28 np0005539564 nova_compute[226295]: 2025-11-29 09:14:28.446 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:28.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:29.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:30.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:30 np0005539564 nova_compute[226295]: 2025-11-29 09:14:30.788 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:14:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:31.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:14:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:32.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:33.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:14:33 np0005539564 nova_compute[226295]: 2025-11-29 09:14:33.449 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:14:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:34.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:14:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:14:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:35.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:14:35 np0005539564 nova_compute[226295]: 2025-11-29 09:14:35.793 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:36.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:14:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:37.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:14:37 np0005539564 nova_compute[226295]: 2025-11-29 09:14:37.858 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:14:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:14:38 np0005539564 nova_compute[226295]: 2025-11-29 09:14:38.451 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:14:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:38.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:14:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:14:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:39.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:14:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:40.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:40 np0005539564 nova_compute[226295]: 2025-11-29 09:14:40.798 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:41.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:42 np0005539564 nova_compute[226295]: 2025-11-29 09:14:42.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:14:42 np0005539564 nova_compute[226295]: 2025-11-29 09:14:42.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:14:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:14:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:42.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:14:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:43.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:14:43 np0005539564 nova_compute[226295]: 2025-11-29 09:14:43.453 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:44 np0005539564 nova_compute[226295]: 2025-11-29 09:14:44.336 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:14:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:44.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:45.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:45 np0005539564 nova_compute[226295]: 2025-11-29 09:14:45.803 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:46.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:47.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:47 np0005539564 nova_compute[226295]: 2025-11-29 09:14:47.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:14:47 np0005539564 nova_compute[226295]: 2025-11-29 09:14:47.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:14:48 np0005539564 nova_compute[226295]: 2025-11-29 09:14:48.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:14:48 np0005539564 nova_compute[226295]: 2025-11-29 09:14:48.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:14:48 np0005539564 nova_compute[226295]: 2025-11-29 09:14:48.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:14:48 np0005539564 nova_compute[226295]: 2025-11-29 09:14:48.376 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:14:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:14:48 np0005539564 nova_compute[226295]: 2025-11-29 09:14:48.456 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:48.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:49.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:49 np0005539564 podman[318398]: 2025-11-29 09:14:49.535660296 +0000 UTC m=+0.079324249 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 04:14:49 np0005539564 podman[318397]: 2025-11-29 09:14:49.546417908 +0000 UTC m=+0.102771834 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:14:49 np0005539564 podman[318396]: 2025-11-29 09:14:49.547297462 +0000 UTC m=+0.098735675 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 04:14:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:50.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:50 np0005539564 nova_compute[226295]: 2025-11-29 09:14:50.805 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:14:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:51.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #190. Immutable memtables: 0.
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:14:51.212667) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 190
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407691212779, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 965, "num_deletes": 251, "total_data_size": 1976643, "memory_usage": 2009152, "flush_reason": "Manual Compaction"}
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #191: started
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407691232085, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 191, "file_size": 1292982, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 92502, "largest_seqno": 93462, "table_properties": {"data_size": 1288530, "index_size": 2103, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9930, "raw_average_key_size": 19, "raw_value_size": 1279631, "raw_average_value_size": 2564, "num_data_blocks": 91, "num_entries": 499, "num_filter_entries": 499, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407623, "oldest_key_time": 1764407623, "file_creation_time": 1764407691, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 19476 microseconds, and 8932 cpu microseconds.
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:14:51.232152) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #191: 1292982 bytes OK
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:14:51.232187) [db/memtable_list.cc:519] [default] Level-0 commit table #191 started
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:14:51.235672) [db/memtable_list.cc:722] [default] Level-0 commit table #191: memtable #1 done
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:14:51.235697) EVENT_LOG_v1 {"time_micros": 1764407691235688, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:14:51.235729) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 1971845, prev total WAL file size 1971845, number of live WAL files 2.
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000187.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:14:51.236839) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [191(1262KB)], [189(14MB)]
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407691236890, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [191], "files_L6": [189], "score": -1, "input_data_size": 16138819, "oldest_snapshot_seqno": -1}
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #192: 11544 keys, 14096351 bytes, temperature: kUnknown
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407691354465, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 192, "file_size": 14096351, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14023049, "index_size": 43365, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28869, "raw_key_size": 306688, "raw_average_key_size": 26, "raw_value_size": 13822506, "raw_average_value_size": 1197, "num_data_blocks": 1637, "num_entries": 11544, "num_filter_entries": 11544, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764407691, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 192, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:14:51.354815) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 14096351 bytes
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:14:51.356273) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.1 rd, 119.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 14.2 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(23.4) write-amplify(10.9) OK, records in: 12063, records dropped: 519 output_compression: NoCompression
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:14:51.356302) EVENT_LOG_v1 {"time_micros": 1764407691356289, "job": 122, "event": "compaction_finished", "compaction_time_micros": 117676, "compaction_time_cpu_micros": 33273, "output_level": 6, "num_output_files": 1, "total_output_size": 14096351, "num_input_records": 12063, "num_output_records": 11544, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407691356869, "job": 122, "event": "table_file_deletion", "file_number": 191}
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000189.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407691361757, "job": 122, "event": "table_file_deletion", "file_number": 189}
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:14:51.236709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:14:51.361976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:14:51.361984) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:14:51.361986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:14:51.361987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:14:51 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:14:51.361989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:14:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:52.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:14:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:53.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:14:53 np0005539564 nova_compute[226295]: 2025-11-29 09:14:53.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:14:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:14:53 np0005539564 nova_compute[226295]: 2025-11-29 09:14:53.459 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:54 np0005539564 nova_compute[226295]: 2025-11-29 09:14:54.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:14:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:54.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:14:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:55.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:14:55 np0005539564 nova_compute[226295]: 2025-11-29 09:14:55.835 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:56.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:57.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:57 np0005539564 ovn_controller[130591]: 2025-11-29T09:14:57Z|00911|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Nov 29 04:14:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:14:58 np0005539564 nova_compute[226295]: 2025-11-29 09:14:58.508 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:58.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:14:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:59.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:59 np0005539564 podman[318732]: 2025-11-29 09:14:59.925099033 +0000 UTC m=+0.066881633 container create 808c5c8fe515613a89b641a54aa9057f17d8f2f32888adc5c65bcfe0e4c31072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_varahamihira, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:14:59 np0005539564 systemd[1]: Started libpod-conmon-808c5c8fe515613a89b641a54aa9057f17d8f2f32888adc5c65bcfe0e4c31072.scope.
Nov 29 04:14:59 np0005539564 podman[318732]: 2025-11-29 09:14:59.89619686 +0000 UTC m=+0.037979560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:15:00 np0005539564 systemd[1]: Started libcrun container.
Nov 29 04:15:00 np0005539564 podman[318732]: 2025-11-29 09:15:00.02063574 +0000 UTC m=+0.162418450 container init 808c5c8fe515613a89b641a54aa9057f17d8f2f32888adc5c65bcfe0e4c31072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_varahamihira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 04:15:00 np0005539564 podman[318732]: 2025-11-29 09:15:00.029854889 +0000 UTC m=+0.171637499 container start 808c5c8fe515613a89b641a54aa9057f17d8f2f32888adc5c65bcfe0e4c31072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:15:00 np0005539564 podman[318732]: 2025-11-29 09:15:00.033148329 +0000 UTC m=+0.174930969 container attach 808c5c8fe515613a89b641a54aa9057f17d8f2f32888adc5c65bcfe0e4c31072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_varahamihira, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 04:15:00 np0005539564 admiring_varahamihira[318748]: 167 167
Nov 29 04:15:00 np0005539564 systemd[1]: libpod-808c5c8fe515613a89b641a54aa9057f17d8f2f32888adc5c65bcfe0e4c31072.scope: Deactivated successfully.
Nov 29 04:15:00 np0005539564 podman[318732]: 2025-11-29 09:15:00.037945429 +0000 UTC m=+0.179728059 container died 808c5c8fe515613a89b641a54aa9057f17d8f2f32888adc5c65bcfe0e4c31072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_varahamihira, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 04:15:00 np0005539564 systemd[1]: var-lib-containers-storage-overlay-967e5cc9f7a3199c1bae6e779167404e30a6ae27a5360326cc6c0e975da784b4-merged.mount: Deactivated successfully.
Nov 29 04:15:00 np0005539564 podman[318732]: 2025-11-29 09:15:00.089775033 +0000 UTC m=+0.231557673 container remove 808c5c8fe515613a89b641a54aa9057f17d8f2f32888adc5c65bcfe0e4c31072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:15:00 np0005539564 systemd[1]: libpod-conmon-808c5c8fe515613a89b641a54aa9057f17d8f2f32888adc5c65bcfe0e4c31072.scope: Deactivated successfully.
Nov 29 04:15:00 np0005539564 podman[318771]: 2025-11-29 09:15:00.326695549 +0000 UTC m=+0.058919227 container create 1ae31a80ae1db22a5c0dd7c01e0be4dca87bf91bffe67197dcce1813eb8634fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_galois, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:15:00 np0005539564 systemd[1]: Started libpod-conmon-1ae31a80ae1db22a5c0dd7c01e0be4dca87bf91bffe67197dcce1813eb8634fa.scope.
Nov 29 04:15:00 np0005539564 podman[318771]: 2025-11-29 09:15:00.309814782 +0000 UTC m=+0.042038480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:15:00 np0005539564 systemd[1]: Started libcrun container.
Nov 29 04:15:00 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6fb1ecb3150e168c0c8205bf1dd9263452809190ce701739ea319a5f3f092aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:15:00 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6fb1ecb3150e168c0c8205bf1dd9263452809190ce701739ea319a5f3f092aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:15:00 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6fb1ecb3150e168c0c8205bf1dd9263452809190ce701739ea319a5f3f092aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:15:00 np0005539564 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6fb1ecb3150e168c0c8205bf1dd9263452809190ce701739ea319a5f3f092aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:15:00 np0005539564 podman[318771]: 2025-11-29 09:15:00.429442832 +0000 UTC m=+0.161666540 container init 1ae31a80ae1db22a5c0dd7c01e0be4dca87bf91bffe67197dcce1813eb8634fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_galois, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:15:00 np0005539564 podman[318771]: 2025-11-29 09:15:00.442185088 +0000 UTC m=+0.174408806 container start 1ae31a80ae1db22a5c0dd7c01e0be4dca87bf91bffe67197dcce1813eb8634fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 04:15:00 np0005539564 podman[318771]: 2025-11-29 09:15:00.446872234 +0000 UTC m=+0.179095952 container attach 1ae31a80ae1db22a5c0dd7c01e0be4dca87bf91bffe67197dcce1813eb8634fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_galois, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 04:15:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:00.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:00 np0005539564 nova_compute[226295]: 2025-11-29 09:15:00.839 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:01.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:01 np0005539564 nova_compute[226295]: 2025-11-29 09:15:01.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:15:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e437 e437: 3 total, 3 up, 3 in
Nov 29 04:15:01 np0005539564 boring_galois[318788]: [
Nov 29 04:15:01 np0005539564 boring_galois[318788]:    {
Nov 29 04:15:01 np0005539564 boring_galois[318788]:        "available": false,
Nov 29 04:15:01 np0005539564 boring_galois[318788]:        "ceph_device": false,
Nov 29 04:15:01 np0005539564 boring_galois[318788]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 04:15:01 np0005539564 boring_galois[318788]:        "lsm_data": {},
Nov 29 04:15:01 np0005539564 boring_galois[318788]:        "lvs": [],
Nov 29 04:15:01 np0005539564 boring_galois[318788]:        "path": "/dev/sr0",
Nov 29 04:15:01 np0005539564 boring_galois[318788]:        "rejected_reasons": [
Nov 29 04:15:01 np0005539564 boring_galois[318788]:            "Insufficient space (<5GB)",
Nov 29 04:15:01 np0005539564 boring_galois[318788]:            "Has a FileSystem"
Nov 29 04:15:01 np0005539564 boring_galois[318788]:        ],
Nov 29 04:15:01 np0005539564 boring_galois[318788]:        "sys_api": {
Nov 29 04:15:01 np0005539564 boring_galois[318788]:            "actuators": null,
Nov 29 04:15:01 np0005539564 boring_galois[318788]:            "device_nodes": "sr0",
Nov 29 04:15:01 np0005539564 boring_galois[318788]:            "devname": "sr0",
Nov 29 04:15:01 np0005539564 boring_galois[318788]:            "human_readable_size": "482.00 KB",
Nov 29 04:15:01 np0005539564 boring_galois[318788]:            "id_bus": "ata",
Nov 29 04:15:01 np0005539564 boring_galois[318788]:            "model": "QEMU DVD-ROM",
Nov 29 04:15:01 np0005539564 boring_galois[318788]:            "nr_requests": "2",
Nov 29 04:15:01 np0005539564 boring_galois[318788]:            "parent": "/dev/sr0",
Nov 29 04:15:01 np0005539564 boring_galois[318788]:            "partitions": {},
Nov 29 04:15:01 np0005539564 boring_galois[318788]:            "path": "/dev/sr0",
Nov 29 04:15:01 np0005539564 boring_galois[318788]:            "removable": "1",
Nov 29 04:15:01 np0005539564 boring_galois[318788]:            "rev": "2.5+",
Nov 29 04:15:01 np0005539564 boring_galois[318788]:            "ro": "0",
Nov 29 04:15:01 np0005539564 boring_galois[318788]:            "rotational": "1",
Nov 29 04:15:01 np0005539564 boring_galois[318788]:            "sas_address": "",
Nov 29 04:15:01 np0005539564 boring_galois[318788]:            "sas_device_handle": "",
Nov 29 04:15:01 np0005539564 boring_galois[318788]:            "scheduler_mode": "mq-deadline",
Nov 29 04:15:01 np0005539564 boring_galois[318788]:            "sectors": 0,
Nov 29 04:15:01 np0005539564 boring_galois[318788]:            "sectorsize": "2048",
Nov 29 04:15:01 np0005539564 boring_galois[318788]:            "size": 493568.0,
Nov 29 04:15:01 np0005539564 boring_galois[318788]:            "support_discard": "2048",
Nov 29 04:15:01 np0005539564 boring_galois[318788]:            "type": "disk",
Nov 29 04:15:01 np0005539564 boring_galois[318788]:            "vendor": "QEMU"
Nov 29 04:15:01 np0005539564 boring_galois[318788]:        }
Nov 29 04:15:01 np0005539564 boring_galois[318788]:    }
Nov 29 04:15:01 np0005539564 boring_galois[318788]: ]
Nov 29 04:15:01 np0005539564 systemd[1]: libpod-1ae31a80ae1db22a5c0dd7c01e0be4dca87bf91bffe67197dcce1813eb8634fa.scope: Deactivated successfully.
Nov 29 04:15:01 np0005539564 podman[318771]: 2025-11-29 09:15:01.707901538 +0000 UTC m=+1.440125236 container died 1ae31a80ae1db22a5c0dd7c01e0be4dca87bf91bffe67197dcce1813eb8634fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_galois, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:15:01 np0005539564 systemd[1]: libpod-1ae31a80ae1db22a5c0dd7c01e0be4dca87bf91bffe67197dcce1813eb8634fa.scope: Consumed 1.288s CPU time.
Nov 29 04:15:01 np0005539564 systemd[1]: var-lib-containers-storage-overlay-d6fb1ecb3150e168c0c8205bf1dd9263452809190ce701739ea319a5f3f092aa-merged.mount: Deactivated successfully.
Nov 29 04:15:01 np0005539564 podman[318771]: 2025-11-29 09:15:01.783594548 +0000 UTC m=+1.515818226 container remove 1ae31a80ae1db22a5c0dd7c01e0be4dca87bf91bffe67197dcce1813eb8634fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 04:15:01 np0005539564 systemd[1]: libpod-conmon-1ae31a80ae1db22a5c0dd7c01e0be4dca87bf91bffe67197dcce1813eb8634fa.scope: Deactivated successfully.
Nov 29 04:15:02 np0005539564 nova_compute[226295]: 2025-11-29 09:15:02.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:15:02 np0005539564 nova_compute[226295]: 2025-11-29 09:15:02.377 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:15:02 np0005539564 nova_compute[226295]: 2025-11-29 09:15:02.378 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:15:02 np0005539564 nova_compute[226295]: 2025-11-29 09:15:02.378 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:15:02 np0005539564 nova_compute[226295]: 2025-11-29 09:15:02.378 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:15:02 np0005539564 nova_compute[226295]: 2025-11-29 09:15:02.379 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:15:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:15:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:15:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:15:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:15:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:15:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:15:02 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:15:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:02.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:15:02 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3335757106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:15:02 np0005539564 nova_compute[226295]: 2025-11-29 09:15:02.844 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:15:02 np0005539564 nova_compute[226295]: 2025-11-29 09:15:02.990 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:15:02 np0005539564 nova_compute[226295]: 2025-11-29 09:15:02.992 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4138MB free_disk=20.98813247680664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:15:02 np0005539564 nova_compute[226295]: 2025-11-29 09:15:02.992 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:15:02 np0005539564 nova_compute[226295]: 2025-11-29 09:15:02.992 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:15:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:03.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:15:03 np0005539564 nova_compute[226295]: 2025-11-29 09:15:03.510 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:15:03.798 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:15:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:15:03.798 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:15:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:15:03.799 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:15:03 np0005539564 nova_compute[226295]: 2025-11-29 09:15:03.951 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:15:03 np0005539564 nova_compute[226295]: 2025-11-29 09:15:03.951 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:15:03 np0005539564 nova_compute[226295]: 2025-11-29 09:15:03.980 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing inventories for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 04:15:04 np0005539564 nova_compute[226295]: 2025-11-29 09:15:04.009 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating ProviderTree inventory for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 04:15:04 np0005539564 nova_compute[226295]: 2025-11-29 09:15:04.010 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating inventory in ProviderTree for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 04:15:04 np0005539564 nova_compute[226295]: 2025-11-29 09:15:04.026 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing aggregate associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 04:15:04 np0005539564 nova_compute[226295]: 2025-11-29 09:15:04.055 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing trait associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 04:15:04 np0005539564 nova_compute[226295]: 2025-11-29 09:15:04.070 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:15:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:15:04 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/616702941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:15:04 np0005539564 nova_compute[226295]: 2025-11-29 09:15:04.561 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:15:04 np0005539564 nova_compute[226295]: 2025-11-29 09:15:04.567 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:15:04 np0005539564 nova_compute[226295]: 2025-11-29 09:15:04.739 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:15:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:04.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:05 np0005539564 nova_compute[226295]: 2025-11-29 09:15:05.039 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:15:05 np0005539564 nova_compute[226295]: 2025-11-29 09:15:05.040 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:15:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:05.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:05 np0005539564 nova_compute[226295]: 2025-11-29 09:15:05.846 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:15:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:06.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:15:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:07.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:15:08 np0005539564 nova_compute[226295]: 2025-11-29 09:15:08.513 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:15:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:08.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:15:08 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:15:08 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:15:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:09.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:10.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:10 np0005539564 nova_compute[226295]: 2025-11-29 09:15:10.851 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:11.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:12.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:15:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:13.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:15:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:15:13 np0005539564 nova_compute[226295]: 2025-11-29 09:15:13.516 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:15:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:14.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:15:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:15.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:15 np0005539564 nova_compute[226295]: 2025-11-29 09:15:15.855 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:15:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:16.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:15:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:15:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:17.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:15:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:15:18 np0005539564 nova_compute[226295]: 2025-11-29 09:15:18.518 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:18.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:19.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:15:20.152 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=103, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=102) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:15:20 np0005539564 nova_compute[226295]: 2025-11-29 09:15:20.151 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:20 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:15:20.153 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:15:20 np0005539564 podman[320098]: 2025-11-29 09:15:20.543321389 +0000 UTC m=+0.078574779 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Nov 29 04:15:20 np0005539564 podman[320097]: 2025-11-29 09:15:20.560785542 +0000 UTC m=+0.107683017 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 04:15:20 np0005539564 podman[320096]: 2025-11-29 09:15:20.590770284 +0000 UTC m=+0.136108518 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 29 04:15:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:15:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:20.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:15:20 np0005539564 nova_compute[226295]: 2025-11-29 09:15:20.856 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:21 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:15:21.155 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '103'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:15:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:21.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:15:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:22.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:15:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:23.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:15:23 np0005539564 nova_compute[226295]: 2025-11-29 09:15:23.520 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:24.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:15:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:25.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:15:25 np0005539564 nova_compute[226295]: 2025-11-29 09:15:25.860 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:26.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:27.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:15:28 np0005539564 nova_compute[226295]: 2025-11-29 09:15:28.561 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:28.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:29.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:30.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:30 np0005539564 nova_compute[226295]: 2025-11-29 09:15:30.863 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:15:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:31.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:15:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:32.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:15:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:33.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:15:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:15:33 np0005539564 nova_compute[226295]: 2025-11-29 09:15:33.624 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:15:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:34.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:15:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:35.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:35 np0005539564 nova_compute[226295]: 2025-11-29 09:15:35.900 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:15:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:36.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:15:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:37.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:15:38 np0005539564 nova_compute[226295]: 2025-11-29 09:15:38.627 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:38.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:15:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:39.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:15:40 np0005539564 nova_compute[226295]: 2025-11-29 09:15:40.033 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:15:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:40.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:40 np0005539564 nova_compute[226295]: 2025-11-29 09:15:40.904 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:41.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:42 np0005539564 nova_compute[226295]: 2025-11-29 09:15:42.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:15:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:15:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:42.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:15:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:43.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:15:43 np0005539564 nova_compute[226295]: 2025-11-29 09:15:43.630 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:44 np0005539564 nova_compute[226295]: 2025-11-29 09:15:44.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:15:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:44.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:15:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:45.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:15:45 np0005539564 nova_compute[226295]: 2025-11-29 09:15:45.930 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:15:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:46.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:15:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:15:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:47.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:15:47 np0005539564 nova_compute[226295]: 2025-11-29 09:15:47.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:15:47 np0005539564 nova_compute[226295]: 2025-11-29 09:15:47.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:15:48 np0005539564 nova_compute[226295]: 2025-11-29 09:15:48.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:15:48 np0005539564 nova_compute[226295]: 2025-11-29 09:15:48.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:15:48 np0005539564 nova_compute[226295]: 2025-11-29 09:15:48.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:15:48 np0005539564 nova_compute[226295]: 2025-11-29 09:15:48.421 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:15:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:15:48 np0005539564 nova_compute[226295]: 2025-11-29 09:15:48.633 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:48.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:15:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:49.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:15:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:50.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:50 np0005539564 nova_compute[226295]: 2025-11-29 09:15:50.934 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:51.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:51 np0005539564 podman[320158]: 2025-11-29 09:15:51.533013937 +0000 UTC m=+0.080519111 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 04:15:51 np0005539564 podman[320159]: 2025-11-29 09:15:51.535865484 +0000 UTC m=+0.072120664 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:15:51 np0005539564 podman[320157]: 2025-11-29 09:15:51.570824801 +0000 UTC m=+0.117550244 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 04:15:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 04:15:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:52.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 04:15:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:53.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:15:53 np0005539564 ovn_controller[130591]: 2025-11-29T09:15:53Z|00912|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Nov 29 04:15:53 np0005539564 nova_compute[226295]: 2025-11-29 09:15:53.635 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:54 np0005539564 nova_compute[226295]: 2025-11-29 09:15:54.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:15:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:54.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:55.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:55 np0005539564 nova_compute[226295]: 2025-11-29 09:15:55.979 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:56 np0005539564 nova_compute[226295]: 2025-11-29 09:15:56.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:15:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:15:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:56.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:15:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:57.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:15:58 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:15:58.466 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=104, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=103) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:15:58 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:15:58.467 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:15:58 np0005539564 nova_compute[226295]: 2025-11-29 09:15:58.470 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:58 np0005539564 nova_compute[226295]: 2025-11-29 09:15:58.638 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:15:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:58.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:15:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:15:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:59.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:00.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:00 np0005539564 nova_compute[226295]: 2025-11-29 09:16:00.983 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:01.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e438 e438: 3 total, 3 up, 3 in
Nov 29 04:16:02 np0005539564 nova_compute[226295]: 2025-11-29 09:16:02.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:16:02 np0005539564 nova_compute[226295]: 2025-11-29 09:16:02.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:16:02 np0005539564 nova_compute[226295]: 2025-11-29 09:16:02.372 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:16:02 np0005539564 nova_compute[226295]: 2025-11-29 09:16:02.372 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:16:02 np0005539564 nova_compute[226295]: 2025-11-29 09:16:02.372 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:16:02 np0005539564 nova_compute[226295]: 2025-11-29 09:16:02.373 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:16:02 np0005539564 nova_compute[226295]: 2025-11-29 09:16:02.373 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:16:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:16:02 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1495066546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:16:02 np0005539564 nova_compute[226295]: 2025-11-29 09:16:02.820 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:16:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:02.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:03 np0005539564 nova_compute[226295]: 2025-11-29 09:16:03.028 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:16:03 np0005539564 nova_compute[226295]: 2025-11-29 09:16:03.030 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4162MB free_disk=20.98813247680664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:16:03 np0005539564 nova_compute[226295]: 2025-11-29 09:16:03.030 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:16:03 np0005539564 nova_compute[226295]: 2025-11-29 09:16:03.031 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:16:03 np0005539564 nova_compute[226295]: 2025-11-29 09:16:03.131 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:16:03 np0005539564 nova_compute[226295]: 2025-11-29 09:16:03.132 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:16:03 np0005539564 nova_compute[226295]: 2025-11-29 09:16:03.150 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:16:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:03.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:16:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:16:03 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2643356207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:16:03 np0005539564 nova_compute[226295]: 2025-11-29 09:16:03.608 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:16:03 np0005539564 nova_compute[226295]: 2025-11-29 09:16:03.617 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:16:03 np0005539564 nova_compute[226295]: 2025-11-29 09:16:03.671 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:16:03 np0005539564 nova_compute[226295]: 2025-11-29 09:16:03.672 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:16:03 np0005539564 nova_compute[226295]: 2025-11-29 09:16:03.673 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:16:03 np0005539564 nova_compute[226295]: 2025-11-29 09:16:03.674 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:16:03.799 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:16:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:16:03.799 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:16:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:16:03.800 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:16:04 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:16:04.469 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '104'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:16:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:04.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:05.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:05 np0005539564 nova_compute[226295]: 2025-11-29 09:16:05.987 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:16:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:06.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:16:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:07.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 e439: 3 total, 3 up, 3 in
Nov 29 04:16:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:16:08 np0005539564 nova_compute[226295]: 2025-11-29 09:16:08.674 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:08.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:16:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:09.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:16:09 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:16:09 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:16:09 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:16:09 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:16:10 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:16:10 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:16:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:10.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:10 np0005539564 nova_compute[226295]: 2025-11-29 09:16:10.991 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:11.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:11 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:16:11 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:16:11 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:16:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:12.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:16:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:13.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:16:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:16:13 np0005539564 nova_compute[226295]: 2025-11-29 09:16:13.677 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:14.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:15 np0005539564 nova_compute[226295]: 2025-11-29 09:16:15.236 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:15.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:15 np0005539564 nova_compute[226295]: 2025-11-29 09:16:15.385 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:15 np0005539564 nova_compute[226295]: 2025-11-29 09:16:15.994 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:16.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:16:17 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:16:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:17.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:16:18 np0005539564 nova_compute[226295]: 2025-11-29 09:16:18.680 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:16:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:18.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:16:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:19.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:16:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:20.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:16:21 np0005539564 nova_compute[226295]: 2025-11-29 09:16:21.040 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:16:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:21.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:16:22 np0005539564 podman[320573]: 2025-11-29 09:16:22.532879729 +0000 UTC m=+0.070813899 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 04:16:22 np0005539564 podman[320571]: 2025-11-29 09:16:22.537466814 +0000 UTC m=+0.088178550 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 04:16:22 np0005539564 podman[320572]: 2025-11-29 09:16:22.553077606 +0000 UTC m=+0.103942836 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 04:16:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:22.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:23.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:16:23 np0005539564 nova_compute[226295]: 2025-11-29 09:16:23.682 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:24.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:25.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:26 np0005539564 nova_compute[226295]: 2025-11-29 09:16:26.044 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:16:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:26.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:16:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:27.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 04:16:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3062304403' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 04:16:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 04:16:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3062304403' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 04:16:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:16:28 np0005539564 nova_compute[226295]: 2025-11-29 09:16:28.685 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:28.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:29.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:30.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:31 np0005539564 nova_compute[226295]: 2025-11-29 09:16:31.047 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:31.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:32.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:33.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:16:33 np0005539564 nova_compute[226295]: 2025-11-29 09:16:33.687 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:34.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:35.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:36 np0005539564 nova_compute[226295]: 2025-11-29 09:16:36.051 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:36.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:37.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:16:38 np0005539564 nova_compute[226295]: 2025-11-29 09:16:38.668 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:16:38 np0005539564 nova_compute[226295]: 2025-11-29 09:16:38.688 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:38.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:39.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:40.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:41 np0005539564 nova_compute[226295]: 2025-11-29 09:16:41.055 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:41.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:42 np0005539564 nova_compute[226295]: 2025-11-29 09:16:42.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:16:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:42.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:16:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:43.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:16:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:16:43 np0005539564 nova_compute[226295]: 2025-11-29 09:16:43.691 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:44.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:45.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:46 np0005539564 nova_compute[226295]: 2025-11-29 09:16:46.058 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:46 np0005539564 nova_compute[226295]: 2025-11-29 09:16:46.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:16:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:46.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:47.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:47 np0005539564 nova_compute[226295]: 2025-11-29 09:16:47.335 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:16:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:16:48 np0005539564 nova_compute[226295]: 2025-11-29 09:16:48.693 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:48.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:49.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:49 np0005539564 nova_compute[226295]: 2025-11-29 09:16:49.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:16:49 np0005539564 nova_compute[226295]: 2025-11-29 09:16:49.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:16:49 np0005539564 nova_compute[226295]: 2025-11-29 09:16:49.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:16:49 np0005539564 nova_compute[226295]: 2025-11-29 09:16:49.370 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:16:49 np0005539564 nova_compute[226295]: 2025-11-29 09:16:49.370 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:16:49 np0005539564 nova_compute[226295]: 2025-11-29 09:16:49.370 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:16:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:50.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:51 np0005539564 nova_compute[226295]: 2025-11-29 09:16:51.062 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:51.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:52.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:53.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:16:53 np0005539564 podman[320636]: 2025-11-29 09:16:53.503995186 +0000 UTC m=+0.058486466 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 04:16:53 np0005539564 podman[320637]: 2025-11-29 09:16:53.505483595 +0000 UTC m=+0.055287777 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, 
org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 04:16:53 np0005539564 podman[320635]: 2025-11-29 09:16:53.540877874 +0000 UTC m=+0.096442162 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 04:16:53 np0005539564 nova_compute[226295]: 2025-11-29 09:16:53.694 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:54 np0005539564 ovn_controller[130591]: 2025-11-29T09:16:54Z|00913|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 04:16:54 np0005539564 nova_compute[226295]: 2025-11-29 09:16:54.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:16:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:54.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:55.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:56 np0005539564 nova_compute[226295]: 2025-11-29 09:16:56.066 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:56 np0005539564 nova_compute[226295]: 2025-11-29 09:16:56.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:16:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:56.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:57.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:16:58 np0005539564 nova_compute[226295]: 2025-11-29 09:16:58.698 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:58.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:16:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:59.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:00 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:00 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:17:00 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:00.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:17:01 np0005539564 nova_compute[226295]: 2025-11-29 09:17:01.072 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:01.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:02 np0005539564 nova_compute[226295]: 2025-11-29 09:17:02.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:17:02 np0005539564 nova_compute[226295]: 2025-11-29 09:17:02.367 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:17:02 np0005539564 nova_compute[226295]: 2025-11-29 09:17:02.367 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:17:02 np0005539564 nova_compute[226295]: 2025-11-29 09:17:02.367 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:17:02 np0005539564 nova_compute[226295]: 2025-11-29 09:17:02.367 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:17:02 np0005539564 nova_compute[226295]: 2025-11-29 09:17:02.368 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:17:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:17:02 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1974728699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:17:02 np0005539564 nova_compute[226295]: 2025-11-29 09:17:02.863 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:17:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:02.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:03 np0005539564 nova_compute[226295]: 2025-11-29 09:17:03.067 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:17:03 np0005539564 nova_compute[226295]: 2025-11-29 09:17:03.069 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4176MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:17:03 np0005539564 nova_compute[226295]: 2025-11-29 09:17:03.069 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:17:03 np0005539564 nova_compute[226295]: 2025-11-29 09:17:03.069 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:17:03 np0005539564 nova_compute[226295]: 2025-11-29 09:17:03.133 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:17:03 np0005539564 nova_compute[226295]: 2025-11-29 09:17:03.133 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:17:03 np0005539564 nova_compute[226295]: 2025-11-29 09:17:03.314 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:17:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:17:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:03.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:17:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:17:03 np0005539564 nova_compute[226295]: 2025-11-29 09:17:03.698 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:17:03 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4053162053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:17:03 np0005539564 nova_compute[226295]: 2025-11-29 09:17:03.772 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:17:03 np0005539564 nova_compute[226295]: 2025-11-29 09:17:03.780 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:17:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:17:03.800 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:17:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:17:03.800 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:17:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:17:03.800 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:17:03 np0005539564 nova_compute[226295]: 2025-11-29 09:17:03.809 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:17:03 np0005539564 nova_compute[226295]: 2025-11-29 09:17:03.811 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:17:03 np0005539564 nova_compute[226295]: 2025-11-29 09:17:03.811 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:17:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:17:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:04.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:17:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:17:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:05.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:17:05 np0005539564 nova_compute[226295]: 2025-11-29 09:17:05.811 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:17:06 np0005539564 nova_compute[226295]: 2025-11-29 09:17:06.075 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:06.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:17:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:07.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #193. Immutable memtables: 0.
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:17:07.711788) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 193
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407827711861, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 1706, "num_deletes": 252, "total_data_size": 3932775, "memory_usage": 3995128, "flush_reason": "Manual Compaction"}
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #194: started
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407827734304, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 194, "file_size": 1604665, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 93467, "largest_seqno": 95168, "table_properties": {"data_size": 1599186, "index_size": 2682, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14502, "raw_average_key_size": 21, "raw_value_size": 1587073, "raw_average_value_size": 2310, "num_data_blocks": 119, "num_entries": 687, "num_filter_entries": 687, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407692, "oldest_key_time": 1764407692, "file_creation_time": 1764407827, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 22609 microseconds, and 10858 cpu microseconds.
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:17:07.734405) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #194: 1604665 bytes OK
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:17:07.734433) [db/memtable_list.cc:519] [default] Level-0 commit table #194 started
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:17:07.737054) [db/memtable_list.cc:722] [default] Level-0 commit table #194: memtable #1 done
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:17:07.737078) EVENT_LOG_v1 {"time_micros": 1764407827737070, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:17:07.737103) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 3925045, prev total WAL file size 3925045, number of live WAL files 2.
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000190.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:17:07.739215) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323638' seq:72057594037927935, type:22 .. '6D6772737461740033353230' seq:0, type:0; will stop at (end)
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [194(1567KB)], [192(13MB)]
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407827739281, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [194], "files_L6": [192], "score": -1, "input_data_size": 15701016, "oldest_snapshot_seqno": -1}
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #195: 11768 keys, 12760764 bytes, temperature: kUnknown
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407827890961, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 195, "file_size": 12760764, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12688810, "index_size": 41441, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29445, "raw_key_size": 311544, "raw_average_key_size": 26, "raw_value_size": 12487009, "raw_average_value_size": 1061, "num_data_blocks": 1562, "num_entries": 11768, "num_filter_entries": 11768, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764407827, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 195, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:17:07.891250) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 12760764 bytes
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:17:07.892249) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.5 rd, 84.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 13.4 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(17.7) write-amplify(8.0) OK, records in: 12231, records dropped: 463 output_compression: NoCompression
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:17:07.892269) EVENT_LOG_v1 {"time_micros": 1764407827892259, "job": 124, "event": "compaction_finished", "compaction_time_micros": 151743, "compaction_time_cpu_micros": 58474, "output_level": 6, "num_output_files": 1, "total_output_size": 12760764, "num_input_records": 12231, "num_output_records": 11768, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407827892624, "job": 124, "event": "table_file_deletion", "file_number": 194}
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000192.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407827895011, "job": 124, "event": "table_file_deletion", "file_number": 192}
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:17:07.739092) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:17:07.895041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:17:07.895044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:17:07.895047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:17:07.895048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:17:07 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:17:07.895050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:17:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:17:08 np0005539564 nova_compute[226295]: 2025-11-29 09:17:08.701 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 04:17:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:08.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 04:17:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:09.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:17:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:10.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:17:11 np0005539564 nova_compute[226295]: 2025-11-29 09:17:11.089 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:11.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:17:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:12.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:17:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:17:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:13.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:17:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:17:13 np0005539564 nova_compute[226295]: 2025-11-29 09:17:13.703 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:14.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:15.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:16 np0005539564 nova_compute[226295]: 2025-11-29 09:17:16.092 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:16.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:17 np0005539564 systemd-logind[785]: New session 60 of user zuul.
Nov 29 04:17:17 np0005539564 systemd[1]: Started Session 60 of User zuul.
Nov 29 04:17:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:17.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:18 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:17:18 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:17:18 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:17:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:17:18 np0005539564 nova_compute[226295]: 2025-11-29 09:17:18.722 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:17:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:18.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:17:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:17:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:19.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:17:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:20.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:21 np0005539564 nova_compute[226295]: 2025-11-29 09:17:21.095 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:17:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:21.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:17:22 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 04:17:22 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1515366372' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 04:17:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:22.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:17:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:23.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:17:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:17:23 np0005539564 nova_compute[226295]: 2025-11-29 09:17:23.724 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:24 np0005539564 podman[321133]: 2025-11-29 09:17:24.503164961 +0000 UTC m=+0.060442998 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 04:17:24 np0005539564 podman[321134]: 2025-11-29 09:17:24.521715463 +0000 UTC m=+0.069677528 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 04:17:24 np0005539564 podman[321132]: 2025-11-29 09:17:24.538243711 +0000 UTC m=+0.094858700 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125)
Nov 29 04:17:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:17:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:24.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:17:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:25.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:25 np0005539564 ovs-vsctl[321222]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 29 04:17:26 np0005539564 nova_compute[226295]: 2025-11-29 09:17:26.097 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:26 np0005539564 virtqemud[225880]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 29 04:17:26 np0005539564 virtqemud[225880]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 29 04:17:26 np0005539564 virtqemud[225880]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 29 04:17:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:17:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:26.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:17:27 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd asok_command: cache status {prefix=cache status} (starting...)
Nov 29 04:17:27 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Can't run that command on an inactive MDS!
Nov 29 04:17:27 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd asok_command: client ls {prefix=client ls} (starting...)
Nov 29 04:17:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:27 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Can't run that command on an inactive MDS!
Nov 29 04:17:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:27.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:27 np0005539564 lvm[321559]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 04:17:27 np0005539564 lvm[321559]: VG ceph_vg0 finished
Nov 29 04:17:28 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd asok_command: damage ls {prefix=damage ls} (starting...)
Nov 29 04:17:28 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Can't run that command on an inactive MDS!
Nov 29 04:17:28 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd asok_command: dump loads {prefix=dump loads} (starting...)
Nov 29 04:17:28 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Can't run that command on an inactive MDS!
Nov 29 04:17:28 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:17:28 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:17:28 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 29 04:17:28 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Can't run that command on an inactive MDS!
Nov 29 04:17:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 04:17:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3817662681' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 04:17:28 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 29 04:17:28 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Can't run that command on an inactive MDS!
Nov 29 04:17:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:17:28 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 29 04:17:28 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Can't run that command on an inactive MDS!
Nov 29 04:17:28 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 29 04:17:28 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Can't run that command on an inactive MDS!
Nov 29 04:17:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:17:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3321029869' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:17:28 np0005539564 nova_compute[226295]: 2025-11-29 09:17:28.725 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:28 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 29 04:17:28 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Can't run that command on an inactive MDS!
Nov 29 04:17:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:17:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:28.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:17:29 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 29 04:17:29 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Can't run that command on an inactive MDS!
Nov 29 04:17:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 29 04:17:29 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2372799268' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 04:17:29 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd asok_command: ops {prefix=ops} (starting...)
Nov 29 04:17:29 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Can't run that command on an inactive MDS!
Nov 29 04:17:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 29 04:17:29 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2093559059' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 04:17:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:29.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 29 04:17:29 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1528657275' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 04:17:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 04:17:29 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3795383077' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 04:17:29 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd asok_command: session ls {prefix=session ls} (starting...)
Nov 29 04:17:29 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Can't run that command on an inactive MDS!
Nov 29 04:17:30 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd asok_command: status {prefix=status} (starting...)
Nov 29 04:17:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 04:17:30 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3001446612' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 04:17:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 04:17:30 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/466745283' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 04:17:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:30.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:31 np0005539564 nova_compute[226295]: 2025-11-29 09:17:31.101 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 04:17:31 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/454002165' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 04:17:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:31.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 29 04:17:31 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3416001044' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 04:17:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 04:17:31 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2364403902' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 04:17:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 04:17:32 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4252873416' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 04:17:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 29 04:17:32 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1981851145' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 04:17:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 04:17:32 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/952766660' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 04:17:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 29 04:17:32 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1259470410' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 04:17:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:17:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:32.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:17:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 04:17:33 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1506877392' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 04:17:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:33.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:17:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 04:17:33 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2273924913' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 04:17:33 np0005539564 nova_compute[226295]: 2025-11-29 09:17:33.728 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 04:17:33 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/131710880' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 heartbeat osd_stat(store_statfs(0x19fdd9000/0x0/0x1bfc00000, data 0x2e85abf/0x30a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 447365120 unmapped: 93929472 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 heartbeat osd_stat(store_statfs(0x19fdd9000/0x0/0x1bfc00000, data 0x2e85abf/0x30a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 447365120 unmapped: 93929472 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 447365120 unmapped: 93929472 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4654765 data_alloc: 218103808 data_used: 13443072
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 447365120 unmapped: 93929472 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 447373312 unmapped: 93921280 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 heartbeat osd_stat(store_statfs(0x19fdd9000/0x0/0x1bfc00000, data 0x2e85abf/0x30a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 447389696 unmapped: 93904896 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 447389696 unmapped: 93904896 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 447389696 unmapped: 93904896 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.433160782s of 21.683403015s, submitted: 75
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4751848 data_alloc: 218103808 data_used: 13443072
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 ms_handle_reset con 0x55ba4f2cf000 session 0x55ba517f41e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 447143936 unmapped: 94150656 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 ms_handle_reset con 0x55ba54440000 session 0x55ba51729c20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 ms_handle_reset con 0x55ba6483fc00 session 0x55ba513c1e00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 ms_handle_reset con 0x55ba51832800 session 0x55ba513ed680
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 ms_handle_reset con 0x55ba50514800 session 0x55ba50ef7a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 ms_handle_reset con 0x55ba51818000 session 0x55ba517daf00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 ms_handle_reset con 0x55ba58549800 session 0x55ba516f92c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 447152128 unmapped: 94142464 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 heartbeat osd_stat(store_statfs(0x19f372000/0x0/0x1bfc00000, data 0x38edabf/0x3b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 ms_handle_reset con 0x55ba4f2cf000 session 0x55ba50a45680
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 445931520 unmapped: 95363072 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a0040000/0x0/0x1bfc00000, data 0x2883abf/0x2aa2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 445931520 unmapped: 95363072 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a0040000/0x0/0x1bfc00000, data 0x2883a9c/0x2aa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 445931520 unmapped: 95363072 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4570194 data_alloc: 218103808 data_used: 7852032
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 445931520 unmapped: 95363072 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 ms_handle_reset con 0x55ba51840800 session 0x55ba50cbc1e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 446087168 unmapped: 95207424 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 446087168 unmapped: 95207424 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 446095360 unmapped: 95199232 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 446095360 unmapped: 95199232 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a03b9000/0x0/0x1bfc00000, data 0x28a7a9c/0x2ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4649850 data_alloc: 234881024 data_used: 18362368
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 446095360 unmapped: 95199232 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a03b9000/0x0/0x1bfc00000, data 0x28a7a9c/0x2ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a03b9000/0x0/0x1bfc00000, data 0x28a7a9c/0x2ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 446095360 unmapped: 95199232 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 446095360 unmapped: 95199232 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 446103552 unmapped: 95191040 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 446103552 unmapped: 95191040 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a03b9000/0x0/0x1bfc00000, data 0x28a7a9c/0x2ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4649850 data_alloc: 234881024 data_used: 18362368
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 446103552 unmapped: 95191040 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 446103552 unmapped: 95191040 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 446103552 unmapped: 95191040 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a03b9000/0x0/0x1bfc00000, data 0x28a7a9c/0x2ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 446103552 unmapped: 95191040 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.301401138s of 18.487977982s, submitted: 47
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 447381504 unmapped: 93913088 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4725300 data_alloc: 234881024 data_used: 19259392
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 447381504 unmapped: 93913088 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 heartbeat osd_stat(store_statfs(0x19fbfa000/0x0/0x1bfc00000, data 0x305ea9c/0x327c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 heartbeat osd_stat(store_statfs(0x19fbf1000/0x0/0x1bfc00000, data 0x3067a9c/0x3285000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 447381504 unmapped: 93913088 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 447381504 unmapped: 93913088 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 447381504 unmapped: 93913088 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 447381504 unmapped: 93913088 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 heartbeat osd_stat(store_statfs(0x19fbf1000/0x0/0x1bfc00000, data 0x3067a9c/0x3285000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4725300 data_alloc: 234881024 data_used: 19259392
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 447381504 unmapped: 93913088 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 heartbeat osd_stat(store_statfs(0x19fbf1000/0x0/0x1bfc00000, data 0x3067a9c/0x3285000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 447381504 unmapped: 93913088 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 447381504 unmapped: 93913088 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 447381504 unmapped: 93913088 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 447381504 unmapped: 93913088 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4725316 data_alloc: 234881024 data_used: 19259392
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 447381504 unmapped: 93913088 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 heartbeat osd_stat(store_statfs(0x19fbf1000/0x0/0x1bfc00000, data 0x3067a9c/0x3285000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 447381504 unmapped: 93913088 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.349430084s of 13.534937859s, submitted: 54
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 ms_handle_reset con 0x55ba5183c000 session 0x55ba516f8780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 ms_handle_reset con 0x55ba4ec32400 session 0x55ba51357e00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 446455808 unmapped: 94838784 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 ms_handle_reset con 0x55ba64841c00 session 0x55ba4f6554a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 446455808 unmapped: 94838784 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 heartbeat osd_stat(store_statfs(0x19fbf9000/0x0/0x1bfc00000, data 0x3067a9c/0x3285000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 446455808 unmapped: 94838784 heap: 541294592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 411 ms_handle_reset con 0x55ba4f2cf000 session 0x55ba4e5c54a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 411 ms_handle_reset con 0x55ba51840800 session 0x55ba513ed2c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 411 ms_handle_reset con 0x55ba51818000 session 0x55ba50ffd4a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 411 ms_handle_reset con 0x55ba51818000 session 0x55ba50ef6000
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 411 ms_handle_reset con 0x55ba4f2cf000 session 0x55ba4f2b0000
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 411 ms_handle_reset con 0x55ba51840800 session 0x55ba50ffc780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 411 ms_handle_reset con 0x55ba64841c00 session 0x55ba517da960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4954111 data_alloc: 234881024 data_used: 26558464
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 411 ms_handle_reset con 0x55ba58549800 session 0x55ba51051a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 454729728 unmapped: 98254848 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 412 ms_handle_reset con 0x55ba4ec32400 session 0x55ba516f9a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 412 ms_handle_reset con 0x55ba4f2cf000 session 0x55ba50ef6d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 412 ms_handle_reset con 0x55ba51840800 session 0x55ba50da0d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 412 ms_handle_reset con 0x55ba64841c00 session 0x55ba51772780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 412 ms_handle_reset con 0x55ba51832800 session 0x55ba50ffd4a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 412 ms_handle_reset con 0x55ba4ec32400 session 0x55ba516f8780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 454754304 unmapped: 98230272 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 412 heartbeat osd_stat(store_statfs(0x19dc1a000/0x0/0x1bfc00000, data 0x5040424/0x5263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 413 ms_handle_reset con 0x55ba51818000 session 0x55ba4f2b1860
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 454778880 unmapped: 98205696 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 413 ms_handle_reset con 0x55ba50478000 session 0x55ba509bc3c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 413 ms_handle_reset con 0x55ba55640c00 session 0x55ba510f2780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 413 ms_handle_reset con 0x55ba51839000 session 0x55ba4f6b4f00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 454778880 unmapped: 98205696 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 454778880 unmapped: 98205696 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5009678 data_alloc: 234881024 data_used: 26570752
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 454778880 unmapped: 98205696 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 413 heartbeat osd_stat(store_statfs(0x19dc15000/0x0/0x1bfc00000, data 0x50420fb/0x5267000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 450797568 unmapped: 102187008 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 413 ms_handle_reset con 0x55ba51839000 session 0x55ba513edc20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.498824120s of 10.000556946s, submitted: 117
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 444194816 unmapped: 108789760 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 413 ms_handle_reset con 0x55ba50474000 session 0x55ba517f5680
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 444211200 unmapped: 108773376 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 413 heartbeat osd_stat(store_statfs(0x19eb8a000/0x0/0x1bfc00000, data 0x3df611e/0x401c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 444211200 unmapped: 108773376 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4806078 data_alloc: 218103808 data_used: 13213696
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 444211200 unmapped: 108773376 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 444211200 unmapped: 108773376 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 444211200 unmapped: 108773376 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 414 heartbeat osd_stat(store_statfs(0x19ee5e000/0x0/0x1bfc00000, data 0x3df7c5d/0x401f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 414 heartbeat osd_stat(store_statfs(0x19ee5e000/0x0/0x1bfc00000, data 0x3df7c5d/0x401f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 444211200 unmapped: 108773376 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 444211200 unmapped: 108773376 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4806238 data_alloc: 218103808 data_used: 13217792
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 444211200 unmapped: 108773376 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 414 heartbeat osd_stat(store_statfs(0x19ee5e000/0x0/0x1bfc00000, data 0x3df7c5d/0x401f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 444219392 unmapped: 108765184 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 444219392 unmapped: 108765184 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 414 ms_handle_reset con 0x55ba5135d400 session 0x55ba509614a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 414 ms_handle_reset con 0x55ba51838800 session 0x55ba50ece960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 414 ms_handle_reset con 0x55ba4f84ec00 session 0x55ba510f3c20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 414 ms_handle_reset con 0x55ba50474000 session 0x55ba4f856b40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 414 ms_handle_reset con 0x55ba5135d400 session 0x55ba509bd0e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 414 ms_handle_reset con 0x55ba51838800 session 0x55ba50da14a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 414 ms_handle_reset con 0x55ba51839000 session 0x55ba51773680
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.243946075s of 11.344313622s, submitted: 37
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 86147072 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 414 ms_handle_reset con 0x55ba50478800 session 0x55ba4de583c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472203264 unmapped: 80781312 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055626 data_alloc: 251658240 data_used: 37928960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 473227264 unmapped: 79757312 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 414 ms_handle_reset con 0x55ba50478800 session 0x55ba50cbd4a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 414 heartbeat osd_stat(store_statfs(0x19d602000/0x0/0x1bfc00000, data 0x564bc80/0x5874000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 473251840 unmapped: 79732736 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 473260032 unmapped: 79724544 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 415 ms_handle_reset con 0x55ba4f7db400 session 0x55ba517f5860
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457940992 unmapped: 95043584 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457940992 unmapped: 95043584 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4841151 data_alloc: 218103808 data_used: 16994304
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457940992 unmapped: 95043584 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 415 heartbeat osd_stat(store_statfs(0x19f0b3000/0x0/0x1bfc00000, data 0x3ba2859/0x3dc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [0,0,0,1])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457940992 unmapped: 95043584 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457940992 unmapped: 95043584 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457940992 unmapped: 95043584 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457940992 unmapped: 95043584 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4841967 data_alloc: 218103808 data_used: 17055744
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457940992 unmapped: 95043584 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 415 heartbeat osd_stat(store_statfs(0x19f0b3000/0x0/0x1bfc00000, data 0x3ba2859/0x3dc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457940992 unmapped: 95043584 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457940992 unmapped: 95043584 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457940992 unmapped: 95043584 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.270178795s of 15.809450150s, submitted: 140
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457940992 unmapped: 95043584 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4859981 data_alloc: 234881024 data_used: 18599936
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457940992 unmapped: 95043584 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19f0b1000/0x0/0x1bfc00000, data 0x3ba4398/0x3dcc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457940992 unmapped: 95043584 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457940992 unmapped: 95043584 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457940992 unmapped: 95043584 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457940992 unmapped: 95043584 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4859949 data_alloc: 234881024 data_used: 18604032
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457940992 unmapped: 95043584 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19f0b2000/0x0/0x1bfc00000, data 0x3ba4398/0x3dcc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba51841000 session 0x55ba50ef6b40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba5182ec00 session 0x55ba513c1a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba50dbc400 session 0x55ba4f75d680
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba4f7db400 session 0x55ba510f23c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457949184 unmapped: 95035392 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba50478800 session 0x55ba50da0960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba50472c00 session 0x55ba517f4d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba5183a800 session 0x55ba510361e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba543a8c00 session 0x55ba506a85a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba4f7db400 session 0x55ba513ec3c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458268672 unmapped: 94715904 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba50474000 session 0x55ba506eaf00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba4dec5c00 session 0x55ba50ef6000
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458268672 unmapped: 94715904 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba50472c00 session 0x55ba510f23c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19fbf1000/0x0/0x1bfc00000, data 0x30643fa/0x328d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 451756032 unmapped: 101228544 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4722654 data_alloc: 218103808 data_used: 10686464
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 451756032 unmapped: 101228544 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19fbf1000/0x0/0x1bfc00000, data 0x30643d7/0x328c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 451756032 unmapped: 101228544 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba50473c00 session 0x55ba510f2780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 451756032 unmapped: 101228544 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba50473c00 session 0x55ba4f2b1860
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 451756032 unmapped: 101228544 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19fbf1000/0x0/0x1bfc00000, data 0x30643d7/0x328c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba4dec5c00 session 0x55ba50ffd4a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 451756032 unmapped: 101228544 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.178627014s of 15.445807457s, submitted: 85
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba4f7db400 session 0x55ba50da0d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4727232 data_alloc: 218103808 data_used: 10686464
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 451919872 unmapped: 101064704 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 451919872 unmapped: 101064704 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19fbcd000/0x0/0x1bfc00000, data 0x30883e7/0x32b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 451919872 unmapped: 101064704 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 451919872 unmapped: 101064704 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19fbcb000/0x0/0x1bfc00000, data 0x30893e7/0x32b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba506f9c00 session 0x55ba516f9c20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 451919872 unmapped: 101064704 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba581a1000 session 0x55ba4f856960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19fbcb000/0x0/0x1bfc00000, data 0x30893e7/0x32b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4754580 data_alloc: 234881024 data_used: 16433152
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 451919872 unmapped: 101064704 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba4dec5c00 session 0x55ba513c12c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 451919872 unmapped: 101064704 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 451919872 unmapped: 101064704 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19fbcb000/0x0/0x1bfc00000, data 0x30893e7/0x32b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 451919872 unmapped: 101064704 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba53eeb000 session 0x55ba50da14a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 452116480 unmapped: 100868096 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.850829124s of 10.051016808s, submitted: 37
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba4f2cf800 session 0x55ba509612c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba54c45c00 session 0x55ba50cbd0e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4851802 data_alloc: 234881024 data_used: 16433152
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453099520 unmapped: 99885056 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453099520 unmapped: 99885056 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453099520 unmapped: 99885056 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19ef9c000/0x0/0x1bfc00000, data 0x3cb8449/0x3ee2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19ef9c000/0x0/0x1bfc00000, data 0x3cb8449/0x3ee2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 95633408 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 95633408 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4902350 data_alloc: 234881024 data_used: 16961536
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19eb09000/0x0/0x1bfc00000, data 0x4143449/0x436d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 95633408 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457433088 unmapped: 95551488 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457433088 unmapped: 95551488 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19eb09000/0x0/0x1bfc00000, data 0x4143449/0x436d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457433088 unmapped: 95551488 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19eb09000/0x0/0x1bfc00000, data 0x4143449/0x436d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457433088 unmapped: 95551488 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4902366 data_alloc: 234881024 data_used: 16961536
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba5046f400 session 0x55ba503f3e00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457433088 unmapped: 95551488 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba4dec5c00 session 0x55ba509bc000
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.494491577s of 11.821664810s, submitted: 93
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba50472c00 session 0x55ba516f9a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba50474000 session 0x55ba51051a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 456572928 unmapped: 96411648 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba50510000 session 0x55ba509bc5a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba4f2ce800 session 0x55ba4f6b5c20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba54441400 session 0x55ba516f8d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 456581120 unmapped: 96403456 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457179136 unmapped: 95805440 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19f23e000/0x0/0x1bfc00000, data 0x3a173d7/0x3c3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459317248 unmapped: 93667328 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4901786 data_alloc: 234881024 data_used: 26595328
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459317248 unmapped: 93667328 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459317248 unmapped: 93667328 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19f23e000/0x0/0x1bfc00000, data 0x3a173d7/0x3c3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba56f29c00 session 0x55ba516f92c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba4f2ce400 session 0x55ba50a83a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459317248 unmapped: 93667328 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba50478000 session 0x55ba4eab0000
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459350016 unmapped: 93634560 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459350016 unmapped: 93634560 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4757614 data_alloc: 234881024 data_used: 23871488
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459350016 unmapped: 93634560 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459350016 unmapped: 93634560 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459358208 unmapped: 93626368 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0202000/0x0/0x1bfc00000, data 0x2a553a4/0x2c7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459358208 unmapped: 93626368 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459366400 unmapped: 93618176 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.340258598s of 13.721208572s, submitted: 92
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4814452 data_alloc: 234881024 data_used: 23883776
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463273984 unmapped: 89710592 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463814656 unmapped: 89169920 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463814656 unmapped: 89169920 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463814656 unmapped: 89169920 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19f7b9000/0x0/0x1bfc00000, data 0x349f3a4/0x36c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463814656 unmapped: 89169920 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4855156 data_alloc: 234881024 data_used: 24772608
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463814656 unmapped: 89169920 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463814656 unmapped: 89169920 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19f7b9000/0x0/0x1bfc00000, data 0x349f3a4/0x36c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463814656 unmapped: 89169920 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463814656 unmapped: 89169920 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19f777000/0x0/0x1bfc00000, data 0x34e03a4/0x3706000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463814656 unmapped: 89169920 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.609848976s of 10.333658218s, submitted: 105
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba4f2ce800 session 0x55ba506ea5a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba4dec5c00 session 0x55ba510374a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4855276 data_alloc: 234881024 data_used: 24813568
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463814656 unmapped: 89169920 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463814656 unmapped: 89169920 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19f778000/0x0/0x1bfc00000, data 0x34e03a4/0x3706000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463822848 unmapped: 89161728 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463822848 unmapped: 89161728 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463822848 unmapped: 89161728 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4855276 data_alloc: 234881024 data_used: 24813568
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463831040 unmapped: 89153536 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba51837800 session 0x55ba51037a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19f778000/0x0/0x1bfc00000, data 0x34e03a4/0x3706000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463831040 unmapped: 89153536 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463831040 unmapped: 89153536 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463831040 unmapped: 89153536 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463831040 unmapped: 89153536 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.933814049s of 10.036593437s, submitted: 22
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba54c45000 session 0x55ba50ecf4a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4926302 data_alloc: 234881024 data_used: 24813568
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19f778000/0x0/0x1bfc00000, data 0x34e03a4/0x3706000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba4ec2a400 session 0x55ba517f5a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba506fec00 session 0x55ba517f43c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba506fec00 session 0x55ba4de592c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba4dec5c00 session 0x55ba513561e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463847424 unmapped: 89137152 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463847424 unmapped: 89137152 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463847424 unmapped: 89137152 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463847424 unmapped: 89137152 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19eed5000/0x0/0x1bfc00000, data 0x3d833a4/0x3fa9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463847424 unmapped: 89137152 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463847424 unmapped: 89137152 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4926302 data_alloc: 234881024 data_used: 24813568
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463847424 unmapped: 89137152 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463863808 unmapped: 89120768 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19eed5000/0x0/0x1bfc00000, data 0x3d833a4/0x3fa9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463872000 unmapped: 89112576 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba5183d000 session 0x55ba51050b40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba50516000 session 0x55ba510503c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463872000 unmapped: 89112576 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba506ff000 session 0x55ba51050780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba506ff000 session 0x55ba4f6b5e00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463872000 unmapped: 89112576 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4926302 data_alloc: 234881024 data_used: 24813568
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.819707870s of 10.965402603s, submitted: 11
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba4dec5c00 session 0x55ba4f6b41e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464101376 unmapped: 88883200 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19eed5000/0x0/0x1bfc00000, data 0x3d833a4/0x3fa9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467025920 unmapped: 85958656 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19eed5000/0x0/0x1bfc00000, data 0x3d833a4/0x3fa9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467025920 unmapped: 85958656 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19eed5000/0x0/0x1bfc00000, data 0x3d833a4/0x3fa9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467058688 unmapped: 85925888 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467058688 unmapped: 85925888 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4992015 data_alloc: 234881024 data_used: 31223808
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467058688 unmapped: 85925888 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467058688 unmapped: 85925888 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467058688 unmapped: 85925888 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467058688 unmapped: 85925888 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19eed5000/0x0/0x1bfc00000, data 0x3d833a4/0x3fa9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467058688 unmapped: 85925888 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4992015 data_alloc: 234881024 data_used: 31223808
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467058688 unmapped: 85925888 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467058688 unmapped: 85925888 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.058201790s of 12.248942375s, submitted: 7
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469475328 unmapped: 83509248 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19e927000/0x0/0x1bfc00000, data 0x43293a4/0x454f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469598208 unmapped: 83386368 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469786624 unmapped: 83197952 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5048411 data_alloc: 251658240 data_used: 32256000
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469803008 unmapped: 83181568 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19e925000/0x0/0x1bfc00000, data 0x43293a4/0x454f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469860352 unmapped: 83124224 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469860352 unmapped: 83124224 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469860352 unmapped: 83124224 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469860352 unmapped: 83124224 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5050987 data_alloc: 251658240 data_used: 32579584
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19e925000/0x0/0x1bfc00000, data 0x43293a4/0x454f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469860352 unmapped: 83124224 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469860352 unmapped: 83124224 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469860352 unmapped: 83124224 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469860352 unmapped: 83124224 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469860352 unmapped: 83124224 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5050987 data_alloc: 251658240 data_used: 32579584
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19e925000/0x0/0x1bfc00000, data 0x43293a4/0x454f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469860352 unmapped: 83124224 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19e925000/0x0/0x1bfc00000, data 0x43293a4/0x454f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469860352 unmapped: 83124224 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19e925000/0x0/0x1bfc00000, data 0x43293a4/0x454f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19e925000/0x0/0x1bfc00000, data 0x43293a4/0x454f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469860352 unmapped: 83124224 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19e925000/0x0/0x1bfc00000, data 0x43293a4/0x454f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469868544 unmapped: 83116032 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.254507065s of 17.510116577s, submitted: 55
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469147648 unmapped: 83836928 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5046639 data_alloc: 251658240 data_used: 32575488
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469147648 unmapped: 83836928 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469147648 unmapped: 83836928 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469147648 unmapped: 83836928 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469147648 unmapped: 83836928 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19e927000/0x0/0x1bfc00000, data 0x432e3a4/0x4554000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469147648 unmapped: 83836928 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5046639 data_alloc: 251658240 data_used: 32575488
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19e927000/0x0/0x1bfc00000, data 0x432e3a4/0x4554000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba52ee7c00 session 0x55ba50ffdc20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba64840c00 session 0x55ba4f6543c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469196800 unmapped: 83787776 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba51839000 session 0x55ba50ffde00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba50516000 session 0x55ba517f43c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba506fec00 session 0x55ba51357a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470245376 unmapped: 82739200 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba51839000 session 0x55ba509bcd20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462266368 unmapped: 90718208 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba4dec5c00 session 0x55ba4f8545a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462274560 unmapped: 90710016 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19f773000/0x0/0x1bfc00000, data 0x34e53a4/0x370b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [1])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba506ff000 session 0x55ba50ece780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0a68000/0x0/0x1bfc00000, data 0x1e26342/0x204b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453558272 unmapped: 99426304 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4592641 data_alloc: 218103808 data_used: 8294400
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453558272 unmapped: 99426304 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453558272 unmapped: 99426304 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453558272 unmapped: 99426304 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453558272 unmapped: 99426304 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0a68000/0x0/0x1bfc00000, data 0x1e26342/0x204b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453558272 unmapped: 99426304 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4592641 data_alloc: 218103808 data_used: 8294400
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453558272 unmapped: 99426304 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.1 total, 600.0 interval
Cumulative writes: 64K writes, 253K keys, 64K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.04 MB/s
Cumulative WAL: 64K writes, 23K syncs, 2.71 writes per sync, written: 0.25 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 4901 writes, 19K keys, 4901 commit groups, 1.0 writes per commit group, ingest: 20.92 MB, 0.03 MB/s
Interval WAL: 4901 writes, 1995 syncs, 2.46 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55ba4d07f610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55ba4d07f610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453558272 unmapped: 99426304 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453558272 unmapped: 99426304 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0a68000/0x0/0x1bfc00000, data 0x1e26342/0x204b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453558272 unmapped: 99426304 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453566464 unmapped: 99418112 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4592641 data_alloc: 218103808 data_used: 8294400
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453566464 unmapped: 99418112 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0a68000/0x0/0x1bfc00000, data 0x1e26342/0x204b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453566464 unmapped: 99418112 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453566464 unmapped: 99418112 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453566464 unmapped: 99418112 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453566464 unmapped: 99418112 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4592641 data_alloc: 218103808 data_used: 8294400
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0a68000/0x0/0x1bfc00000, data 0x1e26342/0x204b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453566464 unmapped: 99418112 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453566464 unmapped: 99418112 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453566464 unmapped: 99418112 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453566464 unmapped: 99418112 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0a68000/0x0/0x1bfc00000, data 0x1e26342/0x204b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0a68000/0x0/0x1bfc00000, data 0x1e26342/0x204b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453566464 unmapped: 99418112 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4592641 data_alloc: 218103808 data_used: 8294400
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453566464 unmapped: 99418112 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453566464 unmapped: 99418112 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0a68000/0x0/0x1bfc00000, data 0x1e26342/0x204b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453566464 unmapped: 99418112 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453566464 unmapped: 99418112 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0a68000/0x0/0x1bfc00000, data 0x1e26342/0x204b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0a68000/0x0/0x1bfc00000, data 0x1e26342/0x204b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453566464 unmapped: 99418112 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4592641 data_alloc: 218103808 data_used: 8294400
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.491134644s of 36.457984924s, submitted: 81
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453566464 unmapped: 99418112 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba581a1400 session 0x55ba516f9860
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba50479800 session 0x55ba4de583c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba53eeb000 session 0x55ba506a85a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba51833800 session 0x55ba510f3680
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba51841000 session 0x55ba513c10e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453566464 unmapped: 99418112 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453566464 unmapped: 99418112 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0a92000/0x0/0x1bfc00000, data 0x21c63a4/0x23ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453574656 unmapped: 99409920 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba51841000 session 0x55ba513c0d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453582848 unmapped: 99401728 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4625688 data_alloc: 218103808 data_used: 8294400
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba50700000 session 0x55ba4f855c20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba506fe800 session 0x55ba513ecd20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453582848 unmapped: 99401728 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba56d30400 session 0x55ba51728f00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453582848 unmapped: 99401728 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453591040 unmapped: 99393536 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0a92000/0x0/0x1bfc00000, data 0x21c63a4/0x23ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453591040 unmapped: 99393536 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0a92000/0x0/0x1bfc00000, data 0x21c63a4/0x23ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453591040 unmapped: 99393536 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4653077 data_alloc: 218103808 data_used: 12005376
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0a92000/0x0/0x1bfc00000, data 0x21c63a4/0x23ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0a92000/0x0/0x1bfc00000, data 0x21c63a4/0x23ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453591040 unmapped: 99393536 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453591040 unmapped: 99393536 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453591040 unmapped: 99393536 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453591040 unmapped: 99393536 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453591040 unmapped: 99393536 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4653077 data_alloc: 218103808 data_used: 12005376
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453591040 unmapped: 99393536 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0a92000/0x0/0x1bfc00000, data 0x21c63a4/0x23ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 453591040 unmapped: 99393536 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba50dbc000 session 0x55ba51728780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba506fe800 session 0x55ba506421e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba50700000 session 0x55ba513ec000
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba51841000 session 0x55ba51051680
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.603981018s of 16.744344711s, submitted: 30
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba56d30400 session 0x55ba50a44b40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba54c45c00 session 0x55ba4eab0960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba506fe800 session 0x55ba4f856960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 454967296 unmapped: 98017280 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba50700000 session 0x55ba516f9a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba51841000 session 0x55ba506ea5a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 454967296 unmapped: 98017280 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458375168 unmapped: 94609408 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4777680 data_alloc: 218103808 data_used: 12148736
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458383360 unmapped: 94601216 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19fbe4000/0x0/0x1bfc00000, data 0x306a416/0x3292000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458383360 unmapped: 94601216 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458383360 unmapped: 94601216 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba5183c400 session 0x55ba51051c20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458383360 unmapped: 94601216 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba59eb6c00 session 0x55ba517f5a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba50700000 session 0x55ba510510e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458391552 unmapped: 94593024 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4781786 data_alloc: 218103808 data_used: 12460032
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba51841000 session 0x55ba50ffc000
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19fbe4000/0x0/0x1bfc00000, data 0x306a416/0x3292000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457973760 unmapped: 95010816 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19fbc7000/0x0/0x1bfc00000, data 0x308e439/0x32b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457998336 unmapped: 94986240 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457998336 unmapped: 94986240 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 457998336 unmapped: 94986240 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.337440491s of 11.841253281s, submitted: 123
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba5183cc00 session 0x55ba50a83a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba50515c00 session 0x55ba517da780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 456728576 unmapped: 96256000 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4700969 data_alloc: 218103808 data_used: 11616256
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a099c000/0x0/0x1bfc00000, data 0x22b93d7/0x24e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba51838400 session 0x55ba50961860
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 456753152 unmapped: 96231424 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 456753152 unmapped: 96231424 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 456753152 unmapped: 96231424 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 456753152 unmapped: 96231424 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a099c000/0x0/0x1bfc00000, data 0x22b93d7/0x24e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 456753152 unmapped: 96231424 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4673647 data_alloc: 218103808 data_used: 11616256
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 456753152 unmapped: 96231424 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 456753152 unmapped: 96231424 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 456753152 unmapped: 96231424 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a099c000/0x0/0x1bfc00000, data 0x22b93d7/0x24e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459481088 unmapped: 93503488 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a097c000/0x0/0x1bfc00000, data 0x22d93d7/0x2501000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.881479263s of 10.264846802s, submitted: 86
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 460005376 unmapped: 92979200 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4738373 data_alloc: 218103808 data_used: 11882496
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459743232 unmapped: 93241344 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459784192 unmapped: 93200384 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459784192 unmapped: 93200384 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0126000/0x0/0x1bfc00000, data 0x2b2f3d7/0x2d57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459784192 unmapped: 93200384 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459784192 unmapped: 93200384 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4752877 data_alloc: 218103808 data_used: 12423168
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0126000/0x0/0x1bfc00000, data 0x2b2f3d7/0x2d57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459784192 unmapped: 93200384 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0126000/0x0/0x1bfc00000, data 0x2b2f3d7/0x2d57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459784192 unmapped: 93200384 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459784192 unmapped: 93200384 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0106000/0x0/0x1bfc00000, data 0x2b503d7/0x2d78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459784192 unmapped: 93200384 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459784192 unmapped: 93200384 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4751505 data_alloc: 218103808 data_used: 12427264
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459784192 unmapped: 93200384 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0106000/0x0/0x1bfc00000, data 0x2b503d7/0x2d78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459792384 unmapped: 93192192 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0106000/0x0/0x1bfc00000, data 0x2b503d7/0x2d78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459792384 unmapped: 93192192 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459792384 unmapped: 93192192 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459792384 unmapped: 93192192 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4751505 data_alloc: 218103808 data_used: 12427264
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459792384 unmapped: 93192192 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459792384 unmapped: 93192192 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0106000/0x0/0x1bfc00000, data 0x2b503d7/0x2d78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459792384 unmapped: 93192192 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.968158722s of 19.130559921s, submitted: 34
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459792384 unmapped: 93192192 heap: 552984576 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba53eed400 session 0x55ba50ef74a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba54440400 session 0x55ba506d0960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba50474400 session 0x55ba4eab14a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0103000/0x0/0x1bfc00000, data 0x2b533d7/0x2d7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba50514c00 session 0x55ba517daf00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba4ec32000 session 0x55ba513ede00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba50474400 session 0x55ba517f45a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba50514c00 session 0x55ba50ecf680
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461012992 unmapped: 95641600 heap: 556654592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4848525 data_alloc: 218103808 data_used: 12427264
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba53eed400 session 0x55ba4f8561e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba54440400 session 0x55ba510365a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461012992 unmapped: 95641600 heap: 556654592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19f46c000/0x0/0x1bfc00000, data 0x37e93e7/0x3a12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461012992 unmapped: 95641600 heap: 556654592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461012992 unmapped: 95641600 heap: 556654592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 heartbeat osd_stat(store_statfs(0x19f45b000/0x0/0x1bfc00000, data 0x37fa3e7/0x3a23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba5062f800 session 0x55ba510f23c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461012992 unmapped: 95641600 heap: 556654592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 ms_handle_reset con 0x55ba50514c00 session 0x55ba4f740960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461012992 unmapped: 95641600 heap: 556654592 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4848577 data_alloc: 218103808 data_used: 12435456
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 417 ms_handle_reset con 0x55ba54c44800 session 0x55ba4f854d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 417 ms_handle_reset con 0x55ba50474400 session 0x55ba510f2780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 417 heartbeat osd_stat(store_statfs(0x19f45b000/0x0/0x1bfc00000, data 0x37fa3e7/0x3a23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 417 ms_handle_reset con 0x55ba58548400 session 0x55ba50ffcb40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 417 ms_handle_reset con 0x55ba506fec00 session 0x55ba510372c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 417 ms_handle_reset con 0x55ba50474400 session 0x55ba50ffcf00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 417 ms_handle_reset con 0x55ba54c44800 session 0x55ba510514a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 417 ms_handle_reset con 0x55ba58548400 session 0x55ba51728960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467140608 unmapped: 92946432 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 418 ms_handle_reset con 0x55ba50514c00 session 0x55ba508f6000
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467148800 unmapped: 92938240 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 419 ms_handle_reset con 0x55ba51833800 session 0x55ba516f92c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468180992 unmapped: 91906048 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 419 ms_handle_reset con 0x55ba50474400 session 0x55ba50cbc1e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 419 ms_handle_reset con 0x55ba50514c00 session 0x55ba513c1a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 419 ms_handle_reset con 0x55ba51833800 session 0x55ba51729a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471064576 unmapped: 89022464 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471097344 unmapped: 88989696 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5114396 data_alloc: 234881024 data_used: 30732288
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 419 heartbeat osd_stat(store_statfs(0x19e665000/0x0/0x1bfc00000, data 0x45e89d4/0x4817000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471097344 unmapped: 88989696 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471097344 unmapped: 88989696 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.004964828s of 13.604809761s, submitted: 86
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464576512 unmapped: 95510528 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464576512 unmapped: 95510528 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba4f2cfc00 session 0x55ba4f2b14a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba52ee7000 session 0x55ba4e560960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba4f2cfc00 session 0x55ba506d14a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464576512 unmapped: 95510528 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5096810 data_alloc: 234881024 data_used: 30732288
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba50474400 session 0x55ba51356960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba51833800 session 0x55ba513570e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba50514c00 session 0x55ba509bc000
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba50476400 session 0x55ba4e5c4d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba4f2cfc00 session 0x55ba50ef72c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba50474400 session 0x55ba513ed4a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464576512 unmapped: 95510528 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19e253000/0x0/0x1bfc00000, data 0x45ea513/0x481a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464576512 unmapped: 95510528 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19e253000/0x0/0x1bfc00000, data 0x45ea513/0x481a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464576512 unmapped: 95510528 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466681856 unmapped: 93405184 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467853312 unmapped: 92233728 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5179178 data_alloc: 234881024 data_used: 31653888
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba5183cc00 session 0x55ba4f2b14a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19dae0000/0x0/0x1bfc00000, data 0x4d55513/0x4f85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba50477400 session 0x55ba51729a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467861504 unmapped: 92225536 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba50476c00 session 0x55ba513c1a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba4f2cfc00 session 0x55ba50cbc1e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19dae0000/0x0/0x1bfc00000, data 0x4d55513/0x4f85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467189760 unmapped: 92897280 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467189760 unmapped: 92897280 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 91103232 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19dae7000/0x0/0x1bfc00000, data 0x4d55546/0x4f87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 91103232 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5206733 data_alloc: 251658240 data_used: 35938304
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.922373772s of 13.203535080s, submitted: 82
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 91103232 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 91103232 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 91103232 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 91103232 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 91103232 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5207869 data_alloc: 251658240 data_used: 36007936
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19dae7000/0x0/0x1bfc00000, data 0x4d55546/0x4f87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 91103232 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19dae7000/0x0/0x1bfc00000, data 0x4d55546/0x4f87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 91103232 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 91103232 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 91103232 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19dae7000/0x0/0x1bfc00000, data 0x4d55546/0x4f87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468992000 unmapped: 91095040 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5208669 data_alloc: 251658240 data_used: 36028416
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.064604759s of 10.069568634s, submitted: 1
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469016576 unmapped: 91070464 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469082112 unmapped: 91004928 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19daa1000/0x0/0x1bfc00000, data 0x4d9b546/0x4fcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469082112 unmapped: 91004928 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba54441c00 session 0x55ba510f34a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba58549400 session 0x55ba513ec000
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba5135dc00 session 0x55ba4f856b40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba5062ec00 session 0x55ba4eab0960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba4f2cfc00 session 0x55ba4f856960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 90685440 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba5135dc00 session 0x55ba51772780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba54441c00 session 0x55ba4f755e00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba58549400 session 0x55ba4f2b0000
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba56d30400 session 0x55ba510501e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 90685440 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5253967 data_alloc: 251658240 data_used: 37294080
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 90685440 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470417408 unmapped: 89669632 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19d76d000/0x0/0x1bfc00000, data 0x50ce556/0x5301000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470417408 unmapped: 89669632 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19d76d000/0x0/0x1bfc00000, data 0x50ce556/0x5301000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470417408 unmapped: 89669632 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba50510000 session 0x55ba51728d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470417408 unmapped: 89669632 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5263943 data_alloc: 251658240 data_used: 37695488
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470425600 unmapped: 89661440 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470728704 unmapped: 89358336 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.657601357s of 11.733363152s, submitted: 21
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470728704 unmapped: 89358336 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19d76c000/0x0/0x1bfc00000, data 0x50cf556/0x5302000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470769664 unmapped: 89317376 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470810624 unmapped: 89276416 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5280571 data_alloc: 251658240 data_used: 38965248
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470884352 unmapped: 89202688 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470884352 unmapped: 89202688 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19e7ac000/0x0/0x1bfc00000, data 0x50cf556/0x5302000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470884352 unmapped: 89202688 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470884352 unmapped: 89202688 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470900736 unmapped: 89186304 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5280571 data_alloc: 251658240 data_used: 38965248
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 475119616 unmapped: 84967424 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 475119616 unmapped: 84967424 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.770074844s of 10.655957222s, submitted: 265
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476471296 unmapped: 83615744 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19df2f000/0x0/0x1bfc00000, data 0x5944556/0x5b77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478388224 unmapped: 81698816 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478568448 unmapped: 81518592 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5392649 data_alloc: 251658240 data_used: 40611840
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478568448 unmapped: 81518592 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478568448 unmapped: 81518592 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478715904 unmapped: 81371136 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19dd7b000/0x0/0x1bfc00000, data 0x5af8556/0x5d2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba5046ec00 session 0x55ba50872d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478732288 unmapped: 81354752 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 421 ms_handle_reset con 0x55ba50477c00 session 0x55ba4eab1680
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 421 ms_handle_reset con 0x55ba5182f800 session 0x55ba517f54a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 421 ms_handle_reset con 0x55ba51831000 session 0x55ba506eb860
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 421 ms_handle_reset con 0x55ba5046ec00 session 0x55ba4f8565a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5669524 data_alloc: 251658240 data_used: 43790336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477396992 unmapped: 95297536 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477413376 unmapped: 95281152 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 422 ms_handle_reset con 0x55ba50477c00 session 0x55ba50ffda40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477413376 unmapped: 95281152 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 423 ms_handle_reset con 0x55ba50510000 session 0x55ba517dba40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 423 ms_handle_reset con 0x55ba4f2cf800 session 0x55ba50ecf0e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 423 ms_handle_reset con 0x55ba53eed400 session 0x55ba50872960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.476616859s of 10.182755470s, submitted: 181
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477437952 unmapped: 95256576 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 423 ms_handle_reset con 0x55ba4f2cf800 session 0x55ba4f2b1860
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 423 heartbeat osd_stat(store_statfs(0x19be2e000/0x0/0x1bfc00000, data 0x7a48ac1/0x7c7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 423 ms_handle_reset con 0x55ba5046ec00 session 0x55ba517f4000
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 423 ms_handle_reset con 0x55ba50477c00 session 0x55ba50872960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 423 ms_handle_reset con 0x55ba50510000 session 0x55ba50ffda40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477446144 unmapped: 95248384 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5558206 data_alloc: 251658240 data_used: 40476672
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477446144 unmapped: 95248384 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 424 ms_handle_reset con 0x55ba51833800 session 0x55ba517f54a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477454336 unmapped: 95240192 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 424 ms_handle_reset con 0x55ba50474400 session 0x55ba510514a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 424 ms_handle_reset con 0x55ba50477400 session 0x55ba4f6b4f00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477454336 unmapped: 95240192 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 424 ms_handle_reset con 0x55ba51818000 session 0x55ba51036b40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 424 ms_handle_reset con 0x55ba6483f400 session 0x55ba513574a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 425 ms_handle_reset con 0x55ba4f2cf800 session 0x55ba50960780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 425 ms_handle_reset con 0x55ba5046ec00 session 0x55ba513570e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477487104 unmapped: 95207424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 425 heartbeat osd_stat(store_statfs(0x19fbd9000/0x0/0x1bfc00000, data 0x395d2a2/0x3b92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477487104 unmapped: 95207424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5012763 data_alloc: 234881024 data_used: 30158848
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477487104 unmapped: 95207424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477487104 unmapped: 95207424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 426 handle_osd_map epochs [426,426], i have 426, src has [1,426]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477503488 unmapped: 95191040 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.857404709s of 10.224763870s, submitted: 142
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477511680 unmapped: 95182848 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 427 ms_handle_reset con 0x55ba57dc7000 session 0x55ba4f854960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463822848 unmapped: 108871680 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a0cfc000/0x0/0x1bfc00000, data 0x2b7ba1c/0x2db1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4841558 data_alloc: 234881024 data_used: 15929344
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 427 ms_handle_reset con 0x55ba4ec32400 session 0x55ba4f856d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 427 ms_handle_reset con 0x55ba5183c400 session 0x55ba4e5603c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463822848 unmapped: 108871680 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461807616 unmapped: 110886912 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461815808 unmapped: 110878720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 427 ms_handle_reset con 0x55ba53ffd800 session 0x55ba513c1860
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a1a1d000/0x0/0x1bfc00000, data 0x1e5d9aa/0x2091000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461815808 unmapped: 110878720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a1a1d000/0x0/0x1bfc00000, data 0x1e39987/0x206c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461815808 unmapped: 110878720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4687336 data_alloc: 218103808 data_used: 7942144
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461815808 unmapped: 110878720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a1a1d000/0x0/0x1bfc00000, data 0x1e39987/0x206c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461815808 unmapped: 110878720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461815808 unmapped: 110878720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.512023926s of 10.001841545s, submitted: 76
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461815808 unmapped: 110878720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461815808 unmapped: 110878720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4691334 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a1a3e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461815808 unmapped: 110878720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461815808 unmapped: 110878720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461824000 unmapped: 110870528 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461824000 unmapped: 110870528 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4691334 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a1a3e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a1a3e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4691334 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a1a3e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a1a3e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4691334 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba56d32c00 session 0x55ba50872780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50515c00 session 0x55ba51772960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51833c00 session 0x55ba51773860
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50d7e800 session 0x55ba510f30e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.948165894s of 19.958507538s, submitted: 13
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a1a3d000/0x0/0x1bfc00000, data 0x1e3b4d6/0x2070000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50703400 session 0x55ba517db4a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50703400 session 0x55ba50961c20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50474400 session 0x55ba513ed0e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba57dc7c00 session 0x55ba516f81e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba53ffc000 session 0x55ba517dbe00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462979072 unmapped: 109715456 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a1a3d000/0x0/0x1bfc00000, data 0x1e3b4d6/0x2070000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4743313 data_alloc: 218103808 data_used: 7954432
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462979072 unmapped: 109715456 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a0273000/0x0/0x1bfc00000, data 0x2465538/0x269b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462987264 unmapped: 109707264 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462987264 unmapped: 109707264 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50475000 session 0x55ba50961a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462987264 unmapped: 109707264 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50474400 session 0x55ba4f6b4b40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a0273000/0x0/0x1bfc00000, data 0x2465538/0x269b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba56d31800 session 0x55ba516f83c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462987264 unmapped: 109707264 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba58548000 session 0x55ba517f41e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4746371 data_alloc: 218103808 data_used: 7954432
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a024e000/0x0/0x1bfc00000, data 0x2489548/0x26c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463142912 unmapped: 109551616 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463437824 unmapped: 109256704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463437824 unmapped: 109256704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a024e000/0x0/0x1bfc00000, data 0x2489548/0x26c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463437824 unmapped: 109256704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463437824 unmapped: 109256704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784611 data_alloc: 218103808 data_used: 13299712
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463437824 unmapped: 109256704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463437824 unmapped: 109256704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a024e000/0x0/0x1bfc00000, data 0x2489548/0x26c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463437824 unmapped: 109256704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463437824 unmapped: 109256704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463437824 unmapped: 109256704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a024e000/0x0/0x1bfc00000, data 0x2489548/0x26c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784611 data_alloc: 218103808 data_used: 13299712
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463437824 unmapped: 109256704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463446016 unmapped: 109248512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.262138367s of 19.385234833s, submitted: 32
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463675392 unmapped: 109019136 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463683584 unmapped: 109010944 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4816531 data_alloc: 218103808 data_used: 13565952
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4816531 data_alloc: 218103808 data_used: 13565952
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4816531 data_alloc: 218103808 data_used: 13565952
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4816531 data_alloc: 218103808 data_used: 13565952
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4816531 data_alloc: 218103808 data_used: 13565952
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463806464 unmapped: 108888064 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463806464 unmapped: 108888064 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463806464 unmapped: 108888064 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463806464 unmapped: 108888064 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463806464 unmapped: 108888064 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4816531 data_alloc: 218103808 data_used: 13565952
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463806464 unmapped: 108888064 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463806464 unmapped: 108888064 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463814656 unmapped: 108879872 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463831040 unmapped: 108863488 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463831040 unmapped: 108863488 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4816531 data_alloc: 218103808 data_used: 13565952
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463831040 unmapped: 108863488 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463831040 unmapped: 108863488 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463831040 unmapped: 108863488 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463831040 unmapped: 108863488 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 36.747936249s of 36.910617828s, submitted: 60
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51830000 session 0x55ba517f4960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50703c00 session 0x55ba513ecd20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50474400 session 0x55ba513ede00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51830000 session 0x55ba506d0960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba56d31800 session 0x55ba51036000
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463740928 unmapped: 108953600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4882866 data_alloc: 218103808 data_used: 13570048
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7b7000/0x0/0x1bfc00000, data 0x2f1f5aa/0x3157000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463740928 unmapped: 108953600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463749120 unmapped: 108945408 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463749120 unmapped: 108945408 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7b7000/0x0/0x1bfc00000, data 0x2f1f5aa/0x3157000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463749120 unmapped: 108945408 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7b7000/0x0/0x1bfc00000, data 0x2f1f5aa/0x3157000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463749120 unmapped: 108945408 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4882866 data_alloc: 218103808 data_used: 13570048
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463749120 unmapped: 108945408 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463749120 unmapped: 108945408 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463749120 unmapped: 108945408 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5135cc00 session 0x55ba4f211860
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7b7000/0x0/0x1bfc00000, data 0x2f1f5aa/0x3157000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463749120 unmapped: 108945408 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463757312 unmapped: 108937216 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7b7000/0x0/0x1bfc00000, data 0x2f1f5aa/0x3157000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4900626 data_alloc: 234881024 data_used: 16113664
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.276296616s of 11.377257347s, submitted: 36
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464805888 unmapped: 107888640 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464805888 unmapped: 107888640 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464805888 unmapped: 107888640 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464805888 unmapped: 107888640 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7b7000/0x0/0x1bfc00000, data 0x2f1f5aa/0x3157000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464805888 unmapped: 107888640 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4935618 data_alloc: 234881024 data_used: 20955136
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464805888 unmapped: 107888640 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7b7000/0x0/0x1bfc00000, data 0x2f1f5aa/0x3157000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464805888 unmapped: 107888640 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464805888 unmapped: 107888640 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464805888 unmapped: 107888640 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7b7000/0x0/0x1bfc00000, data 0x2f1f5aa/0x3157000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464805888 unmapped: 107888640 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4935618 data_alloc: 234881024 data_used: 20955136
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464805888 unmapped: 107888640 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.254467010s of 11.266688347s, submitted: 13
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465641472 unmapped: 107053056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466649088 unmapped: 106045440 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467124224 unmapped: 105570304 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ed27000/0x0/0x1bfc00000, data 0x39a15aa/0x3bd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467124224 unmapped: 105570304 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5019204 data_alloc: 234881024 data_used: 21573632
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #56. Immutable memtables: 12.
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468639744 unmapped: 104054784 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468639744 unmapped: 104054784 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468639744 unmapped: 104054784 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db7b000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468639744 unmapped: 104054784 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468639744 unmapped: 104054784 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5021254 data_alloc: 234881024 data_used: 21962752
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468639744 unmapped: 104054784 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468639744 unmapped: 104054784 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.256912231s of 12.541369438s, submitted: 93
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5021078 data_alloc: 234881024 data_used: 21962752
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5021078 data_alloc: 234881024 data_used: 21962752
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5021078 data_alloc: 234881024 data_used: 21962752
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468656128 unmapped: 104038400 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5021078 data_alloc: 234881024 data_used: 21962752
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468656128 unmapped: 104038400 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468656128 unmapped: 104038400 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468656128 unmapped: 104038400 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468656128 unmapped: 104038400 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468656128 unmapped: 104038400 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5021078 data_alloc: 234881024 data_used: 21962752
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468656128 unmapped: 104038400 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468656128 unmapped: 104038400 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468672512 unmapped: 104022016 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468672512 unmapped: 104022016 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468680704 unmapped: 104013824 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5021078 data_alloc: 234881024 data_used: 21962752
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468680704 unmapped: 104013824 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468680704 unmapped: 104013824 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468680704 unmapped: 104013824 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468680704 unmapped: 104013824 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50514400 session 0x55ba517f52c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba54441000 session 0x55ba51729e00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba59eb7000 session 0x55ba510f32c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468680704 unmapped: 104013824 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5021078 data_alloc: 234881024 data_used: 21962752
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468688896 unmapped: 104005632 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468688896 unmapped: 104005632 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468697088 unmapped: 103997440 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468697088 unmapped: 103997440 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468697088 unmapped: 103997440 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 36.090114594s of 36.097347260s, submitted: 2
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5021254 data_alloc: 234881024 data_used: 21962752
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468697088 unmapped: 103997440 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468697088 unmapped: 103997440 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468697088 unmapped: 103997440 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468705280 unmapped: 103989248 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468705280 unmapped: 103989248 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5023718 data_alloc: 234881024 data_used: 21950464
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468705280 unmapped: 103989248 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468705280 unmapped: 103989248 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba53eedc00 session 0x55ba506a85a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba54c45800 session 0x55ba513ecd20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461553664 unmapped: 111140864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50516000 session 0x55ba50ef7e00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461553664 unmapped: 111140864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e7f4000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461553664 unmapped: 111140864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4820992 data_alloc: 218103808 data_used: 13570048
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e7f4000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461553664 unmapped: 111140864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461553664 unmapped: 111140864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e7f4000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4820992 data_alloc: 218103808 data_used: 13570048
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.146137238s of 18.246082306s, submitted: 44
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba506f8000 session 0x55ba509612c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5046cc00 session 0x55ba50ef72c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e7f4000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [1])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50516000 session 0x55ba4f854960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4710155 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 111124480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 111124480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4710155 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 111124480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 111124480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461578240 unmapped: 111116288 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461578240 unmapped: 111116288 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461578240 unmapped: 111116288 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4710155 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461578240 unmapped: 111116288 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461578240 unmapped: 111116288 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461586432 unmapped: 111108096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461586432 unmapped: 111108096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461586432 unmapped: 111108096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4710155 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461602816 unmapped: 111091712 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461602816 unmapped: 111091712 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461602816 unmapped: 111091712 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461602816 unmapped: 111091712 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461602816 unmapped: 111091712 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4710155 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461602816 unmapped: 111091712 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461602816 unmapped: 111091712 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461602816 unmapped: 111091712 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461619200 unmapped: 111075328 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461619200 unmapped: 111075328 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4710155 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461619200 unmapped: 111075328 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 111067136 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 111067136 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 111067136 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 111067136 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4710155 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 111067136 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461635584 unmapped: 111058944 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461643776 unmapped: 111050752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461643776 unmapped: 111050752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461643776 unmapped: 111050752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4710155 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461643776 unmapped: 111050752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461643776 unmapped: 111050752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461643776 unmapped: 111050752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 111034368 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 111034368 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4710155 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 111034368 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 111034368 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 111034368 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 111034368 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 111034368 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4710155 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 111034368 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461668352 unmapped: 111026176 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461668352 unmapped: 111026176 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461668352 unmapped: 111026176 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461668352 unmapped: 111026176 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4710155 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461668352 unmapped: 111026176 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461668352 unmapped: 111026176 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461668352 unmapped: 111026176 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461676544 unmapped: 111017984 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461692928 unmapped: 111001600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4710155 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5046ec00 session 0x55ba506434a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba57dc6c00 session 0x55ba4f741c20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba54440000 session 0x55ba51729a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba64840000 session 0x55ba506a8f00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 57.831707001s of 57.919052124s, submitted: 26
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461692928 unmapped: 111001600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5046ec00 session 0x55ba517dba40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50516000 session 0x55ba4f856b40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba54440000 session 0x55ba4f654d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba57dc6c00 session 0x55ba50a45680
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50510c00 session 0x55ba506ea5a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461692928 unmapped: 111001600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461692928 unmapped: 111001600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f010000/0x0/0x1bfc00000, data 0x25294d6/0x275e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461692928 unmapped: 111001600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f010000/0x0/0x1bfc00000, data 0x25294d6/0x275e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461692928 unmapped: 111001600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4764199 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461692928 unmapped: 111001600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461692928 unmapped: 111001600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f010000/0x0/0x1bfc00000, data 0x25294d6/0x275e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f010000/0x0/0x1bfc00000, data 0x25294d6/0x275e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50479000 session 0x55ba517f4f00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461701120 unmapped: 110993408 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5046c800 session 0x55ba51037860
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461701120 unmapped: 110993408 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461701120 unmapped: 110993408 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50d7e400 session 0x55ba506d14a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4764199 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba53eeb000 session 0x55ba51356960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f010000/0x0/0x1bfc00000, data 0x25294d6/0x275e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461709312 unmapped: 110985216 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461709312 unmapped: 110985216 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461717504 unmapped: 110977024 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461717504 unmapped: 110977024 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461717504 unmapped: 110977024 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4809397 data_alloc: 234881024 data_used: 14049280
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f00f000/0x0/0x1bfc00000, data 0x25294e6/0x275f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461717504 unmapped: 110977024 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50513000 session 0x55ba513572c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba54c44800 session 0x55ba517732c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f00f000/0x0/0x1bfc00000, data 0x25294e6/0x275f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461717504 unmapped: 110977024 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.345306396s of 16.414997101s, submitted: 11
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f00f000/0x0/0x1bfc00000, data 0x25294e6/0x275f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50513000 session 0x55ba51357860
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 113836032 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 113836032 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6ff000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 113836032 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4714942 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 113836032 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 113836032 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 113836032 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 113836032 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 113836032 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4714942 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6ff000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 113836032 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 113836032 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 113836032 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 113836032 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51833c00 session 0x55ba513561e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5183a400 session 0x55ba509614a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba53ffc000 session 0x55ba506a81e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba53ffc000 session 0x55ba51773c20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.194999695s of 12.239532471s, submitted: 13
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4e095400 session 0x55ba510f34a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50513000 session 0x55ba513ed0e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51833c00 session 0x55ba4f6554a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5183a400 session 0x55ba4e560b40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5183a400 session 0x55ba50873a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 113827840 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f3db000/0x0/0x1bfc00000, data 0x215f4c6/0x2393000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4741531 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50473400 session 0x55ba510f3e00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 113827840 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f3db000/0x0/0x1bfc00000, data 0x215f4c6/0x2393000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 113827840 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 113827840 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 113827840 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 113827840 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50477400 session 0x55ba51357860
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4741531 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f3db000/0x0/0x1bfc00000, data 0x215f4c6/0x2393000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5135c400 session 0x55ba517732c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 113827840 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5a0cac00 session 0x55ba51356960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 113827840 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f3db000/0x0/0x1bfc00000, data 0x215f4c6/0x2393000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba581a1400 session 0x55ba506d14a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 113827840 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 113827840 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459063296 unmapped: 113631232 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4757023 data_alloc: 218103808 data_used: 10113024
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459063296 unmapped: 113631232 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459063296 unmapped: 113631232 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f3db000/0x0/0x1bfc00000, data 0x215f4c6/0x2393000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.513194084s of 13.582759857s, submitted: 17
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5183c400 session 0x55ba51037860
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459071488 unmapped: 113623040 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba6483f400 session 0x55ba517f5680
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51838c00 session 0x55ba510f32c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459071488 unmapped: 113623040 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459071488 unmapped: 113623040 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4717847 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6ff000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459071488 unmapped: 113623040 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459071488 unmapped: 113623040 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6ff000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459087872 unmapped: 113606656 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459087872 unmapped: 113606656 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459087872 unmapped: 113606656 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4717847 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459087872 unmapped: 113606656 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459087872 unmapped: 113606656 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459087872 unmapped: 113606656 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6ff000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459087872 unmapped: 113606656 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459087872 unmapped: 113606656 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4717847 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459096064 unmapped: 113598464 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459096064 unmapped: 113598464 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459096064 unmapped: 113598464 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459096064 unmapped: 113598464 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6ff000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459096064 unmapped: 113598464 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4717847 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459096064 unmapped: 113598464 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459096064 unmapped: 113598464 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459096064 unmapped: 113598464 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6ff000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4717847 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6ff000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6ff000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4717847 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6ff000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4717847 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6ff000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.745201111s of 35.829093933s, submitted: 25
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f2cf000 session 0x55ba4de592c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5182fc00 session 0x55ba509bc960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5182fc00 session 0x55ba509bd860
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f2cf000 session 0x55ba513572c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51838c00 session 0x55ba4e5c4d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458186752 unmapped: 114507776 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6fe000/0x0/0x1bfc00000, data 0x1e3b4ef/0x2070000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458194944 unmapped: 114499584 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4767417 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458194944 unmapped: 114499584 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f119000/0x0/0x1bfc00000, data 0x2420528/0x2655000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458194944 unmapped: 114499584 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458194944 unmapped: 114499584 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458194944 unmapped: 114499584 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f119000/0x0/0x1bfc00000, data 0x2420528/0x2655000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50514000 session 0x55ba509bde00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458350592 unmapped: 114343936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4771654 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458350592 unmapped: 114343936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 113795072 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 113795072 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 113795072 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f0f4000/0x0/0x1bfc00000, data 0x244454b/0x267a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 113795072 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4815334 data_alloc: 234881024 data_used: 14049280
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 113795072 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f0f4000/0x0/0x1bfc00000, data 0x244454b/0x267a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 113795072 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 113795072 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f0f4000/0x0/0x1bfc00000, data 0x244454b/0x267a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 113795072 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 113795072 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4815334 data_alloc: 234881024 data_used: 14049280
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458907648 unmapped: 113786880 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458907648 unmapped: 113786880 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f0f4000/0x0/0x1bfc00000, data 0x244454b/0x267a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458907648 unmapped: 113786880 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.488601685s of 19.623565674s, submitted: 43
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461717504 unmapped: 110977024 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461537280 unmapped: 111157248 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4902710 data_alloc: 234881024 data_used: 15384576
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461537280 unmapped: 111157248 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461537280 unmapped: 111157248 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e70e000/0x0/0x1bfc00000, data 0x2e2a54b/0x3060000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461537280 unmapped: 111157248 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461537280 unmapped: 111157248 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461537280 unmapped: 111157248 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4902870 data_alloc: 234881024 data_used: 15388672
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e70e000/0x0/0x1bfc00000, data 0x2e2a54b/0x3060000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461545472 unmapped: 111149056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461545472 unmapped: 111149056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461545472 unmapped: 111149056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461545472 unmapped: 111149056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461545472 unmapped: 111149056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4903030 data_alloc: 234881024 data_used: 15392768
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e70e000/0x0/0x1bfc00000, data 0x2e2a54b/0x3060000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461545472 unmapped: 111149056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461545472 unmapped: 111149056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461545472 unmapped: 111149056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461545472 unmapped: 111149056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e70e000/0x0/0x1bfc00000, data 0x2e2a54b/0x3060000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461545472 unmapped: 111149056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4903030 data_alloc: 234881024 data_used: 15392768
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.422891617s of 17.684936523s, submitted: 89
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50470800 session 0x55ba4f6550e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5183b400 session 0x55ba51036000
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461545472 unmapped: 111149056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e70e000/0x0/0x1bfc00000, data 0x2e2a54b/0x3060000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50470800 session 0x55ba506d01e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f34e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4729858 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f34e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4729858 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f34e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f34e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f34e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4729858 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f34e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f34e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4729858 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 111124480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 111124480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 111124480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f34e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 111124480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 111124480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4729858 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f34e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 111124480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f34e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 111124480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 111124480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461586432 unmapped: 111108096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461586432 unmapped: 111108096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4729858 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461586432 unmapped: 111108096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461586432 unmapped: 111108096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f34e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461586432 unmapped: 111108096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461586432 unmapped: 111108096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461586432 unmapped: 111108096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4729858 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461586432 unmapped: 111108096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f34e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461602816 unmapped: 111091712 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461602816 unmapped: 111091712 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461602816 unmapped: 111091712 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.598426819s of 38.765792847s, submitted: 52
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462192640 unmapped: 110501888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4801077 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba58549c00 session 0x55ba510503c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51841c00 session 0x55ba50cbc960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba52ee7400 session 0x55ba50872960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50470800 session 0x55ba508734a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5183b400 session 0x55ba513ec000
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462192640 unmapped: 110501888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462192640 unmapped: 110501888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19eacf000/0x0/0x1bfc00000, data 0x265a528/0x288f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462192640 unmapped: 110501888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19eacf000/0x0/0x1bfc00000, data 0x265a528/0x288f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462192640 unmapped: 110501888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462192640 unmapped: 110501888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4801093 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462192640 unmapped: 110501888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50517400 session 0x55ba50960f00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba53ffc000 session 0x55ba503f3680
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462192640 unmapped: 110501888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba6483e000 session 0x55ba4f6541e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19eacf000/0x0/0x1bfc00000, data 0x265a528/0x288f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50470800 session 0x55ba510f21e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462077952 unmapped: 110616576 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462077952 unmapped: 110616576 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462667776 unmapped: 110026752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4860738 data_alloc: 234881024 data_used: 16179200
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19eacf000/0x0/0x1bfc00000, data 0x265a528/0x288f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462667776 unmapped: 110026752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462667776 unmapped: 110026752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462667776 unmapped: 110026752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462667776 unmapped: 110026752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462667776 unmapped: 110026752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4860738 data_alloc: 234881024 data_used: 16179200
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19eacf000/0x0/0x1bfc00000, data 0x265a528/0x288f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462667776 unmapped: 110026752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462667776 unmapped: 110026752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462667776 unmapped: 110026752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462667776 unmapped: 110026752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5183ac00 session 0x55ba50ecf0e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba56d33400 session 0x55ba513ec780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19eacf000/0x0/0x1bfc00000, data 0x265a528/0x288f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5046ec00 session 0x55ba50da1a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50515800 session 0x55ba50da0d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.848850250s of 20.036869049s, submitted: 42
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5046ec00 session 0x55ba50da1e00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50470800 session 0x55ba50ece3c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5183ac00 session 0x55ba50a45c20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba56d33400 session 0x55ba4e5c41e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba56d32400 session 0x55ba4f7552c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462823424 unmapped: 109871104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19eacf000/0x0/0x1bfc00000, data 0x265a528/0x288f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4909052 data_alloc: 234881024 data_used: 16179200
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e58e000/0x0/0x1bfc00000, data 0x2b9a538/0x2dd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467902464 unmapped: 104792064 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468336640 unmapped: 104357888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468336640 unmapped: 104357888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468336640 unmapped: 104357888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5183d800 session 0x55ba50872000
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468336640 unmapped: 104357888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4986610 data_alloc: 234881024 data_used: 18104320
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19dc88000/0x0/0x1bfc00000, data 0x34a0538/0x36d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468336640 unmapped: 104357888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468336640 unmapped: 104357888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468779008 unmapped: 103915520 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468779008 unmapped: 103915520 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468779008 unmapped: 103915520 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5024998 data_alloc: 234881024 data_used: 23425024
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19dc67000/0x0/0x1bfc00000, data 0x34c1538/0x36f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468779008 unmapped: 103915520 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468779008 unmapped: 103915520 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468779008 unmapped: 103915520 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468779008 unmapped: 103915520 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468779008 unmapped: 103915520 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5024998 data_alloc: 234881024 data_used: 23425024
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19dc67000/0x0/0x1bfc00000, data 0x34c1538/0x36f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468779008 unmapped: 103915520 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468779008 unmapped: 103915520 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468779008 unmapped: 103915520 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.104579926s of 19.442375183s, submitted: 130
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472252416 unmapped: 100442112 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472260608 unmapped: 100433920 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5097550 data_alloc: 234881024 data_used: 24334336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471588864 unmapped: 101105664 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d3bd000/0x0/0x1bfc00000, data 0x3d63538/0x3f99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471588864 unmapped: 101105664 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471588864 unmapped: 101105664 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d3bf000/0x0/0x1bfc00000, data 0x3d69538/0x3f9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471588864 unmapped: 101105664 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471588864 unmapped: 101105664 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5093534 data_alloc: 234881024 data_used: 24481792
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471588864 unmapped: 101105664 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471588864 unmapped: 101105664 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d3bf000/0x0/0x1bfc00000, data 0x3d69538/0x3f9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d3bf000/0x0/0x1bfc00000, data 0x3d69538/0x3f9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471588864 unmapped: 101105664 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471588864 unmapped: 101105664 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471588864 unmapped: 101105664 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5093550 data_alloc: 234881024 data_used: 24481792
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471588864 unmapped: 101105664 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471597056 unmapped: 101097472 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.481681824s of 13.751366615s, submitted: 73
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d3bf000/0x0/0x1bfc00000, data 0x3d69538/0x3f9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470589440 unmapped: 102105088 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51833400 session 0x55ba517dba40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba56d32c00 session 0x55ba513c1860
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x3d6f538/0x3fa5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470597632 unmapped: 102096896 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba55640800 session 0x55ba50ffd680
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 68K writes, 266K keys, 68K commit groups, 1.0 writes per commit group, ingest: 0.26 GB, 0.04 MB/s#012Cumulative WAL: 68K writes, 25K syncs, 2.70 writes per sync, written: 0.26 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3263 writes, 13K keys, 3263 commit groups, 1.0 writes per commit group, ingest: 13.75 MB, 0.02 MB/s#012Interval WAL: 3263 writes, 1301 syncs, 2.51 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470597632 unmapped: 102096896 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4947026 data_alloc: 234881024 data_used: 18104320
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470597632 unmapped: 102096896 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470597632 unmapped: 102096896 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e19f000/0x0/0x1bfc00000, data 0x2f8a528/0x31bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470605824 unmapped: 102088704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470605824 unmapped: 102088704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470614016 unmapped: 102080512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4947026 data_alloc: 234881024 data_used: 18104320
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba53eea800 session 0x55ba508734a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5a0cac00 session 0x55ba517daf00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466214912 unmapped: 106479616 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba53eea800 session 0x55ba4f6b5c20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4748074 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4748074 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4748074 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4748074 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4748074 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba581a1c00 session 0x55ba513c0d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba6483f400 session 0x55ba4eab0000
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f7da400 session 0x55ba50ef6960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f7da400 session 0x55ba4f210780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.441654205s of 34.627315521s, submitted: 64
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba53eea800 session 0x55ba506d1a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba581a1c00 session 0x55ba509603c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5a0cac00 session 0x55ba517290e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba6483f400 session 0x55ba4f6b4b40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba6483f400 session 0x55ba516f92c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecbb000/0x0/0x1bfc00000, data 0x246f4c6/0x26a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4801494 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51839c00 session 0x55ba506d0780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecbb000/0x0/0x1bfc00000, data 0x246f4c6/0x26a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5a0ca400 session 0x55ba51773860
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecbb000/0x0/0x1bfc00000, data 0x246f4c6/0x26a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba58549400 session 0x55ba50eced20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50470800 session 0x55ba51356f00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4841252 data_alloc: 218103808 data_used: 13230080
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecba000/0x0/0x1bfc00000, data 0x246f4d6/0x26a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecba000/0x0/0x1bfc00000, data 0x246f4d6/0x26a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4841252 data_alloc: 218103808 data_used: 13230080
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecba000/0x0/0x1bfc00000, data 0x246f4d6/0x26a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466231296 unmapped: 106463232 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466231296 unmapped: 106463232 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecba000/0x0/0x1bfc00000, data 0x246f4d6/0x26a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466231296 unmapped: 106463232 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466231296 unmapped: 106463232 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4841252 data_alloc: 218103808 data_used: 13230080
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.628616333s of 18.725591660s, submitted: 11
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465870848 unmapped: 106823680 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e0fa000/0x0/0x1bfc00000, data 0x30274d6/0x325c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e0f9000/0x0/0x1bfc00000, data 0x302f4d6/0x3264000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4932514 data_alloc: 218103808 data_used: 13729792
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e0f9000/0x0/0x1bfc00000, data 0x302f4d6/0x3264000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e0f9000/0x0/0x1bfc00000, data 0x302f4d6/0x3264000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4932530 data_alloc: 218103808 data_used: 13729792
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba506fb000 session 0x55ba506a8f00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51838000 session 0x55ba517f41e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.447827339s of 13.035610199s, submitted: 77
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50473800 session 0x55ba51773680
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ee000/0x0/0x1bfc00000, data 0x1e3b4d6/0x2070000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4757963 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4757963 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464879616 unmapped: 107814912 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464879616 unmapped: 107814912 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464879616 unmapped: 107814912 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464879616 unmapped: 107814912 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464879616 unmapped: 107814912 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4757963 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464879616 unmapped: 107814912 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464887808 unmapped: 107806720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464887808 unmapped: 107806720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464887808 unmapped: 107806720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464887808 unmapped: 107806720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4757963 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464887808 unmapped: 107806720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464887808 unmapped: 107806720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464896000 unmapped: 107798528 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464896000 unmapped: 107798528 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464904192 unmapped: 107790336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4757963 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464904192 unmapped: 107790336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464912384 unmapped: 107782144 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464912384 unmapped: 107782144 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464912384 unmapped: 107782144 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464920576 unmapped: 107773952 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4757963 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464920576 unmapped: 107773952 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464920576 unmapped: 107773952 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464928768 unmapped: 107765760 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5a0cac00 session 0x55ba517f4f00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50472c00 session 0x55ba4de583c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50472c00 session 0x55ba510501e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50473800 session 0x55ba509bc1e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.206523895s of 30.257295609s, submitted: 18
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba506fb000 session 0x55ba51051860
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51838000 session 0x55ba4f856d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5a0cac00 session 0x55ba4f2b14a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5a0cac00 session 0x55ba517da960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50472c00 session 0x55ba510370e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464945152 unmapped: 107749376 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecee000/0x0/0x1bfc00000, data 0x243b528/0x2670000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464945152 unmapped: 107749376 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4802880 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464945152 unmapped: 107749376 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464945152 unmapped: 107749376 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464945152 unmapped: 107749376 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecee000/0x0/0x1bfc00000, data 0x243b528/0x2670000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464945152 unmapped: 107749376 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464945152 unmapped: 107749376 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4802880 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464953344 unmapped: 107741184 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464953344 unmapped: 107741184 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50d7fc00 session 0x55ba50a45680
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465100800 unmapped: 107593728 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.896188736s of 10.002918243s, submitted: 30
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465108992 unmapped: 107585536 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecca000/0x0/0x1bfc00000, data 0x245f528/0x2694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465502208 unmapped: 107192320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4851265 data_alloc: 218103808 data_used: 14245888
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecca000/0x0/0x1bfc00000, data 0x245f528/0x2694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465502208 unmapped: 107192320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba506fe000 session 0x55ba50ffcf00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50514c00 session 0x55ba510503c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465502208 unmapped: 107192320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465502208 unmapped: 107192320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465502208 unmapped: 107192320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecca000/0x0/0x1bfc00000, data 0x245f528/0x2694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465502208 unmapped: 107192320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4851133 data_alloc: 218103808 data_used: 14245888
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecca000/0x0/0x1bfc00000, data 0x245f528/0x2694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465502208 unmapped: 107192320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465502208 unmapped: 107192320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465502208 unmapped: 107192320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465502208 unmapped: 107192320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.686688423s of 10.693693161s, submitted: 2
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465502208 unmapped: 107192320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4851265 data_alloc: 218103808 data_used: 14245888
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecca000/0x0/0x1bfc00000, data 0x245f528/0x2694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465502208 unmapped: 107192320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465510400 unmapped: 107184128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50700800 session 0x55ba50a45c20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f2ce400 session 0x55ba4eab0f00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465510400 unmapped: 107184128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecca000/0x0/0x1bfc00000, data 0x245f528/0x2694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465510400 unmapped: 107184128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecca000/0x0/0x1bfc00000, data 0x245f528/0x2694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465510400 unmapped: 107184128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4851133 data_alloc: 218103808 data_used: 14245888
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465510400 unmapped: 107184128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecca000/0x0/0x1bfc00000, data 0x245f528/0x2694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465510400 unmapped: 107184128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecca000/0x0/0x1bfc00000, data 0x245f528/0x2694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465518592 unmapped: 107175936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecca000/0x0/0x1bfc00000, data 0x245f528/0x2694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465518592 unmapped: 107175936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba57dc7400 session 0x55ba50da0d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.676322937s of 10.815813065s, submitted: 3
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba59eb7000 session 0x55ba510f2d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465526784 unmapped: 107167744 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4851133 data_alloc: 218103808 data_used: 14245888
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecca000/0x0/0x1bfc00000, data 0x245f528/0x2694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [0,0,1])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465543168 unmapped: 107151360 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f84f800 session 0x55ba513ec780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465567744 unmapped: 107126784 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465633280 unmapped: 107061248 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4762761 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4762761 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4762761 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4762761 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4762761 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4762761 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465657856 unmapped: 107036672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465666048 unmapped: 107028480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465666048 unmapped: 107028480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465674240 unmapped: 107020288 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4762761 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465674240 unmapped: 107020288 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465674240 unmapped: 107020288 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465674240 unmapped: 107020288 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465674240 unmapped: 107020288 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465674240 unmapped: 107020288 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4762761 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465682432 unmapped: 107012096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465682432 unmapped: 107012096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465682432 unmapped: 107012096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465682432 unmapped: 107012096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465682432 unmapped: 107012096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4762761 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465682432 unmapped: 107012096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465682432 unmapped: 107012096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465690624 unmapped: 107003904 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f84f800 session 0x55ba50ecf680
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4e527c00 session 0x55ba506a8b40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba59eb6800 session 0x55ba50960f00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba543a9400 session 0x55ba509603c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465690624 unmapped: 107003904 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 47.144412994s of 48.635864258s, submitted: 293
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51837c00 session 0x55ba517290e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4e527c00 session 0x55ba4e5c4d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f84f800 session 0x55ba4f856b40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba543a9400 session 0x55ba516f9a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba59eb6800 session 0x55ba4f740f00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465788928 unmapped: 106905600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ec4e000/0x0/0x1bfc00000, data 0x24db4d6/0x2710000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4820033 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465788928 unmapped: 106905600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465788928 unmapped: 106905600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465788928 unmapped: 106905600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465788928 unmapped: 106905600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ec4e000/0x0/0x1bfc00000, data 0x24db4d6/0x2710000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465797120 unmapped: 106897408 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4820033 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465805312 unmapped: 106889216 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ec4e000/0x0/0x1bfc00000, data 0x24db4d6/0x2710000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465805312 unmapped: 106889216 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ec4e000/0x0/0x1bfc00000, data 0x24db4d6/0x2710000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465805312 unmapped: 106889216 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f2cf400 session 0x55ba509605a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465805312 unmapped: 106889216 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465805312 unmapped: 106889216 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4827873 data_alloc: 218103808 data_used: 9003008
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465838080 unmapped: 106856448 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466665472 unmapped: 106029056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ec4e000/0x0/0x1bfc00000, data 0x24db4d6/0x2710000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466665472 unmapped: 106029056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466665472 unmapped: 106029056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ec4e000/0x0/0x1bfc00000, data 0x24db4d6/0x2710000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466665472 unmapped: 106029056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4861473 data_alloc: 218103808 data_used: 13762560
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466665472 unmapped: 106029056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466665472 unmapped: 106029056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466665472 unmapped: 106029056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ec4e000/0x0/0x1bfc00000, data 0x24db4d6/0x2710000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466673664 unmapped: 106020864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466673664 unmapped: 106020864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4861473 data_alloc: 218103808 data_used: 13762560
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466673664 unmapped: 106020864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466681856 unmapped: 106012672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.520923615s of 23.669008255s, submitted: 13
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468598784 unmapped: 104095744 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468598784 unmapped: 104095744 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e5cf000/0x0/0x1bfc00000, data 0x2b5a4d6/0x2d8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468598784 unmapped: 104095744 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4916477 data_alloc: 218103808 data_used: 14352384
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468598784 unmapped: 104095744 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468598784 unmapped: 104095744 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468606976 unmapped: 104087552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468606976 unmapped: 104087552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e55e000/0x0/0x1bfc00000, data 0x2bcb4d6/0x2e00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e55e000/0x0/0x1bfc00000, data 0x2bcb4d6/0x2e00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468606976 unmapped: 104087552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4933071 data_alloc: 218103808 data_used: 15302656
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468606976 unmapped: 104087552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e55e000/0x0/0x1bfc00000, data 0x2bcb4d6/0x2e00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468606976 unmapped: 104087552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468606976 unmapped: 104087552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468606976 unmapped: 104087552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468606976 unmapped: 104087552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4933071 data_alloc: 218103808 data_used: 15302656
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468606976 unmapped: 104087552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468606976 unmapped: 104087552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e55e000/0x0/0x1bfc00000, data 0x2bcb4d6/0x2e00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468615168 unmapped: 104079360 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468615168 unmapped: 104079360 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e55e000/0x0/0x1bfc00000, data 0x2bcb4d6/0x2e00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468615168 unmapped: 104079360 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4933071 data_alloc: 218103808 data_used: 15302656
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468615168 unmapped: 104079360 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468623360 unmapped: 104071168 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e55e000/0x0/0x1bfc00000, data 0x2bcb4d6/0x2e00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468623360 unmapped: 104071168 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468623360 unmapped: 104071168 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e55e000/0x0/0x1bfc00000, data 0x2bcb4d6/0x2e00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468631552 unmapped: 104062976 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4933071 data_alloc: 218103808 data_used: 15302656
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468639744 unmapped: 104054784 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468639744 unmapped: 104054784 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468639744 unmapped: 104054784 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e55e000/0x0/0x1bfc00000, data 0x2bcb4d6/0x2e00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4934031 data_alloc: 218103808 data_used: 15327232
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f2cf400 session 0x55ba4f75dc20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4e527c00 session 0x55ba51728f00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468664320 unmapped: 104030208 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e55e000/0x0/0x1bfc00000, data 0x2bcb4d6/0x2e00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468664320 unmapped: 104030208 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.447341919s of 32.665538788s, submitted: 58
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e55f000/0x0/0x1bfc00000, data 0x2bcb4c6/0x2dff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e55f000/0x0/0x1bfc00000, data 0x2bcb4c6/0x2dff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4932785 data_alloc: 218103808 data_used: 15327232
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468664320 unmapped: 104030208 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468664320 unmapped: 104030208 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468664320 unmapped: 104030208 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464461824 unmapped: 108232704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464461824 unmapped: 108232704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4772419 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464461824 unmapped: 108232704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464461824 unmapped: 108232704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba53ffd000 session 0x55ba4f6b4b40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464461824 unmapped: 108232704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464461824 unmapped: 108232704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4772419 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4772419 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4772419 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464478208 unmapped: 108216320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4772419 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464478208 unmapped: 108216320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464478208 unmapped: 108216320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464486400 unmapped: 108208128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464486400 unmapped: 108208128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464486400 unmapped: 108208128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4772419 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464486400 unmapped: 108208128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464486400 unmapped: 108208128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464486400 unmapped: 108208128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464486400 unmapped: 108208128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464486400 unmapped: 108208128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4772419 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464486400 unmapped: 108208128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464494592 unmapped: 108199936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464494592 unmapped: 108199936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464494592 unmapped: 108199936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464494592 unmapped: 108199936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4772419 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464494592 unmapped: 108199936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464494592 unmapped: 108199936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464494592 unmapped: 108199936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464494592 unmapped: 108199936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464494592 unmapped: 108199936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4772419 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464494592 unmapped: 108199936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464510976 unmapped: 108183552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464510976 unmapped: 108183552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464510976 unmapped: 108183552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464510976 unmapped: 108183552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4772419 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464510976 unmapped: 108183552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464519168 unmapped: 108175360 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464519168 unmapped: 108175360 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464527360 unmapped: 108167168 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464535552 unmapped: 108158976 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4772419 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464535552 unmapped: 108158976 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464535552 unmapped: 108158976 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464535552 unmapped: 108158976 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464535552 unmapped: 108158976 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464535552 unmapped: 108158976 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4772419 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464535552 unmapped: 108158976 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464535552 unmapped: 108158976 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464551936 unmapped: 108142592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba57dc6c00 session 0x55ba503f3680
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba56f29c00 session 0x55ba51729680
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4e527c00 session 0x55ba517f4b40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f2cf400 session 0x55ba516f8d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 64.227188110s of 64.287414551s, submitted: 18
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464560128 unmapped: 108134400 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba53ffd000 session 0x55ba516f8960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba57dc6c00 session 0x55ba50ece780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba581a1000 session 0x55ba516f94a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4e527c00 session 0x55ba506434a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f2cf400 session 0x55ba509612c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e7ae000/0x0/0x1bfc00000, data 0x297b4ef/0x2bb0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464723968 unmapped: 111648768 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4864266 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464723968 unmapped: 111648768 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e76d000/0x0/0x1bfc00000, data 0x29bc528/0x2bf1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464723968 unmapped: 111648768 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e76d000/0x0/0x1bfc00000, data 0x29bc528/0x2bf1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [0,0,0,0,1])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51840800 session 0x55ba4f210780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba6483f400 session 0x55ba51050f00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50510000 session 0x55ba517f5c20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4e527c00 session 0x55ba51357860
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f2cf400 session 0x55ba51357e00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464896000 unmapped: 111476736 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464896000 unmapped: 111476736 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51840800 session 0x55ba50960b40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba6483f400 session 0x55ba4f2b0b40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50471000 session 0x55ba50ffc3c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4e527c00 session 0x55ba510372c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f2cf400 session 0x55ba50ef63c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467075072 unmapped: 109297664 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba64840800 session 0x55ba4f75d4a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51840800 session 0x55ba509bc780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5135dc00 session 0x55ba516f9680
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4e527c00 session 0x55ba50ef65a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f2cf400 session 0x55ba50ef6000
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51840800 session 0x55ba4f211860
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba64840800 session 0x55ba51729e00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e1c9000/0x0/0x1bfc00000, data 0x2f5e55b/0x3195000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4926132 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464912384 unmapped: 111460352 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5183ac00 session 0x55ba4f2b0000
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4e527c00 session 0x55ba51051a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465010688 unmapped: 111362048 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465797120 unmapped: 110575616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465797120 unmapped: 110575616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.331954956s of 11.006346703s, submitted: 72
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466231296 unmapped: 110141440 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba543a8000 session 0x55ba51356b40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5032053 data_alloc: 234881024 data_used: 21610496
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466681856 unmapped: 109690880 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e181000/0x0/0x1bfc00000, data 0x2fa655b/0x31dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466681856 unmapped: 109690880 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468484096 unmapped: 107888640 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468484096 unmapped: 107888640 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba56d33000 session 0x55ba510f34a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba52ee6400 session 0x55ba516f81e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468492288 unmapped: 107880448 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51837800 session 0x55ba513563c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5006935 data_alloc: 234881024 data_used: 21606400
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468500480 unmapped: 107872256 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468500480 unmapped: 107872256 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e457000/0x0/0x1bfc00000, data 0x2cd154b/0x2f07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 473186304 unmapped: 103186432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 475029504 unmapped: 101343232 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba56d31400 session 0x55ba510f25a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50703400 session 0x55ba510512c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.492013931s of 10.009179115s, submitted: 200
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 475512832 unmapped: 100859904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba506fb000 session 0x55ba50ffd860
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5076969 data_alloc: 234881024 data_used: 20606976
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 475512832 unmapped: 100859904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 475512832 unmapped: 100859904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba6483f400 session 0x55ba513c0780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f84f000 session 0x55ba4f655a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d86d000/0x0/0x1bfc00000, data 0x38bb54b/0x3af1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 475521024 unmapped: 100851712 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f84f000 session 0x55ba51356b40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 107020288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 107020288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 107020288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 107020288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 107020288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 107020288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 107020288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 107020288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 107020288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 107020288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 107020288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 107020288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 107020288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469360640 unmapped: 107012096 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469360640 unmapped: 107012096 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469360640 unmapped: 107012096 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469360640 unmapped: 107012096 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 106995712 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 106995712 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 106995712 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 106995712 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 106995712 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 106995712 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 106995712 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 106995712 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 106987520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 106987520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 106987520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 106987520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 106987520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 106987520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 106987520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 106987520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 106987520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 106987520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 106987520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 106987520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 106987520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 106979328 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 106979328 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 106979328 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 106979328 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 106979328 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 106979328 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 106979328 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 106979328 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 106979328 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 106971136 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 106971136 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 106971136 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 106971136 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 79.260581970s of 79.400283813s, submitted: 48
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 106971136 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 106971136 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba55640800 session 0x55ba509612c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4802526 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 106971136 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 106971136 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 106971136 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2af000/0x0/0x1bfc00000, data 0x1e7b4c6/0x20af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 106962944 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469417984 unmapped: 106954752 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2af000/0x0/0x1bfc00000, data 0x1e7b4c6/0x20af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4802526 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469417984 unmapped: 106954752 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2af000/0x0/0x1bfc00000, data 0x1e7b4c6/0x20af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469417984 unmapped: 106954752 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469417984 unmapped: 106954752 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 106946560 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 106946560 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4802526 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 106946560 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2af000/0x0/0x1bfc00000, data 0x1e7b4c6/0x20af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 106946560 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 106946560 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 106946560 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 106946560 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50479400 session 0x55ba517f4b40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4802526 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469434368 unmapped: 106938368 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469442560 unmapped: 106930176 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2af000/0x0/0x1bfc00000, data 0x1e7b4c6/0x20af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51831800 session 0x55ba4e560d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469442560 unmapped: 106930176 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469442560 unmapped: 106930176 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469442560 unmapped: 106930176 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4802526 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5183b400 session 0x55ba4f2b0f00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469442560 unmapped: 106930176 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.520866394s of 22.554546356s, submitted: 7
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5183b400 session 0x55ba513ec5a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469442560 unmapped: 106930176 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f28b000/0x0/0x1bfc00000, data 0x1e9f4c6/0x20d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469450752 unmapped: 106921984 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469458944 unmapped: 106913792 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f28b000/0x0/0x1bfc00000, data 0x1e9f4c6/0x20d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f28b000/0x0/0x1bfc00000, data 0x1e9f4c6/0x20d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469458944 unmapped: 106913792 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4808003 data_alloc: 218103808 data_used: 8212480
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469458944 unmapped: 106913792 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469458944 unmapped: 106913792 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469458944 unmapped: 106913792 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469458944 unmapped: 106913792 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50470400 session 0x55ba51729e00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469458944 unmapped: 106913792 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50470c00 session 0x55ba513ede00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba57dc6400 session 0x55ba51051680
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f28b000/0x0/0x1bfc00000, data 0x1e9f4c6/0x20d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4805151 data_alloc: 218103808 data_used: 8212480
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469467136 unmapped: 106905600 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469467136 unmapped: 106905600 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2af000/0x0/0x1bfc00000, data 0x1e7b4c6/0x20af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469483520 unmapped: 106889216 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469483520 unmapped: 106889216 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469483520 unmapped: 106889216 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4805151 data_alloc: 218103808 data_used: 8212480
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469491712 unmapped: 106881024 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2af000/0x0/0x1bfc00000, data 0x1e7b4c6/0x20af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469491712 unmapped: 106881024 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469491712 unmapped: 106881024 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.000583649s of 17.263948441s, submitted: 29
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50703400 session 0x55ba4f6550e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50470400 session 0x55ba510361e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469499904 unmapped: 106872832 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469499904 unmapped: 106872832 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4801193 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469499904 unmapped: 106872832 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469499904 unmapped: 106872832 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469499904 unmapped: 106872832 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469499904 unmapped: 106872832 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469508096 unmapped: 106864640 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4801193 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469508096 unmapped: 106864640 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469508096 unmapped: 106864640 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469508096 unmapped: 106864640 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469516288 unmapped: 106856448 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469516288 unmapped: 106856448 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4801193 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469516288 unmapped: 106856448 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469516288 unmapped: 106856448 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469516288 unmapped: 106856448 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469516288 unmapped: 106856448 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469516288 unmapped: 106856448 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4801193 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469516288 unmapped: 106856448 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469532672 unmapped: 106840064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469532672 unmapped: 106840064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469532672 unmapped: 106840064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469532672 unmapped: 106840064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4801193 data_alloc: 218103808 data_used: 7950336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469532672 unmapped: 106840064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469540864 unmapped: 106831872 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469540864 unmapped: 106831872 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469540864 unmapped: 106831872 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469557248 unmapped: 106815488 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51831800 session 0x55ba510365a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba54c44800 session 0x55ba51356d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4813353 data_alloc: 218103808 data_used: 11358208
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470728704 unmapped: 105644032 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470728704 unmapped: 105644032 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470728704 unmapped: 105644032 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470728704 unmapped: 105644032 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470728704 unmapped: 105644032 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4813353 data_alloc: 218103808 data_used: 11358208
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470736896 unmapped: 105635840 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470736896 unmapped: 105635840 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470753280 unmapped: 105619456 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470753280 unmapped: 105619456 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470753280 unmapped: 105619456 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.340122223s of 37.416938782s, submitted: 22
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4813353 data_alloc: 218103808 data_used: 11358208
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470753280 unmapped: 105619456 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470753280 unmapped: 105619456 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50d7e000 session 0x55ba51728b40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470753280 unmapped: 105619456 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470753280 unmapped: 105619456 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470753280 unmapped: 105619456 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4814827 data_alloc: 218103808 data_used: 11358208
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470769664 unmapped: 105603072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470769664 unmapped: 105603072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470769664 unmapped: 105603072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470769664 unmapped: 105603072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470769664 unmapped: 105603072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4814827 data_alloc: 218103808 data_used: 11358208
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470769664 unmapped: 105603072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470769664 unmapped: 105603072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470777856 unmapped: 105594880 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470802432 unmapped: 105570304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470802432 unmapped: 105570304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4814827 data_alloc: 218103808 data_used: 11358208
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470802432 unmapped: 105570304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470802432 unmapped: 105570304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470802432 unmapped: 105570304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470802432 unmapped: 105570304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470802432 unmapped: 105570304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4814827 data_alloc: 218103808 data_used: 11358208
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470802432 unmapped: 105570304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470810624 unmapped: 105562112 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470810624 unmapped: 105562112 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470810624 unmapped: 105562112 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.894979477s of 23.938385010s, submitted: 2
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba6483e400 session 0x55ba50a83a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470827008 unmapped: 105545728 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba6483e400 session 0x55ba4f6541e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4814258 data_alloc: 218103808 data_used: 11358208
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470827008 unmapped: 105545728 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470843392 unmapped: 105529344 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470843392 unmapped: 105529344 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50470400 session 0x55ba509bc3c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470843392 unmapped: 105529344 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ef4f000/0x0/0x1bfc00000, data 0x21db4c6/0x240f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470843392 unmapped: 105529344 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ef4f000/0x0/0x1bfc00000, data 0x21db4c6/0x240f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4843364 data_alloc: 218103808 data_used: 11358208
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470851584 unmapped: 105521152 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470851584 unmapped: 105521152 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470851584 unmapped: 105521152 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470851584 unmapped: 105521152 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ef4f000/0x0/0x1bfc00000, data 0x21db4c6/0x240f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470851584 unmapped: 105521152 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4843364 data_alloc: 218103808 data_used: 11358208
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470859776 unmapped: 105512960 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470876160 unmapped: 105496576 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.625185013s of 13.770231247s, submitted: 26
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470892544 unmapped: 105480192 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470892544 unmapped: 105480192 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470892544 unmapped: 105480192 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4b000/0x0/0x1bfc00000, data 0x21dd11f/0x2412000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4b000/0x0/0x1bfc00000, data 0x21dd11f/0x2412000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4847538 data_alloc: 218103808 data_used: 11366400
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470892544 unmapped: 105480192 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470892544 unmapped: 105480192 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4b000/0x0/0x1bfc00000, data 0x21dd11f/0x2412000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470892544 unmapped: 105480192 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470892544 unmapped: 105480192 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470892544 unmapped: 105480192 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4847538 data_alloc: 218103808 data_used: 11366400
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470892544 unmapped: 105480192 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4b000/0x0/0x1bfc00000, data 0x21dd11f/0x2412000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470900736 unmapped: 105472000 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470900736 unmapped: 105472000 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470900736 unmapped: 105472000 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470900736 unmapped: 105472000 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4b000/0x0/0x1bfc00000, data 0x21dd11f/0x2412000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4847538 data_alloc: 218103808 data_used: 11366400
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470908928 unmapped: 105463808 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470908928 unmapped: 105463808 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4847538 data_alloc: 218103808 data_used: 11366400
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4b000/0x0/0x1bfc00000, data 0x21dd11f/0x2412000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4b000/0x0/0x1bfc00000, data 0x21dd11f/0x2412000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4847538 data_alloc: 218103808 data_used: 11366400
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 105431040 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 105431040 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.368459702s of 24.372455597s, submitted: 1
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 105431040 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 ms_handle_reset con 0x55ba64841400 session 0x55ba50872d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 ms_handle_reset con 0x55ba55640800 session 0x55ba4f855860
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4a000/0x0/0x1bfc00000, data 0x21dd191/0x2414000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 105431040 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 105431040 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850156 data_alloc: 218103808 data_used: 11366400
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 105431040 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 105431040 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 105431040 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470958080 unmapped: 105414656 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4a000/0x0/0x1bfc00000, data 0x21dd191/0x2414000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470958080 unmapped: 105414656 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850156 data_alloc: 218103808 data_used: 11366400
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470958080 unmapped: 105414656 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470958080 unmapped: 105414656 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470958080 unmapped: 105414656 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470958080 unmapped: 105414656 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4a000/0x0/0x1bfc00000, data 0x21dd191/0x2414000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470966272 unmapped: 105406464 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850156 data_alloc: 218103808 data_used: 11366400
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470966272 unmapped: 105406464 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470990848 unmapped: 105381888 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470990848 unmapped: 105381888 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470990848 unmapped: 105381888 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 ms_handle_reset con 0x55ba5183c400 session 0x55ba51051a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4a000/0x0/0x1bfc00000, data 0x21dd191/0x2414000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470990848 unmapped: 105381888 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 ms_handle_reset con 0x55ba5183c400 session 0x55ba51357680
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850156 data_alloc: 218103808 data_used: 11366400
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470990848 unmapped: 105381888 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 ms_handle_reset con 0x55ba506fb800 session 0x55ba4f2b14a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 ms_handle_reset con 0x55ba6483ec00 session 0x55ba4f2b0780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4a000/0x0/0x1bfc00000, data 0x21dd191/0x2414000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470990848 unmapped: 105381888 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470999040 unmapped: 105373696 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4877196 data_alloc: 234881024 data_used: 15167488
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 ms_handle_reset con 0x55ba53eebc00 session 0x55ba516f9c20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 ms_handle_reset con 0x55ba5183a800 session 0x55ba50872000
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4a000/0x0/0x1bfc00000, data 0x21dd191/0x2414000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.965549469s of 27.977008820s, submitted: 3
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4876844 data_alloc: 234881024 data_used: 15167488
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4a000/0x0/0x1bfc00000, data 0x21dd191/0x2414000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 ms_handle_reset con 0x55ba506fb800 session 0x55ba50872b40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4876844 data_alloc: 234881024 data_used: 15167488
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4a000/0x0/0x1bfc00000, data 0x21dd191/0x2414000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471015424 unmapped: 105357312 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471015424 unmapped: 105357312 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4877164 data_alloc: 234881024 data_used: 15175680
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471015424 unmapped: 105357312 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4a000/0x0/0x1bfc00000, data 0x21dd191/0x2414000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471015424 unmapped: 105357312 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 ms_handle_reset con 0x55ba50513000 session 0x55ba50960d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.959621429s of 12.253915787s, submitted: 4
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4a000/0x0/0x1bfc00000, data 0x21dd191/0x2414000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471015424 unmapped: 105357312 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471015424 unmapped: 105357312 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 ms_handle_reset con 0x55ba64840800 session 0x55ba50a44000
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471015424 unmapped: 105357312 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4875587 data_alloc: 234881024 data_used: 15171584
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471015424 unmapped: 105357312 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 ms_handle_reset con 0x55ba58548800 session 0x55ba4f854960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471031808 unmapped: 105340928 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471031808 unmapped: 105340928 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 430 heartbeat osd_stat(store_statfs(0x19ef48000/0x0/0x1bfc00000, data 0x21dedcc/0x2415000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 430 ms_handle_reset con 0x55ba51830000 session 0x55ba50642d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467632128 unmapped: 108740608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467632128 unmapped: 108740608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4826903 data_alloc: 218103808 data_used: 11378688
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467632128 unmapped: 108740608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467632128 unmapped: 108740608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 430 heartbeat osd_stat(store_statfs(0x19f2e8000/0x0/0x1bfc00000, data 0x1e3edcc/0x2075000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467632128 unmapped: 108740608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.197291374s of 11.322454453s, submitted: 33
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 ms_handle_reset con 0x55ba57dc7000 session 0x55ba50ffd4a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467435520 unmapped: 108937216 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 ms_handle_reset con 0x55ba5a0c8000 session 0x55ba51050780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467435520 unmapped: 108937216 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4826037 data_alloc: 218103808 data_used: 11378688
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19f2e5000/0x0/0x1bfc00000, data 0x1e4090b/0x2078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467435520 unmapped: 108937216 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 ms_handle_reset con 0x55ba506ff400 session 0x55ba51356000
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467451904 unmapped: 108920832 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467451904 unmapped: 108920832 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19f2e5000/0x0/0x1bfc00000, data 0x1e4090b/0x2078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467451904 unmapped: 108920832 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 ms_handle_reset con 0x55ba4ec33400 session 0x55ba4f6b4f00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 ms_handle_reset con 0x55ba543a8000 session 0x55ba50872780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467451904 unmapped: 108920832 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 ms_handle_reset con 0x55ba5183b400 session 0x55ba506a8960
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4871142 data_alloc: 218103808 data_used: 11378688
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467017728 unmapped: 109355008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467017728 unmapped: 109355008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467017728 unmapped: 109355008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19ed50000/0x0/0x1bfc00000, data 0x23d690b/0x260e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467017728 unmapped: 109355008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467017728 unmapped: 109355008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4871142 data_alloc: 218103808 data_used: 11378688
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467017728 unmapped: 109355008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 69K writes, 272K keys, 69K commit groups, 1.0 writes per commit group, ingest: 0.26 GB, 0.04 MB/s#012Cumulative WAL: 69K writes, 26K syncs, 2.69 writes per sync, written: 0.26 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1858 writes, 6311 keys, 1858 commit groups, 1.0 writes per commit group, ingest: 5.23 MB, 0.01 MB/s#012Interval WAL: 1858 writes, 811 syncs, 2.29 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467017728 unmapped: 109355008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467017728 unmapped: 109355008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467017728 unmapped: 109355008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19ed50000/0x0/0x1bfc00000, data 0x23d690b/0x260e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467017728 unmapped: 109355008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4871142 data_alloc: 218103808 data_used: 11378688
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467025920 unmapped: 109346816 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467025920 unmapped: 109346816 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467042304 unmapped: 109330432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19ed50000/0x0/0x1bfc00000, data 0x23d690b/0x260e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467042304 unmapped: 109330432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: mgrc ms_handle_reset ms_handle_reset con 0x55ba581a1800
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2945860420
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2945860420,v1:192.168.122.100:6801/2945860420]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: mgrc handle_mgr_configure stats_period=5
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467034112 unmapped: 109338624 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 ms_handle_reset con 0x55ba50474400 session 0x55ba516f85a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 ms_handle_reset con 0x55ba50514400 session 0x55ba516f9860
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 ms_handle_reset con 0x55ba51832000 session 0x55ba510f30e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4871142 data_alloc: 218103808 data_used: 11378688
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467034112 unmapped: 109338624 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467034112 unmapped: 109338624 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467034112 unmapped: 109338624 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19ed50000/0x0/0x1bfc00000, data 0x23d690b/0x260e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467034112 unmapped: 109338624 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467034112 unmapped: 109338624 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4871142 data_alloc: 218103808 data_used: 11378688
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467042304 unmapped: 109330432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467042304 unmapped: 109330432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467042304 unmapped: 109330432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19ed50000/0x0/0x1bfc00000, data 0x23d690b/0x260e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467042304 unmapped: 109330432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19ed50000/0x0/0x1bfc00000, data 0x23d690b/0x260e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467042304 unmapped: 109330432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19ed50000/0x0/0x1bfc00000, data 0x23d690b/0x260e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4871142 data_alloc: 218103808 data_used: 11378688
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467042304 unmapped: 109330432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467042304 unmapped: 109330432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467042304 unmapped: 109330432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19ed50000/0x0/0x1bfc00000, data 0x23d690b/0x260e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467050496 unmapped: 109322240 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 36.196346283s of 36.326320648s, submitted: 34
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467410944 unmapped: 108961792 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 ms_handle_reset con 0x55ba50703400 session 0x55ba51773a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4874144 data_alloc: 218103808 data_used: 11378688
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467410944 unmapped: 108961792 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467247104 unmapped: 109125632 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467902464 unmapped: 108470272 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19ed26000/0x0/0x1bfc00000, data 0x240090b/0x2638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467902464 unmapped: 108470272 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467902464 unmapped: 108470272 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4916036 data_alloc: 234881024 data_used: 17240064
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19ed26000/0x0/0x1bfc00000, data 0x240090b/0x2638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467902464 unmapped: 108470272 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467902464 unmapped: 108470272 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467902464 unmapped: 108470272 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19ed26000/0x0/0x1bfc00000, data 0x240090b/0x2638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467902464 unmapped: 108470272 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467910656 unmapped: 108462080 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4916356 data_alloc: 234881024 data_used: 17248256
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467910656 unmapped: 108462080 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19ed26000/0x0/0x1bfc00000, data 0x240090b/0x2638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467910656 unmapped: 108462080 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19ed26000/0x0/0x1bfc00000, data 0x240090b/0x2638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467918848 unmapped: 108453888 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.444561005s of 13.455645561s, submitted: 2
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469491712 unmapped: 106881024 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470556672 unmapped: 105816064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4968198 data_alloc: 234881024 data_used: 17678336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470556672 unmapped: 105816064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470556672 unmapped: 105816064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470556672 unmapped: 105816064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19e882000/0x0/0x1bfc00000, data 0x289690b/0x2ace000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470556672 unmapped: 105816064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470556672 unmapped: 105816064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4968198 data_alloc: 234881024 data_used: 17678336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19e882000/0x0/0x1bfc00000, data 0x289690b/0x2ace000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470564864 unmapped: 105807872 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470564864 unmapped: 105807872 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470564864 unmapped: 105807872 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19e882000/0x0/0x1bfc00000, data 0x289690b/0x2ace000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470564864 unmapped: 105807872 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470564864 unmapped: 105807872 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4968198 data_alloc: 234881024 data_used: 17678336
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470564864 unmapped: 105807872 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19e882000/0x0/0x1bfc00000, data 0x289690b/0x2ace000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470564864 unmapped: 105807872 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.635722160s of 14.073546410s, submitted: 48
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470597632 unmapped: 105775104 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 ms_handle_reset con 0x55ba6483e800 session 0x55ba4f855860
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 ms_handle_reset con 0x55ba5062ec00 session 0x55ba4e561e00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470605824 unmapped: 105766912 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e88b000/0x0/0x1bfc00000, data 0x28986d4/0x2ad2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470605824 unmapped: 105766912 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4968153 data_alloc: 234881024 data_used: 17690624
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470614016 unmapped: 105758720 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470614016 unmapped: 105758720 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470614016 unmapped: 105758720 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e88b000/0x0/0x1bfc00000, data 0x28986d4/0x2ad2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470614016 unmapped: 105758720 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470614016 unmapped: 105758720 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4968153 data_alloc: 234881024 data_used: 17690624
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470614016 unmapped: 105758720 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470614016 unmapped: 105758720 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470614016 unmapped: 105758720 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470630400 unmapped: 105742336 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e88b000/0x0/0x1bfc00000, data 0x28986d4/0x2ad2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.553018570s of 12.511919975s, submitted: 12
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470638592 unmapped: 105734144 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4969055 data_alloc: 234881024 data_used: 17690624
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470638592 unmapped: 105734144 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470638592 unmapped: 105734144 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 ms_handle_reset con 0x55ba57dc6c00 session 0x55ba517730e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 ms_handle_reset con 0x55ba53eea800 session 0x55ba4f6550e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470638592 unmapped: 105734144 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e88b000/0x0/0x1bfc00000, data 0x2898736/0x2ad3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470638592 unmapped: 105734144 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470638592 unmapped: 105734144 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e88b000/0x0/0x1bfc00000, data 0x2898736/0x2ad3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4972077 data_alloc: 234881024 data_used: 17690624
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e88b000/0x0/0x1bfc00000, data 0x2898736/0x2ad3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470638592 unmapped: 105734144 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470646784 unmapped: 105725952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470646784 unmapped: 105725952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e889000/0x0/0x1bfc00000, data 0x289d736/0x2ad5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470646784 unmapped: 105725952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470646784 unmapped: 105725952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4972077 data_alloc: 234881024 data_used: 17690624
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470646784 unmapped: 105725952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470646784 unmapped: 105725952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e889000/0x0/0x1bfc00000, data 0x289d736/0x2ad5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e889000/0x0/0x1bfc00000, data 0x289d736/0x2ad5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470646784 unmapped: 105725952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470646784 unmapped: 105725952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470654976 unmapped: 105717760 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e889000/0x0/0x1bfc00000, data 0x289d736/0x2ad5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4972077 data_alloc: 234881024 data_used: 17690624
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470654976 unmapped: 105717760 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.641451836s of 16.856966019s, submitted: 4
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 ms_handle_reset con 0x55ba5062ec00 session 0x55ba51356b40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471015424 unmapped: 105357312 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471015424 unmapped: 105357312 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471015424 unmapped: 105357312 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e85f000/0x0/0x1bfc00000, data 0x28c7736/0x2aff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471023616 unmapped: 105349120 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4980975 data_alloc: 234881024 data_used: 17821696
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471031808 unmapped: 105340928 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e845000/0x0/0x1bfc00000, data 0x290e736/0x2b19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471040000 unmapped: 105332736 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471040000 unmapped: 105332736 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e845000/0x0/0x1bfc00000, data 0x290e736/0x2b19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471040000 unmapped: 105332736 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e845000/0x0/0x1bfc00000, data 0x290e736/0x2b19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471040000 unmapped: 105332736 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4995083 data_alloc: 234881024 data_used: 17821696
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471040000 unmapped: 105332736 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e845000/0x0/0x1bfc00000, data 0x290e736/0x2b19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471040000 unmapped: 105332736 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471040000 unmapped: 105332736 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e845000/0x0/0x1bfc00000, data 0x290e736/0x2b19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471048192 unmapped: 105324544 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471048192 unmapped: 105324544 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e845000/0x0/0x1bfc00000, data 0x290e736/0x2b19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4995083 data_alloc: 234881024 data_used: 17821696
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471048192 unmapped: 105324544 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.333244324s of 15.649279594s, submitted: 16
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471048192 unmapped: 105324544 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471048192 unmapped: 105324544 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471048192 unmapped: 105324544 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471048192 unmapped: 105324544 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e7fe000/0x0/0x1bfc00000, data 0x2955736/0x2b60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5009298 data_alloc: 234881024 data_used: 18518016
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471048192 unmapped: 105324544 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471048192 unmapped: 105324544 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471048192 unmapped: 105324544 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471326720 unmapped: 105046016 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e791000/0x0/0x1bfc00000, data 0x29c2736/0x2bcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471326720 unmapped: 105046016 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5018442 data_alloc: 234881024 data_used: 18518016
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471326720 unmapped: 105046016 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e791000/0x0/0x1bfc00000, data 0x29c2736/0x2bcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471326720 unmapped: 105046016 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471326720 unmapped: 105046016 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471326720 unmapped: 105046016 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.420621872s of 12.544328690s, submitted: 16
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471326720 unmapped: 105046016 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e790000/0x0/0x1bfc00000, data 0x29c3736/0x2bce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5018662 data_alloc: 234881024 data_used: 18518016
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471326720 unmapped: 105046016 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471326720 unmapped: 105046016 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e790000/0x0/0x1bfc00000, data 0x29c3736/0x2bce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471326720 unmapped: 105046016 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471326720 unmapped: 105046016 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470917120 unmapped: 105455616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5016690 data_alloc: 234881024 data_used: 18522112
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470917120 unmapped: 105455616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e78f000/0x0/0x1bfc00000, data 0x29c4736/0x2bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470917120 unmapped: 105455616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470917120 unmapped: 105455616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470917120 unmapped: 105455616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e78f000/0x0/0x1bfc00000, data 0x29c4736/0x2bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e78f000/0x0/0x1bfc00000, data 0x29c4736/0x2bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5016690 data_alloc: 234881024 data_used: 18522112
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 ms_handle_reset con 0x55ba4e095400 session 0x55ba50da0000
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e78f000/0x0/0x1bfc00000, data 0x29c4736/0x2bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 ms_handle_reset con 0x55ba53eec400 session 0x55ba4f655a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.420714378s of 14.430052757s, submitted: 2
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 ms_handle_reset con 0x55ba50d7e000 session 0x55ba51037c20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 ms_handle_reset con 0x55ba64841000 session 0x55ba51773860
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5008552 data_alloc: 234881024 data_used: 18391040
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470933504 unmapped: 105439232 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e7b9000/0x0/0x1bfc00000, data 0x299a736/0x2ba5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470933504 unmapped: 105439232 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470933504 unmapped: 105439232 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e3a9000/0x0/0x1bfc00000, data 0x299a736/0x2ba5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5008712 data_alloc: 234881024 data_used: 18399232
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470933504 unmapped: 105439232 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470933504 unmapped: 105439232 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 ms_handle_reset con 0x55ba58549400 session 0x55ba50ef7a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 ms_handle_reset con 0x55ba51836c00 session 0x55ba517290e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 ms_handle_reset con 0x55ba51839c00 session 0x55ba4f856d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472088576 unmapped: 104284160 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 432 handle_osd_map epochs [432,433], i have 432, src has [1,433]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.028098106s of 10.027087212s, submitted: 321
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 433 handle_osd_map epochs [433,433], i have 433, src has [1,433]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 433 ms_handle_reset con 0x55ba5062f800 session 0x55ba50ece1e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472129536 unmapped: 104243200 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 433 heartbeat osd_stat(store_statfs(0x19e478000/0x0/0x1bfc00000, data 0x289b381/0x2ad6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472129536 unmapped: 104243200 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 433 ms_handle_reset con 0x55ba5183b800 session 0x55ba50a45c20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 433 ms_handle_reset con 0x55ba63fe3c00 session 0x55ba50a83c20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 433 ms_handle_reset con 0x55ba5062f800 session 0x55ba517283c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4987718 data_alloc: 234881024 data_used: 18157568
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472137728 unmapped: 104235008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472137728 unmapped: 104235008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472145920 unmapped: 104226816 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 433 ms_handle_reset con 0x55ba51836c00 session 0x55ba509601e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472162304 unmapped: 104210432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472162304 unmapped: 104210432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850128 data_alloc: 218103808 data_used: 11419648
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 433 heartbeat osd_stat(store_statfs(0x19eecf000/0x0/0x1bfc00000, data 0x1e44381/0x207f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 433 ms_handle_reset con 0x55ba51839c00 session 0x55ba510f21e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472178688 unmapped: 104194048 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472178688 unmapped: 104194048 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 433 handle_osd_map epochs [433,434], i have 433, src has [1,434]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472178688 unmapped: 104194048 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472178688 unmapped: 104194048 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.924956322s of 10.500700951s, submitted: 66
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 435 ms_handle_reset con 0x55ba4dec5c00 session 0x55ba513ede00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 435 heartbeat osd_stat(store_statfs(0x19eec9000/0x0/0x1bfc00000, data 0x1e47b4f/0x2084000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472186880 unmapped: 104185856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4856000 data_alloc: 218103808 data_used: 11431936
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472186880 unmapped: 104185856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 435 heartbeat osd_stat(store_statfs(0x19eec9000/0x0/0x1bfc00000, data 0x1e47b4f/0x2084000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472186880 unmapped: 104185856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 435 heartbeat osd_stat(store_statfs(0x19eec9000/0x0/0x1bfc00000, data 0x1e47b4f/0x2084000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472186880 unmapped: 104185856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472186880 unmapped: 104185856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472186880 unmapped: 104185856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4856000 data_alloc: 218103808 data_used: 11431936
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472186880 unmapped: 104185856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472186880 unmapped: 104185856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 435 heartbeat osd_stat(store_statfs(0x19eec9000/0x0/0x1bfc00000, data 0x1e47b4f/0x2084000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 435 handle_osd_map epochs [435,436], i have 435, src has [1,436]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472211456 unmapped: 104161280 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472211456 unmapped: 104161280 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472211456 unmapped: 104161280 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4859134 data_alloc: 218103808 data_used: 11436032
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472219648 unmapped: 104153088 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19eec6000/0x0/0x1bfc00000, data 0x1e4968e/0x2087000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472219648 unmapped: 104153088 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 ms_handle_reset con 0x55ba4f2cf400 session 0x55ba50a44f00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 ms_handle_reset con 0x55ba4ec2a400 session 0x55ba4e5c54a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471736320 unmapped: 104636416 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471752704 unmapped: 104620032 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.230606079s of 15.310397148s, submitted: 37
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 ms_handle_reset con 0x55ba51834400 session 0x55ba517f45a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19eec6000/0x0/0x1bfc00000, data 0x1e4969e/0x2088000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471752704 unmapped: 104620032 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4863922 data_alloc: 218103808 data_used: 11436032
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471752704 unmapped: 104620032 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471752704 unmapped: 104620032 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 ms_handle_reset con 0x55ba5046f400 session 0x55ba50cbd4a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471752704 unmapped: 104620032 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471752704 unmapped: 104620032 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 ms_handle_reset con 0x55ba6483e000 session 0x55ba51357860
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19eec6000/0x0/0x1bfc00000, data 0x1e4969e/0x2088000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471769088 unmapped: 104603648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 ms_handle_reset con 0x55ba6483e000 session 0x55ba513ec1e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4921366 data_alloc: 218103808 data_used: 11436032
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 ms_handle_reset con 0x55ba4ec2a400 session 0x55ba50ffc5a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471769088 unmapped: 104603648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471769088 unmapped: 104603648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471769088 unmapped: 104603648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19e757000/0x0/0x1bfc00000, data 0x25b869e/0x27f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471769088 unmapped: 104603648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471769088 unmapped: 104603648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4921366 data_alloc: 218103808 data_used: 11436032
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471769088 unmapped: 104603648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471769088 unmapped: 104603648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471769088 unmapped: 104603648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19e757000/0x0/0x1bfc00000, data 0x25b869e/0x27f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471769088 unmapped: 104603648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19e757000/0x0/0x1bfc00000, data 0x25b869e/0x27f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471785472 unmapped: 104587264 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4921366 data_alloc: 218103808 data_used: 11436032
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471785472 unmapped: 104587264 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471785472 unmapped: 104587264 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471785472 unmapped: 104587264 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471793664 unmapped: 104579072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.705083847s of 19.859975815s, submitted: 20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 ms_handle_reset con 0x55ba51841000 session 0x55ba517f4d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471941120 unmapped: 104431616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19e733000/0x0/0x1bfc00000, data 0x25dc69e/0x281b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19e733000/0x0/0x1bfc00000, data 0x25dc69e/0x281b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4928906 data_alloc: 218103808 data_used: 12001280
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471941120 unmapped: 104431616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19e733000/0x0/0x1bfc00000, data 0x25dc69e/0x281b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471941120 unmapped: 104431616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471941120 unmapped: 104431616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471941120 unmapped: 104431616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471941120 unmapped: 104431616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19e733000/0x0/0x1bfc00000, data 0x25dc69e/0x281b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4979466 data_alloc: 234881024 data_used: 19152896
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471941120 unmapped: 104431616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19e733000/0x0/0x1bfc00000, data 0x25dc69e/0x281b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471941120 unmapped: 104431616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471941120 unmapped: 104431616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471941120 unmapped: 104431616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471941120 unmapped: 104431616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4979466 data_alloc: 234881024 data_used: 19152896
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471941120 unmapped: 104431616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471941120 unmapped: 104431616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.998488426s of 13.005092621s, submitted: 1
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19e32a000/0x0/0x1bfc00000, data 0x29e569e/0x2c24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [0,0,0,0,12])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19e32a000/0x0/0x1bfc00000, data 0x29e569e/0x2c24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477741056 unmapped: 98631680 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477814784 unmapped: 98557952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477814784 unmapped: 98557952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5098728 data_alloc: 234881024 data_used: 21159936
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477814784 unmapped: 98557952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477814784 unmapped: 98557952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbd0000/0x0/0x1bfc00000, data 0x313f69e/0x337e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477814784 unmapped: 98557952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477814784 unmapped: 98557952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 ms_handle_reset con 0x55ba50517000 session 0x55ba509bda40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 ms_handle_reset con 0x55ba54440800 session 0x55ba51037c20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477822976 unmapped: 98549760 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 ms_handle_reset con 0x55ba4ec2a400 session 0x55ba513561e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5083916 data_alloc: 234881024 data_used: 21078016
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477822976 unmapped: 98549760 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477822976 unmapped: 98549760 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbef000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477831168 unmapped: 98541568 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbef000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477831168 unmapped: 98541568 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477831168 unmapped: 98541568 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5083916 data_alloc: 234881024 data_used: 21078016
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbef000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477831168 unmapped: 98541568 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477839360 unmapped: 98533376 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477839360 unmapped: 98533376 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477839360 unmapped: 98533376 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477839360 unmapped: 98533376 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5083916 data_alloc: 234881024 data_used: 21078016
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477839360 unmapped: 98533376 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbef000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477839360 unmapped: 98533376 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477839360 unmapped: 98533376 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477839360 unmapped: 98533376 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 ms_handle_reset con 0x55ba64840800 session 0x55ba50872780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477847552 unmapped: 98525184 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5084076 data_alloc: 234881024 data_used: 21082112
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbef000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477847552 unmapped: 98525184 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5084716 data_alloc: 234881024 data_used: 21143552
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbef000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5084716 data_alloc: 234881024 data_used: 21143552
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbef000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbef000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.302429199s of 35.634357452s, submitted: 121
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbef000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbed000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbed000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5099740 data_alloc: 234881024 data_used: 22945792
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbed000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5099740 data_alloc: 234881024 data_used: 22945792
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbed000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477863936 unmapped: 98508800 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477863936 unmapped: 98508800 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbed000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5099740 data_alloc: 234881024 data_used: 22945792
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477863936 unmapped: 98508800 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477863936 unmapped: 98508800 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbed000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477863936 unmapped: 98508800 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477863936 unmapped: 98508800 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477863936 unmapped: 98508800 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5099740 data_alloc: 234881024 data_used: 22945792
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477863936 unmapped: 98508800 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbed000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477872128 unmapped: 98500608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbed000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477872128 unmapped: 98500608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477872128 unmapped: 98500608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477872128 unmapped: 98500608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5099740 data_alloc: 234881024 data_used: 22945792
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477872128 unmapped: 98500608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477872128 unmapped: 98500608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbed000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477872128 unmapped: 98500608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbed000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477872128 unmapped: 98500608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477888512 unmapped: 98484224 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 436 handle_osd_map epochs [436,437], i have 436, src has [1,437]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.235605240s of 28.248983383s, submitted: 3
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5107274 data_alloc: 234881024 data_used: 23556096
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 ms_handle_reset con 0x55ba55640800 session 0x55ba513c0780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477888512 unmapped: 98484224 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 ms_handle_reset con 0x55ba58548c00 session 0x55ba513c0000
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 ms_handle_reset con 0x55ba59eb6000 session 0x55ba509603c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477888512 unmapped: 98484224 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dbeb000/0x0/0x1bfc00000, data 0x31222f7/0x3362000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477888512 unmapped: 98484224 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477888512 unmapped: 98484224 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477888512 unmapped: 98484224 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5129572 data_alloc: 234881024 data_used: 23556096
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477888512 unmapped: 98484224 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477888512 unmapped: 98484224 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dbe5000/0x0/0x1bfc00000, data 0x33582f7/0x3369000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477896704 unmapped: 98476032 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477904896 unmapped: 98467840 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477904896 unmapped: 98467840 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dbe5000/0x0/0x1bfc00000, data 0x33582f7/0x3369000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5129572 data_alloc: 234881024 data_used: 23556096
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477904896 unmapped: 98467840 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dbe5000/0x0/0x1bfc00000, data 0x33582f7/0x3369000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477904896 unmapped: 98467840 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477904896 unmapped: 98467840 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dbe5000/0x0/0x1bfc00000, data 0x33582f7/0x3369000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477904896 unmapped: 98467840 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477904896 unmapped: 98467840 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5129572 data_alloc: 234881024 data_used: 23556096
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477913088 unmapped: 98459648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477913088 unmapped: 98459648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.124782562s of 17.165224075s, submitted: 9
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 ms_handle_reset con 0x55ba51835c00 session 0x55ba50642d20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477929472 unmapped: 98443264 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477945856 unmapped: 98426880 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dbc0000/0x0/0x1bfc00000, data 0x337c31a/0x338e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477954048 unmapped: 98418688 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5136046 data_alloc: 234881024 data_used: 23564288
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477954048 unmapped: 98418688 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477954048 unmapped: 98418688 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477954048 unmapped: 98418688 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477954048 unmapped: 98418688 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477954048 unmapped: 98418688 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dbc0000/0x0/0x1bfc00000, data 0x337c31a/0x338e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dbc0000/0x0/0x1bfc00000, data 0x337c31a/0x338e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5136046 data_alloc: 234881024 data_used: 23564288
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 98410496 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 98410496 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dbc0000/0x0/0x1bfc00000, data 0x337c31a/0x338e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dbc0000/0x0/0x1bfc00000, data 0x337c31a/0x338e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 98410496 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 98410496 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 98410496 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5136046 data_alloc: 234881024 data_used: 23564288
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dbc0000/0x0/0x1bfc00000, data 0x337c31a/0x338e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.188096046s of 13.255754471s, submitted: 12
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479272960 unmapped: 97099776 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479281152 unmapped: 97091584 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479281152 unmapped: 97091584 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479281152 unmapped: 97091584 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479281152 unmapped: 97091584 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5184579 data_alloc: 234881024 data_used: 24928256
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479281152 unmapped: 97091584 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dabd000/0x0/0x1bfc00000, data 0x378331a/0x3491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479281152 unmapped: 97091584 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479281152 unmapped: 97091584 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479281152 unmapped: 97091584 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dabd000/0x0/0x1bfc00000, data 0x378331a/0x3491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dabd000/0x0/0x1bfc00000, data 0x378331a/0x3491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5184755 data_alloc: 234881024 data_used: 24928256
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.518165588s of 11.585093498s, submitted: 14
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dabd000/0x0/0x1bfc00000, data 0x378331a/0x3491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5184403 data_alloc: 234881024 data_used: 24928256
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dabd000/0x0/0x1bfc00000, data 0x378331a/0x3491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dabd000/0x0/0x1bfc00000, data 0x378331a/0x3491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5184227 data_alloc: 234881024 data_used: 24928256
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dabd000/0x0/0x1bfc00000, data 0x378331a/0x3491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dabd000/0x0/0x1bfc00000, data 0x378331a/0x3491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5185283 data_alloc: 234881024 data_used: 24928256
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.545027733s of 13.577043533s, submitted: 11
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 ms_handle_reset con 0x55ba53eec800 session 0x55ba51772f00
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 ms_handle_reset con 0x55ba50d7fc00 session 0x55ba4f7550e0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 ms_handle_reset con 0x55ba4f2ce800 session 0x55ba50ef7680
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479297536 unmapped: 97075200 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479297536 unmapped: 97075200 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479305728 unmapped: 97067008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 ms_handle_reset con 0x55ba54c44c00 session 0x55ba517294a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 ms_handle_reset con 0x55ba4f2ce800 session 0x55ba51356780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dae1000/0x0/0x1bfc00000, data 0x375f2f7/0x346c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479330304 unmapped: 97042432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 ms_handle_reset con 0x55ba50d7fc00 session 0x55ba510514a0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 437 handle_osd_map epochs [437,438], i have 437, src has [1,438]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 438 ms_handle_reset con 0x55ba50dbc400 session 0x55ba50a83a40
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5154045 data_alloc: 234881024 data_used: 24698880
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479346688 unmapped: 97026048 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479346688 unmapped: 97026048 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 438 ms_handle_reset con 0x55ba50703000 session 0x55ba4eab0000
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 438 ms_handle_reset con 0x55ba5183c800 session 0x55ba50da1c20
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 438 ms_handle_reset con 0x55ba4f2ce800 session 0x55ba516f92c0
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479354880 unmapped: 97017856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479354880 unmapped: 97017856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479354880 unmapped: 97017856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 438 heartbeat osd_stat(store_statfs(0x19dbe4000/0x0/0x1bfc00000, data 0x3128fa4/0x336a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5142867 data_alloc: 234881024 data_used: 24698880
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479354880 unmapped: 97017856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 438 handle_osd_map epochs [438,439], i have 438, src has [1,439]
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.259385109s of 10.704581261s, submitted: 73
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479354880 unmapped: 97017856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 ms_handle_reset con 0x55ba51831400 session 0x55ba516f8780
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 ms_handle_reset con 0x55ba4ec2b000 session 0x55ba51773860
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477921280 unmapped: 98451456 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477921280 unmapped: 98451456 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477921280 unmapped: 98451456 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894835 data_alloc: 218103808 data_used: 11468800
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477929472 unmapped: 98443264 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477929472 unmapped: 98443264 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477929472 unmapped: 98443264 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477929472 unmapped: 98443264 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477929472 unmapped: 98443264 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894835 data_alloc: 218103808 data_used: 11468800
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477929472 unmapped: 98443264 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477929472 unmapped: 98443264 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477929472 unmapped: 98443264 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477937664 unmapped: 98435072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477937664 unmapped: 98435072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477937664 unmapped: 98435072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477937664 unmapped: 98435072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477937664 unmapped: 98435072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477937664 unmapped: 98435072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477937664 unmapped: 98435072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477937664 unmapped: 98435072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 98410496 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 98410496 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 98410496 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 98410496 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 98410496 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 98410496 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 98410496 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 98410496 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 98402304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 98402304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 98402304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 98402304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 98402304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 98402304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 98402304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 98402304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477978624 unmapped: 98394112 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477978624 unmapped: 98394112 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477978624 unmapped: 98394112 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477978624 unmapped: 98394112 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477978624 unmapped: 98394112 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477978624 unmapped: 98394112 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477978624 unmapped: 98394112 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477978624 unmapped: 98394112 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 98385920 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 98385920 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 98385920 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 98385920 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 98385920 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 98385920 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 98385920 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 98385920 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477995008 unmapped: 98377728 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477995008 unmapped: 98377728 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477995008 unmapped: 98377728 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477995008 unmapped: 98377728 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477995008 unmapped: 98377728 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477995008 unmapped: 98377728 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477995008 unmapped: 98377728 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477995008 unmapped: 98377728 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478011392 unmapped: 98361344 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478011392 unmapped: 98361344 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478011392 unmapped: 98361344 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478019584 unmapped: 98353152 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478019584 unmapped: 98353152 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478019584 unmapped: 98353152 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478019584 unmapped: 98353152 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478019584 unmapped: 98353152 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478027776 unmapped: 98344960 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478027776 unmapped: 98344960 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478027776 unmapped: 98344960 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478027776 unmapped: 98344960 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478027776 unmapped: 98344960 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478027776 unmapped: 98344960 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478027776 unmapped: 98344960 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478027776 unmapped: 98344960 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 98320384 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 98320384 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 98320384 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 98320384 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 98320384 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 98320384 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 98320384 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: do_command 'config diff' '{prefix=config diff}'
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: do_command 'config show' '{prefix=config show}'
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477757440 unmapped: 98615296 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477274112 unmapped: 99098624 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476971008 unmapped: 99401728 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:17:34 np0005539564 ceph-osd[79212]: do_command 'log dump' '{prefix=log dump}'
Nov 29 04:17:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 04:17:34 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2873818589' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 04:17:34 np0005539564 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 04:17:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 04:17:34 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3228355095' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 04:17:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:17:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:35.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:17:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 29 04:17:35 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/101951362' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 04:17:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:17:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:35.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:17:36 np0005539564 nova_compute[226295]: 2025-11-29 09:17:36.103 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 29 04:17:36 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/674154820' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 04:17:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 29 04:17:36 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/201732818' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 04:17:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:37.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 29 04:17:37 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/956463298' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 04:17:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:37.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 29 04:17:37 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1701076098' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 04:17:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 29 04:17:37 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1609830633' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 04:17:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 29 04:17:37 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3657199559' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 04:17:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 29 04:17:37 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1453176421' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 04:17:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 29 04:17:38 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2109516250' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 04:17:38 np0005539564 nova_compute[226295]: 2025-11-29 09:17:38.336 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:17:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:17:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 29 04:17:38 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1073022535' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 04:17:38 np0005539564 systemd[1]: Starting Hostname Service...
Nov 29 04:17:38 np0005539564 nova_compute[226295]: 2025-11-29 09:17:38.730 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:38 np0005539564 systemd[1]: Started Hostname Service.
Nov 29 04:17:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 29 04:17:38 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1337990301' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 04:17:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 29 04:17:38 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/291863460' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 04:17:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:17:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:39.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:17:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 04:17:39 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2953156169' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 04:17:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:39.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 29 04:17:39 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1685737166' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 04:17:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 29 04:17:39 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3653193851' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 04:17:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 29 04:17:40 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3920128576' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 04:17:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:41.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:41 np0005539564 nova_compute[226295]: 2025-11-29 09:17:41.110 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 29 04:17:41 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2006648945' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 29 04:17:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:17:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:41.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:17:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 29 04:17:41 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/56438089' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 29 04:17:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 04:17:42 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2730671749' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 04:17:42 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 29 04:17:42 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3801093389' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 29 04:17:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:43.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:43 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 04:17:43 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 04:17:43 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 04:17:43 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 04:17:43 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 04:17:43 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 04:17:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:43.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:17:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 29 04:17:43 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2504882129' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 29 04:17:43 np0005539564 nova_compute[226295]: 2025-11-29 09:17:43.738 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:44 np0005539564 nova_compute[226295]: 2025-11-29 09:17:44.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:17:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 29 04:17:44 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3402498730' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 29 04:17:44 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 29 04:17:44 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1719245278' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 29 04:17:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:45.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Nov 29 04:17:45 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2897488850' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 29 04:17:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:45.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:45 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Nov 29 04:17:45 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2823251554' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 29 04:17:46 np0005539564 nova_compute[226295]: 2025-11-29 09:17:46.114 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:46 np0005539564 nova_compute[226295]: 2025-11-29 09:17:46.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:17:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Nov 29 04:17:46 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3886331604' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 29 04:17:46 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Nov 29 04:17:46 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4060416169' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 29 04:17:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:47.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:17:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:47.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:17:47 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Nov 29 04:17:47 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/498813386' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 29 04:17:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:17:48 np0005539564 nova_compute[226295]: 2025-11-29 09:17:48.739 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Nov 29 04:17:48 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3959855484' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 29 04:17:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:49.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Nov 29 04:17:49 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4139312844' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 29 04:17:49 np0005539564 nova_compute[226295]: 2025-11-29 09:17:49.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:17:49 np0005539564 nova_compute[226295]: 2025-11-29 09:17:49.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:17:49 np0005539564 nova_compute[226295]: 2025-11-29 09:17:49.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:17:49 np0005539564 nova_compute[226295]: 2025-11-29 09:17:49.430 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:17:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:49.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:49 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Nov 29 04:17:49 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4216702883' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 29 04:17:50 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Nov 29 04:17:50 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/963325396' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 29 04:17:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:51.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:51 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Nov 29 04:17:51 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3984160017' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 29 04:17:51 np0005539564 nova_compute[226295]: 2025-11-29 09:17:51.121 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:51 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Nov 29 04:17:51 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2164508129' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 29 04:17:51 np0005539564 nova_compute[226295]: 2025-11-29 09:17:51.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:17:51 np0005539564 nova_compute[226295]: 2025-11-29 09:17:51.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:17:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:51.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:51 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Nov 29 04:17:51 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/780921308' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 29 04:17:52 np0005539564 ovs-appctl[325602]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 29 04:17:52 np0005539564 ovs-appctl[325608]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 29 04:17:52 np0005539564 ovs-appctl[325614]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 29 04:17:52 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 04:17:52 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/494945659' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 04:17:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:53.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Nov 29 04:17:53 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3032159976' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 29 04:17:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:17:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Nov 29 04:17:53 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4165202095' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 29 04:17:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:53.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:53 np0005539564 nova_compute[226295]: 2025-11-29 09:17:53.741 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:54 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 04:17:54 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1723737267' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 04:17:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:55.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Nov 29 04:17:55 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3645652316' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 29 04:17:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Nov 29 04:17:55 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1108924346' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 29 04:17:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:17:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:55.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:17:55 np0005539564 podman[326582]: 2025-11-29 09:17:55.518086243 +0000 UTC m=+0.073242075 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:17:55 np0005539564 podman[326583]: 2025-11-29 09:17:55.540593392 +0000 UTC m=+0.083933174 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 04:17:55 np0005539564 podman[326579]: 2025-11-29 09:17:55.546730878 +0000 UTC m=+0.102136917 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 04:17:55 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Nov 29 04:17:55 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3006706261' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 29 04:17:56 np0005539564 nova_compute[226295]: 2025-11-29 09:17:56.124 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:56 np0005539564 nova_compute[226295]: 2025-11-29 09:17:56.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:17:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:57.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:57 np0005539564 nova_compute[226295]: 2025-11-29 09:17:57.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:17:57 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Nov 29 04:17:57 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2541776718' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 29 04:17:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:57.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Nov 29 04:17:58 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2025109886' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 29 04:17:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:17:58 np0005539564 nova_compute[226295]: 2025-11-29 09:17:58.745 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Nov 29 04:17:58 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1138314140' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 29 04:17:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:59.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:17:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:59.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:59 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Nov 29 04:17:59 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3906095358' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 29 04:18:00 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Nov 29 04:18:00 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1888875257' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 29 04:18:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:01.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:01 np0005539564 nova_compute[226295]: 2025-11-29 09:18:01.141 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:01 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Nov 29 04:18:01 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1843533178' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 29 04:18:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:01.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:02 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 04:18:02 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2818360036' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 04:18:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:03.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Nov 29 04:18:03 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4166184950' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 29 04:18:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:18:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:03.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:03 np0005539564 nova_compute[226295]: 2025-11-29 09:18:03.749 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:18:03.801 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:18:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:18:03.801 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:18:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:18:03.802 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:18:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 04:18:04 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2795749512' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 04:18:04 np0005539564 nova_compute[226295]: 2025-11-29 09:18:04.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:18:04 np0005539564 nova_compute[226295]: 2025-11-29 09:18:04.380 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:18:04 np0005539564 nova_compute[226295]: 2025-11-29 09:18:04.381 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:18:04 np0005539564 nova_compute[226295]: 2025-11-29 09:18:04.381 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:18:04 np0005539564 nova_compute[226295]: 2025-11-29 09:18:04.381 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:18:04 np0005539564 nova_compute[226295]: 2025-11-29 09:18:04.381 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:18:04 np0005539564 virtqemud[225880]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 29 04:18:04 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:18:04 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3785548121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:18:04 np0005539564 nova_compute[226295]: 2025-11-29 09:18:04.897 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:18:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:05.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:05 np0005539564 nova_compute[226295]: 2025-11-29 09:18:05.092 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:18:05 np0005539564 nova_compute[226295]: 2025-11-29 09:18:05.093 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3997MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:18:05 np0005539564 nova_compute[226295]: 2025-11-29 09:18:05.094 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:18:05 np0005539564 nova_compute[226295]: 2025-11-29 09:18:05.094 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:18:05 np0005539564 systemd[1]: Starting Time & Date Service...
Nov 29 04:18:05 np0005539564 nova_compute[226295]: 2025-11-29 09:18:05.504 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:18:05 np0005539564 nova_compute[226295]: 2025-11-29 09:18:05.505 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:18:05 np0005539564 systemd[1]: Started Time & Date Service.
Nov 29 04:18:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:05.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:05 np0005539564 nova_compute[226295]: 2025-11-29 09:18:05.540 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:18:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:18:06 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/267296023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:18:06 np0005539564 nova_compute[226295]: 2025-11-29 09:18:06.036 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:18:06 np0005539564 nova_compute[226295]: 2025-11-29 09:18:06.044 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:18:06 np0005539564 nova_compute[226295]: 2025-11-29 09:18:06.067 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:18:06 np0005539564 nova_compute[226295]: 2025-11-29 09:18:06.070 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:18:06 np0005539564 nova_compute[226295]: 2025-11-29 09:18:06.070 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.975s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:18:06 np0005539564 nova_compute[226295]: 2025-11-29 09:18:06.142 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:07.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:07.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:08 np0005539564 nova_compute[226295]: 2025-11-29 09:18:08.070 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:18:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:18:08 np0005539564 nova_compute[226295]: 2025-11-29 09:18:08.758 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:09.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:09.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #196. Immutable memtables: 0.
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:18:10.297097) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 196
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407890297135, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 1274, "num_deletes": 251, "total_data_size": 2247664, "memory_usage": 2272272, "flush_reason": "Manual Compaction"}
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #197: started
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407890321268, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 197, "file_size": 1482108, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 95173, "largest_seqno": 96442, "table_properties": {"data_size": 1475611, "index_size": 3314, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 19076, "raw_average_key_size": 22, "raw_value_size": 1461003, "raw_average_value_size": 1743, "num_data_blocks": 143, "num_entries": 838, "num_filter_entries": 838, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407828, "oldest_key_time": 1764407828, "file_creation_time": 1764407890, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 24234 microseconds, and 4847 cpu microseconds.
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:18:10.321330) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #197: 1482108 bytes OK
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:18:10.321356) [db/memtable_list.cc:519] [default] Level-0 commit table #197 started
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:18:10.323613) [db/memtable_list.cc:722] [default] Level-0 commit table #197: memtable #1 done
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:18:10.323630) EVENT_LOG_v1 {"time_micros": 1764407890323624, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:18:10.323652) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 2240571, prev total WAL file size 2240571, number of live WAL files 2.
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000193.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:18:10.324451) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [197(1447KB)], [195(12MB)]
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407890324524, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [197], "files_L6": [195], "score": -1, "input_data_size": 14242872, "oldest_snapshot_seqno": -1}
Nov 29 04:18:10 np0005539564 nova_compute[226295]: 2025-11-29 09:18:10.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:18:10 np0005539564 nova_compute[226295]: 2025-11-29 09:18:10.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #198: 12091 keys, 12290431 bytes, temperature: kUnknown
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407890484317, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 198, "file_size": 12290431, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12216970, "index_size": 42116, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30277, "raw_key_size": 321158, "raw_average_key_size": 26, "raw_value_size": 12010201, "raw_average_value_size": 993, "num_data_blocks": 1580, "num_entries": 12091, "num_filter_entries": 12091, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764407890, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 198, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:18:10.484589) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 12290431 bytes
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:18:10.486669) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 89.1 rd, 76.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 12.2 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(17.9) write-amplify(8.3) OK, records in: 12606, records dropped: 515 output_compression: NoCompression
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:18:10.486685) EVENT_LOG_v1 {"time_micros": 1764407890486677, "job": 126, "event": "compaction_finished", "compaction_time_micros": 159899, "compaction_time_cpu_micros": 59735, "output_level": 6, "num_output_files": 1, "total_output_size": 12290431, "num_input_records": 12606, "num_output_records": 12091, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407890487007, "job": 126, "event": "table_file_deletion", "file_number": 197}
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000195.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407890488689, "job": 126, "event": "table_file_deletion", "file_number": 195}
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:18:10.324361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:18:10.488717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:18:10.488727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:18:10.488729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:18:10.488730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:18:10 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:18:10.488732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:18:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:11.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:11 np0005539564 nova_compute[226295]: 2025-11-29 09:18:11.147 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:11.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:13.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:18:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:13.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:13 np0005539564 nova_compute[226295]: 2025-11-29 09:18:13.759 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:15.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:18:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:15.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:18:16 np0005539564 nova_compute[226295]: 2025-11-29 09:18:16.151 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:17.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:17.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:18 np0005539564 nova_compute[226295]: 2025-11-29 09:18:18.375 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:18:18 np0005539564 nova_compute[226295]: 2025-11-29 09:18:18.375 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 04:18:18 np0005539564 nova_compute[226295]: 2025-11-29 09:18:18.427 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 04:18:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:18:18 np0005539564 nova_compute[226295]: 2025-11-29 09:18:18.760 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:19.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:18:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:19.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:18:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:18:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:21.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:18:21 np0005539564 nova_compute[226295]: 2025-11-29 09:18:21.155 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:21.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:23.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:18:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:23.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:23 np0005539564 nova_compute[226295]: 2025-11-29 09:18:23.762 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:25.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:25.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:26 np0005539564 podman[327705]: 2025-11-29 09:18:26.043628329 +0000 UTC m=+0.086398650 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 04:18:26 np0005539564 podman[327704]: 2025-11-29 09:18:26.044490373 +0000 UTC m=+0.086214776 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 04:18:26 np0005539564 podman[327703]: 2025-11-29 09:18:26.07949463 +0000 UTC m=+0.121765188 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 04:18:26 np0005539564 nova_compute[226295]: 2025-11-29 09:18:26.156 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:27.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:27.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:18:28 np0005539564 nova_compute[226295]: 2025-11-29 09:18:28.763 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:29.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 04:18:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:18:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:18:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 04:18:29 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:18:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:18:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:29.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:18:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:31.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:31 np0005539564 nova_compute[226295]: 2025-11-29 09:18:31.161 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:31.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:32 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:18:32 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:18:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:33.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:18:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:33.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:33 np0005539564 nova_compute[226295]: 2025-11-29 09:18:33.765 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:35.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:35 np0005539564 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 04:18:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:35.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:35 np0005539564 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 04:18:36 np0005539564 nova_compute[226295]: 2025-11-29 09:18:36.183 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:36 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:18:36 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:18:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:18:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:37.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:18:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:37.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:18:38 np0005539564 nova_compute[226295]: 2025-11-29 09:18:38.795 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:18:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:39.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:18:39 np0005539564 nova_compute[226295]: 2025-11-29 09:18:39.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:18:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:39.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:40 np0005539564 nova_compute[226295]: 2025-11-29 09:18:40.537 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:18:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:41.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:41 np0005539564 nova_compute[226295]: 2025-11-29 09:18:41.188 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:41.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:43.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:18:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:43.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:43 np0005539564 nova_compute[226295]: 2025-11-29 09:18:43.798 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:45.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:45.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:46 np0005539564 nova_compute[226295]: 2025-11-29 09:18:46.194 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:46 np0005539564 nova_compute[226295]: 2025-11-29 09:18:46.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:18:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:47.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:47 np0005539564 nova_compute[226295]: 2025-11-29 09:18:47.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:18:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:47.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:47 np0005539564 systemd[1]: session-60.scope: Deactivated successfully.
Nov 29 04:18:47 np0005539564 systemd[1]: session-60.scope: Consumed 2min 48.534s CPU time, 1.0G memory peak, read 452.6M from disk, written 307.7M to disk.
Nov 29 04:18:47 np0005539564 systemd-logind[785]: Session 60 logged out. Waiting for processes to exit.
Nov 29 04:18:47 np0005539564 systemd-logind[785]: Removed session 60.
Nov 29 04:18:48 np0005539564 systemd-logind[785]: New session 61 of user zuul.
Nov 29 04:18:48 np0005539564 systemd[1]: Started Session 61 of User zuul.
Nov 29 04:18:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:18:48 np0005539564 systemd[1]: session-61.scope: Deactivated successfully.
Nov 29 04:18:48 np0005539564 systemd-logind[785]: Session 61 logged out. Waiting for processes to exit.
Nov 29 04:18:48 np0005539564 systemd-logind[785]: Removed session 61.
Nov 29 04:18:48 np0005539564 systemd-logind[785]: New session 62 of user zuul.
Nov 29 04:18:48 np0005539564 systemd[1]: Started Session 62 of User zuul.
Nov 29 04:18:48 np0005539564 nova_compute[226295]: 2025-11-29 09:18:48.799 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:48 np0005539564 systemd[1]: session-62.scope: Deactivated successfully.
Nov 29 04:18:48 np0005539564 systemd-logind[785]: Session 62 logged out. Waiting for processes to exit.
Nov 29 04:18:48 np0005539564 systemd-logind[785]: Removed session 62.
Nov 29 04:18:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:49.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:49 np0005539564 nova_compute[226295]: 2025-11-29 09:18:49.339 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:18:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:49.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:50 np0005539564 nova_compute[226295]: 2025-11-29 09:18:50.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:18:50 np0005539564 nova_compute[226295]: 2025-11-29 09:18:50.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:18:50 np0005539564 nova_compute[226295]: 2025-11-29 09:18:50.345 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:18:50 np0005539564 nova_compute[226295]: 2025-11-29 09:18:50.407 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:18:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:51.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:51 np0005539564 nova_compute[226295]: 2025-11-29 09:18:51.197 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:51.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:53.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:53 np0005539564 nova_compute[226295]: 2025-11-29 09:18:53.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:18:53 np0005539564 nova_compute[226295]: 2025-11-29 09:18:53.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:18:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:18:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:53.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:53 np0005539564 nova_compute[226295]: 2025-11-29 09:18:53.841 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:55.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:55.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:56 np0005539564 nova_compute[226295]: 2025-11-29 09:18:56.230 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:56 np0005539564 podman[328016]: 2025-11-29 09:18:56.506300377 +0000 UTC m=+0.056249685 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 04:18:56 np0005539564 podman[328015]: 2025-11-29 09:18:56.547811957 +0000 UTC m=+0.092975444 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:18:56 np0005539564 podman[328014]: 2025-11-29 09:18:56.569965711 +0000 UTC m=+0.125658985 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:18:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:57.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:57 np0005539564 nova_compute[226295]: 2025-11-29 09:18:57.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:18:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:57.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:58 np0005539564 nova_compute[226295]: 2025-11-29 09:18:58.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:18:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:18:58 np0005539564 nova_compute[226295]: 2025-11-29 09:18:58.844 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:59.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:18:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:59.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:01.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:01 np0005539564 nova_compute[226295]: 2025-11-29 09:19:01.233 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:19:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:01.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:19:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:03.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:19:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:03.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:19:03.802 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:19:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:19:03.803 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:19:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:19:03.803 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:19:03 np0005539564 nova_compute[226295]: 2025-11-29 09:19:03.846 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:05.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:05 np0005539564 nova_compute[226295]: 2025-11-29 09:19:05.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:19:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:05.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:05 np0005539564 nova_compute[226295]: 2025-11-29 09:19:05.702 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:19:05 np0005539564 nova_compute[226295]: 2025-11-29 09:19:05.702 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:19:05 np0005539564 nova_compute[226295]: 2025-11-29 09:19:05.703 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:19:05 np0005539564 nova_compute[226295]: 2025-11-29 09:19:05.703 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:19:05 np0005539564 nova_compute[226295]: 2025-11-29 09:19:05.703 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:19:06 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:19:06 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/607609603' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:19:06 np0005539564 nova_compute[226295]: 2025-11-29 09:19:06.167 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:19:06 np0005539564 nova_compute[226295]: 2025-11-29 09:19:06.237 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:06 np0005539564 nova_compute[226295]: 2025-11-29 09:19:06.402 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:19:06 np0005539564 nova_compute[226295]: 2025-11-29 09:19:06.403 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4119MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:19:06 np0005539564 nova_compute[226295]: 2025-11-29 09:19:06.404 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:19:06 np0005539564 nova_compute[226295]: 2025-11-29 09:19:06.404 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:19:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:07.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:07.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:19:08 np0005539564 nova_compute[226295]: 2025-11-29 09:19:08.469 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:19:08 np0005539564 nova_compute[226295]: 2025-11-29 09:19:08.469 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:19:08 np0005539564 nova_compute[226295]: 2025-11-29 09:19:08.597 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:19:08 np0005539564 nova_compute[226295]: 2025-11-29 09:19:08.848 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:19:09 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/968756917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:19:09 np0005539564 nova_compute[226295]: 2025-11-29 09:19:09.079 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:19:09 np0005539564 nova_compute[226295]: 2025-11-29 09:19:09.088 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:19:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:09.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:09 np0005539564 nova_compute[226295]: 2025-11-29 09:19:09.229 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:19:09 np0005539564 nova_compute[226295]: 2025-11-29 09:19:09.232 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:19:09 np0005539564 nova_compute[226295]: 2025-11-29 09:19:09.232 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:19:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:09.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:11.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:11 np0005539564 nova_compute[226295]: 2025-11-29 09:19:11.241 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:11.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:12 np0005539564 nova_compute[226295]: 2025-11-29 09:19:12.233 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:19:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:13.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:19:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:19:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:13.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:19:13 np0005539564 nova_compute[226295]: 2025-11-29 09:19:13.849 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:15.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:15.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:16 np0005539564 nova_compute[226295]: 2025-11-29 09:19:16.246 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:17.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:17.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:19:18 np0005539564 nova_compute[226295]: 2025-11-29 09:19:18.507 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:19:18 np0005539564 nova_compute[226295]: 2025-11-29 09:19:18.853 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:19.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:19.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:21.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:21 np0005539564 nova_compute[226295]: 2025-11-29 09:19:21.250 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:21.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:23.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:19:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:23.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:23 np0005539564 nova_compute[226295]: 2025-11-29 09:19:23.855 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:25.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:25.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:26 np0005539564 nova_compute[226295]: 2025-11-29 09:19:26.255 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:27.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:27 np0005539564 podman[328123]: 2025-11-29 09:19:27.528888868 +0000 UTC m=+0.075335684 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251125)
Nov 29 04:19:27 np0005539564 podman[328124]: 2025-11-29 09:19:27.537005078 +0000 UTC m=+0.081729408 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:19:27 np0005539564 podman[328122]: 2025-11-29 09:19:27.550756623 +0000 UTC m=+0.103867921 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 04:19:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:27.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 04:19:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/451159059' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 04:19:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 04:19:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/451159059' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 04:19:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:19:28 np0005539564 nova_compute[226295]: 2025-11-29 09:19:28.857 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:29.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:29.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:31.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:31 np0005539564 nova_compute[226295]: 2025-11-29 09:19:31.258 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:31.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:33.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:19:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:33.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:33 np0005539564 nova_compute[226295]: 2025-11-29 09:19:33.859 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:35.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:35.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:36 np0005539564 nova_compute[226295]: 2025-11-29 09:19:36.296 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:37.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:19:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:19:37 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:19:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:37.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:19:38 np0005539564 nova_compute[226295]: 2025-11-29 09:19:38.862 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:39.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:39.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:41.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:41 np0005539564 nova_compute[226295]: 2025-11-29 09:19:41.299 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:41.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:41 np0005539564 nova_compute[226295]: 2025-11-29 09:19:41.912 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:19:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:43.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:19:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:43.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:43 np0005539564 nova_compute[226295]: 2025-11-29 09:19:43.865 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:45.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:45.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:46 np0005539564 nova_compute[226295]: 2025-11-29 09:19:46.303 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:47.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:19:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:47.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:19:48 np0005539564 nova_compute[226295]: 2025-11-29 09:19:48.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:19:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:19:48 np0005539564 nova_compute[226295]: 2025-11-29 09:19:48.868 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:49.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:49 np0005539564 nova_compute[226295]: 2025-11-29 09:19:49.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:19:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:49.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:50 np0005539564 nova_compute[226295]: 2025-11-29 09:19:50.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:19:50 np0005539564 nova_compute[226295]: 2025-11-29 09:19:50.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:19:50 np0005539564 nova_compute[226295]: 2025-11-29 09:19:50.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:19:50 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:19:50 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:19:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:51.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:51 np0005539564 nova_compute[226295]: 2025-11-29 09:19:51.308 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:51 np0005539564 nova_compute[226295]: 2025-11-29 09:19:51.466 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:19:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:51.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:53.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:53 np0005539564 nova_compute[226295]: 2025-11-29 09:19:53.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:19:53 np0005539564 nova_compute[226295]: 2025-11-29 09:19:53.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:19:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:19:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:53.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:53 np0005539564 nova_compute[226295]: 2025-11-29 09:19:53.874 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:55.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:55.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:56 np0005539564 nova_compute[226295]: 2025-11-29 09:19:56.313 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:57.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:57.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:19:58 np0005539564 podman[328377]: 2025-11-29 09:19:58.536271462 +0000 UTC m=+0.074093920 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 04:19:58 np0005539564 podman[328376]: 2025-11-29 09:19:58.565396386 +0000 UTC m=+0.107795879 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd)
Nov 29 04:19:58 np0005539564 podman[328375]: 2025-11-29 09:19:58.588286939 +0000 UTC m=+0.137662071 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:19:58 np0005539564 nova_compute[226295]: 2025-11-29 09:19:58.875 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:59.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:59 np0005539564 nova_compute[226295]: 2025-11-29 09:19:59.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:19:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:19:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:59.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:00 np0005539564 nova_compute[226295]: 2025-11-29 09:20:00.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:20:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:01.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:01 np0005539564 nova_compute[226295]: 2025-11-29 09:20:01.317 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:01.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:03.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:03 np0005539564 ceph-mon[81769]: overall HEALTH_OK
Nov 29 04:20:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:20:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:03.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:20:03.804 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:20:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:20:03.805 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:20:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:20:03.805 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:20:03 np0005539564 nova_compute[226295]: 2025-11-29 09:20:03.878 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:05.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:05.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:06 np0005539564 nova_compute[226295]: 2025-11-29 09:20:06.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:20:06 np0005539564 nova_compute[226295]: 2025-11-29 09:20:06.349 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:07.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:07.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:20:08 np0005539564 nova_compute[226295]: 2025-11-29 09:20:08.843 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:20:08 np0005539564 nova_compute[226295]: 2025-11-29 09:20:08.844 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:20:08 np0005539564 nova_compute[226295]: 2025-11-29 09:20:08.844 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:20:08 np0005539564 nova_compute[226295]: 2025-11-29 09:20:08.844 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:20:08 np0005539564 nova_compute[226295]: 2025-11-29 09:20:08.844 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:20:08 np0005539564 nova_compute[226295]: 2025-11-29 09:20:08.882 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:09.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:20:09 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1075469183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:20:09 np0005539564 nova_compute[226295]: 2025-11-29 09:20:09.290 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:20:09 np0005539564 nova_compute[226295]: 2025-11-29 09:20:09.528 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:20:09 np0005539564 nova_compute[226295]: 2025-11-29 09:20:09.529 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4129MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:20:09 np0005539564 nova_compute[226295]: 2025-11-29 09:20:09.530 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:20:09 np0005539564 nova_compute[226295]: 2025-11-29 09:20:09.530 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:20:09 np0005539564 nova_compute[226295]: 2025-11-29 09:20:09.598 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:20:09 np0005539564 nova_compute[226295]: 2025-11-29 09:20:09.598 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:20:09 np0005539564 nova_compute[226295]: 2025-11-29 09:20:09.615 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing inventories for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 04:20:09 np0005539564 nova_compute[226295]: 2025-11-29 09:20:09.643 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating ProviderTree inventory for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 04:20:09 np0005539564 nova_compute[226295]: 2025-11-29 09:20:09.643 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Updating inventory in ProviderTree for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 04:20:09 np0005539564 nova_compute[226295]: 2025-11-29 09:20:09.662 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing aggregate associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 04:20:09 np0005539564 nova_compute[226295]: 2025-11-29 09:20:09.685 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Refreshing trait associations for resource provider ea190a43-1246-44b8-8f8b-a61b155a1d3b, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 04:20:09 np0005539564 nova_compute[226295]: 2025-11-29 09:20:09.715 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:20:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:09.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:10 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:20:10 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1317951223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:20:10 np0005539564 nova_compute[226295]: 2025-11-29 09:20:10.185 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:20:10 np0005539564 nova_compute[226295]: 2025-11-29 09:20:10.190 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:20:10 np0005539564 nova_compute[226295]: 2025-11-29 09:20:10.212 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:20:10 np0005539564 nova_compute[226295]: 2025-11-29 09:20:10.213 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:20:10 np0005539564 nova_compute[226295]: 2025-11-29 09:20:10.213 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:20:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:11.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:11 np0005539564 nova_compute[226295]: 2025-11-29 09:20:11.354 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:11.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:12 np0005539564 nova_compute[226295]: 2025-11-29 09:20:12.216 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:20:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:13.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:20:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:13.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:13 np0005539564 nova_compute[226295]: 2025-11-29 09:20:13.883 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:15.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:15.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:16 np0005539564 nova_compute[226295]: 2025-11-29 09:20:16.402 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:17.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:17.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:20:18 np0005539564 nova_compute[226295]: 2025-11-29 09:20:18.899 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:20:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:19.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:20:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:19.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:21.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:21 np0005539564 nova_compute[226295]: 2025-11-29 09:20:21.409 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:21.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:23.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:20:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:23.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:23 np0005539564 nova_compute[226295]: 2025-11-29 09:20:23.903 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:25.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:25.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:26 np0005539564 nova_compute[226295]: 2025-11-29 09:20:26.414 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:20:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:27.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:20:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:27.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:20:28 np0005539564 nova_compute[226295]: 2025-11-29 09:20:28.906 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:20:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:29.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:20:29 np0005539564 podman[328484]: 2025-11-29 09:20:29.551444442 +0000 UTC m=+0.090304403 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:20:29 np0005539564 podman[328483]: 2025-11-29 09:20:29.579258509 +0000 UTC m=+0.131725230 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 04:20:29 np0005539564 podman[328485]: 2025-11-29 09:20:29.585992013 +0000 UTC m=+0.120255828 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 04:20:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:20:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:29.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:20:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:31.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:31 np0005539564 nova_compute[226295]: 2025-11-29 09:20:31.419 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:31.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:33.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:20:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:33.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:33 np0005539564 nova_compute[226295]: 2025-11-29 09:20:33.912 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:35.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:35.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:36 np0005539564 nova_compute[226295]: 2025-11-29 09:20:36.422 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 04:20:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:37.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 04:20:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:20:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:37.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:20:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:20:38 np0005539564 nova_compute[226295]: 2025-11-29 09:20:38.913 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:20:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:39.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:20:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:39.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:41.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:41 np0005539564 nova_compute[226295]: 2025-11-29 09:20:41.462 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:41.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:42 np0005539564 nova_compute[226295]: 2025-11-29 09:20:42.337 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:20:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:43.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:20:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:43.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:43 np0005539564 nova_compute[226295]: 2025-11-29 09:20:43.952 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:45.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:45.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:20:46 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 71K writes, 277K keys, 71K commit groups, 1.0 writes per commit group, ingest: 0.27 GB, 0.04 MB/s#012Cumulative WAL: 71K writes, 26K syncs, 2.68 writes per sync, written: 0.27 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1443 writes, 4639 keys, 1443 commit groups, 1.0 writes per commit group, ingest: 3.93 MB, 0.01 MB/s#012Interval WAL: 1443 writes, 626 syncs, 2.31 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 04:20:46 np0005539564 nova_compute[226295]: 2025-11-29 09:20:46.562 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:47.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:47.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:47 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #199. Immutable memtables: 0.
Nov 29 04:20:47 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:20:47.978871) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:20:47 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 199
Nov 29 04:20:47 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408047978971, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 1676, "num_deletes": 256, "total_data_size": 4001370, "memory_usage": 4047392, "flush_reason": "Manual Compaction"}
Nov 29 04:20:47 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #200: started
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408048025909, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 200, "file_size": 2629222, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 96447, "largest_seqno": 98118, "table_properties": {"data_size": 2622190, "index_size": 4102, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14395, "raw_average_key_size": 19, "raw_value_size": 2608156, "raw_average_value_size": 3582, "num_data_blocks": 181, "num_entries": 728, "num_filter_entries": 728, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407891, "oldest_key_time": 1764407891, "file_creation_time": 1764408047, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 47171 microseconds, and 7799 cpu microseconds.
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:20:48.026040) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #200: 2629222 bytes OK
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:20:48.026068) [db/memtable_list.cc:519] [default] Level-0 commit table #200 started
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:20:48.028451) [db/memtable_list.cc:722] [default] Level-0 commit table #200: memtable #1 done
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:20:48.028469) EVENT_LOG_v1 {"time_micros": 1764408048028462, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:20:48.028493) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 3993810, prev total WAL file size 3993810, number of live WAL files 2.
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000196.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:20:48.029844) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373734' seq:72057594037927935, type:22 .. '6C6F676D0034303237' seq:0, type:0; will stop at (end)
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [200(2567KB)], [198(11MB)]
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408048030034, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [200], "files_L6": [198], "score": -1, "input_data_size": 14919653, "oldest_snapshot_seqno": -1}
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #201: 12292 keys, 14789774 bytes, temperature: kUnknown
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408048215233, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 201, "file_size": 14789774, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14712186, "index_size": 45731, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30789, "raw_key_size": 326276, "raw_average_key_size": 26, "raw_value_size": 14499254, "raw_average_value_size": 1179, "num_data_blocks": 1733, "num_entries": 12292, "num_filter_entries": 12292, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764408048, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 201, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:20:48.215655) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 14789774 bytes
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:20:48.218011) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.5 rd, 79.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 11.7 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(11.3) write-amplify(5.6) OK, records in: 12819, records dropped: 527 output_compression: NoCompression
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:20:48.218064) EVENT_LOG_v1 {"time_micros": 1764408048218044, "job": 128, "event": "compaction_finished", "compaction_time_micros": 185317, "compaction_time_cpu_micros": 42578, "output_level": 6, "num_output_files": 1, "total_output_size": 14789774, "num_input_records": 12819, "num_output_records": 12292, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408048218875, "job": 128, "event": "table_file_deletion", "file_number": 200}
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000198.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408048221363, "job": 128, "event": "table_file_deletion", "file_number": 198}
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:20:48.029698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:20:48.221406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:20:48.221413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:20:48.221415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:20:48.221441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:20:48.221443) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:20:48 np0005539564 nova_compute[226295]: 2025-11-29 09:20:48.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:20:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:20:48 np0005539564 nova_compute[226295]: 2025-11-29 09:20:48.954 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:49.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:49 np0005539564 nova_compute[226295]: 2025-11-29 09:20:49.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:20:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:49.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:50 np0005539564 nova_compute[226295]: 2025-11-29 09:20:50.337 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:20:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:51.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:51 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 04:20:51 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:20:51 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:20:51 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:20:51 np0005539564 nova_compute[226295]: 2025-11-29 09:20:51.567 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:51.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:52 np0005539564 nova_compute[226295]: 2025-11-29 09:20:52.648 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:20:52 np0005539564 nova_compute[226295]: 2025-11-29 09:20:52.649 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:20:52 np0005539564 nova_compute[226295]: 2025-11-29 09:20:52.649 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:20:52 np0005539564 nova_compute[226295]: 2025-11-29 09:20:52.675 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:20:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:53.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:20:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:53.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:53 np0005539564 nova_compute[226295]: 2025-11-29 09:20:53.967 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:55.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:55 np0005539564 nova_compute[226295]: 2025-11-29 09:20:55.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:20:55 np0005539564 nova_compute[226295]: 2025-11-29 09:20:55.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:20:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:55.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:56 np0005539564 nova_compute[226295]: 2025-11-29 09:20:56.569 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:57.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:57.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:20:58 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:20:58 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:20:58 np0005539564 nova_compute[226295]: 2025-11-29 09:20:58.969 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:59.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:59 np0005539564 nova_compute[226295]: 2025-11-29 09:20:59.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:20:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:20:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:59.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:00 np0005539564 nova_compute[226295]: 2025-11-29 09:21:00.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:21:00 np0005539564 podman[328743]: 2025-11-29 09:21:00.52665137 +0000 UTC m=+0.066013300 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 04:21:00 np0005539564 podman[328742]: 2025-11-29 09:21:00.564885211 +0000 UTC m=+0.113987156 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 04:21:00 np0005539564 podman[328741]: 2025-11-29 09:21:00.569341133 +0000 UTC m=+0.116866676 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 04:21:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:01.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:01 np0005539564 nova_compute[226295]: 2025-11-29 09:21:01.572 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:01.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:03.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:21:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:21:03.806 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:21:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:21:03.806 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:21:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:21:03.806 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:21:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:03.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:03 np0005539564 nova_compute[226295]: 2025-11-29 09:21:03.971 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:21:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:05.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:21:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:05.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:06 np0005539564 nova_compute[226295]: 2025-11-29 09:21:06.576 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:07.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:07 np0005539564 nova_compute[226295]: 2025-11-29 09:21:07.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:21:07 np0005539564 nova_compute[226295]: 2025-11-29 09:21:07.383 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:21:07 np0005539564 nova_compute[226295]: 2025-11-29 09:21:07.384 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:21:07 np0005539564 nova_compute[226295]: 2025-11-29 09:21:07.384 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:21:07 np0005539564 nova_compute[226295]: 2025-11-29 09:21:07.384 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:21:07 np0005539564 nova_compute[226295]: 2025-11-29 09:21:07.385 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:21:07 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:21:07 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/100528066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:21:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:07.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:07 np0005539564 nova_compute[226295]: 2025-11-29 09:21:07.862 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:21:08 np0005539564 nova_compute[226295]: 2025-11-29 09:21:08.067 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:21:08 np0005539564 nova_compute[226295]: 2025-11-29 09:21:08.068 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4125MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:21:08 np0005539564 nova_compute[226295]: 2025-11-29 09:21:08.069 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:21:08 np0005539564 nova_compute[226295]: 2025-11-29 09:21:08.069 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:21:08 np0005539564 nova_compute[226295]: 2025-11-29 09:21:08.166 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:21:08 np0005539564 nova_compute[226295]: 2025-11-29 09:21:08.166 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:21:08 np0005539564 nova_compute[226295]: 2025-11-29 09:21:08.182 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:21:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:21:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:21:08 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3737279369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:21:08 np0005539564 nova_compute[226295]: 2025-11-29 09:21:08.620 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:21:08 np0005539564 nova_compute[226295]: 2025-11-29 09:21:08.628 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:21:08 np0005539564 nova_compute[226295]: 2025-11-29 09:21:08.667 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:21:08 np0005539564 nova_compute[226295]: 2025-11-29 09:21:08.669 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:21:08 np0005539564 nova_compute[226295]: 2025-11-29 09:21:08.670 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:21:08 np0005539564 nova_compute[226295]: 2025-11-29 09:21:08.973 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:09.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:09 np0005539564 nova_compute[226295]: 2025-11-29 09:21:09.670 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:21:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:09.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:11.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:11 np0005539564 nova_compute[226295]: 2025-11-29 09:21:11.579 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:11.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:13.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:21:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:13.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:13 np0005539564 nova_compute[226295]: 2025-11-29 09:21:13.975 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:21:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:15.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:21:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:15.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:16 np0005539564 nova_compute[226295]: 2025-11-29 09:21:16.582 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:17.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:17.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:21:18 np0005539564 nova_compute[226295]: 2025-11-29 09:21:18.980 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:19.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:19.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:21.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:21 np0005539564 nova_compute[226295]: 2025-11-29 09:21:21.586 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:21.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:23.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:21:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:21:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:23.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:21:23 np0005539564 nova_compute[226295]: 2025-11-29 09:21:23.983 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:25.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:25.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:26 np0005539564 nova_compute[226295]: 2025-11-29 09:21:26.588 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:27.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:27.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:21:28 np0005539564 nova_compute[226295]: 2025-11-29 09:21:28.986 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:29.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:29.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:31.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:31 np0005539564 podman[328884]: 2025-11-29 09:21:31.543063764 +0000 UTC m=+0.078797168 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2)
Nov 29 04:21:31 np0005539564 podman[328885]: 2025-11-29 09:21:31.55502135 +0000 UTC m=+0.083604049 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 04:21:31 np0005539564 nova_compute[226295]: 2025-11-29 09:21:31.589 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:31 np0005539564 podman[328883]: 2025-11-29 09:21:31.591259398 +0000 UTC m=+0.130176869 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:21:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:31.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:33.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:21:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:33.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:33 np0005539564 nova_compute[226295]: 2025-11-29 09:21:33.992 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:21:34 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.0 total, 600.0 interval#012Cumulative writes: 19K writes, 98K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.03 MB/s#012Cumulative WAL: 19K writes, 19K syncs, 1.00 writes per sync, written: 0.20 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1541 writes, 7617 keys, 1541 commit groups, 1.0 writes per commit group, ingest: 16.25 MB, 0.03 MB/s#012Interval WAL: 1541 writes, 1541 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     26.0      4.75              0.50        64    0.074       0      0       0.0       0.0#012  L6      1/0   14.10 MB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   5.6     56.4     48.7     14.11              2.48        63    0.224    548K    33K       0.0       0.0#012 Sum      1/0   14.10 MB   0.0      0.8     0.1      0.7       0.8      0.1       0.0   6.6     42.2     43.0     18.86              2.98       127    0.148    548K    33K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.1     74.8     77.2      0.97              0.29        10    0.097     61K   2557       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   0.0     56.4     48.7     14.11              2.48        63    0.224    548K    33K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     26.0      4.75              0.50        63    0.075       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 7800.0 total, 600.0 interval#012Flush(GB): cumulative 0.121, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.79 GB write, 0.10 MB/s write, 0.78 GB read, 0.10 MB/s read, 18.9 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 1.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558dc73191f0#2 capacity: 304.00 MB usage: 89.17 MB table_size: 0 occupancy: 18446744073709551615 collections: 14 last_copies: 0 last_secs: 0.001106 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(5310,85.33 MB,28.07%) FilterBlock(127,1.47 MB,0.484923%) IndexBlock(127,2.36 MB,0.776577%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 04:21:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:35.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:35.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:36 np0005539564 nova_compute[226295]: 2025-11-29 09:21:36.593 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:37.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:37.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:21:39 np0005539564 nova_compute[226295]: 2025-11-29 09:21:39.033 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:39.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:21:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:39.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:21:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:41.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:41 np0005539564 nova_compute[226295]: 2025-11-29 09:21:41.596 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:41.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:42 np0005539564 nova_compute[226295]: 2025-11-29 09:21:42.336 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:21:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:43.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:21:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:43.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:44 np0005539564 nova_compute[226295]: 2025-11-29 09:21:44.033 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:45.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:45.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:46 np0005539564 nova_compute[226295]: 2025-11-29 09:21:46.599 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:47.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:47.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:21:49 np0005539564 nova_compute[226295]: 2025-11-29 09:21:49.038 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:49.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:21:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:49.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:21:50 np0005539564 nova_compute[226295]: 2025-11-29 09:21:50.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:21:50 np0005539564 nova_compute[226295]: 2025-11-29 09:21:50.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:21:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:51.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:51 np0005539564 nova_compute[226295]: 2025-11-29 09:21:51.602 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:51.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:52 np0005539564 nova_compute[226295]: 2025-11-29 09:21:52.345 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:21:52 np0005539564 nova_compute[226295]: 2025-11-29 09:21:52.346 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:21:52 np0005539564 nova_compute[226295]: 2025-11-29 09:21:52.346 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:21:52 np0005539564 nova_compute[226295]: 2025-11-29 09:21:52.374 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:21:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:21:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:53.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:21:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:21:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:21:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:53.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:21:54 np0005539564 nova_compute[226295]: 2025-11-29 09:21:54.069 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:55.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:55.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:56 np0005539564 nova_compute[226295]: 2025-11-29 09:21:56.629 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:57 np0005539564 nova_compute[226295]: 2025-11-29 09:21:57.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:21:57 np0005539564 nova_compute[226295]: 2025-11-29 09:21:57.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:21:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:57.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:57.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:21:59 np0005539564 nova_compute[226295]: 2025-11-29 09:21:59.147 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:59 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:21:59 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:21:59 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:21:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:59.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:59 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:21:59 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:59 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:59.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #202. Immutable memtables: 0.
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:22:00.247016) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 202
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408120247105, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 941, "num_deletes": 251, "total_data_size": 1913638, "memory_usage": 1946592, "flush_reason": "Manual Compaction"}
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #203: started
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408120261097, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 203, "file_size": 1262501, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 98123, "largest_seqno": 99059, "table_properties": {"data_size": 1258121, "index_size": 2031, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9610, "raw_average_key_size": 19, "raw_value_size": 1249427, "raw_average_value_size": 2560, "num_data_blocks": 90, "num_entries": 488, "num_filter_entries": 488, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764408048, "oldest_key_time": 1764408048, "file_creation_time": 1764408120, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 14128 microseconds, and 8397 cpu microseconds.
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:22:00.261151) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #203: 1262501 bytes OK
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:22:00.261178) [db/memtable_list.cc:519] [default] Level-0 commit table #203 started
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:22:00.262582) [db/memtable_list.cc:722] [default] Level-0 commit table #203: memtable #1 done
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:22:00.262604) EVENT_LOG_v1 {"time_micros": 1764408120262596, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:22:00.262626) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 1908936, prev total WAL file size 1908936, number of live WAL files 2.
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000199.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:22:00.263586) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [203(1232KB)], [201(14MB)]
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408120263641, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [203], "files_L6": [201], "score": -1, "input_data_size": 16052275, "oldest_snapshot_seqno": -1}
Nov 29 04:22:00 np0005539564 nova_compute[226295]: 2025-11-29 09:22:00.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #204: 12265 keys, 14014224 bytes, temperature: kUnknown
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408120354718, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 204, "file_size": 14014224, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13937492, "index_size": 44939, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30725, "raw_key_size": 326424, "raw_average_key_size": 26, "raw_value_size": 13725466, "raw_average_value_size": 1119, "num_data_blocks": 1696, "num_entries": 12265, "num_filter_entries": 12265, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400294, "oldest_key_time": 0, "file_creation_time": 1764408120, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b003236-2c9b-47ac-982a-c4196705f81c", "db_session_id": "LBPX4GW5MUJF8UJGE88L", "orig_file_number": 204, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:22:00.355008) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 14014224 bytes
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:22:00.356193) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.1 rd, 153.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 14.1 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(23.8) write-amplify(11.1) OK, records in: 12780, records dropped: 515 output_compression: NoCompression
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:22:00.356214) EVENT_LOG_v1 {"time_micros": 1764408120356204, "job": 130, "event": "compaction_finished", "compaction_time_micros": 91145, "compaction_time_cpu_micros": 36013, "output_level": 6, "num_output_files": 1, "total_output_size": 14014224, "num_input_records": 12780, "num_output_records": 12265, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408120356622, "job": 130, "event": "table_file_deletion", "file_number": 203}
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000201.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408120359981, "job": 130, "event": "table_file_deletion", "file_number": 201}
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:22:00.263525) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:22:00.360039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:22:00.360043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:22:00.360045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:22:00.360046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:22:00 np0005539564 ceph-mon[81769]: rocksdb: (Original Log Time 2025/11/29-09:22:00.360047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:22:01 np0005539564 nova_compute[226295]: 2025-11-29 09:22:01.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:22:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:22:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:01.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:22:01 np0005539564 nova_compute[226295]: 2025-11-29 09:22:01.631 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:01.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:02 np0005539564 podman[329094]: 2025-11-29 09:22:02.543445448 +0000 UTC m=+0.079730793 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 04:22:02 np0005539564 podman[329093]: 2025-11-29 09:22:02.569351164 +0000 UTC m=+0.112395183 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 04:22:02 np0005539564 podman[329092]: 2025-11-29 09:22:02.61142426 +0000 UTC m=+0.154746357 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:22:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:03.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:22:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:22:03.807 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:22:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:22:03.808 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:22:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:22:03.808 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:22:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:03.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:04 np0005539564 nova_compute[226295]: 2025-11-29 09:22:04.148 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:05 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:22:05 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:22:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:05.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:05.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:06 np0005539564 nova_compute[226295]: 2025-11-29 09:22:06.635 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:07.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:07.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:22:09 np0005539564 nova_compute[226295]: 2025-11-29 09:22:09.150 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:09 np0005539564 nova_compute[226295]: 2025-11-29 09:22:09.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:22:09 np0005539564 nova_compute[226295]: 2025-11-29 09:22:09.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:22:09 np0005539564 nova_compute[226295]: 2025-11-29 09:22:09.375 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:22:09 np0005539564 nova_compute[226295]: 2025-11-29 09:22:09.375 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:22:09 np0005539564 nova_compute[226295]: 2025-11-29 09:22:09.375 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:22:09 np0005539564 nova_compute[226295]: 2025-11-29 09:22:09.376 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:22:09 np0005539564 nova_compute[226295]: 2025-11-29 09:22:09.376 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:22:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:22:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:09.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:22:09 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:22:09 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3008762521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:22:09 np0005539564 nova_compute[226295]: 2025-11-29 09:22:09.844 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:22:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:09.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:10 np0005539564 nova_compute[226295]: 2025-11-29 09:22:10.021 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:22:10 np0005539564 nova_compute[226295]: 2025-11-29 09:22:10.022 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4133MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:22:10 np0005539564 nova_compute[226295]: 2025-11-29 09:22:10.023 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:22:10 np0005539564 nova_compute[226295]: 2025-11-29 09:22:10.023 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:22:10 np0005539564 nova_compute[226295]: 2025-11-29 09:22:10.609 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:22:10 np0005539564 nova_compute[226295]: 2025-11-29 09:22:10.610 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:22:10 np0005539564 nova_compute[226295]: 2025-11-29 09:22:10.641 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:22:11 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:22:11 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/723768468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:22:11 np0005539564 nova_compute[226295]: 2025-11-29 09:22:11.089 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
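The resource tracker shells out to `ceph df --format=json` above to size the RBD-backed disk pool. A minimal sketch of pulling free space out of that JSON follows; the sample document is illustrative (real `ceph df` output carries pool-level sections and more fields), but the `stats.total_avail_bytes` key is the cluster-wide figure such tooling reads.

```python
import json

# Sketch: extract cluster free space from `ceph df --format=json`
# output, as a resource tracker backed by RBD would. The sample
# document below is illustrative, not a full ceph df payload.
sample = json.loads("""
{
  "stats": {
    "total_bytes": 21474836480,
    "total_avail_bytes": 19327352832,
    "total_used_raw_bytes": 2147483648
  }
}
""")

GiB = 1024 ** 3
free_gb = sample["stats"]["total_avail_bytes"] / GiB
print(round(free_gb, 2))  # 18.0
```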
Nov 29 04:22:11 np0005539564 nova_compute[226295]: 2025-11-29 09:22:11.096 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:22:11 np0005539564 nova_compute[226295]: 2025-11-29 09:22:11.155 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
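The inventory payload in the line above is what Placement uses to size each resource class. As a sketch (values copied from the log line; the formula `(total - reserved) * allocation_ratio` is Placement's standard capacity check), the schedulable capacity works out to 32 VCPUs, 7168 MB of RAM, and 17.1 GB of disk:

```python
# Sketch: derive schedulable capacity from the inventory reported
# above. Values are copied from the log line; the arithmetic is
# Placement's capacity rule: (total - reserved) * allocation_ratio.
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 20, "reserved": 1, "allocation_ratio": 0.9},
}

def capacity(spec):
    """Amount of a resource class Placement will allow to be consumed."""
    return (spec["total"] - spec["reserved"]) * spec["allocation_ratio"]

caps = {rc: capacity(spec) for rc, spec in inventory.items()}
print(caps)
```

Note how the 4.0 VCPU allocation ratio turns 8 physical cores into 32 schedulable ones, while the 0.9 disk ratio deliberately undercommits the 20 GB store.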
Nov 29 04:22:11 np0005539564 nova_compute[226295]: 2025-11-29 09:22:11.157 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:22:11 np0005539564 nova_compute[226295]: 2025-11-29 09:22:11.157 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
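The "held 1.134s" figure lockutils reports above can be cross-checked against the acquire and release timestamps in the two lock lines themselves (09:22:10.023 and 09:22:11.157), a useful sanity check when auditing how long the `compute_resources` lock blocks other periodic tasks:

```python
from datetime import datetime

# Sketch: confirm the "held 1.134s" lockutils figure by differencing
# the acquire/release timestamps taken from the two log lines above.
fmt = "%Y-%m-%d %H:%M:%S.%f"
acquired = datetime.strptime("2025-11-29 09:22:10.023", fmt)
released = datetime.strptime("2025-11-29 09:22:11.157", fmt)

held = (released - acquired).total_seconds()
print(f"held {held:.3f}s")  # held 1.134s
```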
Nov 29 04:22:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:11.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:11 np0005539564 nova_compute[226295]: 2025-11-29 09:22:11.638 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:11.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
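The radosgw `beast:` lines above repeat every couple of seconds (they are HAProxy-style health probes, HEAD / from the two load-balancer addresses). When grepping a long capture, a small regex can split them into fields; the pattern below is fitted to this log's layout, not an official radosgw log format, so treat it as an assumption:

```python
import re

# Sketch: parse a radosgw "beast" access-log line like the ones above.
# The pattern is fitted to this particular log layout (an assumption),
# not taken from a radosgw format specification.
BEAST_RE = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7f475169f6f0: 192.168.122.102 - anonymous '
        '[29/Nov/2025:09:22:11.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')

fields = BEAST_RE.search(line).groupdict()
print(fields["client"], fields["request"], fields["status"])
```

Grouping parsed lines by `client` quickly separates the two probing frontends (192.168.122.100 and .102) from any real S3 traffic.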
Nov 29 04:22:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:13.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:22:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:13.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:14 np0005539564 nova_compute[226295]: 2025-11-29 09:22:14.152 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:15.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:15.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:16 np0005539564 nova_compute[226295]: 2025-11-29 09:22:16.642 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:17.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:17.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:22:19 np0005539564 nova_compute[226295]: 2025-11-29 09:22:19.154 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:19.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:19.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:21.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:21 np0005539564 nova_compute[226295]: 2025-11-29 09:22:21.646 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:21.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:23.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:22:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:23.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:24 np0005539564 nova_compute[226295]: 2025-11-29 09:22:24.157 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:25.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:26.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:26 np0005539564 nova_compute[226295]: 2025-11-29 09:22:26.650 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:27.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:22:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:28.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:22:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:22:29 np0005539564 nova_compute[226295]: 2025-11-29 09:22:29.160 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:29.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:30.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:31.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:31 np0005539564 nova_compute[226295]: 2025-11-29 09:22:31.655 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:32.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:33.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:22:33 np0005539564 podman[329250]: 2025-11-29 09:22:33.507742444 +0000 UTC m=+0.064373445 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:22:33 np0005539564 podman[329251]: 2025-11-29 09:22:33.514340083 +0000 UTC m=+0.059976065 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:22:33 np0005539564 podman[329249]: 2025-11-29 09:22:33.524622824 +0000 UTC m=+0.084665958 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:22:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:34.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:34 np0005539564 nova_compute[226295]: 2025-11-29 09:22:34.162 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:35.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:36.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:36 np0005539564 nova_compute[226295]: 2025-11-29 09:22:36.658 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:37.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:22:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:38.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:22:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:22:39 np0005539564 nova_compute[226295]: 2025-11-29 09:22:39.163 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:39.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:40.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:41.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:41 np0005539564 nova_compute[226295]: 2025-11-29 09:22:41.662 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:42 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:42 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:42 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:42.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:43 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:43 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:43 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:43.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:43 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:22:44 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:44 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:44 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:44.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:44 np0005539564 nova_compute[226295]: 2025-11-29 09:22:44.150 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:22:44 np0005539564 nova_compute[226295]: 2025-11-29 09:22:44.196 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:45 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:45 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:45 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:45.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:46 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:46 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:46 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:46.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:46 np0005539564 nova_compute[226295]: 2025-11-29 09:22:46.667 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:47 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:47 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:47 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:47.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:48 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:48 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:48 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:48.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:48 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:22:49 np0005539564 nova_compute[226295]: 2025-11-29 09:22:49.200 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:49 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:49 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:49 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:49.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:50 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:50 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:50 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:50.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:51 np0005539564 nova_compute[226295]: 2025-11-29 09:22:51.335 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:22:51 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:51 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:51 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:51.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:51 np0005539564 nova_compute[226295]: 2025-11-29 09:22:51.671 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:51 np0005539564 nova_compute[226295]: 2025-11-29 09:22:51.950 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:22:52 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:52 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:52 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:52.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:52 np0005539564 nova_compute[226295]: 2025-11-29 09:22:52.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:22:53 np0005539564 nova_compute[226295]: 2025-11-29 09:22:53.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:22:53 np0005539564 nova_compute[226295]: 2025-11-29 09:22:53.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:22:53 np0005539564 nova_compute[226295]: 2025-11-29 09:22:53.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:22:53 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:53 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:53 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:53.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:53 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:22:53 np0005539564 nova_compute[226295]: 2025-11-29 09:22:53.546 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:22:53 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Nov 29 04:22:54 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:54 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:54 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:54.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:54 np0005539564 nova_compute[226295]: 2025-11-29 09:22:54.203 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:54 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 29 04:22:54 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Nov 29 04:22:55 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:55 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:55 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:55.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:55 np0005539564 radosgw[83777]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 29 04:22:56 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:56 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:56 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:56.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:56 np0005539564 nova_compute[226295]: 2025-11-29 09:22:56.692 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:57 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:57 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:57 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:57.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:58 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:22:58 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:58 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:58.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:58 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:23:01 np0005539564 nova_compute[226295]: 2025-11-29 09:22:59.207 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:01 np0005539564 nova_compute[226295]: 2025-11-29 09:22:59.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:23:01 np0005539564 nova_compute[226295]: 2025-11-29 09:22:59.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:23:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:59.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:00.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:01 np0005539564 nova_compute[226295]: 2025-11-29 09:23:01.344 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:23:01 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:01 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:01 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:01.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:01 np0005539564 nova_compute[226295]: 2025-11-29 09:23:01.697 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:02 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:02 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:23:02 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:02.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:23:02 np0005539564 nova_compute[226295]: 2025-11-29 09:23:02.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:23:03 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:03 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:03 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:03.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:03 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:23:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:23:03.809 139780 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:23:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:23:03.809 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:23:03 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:23:03.809 139780 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:23:04 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:04 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:04 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:04.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:04 np0005539564 nova_compute[226295]: 2025-11-29 09:23:04.210 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:04 np0005539564 podman[329316]: 2025-11-29 09:23:04.503448951 +0000 UTC m=+0.049691685 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 04:23:04 np0005539564 podman[329315]: 2025-11-29 09:23:04.507779539 +0000 UTC m=+0.058329660 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 04:23:04 np0005539564 podman[329314]: 2025-11-29 09:23:04.545275801 +0000 UTC m=+0.100726456 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller)
Nov 29 04:23:05 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:05 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:05 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:05.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:06 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:06 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:06 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:06.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:06 np0005539564 nova_compute[226295]: 2025-11-29 09:23:06.704 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:07 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:07 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:07 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:07.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:08 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:08 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:23:08 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:08.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:23:08 np0005539564 nova_compute[226295]: 2025-11-29 09:23:08.163 226310 DEBUG oslo_concurrency.processutils [None req-bfbc2e22-d063-44d1-a227-b6cb69f51f4b 7d32840c789849a29c7630e25f803b3c 532b69b8d9eb42e8a1aed36b5ddb038a - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:23:08 np0005539564 nova_compute[226295]: 2025-11-29 09:23:08.209 226310 DEBUG oslo_concurrency.processutils [None req-bfbc2e22-d063-44d1-a227-b6cb69f51f4b 7d32840c789849a29c7630e25f803b3c 532b69b8d9eb42e8a1aed36b5ddb038a - - default default] CMD "env LANG=C uptime" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:23:08 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:23:08 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:23:08 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:23:08 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:23:08 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:23:08 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:23:09 np0005539564 nova_compute[226295]: 2025-11-29 09:23:09.212 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:09 np0005539564 nova_compute[226295]: 2025-11-29 09:23:09.342 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:23:09 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:09 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:09 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:09.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:10 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:10 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:10 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:10.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:11 np0005539564 nova_compute[226295]: 2025-11-29 09:23:11.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:23:11 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:11 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:23:11 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:11.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:23:11 np0005539564 nova_compute[226295]: 2025-11-29 09:23:11.639 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:23:11 np0005539564 nova_compute[226295]: 2025-11-29 09:23:11.640 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:23:11 np0005539564 nova_compute[226295]: 2025-11-29 09:23:11.640 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:23:11 np0005539564 nova_compute[226295]: 2025-11-29 09:23:11.641 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:23:11 np0005539564 nova_compute[226295]: 2025-11-29 09:23:11.641 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:23:11 np0005539564 nova_compute[226295]: 2025-11-29 09:23:11.706 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:12 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:12 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:23:12 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:12.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:23:12 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:23:12 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/901617796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:23:12 np0005539564 nova_compute[226295]: 2025-11-29 09:23:12.106 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:23:12 np0005539564 nova_compute[226295]: 2025-11-29 09:23:12.254 226310 WARNING nova.virt.libvirt.driver [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:23:12 np0005539564 nova_compute[226295]: 2025-11-29 09:23:12.255 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4126MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:23:12 np0005539564 nova_compute[226295]: 2025-11-29 09:23:12.255 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:23:12 np0005539564 nova_compute[226295]: 2025-11-29 09:23:12.255 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:23:12 np0005539564 nova_compute[226295]: 2025-11-29 09:23:12.779 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:23:12 np0005539564 nova_compute[226295]: 2025-11-29 09:23:12.780 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:23:12 np0005539564 nova_compute[226295]: 2025-11-29 09:23:12.829 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:23:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:23:13 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1672143075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:23:13 np0005539564 nova_compute[226295]: 2025-11-29 09:23:13.291 226310 DEBUG oslo_concurrency.processutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:23:13 np0005539564 nova_compute[226295]: 2025-11-29 09:23:13.300 226310 DEBUG nova.compute.provider_tree [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed in ProviderTree for provider: ea190a43-1246-44b8-8f8b-a61b155a1d3b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:23:13 np0005539564 nova_compute[226295]: 2025-11-29 09:23:13.417 226310 DEBUG nova.scheduler.client.report [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Inventory has not changed for provider ea190a43-1246-44b8-8f8b-a61b155a1d3b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:23:13 np0005539564 nova_compute[226295]: 2025-11-29 09:23:13.420 226310 DEBUG nova.compute.resource_tracker [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:23:13 np0005539564 nova_compute[226295]: 2025-11-29 09:23:13.421 226310 DEBUG oslo_concurrency.lockutils [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:23:13 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:13 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:23:13 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:13.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:23:13 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:23:14 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:14 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:14 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:14.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:14 np0005539564 nova_compute[226295]: 2025-11-29 09:23:14.215 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:15 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:23:15 np0005539564 ceph-mon[81769]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:23:15 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:15 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:15 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:15.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:16 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:16 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:16 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:16.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:16 np0005539564 systemd-logind[785]: New session 63 of user zuul.
Nov 29 04:23:16 np0005539564 systemd[1]: Started Session 63 of User zuul.
Nov 29 04:23:16 np0005539564 nova_compute[226295]: 2025-11-29 09:23:16.709 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:17 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:17 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:17 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:17.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:18 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:18 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:18 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:18.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:18 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:23:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:23:19.144 139780 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=105, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=104) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:23:19 np0005539564 nova_compute[226295]: 2025-11-29 09:23:19.146 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:19 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:23:19.146 139780 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:23:19 np0005539564 nova_compute[226295]: 2025-11-29 09:23:19.217 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:19 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:19 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:19 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:19.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:20 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 04:23:20 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1018952352' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 04:23:20 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:20 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:20 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:20.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:21 np0005539564 nova_compute[226295]: 2025-11-29 09:23:21.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:23:21 np0005539564 nova_compute[226295]: 2025-11-29 09:23:21.344 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 04:23:21 np0005539564 nova_compute[226295]: 2025-11-29 09:23:21.465 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 04:23:21 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:21 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:23:21 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:21.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:23:21 np0005539564 nova_compute[226295]: 2025-11-29 09:23:21.715 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:22 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:22 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:22 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:22.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:23 np0005539564 ovs-vsctl[329893]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 29 04:23:23 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:23 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:23 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:23.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:23 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:23:24 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:24 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:24 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:24.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:24 np0005539564 ovn_metadata_agent[139775]: 2025-11-29 09:23:24.149 139780 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=011fdddc-8681-4ece-b276-7e821dffaec6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '105'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:23:24 np0005539564 virtqemud[225880]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 29 04:23:24 np0005539564 virtqemud[225880]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 29 04:23:24 np0005539564 nova_compute[226295]: 2025-11-29 09:23:24.260 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:24 np0005539564 virtqemud[225880]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 29 04:23:24 np0005539564 nova_compute[226295]: 2025-11-29 09:23:24.343 226310 DEBUG oslo_service.periodic_task [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:23:24 np0005539564 nova_compute[226295]: 2025-11-29 09:23:24.343 226310 DEBUG nova.compute.manager [None req-c275559b-8a2f-4ae8-b671-4ad406e2abe1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 04:23:24 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd asok_command: cache status {prefix=cache status} (starting...)
Nov 29 04:23:24 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Can't run that command on an inactive MDS!
Nov 29 04:23:24 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd asok_command: client ls {prefix=client ls} (starting...)
Nov 29 04:23:24 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Can't run that command on an inactive MDS!
Nov 29 04:23:24 np0005539564 lvm[330227]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 04:23:24 np0005539564 lvm[330227]: VG ceph_vg0 finished
Nov 29 04:23:25 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:25 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:25 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:25.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:25 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd asok_command: damage ls {prefix=damage ls} (starting...)
Nov 29 04:23:25 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Can't run that command on an inactive MDS!
Nov 29 04:23:25 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd asok_command: dump loads {prefix=dump loads} (starting...)
Nov 29 04:23:25 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Can't run that command on an inactive MDS!
Nov 29 04:23:25 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 04:23:25 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3493943781' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 04:23:25 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 29 04:23:25 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Can't run that command on an inactive MDS!
Nov 29 04:23:25 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 29 04:23:25 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Can't run that command on an inactive MDS!
Nov 29 04:23:26 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 29 04:23:26 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Can't run that command on an inactive MDS!
Nov 29 04:23:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:23:26 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/306902972' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:23:26 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:26 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:26 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:26.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:26 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 29 04:23:26 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Can't run that command on an inactive MDS!
Nov 29 04:23:26 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 29 04:23:26 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Can't run that command on an inactive MDS!
Nov 29 04:23:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 29 04:23:26 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2264060084' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 04:23:26 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 29 04:23:26 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Can't run that command on an inactive MDS!
Nov 29 04:23:26 np0005539564 nova_compute[226295]: 2025-11-29 09:23:26.741 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:26 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd asok_command: ops {prefix=ops} (starting...)
Nov 29 04:23:26 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Can't run that command on an inactive MDS!
Nov 29 04:23:26 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 29 04:23:26 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4114640580' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 04:23:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 29 04:23:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/213769437' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 04:23:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 04:23:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1044149323' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 04:23:27 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd asok_command: session ls {prefix=session ls} (starting...)
Nov 29 04:23:27 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd Can't run that command on an inactive MDS!
Nov 29 04:23:27 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:27 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:27 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:27.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:27 np0005539564 ceph-mds[84716]: mds.cephfs.compute-1.oeerwd asok_command: status {prefix=status} (starting...)
Nov 29 04:23:27 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 04:23:27 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4129492656' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 04:23:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 04:23:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/815208781' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 04:23:28 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:28 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:28 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:28.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 04:23:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/694171894' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 04:23:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 04:23:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1635987375' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 04:23:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:23:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 04:23:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3480468546' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 04:23:28 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 29 04:23:28 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1013019184' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 04:23:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 04:23:29 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/830609869' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 04:23:29 np0005539564 nova_compute[226295]: 2025-11-29 09:23:29.261 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:29 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:29 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:29 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:29.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 04:23:29 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2502860335' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 04:23:29 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 29 04:23:29 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/913395531' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 04:23:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 29 04:23:30 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3906609404' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 04:23:30 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:30 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:23:30 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:30.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:23:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 04:23:30 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2161853771' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 04:23:30 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 04:23:30 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1495824373' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468180992 unmapped: 91906048 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 419 ms_handle_reset con 0x55ba50474400 session 0x55ba50cbc1e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 419 ms_handle_reset con 0x55ba50514c00 session 0x55ba513c1a40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 419 ms_handle_reset con 0x55ba51833800 session 0x55ba51729a40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471064576 unmapped: 89022464 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471097344 unmapped: 88989696 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5114396 data_alloc: 234881024 data_used: 30732288
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 419 heartbeat osd_stat(store_statfs(0x19e665000/0x0/0x1bfc00000, data 0x45e89d4/0x4817000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471097344 unmapped: 88989696 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471097344 unmapped: 88989696 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.004964828s of 13.604809761s, submitted: 86
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464576512 unmapped: 95510528 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464576512 unmapped: 95510528 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba4f2cfc00 session 0x55ba4f2b14a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba52ee7000 session 0x55ba4e560960
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba4f2cfc00 session 0x55ba506d14a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464576512 unmapped: 95510528 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5096810 data_alloc: 234881024 data_used: 30732288
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba50474400 session 0x55ba51356960
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba51833800 session 0x55ba513570e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba50514c00 session 0x55ba509bc000
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba50476400 session 0x55ba4e5c4d20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba4f2cfc00 session 0x55ba50ef72c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba50474400 session 0x55ba513ed4a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464576512 unmapped: 95510528 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19e253000/0x0/0x1bfc00000, data 0x45ea513/0x481a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464576512 unmapped: 95510528 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19e253000/0x0/0x1bfc00000, data 0x45ea513/0x481a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464576512 unmapped: 95510528 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466681856 unmapped: 93405184 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467853312 unmapped: 92233728 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5179178 data_alloc: 234881024 data_used: 31653888
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba5183cc00 session 0x55ba4f2b14a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19dae0000/0x0/0x1bfc00000, data 0x4d55513/0x4f85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba50477400 session 0x55ba51729a40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467861504 unmapped: 92225536 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba50476c00 session 0x55ba513c1a40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba4f2cfc00 session 0x55ba50cbc1e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19dae0000/0x0/0x1bfc00000, data 0x4d55513/0x4f85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467189760 unmapped: 92897280 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467189760 unmapped: 92897280 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 91103232 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19dae7000/0x0/0x1bfc00000, data 0x4d55546/0x4f87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 91103232 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5206733 data_alloc: 251658240 data_used: 35938304
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.922373772s of 13.203535080s, submitted: 82
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 91103232 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 91103232 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 91103232 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 91103232 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 91103232 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5207869 data_alloc: 251658240 data_used: 36007936
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19dae7000/0x0/0x1bfc00000, data 0x4d55546/0x4f87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 91103232 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19dae7000/0x0/0x1bfc00000, data 0x4d55546/0x4f87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 91103232 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 91103232 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 91103232 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19dae7000/0x0/0x1bfc00000, data 0x4d55546/0x4f87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468992000 unmapped: 91095040 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5208669 data_alloc: 251658240 data_used: 36028416
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.064604759s of 10.069568634s, submitted: 1
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469016576 unmapped: 91070464 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469082112 unmapped: 91004928 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19daa1000/0x0/0x1bfc00000, data 0x4d9b546/0x4fcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469082112 unmapped: 91004928 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba54441c00 session 0x55ba510f34a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba58549400 session 0x55ba513ec000
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba5135dc00 session 0x55ba4f856b40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba5062ec00 session 0x55ba4eab0960
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba4f2cfc00 session 0x55ba4f856960
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 90685440 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba5135dc00 session 0x55ba51772780
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba54441c00 session 0x55ba4f755e00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba58549400 session 0x55ba4f2b0000
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba56d30400 session 0x55ba510501e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 90685440 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5253967 data_alloc: 251658240 data_used: 37294080
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 90685440 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470417408 unmapped: 89669632 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19d76d000/0x0/0x1bfc00000, data 0x50ce556/0x5301000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470417408 unmapped: 89669632 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19d76d000/0x0/0x1bfc00000, data 0x50ce556/0x5301000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470417408 unmapped: 89669632 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba50510000 session 0x55ba51728d20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470417408 unmapped: 89669632 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5263943 data_alloc: 251658240 data_used: 37695488
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470425600 unmapped: 89661440 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470728704 unmapped: 89358336 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.657601357s of 11.733363152s, submitted: 21
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470728704 unmapped: 89358336 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19d76c000/0x0/0x1bfc00000, data 0x50cf556/0x5302000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470769664 unmapped: 89317376 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470810624 unmapped: 89276416 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5280571 data_alloc: 251658240 data_used: 38965248
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470884352 unmapped: 89202688 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470884352 unmapped: 89202688 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19e7ac000/0x0/0x1bfc00000, data 0x50cf556/0x5302000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470884352 unmapped: 89202688 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470884352 unmapped: 89202688 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470900736 unmapped: 89186304 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5280571 data_alloc: 251658240 data_used: 38965248
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 475119616 unmapped: 84967424 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 475119616 unmapped: 84967424 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.770074844s of 10.655957222s, submitted: 265
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476471296 unmapped: 83615744 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19df2f000/0x0/0x1bfc00000, data 0x5944556/0x5b77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478388224 unmapped: 81698816 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478568448 unmapped: 81518592 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5392649 data_alloc: 251658240 data_used: 40611840
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478568448 unmapped: 81518592 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478568448 unmapped: 81518592 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478715904 unmapped: 81371136 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 heartbeat osd_stat(store_statfs(0x19dd7b000/0x0/0x1bfc00000, data 0x5af8556/0x5d2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 ms_handle_reset con 0x55ba5046ec00 session 0x55ba50872d20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478732288 unmapped: 81354752 heap: 560087040 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 421 ms_handle_reset con 0x55ba50477c00 session 0x55ba4eab1680
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 421 ms_handle_reset con 0x55ba5182f800 session 0x55ba517f54a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 421 ms_handle_reset con 0x55ba51831000 session 0x55ba506eb860
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 421 ms_handle_reset con 0x55ba5046ec00 session 0x55ba4f8565a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5669524 data_alloc: 251658240 data_used: 43790336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477396992 unmapped: 95297536 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477413376 unmapped: 95281152 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 422 ms_handle_reset con 0x55ba50477c00 session 0x55ba50ffda40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477413376 unmapped: 95281152 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 423 ms_handle_reset con 0x55ba50510000 session 0x55ba517dba40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 423 ms_handle_reset con 0x55ba4f2cf800 session 0x55ba50ecf0e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 423 ms_handle_reset con 0x55ba53eed400 session 0x55ba50872960
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.476616859s of 10.182755470s, submitted: 181
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477437952 unmapped: 95256576 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 423 ms_handle_reset con 0x55ba4f2cf800 session 0x55ba4f2b1860
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 423 heartbeat osd_stat(store_statfs(0x19be2e000/0x0/0x1bfc00000, data 0x7a48ac1/0x7c7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 423 ms_handle_reset con 0x55ba5046ec00 session 0x55ba517f4000
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 423 ms_handle_reset con 0x55ba50477c00 session 0x55ba50872960
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 423 ms_handle_reset con 0x55ba50510000 session 0x55ba50ffda40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477446144 unmapped: 95248384 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5558206 data_alloc: 251658240 data_used: 40476672
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477446144 unmapped: 95248384 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 424 ms_handle_reset con 0x55ba51833800 session 0x55ba517f54a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477454336 unmapped: 95240192 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 424 ms_handle_reset con 0x55ba50474400 session 0x55ba510514a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 424 ms_handle_reset con 0x55ba50477400 session 0x55ba4f6b4f00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477454336 unmapped: 95240192 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 424 ms_handle_reset con 0x55ba51818000 session 0x55ba51036b40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 424 ms_handle_reset con 0x55ba6483f400 session 0x55ba513574a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 425 ms_handle_reset con 0x55ba4f2cf800 session 0x55ba50960780
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 425 ms_handle_reset con 0x55ba5046ec00 session 0x55ba513570e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477487104 unmapped: 95207424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 425 heartbeat osd_stat(store_statfs(0x19fbd9000/0x0/0x1bfc00000, data 0x395d2a2/0x3b92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477487104 unmapped: 95207424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5012763 data_alloc: 234881024 data_used: 30158848
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477487104 unmapped: 95207424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477487104 unmapped: 95207424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 426 handle_osd_map epochs [426,426], i have 426, src has [1,426]
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477503488 unmapped: 95191040 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.857404709s of 10.224763870s, submitted: 142
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477511680 unmapped: 95182848 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 427 ms_handle_reset con 0x55ba57dc7000 session 0x55ba4f854960
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463822848 unmapped: 108871680 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a0cfc000/0x0/0x1bfc00000, data 0x2b7ba1c/0x2db1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4841558 data_alloc: 234881024 data_used: 15929344
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 427 ms_handle_reset con 0x55ba4ec32400 session 0x55ba4f856d20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 427 ms_handle_reset con 0x55ba5183c400 session 0x55ba4e5603c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463822848 unmapped: 108871680 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461807616 unmapped: 110886912 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461815808 unmapped: 110878720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 427 ms_handle_reset con 0x55ba53ffd800 session 0x55ba513c1860
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a1a1d000/0x0/0x1bfc00000, data 0x1e5d9aa/0x2091000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461815808 unmapped: 110878720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a1a1d000/0x0/0x1bfc00000, data 0x1e39987/0x206c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461815808 unmapped: 110878720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4687336 data_alloc: 218103808 data_used: 7942144
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461815808 unmapped: 110878720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a1a1d000/0x0/0x1bfc00000, data 0x1e39987/0x206c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461815808 unmapped: 110878720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461815808 unmapped: 110878720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.512023926s of 10.001841545s, submitted: 76
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461815808 unmapped: 110878720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461815808 unmapped: 110878720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4691334 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a1a3e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461815808 unmapped: 110878720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461815808 unmapped: 110878720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461824000 unmapped: 110870528 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461824000 unmapped: 110870528 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4691334 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a1a3e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a1a3e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4691334 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a1a3e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a1a3e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4691334 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba56d32c00 session 0x55ba50872780
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50515c00 session 0x55ba51772960
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51833c00 session 0x55ba51773860
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50d7e800 session 0x55ba510f30e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.948165894s of 19.958507538s, submitted: 13
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a1a3d000/0x0/0x1bfc00000, data 0x1e3b4d6/0x2070000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 110862336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50703400 session 0x55ba517db4a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50703400 session 0x55ba50961c20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50474400 session 0x55ba513ed0e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba57dc7c00 session 0x55ba516f81e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba53ffc000 session 0x55ba517dbe00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462979072 unmapped: 109715456 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a1a3d000/0x0/0x1bfc00000, data 0x1e3b4d6/0x2070000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4743313 data_alloc: 218103808 data_used: 7954432
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462979072 unmapped: 109715456 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a0273000/0x0/0x1bfc00000, data 0x2465538/0x269b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462987264 unmapped: 109707264 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462987264 unmapped: 109707264 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50475000 session 0x55ba50961a40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462987264 unmapped: 109707264 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50474400 session 0x55ba4f6b4b40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a0273000/0x0/0x1bfc00000, data 0x2465538/0x269b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba56d31800 session 0x55ba516f83c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462987264 unmapped: 109707264 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba58548000 session 0x55ba517f41e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4746371 data_alloc: 218103808 data_used: 7954432
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a024e000/0x0/0x1bfc00000, data 0x2489548/0x26c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463142912 unmapped: 109551616 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463437824 unmapped: 109256704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463437824 unmapped: 109256704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a024e000/0x0/0x1bfc00000, data 0x2489548/0x26c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463437824 unmapped: 109256704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463437824 unmapped: 109256704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784611 data_alloc: 218103808 data_used: 13299712
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463437824 unmapped: 109256704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463437824 unmapped: 109256704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a024e000/0x0/0x1bfc00000, data 0x2489548/0x26c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463437824 unmapped: 109256704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463437824 unmapped: 109256704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463437824 unmapped: 109256704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a024e000/0x0/0x1bfc00000, data 0x2489548/0x26c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784611 data_alloc: 218103808 data_used: 13299712
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463437824 unmapped: 109256704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463446016 unmapped: 109248512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.262138367s of 19.385234833s, submitted: 32
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463675392 unmapped: 109019136 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463683584 unmapped: 109010944 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4816531 data_alloc: 218103808 data_used: 13565952
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4816531 data_alloc: 218103808 data_used: 13565952
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4816531 data_alloc: 218103808 data_used: 13565952
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4816531 data_alloc: 218103808 data_used: 13565952
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 108896256 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4816531 data_alloc: 218103808 data_used: 13565952
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463806464 unmapped: 108888064 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463806464 unmapped: 108888064 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463806464 unmapped: 108888064 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463806464 unmapped: 108888064 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463806464 unmapped: 108888064 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4816531 data_alloc: 218103808 data_used: 13565952
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463806464 unmapped: 108888064 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463806464 unmapped: 108888064 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463814656 unmapped: 108879872 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463831040 unmapped: 108863488 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463831040 unmapped: 108863488 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4816531 data_alloc: 218103808 data_used: 13565952
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463831040 unmapped: 108863488 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463831040 unmapped: 108863488 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463831040 unmapped: 108863488 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463831040 unmapped: 108863488 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ffd3000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 36.747936249s of 36.910617828s, submitted: 60
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51830000 session 0x55ba517f4960
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50703c00 session 0x55ba513ecd20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50474400 session 0x55ba513ede00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51830000 session 0x55ba506d0960
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba56d31800 session 0x55ba51036000
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463740928 unmapped: 108953600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4882866 data_alloc: 218103808 data_used: 13570048
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7b7000/0x0/0x1bfc00000, data 0x2f1f5aa/0x3157000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463740928 unmapped: 108953600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463749120 unmapped: 108945408 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463749120 unmapped: 108945408 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7b7000/0x0/0x1bfc00000, data 0x2f1f5aa/0x3157000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463749120 unmapped: 108945408 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7b7000/0x0/0x1bfc00000, data 0x2f1f5aa/0x3157000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463749120 unmapped: 108945408 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4882866 data_alloc: 218103808 data_used: 13570048
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463749120 unmapped: 108945408 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463749120 unmapped: 108945408 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463749120 unmapped: 108945408 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5135cc00 session 0x55ba4f211860
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7b7000/0x0/0x1bfc00000, data 0x2f1f5aa/0x3157000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463749120 unmapped: 108945408 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 463757312 unmapped: 108937216 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7b7000/0x0/0x1bfc00000, data 0x2f1f5aa/0x3157000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4900626 data_alloc: 234881024 data_used: 16113664
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.276296616s of 11.377257347s, submitted: 36
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464805888 unmapped: 107888640 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464805888 unmapped: 107888640 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464805888 unmapped: 107888640 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464805888 unmapped: 107888640 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7b7000/0x0/0x1bfc00000, data 0x2f1f5aa/0x3157000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464805888 unmapped: 107888640 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4935618 data_alloc: 234881024 data_used: 20955136
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464805888 unmapped: 107888640 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7b7000/0x0/0x1bfc00000, data 0x2f1f5aa/0x3157000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464805888 unmapped: 107888640 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464805888 unmapped: 107888640 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464805888 unmapped: 107888640 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7b7000/0x0/0x1bfc00000, data 0x2f1f5aa/0x3157000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464805888 unmapped: 107888640 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4935618 data_alloc: 234881024 data_used: 20955136
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464805888 unmapped: 107888640 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.254467010s of 11.266688347s, submitted: 13
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465641472 unmapped: 107053056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466649088 unmapped: 106045440 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467124224 unmapped: 105570304 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ed27000/0x0/0x1bfc00000, data 0x39a15aa/0x3bd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467124224 unmapped: 105570304 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5019204 data_alloc: 234881024 data_used: 21573632
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #56. Immutable memtables: 12.
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468639744 unmapped: 104054784 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468639744 unmapped: 104054784 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468639744 unmapped: 104054784 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db7b000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468639744 unmapped: 104054784 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468639744 unmapped: 104054784 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5021254 data_alloc: 234881024 data_used: 21962752
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468639744 unmapped: 104054784 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468639744 unmapped: 104054784 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.256912231s of 12.541369438s, submitted: 93
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5021078 data_alloc: 234881024 data_used: 21962752
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5021078 data_alloc: 234881024 data_used: 21962752
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5021078 data_alloc: 234881024 data_used: 21962752
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468656128 unmapped: 104038400 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5021078 data_alloc: 234881024 data_used: 21962752
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468656128 unmapped: 104038400 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468656128 unmapped: 104038400 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468656128 unmapped: 104038400 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468656128 unmapped: 104038400 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468656128 unmapped: 104038400 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5021078 data_alloc: 234881024 data_used: 21962752
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468656128 unmapped: 104038400 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468656128 unmapped: 104038400 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468672512 unmapped: 104022016 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468672512 unmapped: 104022016 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468680704 unmapped: 104013824 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5021078 data_alloc: 234881024 data_used: 21962752
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468680704 unmapped: 104013824 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468680704 unmapped: 104013824 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468680704 unmapped: 104013824 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468680704 unmapped: 104013824 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50514400 session 0x55ba517f52c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba54441000 session 0x55ba51729e00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba59eb7000 session 0x55ba510f32c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468680704 unmapped: 104013824 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5021078 data_alloc: 234881024 data_used: 21962752
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468688896 unmapped: 104005632 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468688896 unmapped: 104005632 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468697088 unmapped: 103997440 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468697088 unmapped: 103997440 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468697088 unmapped: 103997440 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 36.090114594s of 36.097347260s, submitted: 2
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5021254 data_alloc: 234881024 data_used: 21962752
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468697088 unmapped: 103997440 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468697088 unmapped: 103997440 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468697088 unmapped: 103997440 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19db83000/0x0/0x1bfc00000, data 0x39b35aa/0x3beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468705280 unmapped: 103989248 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468705280 unmapped: 103989248 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5023718 data_alloc: 234881024 data_used: 21950464
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468705280 unmapped: 103989248 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468705280 unmapped: 103989248 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba53eedc00 session 0x55ba506a85a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba54c45800 session 0x55ba513ecd20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461553664 unmapped: 111140864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50516000 session 0x55ba50ef7e00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461553664 unmapped: 111140864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e7f4000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461553664 unmapped: 111140864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4820992 data_alloc: 218103808 data_used: 13570048
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e7f4000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461553664 unmapped: 111140864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461553664 unmapped: 111140864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e7f4000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4820992 data_alloc: 218103808 data_used: 13570048
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.146137238s of 18.246082306s, submitted: 44
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba506f8000 session 0x55ba509612c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5046cc00 session 0x55ba50ef72c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e7f4000/0x0/0x1bfc00000, data 0x26e4548/0x291b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [1])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50516000 session 0x55ba4f854960
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4710155 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 111124480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 111124480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4710155 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 111124480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 111124480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461578240 unmapped: 111116288 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461578240 unmapped: 111116288 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461578240 unmapped: 111116288 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4710155 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461578240 unmapped: 111116288 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461578240 unmapped: 111116288 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461586432 unmapped: 111108096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461586432 unmapped: 111108096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461586432 unmapped: 111108096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4710155 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461602816 unmapped: 111091712 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461602816 unmapped: 111091712 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461602816 unmapped: 111091712 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461602816 unmapped: 111091712 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461602816 unmapped: 111091712 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4710155 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461602816 unmapped: 111091712 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461602816 unmapped: 111091712 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461602816 unmapped: 111091712 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461619200 unmapped: 111075328 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461619200 unmapped: 111075328 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4710155 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461619200 unmapped: 111075328 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 111067136 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 111067136 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 111067136 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 111067136 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4710155 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 111067136 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461635584 unmapped: 111058944 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461643776 unmapped: 111050752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461643776 unmapped: 111050752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461643776 unmapped: 111050752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4710155 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461643776 unmapped: 111050752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461643776 unmapped: 111050752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461643776 unmapped: 111050752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 111034368 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 111034368 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4710155 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 111034368 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 111034368 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 111034368 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 111034368 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 111034368 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4710155 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 111034368 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461668352 unmapped: 111026176 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461668352 unmapped: 111026176 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461668352 unmapped: 111026176 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461668352 unmapped: 111026176 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4710155 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461668352 unmapped: 111026176 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461668352 unmapped: 111026176 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461668352 unmapped: 111026176 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6b8000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461676544 unmapped: 111017984 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461692928 unmapped: 111001600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4710155 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5046ec00 session 0x55ba506434a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba57dc6c00 session 0x55ba4f741c20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba54440000 session 0x55ba51729a40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba64840000 session 0x55ba506a8f00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 57.831707001s of 57.919052124s, submitted: 26
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461692928 unmapped: 111001600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5046ec00 session 0x55ba517dba40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50516000 session 0x55ba4f856b40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba54440000 session 0x55ba4f654d20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba57dc6c00 session 0x55ba50a45680
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50510c00 session 0x55ba506ea5a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461692928 unmapped: 111001600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461692928 unmapped: 111001600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f010000/0x0/0x1bfc00000, data 0x25294d6/0x275e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461692928 unmapped: 111001600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f010000/0x0/0x1bfc00000, data 0x25294d6/0x275e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461692928 unmapped: 111001600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4764199 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461692928 unmapped: 111001600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461692928 unmapped: 111001600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f010000/0x0/0x1bfc00000, data 0x25294d6/0x275e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f010000/0x0/0x1bfc00000, data 0x25294d6/0x275e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50479000 session 0x55ba517f4f00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461701120 unmapped: 110993408 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5046c800 session 0x55ba51037860
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461701120 unmapped: 110993408 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461701120 unmapped: 110993408 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50d7e400 session 0x55ba506d14a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4764199 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba53eeb000 session 0x55ba51356960
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f010000/0x0/0x1bfc00000, data 0x25294d6/0x275e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461709312 unmapped: 110985216 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461709312 unmapped: 110985216 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461717504 unmapped: 110977024 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461717504 unmapped: 110977024 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461717504 unmapped: 110977024 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4809397 data_alloc: 234881024 data_used: 14049280
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f00f000/0x0/0x1bfc00000, data 0x25294e6/0x275f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461717504 unmapped: 110977024 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50513000 session 0x55ba513572c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba54c44800 session 0x55ba517732c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f00f000/0x0/0x1bfc00000, data 0x25294e6/0x275f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461717504 unmapped: 110977024 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.345306396s of 16.414997101s, submitted: 11
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f00f000/0x0/0x1bfc00000, data 0x25294e6/0x275f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50513000 session 0x55ba51357860
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 113836032 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 113836032 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6ff000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 113836032 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4714942 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 113836032 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 113836032 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 113836032 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 113836032 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 113836032 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4714942 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6ff000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 113836032 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 113836032 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 113836032 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 113836032 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51833c00 session 0x55ba513561e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5183a400 session 0x55ba509614a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba53ffc000 session 0x55ba506a81e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba53ffc000 session 0x55ba51773c20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.194999695s of 12.239532471s, submitted: 13
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4e095400 session 0x55ba510f34a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50513000 session 0x55ba513ed0e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51833c00 session 0x55ba4f6554a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5183a400 session 0x55ba4e560b40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5183a400 session 0x55ba50873a40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 113827840 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f3db000/0x0/0x1bfc00000, data 0x215f4c6/0x2393000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4741531 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50473400 session 0x55ba510f3e00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 113827840 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f3db000/0x0/0x1bfc00000, data 0x215f4c6/0x2393000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 113827840 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 113827840 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 113827840 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 113827840 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50477400 session 0x55ba51357860
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4741531 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f3db000/0x0/0x1bfc00000, data 0x215f4c6/0x2393000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5135c400 session 0x55ba517732c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 113827840 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5a0cac00 session 0x55ba51356960
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 113827840 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f3db000/0x0/0x1bfc00000, data 0x215f4c6/0x2393000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba581a1400 session 0x55ba506d14a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 113827840 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 113827840 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459063296 unmapped: 113631232 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4757023 data_alloc: 218103808 data_used: 10113024
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459063296 unmapped: 113631232 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459063296 unmapped: 113631232 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f3db000/0x0/0x1bfc00000, data 0x215f4c6/0x2393000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.513194084s of 13.582759857s, submitted: 17
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5183c400 session 0x55ba51037860
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459071488 unmapped: 113623040 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba6483f400 session 0x55ba517f5680
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51838c00 session 0x55ba510f32c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459071488 unmapped: 113623040 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459071488 unmapped: 113623040 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4717847 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6ff000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459071488 unmapped: 113623040 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459071488 unmapped: 113623040 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6ff000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459087872 unmapped: 113606656 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459087872 unmapped: 113606656 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459087872 unmapped: 113606656 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4717847 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459087872 unmapped: 113606656 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459087872 unmapped: 113606656 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459087872 unmapped: 113606656 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6ff000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459087872 unmapped: 113606656 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459087872 unmapped: 113606656 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4717847 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459096064 unmapped: 113598464 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459096064 unmapped: 113598464 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459096064 unmapped: 113598464 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459096064 unmapped: 113598464 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6ff000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459096064 unmapped: 113598464 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4717847 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459096064 unmapped: 113598464 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459096064 unmapped: 113598464 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459096064 unmapped: 113598464 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6ff000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4717847 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6ff000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6ff000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4717847 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6ff000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4717847 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6ff000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 113582080 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.745201111s of 35.829093933s, submitted: 25
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f2cf000 session 0x55ba4de592c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5182fc00 session 0x55ba509bc960
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5182fc00 session 0x55ba509bd860
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f2cf000 session 0x55ba513572c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51838c00 session 0x55ba4e5c4d20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458186752 unmapped: 114507776 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f6fe000/0x0/0x1bfc00000, data 0x1e3b4ef/0x2070000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458194944 unmapped: 114499584 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4767417 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458194944 unmapped: 114499584 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f119000/0x0/0x1bfc00000, data 0x2420528/0x2655000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458194944 unmapped: 114499584 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458194944 unmapped: 114499584 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458194944 unmapped: 114499584 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f119000/0x0/0x1bfc00000, data 0x2420528/0x2655000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50514000 session 0x55ba509bde00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458350592 unmapped: 114343936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4771654 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458350592 unmapped: 114343936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 113795072 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 113795072 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 113795072 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f0f4000/0x0/0x1bfc00000, data 0x244454b/0x267a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 113795072 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4815334 data_alloc: 234881024 data_used: 14049280
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 113795072 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f0f4000/0x0/0x1bfc00000, data 0x244454b/0x267a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 113795072 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 113795072 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f0f4000/0x0/0x1bfc00000, data 0x244454b/0x267a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 113795072 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 113795072 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4815334 data_alloc: 234881024 data_used: 14049280
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458907648 unmapped: 113786880 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458907648 unmapped: 113786880 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f0f4000/0x0/0x1bfc00000, data 0x244454b/0x267a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 458907648 unmapped: 113786880 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.488601685s of 19.623565674s, submitted: 43
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461717504 unmapped: 110977024 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461537280 unmapped: 111157248 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4902710 data_alloc: 234881024 data_used: 15384576
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461537280 unmapped: 111157248 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461537280 unmapped: 111157248 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e70e000/0x0/0x1bfc00000, data 0x2e2a54b/0x3060000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461537280 unmapped: 111157248 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461537280 unmapped: 111157248 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461537280 unmapped: 111157248 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4902870 data_alloc: 234881024 data_used: 15388672
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e70e000/0x0/0x1bfc00000, data 0x2e2a54b/0x3060000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461545472 unmapped: 111149056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461545472 unmapped: 111149056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461545472 unmapped: 111149056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461545472 unmapped: 111149056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461545472 unmapped: 111149056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4903030 data_alloc: 234881024 data_used: 15392768
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e70e000/0x0/0x1bfc00000, data 0x2e2a54b/0x3060000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461545472 unmapped: 111149056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461545472 unmapped: 111149056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461545472 unmapped: 111149056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461545472 unmapped: 111149056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e70e000/0x0/0x1bfc00000, data 0x2e2a54b/0x3060000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461545472 unmapped: 111149056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4903030 data_alloc: 234881024 data_used: 15392768
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.422891617s of 17.684936523s, submitted: 89
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50470800 session 0x55ba4f6550e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5183b400 session 0x55ba51036000
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461545472 unmapped: 111149056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e70e000/0x0/0x1bfc00000, data 0x2e2a54b/0x3060000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50470800 session 0x55ba506d01e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f34e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4729858 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f34e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4729858 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f34e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f34e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f34e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4729858 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f34e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461561856 unmapped: 111132672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f34e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4729858 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 111124480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 111124480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 111124480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f34e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 111124480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 111124480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4729858 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f34e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 111124480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f34e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 111124480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461570048 unmapped: 111124480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461586432 unmapped: 111108096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461586432 unmapped: 111108096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4729858 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461586432 unmapped: 111108096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461586432 unmapped: 111108096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f34e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461586432 unmapped: 111108096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461586432 unmapped: 111108096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461586432 unmapped: 111108096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4729858 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461586432 unmapped: 111108096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f34e000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461602816 unmapped: 111091712 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461602816 unmapped: 111091712 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 461602816 unmapped: 111091712 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.598426819s of 38.765792847s, submitted: 52
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462192640 unmapped: 110501888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4801077 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba58549c00 session 0x55ba510503c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51841c00 session 0x55ba50cbc960
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba52ee7400 session 0x55ba50872960
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50470800 session 0x55ba508734a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5183b400 session 0x55ba513ec000
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462192640 unmapped: 110501888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462192640 unmapped: 110501888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19eacf000/0x0/0x1bfc00000, data 0x265a528/0x288f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462192640 unmapped: 110501888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19eacf000/0x0/0x1bfc00000, data 0x265a528/0x288f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462192640 unmapped: 110501888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462192640 unmapped: 110501888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4801093 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462192640 unmapped: 110501888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50517400 session 0x55ba50960f00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba53ffc000 session 0x55ba503f3680
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462192640 unmapped: 110501888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba6483e000 session 0x55ba4f6541e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19eacf000/0x0/0x1bfc00000, data 0x265a528/0x288f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50470800 session 0x55ba510f21e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462077952 unmapped: 110616576 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462077952 unmapped: 110616576 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462667776 unmapped: 110026752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4860738 data_alloc: 234881024 data_used: 16179200
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19eacf000/0x0/0x1bfc00000, data 0x265a528/0x288f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462667776 unmapped: 110026752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462667776 unmapped: 110026752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462667776 unmapped: 110026752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462667776 unmapped: 110026752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462667776 unmapped: 110026752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4860738 data_alloc: 234881024 data_used: 16179200
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19eacf000/0x0/0x1bfc00000, data 0x265a528/0x288f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462667776 unmapped: 110026752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462667776 unmapped: 110026752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462667776 unmapped: 110026752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462667776 unmapped: 110026752 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5183ac00 session 0x55ba50ecf0e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba56d33400 session 0x55ba513ec780
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19eacf000/0x0/0x1bfc00000, data 0x265a528/0x288f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5046ec00 session 0x55ba50da1a40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50515800 session 0x55ba50da0d20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.848850250s of 20.036869049s, submitted: 42
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5046ec00 session 0x55ba50da1e00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50470800 session 0x55ba50ece3c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5183ac00 session 0x55ba50a45c20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba56d33400 session 0x55ba4e5c41e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba56d32400 session 0x55ba4f7552c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 462823424 unmapped: 109871104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19eacf000/0x0/0x1bfc00000, data 0x265a528/0x288f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4909052 data_alloc: 234881024 data_used: 16179200
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e58e000/0x0/0x1bfc00000, data 0x2b9a538/0x2dd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467902464 unmapped: 104792064 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468336640 unmapped: 104357888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468336640 unmapped: 104357888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468336640 unmapped: 104357888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5183d800 session 0x55ba50872000
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468336640 unmapped: 104357888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4986610 data_alloc: 234881024 data_used: 18104320
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19dc88000/0x0/0x1bfc00000, data 0x34a0538/0x36d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468336640 unmapped: 104357888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468336640 unmapped: 104357888 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468779008 unmapped: 103915520 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468779008 unmapped: 103915520 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468779008 unmapped: 103915520 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5024998 data_alloc: 234881024 data_used: 23425024
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19dc67000/0x0/0x1bfc00000, data 0x34c1538/0x36f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468779008 unmapped: 103915520 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468779008 unmapped: 103915520 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468779008 unmapped: 103915520 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468779008 unmapped: 103915520 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468779008 unmapped: 103915520 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5024998 data_alloc: 234881024 data_used: 23425024
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19dc67000/0x0/0x1bfc00000, data 0x34c1538/0x36f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468779008 unmapped: 103915520 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468779008 unmapped: 103915520 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468779008 unmapped: 103915520 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.104579926s of 19.442375183s, submitted: 130
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472252416 unmapped: 100442112 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472260608 unmapped: 100433920 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5097550 data_alloc: 234881024 data_used: 24334336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471588864 unmapped: 101105664 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d3bd000/0x0/0x1bfc00000, data 0x3d63538/0x3f99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471588864 unmapped: 101105664 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471588864 unmapped: 101105664 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d3bf000/0x0/0x1bfc00000, data 0x3d69538/0x3f9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471588864 unmapped: 101105664 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471588864 unmapped: 101105664 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5093534 data_alloc: 234881024 data_used: 24481792
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471588864 unmapped: 101105664 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471588864 unmapped: 101105664 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d3bf000/0x0/0x1bfc00000, data 0x3d69538/0x3f9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d3bf000/0x0/0x1bfc00000, data 0x3d69538/0x3f9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471588864 unmapped: 101105664 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471588864 unmapped: 101105664 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471588864 unmapped: 101105664 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5093550 data_alloc: 234881024 data_used: 24481792
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471588864 unmapped: 101105664 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471597056 unmapped: 101097472 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.481681824s of 13.751366615s, submitted: 73
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d3bf000/0x0/0x1bfc00000, data 0x3d69538/0x3f9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470589440 unmapped: 102105088 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51833400 session 0x55ba517dba40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba56d32c00 session 0x55ba513c1860
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x3d6f538/0x3fa5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470597632 unmapped: 102096896 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba55640800 session 0x55ba50ffd680
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6600.1 total, 600.0 interval
Cumulative writes: 68K writes, 266K keys, 68K commit groups, 1.0 writes per commit group, ingest: 0.26 GB, 0.04 MB/s
Cumulative WAL: 68K writes, 25K syncs, 2.70 writes per sync, written: 0.26 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 3263 writes, 13K keys, 3263 commit groups, 1.0 writes per commit group, ingest: 13.75 MB, 0.02 MB/s
Interval WAL: 3263 writes, 1301 syncs, 2.51 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470597632 unmapped: 102096896 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4947026 data_alloc: 234881024 data_used: 18104320
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470597632 unmapped: 102096896 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470597632 unmapped: 102096896 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e19f000/0x0/0x1bfc00000, data 0x2f8a528/0x31bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470605824 unmapped: 102088704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470605824 unmapped: 102088704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470614016 unmapped: 102080512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4947026 data_alloc: 234881024 data_used: 18104320
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba53eea800 session 0x55ba508734a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5a0cac00 session 0x55ba517daf00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466214912 unmapped: 106479616 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba53eea800 session 0x55ba4f6b5c20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4748074 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4748074 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4748074 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4748074 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4748074 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba581a1c00 session 0x55ba513c0d20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba6483f400 session 0x55ba4eab0000
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f7da400 session 0x55ba50ef6960
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f7da400 session 0x55ba4f210780
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.441654205s of 34.627315521s, submitted: 64
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba53eea800 session 0x55ba506d1a40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba581a1c00 session 0x55ba509603c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5a0cac00 session 0x55ba517290e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba6483f400 session 0x55ba4f6b4b40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba6483f400 session 0x55ba516f92c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecbb000/0x0/0x1bfc00000, data 0x246f4c6/0x26a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4801494 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51839c00 session 0x55ba506d0780
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecbb000/0x0/0x1bfc00000, data 0x246f4c6/0x26a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5a0ca400 session 0x55ba51773860
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecbb000/0x0/0x1bfc00000, data 0x246f4c6/0x26a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba58549400 session 0x55ba50eced20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50470800 session 0x55ba51356f00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4841252 data_alloc: 218103808 data_used: 13230080
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecba000/0x0/0x1bfc00000, data 0x246f4d6/0x26a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecba000/0x0/0x1bfc00000, data 0x246f4d6/0x26a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4841252 data_alloc: 218103808 data_used: 13230080
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 106471424 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecba000/0x0/0x1bfc00000, data 0x246f4d6/0x26a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466231296 unmapped: 106463232 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466231296 unmapped: 106463232 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecba000/0x0/0x1bfc00000, data 0x246f4d6/0x26a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466231296 unmapped: 106463232 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466231296 unmapped: 106463232 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4841252 data_alloc: 218103808 data_used: 13230080
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.628616333s of 18.725591660s, submitted: 11
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465870848 unmapped: 106823680 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e0fa000/0x0/0x1bfc00000, data 0x30274d6/0x325c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e0f9000/0x0/0x1bfc00000, data 0x302f4d6/0x3264000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4932514 data_alloc: 218103808 data_used: 13729792
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e0f9000/0x0/0x1bfc00000, data 0x302f4d6/0x3264000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e0f9000/0x0/0x1bfc00000, data 0x302f4d6/0x3264000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4932530 data_alloc: 218103808 data_used: 13729792
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba506fb000 session 0x55ba506a8f00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51838000 session 0x55ba517f41e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.447827339s of 13.035610199s, submitted: 77
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50473800 session 0x55ba51773680
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ee000/0x0/0x1bfc00000, data 0x1e3b4d6/0x2070000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4757963 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 107823104 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4757963 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464879616 unmapped: 107814912 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464879616 unmapped: 107814912 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464879616 unmapped: 107814912 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464879616 unmapped: 107814912 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464879616 unmapped: 107814912 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4757963 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464879616 unmapped: 107814912 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464887808 unmapped: 107806720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464887808 unmapped: 107806720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464887808 unmapped: 107806720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464887808 unmapped: 107806720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4757963 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464887808 unmapped: 107806720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464887808 unmapped: 107806720 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464896000 unmapped: 107798528 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464896000 unmapped: 107798528 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464904192 unmapped: 107790336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4757963 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464904192 unmapped: 107790336 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464912384 unmapped: 107782144 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464912384 unmapped: 107782144 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464912384 unmapped: 107782144 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464920576 unmapped: 107773952 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4757963 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464920576 unmapped: 107773952 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464920576 unmapped: 107773952 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464928768 unmapped: 107765760 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5a0cac00 session 0x55ba517f4f00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50472c00 session 0x55ba4de583c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50472c00 session 0x55ba510501e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50473800 session 0x55ba509bc1e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.206523895s of 30.257295609s, submitted: 18
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba506fb000 session 0x55ba51051860
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51838000 session 0x55ba4f856d20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5a0cac00 session 0x55ba4f2b14a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5a0cac00 session 0x55ba517da960
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50472c00 session 0x55ba510370e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464945152 unmapped: 107749376 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecee000/0x0/0x1bfc00000, data 0x243b528/0x2670000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464945152 unmapped: 107749376 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4802880 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464945152 unmapped: 107749376 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464945152 unmapped: 107749376 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464945152 unmapped: 107749376 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecee000/0x0/0x1bfc00000, data 0x243b528/0x2670000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464945152 unmapped: 107749376 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464945152 unmapped: 107749376 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4802880 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464953344 unmapped: 107741184 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464953344 unmapped: 107741184 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50d7fc00 session 0x55ba50a45680
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465100800 unmapped: 107593728 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.896188736s of 10.002918243s, submitted: 30
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465108992 unmapped: 107585536 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecca000/0x0/0x1bfc00000, data 0x245f528/0x2694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465502208 unmapped: 107192320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4851265 data_alloc: 218103808 data_used: 14245888
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecca000/0x0/0x1bfc00000, data 0x245f528/0x2694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465502208 unmapped: 107192320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba506fe000 session 0x55ba50ffcf00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50514c00 session 0x55ba510503c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465502208 unmapped: 107192320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465502208 unmapped: 107192320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465502208 unmapped: 107192320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecca000/0x0/0x1bfc00000, data 0x245f528/0x2694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465502208 unmapped: 107192320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4851133 data_alloc: 218103808 data_used: 14245888
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecca000/0x0/0x1bfc00000, data 0x245f528/0x2694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465502208 unmapped: 107192320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465502208 unmapped: 107192320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465502208 unmapped: 107192320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465502208 unmapped: 107192320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.686688423s of 10.693693161s, submitted: 2
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465502208 unmapped: 107192320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4851265 data_alloc: 218103808 data_used: 14245888
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecca000/0x0/0x1bfc00000, data 0x245f528/0x2694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465502208 unmapped: 107192320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465510400 unmapped: 107184128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50700800 session 0x55ba50a45c20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f2ce400 session 0x55ba4eab0f00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465510400 unmapped: 107184128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecca000/0x0/0x1bfc00000, data 0x245f528/0x2694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465510400 unmapped: 107184128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecca000/0x0/0x1bfc00000, data 0x245f528/0x2694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465510400 unmapped: 107184128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4851133 data_alloc: 218103808 data_used: 14245888
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465510400 unmapped: 107184128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecca000/0x0/0x1bfc00000, data 0x245f528/0x2694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465510400 unmapped: 107184128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecca000/0x0/0x1bfc00000, data 0x245f528/0x2694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465518592 unmapped: 107175936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecca000/0x0/0x1bfc00000, data 0x245f528/0x2694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465518592 unmapped: 107175936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba57dc7400 session 0x55ba50da0d20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.676322937s of 10.815813065s, submitted: 3
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba59eb7000 session 0x55ba510f2d20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465526784 unmapped: 107167744 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4851133 data_alloc: 218103808 data_used: 14245888
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecca000/0x0/0x1bfc00000, data 0x245f528/0x2694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [0,0,1])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465543168 unmapped: 107151360 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f84f800 session 0x55ba513ec780
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465567744 unmapped: 107126784 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465633280 unmapped: 107061248 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4762761 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4762761 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4762761 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4762761 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4762761 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465649664 unmapped: 107044864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4762761 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465657856 unmapped: 107036672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465666048 unmapped: 107028480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465666048 unmapped: 107028480 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465674240 unmapped: 107020288 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4762761 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465674240 unmapped: 107020288 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465674240 unmapped: 107020288 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465674240 unmapped: 107020288 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465674240 unmapped: 107020288 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465674240 unmapped: 107020288 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4762761 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465682432 unmapped: 107012096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465682432 unmapped: 107012096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465682432 unmapped: 107012096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465682432 unmapped: 107012096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465682432 unmapped: 107012096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4762761 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465682432 unmapped: 107012096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465682432 unmapped: 107012096 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465690624 unmapped: 107003904 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f84f800 session 0x55ba50ecf680
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4e527c00 session 0x55ba506a8b40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba59eb6800 session 0x55ba50960f00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba543a9400 session 0x55ba509603c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465690624 unmapped: 107003904 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 47.144412994s of 48.635864258s, submitted: 293
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51837c00 session 0x55ba517290e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4e527c00 session 0x55ba4e5c4d20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f84f800 session 0x55ba4f856b40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba543a9400 session 0x55ba516f9a40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba59eb6800 session 0x55ba4f740f00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465788928 unmapped: 106905600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ec4e000/0x0/0x1bfc00000, data 0x24db4d6/0x2710000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4820033 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465788928 unmapped: 106905600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465788928 unmapped: 106905600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465788928 unmapped: 106905600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465788928 unmapped: 106905600 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ec4e000/0x0/0x1bfc00000, data 0x24db4d6/0x2710000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465797120 unmapped: 106897408 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4820033 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465805312 unmapped: 106889216 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ec4e000/0x0/0x1bfc00000, data 0x24db4d6/0x2710000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465805312 unmapped: 106889216 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ec4e000/0x0/0x1bfc00000, data 0x24db4d6/0x2710000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465805312 unmapped: 106889216 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f2cf400 session 0x55ba509605a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465805312 unmapped: 106889216 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465805312 unmapped: 106889216 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4827873 data_alloc: 218103808 data_used: 9003008
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465838080 unmapped: 106856448 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466665472 unmapped: 106029056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ec4e000/0x0/0x1bfc00000, data 0x24db4d6/0x2710000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466665472 unmapped: 106029056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466665472 unmapped: 106029056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ec4e000/0x0/0x1bfc00000, data 0x24db4d6/0x2710000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466665472 unmapped: 106029056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4861473 data_alloc: 218103808 data_used: 13762560
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466665472 unmapped: 106029056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466665472 unmapped: 106029056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466665472 unmapped: 106029056 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ec4e000/0x0/0x1bfc00000, data 0x24db4d6/0x2710000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466673664 unmapped: 106020864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466673664 unmapped: 106020864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4861473 data_alloc: 218103808 data_used: 13762560
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466673664 unmapped: 106020864 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466681856 unmapped: 106012672 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.520923615s of 23.669008255s, submitted: 13
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468598784 unmapped: 104095744 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468598784 unmapped: 104095744 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e5cf000/0x0/0x1bfc00000, data 0x2b5a4d6/0x2d8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468598784 unmapped: 104095744 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4916477 data_alloc: 218103808 data_used: 14352384
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468598784 unmapped: 104095744 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468598784 unmapped: 104095744 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468606976 unmapped: 104087552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468606976 unmapped: 104087552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e55e000/0x0/0x1bfc00000, data 0x2bcb4d6/0x2e00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e55e000/0x0/0x1bfc00000, data 0x2bcb4d6/0x2e00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468606976 unmapped: 104087552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4933071 data_alloc: 218103808 data_used: 15302656
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468606976 unmapped: 104087552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e55e000/0x0/0x1bfc00000, data 0x2bcb4d6/0x2e00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468606976 unmapped: 104087552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468606976 unmapped: 104087552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468606976 unmapped: 104087552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468606976 unmapped: 104087552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4933071 data_alloc: 218103808 data_used: 15302656
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468606976 unmapped: 104087552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468606976 unmapped: 104087552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e55e000/0x0/0x1bfc00000, data 0x2bcb4d6/0x2e00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468615168 unmapped: 104079360 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468615168 unmapped: 104079360 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e55e000/0x0/0x1bfc00000, data 0x2bcb4d6/0x2e00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468615168 unmapped: 104079360 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4933071 data_alloc: 218103808 data_used: 15302656
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468615168 unmapped: 104079360 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468623360 unmapped: 104071168 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e55e000/0x0/0x1bfc00000, data 0x2bcb4d6/0x2e00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468623360 unmapped: 104071168 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468623360 unmapped: 104071168 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e55e000/0x0/0x1bfc00000, data 0x2bcb4d6/0x2e00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468631552 unmapped: 104062976 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4933071 data_alloc: 218103808 data_used: 15302656
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468639744 unmapped: 104054784 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468639744 unmapped: 104054784 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468639744 unmapped: 104054784 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e55e000/0x0/0x1bfc00000, data 0x2bcb4d6/0x2e00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4934031 data_alloc: 218103808 data_used: 15327232
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f2cf400 session 0x55ba4f75dc20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4e527c00 session 0x55ba51728f00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468647936 unmapped: 104046592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468664320 unmapped: 104030208 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e55e000/0x0/0x1bfc00000, data 0x2bcb4d6/0x2e00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468664320 unmapped: 104030208 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.447341919s of 32.665538788s, submitted: 58
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e55f000/0x0/0x1bfc00000, data 0x2bcb4c6/0x2dff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e55f000/0x0/0x1bfc00000, data 0x2bcb4c6/0x2dff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4932785 data_alloc: 218103808 data_used: 15327232
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468664320 unmapped: 104030208 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468664320 unmapped: 104030208 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468664320 unmapped: 104030208 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464461824 unmapped: 108232704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464461824 unmapped: 108232704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4772419 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464461824 unmapped: 108232704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464461824 unmapped: 108232704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba53ffd000 session 0x55ba4f6b4b40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464461824 unmapped: 108232704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464461824 unmapped: 108232704 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4772419 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4772419 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4772419 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 108224512 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464478208 unmapped: 108216320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4772419 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464478208 unmapped: 108216320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464478208 unmapped: 108216320 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464486400 unmapped: 108208128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464486400 unmapped: 108208128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464486400 unmapped: 108208128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4772419 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464486400 unmapped: 108208128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464486400 unmapped: 108208128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464486400 unmapped: 108208128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464486400 unmapped: 108208128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464486400 unmapped: 108208128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4772419 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464486400 unmapped: 108208128 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464494592 unmapped: 108199936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464494592 unmapped: 108199936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464494592 unmapped: 108199936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464494592 unmapped: 108199936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4772419 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464494592 unmapped: 108199936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464494592 unmapped: 108199936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464494592 unmapped: 108199936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464494592 unmapped: 108199936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464494592 unmapped: 108199936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4772419 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464494592 unmapped: 108199936 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464510976 unmapped: 108183552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464510976 unmapped: 108183552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464510976 unmapped: 108183552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464510976 unmapped: 108183552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4772419 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464510976 unmapped: 108183552 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464519168 unmapped: 108175360 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464519168 unmapped: 108175360 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464527360 unmapped: 108167168 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464535552 unmapped: 108158976 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4772419 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464535552 unmapped: 108158976 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464535552 unmapped: 108158976 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464535552 unmapped: 108158976 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464535552 unmapped: 108158976 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464535552 unmapped: 108158976 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4772419 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464535552 unmapped: 108158976 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464535552 unmapped: 108158976 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464551936 unmapped: 108142592 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba57dc6c00 session 0x55ba503f3680
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba56f29c00 session 0x55ba51729680
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4e527c00 session 0x55ba517f4b40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f2cf400 session 0x55ba516f8d20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 64.227188110s of 64.287414551s, submitted: 18
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464560128 unmapped: 108134400 heap: 572694528 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba53ffd000 session 0x55ba516f8960
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba57dc6c00 session 0x55ba50ece780
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba581a1000 session 0x55ba516f94a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4e527c00 session 0x55ba506434a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f2cf400 session 0x55ba509612c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e7ae000/0x0/0x1bfc00000, data 0x297b4ef/0x2bb0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464723968 unmapped: 111648768 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4864266 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464723968 unmapped: 111648768 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e76d000/0x0/0x1bfc00000, data 0x29bc528/0x2bf1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464723968 unmapped: 111648768 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e76d000/0x0/0x1bfc00000, data 0x29bc528/0x2bf1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [0,0,0,0,1])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51840800 session 0x55ba4f210780
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba6483f400 session 0x55ba51050f00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50510000 session 0x55ba517f5c20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4e527c00 session 0x55ba51357860
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f2cf400 session 0x55ba51357e00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464896000 unmapped: 111476736 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464896000 unmapped: 111476736 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51840800 session 0x55ba50960b40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba6483f400 session 0x55ba4f2b0b40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50471000 session 0x55ba50ffc3c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4e527c00 session 0x55ba510372c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f2cf400 session 0x55ba50ef63c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467075072 unmapped: 109297664 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba64840800 session 0x55ba4f75d4a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51840800 session 0x55ba509bc780
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5135dc00 session 0x55ba516f9680
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4e527c00 session 0x55ba50ef65a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f2cf400 session 0x55ba50ef6000
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51840800 session 0x55ba4f211860
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba64840800 session 0x55ba51729e00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e1c9000/0x0/0x1bfc00000, data 0x2f5e55b/0x3195000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4926132 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 464912384 unmapped: 111460352 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5183ac00 session 0x55ba4f2b0000
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4e527c00 session 0x55ba51051a40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465010688 unmapped: 111362048 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465797120 unmapped: 110575616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 465797120 unmapped: 110575616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.331954956s of 11.006346703s, submitted: 72
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466231296 unmapped: 110141440 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba543a8000 session 0x55ba51356b40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5032053 data_alloc: 234881024 data_used: 21610496
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466681856 unmapped: 109690880 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e181000/0x0/0x1bfc00000, data 0x2fa655b/0x31dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 466681856 unmapped: 109690880 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468484096 unmapped: 107888640 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468484096 unmapped: 107888640 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba56d33000 session 0x55ba510f34a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba52ee6400 session 0x55ba516f81e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468492288 unmapped: 107880448 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51837800 session 0x55ba513563c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5006935 data_alloc: 234881024 data_used: 21606400
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468500480 unmapped: 107872256 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 468500480 unmapped: 107872256 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e457000/0x0/0x1bfc00000, data 0x2cd154b/0x2f07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 473186304 unmapped: 103186432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 475029504 unmapped: 101343232 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba56d31400 session 0x55ba510f25a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50703400 session 0x55ba510512c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.492013931s of 10.009179115s, submitted: 200
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 475512832 unmapped: 100859904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba506fb000 session 0x55ba50ffd860
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5076969 data_alloc: 234881024 data_used: 20606976
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 475512832 unmapped: 100859904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 475512832 unmapped: 100859904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba6483f400 session 0x55ba513c0780
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f84f000 session 0x55ba4f655a40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d86d000/0x0/0x1bfc00000, data 0x38bb54b/0x3af1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 475521024 unmapped: 100851712 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba4f84f000 session 0x55ba51356b40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 107020288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 107020288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 107020288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 107020288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 107020288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 107020288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 107020288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 107020288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 107020288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 107020288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 107020288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 107020288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 107020288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469360640 unmapped: 107012096 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469360640 unmapped: 107012096 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469360640 unmapped: 107012096 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469360640 unmapped: 107012096 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 107003904 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 106995712 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 106995712 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 106995712 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 106995712 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 106995712 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 106995712 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 106995712 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 106995712 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 106987520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 106987520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 106987520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 106987520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 106987520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 106987520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 106987520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 106987520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 106987520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 106987520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 106987520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 106987520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 106987520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 106979328 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 106979328 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 106979328 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 106979328 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 106979328 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 106979328 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 106979328 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 106979328 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 106979328 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 106971136 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4797895 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 106971136 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 106971136 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 106971136 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 79.260581970s of 79.400283813s, submitted: 48
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ee58000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 106971136 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 106971136 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba55640800 session 0x55ba509612c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4802526 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 106971136 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 106971136 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 106971136 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2af000/0x0/0x1bfc00000, data 0x1e7b4c6/0x20af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 106962944 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469417984 unmapped: 106954752 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2af000/0x0/0x1bfc00000, data 0x1e7b4c6/0x20af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4802526 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469417984 unmapped: 106954752 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2af000/0x0/0x1bfc00000, data 0x1e7b4c6/0x20af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469417984 unmapped: 106954752 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469417984 unmapped: 106954752 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 106946560 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 106946560 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4802526 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 106946560 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2af000/0x0/0x1bfc00000, data 0x1e7b4c6/0x20af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 106946560 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 106946560 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 106946560 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 106946560 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50479400 session 0x55ba517f4b40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4802526 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469434368 unmapped: 106938368 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469442560 unmapped: 106930176 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2af000/0x0/0x1bfc00000, data 0x1e7b4c6/0x20af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51831800 session 0x55ba4e560d20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469442560 unmapped: 106930176 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469442560 unmapped: 106930176 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469442560 unmapped: 106930176 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4802526 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5183b400 session 0x55ba4f2b0f00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469442560 unmapped: 106930176 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.520866394s of 22.554546356s, submitted: 7
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba5183b400 session 0x55ba513ec5a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469442560 unmapped: 106930176 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f28b000/0x0/0x1bfc00000, data 0x1e9f4c6/0x20d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469450752 unmapped: 106921984 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469458944 unmapped: 106913792 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f28b000/0x0/0x1bfc00000, data 0x1e9f4c6/0x20d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f28b000/0x0/0x1bfc00000, data 0x1e9f4c6/0x20d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469458944 unmapped: 106913792 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4808003 data_alloc: 218103808 data_used: 8212480
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469458944 unmapped: 106913792 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469458944 unmapped: 106913792 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469458944 unmapped: 106913792 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469458944 unmapped: 106913792 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50470400 session 0x55ba51729e00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469458944 unmapped: 106913792 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50470c00 session 0x55ba513ede00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba57dc6400 session 0x55ba51051680
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f28b000/0x0/0x1bfc00000, data 0x1e9f4c6/0x20d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4805151 data_alloc: 218103808 data_used: 8212480
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469467136 unmapped: 106905600 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469467136 unmapped: 106905600 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2af000/0x0/0x1bfc00000, data 0x1e7b4c6/0x20af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469483520 unmapped: 106889216 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469483520 unmapped: 106889216 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469483520 unmapped: 106889216 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4805151 data_alloc: 218103808 data_used: 8212480
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469491712 unmapped: 106881024 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2af000/0x0/0x1bfc00000, data 0x1e7b4c6/0x20af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469491712 unmapped: 106881024 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469491712 unmapped: 106881024 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.000583649s of 17.263948441s, submitted: 29
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50703400 session 0x55ba4f6550e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50470400 session 0x55ba510361e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469499904 unmapped: 106872832 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469499904 unmapped: 106872832 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4801193 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469499904 unmapped: 106872832 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469499904 unmapped: 106872832 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469499904 unmapped: 106872832 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469499904 unmapped: 106872832 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469508096 unmapped: 106864640 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4801193 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469508096 unmapped: 106864640 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469508096 unmapped: 106864640 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469508096 unmapped: 106864640 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469516288 unmapped: 106856448 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469516288 unmapped: 106856448 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4801193 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469516288 unmapped: 106856448 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469516288 unmapped: 106856448 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469516288 unmapped: 106856448 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469516288 unmapped: 106856448 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469516288 unmapped: 106856448 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4801193 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469516288 unmapped: 106856448 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469532672 unmapped: 106840064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469532672 unmapped: 106840064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469532672 unmapped: 106840064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469532672 unmapped: 106840064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4801193 data_alloc: 218103808 data_used: 7950336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469532672 unmapped: 106840064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469540864 unmapped: 106831872 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469540864 unmapped: 106831872 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469540864 unmapped: 106831872 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469557248 unmapped: 106815488 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba51831800 session 0x55ba510365a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba54c44800 session 0x55ba51356d20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4813353 data_alloc: 218103808 data_used: 11358208
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470728704 unmapped: 105644032 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470728704 unmapped: 105644032 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470728704 unmapped: 105644032 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470728704 unmapped: 105644032 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470728704 unmapped: 105644032 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4813353 data_alloc: 218103808 data_used: 11358208
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470736896 unmapped: 105635840 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470736896 unmapped: 105635840 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470753280 unmapped: 105619456 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470753280 unmapped: 105619456 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470753280 unmapped: 105619456 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.340122223s of 37.416938782s, submitted: 22
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4813353 data_alloc: 218103808 data_used: 11358208
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470753280 unmapped: 105619456 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470753280 unmapped: 105619456 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50d7e000 session 0x55ba51728b40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470753280 unmapped: 105619456 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470753280 unmapped: 105619456 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470753280 unmapped: 105619456 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4814827 data_alloc: 218103808 data_used: 11358208
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470769664 unmapped: 105603072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470769664 unmapped: 105603072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470769664 unmapped: 105603072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470769664 unmapped: 105603072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470769664 unmapped: 105603072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4814827 data_alloc: 218103808 data_used: 11358208
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470769664 unmapped: 105603072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470769664 unmapped: 105603072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470777856 unmapped: 105594880 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470802432 unmapped: 105570304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470802432 unmapped: 105570304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4814827 data_alloc: 218103808 data_used: 11358208
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470802432 unmapped: 105570304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470802432 unmapped: 105570304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470802432 unmapped: 105570304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470802432 unmapped: 105570304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470802432 unmapped: 105570304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4814827 data_alloc: 218103808 data_used: 11358208
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470802432 unmapped: 105570304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470810624 unmapped: 105562112 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470810624 unmapped: 105562112 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f2ef000/0x0/0x1bfc00000, data 0x1e3b4c6/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470810624 unmapped: 105562112 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.894979477s of 23.938385010s, submitted: 2
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba6483e400 session 0x55ba50a83a40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470827008 unmapped: 105545728 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba6483e400 session 0x55ba4f6541e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4814258 data_alloc: 218103808 data_used: 11358208
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470827008 unmapped: 105545728 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470843392 unmapped: 105529344 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470843392 unmapped: 105529344 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 ms_handle_reset con 0x55ba50470400 session 0x55ba509bc3c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470843392 unmapped: 105529344 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ef4f000/0x0/0x1bfc00000, data 0x21db4c6/0x240f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470843392 unmapped: 105529344 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ef4f000/0x0/0x1bfc00000, data 0x21db4c6/0x240f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4843364 data_alloc: 218103808 data_used: 11358208
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470851584 unmapped: 105521152 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470851584 unmapped: 105521152 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470851584 unmapped: 105521152 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470851584 unmapped: 105521152 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ef4f000/0x0/0x1bfc00000, data 0x21db4c6/0x240f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470851584 unmapped: 105521152 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4843364 data_alloc: 218103808 data_used: 11358208
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470859776 unmapped: 105512960 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470876160 unmapped: 105496576 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.625185013s of 13.770231247s, submitted: 26
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470892544 unmapped: 105480192 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470892544 unmapped: 105480192 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470892544 unmapped: 105480192 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4b000/0x0/0x1bfc00000, data 0x21dd11f/0x2412000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4b000/0x0/0x1bfc00000, data 0x21dd11f/0x2412000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4847538 data_alloc: 218103808 data_used: 11366400
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470892544 unmapped: 105480192 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470892544 unmapped: 105480192 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4b000/0x0/0x1bfc00000, data 0x21dd11f/0x2412000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470892544 unmapped: 105480192 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470892544 unmapped: 105480192 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470892544 unmapped: 105480192 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4847538 data_alloc: 218103808 data_used: 11366400
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470892544 unmapped: 105480192 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4b000/0x0/0x1bfc00000, data 0x21dd11f/0x2412000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470900736 unmapped: 105472000 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470900736 unmapped: 105472000 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470900736 unmapped: 105472000 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470900736 unmapped: 105472000 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4b000/0x0/0x1bfc00000, data 0x21dd11f/0x2412000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4847538 data_alloc: 218103808 data_used: 11366400
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470908928 unmapped: 105463808 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470908928 unmapped: 105463808 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4847538 data_alloc: 218103808 data_used: 11366400
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4b000/0x0/0x1bfc00000, data 0x21dd11f/0x2412000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4b000/0x0/0x1bfc00000, data 0x21dd11f/0x2412000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4847538 data_alloc: 218103808 data_used: 11366400
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 105431040 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 105431040 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.368459702s of 24.372455597s, submitted: 1
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 105431040 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 ms_handle_reset con 0x55ba64841400 session 0x55ba50872d20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 ms_handle_reset con 0x55ba55640800 session 0x55ba4f855860
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4a000/0x0/0x1bfc00000, data 0x21dd191/0x2414000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 105431040 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 105431040 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850156 data_alloc: 218103808 data_used: 11366400
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 105431040 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 105431040 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 105431040 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470958080 unmapped: 105414656 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4a000/0x0/0x1bfc00000, data 0x21dd191/0x2414000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470958080 unmapped: 105414656 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850156 data_alloc: 218103808 data_used: 11366400
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470958080 unmapped: 105414656 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470958080 unmapped: 105414656 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470958080 unmapped: 105414656 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470958080 unmapped: 105414656 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4a000/0x0/0x1bfc00000, data 0x21dd191/0x2414000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470966272 unmapped: 105406464 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850156 data_alloc: 218103808 data_used: 11366400
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470966272 unmapped: 105406464 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470990848 unmapped: 105381888 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470990848 unmapped: 105381888 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470990848 unmapped: 105381888 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 ms_handle_reset con 0x55ba5183c400 session 0x55ba51051a40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4a000/0x0/0x1bfc00000, data 0x21dd191/0x2414000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470990848 unmapped: 105381888 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 ms_handle_reset con 0x55ba5183c400 session 0x55ba51357680
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850156 data_alloc: 218103808 data_used: 11366400
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470990848 unmapped: 105381888 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 ms_handle_reset con 0x55ba506fb800 session 0x55ba4f2b14a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 ms_handle_reset con 0x55ba6483ec00 session 0x55ba4f2b0780
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4a000/0x0/0x1bfc00000, data 0x21dd191/0x2414000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470990848 unmapped: 105381888 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470999040 unmapped: 105373696 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4877196 data_alloc: 234881024 data_used: 15167488
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 ms_handle_reset con 0x55ba53eebc00 session 0x55ba516f9c20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 ms_handle_reset con 0x55ba5183a800 session 0x55ba50872000
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4a000/0x0/0x1bfc00000, data 0x21dd191/0x2414000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.965549469s of 27.977008820s, submitted: 3
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4876844 data_alloc: 234881024 data_used: 15167488
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4a000/0x0/0x1bfc00000, data 0x21dd191/0x2414000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 ms_handle_reset con 0x55ba506fb800 session 0x55ba50872b40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4876844 data_alloc: 234881024 data_used: 15167488
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4a000/0x0/0x1bfc00000, data 0x21dd191/0x2414000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 105365504 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471015424 unmapped: 105357312 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471015424 unmapped: 105357312 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4877164 data_alloc: 234881024 data_used: 15175680
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471015424 unmapped: 105357312 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4a000/0x0/0x1bfc00000, data 0x21dd191/0x2414000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471015424 unmapped: 105357312 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 ms_handle_reset con 0x55ba50513000 session 0x55ba50960d20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.959621429s of 12.253915787s, submitted: 4
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 heartbeat osd_stat(store_statfs(0x19ef4a000/0x0/0x1bfc00000, data 0x21dd191/0x2414000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471015424 unmapped: 105357312 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471015424 unmapped: 105357312 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 ms_handle_reset con 0x55ba64840800 session 0x55ba50a44000
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471015424 unmapped: 105357312 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4875587 data_alloc: 234881024 data_used: 15171584
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471015424 unmapped: 105357312 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 ms_handle_reset con 0x55ba58548800 session 0x55ba4f854960
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471031808 unmapped: 105340928 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471031808 unmapped: 105340928 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 430 heartbeat osd_stat(store_statfs(0x19ef48000/0x0/0x1bfc00000, data 0x21dedcc/0x2415000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 430 ms_handle_reset con 0x55ba51830000 session 0x55ba50642d20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467632128 unmapped: 108740608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467632128 unmapped: 108740608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4826903 data_alloc: 218103808 data_used: 11378688
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467632128 unmapped: 108740608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467632128 unmapped: 108740608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 430 heartbeat osd_stat(store_statfs(0x19f2e8000/0x0/0x1bfc00000, data 0x1e3edcc/0x2075000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467632128 unmapped: 108740608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.197291374s of 11.322454453s, submitted: 33
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 ms_handle_reset con 0x55ba57dc7000 session 0x55ba50ffd4a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467435520 unmapped: 108937216 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 ms_handle_reset con 0x55ba5a0c8000 session 0x55ba51050780
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467435520 unmapped: 108937216 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4826037 data_alloc: 218103808 data_used: 11378688
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19f2e5000/0x0/0x1bfc00000, data 0x1e4090b/0x2078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467435520 unmapped: 108937216 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 ms_handle_reset con 0x55ba506ff400 session 0x55ba51356000
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467451904 unmapped: 108920832 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467451904 unmapped: 108920832 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19f2e5000/0x0/0x1bfc00000, data 0x1e4090b/0x2078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467451904 unmapped: 108920832 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 ms_handle_reset con 0x55ba4ec33400 session 0x55ba4f6b4f00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 ms_handle_reset con 0x55ba543a8000 session 0x55ba50872780
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467451904 unmapped: 108920832 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 ms_handle_reset con 0x55ba5183b400 session 0x55ba506a8960
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4871142 data_alloc: 218103808 data_used: 11378688
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467017728 unmapped: 109355008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467017728 unmapped: 109355008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467017728 unmapped: 109355008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19ed50000/0x0/0x1bfc00000, data 0x23d690b/0x260e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467017728 unmapped: 109355008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467017728 unmapped: 109355008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4871142 data_alloc: 218103808 data_used: 11378688
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467017728 unmapped: 109355008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 69K writes, 272K keys, 69K commit groups, 1.0 writes per commit group, ingest: 0.26 GB, 0.04 MB/s#012Cumulative WAL: 69K writes, 26K syncs, 2.69 writes per sync, written: 0.26 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1858 writes, 6311 keys, 1858 commit groups, 1.0 writes per commit group, ingest: 5.23 MB, 0.01 MB/s#012Interval WAL: 1858 writes, 811 syncs, 2.29 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467017728 unmapped: 109355008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467017728 unmapped: 109355008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467017728 unmapped: 109355008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19ed50000/0x0/0x1bfc00000, data 0x23d690b/0x260e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467017728 unmapped: 109355008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4871142 data_alloc: 218103808 data_used: 11378688
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467025920 unmapped: 109346816 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467025920 unmapped: 109346816 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467042304 unmapped: 109330432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19ed50000/0x0/0x1bfc00000, data 0x23d690b/0x260e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467042304 unmapped: 109330432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: mgrc ms_handle_reset ms_handle_reset con 0x55ba581a1800
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2945860420
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2945860420,v1:192.168.122.100:6801/2945860420]
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: mgrc handle_mgr_configure stats_period=5
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467034112 unmapped: 109338624 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 ms_handle_reset con 0x55ba50474400 session 0x55ba516f85a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 ms_handle_reset con 0x55ba50514400 session 0x55ba516f9860
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 ms_handle_reset con 0x55ba51832000 session 0x55ba510f30e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4871142 data_alloc: 218103808 data_used: 11378688
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467034112 unmapped: 109338624 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467034112 unmapped: 109338624 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467034112 unmapped: 109338624 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19ed50000/0x0/0x1bfc00000, data 0x23d690b/0x260e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467034112 unmapped: 109338624 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467034112 unmapped: 109338624 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4871142 data_alloc: 218103808 data_used: 11378688
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467042304 unmapped: 109330432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467042304 unmapped: 109330432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467042304 unmapped: 109330432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19ed50000/0x0/0x1bfc00000, data 0x23d690b/0x260e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467042304 unmapped: 109330432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19ed50000/0x0/0x1bfc00000, data 0x23d690b/0x260e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467042304 unmapped: 109330432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19ed50000/0x0/0x1bfc00000, data 0x23d690b/0x260e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4871142 data_alloc: 218103808 data_used: 11378688
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467042304 unmapped: 109330432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467042304 unmapped: 109330432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467042304 unmapped: 109330432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19ed50000/0x0/0x1bfc00000, data 0x23d690b/0x260e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467050496 unmapped: 109322240 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 36.196346283s of 36.326320648s, submitted: 34
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467410944 unmapped: 108961792 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 ms_handle_reset con 0x55ba50703400 session 0x55ba51773a40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4874144 data_alloc: 218103808 data_used: 11378688
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467410944 unmapped: 108961792 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467247104 unmapped: 109125632 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467902464 unmapped: 108470272 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19ed26000/0x0/0x1bfc00000, data 0x240090b/0x2638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467902464 unmapped: 108470272 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467902464 unmapped: 108470272 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4916036 data_alloc: 234881024 data_used: 17240064
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19ed26000/0x0/0x1bfc00000, data 0x240090b/0x2638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467902464 unmapped: 108470272 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467902464 unmapped: 108470272 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467902464 unmapped: 108470272 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19ed26000/0x0/0x1bfc00000, data 0x240090b/0x2638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467902464 unmapped: 108470272 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467910656 unmapped: 108462080 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4916356 data_alloc: 234881024 data_used: 17248256
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467910656 unmapped: 108462080 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19ed26000/0x0/0x1bfc00000, data 0x240090b/0x2638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467910656 unmapped: 108462080 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19ed26000/0x0/0x1bfc00000, data 0x240090b/0x2638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 467918848 unmapped: 108453888 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.444561005s of 13.455645561s, submitted: 2
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 469491712 unmapped: 106881024 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470556672 unmapped: 105816064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4968198 data_alloc: 234881024 data_used: 17678336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470556672 unmapped: 105816064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470556672 unmapped: 105816064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470556672 unmapped: 105816064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19e882000/0x0/0x1bfc00000, data 0x289690b/0x2ace000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470556672 unmapped: 105816064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470556672 unmapped: 105816064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4968198 data_alloc: 234881024 data_used: 17678336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19e882000/0x0/0x1bfc00000, data 0x289690b/0x2ace000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470564864 unmapped: 105807872 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470564864 unmapped: 105807872 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470564864 unmapped: 105807872 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19e882000/0x0/0x1bfc00000, data 0x289690b/0x2ace000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470564864 unmapped: 105807872 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470564864 unmapped: 105807872 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4968198 data_alloc: 234881024 data_used: 17678336
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470564864 unmapped: 105807872 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 heartbeat osd_stat(store_statfs(0x19e882000/0x0/0x1bfc00000, data 0x289690b/0x2ace000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470564864 unmapped: 105807872 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.635722160s of 14.073546410s, submitted: 48
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470597632 unmapped: 105775104 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 ms_handle_reset con 0x55ba6483e800 session 0x55ba4f855860
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 ms_handle_reset con 0x55ba5062ec00 session 0x55ba4e561e00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470605824 unmapped: 105766912 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e88b000/0x0/0x1bfc00000, data 0x28986d4/0x2ad2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470605824 unmapped: 105766912 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4968153 data_alloc: 234881024 data_used: 17690624
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470614016 unmapped: 105758720 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470614016 unmapped: 105758720 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470614016 unmapped: 105758720 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e88b000/0x0/0x1bfc00000, data 0x28986d4/0x2ad2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470614016 unmapped: 105758720 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470614016 unmapped: 105758720 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4968153 data_alloc: 234881024 data_used: 17690624
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470614016 unmapped: 105758720 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470614016 unmapped: 105758720 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470614016 unmapped: 105758720 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470630400 unmapped: 105742336 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e88b000/0x0/0x1bfc00000, data 0x28986d4/0x2ad2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.553018570s of 12.511919975s, submitted: 12
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470638592 unmapped: 105734144 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4969055 data_alloc: 234881024 data_used: 17690624
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470638592 unmapped: 105734144 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470638592 unmapped: 105734144 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 ms_handle_reset con 0x55ba57dc6c00 session 0x55ba517730e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 ms_handle_reset con 0x55ba53eea800 session 0x55ba4f6550e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470638592 unmapped: 105734144 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e88b000/0x0/0x1bfc00000, data 0x2898736/0x2ad3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470638592 unmapped: 105734144 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470638592 unmapped: 105734144 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e88b000/0x0/0x1bfc00000, data 0x2898736/0x2ad3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4972077 data_alloc: 234881024 data_used: 17690624
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e88b000/0x0/0x1bfc00000, data 0x2898736/0x2ad3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470638592 unmapped: 105734144 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470646784 unmapped: 105725952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470646784 unmapped: 105725952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e889000/0x0/0x1bfc00000, data 0x289d736/0x2ad5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470646784 unmapped: 105725952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470646784 unmapped: 105725952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4972077 data_alloc: 234881024 data_used: 17690624
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470646784 unmapped: 105725952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470646784 unmapped: 105725952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e889000/0x0/0x1bfc00000, data 0x289d736/0x2ad5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e889000/0x0/0x1bfc00000, data 0x289d736/0x2ad5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470646784 unmapped: 105725952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470646784 unmapped: 105725952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470654976 unmapped: 105717760 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e889000/0x0/0x1bfc00000, data 0x289d736/0x2ad5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4972077 data_alloc: 234881024 data_used: 17690624
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470654976 unmapped: 105717760 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.641451836s of 16.856966019s, submitted: 4
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 ms_handle_reset con 0x55ba5062ec00 session 0x55ba51356b40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471015424 unmapped: 105357312 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471015424 unmapped: 105357312 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471015424 unmapped: 105357312 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e85f000/0x0/0x1bfc00000, data 0x28c7736/0x2aff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471023616 unmapped: 105349120 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4980975 data_alloc: 234881024 data_used: 17821696
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471031808 unmapped: 105340928 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e845000/0x0/0x1bfc00000, data 0x290e736/0x2b19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471040000 unmapped: 105332736 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471040000 unmapped: 105332736 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e845000/0x0/0x1bfc00000, data 0x290e736/0x2b19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471040000 unmapped: 105332736 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e845000/0x0/0x1bfc00000, data 0x290e736/0x2b19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471040000 unmapped: 105332736 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4995083 data_alloc: 234881024 data_used: 17821696
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471040000 unmapped: 105332736 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e845000/0x0/0x1bfc00000, data 0x290e736/0x2b19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471040000 unmapped: 105332736 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471040000 unmapped: 105332736 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e845000/0x0/0x1bfc00000, data 0x290e736/0x2b19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471048192 unmapped: 105324544 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471048192 unmapped: 105324544 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e845000/0x0/0x1bfc00000, data 0x290e736/0x2b19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4995083 data_alloc: 234881024 data_used: 17821696
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471048192 unmapped: 105324544 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.333244324s of 15.649279594s, submitted: 16
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471048192 unmapped: 105324544 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471048192 unmapped: 105324544 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471048192 unmapped: 105324544 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471048192 unmapped: 105324544 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e7fe000/0x0/0x1bfc00000, data 0x2955736/0x2b60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5009298 data_alloc: 234881024 data_used: 18518016
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471048192 unmapped: 105324544 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471048192 unmapped: 105324544 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471048192 unmapped: 105324544 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471326720 unmapped: 105046016 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e791000/0x0/0x1bfc00000, data 0x29c2736/0x2bcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471326720 unmapped: 105046016 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5018442 data_alloc: 234881024 data_used: 18518016
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471326720 unmapped: 105046016 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e791000/0x0/0x1bfc00000, data 0x29c2736/0x2bcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471326720 unmapped: 105046016 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471326720 unmapped: 105046016 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471326720 unmapped: 105046016 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.420621872s of 12.544328690s, submitted: 16
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471326720 unmapped: 105046016 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e790000/0x0/0x1bfc00000, data 0x29c3736/0x2bce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5018662 data_alloc: 234881024 data_used: 18518016
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471326720 unmapped: 105046016 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471326720 unmapped: 105046016 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e790000/0x0/0x1bfc00000, data 0x29c3736/0x2bce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471326720 unmapped: 105046016 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471326720 unmapped: 105046016 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470917120 unmapped: 105455616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5016690 data_alloc: 234881024 data_used: 18522112
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470917120 unmapped: 105455616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e78f000/0x0/0x1bfc00000, data 0x29c4736/0x2bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470917120 unmapped: 105455616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470917120 unmapped: 105455616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470917120 unmapped: 105455616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e78f000/0x0/0x1bfc00000, data 0x29c4736/0x2bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e78f000/0x0/0x1bfc00000, data 0x29c4736/0x2bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5016690 data_alloc: 234881024 data_used: 18522112
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 ms_handle_reset con 0x55ba4e095400 session 0x55ba50da0000
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e78f000/0x0/0x1bfc00000, data 0x29c4736/0x2bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 ms_handle_reset con 0x55ba53eec400 session 0x55ba4f655a40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.420714378s of 14.430052757s, submitted: 2
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 ms_handle_reset con 0x55ba50d7e000 session 0x55ba51037c20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 ms_handle_reset con 0x55ba64841000 session 0x55ba51773860
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5008552 data_alloc: 234881024 data_used: 18391040
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470933504 unmapped: 105439232 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e7b9000/0x0/0x1bfc00000, data 0x299a736/0x2ba5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e89f9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470933504 unmapped: 105439232 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470933504 unmapped: 105439232 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 heartbeat osd_stat(store_statfs(0x19e3a9000/0x0/0x1bfc00000, data 0x299a736/0x2ba5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470925312 unmapped: 105447424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5008712 data_alloc: 234881024 data_used: 18399232
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470933504 unmapped: 105439232 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 470933504 unmapped: 105439232 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 ms_handle_reset con 0x55ba58549400 session 0x55ba50ef7a40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 ms_handle_reset con 0x55ba51836c00 session 0x55ba517290e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 ms_handle_reset con 0x55ba51839c00 session 0x55ba4f856d20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472088576 unmapped: 104284160 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 432 handle_osd_map epochs [432,433], i have 432, src has [1,433]
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.028098106s of 10.027087212s, submitted: 321
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 433 handle_osd_map epochs [433,433], i have 433, src has [1,433]
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 433 ms_handle_reset con 0x55ba5062f800 session 0x55ba50ece1e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472129536 unmapped: 104243200 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 433 heartbeat osd_stat(store_statfs(0x19e478000/0x0/0x1bfc00000, data 0x289b381/0x2ad6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472129536 unmapped: 104243200 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 433 ms_handle_reset con 0x55ba5183b800 session 0x55ba50a45c20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 433 ms_handle_reset con 0x55ba63fe3c00 session 0x55ba50a83c20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 433 ms_handle_reset con 0x55ba5062f800 session 0x55ba517283c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4987718 data_alloc: 234881024 data_used: 18157568
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472137728 unmapped: 104235008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472137728 unmapped: 104235008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472145920 unmapped: 104226816 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 433 ms_handle_reset con 0x55ba51836c00 session 0x55ba509601e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472162304 unmapped: 104210432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472162304 unmapped: 104210432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850128 data_alloc: 218103808 data_used: 11419648
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 433 heartbeat osd_stat(store_statfs(0x19eecf000/0x0/0x1bfc00000, data 0x1e44381/0x207f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 433 ms_handle_reset con 0x55ba51839c00 session 0x55ba510f21e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472178688 unmapped: 104194048 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472178688 unmapped: 104194048 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 433 handle_osd_map epochs [433,434], i have 433, src has [1,434]
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472178688 unmapped: 104194048 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472178688 unmapped: 104194048 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.924956322s of 10.500700951s, submitted: 66
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 435 ms_handle_reset con 0x55ba4dec5c00 session 0x55ba513ede00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 435 heartbeat osd_stat(store_statfs(0x19eec9000/0x0/0x1bfc00000, data 0x1e47b4f/0x2084000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472186880 unmapped: 104185856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4856000 data_alloc: 218103808 data_used: 11431936
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472186880 unmapped: 104185856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 435 heartbeat osd_stat(store_statfs(0x19eec9000/0x0/0x1bfc00000, data 0x1e47b4f/0x2084000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472186880 unmapped: 104185856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 435 heartbeat osd_stat(store_statfs(0x19eec9000/0x0/0x1bfc00000, data 0x1e47b4f/0x2084000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472186880 unmapped: 104185856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472186880 unmapped: 104185856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472186880 unmapped: 104185856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4856000 data_alloc: 218103808 data_used: 11431936
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472186880 unmapped: 104185856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472186880 unmapped: 104185856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 435 heartbeat osd_stat(store_statfs(0x19eec9000/0x0/0x1bfc00000, data 0x1e47b4f/0x2084000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 435 handle_osd_map epochs [435,436], i have 435, src has [1,436]
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472211456 unmapped: 104161280 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472211456 unmapped: 104161280 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472211456 unmapped: 104161280 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4859134 data_alloc: 218103808 data_used: 11436032
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472219648 unmapped: 104153088 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19eec6000/0x0/0x1bfc00000, data 0x1e4968e/0x2087000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 472219648 unmapped: 104153088 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 ms_handle_reset con 0x55ba4f2cf400 session 0x55ba50a44f00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 ms_handle_reset con 0x55ba4ec2a400 session 0x55ba4e5c54a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471736320 unmapped: 104636416 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471752704 unmapped: 104620032 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.230606079s of 15.310397148s, submitted: 37
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 ms_handle_reset con 0x55ba51834400 session 0x55ba517f45a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19eec6000/0x0/0x1bfc00000, data 0x1e4969e/0x2088000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471752704 unmapped: 104620032 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4863922 data_alloc: 218103808 data_used: 11436032
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471752704 unmapped: 104620032 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471752704 unmapped: 104620032 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 ms_handle_reset con 0x55ba5046f400 session 0x55ba50cbd4a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471752704 unmapped: 104620032 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471752704 unmapped: 104620032 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 ms_handle_reset con 0x55ba6483e000 session 0x55ba51357860
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19eec6000/0x0/0x1bfc00000, data 0x1e4969e/0x2088000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471769088 unmapped: 104603648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 ms_handle_reset con 0x55ba6483e000 session 0x55ba513ec1e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4921366 data_alloc: 218103808 data_used: 11436032
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 ms_handle_reset con 0x55ba4ec2a400 session 0x55ba50ffc5a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471769088 unmapped: 104603648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471769088 unmapped: 104603648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471769088 unmapped: 104603648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19e757000/0x0/0x1bfc00000, data 0x25b869e/0x27f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471769088 unmapped: 104603648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471769088 unmapped: 104603648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4921366 data_alloc: 218103808 data_used: 11436032
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471769088 unmapped: 104603648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471769088 unmapped: 104603648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471769088 unmapped: 104603648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19e757000/0x0/0x1bfc00000, data 0x25b869e/0x27f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471769088 unmapped: 104603648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19e757000/0x0/0x1bfc00000, data 0x25b869e/0x27f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471785472 unmapped: 104587264 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4921366 data_alloc: 218103808 data_used: 11436032
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471785472 unmapped: 104587264 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471785472 unmapped: 104587264 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471785472 unmapped: 104587264 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471793664 unmapped: 104579072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.705083847s of 19.859975815s, submitted: 20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 ms_handle_reset con 0x55ba51841000 session 0x55ba517f4d20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471941120 unmapped: 104431616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19e733000/0x0/0x1bfc00000, data 0x25dc69e/0x281b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19e733000/0x0/0x1bfc00000, data 0x25dc69e/0x281b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4928906 data_alloc: 218103808 data_used: 12001280
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471941120 unmapped: 104431616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19e733000/0x0/0x1bfc00000, data 0x25dc69e/0x281b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471941120 unmapped: 104431616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471941120 unmapped: 104431616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471941120 unmapped: 104431616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471941120 unmapped: 104431616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19e733000/0x0/0x1bfc00000, data 0x25dc69e/0x281b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4979466 data_alloc: 234881024 data_used: 19152896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471941120 unmapped: 104431616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19e733000/0x0/0x1bfc00000, data 0x25dc69e/0x281b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471941120 unmapped: 104431616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471941120 unmapped: 104431616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471941120 unmapped: 104431616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471941120 unmapped: 104431616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4979466 data_alloc: 234881024 data_used: 19152896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471941120 unmapped: 104431616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 471941120 unmapped: 104431616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.998488426s of 13.005092621s, submitted: 1
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19e32a000/0x0/0x1bfc00000, data 0x29e569e/0x2c24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [0,0,0,0,12])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19e32a000/0x0/0x1bfc00000, data 0x29e569e/0x2c24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477741056 unmapped: 98631680 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477814784 unmapped: 98557952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477814784 unmapped: 98557952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5098728 data_alloc: 234881024 data_used: 21159936
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477814784 unmapped: 98557952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477814784 unmapped: 98557952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbd0000/0x0/0x1bfc00000, data 0x313f69e/0x337e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477814784 unmapped: 98557952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477814784 unmapped: 98557952 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 ms_handle_reset con 0x55ba50517000 session 0x55ba509bda40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 ms_handle_reset con 0x55ba54440800 session 0x55ba51037c20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477822976 unmapped: 98549760 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 ms_handle_reset con 0x55ba4ec2a400 session 0x55ba513561e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5083916 data_alloc: 234881024 data_used: 21078016
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477822976 unmapped: 98549760 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477822976 unmapped: 98549760 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbef000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477831168 unmapped: 98541568 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbef000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477831168 unmapped: 98541568 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477831168 unmapped: 98541568 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5083916 data_alloc: 234881024 data_used: 21078016
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbef000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477831168 unmapped: 98541568 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477839360 unmapped: 98533376 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477839360 unmapped: 98533376 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477839360 unmapped: 98533376 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477839360 unmapped: 98533376 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5083916 data_alloc: 234881024 data_used: 21078016
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477839360 unmapped: 98533376 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbef000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477839360 unmapped: 98533376 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477839360 unmapped: 98533376 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477839360 unmapped: 98533376 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 ms_handle_reset con 0x55ba64840800 session 0x55ba50872780
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477847552 unmapped: 98525184 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5084076 data_alloc: 234881024 data_used: 21082112
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbef000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477847552 unmapped: 98525184 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5084716 data_alloc: 234881024 data_used: 21143552
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbef000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5084716 data_alloc: 234881024 data_used: 21143552
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbef000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbef000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.302429199s of 35.634357452s, submitted: 121
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbef000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbed000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbed000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5099740 data_alloc: 234881024 data_used: 22945792
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbed000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5099740 data_alloc: 234881024 data_used: 22945792
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbed000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 98516992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477863936 unmapped: 98508800 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477863936 unmapped: 98508800 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbed000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5099740 data_alloc: 234881024 data_used: 22945792
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477863936 unmapped: 98508800 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477863936 unmapped: 98508800 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbed000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477863936 unmapped: 98508800 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477863936 unmapped: 98508800 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477863936 unmapped: 98508800 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5099740 data_alloc: 234881024 data_used: 22945792
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477863936 unmapped: 98508800 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbed000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477872128 unmapped: 98500608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbed000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477872128 unmapped: 98500608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477872128 unmapped: 98500608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477872128 unmapped: 98500608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5099740 data_alloc: 234881024 data_used: 22945792
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477872128 unmapped: 98500608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477872128 unmapped: 98500608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbed000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477872128 unmapped: 98500608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 heartbeat osd_stat(store_statfs(0x19dbed000/0x0/0x1bfc00000, data 0x312069e/0x335f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477872128 unmapped: 98500608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477888512 unmapped: 98484224 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 436 handle_osd_map epochs [436,437], i have 436, src has [1,437]
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.235605240s of 28.248983383s, submitted: 3
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5107274 data_alloc: 234881024 data_used: 23556096
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 ms_handle_reset con 0x55ba55640800 session 0x55ba513c0780
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477888512 unmapped: 98484224 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 ms_handle_reset con 0x55ba58548c00 session 0x55ba513c0000
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 ms_handle_reset con 0x55ba59eb6000 session 0x55ba509603c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477888512 unmapped: 98484224 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dbeb000/0x0/0x1bfc00000, data 0x31222f7/0x3362000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477888512 unmapped: 98484224 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477888512 unmapped: 98484224 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477888512 unmapped: 98484224 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5129572 data_alloc: 234881024 data_used: 23556096
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477888512 unmapped: 98484224 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477888512 unmapped: 98484224 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dbe5000/0x0/0x1bfc00000, data 0x33582f7/0x3369000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477896704 unmapped: 98476032 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477904896 unmapped: 98467840 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477904896 unmapped: 98467840 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dbe5000/0x0/0x1bfc00000, data 0x33582f7/0x3369000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5129572 data_alloc: 234881024 data_used: 23556096
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477904896 unmapped: 98467840 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dbe5000/0x0/0x1bfc00000, data 0x33582f7/0x3369000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477904896 unmapped: 98467840 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477904896 unmapped: 98467840 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dbe5000/0x0/0x1bfc00000, data 0x33582f7/0x3369000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477904896 unmapped: 98467840 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477904896 unmapped: 98467840 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5129572 data_alloc: 234881024 data_used: 23556096
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477913088 unmapped: 98459648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477913088 unmapped: 98459648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.124782562s of 17.165224075s, submitted: 9
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 ms_handle_reset con 0x55ba51835c00 session 0x55ba50642d20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477929472 unmapped: 98443264 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477945856 unmapped: 98426880 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dbc0000/0x0/0x1bfc00000, data 0x337c31a/0x338e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477954048 unmapped: 98418688 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5136046 data_alloc: 234881024 data_used: 23564288
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477954048 unmapped: 98418688 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477954048 unmapped: 98418688 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477954048 unmapped: 98418688 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477954048 unmapped: 98418688 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477954048 unmapped: 98418688 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dbc0000/0x0/0x1bfc00000, data 0x337c31a/0x338e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dbc0000/0x0/0x1bfc00000, data 0x337c31a/0x338e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5136046 data_alloc: 234881024 data_used: 23564288
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 98410496 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 98410496 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dbc0000/0x0/0x1bfc00000, data 0x337c31a/0x338e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dbc0000/0x0/0x1bfc00000, data 0x337c31a/0x338e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 98410496 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 98410496 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 98410496 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5136046 data_alloc: 234881024 data_used: 23564288
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dbc0000/0x0/0x1bfc00000, data 0x337c31a/0x338e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.188096046s of 13.255754471s, submitted: 12
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479272960 unmapped: 97099776 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479281152 unmapped: 97091584 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479281152 unmapped: 97091584 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479281152 unmapped: 97091584 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479281152 unmapped: 97091584 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5184579 data_alloc: 234881024 data_used: 24928256
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479281152 unmapped: 97091584 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dabd000/0x0/0x1bfc00000, data 0x378331a/0x3491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479281152 unmapped: 97091584 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479281152 unmapped: 97091584 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479281152 unmapped: 97091584 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dabd000/0x0/0x1bfc00000, data 0x378331a/0x3491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dabd000/0x0/0x1bfc00000, data 0x378331a/0x3491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5184755 data_alloc: 234881024 data_used: 24928256
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.518165588s of 11.585093498s, submitted: 14
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dabd000/0x0/0x1bfc00000, data 0x378331a/0x3491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5184403 data_alloc: 234881024 data_used: 24928256
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dabd000/0x0/0x1bfc00000, data 0x378331a/0x3491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dabd000/0x0/0x1bfc00000, data 0x378331a/0x3491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5184227 data_alloc: 234881024 data_used: 24928256
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dabd000/0x0/0x1bfc00000, data 0x378331a/0x3491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dabd000/0x0/0x1bfc00000, data 0x378331a/0x3491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5185283 data_alloc: 234881024 data_used: 24928256
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.545027733s of 13.577043533s, submitted: 11
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 ms_handle_reset con 0x55ba53eec800 session 0x55ba51772f00
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 ms_handle_reset con 0x55ba50d7fc00 session 0x55ba4f7550e0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 97083392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 ms_handle_reset con 0x55ba4f2ce800 session 0x55ba50ef7680
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479297536 unmapped: 97075200 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479297536 unmapped: 97075200 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479305728 unmapped: 97067008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 ms_handle_reset con 0x55ba54c44c00 session 0x55ba517294a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 ms_handle_reset con 0x55ba4f2ce800 session 0x55ba51356780
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 heartbeat osd_stat(store_statfs(0x19dae1000/0x0/0x1bfc00000, data 0x375f2f7/0x346c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479330304 unmapped: 97042432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 ms_handle_reset con 0x55ba50d7fc00 session 0x55ba510514a0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 437 handle_osd_map epochs [437,438], i have 437, src has [1,438]
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 438 ms_handle_reset con 0x55ba50dbc400 session 0x55ba50a83a40
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5154045 data_alloc: 234881024 data_used: 24698880
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479346688 unmapped: 97026048 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479346688 unmapped: 97026048 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 438 ms_handle_reset con 0x55ba50703000 session 0x55ba4eab0000
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 438 ms_handle_reset con 0x55ba5183c800 session 0x55ba50da1c20
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 438 ms_handle_reset con 0x55ba4f2ce800 session 0x55ba516f92c0
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479354880 unmapped: 97017856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479354880 unmapped: 97017856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479354880 unmapped: 97017856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 438 heartbeat osd_stat(store_statfs(0x19dbe4000/0x0/0x1bfc00000, data 0x3128fa4/0x336a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5142867 data_alloc: 234881024 data_used: 24698880
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479354880 unmapped: 97017856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 438 handle_osd_map epochs [438,439], i have 438, src has [1,439]
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.259385109s of 10.704581261s, submitted: 73
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 479354880 unmapped: 97017856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 ms_handle_reset con 0x55ba51831400 session 0x55ba516f8780
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 ms_handle_reset con 0x55ba4ec2b000 session 0x55ba51773860
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477921280 unmapped: 98451456 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477921280 unmapped: 98451456 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477921280 unmapped: 98451456 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894835 data_alloc: 218103808 data_used: 11468800
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477929472 unmapped: 98443264 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477929472 unmapped: 98443264 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477929472 unmapped: 98443264 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477929472 unmapped: 98443264 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477929472 unmapped: 98443264 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894835 data_alloc: 218103808 data_used: 11468800
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477929472 unmapped: 98443264 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477929472 unmapped: 98443264 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477929472 unmapped: 98443264 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477937664 unmapped: 98435072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477937664 unmapped: 98435072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477937664 unmapped: 98435072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477937664 unmapped: 98435072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477937664 unmapped: 98435072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477937664 unmapped: 98435072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477937664 unmapped: 98435072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477937664 unmapped: 98435072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 98410496 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 98410496 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 98410496 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 98410496 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 98410496 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 98410496 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 98410496 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 98410496 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 98402304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 98402304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 98402304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 98402304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 98402304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 98402304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 98402304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 98402304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477978624 unmapped: 98394112 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477978624 unmapped: 98394112 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477978624 unmapped: 98394112 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477978624 unmapped: 98394112 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477978624 unmapped: 98394112 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477978624 unmapped: 98394112 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477978624 unmapped: 98394112 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477978624 unmapped: 98394112 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 98385920 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 98385920 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 98385920 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 98385920 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 98385920 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 98385920 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 98385920 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 98385920 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477995008 unmapped: 98377728 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477995008 unmapped: 98377728 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477995008 unmapped: 98377728 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477995008 unmapped: 98377728 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477995008 unmapped: 98377728 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477995008 unmapped: 98377728 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477995008 unmapped: 98377728 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477995008 unmapped: 98377728 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478011392 unmapped: 98361344 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478011392 unmapped: 98361344 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478011392 unmapped: 98361344 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478019584 unmapped: 98353152 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478019584 unmapped: 98353152 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478019584 unmapped: 98353152 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478019584 unmapped: 98353152 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478019584 unmapped: 98353152 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478027776 unmapped: 98344960 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478027776 unmapped: 98344960 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478027776 unmapped: 98344960 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478027776 unmapped: 98344960 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478027776 unmapped: 98344960 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478027776 unmapped: 98344960 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478027776 unmapped: 98344960 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478027776 unmapped: 98344960 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 98320384 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 98320384 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 98320384 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 98320384 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 98320384 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 98320384 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 98320384 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: do_command 'config diff' '{prefix=config diff}'
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: do_command 'config show' '{prefix=config show}'
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477757440 unmapped: 98615296 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477274112 unmapped: 99098624 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476971008 unmapped: 99401728 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: do_command 'log dump' '{prefix=log dump}'
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476971008 unmapped: 99401728 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: do_command 'perf dump' '{prefix=perf dump}'
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: do_command 'perf schema' '{prefix=perf schema}'
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476200960 unmapped: 100171776 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476200960 unmapped: 100171776 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476200960 unmapped: 100171776 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476209152 unmapped: 100163584 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476209152 unmapped: 100163584 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476233728 unmapped: 100139008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476233728 unmapped: 100139008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476233728 unmapped: 100139008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476233728 unmapped: 100139008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476233728 unmapped: 100139008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476233728 unmapped: 100139008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476233728 unmapped: 100139008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476233728 unmapped: 100139008 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476241920 unmapped: 100130816 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476241920 unmapped: 100130816 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476241920 unmapped: 100130816 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476241920 unmapped: 100130816 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476241920 unmapped: 100130816 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476241920 unmapped: 100130816 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476250112 unmapped: 100122624 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476258304 unmapped: 100114432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476258304 unmapped: 100114432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476258304 unmapped: 100114432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476258304 unmapped: 100114432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476258304 unmapped: 100114432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476258304 unmapped: 100114432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476258304 unmapped: 100114432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476258304 unmapped: 100114432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476258304 unmapped: 100114432 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476282880 unmapped: 100089856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476282880 unmapped: 100089856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476282880 unmapped: 100089856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476282880 unmapped: 100089856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476282880 unmapped: 100089856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476282880 unmapped: 100089856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476282880 unmapped: 100089856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476282880 unmapped: 100089856 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476299264 unmapped: 100073472 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476299264 unmapped: 100073472 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476299264 unmapped: 100073472 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476299264 unmapped: 100073472 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476299264 unmapped: 100073472 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476299264 unmapped: 100073472 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476299264 unmapped: 100073472 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476299264 unmapped: 100073472 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476315648 unmapped: 100057088 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476323840 unmapped: 100048896 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476323840 unmapped: 100048896 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476323840 unmapped: 100048896 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476323840 unmapped: 100048896 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476323840 unmapped: 100048896 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476323840 unmapped: 100048896 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476323840 unmapped: 100048896 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476348416 unmapped: 100024320 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476348416 unmapped: 100024320 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476348416 unmapped: 100024320 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476348416 unmapped: 100024320 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476348416 unmapped: 100024320 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476348416 unmapped: 100024320 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476348416 unmapped: 100024320 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476348416 unmapped: 100024320 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476356608 unmapped: 100016128 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476356608 unmapped: 100016128 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476356608 unmapped: 100016128 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476372992 unmapped: 99999744 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476372992 unmapped: 99999744 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476372992 unmapped: 99999744 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476372992 unmapped: 99999744 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476372992 unmapped: 99999744 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476389376 unmapped: 99983360 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476389376 unmapped: 99983360 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476389376 unmapped: 99983360 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476389376 unmapped: 99983360 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476389376 unmapped: 99983360 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476389376 unmapped: 99983360 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476389376 unmapped: 99983360 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476397568 unmapped: 99975168 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476405760 unmapped: 99966976 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476405760 unmapped: 99966976 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476413952 unmapped: 99958784 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476413952 unmapped: 99958784 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476413952 unmapped: 99958784 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476413952 unmapped: 99958784 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476413952 unmapped: 99958784 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 99934208 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 99934208 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 99934208 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 99934208 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 99934208 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 99934208 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 99934208 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 99934208 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476454912 unmapped: 99917824 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476454912 unmapped: 99917824 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476454912 unmapped: 99917824 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476454912 unmapped: 99917824 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476454912 unmapped: 99917824 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476463104 unmapped: 99909632 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476463104 unmapped: 99909632 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476463104 unmapped: 99909632 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476479488 unmapped: 99893248 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476479488 unmapped: 99893248 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476479488 unmapped: 99893248 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476479488 unmapped: 99893248 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476479488 unmapped: 99893248 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476479488 unmapped: 99893248 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476479488 unmapped: 99893248 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476479488 unmapped: 99893248 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476504064 unmapped: 99868672 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476504064 unmapped: 99868672 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476504064 unmapped: 99868672 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476504064 unmapped: 99868672 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476504064 unmapped: 99868672 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476504064 unmapped: 99868672 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476504064 unmapped: 99868672 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476504064 unmapped: 99868672 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476520448 unmapped: 99852288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476520448 unmapped: 99852288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476520448 unmapped: 99852288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476520448 unmapped: 99852288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476520448 unmapped: 99852288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476520448 unmapped: 99852288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476520448 unmapped: 99852288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476520448 unmapped: 99852288 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476528640 unmapped: 99844096 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476528640 unmapped: 99844096 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476528640 unmapped: 99844096 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476528640 unmapped: 99844096 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476528640 unmapped: 99844096 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476528640 unmapped: 99844096 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476528640 unmapped: 99844096 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476528640 unmapped: 99844096 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476553216 unmapped: 99819520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476553216 unmapped: 99819520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476553216 unmapped: 99819520 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476561408 unmapped: 99811328 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476561408 unmapped: 99811328 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476561408 unmapped: 99811328 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476561408 unmapped: 99811328 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476561408 unmapped: 99811328 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476577792 unmapped: 99794944 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476577792 unmapped: 99794944 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476585984 unmapped: 99786752 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476585984 unmapped: 99786752 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476585984 unmapped: 99786752 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476585984 unmapped: 99786752 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476585984 unmapped: 99786752 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476585984 unmapped: 99786752 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476610560 unmapped: 99762176 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476610560 unmapped: 99762176 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476618752 unmapped: 99753984 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476618752 unmapped: 99753984 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476618752 unmapped: 99753984 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476618752 unmapped: 99753984 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476618752 unmapped: 99753984 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476618752 unmapped: 99753984 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476635136 unmapped: 99737600 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476635136 unmapped: 99737600 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476635136 unmapped: 99737600 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476635136 unmapped: 99737600 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476635136 unmapped: 99737600 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476635136 unmapped: 99737600 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476635136 unmapped: 99737600 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476635136 unmapped: 99737600 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476659712 unmapped: 99713024 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476659712 unmapped: 99713024 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476659712 unmapped: 99713024 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476659712 unmapped: 99713024 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476659712 unmapped: 99713024 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476659712 unmapped: 99713024 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476659712 unmapped: 99713024 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476659712 unmapped: 99713024 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476676096 unmapped: 99696640 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476684288 unmapped: 99688448 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476684288 unmapped: 99688448 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476684288 unmapped: 99688448 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476684288 unmapped: 99688448 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476684288 unmapped: 99688448 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476684288 unmapped: 99688448 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476684288 unmapped: 99688448 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476700672 unmapped: 99672064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476700672 unmapped: 99672064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476700672 unmapped: 99672064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476700672 unmapped: 99672064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476700672 unmapped: 99672064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476700672 unmapped: 99672064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476700672 unmapped: 99672064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476700672 unmapped: 99672064 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476717056 unmapped: 99655680 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476725248 unmapped: 99647488 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7800.1 total, 600.0 interval
Cumulative writes: 71K writes, 277K keys, 71K commit groups, 1.0 writes per commit group, ingest: 0.27 GB, 0.04 MB/s
Cumulative WAL: 71K writes, 26K syncs, 2.68 writes per sync, written: 0.27 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1443 writes, 4639 keys, 1443 commit groups, 1.0 writes per commit group, ingest: 3.93 MB, 0.01 MB/s
Interval WAL: 1443 writes, 626 syncs, 2.31 writes per sync, written: 0.00 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476725248 unmapped: 99647488 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476725248 unmapped: 99647488 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476725248 unmapped: 99647488 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476725248 unmapped: 99647488 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476725248 unmapped: 99647488 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476725248 unmapped: 99647488 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476741632 unmapped: 99631104 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476741632 unmapped: 99631104 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476741632 unmapped: 99631104 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476741632 unmapped: 99631104 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476741632 unmapped: 99631104 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476741632 unmapped: 99631104 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476741632 unmapped: 99631104 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476749824 unmapped: 99622912 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476766208 unmapped: 99606528 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476766208 unmapped: 99606528 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476766208 unmapped: 99606528 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476766208 unmapped: 99606528 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476774400 unmapped: 99598336 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476774400 unmapped: 99598336 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476774400 unmapped: 99598336 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476774400 unmapped: 99598336 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476782592 unmapped: 99590144 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476782592 unmapped: 99590144 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476782592 unmapped: 99590144 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476782592 unmapped: 99590144 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476782592 unmapped: 99590144 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476782592 unmapped: 99590144 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476782592 unmapped: 99590144 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476782592 unmapped: 99590144 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476798976 unmapped: 99573760 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476798976 unmapped: 99573760 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476798976 unmapped: 99573760 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476798976 unmapped: 99573760 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476798976 unmapped: 99573760 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476798976 unmapped: 99573760 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476798976 unmapped: 99573760 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476798976 unmapped: 99573760 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476823552 unmapped: 99549184 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476823552 unmapped: 99549184 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476831744 unmapped: 99540992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476831744 unmapped: 99540992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476831744 unmapped: 99540992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476831744 unmapped: 99540992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476831744 unmapped: 99540992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476831744 unmapped: 99540992 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476839936 unmapped: 99532800 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476839936 unmapped: 99532800 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476839936 unmapped: 99532800 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476839936 unmapped: 99532800 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476839936 unmapped: 99532800 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476839936 unmapped: 99532800 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476848128 unmapped: 99524608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476848128 unmapped: 99524608 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476856320 unmapped: 99516416 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476856320 unmapped: 99516416 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476864512 unmapped: 99508224 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476864512 unmapped: 99508224 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476864512 unmapped: 99508224 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476864512 unmapped: 99508224 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476864512 unmapped: 99508224 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 99491840 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 99483648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 99483648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 99483648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 99483648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 99483648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 99483648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 99483648 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476897280 unmapped: 99475456 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476897280 unmapped: 99475456 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476897280 unmapped: 99475456 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476897280 unmapped: 99475456 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476905472 unmapped: 99467264 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476913664 unmapped: 99459072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476913664 unmapped: 99459072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476913664 unmapped: 99459072 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476930048 unmapped: 99442688 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476930048 unmapped: 99442688 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476930048 unmapped: 99442688 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476930048 unmapped: 99442688 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476930048 unmapped: 99442688 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476930048 unmapped: 99442688 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476930048 unmapped: 99442688 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476930048 unmapped: 99442688 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476946432 unmapped: 99426304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476946432 unmapped: 99426304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476946432 unmapped: 99426304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476946432 unmapped: 99426304 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476954624 unmapped: 99418112 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476954624 unmapped: 99418112 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476954624 unmapped: 99418112 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476954624 unmapped: 99418112 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476962816 unmapped: 99409920 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476962816 unmapped: 99409920 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476962816 unmapped: 99409920 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476962816 unmapped: 99409920 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476962816 unmapped: 99409920 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476962816 unmapped: 99409920 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476962816 unmapped: 99409920 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476962816 unmapped: 99409920 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476979200 unmapped: 99393536 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476987392 unmapped: 99385344 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476987392 unmapped: 99385344 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476987392 unmapped: 99385344 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476987392 unmapped: 99385344 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476987392 unmapped: 99385344 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476987392 unmapped: 99385344 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 476987392 unmapped: 99385344 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477011968 unmapped: 99360768 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477011968 unmapped: 99360768 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477011968 unmapped: 99360768 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477020160 unmapped: 99352576 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477020160 unmapped: 99352576 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477020160 unmapped: 99352576 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477020160 unmapped: 99352576 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477020160 unmapped: 99352576 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477028352 unmapped: 99344384 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477028352 unmapped: 99344384 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477028352 unmapped: 99344384 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477028352 unmapped: 99344384 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477028352 unmapped: 99344384 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477028352 unmapped: 99344384 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebd000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477028352 unmapped: 99344384 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477028352 unmapped: 99344384 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477044736 unmapped: 99328000 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477044736 unmapped: 99328000 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894995 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 406.502716064s of 406.550872803s, submitted: 29
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477044736 unmapped: 99328000 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebe000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477044736 unmapped: 99328000 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebe000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477052928 unmapped: 99319808 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477061120 unmapped: 99311616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477061120 unmapped: 99311616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894115 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477061120 unmapped: 99311616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebe000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,2,1])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477061120 unmapped: 99311616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477061120 unmapped: 99311616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477061120 unmapped: 99311616 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebe000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebe000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477069312 unmapped: 99303424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894115 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477069312 unmapped: 99303424 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.015229225s of 10.209302902s, submitted: 50
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477077504 unmapped: 99295232 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477077504 unmapped: 99295232 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477093888 unmapped: 99278848 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebe000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477102080 unmapped: 99270656 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894115 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477175808 unmapped: 99196928 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477208576 unmapped: 99164160 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477208576 unmapped: 99164160 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477208576 unmapped: 99164160 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477208576 unmapped: 99164160 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894115 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebe000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477216768 unmapped: 99155968 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477216768 unmapped: 99155968 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477216768 unmapped: 99155968 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebe000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477216768 unmapped: 99155968 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477216768 unmapped: 99155968 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894115 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477216768 unmapped: 99155968 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477216768 unmapped: 99155968 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477216768 unmapped: 99155968 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477216768 unmapped: 99155968 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebe000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477216768 unmapped: 99155968 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894115 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477216768 unmapped: 99155968 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebe000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebe000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477216768 unmapped: 99155968 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477216768 unmapped: 99155968 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebe000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477224960 unmapped: 99147776 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: osd.1 439 heartbeat osd_stat(store_statfs(0x19eebe000/0x0/0x1bfc00000, data 0x1e4ead3/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,2] op hist [])
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477372416 unmapped: 99000320 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: do_command 'config diff' '{prefix=config diff}'
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: bluestore.MempoolThread(0x55ba4d15db60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894115 data_alloc: 218103808 data_used: 11472896
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: do_command 'config show' '{prefix=config show}'
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477241344 unmapped: 99131392 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: prioritycache tune_memory target: 4294967296 mapped: 477028352 unmapped: 99344384 heap: 576372736 old mem: 2845415833 new mem: 2845415833
Nov 29 04:23:31 np0005539564 ceph-osd[79212]: do_command 'log dump' '{prefix=log dump}'
Nov 29 04:23:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 04:23:31 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3416208665' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 04:23:31 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:31 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:31 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:31.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:31 np0005539564 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 04:23:31 np0005539564 nova_compute[226295]: 2025-11-29 09:23:31.745 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:31 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 04:23:31 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3439110020' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 04:23:32 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:32 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:32 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:32.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 04:23:32 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/28272599' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 04:23:32 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 29 04:23:32 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/177295995' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 04:23:33 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:33 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:33 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:33.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:23:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 29 04:23:33 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/957040257' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 04:23:33 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 29 04:23:33 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1681149973' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 04:23:34 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:34 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:34 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:34.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 29 04:23:34 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2080612485' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 04:23:34 np0005539564 nova_compute[226295]: 2025-11-29 09:23:34.264 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 29 04:23:34 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3788781579' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 04:23:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 29 04:23:34 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1537453819' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 04:23:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 29 04:23:34 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3729422123' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 04:23:34 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 29 04:23:34 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2496754110' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 04:23:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 29 04:23:35 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/467088641' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 04:23:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 29 04:23:35 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4006659768' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 04:23:35 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:35 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:23:35 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:35.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:23:35 np0005539564 podman[331833]: 2025-11-29 09:23:35.541893783 +0000 UTC m=+0.095610706 container health_status 192ff512efbbef22c1b301f80b2d75d03bcabc346c77ece02618673cbead7cd1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 04:23:35 np0005539564 podman[331847]: 2025-11-29 09:23:35.541735069 +0000 UTC m=+0.095624357 container health_status 9a8a7ecef5f6f70e3c8f24d25408dc90add30f3b34ae06f324e6c7317b754d57 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 04:23:35 np0005539564 podman[331850]: 2025-11-29 09:23:35.55684172 +0000 UTC m=+0.100341095 container health_status aca5ae8c7345562c67a94be3dcf8160566397933c49a04b21b25d70ad41f1938 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 29 04:23:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 29 04:23:35 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/675610105' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 04:23:35 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 29 04:23:35 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3125663046' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 04:23:36 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:36 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:36 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:36.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:36 np0005539564 systemd[1]: Starting Hostname Service...
Nov 29 04:23:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 29 04:23:36 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3933232208' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 04:23:36 np0005539564 systemd[1]: Started Hostname Service.
Nov 29 04:23:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 29 04:23:36 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3886851741' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 04:23:36 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 04:23:36 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3196551018' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 04:23:36 np0005539564 nova_compute[226295]: 2025-11-29 09:23:36.747 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:37 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 29 04:23:37 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3165795423' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 04:23:37 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:37 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:37 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:37.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:38 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:38 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:23:38 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:38.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:23:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:23:38 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 29 04:23:38 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1582979067' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 29 04:23:39 np0005539564 nova_compute[226295]: 2025-11-29 09:23:39.266 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 29 04:23:39 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/559655583' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 29 04:23:39 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:39 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 04:23:39 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:39.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 04:23:39 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 04:23:39 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1375575306' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 04:23:40 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:40 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:40 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:40.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 29 04:23:40 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/395779013' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 29 04:23:40 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 04:23:40 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 04:23:40 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 04:23:40 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 04:23:40 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 29 04:23:40 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2674347029' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 29 04:23:41 np0005539564 radosgw[83777]: ====== starting new request req=0x7f475169f6f0 =====
Nov 29 04:23:41 np0005539564 radosgw[83777]: ====== req done req=0x7f475169f6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:41 np0005539564 radosgw[83777]: beast: 0x7f475169f6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:41.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:41 np0005539564 nova_compute[226295]: 2025-11-29 09:23:41.749 226310 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:41 np0005539564 ceph-mon[81769]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 29 04:23:41 np0005539564 ceph-mon[81769]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/378849710' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
